\section{Problem} Many applications that have become everyday tools offer to search and filter the vast data sources available on the Web. In particular, a multitude of platforms deal with scientific literature. From simple search engines for scientific articles to social networks for researchers, all use, as data, the publications produced daily around the world. For a researcher facing this deluge of information, it has become difficult, if not impossible, to conduct regular and exhaustive monitoring of their areas of expertise. The ongoing research presented in this paper, done in partnership with an industrial player\footnote{\textit{Digital Scientific Research Technology} (\textit{DSRT}) and its web application \textit{Peerus}: \url{https://peer.us}.}, deals with the problem of learning representations in heterogeneous networks of documents, applied to the recommendation of scientific literature in real time. While the scientific information of a publication is mainly contained in its text, rich supplementary information is nested in its metadata. Thus, the networks of co-authors, citations and publication venues of an article contain important information for building a scientific recommender system. As such, the scientific literature constitutes a heterogeneous attributed network (HAN), and since new papers are constantly published, this HAN grows in real time (Figure \ref{fig:data} shows a hypothetical example). However, only limited information may be observed for newly added nodes. For example, a newly published paper has no incoming citation links, and a PhD student has few past co-authorship links. To face this lack of information, the approach considered in the proposed research is to focus on learning strong representations of the attributes, particularly the textual contents of the articles, that can reflect the partially observable underlying network structure. 
\begin{figure}[] \centering \begin{subfigure}[b]{0.23\textwidth} \centering \begin{tikzpicture}[auto, thick, scale=0.35] \edef\mya{0} \foreach \place/\name in {{(0,-2)/a}, {(2,0)/b}, {(0,2)/d}} \pgfmathparse{int(\mya+1)} \xdef\mya{\pgfmathresult} \node[superpeers] (\name) at \place {\mya}; \foreach \pos/\i in {above right of/1, right of/2, below right of/3} \node[peers, \pos =b ] (b\i) {\i}; \foreach \speer/\peer in {b/b1,b/b2,b/b3} \path (\speer) edge[-] (\peer); \path (a) edge[-] (b3); \node[peers, above right of=d] (d1){6}; \path (d) edge[-] (d1); \path (b) edge[-] (d1); \edef\mya{3} \foreach \pos/\i in {below left of/1, below of/2} \pgfmathparse{int(\i+3)} \edef\mya{\pgfmathresult} \node[peers, \pos =a ] (a\i) {\mya}; \foreach \speer/\peer in {a/a1,a/a2} \path (\speer) edge[-] (\peer); \node[legendsp] at (4,-7) {\small{Papers}}; \node[legendp] at (0,-7.1) {\small{Authors}}; \end{tikzpicture} \end{subfigure}% ~~~~ \begin{subfigure}[b]{0.23\textwidth} \centering \begin{align*} A &= \begin{pmatrix} 0 & 0 & 1 & 1 & 1 & 0\\ 1 & 1 & 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 1\\ \end{pmatrix}\\ &\\ &\\ D &= \begin{bmatrix} \text{"Learning Framework..."} \\ \text{"Supervised Machine..."} \\ \text{"Scholarly Literature..."} \\ \end{bmatrix} \\ & \end{align*} \end{subfigure} \caption{Hypothetical example of scientific literature data, here constituted of 6 authors and 3 papers. The HAN can be noted $G=(V,E,D)$ with $V$ the nodes, $E$ the edges and $D$ the textual content of the papers. We denote by $A$ the biadjacency matrix of the graph $G$.} \label{fig:data} \end{figure} \section{State of the art} The quality and informativeness of data representations greatly influence the performance of machine learning algorithms. For this reason, much effort is devoted to devising new ways of learning representations \cite{bengio2013representation}. 
In Section \ref{soa:1}, I describe how the task of learning representations of nodes, \textit{i.e.} network embedding, is tightly connected to word embedding. Then, in Section \ref{soa:2}, I focus on the interplay between natural language processing and network embedding. Finally, in Section \ref{soa:3}, I present recent works that extend network embedding techniques to HAN. \subsection{From Word Embedding to Network Embedding} \label{soa:1} The distributional hypothesis \cite{sahlgren2008distributional} forms the basis of word embedding algorithms. It assumes that the distributional similarity of words correlates strongly with their semantic similarity. In other words, if we learn a representation of a word that allows us to predict the other words that occur in its context, we obtain a representation of its meaning. Skip-Gram \cite{mikolov2013distributed} is an algorithm that builds representations of words by maximizing the log-likelihood of a multi-set of co-occurring word pairs. Skip-Gram with Negative Sampling is a variation proposed in \cite{mikolov2013distributed} to efficiently approximate that log-likelihood. This is achieved by reducing the task to a classification that consists in distinguishing pairs of words that co-occur from false pairs that do not. An alternative approach, GloVe \cite{pennington2014glove}, learns representations of words by factorizing a matrix of co-occurrence counts of the words of a corpus. Its objective is to minimize the reconstruction error of the matrix, considering only the non-zero co-occurrence counts. Even though the distributional hypothesis originated in linguistics and is naturally leveraged for word embedding, Perozzi \textit{et al.} establish the connection with network embedding. To do so, they show that the frequency at which nodes appear in short random walks follows a power-law distribution, like the frequency of words in language \cite{perozzi2014deepwalk}. 
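As a toy illustration of these node occurrence statistics, the sketch below generates a corpus of truncated random walks over a small graph and counts node visit frequencies. The graph, walk length and number of walks are arbitrary choices for the sketch, not settings from the cited works.

```python
import random
from collections import Counter

def random_walks(adj, num_walks=10, walk_length=8, seed=0):
    """Generate truncated random walks, one starting at each node,
    repeated num_walks times; each walk plays the role of a sentence."""
    rng = random.Random(seed)
    corpus = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            corpus.append(walk)
    return corpus

# Toy undirected graph given as adjacency lists
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
corpus = random_walks(adj)
# Node visit frequencies, the analogue of word frequencies in a corpus
freq = Counter(node for walk in corpus for node in walk)
```

On large real graphs, these visit frequencies are what Perozzi \textit{et al.} observe to be power-law distributed.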
They propose DeepWalk, which consists in applying Skip-Gram with hierarchical softmax on a corpus of node sequences, deemed equivalent to sentences, generated with truncated random walks \cite{perozzi2014deepwalk}. For some specific tasks, the representations learned with DeepWalk offer large performance improvements. Thus, many subsequent works focus on modifying or extending DeepWalk. Node2vec replaces random walks with biased random walks in order to better balance the exploration-exploitation trade-off, arguing that the added flexibility in exploring neighborhoods helps learn richer representations \cite{grover2016node2vec}. VERSE \cite{tsitsulin2018verse} provides a scalable graph embedding algorithm by defining a versatile similarity matrix of the nodes and a learning algorithm using noise-contrastive estimation \cite{gutmann2010noise}, which provably converges to its objective, in contrast to negative sampling. \subsection{Natural Language Processing in Networks of Documents} \label{soa:2} As a special case of attributed networks, graphs of documents bring together the fields of natural language processing (NLP) and network embedding. A wide variety of unsupervised learning algorithms to represent words and documents have been proposed, from the well-known bag-of-words model \cite{harris1954distributional} to the recently introduced attention-based Transformer \cite{vaswani2017attention} adapted for unsupervised pre-training \cite{devlin2018bert}. However, few works have fully explored the interplay between NLP techniques and network embedding. NetPLSA \cite{mei2008topic} adapts a topic modelling algorithm by regularizing a statistical topic model with a harmonic regularizer based on a graph structure. It generates topics that reflect the underlying communities of the network, providing cleaner topics than regular statistical models. 
In \cite{yang2015network}, Yang \textit{et al.} prove that Skip-Gram with hierarchical softmax can be equivalently formulated as a matrix factorization problem. They then propose Text-Associated DeepWalk (TADW) to deal with networks of documents. TADW consists in constraining the factorization problem with a pre-computed representation of documents via LSA \cite{deerwester1990ilsa}. As such, each node can be represented as the concatenation of a network embedding and a projected textual embedding. CANE \cite{tu2017cane} aims to improve node representations in a structured corpus by applying a mutual attention mechanism over the textual contents associated with the vertices of a graph. Given a connected pair of nodes, the model produces textual representations for each node contextually to the other node. In this manner, there are as many representations for a single node as it has neighbors. This model has the advantage of producing interpretable weights for the words of a pair of documents, highlighting those that explain the network structure. \subsection{Heterogeneous Attributed Networks} \label{soa:3} Real-world networks are often composed of several types of links and nodes, which are associated with attributes. For example, the scientific literature is made of articles and authors, with directed citation links between papers and co-authorship links between authors, while articles are associated with their textual content and their journal of publication. Many works have extended network embedding to handle heterogeneity and attributes in graphs. With Metapath2vec \cite{dong2017metapath2vec}, the authors propose to operate meta-path based random walks to handle heterogeneous nodes and links. These meta-paths are hand-crafted schemes that guide a random walker over the network to generate node co-occurrences. 
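A minimal sketch of such a meta-path guided walk is given below, here alternating Author–Paper–Author over a toy bipartite graph; the meta-path scheme, the type encoding (first character of the node name) and the data are illustrative assumptions, not Metapath2vec's actual implementation.

```python
import random

def metapath_walk(start, metapath, neighbors, walk_length, rng):
    """Random walk constrained by a (cyclic) meta-path scheme: at each
    step, only neighbors of the prescribed node type may be visited."""
    walk = [start]
    i = 0
    while len(walk) < walk_length:
        next_type = metapath[(i + 1) % len(metapath)]
        # Node type is encoded as the first character of its name.
        candidates = [v for v in neighbors[walk[-1]] if v[0] == next_type]
        if not candidates:
            break
        walk.append(rng.choice(candidates))
        i += 1
    return walk

# Toy heterogeneous graph: authors 'a*' write papers 'p*'
neighbors = {
    'a1': ['p1'], 'a2': ['p1', 'p2'], 'a3': ['p2'],
    'p1': ['a1', 'a2'], 'p2': ['a2', 'a3'],
}
rng = random.Random(0)
walk = metapath_walk('a1', ['a', 'p'], neighbors, walk_length=7, rng=rng)
```

The resulting sequences then feed a Skip-Gram style objective, just like ordinary random walks do.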
Using a similar Skip-Gram based objective as DeepWalk, Metapath2vec achieves significant improvements on multi-class node classification and node clustering over traditional network embedding algorithms. \cite{huang2017label} introduces a Label informed Attributed Network Embedding (LANE) framework which jointly projects an attributed network and its labels into a unified embedding space by extracting their correlations. Mapping the structural proximities of the attributed network and the labels into an identical embedding space via correlation projections significantly improves the embeddings. Some works go beyond factorization-based embedding approaches, introducing models that learn a function to generate embeddings by sampling and aggregating features from a node's local neighborhood. GraphSAGE \cite{hamilton2017inductive} makes use of learnable aggregator functions that make it possible to infer representations for unseen nodes, given their attributes and links. \cite{velickovic2018graph} adapt recent work on attention mechanisms to compute a node representation by attending to its neighbors. \section{Proposed Approach} The proposed approach is intended to produce a novel model for learning representations of nodes and documents in a dynamic heterogeneous network, with the goal of computing meaningful recommendations in real time. The novelty lies in the capacity of the model to infer representations of unseen documents, for which no network information is available, in the same embedding space as the previously observed nodes. The approach is divided into three steps: \begin{enumerate} \item design of a first model to learn node representations that can handle textual attributes. In contrast to TADW, which relies on previously learned LSA representations, this model should learn word and document embeddings from scratch to ease the inference for unseen documents. 
This step should validate the possibility of learning meaningful text representations from graph information only. \item improvement of the model by focusing on natural language processing. The model would be able to predict the similarity of documents in a network based on their textual content only. It would make use of more advanced NLP techniques and further take advantage of the interplay between word and document representations and the network topology. Compared to CANE, the representations should be produced from text information only (still using the network as training supervision), and strong emphasis should be put on link prediction for unseen documents. \item integration of heterogeneity to handle different types of nodes and links. This would make it possible to apply the model to a wider variety of tasks, such as user-item recommendation and expert finding. Handling the diversity of node and link types should not rely on hand-crafted meta-paths, as proposed in Metapath2vec, but should be learned during the process, similarly to GraphSAGE. At this point, the data provided by DSRT from \textit{Peerus} would serve as a strong online evaluation of the proposed model. \end{enumerate} Step (1) has been achieved and is detailed in Section \ref{metho2}. The results are presented in Section \ref{results1}. More work on its theoretical background will be done in the near future. Step (2) is ongoing research, which I briefly present in Section \ref{metho3} and for which I provide some preliminary results in Section \ref{results2} motivating the research direction. For all evaluations, I detail the datasets used and the experimental setups in Section \ref{metho1}. \section{Methodology} In this section, I first provide an overview of the evaluations used for my research, then I detail a contribution corresponding to the first step of my thesis, and I finally briefly address the planned methodology for the next steps. 
\subsection{Evaluation} \label{metho1} I first detail some datasets commonly used in the literature. Then I briefly present traditional experiments conducted for evaluating network embeddings. \subsubsection{Datasets} I present below two small datasets, Cora and CiteSeer, according to the treatments applied in \cite{sen2008collective}, as well as a larger dataset, DBLP, widely used by the scientific community: \begin{itemize} \item \textbf{Cora} \cite{mccallum2000automating} is a network of scientific articles in the field of machine learning, grouping 7 classes (scientific subdomains) with 2708 documents, 1433 distinct words in the vocabulary and 5429 citation links. \item \textbf{CiteSeer} \cite{giles1998citeseer} is a network of scientific articles grouping 6 classes over 3312 documents, 3703 distinct words in the vocabulary and 4732 citation links. \item \textbf{DBLP} \cite{ley2002dblp} is a database of several million scientific articles in the field of computer science, started in 1993. A powerful disambiguation system makes it possible to identify the authors \cite{ley2009dblp}. \end{itemize} Many other non-scientific datasets that present similar data structures exist. Among others, Q\&A websites and online encyclopedias provide rich sources of data for which we can tackle challenges similar to those of the scientific literature. Moreover, the industrial player supporting this research provides a large dataset of scientific literature with user activity logs, which makes it possible to apply and evaluate the proposed models on online recommendation tasks. \subsubsection{Experiments} To evaluate network representation learning models, it is common to use the node embeddings as the input space of a linear algorithm that classifies the nodes. 
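This linear-evaluation protocol can be sketched as follows. For the sketch to stay self-contained, a nearest-centroid rule (itself a linear classifier) stands in for the logistic regression typically used, and the embeddings and labels are synthetic.

```python
import numpy as np

def evaluate_embeddings(Z, labels, train_ratios=(0.1, 0.3, 0.5), seed=0):
    """Fit a nearest-centroid (linear) classifier on a fraction of the
    node embeddings and report accuracy on the held-out nodes."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    scores = {}
    for ratio in train_ratios:
        idx = rng.permutation(n)
        n_tr = int(ratio * n)
        tr, te = idx[:n_tr], idx[n_tr:]
        classes = np.unique(labels[tr])
        centroids = np.vstack(
            [Z[tr][labels[tr] == c].mean(axis=0) for c in classes])
        dists = ((Z[te][:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        pred = classes[dists.argmin(axis=1)]
        scores[ratio] = (pred == labels[te]).mean()
    return scores

# Synthetic embeddings: two well-separated classes
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
scores = evaluate_embeddings(Z, y)
```

In the actual evaluations, `Z` would be the embeddings produced by the compared algorithms and `labels` the ground-truth classes of Cora or CiteSeer.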
For each set of representations produced by a particular algorithm, the proportion of representations used for training is varied from 10\% to 50\%, and the average prediction accuracy of the classifier is computed over the rest of the node representations, given a set of ground truth labels. This evaluation was used for the results in Section \ref{results1}. An extension of this evaluation consists in observing only from 30\% to 70\% of the nodes when learning the representations. Then, the classification accuracies are computed on the unobserved nodes, using only their attributes for prediction. As such, we evaluate the algorithm on its capacity to infer representations for new unseen documents. This evaluation was used in Section \ref{results2}. Note that this is different from the inductive prediction performed by GraphSAGE, which also infers representations for unseen nodes, but with the knowledge of the actual new links of these nodes. Finally, to evaluate the model of step (2), link prediction constitutes a good evaluation task. Several ways to generate a pair of training/test sets exist (random, temporal). The goal is then to distinguish unseen links from non-existing ones. The most suited metric for this is the ROC AUC. As with the previous classification task, it is possible to extend this evaluation to unobserved new documents by hiding a proportion of the nodes (and not of the links) during learning. \subsection{Document Network Embedding} \label{metho2} In this section, I present the first contribution of my thesis \cite{brochier2019global}, \textit{GVNR}{} (Global Vectors for Node Representation), a model to learn node representations, with an extension, \textit{GVNR-t}{}, to handle text-associated nodes. We seek to learn two sets of representations of the nodes, $U \in \mathbb{R}^{n \times d}$ and $V \in \mathbb{R}^{n \times d}$, $n$ being the number of nodes in the network and $d$ the dimension of the learned embeddings. 
\subsubsection{Factorization Problem} We formulate a factorization problem on a random-walk based co-occurrence count matrix $X$ generated from an input network, measuring the reconstruction error only for positive coefficients and a fraction of randomly sampled zero coefficients: \begin{equation} \underset{U,V,b^U,b^V}{\mathrm{argmin}} \sum_{i=1}^n \sum_{j=1}^n s(x_{ij})\big(u_i \cdot v_j + b^U_i + b^V_j - \log (x_{ij})\big)^2 \end{equation} $b^U_i$ and $b^V_j$ are two learned biases for the pair of node embeddings. The function $s$ effectively selects the coefficients considered for measuring the reconstruction error: \begin{equation} s(x_{ij}) = \begin{cases} 1 & \text{if } x_{ij} > 0,\\ m_{i} & \text{else, with } m_{i} \sim \text{Bernoulli}\Big(\frac{k\times n_i}{n-n_i}\Big). \end{cases} \end{equation} It takes the value 1 for all positive coefficients of $X$, while for zero coefficients its value is given by a Bernoulli random variable $m_i$, which depends on the number of distinct nodes with which node $i$ co-occurs, $n_i$. We introduce a global hyper-parameter $k \in \mathbb{N}^+$ to control the proportion of zero coefficients incorporated into the reconstruction error. \subsubsection{Extended Model for Networks of Documents} \label{sec:extended_model} In this brief section, we show how to extend \textit{GVNR}{} to deal with networks where nodes are short text documents. Assuming word order is negligible for short documents (such as a scientific abstract), we can model them as bags of words and thus represent a document $j$ with a vector $\text{doc}_j \in \mathbb{N^+}^m$, $m$ being the size of the vocabulary. We can further assume that the meaning of a short text can be captured by averaging the representations of its words \cite{le2014paragraph}. 
Therefore, with $W \in \mathbb{R}^{m \times d}$ a word embedding matrix, we define the context-vector representation of a node in the following way: $v_j = \frac{\text{doc}_j ~ W}{|\text{doc}_j|_1}$ \subsection{Improving Document Network Embedding} \label{metho3} \textit{GVNR-t}{} is able to jointly learn word, document and node embeddings in a network of documents. However, the textual information could highly benefit from more recent works in the field of NLP. In this direction, the recently introduced Transformer has shown great promise in learning dependencies between words for text representation. Besides its low computational complexity and its strong results on neural machine translation and unsupervised pre-training for language understanding, its core unit, the Scaled Dot-Product Attention, provides a good basis for extending \textit{GVNR-t}{}. The Scaled Dot-Product Attention takes as input a set of keys $K$ and values $V$, corresponding to projected representations of the words in a document, and a query $q$, possibly any kind of vector lying in the same space as the keys. As output, it generates a weighted sum of the values, whose weights are produced by confronting the keys with the query, following the formula: $\text{Attention}(q,K,V) = \text{softmax}(\frac{qK^T}{\sqrt{d_k}})V$, $d_k$ being the dimension of the query and the keys. My current research focuses on exploring the use of this attention mechanism for mutual attention between pairs of documents in a network. Using pre-trained word embeddings, I try to find a suitable variation of this unit for generating sparse weights (hence using functions other than the softmax) and I explore several ways to build an efficient query $q$ for mutual attention. 
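The Scaled Dot-Product Attention unit just described can be written in a few lines of numpy; the toy key, value and query matrices below are placeholders for projected word representations, and the construction of the mutual-attention query is precisely the open question of this research.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def scaled_dot_product_attention(q, K, V):
    """Attention(q, K, V) = softmax(q K^T / sqrt(d_k)) V: the query is
    matched against the keys, and the resulting weights form a convex
    combination of the values."""
    d_k = K.shape[-1]
    weights = softmax(q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

rng = np.random.default_rng(0)
K = rng.normal(size=(6, 4))   # 6 words of a document, projected to keys
V = rng.normal(size=(6, 4))   # the corresponding values
q = rng.normal(size=(4,))     # a query vector lying in key space
out, w = scaled_dot_product_attention(q, K, V)
```

Replacing the softmax with a sparsity-inducing normalization, as discussed above, only changes how `weights` is computed from the raw scores.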
\section{Results} \label{results} I first present the results obtained on multi-class classification by \textit{GVNR}{} and its extension with text, and then I show some preliminary results indicating that more emphasis should be put on the representations of the textual content of nodes in a network. Finally, I show an example of a visualization of the weights learned by a preliminary model adapting the Scaled Dot-Product Attention to networks of documents. \subsection{Results for \textit{GVNR}{} and its Extension With Text} \label{results1} The results presented in Table \ref{citation1text} show the average accuracies for multi-class classification obtained on Cora. First, we observe that \textit{GVNR}{} produces representations competitive with DeepWalk. Its extension \textit{GVNR-t}{}, with the integration of the textual content of the documents, significantly improves the quality of the embeddings, achieving even better performance than TADW, which however relies on more time-consuming textual representations obtained with latent semantic analysis. 
\begin{table}[h] \center \caption{Results of multi-class classification on Cora.} \begin{tabular}{l|ccccc} \% of training data &10\% &20\% &30\% &40\% &50\% \\ \hline DeepWalk &67.8 &71.6 &74.5 &75.8 &79.2 \\ \textit{GVNR}{} ($x_{\text{min}}=1$) &\textbf{69.5} &\textbf{72.6} &\textbf{75.9} &\textbf{78.1} &\textbf{80.2} \\ \hline DeepWalk+LSA &73.8 &77.9 &78.4 &78.1 &78.1 \\ TADW &77.1 &78.8 &78.2 &78.8 &78.6 \\ \textit{GVNR-t}{} &\textbf{79.3} &\textbf{80.7} &\textbf{80.8} &\textbf{81.4} &\textbf{81.1} \\ \hline TADW (text only) &60.5 &69.3 &72.7 &73.6 &74.5 \\ \textit{GVNR-t}{} (text only) &\textbf{74.5} &\textbf{76.5} &\textbf{78.5} &\textbf{78.6} &\textbf{79.8} \\ \end{tabular} \label{citation1text} \end{table} \subsection{Motivation for a Stronger NLP Component} \label{results2} To gain more insight into the textual representations that are learned, Table \ref{citation1text} shows the accuracies obtained by TADW and \textit{GVNR-t}{} with their respective text components only. We see that \textit{GVNR-t}{} achieves significantly higher accuracies than TADW, but it is unclear whether this is due to an underlying better natural language understanding. Table \ref{citation1unseen} shows the results of classification when predicting on unseen documents. We observe that while \textit{GVNR-t}{} is capable of generalizing over the text attributes of the nodes, TADW comparatively fails. However, the results achieved by \textit{GVNR-t}{} are still lower than expected and motivate the use of more advanced NLP techniques to achieve better generalization. 
\begin{table}[h] \center \caption{Unseen documents classification accuracies on Cora.} \begin{tabular}{l|ccccc} \% of training data &30\% &40\% &50\% &60\% &70\% \\ \hline TADW &39.7 &48.9 &50.2 &51.5 &52.2 \\ \textit{GVNR-t}{} &\textbf{64.3} &\textbf{67.7} &\textbf{71.2} &\textbf{73.6} &\textbf{73.8} \\\hline \end{tabular} \label{citation1unseen} \end{table} \subsection{Mutual Attention for Networks of Documents} \label{results3} My ongoing research aims to adapt the Scaled Dot-Product Attention mechanism to networks of documents. The hope is to find a way to effectively infer weights for the words of the documents that strongly support (\textit{i.e.}\ highlight evidence for) the links in the network. Figure \ref{attention} shows an example of weights generated with a first draft of such a model. Interestingly, the model highlights words related to the field of reinforcement learning in both texts. \begin{figure}[h] \includegraphics[scale=1.65]{image.png} \caption{\label{attention} Mutual attention weights for a pair of documents extracted from Cora.} \end{figure} \section{Conclusion and Future Work} Text data and network data are the two most represented information types on the World Wide Web. Building meaningful representations for both is a crucial step in the design of efficient recommender systems. In particular, the ever-growing scientific literature constitutes a dynamic heterogeneous text-attributed network. The interplay between the textual content of scientific publications and the network dynamics of the actors of research brings strong challenges. The proposed research aims at discovering an efficient model to represent the variety of nodes and links in the scientific HAN in a unique representation space, in order to tackle a wide variety of recommendation tasks. The first works achieved during this research validated the complementarity of the two sources of information, text and graph, for learning meaningful representations. 
Ongoing research now aims at improving the natural language understanding component of the model, to truly be able to generate representations for streams of new documents. A last step will focus on extending the coverage of the types of nodes and links that the model can handle and intensively evaluating it on a wide variety of real-world recommendation tasks. \bibliographystyle{ACM-Reference-Format} \balance
\section{Analysis of heaps based on the superexpensive comparison principle} Heaps are data structures supporting the {\sl Insert\/}, {\sl FindMin\/}\ and {\sl DeleteMin\/}\ operations\footnote{All heaps mentioned in this article naturally support the {\sl Meld\/}\ operation.}. Heaps of the Fibonacci family \cite{FibonacciHeaps} support the {\sl DecreaseKey\/}\ operation as well\footnote{and general {\sl Delete\/}\ with $O(\log n)$ complexity}. Let $n$ denote the number of elements represented by the heap. Our goal is to achieve an amortized analysis with logarithmic ($O(\log n)$) complexity for {\sl DeleteMin\/}\ and constant ($O(1)$) complexity for the remaining operations. Both the Binomial and Fibonacci families of heaps have these properties, but Binomial heaps do not support the {\sl DecreaseKey\/}\ operation. Both Binomial and Fibonacci family heaps represent heap elements as vertices in a directed forest of heap-ordered trees (the key in a child is no less than the key in its parent). The roots of the trees are the candidates for the minimum. The internal representation of the forest will be discussed later. The superexpensive comparison principle leads us to implement {\sl Insert\/}\ in constant time by just adding an isolated vertex to the forest (the standard implementation compares the element's key with the current minimum and updates the minimum, which increases the required time by a constant; the comparison result is not remembered as a graph edge). The basis of both these families of heaps is the {\sl FindMin\/}\ operation\footnote{Thanks to caching the minimum, this is called just after {\sl DeleteMin\/}\ in the standard implementation.}. Its worst case time is $O(n)$, but this time can be prepaid by preceding operations. Each comparison results in creating an edge between current tree roots, therefore reducing the number of trees by $1$. {\sl FindMin\/}\ finishes with just one tree (unless we violate the superexpensive comparison principle). 
The potential $\Phi_0$ equal to the number of trees $\tau$ ($\Phi_0=\tau$) can pay for the comparisons, and maintaining $\Phi_0$ raises the cost of inserts just by a constant. With an appropriate internal representation, the {\sl DeleteMin\/}\ operation (preceded immediately by {\sl FindMin\/}) can be implemented in $O(1)$ worst case time by removing the current minimum and all incident edges. The number of trees in the resulting graph increases by the degree of the minimum minus $1$, so the amortized cost of {\sl DeleteMin\/}\ with respect to $\Phi_0$ is the degree of the minimum. This is an incentive for keeping the trees as narrow as possible. To achieve small degrees of vertices, ranks and the rank invariant were introduced. Each vertex $v$ gets a nonnegative rank $r_v$. The rank invariant states that the size of the subtree of a vertex $v$ must be at least exponential in the vertex rank ($\ge c\beta^{r_v}$ for fixed $c>0$ and $2\ge \beta>1$). The rank invariant guarantees that the maximal rank is logarithmic ($\lfloor\log_\beta (n/c)\rfloor$). Standard variants of Binomial and Fibonacci heaps keep the number of a vertex's children equal to its rank, but in that case they must forget the results of some comparisons, which is against the superexpensive comparison principle. The other option is to allow the number of children to differ from the rank. A number of $v$'s children $\iota_v$ smaller than the rank $r_v$ would not be a problem, but for $\iota_v$ higher than the rank we would need the differences to be prepaid in potential. We introduce $\Phi_1$ equal to the sum of positive differences between the number of children and the rank of a vertex ($\Phi_1=\sum_{v\mid \iota_v>r_v} \iota_v-r_v$). Thanks to the rank invariant, for a minimum $m$, {\sl DeleteMin\/}\ increases $\Phi_0$ by at most a logarithm plus the decrease of $\Phi_1$ caused by the difference between the minimum's number of children and its rank ($\Delta \Phi_0 = (\iota_m-1) = -1+r_m+(\iota_m-r_m) \le -1+r_m-\Delta\Phi_1 \le \lfloor\log_\beta (n/c)\rfloor - 1 - \Delta\Phi_1$). 
To finish the description of the Binomial heap operations, we should show how ranks are defined and how {\sl Insert\/}\ and {\sl FindMin\/}\ are implemented with respect to ranks. The rank of a new isolated vertex created by {\sl Insert\/}\ is initialised to $0$. A comparison of two roots of the same rank $r$ results in an edge joining the trees, increasing the rank of the resulting root by $1$. If the sizes of the original trees were at least $c\beta^r$, the size of the resulting tree is at least $2c\beta^r\ge c\beta^{r+1}$, so this operation preserves the rank invariant. {\sl FindMin\/}\ works in two phases. In the first phase roots of the same rank are detected and pairwise joins of the roots of the same rank are done (details later). In the second phase the standard implementation finds the minimal root without creating corresponding edges, while our representation creates edges without changing ranks. The standard implementation of {\sl FindMin\/}\ ends with at most a logarithmic number of tree roots of different ranks; our implementation increases $\Phi_1$ by at most a logarithm instead. The potential $\Phi_0$ is enough to pay for the first phase of {\sl FindMin\/}. The second phase time is bounded by both the logarithm and the number of trees prior to the {\sl FindMin\/}. The standard implementation maintains a pointer to the minimum after each operation. Therefore it calls {\sl FindMin\/}\ just after a {\sl DeleteMin\/}\ and lets {\sl DeleteMin\/}\ pay for the second phase. According to the superexpensive comparison principle we cannot maintain the minimum between user calls of {\sl FindMin\/}\ in our implementation. Instead we introduce a potential $\Phi_2$ equal to the minimum of the logarithm and the number of trees ($\Phi_2=\min\{\tau, 1+\lfloor\log_\beta (n/c)\rfloor\}$). The potential $\Phi_2$ pays for the second phase of {\sl FindMin\/}. Maintaining $\Phi_2$ increases the cost of {\sl DeleteMin\/}\ by at most a logarithm ($1+\lfloor\log_\beta (n/c)\rfloor$), and the cost of {\sl Insert\/}\ by at most a constant ($O(1)$). 
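To make the two phases concrete, here is a small Python sketch of this lazy {\sl FindMin\/}: phase one pairwise links roots of equal rank (the loser of each comparison becomes a child, the winner's rank grows by one), and phase two links the surviving roots, which all have distinct ranks, under the overall minimum without changing its rank. The node structure and driver data are illustrative, not the internal representation discussed in this paper.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.rank = 0
        self.children = []

def find_min(roots):
    """Two-phase FindMin over a lazy forest of heap-ordered trees."""
    # Phase 1: link pairs of roots of equal rank.
    buckets = {}
    stack = list(roots)
    while stack:
        node = stack.pop()
        other = buckets.pop(node.rank, None)
        if other is None:
            buckets[node.rank] = node
        else:
            winner, loser = (node, other) if node.key <= other.key else (other, node)
            winner.children.append(loser)
            winner.rank += 1
            stack.append(winner)
    # Phase 2: the survivors have distinct ranks; link them under the
    # minimum without changing its rank (extra children are prepaid
    # by potential in the analysis above).
    survivors = list(buckets.values())
    top = min(survivors, key=lambda v: v.key)
    for v in survivors:
        if v is not top:
            top.children.append(v)
    return top

roots = [Node(k) for k in [5, 3, 8, 1, 7, 2, 6]]
m = find_min(roots)
```

With 7 rank-0 roots, phase one leaves three trees of ranks 2, 1 and 0, and phase two links the two non-minimal survivors under the root holding the minimum key.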
When there is no {\sl DecreaseKey\/}\ and we increment a rank just when roots of the same rank are joined, the rank invariant for $c=1$ and $\beta=2$ holds (completing the analysis of Binomial heaps). During {\sl DecreaseKey\/}\ the edge from the affected vertex to its parent is removed and the parent's rank is decremented. But this would not suffice to maintain the rank invariant for the other vertices on the path to the root. The standard implementation of {\sl DecreaseKey\/}\ uses a cut delaying strategy. The rank of any vertex can be decremented once, making the vertex critical. A second decrement results in making the vertex noncritical, a cut, and a rank decrement of its parent. According to the superexpensive comparison principle we cannot remove the edge, so we make the vertex noncritical and, instead of a cut, we just decrement the rank of its parent (which increments $\Phi_1$). So standard cascading cuts are transformed into cascading rank decrements. Criticality is important just for nonroot vertices. When {\sl FindMin\/}\ connects two tree roots, it makes the child noncritical. Let the noncritical rank $\rho_v$ be equal to the rank ($r_v$) for a noncritical vertex $v$ and one higher than the rank ($r_v+1$) for a critical vertex $v$. To finish the analysis we have to show two facts. The first is the constant cost of {\sl DecreaseKey\/}, the other is that the rank invariant persists. Let $\Phi_3$ be the number of critical nonroots ($\Phi_3=\bigl|\{v\mid v{\rm\ has\ parent} \wedge v{\rm \ is\ critical}\}\bigr|$). Vertices become critical only during {\sl DecreaseKey\/}. At most one vertex becomes critical during it, and the number of rank updates corresponds to the number of vertices that change from critical to noncritical, so {\sl DecreaseKey\/}\ is paid for from $\Phi_3$. Actually $\Phi_3$ must also pay into the potentials $\Phi_0$ or $\Phi_1$ (it pays into $\Phi_0$ in the standard implementation and into $\Phi_1$ in ours instead). We can number the children by the order of joining. 
Only guarantors of the rank are considered, i.e., children whose joining increased the rank and which were not reverted from critical to noncritical. At the time of the $i$-th join, the rank of the vertex must be at least $i-1$, so the $i$-th child has the same noncritical rank. The rank of the $i$-th child could decrease from noncritical rank $i-1$ to $i-2$ iff the child becomes critical. Let $M_r$ be the minimal possible size of the ancestor tree of a vertex with rank $r$. Then $M_r=M_{r-2}+\cdots+M_0+M_0+1$. For $M_{r+1}$ the sum increases by $M_{r-1}$, therefore we get $M_{r+1}=M_r+M_{r-1}$, which leads to $\beta^2=\beta+1$, giving the golden ratio $\beta=q=\frac{1}{2}(1+\sqrt{5})\approx 1.618034$ and a close connection to the Fibonacci sequence, which named the heaps. The main goal of \cite{ViolationHeaps} is to reduce the size of a heap representation. This is achieved by dividing vertices into active and inactive. Only active vertices have guaranteed constant access time to their parents. This leads to a modification of the rank invariant where only connected subtrees of active vertices are considered and the size of the subtree formed by vertex $v$ and its active descendants is bounded from below by $c\beta^{r_v}$. In \cite{ViolationHeaps} only the last two children of a vertex are active. We will consider at most the last two children of a vertex to be active as well. The order of children is important. Removal of active children of a vertex of rank $r$ could result in an isolated vertex, therefore a cut of the vertex could result in a reduction of the vertex rank by $\Theta(\log n)$. That would be incompatible with the $\Phi_1$ used in our analysis of Fibonacci heaps. This is why we should change the analysis and implementation slightly. We introduce a third kind of children: let us call children which are noncritical or critical inner, and introduce outer children\footnote{Introducing outer vertices for Fibonacci heaps would make their description cleaner; otherwise it suffices to prevent decrementing a rank under 0.}. 
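The telescoping of the minimal-size recurrence above can be checked mechanically. The sketch below evaluates the stated sum $M_r=M_{r-2}+\cdots+M_0+M_0+1$ directly and compares it with its collapsed Fibonacci form $M_{r+1}=M_r+M_{r-1}$; function names are mine, not the paper's.

```python
# Check of the minimal-size recurrence from the text: the direct sum
# M_r = M_{r-2} + ... + M_0 + M_0 + 1 collapses to the Fibonacci-style
# recurrence M_{r+1} = M_r + M_{r-1}, growing like beta**r with beta
# the golden ratio.

def M(r):
    """Minimal ancestor-tree size for rank r (direct form of the sum)."""
    if r == 0:
        return 1
    return sum(M(i) for i in range(r - 1)) + M(0) + 1

# The telescoped Fibonacci form must agree with the sum term by term.
fib = [1, 2]
for r in range(2, 10):
    fib.append(fib[r - 1] + fib[r - 2])
assert [M(r) for r in range(10)] == fib

# The growth ratio approaches the golden ratio from the text.
q = (1 + 5 ** 0.5) / 2
assert abs(fib[9] / fib[8] - q) < 0.01
```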
For a vertex $v$ let $o_v$, $c_v$, resp.\ $n_v$ denote the number of outer, critical, resp.\ noncritical children of $v$. The first phase of {\sl FindMin\/}\ creates noncritical children, the second phase of {\sl FindMin\/}\ creates outer children. Outer children of a vertex will be maintained at the start of its children list. They will be followed by inner children implicitly ordered by increasing noncritical rank. Potential $\Phi_1=\sum_v o_v$ will be the number of all outer children. The first change in the decrement is that instead of a cascading rank decrement we do a cascading rank recomputation, and it is implicitly stopped at outer vertices. Another change is that when a critical vertex's rank is decremented, we create an outer vertex instead of a noncritical one, and we move the outer vertex out from its original position among the children to the first place of the children list\footnote{At least we mark the vertex and do it at an appropriate time.}. The last change in the cascading rank recomputation is that the rank of a vertex should be recomputed only when the rank of one of its last two children changes. If the rank drops by more than one, it is considered as (at least) two decrements, so even a noncritical vertex becomes outer. To pay for a bigger rank decrease during the recomputation, a potential $\Phi_4$ accumulating delayed rank decreases should be introduced ($\Phi_4=\sum_v r_v-c_v-n_v$). We have to prove that $(*)$ $r_v\ge c_v+n_v$ holds all the time. We have to redefine ranks in such a way that the rank invariant holds even on active subtrees. If the two highest noncritical ranks of inner children differ by at most 1, we let them both be active and define the rank of the parent to be one higher than the maximal one. Otherwise, if there is an inner child, there must be a child $w_0$ with the highest noncritical rank $\rho_{w_0}$; we let it be the only active child and the rank of its parent be the same. 
But in this case we do not accept the child $w_0$ to be critical; we should make such a critical $w_0$ outer and recompute the rank if that happens. There will be exceptions at small ranks, but it would not affect the recurrence for the minimal active tree size $M_r$ for a given rank $r$. \vbox to 100mm{ \kern100mm \hbox{\hss\kern 10mm\pdfximage width 12cm {padovan_6.pdf}% \rlap{\smash{\pdfsave \pdfrefximage\pdflastximage}}\pdfrestore \kern0cm} \vss \hbox{fig 1: Minimal size recurrence; grey color denotes a critical child} \hbox{solid line connects an active child to its parent} \kern5mm } If the active child is or becomes critical, its rank is one less than its noncritical rank. So we get $M_r=1+M_{r-2}+M_{r-3}$. It leads to $\beta^3=\beta+1$, giving the plastic number $\beta=p={\root3 \of {\frac{1}{2}\bigl(1+\sqrt{23/27}\bigr)}}+{\root3 \of {\frac{1}{2}\bigl(1-\sqrt{23/27}\bigr)}}\approx 1.324718$ and a close connection to the Padovan sequence, which named the heaps. We leave the rank definition details, the proof of invariant $(*)$, and the proof that the rank recomputation cost remains constant for Padovan heaps to their own chapter. Prior to it, we should mention important implementation details to support the analysis so far. \section{Implementation details} \vbox to 65mm{ \hbox{\hss\kern 5mm\pdfximage width 9cm {padovan_3.pdf}\rlap{\smash{\pdfsave\pdfsetmatrix{0 -1 1 0}\pdfrefximage\pdflastximage}}\pdfrestore \kern5cm} \vfil \hbox{fig 2: forest of heap ordered trees, its representation in Fibonacci heaps (outer children} \hbox{variant), and in Padovan heaps; pink color denotes an outer child} \kern5mm } The internal representation of the forest is a list of tree roots; each vertex points to the list of its children. For the decrement interface, the user needs to know pointers to vertices. This is why {\sl Insert\/}\ returns a pointer to the vertex for the user's future use. To support decrements, we require doubly linked lists. I prefer the left list to be circular and the right list to end in a \hbox{\sl null\/}\ pointer. 
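The list layout just described can be modeled by a toy sketch. Class and field names are assumptions of this sketch, not the paper's code: `left` links are circular, `right` links end in a null, and the parent is reachable only from the rightmost child's `right` slot.

```python
# Toy model (assumed names) of the children-list representation: circular
# left list, null-terminated right list, parent stored only in the right
# pointer of the rightmost child.

class Node:
    def __init__(self):
        self.left = self.right = None

def link_children(parent, children):
    """Wire a nonempty list of children under `parent` in the described layout."""
    for a, b in zip(children, children[1:]):
        a.right, b.left = b, a
    children[0].left = children[-1]     # circular left list
    children[-1].right = parent         # rightmost child's right = parent

def is_last(v):
    """v is rightmost iff following right then left does not return to v."""
    return v.right is None or v.right.left is not v

p, kids = Node(), [Node() for _ in range(3)]
link_children(p, kids)
assert kids[0].left is kids[2]          # left list is circular
assert is_last(kids[2]) and not is_last(kids[0])
assert kids[2].right is p               # parent found from the rightmost child
```

The `is_last` test is the same pointer trick the text uses later to detect the end of a list without a dedicated parent pointer in every element.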
The circularity of the left list gives us access to both ends of the list, allowing inserts at either end. This is not important for Fibonacci heaps, but we will use it in Padovan heaps to insert at different ends in the first and second phases of {\sl FindMin\/}, and it will allow us to reinsert children becoming outer at the proper place. Binomial family heaps need not represent parent pointers, while Fibonacci family heaps require them for the implementation of decrement preventing heap degeneration. The idea of \cite{ViolationHeaps} is that parent pointers need not be present at all elements, and in Padovan heaps we save space by storing the parent pointer only in the right pointer of the rightmost list element. We can check that $v$ is at the end of the list by the fact that $v\to\hbox{\sl right\/}\to\hbox{\sl left\/}$ does not point to $v$. Accessing the parent from elements in the middle of the list would be expensive, but we will not access them in the Padovan heap implementation. We need not maintain the vertex type information for roots; their corresponding field can be filled arbitrarily. {\sl FindMin\/}\ sets the information appropriately when creating the edge. \section{Details of {\sl FindMin\/}\ implementation} The first phase of {\sl FindMin\/}\ is usually implemented on a RAM\footnote{The same could be implemented on the pointer machine model (where we cannot allocate a nonconstant array). Ranks could be implemented by pointers to a global (expanding when needed) list of integers. The list could store placeholders for pointers back to the roots of the current {\sl FindMin\/}\ procedure.} using a long enough array addressed by tree root ranks. Roots are put into empty places in the array, and if a place is already occupied, a root of the same rank is identified and the pair can be compared and joined together according to the result. This leaves the place empty and increases the rank of the resulting root. After all roots are put into empty places, roots from nonempty places in the array are taken to form a new list of roots. 
The final step is ineffective especially when the array is almost empty. When we have the roots in a doubly linked list\footnote{Alternatively we could maintain a doubly linked list of occupied places in the array, which would work even when children are not doubly linked (Binomial heaps). A stack of at least once used places would work as well. Its use would be charged to the first phase.}, we can change the procedure slightly. We do not remove the roots from the list when putting pointers to them into the array. We remove a root from the list only during a join when it becomes a child\footnote{We must be careful in the list of roots traversal to save the pointer to the current root$\to\hbox{\sl right\/}$ before the root could be removed from the list.}. This implementation detail guarantees we have a list of roots of different ranks after the list is traversed. We can clean the array by traversing the resulting list. Therefore we visit only nonempty places of the array, and the time is bounded by the number of roots at the beginning of the second phase of {\sl FindMin\/}, which is bounded both by the number of roots at the beginning of the first phase of {\sl FindMin\/}\ and by $1+\lfloor\log_\beta(n/c)\rfloor$. The second phase of {\sl FindMin\/}\ links the last root with the second last according to the comparison. Then it links the third last with the fourth last\footnote{They become the second last and third last after the first link.}, and so on, during the cyclic traversal. The second phase ends when there is only one root on the list. 
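The first-phase pairing can be sketched as follows. This is a hedged simplification under assumed types: roots are reduced to (key, rank) pairs, a dict stands in for the rank-indexed array, and "join" just keeps the smaller key; the real structure of course links trees rather than discarding them.

```python
# Sketch (assumed simplified types) of the first FindMin phase: equal-rank
# roots are paired via a rank-indexed table until all remaining roots have
# distinct ranks, as the text guarantees after the traversal.

def consolidate(roots):
    slots = {}                       # rank -> waiting root (stands in for the array)
    for key, rank in roots:
        while rank in slots:
            other_key, _ = slots.pop(rank)
            key = min(key, other_key)   # same rank: join, smaller key wins
            rank += 1                   # the join raises the winner's rank
        slots[rank] = (key, rank)
    return sorted(slots.values(), key=lambda r: r[1])

out = consolidate([(5, 0), (3, 0), (7, 1), (2, 1)])
# The two rank-0 roots join into rank 1, which joins a rank-1 root into
# rank 2; the remaining rank-1 root stays behind.
assert out == [(2, 1), (3, 2)]
assert len({r for _, r in out}) == len(out)   # all ranks distinct
```

Clearing only the visited slots afterwards is what keeps the cleanup bounded by the number of surviving roots rather than by the array length.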
\section{Analysis details for Padovan heaps} \vbox to 90mm{ \kern 225mm \hbox{\hss\kern -10mm\pdfximage width 16cm {padovan_9.pdf}\rlap{\smash{\pdfsave\pdfrefximage\pdflastximage}}\pdfrestore \kern 150mm\hss} \kern-150mm \hbox{fig 3: forest of heap ordered trees with ranks; yellow color denotes a misplaced outer child,} \hbox{nonrectangular shape denotes a dangerous vertex} \vss } Rank definition:\hfil\break Let us define \hbox{\sl null\/}\ pointers to be noncritical with rank $-1$. Let $w_0$ be the last inner child of $v$ and $w_1$ the second last; both could be \hbox{\sl null\/}\ if such a child does not exist.\hfil\break 1. If $\rho_{w_0}>\rho_{w_1}+1$ and $w_0$ is noncritical, then only $w_0$ is active and $r_v=\rho_{w_0}=r_{w_0}$.\hfil\break 2. If $\rho_{w_0}>\rho_{w_1}+1$ and $w_0$ is critical\footnote{We define $r_v=\rho_{w_0}=r_{w_0}+1$ temporarily for the analysis.}, we make $w_0$ outer and recompute the rank again.\hfil\break 3. Otherwise both $w_0$ and $w_1$ are active (or \hbox{\sl null\/}) and $r_v=\rho_{w_0}+1$. Rule 2 applies only temporarily; only the vertex whose rank is being recomputed could have this property. Let us call vertices whose rank corresponds to rule 1 dangerous, and vertices whose rank corresponds to rule 3 safe. We will call an application of any of rules 1, 2, 3 a rank computation step. There can be several rank computation steps by rule 2 and at most one computation step by the other rules. Vertices of rank 0 are safe. {\sl Insert\/}\ creates a safe vertex of rank 0 (with both $w_0$, $w_1$ \hbox{\sl null\/}); rank updates by joins during the first phase of {\sl FindMin\/}\ create safe roots as well. Ranks according to rule 1 can be achieved only by cuts and the corresponding cascading rank recomputation process; the rank of the vertex $v$ must have been at least $\rho_{w_0}+1$ when $v$ was safe. Therefore when a vertex becomes dangerous it becomes simultaneously critical. 
Later it could become outer, either during the same rank recomputation or during another {\sl Cut\/}\ or {\sl DeleteMin\/}. Padovan heaps prevent the creation of dangerous noncritical inner vertices, which would otherwise appear, by changing the first phase of {\sl FindMin\/}: roots are made safe before we use their rank. Details and cost of {\sl MakeSafe\/}, making one dangerous outer vertex safe, will be discussed later. If we include the safe test\footnote{The $r_v>\rho_{w_0}$ test needs constant time for a vertex $v$ of a positive rank, as $w_0$ is always the rightmost child.}, and we call {\sl MakeSafe\/}\ if it fails, in the first phase of {\sl FindMin\/}, the following invariants $(**)$, $(*{*}*)$, $(*{**}*)$ hold: $(**)$ noncritical inner vertices are dangerous only temporarily. Let the inner children of $v$ from left to right in the list be $i^v_0$, $i^v_1$, \dots, $i^v_{c_v+n_v-1}$. Then $$(*{*}*)\kern 1cm\hbox{$\rho_{i^v_k}\ge k$.}$$ $$(*{**}*)\kern 1cm\hbox{$\rho_{i^v_{k+1}}\ge \rho_{i^v_{k}}+1$ for $k\ge0$.}$$ As we allow joins adding inner children only to safe (noncritical) vertices of the same rank, {\sl FindMin\/}\ maintains $(*{**}*)$. Nothing changes noncritical ranks; vertices could only be removed from the list of inner children, but that is compatible with the invariant. So the noncritical ranks of inner children are strictly increasing, and $(*{*}*)$, $(*)$ become its trivial consequences (for $(*{*}*)\Rightarrow (*)$ consider $\rho_{w_1}=\rho_{i^v_{c_v+n_v-2}}\ge c_v+n_v-2$; both rules 1 and 3 lead to $r_v\ge c_v+n_v$). There is one more detail we have not addressed yet. When the rank recomputation process decrements the rank of an inner vertex a second time, we have to make the vertex outer. But if it is not among the last two children, it has no quick access to the parent\footnote{We even cannot quickly detect whether it has a parent.}. This is why we divide outer vertices into placed and misplaced. 
The second phase of {\sl FindMin\/}\ creates placed outer children, while a decrement of a critical child's rank makes the child outer misplaced. We check whether the child is the rightmost or the second rightmost. If not, the rank recomputation ends; otherwise we have access to the parent, and we recompute the parent's rank and place the child and possibly other misplaced outer children in the process. As the parent's rank is not decremented when a child becomes outer misplaced, we need not include misplaced vertices in $\Phi_1$. But we should prepay their future placing, so we have to count them in another potential $\Phi_5$. Let $m_v$ be the number of outer misplaced children of $v$ and let $p_v$ be the number of outer placed children of $v$. Then $\Phi_1=\sum p_v$ and $\Phi_5=\sum m_v$. Let us call a placing step the operation placing a misplaced child during a rank recomputation. The last potential we will need is $\Phi_6$, the number of dangerous vertices. \vbox to 93mm{ \kern100mm \hbox{\kern-2mm\pdfximage width 14cm {padovan_A.pdf} \rlap{\smash{\pdfsave \pdfrefximage\pdflastximage}}\pdfrestore \hss} \vss \hbox{fig 4: payment schema of the amortized analysis} \kern4mm } Now we have mentioned all the required potentials. The total potential according to which we do the amortized analysis is $t_0\Phi_0+t_1\Phi_1+t_2\Phi_2+t_3\Phi_3+t_4\Phi_4+t_5\Phi_5+t_6\Phi_6$, where $t_0$, $t_1$, $t_2$, $t_3$, $t_4$, $t_5$, $t_6$ are properly chosen constant times with $t_0\le t_1<t_2$, $t_1<t_5<t_6$, $t_5+t_6<t_3$, and $t_5+t_6<t_4$. In the following we discuss the constants $t_i$ and recapitulate that the time analysis is correct with them. The analysis of rank restoration has not been stated yet. Remember that $\Phi_0=\tau$ is the number of trees in the heap, $\Phi_1=\sum p_v$ is the number of all outer placed children, $\Phi_2=\min\{\tau,1+\lfloor\log_\beta(n/c)\rfloor\}$, $\Phi_3=\sum c_v$ is the number of critical children, $\Phi_4=\sum_v r_v-(c_v+n_v)$ is the number of child cuts resp. 
making a child outer not yet reflected in a rank decrease, $\Phi_5=\sum m_v$ is the number of all outer misplaced children, and $\Phi_6$ is the number of all dangerous vertices. Time $t_0$ suffices to work with a root during the first phase of {\sl FindMin\/}\ (one step in the list traversal, testing that the root is safe and calling {\sl MakeSafe\/}\ if not ({\sl MakeSafe\/}\ time is not paid from $t_0$), testing the emptiness of the same-root-rank identifying place, putting a root into the same-root-rank identifying place, removing a root from the same-root-rank identifying place, and joining two trees with the same root rank including the removal of one of them from the list of roots). Time $t_1$ should be at least $t_0$ plus the time required to remove the information about the parent of an outer child during a {\sl DeleteMin\/}\ operation. In fact, we maintain the parent information only in the oldest outer child in the case of a root of rank 0, and the information is removed implicitly by joining the list of the minimum's children with the list of tree roots, so no additional time is needed and therefore $t_1=t_0$. Time $t_2$ should be at least $t_1$ plus the time required in the second phase of the {\sl FindMin\/}\ operation per remaining tree. We consider emptying the same-root-rank identifying places to be part of the second phase in this analysis, so the time for traversing one root and emptying the corresponding place is incorporated in $t_2-t_1$, as well as the time to join two roots including the removal of one of them from the list, and 2 steps to traverse a root during the linking traversals of the list of trees. Time $t_5$ should be at least $t_1$ plus the time to cut an outer misplaced vertex from the children list and the time to place it as an outer placed vertex at the list start; the parent is known when we need to access the list start. It includes the time to test that the vertex is outer misplaced. 
Time $t_6$ should be at least $t_5$ plus the time to change the status of the rightmost child from inner (critical or noncritical) to outer misplaced, the time for one rank recomputation step by rule 2 and one by another rule, and the time to stop the while loop of {\sl MakeSafe\/}\ if the parent becomes safe. Time $t_3$ should be at least $t_5+t_6$ plus the time to change the status from critical to outer misplaced, the time to test that the vertex is critical and among the two rightmost children, and a step to the vertex's parent (in that case). It also covers one rank recomputation step by rule 2 and one by another rule. Time $t_4$ should be at least $t_5+t_6$ plus the time to change the status from noncritical to outer misplaced, the time to test that the vertex is noncritical and among the two rightmost children, the test that the two rank differences exceed 1, and a step to the vertex's parent in that case. It also covers one rank recomputation step by rule 2 and one by another rule, and the time for one step of the while loop of {\sl MakeSafe\/}\ if the parent remains dangerous. Each placing step is paid by $t_5-t_1$, therefore the cost of a rank computation step is constant. A rank recomputation of a vertex $v$ includes at most one step according to rules 1 or 3. $\Phi_3$ could be increased once if the affected vertex changes from a noncritical to a critical child during the recomputation. It could include several steps according to rule 2. Each recomputation step according to rule 2 decreases $\rho_{w_0}$ by at least 2 and $c_v$ by 1 and preserves $n_v$. It decreases $\Phi_3$ (not counting the criticality of the recomputed vertex) by 1, and except for the last one, $\rho_v$ is decreased to the next $\rho_{w_0}$, so $\Phi_4$ decreases by at least 1. The last step ends with $\rho_v=\rho_{w_0}+1$, so $\Phi_4$ does not increase. Now we can return to {\sl MakeSafe\/}. What is its cost? And how could it be coded? 
While $v$ is dangerous, we make $w_0$ outer misplaced and recompute $v$'s rank\footnote{We could localise all vertices which should become outer this way by searching $v$'s children leftward, till we find two inner children whose noncritical ranks differ by at most 1, or an inner child of rank 0 is reached, or we reach an outer placed child. Traversed inner children and misplaced children are put to the left end of the children list as placed children.}. The time to make the vertex safe is proportional to the number of vertices traversed. Outer misplaced vertices already have $t_5-t_1$ prepaid in $\Phi_5$ for their change to outer placed and the move to the start of the list. Let $i^v_{c_v+n_v-k}$, \dots, $i^v_{c_v+n_v-1}$ be the inner children among the traversed vertices. Except for one step, all recurrent steps are paid by $\Phi_4$. The step not paid this way does not increase $\Phi_4$. As we call {\sl MakeSafe\/}\ only at roots, $\Phi_3$ is not increased by it. And no new critical vertex is introduced. $\Phi_6$ pays for constant time and the first increase of $\Phi_5$, so the full {\sl MakeSafe\/}\ is prepaid. The {\sl Meld\/}\ operation works in the scenario when we use several heaps at the same time. {\sl Meld\/}\ unions the sets of two of them. The operation just connects the lists of tree roots (and removes one dummy head). In the analysis we should sum the corresponding potentials. The final sum could only decrease in the case $\Phi_2$ exceeded its upper bound. So the cost of the {\sl Meld\/}\ operation with respect to the unified potential is constant. {\sl Insert\/}\ could be implemented as the creation of a heap with only one tree with only one vertex, followed by {\sl Meld\/}\ with the original heap\footnote{It is better to omit the temporary creation of a dummy head.}. As the potential of the created heap is constant, the cost of the {\sl Insert\/}\ operation is therefore constant. Let us bound the {\sl FindMin\/}\ cost except the cost of {\sl MakeSafe\/}{}s. 
{\sl FindMin\/}\ ends with $\Phi_0=\Phi_2=1$, $\Phi_1$ is usually increased by {\sl FindMin\/}, $\Phi_3$ is not changed. $\Phi_4$, $\Phi_5$, and $\Phi_6$ are not changed except by {\sl MakeSafe\/}{}s. The constants $t_0$, $t_1$ and $t_2$ were chosen such that its time and the increase of $\Phi_1$ are fully paid by the decrease of $\Phi_0$ and $\Phi_2$, so the cost of {\sl FindMin\/}\ is constant. {\sl DeleteMin\/}\ on the minimum $v$ removes in constant time the tree with $v$ from the original heap (it was the only tree of the heap). It creates in constant time a heap from $v$'s children and melds the two heaps\footnote{Again omitting the temporary creation of a dummy head.}. It increases $\Phi_0$ by $\iota_v-1=p_v+m_v+c_v+n_v-1\le p_v+m_v+r_v-1\le p_v+m_v+\lfloor\log_\beta(n/c)\rfloor$, and as $\Phi_1$ is decreased by $p_v$ and $\Phi_5$ by $m_v$, the cost is bounded by $O(1)+(\lfloor\log_\beta(n/c)\rfloor)(t_0+t_2)-p_v(t_1-t_0)-m_v(t_5-t_0)\in O(\log n)$ (we count with the maximal possible increase of $\Phi_2$). Possible decrements of $\Phi_3$, $\Phi_4$, and $\Phi_6$ could only reduce the cost. {\sl DecreaseKey\/}\ could be implemented by {\sl Cut\/}\ followed by a cascading rank update followed by the decrement of the key in the newly forced root. The decrement of the key in the root requires constant time and does not change the potential, so we have to prove the constant cost of {\sl Cut\/}\ followed by cascading rank updates to finish its analysis. {\sl Delete\/}\ could be implemented by {\sl Cut\/}\ followed by a cascading rank update followed by a delete of the newly forced root. The analysis of the delete of the newly forced root would be the same as the analysis of {\sl DeleteMin\/}\ (except now there could be more trees in the heap), so the constant cost of {\sl Cut\/}\ followed by cascading rank updates would be sufficient to prove the cost is bounded by $O(1)+(\lfloor\log_\beta(n/c)\rfloor)(t_0+t_2)\in O(\log n)$. So, finally, what happens during {\sl Cut\/}($v$) and the following cascading rank updates? 
We detect by $v\to\hbox{\sl right\/}\to\hbox{\sl right\/}\to\hbox{\sl left\/}\to\hbox{\sl left\/}\not=v$ that the vertex is among the last two vertices on its list. If not, the update ends by cutting $v$. If it is, we check by $v\to\hbox{\sl right\/}\to\hbox{\sl left\/}\not=v$ whether the vertex is the last. In both cases we know the vertex's parent $p$ and we can test that it is not a dummy head by $p\to\hbox{\sl right\/}\not=p$. Otherwise we end, as $v$ was a root. In the last case the rank recomputation starts at the parent $p$ after $v$ is cut. A cut of a nonroot increases $\Phi_0$ by 1 and could increase $\Phi_2$ by 1 as well. Depending on the type of the cut vertex it could decrement either $\Phi_1$, $\Phi_3$, $\Phi_5$, or none of them. It could increase $\Phi_4$ by 1. If $p$ becomes dangerous, $\Phi_6$ is increased by 1 as well. It takes constant time and makes at most a constant change to the potential, so its cost is constant. A cut of a root acts similarly, as we usually do not detect that the vertex is a root, but there is no change in the potential in that case. What we should analyse is the following rank recomputation process. We start by seeking the active inner children. While the last child is outer misplaced, we cut it and reinsert it as outer placed at the start of the children list\footnote{We could change them to placed and move them all at once by at most 4 pointer changes when an inner child is found.}. $\Phi_5$ is decremented by 1, $\Phi_1$ incremented by 1, and $t_5-t_1$ pays for the required time. When the last child is inner, we seek for the second last the same way\footnote{Thanks to $(*{*}*)$ we could stop seeking for the second inner child when the second child is outer misplaced. We will not find $w_1$, but we will know rule 3 would not apply.}. Whenever the considered child is outer placed, we can stop the seeking. As a result of the process we get those of $w_0$ and $w_1$ which are important, in constant cost, and we can detect which rule of the rank definition applies. 
If it is rule 2, we have to make the critical vertex outer misplaced and continue the seeking. The cost of the rank recomputation of one vertex is the time for one step by rule 2 and one step by another rule. When rule 2 applies $k+1$ times, the rank dropped by at least $2k+1$, so $\Phi_4$ was decremented by at least $k$ and $k$ uses of rule 2 are paid by\footnote{Decrements of $\Phi_3$ remain in reserve for other use.} $k(t_4-t_5)$. To simplify the coding we could stop the cascading rank recomputation when $p$ is a dummy head, or the rank of $p$ does not change, or when $p$ is not among the last two children. Calling the rank recomputation in the case the rank cannot change would cost us at most a constant, as after the recomputation of the rank the process terminates, and it is worth the simplification. The cost of the last rank recomputation is constant, as well as of the second last. What should be analysed carefully are the cases when the rank recomputation does not stop at the parent $g$ of $p$. This means $p$ must have been an active inner child of $g$. If $p$ was a critical active child and its rank decrements by a positive amount, it becomes outer misplaced and $\Phi_3$ decrements by 1, $\Phi_5$ increments by 1, and $\Phi_4$ cannot increase. If $g$ becomes dangerous, $\Phi_6$ is incremented by 1. $\Phi_3$ pays $(t_3-t_5-t_6)$ for the continuation. If $p$ was a noncritical active child and becomes outer misplaced, its rank was decremented by at least 2. If rule 2 was applied to $p$, we can pay $(t_3-t_5-t_6)$ for the continuation by a decrement of $\Phi_3$. Otherwise $p$ lost exactly one inner child (which could have become outer) and $r_p-c_p-n_p$ (and therefore $\Phi_4$) was decremented by at least 1. $\Phi_5$ is incremented by 1 and $\Phi_6$ could be incremented by 1. $\Phi_4$ pays $t_4-t_5-t_6$ for the continuation. The last case is when the active children of $g$ remain, but rule 2 applies to $g$. 
This could happen when $p$ was noncritical and the only active child of a dangerous $g$, and $p$ became critical. Thanks to $(**)$ we know $g$ cannot be a noncritical inner vertex at the start of the {\sl Cut\/}. If $g$ was outer, the rank recomputation would stop and the final cost is an unimportant constant. If $g$ was critical, we would use $\Phi_3$ to pay for the continuation in its predecessors. We just should discuss the resources to pay for the rank recomputation of $g$. It depends on the second applied rule. If rule 2 was applied at least twice, we have a $\Phi_3$ decrement to pay from. If rule 3 applies, $g$ changed from dangerous to safe and we have a $\Phi_6$ decrement to pay from. Finally, if rule 1 applies, $r_g$ was decremented by at least 2 and just one inner child became outer, therefore we have a $\Phi_4$ decrement to pay from. As the last two steps of the cascading rank consolidation have constant cost and the other steps are fully paid by the potential decrease, the total rank recomputation cost is constant. \section{Concluding Remarks} Fibonacci heaps do not require defining outer children, but if we sacrifice a bit to allow them, we can use outer children even in Fibonacci heaps. The $\Phi_1$ would coincide with the original definition and there is no problem in stopping the cascading rank consolidation at outer children (the rank invariant would hold in both cases with the same $\beta$). It will better correspond to standard Fibonacci heaps, with the only difference that some roots of standard Fibonacci heaps would be hidden as outer children of other vertices. All mentioned variants of Fibonacci heaps and Padovan heaps respecting the superexpensive comparison principle could be compared to standard Fibonacci heaps by the following competition: a supporter of one structure defines a sequence of calls to the methods on an empty initial heap. The sequence is run on both implementations, the total times of the implementations are divided, and the result defines the gain. 
I claim that in such a game our variants would lose by at most a constant when the standard implementation's supporter chooses the sequence, and on the contrary there exists a sequence where the standard implementation loses by $\Omega(\log n)$, where $n$ is the length of the sequence (a prefix of the sequence generated for increasing $i$ by adding: $\{x_i={\sl Insert\/}(-i)$, $x_{i+1}={\sl Insert\/}(-(i+1))$, $m={\sl FindMin\/}()$, ${\sl DeleteMin\/}()\}$). The complicated second phase presented in the {\sl FindMin\/}\ implementation details guarantees there would be at most two outer children of the minimum after each {\sl FindMin\/}, and the others would hide forever on deeper levels of the tree. This prevents the creation of rank 4, so the average time remains constant. Actually you can see there is no {\sl DecreaseKey\/}\ in the used sequence, so the shown inefficiency does not need it. There is a counterargument against this competition: two standard Fibonacci heap implementations that differ only in a small detail defining the order of pairing the roots of the same rank could create sequences which are good against the other. When creating a heap of size $2^k+1$ by inserts in a proper order, they could force their implementation to create a heap where the minimum is alone in its tree while the other vertices are in one huge tree. On the opposite side, it is highly probable the minimum is in the huge tree in the other implementation. A following {\sl DeleteMin\/}\ and a reinsert of the deleted key could repeat the situation with costs of 1 and $k$. (To prevent the repetition the heap should link the last inserted vertex last, which is actually what our implementation does, as we insert them at the end of the list.) The claim is not proven, and we could probably improve the argument by defining fair competitions, but this is definitely out of the scope of this paper. 
\vbox to 80mm{ \hbox{\hss\kern 2cm\pdfximage width 9cm {padovan_8.pdf}\rlap{\smash{\pdfsave\pdfsetmatrix{0 -1 1 0}\pdfrefximage\pdflastximage}}\pdfrestore \kern5cm} \vfil \hbox{fig 5: forest of heap ordered trees, its representation in rank pairing heaps (one tree} \hbox{variant), and in Padovan heaps (isomorphism except the dummy root)} \kern5mm } Rank pairing heaps \cite{RankPairingHeaps} of B. Haeupler, S. Sen, and R. E. Tarjan use 3 pointers per node, as do Padovan heaps, so they are an alternative solution to the same problem. They are another implementation of Fibonacci heaps. Actually, in chapter 6 they mention unfair links, so they follow the superexpensive principle there. The trees are isomorphic, but the balancing differs. Our argumentation is strictly based on the principle, while theirs just confirms the principle is worthwhile. \section{Summary} We have shown one pointer could be saved in Fibonacci heaps in a rather conservative extension of their variant. Actually we gain asymptotically the same amortized time bounds for the operations, but the rank invariant leads to renaming the heaps to Padovan, as the base of the logarithm for the {\sl DeleteMin\/}\ bound has changed. We have introduced the superexpensive comparison principle and showed that Fibonacci heaps could be implemented according to it, and the Padovan heaps extension follows the principle as well. Real computers do not correspond to the pointer machine model presented here. When we consider cache hierarchies, we will find these structures not well optimized for block reads (cache misses). But this was not addressed by the article.
\section{Introduction} SLAM solves the problem of mapping unknown environments while estimating the robot state. Though SLAM has been actively researched for the past few decades, Cadena et al. \cite{pastPresent} note that there are still challenges in handling diverse environments and long-term continuous operation. SLAM systems operate on a wide range of sensor modalities, each trying to exploit their benefits. In the past few years, LiDAR based SLAM systems have gained popularity over vision based systems due to their robustness to changes in the environment. However, pure LiDAR based systems have their deficiencies: they fail in environments with repeating structures such as tunnels or hallways. These environments are challenging to map and localize in, and a system which exploits the strengths of all sensor modalities needs to be deployed to succeed. We propose VIL-SLAM, which uses an IMU, stereo cameras, and LiDAR, and exploits their benefits collectively. Our experiments demonstrate that VIL-SLAM performs on par with pure LiDAR based systems in most cases and better in cases where pure LiDAR based systems simply fail. VIL-SLAM achieves this by integrating stereo VIO and LiDAR mapping with loop closure. To the best of our knowledge, this is the first work of this kind. In addition, we introduce a method to evaluate mapping results using a time-of-flight laser scanner (Faro). We also provide VIO validation results on the EuRoC MAV dataset. VIL-SLAM uses a tightly-coupled stereo VIO that performs fixed-lag pose graph optimization, LiDAR mapping that uses sparse 3D features for map registration, and loop closure that integrates sparse point cloud alignment with visual loop detection. Loop closure optimizes a global pose graph using an incremental solver. VIL-SLAM is designed to operate robustly long term and in different environments. The high frequency IMU measurements produce estimates which are reasonable over short intervals but drift quickly.
When constrained with stereo visual measurements, we can correct the biases and estimate accurate relative motion (referred to as VIO). The relative motion estimate is used to aid LiDAR scan matching, which then accumulates the high-fidelity 3D point clouds to form an accurate map. The robot's state estimate accumulates drift during long traversals. Loop closure addresses this issue by recognizing revisited sites using either visual or LiDAR methods. Visual methods involve using Bag-of-Words \cite{refBoW} to recognize the place and the Perspective-n-Point (PnP) algorithm to estimate the pose correction. In LiDAR methods, places are recognized using segment-based algorithms like SegMatch \cite{segMatch}, and the pose correction is estimated using the Iterative Closest Point (ICP) \cite{121791} algorithm. While the Bag-of-Words method is fast and versatile, it lacks the accuracy of the slow but robust LiDAR method using ICP. VIL-SLAM uses a hybrid approach: it first finds the loop closure candidate using the Bag-of-Words technique, generates a rough estimate of the pose correction using PnP, and then refines the rough estimate using ICP. \begin{figure}[t!] \centering \includegraphics[scale=0.145]{platform.png} \caption{(a) The experimental platform. (b) Mapping result from an outdoor test; the streetlight is reconstructed clearly.} \label{platformFig} \vspace{-2mm} \end{figure} \begin{figure*}[t!] \centering \includegraphics[scale=0.5]{system_overview3.png} \caption{The system diagram of VIL-SLAM. Sensors are in gray and modules are in green. Arrows indicate how messages flow within the system. The dark thick arrows indicate the system real-time output and the light thick arrow indicates the output generated in post-processing near real-time.} \label{system_overview_block_diagram} \vspace{-4mm} \end{figure*} \section{Related work}\label{secRW} Current VIO literature introduces various formulations to integrate visual and inertial data.
The literature characterizes different approaches as \textit{tightly-coupled systems} \cite{refVINSMONO, refOKVIS, Hsiung18iros}, in which visual information and inertial measurements are jointly optimized, or \textit{loosely-coupled systems} \cite{refDSO, refPTAM, ARMMFA, RTOVI}, in which the IMU is a separate module fused with a vision-only state estimator. The approaches can be further divided into filtering-based \cite{refRSVIOupenn, ref11, refROVIO, Wu2015ASR, RTOVI, 6385458} and graph-optimization based \cite{refVINSMONO, refOKVIS, Hsiung18iros, Indelman2013InformationFI, DVISO}. Tightly-coupled optimization-based approaches, benefiting from minimizing residuals iteratively, usually achieve better accuracy and robustness at a higher computational cost. In our work, we bound the computation cost by forming landmarks in a structureless fashion and optimizing only a fixed-size pose graph, achieving real-time performance. Current state-of-the-art SLAM systems using just a laser scanner are \cite{7353456, 7487648, 7279468, refLOAM, IMLSSLAM}, in which a motion model is required, either a constant velocity model or a Gaussian process. The approach in \cite{6907626} combines stereo cameras and a laser scanner: motion estimates are generated by a visual odometry (VO) and refined by matching laser scans. The differences to our system are that they use a multi-resolution grid map representation, whereas ours localizes against a sparse point cloud and outputs a dense point cloud. Also, a VIO is usually more robust and accurate than a VO \cite{Delmerico2018ABC}. VLOAM \cite{refVLOAM}, which uses an IMU, a monocular camera, and a laser scanner, is the most similar existing system to ours. One difference is that we use a tightly-coupled VIO as the motion model to initialize the LiDAR mapping algorithm, whereas VLOAM uses a loosely-coupled IMU and camera.
Though our VIO is more robust, VLOAM is a more interactive system in which information from both the camera and LiDAR modules can be used for IMU bias correction. One addition VIL-SLAM has is the LiDAR-enhanced loop closure. \section{System overview}\label{secSO} The system has four modules, as shown in Fig. \ref{system_overview_block_diagram}. The visual frontend takes stereo pairs from the stereo cameras. It performs frame-to-frame tracking and stereo matching, and outputs stereo matches as visual measurements. The stereo VIO takes stereo matches and IMU measurements, performs IMU pre-integration, and performs tightly-coupled fixed-lag smoothing over a pose graph. This module outputs VIO poses at IMU rate and at camera rate. The LiDAR mapping module uses the motion estimate from the VIO and performs LiDAR point dewarping and scan-to-map registration. The loop closure module conducts visual loop detection and initial loop constraint estimation, which is further refined by a sparse point cloud ICP alignment. A global pose graph constraining all LiDAR poses is optimized incrementally to obtain a globally corrected trajectory and a LiDAR pose correction in real time. These are sent back to the LiDAR mapping module for map update and re-localization. In post-processing, we stitch the dewarped LiDAR scans with the best estimated LiDAR poses to obtain the dense mapping results (Fig. \ref{traj_map_res}). \section{Visual frontend}\label{secVF} The visual frontend accepts a stereo pair and performs frame-to-frame tracking and stereo matching to generate a set of stereo-matched sparse feature points, namely, stereo matches. A stereo match can either be one tracked from the previous stereo pair or a new one extracted in the current pair. The frame-to-frame tracking performance directly affects the quality of the temporal constraints, while stereo matching helps constrain the scale. These two tasks are crucial for any stereo visual odometry.
Direct methods have shown robust and efficient temporal tracking results in recent years \cite{refDSO, refSVO}. Thus, we use the Kanade-Lucas-Tomasi (KLT) feature tracker \cite{refLKT} to track all feature points of the previous stereo matches, in both the left and right images. Only when a feature is tracked in both do we have a tracked stereo match, which is pushed into the output. A large stereo baseline helps scale estimation and reduces degeneracy issues caused by distant features. We use feature-based methods, which are better suited than KLT to handle large baselines. If the number of tracked stereo matches is below a threshold, we perform feature extraction using the Shi-Tomasi corner detector \cite{refSHI}, followed by a feature elimination process in which features whose pixel distance to any existing feature is smaller than a threshold are deleted. ORB (Oriented FAST and Rotated BRIEF) \cite{refORB} descriptors are then computed on all surviving features, followed by brute-force stereo matching to obtain new stereo matches. The system initializes by performing stereo matching on the first stereo pair. \section{Stereo visual inertial odometry}\label{secSVIO} The goal of the stereo VIO is to provide accurate state estimates in real time at a relatively high frequency, serving as the motion model for the LiDAR mapping algorithm. A tightly-coupled fixed-lag smoother operating over a pose graph is a good trade-off between accuracy and efficiency. Optimization-based methods in general allow multiple re-linearizations to approach the global minimum. A fixed-lag pose graph optimizer further bounds the maximum number of variables, and hence the computation cost. Since bad visual measurements cause convergence issues, we enforce a strict outlier rejection mechanism on visual measurements: the system eliminates outliers by checking the average reprojection error, both stereo and temporal.
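As an illustration, the average-reprojection-error check can be sketched as follows (a minimal sketch; the observation values and the 2-pixel threshold are hypothetical, not the system's actual parameters):

```python
import numpy as np

def avg_reprojection_error(pts_obs, pts_proj):
    """Mean pixel reprojection error of one landmark across its observations."""
    return float(np.mean(np.linalg.norm(pts_obs - pts_proj, axis=1)))

# hypothetical landmark observed in three frames; the 2 px threshold is illustrative
obs  = np.array([[100.0, 50.0], [103.0, 52.0], [98.0, 49.0]])
proj = np.array([[100.5, 50.0], [102.0, 52.0], [98.0, 50.0]])
err = avg_reprojection_error(obs, proj)
print(err > 2.0)  # outlier? → False
```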
The VIO proposed has \textit{IMU Pre-integration Factor} and \textit{Structureless Vision Factor} as constraints. The graph representation is shown in Fig. \ref{VIO_pose_graph}. The variables to be optimized are the states inside the window. Denote $\textbf{S}_t$ as the state variable at the stereo frame time $t$. $\textbf{S}_t$ contains the 6 Degrees of Freedom (DoF) system pose $\xi_t$ (IMU frame), the associated linear velocity $\textbf{v}_t$, accelerometer bias $\textbf{b}^a_t$, and gyroscope bias $\textbf{b}^g_t$. The window of state variables being estimated covers the most recent $N$ stereo frames. Past state variables are marginalized, producing prior factors on related variables. \begin{figure}[t!] \centering \includegraphics[scale=0.18]{vio_pose_graph.png} \caption{Fixed-lag pose graph formulation in the VIO. State variables being optimized are circled, where $i$ stands for the current state and $N$ is the window size. (a) The state to be marginalized is crossed. (b) After marginalization, prior factors are added back on related variables.} \label{VIO_pose_graph} \vspace{-2mm} \end{figure} \subsection{IMU pre-integration factor} We follow the IMU pre-integration method \cite{refIMUPre, refIMUPre0} to generate relative IMU measurements between $\textbf{S}_i$ and $\textbf{S}_j$. Using the pre-integration technique, re-linearization can be performed efficiently during optimization. The residual represented by the IMU pre-integration factor is $\textbf{r}^I_{ij}$, which consists of three terms: the residual of pose ($\textbf{r}_{\Delta\xi ij}$), velocity ($\textbf{r}_{\Delta\textbf{v}ij}$), and biases ($\textbf{r}_{\Delta\textbf{b}ij}$). \subsection{Structureless vision factor} Visual measurements are modeled in a structureless fashion, similar to \cite{refIMUPre, refIMUPre2, refIMUPre3}.
Consider a landmark $p$ whose position in the global frame is $\textbf{x}_p\in\mathbb{R}^3$ and which is observed by multiple states; denote the set of states observing $p$ as $\{\textbf{S}\}_p$. For any state $\textbf{S}_k$ in $\{\textbf{S}\}_p$, denote the residual formed by measuring $p$ in the left camera image as $\textbf{r}^V_{\xi_{k,lc},p}$ ($\xi_{k,lc}$ is the left camera pose, obtained by applying an IMU-camera transformation to $\xi_k$): \begin{equation} \label{visualResidual} \textbf{r}^V_{\xi_{k,lc},p} = \textbf{z}_{\xi_{k,lc},p}-h(\xi_{k,lc},\textbf{x}_p) \end{equation} where $\textbf{z}_{\xi_{k,lc},p}$ is the pixel measurement of $p$ in the image and $h(\xi_{k,lc},\textbf{x}_p)$ encodes a perspective projection. The same formulation is derived for the right camera image. Iterative methods are adopted for optimizing the pose graph, and hence linearization of the above residual is required. Equation (\ref{linearizedVR}) shows the linearized residuals for landmark $p$. \begin{equation}\label{linearizedVR} \sum_{S_p}||\textbf{F}_{kp}\delta \xi_k + \textbf{E}_{kp}\delta\textbf{x}_p + \textbf{b}_{kp}||^2 \end{equation} where the Jacobians $\textbf{F}_{kp}$, $\textbf{E}_{kp}$ and the residual error $\textbf{b}_{kp}$ result from the linearization and are normalized by $\Sigma^{1/2}_c$, the visual measurement covariance. Stacking each individual component inside the sum into a matrix, we have \begin{equation} ||\textbf{r}^V_p||^2_{\Sigma_C} = ||\textbf{F}_{p}\delta \xi_k + \textbf{E}_{p}\delta x_p + \textbf{b}_{p}||^2 \end{equation} To avoid optimizing over $\textbf{x}_p$, we project the residual into the null space of $\textbf{E}_{p}$: premultiply each term by $\textbf{Q}_p\doteq\textbf{I}-\textbf{E}_{p}(\textbf{E}_{p}^\top\textbf{E}_{p})^{-1}\textbf{E}_{p}^\top$, an orthogonal projector onto the left null space of $\textbf{E}_{p}$ \cite{refIMUPre}.
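A quick numerical sanity check of this projector (a sketch with a randomly generated Jacobian, not the system's actual matrices) confirms that premultiplying by $\textbf{Q}_p$ annihilates the landmark term:

```python
import numpy as np

# Hypothetical stacked Jacobian E_p of the residuals w.r.t. the 3D landmark
# position: a landmark seen from 3 poses gives a 6x3 matrix (2 rows per view).
rng = np.random.default_rng(0)
E = rng.standard_normal((6, 3))

# Orthogonal projector onto the left null space of E (the Q_p of the text).
Q = np.eye(6) - E @ np.linalg.inv(E.T @ E) @ E.T

# Premultiplying by Q annihilates the delta-x_p term, so the landmark
# position drops out of the linearized residual.
print(np.allclose(Q @ E, 0))  # → True
```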
We thus have the \textit{Structureless Vision Factor} for landmark $p$ as \begin{equation} ||\textbf{r}^V_p||^2_{\Sigma_C} = ||\textbf{Q}_p\textbf{F}_{p}\delta \xi_k + \textbf{Q}_p \textbf{b}_{p}||^2 \end{equation} \subsection{Optimization and marginalization} Given the residuals, the pose graph optimization is a \textit{maximum a posteriori} (MAP) problem whose optimal solution is \begin{equation} \textbf{S}^*_w = \arg\min_{S^*_w}(||\textbf{r}_0||^2_{\Sigma_0} +\sum_{i\in w} ||\textbf{r}^I_{i(i+1)}||^2_{\Sigma_{I}} + \sum_{p}||\textbf{r}^V_p||^2_{\Sigma_{C}}) \end{equation} where $\textbf{S}^*_w$ is the set of state variables inside the window. $\textbf{r}_0$ and $\Sigma_0$ are prior factors and their associated covariance. $\Sigma_I$ is the covariance of the IMU measurements. We use the Levenberg-Marquardt optimizer to solve this nonlinear optimization problem. The most recent $N$ state variables are maintained inside the optimizer. Schur-complement marginalization \cite{schurComplement} is performed on state variables leaving the window. Prior factors are then added to related variables inside the window, as in Fig. \ref{VIO_pose_graph}(b). \begin{figure}[t!] \centering \includegraphics[scale=0.18]{global_pose_graph.png} \caption{The global pose graph consists of the \textit{LiDAR Odometry Factor} and the \textit{Loop Constraint Factor}. $i$ stands for the current scan.} \label{global_pose_graph} \vspace{-2mm} \end{figure} \section{Lidar mapping}\label{secLSTMM} LiDAR mapping uses high frequency IMU rate VIO poses as the motion prior to perform LiDAR point dewarping and scan-to-map registration. Denote a scan $\chi$ as the point cloud obtained from one complete LiDAR rotation. Geometric features, including points on sharp edges and planar surfaces, are extracted from $\chi$ before dewarping \cite{refLOAM, refVLOAM}.
The registration is then based on feature points from the current scan to the map (all previous feature points), solved as an optimization problem by minimizing the Euclidean distance residuals formed by the feature points, as in \cite{refLOAM}. \subsection{LiDAR scan dewarping} Dewarping is required because points within a LiDAR scan are timestamped differently. Denote any time within a scan as $t_{i}$. We dewarp all points to the end-of-scan time $t_{k+1}$ based on IMU rate VIO poses. Denoting a LiDAR point at $t_i$ as $\textbf{P}_{i}$ and its dewarped version as $\tilde{\textbf{P}}_i$, we have \begin{equation} \tilde{\textbf{P}}_{i}=(\textbf{T}_{k+1}^L)^{-1}\textbf{T}_{i}^{L}\textbf{P}_{i} \end{equation} where $\textbf{T}_{k+1}^{L}$, $\textbf{T}_{i}^{L}$ are LiDAR frame poses transformed from the closest IMU rate VIO poses. \subsection{Scan to map registration} Feature points from the dewarped scan $\tilde{\chi}$ are registered to the map, optimizing for the LiDAR mapping pose at $t_{k+1}$, denoted as $\textbf{L}_{k+1}$. Denoting the initial estimate of $\textbf{L}_{k+1}$ as $\textbf{L}_{k+1}^*$, we have: \begin{equation} \textbf{L}_{k+1}^* = \textbf{L}_k\textbf{T}_{trans}^{L} \end{equation} where $\textbf{L}_k$ is the optimized previous LiDAR mapping pose and $\textbf{T}_{trans}^{L}$ is the relative transformation obtained from IMU rate VIO poses. All dewarped feature points are then transformed to the world coordinate system by $\textbf{L}_{k+1}^*$ for registration. The residual $\textbf{r}_E$ of an edge feature point in the current scan is the Euclidean distance between the point and the line formed by the two closest edge points in the map. The residual $\textbf{r}_{U}$ of a surface feature point in the current scan is the distance between the point and the planar patch formed by the three closest surface points in the map.
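These two geometric residuals can be sketched directly (toy points for illustration; the actual system forms them from map neighbors found by nearest-neighbor search):

```python
import numpy as np

def edge_residual(p, a, b):
    """Distance from feature point p to the line through map edge points a, b."""
    d = b - a
    return np.linalg.norm(np.cross(p - a, d)) / np.linalg.norm(d)

def surf_residual(p, a, b, c):
    """Distance from feature point p to the plane through map surface points a, b, c."""
    n = np.cross(b - a, c - a)
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)

a, b, c = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
print(edge_residual(np.array([0., 1, 0]), a, b))          # → 1.0
print(surf_residual(np.array([0.5, 0.5, 2.0]), a, b, c))  # → 2.0
```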
Incorporating $\textbf{L}_{k+1}^*$, we can rewrite the two residuals \cite{refLOAM} as: \begin{equation} f_{E}( E_{(c,i)}^{L}, \textbf{L}_{k+1}^*) = \textbf{r}_{E} \end{equation} \begin{equation} f_{U}( U_{(c,i)}^{L}, \textbf{L}_{k+1}^*) = \textbf{r}_{U} \end{equation} where $E_{(c,i)}^{L}$ and $U_{(c,i)}^{L}$ are the 3D positions of the $i$th dewarped edge and surface feature points in the LiDAR coordinate system. The Levenberg-Marquardt optimizer is used to solve this nonlinear optimization problem, formed by stacking the cost functions for all feature points. \begin{figure}[!t] \captionsetup[subfloat]{farskip=1pt,captionskip=1pt} \subfloat[Highbay]{\includegraphics[width=\columnwidth]{traj_map/traj_map_highbay.png}} \subfloat[Hallway]{\includegraphics[width=\columnwidth]{traj_map/traj_map_hallway.png}} \subfloat[Tunnel]{\includegraphics[width=\columnwidth]{traj_map/traj_map_tunnel.png}} \subfloat[Huge Loop]{\includegraphics[width=\columnwidth]{traj_map/traj_map_huge_loop.png}} \subfloat[Outdoor]{\includegraphics[width=\columnwidth]{traj_map/traj_map_outdoor.png}} \caption{Trajectories from VIL-SLAM and LOAM are shown on the left and maps generated by VIL-SLAM are shown on the right. The start/end position is labeled with a red triangle in the map and is the origin in the plot.} \label{traj_map_res} \vspace{-4mm} \end{figure} \section{Lidar enhanced loop closure}\label{secSIELC} Loop closure is critical to any SLAM system, as long-term operation introduces drift. The objective of loop closure is to eliminate drift by performing a global pose graph optimization which incorporates loop constraints and relative transformation information from LiDAR mapping. To better assist LiDAR mapping, the corrected LiDAR pose is sent back in real time so that feature points from new scans are registered to the revisited map. We propose adding ICP alignment in addition to visual Bag-of-Words \cite{refBoW} loop detection and PnP loop constraint formulation.
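The hybrid constraint pipeline can be summarized in a minimal runnable sketch (all helper components below, such as the BoW query, descriptor matching, EPnP, and ICP, are hypothetical stubs standing in for the modules described in this section):

```python
# Sketch of the hybrid loop-closure decision flow with stubbed components.
def loop_constraint(key_image, key_scan, bow_query, desc_match, epnp, icp):
    cand = bow_query(key_image)                 # Bag-of-Words place retrieval
    if cand is None or not desc_match(key_image, cand):
        return None                             # filter false positives
    T0 = epnp(cand, key_image)                  # rough visual loop constraint
    return icp(key_scan, cand, T0)              # ICP refinement seeded by T0

# Stub wiring: candidate "A" is retrieved and verified, EPnP gives a rough
# pose that ICP then "refines" using it as the initial guess.
result = loop_constraint(
    "img", "scan",
    bow_query=lambda img: "A",
    desc_match=lambda img, cand: True,
    epnp=lambda cand, img: "T_rough",
    icp=lambda scan, cand, T0: ("T_refined", T0),
)
print(result)  # → ('T_refined', 'T_rough')
```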
The system uses iSAM2 \cite{refiSAM2}, an incremental solver, to optimize the global pose graph, achieving real-time performance. \subsection{Loop detection} Stereo images and LiDAR scans are associated using their timestamps; let us denote these as key images and key scans, respectively. To prevent false loop detection, we restrict candidates within a certain time threshold. Loop candidates are detected by querying the Bag-of-Words \cite{refBoW} database of previous key images with the current key image. Furthermore, we match feature descriptors of the left key image with those of the loop candidates to filter out false positives. \subsection{Loop constraint} The system first obtains a visual loop constraint as an initial estimate. Since we use a structureless formulation for visual landmarks, triangulation of all stereo-matched features in the loop candidate is performed to obtain their 3D locations. Their associations to the current key image are given by descriptor matching. The visual loop constraint is then evaluated using EPnP \cite{refEPNP}. To improve the accuracy of the visual loop constraint, we use ICP alignment on the feature points of the corresponding LiDAR key scans. With a bad initialization or a large point count, ICP takes longer to converge and consumes more computational resources. However, the visual loop constraint provides a good initialization, and the ICP uses only sparse feature points (Section \ref{secLSTMM}), which makes it converge faster. \subsection{Global pose graph optimization} The graph representation of the global pose graph is shown in Fig. \ref{global_pose_graph}. It contains all available LiDAR mapping poses as variables, constrained by the \textit{LiDAR Odometry Factor} and the \textit{Loop Constraint Factor}, both of which are measurements of the relative transformation $(\textbf{L}_u)^{-1}\textbf{L}_v$, where $u$ and $v$ are scan IDs and $\textbf{L}_u$, $\textbf{L}_v$ are the associated poses.
For the \textit{LiDAR Odometry Factor}, $u$ is the previous scan ID. For the \textit{Loop Constraint Factor}, $u$ is the key scan ID found as a loop. In both cases, $v$ is the current scan ID. Poses are expressed in minimal 6 DoF form in the optimization. To achieve real-time performance, we use iSAM2 \cite{refiSAM2} to incrementally optimize the global pose graph. \subsection{Re-localization} Once a true loop closure candidate is found, LiDAR mapping buffers the feature points (without registering them to the map) until it receives the loop correction, which contains the globally optimized trajectory. LiDAR mapping then updates its map, adds the buffered feature points to the map, and resumes its operation. We can afford to update the map in real time because (a) loop closure has real-time performance, (b) the sparse feature map does not take much memory, and (c) scan-to-map registration is fast enough to keep up with the LiDAR data rate. \begin{table}[!t] \captionsetup{skip=0pt} \caption{FDE (\%) and MRE (m) TEST RESULTS\label{FDE}} \begin{center} \begin{tabular}{|c|c||c|c||c|c|} \hline \multirow{2}{*}{Test} & Total & \multicolumn{2}{|c||}{FDE} & \multicolumn{2}{|c|}{MRE} \\ \cline{3-6} & Length & VIL-SLAM & LOAM & VIL-SLAM & LOAM \\ \hline \textit{Highbay} & 118 & \textbf{0.08} & 0.56 & \textbf{0.08} & 0.22 \\ \hline \textit{Hallway} & 103 & \textbf{0.61} & 0.91 & \textbf{0.10} & 0.27 \\ \hline \textit{Tunnel} & 85 & \textbf{1.86} & -\tablefootnote{"-" indicates not finished.
"$\times$" indicates missing data.} & $\times$ & $\times$ \\ \hline \textit{Huge} & \multirow{2}{*}{318} & \multirow{2}{*}{\textbf{0.01}} & \multirow{2}{*}{-} &\multirow{2}{*}{\textbf{0.22}} & \multirow{2}{*}{0.36} \\ \textit{Loop} & & & & & \\ \hline \textit{Outdoor} & 528 & 0.02 & 0.02 & $\times$ & $\times$ \\ \hline \end{tabular} \end{center} \vspace{-4mm} \end{table} \section{Experimental results}\label{secER} We evaluate VIL-SLAM and compare it with the best real-time LiDAR based system, LOAM\footnote{This is the best implementation of LOAM we could find online: https://github.com/laboshinl/loam\_velodyne}\cite{refLOAM}, on custom datasets. We did not use the KITTI odometry dataset \cite{Geiger2012CVPR} because its evaluation sequences do not have the inertial measurements needed by the VIO. Also, most KITTI sequences are not challenging, so they do not evaluate the robustness of these systems, which is the main focus of our experiments. We also evaluate the stereo VIO submodule (VIL-VIO) on the EuRoC MAV dataset \cite{refEUROC}. \begin{figure}[!t] \captionsetup[subfloat]{farskip=1pt,captionskip=1pt} \subfloat[Highbay]{\includegraphics[width=\columnwidth]{map_comp/map_comp_highbay2.png}} \subfloat[Hallway]{\includegraphics[width=\columnwidth]{map_comp/map_comp_hallway.png}} \subfloat[Huge Loop]{\includegraphics[width=\columnwidth]{map_comp/map_comp_huge_loop.png}} \caption{Map registration error of VIL-SLAM (right) and LOAM (left) compared to the model. Errors above 0.3m are colored red for (a-b) and above 0.5m for (c). Discontinuous red regions inside the blue and green are due to gaps in the model caused by occlusions in the Faro scans.} \label{map_comp_res} \vspace{-2mm} \end{figure} \subsection{Platform and software} We built a platform (Fig. \ref{platformFig}(a)) with two megapixel cameras, a 16 scan-line LiDAR, an IMU (400Hz), and a 4GHz computer (with 4 physical cores).
We built a custom microcontroller-based time synchronization circuit that synchronizes the cameras, LiDAR, IMU, and computer by simulating GPS time signals. The software pipeline is implemented in C++ with a ROS communication interface. We use the GTSAM library \cite{refGTSAM} to build the fixed-lag smoother in the VIO. For loop closure, we use the ICP module from LibPointMatcher \cite{refLibPointMatcher} to align point clouds, DBoW3 \cite{refDBoW3} to build the visual dictionary, and the iSAM2 \cite{refiSAM2} implementation in GTSAM \cite{refGTSAM} to conduct global optimization. \begin{figure}[t!] \captionsetup{captionskip=1pt} \includegraphics[width=\columnwidth]{loopClosure.png} \caption{(a) Map of the tunnel stitched using LiDAR mapping poses. (b) Map of the tunnel stitched using globally refined poses. The double image in (a) is mostly but not fully eliminated, because only one loop constraint is generated, which is not enough for a full correction. (c) Map of the hallway stitched using LiDAR mapping poses. (d) Map of the hallway stitched using globally refined poses. The double image in (c) is mostly eliminated; walls are aligned with two loop constraints.} \label{lccc} \vspace{-2mm} \end{figure} \subsection{Tests and results} We present results from five representative environments, including featureless hallways, cluttered highbays, tunnels, and outdoor environments. The data collection started and ended at the same point for all sequences. Odometry (LiDAR mapping pose) is evaluated based on the final drift error (FDE). Mapping results are evaluated in terms of mean registration error (MRE) using Faro scans as ground truth. We first align the map with the model (Faro scans), and then compute the Euclidean distance between each map point and its closest point in the model \cite{refCC}. The odometry FDE and mapping results are shown in Table \ref{FDE}, with the better ones in bold. The trajectories and cross-sectioned maps are shown in Fig. \ref{traj_map_res}.
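The MRE metric can be sketched with a brute-force nearest-neighbor search (toy point sets; the sketch assumes the map has already been aligned to the Faro model):

```python
import numpy as np

def mean_registration_error(map_pts, model_pts):
    """MRE: mean distance from each map point to its closest model point
    (brute force; assumes the map is already aligned to the model)."""
    d = np.linalg.norm(map_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

model = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0]])
map_  = np.array([[0., 0.1, 0], [1, 0, 0.2]])
print(round(mean_registration_error(map_, model), 6))  # → 0.15
```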
The map comparisons are shown in Fig. \ref{map_comp_res}. The \textit{highbay} is an indoor warehouse which is open, structured, and rich in features. However, frequent structural occlusions can be a challenge for the visual frontend and the LiDAR feature extraction. Both VIL-SLAM and LOAM handle this environment well. For VIL-SLAM, the LiDAR mapping module registers most of the scans to the map, largely reducing the odometry error. Loop closure recognizes the starting position and closes the loop. The map is generated using the globally refined poses, with the majority of map errors below 0.15m. The \textit{hallway} and \textit{tunnel} tests are challenging because of the lack of visual features and the degeneracy issue along the traversal direction for LiDAR. LOAM accumulates a large error in the hallway and fails the tunnel test, mainly due to the degeneracy issue. Aided by the stereo VIO module (VIL-VIO), VIL-SLAM succeeds in both tests. In the \textit{hallway} test, the visual frontend returns fewer reliable measurements because of the featureless walls, under-constraining the VIO. This corrupts the map, as observed by wall misalignment, which is later corrected by loop closure as shown in Fig. \ref{lccc}(c-d). Loop closure detects the loop twice when approaching the endpoint, lowering the FDE to 0.05\% and generating a refined map. In the \textit{tunnel} test, because of the degeneracy issue, VIL-SLAM struggles as well and accumulates some error in the traversal direction. However, loop closure detects the loop at about 3m from the end point, lowering the FDE to 0.08\% and correcting the map as shown in Fig. \ref{lccc}(a-b). The \textit{huge loop} test features challenges from both the \textit{hallway} and \textit{highbay} environments. In addition, we end the trajectory by re-entering the highbay after traversing a long narrow corridor. LOAM fails this test after re-entering the highbay, at the place labeled by a red cross in Fig.
\ref{traj_map_res}(d). We think this is because it fails to register new scans to the original \textit{highbay} map, due to a large error in the z-direction accumulated in the corridor. VIL-SLAM succeeds in this test. Without loop closure being triggered, it achieves 0.01\% FDE in odometry; it achieves this result by successfully registering new scans to the original \textit{highbay} map at re-entry. The map generated with the odometry estimate of VIL-SLAM is compared with the map generated by LOAM before its failure. The boxed region is where LOAM accumulates the errors leading to its failure. The \textit{outdoor} test features an outdoor trajectory which is 546m long and includes a gentle slope. Pedestrians and cars were observed, which served as potential outliers. VIL-SLAM and LOAM have comparable results in the xy-plane. However, LOAM fails to capture the changes in the z-direction; this inaccuracy in z of LOAM is also observed in the previous tests. Overall, VIL-SLAM generates more accurate mapping results and lower FDE compared to LOAM when both finish. Also, VIL-SLAM succeeds in the more challenging environments where LOAM fails, with qualitatively good mapping and odometry results. \begin{figure}[t!] \centering \includegraphics[scale=0.15]{euroc.png} \caption{Root mean square error of ATE for the EuRoC dataset.} \vspace{-3mm} \label{eurocFig} \end{figure} \subsection{EuRoC MAV Dataset test} VIL-VIO contributes to the robustness and accuracy of VIL-SLAM. We evaluate the VIO on the EuRoC MAV dataset \cite{refEUROC} in terms of the absolute trajectory error (ATE), as in \cite{refATE}. Fig. \ref{eurocFig}\footnote{A sequence is named by its first four letters and the difficulty level is encoded in the last letter (E: easy, M: medium, D: difficult).} shows the comparison between VIL-VIO and three state-of-the-art methods. Results for VIL-VIO are deterministic, obtained in real time on a desktop with a 3.60GHz i7-4790 CPU.
Results for the other methods are the better ones from the experiments in \cite{Hsiung18iros} and \cite{refRSVIOupenn}. VIL-VIO succeeds on all sequences with accuracy comparable to the other methods, verifying its capability to handle aggressive motion, illumination changes, motion blur, and textureless regions. \section{Conclusions} VIL-SLAM is a state-of-the-art odometry and mapping system designed to operate robustly long term in different environments. The current framework loosely couples VIL-VIO and LiDAR mapping. We are extending it to a tightly-coupled framework such that the refined pose estimate from LiDAR mapping can be used for IMU bias correction. In loop closure, ICP refinement operates on sparse feature points between scans. We suspect that we would obtain a better loop constraint by matching a scan to the map. \addtolength{\textheight}{-12cm}
\section{Introduction} In a modest pursuit of the esthetic attributed to the probabilist Feller that ``the best consists of the general embodied in the concrete'' \cite{Billingsley}, we consider extreme quantum cases of the kicked top, a widely studied textbook model of quantum chaos \cite{Haake,Peres02,KusScharfHaake1987,KusMostowskiHaake1988,Zyczkowski1990,Gerwinski1995,Wang2004, LombardiMatzkin2011,Ghose, mrgi14, Lewenstein-arxiv-2018,Bhosale-pre-2017}, which has also been implemented in experiments \cite{Chaudhary,Neill16}. The general issues at hand are the emergence of classical chaos from a linear quantum substratum and, more recently, the role of quantum chaos in the thermodynamics of closed quantum systems \cite{CassidyEtal2009,SantosRigol2010,Rigol16}. Vigorous progress is being made in studying the thermalization of isolated quantum systems, whether time-independent or periodically forced \cite{JensenShankar1985,Deutsch91,Srednicki94,Rigol2009,CassidyEtal2009,SantosRigol2010,CiracHastings2011, DeutchLiSharma2013,LangenEtal2013,Rigol16,LucaRigol2014, LazaDasMoess2014,LazDasMoess2014pre,Haldar2018,Neill16,Kaufman2016,ClosEtal2016,HazzardEtal2014}. Entanglement within many-body states in such quantum chaotic systems drives subsystems to thermalization although the full state remains pure and of zero entropy; see \cite{Kaufman2016} for a demonstration with cold atoms. Quantum chaos \cite{Gutzwiller1990,Haake} and, consequently, the eigenstate thermalization hypothesis \cite{Srednicki94,Rigol16} enable one to use individual states for ensemble averages. For periodically driven systems, which do not even conserve energy, a structureless ``infinite-temperature'' ensemble emerges in strongly non-integrable regimes \cite{LucaRigol2014,LazDasMoess2014pre}. A recent 3-qubit experiment using superconducting Josephson junctions that simulated the kicked top \cite{Neill16} (see also \cite{Madhok2018_corr}) purported to demonstrate precisely such thermalization.
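For reference, the kicked top is governed by the conventional Hamiltonian (we quote the standard form found in the kicked-top literature, e.g. \cite{Haake}; it is not specific to the experiment)
\[
H(t) = \frac{p}{\tau}\,J_y \;+\; \frac{\kappa}{2j}\,J_z^2 \sum_{n=-\infty}^{+\infty}\delta(t - n\tau),
\]
where $J_{x,y,z}$ are collective angular momentum operators, $p$ is the rotation angle, $\kappa$ the kicking strength, and $\tau$ the kicking period; $N$ qubits correspond to spin $j=N/2$, so the 3-qubit case has $j=3/2$. The associated Floquet operator is $U=\exp\!\left(-i\,\kappa J_z^2/2j\right)\exp\!\left(-i\,p\,J_y\right)$.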
Although such behavior has been attributed to non-integrability \cite{Neill16,Rigol16}, we exactly solve this 3-qubit kicked top and also point out that it can be interpreted as a special case of an {\it integrable} model, the well-known transverse field Ising model. Interestingly, we also solve the 4-qubit case exactly, where there is no such evident connection to an already known integrable model. The Arnold-Liouville notion of integrability requires a sufficient number of independent constants of motion in involution. It is well-known that in finite dimensional quantum systems this notion can be debated, as any system is then integrable: the projectors on eigenstates form a set of independent mutually commuting quantities; see, for example, \cite{YusShastry2013}. However, in this work, we use integrability more in the sense of the traditional definition of the existence of constants that arise from symmetries and whose forms are independent of the parameters of the system. This is a pragmatic approach and in line with the current understanding that would classify the nearest-neighbor transverse field Ising model as integrable, and one with an additional longitudinal field, or a transverse field Ising model with nearest- as well as next-nearest-neighbor interactions, as non-integrable. Nonintegrable, chaotic systems may be solvable in some tangible sense: the textbook examples of the tent map and the baker's map are solvable despite being completely chaotic. The Arnold cat map and its quantizations also admit analytical solutions despite being hyperbolic and chaotic. Nevertheless, this is very rare and restricted to abstract models. No model that has a mixed phase space, with both regular and chaotic orbits, is known to be exactly solvable in the same sense. Attempts at constructing such models include the piecewise linear ``lazy baker's map''.
The kicked top, in the limit of an infinite number of qubits, displays a standard transition to Hamiltonian chaos, including a mixed phase space, and it is remarkable that many of the features are already reflected in the solvable few-qubit cases, as we show in this paper. For example, we obtain explicit formulas for the entanglement generated in the 3- and 4-qubit cases and compare the former with data from the experiment in \cite{Neill16}, finding very good agreement. The infinite-time average of single-qubit entanglement is found analytically for some initial states and, at a special and large value of the forcing, for all initially unentangled coherent states. These are shown to tend to that obtained from relevant (random matrix) ensembles, in some cases even exactly coinciding with them and thus displaying thermalization. These demonstrate that even in the deep quantum regime, the transition to what in the classical limit becomes chaos is reflected in the time-averaged entanglement. While the connections between chaos and entanglement in the semiclassical regime are now well studied \cite{MillerSarkar,Wang2004, LombardiMatzkin2011,Ghose, trail2008entanglement, Lewenstein-arxiv-2018,Lakshminarayan,BandyopadhyayArul2002,Bandyopadhyay04, ScottCaves2003,Lakshminarayan16}, such systems are typically not analytically tractable and appeal is made to statistical modeling based on random matrix theory. Remarkably, there are interesting quantum effects in the few-body systems we study here. We find the presence of dynamical tunneling \cite{DavisHeller1981,LinBallentine1990,Peres1991,Tomsovic98b,SrihariBook} between what appear in the classical limit as symmetric regular regions. This results in extremely slow convergence of subsystem entropies for some states of the 4-qubit case in the near-integrable regime, where the exactly calculable tunneling splitting is shown to give rise to this long-time dynamics.
The kicked-top experiment involving the spin of cold Cs atoms has already observed such tunneling \cite{Chaudhary}, but our observations provide a connection between the number of qubits and a system parameter at which such tunneling occurs. This may open windows to study the interplay of chaos and tunneling even in systems having a small number of qubits. \subsection{The model \label{Theory}} \begin{figure} \centering \includegraphics[scale=1]{./classical.pdf} \caption{(a) Regular and (b) mixed phase space structures resulting from the classical chaotic dynamics. Points labelled with a red square and a red circle correspond to the initial states $\Theta=0, \Phi=0$ on a period-4 orbit and $\Theta=\pi/2, \Phi=-\pi/2$ at the centre of a regular island, respectively.} \label{fig:classical} \end{figure} The quantum kicked top is a combination of a rotation and a torsion; the Hamiltonian \cite{KusScharfHaake1987,Haake,Peres02} is given by \begin{equation} \label{Eq:QKT} H=\frac{\kappa_0}{2j}{J_z}^2 \sum_{n = -\infty}^{ \infty} \delta(t-n\tau)+\frac{p}{\tau} \, {J_y}. \end{equation} Here $J_{x,y,z}$ are components of the angular momentum operator $\mathbf{J}$. The time between periodic kicks is $\tau$. The Floquet map is the unitary operator, \begin{equation} \mathcal{U} = \exp\left[-i \left(\kappa_0/(2j \hbar)\right) J_z^2 \right]\exp\left[-i (p/\hbar) J_y\right], \end{equation} which evolves states just after a kick to just after the next. The parameter $p$ measures the rotation about the $y$ axis, and in the following we set $\hbar=1$ and $p=\pi/2$. The parameter $\kappa_0$, the magnitude of the twist applied between kicks, controls the transition to chaos and its measure. If it vanishes, the dynamics is simply a rotation. As the magnitude of the total angular momentum is conserved, the quantum number $j$, with the eigenvalues of $\mathbf{J}^2$ being $j(j+1)\hbar^2$, is a good one.
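Though not part of the original derivation, the Floquet operator above is straightforward to construct numerically in the $(2j+1)$-dimensional $|j,m\rangle$ basis. The following is a minimal sketch, assuming only \texttt{numpy}; the function names are illustrative:

```python
import numpy as np

def angular_momentum(j):
    """Return Jy, Jz for spin j in the |j, m> basis (hbar = 1)."""
    dim = int(round(2 * j)) + 1
    m = np.arange(j, -j - 1, -1)                      # m = j, j-1, ..., -j
    Jz = np.diag(m).astype(complex)
    # Ladder operator: <j, m+1 | J+ | j, m> = sqrt(j(j+1) - m(m+1))
    Jp = np.zeros((dim, dim), dtype=complex)
    for k in range(1, dim):
        Jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Jy = (Jp - Jp.conj().T) / (2 * 1j)
    return Jy, Jz

def floquet(j, kappa0, p=np.pi / 2):
    """U = exp(-i kappa0/(2j) Jz^2) exp(-i p Jy)."""
    Jy, Jz = angular_momentum(j)
    # the torsion is diagonal in the |j, m> basis
    torsion = np.diag(np.exp(-1j * kappa0 / (2 * j) * np.diag(Jz).real ** 2))
    # exponentiate the Hermitian Jy through its eigendecomposition
    w, v = np.linalg.eigh(Jy)
    rotation = v @ np.diag(np.exp(-1j * p * w)) @ v.conj().T
    return torsion @ rotation
```

For $j=3/2$ this produces the $4\times 4$ unitary acting on the permutation symmetric subspace discussed below.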
The classical limit, when $j \rightarrow \infty$, is a map of the unit sphere phase space $X^2+Y^2+Z^2=1$ onto itself, with the variables being $X,Y,Z=J_{x,y,z}/j$, and is given by (at the $i^{\textrm{th}}$ iteration of the map) \begin{eqnarray} X_{i}&=&Z_{i-1}\cos(\kappa_0 X_{i-1})+Y_{i-1}\sin (\kappa_0 X_{i-1}),\nonumber \\ Y_{i}&=&-Z_{i-1}\sin(\kappa_0 X_{i-1})+Y_{i-1}\cos (\kappa_0 X_{i-1}),\nonumber \\ Z_{i}&=&-X_{i-1}. \end{eqnarray} Numerical iterations for several different initial conditions $(X_{0}, Y_{0}, Z_{0})$, and for two strengths of the chaos, $\kappa_0=0.5$ and $2.5$, are shown in Fig.~(\ref{fig:classical}). These display what may be termed regular and mixed phase space structures respectively, with the measure of chaotic orbits at $\kappa_0=0.5$ being negligibly small. For $\kappa_0=0$ the classical map is evidently integrable, being just a rotation, but for $\kappa_0>0$ chaotic orbits appear in the phase space, and when $\kappa_0>6$ it is essentially fully chaotic. Connection to a many-body model can be made by considering the large-$\mathbf{J}$ spin as the total spin of spin-$1/2$ qubits, replacing $J_{x,y,z}$ with $\sum_{l=1}^{2j} \sigma^{x,y,z}_l/2$ \cite{Milburn99,Wang2004}. The Floquet operator is then that of $2j$ qubits, an Ising model with all-to-all homogeneous coupling and a transverse magnetic field: \begin{equation} \label{uni} {\mathcal U}=\exp\left(-i \frac{\kappa_0}{4j} \sum_{ l< l'=1}^{2j} \sigma^z_{l} \sigma^z_{l'}\right) \exp\left( -i \frac{\pi}{4} \sum_{l=1}^{2j}\sigma^y_l \right). \end{equation} Here $\sigma^{x,y,z}_l$ are the standard Pauli matrices, and an overall phase is neglected. In general only the $2j+1$ dimensional permutation symmetric subspace of the full $2^{2j}$ dimensional space is relevant to the kicked top. Note that for $\kappa_0$ that are multiples of $2 \pi j$, $\mathcal{U}$ is a local operator and does not create entanglement; we therefore restrict attention to the interval $\kappa_0 \in [0, \pi j]$.
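The classical map above is simple to iterate; a minimal sketch (plain \texttt{numpy}, illustrative names), which also confirms that the point $\Theta=0$ lies on a period-4 orbit for any $\kappa_0$:

```python
import numpy as np

def classical_kick(point, kappa0):
    """One iteration of the classical kicked-top map on the unit sphere."""
    X, Y, Z = point
    Xn = Z * np.cos(kappa0 * X) + Y * np.sin(kappa0 * X)
    Yn = -Z * np.sin(kappa0 * X) + Y * np.cos(kappa0 * X)
    Zn = -X
    return (Xn, Yn, Zn)

def trajectory(point, kappa0, n_steps):
    """Iterate n_steps times; returns an (n_steps + 1, 3) array of points."""
    pts = [point]
    for _ in range(n_steps):
        pts.append(classical_kick(pts[-1], kappa0))
    return np.array(pts)
```

Plotting many such trajectories for $\kappa_0=0.5$ and $2.5$ reproduces the regular and mixed structures of Fig.~(\ref{fig:classical}).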
The case of 2 qubits, $j=1$, has been analyzed in \cite{RuebeckArjendu2017}, wherein interesting arguments have been proposed for the observation of structures not linked to the classical limit. In this case, several quantum correlation measures were also calculated in \cite{Bhosale-PRE-2018}. For $j=3/2$, the three-qubit case, the all-to-all coupling is the same as nearest-neighbor coupling with periodic boundary conditions, so it is a nearest-neighbor kicked transverse Ising model, known to be integrable \cite{Prosen2000,ArulSub2005}. The Jordan-Wigner transformation renders it a model of noninteracting fermions that can be immediately solved. This is also the case that was considered in the superconducting Josephson junction experiment \cite{Neill16} that treated it as chaotic. For higher values of the spin $j$, the model may be considered a few-body realization of non-integrable systems. In the following we will mostly be studying time evolution from initial states that are localized in the spherical phase space, and these are the standard $SU(2)$ coherent states. The permutation symmetric initial states used are coherent states located at \begin{eqnarray} X_{0}&=&\sin\theta_0 \cos\phi_0, \nonumber \\ Y_{0}&=&\sin \theta_0 \sin\phi_0, \nonumber \\ Z_{0}&=&\cos \theta_0, \end{eqnarray} on the phase space sphere and given by~\cite{Glauber,Puri}, \begin{equation} |\theta_0,\phi_0\rangle = \otimes^{2j} (\cos(\theta_0/2) |0\rangle + e^{-i \phi_0} \sin(\theta_0/2) |1\rangle). \end{equation} \section{Analytical solution of the three-qubit case} From Eq.~(\ref{uni}), the unitary Floquet operator for $2j=3$ qubits, which simulates the dynamics of a spin-$3/2$ under the kicked top Hamiltonian, is given by \begin{eqnarray} \label{eq1a} \mathcal{U} = \exp && \left({-i \frac{\kappa_0}{6} (\sigma_1^z\sigma_2^z+\sigma_2^z\sigma_3^z+\sigma_3^z\sigma_1^z)} \right)
\nonumber \\ && \exp \left({-i \frac{\pi}{4}(\sigma_1^y+\sigma_2^y+\sigma_3^y)} \right), \end{eqnarray} where all the terms have their usual meanings as defined in Section~\ref{Theory}. The solution to the 3-qubit case proceeds from the general observation that \[ [\mathcal{U},\otimes_{l=1}^{2j} \sigma^y_l]=0, \] {\it i.e.,} there is an ``up-down'' or parity symmetry. The standard 4-dimensional spin quartet permutation symmetric space with $j=3/2$, $\{|000\rangle, |W\rangle=(|001\rangle+|010\rangle+|100\rangle)/\sqrt{3}, |\overline{W}\rangle =(|110\rangle+|101\rangle+|011\rangle)/\sqrt{3},|111\rangle\}$, is parity symmetry adapted to form the basis \begin{eqnarray} |\phi^{\pm}_1\rangle&=&\frac{1}{\sqrt{2}}(|000\rangle \mp i | 111 \rangle), \\ |\phi_2^{\pm}\rangle&=&\frac{1}{\sqrt{2}} (|W\rangle \pm i |\overline{W}\rangle). \end{eqnarray} These are parity eigenstates such that $\otimes_{l=1}^{3} \sigma^y_l|\phi_j^{\pm}\rangle=\pm |\phi_j^{\pm}\rangle$. The notation reflects the usage of $|W\rangle$ as the standard $W$ state of quantum information, and the $|\phi^{\pm}_1\rangle$ correspond to the standard GHZ states. To visualize these basis states, the contour plots of their quasiprobability distributions in the phase space are shown in Fig.~(\ref{husimi3q}). We see that while the GHZ class of states are localized prominently at the poles of the sphere, the superpositions of the $W$ states are localized at the equatorial plane and peak at $(\theta_0=\pi/2,\phi_0=\pm \pi/2)$. Interestingly, these points correspond to low-order periodic points of the classical map and form the most important initial states to evolve for the quantum system.
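The parity symmetry is easy to confirm numerically. Below is a minimal sketch (plain \texttt{numpy}; \texttt{floquet\_3q} is an illustrative name) that builds $\mathcal{U}$ of Eq.~(\ref{eq1a}) in the computational basis and checks both the commutation with $\otimes\sigma^y$ and the parity eigenvalues of the adapted basis:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def floquet_3q(kappa0):
    """Eq. (eq1a): diagonal torsion times a pi/4 y-rotation of each qubit."""
    kappa = kappa0 / 6.0
    zz = kron3(sz, sz, I2) + kron3(I2, sz, sz) + kron3(sz, I2, sz)
    torsion = np.diag(np.exp(-1j * kappa * np.diag(zz)))
    c = np.cos(np.pi / 4)
    ry = np.array([[c, -c], [c, c]])      # exp(-i (pi/4) sigma_y)
    return torsion @ kron3(ry, ry, ry)

# parity-adapted basis of the permutation symmetric subspace
e = np.eye(8)
W = (e[1] + e[2] + e[4]) / np.sqrt(3)
Wbar = (e[3] + e[5] + e[6]) / np.sqrt(3)
phi1 = {+1: (e[0] - 1j * e[7]) / np.sqrt(2), -1: (e[0] + 1j * e[7]) / np.sqrt(2)}
phi2 = {+1: (W + 1j * Wbar) / np.sqrt(2), -1: (W - 1j * Wbar) / np.sqrt(2)}
parity = kron3(sy, sy, sy)
```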
\begin{figure} \centering \includegraphics[scale=1,keepaspectratio=true]{./plotsHusimi3q.pdf} \caption{Husimi (quasiprobability distribution, $|\langle \phi_i|\theta_0,\phi_0 \rangle|^2$) plots for the four three-qubit basis states ($|\phi_i\rangle$), where $|\theta_0,\phi_0\rangle$ is an arbitrary three-qubit coherent state, parametrized by ($\theta_0,\phi_0$). \label{husimi3q}} \end{figure} In this basis, the unitary operator $\mathcal{U}$ is given by \begin{equation} \label{eq6} \mathcal{U} = \begin{pmatrix} \mathcal{U}_{+} & 0 \\ 0 & \mathcal{U}_{-} \end{pmatrix}, \end{equation} where $0$ is the $2 \times 2$ null matrix, and the $2\times2$ blocks $\mathcal{U}_{+}$ ($\mathcal{U}_{-}$), written in the bases $\{ \phi_{1}^{+}, \phi_{2}^{+} \}$ ($\{ \phi_{1}^{-}, \phi_{2}^{-} \}$), act in the positive (negative) parity subspaces respectively. Explicitly, these have matrix elements \begin{equation} \label{eq:Uplusm} \mathcal{U}_{\pm} = \pm e^{\mp \frac{i \pi}{4}} e^{-i \kappa} \begin{pmatrix} \frac{i}{2}e^{-2i \kappa} & \mp \frac{\sqrt{3} }{2} e^{-2i \kappa} \\ \pm \frac{\sqrt{3}}{2} e^{2i \kappa} & -\frac{i}{2}e^{2i \kappa} \end{pmatrix}. \end{equation} For simplicity the parameter $\kappa=\kappa_0/6$ is used in these expressions. We express $\mathcal{U}_+$, up to a phase, as a rotation $e^{-i \gamma \vec{\sigma} \cdot\hat{\eta}}$ by an angle $\gamma$ about an axis $\hat{\eta}=\sin{\theta} \cos{\phi} \, \hat{x} + \sin{\theta} \sin{\phi} \, \hat{y} + \cos{\theta} \, \hat{z}$. On comparison with Eq.~(\ref{eq:Uplusm}), we obtain $ \cos{\gamma} =\frac{1}{2} \sin{2\kappa}$, $\phi=\pi/2 +2 \kappa$, and $\sin{\theta} \sin{\gamma} = \sqrt{3}/2$.
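As a consistency check (not in the original derivation), one can project a numerically constructed $\mathcal{U}$ onto the parity-adapted basis and compare with Eq.~(\ref{eq:Uplusm}); a sketch with illustrative names:

```python
import numpy as np

sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def floquet_3q(kappa0):
    """The 3-qubit Floquet operator in the computational basis."""
    kappa = kappa0 / 6.0
    zz = kron3(sz, sz, I2) + kron3(I2, sz, sz) + kron3(sz, I2, sz)
    c = np.cos(np.pi / 4)
    ry = np.array([[c, -c], [c, c]])
    return np.diag(np.exp(-1j * kappa * np.diag(zz))) @ kron3(ry, ry, ry)

def U_blocks_closed(kappa0):
    """The closed-form 2x2 blocks of Eq. (eq:Uplusm), both parity sectors."""
    k = kappa0 / 6.0
    r3 = np.sqrt(3) / 2
    core = lambda s: np.array([[0.5j * np.exp(-2j * k), -s * r3 * np.exp(-2j * k)],
                               [s * r3 * np.exp(2j * k), -0.5j * np.exp(2j * k)]])
    Up = np.exp(-1j * np.pi / 4) * np.exp(-1j * k) * core(+1)
    Um = -np.exp(1j * np.pi / 4) * np.exp(-1j * k) * core(-1)
    return Up, Um

# parity-adapted basis, columns ordered phi1+, phi2+, phi1-, phi2-
e = np.eye(8)
W, Wbar = (e[1] + e[2] + e[4]) / np.sqrt(3), (e[3] + e[5] + e[6]) / np.sqrt(3)
basis = np.column_stack([(e[0] - 1j * e[7]) / np.sqrt(2), (W + 1j * Wbar) / np.sqrt(2),
                         (e[0] + 1j * e[7]) / np.sqrt(2), (W - 1j * Wbar) / np.sqrt(2)])
```

Projecting $\mathcal{U}$ with this basis matrix should reproduce the block-diagonal form of Eq.~(\ref{eq6}).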
To evolve initial states we need $\mathcal{U}^n$ and therefore $\mathcal{U}_{\pm}^n$, which is explicitly given by \begin{equation} \label{eq:Upluspowern} \mathcal{U}_{\pm}^n = (\pm 1)^n e^{-i n (\pm \frac{\pi}{4}+\kappa)} \begin{pmatrix} \alpha_n & \mp \beta_n^* \\ \pm \beta_n & \alpha_n^* \end{pmatrix}, \end{equation} where \begin{eqnarray} \label{eq:alphabetan} \alpha_n &=& T_n(\chi)+\frac{i}{2}\, U_{n-1}(\chi) \cos 2\kappa \quad \textrm{and}\\ \label{eq:betan} \beta_{n} &=& (\sqrt{3}/2)\, U_{n-1}(\chi) \,e^{2 i \kappa}. \end{eqnarray} The Chebyshev polynomials $T_n(\chi)$ and $U_{n-1}(\chi)$ are defined by $T_n(\chi)=\cos(n \gamma)$ and $U_{n-1}(\chi)=\sin(n \gamma)/\sin \gamma$ \cite{mason2002chebyshev}, with $\chi=\cos{\gamma}=\sin(2\kappa)/2$. Also note that $|\alpha_n|^2+|\beta_n|^2=1$. This follows both from the unitarity of $\mathcal{U}_{\pm}$ and from a polynomial Pell identity satisfied by the Chebyshev polynomials, namely \begin{equation} T^2_n(x)+(1-x^2) U_{n-1}^2(x)=1. \end{equation} Remarkably, one can also view this as a new proof of the Pell identity satisfied by Chebyshev polynomials through the unitarity of quantum mechanics. Note also that the range of $\chi$ is restricted in this case to $|\chi|\leq 1/2$; in addition to the general identity $|T_n(\chi)|\leq 1$, this implies that $|U_{n-1}(\chi)|\leq 2/\sqrt{3}$, which follows from Eq.~(\ref{eq:betan}) together with $|\beta_n|\leq 1$. It is now straightforward to carry out the time evolution of an arbitrary three-qubit permutation symmetric state and thereafter study its various properties. We further analyse two widely different three-qubit states, (i) $|0,0\rangle$ and (ii) $|\pi/2, -\pi/2\rangle$, in detail. For these two states, we obtain exact expressions for the linear entropy of a single-party reduced density matrix, the time average of the linear entropy, and the concurrence between any two qubits as a measure of entanglement.
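Eq.~(\ref{eq:Upluspowern}) can be checked against brute-force matrix powers; a small sketch (\texttt{numpy}; illustrative names), with the Chebyshev polynomials entered through their trigonometric definitions:

```python
import numpy as np
from numpy.linalg import matrix_power

def U_plus(kappa0):
    """The positive-parity block of Eq. (eq:Uplusm)."""
    k = kappa0 / 6.0
    r3 = np.sqrt(3) / 2
    return np.exp(-1j * np.pi / 4 - 1j * k) * np.array(
        [[0.5j * np.exp(-2j * k), -r3 * np.exp(-2j * k)],
         [r3 * np.exp(2j * k), -0.5j * np.exp(2j * k)]])

def U_plus_power(kappa0, n):
    """Closed form of U_+^n via alpha_n, beta_n of Eqs. (eq:alphabetan)-(eq:betan)."""
    k = kappa0 / 6.0
    chi = np.sin(2 * k) / 2
    gamma = np.arccos(chi)
    T_n = np.cos(n * gamma)                     # T_n(chi)
    U_nm1 = np.sin(n * gamma) / np.sin(gamma)   # U_{n-1}(chi)
    alpha = T_n + 0.5j * U_nm1 * np.cos(2 * k)
    beta = (np.sqrt(3) / 2) * U_nm1 * np.exp(2j * k)
    # the Pell identity |alpha|^2 + |beta|^2 = 1
    assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-12
    return np.exp(-1j * n * (np.pi / 4 + k)) * np.array(
        [[alpha, -np.conj(beta)], [beta, np.conj(alpha)]])
```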
These analytical expressions are verified numerically and also compared, where possible, with the data from the superconducting transmon qubit experiment of \cite{Neill16}. We chose these two examples because of the special classical phase space structures to which they correspond. The three-qubit state $\otimes^3|0\rangle$ is the coherent state at $|0,0\rangle$, which lies on the period-4 orbit marked with a square in Fig.~(\ref{fig:classical}), while $\otimes^3|+\rangle_y$ is the coherent state at $|\pi/2,-\pi/2\rangle$, which is a fixed point of the classical phase space. The latter becomes unstable as we move from the regular to the mixed phase space at $\kappa_0=2$ and is indicated by a circle in Fig.~(\ref{fig:classical}). \subsection{Initial state $|000\rangle=|\theta_0=0,\phi_0=0\rangle$ \label{example1}} Let us consider the state on the period-4 orbit, the coherent state at $|0,0\rangle$, which is $\otimes^3|0\rangle$: \begin{equation} \label{eq29} \begin{split} |\psi_n\rangle &= \mathcal{U}^n |000\rangle = \frac{1}{\sqrt{2}} \mathcal{U}^n \left( |\phi_{1}^{+} \rangle + |\phi_{1}^{-} \rangle \right) \\ & = \frac{1}{\sqrt{2}} \left( \mathcal{U}_{+}^n |\phi_{1}^{+} \rangle + \mathcal{U}_{-}^n |\phi_{1}^{-} \rangle \right)\\ & =\frac{1}{2}e^{-i n \left(\frac{3 \pi}{4}+\kappa\right)}\left\lbrace (1+i^n) \left( \alpha_n |000\rangle + i \beta_n |\overline{W} \rangle \right) \right. \\ & \left. + (1-i^n) \left( i \alpha_n |111\rangle - \beta_n |W \rangle \right) \right\rbrace. \end{split} \end{equation} From this the $1$- and $2$-qubit reduced density matrices $\rho_1(n)=\text{tr}_{2,3} (|\psi_n \rangle \langle \psi_n |)$, $\rho_{12}(n)=\text{tr}_{3}( |\psi_n \rangle \langle \psi_n |)$ are obtained.
The entanglement of one qubit with the other two is found as the linear entropy $1- {\rm{Tr }}\,[\rho_1(n)^2]$, and from the $2$-qubit reduced matrix, the entanglement between two qubits is found as the concurrence \cite{Wootters}. \subsubsection{The linear entropy} It turns out that for even values of the time $n$, say $n=2m$, $\rho_1(2m)$ is diagonal, with diagonal elements $\lambda(2m,\kappa_0)$ and $1-\lambda(2m, \kappa_0)$, from which the linear entropy is \begin{equation} \label{ent3q} S_{(0,0)}^{(3)}(2m,\kappa_0)=2 \lambda(2m,\kappa_0)(1- \lambda(2m,\kappa_0)), \end{equation} where the eigenvalue is \begin{equation} \label{eq:lamb000} \lambda(2m,\kappa_0) = \frac{1}{2} U^{2}_{2m-1}(\chi) =\frac{2}{3}|\beta_{2m}|^2. \end{equation} For odd values of $n$, $\rho_1(n)$ is not diagonal, but a peculiar result is obtained. One can evolve the even-time states $n=2m$ one step backward in time, \begin{equation} |\psi_{2m-1}\rangle=\mathcal{U}^{-1} |\psi_{2m}\rangle, \end{equation} where $\mathcal{U}$ is the Floquet operator in Eq.~(\ref{eq1a}). Let $m$ itself be an even integer, which implies that only the first half of the state in Eq.~(\ref{eq29}) survives. Then, up to an overall phase, using the nonlocal part of the unitary operator $\mathcal{U}$, the state up to local unitary operations is \begin{eqnarray} |\psi_{2m-1}\rangle &\stackunder{=}{\text{loc}}& e^{i \kappa (\sigma_1^z\sigma_2^z+\sigma_2^z\sigma_3^z+\sigma_3^z\sigma_1^z)} \left( \alpha_{2m} |000\rangle + i \beta_{2m} |\overline{W} \rangle \right), \nonumber \\ &=& e^{3i \kappa} \alpha_{2m} |000\rangle + i e^{-i \kappa} \beta_{2m} |\overline{W} \rangle, \nonumber \\ &=& \mathcal{V}\otimes \mathcal{V}\otimes \mathcal{V} |\psi_{2m} \rangle, \end{eqnarray} where the single-qubit unitary operator is $\mathcal{V} = e^{i \kappa \sigma_z}$.
Thus the three-qubit states $|\psi_{2m-1}\rangle$, obtained after an odd number of implementations of the unitary operator $\mathcal{U}$, are locally unitarily equivalent to the states obtained after $2m$ implementations of $\mathcal{U}$, and hence all entanglement properties, including entropy and concurrence, are left unchanged from an odd to the following even time step. A similar situation holds when $m$ is odd. Therefore, over a pair of consecutive implementations, the entanglement among the qubits does not change, giving rise to step-like features in the variation of entropy and concurrence with time. In particular \begin{equation} \label{eq:entsteps} S_{(0,0)}^{(3)}(2n-1,\kappa_0)= S_{(0,0)}^{(3)}(2n,\kappa_0), \;\; n=1,2,\cdots. \end{equation} \begin{figure} \centering \includegraphics[scale=1]{./entropy3qubit1.pdf} \caption{Linear entropy of a single qubit reduced state versus $n$, plotted for the initial state $|000\rangle$ at different values of $\kappa_0=0.1, \; 0.4, \; 0.8$ and $1.2$, as labelled on the right end of each curve.} \label{fig:entropy3qubit1} \end{figure} This step-like feature in the variation of the entropy is illustrated for a few values of $\kappa_0$ in Fig.~(\ref{fig:entropy3qubit1}). It is seen that there is a monotonic increase of the initial rate of entropy production as a function of $\kappa_0$. This gives way to non-monotonic behavior both in time and in the parameter $\kappa_0$. The initial rate can be simply quantified by the entanglement entropy at $n=1$. Again using the linear entropy, we have as a special case that \begin{equation} \label{eq:000time1} S_{(0,0)}^{(3)}(1,\kappa_0)=\sin^2(\kappa_0/3)\left(1-\frac{1}{2}\sin^2(\kappa_0/3)\right), \end{equation} which increases monotonically till $\kappa_0=3 \pi/2$, where it acquires the maximum value of $1/2$, which is also the upper bound. We will see that the case of $\kappa_0=3 \pi/2$ is one of maximal chaos in some sense for $j=3/2$.
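The step feature of Eq.~(\ref{eq:entsteps}) and the closed forms of Eqs.~(\ref{ent3q})-(\ref{eq:lamb000}) and (\ref{eq:000time1}) are easily confirmed by brute-force evolution; a minimal sketch (\texttt{numpy}; illustrative names):

```python
import numpy as np

sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def floquet_3q(kappa0):
    """The 3-qubit Floquet operator of Eq. (eq1a) in the computational basis."""
    kappa = kappa0 / 6.0
    zz = kron3(sz, sz, I2) + kron3(I2, sz, sz) + kron3(sz, I2, sz)
    c = np.cos(np.pi / 4)
    ry = np.array([[c, -c], [c, c]])
    return np.diag(np.exp(-1j * kappa * np.diag(zz))) @ kron3(ry, ry, ry)

def linear_entropy(psi):
    """1 - Tr[rho_1^2] for the first qubit of a 3-qubit pure state."""
    a = psi.reshape(2, 4)
    rho1 = a @ a.conj().T
    return float(np.real(1 - np.trace(rho1 @ rho1)))

def S_000_even(n, kappa0):
    """Eqs. (ent3q)-(eq:lamb000), valid at even times n = 2m."""
    chi = np.sin(kappa0 / 3) / 2
    gamma = np.arccos(chi)
    lam = 0.5 * (np.sin(n * gamma) / np.sin(gamma)) ** 2   # U_{n-1}(chi)^2 / 2
    return 2 * lam * (1 - lam)
```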
{For small $\kappa_0$, the growth of the entropy is $S_{(0,0)}^{(3)}(1,\kappa_0) \approx \kappa_0^2/9$. From Fig.~(\ref{fig:entropy3qubit1}) it is seen that even for small values of $\kappa_0$ the entropy eventually becomes large and the maximum allowed value of $1/2$ is reached. As the classical dynamics for small $\kappa_0$ is regular, the large value of the entanglement reached is intriguing. We now estimate the time it takes for the entanglement to reach nearly the maximum value. The state in Eq.~(\ref{eq29}) clearly distinguishes times modulo 4. If the time $n$ is odd and $\beta_n$ vanishes (the conditions under which this happens are discussed below), the resultant state is the GHZ one, with an equal superposition of $|000\rangle$ and $|111\rangle$, which is such that the reduced density matrices are maximally mixed and hence have maximum entropy. If the time $n$ is even and $\beta_n$ vanishes, there is no entanglement as the state becomes a tensor product, this also being apparent from Eqs.~(\ref{ent3q}) and (\ref{eq:lamb000}).} From Eq.~(\ref{eq:betan}), the vanishing of $\beta_n$ corresponds to the zeros of the Chebyshev polynomial of the second kind, $U_{n-1}(\chi)$, which are at $\chi=\chi_k=\cos(\pi k/n)$ with $k=1,2, \cdots,n-1$. Thus we are looking for values of $n$ such that \begin{equation} \frac{1}{2}\sin(\kappa_0/3)=\cos(\pi k /n), \end{equation} which may be found from the continued fraction convergents of $r=\cos^{-1}[ \sin(\kappa_0/3)/2]/\pi$. For small $\kappa_0 \ll 1$, $r \lesssim 1/2$, and the first non-zero convergent is $1/2$; therefore the second is of the form $a_1/(2 a_1+1)$, where $a_1$ is an integer. Identifying this with $k/n$, we see that $n$ is an odd integer and hence this corresponds to the case of maximum, or at least near-maximum, entanglement.
Taylor expanding the $\sin$ and the $\cos^{-1}$ and retaining the lowest order terms then gives an estimate of the time $n_*$ at which the entanglement, for the first time, reaches nearly the maximum as \begin{equation} \label{eq:000maxtime} n_* \approx 2 \left[ \dfrac{3 \pi}{2 \kappa_0}-\frac{1}{2}\right] +1 \approx \left[ \frac{3 \pi}{\kappa_0}\right], \end{equation} and the time at which the state gets unentangled, for the first time, is $\sim 2 n_*$. We see from Fig.~(\ref{fig:entropy3qubit1}) that these are excellent estimates even when $\kappa_0$ is as large as $0.4$ or $0.8$. {The formation of non-classical states such as the GHZ in this instance is a forerunner of dynamical tunneling, as for small $\kappa_0$ the islands at the ``poles'' of the phase space sphere can start to localize states for large values of $j$. This effect is seen prominently in the long-time averages. The intriguing increase of entanglement with time, even for small $\kappa_0$, in these states therefore has very different origins than the non-integrability of the kicked top. } \subsubsection{Long time averaged linear entropy} The infinite time average of the linear entropy, which can be easily obtained from Eq.~(\ref{ent3q}), may be inaccessible experimentally but is of definite interest from the point of view of thermalization, and it is also a way to study the influence of the parameter $\kappa_0$ directly. We need to use only even values of the time for this state, due to the property discussed above. We have \begin{eqnarray} \label{ent4} S^{(3)}_{(0,0)}(2m,\kappa_0) &=& U^{2}_{2m-1}(\chi)-\frac{1}{2} U^{4}_{2m-1}(\chi) \\ &=& \frac{\sin^2 2m \gamma}{\sin^2\gamma} -\frac{1}{2} \frac{\sin^4 2m \gamma}{\sin^4\gamma}.
\end{eqnarray} The time-averaged linear entropy is thus given by \begin{eqnarray} \label{ent5} \langle S^{(3)}_{(0,0)}(\kappa_0) \rangle &=& \lim_{N \rightarrow \infty}\frac{1}{N} \sum_{m=0}^{N-1} S^{(3)}_{(0,0)}(2m, \kappa) \\ &=& \frac{1}{2\sin^2\gamma} -\frac{3}{16\sin^4\gamma}, \end{eqnarray} where we have used that $\langle \sin^2(2m \gamma)\rangle =1/2$ and $\langle \sin^4(2m \gamma)\rangle =3/8$, assuming that $\gamma \neq 0,\pi/2, \pi$. Further, using $\cos \gamma = \frac{1}{2}\sin 2\kappa= \frac{1}{2}\sin (\kappa_0/3)$, we obtain the average explicitly in terms of $\kappa_0$ as \begin{equation} \label{eq:avgpiby2_1} \langle S^{(3)}_{(0,0)}(\kappa_0) \rangle=\frac{5-2\sin^2(\kappa_0/3)}{\left(4-\sin^2(\kappa_0/3)\right)^2}, \, 0<\kappa_0<3 \pi. \end{equation} This attains its maximum value of $1/3$ at $\kappa_0=3\pi/2$. This may be used as a probe to understand the process of thermalization, which is discussed later in this section. However, it is appropriate to point out that $\langle S^{(3)}_{(0,0)}(\kappa_0) \rangle$ is discontinuous at $\kappa_0=0$: it vanishes at $\kappa_0=0$ but is $5/16$ for arbitrarily small and nonzero values. Thus in this deep quantum regime, the state that starts off from the period-4 orbit gets entangled to a large extent even when the orbit is classically stable. This, however, is reflected in the infinite time average, which includes highly nonclassical time scales, as discussed above. \subsubsection{Concurrence} While the linear entropy is a measure of the entanglement of one qubit with the other two, the entanglement between any two qubits is quantified by the concurrence. Due to the permutation symmetry of the state it does not matter which two qubits are considered; there is only one concurrence. The concurrence is derived from the two-qubit reduced density matrix, as opposed to the entanglement of one qubit, which needs only the one-qubit state.
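A numerical sketch of this construction (\texttt{numpy}; illustrative names), computing the concurrence of the evolved state directly from $\rho_{12}$ via the standard Wootters recipe; it also confirms a step feature for the concurrence analogous to Eq.~(\ref{eq:entsteps}):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def floquet_3q(kappa0):
    """The 3-qubit Floquet operator of Eq. (eq1a) in the computational basis."""
    kappa = kappa0 / 6.0
    zz = kron3(sz, sz, I2) + kron3(I2, sz, sz) + kron3(sz, I2, sz)
    c = np.cos(np.pi / 4)
    ry = np.array([[c, -c], [c, c]])
    return np.diag(np.exp(-1j * kappa * np.diag(zz))) @ kron3(ry, ry, ry)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy
    # eigenvalues of R are non-negative up to floating-point noise
    ev = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, ev[0] - ev[1] - ev[2] - ev[3])

def rho_12(psi):
    """Two-qubit reduced state of a 3-qubit pure state (third qubit traced out)."""
    a = psi.reshape(4, 2)
    return a @ a.conj().T
```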
If $\rho_{12}$ is the two-qubit state, then its concurrence is given by \begin{equation} \label{eq:conc_defn} \mathcal{C}(\rho_{12})=\text{max}\left(0, \sqrt{\lambda_1}-\sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4} \right), \end{equation} where the $\lambda_i$ are the eigenvalues, in decreasing order, of $(\sigma_y \otimes \sigma_y)\rho_{12} (\sigma_y \otimes \sigma_y) \rho_{12}^*$, and $\rho_{12}^*$ denotes complex conjugation in the standard ($\sigma_z$) basis. \begin{figure}[h!] \centering \includegraphics[scale=1]{./conc3qubit1.pdf} \caption{Concurrence of a two-qubit reduced state versus $n$, plotted for $\kappa_0=0.1, \; 0.4, \; 0.8$ and $1.2$, as labelled on the right end of each curve.} \label{fig:conc3qubit1} \end{figure} An exact expression for the concurrence between any two qubits in the state $|\psi_n\rangle$ of Eq.~(\ref{eq29}) can be obtained explicitly, as the two-qubit state is an ``$X$ state'' \cite{YuEberly2007} when the time $n$ is even. The two-qubit reduced density operator $\rho_{12}(n)$, obtained by tracing out one of the qubits in $|\psi_n\rangle\langle \psi_n |$, is given by \begin{equation} \rho_{12}(n)= \begin{pmatrix} |\alpha_n|^2 & 0 & 0 & -\frac{i}{\sqrt{3}} \alpha_n \beta_n^{*} \\ 0 & \frac{1}{3} |\beta_n|^2 & \frac{1}{3} |\beta_n|^2 & 0 \\ 0 & \frac{1}{3} |\beta_n|^2 & \frac{1}{3} |\beta_n|^2 & 0 \\ \frac{i}{\sqrt{3}} \alpha_n^{*} \beta_n & 0 & 0 & \frac{1}{3}|\beta_n|^2 \end{pmatrix}, \end{equation} whose concurrence is found from the general formula for $X$ states \cite{YuEberly2007}, \begin{equation} \label{eq:concstate000} \begin{split} &\mathcal{C}(n,\kappa_0)= \\ & 2\, \text{max} \left[ 0, \frac{1}{3}|\beta_{n}|^{2} - \frac{1}{\sqrt{3}} |\alpha_{n}||\beta_{n}|, -\left( \frac{1}{3}|\beta_{n}|^{2} - \frac{1}{\sqrt{3}} |\alpha_{n}||\beta_{n}|\right)\right]\\ &=2 \left| \frac{1}{3} \vert \beta_n \vert^2 -\frac{1}{\sqrt{3}} \vert \alpha_n \vert \vert \beta_n \vert \right|\\ &= \left| U_{n-1}(\chi) \right| \left| \frac{1}{2} \vert U_{n-1}(\chi) \vert-
\sqrt{1-\frac{3}{4} \vert U_{n-1}(\chi) \vert^2 } \right|, \end{split} \end{equation} where we recall for convenience that $\chi=\cos \gamma=\sin(2\kappa)/2=\sin(\kappa_0/3)/2$. This is valid when the time $n$ is even, but from the arguments presented in the discussion of the entanglement entropy it follows that \begin{equation} \label{eq:concsteps} \mathcal{C}(2m-1,\kappa_0)=\mathcal{C}(2m,\kappa_0), \, m=1,2,\cdots. \end{equation} See Fig.~(\ref{fig:conc3qubit1}) for the variation of the concurrence with time for the same values of $\kappa_0$ as used in the previous figure. As with the linear entropy, the concurrence initially increases monotonically with $\kappa_0$ as well as with time. Once again it is of interest to see how much concurrence is produced in just the first step, and this is \begin{equation} \mathcal{C}(1,\kappa_0)=\sin(\kappa_0/3)\left[\sqrt{1-\frac{3}{4}\sin^2(\kappa_0/3)} - \frac{1}{2}\sin(\kappa_0/3)\right], \end{equation} which is valid when $0 \le \kappa_0 \le 3 \pi$; beyond this the concurrence is periodic. Interestingly this is monotonic in $\kappa_0$ only till $\kappa_0=3\sin^{-1}(1/\sqrt{3}) \approx 1.85$, where it attains the maximum value of $1/3$. This is in contrast to the linear entropy, or entanglement of one qubit with the rest, which grows till $\kappa_0=3\pi/2$. \begin{figure}[h!] \centering \includegraphics[scale=1]{./entconc.pdf} \caption{Solid curve with circles and dashed curve with squares show the variation of the entropy of a single qubit reduced state and the concurrence between a pair of qubits respectively, with $n$, as the three-qubit initial state $|000\rangle$ evolves under $\mathcal{U}^n$.
Parts (a), (b), (c), and (d) correspond to different values of the chaoticity parameter ($\kappa_0$) as mentioned.} \label{fig:entconc} \end{figure} It is useful to compare the concurrence and the entanglement entropy directly, and this is illustrated in Fig.~(\ref{fig:entconc}), where for four values of $\kappa_0$ these are plotted as a function of time. It is seen that while initially both of them grow, after a certain time the concurrence starts to decrease while the entanglement continues to increase. This is the phase where entanglement starts to be shared globally rather than in a bipartite manner. In this case of only $3$ qubits, this implies that tripartite entanglement starts to grow significantly after this time. It is also seen that when the entanglement entropy is the maximum possible, the concurrence is at a minimum, and sometimes vanishes. This is consistent with the fact that entanglement is monogamous and hence cannot be simultaneously shared among the three qubits. It is interesting that the simple formulas derived for this system illustrate these more general features. In particular, it is clear from Eq.~(\ref{ent3q}) and Eq.~(\ref{eq:concstate000}) that while both the entanglement and the concurrence vanish when $U_{n-1}(\chi)=0$, the concurrence also vanishes when $U_{n-1}(\chi)=\pm 1$, a case that corresponds to maximal entanglement. More discussion on this is also found in \cite{Madhok2018_corr}. A curious case is obtained when $\kappa_0=3 \pi/2$, when $\cos\gamma=1/2$ and hence $\gamma =\pi/3$ and $U_{n-1}(\chi)=\sin(2 \pi n/3)/\sin(\pi/3)$, which takes the value $0$ when $n \,(\text{mod}\,3)=0$, is $+1$ when $n \,(\text{mod}\,3)=1$ and is $-1$ when $n \,(\text{mod}\,3)=2$. This implies that when $n \,(\text{mod}\,6)$ is neither $0$ nor $5$ the entanglement entropy is the maximum possible value of $1/2$, while the concurrence vanishes for all values of the time $n$, as seen in the last panel of Fig.~(\ref{fig:entconc}).
Thus in this case the entanglement is shared only in a tripartite manner. We will return to this case later, but note here that indeed special values of such parameters in Floquet spin systems display similar behavior with large multipartite entanglement \cite{SunilMishraArulSubhra}. \subsection{Initial state $|+++\rangle_y=|\theta_0=\pi/2, \phi_0=-\pi/2\rangle$ and beyond\label{example2}} Having considered in some detail the fate of the state $|000\rangle$, we now study the case of the three-qubit state $|\psi_0\rangle=|+++\rangle_y$, where $|+\rangle_y=\frac{1}{\sqrt{2}}(|0\rangle+i|1\rangle)$ is an eigenvector of $\sigma_y$ with eigenvalue $+1$. The former is an eigenstate of the interaction term in the Floquet operator $\mathcal{U}$, while the latter is an eigenstate of the field term. When $|+++\rangle_y$ is the initial state, its evolution lies entirely in the positive parity sector, as it can also be written as $\otimes^3|+\rangle_y=(|\phi_1^+\rangle +\sqrt{3} i |\phi_2^+\rangle)/2$. As a coherent state it corresponds to being localized at $|\pi/2,-\pi/2\rangle$. The corresponding classical object is a fixed point that is stable till $\kappa_0=2$. The time evolved state is then \begin{equation} |\psi_n\rangle=\mathcal{U}^n|+++\rangle_y= e^{-in(\frac{\pi}{4}+\kappa)}\left( \gamma_n |\phi_1^{+} \rangle + \delta_n |\phi_2^{+} \rangle \right), \label{eq:fixedptstate} \end{equation} where $\gamma_n=(\alpha_n-i\sqrt{3}\beta_n^{*})/2$ and $\delta_n=(\beta_n+i\sqrt{3}\alpha_n^{*})/2$, and the $\alpha_n$ and $\beta_n$ are the same as in Eq.~(\ref{eq:alphabetan}). One can obtain the single-party reduced state by tracing out any two qubits, $\rho_{1}(n) =$ \begin{eqnarray} && \begin{pmatrix} \frac{1}{2} & -\frac{i}{3} \left( |\delta_n|^2 - \sqrt{3}\, \text{Im}( \gamma_n \delta_n^{*} ) \right) \\ \frac{i}{3} \left( |\delta_n|^2 - \sqrt{3}\, \text{Im}( \gamma_n \delta_n^{*} ) \right) & \frac{1}{2} \end{pmatrix}.
\nonumber \\ \label{eq-e2-5} \end{eqnarray} The eigenvalues of $\rho_1(n)$ are simple and given by $2\chi^2U^2_{n-1}(\chi)$ and $1-2\chi^2U_{n-1}^2(\chi)$; hence the linear entropy is \begin{equation} \label{eq-e2-8} S_{(\frac{\pi}{2}, -\frac{\pi}{2})}^{(3)}(n,\kappa_0)= 4\chi^2U^2_{n-1}(\chi) \left( 1-2\chi^2 U_{n-1}^2(\chi) \right). \end{equation} \begin{figure} \centering \includegraphics[scale=1]{./entropy3qubit2.pdf} \caption{Linear entropy of a single qubit reduced state versus $n$ is plotted for different values of $\kappa_0$. Curves corresponding to $\kappa_0=0.4, \; 0.8, \; 1.2$ and $3.0$ are shown by a solid line with diamonds, a solid line with circles, a dashed line with triangles, and a dashed line with squares respectively.} \label{fig:entropy3qubit2} \end{figure} Figure~(\ref{fig:entropy3qubit2}) shows the growth of the entanglement entropy in this state as a function of time $n$ for four different values of $\kappa_0$. Comparing with Fig.~(\ref{fig:entropy3qubit1}) we see that the entanglement increases much more slowly, in keeping with the classical interpretation of this state as being localized on a fixed point. However, the value at $n=1$ is the same in both cases: $S^{(3)}_{(\frac{\pi}{2},-\frac{\pi}{2})}(1,\kappa_0)$ is still given by Eq.~(\ref{eq:000time1}), and hence the entanglement after the first step is $\sim \kappa_0^2/9$ for small $\kappa_0$.
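The passage from the eigenvalues to the closed form of the linear entropy is a one-line identity, $1-\lambda^2-(1-\lambda)^2 = 2\lambda(1-\lambda)$; a small numerical sketch (ours, not part of the derivation) confirms it over random admissible values:

```python
import numpy as np

# Sanity check: for a 2x2 reduced state with eigenvalues lam = 2*chi^2*U^2
# and 1 - lam, the linear entropy 1 - Tr(rho^2) = 2*lam*(1 - lam)
# reduces to the closed form 4*chi^2*U^2*(1 - 2*chi^2*U^2) quoted above.
rng = np.random.default_rng(0)
chi = rng.uniform(-0.5, 0.5, 100)                 # the text restricts |chi| <= 1/2
U = rng.uniform(-2 / np.sqrt(3), 2 / np.sqrt(3), 100)
lam = 2 * chi**2 * U**2
S_eigs = 1 - lam**2 - (1 - lam) ** 2
S_closed = 4 * chi**2 * U**2 * (1 - 2 * chi**2 * U**2)
assert np.allclose(S_eigs, S_closed)
```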
{A difference is seen at $n=2$ when \begin{equation} \label{eq:ppptime2} S_{(\frac{\pi}{2}, -\frac{\pi}{2})}^{(3)}(2,\kappa_0)= \sin^4(\kappa_0/3)\left(1-\frac{1}{2}\sin^4(\kappa_0/3)\right), \end{equation} thus while $S_{(0, 0)}^{(3)}(2,\kappa_0)=S_{(0, 0)}^{(3)}(1,\kappa_0)$, $S_{(\frac{\pi}{2}, -\frac{\pi}{2})}^{(3)}(2,\kappa_0)<S_{(\frac{\pi}{2}, -\frac{\pi}{2})}^{(3)}(1,\kappa_0).$ In fact the contrast with the state $|000\rangle$ is most apparent when we observe that Eq.~(\ref{eq-e2-8}) implies that \begin{equation} \label{eq:pppbound} S_{(\frac{\pi}{2}, -\frac{\pi}{2})}^{(3)}(n,\kappa_0)\leq 4 \chi^2 U_{n-1}^2(\chi) \leq \frac{4}{3}\sin^2(\kappa_0/3), \end{equation} where the last inequality follows from the upper bound $|U_{n-1}(\chi)|\leq 2/\sqrt{3}$, which, as observed above, holds due to the restriction $|\chi|\leq 1/2$. This inequality is useful for small $\kappa_0$, in which case $S_{(\frac{\pi}{2}, -\frac{\pi}{2})}^{(3)}(n,\kappa_0) \leq 4 \kappa_0^2/27$, which is very close to the entanglement produced at the very first step, namely $\kappa_0^2/9$, and in particular shows no secular growth towards large entanglement.} The long-time average value of the linear entropy is calculated exactly as in the case when the initial state was $|000\rangle$, and we therefore merely display the result \begin{equation} \label{eq:avgpiby2_2} \langle S^{(3)}_{(\frac{\pi}{2},-\frac{\pi}{2})}(\kappa_0) \rangle=\frac{\sin^2(\kappa_0/3)}{\left(4-\sin^2(\kappa_0/3)\right)^2}\left( 8-5\sin^2(\kappa_0/3)\right). \end{equation} The major difference between the two initial states considered so far is apparent in this formula: it is smooth at $\kappa_0=0$, where it vanishes, unlike Eq.~(\ref{eq:avgpiby2_1}). The fact that the classical orbit in this case is a fixed point as opposed to a period-4 orbit is notable.
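The inequality chain can be probed numerically; in the sketch below (ours), $\chi^2=\sin^2(\kappa_0/3)/4$ is taken as implied by the last step of the bound, and the Chebyshev values are generated through the trigonometric form $U_{n-1}(\cos\theta)=\sin(n\theta)/\sin\theta$:

```python
import numpy as np

# Numerical check of the inequality chain: with chi^2 = sin^2(kappa_0/3)/4,
# |U_{n-1}(chi)| <= 2/sqrt(3), and hence the stepwise linear entropy obeys
# S <= 4*chi^2*U_{n-1}^2 <= (4/3)*sin^2(kappa_0/3).
for k0 in np.linspace(0.1, 3.0, 30):
    chi = np.sin(k0 / 3) / 2
    theta = np.arccos(chi)
    n = np.arange(1, 200)
    U = np.sin(n * theta) / np.sin(theta)          # U_{n-1}(chi)
    S = 4 * chi**2 * U**2 * (1 - 2 * chi**2 * U**2)
    assert np.all(np.abs(U) <= 2 / np.sqrt(3) + 1e-12)
    assert np.all(S <= (4 / 3) * np.sin(k0 / 3) ** 2 + 1e-12)
```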
In the case of the state $|000\rangle$, extremely non-classical states such as the GHZ can form at sufficiently long times and lead to the large average. We will see that the state centered at the fixed point can also have a large nonzero average for the case of $4$ qubits, again due to the formation of highly non-classical states mediated by tunneling. Figure~(\ref{fig:entropyplot}) shows the long-time average $\langle S^{(3)}_{(\theta_0,\phi_0)}(\kappa_0)\rangle$ as a function of $\kappa_0$, for the case of 3 qubits, and three initial states, two of them being what we just discussed, namely $|000\rangle$ and $|+++\rangle_y$, which correspond to the cases with $\theta_0=0$ and $\pi/2$ respectively. That they are in some sense extreme cases is seen clearly in this figure. Each is seen to increase with the torsion $\kappa_0$ to $1/3$, while a state with $\theta_0=\pi/4$ (and in all cases $\phi_0=-\pi/2$) grows to $7/24$, which we will see is the lowest for any state. The average value of the linear entropy in the $N$-qubit permutation symmetric subspace~\cite{SheshadriPRE2018} is given by \begin{equation} S_{RMT}(N)=\frac{N-1}{2N}, \end{equation} and for $N=3$ this also gives $1/3$. For at least three particular initial states with important classical phase space correspondences, $|0,0\rangle\equiv |000\rangle$ and $|\pi/2, \pm \pi/2\rangle \equiv |\pm\pm\pm\rangle_y$, this value is, remarkably, exactly attained for $\kappa_0=3\pi/2$, as easily verified from Eq.~(\ref{eq:avgpiby2_1}) and Eq.~(\ref{eq:avgpiby2_2}). Thus ergodicity is attained, in the sense that the time-averaged linear entropy approaches the state-space-averaged linear entropy. Note that the $j=\infty$ (classical) system shows a transition to chaos in the same range of the parameter.
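The coincidence at $\kappa_0=3\pi/2$ can be checked directly; the following sketch (ours) evaluates the closed-form average for the $|+++\rangle_y$ state and compares it with $S_{RMT}(3)$:

```python
import numpy as np

# Evaluate the long-time average for the |+++>_y state,
# <S> = sin^2(k0/3)*(8 - 5*sin^2(k0/3)) / (4 - sin^2(k0/3))^2,
# and check that it equals S_RMT(N) = (N-1)/(2N) = 1/3 at k0 = 3*pi/2.
def avg_entropy_ppp(k0):
    s2 = np.sin(k0 / 3) ** 2
    return s2 * (8 - 5 * s2) / (4 - s2) ** 2

S_RMT = lambda N: (N - 1) / (2 * N)
assert np.isclose(avg_entropy_ppp(3 * np.pi / 2), S_RMT(3))   # both equal 1/3
assert np.isclose(avg_entropy_ppp(0.0), 0.0)                  # vanishes smoothly at k0 = 0
```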
While $j=3/2$ is too small to see effects such as the fixed points' loss of stability, the region surrounding the classical fixed points $(\theta_0,\phi_0)=(\pi/2, \pm \pi/2)$ is stable for small $\kappa_0$ and gradually loses stability as the parameter is increased; this is reflected in the gradual increase of the average entropy corresponding to the initial states $|\pi/2,\pm \pi/2\rangle$, starting from $0$ when $\kappa_0=0$. Notice that from a purely quantum mechanical view, $\otimes^{2j} |\pm \rangle_y$ are eigenstates of $\mathcal{U}$ at $\kappa_0=0$. In contrast, the initial state $|000\rangle$ corresponds to a classical period-4 orbit and attains an entanglement entropy as large as $5/16$ for arbitrarily small $\kappa_0$. \begin{figure} \centering \includegraphics[scale=1,keepaspectratio=true]{./tavgfig.pdf} \caption{(a) Time averaged linear entropy, obtained over $n=1000$ periods, of a single qubit {\it vs} the parameter $\kappa_0$, for three initial coherent states $|\theta_0, \phi_0\rangle$. Eqs.~(\ref{eq:avgpiby2_1},~\ref{eq:avgpiby2_2}) apply to the curves labeled (1) and (3), as for $\theta_0=0$ the value of $\phi_0$ is immaterial on the sphere. The inset shows the periodicity of the entanglement in the parameter at $\kappa_0=3\pi$. Part (b) displays the time averaged linear entropy across all initial coherent states for the value $\kappa_0=3\pi/2$ and is described by Eq.~(\ref{eq:entthetaphi}).} \label{fig:entropyplot} \end{figure} \subsubsection{Arbitrary initial states, $\kappa_0=3 \pi/2$} For 3 qubits, the case of $\kappa_0=3\pi/2$ is an extreme one, and the eigenvalues of $\mathcal{U}$ in this case are $\exp(\pm 2 \pi i/3)$ and $\pm \exp(\pm \pi i/6)$, implying that $\mathcal{U}^{12}=I$.
Thus infinite-time averages are finite averages over a period; in fact the entanglement has a period of $6$ in this case, and for arbitrary initial coherent states the time-averaged entanglement entropy is obtained via a straightforward, if long, computation whose details we skip, stating only the result: \begin{equation} \begin{split} \langle S^{(3)}_{(\theta_0,\phi_0)}(3\pi/2)\rangle = &\frac{1}{48}[ 15+\cos(4 \theta_0) + \\ &(1+3 \cos(2 \theta_0)) \sin^4 \theta_0 \sin^2(2 \phi_0)]. \end{split} \label{eq:entthetaphi} \end{equation} This takes values in the narrow interval $[7/24,1/3]$, and is shown in Fig.~(\ref{fig:entropyplot}). The minimum corresponds to several initial states including $|\pi/4,\pm \pi/2\rangle$, and the maximum includes the $|0,0\rangle$ and $|\pi/2, \pm \pi/2\rangle$ states as already noted above. The structures seen are not directly linked to classical phase space orbits, except through shared symmetries \cite{RuebeckArjendu2017}, and cannot be expected to be, as the classical limit is for fixed $\kappa_0$ and $j \rightarrow \infty$. Nevertheless these results lend quantitative credence to thermalization, in the sense that the time averaged entropy of subsystems of most states is close to the ensemble average for suitably large $\kappa_0$, even for the 3-qubit case \cite{Neill16,Rigol16}; at $\kappa_0=3\pi/2$ this average approaches $1/3$. Coincidentally, as mentioned above, this is the same as the average linear entropy of a single qubit reduced state in a set of random symmetric three-qubit states. The calculations for the case of a general initial state and more figures of the long-time averages are presented in the Appendix. \section{Comparison with an experiment} We analyse the data from a recent experiment~\cite{Neill16}, which demonstrates the kicked top dynamics of a spin-$3/2$ using three superconducting transmon qubits.
Experimental data corresponds to the two special initial states $|0,0\rangle$ and $|\pi/2,-\pi/2\rangle$ (whose analytical solutions are given in sections~\ref{example1} and \ref{example2} respectively), each for two values of the chaoticity parameter, $\kappa_0=0.5$ and $2.5$. The three-qubit state is experimentally initialized in each of these states and then allowed to undergo a series of kicks and evolutions, separately for $\kappa_0=0.5$ and $2.5$, as described in~\cite{Neill16}, for a total of $20$ time steps. Details of the analysis of the raw experimental data, which often yields unphysical (negative) populations, are outlined below. We analysed the complete quantum state tomographic data obtained at the end of each time step. The state of the three-qubit system is obtained via complete quantum state tomography using a set of 64 projective measurements. These projective measurements are constructed by taking combinations of the Pauli matrices ($\sigma_x$, $\sigma_y$, $\sigma_z$) and the identity operator ($I$)~\cite{james-pra-2001, Neill16}. These measurements are experimentally realized by various single qubit rotations ($\mathcal{R}$) followed by $\sigma_z$ measurements on the individual qubits, which effectively performs a $\sigma_{i'}$ measurement (for $i'=x$, $\mathcal{R}=$ the Hadamard operator $(H_d)$; for $i'=y$, $\mathcal{R}=$ the phase shift $(\mathcal{S}).H_{d}$; for $i'=z$, $\mathcal{R}=I$)~\cite{Neill16}. Multiple repetitions of each measurement provide the relative occupancies of the eight basis states of the three-qubit system. The resulting relative populations ($p_m$) of these eight states are thus obtained experimentally. In order to compensate for the errors induced by the measurements, the intrinsic populations ($p_{int}$) are obtained via a correction matrix ($F$)~\cite{steffen-science-2006, lucero-prl-2008}. We have $p_{int}=F^{-1} p_m$, where $F=F_1 \otimes F_2 \otimes F_3$.
$F_i$ is the measurement error matrix corresponding to the $i^{th}$ qubit, given as \begin{displaymath} F_i= \left( \begin{array}{cc} f_0^{(i)} & 1-f_1^{(i)} \\ 1-f_0^{(i)} & f_1^{(i)} \end{array} \right). \end{displaymath} Here, $f_0^{(i)}$ is the probability with which the state $|0\rangle$ of the $i^{th}$ qubit is correctly identified as $|0\rangle$, while $1-f_0^{(i)}$ is the probability with which a state that is actually $|0\rangle$ is wrongly identified as $|1\rangle$. $f_0^{(i)}$ and $f_1^{(i)}$ are termed the measurement fidelities of the basis states $|0\rangle$ and $|1\rangle$ respectively of the $i^{th}$ qubit. Using the part of the measurement data corresponding to the initial state preparation, we estimated the measurement fidelities as $f_0^{(1)}=0.98$, $f_1^{(1)}=0.92$, $f_0^{(2)}=0.98$, $f_1^{(2)}=0.94$, $f_0^{(3)}=0.96$, $f_1^{(3)}=0.87$. The intrinsic populations obtained in this manner are positive (as observed up to the second decimal place). Using these intrinsic population values, three-qubit density operators are obtained, which are then refined via convex optimization. The fidelity between the theoretically expected ($\rho_t$) and the experimentally obtained ($\rho_e$) states is given by~\cite{Neill16} \begin{equation} \mathcal{F}=Tr \sqrt{\sqrt{\rho_t}\rho_e\sqrt{\rho_t}}. \end{equation} These experimentally obtained three-qubit density operators are then used in our study to obtain the correlations, such as the linear entropy of a single qubit reduced state and a two-qubit entanglement measure, the concurrence. Experimental data has its own imperfections, and the three-qubit experimental state may not be permutation symmetric under qubit exchange. Therefore, corresponding to each three-qubit density operator, three single qubit reduced density operators (say $\rho_1$, $\rho_2$, $\rho_3$) and three two-qubit reduced density operators (say $\rho_{12}$, $\rho_{23}$, $\rho_{13}$) are obtained.
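The readout-error correction described above can be sketched in a few lines; in the example below (our own illustration) only the measurement fidelities are taken from the text, while the raw populations $p_m$ are made-up placeholder values:

```python
import numpy as np

# Readout correction p_int = F^{-1} p_m with F = F_1 (x) F_2 (x) F_3.
# Only the fidelities below are from the text; p_m is hypothetical.
f0 = [0.98, 0.98, 0.96]   # P(|0> correctly read as |0>) per qubit
f1 = [0.92, 0.94, 0.87]   # P(|1> correctly read as |1>) per qubit

F = np.array([[1.0]])
for a, b in zip(f0, f1):
    F_i = np.array([[a, 1 - b],
                    [1 - a, b]])
    F = np.kron(F, F_i)               # build the tensor-product correction matrix

p_m = np.full(8, 1 / 8)               # hypothetical measured populations
p_int = np.linalg.solve(F, p_m)       # intrinsic populations
# each column of F sums to 1, so the correction preserves normalization
assert np.isclose(p_int.sum(), 1.0)
```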
At each time step, the linear entropy and the concurrence are calculated from the single-qubit and the two-qubit reduced density operators respectively, and their average behavior over the qubit choices is observed. Figure~\ref{compk1tf1} shows the comparison between the analytical results (dashed curves with markers) and those using experimental data from \cite{Neill16}, shown as solid curves with markers, for $\kappa_0=0.5$ and $2.5$. Numerical results are also plotted as dashed curves in Fig.~\ref{compk1tf1}, but are naturally indistinguishable from the respective analytical results. More extensive analytical results have already been displayed in Figs.~(\ref{fig:entropy3qubit1})--(\ref{fig:entropy3qubit2}) and discussed in the previous section. Figure~(\ref{compk1tf1}(a,b)) corresponds to the initial state $|000\rangle$, whose classical limit is the period-4 orbit. This orbit is unstable at $\kappa_0=2.5$ and we see a rapid growth in the entanglement. However, even at $\kappa_0=0.5$ the entanglement grows to near maximal values, consistent with the large time average in Eq.~(\ref{eq:avgpiby2_1}), and with the analysis that predicts that the maximum occurs at a time scale $n_*\sim 3 \pi/\kappa_0 \sim 19$. We also noted that the entanglement at time $2n$ is the same as the entanglement at time $2n-1$, see Eqs.~(\ref{eq:entsteps}) and ~(\ref{eq:concsteps}). Remarkably, these steps are present (though previously unnoticed) in the experimental data for the first few time steps. All entanglement properties, including the concurrence, are left approximately unchanged in the experimental data from an odd to an even time step, as seen in Fig.~(\ref{compk1tf1}(a,b)). The degradation of this phenomenon is naturally to be attributed to decoherence and may be a good measure of it.
\begin{figure} \centering \includegraphics[scale=1]{./fig_k1tf4.pdf} \caption{Plots showing analytical (dashed curves with markers), experimental (solid curves with markers) and numerical (dashed) curves of linear entropy and concurrence as a function of the number of kicks, as the initial state $|\psi_0\rangle$ is evolved under repeated applications of the operator $\mathcal{U}$. Parameters of the initial state, $(\theta_0, \phi_0)$, and the chaoticity parameter, $\kappa_0$, are specified in each figure. Analytical (wherever plotted) and numerical curves exactly overlap, and hence cannot be seen separately.} \label{compk1tf4} \label{compk1tf1} \end{figure} The plots comparing the linear entropy and concurrence from the experimental data for the state $|\pi/2,-\pi/2\rangle$ when $\kappa_0=0.5$ and $\kappa_0=2.5$ are shown in Fig.~(\ref{compk1tf1}(c,d)). They show a much smaller entropy growth for $\kappa_0=0.5$ in comparison to the state $|000\rangle$, consistent with the bound in Eq.~(\ref{eq:pppbound}), and are a reflection, in the semiclassical limit, of the stable neighborhood of $|\pi/2,-\pi/2\rangle$. This is also consistent with the long time average, already displayed in Eq.~(\ref{eq:avgpiby2_2}). More qualitative discussions of the time-evolution have been published in \cite{Madhok2018_corr}. \section{Exact solution for four qubits} It is particularly interesting to study a four-qubit kicked top as this is the smallest system where the all-to-all interaction among qubits is different from a nearest-neighbour interaction, and it therefore presents a special case of a genuinely nonintegrable system. Surprisingly, even in this case an exact solution of the kicked top, with spin $j=2$, is possible. As for the three-qubit kicked top, we are again confined to the ($2j+1=5$)-dimensional permutation symmetric subspace of the total $2^{2j}=16$-dimensional Hilbert space.
In this case the parity symmetry reduced and permutation symmetric basis in which $\mathcal{U}$ is block-diagonal is \begin{eqnarray} |\phi_1^{\pm} \rangle&=& \frac{1}{\sqrt{2}} (|W\rangle \mp | \overline{W} \rangle),\nonumber \\ |\phi_2^{\pm} \rangle &=& \frac{1}{\sqrt{2}} (|0000\rangle \pm | 1111 \rangle), \nonumber \, \textrm{and}\\ |\phi_3^{+} \rangle &=& \frac{1}{\sqrt{6}} \sum_{\mathcal{P}}|0011\rangle_{\mathcal{P}} \end{eqnarray} where $|W\rangle =\frac{1}{2}\sum_{\mathcal{P}}|0001\rangle_{\mathcal{P}}$, $|\overline{W}\rangle =\frac{1}{2}\sum_{\mathcal{P}}|1110\rangle_{\mathcal{P}}$, and $\sum_{\mathcal{P}}$ sums over all possible permutations. Husimi plots for each of these states are shown in Fig.~(\ref{husimi4q}). \begin{figure} \centering \includegraphics[scale=1,keepaspectratio=true]{./plotsHusimi4q.pdf} \caption{Husimi (quasiprobability distribution, $|\langle \phi_i|\theta_0,\phi_0 \rangle|^2$) plots for the set of five four-qubit basis states ($|\phi_i\rangle$), where $|\theta_0,\phi_0\rangle$ is an arbitrary four-qubit coherent state, parametrized by ($\theta_0,\phi_0$).} \label{husimi4q} \end{figure} While all of these states $|\phi_j^{\pm}\rangle$ are eigenstates of the parity operator $\otimes^4_{j=1} \sigma^y_j$ with eigenvalue $\pm 1$, a peculiarity of 4 qubits is that $|\phi_1^{+}\rangle$ is also an eigenstate of the Floquet operator $\mathcal{U}$ with eigenvalue $-1$ for {\it all} values of the parameter $\kappa_0$. Thus the $5$-dimensional space splits into $1\oplus2\oplus2$ subspaces on which the operators are $\mathcal{U}_0=-1$ and $\mathcal{U}_{\pm}$. Note that we continue to use the same symbols for the symmetry reduced Floquet operators as for the 3-qubit case, although they are not the same. It is interesting that the eigenstate $|\phi_1^{+}\rangle$ still has a classically viable interpretation, but only for small $\kappa_0$, where, as is clear from the Husimi function, it is localized on the fixed points and the symmetric islands.
A more detailed study of the eigenstates is postponed while we concentrate here on the time evolution. In this basis the unitary Floquet operator $\mathcal{U}$ becomes block diagonal, which makes it easy to take the $n^{th}$ power of the unitary operator $ \mathcal{U}$, \begin{equation} \label{eq8} \mathcal{U}^n = \begin{pmatrix} (-1)^n & 0 & 0 \\ 0 & \mathcal{U}_{+}^n & 0 \\ 0 & 0 & \mathcal{U}_{-}^n \end{pmatrix}. \end{equation} Thus in this case also we do not encounter the need to take powers of any matrix other than $2$-dimensional ones. The block $\mathcal{U}_{+}$ is $\mathcal{U}$ in the basis $\{\phi_2^{+},\phi_3^{+}\}$ and is \begin{equation} \label{eq12} \mathcal{U}_{+} = -ie^{-\frac{i \kappa }{2}} \left( \begin{array}{cc} \frac{i}{2} e^{-i \kappa} & \frac{\sqrt{3}i}{2} e^{-i \kappa} \\ \frac{\sqrt{3}i}{2} e^{i \kappa} & -\frac{i}{2} e^{i \kappa} \\ \end{array} \right), \end{equation} while $\mathcal{U}_{-}$ is $\mathcal{U}$ in the basis $\{\phi_1^{-},\phi_2^{-}\}$, \begin{equation} \label{eq:Uminus} \mathcal{U}_{-} = e^{-\frac{3 i \kappa }{4}} \left( \begin{array}{cc} 0 & e^{\frac{3 i \kappa }{4}} \\ -e^{-\frac{3 i \kappa }{4}} & 0 \\ \end{array} \right), \end{equation} where for simplicity we have used $\kappa=\kappa_0/2$. Adopting the same procedure as for the case of 3 qubits, namely expressing $\mathcal{U}_+$ as an $SU(2)$ rotation, apart from a phase, and taking its power results in \begin{eqnarray} \mathcal{U}_{+}^n &=& e^{-\frac{i n(\pi+\kappa) }{2}} \begin{pmatrix} \alpha_n & i\beta_n^{*} \\ i\beta_n & \alpha_n^{*} \end{pmatrix}, \end{eqnarray} where \begin{equation} \alpha_n = T_{n}(\chi)+\frac{i}{2}U_{n-1}(\chi)\cos{\kappa},\, \beta_n = \frac{\sqrt{3}}{2}U_{n-1}(\chi)e^{i\kappa}. \end{equation} As above, $T_{n}(\chi)$ and $U_{n-1}(\chi)$ denote the Chebyshev polynomials of the first and second kinds respectively, but now $\chi=\sin{\kappa}/2=\sin(\kappa_0/2)/2$.
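The Chebyshev closed form for $\mathcal{U}_+^n$ is easy to verify by brute force; the following sketch (ours, not part of the derivation) compares the explicit matrix power with the formula, using the trigonometric forms $T_n(\cos\theta)=\cos n\theta$ and $U_{n-1}(\cos\theta)=\sin n\theta/\sin\theta$:

```python
import numpy as np

# Verify U_+^n against the Chebyshev closed form with
# alpha_n = T_n(chi) + (i/2) U_{n-1}(chi) cos(kappa),
# beta_n  = (sqrt(3)/2) U_{n-1}(chi) e^{i kappa},  chi = sin(kappa)/2.
k0 = 1.7                      # an arbitrary test value of kappa_0
kap = k0 / 2
Uplus = -1j * np.exp(-1j * kap / 2) * np.array(
    [[0.5j * np.exp(-1j * kap), (np.sqrt(3) / 2) * 1j * np.exp(-1j * kap)],
     [(np.sqrt(3) / 2) * 1j * np.exp(1j * kap), -0.5j * np.exp(1j * kap)]])

chi = np.sin(kap) / 2
th = np.arccos(chi)           # T_n = cos(n*th), U_{n-1} = sin(n*th)/sin(th)
for n in range(1, 20):
    Tn, Un1 = np.cos(n * th), np.sin(n * th) / np.sin(th)
    alpha = Tn + 0.5j * Un1 * np.cos(kap)
    beta = (np.sqrt(3) / 2) * Un1 * np.exp(1j * kap)
    closed = np.exp(-1j * n * (np.pi + kap) / 2) * np.array(
        [[alpha, 1j * np.conj(beta)], [1j * beta, np.conj(alpha)]])
    assert np.allclose(np.linalg.matrix_power(Uplus, n), closed)
```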
Similarly, \begin{equation} \label{eq:Uminusn} \mathcal{U}_{-}^n = e^{-\frac{ 3in \kappa }{4}} \left( \begin{array}{cc} \cos \frac{n\pi}{2} & e^{\frac{3 i \kappa}{4}} \sin \frac{n\pi}{2} \\ -e^{-\frac{3 i \kappa }{4}} \sin \frac{n\pi}{2} & \cos \frac{n\pi}{2} \\ \end{array} \right), \end{equation} which has a much simpler form than $\mathcal{U}_{+}^n$; in fact $\mathcal{U}_{-}^2=-e^{-3 i \kappa_0/4} I_2$ is, up to a dynamical phase, proportional to the identity. Thus all states in the negative parity subspace are essentially periodic with period-2, a uniquely quantum feature. In particular the GHZ state $|\phi_2^{-}\rangle=(|0000\rangle -|1111\rangle)/\sqrt{2}$ is of this kind. Using these it is possible to find the exact evolution of the entanglement entropy of any one qubit, and in particular we again concentrate on the initial states $|0000\rangle$ and $|\pm\pm\pm\pm \rangle_y$, for the same reasons as in the 3-qubit case. \subsection{Initial state $|\psi_0 \rangle=|0000\rangle$} For the four-qubit state $|0000\rangle$, $n$ applications of the unitary operator $\mathcal{U}$ give \begin{eqnarray} \mathcal{U}^n |0000\rangle &=& \frac{1}{\sqrt{2}} \left( \mathcal{U}^n_{+} |\phi_2^{+}\rangle + \mathcal{U}^n_{-} |\phi_2^{-}\rangle \right), \end{eqnarray} which is the state $|\psi_n\rangle$ at time $n$. Just as in the 3-qubit case, the state $\mathcal{U}^{2n}|0000\rangle$ is, up to local unitary operators, the same as $\mathcal{U}^{2n-1}|0000\rangle$, and therefore again all entanglement properties have ``steps" in their dynamical evolution and it is sufficient to consider the time $n$ to be an even integer. In this case \begin{equation} |\psi_n\rangle = e^{-\frac{i n(\pi+\kappa)}{2}} \frac{1}{\sqrt{2}} \left( \alpha_n |\phi_2^{+}\rangle +i\beta_n |\phi_3^{+}\rangle + e^{-\frac{in\kappa}{4}} |\phi_2^{-}\rangle\right).
\end{equation} The single-qubit reduced density matrix is diagonal for even values of $n$, with eigenvalues $\lambda(n,\kappa_0)$ and $1-\lambda(n,\kappa_0)$, where $\lambda(n,\kappa_0)=\frac{1}{2}\left( 1+ \xi_n(\kappa_0)\right)$ and \begin{eqnarray} \label{eq:xi4qub} \xi_n(\kappa_0) &=& \textrm{Re} \left( \alpha_n e^{in\kappa_0/8} \right) \nonumber \\ &=& T_{n}(\chi) \cos\frac{n\kappa_0}{8}-\frac{1}{2}U_{n-1}(\chi)\cos\frac{\kappa_0}{2} \sin\frac{n\kappa_0}{8}. \nonumber \\ \end{eqnarray} For even values of $n$, the linear entropy of a single-qubit reduced state is given by \begin{equation} \label{eq:ent4q1} S_{(0,0)}^{(4)}(n,\kappa_0)=\frac{1}{2}\left[ 1- \xi_n^2(\kappa_0) \right], \end{equation} and at odd values $S_{(0,0)}^{(4)}(2n-1,\kappa_0)=S_{(0,0)}^{(4)}(2n,\kappa_0)$. Figure~(\ref{fig:entropy4qubit1}) shows the evolution of this entanglement entropy for a few representative values of $\kappa_0$. Even for $n=2$ (which is the same as $n=1$) we get a fairly long expression for the entanglement entropy; hence, rather than display it, we state that for small $\kappa_0$ it increases as $S_{(0,0)}^{(4)}(1,\kappa_0) \approx 3\, \kappa_0^2/32$, which is very similar to the corresponding 3-qubit case. It grows monotonically with $\kappa_0$ and attains the upper bound of $1/2$ already at $\kappa_0=\pi$, in contrast to the 3-qubit case, which attains this only at $\kappa_0=3 \pi/2$. \begin{figure} \centering \includegraphics[scale=1]{./ent4q1.pdf} \caption{Linear entropy of a single-qubit reduced state versus $n$ is plotted at different values of $\kappa_0$, shown in parts (a) and (b), corresponding to the four qubit initial state $|0000\rangle$.} \label{fig:entropy4qubit1} \end{figure} {To find the relevant time scales in the growth of the entanglement, we note that the maximum value of the entropy is attained when $\xi_n(\kappa_0)=0$.
From Eq.~(\ref{eq:xi4qub}), and noting that the zeros of $T_n(\chi)$ and $U_{n-1}(\chi)$ do not occur simultaneously, we first examine the case when $n$ is even and $U_{n-1}(\chi)$ vanishes. This is similar to the analysis of the $3$-qubit case above, and we simply state that it implies $n_* \approx 4 \pi/\kappa_0$. Thus the first even time at which the second term of Eq.~(\ref{eq:xi4qub}) vanishes is $n_*$; when this condition is satisfied the first term also vanishes, as $\cos( n\kappa_0/8)$ does. Thus the typical time scale for the large entanglement to develop is slightly larger than in the case of $3$ qubits, where it was $3 \pi/\kappa_0$. At the time when the entanglement is maximum, $\beta_n \approx 0$ and the resultant states are superpositions of $|\phi_2 ^{\pm}\rangle$, i.e., GHZ states. Thus the large 1-qubit entanglement observed in the experiment of \cite{Neill16} for $\kappa_0=0.5$ has more to do with the creation of such GHZ states than with thermalization or chaos.} The long-time average of the linear entropy is obtained by averaging over the time $n$ and is given by \begin{eqnarray} \label{eq:ent1} \langle S^{(4)}_{(0,0)}(\kappa_0)\rangle &=&\frac{1}{8}\left( \frac{9+2 \cos^2(\kappa_0/2)}{3+\cos^2(\kappa_0/2)}\right), \kappa_0 \neq 0, 2 \pi. \end{eqnarray} For $\kappa_0=0$ or $2 \pi$ the entanglement vanishes. As soon as $\kappa_0$ becomes non-zero, this long-time averaged linear entropy attains a value of $11/32$, which further increases with $\kappa_0$ and attains a maximum value of $3/8$ at $\kappa_0=\pi$, as shown by the dashed curve in Fig.~(\ref{fig:ana}). Thus, in this case, the long-time averaged linear entropy of the single-qubit reduced state varies within a very small interval of width $1/32$ for $\kappa_0 \in (0,2\pi)$.
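The closed-form average can be cross-checked against a direct time average of the exact stepwise entropy; the sketch below (ours) averages $S=(1-\xi_n^2)/2$ over many even steps for a generic $\kappa_0$:

```python
import numpy as np

# Cross-check of the long-time average: average the exact stepwise entropy
# S = (1 - xi_n^2)/2 over many even n and compare with the closed form
# <S> = (9 + 2 cos^2(k0/2)) / (8 * (3 + cos^2(k0/2))).
k0 = 1.3                                  # a generic (non-resonant) value
chi = np.sin(k0 / 2) / 2
th = np.arccos(chi)                       # T_n = cos(n*th), U_{n-1} = sin(n*th)/sin(th)
n = np.arange(2, 200001, 2)               # even steps only; odd steps repeat them
xi = np.cos(n * th) * np.cos(n * k0 / 8) \
     - 0.5 * (np.sin(n * th) / np.sin(th)) * np.cos(k0 / 2) * np.sin(n * k0 / 8)
S_avg = np.mean(0.5 * (1 - xi**2))
c2 = np.cos(k0 / 2) ** 2
S_closed = (9 + 2 * c2) / (8 * (3 + c2))
assert abs(S_avg - S_closed) < 1e-3
```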
\begin{figure}[h] \includegraphics[width=8cm,keepaspectratio=true]{ana.pdf} \caption{Analytically obtained expressions for the time-averaged linear entropy for the initial states $|0000\rangle$ (Eq.~(\ref{eq:ent1})) and $|++++\rangle_y$ (Eq.~(\ref{eq:ent2})) are plotted for $\kappa_0 \in (0,4\pi)$. Extreme values are presented as horizontal lines, with their respective values ($S^{(4)}_{(\theta,\phi)}(\kappa_0)=\textrm{constant}$) specified on the right side. The solid red curve and the dashed black curve correspond to the initial states $|++++\rangle_y$ and $|0000\rangle$ respectively.} \label{fig:ana} \end{figure} \subsection{Initial state $|\psi_0 \rangle=|++++\rangle_y$} This state lies entirely in the positive parity subspace of the five-dimensional permutation symmetric space of four qubits, and is given by \begin{equation} \otimes^4 |+\rangle_y = \frac{i}{\sqrt{2}}|\phi^+_1\rangle + \frac{1}{\sqrt{8}} |\phi_2^{+}\rangle - \sqrt{\frac{3}{8}}|\phi_3^{+}\rangle, \end{equation} which under the action of $\mathcal{U}^n$ leads to $|\psi_n\rangle=\mathcal{U}^n|++++\rangle_y$, such that (for $n>1$) \begin{widetext} \begin{align} |\psi_n\rangle = \frac{(-1)^{n}}{\sqrt{2}} \left( i |\phi_1^{+} \rangle + e^{i \delta} \left( \alpha_n/2 - i \sqrt{3} \beta_n^{*}/2 \right) |\phi_2^{+} \rangle - e^{i \delta} \left( \sqrt{3} \alpha_n^{*}/2 - i \beta_n/2 \right) |\phi_3^{+} \rangle \right), \end{align} \end{widetext} where $\delta=n(2\pi-\kappa_0)/4$. The reduced density matrix of any one of the four qubits is given by \begin{equation} \rho_1(n, \kappa_0)=\textrm{Tr}_{2,3,4}\left( |\psi_n\rangle\langle \psi_n | \right) = \begin{pmatrix} 1/2 & \xi'_n(\kappa_0)/2 \\ \xi'_n(\kappa_0)^{*}/2 & 1/2 \end{pmatrix}, \end{equation} where \[ \xi'_n(\kappa_0) = - i(T_n(\chi) \cos \delta + U_{n-1}(\chi) \sin \delta \cos(\kappa_0/2)),\] and the linear entropy is given by \[ S^{(4)}_{(\pi/2,-\pi/2)}(n,\kappa_0)= \frac{1}{2}\left( 1 - |\xi'_n(\kappa_0)|^2 \right).
\] Figure~(\ref{fig:entropy4qubit2}) shows the evolution of this entanglement entropy for a few representative values of $\kappa_0$. \begin{figure} \centering \includegraphics[scale=1]{./ent4q2.pdf} \caption{Linear entropy of a single-qubit reduced state versus $n$ is plotted at different values of $\kappa_0$, shown in parts (a), (b), (c), and (d), corresponding to the four qubit initial state $|++++\rangle_y$.} \label{fig:entropy4qubit2} \end{figure} A closed-form expression for the long-time averaged linear entropy is then obtained as in the other case, resulting in (for $\kappa_0 \neq 0, 2 \pi$) \begin{equation} \label{eq:ent2} \langle S^{(4)}_{(\frac{\pi}{2},\pm\frac{\pi}{2})} \rangle =\frac{1}{8}\left(\frac{9-\cos^2(\kappa_0/2)}{3+\cos^2(\kappa_0/2)}\right). \end{equation} As soon as $\kappa_0$ becomes non-zero, this long-time averaged linear entropy attains its minimum value of $1/4$; it then increases with $\kappa_0$ and attains a maximum value of $3/8$ at $\kappa_0=\pi$, as shown by the solid red curve in Fig.~(\ref{fig:ana}). In this case, the long-time averaged linear entropy of the single-qubit reduced state varies within a relatively larger interval of width $1/8$ for $\kappa_0 \in (0,4\pi)$. \par In both of these cases the time-averaged linear entropy of the single-qubit reduced state reaches its maximum value of $3/8$ at $\kappa_0=\pi$ and, remarkably, this matches the average over the ensemble of random permutation symmetric states of 4 qubits, $S_{RMT}(4)$ \cite{SheshadriPRE2018}, just as in the 3-qubit case. In addition we see that the average for the states at $(\pi/2, \pm \pi/2)$ attains the value $1/4$ for arbitrarily small $\kappa_0$, in contrast to the 3-qubit case, where it vanishes, as in Eq.~(\ref{eq:avgpiby2_2}).
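The stated extreme values of the two four-qubit averages follow directly from the closed forms; a short evaluation (ours) over a fine grid confirms them:

```python
import numpy as np

# Evaluate the two closed-form long-time averages for four qubits:
# |0000>  : <S> = (9 + 2 c^2) / (8 (3 + c^2)),  c = cos(kappa_0/2),
# |++++>_y: <S> = (9 -   c^2) / (8 (3 + c^2)),
# and confirm the stated extreme values and interval widths.
k0 = np.linspace(1e-6, 2 * np.pi - 1e-6, 100001)
c2 = np.cos(k0 / 2) ** 2
S_0000 = (9 + 2 * c2) / (8 * (3 + c2))
S_pppp = (9 - c2) / (8 * (3 + c2))
assert np.isclose(S_0000.max(), 3 / 8) and np.isclose(S_pppp.max(), 3 / 8)  # both at k0 = pi
assert np.isclose(S_0000.max() - S_0000.min(), 1 / 32)  # |0000>: interval of width 1/32
assert np.isclose(S_pppp.max() - S_pppp.min(), 1 / 8)   # |++++>_y: interval of width 1/8
assert np.isclose(3 / 8, (4 - 1) / (2 * 4))             # maximum equals S_RMT(4)
```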
In fact the non-zero average is seen in numerical calculations to be attained only on averaging over extremely long times for small $\kappa_0$, as reflected in Fig.~(\ref{fig:savg4q}), where different curves correspond to time averages over different times (as labelled in terms of $n$ in the inset). For small values of $\kappa_0$, i.e., $\kappa_0=2p\pi\pm \Delta \kappa_0$ ($p$ being an integer), the time-averaged linear entropy computed over different finite times does not coincide, approaching the infinite-time average, consistent with Eq.~(\ref{eq:ent2}) and Fig.~(\ref{fig:ana}), only as $n\rightarrow \infty$. This slow thermalization, specifically for the state $|\pi/2,\pm\pi/2\rangle$, is attributed to the process of dynamical tunneling, to which we now turn. \begin{figure}[h] \includegraphics{savg4q.pdf} \caption{Simulated time-averaged linear entropy ($\langle S^{(4)}_{(\frac{\pi}{2},\pm \frac{\pi}{2})}(\kappa_0)\rangle$) for the initial state $|\pi/2,\pm\pi/2\rangle$, plotted versus $\kappa_0$, is shown for different values of $n$ (as given in the inset) in the interval $\kappa_0 \in [0,4\pi]$. The inset shows a blown-up horizontal scale for $\kappa_0 \in [0,\Delta \kappa_0]$, where $\Delta \kappa_0=2\pi/5$, clearly showing the curves approaching the solid curve of Fig.~\ref{fig:ana}.} \label{fig:savg4q} \end{figure} \subsection{Dynamical tunneling} This very slow process is due to tunneling between $\otimes^4|+\rangle_y$ and $\otimes^4|-\rangle_y$. At $\kappa_0=0$, two positive parity eigenvectors of $\mathcal{U}$, $|\phi_1^+\rangle$ and $|\phi_{23}^+\rangle=\frac{1}{2} |\phi_2^{+}\rangle - \frac{\sqrt{3}}{2}|\phi_3^{+}\rangle$, are degenerate with eigenvalue $-1$.
These can also be written as 4-qubit GHZ states \cite{GHZ0,GHZ}: \begin{equation} i |\phi_1^+\rangle=\left(\otimes^4|+\rangle_y -\otimes^4|-\rangle_y \right)/\sqrt{2}, \end{equation} the unchanging eigenstate, and \begin{equation} |\phi_{23}^+\rangle = \left(\otimes^4|+\rangle_y +\otimes^4|-\rangle_y \right)/\sqrt{2}. \end{equation} Thus \begin{equation} \mathcal{U}^n\otimes^4 |+\rangle _y = (-1)^{n}\frac{i}{\sqrt{2}}|\phi_1^+\rangle + \mathcal{U}_+^n \frac{1}{\sqrt{2}}|\phi_{23}^+\rangle. \label{eq:tunnelevolve} \end{equation} The eigenvalue of $\mathcal{U}_+$ that is $-1$ at $\kappa_0=0$ is $e^{i \gamma_{-}}$ with \begin{equation} \gamma_{-}= \frac{\kappa_0}{4}+\pi -\sin^{-1}\left(\frac{1}{2}\sin \frac{\kappa_0}{2} \right) \approx \pi +\frac{\kappa_0^3}{128}. \end{equation} This implies that for $\kappa_0 \ll 1$, the corresponding state and $|\phi_1^+\rangle$ are nearly degenerate. The splitting leads to a change in the relative phase of their contributions in Eq.~(\ref{eq:tunnelevolve}), and at time $n_* \approx 128 \pi/\kappa_0^3$ the evolved state is close to $\otimes^4|-\rangle_y$, leading to tunneling, as shown in Fig.~(\ref{fig:tunnel}), between what in the classical limit are two stable islands. At time $n=n_*/2$ the state obtained is close to the GHZ state $(\otimes^4 |+\rangle_y -i \otimes^4 |-\rangle_y)/\sqrt{2}$. This tunneling is observed whenever $\otimes^{2j}|\pm\rangle_y$ are degenerate eigenstates of the rotation part of the Floquet operator $\mathcal{U}$. This implies that the number of qubits should be an integer multiple of $2\pi/p$, where $p$ is the rotation angle (we have used $p=\pi/2$, and hence the tunneling occurs when the number of qubits is a multiple of 4).
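The size of the quasi-degenerate splitting, and hence the tunneling time scale, can be checked directly from the exact phase; the sketch below (ours) compares $|\gamma_- - \pi|$ with the small-$\kappa_0$ estimate $\kappa_0^3/128$:

```python
import numpy as np

# Check the quasi-degeneracy driving the tunneling: the exact phase
# gamma_- = kappa_0/4 + pi - arcsin(sin(kappa_0/2)/2) differs from pi
# by ~ kappa_0^3/128 for small kappa_0, giving n_* ~ 128*pi/kappa_0^3.
k0 = 0.1
gamma_minus = k0 / 4 + np.pi - np.arcsin(np.sin(k0 / 2) / 2)
splitting = abs(gamma_minus - np.pi)
assert abs(splitting / (k0**3 / 128) - 1) < 1e-2
n_star = np.pi / splitting
# n_star is within a fraction of a percent of 128*pi/kappa_0^3
assert abs(n_star - 128 * np.pi / k0**3) / (128 * np.pi / k0**3) < 1e-2
```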
\begin{figure}[h] \includegraphics{fig_tunneling4.pdf} \caption{Husimi (quasi-probability distribution) plots for the four-qubit initial state $\otimes^4|+\rangle$, evolving under $n$ implementations of $\mathcal{U}$ and tunneling to the state $\otimes^4|-\rangle$ at time $n_* \approx 128 \pi/\kappa_0^3 \approx 402124$ ($\kappa_0=0.1$). } \label{fig:tunnel} \end{figure} \begin{figure}[h] \includegraphics{savg.pdf} \caption{Normalized average single-qubit entanglement when the initial state is $\otimes^{2j}|+\rangle_y$ for increasing numbers of qubits (except multiples of $4$, for which there is tunneling when $p = \pi/2$).} \label{fig:avgentlargerj} \end{figure} For larger numbers of qubits, the average single-qubit entropy, normalized by the random-state average, is computed numerically for the initial state $\otimes^{2j}|+\rangle_y$ and shown in Fig.~(\ref{fig:avgentlargerj}). The numbers of qubits used for the cases shown in this figure are not multiples of $4$, and hence the long-time average vanishes at $\kappa_0=0$; that is, there is no tunneling. The trend is in keeping with a more complex classical phase space, the random-state average being approached as the phase space becomes fully chaotic. The initial state being centered on a fixed point, increasing the number of qubits leads to a sharp growth beyond $\kappa_0=2$, when the fixed point becomes unstable; a more detailed study of this is found in \cite{Bhosale-pre-2017}, albeit without the connection to tunneling. Interestingly, even for the 3-qubit case, for which we have the analytical evaluation in Eq.~(\ref{eq:avgpiby2_2}), a similar but smoother trend is displayed, reaching the random-state value at $\kappa_0=3\pi/2$.
\begin{figure}[h] \begin{center} \includegraphics{./avg_j00.pdf} \end{center} \caption{Normalized average single-qubit entanglement when the initial state is $\otimes^{2j}|0\rangle$ for increasing numbers of qubits.} \label{fig:savgall0largej} \end{figure} The complementary state $\otimes_{k=1}^{2j}|0\rangle$ has a nonzero average as $\kappa_0$ approaches $0$, both for the 3- and the 4-qubit cases. We have already discussed the origin of this in some detail for the 3-qubit case. For a very large number of qubits we expect that, classically, the tunneling effect vanishes. This is borne out in Fig.~(\ref{fig:savgall0largej}), although, surprisingly, even for very large numbers of qubits the formation of nonclassical states resulting in a large average single-qubit entanglement is seen for a range of $\kappa_0$ values close to $0$. The subsequent increase of entanglement for larger values of $\kappa_0$ is due to the destabilization of the period-4 orbit at $\kappa_0=\pi$. A more detailed analysis is called for, including the study of entanglement between large blocks of spins, which would distinguish between the non-classical states produced when the system is near-integrable and the random states produced at much larger values of the parameter, when the classical phase space is mixed or chaotic. A recent analysis in \cite{kumari2018untangling} uses an upper bound on the entanglement entropy, obtained from the Fannes-Audenaert inequality, to argue for connections between entanglement and chaos, and to explain why states localized on the stable period-4 orbits can have large entanglement in the deep quantum regime. \section{Conclusions} The quest for an exactly solvable model is hard and often a matter of serendipity. In our work, we give exact solutions for the 3- and 4-qubit instances of the kicked top and explicitly derive expressions for the time-evolved state, the reduced density matrix, the entanglement entropy and its long-time averages.
Our work provides interesting connections between a quantum system with few degrees of freedom and its classical limit, which is non-integrable and can exhibit chaos for large $\kappa_0$ values. For example, we find that the exactly solvable 3- and 4-qubit instances of the kicked top provide insights into how entropy and entanglement thermalize in closed quantum systems, in the sense of long-time averages approaching ensemble averages, as the classical limit approaches global chaos, as predicted by random matrix theory. Since we derive exact analytical results valid for all values of $\kappa_0$, these will be further useful to study the transition to thermalisation in closed quantum systems. Experiments have already probed the 3-qubit case, and it is worth mentioning that, in the light of our work, it should now be viewed as a study of thermalisation in an integrable system rather than thermalisation induced by a lack of a sufficient number of conserved quantities \cite{Neill16}. It will be interesting to see at what spin size the exact solvability of these models breaks down, and whether or not that has a physical interpretation. Even more remarkable is the entanglement dynamics at small values of the chaoticity parameter. This cannot be directly attributed to non-integrability. For example, even for small $\kappa_0$, in the case of the three-qubit $|0,0\rangle$ state, we find an increase of entanglement with time, which can be attributed to the generation of highly non-classical GHZ-type states. We accurately predict the time scales for such entanglement dynamics and find excellent agreement with the numerics. Likewise, the $|\pi/2, -\pi/2\rangle$ state in the 4-qubit case displays, for the same rotation angle, tunneling and the creation of GHZ states, and we have described this in detail as well. In the near-integrable regime we exactly calculate the tunneling splitting and show it to be in agreement with the numerics.
To the best of our knowledge, ours is the first work to find a connection between the tunneling splitting, the number of qubits and a system parameter. It is worth mentioning that entanglement generation occurs despite the initial state being localised on a stable island, with the phase space having almost no chaos. We believe our findings and analysis of entanglement generation at low values of $\kappa_0$ will contribute to the understanding of entanglement generation in dynamical systems and its connections to classical bifurcations, the emergence of structures, and ergodicity in the phase space. This also complements our findings for higher values of $\kappa_0$, as well as the existing literature on the connections between entanglement generation and chaos. Lastly, larger numbers of qubits can show genuine signatures of non-integrability and chaos, and tunneling leads to the creation of macroscopic superpositions that are generalized GHZ states. We hope our work raises new questions and adds to the discussion on the connections between integrability, quantum chaos, and thermalization. Since the multi-qubit kicked top can be viewed as an analog quantum simulator, the robustness of such a system to errors \cite{hauke2012can, Shepelyansky_2001}, especially in the regime where we generate highly non-classical GHZ-like states and explore truly quantum phenomena like tunneling, will be of interest to the quantum information community. As an aside, we are able to give an alternate proof of the Pell identity satisfied by the Chebyshev polynomials! \begin{acknowledgments} We are grateful to the authors of \cite{Neill16} for generously sharing their experimental data, in particular to Pedram Roushan and Charles Neill for useful correspondence regarding the same. \end{acknowledgments}
\section*{Introduction} This paper gives a type theoretic representation theorem for continuous functions between a wide class of spaces of infinite values. By infinite, we understand a value having a coinductive type. Continuity means that finite information about a result of the function requires only finite information about its argument. Because of that, it is a necessary condition for computability. The simplest coinductive space is the Cantor space of infinite boolean sequences. It corresponds to the coinductive type~$\nu_X(X + X)$ and its topology is well known. Programs that implement continuous functions from the Cantor space to a discrete space~$D$ can be represented by finite binary trees in which leaves are labelled by values in~$D$. We extend this to a class of functors going beyond~$X \mapsto X + X$ on the category~$\Set$ by considering so-called (finitary) polynomial functors on~$\Set^I$ for some index set~$I$. The final coalgebra~$\nu P$ of such a functor~$P$ always exists and may be constructed as the inverse limit of~${\bf 1} \leftarrow P({\bf 1}) \leftarrow P^2({\bf 1}) \leftarrow \cdots$. Those final coalgebras have a natural topology, and when the functor~$P$ is \emph{finitary} (commutes with filtered colimits), the topology enjoys a close connection with the intuitive notion of ``finite amount of information'' about potentially infinite values. However, representing such topologies inside a formalized system such as dependent type theory is far from trivial because their definition relies heavily on delicate topics like equality. Our main result pertains to the question of how continuous functions between these natural classes of spaces can be represented in dependent type theory. It turns out that any such ``implementation'' can itself be put into the form of a potentially infinite data-structure, inhabiting a final coalgebra for an appropriate functor, albeit one which is in most cases no longer finitary. This settles a conjecture of P.
Hancock about the representability of continuous functions between ``dependent streams''~\cite{hancock09:_repres_of_stream_proces_using}. We obtain this result via a more general construction, without any cardinality restrictions on the initial functors. One can still topologise the final coalgebras, though the topology that arises from the inverse chain construction no longer enjoys much connection with any intuition of finite information, and there are (classically) continuous functions that cannot be implemented by programs. \section{Preliminaries I. Streams and Trees in Point Set Topology} \subsection{Streams} Given a set~$X$ endowed with the discrete topology, the set of \emph{streams} over~$X$, written~$\Stream{X}$, is defined as the infinite product~$\prod_{i\geq0} X$. The product topology is generated by the basic open sets~$\prod_{i\geq0} U_i$ where finitely many~$U_i$s are of the form~$\{x_i\}$ for some~$x_i \in X$ and the other~$U_i$s are equal to~$X$. This topological space is usually called the Cantor space (when~$X$ is finite) or the Baire space (when~$X$ is countably infinite). Continuity for functions between streams amounts to the following: \begin{lem}\label{lem:continuous_stream} A function~$f : \Stream{X} \to \Stream{Y}$ is continuous if and only if, for each stream~$s$ in~$\Stream{X}$, each projection~$f(s)_k$ of~$f(s)$ depends on at most a finite prefix of~$s$. \end{lem} Writing~$s_{\upharpoonright n}$ for the restriction of stream~$s$ to its finite prefix of length~$n$, the condition is equivalent to \begin{equation}\tag{\ensuremath{\ast}}\label{eqn:prefix_condition} \forall s\in \Stream{X}, \forall k\geq0, \exists n\geq0, \forall t\in \Stream{X}, s_{\upharpoonright n} = t_{\upharpoonright n} \implies f(s)_k = f(t)_k \,.
\end{equation} Before proving lemma~\ref{lem:continuous_stream}, let us first establish a preliminary result: \begin{lem}\label{lem:stream_openset} For any subset~$V\subseteq \Stream{X}$, we have:~$V$ is open iff \[ \forall s\in V, \exists n\geq 0, \forall t \in \Stream{X}, s_{\upharpoonright n} = t_{\upharpoonright n} \implies t \in V \,. \] \end{lem} \begin{proof} The $\implies$ direction is immediate: an open set is a union of basic open sets, which satisfy the condition. (Recall that a basic open set is of the form~$\prod_{i\geq0} U_i$, where each~$U_i$ is~$X$, except for finitely many that are singleton sets.) For the~$\Leftarrow$ direction, we define, for each~$s\in V$, the set~$V_s = \{t \mid s_{\upharpoonright n_s}=t_{\upharpoonright n_s}\}$, where~$n_s$ is the integer given by the condition. We have~$V = \bigcup_{s\in V} V_s$. \end{proof} \begin{proof}[Proof of lemma~\ref{lem:continuous_stream}] Suppose the function~$f : \Stream{X} \to \Stream{Y}$ satisfies condition~$(\ref{eqn:prefix_condition})$. To show that~$f$ is continuous, it is enough to show that the inverse image of any basic open set is an open set. Because the inverse image commutes with intersections, it is sufficient to look at the sub-basic open sets of the form~$V_{k,y}=\{s \mid s_k = y\}$. To show that~$f^{-1}(V_{k,y})$ is open, we use lemma~\ref{lem:stream_openset} and show that \[ \forall s \in f^{-1}(V_{k,y}), \exists n\geq 0, \forall t \in \Stream{X}, s_{\upharpoonright n} = t_{\upharpoonright n} \implies t \in f^{-1}(V_{k,y}) \,. \] Because~$s\in f^{-1}(V_{k,y})$ means~$f(s)_k = y$, this is implied by condition~$(\ref{eqn:prefix_condition})$. For the converse, suppose~$f : \Stream{X} \to \Stream{Y}$ is continuous. We want to show that it satisfies condition~$(\ref{eqn:prefix_condition})$. Let~$s\in \Stream{X}$ and~$k\geq0$. The set~$\{u \in \Stream{Y} \mid u_k = f(s)_k\}$ is open and, because~$f$ is continuous, its inverse image~$\{t \mid f(t)_k = f(s)_k\}$ is also open.
By lemma~\ref{lem:stream_openset}, we know that there is some~$n$ such that $s_{\upharpoonright n} = t_{\upharpoonright n} \implies f(t)_k = f(s)_k$. This finishes the proof. \end{proof} Because of this, \emph{constructive} functions between streams are usually held to be continuous: we expect them to arise as continuous functions with the additional properties that: \begin{itemize} \item finding the finite prefix needed to compute a chosen element of~$f(s)$ is computable, and \item finding the value of the element of~$f(s)$ from that finite prefix is computable. \end{itemize} Note that the discrete space~$X$ may be generalized to a family~$(X_i)_{i\geq0}$ that need not be constant. More interestingly, we can allow the set~$X_i$ (giving the set of possible values for the~$i$th element of a stream) to depend on the~$i$th prefix of the stream. We can in this way obtain for example the space of increasing streams of natural numbers: \begin{itemize} \item the set~$X_0$ doesn't depend on anything and is defined as~$\ensuremath{\mathbb{N}}$, \item the set~$X_1$ depends on the value of~$x_0\in X_0$: $X_{1,x_0} = \{ k\in \ensuremath{\mathbb{N}} , x_0 \le k \}$, \item the set~$X_2$ depends on~$x_1\in X_{1,x_0}$, etc. \end{itemize} The set of increasing streams isn't naturally a product space but is a subspace of~$\Stream{\ensuremath{\mathbb{N}}}$. The topology is the expected one, and continuous functions are still characterized by lemma~\ref{lem:continuous_stream}. \subsection{Infinite Trees, Natural Topology} \label{sub:natural} The natural topology for sets of \emph{infinite trees} is less well known than the Cantor and Baire topologies. The simplest kind of infinite tree, the infinite \emph{binary tree}, has a root and two distinguished ``branches'' going from that root to two ``nodes''. Each of these two nodes also has two branches, etc. An infinite binary tree \emph{over}~$X$ is a way to label each node of the infinite binary tree with an element of~$X$.
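This labelling view can be made concrete with a small executable sketch (ours, not part of the paper's formal development, and in Python rather than the type theory used later): identify a node with the finite path of branch choices leading to it from the root, and a tree over $X$ with a labelling function on paths.

```python
# Hypothetical sketch: a node of the infinite binary tree is a finite path
# of branch choices (0 = left, 1 = right), and an infinite binary tree over
# X is a total labelling function from paths to X.

def depth_tree(path):
    """The tree labelling every node by its depth."""
    return len(path)

def ones_tree(path):
    """The tree labelling every node by the number of 1s on its path."""
    return sum(path)

root = ()
print(depth_tree(root))        # 0: label at the root
print(depth_tree((0, 1, 1)))   # 3
print(ones_tree((0, 1, 1)))    # 2
```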
If we write~$\ensuremath{\mathbb{B}}$ for the set~$\{0,1\}$, each node of the infinite binary tree can be identified by a list of elements of~$\ensuremath{\mathbb{B}}$: this list simply represents the branch leading to this node from the root. The set of infinite binary trees over~$X$, written~$\Tree{\ensuremath{\mathbb{B}}}{X}$, can thus be defined as \[ \Tree{\ensuremath{\mathbb{B}}}{X} = X \times (\ensuremath{\mathbb{B}}\to X) \times (\ensuremath{\mathbb{B}}^2\to X) \times \dots \times (\ensuremath{\mathbb{B}}^i\to X) \times \dots \] where each term gives the~$i$th ``layer'' of the tree as a function from finite branches of length~$i$ to~$X$. We can rewrite this as \[ \Tree{\ensuremath{\mathbb{B}}}{X} \quad=\quad \prod_{i\geq0} \big(\ensuremath{\mathbb{B}}^i \to X\big) \quad=\quad \prod_{i\geq0} \left(\prod_{t\in \ensuremath{\mathbb{B}}^i} X\right) \,. \] By replacing the set~$\ensuremath{\mathbb{B}}$ by some other set~$B$, we obtain ternary trees over~$X$, countably-branching trees over~$X$, etc. Streams themselves are recovered by taking~$B = \{\star\}$. If both~$B$ and~$X$ are endowed with the discrete topology, we obtain a natural topology on~$\Tree{B}{X}$. Note that when~$B$ is infinite, the spaces~$B^i \to X$ aren't discrete anymore. Nevertheless, we have: \begin{lem}\label{lem:continuous_trees} Let~$A$, $B$, $X$ and~$Y$ be discrete spaces; a function~$f : \Tree{A}{X} \to \Tree{B}{Y}$ is continuous iff for every~$t\in \Tree{A}{X}$, the value at each node of~$f(t)$ only depends on a finite subtree\footnote{A \emph{subtree} is a set of nodes that contains the root of the tree and is closed under the ancestor relation.} of~$t$. \end{lem} \begin{proof} The proof of this lemma is exactly the same as the proof of lemma~\ref{lem:continuous_stream}, except that we replace the natural number~$n$ in~$t_{\upharpoonright n}$ (for some~$t\in \Tree{B}{Y}$) by a \emph{finite subtree}.
The only remark is that basic open sets of~$\Tree{B}{Y}$ are of the form $\prod_{i\geq0} \prod_{t\in B^i} Y_{i,t}$ where all sets~$Y_{i,t}$ are equal to~$Y$, except for finitely many that are singletons of the form~$\{y_{i,t}\}$ for some~$y_{i,t} \in Y$. \end{proof} It is again possible to devise more general notions of trees by allowing the set~$X$ at a node in the tree to depend on the values along the path from the root to that node. The resulting space is endowed with the subspace topology and lemma~\ref{lem:continuous_trees} still holds. We will later generalize this notion further by allowing the branching of a node (given by the set~$B$) to itself depend on the value stored at the node. With this generalisation we can model very general objects, such as infinite automata that issue commands and change to a new state (choose a branch) based on the responses. \subsection{Infinite Trees, Wild Topology} \label{sub:wild} The topology we naturally get in this paper corresponds to a different topology on trees. When looking at \[ \Tree{B}{X} \quad=\quad \prod_{i\geq0} \left(\prod_{t\in B^i} X\right) \,, \] we can endow the inner product space with the ``box'' topology, where basic opens are given by arbitrary products of open sets. Because~$B$ and~$X$ are discrete sets, this amounts to giving the discrete topology to each layer~$B^i \to X$. Instead of being generated by ``finite subtrees'', open sets are generated by ``subtrees with finite depth''. \begin{lem}\label{lem:continuous_trees_wild} Let~$A$, $B$, $X$ and~$Y$ be discrete spaces; a function~$f : \Tree{A}{X} \to \Tree{B}{Y}$ is continuous \emph{for the wild topology} iff for every~$t\in \Tree{A}{X}$ and~$k\in\ensuremath{\mathbb{N}}$, there is an~$n\in\ensuremath{\mathbb{N}}$ such that the nodes of~$f(t)$ at depth less than~$k$ depend only on the nodes of~$t$ at depth less than~$n$.
\end{lem} Formally, this looks very similar to condition~(\ref{eqn:prefix_condition}) on page~\pageref{eqn:prefix_condition}: \begin{equation}\tag{\ensuremath{\ast\ast}} \forall t\in \Tree{A}{X}, \forall k\geq0, \exists n\geq0, \forall t'\in \Tree{A}{X}, t_{\upharpoonright n} = t'_{\upharpoonright n} \implies f(t)_{\upharpoonright k} = f(t')_{\upharpoonright k} \,, \end{equation} where~$t_{\upharpoonright n}$ is the complete subtree of~$t$ up to depth~$n$. Intuitively, this topology considers infinite trees as streams of their layers, where layers are discrete. \medbreak When~$A$ and~$B$ are finite, the two notions of continuity (lemmas~\ref{lem:continuous_trees} and~\ref{lem:continuous_trees_wild}) coincide, but when this is not the case, neither notion implies the other. \begin{itemize} \item\label{ex:natural_not_wild} Consider~$f:\Stream{\ensuremath{\mathbb{N}}} \to \Tree{\ensuremath{\mathbb{N}}}{\ensuremath{\mathbb{N}}}$ sending the stream~$s$ to the tree~$f(s)$ where the node indexed by~$(i_0,\dots,i_n)$ is~$s_{i_0}+s_{i_1}+\cdots+s_{i_n}$. This is certainly continuous for the natural topology. However, because the first layer of the output is infinite, we cannot bound the number of layers (elements) of the input stream~$s$ that are needed to construct it: this function isn't continuous for the wild topology. \item Consider~$g:\Tree{\ensuremath{\mathbb{N}}}{\ensuremath{\mathbb{B}}} \to \Stream{\ensuremath{\mathbb{B}}}$ where the~$i$th element of~$g(t)$ is the maximum of the complete~$i$th layer of~$t$ ($\ensuremath{\mathbb{B}}$ being the complete lattice with two elements). This function is continuous in the wild sense, but because we need to know the whole~$i$th layer of the input to get the~$i$th value of the output, this function isn't continuous for the natural topology (and certainly not computable). \end{itemize} \section{Preliminaries II.
Martin-L\"of Type Theory} \subsection{Basic Features} We work in a meta theory that is in essence Martin-L\"o{}f's dependent type theory~\cite{ML73} with two additional features: \begin{itemize} \item coinductive types, \item inductive-recursive definitions. \end{itemize} All the constructions described in the paper have been defined using the dependently typed functional programming language Agda~\cite{Agda}. \subsubsection*{A Note about Equality} This paper is concerned with constructions in ``pure'' type theory, i.e.\ dependent type theory without identity types. Those constructions enjoy many interesting properties, but proving them requires some notion of equality. Equality in Martin-L\"of type theory is a complex subject about which whole books have been written~\cite{HOTTbook}. We try to be mostly agnostic about the flavor of equality we are using and only rely on the ``simplest'' one: intensional equality, written~$a \equiv_T b$, or simply~$a \equiv b$. This makes it possible to use vanilla Agda for checking proofs.\footnote{All the Agda code was checked using \t{Agda 2.5.4.2} with the flag \t{--without-K}. The Agda code is available at~\url{http://www.lama.univ-smb.fr/~hyvernat/Files/Infinite/agda.tgz}, with the file \t{PAPER.agda} referencing all the formalized results from the paper. For those without a working Agda installation, the code is also browsable directly from~\url{http://www.lama.univ-smb.fr/~hyvernat/Files/Infinite/browse/PAPER.html}. } We annotate the proofs in the paper with \begin{itemize} \item \checked: for those that have been formalized using Agda, \item \partlychecked: for those that have only partly been formalized, typically because the proof is too complex to write in Agda. \end{itemize} Because we want to explain Agda code only slightly less than you want to read about Agda code, the formalized proofs are either omitted from the paper, or explained informally.
We sometimes need to assume that equality for functions is extensional, i.e.\ that~$f \equiv g$ iff~$f\,a \equiv_B g\,a$ for all~$a: A$. Those proofs are clearly identified. \subsubsection*{Notation} The notation for dependent types is standard. Here is a summary: \begin{itemize} \item We write~$\Gamma \vdash B$ to mean that~$B$ is a well-formed type in context~$\Gamma$. We write $A = B$ to express that $A$ and $B$ are the same by definition. \item We write~$\Gamma \vdash b:B$ to mean that~$b$ is an element of type~$B$ in context~$\Gamma$. The context is often left implicit and we usually write~$b:B$. We write~$a = b : B$ to express that~$a$ and~$b$ are definitionally equal in type~$B$. When the type $B$ can easily be deduced from the context, we will usually write just~$a = b$. \item If $B$ is a type and~$C$ is a type depending on~$x : B$, i.e.~$x:B\vdash C$, we write \begin{itemize} \item $\SI{x:B}C$ for the dependent sum. Its canonical elements are pairs~$⟨ b, c⟩$ with $b : B$ and $c : C[x/b]$, \item $\PI{x:B}C$ for the dependent product. Its canonical elements are functions~$\LAM{x:B}u$ where $x:B \vdash u : C$. \end{itemize} When the type~$C$ is constant, we abbreviate those by~$B \times C$ and~$B \to C$. \item The usual amenities are present: the natural numbers, W-types and so on. \end{itemize} We use a universe~$\Set$ of ``small types'' containing~${\bf 0}$ (with no element),~${\bf 1}$ (with a single element~$\star$) and~${\bf 2}$ (with two elements~$0$ and~$1$). Moreover, dependent sums and products are reflected in this universe, and we use the same notation~$\SI{b:B}C$ and~$\PI{b:B}C$ whenever~$B:\Set$ and~$b:B \vdash C:\Set$. We assume that this universe is closed under many inductive-recursive and coinductive definitions, which will be treated below.
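To fix intuitions about canonical elements, here is a hypothetical Python rendering (ours, not part of the formal development), where the length of a tuple stands in for the type dependency in $\SI{n:\ensuremath{\mathbb{N}}}\mathsf{Vec}(n)$ and $\PI{n:\ensuremath{\mathbb{N}}}\mathsf{Vec}(n)$:

```python
# Illustration only: in untyped Python, an element of a dependent sum
# Sigma n:N. Vec(n) is a pair whose second component's "shape" (length)
# depends on the first; an element of a dependent product Pi n:N. Vec(n)
# is a function whose result shape depends on its argument.

def make_vec_pair(n):
    # a canonical element <n, c> with c : Vec(n)
    return (n, tuple(range(n)))

def ones(n):
    # an element of Pi n:N. Vec(n): a length-n tuple for every n
    return tuple(1 for _ in range(n))

n, v = make_vec_pair(3)
print(n, v)          # 3 (0, 1, 2): the second component lives in Vec(3)
print(ones(5))       # (1, 1, 1, 1, 1): the result shape tracks the argument
```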
\smallbreak We are not always consistent with notation for application of functions and usually write \begin{itemize} \item $f\,x$ when the result is an element of a small type (an element of~\Set), \item $A(i)$ when the result is itself a small type (and thus, not an element of~\Set). \end{itemize} \subsubsection*{Predicates and Families} The Curry-Howard isomorphism makes the type~$\Set$ into a universe of propositions. \begin{defi} If~$A:\Set$, the collection of \emph{predicates} on~$A$ is defined as \[ \ensuremath{\mathsf{Pow}}(A) \quad = \quad A \to \Set \,. \] We introduce the following notations~\cite{sambin93:_toolbox}: if~$X$ and~$Y$ are predicates on~$A$, \begin{itemize} \item ``$a \IN X$'' is~``$X(a)$'', \item ``$X \sub Y$'' is~``$\PI{a:A} (a\IN X) \to (a\IN Y)$'', \item ``$X \meets Y$'' is~``$\SI{a:A} (a\IN X) \times (a\IN Y)$'', \item ``$X \cap Y$'' is~``$\LAM{a:A}(a \IN X)\times(a\IN Y)$'', \item ``$X \cup Y$'' is~``$\LAM{a:A}(a \IN X)+(a\IN Y)$''. \end{itemize} \end{defi} The intuition is that a predicate on~$A$ is just a subset of~$A$ in some constructive and predicative\footnote{if~$A:\Set$, $\ensuremath{\mathsf{Pow}}(A)$ isn't of type~$\Set$} set theory. We will also need the following definition. \begin{defi} A \emph{family} of elements of~$C$ is given by a set~$I$ together with a function from~$I$ to~$C$. In other words, \[ \ensuremath{\mathsf{Fam}}(C) \quad = \quad \SI{I:\Set} I \to C \,. \] \end{defi} \subsection{Inductive-recursive Definitions} Inductive definitions are a way to define (weak) initial algebras for endofunctors on~$\Set$. A typical example is defining~$\mathtt{list}(X)$ as the least fixed point of~$Y\mapsto {\bf 1} + X \times Y$.
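As an executable illustration (a Python sketch of ours, not part of the formal development), the least fixed point of $Y\mapsto {\bf 1} + X \times Y$ can be rendered with tagged values, together with its fold, the canonical map out of the initial algebra:

```python
# Sketch: list(X) as the least fixed point of  Y |-> 1 + X x Y.
# The '1' summand is the empty list, the 'X x Y' summand a cons cell;
# fold is defined by structural recursion over the fixed point.

NIL = ("nil",)                            # the '1' summand
def cons(x, xs): return ("cons", x, xs)   # the 'X x Y' summand

def fold(alg_nil, alg_cons, xs):
    """The unique algebra morphism out of the least fixed point."""
    if xs[0] == "nil":
        return alg_nil
    _, x, rest = xs
    return alg_cons(x, fold(alg_nil, alg_cons, rest))

xs = cons(1, cons(2, cons(3, NIL)))
print(fold(0, lambda x, r: x + r, xs))   # 6: the sum of the list
print(fold(0, lambda x, r: 1 + r, xs))   # 3: the length of the list
```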
In their simplest form, inductive-recursive definitions~\cite{dybjer:JSL2000} give a way to define (weak) initial algebras for endofunctors on~$\ensuremath{\mathsf{Fam}}(C)$.\footnote{To ensure the definition is healthy, the endofunctor has to be expressible according to a certain coding scheme~\cite{dybjersetzer:APAL2003}.} It means we can define, at the same time: \begin{itemize} \item an inductive set~$U$, called the \emph{index set}, \item and a recursive function~$f : U \to C$. \end{itemize} Of course, the set~$U$ and the function~$f$ may be mutually dependent. \smallbreak The type~$C$ may be large or small. Taking~$C={\bf 1}$, we recover usual inductive types as the index part of such a definition, the recursive function~$f$ always being trivial. In non-degenerate cases, the inductive clauses defining~$U$ are expressed using the simultaneously defined function~$f$. A traditional example, with~$C=\Set$, is that of complete binary trees defined one layer at a time. Here is the definition in Agda syntax: \begin{allttt}\label{ex:TreeIR} mutual data Tree : Set where Empty : Tree AddLayer : (t : Tree) → (branches t \(\times\) Bool → Nat) → Tree branches : Tree → Set branches Empty = One branches (AddLayer t l) = (branches t) \(\times\) Bool \end{allttt} While~\t{Empty} corresponds to the empty tree, the definitions \begin{allttt} T₁ = AddLayer Empty (λ b → if b.2 then 1 else 2) T₂ = AddLayer T₁ (λ b → if b.1.2 \&\& b.2 then 3 elif b.1.2 \&\& not b.2 then 4 elif not b.1.2 \&\& b.2 then 5 else 6) \end{allttt} correspond to the trees \[ \t{T}_1 \quad = \ % \begin{tikzpicture}[ level distance=1cm, level 1/.style={sibling distance=1cm}, tree node/.style={draw=none}, every child node/.style={tree node}, baseline={([yshift=-1ex] current bounding box.center)}, ] \t{Node}[tree node] (Root) {.} child {node {.} edge from parent node[over] {\tiny1}} child {node {.} edge from parent node[over] {\tiny2}}; \end{tikzpicture} \mskip80mu \t{T}_2 \quad = \ %
\begin{tikzpicture}[ level distance=1cm, level 1/.style={sibling distance=3cm}, level 2/.style={sibling distance=1cm}, tree node/.style={draw=none}, every child node/.style={tree node}, baseline={([yshift=-1ex] current bounding box.center)}, ] \t{Node}[tree node] (Root) {.} child { node {.} child {node {.} edge from parent node[over] {\tiny 3}} child {node {.} edge from parent node[over] {\tiny 4}} edge from parent node[over] {\tiny1}} child { node {.} child {node {.} edge from parent node[over] {\tiny 5}} child {node {.} edge from parent node[over] {\tiny 6}} edge from parent node[over] {\tiny2}}; \end{tikzpicture}\] The corresponding functor takes the family~$⟨ T:\Set, b:T\to\Set⟩$ to the family \begin{itemize} \item index set: $T' = {\bf 1} + \SI{t:T} \big((b\,t \times {\bf 2}) \to \ensuremath{\mathbb{N}}\big)$, \item recursive function~$b'$ defined with \begin{itemize} \item $b'\,\star = {\bf 1}$, where~$\star$ is the only element of~${\bf 1}$, \item $b'\,⟨ t, l⟩ = (b\,t)\times{\bf 2}$. \end{itemize} \end{itemize} We will, in section~\ref{sub:layering}, encounter a similar situation in which the type~$C$ will be~$\ensuremath{\mathsf{Fam}}(I)$, i.e.\ we will need to take the least fixed point of a functor from~$\ensuremath{\mathsf{Fam}}\big(\ensuremath{\mathsf{Fam}}(I)\big)$ to itself. Another typical example involves defining a universe~$U$ of types closed under dependent function space:~$U$ needs to contain inductive elements of the form~$\PI{A:U}(B:\mathsf{fam}(A))$, but~$\mathsf{fam}(A)$ is defined as~$\mathsf{El}(A) \to U$ and makes use of the decoding function~$\mathsf{El} : U \to \Set$. \subsection{Greatest Fixed Points} In its simplest form, a coinductive definition introduces some~$\NU{F}:\Set$ together with a weakly terminal coalgebra~$c : \NU{F} \to F(\NU{F})$ for a sufficiently healthy\footnote{for our purposes, ``sufficiently healthy'' amounts to ``polynomial''} functor~$F : \Set \to \Set$.
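Before turning to examples of greatest fixed points, the mutual \t{Tree}/\t{branches} definition of the previous subsection can be checked concretely. The following is a hypothetical Python transcription (ours, not the paper's Agda), encoding trees as tagged tuples and branch sets as lists of tuples of booleans:

```python
# Hypothetical transcription of the inductive-recursive definition:
# a tree is either Empty or AddLayer(t, l), where l labels the new layer
# and is indexed by branches(t) x Bool = branches(AddLayer(t, l)).

EMPTY = ("empty",)
def add_layer(t, l): return ("add", t, l)

def branches(t):
    """The simultaneously defined recursive function: the branch set of t."""
    if t[0] == "empty":
        return [()]                      # the one-element set 'One'
    _, sub, _ = t
    return [b + (x,) for b in branches(sub) for x in (False, True)]

# T1 and T2 from the example; b[-1] plays the role of b.2, b[-2] of b.1.2
T1 = add_layer(EMPTY, lambda b: 1 if b[-1] else 2)
T2 = add_layer(T1, lambda b: 3 if b[-2] and b[-1]
                        else 4 if b[-2] and not b[-1]
                        else 5 if not b[-2] and b[-1]
                        else 6)

leaf_labels = sorted(T2[2](b) for b in branches(T2))
print(leaf_labels)   # [3, 4, 5, 6]: the four leaves of T2
```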
For example, given a set $A$, the functor~$ F(X) = A \times X$ is certainly healthy and the set~$\NU{F}$ corresponds to the set of streams over~$A$. The coalgebra~$c$ ``decomposes'' a stream~$[a_0, a_1, \ldots]$ into the pair~$⟨ a_0 , [a_1,a_2,\ldots] ⟩ $ of its head and its tail. Because the coalgebra is weakly terminal, for any other such coalgebra~$x : X \to F(X)$ there is a map $X \to \NU{F}$. The corresponding typing rules are given by \[ \Rule{\sigma : X \to F(X)}{\nuIntro\,\sigma : X \to \NU{F}} \qquad\text{and}\qquad \Rule{}{\nuElim : \NU{F} \to F(\NU{F})} \] and the computation rule is \[ \nuElim\,(\nuIntro\,\sigma\,x) \quad = \quad F_{\nuIntro\,\sigma} \, (\sigma\,x) \,. \] Such coinductive definitions can be extended to families of sets: given an index set~$I$, we introduce weakly terminal coalgebras for sufficiently healthy functors acting on~$\ensuremath{\mathsf{Pow}}(I)$. The typing rules are extended as follows: \[ \Rule{\sigma : X \sub F(X)}{\nuIntro\,\sigma : X \sub \NU{F}} \qquad\text{and}\qquad \Rule{}{\nuElim : \NU{F} \sub F(\NU{F})} \] together with the computation rule \[ \nuElim\,(\nuIntro\,\sigma\,i\,x) \quad = \quad F_{\nuIntro\,\sigma}\,i \, (\sigma\,i\,x) \,. \] Note that it doesn't seem possible to guarantee the existence of strictly terminal coalgebras~\cite{LetsUnfold} without considerably extending the type theory.\footnote{It is apparently possible to do so in univalent type theory~\cite{Ahrens}, where coinductive types can be defined from inductive ones!} \subsubsection*{Bisimulations} Equality in type theory is a delicate subject, and it is even more so in the presence of coinductive types. The usual (extensional or intensional) equality is easily defined and shown to enjoy most of the expected properties. However, it isn't powerful enough to deal with infinite objects. Two infinite objects are usually considered ``equal'' when they are \emph{bisimilar}. Semantically, bisimilarity has a simple definition~\cite{Aczel,Staton,Ahrens}.
In the internal language, two coalgebras are bisimilar if their decompositions are equal, coinductively. \begin{defi}\label{def:categorical_bisimulation} Given a locally cartesian closed category~$\ensuremath{\mathbb{C}}$ with an endofunctor~$F$ and two coalgebras~$c_i : T_i \to F(T_i)$ ($i=1,2$), a \emph{bisimulation} between~$T_1$ and~$T_2$ is given by a span~$T_1 \leftarrow R \rightarrow T_2$ with a coalgebra structure such that the following diagram commutes. \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, column sep={6em,between origins}, row sep={4em,between origins},text height=1.5ex, text depth=.25ex] { T_1 & R & T_2 \\ F(T_1) & F(R) & F(T_2) \\ }; \path[morphism] (m-1-2) edge node[above]{$r_1$} (m-1-1) (m-1-2) edge node[above]{$r_2$} (m-1-3) % (m-2-2) edge node[below]{$F_{r_1}$} (m-2-1) (m-2-2) edge node[below]{$F_{r_2}$} (m-2-3) % (m-1-1) edge node[left]{$c_1$} (m-2-1) (m-1-2) edge node[right]{$r$} (m-2-2) (m-1-3) edge node[right]{$c_2$} (m-2-3) ; \end{tikzpicture} \] \end{defi} In particular, the identity span~$T \leftarrow T \rightarrow T$ is always a bisimulation and the converse of a bisimulation between~$T_1$ and~$T_2$ is a bisimulation between~$T_2$ and~$T_1$. Functions between coinductive types ought to be congruences for bisimilarity: \begin{defi} If $f_i : C_i \to D_i$ ($i=1,2$) are morphisms from coalgebras $c_i:C_i\to F(C_i)$ to coalgebras~$d_i : D_i \to F(D_i)$, we say that~$f_1$ and~$f_2$ are equal \emph{up to bisimulation}, written~$f_1 \approx f_2$, if for every bisimulation~$C_1 \leftarrow R \rightarrow C_2$, there is a bisimulation~$D_1 \leftarrow S \rightarrow D_2$ and a morphism~$h:R\to S$ making the following diagram commute.
\[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, column sep={6em,between origins}, row sep={4em,between origins},text height=1.5ex, text depth=.25ex] { F(C_1) & F(R) & F(C_2) \\ C_1 & R & C_2 \\ D_1 & S & D_2 \\ F(D_1) & F(S) & F(D_2) \\ }; \path[morphism] (m-1-2) edge node[above]{$F_{r_1}$} (m-1-1) (m-1-2) edge node[above]{$F_{r_2}$} (m-1-3) % (m-2-1) edge node[left]{$c_1$} (m-1-1) (m-2-2) edge node[right]{$r$} (m-1-2) (m-2-3) edge node[right]{$c_2$} (m-1-3) % (m-2-2) edge node[above]{$r_1$} (m-2-1) (m-2-2) edge node[above]{$r_2$} (m-2-3) % (m-2-1) edge node[left]{$f_1$} (m-3-1) (m-2-3) edge node[right]{$f_2$} (m-3-3) % (m-3-1) edge node[left]{$d_1$} (m-4-1) (m-3-3) edge node[right]{$d_2$} (m-4-3) ; % \path[morphism,densely dashed] (m-2-2) edge node[right]{$h$} (m-3-2) % (m-3-2) edge node[below]{$s_1$} (m-3-1) (m-3-2) edge node[below]{$s_2$} (m-3-3) % (m-3-2) edge node[right]{$s$} (m-4-2) % (m-4-2) edge node[below]{$F_{s_1}$} (m-4-1) (m-4-2) edge node[below]{$F_{s_2}$} (m-4-3) ; \end{tikzpicture} \] \end{defi} Translated into the internal language,~$f_1 \approx f_2$ means that if~$x$ and~$y$ are bisimilar, then~$f_1\,x$ and~$f_2\,y$ are also bisimilar. It is not difficult to show that \begin{itemize} \item $\approx$ is reflexive and symmetric, \item $\approx$ is compositional: if~$f_1\approx f_2$ and~$g_1 \approx g_2$, then~$f_1 g_1 \approx f_2 g_2$ (if the composition makes sense). \end{itemize} \label{rk:bisim_equiv} Without additional properties (which hold in our context), $\approx$ is however not transitive. Coinductive types are interpreted by \emph{weakly} terminal coalgebras. There can be, in principle, several non-isomorphic weakly terminal coalgebras. We however have the following.
\begin{lem}\label{lem:WTC} Let~$T_1$ and~$T_2$ be weakly terminal coalgebras for the endofunctor~$F$, \begin{itemize} \item if~$m:T_1 \to T_2$ is a mediating morphism, then~$m \approx \mathrm{id}$, \item if~$f:T_1 \to T_2$ and~$g:T_2\to T_1$ are mediating morphisms, then~$gf \approx \mathrm{id}_{T_1}$ and~$fg \approx \mathrm{id}_{T_2}$. \end{itemize} \end{lem} \begin{proof} Consider the following diagram: \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, column sep={6em,between origins}, row sep={4em,between origins},text height=1.5ex, text depth=.25ex] { F(T_2) & F(R) & F(T_1) \\ T_2 & R & T_1 \\ T_2 & R & T_2 \\ F(T_2) & F(R) & F(T_2) \\ }; \path[morphism] (m-1-2) edge node[above]{$F_{r_2}$} (m-1-1) (m-1-2) edge node[above]{$F_{r_1}$} (m-1-3) % (m-2-1) edge node[left]{$t_2$} (m-1-1) (m-2-2) edge node[right]{$r$} (m-1-2) (m-2-3) edge node[right]{$t_1$} (m-1-3) % (m-2-2) edge node[above]{$r_2$} (m-2-1) (m-2-2) edge node[above]{$r_1$} (m-2-3) % (m-2-1) edge node[left]{$\mathrm{id}$} (m-3-1) (m-2-3) edge node[right]{$m$} (m-3-3) % (m-3-1) edge node[left]{$t_2$} (m-4-1) (m-3-3) edge node[right]{$t_2$} (m-4-3) ; % \path[morphism,densely dashed] (m-2-2) edge node[right]{$\mathrm{id}$} (m-3-2) % (m-3-2) edge node[below]{$r_2$} (m-3-1) (m-3-2) edge node[below]{$mr_1$} (m-3-3) % (m-3-2) edge node[right]{$r$} (m-4-2) % (m-4-2) edge node[below]{$F_{r_2}$} (m-4-1) (m-4-2) edge node[below]{$F_{mr_1}$} (m-4-3) ; \end{tikzpicture} \] The only thing needed to make it commute is that the bottom right square commutes.
This follows from the fact that~$m$ is a mediating morphism between the weakly terminal coalgebras: \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes, column sep={6em,between origins}, row sep={4em,between origins},text height=1.5ex, text depth=.25ex] { R & T_1 & T_2 \\ F(R) & F(T_1) & F(T_2) \\ }; \path[morphism] (m-1-1) edge node[above]{$r_1$} (m-1-2) (m-1-2) edge node[above]{$m$} (m-1-3) % (m-1-1) edge node[left]{$r$} (m-2-1) (m-1-2) edge node[left]{$t_1$} (m-2-2) (m-1-3) edge node[left]{$t_2$} (m-2-3) % (m-2-1) edge node[below]{$F_{r_1}$} (m-2-2) (m-2-2) edge node[below]{$F_m$} (m-2-3) ; \end{tikzpicture} \] The second point is a direct consequence of the first one. \end{proof} \begin{asm}\label{asm:bisim} We assume our type theory is ``compatible'' with bisimilarity, in the sense that any definable function~$f$ between \emph{datatypes} satisfies~$f \approx f$. \end{asm} Since it is possible to construct a dependent type theory where bisimilarity \emph{is} intensional equality~\cite{Ahrens,mogelberg}, this assumption is reasonable. In our case, it allows us to simplify some of the (pen and paper) proofs while not requiring any extension of our type theory. Note however that the hypothesis that the functions are between \emph{datatypes} (i.e.\ not involving equality) is crucial: the whole problem is precisely that we don't assume bisimilarity is the same as (intensional) equality. \section{Indexed Containers} \label{sec:indexedContainers} \emph{Indexed containers}~\cite{alti:lics09} were first considered (implicitly) in type theory 30 years ago by K.~Petersson and D.~Synek~\cite{treeSet}.
Depending on the context and the authors, they are also called \emph{interaction systems}~\cite{hancock-apal06} or \emph{polynomial diagrams}~\cite{polyMonads,polyDiagrams}. \begin{defi} For~$I:\Set$, an indexed container over~$I$ is a triple~$w = ⟨A,D,n⟩$ where \begin{itemize} \item $i:I \vdash A(i):\Set$, \item $i:I, a:A(i) \vdash D(i,a):\Set$, \item $i:I, a:A(i), d:D(i,a) \vdash n(i,a,d):I$.\footnote{An abstract, but equivalent, way of defining indexed containers over~$I$ is as functions~$I\to\ensuremath{\mathsf{Fam}}\big(\ensuremath{\mathsf{Fam}}(I)\big)$.} \end{itemize} In Agda, the definition looks like \begin{allttt} record IS (I : Set) where field A : I → Set D : (i : I) → A i → Set n : (i : I) → (a : A i) → D i a → I \end{allttt} \end{defi} \noindent A useful intuition is that~$I$ is a set of states and that~$⟨A,D,n⟩$ is a game: \begin{itemize} \item $A(i)$ is the set of \emph{moves} (or \emph{actions} or \emph{commands}) in state~$i$, \item $D(i,a)$ is the set of \emph{counter-moves} (or \emph{reactions} or \emph{responses}) after move~$a$ in state~$i$, \item $n(i,a,d) : I$ is the \emph{new state} after move~$a$ and counter-move~$d$ have been played. When no confusion arises, we usually write~$i[a/d]$ for~$n(i,a,d)$. \end{itemize} Each indexed container gives rise to a monotone operator on predicates over~$I$: \begin{defi} If~$w=⟨A,D,n⟩ $ is an indexed container over~$I:\Set$, the \emph{extension of~$w$} is the operator:~$\Sem{w} : \ensuremath{\mathsf{Pow}}(I) \to \ensuremath{\mathsf{Pow}}(I)$ where \[ i \IN \Sem{w}(X) \quad = \quad \SI{a:A(i)} \PI{d:D(i,a)} i[a/d] \IN X \,. \] \end{defi} \begin{lem}\label{lem:monotonic} The operator~$\Sem{w}$ is monotonic, i.e., the following type is inhabited: \[ X\sub Y \quad\to\quad \Sem{w}(X) \sub \Sem{w}(Y) \] for all predicates~$X$ and~$Y$ of the appropriate type.
\end{lem} \begin{proof}\checked This is direct: given~$p : X\sub Y$ and~$\langle a,f\rangle : i \IN \Sem{w}(X)$, we just need to ``carry''~$p$ through~$\Sem{w}$ and return~$\langle a , p \circ f\rangle$. \end{proof} Indexed containers form the objects of several categories of interest~\cite{hancock-apal06,polyMonads,alti:lics09,polyDiagrams}, but that will only play a very minor role here. \subsection{Composition} \label{subsub:indexedComposition} Extensions of indexed containers can be composed as functions. There is a corresponding operation on the indexed containers themselves. \begin{defi} If~$w_1 = ⟨ A_1,D_1,n_1⟩ $ and~$w_2=⟨ A_2,D_2,n_2⟩ $ are two indexed containers on~$I$, the \emph{composition} of~$w_1$ and~$w_2$ is the indexed container~$w_2\circ w_1 = ⟨ A , D , n⟩ $ where \begin{itemize} \item $A(i) = \SI{a_1:A_1(i)}\PI{d_1:D_1(i,a_1)} A_2\big(i[a_1/d_1]\big) = i \IN \Sem{w_1}(A_2)$, \item $D\big(i,⟨ a_1,f⟩ \big) = \SI{d_1:D_1(i,a_1)}D_2\big(i[a_1/d_1], f\,d_1\big)$, \item $n\big(i,⟨ a_1,f⟩ ,⟨ d_1,d_2⟩ \big) = i[a_1/d_1][f\,d_1/d_2]$. \end{itemize} \end{defi} \begin{lem}\label{lem:composition} For all indexed containers~$w_1$ and~$w_2$ and every predicate~$X$, we have \[ \Sem{w_2} \circ \Sem{w_1}(X) == \Sem{w_2\circ w_1}(X) \] where~$Y == Z$ is an abbreviation for~$(Y\sub Z)\times(Z\sub Y)$. If function extensionality holds, the two functions are inverse to each other. \end{lem} \begin{proof}\checked The main point is that the intensional axiom of choice \[ \PI{d: D}\SI{a: A(d)} \varphi(d,a) \quad \leftrightarrow \quad \SI{f: \PI{d:D} A(d)}\PI{d: D} \varphi(d,f\,d) \] is provable in Martin-L\"of type theory.
We then have \begin{myequation} i\IN \Sem{w_2} \circ \Sem{w_1}(X) & = & \SI{a_1:A_1(i)} \PI{d_1:D_1(i,a_1)} \SI{a_2:A_2(i[a_1/d_1])}\\ && \PI{d_2:D_2(i[a_1/d_1],a_2)} i[a_1/d_1][a_2/d_2] \IN X\\ \text{\footnotesize(axiom of choice)} &\leftrightarrow& \SI{a_1} \SI{f:\PI{d_1:D_1(i,a_1)}A_2(i[a_1/d_1])} \PI{d_1}\\ && \PI{d_2} i[a_1/d_1][f\,d_1/d_2] \IN X\\ &\leftrightarrow& \SI{⟨a_1,f⟩} \PI{⟨d_1,d_2⟩} i[a_1/d_1][f\,d_1/d_2] \IN X\\ & = & i\IN \Sem{w_2 \circ w_1}(X) \,. \end{myequation} \end{proof} \subsection{Duality} \begin{defi}\label{def:duality} If~$w=⟨ A,D,n⟩ $ is an indexed container over~$I$, we write~$w^\perp$ for the indexed container~$⟨ A^\perp,D^\perp,n^\perp⟩ $ where \begin{itemize} \item $A^\perp(i) = \PI{a:A(i)}D(i,a)$, \item $D^\perp (i, \ensuremath{\mbox{}\_\mbox{}}) = A(i)$ (note that it does not depend on the value of~$f : A^\bot(i)$), \item $i[f/a] = i[a/f\,a]$. \end{itemize} \end{defi} \begin{lem}\label{lem:duality} For every indexed container~$w=⟨ A,D,n⟩ $, the following type is inhabited: \[ i \IN \Sem{w^\bot}(X) \quad\longleftrightarrow\quad \PI{a:A(i)}\SI{d:D(i,a)} i[a/d] \IN X \,. \] With function extensionality, this is an isomorphism. \end{lem} \begin{proof}[Sketch of proof]\checked Just like lemma~\ref{lem:composition}, the proof relies on the intensional axiom of choice, which shows that \[ \SI{f:A^\bot(i)} \PI{a:D^\bot(i,f)} \varphi(a, f\,a) \leftrightarrow \PI{a:A(i)} \SI{d:D(i,a)} \varphi(a, d) \,. \] \end{proof} \subsection{Free Monad} \label{subsub:freeMonad} If~$w=⟨ A,D,n⟩ $ is an indexed container on~$I$, we can consider the free monad generated by~$\Sem{w}$. N. Gambino and M. Hyland proved that the free monad~$F_w$ generated by some~$\Sem{w}$ (a dependent polynomial functor) is of the form~$\Sem{w^\ast}$ for some indexed container~$w^\ast$~\cite{GambHyl}. It is characterized by the fact that~$\Sem{w^\ast}(X)$ is the least fixed point of~$Y\mapsto X \cup \Sem{w}(Y)$.
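At the level of mere inhabitedness (forgetting all proof objects), this characterization can be tested on finite examples: for a finite index set, the least fixed point of~$Y \mapsto X \cup \Sem{w}(Y)$ is reached after finitely many iterations. The following Python sketch (our own encoding of a container as explicit move/response/next-state tables; none of this is part of the formal development) computes it.

```python
def ext(w, X):
    """Extension of a container w = (A, D, n) on subsets of the index set:
    i is in ext(w, X) iff some move a at i has all of its responses d
    lead to a state in X.  Only inhabitedness is tracked, not proofs."""
    A, D, n = w
    return {i for i in A
            if any(all(n[(i, a, d)] in X for d in D[(i, a)])
                   for a in A[i])}

def star(w, X):
    """The free-monad predicate as the least fixed point of
    Y ↦ X ∪ ext(w, Y), computed by iteration (the index set is finite,
    so the iteration terminates)."""
    Y = set(X)
    while True:
        Z = set(X) | ext(w, Y)
        if Z == Y:
            return Y
        Y = Z

# A toy "countdown" game: from state i > 0 there is one move with one
# response, leading to i - 1; state 0 has no moves.
A = {0: [], 1: ["down"], 2: ["down"], 3: ["down"]}
D = {(i, "down"): ["ok"] for i in (1, 2, 3)}
n = {(i, "down", "ok"): i - 1 for i in (1, 2, 3)}
w = (A, D, n)
```

Here `star(w, {0})` returns `{0, 1, 2, 3}`: a state satisfies the free-monad predicate exactly when a state in~$X$ can be reached after finitely many move/counter-move rounds.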
In other words, we are looking for an indexed container~$w^\ast$ satisfying \begin{itemize} \item $X \cup \Sem{w}\big(\Sem{w^\ast}(X)\big) \sub \Sem{w^\ast}(X)$ \item $X \cup \Sem{w}(Y) \sub Y \to \Sem{w^\ast}(X) \sub Y$. \end{itemize} Informally,~$i\IN\Sem{w^\ast}(X)$ iff $i \IN X \cup \Sem{w}\big(X \cup \Sem{w}( X \cup \Sem{w}(X \cup \dots))\big)$. Expanding the definition of~$\Sem{w}$, this means that \[ \SI{a_1}\PI{d_1}\SI{a_2}\PI{d_2} \ \dots\ \ i[a_1/d_1][a_2/d_2]\dots \IN X \,. \] This pseudo-formula depicts a well-founded tree (with branching on~$d_i:D(\ensuremath{\mbox{}\_\mbox{}},\ensuremath{\mbox{}\_\mbox{}})$) in which branches are finite: either they end at a~$\Pi$ because the corresponding domain~$D(\ensuremath{\mbox{}\_\mbox{}},\ensuremath{\mbox{}\_\mbox{}})$ is empty, or they end in a state~$i[a_1/d_1][a_2/d_2]\dots$ which belongs to~$X$. The length of the sequence~$a_1/d_1,a_2/d_2,\dots$ is finite but may depend on what moves / counter-moves are chosen as the sequence grows longer. We can define~$w^\ast$ inductively. \begin{defi} Define~$w^\ast = ⟨A^\ast,D^\ast,n^\ast⟩$ over~$I$ as: \begin{itemize} \item $A^\ast : \ensuremath{\mathsf{Pow}}(I)$ is a weak initial algebra for the endofunctor~$X \mapsto \LAM{i} \big({\bf 1} + \Sem{w}(X)(i)\big)$ on~$\ensuremath{\mathsf{Pow}}(I)$. Concretely, there are two constructors for~$A^\ast(i)$: \[ \Rule{}{\t{Leaf} : A^\ast(i)} \quad \text{and} \quad \Rule{ a : A(i) \qquad k : \PI{d : D(i,a) } A^\ast(n\,i\,a\,d) } { \t{Node}(a,k) : A^\ast(i) } \,. \] Thus, an element of~$A^\ast(i)$ is a well-founded tree where each internal node is labeled with an element~$a:A(i)$ and the branching is given by the set~$D(i,a)$.
\item The components~$D^\ast$ and~$n^\ast$ are defined by recursion: \begin{itemize} \item in the case of a~$\t{Leaf}$: \[\begin{array}{lcl} D^\ast \big(i,\t{Leaf}\big) & = & {\bf 1} : \Set \\ n^\ast\big(i,\t{Leaf},\star\big) & = & i : I \\ \end{array}\] \item in the case of a~$\t{Node}$: \[\begin{array}{lcl} D^\ast \big(i,\t{Node}(a,k)\big) & = & \SI{d : D (i,a)} D^\ast\big(i[a/d],k\,d\big) : \Set\\ n^\ast\big(i,\t{Node}(a,k),⟨ d,d'⟩ \big) & = & n^\ast\big(i[a/d],k\,d,d'\big) : I \,. \end{array} \] \end{itemize} The corresponding Agda definition is \begin{allttt} module FreeMonad (I : Set) (w : IS I) where open IS w data A* : I → Set where Leaf : (i : I) → A* i Node : (i : I) → (a : A i) → (f : (d : (D i a)) → A* (n i a d)) → A* i D* : (i : I) → (t : A* i) → Set D* .i (Leaf i) = One D* .i (Node i a f) = \(\Sigma\) (D i a) (λ d → D* (n i a d) (f d)) n* : (i : I) → (t : A* i) → (b : D* i t) → I n* .i (Leaf i) ⋆ = i n* .i (Node i a f) ( d , ds ) = n* (n i a d) (f d) ds \end{allttt} \end{itemize} \end{defi} In the presence of extensional equality, this particular inductive definition can be encoded using standard~$W$-types~\cite{polyMonads}. As it is given, it avoids using equality but needs either a universe, or degenerate\footnote{in the sense that the inductive set~$A^\ast(i)$ doesn't depend on the recursive functions~$D^\ast(i,\ensuremath{\mbox{}\_\mbox{}})$ and~$n^\ast(i,\ensuremath{\mbox{}\_\mbox{}},\ensuremath{\mbox{}\_\mbox{}})$.} induction-recursion on~$\ensuremath{\mathsf{Fam}}(I)$. This construction does indeed correspond to the free monad described by N. Gambino and M. Hyland: \begin{lem}\label{lem:RTC_mu} If~$w$ is an interaction system on~$I:\Set$, we have \begin{enumerate} \item for all~$X:\ensuremath{\mathsf{Pow}}(I)$, $X \cup \Sem{w}\big(\Sem{w^\ast}(X)\big) \sub \Sem{w^\ast}(X)$ \item for all~$X,Y:\ensuremath{\mathsf{Pow}}(I)$, $X \cup \Sem{w}(Y) \sub Y \to \Sem{w^\ast}(X) \sub Y$.
\end{enumerate} \end{lem} \begin{proof}\checked \end{proof} \subsection{Greatest Fixed Points} Agda has some support for infinite values via the~$\infty\ensuremath{\mbox{}\_\mbox{}}$ type constructor making a type ``lazy'', i.e.\ stopping computation. Using this and the operators~$\sharp : A \to \infty A$ (to freeze a value) and~$\flat : \infty A \to A$ (to unfreeze it), it is possible to define the type~$\nu_{\Sem{w}}$ (which we'll write~$\nu_w$) for any interaction system~$w$. The termination checker used in Agda~\cite{foetus} also checks productivity of recursive definitions, but since it is not clear that this is sound when inductive and coinductive types are mixed~\cite{ThorstenNad,PHTotality}, we will only use the introduction and elimination rules in our developments: \[ \Rule{\sigma : X \sub \Sem{w}(X)}{\nuIntro\,\sigma : X \sub \NU{w}} \qquad\text{and}\qquad \Rule{}{\nuElim : \NU{w} \sub \Sem{w}(\NU{w})} \] which are definable in Agda. Elements of~$\NU{w}$ are constructed from coalgebras for~$\Sem{w}$, and any element of~$i\IN\NU{w}$ can be ``unfolded'' into an element of~$i \IN \Sem{w}(\NU{w})$, i.e.\ into an element of \[ \SI{a : A(i)} \PI{d : D(i,a)} i[a/d] \IN \NU{w} \,. \] We can repeat this unfolding and informally decompose an element of~$i\IN\NU{w}$ into an infinite ``lazy'' process of the form \[ \SI{a_1 : A(i)} \PI{d_1 : D(i,a_1)} \SI{a_2 : A(i[a_1/d_1])} \PI{d_2 : D(i[a_1/d_1],a_2)} \cdots \] We therefore picture an element of~$i\IN\NU{w}$ as an infinite tree (which need not be well-founded). Each node of such a tree has an implicit state in~$I$, and the root has state~$i$. If the state of a node is~$j$, then the node contains an element~$a$ of~$A(j)$, and the branching of that node is given by~$D(j,a)$. Note that some finite branches may be inextensible when they end at a node with state~$j$ and label~$a$ for which~$D(j,a)$ is empty. \subsubsection*{Examples} We will be particularly interested in fixed points of the form~$\NU{w^\bot}$.
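As a warm-up in executable form, the introduction and elimination rules for~$\NU{w}$ can be mimicked with closures, which provide the required laziness. Below is a Python sketch (our own untyped encoding, not the Agda development); taking the response set~$D(i,a)$ to be a singleton makes~$\NU{w}$ a type of streams, which we use as a test case.

```python
def nu_intro(nxt, sigma, i, x):
    """nuIntro: turn a coalgebra sigma -- where sigma(i, x) returns a move
    a together with a function f sending each response d to a witness at
    the new state nxt(i, a, d) -- into a lazily unfolded element of
    i ∈ ν⟦w⟧.  Elimination (nuElim) is just forcing the returned thunk."""
    def elim():
        a, f = sigma(i, x)
        return a, (lambda d: nu_intro(nxt, sigma, nxt(i, a, d), f(d)))
    return elim

# Degenerate container with a single response per move: ν⟦w⟧ is then a
# type of streams over the set of moves.
nxt = lambda i, a, d: i                      # trivial index: stays put
sigma = lambda i, x: (x, lambda d: x + 1)    # emit x, next state is x + 1
nats = nu_intro(nxt, sigma, None, 0)         # the stream 0, 1, 2, ...
```

Repeatedly eliminating `nats` produces $0, 1, 2, \dots$; each unfolding is computed only on demand, matching the ``lazy process'' picture above.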
Because of lemma~\ref{lem:duality}, an element of~$i\IN\NU{w^\bot}$ unfolds to a potentially infinite object of the form \[ \PI{a_1 : A(i)} \SI{d_1 : D(i,a_1)} \PI{a_2 : A(i[a_1/d_1])} \SI{d_2 : D(i[a_1/d_1],a_2)} \cdots \] For such types, the branching comes from~$A(\ensuremath{\mbox{}\_\mbox{}})$ and the labels from~$D(\ensuremath{\mbox{}\_\mbox{}},\ensuremath{\mbox{}\_\mbox{}})$. Here are some examples of the kind of objects we get. \begin{enumerate} \item Streams on~$T$ are isomorphic to~``$\star \IN \NU{w^\bot}$'' where~$I={\bf 1} = \{\star\}$ and~$w=⟨A,D,n⟩$ with \begin{itemize} \item $A(\star) = {\bf 1}$, \item $D(\star,\star) = T$, \item $n(\star,\star,x) = \star$. \end{itemize} \item Increasing streams of natural numbers are isomorphic to~``$0\IN\NU{w^\bot}$'' where~$I=\ensuremath{\mathbb{N}}$ and~$w=⟨A,D,n⟩$ with \begin{itemize} \item $A(i) = {\bf 1}$, \item $D(i,\star) = \SI{j:\ensuremath{\mathbb{N}}} (i < j)$, \item $n(i,\star,⟨j,p⟩) = j$. \end{itemize} \item Infinite, finitely branching trees labeled by~$X$ are isomorphic to~$\star \IN \NU{w^\bot}$ where~$I={\bf 1}$ and~$w=⟨A,D,n⟩$ with \begin{itemize} \item $A(\star) = \SI{k:\ensuremath{\mathbb{N}}} N(k)$, where~$N(k)$ is the set with exactly~$k$ elements, \item $D(\star,\langle k, i\rangle) = X$, \item $n(\star,\langle k, i\rangle,x) = \star$. \end{itemize} \item In general, non-losing strategies for the second player from state~$i:I$ in game~$w$ are given by~$i \IN \NU{w^\bot}$. \end{enumerate} \subsubsection*{Bisimulations} The appropriate equivalence relation on coinductive types is \emph{bisimilarity}.
Translating the categorical notion from page~\pageref{def:categorical_bisimulation} for the type~$\NU{w}$,\footnote{Note that we only define bisimilarity for elements of~$i\IN\NU{w}$, and not for arbitrary ~$\Sem{w}$-coalgebras.} we get that~$T_1 : i_0 \IN \NU{w}$ is bisimilar to~$T_2 : i_0 \IN \NU{w}$ if: \begin{enumerate} \item there is an~$I$-indexed family~$R_i : (i\IN\NU{w}) \times (i\IN\NU{w}) \to \Set$ (an ``indexed relation'') such that \item $R_{i_0} \langle T_1,T_2\rangle$ is inhabited, \item whenever~$R_i \langle T_1, T_2\rangle$ is inhabited, we have \begin{itemize} \item $a_1 \equiv a_2$, \item for every $d_1:D(i,a_1)$, the subtrees~$f_1\,d_1$ and~$f_2\,d_2$ are related by~$R_{i[a_1/d_1]}$, where~$d_2:D(i,a_2)$ is obtained by transporting~$d_1$ along~$a_1 \equiv a_2$, \end{itemize} where~$\langle a_1 , f_1\rangle$ [resp. $\langle a_2 , f_2\rangle$] comes from the coalgebra structure~$\NU{w} \sub \Sem{w}\NU{w}$ applied to~$T_1$ [resp.~$T_2$]. \end{enumerate} Expressing this formally is quite verbose as values need to be transported along equalities to have an appropriate type. For example, having~$a_1 \equiv a_2 \in A(i)$ only entails that~$D(i,a_1)$ and~$D(i,a_2)$ are isomorphic, and thus, $d_1 : D(i,a_1)$ is not strictly speaking an element of~$D(i,a_2)$! We noted on page~\pageref{rk:bisim_equiv} that categorically speaking, without hypotheses on the functor~$F$,~$\approx$ was not necessarily transitive. We have \begin{lem} If~$w$ is an interaction system over~$I$, then~$\approx$ is an equivalence relation on any~$i\IN\NU{w}$: \begin{itemize} \item for any~$T:i\IN\NU{w}$, there is an element in~$T \approx T$, \item for any~$T_1,T_2:i\IN\NU{w}$, there is a function in~$(T_1 \approx T_2) \to (T_2 \approx T_1)$, \item for any~$T_1,T_2,T_3:i\IN\NU{w}$, there is a function in~$(T_1 \approx T_2) \to (T_2 \approx T_3) \to (T_1 \approx T_3)$. \end{itemize} \end{lem} \begin{proof}\checked The result is intuitively obvious but while reflexivity is easy, proving transitivity (and to a lesser extent symmetry) in Agda is surprisingly tedious.
Explaining the formal proof is probably pointless as it mostly consists of transporting elements along equalities back and forth. \end{proof} We will keep some of the bisimilarity proofs in the meta theory in order to simplify the arguments. The only consequence of the assumption from page~\pageref{asm:bisim} that we'll need is the following. \begin{lem}\label{lem:bisim_id} Suppose that~$f,g : i_1 \IN \NU{w_1} \to i_2 \IN \NU{w_2}$ are definable in type theory, and that neither~$w_1$ nor~$w_2$ involves the identity type; then, to prove that~$f \approx g$,\footnote{i.e.\ $S\approx T \to f\,S \approx g\,T$} it is enough to show that~$f\,T \approx g\,T$ for any~$T:i_1 \IN \NU{w_1}$. \end{lem} \begin{proof} If~$S \approx T$, we have~$f\,S \approx f\,T \approx g\,T$ where the first bisimulation comes from the assumption~$f\approx f$ from page~\pageref{asm:bisim}, and the second bisimulation comes from the hypothesis of the lemma. \end{proof} This makes proving~$f\approx g$ simpler as we can replace the hypothesis~$T_1 \approx T_2$ by the stronger~$T_1 \equiv T_2$. \subsubsection*{Weakly Terminal Coalgebras} We will have to show that some sets are isomorphic ``up to bisimilarity''. To do that, we'll use lemma~\ref{lem:WTC} by showing that the two sets are weakly terminal coalgebras for the same functor~$\Sem{w}$. (One of the sets will always be~$\NU{w}$, making half of this automatic.)
To show that~$T : \ensuremath{\mathsf{Pow}}(I)$ is a weakly terminal coalgebra for~$\Sem{w}$, we have to define, mimicking the typing rules for coinductive types: \begin{itemize} \item $\mathtt{elim} : T \sub \Sem{w}(T)$, \item $\mathtt{intro} : X \sub \Sem{w}(X) \to X \sub T$, \item $\mathtt{comp}_{X,c,x,i} : \mathtt{elim}(\mathtt{intro}\, c\, i\, x) \equiv \Sem{w}_{\mathtt{intro}\,c}i\,(c\,i\,x)$ whenever \begin{itemize} \item $X:\ensuremath{\mathsf{Pow}}(I)$, $c:X\sub\Sem{w}(X)$, $i:I$ and~$x:i\IN X$, \item $\Sem{w}_{\mathtt{intro}\,c} : \Sem{w}(X) \sub \Sem{w}(T)$ comes from lemma~\ref{lem:monotonic}. \end{itemize} \end{itemize} By lemma~\ref{lem:WTC}, we have \begin{cor}\label{cor:WTC} If $C$ is a weakly terminal coalgebra for~$\Sem{w}$, then there are functions $f : \NU{w} \sub C$ and~$g : C \sub \NU{w}$ such that \[ f\,i\,(g\,i\,T) \approx T \] for any~$T : i \IN \NU{w}$. \end{cor} \begin{proof}\checked This is the second point of lemma~\ref{lem:WTC}, and it has been formalized in Agda. \end{proof} \section{Simulations and Evaluation} \subsection{Functions on Streams} Continuous functions from~$\Stream{A}$ to~$\Stream{B}$ can be described by infinite, $A$-branching ``decision trees'' with two kinds of nodes: $\epsilon$ and~$\t{output}_b$ with~$b\in B$. The idea is that~$f(s) = [b_1, b_2, \dots]$ if and only if the infinite branch corresponding to~$s$ contains, in order, the nodes~$\t{output}_{b_1}$, $\t{output}_{b_2}$, etc. For that to be well defined, we need to guarantee that there is no infinite path in the tree without any~$\t{output}$ node. \begin{thm}[\cite{hancock09:_repres_of_stream_proces_using}]\label{thm:stream_transducer} The set of continuous functions from~$\Stream{A}$ to~$\Stream{B}$ is isomorphic to the set~$\nu X.\mu Y. (B \times X) + (A \to Y)$. \end{thm} The nested fixed points guarantee that along a branch, there can only be finitely many consecutive non-\t{output}\ nodes: \begin{itemize} \item $\mu Y.
\, (B \times X) + (A \to Y)$: well-founded~$A$-branching trees with leaves in~$B \times X$, i.e.\ consisting of an $\t{output}_b$ node and an element of~$X$, \item $\nu X. \cdots$ the element of~$X$ at each leaf of the well-founded tree is another such well-founded tree, ad infinitum. \end{itemize} We can evaluate each such tree into a continuous function, a process colloquially referred to as ``stream eating'': we consume the elements of the stream to follow a branch in the infinite tree, and output~$b$ (an element of the result) whenever we find an~$\t{output}_b$ node.\footnote{Constructing the tree from a function isn't possible constructively without access to the modulus of continuity of the function, i.e.\ the size of the input prefix we need to inspect to get a given finite prefix of the result.} \medbreak Our aim is to extend theorem~\ref{thm:stream_transducer} to coinductive types of the form~$i \IN \NU{w^\bot}$. The problem is doing so in a type-theoretic manner, and even the case of dependent streams (where the type of an element may depend on the value of the previous element) is not trivial. Retrospectively, the difficulty was that there are two generalizations of streams: \begin{itemize} \item adding states: we consider dependent streams instead of streams; \item adding branching: we consider trees instead of streams. \end{itemize} Both cases required the introduction of states \emph{and} branching, making those seemingly simpler cases as hard as the general one. \subsection{Linear Simulations as Transducers} We are going to define a notion of ``transducer'' that can explore some branch of its input and produce some output along the way. Such notions have already been considered as natural notions of morphisms between dependent containers: linear simulations~\cite{polyDiagrams} and general simulations~\cite{hancock-apal06}.
These notions generalize morphisms as representations of pointwise inclusions~$\Sem{w_1} \sub \Sem{w_2}$ (called cartesian strong natural transformations), which only make sense for containers with the same index set~\cite{polyMonads,GambHyl,alti:lics09}. A transducer from type~$i_1 \IN \NU{w_1^\bot}$ to type~$i_2 \IN \NU{w_2^\bot}$ works as follows: given an argument (input) in~$i_1 \IN \NU{w_1^\bot}$, morally of the form \[ \PI{a_0}\SI{d_0}\PI{a_1}\SI{d_1} \dots \] it must produce (output) an object of the form \[ \PI{b_0}\SI{e_0}\PI{b_1}\SI{e_1} \dots \] In other words, the transducer \begin{itemize} \item consumes~$b$ (they are given by the environment when constructing the result) and~$d$ (they may be produced internally by the input), \item produces~$e$ (as part of the output) and~$a$ (to be used internally by feeding them to the input). \end{itemize} Graphically, it can be suggestively represented as \[ \begin{tikzpicture}[ box/.style={draw, minimum size=1.5cm, }] \node[box] (a) {\text{\tiny transducer}}; \node[left=2cm of a] (input) {input}; \node[right=2cm of a] (output) {output}; \draw[latex-,thick] (a.25) --++(0:1cm) node [right] {$b$}; \draw[-latex,thick] (a.-25) --++(0:1cm) node [right] {$e$}; \draw[-latex,thick] (a.-205) --++(0:-1cm) node [left] {$a$}; \draw[latex-,thick] (a.205) --++(0:-1cm) node [left] {$d$}; \end{tikzpicture} \] A very simple kind of transducer works as follows: \begin{enumerate} \item when given some~$b_0$, \item it produces an~$a_0$ and feeds it to its argument, \item it receives a~$d_0$ from its argument, \item and produces an~$e_0$ for its result. \item It starts again at step (1), possibly in a different internal state. \end{enumerate} This intuition is captured by the following definition. \begin{defi} Let~$w_1=⟨A_1,D_1,n_1⟩$ and~$w_2=⟨A_2,D_2,n_2⟩$ be indexed containers over~$I_1$ and~$I_2$ and let~$R:\ensuremath{\mathsf{Pow}}(I_1 \times I_2)$ be a relation between states.
We say that~$R$ is a \emph{linear simulation} from~$w_1$ to~$w_2$ if it comes with a proof: \[\begin{array}{rcl} \rho \quad:\quad \PI{i_1:I_1} \PI{i_2:I_2} R(i_1,i_2) &\to& \PI{a_2 : A_2(i_2)} \\ && \SI{a_1 : A_1(i_1)} \\ && \PI{d_1 : D_1(i_1,a_1)} \\ && \SI{d_2 : D_2(i_2,a_2)} \\ && \quad R\big(i_1[a_1/d_1], i_2[a_2/d_2]\big) \,. \end{array}\] We write~$(R,\rho) : w_1 \linear w_2$, but usually leave the~$\rho$ implicit and write~$R:w_1\linear w_2$. \end{defi} To justify the fact that this can serve as a transducer, we need to ``evaluate'' a simulation on elements of~$i\IN\NU{w_1^\bot}$. \begin{lem}\label{lem:eval_linear} Let~$w_1=⟨A_1,D_1,n_1⟩$ and~$w_2=⟨A_2,D_2,n_2⟩$ be indexed containers over~$I_1$ and~$I_2$, and~$(R,\rho) : w_1 \linear w_2$. We have a function \[ \mathrm{eval}_R : \PI{i_1:I_1}\PI{i_2:I_2}R(i_1,i_2) \to (i_1 \IN \NU{w_1^\bot}) \to (i_2 \IN \NU{w_2^\bot}) \,. \] \end{lem} \begin{proof}\checked This amounts to unfolding the simulation as a linear transducer. The main point in the Agda proof is to show that the predicate~$\NU{w_1^\bot}\between R^\sim(i_2)$ is a coalgebra for~$\Sem{w_2^\bot}$. \end{proof} The next lemma shows that this notion of simulation gives an appropriate notion of morphism between indexed containers. \begin{lem} We have: \begin{itemize} \item the identity type on~$I$ is a linear simulation from any~$w$ over~$I$ to itself, \item if~$R$ is a linear simulation from~$w_1$ to~$w_2$, and if~$S$ is a linear simulation from~$w_2$ to~$w_3$, then~$S\circ R$ is a linear simulation from~$w_1$ to~$w_3$, where~$S\circ R$ is the relational composition of~$R$ and~$S$: \[ (S\circ R)(i_1,i_3) \quad=\quad \SI{i_2 : I_2} R(i_1,i_2) \times S(i_2,i_3) \] \end{itemize} \end{lem} \begin{proof}\checked That the identity type is a linear simulation is straightforward. Composing simulations amounts to extracting the functions~$a_2 \mapsto a_1$ and~$d_1 \mapsto d_2$ from the simulations, and composing them.
\end{proof} Note that composition of simulations is only associative \emph{up to associativity of relational composition} (pullback of spans), so that a quotient is needed to turn indexed containers into a category~\cite{polyDiagrams}. What is nice is that composition of simulations corresponds to composition of their evaluations, up to bisimilarity. \begin{lem} If~$w_1$, $w_2$ and~$w_3$ are interaction systems on~$I_1$, $I_2$ and~$I_3$, and if~$R:w_1 \linear w_2$ and~$S: w_2 \linear w_3$, then we have \[ \mathrm{eval}_{S\circ R}\,i_1\,i_3\,⟨i_2,⟨r,s⟩⟩ \quad\approx\quad \mathrm{eval}_S\,i_2\,i_3\,s \circ \mathrm{eval}_R\,i_1\,i_2\,r \] where \begin{itemize} \item $i_1:I_1$, $i_2:I_2$, $i_3:I_3$, \item $r:R(i_1,i_2)$ and~$s:S(i_2,i_3)$, \item and thus, $⟨i_2,⟨r,s⟩⟩: (S\circ R)(i_1,i_3)$. \end{itemize} Recall that for functions, $f \approx g$ means that for every input~$T$ (here of type~$i_1\IN\NU{w_1^\bot}$), we have ``$f\,T \approx g\,T$, i.e.\ $f\,T$ is bisimilar to~$g\,T$''. \end{lem} \begin{proof}\checked This is one instance where the direct, type-theoretic proof of bisimilarity is possible, and not (too) tedious. With the transducer intuition in mind, this result is natural: starting from some~$a_3 : A_3(i_3)$, we can either \begin{itemize} \item transform it to~$a_2 : A_2(i_2)$ (with the simulation from~$w_2$ to~$w_3$) and then to~$a_1 : A_1(i_1)$ (with the simulation from~$w_1$ to~$w_2$), \item or transform it directly to~$a_1:A_1(i_1)$ (with the composition of the two simulations). \end{itemize} Because composition is precisely defined by composing the functions making the simulations, the two transformations are obviously equal. (The Agda proof is messier than that but amounts to the same thing.) Note that because of lemma~\ref{lem:bisim_id}, the Agda proof only needs to show that~$(\mathrm{eval}_S \circ \mathrm{eval}_R) \, T$ is bisimilar to~$\mathrm{eval}_{S\circ R} \, T$.
\end{proof} \subsection{General Simulations} As far as representing functions from~$i_1\IN\NU{w_1^\bot}$ to~$i_2\IN\NU{w_2^\bot}$ goes, linear simulations aren't very powerful. For streams, the first~$n$ elements of the result may depend at most on the first~$n$ elements of the input! Here is a typical continuous function that cannot be represented by a linear simulation. Given a stream~$s$ of natural numbers, look at the head of~$s$: \begin{itemize} \item if it is~$0$, output~$0$ and start again with the tail of the stream, \item if it is~$n>0$, remove the next~$n$ elements of the stream, output their sum, and start again. \end{itemize} For example, on the stream~$[0,1,2,3,4,5,6,\dots]$, the function outputs \[ \Big[0,\, 2,\, \overbrace{\underbrace{\big.4+5+6}_{=15}}^{\text{3 elements}},\, \overbrace{\underbrace{\big.8+9+\cdots+14}_{=77}}^{\text{7 elements}},\, \overbrace{\underbrace{\big.16+17+\cdots+31}_{=376}}^{\text{15 elements}},\, \dots\Big] = [0,2,15,77,376, \dots] \] We can generalize transducers by allowing them to work in the following manner. \begin{enumerate} \item When given some~$b_0$, \item they produce an~$a_0$ and feed it to their argument, \item they receive a~$d_0$ from their argument, \item and \emph{either go back to step~\textup{(2)} or} produce an~$e_0$ (output) for their result, \item start again at step (1)... \end{enumerate} In other words, steps (2) and (3) can occur several times in a row. We can even allow the transducer to go directly from step~(1) to step~(4), bypassing steps~(2) and~(3) entirely. Of course steps~(2) and~(3) should never happen infinitely many times consecutively. \begin{defi} Let~$w_1$ and~$w_2$ be indexed containers over~$I_1$ and~$I_2$, let~$R:\ensuremath{\mathsf{Pow}}(I_1 \times I_2)$ be a relation between states; we say that~$R$ is a \emph{general simulation from~$w_1$ to~$w_2$} if it is a linear simulation from~$w_1^\ast$ to~$w_2$.
\end{defi} In other words, $\langle R,\rho\rangle$ is a general simulation from~$⟨A_1,D_1,n_1⟩$ to~$⟨A_2,D_2,n_2⟩$ if \[\begin{array}{rcl} \rho : \PI{i_1:I_1}\PI{i_2:I_2}R(i_1,i_2) &\to& \PI{a_2 : A_2(i_2)} \\ && \SI{\alpha_1 : A_1^\ast(i_1)} \\ && \PI{\delta_1 : D_1^\ast(i_1,\alpha_1)} \\ && \SI{d_2 : D_2(i_2,a_2)} \\ && \quad R\big(i_1[\alpha_1/\delta_1], i_2[a_2/d_2]\big) \,. \end{array}\] Thanks to lemma~\ref{lem:eval_linear}, such a simulation automatically gives rise to a function from~$\NU{w_1^{\ast\bot}}$ to~$\NU{w_2^\bot}$. Fortunately, $\NU{w_1^{\ast\bot}}$ is, up to bisimulation, isomorphic to~$\NU{w_1^\bot}$. \begin{lem}\label{lem:*WTC} $\NU{w^{\ast\bot}}$ is a weakly terminal coalgebra for~$\Sem{w^\bot}$. \end{lem} \begin{proof}\checked From an element of~$i \IN \NU{w^{\ast\bot}}$ we can use the elimination rule and extract a member of~$\PI{\alpha:A^\ast(i)}\SI{\delta:D^\ast(i,\alpha)} i[\alpha/\delta]\IN\NU{w^{\ast\bot}} $. Given some~$a:A(i)$, we instantiate~$\alpha$ to~$\t{Node}(a,\LAM{d:D(i,a)} \t{Leaf}) : A^\ast(i)$ (a single~$a$, followed by nothing), and its responses are just responses to~$a$. This produces an element of~$\PI{a:A(i)}\SI{d:D(i,a)} i[a/d] \IN \NU{w^{\ast\bot}}$, i.e., an element of~$i\IN\Sem{w^\bot}(\NU{w^{\ast\bot}})$. We've just shown that~$\NU{w^{\ast\bot}} \sub \Sem{w^\bot}\NU{w^{\ast\bot}}$. We refer to the Agda code for the rest of the proof. \end{proof} \begin{cor}\label{cor:RTCbot_RTC_WTC} $\NU{w^{\ast\bot}}$ and~$\NU{w^\bot}$ are isomorphic up to bisimulation: \[ \NU{w^{\ast\bot}} \stackrel{\approx}\longleftrightarrow \NU{w^\bot} \,. \] \end{cor} \begin{proof} This is a direct consequence of lemma~\ref{lem:WTC}. \end{proof} By composing the function~$\NU{w^\bot} \sub \NU{w^{\ast\bot}}$ with the evaluation of linear simulations (lemma~\ref{lem:eval_linear}), we get an evaluation function for general simulations.
\begin{cor}\label{cor:eval*} Let~$w_1=⟨A_1,D_1,n_1⟩$ and~$w_2=⟨A_2,D_2,n_2⟩$ be indexed containers over~$I_1$ and~$I_2$, and~$(R,\rho) : w_1^\ast \linear w_2$. We have a function \[ \mathrm{eval}^\ast_R : \PI{i_1:I_1}\PI{i_2:I_2}R(i_1,i_2) \to (i_1 \IN \NU{w_1^\bot}) \to (i_2 \IN \NU{w_2^\bot}) \,. \] \end{cor} \subsubsection*{Sidenote on formal topology} When evaluating a general simulation~$R: w_1^\ast \linear w_2$ directly (i.e.\ not relying on corollary~\ref{cor:RTCbot_RTC_WTC}), we need to compute an element of~$i_2 \IN \NU{w_2^\bot}$ from \begin{itemize} \item a state~$i_1 : I_1$ and a state~$i_2 : I_2$, \item an element~$r : R(i_1,i_2)$, \item an element~$T_1 : i_1 \IN \NU{w_1^\bot}$. \end{itemize} We then need to find, for each branch~$a_2 : A_2(i_2)$, a corresponding~$d_2 : D_2(i_2,a_2)$ and a way to continue the computation. Given~$a_2$, by the general simulation property, we can compute an element of \[ \SI{\alpha_1 : A_1^\ast(i_1)} \PI{\delta_1 : D_1^\ast(i_1,\alpha_1)} \SI{d_2 : D_2(i_2,a_2)} R\big(i_1[\alpha_1/\delta_1] , i_2[a_2/d_2]\big) \] which can be rewritten as \[ i_1 \IN \Sem{w_1^\ast} \bigg( \LAM{i:I_1} \SI{d_2 : D_2(i_2,a_2)} R\big(i , i_2[a_2/d_2]\big) \bigg) \,. \] The crucial first step is matching the well-founded tree~$\alpha_1$ with the infinite tree~$T_1$. This can be done with the following lemma. \begin{lem}\label{lem:SambinExecution} Let~$I:\Set$, and~$w$ an indexed container over~$I$, and~$X$ a predicate over~$I$. The type~$w^\ast(X) \meets \NU{w^\bot} \to X \meets \NU{w^\bot}$ is inhabited. \end{lem} \begin{proof}\checked By matching dual quantifiers coming from~$i \IN w^\ast(X)$ and~$i\IN\nu_{w^\bot}$, we get an alternating sequence of moves~$a_i$ and counter moves~$d_i$ reaching a final state~$i_f$ in~$X$, together with an infinite tree in~$i_f \IN \NU{w^\bot}$. More formally, suppose~$w^\ast(X) \meets \NU{w^\bot}$ is inhabited, i.e., that we have $i : I$, $\langle \alpha, f \rangle : i\IN w^\ast(X)$ and a (non-well-founded) tree $T$ in~$i\IN\NU{w^\bot}$.
We examine~$\alpha$: \begin{itemize} \item $\alpha=\t{Leaf}(i)$: we have $f\,\star : i\IN X$, in which case~$X\meets\NU{w^\bot}$ is inhabited. \item $\alpha=\t{Node}(a,k)$: where~$k:\PI{d} i[a/d]\IN w^\ast(X)$. We can apply~$\nuElim$ to~$T:i \IN \NU{w^\bot}$ to obtain a function in~$\PI{a}\SI{d}i[a/d]\IN\NU{w^\bot}$. Applying that function to~$a:A(i)$, we get~$d:D(i,a)$ s.t. \begin{itemize} \item $i[a/d] \IN \NU{w^\bot}$, \item $k\,d : i[a/d] \IN w^\ast(X)$. \end{itemize} This pair inhabits~$w^\ast(X)\meets\NU{w^\bot}$, and by induction hypothesis, yields an inhabitant of~$X\meets\NU{w^\bot}$. \end{itemize} \end{proof} This formula is at the heart of formal topology, where it is called ``monotonicity''~\cite{DBLP:journals/apal/CoquandSSV03}. There,~$i \IN \Sem{w^\ast}(U)$ is read ``the basic open~$i$ is covered by~$U$'' (written~$i \triangleleft U$), and~$i\IN\NU{w^\bot}$ is read ``the basic open~$i$ contains a point'' and is written~$i\IN\mathsf{Pos}$. Applied to the present situation with~$X= \LAM{i:I_1} \SI{d_2 : D_2(i_2,a_2)} R\big(i , i_2[a_2/d_2]\big)$, we get an element of~$X\meets\NU{w_1^\bot}$, which is precisely given by \begin{itemize} \item a state~$i_1 : I_1$, \item a pair~$\langle d_2 , r\rangle$ with~$d_2:D_2(i_2,a_2)$ and~$r:R(i_1, i_2[a_2/d_2])$, \item an element~$T_1: i_1 \IN \NU{w_1^\bot}$. \end{itemize} In other words, we get the sought-after~$d_2$ (giving the first element of the~$a_2$ branch), together with enough data to continue the computation. \subsection{The Free Monad Construction as a Comonad} Composition of general simulations doesn't follow directly from composition of linear simulations because there is a mismatch on the middle interaction system: composing~$R : w_1^\ast \linear w_2$ and~$S:w_2^\ast \linear w_3$ to get a simulation from~$w_1^\ast$ to~$w_3$ isn't obvious. Fortunately, the operation~$w \mapsto w^\ast$ lifts to a comonad, and the composition corresponds to composition in its (co)Kleisli category.
\begin{prop} Let~$\ensuremath{\mathbb{C}}$ be a locally cartesian closed category; the operation~$P \mapsto P^\ast$ lifts to a monad on the category of polynomial functors over~$I$ with cartesian natural transformations between them. \end{prop} \begin{proof}\partlychecked The operation~$P\mapsto P^\ast$ goes from~$\ensuremath{\mathsf{End}}(\ensuremath{\mathbb{C}}/I)$, the category of polynomial endofunctors on~$\ensuremath{\mathbb{C}}/I$ to~$\ensuremath{\mathsf{Mnd}}(\ensuremath{\mathbb{C}}/I)$, the category of (polynomial) monads over~$\ensuremath{\mathbb{C}}/I$~\cite{GambHyl,polyMonads}. We write~$\ensuremath{\mathsf{Nat}}(F,G)$ for natural transformations from~$F$ to~$G$, and~$\ensuremath{\mathsf{Nat}}_{\ensuremath{\mathsf{Mnd}}}(F,G)$ for those transformations that respect the monad structures of~$F$ and~$G$. Writing~$\Alg{F}$ for the category of $F$-algebras, and~$\MAlg{F}$ (when~$F$ is a monad) for the category of~$F$-algebras that respect the monad operations, we have \begin{myequation} \ensuremath{\mathsf{Nat}}_\ensuremath{\mathsf{Mnd}}(P^\ast, M) &\cong& \MAlg{M} \longrightarrow_\ensuremath{\mathbb{C}} \MAlg{P^\ast} &\text{\footnotesize\cite[proposition (5.3)]{Barr70}} \\ &\cong& \MAlg{M} \longrightarrow_\ensuremath{\mathbb{C}} \Alg{P} &\text{\footnotesize\cite[proposition 17]{GambHyl}} \\ &\cong& \ensuremath{\mathsf{Nat}}(P, M) &\text{\footnotesize\cite[proposition (5.2)]{Barr70}} \\ \end{myequation} This shows that~$\ensuremath{\mbox{}\_\mbox{}}^\ast$ is left adjoint to the forgetful functor~$\mathcal{U} : \ensuremath{\mathsf{Mnd}}(\ensuremath{\mathbb{C}}/I) \to \ensuremath{\mathsf{End}}(\ensuremath{\mathbb{C}}/I)$, and makes the composition~$\mathcal{U}\ensuremath{\mbox{}\_\mbox{}}^\ast$ a monad. It only remains to show that the monad operations are strong cartesian transformations.
Since strong natural transformations from~$\Sem{w_1}$ to~$\Sem{w_2}$ correspond exactly to linear simulations~$(\equiv,\rho) : w_2 \linear w_1$~\cite{polyMonads,PolyFunctors}, it is enough to define the monad operations as simulations: \begin{itemize} \item $(\equiv,\varepsilon_w) : w^\ast \linear w$ \item and~$(\equiv,\delta_w) : w^\ast \linear w^{\ast\ast}$ \end{itemize} Those constructions are relatively straightforward in Agda: the first operation corresponds to embedding a single action~$a:A(i)$ into~$A^\ast(i)$ as~$\t{Node}(a,\LAM{d:D(i,a)} \t{Leaf})$. A direct definition of~$\delta_w$ is done by induction and can be found in the Agda code. Semantically speaking, it can be derived from lemma~\ref{lem:RTC_mu}: \begin{myequation} && \Sem{w}\Sem{w^\ast}(X) \sub \Sem{w^\ast}(X) &\text{\footnotesize(first point of lemma~\ref{lem:RTC_mu})} \\ &\to& \Sem{w^\ast}(X) \cup \Sem{w}\Sem{w^\ast}(X) \sub \Sem{w^\ast}(X) &\text{\footnotesize(because $\Sem{w^\ast}(X)\sub\Sem{w^\ast}(X)$)} \\ &\to& \Sem{w^\ast}\Sem{w^\ast}(X) \sub \Sem{w^\ast}(X) &\text{\footnotesize(second point of lemma~\ref{lem:RTC_mu})} \\ &\to& X \cup \Sem{w^\ast}\Sem{w^\ast}(X) \sub \Sem{w^\ast}(X) &\text{\footnotesize(because $X \sub \Sem{w^\ast}(X)$)} \\ &\to& \Sem{w^{\ast\ast}}(X) \sub \Sem{w^\ast}(X) &\text{\footnotesize(second point of lemma~\ref{lem:RTC_mu})} \\ \end{myequation} \end{proof} What makes this proposition interesting is that the constructions themselves are easy to define in Agda without identity types. A purely type theoretic proof that the constructions satisfy the monad laws couldn't be completed in Agda, because the overhead of reasoning with equality on dependent structures is \emph{very} tedious. The categorical proof guarantees that it holds in all models for extensional type theory, which is good enough for us.
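Stripped of indexing and dependency, $w \mapsto w^\ast$ is the familiar free monad construction, and the two operations above have well-known shadows: $\varepsilon_w$ corresponds to embedding a single action as a one-layer tree, and~$\delta_w$ to grafting. A hypothetical Python sketch (with the functor taken to be lists, purely for illustration; all names are ours):

```python
# Non-indexed Python shadow of w |-> w*: the free monad on the list
# functor.  A tree is either ('leaf', value) or ('node', [subtrees]).

def single(actions):
    # embed one action layer, as Node(a, lambda d. Leaf): play it, then stop
    return ('node', [('leaf', a) for a in actions])

def flatten(tree):
    # the multiplication (the shadow of delta_w): graft a tree whose
    # leaves carry trees into a single tree, by induction
    tag, payload = tree
    if tag == 'leaf':
        return payload            # the leaf's payload is itself a tree
    return ('node', [flatten(t) for t in payload])
```

The paper's simulations $\varepsilon_w$ and~$\delta_w$ run in the opposite direction, since identity linear simulations correspond to natural transformations the other way around.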
Because of the reversal (strong natural transformations from~$\Sem{w_1}$ to~$\Sem{w_2}$ are equivalent to identity linear simulations from~$w_2$ to~$w_1$), this translates to ``$w\mapsto w^\ast$ lifts to a comonad in the category of interaction systems with linear simulations''. \begin{cor}\label{cor:comonad} The operation~$w \mapsto w^*$ lifts to a comonad in the category of interaction systems over~$I$ and linear simulations. \end{cor} \subsection{Composition of General Simulations} Recall that a comonad may be given in triple form with the following data: \begin{itemize} \item a natural transformation~$\varepsilon_w : w^\ast \linear w$, \item a ``cobinding'' operation taking~$R : w_1^\ast \linear w_2$ to~$R^\sharp : w_1^\ast \linear w_2^\ast$ (defined as~$R^\ast \circ \delta_{w_1}$), \end{itemize} satisfying the following laws: \begin{enumerate} \item $\varepsilon_w^\sharp = \mathrm{id}_{w^\ast}$, \item $\varepsilon_{w_2} \circ R^\sharp = R$ for~$R:w_1^\ast \linear w_2$, \item $(S \circ R^\sharp)^\sharp = S^\sharp \circ R^\sharp$ for~$R:w_1^\ast \linear w_2$ and~$S:w_2^\ast \linear w_3$. \end{enumerate} We can now define \begin{defi}\label{def:general_comp} If~$R:w_1^\ast \linear w_2$ and~$S:w_2^\ast \linear w_3$, define~$S\bullet R$ as~$S\circ R^\sharp$. \end{defi} The comonad laws are then enough to prove that composition of general simulations corresponds to composition of their evaluations. \begin{prop}\label{prop:comp_general} Let~$w_1,w_2,w_3$ be interaction systems on~$I_1$, $I_2$ and~$I_3$, with~$R:w_1^\ast \linear w_2$ and~$S: w_2^\ast \linear w_3$; then we have \[ (\mathrm{eval}_S^\ast\,i_2\,i_3\,s) \circ (\mathrm{eval}_R^\ast\,i_1\,i_2\,r) \quad\approx\quad \mathrm{eval}_{S\bullet R}^\ast\,i_1\,i_3\,⟨i_2,⟨r,s⟩⟩ \] where \begin{itemize} \item $i_1:I_1$, $i_2:I_2$, $i_3:I_3$, \item $r:R(i_1,i_2)$ and~$s:S(i_2,i_3)$, \item and thus, $⟨i_2,⟨r,s⟩⟩: (S \bullet R)(i_1,i_3)$.
\end{itemize} \end{prop} \begin{proof}\partlychecked Because~$\NU{w_2^{\ast\bot}}$ is a weakly terminal coalgebra for~$\Sem{w_2^\bot}$, by lemma~\ref{lem:WTC}, we have a pair of morphisms $f_2 : i_2 \IN \NU{w_2^\bot} \to i_2 \IN \NU{w_2^{\ast\bot}}$ and $g_2 : i_2 \IN \NU{w_2^{\ast\bot}} \to i_2 \IN \NU{w_2^\bot}$ such that~$f_2g_2 \approx \mathrm{id}$ and~$g_2f_2 \approx \mathrm{id}$. Expanding the definitions (and omitting all non-essential parameters, except for the first and last lines), we have: \begin{myequation} && i_1 \IN \NU{w_1^\bot} \xrightarrow{ \mathrm{eval}^\ast_{S\bullet R}\,i_1\,i_3\,\langle i_2, \langle r,s\rangle\rangle} i_3 \IN \NU{w_3^\bot} \\ &=& \NU{w_1^\bot} \xrightarrow{ \mathrm{eval}_{S\bullet R}^\ast} \NU{w_3^\bot} \\ \text{\tiny(def of $\mathrm{eval}^\ast$)} &=& \NU{w_1^\bot} \xrightarrow{f_1} \NU{w_1^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{S\bullet R}} \NU{w_3^\bot} \\ \text{\tiny(def of~$\bullet$)} &=& \NU{w_1^\bot} \xrightarrow{f_1} \NU{w_1^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{S\circ R^\sharp}} \NU{w_3^\bot} \\ \text{\tiny(lemma~\ref{lem:composition})} &=& \NU{w_1^\bot} \xrightarrow{f_1} \NU{w_1^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{R^\sharp}} \NU{w_2^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{S}} \NU{w_3^\bot} \\ &=& \NU{w_1^\bot} \xrightarrow{f_1} \NU{w_1^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{R^\sharp}} \NU{w_2^{\ast\bot}} \xrightarrow{ \mathrm{id} } \NU{w_2^{\ast\bot}} \xrightarrow{ \mathrm{eval}_S} \NU{w_3^\bot} \\ \text{\tiny(remark above)} &\approx& \NU{w_1^\bot} \xrightarrow{f_1} \NU{w_1^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{R^\sharp}} \NU{w_2^{\ast\bot}} \xrightarrow{g_2} \NU{w_2^{\bot}} \xrightarrow{f_2} \NU{w_2^{\ast\bot}} \xrightarrow{ \mathrm{eval}_S} \NU{w_3^\bot} \\ \text{\tiny(remark below)} &=& \NU{w_1^\bot} \xrightarrow{f_1} \NU{w_1^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{R^\sharp}} \NU{w_2^{\ast\bot}} \xrightarrow{ \mathrm{eval}_{\varepsilon_{w_2}}} \NU{w_2^{\bot}} \xrightarrow{f_2} \NU{w_2^{\ast\bot}}
\xrightarrow{ \mathrm{eval}_S} \NU{w_3^\bot} & (*) \\ \text{\tiny(comonad law)} &=& \NU{w_1^\bot} \xrightarrow{f_1} \NU{w_1^{\ast\bot}} \xrightarrow{ \mathrm{eval}_R} \NU{w_2^\bot} \xrightarrow{f_2} \NU{w_2^{\ast\bot}} \xrightarrow{ \mathrm{eval}_S} \NU{w_3^\bot} \\ \text{\tiny(def of $\mathrm{eval}^\ast$)} &=& \NU{w_1^\bot} \xrightarrow{ \mathrm{eval}_R^\ast} \NU{w_2^\bot} \xrightarrow{ \mathrm{eval}_S^\ast} \NU{w_3^\bot} \\ &=& i_1 \IN \NU{w_1^\bot} \xrightarrow{ \mathrm{eval}^\ast_{R}\,i_1\,i_2\,r} i_2 \IN \NU{w_2^\bot} \xrightarrow{ \mathrm{eval}^\ast_{S}\,i_2\,i_3\,s} i_3 \IN \NU{w_3^\bot} \\ \end{myequation} The only missing part~$(*)$ is showing that~$\mathrm{eval}_{\varepsilon_{w_2}} \approx g_2$ where~$g_2 : i_2 \IN \NU{w_2^{\ast\bot}} \to i_2 \IN \NU{w_2^\bot}$ is the mediating morphism coming from the~$\Sem{w_2^\bot}$ coalgebra structure of~$i_2 \IN \NU{w_2^{\ast\bot}}$. This was formally proved in Agda. (The reason is essentially that both morphisms are, up to bisimilarity, the identity.) \end{proof} Note that this proof is a hybrid, with a formal part proved in Agda and a pen-and-paper part. \section{Layering and Infinite Trees} General simulations make it possible to represent all computable (continuous) functions on streams: for a given piece of the output, we only need to look at finitely many elements of the stream. A general simulation does that by asking for as many elements as it needs. However, for branching structures, general simulations are not enough: general simulations explore their arguments along a single branch $a_1/d_1, a_2/d_2, \dots$.
For example, the function summing each layer of a binary tree to form a stream is not representable by a general simulation: \[\begin{tikzpicture}[ level distance=1cm, level 1/.style={sibling distance=3cm}, level 2/.style={sibling distance=1.5cm}, level 3/.style={sibling distance=.9cm}, tree node/.style={draw=none}, every child node/.style={tree node}, baseline={([yshift=-1ex] current bounding box.center)}, ] \node[tree node] (Root) {.} child { node {.} child {node {.} child {node {\tiny\dots} edge from parent node[over] {\tiny 7}} child {node {\tiny\dots} edge from parent node[over] {\tiny 8}} edge from parent node[over] {\tiny 3}} child {node {.} child {node {\tiny\dots} edge from parent node[over] {\tiny 9}} child {node {\tiny\dots} edge from parent node[over] {\tiny 10}} edge from parent node[over] {\tiny 4}} edge from parent node[over] {\tiny1}} child { node {.} child {node {.} child {node {\tiny\dots} edge from parent node[over] {\tiny 11}} child {node {\tiny\dots} edge from parent node[over] {\tiny 12}} edge from parent node[over] {\tiny 5}} child {node {.} child {node {\tiny\dots} edge from parent node[over] {\tiny 13}} child {node {\tiny\dots} edge from parent node[over] {\tiny 14}} edge from parent node[over] {\tiny 6}} edge from parent node[over] {\tiny2}}; \end{tikzpicture} \quad\mapsto\quad \begin{tikzpicture}[ level distance=1cm, level 1/.style={sibling distance=3cm}, level 2/.style={sibling distance=1.5cm}, level 3/.style={sibling distance=.9cm}, tree node/.style={draw=none}, every child node/.style={tree node}, baseline={([yshift=-1ex] current bounding box.center)}, ] \node[tree node] (Root) {.} child { node {.} child {node {.} child {node {\tiny\dots} edge from parent node[over] {\tiny 84}} edge from parent node[over] {\tiny 18}} edge from parent node[over] {\tiny 3}} ; \end{tikzpicture} \] We want a notion of ``backtracking'' transducer that can explore several branches.
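The layer-summing function itself is easy to program once whole layers are accessible; here is a hypothetical Python sketch of ours (with labels on nodes rather than edges, so the first output is the root label) using lazily generated infinite binary trees:

```python
from itertools import islice

def tree(k):
    # lazily generated infinite binary tree, labelled breadth-first
    # as in the picture: node k has children 2k+1 and 2k+2
    return (k, lambda: tree(2 * k + 1), lambda: tree(2 * k + 2))

def layer_sums(t):
    # sketch of the layer-summing function: producing the k-th output
    # requires a whole layer of 2**k labels, i.e. visiting every
    # branch -- which a general simulation cannot do
    layer = [t]
    while True:
        yield sum(label for (label, _, _) in layer)
        layer = [child() for (_, l, r) in layer for child in (l, r)]
```

On `tree(0)` the output starts with the root label $0$, followed by the sums $3$, $18$, $84$ from the picture.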
Describing such a transducer is difficult in type theory if we try to refrain from using equality. \subsection{Layering} \label{sub:layering} To give a simulation access to several branches, we are going to replace a branching structure by the stream of its ``layers''. Then, a simulation will be able to access as many layers as it needs to get information about as many branches as it needs. \begin{defi} Given an indexed container~$w = ⟨ A,D,n⟩ $ over~$I:\Set$ we define an indexed container~$w^\sharp = ⟨A^\sharp,D^\sharp,n^\sharp⟩$ on~$I$ by induction-recursion on~$\ensuremath{\mathsf{Fam}}(I)$. \begin{itemize} \item given~$i:I$, the index set~$A^\sharp(i):\Set$ is defined with \[ \Rule{}{\t{Leaf} : A^\sharp(i)} \qquad \Rule{ \alpha : A^\sharp(i) \qquad l : \PI{\beta : D^\sharp(i,\alpha) } A(n^\sharp\,i\,\alpha \, \beta) } { (\alpha \triangleleft l) : A^\sharp(i) } \,; \] \item for~$\alpha : A^\sharp(i)$, the family $\big\langle D^\sharp(i,\alpha):\Set, n^\sharp\,i\,\alpha : D^\sharp(i,\alpha) \to I\big\rangle$ is defined with \begin{itemize} \item $D^\sharp(i,\t{Leaf}) = {\bf 1}$ and $D^\sharp(i,\alpha \triangleleft l) = \SI{\beta : D^\sharp(i,\alpha)} D\big(n^\sharp\,i\,\alpha\,\beta,l\,\beta\big)$, \item $n^\sharp\,i\,\t{Leaf}\,\star = i$ and $n^\sharp\,i\,(\alpha \triangleleft l)\,⟨ \beta,d⟩ = n\ (n^\sharp\,i\,\alpha\,\beta)\ (l\,\beta)\ d$. \end{itemize} \end{itemize} The Agda definition looks like
\begin{allttt}
mutual
  data A# : I → Set where
    Leaf : { i : I } → A# i
    \_◂\_ : {i : I} → (t : A# i) → ((b : D# i t) → A (n# t b)) → A# i

  D# : (i : I) → A# i → Set
  D# i Leaf = One
  D# i (t ◂ l) = Σ (D# i t) (λ ds → D (n# t ds) (l ds))

  n# : {i : I} → (t : A# i) → D# i t → I
  n# {i} Leaf * = i
  n# {i} (t ◂ l) ( ds , d ) = n {n# {i} t ds} (l ds) d
\end{allttt}
\end{defi} This definition generalizes the example of complete binary trees as an inductive recursive definition from page~\pageref{ex:TreeIR} to arbitrary dependent~$A$-labelled and~$D$-branching trees.
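In the simplest non-dependent case, with binary branching ($D$ constantly~${\bf 2}$) and a fixed label set~$A$, an element of~$A^\sharp$ is just the list of its layers, and~$D^\sharp$ is the set of bit-paths to its current leaves. A hypothetical Python shadow of the definition (`LEAF`, `grow` and `leaves` are our names mirroring $\t{Leaf}$, $\triangleleft$ and~$D^\sharp$):

```python
from itertools import product

LEAF = []                    # the tree with no layers yet (Leaf)

def leaves(alpha):
    # D#: current leaf positions, as bit-paths of length len(alpha)
    return list(product((0, 1), repeat=len(alpha)))

def grow(alpha, label_of):
    # alpha <| l : attach one new label at every current leaf
    return alpha + [{p: label_of(p) for p in leaves(alpha)}]
```

In the dependent setting of the paper, the set of leaf positions and the type of each new label both depend on the tree built so far, which is why induction-recursion is needed.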
Given such a tree~$\alpha : A^\sharp(i)$,~$D^\sharp(i,\alpha)$ is the set of its terminal leaves. A new layer assigns a new element~$a:A(\dots)$ at each leaf, and~$\alpha \triangleleft l$ is the new, increased tree. Of particular interest is the interaction system~$w^{\bot\sharp}$. An element of~$A^{\bot\sharp}(i)$ is a complete tree of finite depth where branching occurs at~$A(\ensuremath{\mbox{}\_\mbox{}})$ and labels are chosen in~$D(\ensuremath{\mbox{}\_\mbox{}},\ensuremath{\mbox{}\_\mbox{}})$. In particular, an element of~$D^{\bot\sharp}(i,\alpha)$ is a finite sequence of actions. \medbreak We can now construct a ``non-branching'' structure from an arbitrary interaction system: \begin{defi} Given~$w$ an interaction system on~$I$ and a fixed~$i:I$, we define a new interaction system~$\ALayered{w,i}$: \begin{itemize} \item states are~$A^\sharp(i)$, \item actions in state~$\alpha:A^\sharp(i)$ are given by~$\PI{\beta : D^\sharp(i,\alpha)}A(n^\sharp \, i \, \alpha \, \beta)$, i.e., ``layers'' on top of~$\alpha$, \item responses are trivial: there is only ever one possible response:~$\star$, \item the next state after action~$l$ in state~$\alpha$ (and response~$\star$) is~$\alpha\triangleleft l$. \end{itemize} \end{defi} States of this new interaction system record a complete tree of finite depth. Moreover, since it has trivial responses, an element of~$\t{Leaf} \IN \NU{\ALayered{w,i}}$ is not very different from a (dependent) stream of layers, and so, from an infinite tree in~$i \IN \NU{w}$. This is generalized and formalized in the next lemmas. 
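For plain infinite binary trees the remark can be made concrete: representing a tree by the function sending a depth~$k$ to its $k$-th layer, the two directions of the correspondence are easy to program. A hypothetical Python sketch of ours (illustration only, not the paper's Agda development):

```python
def to_layers(t):
    # from a lazy tree (label, left-thunk, right-thunk) to the
    # function sending k to the k-th layer (a list of 2**k labels)
    def layers(k):
        layer = [t]
        for _ in range(k):
            layer = [c() for (_, l, r) in layer for c in (l, r)]
        return [label for (label, _, _) in layer]
    return layers

def from_layers(layers):
    # from a stream of layers back to a lazy tree: the node at
    # breadth-first position (depth, idx) has label layers(depth)[idx]
    def node(depth, idx):
        return (layers(depth)[idx],
                lambda: node(depth + 1, 2 * idx),
                lambda: node(depth + 1, 2 * idx + 1))
    return node(0, 0)
```

The two functions are mutually inverse up to observation of finitely many layers, which is the non-dependent shadow of the weak terminality argument.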
\begin{lem} The predicate $\PI{\beta :D^\sharp(i,\alpha)} (n^\sharp\,i \, \alpha \, \beta) \IN \NU{w}$,\footnote{An element of~$\PI{\beta :D^\sharp(i,\alpha)} (n^\sharp\,i \, \alpha \, \beta) \IN \NU{w}$ is a way to extend the complete tree~$\alpha$ of finite depth to a full infinite complete tree by appending an infinite tree at each leaf.} depending on $\alpha : A^\sharp(i)$, is a weakly terminal coalgebra for~$\Sem{\ALayered{w,i}}$. \end{lem} \begin{proof}\checked The proof corresponds to the above remark that an infinite tree is equivalently given by the infinite stream of its layers. It has been formalized in Agda. \end{proof} \begin{cor}\label{cor:layeringlemma} Given~$\alpha:A^\sharp(i)$, there are functions \[ \PI{\beta :D^\sharp(i,\alpha)} (n^\sharp\,i \, \alpha \, \beta) \IN \NU{w} \quad\stackrel\approx{\longleftrightarrow}\quad \alpha \IN \NU{\ALayered{w,i}} \] that are, up to bisimulation, inverse to each other. In particular, there are functions \[ f : i\IN\NU{w} \to \t{Leaf} \IN \NU{\ALayered{w,i}} \quad \text{and} \quad g : \t{Leaf} \IN \NU{\ALayered{w,i}} \to i\IN\NU{w} \] such that~$fg \approx \mathrm{id}$ and~$gf\approx\mathrm{id}$. \end{cor} \begin{proof} This is a direct consequence of lemma~\ref{lem:WTC}. \end{proof} Interaction systems with trivial actions or reactions are special in that duality is essentially involutive, in the sense that~$w$ and~$w^{\bot\bot}$ are isomorphic: \begin{itemize} \item there are bijections~$f_i$ between~$A(i)$ and~$A^{\bot\bot}(i)$, \item there are bijections~$g_{i,a}$ between~$D(i,a)$ and~$D^{\bot\bot}(i, f_i\,a)$, \item the next state functions are compatible with them: $n(i,a,d) \equiv n^{\bot\bot}(i,f_i\,a,g_{i,a}\,d)$. \end{itemize} Rather than developing this notion, we will only state and prove the one consequence we'll need. \begin{lem} Suppose~$w$ has trivial (singleton) reactions; then $\NU{w}$ is a weakly terminal coalgebra for~$\Sem{w^{\bot\bot}}$.
\end{lem} \begin{proof}\checked \end{proof} \begin{cor}\label{cor:duality_update} There are functions \[ \varphi : \t{Leaf} \IN\NU{\ALayered{w,i}} \to \t{Leaf} \IN \NU{\ALayered{w,i}^{\bot\bot}} \quad \text{and} \quad \psi : \t{Leaf} \IN \NU{\ALayered{w,i}^{\bot\bot}} \to \t{Leaf} \IN\NU{\ALayered{w,i}} \] such that~$\varphi\psi \approx \mathrm{id}$ and~$\psi\varphi\approx\mathrm{id}$. \end{cor} \subsection{Continuous Functions} We now have everything we need. \begin{defi} A layered simulation from~$w_1$ to~$w_2$ at states~$i_1:I_1$ and~$i_2:I_2$ is a simulation from~$\ALayered{w_1^\bot,i_1}^\bot$ to~$\ALayered{w_2^\bot,i_2}^\bot$. A general layered simulation is a simulation from~$\ALayered{w_1^\bot,i_1}^{\bot\ast}$ to~$\ALayered{w_2^\bot,i_2}^\bot$. \end{defi} \begin{lem}\label{lem:eval_layer} For every general layered simulation~$R:\ALayered{w_1^\bot,i_1}^{\bot\ast} \linear \ALayered{w_2^\bot,i_2}^\bot$ there is an evaluation function \[ \mathrm{eval}_R : R(\t{Leaf}, \t{Leaf}) \to i_1 \IN \NU{w_1^\bot} \to i_2 \IN \NU{w_2^\bot} \] \end{lem} \begin{proof} We have \begin{myequation} i_1 \IN \NU{w_1^\bot} &\xrightarrow{\quad f_1\quad }& \t{Leaf} \IN \NU{\ALayered{w_1^\bot,i_1}} &\text{\footnotesize(corollary~\ref{cor:layeringlemma})} \\ &\xrightarrow{\ \varphi \ }& \t{Leaf} \IN \NU{\ALayered{w_1^\bot,i_1}^{\bot\bot}} &\text{\footnotesize(corollary~\ref{cor:duality_update})} \\ &\xrightarrow{\ \mathrm{eval}^\ast_R\,\t{Leaf}\,\t{Leaf}\,r\ }& \t{Leaf} \IN \NU{\ALayered{w_2^\bot,i_2}^{\bot\bot}} &\text{\footnotesize(corollary~\ref{cor:eval*})} \\ &\xrightarrow{\ \psi \ }& \t{Leaf} \IN \NU{\ALayered{w_2^\bot,i_2}} &\text{\footnotesize(corollary~\ref{cor:duality_update})} \\ &\xrightarrow{\quad g_2\quad }& i_2 \IN \NU{w_2^\bot} &\text{\footnotesize(corollary~\ref{cor:layeringlemma})} \\ \end{myequation} \end{proof} \begin{lem} If composition of general layered simulations is defined as general composition of layered simulations, then evaluation of a composition is
bisimilar to the composition of their evaluations. \end{lem} \begin{proof} The composition of evaluations gives \[ i_1 \IN \NU{w_1^\bot} \xrightarrow{f_1} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\varphi_1} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\mathrm{eval}^\ast_R} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\psi_2} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{g_2} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{f_2} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\varphi_2} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\mathrm{eval}^\ast_S} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\psi_3} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{g_3} i_3 \IN \NU{w_3^\bot} \,. \] Since~$f_2 g_2 \approx \mathrm{id}$ (corollary~\ref{cor:layeringlemma}) and~$\varphi_2\psi_2\approx\mathrm{id}$ (corollary~\ref{cor:duality_update}), this whole composition is bisimilar to \[ i_1 \IN \NU{w_1^\bot} \xrightarrow{f_1} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\varphi_1} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\mathrm{eval}^\ast_R} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\mathrm{eval}^\ast_S} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\psi_3} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{g_3} i_3 \IN \NU{w_3^\bot} \] and thus, by proposition~\ref{prop:comp_general}, to \[ i_1 \IN \NU{w_1^\bot} \xrightarrow{f_1} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\varphi_1} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\mathrm{eval}^\ast_{S\bullet R}} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{\psi_3} \ensuremath{\mbox{}\_\mbox{}} \xrightarrow{g_3} i_3 \IN \NU{w_3^\bot} \,. \] This corresponds to evaluation of~$S \bullet R : \ALayered{w_1^\bot, i_1}^{\bot\ast} \linear \ALayered{w_3^\bot, i_3}^\bot$ as defined in lemma~\ref{lem:eval_layer}.
\end{proof} \section*{Concluding Remarks} \subsection*{Internal Simulations} It is possible to internalize the notion of linear simulation by defining an interaction system~$w_1\linear w_2$ on~$I_1\times I_2$ satisfying~``$R$ is a linear simulation from~$w_1$ to~$w_2$ iff~$R\sub \Sem{w_1\linear w_2}(R)$''~\cite{hancock-apal06,polyDiagrams}. \begin{defi} If~$w_1$ and $w_2$ are interaction systems on~$I_1$ and~$I_2$, then the interaction system~$w_1 \linear w_2$ on~$I_1 \times I_2$ is defined by \begin{itemize} \item $\Big. A\big(\langle i_1,i_2\rangle\big) = \SI{f:A_2(i_2) \to A_1(i_1)}\PI{a_2:A_2(i_2)} D_1\big(i_1,f(a_2)\big) \to D_2(i_2,a_2)$, \item $\Big. D\big(\langle i_1,i_2\rangle, \langle f, \varphi\rangle\big) = \SI{a_2:A_2(i_2)} D_1\big(i_1,{f(a_2)}\big)$, \item $\Big. n\,\langle i_1,i_2\rangle\,\langle f, \varphi\rangle\,\langle a_2, d_1\rangle = \big\langle i_1[f(a_2)/d_1], i_2[a_2/\varphi(a_2)(d_1)]\big\rangle$. \end{itemize} \end{defi} The resulting structure is nicer in the opposite category because then,~$w_1\linear w_2$ generalizes duality (definition~\ref{def:duality}) in the sense that~$w^\bot$ is the same as~$w \linear \bot$, where~$\bot$ is the trivial interaction system (on~$I=\{\star\}$, with a single action and a single reaction), and~$\linear$ is right adjoint to the pointwise cartesian product of interaction systems. Linear simulations thus give rise to a symmetric monoidal closed category. That a simulation~$R:w_1 \linear w_2$ is nothing more than a coalgebra for~$\Sem{w_1 \linear w_2}$ means that, up to bisimilarity, its elements can be seen as elements of~$\NU{w_1\linear w_2}$. However, even if~$w_1$ and~$w_2$ are finitary,~$w_1^\ast$ or~$\ALayered{w_1^\bot}^\ast$ aren't. We cannot really represent higher-order continuous functions in this way. \subsection*{Thoughts about Completeness} Formally stating and proving completeness of this representation remains to be done. It is, in a way, intuitively obvious, but there are several subtleties.
\begin{itemize} \item Semantically, when interpreting all the constructions in the category of sets and functions,\footnote{Proving such theorems in Agda isn't possible, as there can be non-computable continuous functions. We would at least need to add hypotheses about computability of moduli of continuity and other similar hypotheses.} every function that is continuous for the ``wild'' topology from section~\ref{sub:wild} is representable as a general layered simulation. This would be analogous to theorem~\ref{thm:stream_transducer}, and the proof would go as follows: \begin{itemize} \item we generalize theorem~\ref{thm:stream_transducer} to dependent streams (i.e.\ consider interaction systems with trivial actions) and show that in this case, any continuous function from~$\NU{w_1^\bot}(i_1)$ to~$\NU{w_2^\bot}(i_2)$ is represented by an element of~$\NU{w_1^\ast \linear w_2}(i_1,i_2)$. \item we use corollary~\ref{cor:duality_update} showing that any~$\NU{w^\bot}(i)$ (without restriction) is isomorphic to some~$\NU{w'^\bot}(j)$ where~$w'$ has trivial actions. \end{itemize} \item In particular, every function continuous for the natural topology (section~\ref{sub:natural}) between greatest fixed points of finitary interaction systems\footnote{An interaction system is finitary if its sets of actions are finite.} is thus representable as a general layered simulation. \item If only the codomain is finitary, all continuous functions between fixed points of non-finitary interaction systems are representable by simulations of the form~$\ALayered{w_1^\bot,i_1}^{\bot\ast} \linear w_2$. This is for example the case of the naturally continuous but non-wildly continuous function from page~\pageref{ex:natural_not_wild}. However, we don't know how to compose such simulations.
\end{itemize} \subsection*{Notes about Formalization} Some proofs haven't been formalized in Agda, most notably: \begin{itemize} \item the proof of lemma~\ref{lem:bisim_id} or of assumption~\ref{asm:bisim}, \item the proof of corollary~\ref{cor:comonad}. \end{itemize} We think the second holds in intensional type theory with function extensionality but were unable to complete the proof. As it stands, we only know it holds semantically by categorical reasoning. (It thus holds in extensional type theory.) The situation is subtle with lemma~\ref{lem:bisim_id}. It is possible it can be bypassed entirely. A direct proof of corollary~\ref{cor:layeringlemma} was in fact checked in Agda (with the \t{--with-K} flag), but its complexity convinced us to base similar proofs on lemma~\ref{lem:bisim_id}. If no other way is known, it might mean we need to extend our type theory to the whole of cubical type theory, where assumption~\ref{asm:bisim} holds in a strong sense (bisimilarity is the same as path equality). Considering the degree of obfuscation for many of the formal proofs, it is tempting to investigate extensions of type theory to simplify reasoning. Currently, the best candidate seems to be cubical type theory~\cite{cubicalTT} where we can use a kind of ``heterogeneous equality'' and have equalities ``over'' equalities~\cite{brunerie}.
A relation \[ R : \SI{i_1:I} \SI{i_2:I} \SI{\rho_i:i_1 \equiv i_2} (i_1 \IN \NU{w_1}) \times (i_2 \IN \NU{w_2}) \] would then be a bisimulation if \begin{myequation} R\big(\langle i_1 , i_2, \rho_i , T_1 , T_2\rangle\big) &\quad\to\quad& \SI{\rho_a : a_1 \equiv_{\rho_i} a_2} \\ && \PI{d_1 : D(i,a_1)} \PI{d_2 : D(i,a_2)} \PI{\rho_d : d_1 \equiv_{\rho_a} d_2} \\ && R\big(\langle i_1[a_1/d_1], i_2[a_2/d_2], \cdots, T_1', T_2'\rangle\big) \end{myequation} where~$a_1$ and~$T_1'$ come from unfolding~$T_1$ (and similarly for~$a_2$,~$T_2'$),~$\equiv_\rho$ denotes the equality type \emph{over equality~$\rho$}, and the~``$\cdots$'' denotes the equality between~$i_1[a_1/d_1]$ and~$i_2[a_2/d_2]$ coming from~$\rho_i$,~$\rho_a$ and~$\rho_d$. This is left for future work. \subsection*{Acknowledgements} I really want to thank Peter Hancock for all the discussions that led to this paper. His knowledge of type theory and inductive-recursive definitions, together with his reluctance to give in to the temptation of equality, are at the root of this work.
\section{Introduction\label{sec:Intro}} Green functions are objects of fundamental importance in field theories, since they represent the fundamental solution of linear inhomogeneous partial differential equations (PDEs) from which any particular solution can be obtained via convolution with the source term \cite{Green}. Moreover, Green functions are the basis of important numerical methods for boundary value problems, such as the boundary element method \cite{Becker92}, and they provide ``flexible'' boundary conditions for atomistic simulations \cite{Trinkle2008}. In the context of linear elasticity, the Green function is a tensor-valued function of rank two, also known as the Green tensor. When contracted with a concentrated force acting at the origin, the Green tensor yields the displacement field in an infinite elastic medium. \cite{Kelvin} first derived the closed-form expression of the classical Green tensor for isotropic materials. For anisotropic materials, \cite{LR} and \cite{Synge} were able to derive the Green tensor in terms of an integral expression over the equatorial circle of the unit sphere in Fourier space. \cite{Barnett} extended this result to the first two derivatives of the Green tensor, and showed that the line-integral representation is well suited for numerical integration \textcolor{magenta}{(see also \cite{Bacon,Teodosiu})}. The Green tensor and its derivatives are singular at the origin, ultimately as a consequence of the lack of intrinsic length scales in the classical theory of elasticity. The unphysical singularities in the elastic fields derived from the Green tensor hinder their applicability in nano-mechanics, including the elastic theory of defects such as cracks, dislocations and inclusions \cite{Mura,Askes2011}.
Generalized elastic field theories with intrinsic length scales have been proposed in the context of micro-continuum theories \cite{Eringen1999}, non-local theories \cite{Eringen2002}, and gradient theories \cite{Kroner1963,Mindlin64,Mindlin68,Mindlin72,MiEs1968}. In particular, Mindlin's anisotropic strain gradient elasticity has received renewed attention as a tool to solve engineering problems at the micro- and nano-scales for realistic materials \cite{polizzotto2018anisotropy}. Only recently, the structure of the gradient-elastic tensor has been rationalized for different material symmetry classes \cite{Auffray13}, and its atomistic representation and ensuing determination from interatomic potentials have become available \cite{Admal16}. The number of independent strain gradient elastic moduli ranges from 5 for isotropic materials to 171 in the general case of triclinic materials. While simple expressions of the Green tensor exist for the isotropic case \cite{Rogula73,LP18}, and for simplified anisotropic theories \cite{LP14,LP15}, the Green tensor of the fully anisotropic theory of Mindlin's strain gradient elasticity has remained so far a rather elusive object. \cite{Rogula73} provided an expression for the Green tensor in gradient elasticity of arbitrary order, which involves a sum of terms associated with the roots of a certain characteristic polynomial. However, such a representation renders its numerical implementation rather impractical, and it conceals the mathematical structure of the Green tensor in relationship to its classical counterpart. The objective of this paper is to derive a simple representation of the Green tensor of Mindlin's anisotropic first strain gradient elasticity, whose integral kernel involves only matrix operations suitable for efficient numerical implementation.
Following a brief summary of Mindlin's anisotropic first strain gradient elasticity in section \ref{MAGEOS}, we derive the matrix representation of the Green tensor in section \ref{GTAHN}. It is shown that the Green tensor is non-singular at the origin, while its first gradient is finite but discontinuous at the origin. The classical tail of the Green tensor, as well as its classical limit for vanishing gradient parameters, are easily recovered from the non-singular expression. In section \ref{specialCases} we demonstrate that the Green tensor generalizes other expressions found in the literature. In section \ref{Kelvin} we consider the Kelvin problem and compare the prediction of the Green tensor to atomistic calculations. \section{Mindlin's anisotropic gradient elasticity \label{MAGEOS}} Let us consider an infinite elastic body in three-dimensional space and assume that the gradient of the displacement field $\bm u$ is additively decomposed into an elastic distortion tensor $\bm\beta$ and an inelastic\footnote{ The inelastic distortion comprises plastic effects, and is typically an incompatible field. When the inelastic distortion is absent the elastic distortion is compatible. } \textcolor{magenta}{eigen-distortion} tensor $\bm \beta^*$: \begin{align} \label{uIJ} \partial_ju_i=\beta_{ij}+\beta^*_{ij}\, . \end{align} In the linearized theory of Mindlin's form-II first strain gradient elasticity \cite{Mindlin64,Mindlin68,MiEs1968,Mindlin72}, the strain energy density of a homogeneous and centrosymmetric\footnote{Due to the centrosymmetry, there is no coupling between $e_{ij}$ and $\partial_m e_{kl}$.} material is given by \begin{align} \label{W-an} \mathcal{W}(\bm e,\bm\nabla\bm e) =\frac{1}{2}\, \mathbb{C}_{ijkl}e_{ij}e_{kl}+\frac{1}{2}\, \mathbb{D}_{ijmkln}\partial_m e_{ij} \partial_n e_{kl}\, .
\end{align} The strain energy density \eqref{W-an} is a function of the infinitesimal elastic strain tensor \begin{align} e_{ij}=\frac{1}{2}\left(\beta_{ij}+\beta_{ji}\right)\, , \end{align} and of its gradient $e_{ij,m}$. The tensor $\mathbb{C}$ is the standard rank-four tensor of elastic constants. By virtue of the symmetries \begin{align} \label{C} \mathbb{C}_{ijkl}=\mathbb{C}_{jikl}=\mathbb{C}_{ijlk}=\mathbb{C}_{klij}\,, \end{align} it possesses up to 21 independent constants \textcolor{magenta}{with units of $\text{eV}/\textup{\AA}^3$}. The tensor $\mathbb{D}$ is the rank-six tensor of strain gradient elastic constants, with symmetries \begin{align} \label{D} \mathbb{D}_{ijmkln}=\mathbb{D}_{jimkln}=\mathbb{D}_{ijmlkn}=\mathbb{D}_{klnijm}\,. \end{align} \textcolor{magenta}{It has units of $\text{eV}/\textup{\AA}$}. In the general case of triclinic materials the number of independent constants in the tensor $\mathbb{D}$ is equal to 171 \cite{Auffray13}. The quantities conjugate to the elastic strain tensor and its gradient are the Cauchy stress tensor $\bm\sigma$ and the double stress tensor $\bm\tau$, respectively. These are defined as: \begin{align} \label{CR1} \sigma_{ij}&=\frac{\partial \mathcal{W}}{\partial e_{ij}} =\mathbb{C}_{ijkl}e_{kl}\, , \\ \label{CR2} \tau_{ijm}&=\frac{\partial \mathcal{W}}{\partial (\partial_m e_{ij})} =\,\mathbb{D}_{ijmkln} e_{kl,n}\, . \end{align} In the presence of a body force density $\bm b$, the static Lagrangian density of the system becomes: \begin{align} \mathcal{L}=-\mathcal{W}-\mathcal{V} =-\frac{1}{2}\left(\mathbb{C}_{ijkl}\beta_{ij}\beta_{kl}+\mathbb{D}_{ijmkln} \beta_{ij,m} \beta_{kl,n}\right)+u_ib_i\, , \end{align} where \begin{align} \label{V} \mathcal{V}=-u_i b_i \end{align} is the potential of the body force.
The condition of static equilibrium is expressed by the Euler-Lagrange equation \begin{align} \label{EL-u} \frac{\delta \mathcal{L}}{\delta u_i}=\frac{\partial \mathcal{L}}{\partial u_i} -\partial_j\, \frac{\partial \mathcal{L}}{\partial (\partial_j u_i)} +\partial_k \partial_j\, \frac{\partial \mathcal{L}}{\partial (\partial_k \partial_j u_i)}=0\,. \end{align} In terms of the Cauchy and double stress tensors, Eq.~\eqref{EL-u} takes the following form \cite{Mindlin64}: \begin{align} \label{MindlinBE} \partial_j\big(\sigma_{ij}-\partial_m\tau_{ijm}\big)+b_i=0\, . \end{align} Using Eqs.~\eqref{uIJ}, \eqref{CR1}, and \eqref{CR2}, Eq.~\eqref{MindlinBE} can be cast into the following equation for the displacements: \begin{align} \label{u-LL} L^{\text{M}}_{ik}\, u_k+f_i=0\,. \end{align} In Eq.~\eqref{u-LL}, $L^{\text{M}}_{ik}$ denotes the differential operator of Mindlin's anisotropic first strain gradient elasticity \begin{align} \label{LM} L^{\text{M}}_{ik}=\mathbb{C}_{ijkl}\partial_j\partial_l-\mathbb{D}_{ijmkln}\partial_j\partial_l\partial_m\partial_n\, , \end{align} while \begin{align} \label{effectiveF} f_i=b_i-\left[\mathbb{C}_{ijkl}\partial_j-\mathbb{D}_{ijmkln}\partial_j\partial_m\partial_n\right]\beta^*_{kl} \end{align} is the forcing term. Note that the second term on the right-hand side of Eq.~\eqref{effectiveF} is an ``effective'' internal force due to the inelastic eigen-distortion, and arises in the presence of material defects, such as inclusions, cracks, and dislocations. This term is the gradient version of the internal force in Mura's eigen-strain \textcolor{magenta}{theory} \cite{Mura}. \section{The Green tensor of Mindlin's first strain gradient elasticity \label{GTAHN}} In this section, we derive the three-dimensional Green tensor of the operator~\eqref{LM}.
To this end, we seek the solution to Eq.~\eqref{u-LL} in the form \begin{align} \label{convolutionU} u_k=G_{kj}*f_j\, , \end{align} where the symbol $*$ indicates convolution over the three-dimensional space, and $\bm G$ is the Green tensor of Mindlin's anisotropic differential operator $\bm L^M$. Substituting Eq.~\eqref{convolutionU} into Eq.~\eqref{u-LL}, one finds that $\bm G$ satisfies the following inhomogeneous PDE: \begin{align} \label{G-LL} \left[\mathbb{C}_{ijkl}\partial_j\partial_l-\mathbb{D}_{ijmkln}\partial_j\partial_l\partial_m\partial_n\right]G_{km}+\delta_{im} \delta=0\,. \end{align} In Eq.~\eqref{G-LL}, $\delta_{ij}$ is the Kronecker symbol, while $\delta$ is the three-dimensional Dirac $\delta$-distribution. Taking the Fourier transform\footnote{ The Fourier transform and its inverse are defined as, respectively~\cite{Wl}: \begin{align} \hat{f}(\bm k)&=\int_{\mathbb{R}^3} f(\bm x)\, \text{e}^{-\text{i}\bm k\cdot\bm x}\dV\, ,\\ f(\bm x)&=\frac{1}{(2\pi)^3}\int_{\mathbb{R}^3} \hat{f}(\bm k)\, \text{e}^{\text{i}\bm k\cdot\bm x}\, \text{d}\hat{V}\,. \end{align} For a real-valued function, the inverse Fourier transform is \begin{align} f(\bm x)=\frac{1}{(2\pi)^3}\int_{\mathbb{R}^3} \hat{f}(\bm k)\, \cos\left(\bm k\cdot\bm x\right)\, \text{d}\hat{V}\,. \end{align} } of Eq.~\eqref{G-LL}, we obtain \textcolor{magenta}{the following} algebraic equation for the Green tensor $\hat{G}_{kj}(\bm k)$ in Fourier space \begin{align} \label{LN-FT} \left[\mathcal{C}_{ik}(\bm k) + \mathcal{D}_{ik}(\bm k)\right]\hat{G}_{kj}(\bm k)=\delta_{ij}\,, \end{align} where \begin{align} \mathcal{C}_{ik}(\bm k) &=\mathbb{C}_{ijkl} k_j k_l\, ,\\ \mathcal{D}_{ik}(\bm k) &=\mathbb{D}_{ijmkln} k_j k_lk_m k_n \end{align} are symmetric matrices. 
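In a numerical implementation, the symmetric matrices $\mathcal{C}_{ik}(\bm k)$ and $\mathcal{D}_{ik}(\bm k)$ defined above are single tensor contractions of the moduli with the wave vector. A minimal NumPy sketch (the function name is ours, not part of the paper):

```python
import numpy as np

def acoustic_matrices(C, D, k):
    """Contract the moduli with the wave vector k, as in the text:
    C_ik(k) = C_ijkl k_j k_l,   D_ik(k) = D_ijmkln k_j k_l k_m k_n.

    C : (3,3,3,3) array of elastic constants C_ijkl
    D : (3,3,3,3,3,3) array of gradient constants D_ijmkln
    k : (3,) wave vector
    """
    Ck = np.einsum('ijkl,j,l->ik', C, k, k)
    Dk = np.einsum('ijmkln,j,l,m,n->ik', D, k, k, k, k)
    return Ck, Dk
```

By the symmetries \eqref{C} and \eqref{D}, both matrices come out symmetric; this is easily checked, e.g., with an isotropic $\mathbb{C}$ and a gradient tensor of the form $\mathbb{D}_{ijmkln}=\mathbb{C}_{ijkl}L_{mn}$.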
If we further define the unit vector in Fourier space \begin{align} \bm \kappa=\frac{\bm k}{k}\,,\qquad k=\sqrt{k_ik_i}\, ,\qquad \bm\kappa^2=1 \,, \end{align} then \eqref{LN-FT} becomes: \begin{align} \label{LN-FT1} k^2\left[ \mathcal{C}_{ik}(\bm \kappa) + k^2 \mathcal{D}_{ik}(\bm \kappa)\right]\hat{G}_{kj}(\bm k)=\delta_{ij}\,, \end{align} or equivalently, in matrix notation, \begin{align} k^2\left[ \Cb(\bm\kappa) +k^2\Db(\bm\kappa)\right]\hat{\bm G}(\bm k)=\bm I\, . \label{LN-FT11} \end{align} Stability of the differential operator $\bm L^M$ requires that the matrix $\Cb(\bm \kappa)+k^2\Db(\bm \kappa)$ be positive definite. Since this requirement must hold for all $k$ and $\bm \kappa$, the matrices $\Cb(\bm \kappa)$ and $\Db(\bm \kappa)$ must be individually positive definite. Under \textcolor{magenta}{the assumption that $\Cb(\bm \kappa)$ and $\Db(\bm \kappa)$ are symmetric positive definite (SPD) matrices}, the solution of \eqref{LN-FT11} in Fourier space clearly reads: \begin{align} \hat{\bm G}(\bm k)=\frac{\left[ \Cb(\bm\kappa)+k^2 \Db(\bm\kappa)\right]^{-1}}{k^2}\, . \label{LN-FT12} \end{align} The three-dimensional Green tensor in real space is obtained by inverse Fourier transform of Eq.~\eqref{LN-FT12}. \textcolor{magenta}{It reads:} \begin{align} \bm G(\bm x) &=\frac{1}{8\pi^3}\int_{\mathbb{R}^3} \frac{\left[ \Cb(\bm\kappa)+k^2 \Db(\bm\kappa) \right]^{-1}}{k^2}\, \cos\left(\bm k\cdot\bm x\right)\, \text{d}\hat{V}\nonumber\\ &=\frac{1}{8\pi^3}\int_{\mathcal{S}}\int_0^\infty \left[ \Cb(\bm\kappa)+k^2 \Db(\bm\kappa) \right]^{-1}\, \cos\left(k\bm \kappa\cdot\bm x\right)\, \text{d}k\, \text{d}{\omega}\, . \label{Greal1} \end{align} In Eq.~\eqref{Greal1}, $\text{d}\hat{V}=k^2\, \text{d}k\, \text{d}\omega$ indicates the volume element in Fourier space, and $\text{d}\omega$ is an elementary solid angle on the unit sphere $\mathcal{S}$.
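Equation \eqref{LN-FT12} translates directly into a small routine: given the matrices $\Cb(\bm\kappa)$ and $\Db(\bm\kappa)$ at a unit vector $\bm\kappa$, the Fourier-space Green tensor is a single $3\times3$ inversion. A minimal sketch (function name ours; the positive-definiteness check mirrors the stability requirement stated above):

```python
import numpy as np

def G_hat(C_kappa, D_kappa, k):
    """Fourier-space Green tensor [C(kappa) + k^2 D(kappa)]^(-1) / k^2.

    C_kappa, D_kappa : 3x3 symmetric matrices evaluated at a unit vector kappa
    k : magnitude of the wave vector (k > 0)
    """
    A = C_kappa + k**2 * D_kappa
    # stability of L^M requires A to be positive definite for all k and kappa
    if np.any(np.linalg.eigvalsh(A) <= 0.0):
        raise ValueError("C(kappa) + k^2 D(kappa) is not positive definite")
    return np.linalg.inv(A) / k**2
```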
Our objective now is to obtain an alternative expression of \textcolor{magenta}{the matrix inverse} $[\Cb(\bm\kappa)+k^2 \Db(\bm\kappa)]^{-1}$ which allows us to carry out the $k$-integral analytically. By doing so, the non-singular \textcolor{magenta}{nature} of the Green tensor at the origin is revealed. We start by observing that, by virtue of its SPD character, the matrix $\Cb(\bm\kappa)$ admits the following eigen-decomposition \begin{align} \Cb(\bm\kappa)=\bm R (\bm\kappa)\bm V^2(\bm\kappa)\bm R ^T(\bm\kappa) \, , \end{align} where $\bm R (\bm\kappa)$ is the orthogonal matrix of the eigenvectors of $\Cb(\bm\kappa)$, while $\bm V^2(\bm\kappa)$ is the diagonal matrix of positive eigenvalues of $\Cb(\bm \kappa)$. Moreover, the matrix \begin{align} \Cb^\frac{1}{2}=\bm R (\bm\kappa)\bm V(\bm\kappa)\bm R ^T(\bm\kappa) \label{CHalf} \end{align} is also SPD. Using \eqref{CHalf}, let us consider the following identity: \begin{align} \Cb(\bm\kappa)+k^2\Db(\bm\kappa)= \Cb^\frac{1}{2} \left[\bm I+k^2\bm \Lambda^2 (\bm \kappa)\right] \Cb^\frac{1}{2}\, , \end{align} where \begin{align} \bm \Lambda^2 (\bm \kappa)=\Cb^{-\frac{1}{2}}(\bm\kappa)\Db(\bm\kappa)\Cb^{-\frac{1}{2}}(\bm\kappa) \end{align} is an SPD matrix with units of length squared. With this decomposition, the Green tensor in Fourier space becomes \begin{align} \hat{\bm G}(\bm k) &=\Cb^{-\frac{1}{2}}(\bm \kappa)\frac{\left[\bm I+k^2\bm \Lambda^2(\bm \kappa)\right]^{-1}}{k^2}\Cb^{-\frac{1}{2}}(\bm \kappa)\, , \label{PLPT} \end{align} while in real space we obtain \begin{align} \bm G(\bm x) =\frac{1}{8\pi^3}\int_{\mathcal{S}} \Cb^{-\frac{1}{2}}(\bm \kappa) \int_0^\infty \left[\bm I+k^2\bm\Lambda^2(\bm\kappa)\right]^{-1}\, \cos(k\bm \kappa\cdot\bm x)\, \text{d}k\, \Cb^{-\frac{1}{2}}(\bm \kappa)\text{d}\omega\, .
\label{NavierReal1} \end{align} In order to carry out the $k$-integral, we make use of the following matrix identity:\footnote{ The proof of \eqref{k-int} descends from the fact that $\bm \Lambda^2 (\bm \kappa)$ is a real SPD matrix, and therefore it admits the eigen-decomposition \begin{align} \textcolor{magenta} {\bm \Lambda^2 (\bm \kappa)={\bm Q}(\bm\kappa)\bm D^2(\bm\kappa){\bm Q}^{T}(\bm\kappa)\, ,} \label{eigenDec} \end{align} where $\bm D^2(\bm\kappa)=\text{diag}\left\{\lambda^2_i(\bm\kappa)\right\}$ is the diagonal matrix of the positive eigenvalues of $\bm \Lambda^2 (\bm\kappa)$, and $\textcolor{magenta}{{\bm Q}(\bm\kappa)}$ is the orthogonal matrix of its eigenvectors. With this observation, we immediately obtain \begin{align*} \int_{0}^\infty \left[\bm I+k^2\bm\Lambda^2(\bm\kappa)\right]^{-1}\, \cos(k\bm \kappa\cdot\bm x)\, \text{d}k &=\int_{0}^\infty \left[\bm Q(\bm\kappa)\left(\bm I+k^2\bm D^2(\bm\kappa)\right)\bm Q^T(\bm\kappa)\right]^{-1}\, \cos(k\bm \kappa\cdot\bm x)\, \text{d}k\\ &=\bm Q(\bm\kappa)\int_{0}^\infty \text{diag}\left\{\frac{\cos(k\bm \kappa\cdot\bm x)}{1+k^2\lambda^2_i(\bm\kappa)}\right\}\, \text{d}k \, \bm Q^T(\bm\kappa) \, . 
\end{align*} \textcolor{magenta}{With the help of the definite integral 3.767 in \cite{GradshteynRyzhik}, we obtain} \begin{align*} \int_{0}^\infty \left[\bm I+k^2\bm\Lambda^2(\bm\kappa)\right]^{-1}\, \cos(k\bm \kappa\cdot\bm x)\, \text{d}k &=\frac{\pi}{2}\,\bm Q(\bm\kappa)\, \text{diag}\left\{\frac{\text{e}^{-|\bm \kappa\cdot\bm x|/\lambda_i(\bm \kappa)}}{\lambda_i(\bm \kappa)}\right\}\bm Q^T(\bm\kappa)\\ &=\frac{\pi}{2}\,\bm Q(\bm\kappa)\,\text{diag}\left\{\text{e}^{-|\bm \kappa\cdot\bm x|/\lambda_i(\bm \kappa)}\right\}\bm D^{-1}(\bm\kappa)\, \bm Q^T(\bm\kappa)\\ &=\frac{\pi}{2}\,\bm Q(\bm\kappa)\,\exp\left\{-|\bm \kappa\cdot\bm x|\,\bm D^{-1}(\bm\kappa)\right\}\, \bm Q^T(\bm\kappa) \bm\Lambda^{-1}(\bm\kappa)\\ &=\frac{\pi}{2}\,\exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-1}(\bm\kappa). \end{align*} In the last step we have used the property that the matrix exponential is an isotropic tensor-valued function of its argument. } \begin{align} \label{k-int} \int_{0}^\infty \left[\bm I+k^2\bm\Lambda^2(\bm\kappa)\right]^{-1}\, \cos(k\bm \kappa\cdot\bm x)\, \text{d}k =\frac{\pi}{2}\,\exp\left(-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right)\, \bm\Lambda^{-1}(\bm\kappa)\, . \end{align} With this identity, the Green tensor takes the form \begin{align} \bm G(\bm x) =\frac{1}{16\pi^2}\int_{\mathcal{S}} \Cb^{-\frac{1}{2}}(\bm \kappa) \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-1}(\bm\kappa)\, \Cb^{-\frac{1}{2}}(\bm \kappa)\, \text{d}\omega\, . \label{GT} \end{align} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fourierSphere} \caption{The unit sphere in Fourier space. 
The unit vector $\bm \kappa(\theta,\phi)$ is defined by the azimuth angle $\phi$, and the zenith angle $\theta$ measured from the axis $\hat{\bm e}_3=\bm x/x$.} \label{kSpace} \end{figure} Next, Eq.~\eqref{GT} is further simplified noting that the integration kernel is an even \textcolor{magenta}{function} of $\bm\kappa$. Therefore, the integral over the unit sphere $\mathcal{S}$ is twice the integral over a hemisphere. At the origin, any arbitrary hemisphere $\mathcal{H}$ can be chosen, and the Green tensor assumes the value \begin{align} \bm G(\bm 0) &=\frac{1}{8\pi^2}\int_{\mathcal{H}} \Cb^{-\frac{1}{2}}(\bm \kappa) \, \bm\Lambda^{-1}(\bm\kappa)\, \Cb^{-\frac{1}{2}}(\bm \kappa)\text{d}\omega\, . \end{align} This noteworthy result shows that the Green tensor is non-singular at the origin, \textcolor{magenta}{in contrast to classical elasticity}. Away from the origin, we can choose the hemisphere having the direction $\bm x$ as the zenith. This is a convenient choice because all points \textcolor{magenta}{$\bm\kappa$ on such a} hemisphere satisfy \textcolor{magenta}{the condition} $\bm\kappa\cdot\bm x\ge0$. \textcolor{magenta}{This} hemisphere \textcolor{magenta}{can be} parameterized by the zenith angle $\theta$ and the azimuth angle $\phi$, as shown in Fig.~{\ref{kSpace}}. In this reference system, \textcolor{magenta}{the unit vector $\bm \kappa$} can be expressed as \begin{align} \bm\kappa(\theta,\phi)=\sin\theta\cos\phi\, \hat{\bm e}_1+\sin\theta\sin\phi\, \hat{\bm e}_2+\cos\theta\, \hat{\bm e}_3\, , \end{align} where $\hat{\bm e}_3=\bm x/x$. Finally, letting $q=\cos\theta$, the elementary solid angle becomes \begin{align} \text{d}\omega = \sin \theta\, \text{d}\theta\, \text{d}\phi = -\text{d}q\, \text{d}\phi\, , \end{align} and \begin{align} \bm\kappa(q,\phi)=\sqrt{1-q^2}\cos\phi\, \hat{\bm e}_1+\sqrt{1-q^2}\sin\phi\, \hat{\bm e}_2+q\, \hat{\bm e}_3\, . 
\end{align} Therefore the Green tensor of the anisotropic Mindlin differential operator of first order finally becomes \begin{align} \label{GT1} {\bm G}(\bm x) =\frac{1}{8\pi^2}\int_0^{2\pi} \int_0^1 & \Cb^{-\frac{1}{2}}(\bm \kappa) \exp\left\{-qx\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-1}(\bm\kappa)\, \Cb^{-\frac{1}{2}}(\bm \kappa) \, \text{d} q\, \text{d}\phi\,. \end{align} \subsection{The first two gradients of the Green tensor} The first two gradients of the Green tensor are computed directly by differentiation of \eqref{GT}. The first gradient reads \begin{align} \bm\nabla \bm G(\bm x) =-\frac{1}{16\pi^2}\int_{\mathcal{S}} & \Cb^{-\frac{1}{2}}(\bm \kappa) \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-2}(\bm\kappa) \nonumber\\ &\times \Cb^{-\frac{1}{2}}(\bm \kappa)\otimes \bm\kappa \, \text{sign}(\bm\kappa\cdot\bm x)\,\text{d}\omega\, . \label{dGT} \end{align} In components this is: \begin{align} G_{ij,m}(\bm x) =-\frac{1}{16\pi^2}\int_{\mathcal{S}} &\left[ \Cb^{-\frac{1}{2}}(\bm \kappa) \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-2}(\bm\kappa)\right. \nonumber\\ &\times\left.\Cb^{-\frac{1}{2}}(\bm \kappa)\right]_{ij} \kappa_m \, \text{sign}(\bm\kappa\cdot\bm x)\,\text{d}\omega\, . \label{dGT2} \end{align} Note that, because of the presence of the sign function, the gradient of the Green tensor is finite but discontinuous at the origin. From a computational perspective, it is more convenient to express this result in the reference system of Fig.~\ref{kSpace}. Doing so we find the alternative representation \begin{align} G_{ij,m}(\bm x) =-\frac{1}{8\pi^2}\int_0^{2\pi}\int_0^1 &\left[ \Cb^{-\frac{1}{2}}(\bm \kappa) \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-2}(\bm\kappa)\right. \nonumber\\ &\left.\Cb^{-\frac{1}{2}}(\bm \kappa)\right]_{ij} \kappa_m \,\text{d}q\,\,\text{d}\phi .
\label{dGT3} \end{align} The second gradient \textcolor{magenta}{of the Green tensor} reads \begin{align} \bm\nabla\bm\nabla \bm G(\bm x) &=\frac{1}{16\pi^2}\int_{\mathcal{S}} \Big( \Cb^{-\frac{1}{2}}(\bm \kappa) \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \nonumber\\ &\hspace{2cm} \times \bm\Lambda^{-3}(\bm\kappa)\, \Cb^{-\frac{1}{2}}(\bm \kappa)\otimes \bm\kappa \otimes\bm\kappa \nonumber\\ &\ -\Cb^{-\frac{1}{2}}(\bm \kappa)\, \bm\Lambda^{-2}(\bm\kappa)\, \Cb^{-\frac{1}{2}}(\bm \kappa) \otimes \bm\kappa \otimes \bm\kappa \, \delta(\bm\kappa\cdot\bm x) \Big)\,\text{d}\omega\, . \label{ddGT} \end{align} \textcolor{magenta}{Letting $\bm n(\phi)=\bm \kappa(\pi/2,\phi)$ be a unit vector on the equatorial plane $\bm\kappa\cdot\bm x=0$, we finally obtain} \begin{align} \bm\nabla\bm\nabla \bm G(\bm x) &=\frac{1}{16\pi^2}\int_{\mathcal{S}} \Cb^{-\frac{1}{2}}(\bm \kappa) \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \nonumber\\ &\hspace{2cm} \times \bm\Lambda^{-3}(\bm\kappa)\, \Cb^{-\frac{1}{2}}(\bm \kappa)\otimes \bm\kappa \otimes\bm\kappa \,\text{d}\omega\, \nonumber\\ &\ -\frac{1}{8\pi^2 x}\int_{0}^{2\pi} \Cb^{-\frac{1}{2}}(\bm n)\, \bm\Lambda^{-2}(\bm n)\, \Cb^{-\frac{1}{2}}(\bm n) \otimes \bm n \otimes \bm n \,\text{d}\phi\, . \label{ddGT-2} \end{align} \textcolor{magenta}{Note that the second gradient of the Green tensor is singular at the origin.} \subsection{The classical limit} It is now shown that the Green tensor \eqref{GT} converges to the classical Green tensor $\bm G^0$ \cite{LR,Synge} when the field point $\bm x$ is sufficiently far from the origin compared to the characteristic length scales, that is when \begin{align} |\bm\kappa\cdot\bm x|/\lambda_i\gg 1, \label{classicalLimitcondition} \end{align} where $\lambda_i$ is an eigenvalue of $\bm \Lambda$, and $i=1,2,3$.
This important property guarantees that the non-singular Green tensor \eqref{GT1} regularizes the classical anisotropic Green tensor in the far field. Moreover, as a special case satisfying condition \eqref{classicalLimitcondition}, the classical Green tensor $\bm G^0$ is also recovered in the limit of vanishing tensor of strain gradient coefficients $\mathbb{D}$. The classical Green tensor $\bm G^0$ is \textcolor{magenta}{readily} recovered if we consider the limit\footnote{ Using the eigen-decomposition \eqref{eigenDec}: \begin{align*} &\lim_{\||\bm\kappa\cdot\bm x|\, \bm\Lambda^{-1}\|\rightarrow\infty}\, \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-1}(\bm\kappa)=\nonumber\\ &\lim_{\||\bm\kappa\cdot\bm x|\, \bm D^{-1}\|\rightarrow\infty}\, \bm Q(\bm\kappa)\exp\left\{-|\bm \kappa\cdot\bm x|\,\bm D^{-1}(\bm\kappa)\right\}\, \bm D^{-1}(\bm\kappa)\bm Q^T(\bm\kappa)=\nonumber\\ & \lim_{|\bm\kappa\cdot\bm x|/\lambda_i\rightarrow \infty} \bm Q(\bm\kappa)\, \text{diag}\left\{\frac{\exp\left\{-|\bm \kappa\cdot\bm x|/\lambda_{i}(\bm\kappa)\right\}}{\lambda_i(\bm\kappa)}\right\} \,\bm Q^T(\bm\kappa)=\nonumber\\ & \bm Q(\bm\kappa)\,\frac{2\bm I}{x}\, \delta(\bm \kappa\cdot\hat{\bm x} ) \,\bm Q^T(\bm\kappa) =\frac{2\bm I}{x}\, \delta(\bm \kappa\cdot\hat{\bm x} )\, . \end{align*} } \begin{align} \lim_{\||\bm\kappa\cdot\bm x|\, \bm\Lambda^{-1}\|\rightarrow\infty}\, \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-1}(\bm\kappa) =\frac{2\bm I}{x}\, \delta(\bm \kappa\cdot\hat{\bm x} )\, , \label{DiracLimit} \end{align} where $\hat{\bm x} =\bm x/x$ and $\bm I$ is the identity tensor. In fact, the substitution of \eqref{DiracLimit} into \eqref{GT} yields \begin{align} \bm G(\bm x) \rightarrow \bm G^0(\bm x) &=\frac{1}{8\pi^2 x}\int_{\mathcal{S}} \Cb^{-1}(\bm \kappa)\, \delta(\bm \kappa\cdot\hat{\bm x} ) \, \text{d}\omega =\frac{1}{8\pi^2 x}\, \int_0^{2\pi} \Cb^{-1}(\bm n)\, \text{d} \phi\, .
\label{GTclassical} \end{align} \textcolor{magenta}{Here we used again the notation} $\bm n(\phi)=\bm \kappa(\pi/2,\phi)$ \textcolor{magenta}{to indicate} a unit vector on the equatorial plane $\bm\kappa\cdot\bm x=0$. Note that the span of integration can be reduced to the range $0\le\phi\le \pi$ using the symmetry $\Cb^{-1}(\bm n)=\Cb^{-1}(-\bm n)$. \section{Special cases\label{specialCases}} In this section we show that the Green tensor \eqref{GT} generalizes other results obtained in the literature. \subsection{The weakly non-local Green tensor $\mathbf{G}^\text{NL}$} Lazar and Po \cite{LP15} have considered a simplified strain gradient elasticity theory under the assumption \begin{align} \mathbb{D}_{ijmkln}=\mathbb{C}_{ijkl} {L}_{mn}\, , \end{align} a framework which was named Mindlin's strain gradient elasticity with weak non-locality because of its relation to non-local theories \cite{LazarNonLocal,LazarAgiasofitou}. The Green tensor \eqref{GT} recovers our previous result as a special case. In fact, under the previous assumption, we have \begin{align} \bm \Lambda(\bm\kappa)= \bm I\, \sqrt{\bm\kappa^T\bm L\bm \kappa}\, , \end{align} and \begin{align} \exp\left\{-|\bm \kappa\cdot\bm x|\,\bm \Lambda^{-1}(\bm\kappa)\right\}\, \bm\Lambda^{-1}(\bm\kappa) =\bm I \frac{\exp\left(-\frac{|\bm\kappa\cdot\bm x|}{\sqrt{\bm\kappa^T\bm L\bm \kappa}}\right)}{\sqrt{\bm\kappa^T\bm L\bm \kappa}}\nonumber\, . \end{align} Therefore the Green tensor becomes \begin{align} \bm G^\text{NL}(\bm x) &=\frac{1}{16\pi^2}\int_{\mathcal{S}} \Cb^{-1}(\bm \kappa) \frac{\exp\left(-\frac{|\bm\kappa\cdot\bm x|}{\sqrt{\bm\kappa^T\bm L\bm \kappa}}\right)}{\sqrt{\bm\kappa^T\bm L\bm \kappa}} \, \text{d}\omega\, , \label{GT-NL} \end{align} which is the expression given in \cite{LP15}. \subsection{The Green tensor of anisotropic gradient elasticity of Helmholtz type $\mathbf{G}^\text{H}$} An even simpler theory, named Mindlin's gradient elasticity of Helmholtz type, has been proposed by \cite{LP14}.
The theory is characterized by only one gradient length scale parameter $\ell$, which renders the tensor $\bm L$ proportional to the identity: \begin{align} \bm L=\ell^2\, \bm I\, . \label{LH} \end{align} The non-singular Green tensor of this theory is obtained by substituting \eqref{LH} into \eqref{GT-NL}, thus yielding \begin{align} \bm G^\text{H}(\bm x) &=\frac{1}{16\pi^2 \ell}\int_{\mathcal{S}} \Cb^{-1}(\bm \kappa) \exp\left(-\frac{|\bm\kappa\cdot\bm x|}{\ell}\right) \, \text{d}\omega\, , \label{GT-H} \end{align} which coincides with the expression given in \cite{LP14}. \subsection{The isotropic Green tensor $\mathbf{G}^\text{I}$} \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{Afunc} \caption{Plot of the regularized distance function $A(x,\ell)$.} \label{Afunc} \end{figure} The isotropic tensor $\mathbb{C}$ has components \begin{align} \label{C-iso} \mathbb{C}_{ijkl}=\lambda\delta_{ij}\delta_{kl} +\mu\big(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\big)\,, \end{align} where $\lambda$ and $\mu$ are the Lam\'e constants.
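As a quick numerical cross-check of \eqref{C-iso}, the isotropic moduli can be assembled and contracted with a unit vector $\bm\kappa$; the contraction $\mathbb{C}_{ijkl}\kappa_j\kappa_l$ must reduce to $(\lambda+2\mu)\,\kappa_i\kappa_k+\mu\,(\delta_{ik}-\kappa_i\kappa_k)$. A minimal sketch (function name ours):

```python
import numpy as np

def isotropic_C(lam, mu):
    """Isotropic rank-four elasticity tensor of Eq. (C-iso):
    C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)."""
    d = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d)
                    + np.einsum('il,jk->ijkl', d, d)))
```

Contracting with $\bm\kappa$ via `np.einsum('ijkl,j,l->ik', C, kappa, kappa)` then reproduces the longitudinal/transverse split of the acoustic matrix.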
\textcolor{magenta}{On the other hand,} the isotropic tensor $\mathbb{D}$ reads \begin{align} \label{D-iso} \mathbb{D}_{ijmkln}&= \frac{a_1}{2}\big(\delta_{ij}\delta_{km}\delta_{ln} +\delta_{ij}\delta_{kn}\delta_{lm} +\delta_{kl}\delta_{im}\delta_{jn} +\delta_{kl}\delta_{in}\delta_{jm}\big) \nonumber\\ &\ +\frac{a_3}{2}\big(\delta_{jk}\delta_{im}\delta_{nl} +\delta_{ik}\delta_{jm}\delta_{nl} +\delta_{il}\delta_{jm}\delta_{kn} +\delta_{jl}\delta_{im}\delta_{kn}\big) \nonumber\\ &\ +\frac{a_5}{2}\big(\delta_{jk}\delta_{in}\delta_{lm} +\delta_{ik}\delta_{jn}\delta_{lm} +\delta_{jl}\delta_{km}\delta_{in} +\delta_{il}\delta_{km}\delta_{jn}\big) \nonumber\\ &\ +2a_2\, \delta_{ij}\delta_{kl}\delta_{mn} +a_4\big(\delta_{il}\delta_{jk}\delta_{mn} +\delta_{ik}\delta_{jl}\delta_{mn}\big)\,, \end{align} where $a_1$, $a_2$, $a_3$, $a_4$, $a_5$ are the gradient parameters in isotropic Mindlin's first strain gradient elasticity theory \cite{Mindlin64} (see also~\cite{Mindlin68,Lazar16}). Therefore, the matrices $\Cb(\bm\kappa)$ and $\Db(\bm\kappa)$ become, respectively \begin{align} \mathcal{C}_{ik}(\bm\kappa)&=(\lambda+2\mu)\kappa_i\kappa_k +\mu\big(\delta_{ik}-\kappa_i\kappa_k\big)\label{CFiso}\, ,\\ \mathcal{D}_{ik}(\bm\kappa)&= 2(a_1+a_2+a_3+a_4+a_5)\kappa_{i}\kappa_k \nonumber \\ &\ +\frac{1}{2}\,(a_3+2a_4+a_5)\big(\delta_{ik}-\kappa_{i}\kappa_k\big) \nonumber\\ &=(\lambda+2\mu)\, \ell_1^2 \kappa_{i}\kappa_k +\mu\, \ell_2^2 \big(\delta_{ik}-\kappa_{i}\kappa_k\big)\, . \label{DFiso} \end{align} \textcolor{magenta}{The two characteristic lengths $\ell_1$ and $\ell_2$ introduced above are defined as} \begin{align} \label{l1} \ell_1^2&=\frac{2(a_1+a_2+a_3+a_4+a_5)}{\lambda+2\mu}\,,\\ \label{l2} \ell_2^2&=\frac{a_3+2a_4+a_5}{2\mu}\,. \end{align} Owing to the special structure\footnote{\label{footNoteSpecialStructure} Consider a matrix $\bm A$ with structure \begin{align} A_{ij}=a\kappa_i\kappa_j+b(\delta_{ij}-\kappa_i\kappa_j)\, .
\end{align} If $a>0$ and $b>0$, then the matrix is SPD, and a unique SPD square root of $A_{ij}$ exists with form \begin{align} A_{ij}^{\frac{1}{2}}&=\sqrt{a}\kappa_i\kappa_j+\sqrt{b}(\delta_{ij}-\kappa_i\kappa_j)\, . \end{align} Moreover, the inverse of $A_{ij}$ reads \begin{align} A_{ij}^{-1}&=\frac{1}{a}\kappa_i\kappa_j+\frac{1}{b}(\delta_{ij}-\kappa_i\kappa_j)\, . \end{align} } of $\Cb(\bm\kappa)$ and $\Db(\bm\kappa)$, the following results are easily obtained: \begin{align} \mathcal{C}^{-\frac{1}{2}}_{ij}(\bm\kappa) &=\frac{1}{\sqrt{\mu}}\left(\delta_{ij}-\kappa_i\kappa_j\right)+\frac{1}{\sqrt{\lambda+2\mu}}\kappa_i\kappa_j\\ \Lambda_{ij}^{-1}(\bm\kappa)&=\frac{1}{\ell_2}\left(\delta_{ij}-\kappa_i\kappa_j\right)+\frac{1}{\ell_1}\kappa_i\kappa_j\, . \end{align} The matrix $\bm\Lambda^{-1}$ admits the eigenvalue $1/\ell_1$, corresponding to the eigenvector $\hat{\bm v}_1=\bm\kappa$. The degenerate eigenvalue $1/\ell_2$ has multiplicity two, corresponding to two arbitrary eigenvectors $\hat{\bm v}_2$ and $\hat{\bm v}_3$ perpendicular to $\bm \kappa$. Choosing such eigenvectors to be mutually orthogonal, the matrix $\bm \Lambda^{-1}$ admits the eigen-decomposition $\bm \Lambda^{-1}=\bm Q \bm D^{-1} \bm Q^T$. Here \begin{align} \bm Q=[\hat{\bm v}_1\, \hat{\bm v}_2\, \hat{\bm v}_3] \end{align} is an orthogonal matrix whose columns are the eigenvectors of $\bm \Lambda^{-1}$, and \begin{align} \bm D^{-1}=\text{diag}\left\{\frac{1}{\ell_1},\, \frac{1}{\ell_2},\,\frac{1}{\ell_2}\right\} \end{align} is the diagonal matrix of its eigenvalues. This special form of $\bm Q$ yields the identity \begin{align} \Cb^{-\frac{1}{2}}\bm Q=\bm Q\, \text{diag}\left\{\frac{1}{\sqrt{\lambda+2\mu}},\, \frac{1}{\sqrt{\mu}},\,\frac{1}{\sqrt{\mu}}\right\}\, .
\end{align} Using these results in \eqref{GT}, we obtain \begin{align} \bm G^I(\bm x) &=\frac{1}{16\pi^2}\int_{\mathcal{S}} \Cb^{-\frac{1}{2}}\bm Q\exp\left\{-|\bm \kappa\cdot\bm x|\,\bm D^{-1}\right\}\, \bm D^{-1}\bm Q^T \Cb^{-\frac{1}{2}}\text{d}\omega\nonumber\\ &=\frac{1}{16\pi^2}\int_{\mathcal{S}} \bm Q\, \text{diag}\left\{\frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell_1}}{\ell_1(\lambda+2\mu)},\, \frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell_2}}{\ell_2\mu},\,\frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell_2}}{\ell_2\mu}\right\} \bm Q^T\text{d}\omega\nonumber\\ &=\frac{1}{16\pi^2}\int_{\mathcal{S}} \frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell_1}}{(\lambda+2\mu)\ell_1}\hat{\bm v}_1\otimes\hat{\bm v}_1\text{d}\omega\nonumber\\ &+\frac{1}{16\pi^2}\int_{\mathcal{S}} \frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell_2}}{\mu\ell_2}\left(\hat{\bm v}_2\otimes\hat{\bm v}_2+\hat{\bm v}_3\otimes\hat{\bm v}_3\right)\text{d}\omega\, . \label{GTI} \end{align} Because they \textcolor{magenta}{form} an orthonormal basis, the three eigenvectors satisfy \textcolor{magenta}{the identity} $\hat{\bm v}_1\otimes\hat{\bm v}_1+\hat{\bm v}_2\otimes\hat{\bm v}_2+\hat{\bm v}_3\otimes\hat{\bm v}_3=\bm I$, hence we have \begin{align} \bm G^I(\bm x) &=\frac{1}{16\pi^2}\int_{\mathcal{S}} \left[\frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell_1}}{(\lambda+2\mu)\ell_1}{\bm \kappa}\otimes{\bm \kappa}+\frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell_2}}{\mu\ell_2}\left(\bm I-{\bm \kappa}\otimes{\bm \kappa}\right)\right] \text{d}\omega\, . \label{GTI2} \end{align} The integral over the unit sphere is carried out using the relation \begin{align} \int_{\mathcal{S}} \frac{e^\frac{-|\bm \kappa\cdot\bm x|}{\ell}}{\ell}\kappa_i\kappa_j\, \text{d}\omega =2\pi\, \partial_i\partial_j A(x,\ell)\, , \label{KKintegral} \end{align} where the scalar function $A(x,\ell)$ is \begin{align} A(x,\ell)=x+\frac{2\ell^2}{x} -\frac{2\ell^2}{x}e^{-x/\ell}\, . 
\label{Afunction} \end{align} The scalar function $A(x,\ell)$ can be regarded as a regularized distance function in the sense that $A(x,\ell)$ tends to $x$ when $x/\ell\gg 1$, while it smoothly approaches $2\ell$ for small $x$, as shown in Fig.~\ref{Afunc}. By virtue of \eqref{KKintegral}, the Green tensor finally becomes: \begin{align} \label{GT-iso} G_{ij}(\bm x)= \frac{1}{8\pi\mu}\, \Big[ \frac{\mu}{\lambda+2\mu}\, \partial_i\partial_j A(x,\ell_1) +\big(\delta_{ij}\Delta-\partial_i\partial_j\big) A(x,\ell_2)\Big]\, . \end{align} This result can also be obtained by direct inverse Fourier transform of \eqref{LN-FT12}, as shown in Appendix \ref{DirectDerivationGTI}. A \textcolor{magenta}{more} detailed analysis of the isotropic Green tensor \eqref{GT-iso} can be found in \cite{LP18}. \section{A comparison with Molecular Statics: The Kelvin problem \label{Kelvin}} \newcommand{\ex}[1]{$\cdot 10^{#1}$} \begin{table}[b!] \centering \begin{tabular}{| l | l | l |l |} \hline & Cu EAM & Cu MEAM & Al MEAM\\ \hline ${C}_{1,1}$ $[\text{eV}/\textup{\AA}^3]$&1.0868 & 1.0994 & 7.1366\ex{-1}\\ ${C}_{1,2}$ $[\text{eV}/\textup{\AA}^3]$& 7.9386\ex{-1}& 7.7973\ex{-1} & 3.8649\ex{-1}\\ ${C}_{4,4}$ $[\text{eV}/\textup{\AA}^3]$ &5.2252\ex{-1} & 5.1043\ex{-1} & 1.9704\ex{-1}\\ \hline ${D}_{1,1}$ $[\text{eV}/\textup{\AA}]$&1.1182 &6.5018\ex{-1} &1.0855\\ ${D}_{1,2}$ $[\text{eV}/\textup{\AA}]$& 3.5814\ex{-1} & 3.6659\ex{-1} & 1.4572\ex{-1}\\ ${D}_{1,3}$ $[\text{eV}/\textup{\AA}]$& 3.7951\ex{-1} & 2.4150\ex{-1} & 1.5934\ex{-1}\\ ${D}_{2,2}$ $[\text{eV}/\textup{\AA}]$& 4.7935\ex{-1} & 7.3885\ex{-1} & 8.4221\ex{-1}\\ ${D}_{2,3}$ $[\text{eV}/\textup{\AA}]$ & 3.0103\ex{-1} & 2.0651\ex{-1} & 1.5671\ex{-1}\\ ${D}_{2,4}$ $[\text{eV}/\textup{\AA}]$& 1.2789\ex{-1} & 4.7496\ex{-1} & 7.1708\ex{-1}\\ ${D}_{2,5}$ $[\text{eV}/\textup{\AA}]$& 1.0652\ex{-1} & -4.2545\ex{-2} & -1.1434\ex{-2}\\ ${D}_{3,3}$ $[\text{eV}/\textup{\AA}]$& 4.3662\ex{-1} & 2.9055\ex{-1} & 2.7613\ex{-1}\\ ${D}_{3,5}$
$[\text{eV}/\textup{\AA}]$& 1.2789\ex{-1} & -1.8275\ex{-2} & -1.2408\ex{-1}\\ ${D}_{16,16}$ $[\text{eV}/\textup{\AA}]$& 1.4925\ex{-1} & 3.7419\ex{-2} & 1.6786\ex{-1}\\ ${D}_{16,17}$ $[\text{eV}/\textup{\AA}]$& 1.0652\ex{-1} & 3.7394\ex{-2} & 1.5006\ex{-1}\\ \hline \end{tabular} \caption{Elastic and gradient-elastic constants obtained from the interatomic potentials \cite{kimlee2001} and \cite{kimmendelev2008}. } \label{ElasticTable} \end{table} In this section, we compare the Green tensor obtained from Mindlin's strain gradient elastic theory to that obtained from an atomistic system. This study was carried out using Minimol \cite{TadmorBook} which is a KIM-compliant \textcolor{magenta}{molecular dynamics (MD) and molecular statics (MS)} program. The Open Knowledgebase of Interatomic Models (KIM) is a project focused on creating standards for atomistic simulations including an application programming interface (API) for information exchange between atomistic simulation codes and interatomic potentials \cite{Tadmor2011,TadmorKIM2013}. \begin{figure*}[t!] 
\centering \subfloat[$\mathbb{C}$ for Cu EAM]{ \begin{tikzpicture}[scale=1] \node[] at (0,0) {\includegraphics[width=0.29\textwidth]{C_Cu_fcc_EAM_Mendelev_2013}}; \end{tikzpicture} \label{C_Cu_fcc_EAM_Mendelev_2013} } \subfloat[$\mathbb{C}$ for Cu MEAM]{ \begin{tikzpicture}[scale=1] \node[] at (0,0) {\includegraphics[width=0.29\textwidth]{C_Cu_fcc_MEAM_Lee_2001}}; \end{tikzpicture} \label{C_Cu_fcc_MEAM_Lee_2001} } \subfloat[$\mathbb{C}$ for Al MEAM]{ \begin{tikzpicture}[scale=1] \node[] at (0,0) {\includegraphics[width=0.33\textwidth]{C_Al_fcc_MEAM_Lee_2001}}; \node[] at (2,1.8) {$\frac{\text{eV}}{\textup{\AA}^3}$}; \end{tikzpicture} \label{C_Al_fcc_MEAM_Lee_2001} }\hfill \subfloat[$\mathbb{D}$ for Cu EAM]{ \begin{tikzpicture}[scale=1] \node[] at (0,0) {\includegraphics[width=0.29\textwidth]{D_Cu_fcc_EAM_Mendelev_2013}}; \end{tikzpicture} \label{D_Cu_fcc_EAM_Mendelev_2013} } \subfloat[$\mathbb{D}$ for Cu MEAM]{ \begin{tikzpicture}[scale=1] \node[] at (0,0) {\includegraphics[width=0.29\textwidth]{D_Cu_fcc_MEAM_Lee_2001}}; \end{tikzpicture} \label{D_Cu_fcc_MEAM_Lee_2001} } \subfloat[$\mathbb{D}$ for Al MEAM]{ \begin{tikzpicture}[scale=1] \node[] at (0,0) {\includegraphics[width=0.33\textwidth]{D_Al_fcc_MEAM_Lee_2001}}; \node[] at (2,1.8) {$\frac{\text{eV}}{\textup{\AA}}$}; \end{tikzpicture} \label{D_Al_fcc_MEAM_Lee_2001} }\hfill \caption{Voigt representation of the elastic tensors $\mathbb{C}$ and gradient-elastic tensor $\mathbb{D}$ for fcc Al and Cu, computed from the interatomic potentials \cite{kimlee2001} and \cite{kimmendelev2008}. \protect\subref{C_Cu_fcc_EAM_Mendelev_2013} and \protect\subref{D_Cu_fcc_EAM_Mendelev_2013} Cu for EAM potential \cite{kimmendelev2008}. \protect\subref{C_Cu_fcc_MEAM_Lee_2001} and \protect\subref{D_Cu_fcc_MEAM_Lee_2001} Cu for MEAM potential \cite{kimlee2001}. \protect\subref{C_Al_fcc_MEAM_Lee_2001} and \protect\subref{D_Al_fcc_MEAM_Lee_2001} Al for MEAM potential \cite{kimlee2001}. 
} \label{voigtCD} \end{figure*} We choose face-centered cubic (fcc) aluminum and copper for this comparison, and consider the following two interatomic potentials: \textcolor{magenta}{the} modified-embedded-atom-method (MEAM) potential \textcolor{magenta}{by} \cite{lee2001}, and the embedded-atom-method (EAM) potential \textcolor{magenta}{by} \cite{mendelev2008}, \textcolor{magenta}{which are} archived in the OpenKIM repository. Elastic and gradient-elastic constants for these potentials were computed using the method described in \cite{Admal16}, and they are available on the KIM repository \cite{kimlee2001,kimmendelev2008}. For convenience, the values of the independent elastic and gradient-elastic constants are reported in Table \ref{ElasticTable}. These components are used to populate the elastic tensors $\mathbb{C}$ and $\mathbb{D}$ \cite{Admal16,Auffray13}. The Voigt structure of the resulting tensors $\mathbb{C}$ and $\mathbb{D}$ is shown in Fig.~\ref{voigtCD}. The atomistic system is constructed by stacking together $15\times15\times15$ unit cells, resulting in $13500$ atoms. A force of $0.0116$ eV/$\textup{\AA}$ in the $x_1$ direction is imposed on the central atom of the system, and displacement boundary conditions are imposed on five layers of atoms close to the boundary using the classical solution given in Eq.~\eqref{GTclassical}. The thickness of the padding-atom region is 0.15 times the box size. An MS simulation is carried out using the above-mentioned boundary conditions, resulting in a deformed crystal. The resulting displacement field, normalized with respect to the force on the central atom, yields the atomistic Green tensor component fields. Simulation results are shown in Fig.~\ref{MScomparison}, where we compare the Green tensor components $G_{11}(x_1,0,0)$ and $G_{22}(x_1,0,0)$.
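The analytical curves in this comparison follow from Eq.~\eqref{GT-iso}: on the $x_1$-axis the derivatives of the radial function $A$ reduce to $\partial_1\partial_1 A = A''$ and $(\delta_{11}\Delta-\partial_1\partial_1)A = 2A'/x$, so that $G_{11}(x_1,0,0)=\frac{1}{8\pi\mu}\big[\frac{\mu}{\lambda+2\mu}A''(x_1,\ell_1)+\frac{2}{x_1}A'(x_1,\ell_2)\big]$. The sketch below evaluates this profile; the moduli and gradient lengths are illustrative placeholder values, not the tabulated cubic constants:

```python
import numpy as np

# Illustrative isotropic parameters (hypothetical values, not from Table 1).
lam, mu = 0.79, 0.52      # Lame constants, eV/A^3
l1, l2 = 1.2, 0.9         # gradient lengths ell_1, ell_2, Angstrom

def A(x, l):
    """Regularized distance function A(x, l) of Eq. (Afunction)."""
    return x + 2.0 * l**2 / x * (1.0 - np.exp(-x / l))

def dA(x, l):
    """First radial derivative A'(x, l)."""
    return 1.0 - 2.0 * l**2 / x**2 * (1.0 - np.exp(-x / l)) \
           + 2.0 * l / x * np.exp(-x / l)

def d2A(x, l):
    """Second radial derivative A''(x, l)."""
    e = np.exp(-x / l)
    return 4.0 * l**2 / x**3 * (1.0 - e) - 4.0 * l / x**2 * e - 2.0 / x * e

def G11(x):
    """G_11(x, 0, 0) from Eq. (GT-iso) evaluated on the x1-axis."""
    return (mu / (lam + 2.0 * mu) * d2A(x, l1)
            + 2.0 * dA(x, l2) / x) / (8.0 * np.pi * mu)

print(G11(np.array([0.01, 1.0, 50.0])))
```

Far from the origin this profile approaches the classical Kelvin value $1/(4\pi\mu x_1)$, while at the origin it stays finite, consistent with the non-singular behavior discussed above.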
Despite the fact that these potentials were never fitted to gradient-elastic constants, it can be observed that the analytical predictions are in good agreement with MS calculations, with a maximum error at the origin on the order of 5--30\%, depending on the potential used. It should be noted that, compared to the EAM potential, the MEAM potential compares better to the analytical results, possibly as a result of artifacts in the gradient-elastic constants evaluated by EAM potentials \cite{Admal16}. \begin{figure*}[t!] \centering \subfloat[Cu EAM]{ \includegraphics[width=0.45\textwidth]{Cu_fcc_EAM_Mendelev_2013_G_11_x1} \label{Cu_fcc_EAM_Mendelev_2013_G_11_x1} } \subfloat[Cu EAM]{ \includegraphics[width=0.45\textwidth]{Cu_fcc_EAM_Mendelev_2013_G_22_x1} \label{Cu_fcc_EAM_Mendelev_2013_G_22_x1} }\hfill \subfloat[Cu MEAM]{ \includegraphics[width=0.45\textwidth]{Cu_fcc_MEAM_Lee_2001_G_11_x1} \label{Cu_fcc_MEAM_Lee_2001_G_11_x1} } \subfloat[Cu MEAM]{ \includegraphics[width=0.45\textwidth]{Cu_fcc_MEAM_Lee_2001_G_22_x1} \label{Cu_fcc_MEAM_Lee_2001_G_22_x1} }\hfill \subfloat[Al MEAM]{ \includegraphics[width=0.45\textwidth]{Al_fcc_MEAM_Lee_2001_G_11_x1} \label{Al_fcc_MEAM_Lee_2001_G_11_x1} } \subfloat[Al MEAM]{ \includegraphics[width=0.45\textwidth]{Al_fcc_MEAM_Lee_2001_G_22_x1} \label{Al_fcc_MEAM_Lee_2001_G_22_x1} }\hfill \caption{Components of the Green tensor for fcc Al and Cu, and comparison to atomistic calculations obtained from the interatomic potentials \cite{kimlee2001} and \cite{kimmendelev2008}. \protect\subref{Cu_fcc_EAM_Mendelev_2013_G_11_x1}-\protect\subref{Cu_fcc_EAM_Mendelev_2013_G_22_x1} Cu for EAM potential \cite{kimmendelev2008}. \protect\subref{Cu_fcc_MEAM_Lee_2001_G_11_x1}-\protect\subref{Cu_fcc_MEAM_Lee_2001_G_22_x1} Cu for MEAM potential \cite{kimlee2001}. \protect\subref{Al_fcc_MEAM_Lee_2001_G_11_x1}-\protect\subref{Al_fcc_MEAM_Lee_2001_G_22_x1} Al for MEAM potential \cite{kimlee2001}.
} \label{MScomparison} \end{figure*} \section{Summary and Conclusions\label{conclusions}} In this paper we have derived an expression for the Green tensor of Mindlin's anisotropic strain gradient elasticity, a theory which possesses up to 21 elastic constants and 171 gradient-elastic constants in the general case of triclinic media. The Green tensor is found in terms of a matrix kernel integrated over the unit sphere in Fourier space. This representation is similar to that of the classical anisotropic Green tensor, which requires integration over the equatorial plane of the unit sphere. In contrast to its classical counterpart, however, the Green tensor of Mindlin's anisotropic strain gradient elasticity is non-singular at the origin, while its gradient is finite but discontinuous at the origin. It is shown that the non-singular Green tensor converges to the classical tensor a few characteristic lengths away from the origin. Therefore, the Green tensor of Mindlin's first strain gradient elasticity can be regarded as a physical regularization of the classical anisotropic Green tensor. Moreover, existing expressions of the Green tensor found in the literature are recovered as special cases. Because the Green tensor regularizes its classical counterpart without unphysical singularities, it offers a more realistic description of near-core elastic fields of defects in micro-mechanics, and it provides more accurate boundary conditions for atomistic and \textit{ab-initio} energy-minimization calculations. As an illustrative example, we have computed the displacement field induced by a concentrated force acting at the origin (Kelvin problem), and compared the analytical predictions to atomistic calculations when the elastic and gradient-elastic moduli are consistently derived from the interatomic potentials.
Despite the fact that these potentials were not fitted to gradient-elastic constants, it is shown that the analytical predictions are in good agreement with MS calculations, with a maximum error at the origin on the order of 5--30\%, depending on the potential used. \small{ \section*{List of Abbreviations} PDE: partial differential equation. SPD: symmetric positive definite. KIM: Open Knowledgebase of Interatomic Models. API: application programming interface. EAM: embedded atom method. MEAM: modified embedded atom method. \section*{Declarations} \noindent\textbf{Availability of data and materials.} Elastic and gradient-elastic material constants used to obtain the results in section \ref{Kelvin} are freely available as part of the Open Knowledgebase of Interatomic Models (KIM). \medskip \noindent\textbf{Competing Interest.} The authors declare that they have no competing interests. \medskip \noindent\textbf{Funding.} G.P. acknowledges the support of the U.S. Department of Energy, Office of Fusion Energy, through the DOE award number DE-SC0018410, the Air Force Office of Scientific Research (AFOSR), through award number FA9550-16-1-0444, and the National Science Foundation, Division of Civil, Mechanical and Manufacturing Innovation (CMMI), through award number 1563427 with UCLA. N.A. acknowledges the support of the US Department of Energy's Office of Fusion Energy Sciences, Grant No. DE-SC0012774:0001. M.L. gratefully acknowledges a grant from the Deutsche Forschungsgemeinschaft (Grant No. La1974/4-1). \medskip \noindent\textbf{Authors' Contributions.} G.P. and M.L. obtained the expression of the Green tensor. N.A. and G.P. carried out the numerical analysis. All authors read and approved the final manuscript. } \bibliographystyle{spbasic}
\section{Introduction} Smart beta is a relatively new term that has become ubiquitous in asset management over the last few years. The financial theory underpinning smart beta, known as factor investing, has been around since the 1960s, when factors were first identified as drivers of equity returns \citep{Agather2017}. These factor returns can be a source of risk and/or improved return, and understanding whether any additional risk is adequately compensated with higher returns is important \citep{Ang:2014}. By selecting stocks based on their factor exposures, active managers can build portfolios with particular factor exposures and so use factor investing to improve portfolio returns and/or lower risk, depending on their particular objectives. Smart beta aims to achieve these goals at a reduced cost by utilising a transparent, systematic, rules-based approach, bringing costs down significantly when compared to active management \citep{Asness2016}. While smart beta strategies have shown strong performance in the long run, they often suffer from severe short-term drawdowns (peak-to-trough declines) with fluctuating performance across cycles \citep{Arnott2016}. These fluctuations can arise from extreme macroeconomic conditions, elevated volatility, heightened correlations across multiple markets and uncertain monetary and fiscal policy responses. In this paper we address this by building a regime switching model using Hidden Markov Models (HMMs). Hidden Markov models have become one of the mainstream techniques to model time series data \citep{baum1970, Rabiner:1989}, with applications across many areas such as speech recognition, text classification and medical applications. We first study whether a regime switching framework can be used to detect regimes across factors and, if so, whether it can add value to smart beta strategies.
The prevalent approach in regime switching frameworks for asset allocation has been to specify in advance a static decision rule dependent on the predicted state \citep{Nystrup:2018}. An alternative approach is to dynamically optimise a portfolio using information from the inferred regime parameters. We follow this second approach and use the regime information to construct different types of portfolios (more return-oriented and more risk-focused). In a first step we build a dynamic asset allocation (DAA) system to construct portfolios through a regime switching model and perform a systematic analysis using hundreds of combinations of factors, training the HMM with the same factors that will be used for the allocation in the portfolio. Our study shows that using the regime information from the HMM yields better performance than a single-regime allocation, and we find that more return-oriented portfolios yield better risk-adjusted returns than their benchmarks, while the performance of more risk-focused portfolios shows some improvement. Finally, the common factor in the majority of the research on regime-switching models in finance is that it considers either a single asset or a small set of assets to build the model, with the selection criteria for the assets usually coming from domain knowledge. The reason for this is that unsupervised feature selection for HMMs is very limited, with wrapper methods exhibiting high computational cost and with very few methods specific to HMMs \citep{FSHMMsSurvey}. In most applications of HMMs, features are either pre-selected based on expert knowledge or feature selection is omitted entirely. One of the few feature selection algorithms developed for HMMs is the feature saliency hidden Markov model (FSHMM) proposed by \citet{FSHMM:article}, where the feature selection process is embedded in the training of the HMM. We incorporate this FSHMM into our dynamic asset allocation system,
with two benefits: (1) by performing feature selection during training, we expect to improve regime identification, retaining features that are state dependent and rejecting features that are state independent; (2) it allows many features to be incorporated in a model, letting the algorithm decide which ones contribute to regime identification, thus avoiding the need for expert knowledge in the construction of financial cycles. The main contributions of this paper are the following: \begin{enumerate} \item We build a dynamic asset allocation (DAA) system using an HMM for regime detection and perform a systematic study using multiple combinations of assets, comparing performance with their single-regime portfolio counterparts. We show that the DAA system consistently performs better than the benchmarks; \item We extend our DAA system by incorporating a feature saliency HMM for feature selection, thus improving regime identification; \item We test the DAA system with embedded feature selection on real-life investable indices using MSCI indices and show an improvement in risk-adjusted return for strategies built using the DAA system with FSHMM with respect to strategies built using the DAA system without feature selection. \end{enumerate} This paper is organized as follows: Section \ref{section:prev_work} gives an overview of previous work on HMMs in finance; Section \ref{section:RSframework} introduces hidden Markov models and feature saliency hidden Markov models; data and index construction are described in Section \ref{section:data_and_performance}; Section \ref{section:DAA_system} introduces the dynamic allocation system, the feature saliency algorithm and its incorporation into our dynamic asset allocation system; Section \ref{section:results} shows the experimental results of the DAA system and the incorporation of embedded feature selection.
Finally, we test the DAA system with feature selection using investable assets; conclusions and further work are considered in Section \ref{section:conclusion}. \section{Previous work} \label{section:prev_work} In finance, HMMs have been used extensively to build regime-based models since Hamilton proposed using a regime-switching model to identify economic cycles from the GNP series \citep{Hamilton:1989}. As pointed out by \citet{Ang2012}, HMMs can simultaneously capture multiple characteristics of financial return series, such as time-varying correlations, skewness and kurtosis, while also providing good approximations even for processes in which the underlying model is unknown \citep{Ang:2004, Bulla:2011a, Bulla:2006, Nystrup:2015, Nystrup:2017}. In addition, HMMs allow for good interpretability of results, as thinking in terms of regimes is a natural approach in finance. Examples of dynamic asset allocation are \citet{ReusMulvey:2016}, who use an HMM to build a dynamic portfolio using currency futures, and \citet{BaeMulvey:2014}, who use an HMM to identify market regimes using different asset classes, with regime information helping portfolios to avoid risk during left-tail events. \citet{Guidolin2012} provides an extensive review of applications of Markov switching models in empirical finance, covering stock returns, the term structure of default-free interest rates, exchange rates and joint processes of stock and bond returns. Outside of asset allocation, HMMs have been used to capture energy price dynamics \citep{Ramos:2014} and to build credit risk systems: for example, \citet{Petropoulos:2016} build a credit rating system using a Student's-$t$ HMM, addressing two problems in current systems, namely the heavy-tailed distribution of the data and their time-series nature; \citet{Elliott:2014} build a model using a double hidden Markov model to extract information about the true credit qualities of firms.
\citet{Dabrowski:2016} study HMMs and other Bayesian networks to build early warning systems to detect systemic banking crises, finding that Bayesian methods provide superior early-warning performance compared to traditional signal extraction models, and \citet{Zhou:2012} investigate three popular short-rate models and extend them to capture the switching of economic regimes using a finite-state Markov chain. So far, little work has been done on the impact of regime switching models on factor investing. Among the exceptions, \citet{Guidolin2008} found evidence of four economic regimes in size and value factors that capture time-variations in mean returns, volatilities and return correlations. \citet{Zhao:2011a} and \citet{Zhao:2011b} study time-varying risk premiums using a six-factor model to explain the returns of sector ETFs. Their work covers a short testing period (9 months) and does not consider transaction costs. \section{Theoretical background} \label{section:RSframework} In this section we present the hidden Markov model and the feature saliency hidden Markov model, which can simultaneously train the model and perform feature selection. \subsection{Hidden Markov Models (HMMs)} HMMs are sequential models that assume an underlying hidden process modeled by a Markov chain, with the sequence of observed data a noisy manifestation of this latent process \citep{Murphy:2012}. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{img/HMMcolor8} \caption{The Hidden Markov Model: blue squares represent latent variables, orange circles are observations and green circles represent model parameters.} \label{fig:hmm} \end{figure} Let $y = \{\mathbf{y}_1, \ldots , \mathbf{y}_T\}$ denote the sequence of observed data, where each $\mathbf{y}_t \in \mathbb{R}^L$ with $L$ the dimension of the observations, and let $x = \{x_1, \ldots, x_T\}$ denote the latent sequence of states, where $x_t \in \{1, \ldots, K\}$ with $K$ the number of latent states.
The HMM model parameters are $\Lambda = (\pi, A, \mu, \sigma)$, where $\pi$ and $A$ correspond to the initial and transition probabilities, and $\mu$ and $\sigma$ are the means and variances of the state-dependent Gaussian feature distributions (generally called emission probabilities, denoted here by $b_{x_t}$). The graphical model of the HMM is shown in Figure \ref{fig:hmm}, where blue squares represent latent variables, orange circles observations and green circles model parameters. The complete likelihood can be written as: \begin{equation} p(x,y|\Lambda) = \pi(x_0) b_{x_0}(y_0) \prod^T_{t=1} A(x_{t-1},x_t) b_{x_t}(y_t) \label{eq:HMMlike} \end{equation} In this work the sequence of noisy observations is the sequence of factor index returns and the underlying hidden process is the state of the market that generates them. We assume that the emission probabilities are Gaussian. While a single normal distribution is a poor fit to financial returns, a mixture of normal distributions provides a much better fit, capturing stylized behaviors including fat tails and skewness \citep{Nystrup:2015, Ang2012}. The training of HMMs is done by the Baum-Welch algorithm, a type of Expectation-Maximization (EM) algorithm \citep{Rabiner:1989}. The E-step calculates the expected value of the log-likelihood with respect to the states, given the data and the current model parameters, and the M-step maximizes the expectation computed in the previous step to update the model parameters. The algorithm iterates between these two steps until convergence. The expectation of the complete log-likelihood function is given by: \begin{equation} Q(\Lambda, \Lambda') = E[\log{p(x,y|\Lambda)}|y,\Lambda'] \label{eq:qML} \end{equation} where $\Lambda$ are the parameters for the current iteration and $\Lambda^{\prime}$ is the set of parameters from the previous iteration.
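As a concrete illustration of \ref{eq:HMMlike}, the sketch below evaluates the complete log-likelihood of a toy two-state Gaussian HMM along a given latent path; all parameter values and the example path are hypothetical:

```python
import numpy as np

# Toy two-state Gaussian HMM; all parameter values are hypothetical.
pi = np.array([0.6, 0.4])                   # initial state probabilities pi(i)
Atrans = np.array([[0.95, 0.05],            # transition matrix A(i, j)
                   [0.10, 0.90]])
mu = np.array([0.05, -0.10])                # state-dependent means
sigma = np.array([0.10, 0.25])              # state-dependent std deviations

def gauss_pdf(y, m, s):
    """Univariate Gaussian emission density b_i(y)."""
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def complete_log_likelihood(states, obs):
    """log p(x, y | Lambda) = log pi(x_0) + log b_{x_0}(y_0)
       + sum_t [log A(x_{t-1}, x_t) + log b_{x_t}(y_t)]."""
    ll = np.log(pi[states[0]]) \
         + np.log(gauss_pdf(obs[0], mu[states[0]], sigma[states[0]]))
    for t in range(1, len(states)):
        i, j = states[t - 1], states[t]
        ll += np.log(Atrans[i, j]) + np.log(gauss_pdf(obs[t], mu[j], sigma[j]))
    return ll

states = [0, 0, 1, 1]                       # a hypothetical latent path
obs = [0.04, 0.06, -0.30, -0.05]            # observed "returns"
print(complete_log_likelihood(states, obs))
```

In the actual Baum-Welch iterations this quantity is never evaluated for a single path; the E-step instead computes its expectation over all paths, but the per-path expression above is exactly the integrand of that expectation.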
Following \citet{FSHMM:article}, we place priors on the parameters and calculate the MAP estimate, so the $Q$ function is modified by adding the log-prior on the model parameters, $\log G(\Lambda)$: \begin{equation} Q(\Lambda, \Lambda') + \log{G(\Lambda)} \label{eq:qMAP} \end{equation} The EM algorithm is then as follows: the $Q$ function in \ref{eq:qML} is calculated in the E-step, and \ref{eq:qMAP} is maximized in the M-step. \subsection{FSHMM} \label{section:FSHMMtheory} The feature saliency HMM considers a feature relevant if its distribution is dependent on the underlying state and irrelevant if it is independent of it. Given a set of binary variables $\{z_1, \ldots, z_L\}$ that indicate the relevance of each feature, i.e. $z_l = 1$ if the $l$-th feature is relevant and $z_l = 0$ if it is irrelevant, the feature saliency $\rho_l$ is defined as the probability that the $l$-th feature is relevant. Assuming the features are conditionally independent given the state enables the multivariate Gaussian to be written as a product of univariate Gaussians, and the conditional distribution of $y_t$ given $z$ and $x$ can be written as follows: \begin{equation} p(y_t|z,x_t=i, \Lambda) = \prod_{l=1}^L r(y_{lt}|\mu_{il},\sigma^2_{il})^{z_l} q(y_{lt}|\epsilon_l,\tau^2_l)^{1-z_l} \end{equation} where $r(y_{lt}|\mu_{il},\sigma^2_{il})$ is the state-dependent Gaussian feature distribution for the $l$-th feature and $q(y_{lt}|\epsilon_l,\tau^2_l)$ is the state-independent feature distribution. The FSHMM model parameters are $\Lambda = (\pi, A, \mu, \sigma, \rho, \epsilon, \tau)$, where the first four parameters correspond to the regular HMM, $\rho$ is the feature saliency and $\epsilon$ and $\tau$ are the mean and variance of the state-independent Gaussian feature distribution. Figure \ref{fig:FSHMM} shows the feature saliency hidden Markov model. \begin{figure}[h!]
\centering \includegraphics[width=0.4\textwidth]{img/FSHMMcolor11} \caption{The feature saliency Hidden Markov Model: blue squares represent latent variables, orange circles are observations and green circles represent model parameters.} \label{fig:FSHMM} \end{figure} The marginal probability of $z$ is: \begin{equation} p(z|\Lambda) = \prod_{l=1}^L \rho_l^{z_l} (1-\rho_l)^{1-z_l} \end{equation} The joint probability distribution of $y_t$ and $z$ given $x$ is: \begin{align} & p(y_t,z|x_t=i,\Lambda) = \nonumber \\ & \prod_{l=1}^L [\rho_l r(y_{lt}|\mu_{il},\sigma^2_{il})]^{z_l} [(1-\rho_l)q(y_{lt}|\epsilon_l,\tau^2_l)]^{1-z_l} \end{align} The complete likelihood for the FSHMM is given by: \begin{equation} p(x,y,z|\Lambda) = \pi_{x_0} p(y_0,z|x_0,\Lambda) \prod_{t=1}^T a_{x_{t-1},x_t} p(y_t,z|x_t,\Lambda) \end{equation} The MAP estimation of the FSHMM is similar to that of the HMM using EM, but the $Q$ function incorporates the hidden variables associated with feature saliency and can be written as: \begin{align} Q(\Lambda,\Lambda^{\prime}) & = E[\log{p(x,y,z|\Lambda)}|y,\Lambda^{\prime}]\nonumber \\ & = \sum_{x,z} \log{p(x,y,z|\Lambda)} P(x,z|y,\Lambda^{\prime}) \end{align} The update steps of the EM algorithm are shown in Appendix \ref{appendix:FS} and the pseudocode for the MAP FSHMM formulation is given in Algorithm \ref{fhsmm-algo}. A detailed description of the equation derivations and the steps of the algorithm can be found in \citet{Adams2015}.
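Summing the joint distribution above over the indicator configurations $z \in \{0,1\}^L$ factorizes per feature, giving the state-conditional emission density $p(y_t|x_t=i,\Lambda)=\prod_{l}\big[\rho_l\, r(y_{lt}|\mu_{il},\sigma^2_{il})+(1-\rho_l)\, q(y_{lt}|\epsilon_l,\tau^2_l)\big]$, i.e. a per-feature two-component mixture of the state-dependent and state-independent densities. A minimal sketch with hypothetical parameters ($K=2$ states, $L=3$ features):

```python
import numpy as np

# Hypothetical FSHMM emission parameters (illustrative values only).
rho   = np.array([0.9, 0.5, 0.1])          # feature saliencies rho_l
mu    = np.array([[0.05, 0.02, 0.0],       # state-dependent means mu_il
                  [-0.10, -0.05, 0.0]])
sigma = np.array([[0.1, 0.2, 0.3],         # state-dependent stds sigma_il
                  [0.3, 0.4, 0.3]])
eps   = np.array([0.0, 0.0, 0.0])          # state-independent means epsilon_l
tau   = np.array([0.2, 0.3, 0.3])          # state-independent stds tau_l

def gauss(y, m, s):
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def emission(y, i):
    """p(y_t | x_t = i): per-feature mixture of the state-dependent density r
    (weight rho_l) and the state-independent density q (weight 1 - rho_l)."""
    r = gauss(y, mu[i], sigma[i])
    q = gauss(y, eps, tau)
    return np.prod(rho * r + (1.0 - rho) * q)

y = np.array([0.04, 0.10, -0.20])          # one observation vector
print(emission(y, 0), emission(y, 1))
```

Note how a feature with $\rho_l$ near zero contributes (almost) identically under every state, so it carries (almost) no information about the regime; this is exactly what the algorithm exploits when discarding features with low trained saliency.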
\begin{algorithm} \caption{MAP FSHMM Algorithm}\label{fhsmm-algo} \begin{algorithmic}[1] \State Select initial values for $\pi_i, a_{ij}, \mu_{il}, \sigma_{il}, \epsilon_l, \tau_l$ and $\rho_l$ for $i=1\ldots I, j = 1 \ldots I$, and $l=1\ldots L$ \State Select initial values for $\bar{p}_i, \bar{a}_{ij}, m_{il}, s_{il}, \zeta_{il}, \eta_{il}, b_l, c_l, \nu_l, \psi_l$ and $k_l$ for $i=1\ldots I, j = 1 \ldots I$, and $l=1\ldots L$ \State Select stopping threshold $\delta$ and maximum number of iterations $M$ \State Set absolute percent change in the posterior probability between current iteration and previous iteration $\Delta \mathcal{L}$ to $\infty$ and the number of iterations $it$ to 1 \While {$\Delta \mathcal{L} > \delta$ and $it < M$} \State E-step: calculate probabilities $\gamma_t(i), \xi(i,j), e_{ilt}, h_{ilt}, g_{ilt},$ $u_{ilt}, v_{ilt}$ following \ref{eq:Estep01} to \ref{eq:Estep5} \State M-step: update parameters $\pi_i,a_{ij},\mu_{il},\sigma^2_{il}, \epsilon_l, \tau^2_l, \rho_l$ following \ref{eq:Mstepi} to \ref{eq:Mstepf} \State Update $\Delta \mathcal{L}$ \State $it = it+1$ \EndWhile \State Perform feature selection based on $\rho_l$ and construct reduced models \end{algorithmic} \end{algorithm} As well as the parameters estimated through EM, the model also has several hyperparameters to set in advance. The most relevant is the weight parameter $k_l$, which can be used as an informative exponential prior on $\rho$. Setting a higher value of $k_l$ translates into a higher cost in the algorithm: in order to select that feature, the algorithm needs more evidence that it is relevant. This can either be used to reduce the number of selected features or as a proxy for the cost of selecting a feature in the optimization process. A heuristic for selecting a reasonable value of $k_l$ is to scale it with the number of observations $T$, setting $k_l = T/4$.
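The final step of Algorithm \ref{fhsmm-algo} and the $k_l$ heuristic can be sketched as follows; the trained saliency values and the 0.5 cut-off below are illustrative, not prescribed by the algorithm:

```python
import numpy as np

T = 2000                                   # number of observations (hypothetical)
k_l = T / 4                                # heuristic prior weight on the saliencies

# Saliencies as they might come out of the EM training (hypothetical values).
rho = np.array([0.95, 0.80, 0.40, 0.05])

# Final step: keep features whose saliency clears a threshold (0.5 used here
# as an illustrative cut-off) and build the reduced model on them.
selected = np.flatnonzero(rho > 0.5)
print(selected)
```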
\subsection{Smart Beta investing} \label{subsection:smartBeta} As mentioned, smart beta is a systematic, low-cost implementation of factor investing, where securities are selected based on their exposure to an attribute that has been associated with a persistently higher return in the past, called a factor. Factors can be fundamental characteristics of the economy (macroeconomic factors) or of companies (style factors). Macroeconomic factors can be thought of as capturing the broad risks and returns across asset classes, while style factors can be thought of as aiming to explain returns and risks for securities within asset classes. This paper looks at style factors in the equity market. Within style factors, dozens of indicators have been identified. The majority can be grouped into families, with style factors within a family measuring similar characteristics and often being highly correlated. An example of this is momentum, which includes factors measuring returns over different periods (3 months, 6 months, 12 months, etc.). While there is no universal definition of these families or of the factors that belong in each family, there are common themes. Typically, families will comprise value, growth, momentum, quality, size and some sort of volatility/risk/beta measure. There may be variations on this; for example, Dividend Yield is sometimes viewed as a factor family in its own right and sometimes as a member of the Value family, and the Value family can sometimes be split into Value and Deep Value. \section{Data} \label{section:data_and_performance} Below is a description of the two datasets used; Table \ref{table:datasets} summarises their main characteristics. \subsubsection*{Daily factor data from the S$\&$P 500 index} The first dataset is a set of style factors which are constructed based on the S$\&$P 500 universe of US stocks.
The style factor for each individual stock is determined, the universe is ranked, and a portfolio is constructed with long positions in the top 20$\%$ of stocks and short positions (negative weights) in the bottom 20$\%$ of stocks. This is repeated each month. The resulting style factor portfolio has a strong exposure to the factor and no exposure to the overall market (because the negative holdings offset the positive weights); Table \ref{table:JPM} lists these factors. The data is supplied by a broker and consists of 25 style factors covering the period from 1988 to 2016. This dataset is used throughout the analysis. \newcommand{\ra}[1]{\renewcommand{\arraystretch}{#1}} \begin{table*}[t] \caption{Representative factor indices used for building regime switching frameworks.} \ra{1.2} \centering \footnotesize \begin{tabular}{@{}ccccccc@{}}\toprule $\#$ & Factor & Family & & $\#$ & Factor & Family \\ \cmidrule{1-3} \cmidrule{5-7} 1 & Book Value Yield & Value & & 14 & Operating Margin Growth-1Yr & Quality\\ 2 & 1 Yr Fwd Earnings Yield & Value & & 15 & Operating Margin Growth-3Yr & Quality\\ 3 & Free Cash Flow Yield & Value & & 16 & Historical Free Cash Flow Growth-1Yr & Growth \\ 4 & Sales Yield & Value & & 17 & Historical Free Cash Flow Growth-3Yr & Growth \\ 5 & Dividend Yield & Value & & 18 & Historical DPS Growth-1Yr & Growth\\ 6 & Historical ROE & Quality & & 19 & Historical DPS Growth-3Yr & Growth\\ 7 & Operating (EBIT) Margin & Quality & & 20 & 6 Month Price Momentum & Momentum \\ 8 & AltmanZ & Quality & & 21 & 12 Month Price Momentum & Momentum\\ 9 & ROA & Quality & & 22 & 3 Month Avg Mean EPS & Quality\\ 10 & Piotroski & Quality & & 23 & Size & Risk\\ 11 & Earnings Growth FY1 to FY2 & Growth & & 24 & EPSCV & Quality \\ 12 & Historical Sales Growth-1Yr & Growth & & 25 & Beta & Risk\\ 13 & Historical Sales Growth-3Yr & Growth & & & &\\ \bottomrule \end{tabular} \label{table:JPM} \end{table*} \subsubsection*{Daily MSCI USA enhanced indices} The second dataset is supplied by MSCI
and consists of a range of indices which they publish. Like the first dataset, the individual style factors are calculated using underlying stocks and their style factor exposures. These individual style factor indices are then grouped into six style factor families, and it is these indices that are used in this paper. We use the six MSCI USA enhanced style indices: value, low size, momentum, quality, low volatility and dividend yield \cite{MSCI:tablecitation}. These have different inception dates, with the most recent beginning in 1999, which limits the usable period for this dataset to 1999--2016. The advantage of using a published set of indices (such as the MSCI indices) is that they can be packaged into an easy-to-purchase product, such as an Exchange Traded Fund (ETF), by a separate investment company. As an example, an investor who wants to buy US value stocks can buy an MSCI US enhanced Value ETF, which involves buying one security (the ETF) rather than the underlying stocks. By removing the need to analyse and purchase the underlying companies, the complexity and cost of implementing a smart beta strategy can be reduced. This allows us to test our novel DAA system with real-world assets. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{img/MSCI_enh} \caption{Cumulative returns of MSCI USA enhanced factors.
Returns are in excess of the market in USD, for the date range Jan 2012 to Feb 2016.} \label{fig:MSCI-index} \end{figure} \begin{table} \caption{Description of datasets.} \ra{1.2} \centering \footnotesize \begin{tabular}{@{}cccc@{}}\toprule Dataset & Date range & Number of features & Frequency\\ \cmidrule{1-4} Factor data & Jan-1988 to Feb-2016 & 25 & Daily\\ MSCI Enhanced & Jan-1999 to Feb-2016 & 6 & Daily\\ \bottomrule \end{tabular} \label{table:datasets} \end{table} \section{Dynamic asset allocation system} \label{section:DAA_system} Investing in single-factor strategies has been shown to deliver significant returns over the long term, but how to build multi-factor strategies and rotate factors according to market conditions is not straightforward. Factor indices are time series data, so we take advantage of the capacity of hidden Markov models to identify underlying regimes in sequences of observations and build a dynamic asset allocation system. We first determine the optimal number of hidden states to model market regimes and then, in order to avoid excessive transaction costs through frequent rebalancing, we optimize the rebalancing signal. \subsection{DAA system} We design a dynamic trading framework with daily evaluations and monthly re-adjustments, as shown in Figure \ref{fig:DAAsystem}. Each day a new vector of returns is added to the training set with an expanding window, and the state is predicted. Returns are lagged by one day in order to avoid look-ahead bias. Because this prediction is noisy, we determine an optimal window of consecutive days in the new state before the portfolio is rebalanced. Once a change of state has been accepted, the vector of means and covariance matrix of the new state are retrieved and the portfolio weights optimized, with transaction costs calculated after the rebalance. After a full month has passed, we add this new batch of data to the training set with an expanding window and retrain the model.
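The rule of accepting a state change only after it persists for several consecutive daily predictions can be sketched as follows (names illustrative; for the two-state model used later, "not the accepted state" coincides with "the same new state"):

```python
def confirm_regime_changes(daily_states, d):
    """Scan a sequence of daily state predictions and flag a rebalance
    only once ``d`` consecutive days disagree with the currently
    accepted state. Returns a list of (day_index, new_state) flags."""
    accepted = daily_states[0]
    run, flags = 0, []
    for t, s in enumerate(daily_states):
        if s != accepted:
            run += 1
            if run >= d:                 # change confirmed: accept and rebalance
                accepted, run = s, 0
                flags.append((t, s))
        else:
            run = 0                      # isolated noise: reset the counter
    return flags
```

Each flag corresponds to a rebalance event: the means and covariance of the newly accepted state are retrieved and the portfolio weights re-optimized.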
Figure \ref{fig:expWindow} shows how data is added daily with an expanding window. While this will not produce immediate changes in the model parameters (transition matrix and emission distributions), in time they should change slightly to accommodate the new information. Therefore, we can capture changes in the dynamics of the system over time. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{img/DAAonly} \caption{Dynamic Asset Allocation system diagram.} \label{fig:DAAsystem} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.3\textwidth]{img/expanding_window} \caption{Data scheme.} \label{fig:expWindow} \end{figure} \subsubsection{Model selection} The number of latent states in a HMM has to be set in advance, before training. One option is to use the Bayesian Information Criterion (BIC), a penalized log-likelihood function that can be used for model selection \citep{schwarz1978}. BIC is defined by: \begin{equation*} BIC = -2 \log p(D|\hat{\theta}) + d \log(N) \end{equation*} where $d$ is the number of free parameters in the model and $N$ is the number of samples. Thus, calculating the score over a range of $K$ states, we can select the model with the lowest value. Another option is to follow a greedy approach, calculating the performance of portfolios built with different numbers of regimes and selecting the model with the highest performance. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{img/boxplot_BIC_new_PAPER} \includegraphics[width=0.5\textwidth]{img/perf_vs_states_fixed} \caption{The Top plot shows boxplots of the BIC for different numbers of states: the two-state model has a higher BIC but there is little distinction between three, four and five; the Bottom plot shows the performance of portfolios as a function of the number of hidden states.
The two-state model yields a better performance for the majority of portfolios.} \label{ICvsSTATES} \end{figure} In the financial HMM literature \citep{Guidolin2008}, regime switching models normally range between two and four states. Keeping the number of states low allows better interpretability, so we selected 200 random combinations of 5 assets each and used these combinations to train HMMs with 2, 3, 4, 5 and 6 hidden states. From each HMM we built different types of portfolios, as will be explained in section \ref{subsection:tradeStrategies}. The performance of each portfolio was calculated using the information ratio (IR, the ratio of annualized return to annualized volatility); the plots of BIC and performance as functions of the number of states are shown in Figure \ref{ICvsSTATES}. The BIC score is quite similar for three to six states (four being the lowest) and is slightly higher for two states. While this would suggest the use of a four-regime model, the performance of portfolios with three and four states is significantly lower than with two states, so we have selected a two-state model. Two-state models can be interpreted as expansion and contraction regimes. \subsubsection{System calibration} The dynamic asset allocation system requires a trained HMM to model regime changes and the selection of an optimal time window to decide when a change of state has taken place and the portfolio has to be rebalanced. For the first part of the work, where we want to test if the proposed DAA system adds value to multi-factor strategies, we test it using multiple combinations of factors, and calibrate the system for each combination. From a pool of 25 factor indices we select $n$ assets randomly and use their returns to train a HMM. As the factors can be grouped into five families (following Table \ref{table:JPM}), we randomly select one factor from each family so that all families are represented. This yields a total of 1260 combinations.
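For concreteness, the BIC comparison above can be sketched as follows for a $K$-state HMM with $D$-dimensional Gaussian emissions and full covariance matrices; the free-parameter accounting is our own and should be adapted to the exact parameterization used.

```python
import numpy as np

def gaussian_hmm_bic(log_likelihood, n_states, n_features, n_samples):
    """BIC = -2 log p(D | theta_hat) + d log(N), where d counts the
    free parameters of a K-state HMM with full-covariance Gaussians."""
    K, D = n_states, n_features
    d = (K - 1) + K * (K - 1)       # initial distribution + transition rows
    d += K * D                      # emission means
    d += K * D * (D + 1) // 2       # symmetric covariance matrices
    return -2.0 * log_likelihood + d * np.log(n_samples)
```

Evaluating this over a range of $K$ and keeping the lowest value implements the first selection option; the greedy alternative replaces the score with realized portfolio performance.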
We then use the same factors to build the portfolios. We divide the data set into three parts: training (15 years), validation (9 years) and test (4 years). In order to avoid getting stuck in a local maximum, we use random initializations with initial parameters calculated from the training data and select the model with the highest score. Figure \ref{fig:hmm-flowchart} shows the process of training, validation and testing using the DAA system. \begin{figure}[ht!] \centering \includegraphics[width=0.5\textwidth]{img/HMMflowchart} \caption{Full schematic of calibration and usage of the dynamic asset allocation system for smart beta investing.} \label{fig:hmm-flowchart} \end{figure} The regime prediction is done by passing the whole series of returns up to the previous day to decode the most probable sequence of hidden states, keeping the last value as the state prediction. This daily prediction is noisier than it would be if a whole month of returns was passed together, and we cannot re-balance a portfolio each time a change of state is flagged, as quite often this would mean a daily re-balance. Instead, in the validation set, we look for a window of $d$ consecutive days in the same new state before we flag a change of regime and re-balance the portfolio accordingly. Figure \ref{fig:heatmap} shows the performance of a selection of portfolios as a function of the time window $d$. While certain combinations of assets perform consistently better than others with larger windows, smaller windows have the worst performance in all cases. The main reason is that the performance of portfolios is adjusted for transaction costs, so smaller windows mean higher portfolio turnover and therefore higher costs. We use the validation set to identify the optimal window for each combination of assets. \begin{figure}[ht!] \centering \includegraphics[width=0.5\textwidth]{img/Heatmap_performance_vs_winsize2.pdf} \caption{A subset of the 1260 portfolios is plotted.
The colormap corresponds to the performance measured by IR (adjusted for transaction costs) as a function of window size. In the majority of cases performance is low for smaller windows due to frequent re-balancing; performance tends to improve with window size up to around 15 days. However, if the window is too large, performance may decrease again as it fails to take advantage of more frequent regime changes.} \label{fig:heatmap} \end{figure} \subsection{DAA system with Feature Saliency: FS-DAA} \label{sec:feature-selection} So far, we have proposed a DAA system where the time series used to train the HMM were known in advance, which can be a limitation. Therefore, we propose a novel DAA system that incorporates an embedded feature selection method during training, by using a Feature Saliency Hidden Markov Model (FSHMM) as described in section \ref{section:FSHMMtheory}. This method selects features that contribute to regime identification, called regime dependent, and rejects features that do not depend on the regimes. Figure \ref{fig:fshmm-flowchart} shows the different stages of training, validation and testing using this new DAA system, which we call FS-DAA. FS-DAA takes multiple time series and fits a FSHMM, which assigns a saliency to each time series; a higher saliency means the feature is selected. Because the FSHMM assumes that features are conditionally independent, the fitted model has diagonal covariance matrices. We therefore take the selected relevant features and use them to train a HMM with full covariance matrices. \begin{figure}[ht!] \centering \includegraphics[width=0.5\textwidth]{img/FSHMMflowchart} \caption{Full schematic of calibration and usage of the DAA system with embedded feature selection for smart beta investing.} \label{fig:fshmm-flowchart} \end{figure} As a first step to assess whether the FSHMM can distinguish between relevant features and noise, we generated irrelevant features of random noise and added them to our daily factor data set.
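This noise-augmentation check can be set up as in the sketch below; scaling the noise to the typical magnitude of the real returns is our choice, not something specified in the text.

```python
import numpy as np

def add_noise_features(returns, n_noise, seed=0):
    """Append ``n_noise`` irrelevant columns of i.i.d. Gaussian noise
    to a (T, D) matrix of factor returns; a well-behaved saliency
    estimate should assign low rho to these extra columns."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((returns.shape[0], n_noise)) * returns.std()
    return np.hstack([returns, noise])
```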
We tested this using different numbers of features, numbers of observations and values of $k_l$. In each case, $k_l$ was the same for all features, both relevant and noise. Results are summarized in Tables \ref{table:rhos1} and \ref{table:rhos2}. In all cases, the algorithm assigned low saliency values to the irrelevant features and high values to the relevant ones. Secondly, we train a DAA system using all 25 features from the factor dataset, and a FS-DAA system that takes the 25 features, selects the relevant ones and then trains a HMM only with those factors, and we compare the regimes obtained. Finally, using these two systems, we build a strategy using the MSCI USA enhanced family of factor indices. Both models are trained using 16 years of data (from 1990 to 2006) and then retrained every month until 2016. We use 7.5 years of trading data, from Jan 1999 to June 2006, to estimate the mean and covariance of the MSCI indices for each regime, so as to have a robust estimation of the covariance matrix for both regimes. We then use a validation set of 6 years to select the optimal time window to set a change of state, and a test set of 4 years. One advantage of the proposed DAA system is that it allows us to decouple the data used to train the HMM that detects regimes from the data used for allocation. This is useful for factor investing because we can build factors with a long history (as in the factor dataset) and then use real-life, investable assets with a shorter history (the MSCI enhanced data) to build the portfolios. \section{Results and analysis} \label{section:results} Firstly, the DAA system's performance is compared with baseline strategies on the large factor dataset. Then, the implementation of the FSHMM algorithm is discussed. Lastly, we test the proposed FS-DAA system with real-life assets using the MSCI indices dataset.
\subsection{Trading strategies and benchmarks} \label{subsection:tradeStrategies} Instead of constructing only one kind of portfolio we build several: Risk Parity, Maximum Diversification, Minimum Variance, Max Return, Max Sharpe and a modified max return (for a short description of each portfolio, see Appendix~\ref{appendix:port}). Risk Parity (RP), Maximum Diversification (MD) and Minimum Variance (MV) are constructed taking into account only the covariance matrix, so they can be considered more risk-aware. Max Return (MR), Max Sharpe (Sharpe) and the modified max return (Dyn) all consider the mean of the returns during the construction, so they tend to be more aggressive. For comparison we built an equally weighted portfolio and a benchmark for each asset combination. Each benchmark is constructed using the same optimization method as its DAA system counterpart, but is rebalanced monthly and its covariance matrix is estimated using ``single regime'' past returns. The DAA system instead has two covariance matrices, one for each regime. All portfolios and their benchmarks are constructed taking into account transaction costs. Costs are calculated by multiplying portfolio turnover (how much a portfolio is rebalanced) by a transaction cost of 50bps (0.5$\%$), for both selling and buying. \subsection{DAA system compared to baseline} We first evaluated our DAA system by using 1260 combinations of randomly selected assets to train the HMM and build the allocations, and compared the results with their benchmarks. Figure \ref{fig:boxplots_all_port} shows the performance, measured through the Sortino ratio, of all portfolios calculated using the DAA system, and of their benchmarks. We can see that all portfolios constructed using regime information perform better than their counterparts.
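The turnover-based cost calculation described above can be sketched as follows; we read the 50~bps as applying to each unit of weight traded, buys and sells alike, which is our interpretation.

```python
import numpy as np

def rebalance_cost(w_old, w_new, cost_bps=50.0):
    """Cost of moving from weights ``w_old`` to ``w_new``: turnover
    (sum of absolute weight changes, i.e. all selling and buying)
    times the per-unit transaction cost in basis points."""
    turnover = np.abs(np.asarray(w_new) - np.asarray(w_old)).sum()
    return turnover * cost_bps / 10000.0
```

This cost is subtracted from the portfolio return at every confirmed rebalance, which is why very short confirmation windows are penalized.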
Portfolios that are more return-oriented (because the mean returns are used in the optimization process) improve greatly with respect to their benchmarks, while more risk-focused portfolios show an improvement with respect to their single-regime counterparts but a similar performance to equally weighted portfolios. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{img/Sortino_all_2_states_color2_new_PAPER} \caption{Boxplots corresponding to the Sortino ratio for all portfolios calculated using a HMM (blue), their benchmarks (orange) and an equally weighted portfolio (green).} \label{fig:boxplots_all_port} \end{figure} \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{img/fig_RV_test} \\ \includegraphics[width=0.5\textwidth]{img/histo_test_sortino} \caption{Top plot shows annualized return as a function of annualized volatility for Sharpe portfolios built using HMM information (blue), Sharpe portfolios rebalanced monthly (orange) and EQ portfolios (green). Bottom plot corresponds to the Sortino ratio distribution for the same portfolios. All plots correspond to the test set (out of sample).} \label{fig:RV} \end{figure} The highest performing portfolio is Sharpe, which takes into account both mean and covariance in the construction process. Figure \ref{fig:RV}-Top shows the annualized return as a function of annualized volatility for the Sharpe portfolios and their benchmarks. Portfolios built using HMMs show a higher return and less volatility than their unconditional counterparts, and a higher return and volatility than the EQ portfolios. Figure \ref{fig:RV}-Bottom shows a risk-adjusted return metric (Sortino) for the same portfolios. We can see that the HMM portfolios yield a better performance than their benchmarks. \rowcolors{2}{gray!8}{white} \begin{table*} \caption{Average performance of portfolios built using HMMs and their benchmarks.
Top portfolios, which are more aggressive, have a higher risk-adjusted return (measured through the IR and Sortino ratios) than their unconditional counterparts and the equally weighted portfolio. Bottom portfolios, which are more defensive (only the covariance matrix is taken into account in the construction process), perform better than their benchmark counterparts but do not outperform the EQ portfolio.} \ra{1.2} \centering \small \begin{tabular}{@{}rlccccccccc@{}}\toprule & Ann ret & Ann vol & IR & Skw & kurt & D. risk & Sortino & DD & DD days \\ \cmidrule{1-1} \cmidrule{2-10} EQ & 0.77 & 2.88 & 0.26 & -0.14 & 0.81 & 2.05 & 0.37 & 379 & 318 \\ Dyn HMM & 1.67 & 4.73 & 0.34 & -0.19 & 1.35 & 3.37 & 0.48 & 32 & 291 \\ Dyn Bench & -0.60 & 3.98 & -0.14 & -0.40 & 1.68 & 2.96 & -0.19 & 1136 & 682 \\ Sharpe HMM & 2.31 & 4.66 & 0.53 & -0.19 & 1.16 & 3.29 & 0.75 & 429 & 253 \\ Sharpe Bench & -3.14 & 4.89 & -0.64 & -0.79 & 4.49 & 3.80 & -0.82 & 1375 & 873 \\ MR HMM & 3.19 & 7.03 & 0.46 & -0.19 & 1.34 & 4.98 & 0.65 & 35 & 264 \\ MR Bench & -5.03 & 7.20 & -0.69 & -0.78 & 3.71 & 5.63 & -0.88 & $>$4000 & 1001 \\ MV HMM & 0.61 & 2.41 & 0.24 & -0.14 & 0.96 & 1.72 & 0.35 & 662 & 309 \\ MV Bench & -0.12 & 2.24 & -0.07 & -0.11 & 0.83 & 1.61 & -0.09 & 520 & 511 \\ MD HMM & 0.69 & 2.54 & 0.26 & -0.14 & 1.01 & 1.80 & 0.37 & 340 & 306 \\ MD Bench & 0.01 & 2.39 & -0.02 & -0.12 & 0.84 & 1.71 & -0.02 & 454 & 447 \\ RP HMM & 0.63 & 2.58 & 0.24 & -0.13 & 1.04 & 1.84 & 0.34 & 212 & 302 \\ RP Bench & 0.20 & 2.40 & 0.07 & -0.13 & 1.04 & 1.72 & 0.10 & 475 & 416 \\ \bottomrule \end{tabular} \label{table:results-part1} \end{table*} Table \ref{table:results-part1} shows different performance metrics averaged for each type of portfolio. In most cases, HMM portfolios show better performance than their unconditional benchmarks on all metrics, and the more return-oriented portfolios also perform better than the equally weighted ones. The performance improvement in return-oriented portfolios comes from both higher returns and risk reduction.
Additionally, skewness and kurtosis are lower than for benchmark returns, and the maximum drawdown is lower (and lasts for a shorter period of time) in most cases. \subsection{DAA system with FSHMM} We then used the algorithm to detect relevant features in our data set of 25 factor indices. Figure \ref{fig:rhos} shows the feature saliencies of all factor return series for different values of $k$. As the training set has about 3800 observations, we chose values of $k$ close to a quarter of that number, following the heuristics proposed in \citet{FSHMM:article}. The selected features are: Book Value Yield, 1 Yr Fwd Earnings Yield, Sales Yield, 6 Month Price Momentum, 12 Month Price Momentum, EPSCV and Beta. This is of interest as the selected factors represent four of the six or seven factor families mentioned in section \ref{subsection:smartBeta}. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{img/rhos_vs_feat_new} \caption{Selected features in the training set ($T = 3800$ observations) of the 25 factor return series with different values of $k$. With small values of $k$ all features are accepted. With $k\geq T/4$ the algorithm selects a relevant subset of features.} \label{fig:rhos} \end{figure} For comparison, we trained a HMM using all 25 features and another using only the selected features. Figure \ref{fig:filtered-prob} shows the predicted state and estimated probabilities for each model after training. We can identify state 1 as a ``good'' state and state 0 as a ``bad'' state. The plots clearly identify the 2008 economic crisis: the first signs developed in August and September of 2007, with some episodes between January and May 2008, before the big crash in September 2008. Both models identify spikes of state 0 in the second half of 2007 and transition fully to state 0 during 2008.
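The time-in-state and average-duration comparison reported below can be computed from a predicted state sequence as in this sketch:

```python
import numpy as np

def state_occupancy_stats(states, state=0):
    """Fraction of days spent in ``state`` and the average length of
    its consecutive runs, computed from a predicted state sequence."""
    states = np.asarray(states)
    frac = float((states == state).mean())
    runs, run = [], 0
    for s in states:
        if s == state:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:                          # close a run ending on the last day
        runs.append(run)
    avg = float(np.mean(runs)) if runs else 0.0
    return frac, avg
```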
The model trained with relevant features tends to be more sensitive to the distress state: it spends 24$\%$ of the time in this state versus $20\%$ for the model trained with the full set of features. The average duration of state 0 is 3.8 days versus an average length of 3.2 days for the full model. No smoothing was applied to the predicted probabilities to calculate these values. \begin{figure}[h!] \includegraphics[width=0.5\textwidth]{img/Filtered_prob_FSHMM_new}\\ \includegraphics[width=0.5\textwidth]{img/Filtered_prob_full_HMM_new} \caption{Top plot corresponds to the predicted state and state probabilities for the model trained with relevant features. Bottom plot corresponds to the HMM trained with all 25 features.} \label{fig:filtered-prob} \end{figure} \subsection{FS-DAA system with MSCI indices} \label{section:MSCI-strategy} In this section we evaluate the performance of the FS-DAA system, using a subset of factors from the daily factor dataset after feature selection and the MSCI enhanced factors for allocation, and compare it with the DAA system without feature selection, which trains the HMM with all 25 factors from the dataset. For simplicity we calculated only the Sharpe, MR and Dyn portfolios, as they showed significantly better performance than the risk-focused portfolios and their benchmarks when a regime switching model is used in their construction. Figure \ref{fig:MSCI-HMM-port} shows the cumulative return of these three portfolios with a full-feature HMM, with the FSHMM, and for the benchmarks constructed without regime information. Both HMM portfolios perform better than their benchmarks (top plot), and portfolios constructed using an HMM with feature selection perform slightly better than portfolios built with a full-feature HMM (bottom plot). \begin{figure}[h!]
\includegraphics[width=0.5\textwidth]{img/all_msci_enh}\\ \includegraphics[width=0.5\textwidth]{img/fig_enh_compar} \caption{Top plot shows portfolios built using information from the HMM with feature saliency, portfolios built using information from the HMM with full features, and their benchmarks. Both HMM portfolios accumulate higher returns than the benchmarks. Bottom plot compares the cumulative returns of the FSHMM and full-feature HMM portfolios; portfolios built using feature selection have a better performance. Returns are in excess of the market in USD, for the period Jan 2012 to Feb 2016.} \label{fig:MSCI-HMM-port} \end{figure} Performance metrics for all portfolios and for the MSCI enhanced indices net of market are shown in Table \ref{table:MSCI-metrics}. All metrics are annualized and out-of-sample, covering the period Jan-2012 to Feb-2016. The results obtained using the DAA and FS-DAA systems show a robust improvement with respect to their benchmarks. Only three MSCI indices have a positive IR in the period, and two of the three FSHMM portfolios achieve the highest IRs overall. A reduction of downside risk with respect to the benchmarks and the MSCI indices is achieved in most cases that use either a full-feature HMM or a FSHMM. \begin{table*} \caption{Metrics for portfolios built using FSHMM, all assets (HMM), their benchmark and the individual MSCI indices used to build the portfolios. The metrics cover the period Jan 2012 to Feb 2016. } \ra{1.2} \centering \small \begin{tabular}{@{}rlccccccccc@{}}\toprule & Ann ret & Ann vol & IR & Skw & kurt & D.
risk & Sortino & DD & DD days \\ \cmidrule{1-1} \cmidrule{2-10} Sharpe FSHMM & 0.061 & 0.50 & 0.12 &-0.71 & 2.85 & 0.37 & 0.16 &-94 & 387 \\ Sharpe HMM &-0.11 & 0.65 &-0.16 &-0.70 & 3.84 & 0.49 &-0.22 &-164 & 522 \\ Sharpe Bench &-1.62 & 0.92 &-1.76 &-2.75 & 15.0 & 0.82 &-1.98 &19825& 1452\\ Dyn FSHMM & 0.39 & 0.65 & 0.61 &-0.41 & 0.84 & 0.47 & 0.84 &-52 & 141 \\ Dyn HMM &-0.019 & 0.60 &-0.032 &-1.12 & 9.03 & 0.45 &-0.042 &-175 & 566 \\ Dyn Bench &-1.10 & 1.03 &-1.07 &-2.76 & 16.2 & 0.88 &-1.24 &-1508& 1123\\ MR FSHMM & 2.02 & 3.20 & 0.63 &-0.39 & 1.83 & 2.30 & 0.88 &-82 & 62 \\ MR HMM & 1.85 & 3.19 & 0.58 &-0.39 & 1.84 & 2.29 & 0.80 &-92 & 62 \\ MR Bench &-3.46 & 3.78 &-0.91 &-2.71 & 20.5 & 3.17 &-1.09 &-4032& 1250\\ MSCI Quality & 0.50 & 2.76 & 0.18 & 0.20 & 2.02 & 1.90 & 0.26 &-208 & 837 \\ MSCI Enhanced Value & 0.025 & 3.97 & 0.0064& 0.029 & 0.86 & 2.83 & 0.0090&-105 & 599 \\ MSCI High Dividend Yield &-2.16 & 3.22 &-0.67 & 0.38 & 0.85 & 2.24 &-0.96 &-2374& 1317\\ MSCI Momentum & 2.48 & 4.35 & 0.57 &-0.35 & 1.42 & 3.11 & 0.80 &-144 & 475 \\ MSCI Minimum Volatility &-0.89 & 3.58 &-0.25 & 0.10 & 0.69 & 2.52 &-0.35 &-38371& 906\\ MSCI Equal Weighted &-0.27 & 2.94 &-0.092 &-0.045 & 0.74 & 2.09 &-0.13 &-135 & 675 \\ \bottomrule \end{tabular} \label{table:MSCI-metrics} \end{table*} \section{Conclusions and future work} \label{section:conclusion} The main focus of this paper is to improve smart beta strategies through the use of regime switching models. The main contributions of this work are: \begin{enumerate} \item We have shown that constructing a portfolio using information from a HMM with two latent states, trained with the same assets that will be used for allocation, improves performance with respect to the same portfolio built with a single-regime approach. We have tested this by calculating different types of portfolios, ranging from more risk-focused to more aggressive.
The improvement is more significant for return-oriented and balanced portfolios, where return or risk-adjusted return is optimized, achieving on average an annual information ratio of about 0.5 in excess of the market; it is less evident in risk-focused portfolios (Risk Parity, Minimum Variance and Maximum Diversification), with an average improvement in IR of about 0.25 annually. \item We have developed a systematic framework for asset allocation using an embedded feature selection algorithm to identify features of relevance to the model. This improves the model's accuracy and allows for a more objective approach to portfolio construction, in the sense that it should help to prevent biases in the feature selection process, which is normally done by a financial expert. We used the FSHMM algorithm to select relevant features from a pool of well-known factor indices and compared the result with a HMM trained on the whole set of assets. Both models showed agreement on regime identification, with the model trained using only relevant features being more sensitive to periods of economic distress. \item We have tested both models using real, investable assets through the MSCI USA enhanced factor indices. Portfolios constructed using information from the FSHMM trained with relevant features show a higher performance than the same portfolios constructed using a HMM trained with the full set of features. \end{enumerate} Possible extensions of the model for future work include adding macroeconomic series to the HMM, where the embedded feature selection could potentially solve the problem of selecting relevant economic series, allowing for a more precise identification of economic cycles. This would be particularly interesting for other asset classes such as fixed income, but is outside the scope of this paper.
A drawback of using HMMs is that the number of latent states has to be known in advance, selected through BIC (which is not always effective), or chosen with a greedy approach that picks the model with the highest performance. This could be addressed using an infinite HMM \citep{iHMM}. \section*{Acknowledgement} The authors are thankful to Sahil Kahn, David Hutchins and Andrew Chin for their valuable feedback on early results of this work. This work was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement no. 675044 (\url{http://bigdatafinance.eu/}), Training for Big Data in Financial Research and Risk Management.
\section{Rank--2 $U(1)$ gauge theory} Here, following \cite{XuPRB06,PretkoPRB16}, we derive the relationship between electric, magnetic and gauge fields within the rank--2 $U(1)$ [\mbox{R2--U1}] electrodynamics considered in the main text. Our starting point is an electric field described by a symmetric rank--2 tensor, \begin{equation} E_{ij} = E_{ji}. \end{equation} As in conventional electrodynamics, its conjugate is the rank--2 gauge field $\bf A$, which also has to be symmetric to match the degrees of freedom, \begin{equation} A_{ij} = A_{ji}. \end{equation} The low energy sector of the electric field has vanishing vector charge, and is traceless, \begin{equation}\label{EQN_S_low_E_cond} \partial_i E_{ij} = 0, \quad E_{ii} = 0. \end{equation} Here we keep all indices as subscripts but still use the Einstein summation rule. These two conditions determine the form of the gauge transformations. Consider a wave-function \begin{equation} \ket{\Psi({\bf A})}. \end{equation} We take a low energy configuration of $\bf E$ obeying Eq.~\eqref{EQN_S_low_E_cond} and construct a symmetrized operator that is identically zero to act upon the wave-function \begin{equation} -i(\lambda_j \partial_i E_{ij} + \lambda_i \partial_j E_{ij})\ket{\Psi({\bf A})} = 0. \end{equation} Integrating by parts and assuming vanishing boundary terms, we have \begin{equation} i(\partial_i\lambda_j +\partial_j \lambda_i )E_{ij}\ket{\Psi({\bf A})} = 0. \end{equation} Since $E_{ij}$ is conjugate to $A_{ij}$, it generates a transformation of $\bf A$. Thus \begin{equation} i(\partial_i\lambda_j +\partial_j \lambda_i )E_{ij}\ket{\Psi({\bf A})} = \ket{\Psi({\bf A}+\bm{\nabla} \otimes\bm{\lambda} +(\bm{\nabla} \otimes\bm{\lambda} )^T)} - \ket{\Psi({\bf A})} = 0.
\end{equation} That is, the low energy sector wave-function is invariant under the gauge transformation \begin{equation} \begin{split} {\bf A} \rightarrow {\bf A}+\bm{\nabla} \otimes\bm{\lambda} +(\bm{\nabla} \otimes\bm{\lambda} )^T, \quad \text{ i.e., }\quad A_{ij }\rightarrow A_{ij} + \partial_i\lambda_j +\partial_j \lambda_i. \end{split} \end{equation} Similarly, the traceless condition \begin{equation} -i\gamma\delta_{ij} E_{ij}\ket{\Psi({\bf A})} = 0 \end{equation} leads to another gauge symmetry \begin{equation} A_{ij }\rightarrow A_{ij} +\gamma\delta_{ij}. \end{equation} Finally, the magnetic field is obtained by finding the simplest gauge-invariant quantity. In this case, it has to have three derivatives acting on the gauge field, \begin{equation} \begin{split} B_{ij} = & \frac{1}{2}[ \epsilon_{jab}(\partial_a \partial_k \partial_i A_{bk} - \partial_a \partial^2 A_{bi}) \\ & + \epsilon_{iab} (\partial_a \partial_k \partial_j A_{bk} - \partial_a \partial^2 A_{bj})]. \end{split} \end{equation} Further details of the phenomenology of \mbox{R2--U1} phases can be found in \cite{PretkoPRB16,PretkoPRB17}. \section{Derivation of Effective Field Theory} We show how a rank--2 tensor electric field, satisfying the constraint required for \mbox{R2--U1} electrodynamics [Eqs.~(\ref{eq:constraint.1},\ref{eq:constraint.2})], can be derived from a breathing pyrochlore lattice model [Eq.~\eqref{eq:H}]. The pattern of this derivation closely follows Refs.~\cite{Benton2016NatComm,BentonThesis,YanPRB17}. Our starting point is the breathing pyrochlore lattice with a spin on each of its sites, and nearest neighbor interactions between the spins. ``Breathing'' means the lattice is bi-partitioned into A- and B-tetrahedra [Fig.~\ref{fig:breathing.lattice}], and each type of tetrahedron has its own interactions. 
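The gauge invariance of $B_{ij}$ can be verified mechanically. The following symbolic sketch (assuming SymPy is available; the placeholder functions for $\lambda_i$ and $A_{ij}$ are arbitrary smooth fields introduced only for this check) confirms that every component of $B_{ij}$ is unchanged under the combined transformation $A_{ij} \rightarrow A_{ij} + \partial_i\lambda_j + \partial_j\lambda_i + \gamma\delta_{ij}$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
gamma = sp.Symbol('gamma')  # constant shift of the traceless gauge symmetry

# Arbitrary smooth gauge parameters lambda_i and a symmetric gauge field A_ij
lam = [sp.Function(f'lam{i}')(x, y, z) for i in range(3)]
A = [[sp.Function(f'A{min(i, j)}{max(i, j)}')(x, y, z) for j in range(3)]
     for i in range(3)]

def lap(f):
    """Laplacian of f."""
    return sum(sp.diff(f, c, 2) for c in X)

def B(Af, i, j):
    """Magnetic field B_ij: three derivatives acting on the gauge field."""
    expr = 0
    for a in range(3):
        for b in range(3):
            tj = sum(sp.diff(Af[b][k], X[a], X[k], X[i]) for k in range(3)) \
                 - sp.diff(lap(Af[b][i]), X[a])
            ti = sum(sp.diff(Af[b][k], X[a], X[k], X[j]) for k in range(3)) \
                 - sp.diff(lap(Af[b][j]), X[a])
            expr += (sp.LeviCivita(j, a, b) * tj + sp.LeviCivita(i, a, b) * ti) / 2
    return sp.expand(expr)

# Gauge transformation: A_ij -> A_ij + d_i lam_j + d_j lam_i + gamma delta_ij
Ag = [[A[i][j] + sp.diff(lam[j], X[i]) + sp.diff(lam[i], X[j]) + gamma * int(i == j)
       for j in range(3)] for i in range(3)]

# Every component of B is unchanged by the transformation
invariant = all(sp.expand(B(Ag, i, j) - B(A, i, j)) == 0
                for i in range(3) for j in range(3))
print(invariant)
```

The cancellation relies only on the antisymmetry of $\epsilon_{iab}$ and the commutativity of partial derivatives, which is exactly the content of the analytic argument.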
The model that hosts a rank-2 spin liquid has breathing Heisenberg antiferromagnetic interactions on both the A- and B-tetrahedra, and negative Dzyaloshinskii-Moriya (DM) interactions on A-tetrahedra only. The Hamiltonian for the model is \begin{equation}\label{EQN_S_hamiltonian} \mathcal{H} = \sum_{\langle ij \rangle\in \text{A}} \left[J_A {\bf S}_i \cdot {\bf S}_j + D_A\hat{\bf d}_{ij}\cdot( {\bf S}_i \times {\bf S}_j )\right ] + \sum_{\langle ij \rangle\in \text{B}} \left[J_B {\bf S}_i \cdot {\bf S}_j + D_B \hat{\bf d}_{ij}\cdot( {\bf S}_i \times {\bf S}_j )\right ] , \end{equation} where $\langle ij \rangle \in \text{A(B)}$ denotes nearest neighbour bonds belonging to the A(B)-tetrahedra. The sites $0,\ 1,\ 2,\ 3$ are at positions relative to the center of an A-tetrahedron \begin{equation} {\bf r}_0 = \frac{a}{8}(1,1,1),\; {\bf r}_1 = \frac{a}{8}(1,-1,-1),\; {\bf r}_2 = \frac{a}{8}(-1,1,-1),\; {\bf r}_3 = \frac{a}{8}(-1,-1,1), \end{equation} where $a$ is the length of the unit cell. Vectors $\hat{\bf d}_{ij}$ are bond dependent, defined in accordance with Refs.~\cite{KotovPRB05,CanalsPRB08,RauPRL16}: \begin{equation} \begin{split} &\hat{\bf d}_{01}= \frac{(0,-1,1)}{\sqrt{2}},\; \hat{\bf d}_{02}= \frac{(1,0,-1)}{\sqrt{2}},\; \hat{\bf d}_{03}= \frac{(-1,1,0)}{\sqrt{2}} , \\ &\hat{\bf d}_{12}= \frac{(-1,-1,0)}{\sqrt{2}},\; \hat{\bf d}_{13}= \frac{(1,0,1)}{\sqrt{2}},\; \hat{\bf d}_{23}= \frac{(0,-1,-1)}{\sqrt{2}}. 
\end{split} \label{EQN_SM_D_Vec} \end{equation} Equivalently, this model can be written in a standard matrix-exchange form for a breathing-pyrochlore lattice model as \begin{equation} \mathcal{H} = \sum_{\langle ij \rangle \in \text{A}} S_i^\alpha \mathcal{J}_\text{A,ij}^{\alpha\beta} S_j^\beta + \sum_{\langle ij \rangle \in \text{B}} S_i^\alpha \mathcal{J}_\text{B}^{\alpha\beta} S_j^\beta \end{equation} where $\mathcal{J}_\text{A,ij}$ is a three-by-three matrix that couples spins on sub-lattice sites $i, j$ whose bond belongs to A-tetrahedra, and $\mathcal{J}_\text{B}$ is the coupling matrix for B-tetrahedra. In the case of $D_B=0$ that we are interested in, $\mathcal{J}_\text{B} $ is identical for any pair of $i, j$, \begin{equation} \mathcal{J}_\text{B} = \begin{bmatrix} J_B & 0 & 0 \\ 0 & J_B & 0 \\ 0 & 0 & J_B \end{bmatrix} . \end{equation} Matrices $\mathcal{J}_\text{A,ij}$ are bond dependent and related to each other by the lattice symmetry. Their values are \begin{equation} \begin{split} & \mathcal{J}_\text{A,01} = \begin{bmatrix} J_A & D_A/\sqrt{2} & D_A/\sqrt{2} \\ -D_A/\sqrt{2} & J_A & 0 \\ -D_A/\sqrt{2} & 0 & J_A \end{bmatrix} ,\; \mathcal{J}_\text{A,02} = \begin{bmatrix} J_A & -D_A/\sqrt{2} & 0 \\ D_A/\sqrt{2} & J_A & D_A/\sqrt{2} \\ 0 & -D_A/\sqrt{2} & J_A \end{bmatrix} ,\; \\ & \mathcal{J}_\text{A,03} = \begin{bmatrix} J_A & 0 & -D_A/\sqrt{2}\\ 0 & J_A & -D_A/\sqrt{2} \\ D_A/\sqrt{2} & D_A/\sqrt{2} & J_A \end{bmatrix} ,\; \mathcal{J}_\text{A,12} = \begin{bmatrix} J_A & 0 & D_A/\sqrt{2} \\ 0 & J_A & -D_A/\sqrt{2} \\ -D_A/\sqrt{2} & D_A/\sqrt{2} & J_A \end{bmatrix} ,\;\\ & \mathcal{J}_\text{A,13} = \begin{bmatrix} J_A & D_A/\sqrt{2} & 0\\ -D_A/\sqrt{2} & J_A & D_A/\sqrt{2} \\ 0 & -D_A/\sqrt{2} & J_A \end{bmatrix} ,\; \mathcal{J}_\text{A,23} = \begin{bmatrix} J_A & -D_A/\sqrt{2} & D_A/\sqrt{2}\\ D_A/\sqrt{2} & J_A & 0 \\ -D_A/\sqrt{2} & 0 & J_A \end{bmatrix} . 
\end{split} \end{equation} \begin{table*} \begin{tabular}{ c c c } \hline \hline \multirow{2}{*}{} order & definition in terms & associated \\ parameter & of spin components & ordered phases \\ \hline \multirow{1}{*}{} $m_{\sf A_2}$ & $\frac{1}{2 \sqrt{3} } \left(S_0^x+S_0^y+S_0^z+S_1^x-S_1^y-S_1^z-S_2^x+S_2^y-S_2^z-S_3^x-S_3^y+S_3^z \right)$ & ``all in-all out'' \\ \multirow{1}{*}{} ${\bf m}_{\sf E}$ & $\begin{pmatrix} \frac{1}{2 \sqrt{6} } \left( -2 S_0^x + S_0^y + S_0^z - 2 S_1^x - S_1^y-S_1^z+2 S_2^x + S_2^y- S_2^z +2 S_3^x-S_3^y +S_3^z \right) \\ \frac{1}{2 \sqrt{2}} \left( -S_0^y+S_0^z+S_1^y-S_1^z-S_2^y-S_2^z+S_3^y+S_3^z \right) \end{pmatrix}$ & $\begin{matrix} \Gamma_{5}, \textrm{including}\\ \Psi_2 \textrm{ and } \Psi_3 \end{matrix}$ \\ \multirow{1}{*}{} ${\bf m}_{\sf T_{1+}}$ & $\begin{pmatrix} \frac{1}{2} (S_0^x+S_1^x+S_2^x+S_3^x) \\ \frac{1}{2} (S_0^y+S_1^y+S_2^y+S_3^y) \\ \frac{1}{2} (S_0^z+S_1^z+S_2^z+S_3^z) \end{pmatrix} $ & collinear FM \\ \multirow{1}{*}{} ${\bf m}_{\sf T_{1, -}}$ & $\begin{pmatrix} \frac{-1}{2\sqrt{2}} (S_0^y+S_0^z-S_1^y-S_1^z-S_2^y+S_2^z+S_3^y-S_3^z) \\ \frac{-1}{2\sqrt{2}} (S_0^x+S_0^z-S_1^x+S_1^z-S_2^x-S_2^z+S_3^x-S_3^z) \\ \frac{-1}{2\sqrt{2}} ( S_0^x+S_0^y-S_1^x+S_1^y+S_2^x-S_2^y-S_3^x-S_3^y) \end{pmatrix}$ & non-collinear FM \\ \multirow{1}{*}{} ${\bf m}_{\sf T_2} $ & $\begin{pmatrix} \frac{1}{2 \sqrt{2}} \left( -S_0^y+S_0^z+S_1^y-S_1^z+S_2^y+S_2^z-S_3^y-S_3^z \right) \\ \frac{1}{2 \sqrt{2}} \left( S_0^x-S_0^z-S_1^x-S^z_1-S_2^x+S_2^z+S_3^x+S_3^z \right) \\ \frac{1}{2 \sqrt{2} } \left( -S_0^x+S_0^y+S_1^x+S_1^y-S_2^x-S_2^y+S_3^x-S_3^y \right) \end{pmatrix} $ & Palmer--Chalker ($\Psi_4$) \\ \hline \end{tabular} \caption{ Order parameters ${\bf m}_\mathsf{X}$, describing how the point-group symmetry of a single tetrahedron within the pyrochlore lattice is broken by magnetic order. 
% Order parameters transform according to irreducible representations of the point-group ${\sf T}_d$, and are expressed in terms of linear combinations of spin-components ${\bf S}_i = (S^x_i, S^y_i, S^z_i)$, in the global frame of the crystal axes --- cf. $\mathcal{H}$~[Eq.~\eqref{EQN_S_hamiltonian}]. % Labelling of spins within the tetrahedron follows the convention of Ross~{\it et al.}~\cite{RossPRX11}. % The notation $\Psi_i$ for ordered phases is taken from~Ref.~\cite{PooleJPCM07}. } \label{table:m_lambda_global} \end{table*} The spin degrees of freedom on each tetrahedron can be rewritten in terms of fields forming the irreducible representations of the lattice symmetry, \begin{equation}\label{EQN_S_Irreps} {m}_{\mathsf{A_2}} ,\quad \mathbf{m}_{\mathsf{E}} ,\quad \mathbf{m}_{\mathsf{T_2}} ,\quad \mathbf{m}_{\mathsf{T_{1+}}} ,\quad \mathbf{m}_{\mathsf{T_{1-}}} , \end{equation} whose definitions can be found in Table \ref{table:m_lambda_global}. They are linear combinations of the spin degrees of freedom, allowing for a fully quadratic Hamiltonian: \begin{equation} \mathcal{H}=\frac{1}{2}\sum_{\mathsf{X}}a_{\mathsf{X},\text{A}} m^2_{\mathsf{X},\text{A}} + \frac{1}{2}\sum_{\mathsf{X}}a_{\mathsf{X},\text{B}} m^2_{\mathsf{X},\text{B}} , \end{equation} where $\mathsf{X}$ runs over irreps of the group $T_d$, i.e. $\{ \mathsf{A_2}, \mathsf{E}, \mathsf{T_2}, \mathsf{T_{1+}}, \mathsf{T_{1-}} \}$ as listed in Eq.~\eqref{EQN_S_Irreps}, and the subscript A,B denotes on which type of tetrahedra they are defined. 
For the couplings in Eq.~\eqref{EQN_S_hamiltonian}, we have on A-tetrahedra \begin{eqnarray} a_{\mathsf{A_2},\text{A}} & = & -J_A- 4D_A/\sqrt{2} \;,\\ a_{\mathsf{T_2},\text{A}} & = & -J_A -2D_A/\sqrt{2} \;,\\ a_{\mathsf{T_{1+}},\text{A}} & = & 3J_A \;,\\ a_{\mathsf{T_{1-}},\text{A}} = a_{\mathsf{E},\text{A}} & = & -J_A + 2 D_A/\sqrt{2} , \end{eqnarray} and on B-tetrahedra \begin{eqnarray} a_{\mathsf{A_2},\text{B} }= a_{\mathsf{E},\text{B}} = a_{\mathsf{T_2},\text{B}} = a_{\mathsf{T_{1-}},\text{B} } & = & -J_B ,\\ a_{\mathsf{T_{1+}},\text{B} } & = & 3J_B . \end{eqnarray} For $J_{A},J_{B}>0$ and $D_{A}<0$, these parameters are ordered as \begin{eqnarray} && \text{on A-tetrahedra:} \qquad a_{\mathsf{E},\text{A}} = a_{\mathsf{T_{1-}},\text{A}} < a_{\mathsf{A_2},\text{A}} ,\ a_{\mathsf{T_2},\text{A} } , \ a_{\mathsf{T_{1+}},\text{A}} ,\\ && \text{on B-tetrahedra:} \qquad a_{\mathsf{A_2},\text{B}} = a_{\mathsf{E},\text{B}} = a_{\mathsf{T_2},\text{B}} =a_{\mathsf{T_{1-}},\text{B}} < a_{\mathsf{T_{1+}},\text{B}} , \end{eqnarray} which dictates the low-energy physics. The irreducible representation fields are subject to constraints arising from the fixed spin length, \begin{equation} \sum_\mathsf{X} m^2_\mathsf{X} = 1 \end{equation} for both A- and B-tetrahedra. As a consequence, the low energy sector allows the $m^2_\mathsf{X}$ corresponding to the smallest $a_{\mathsf{X}}$ to fluctuate, while all other fields have to vanish. 
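These coefficients can be cross-checked numerically: the single-tetrahedron Hamiltonian is $\frac{1}{2}S^T M S$, where $M$ is the $12\times12$ matrix assembled from the bond matrices $\mathcal{J}_\text{A,ij}$, so the $a_{\mathsf{X},\text{A}}$ above should be the eigenvalues of $M$ with multiplicities $1,\,3,\,3,\,5$. A NumPy sketch (the numerical values $J_A = 1$, $D_A = -0.1$ are illustrative):

```python
import numpy as np

J, D = 1.0, -0.1                     # illustrative couplings: J_A > 0, D_A < 0
d = D / np.sqrt(2)

# The six bond matrices J_A,ij from the text
Jb = {
    (0, 1): [[J,  d,  d], [-d, J,  0], [-d,  0, J]],
    (0, 2): [[J, -d,  0], [ d, J,  d], [ 0, -d, J]],
    (0, 3): [[J,  0, -d], [ 0, J, -d], [ d,  d, J]],
    (1, 2): [[J,  0,  d], [ 0, J, -d], [-d,  d, J]],
    (1, 3): [[J,  d,  0], [-d, J,  d], [ 0, -d, J]],
    (2, 3): [[J, -d,  d], [ d, J,  0], [-d,  0, J]],
}

# Quadratic form H = (1/2) S^T M S: block (i,j) = J_A,ij for i<j,
# its transpose for i>j, and zero on the diagonal.
M = np.zeros((12, 12))
for (i, j), Jij in Jb.items():
    M[3*i:3*i+3, 3*j:3*j+3] = Jij
    M[3*j:3*j+3, 3*i:3*i+3] = np.transpose(Jij)

eig = np.sort(np.linalg.eigvalsh(M))
# Expected spectrum: a_E = a_T1- (x5), a_T2 (x3), a_A2 (x1), a_T1+ (x3)
expected = np.sort([-J + 2*d]*5 + [-J - 2*d]*3 + [-J - 4*d] + [3*J]*3)
print(np.allclose(eig, expected))
```

With $D_A = 0$ the spectrum collapses to the familiar Heisenberg values $3J_A$ (three-fold) and $-J_A$ (nine-fold); the DM term lifts this degeneracy exactly as listed above.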
This principle applied to our model leads to \begin{itemize} \item On A-tetrahedra, the fields $ \mathbf{m}_{\mathsf{E}} $ and $\mathbf{m}_{\mathsf{T_{1-}}}$ can fluctuate; \item On A-tetrahedra, the fields ${\bf m}_{\mathsf{T_{1+}}} = {\bf m}_{\mathsf{T_2}} ={m}_{\mathsf{A_2}}=0$; \item On B-tetrahedra, the fields ${m}_{\mathsf{A_2}} ,\; \mathbf{m}_{\mathsf{E}} ,\; \mathbf{m}_{\mathsf{T_2}} ,\; \mathbf{m}_{\mathsf{T_{1-}}}$ can fluctuate; \item On B-tetrahedra, \begin{equation}\label{EQN_S_B_condition} {\bf m}_{\mathsf{T_{1+}}} = 0 . \end{equation} \end{itemize} Since every spin is shared by an A- and a B-tetrahedron, the fluctuating fields $ \mathbf{m}_{\mathsf{E}} $ and $\mathbf{m}_{\mathsf{T_{1-}}}$ on A-tetrahedra must obey additional constraints to respect the low-energy sector condition on B-tetrahedra imposed by Eq.~\eqref{EQN_S_B_condition}. Assuming that the fields vary slowly in space, such that the continuum limit can be taken, the constraint Eq.~\eqref{EQN_S_B_condition} can be expressed in terms of fields living on A-tetrahedra as \begin{equation}\label{EQN_S_E_constraint_1} \frac{2}{\sqrt{3}} \begin{bmatrix} \partial_x m_\mathsf{E}^1 \\ -\frac{1}{2} \partial_y m_\mathsf{E}^1 - \frac{\sqrt{3}}{2} \partial_y m_\mathsf{E}^2 \\ -\frac{1}{2} \partial_z m_\mathsf{E}^1 + \frac{\sqrt{3}}{2} \partial_z m_\mathsf{E}^2 \end{bmatrix} - \begin{bmatrix} \partial_y m_\mathsf{T_{1-}}^z + \partial_z m_\mathsf{T_{1-}}^y \\ \partial_z m_\mathsf{T_{1-}}^x + \partial_x m_\mathsf{T_{1-}}^z \\ \partial_x m_\mathsf{T_{1-}}^y + \partial_y m_\mathsf{T_{1-}}^x \end{bmatrix} = 0 . 
\end{equation} From this constraint we can build the symmetric, traceless, rank-two electric field $E_{ij}$ as \begin{equation} E_{ij} = \begin{bmatrix} \frac{2}{\sqrt{3}}m_\mathsf{E}^1 & m_\mathsf{T_{1-}}^z & m_\mathsf{T_{1-}}^y \\ m_\mathsf{T_{1-}}^z & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 - m_\mathsf{E}^2 & m_\mathsf{T_{1-}}^x \\ m_\mathsf{T_{1-}}^y & m_\mathsf{T_{1-}}^x & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 + m_\mathsf{E}^2 \end{bmatrix} \; , \label{eq:S.31} \end{equation} such that Eq.~\eqref{EQN_S_E_constraint_1} becomes \begin{equation} \partial_i E_{ij}= 0 \; , \label{EQN_S_R2_Constraint} \end{equation} with the symmetric and traceless conditions \begin{equation} \label{EQN_S_S_E_constaint} E_{ij} = E_{ji} ,\qquad \Tr {\bf E} = 0 \end{equation} holding by the definition of $E_{ij}$. Hence a rank-2, traceless, vector-charged electric field emerges in the low-energy sector of the microscopic model of Eq.~\eqref{EQN_S_hamiltonian}. Equation~\eqref{EQN_S_S_E_constaint} constrains the form of the correlation functions $\langle E_{ij}({\bf q}) E_{kl}(-{\bf q}) \rangle$, in the same spirit as the two-in-two-out condition constrains the spin-spin correlation of spin ice. It is, however, of a more complicated form. The explicit results for the \textit{traceful} scalar-charged and vector-charged versions of \mbox{R2--U1} are discussed in detail in Ref.~\cite{PremPRB18}. 
The vector-charge field correlation is \begin{equation}\label{EQN_S__Vc_Corr} \begin{split} \langle E_{ij}({\bf q}) E_{kl}(-{\bf q}) \rangle \propto & \frac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + \frac{q_i q_j q_k q_l}{q^4} \\ & - \frac{1}{2} \left(\delta_{ik}\frac{q_j q_l}{q^2} +\delta_{jk}\frac{q_i q_l}{q^2} +\delta_{il}\frac{q_j q_k}{q^2} +\delta_{jl}\frac{q_i q_k}{q^2} \right) . \end{split} \end{equation} In close analogy, we derive the correlation function of our \textit{traceless} vector-charged model by deducting the trace, \begin{equation}\label{EQN_S_Corr} \begin{split} \langle E_{ij}({\bf q}) E_{kl}(-{\bf q}) \rangle \propto & \frac{1}{2}(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}) + \frac{q_i q_j q_k q_l}{q^4} \\ & - \frac{1}{2} \left(\delta_{ik}\frac{q_j q_l}{q^2} +\delta_{jk}\frac{q_i q_l}{q^2} +\delta_{il}\frac{q_j q_k}{q^2} +\delta_{jl}\frac{q_i q_k}{q^2} \right) \\ & -\frac{1}{2}\left( \delta_{ij} -\frac{q_iq_j}{q^2} \right)\left( \delta_{kl} -\frac{q_kq_l}{q^2} \right) , \end{split} \end{equation} which encodes a singularity at ${\bf q} \rightarrow 0$. Different choices of the components $E_{ij}$ and $E_{kl}$ show different patterns. A few representative ones can be found in Fig.~\ref{Fig_S_diff_corr}. 
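The properties that the derivation requires of the traceless vector-charged correlator, transversality $q_i \langle E_{ij}({\bf q}) E_{kl}(-{\bf q}) \rangle = 0$ (zero vector charge), vanishing trace over the first index pair, and the pair-exchange symmetries, can be checked numerically. A NumPy sketch (the sample wavevector is arbitrary):

```python
import numpy as np

def corr(q):
    """<E_ij(q) E_kl(-q)> for the traceless vector-charged theory (up to a constant)."""
    q = np.asarray(q, dtype=float)
    q2 = q @ q
    d = np.eye(3)
    qq = np.outer(q, q) / q2
    C = np.empty((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    C[i, j, k, l] = (
                        0.5 * (d[i, k] * d[j, l] + d[i, l] * d[j, k])
                        + q[i] * q[j] * q[k] * q[l] / q2**2
                        - 0.5 * (d[i, k] * qq[j, l] + d[j, k] * qq[i, l]
                                 + d[i, l] * qq[j, k] + d[j, l] * qq[i, k])
                        - 0.5 * (d[i, j] - qq[i, j]) * (d[k, l] - qq[k, l])
                    )
    return C

q = np.array([0.3, -1.2, 0.7])       # arbitrary test wavevector
C = corr(q)
print(np.allclose(np.einsum('i,ijkl->jkl', q, C), 0.0))  # zero vector charge
print(np.allclose(np.einsum('iikl->kl', C), 0.0))        # traceless in (i, j)
```

The correlator is also symmetric under $i \leftrightarrow j$, $k \leftrightarrow l$, and $(ij) \leftrightarrow (kl)$, as required for $\langle E_{ij} E_{kl} \rangle$ of a symmetric field.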
\begin{figure}[H] \centering \subfloat[\label{EQN_S_Corr_1}]{\includegraphics[width=0.18\textwidth]{FigS-1-1.png}} \; {\raisebox{3ex}{\includegraphics[height=0.15\textwidth]{FigS-1-legend1.png}}}\; % \subfloat[\label{EQN_S_Corr_2}]{\includegraphics[width=0.18\textwidth]{FigS-1-2.png}}\; \subfloat[\label{EQN_S_Corr_3}]{\includegraphics[width=0.18\textwidth]{FigS-1-3.png}} \; {\raisebox{3ex}{\includegraphics[height=0.15\textwidth]{FigS-1-legend2.png}}}\; % \subfloat[\label{EQN_S_Corr_4}]{\includegraphics[width=0.18\textwidth]{FigS-1-4.png}}\; \subfloat{\raisebox{3ex}{\includegraphics[height=0.15\textwidth]{FigS-1-legend4.png}}} % % \caption{Different components of the correlation function $\langle E_{ij}({\bf q})E_{kl}(-{\bf q})\rangle$ in the $q_x$--$q_y$ plane, calculated from Eq.~\eqref{EQN_S_Corr}. } \label{Fig_S_diff_corr} \end{figure} Figs.~\ref{EQN_S_Corr_2} and \ref{EQN_S_Corr_3} show the four-fold pinch-point (4FPP) singularity, which uniquely differentiates the rank-2 gauge theories from the conventional $U(1)$ gauge theory. It is the key signature to look for in experiments. \section{Connection between Heisenberg Antiferromagnet and Rank--2 $U(1)$ spin liquid} The Heisenberg antiferromagnet (HAF) on a pyrochlore lattice also hosts a spin liquid, but one described by a $U(1)\times U(1)\times U(1)$ gauge theory \cite{isakov05}. These three copies of $U(1)$ originate in the separate flux--conservation laws for the three components of an O(3) spin and, as noted by Henley, can be collected into a single rank--2 tensor field \cite{henley10}. The three independent flux--conservation laws impose a condition of zero vector charge, Eq.~(\ref{EQN_S_R2_Constraint}), one of the requirements for a rank--2 $U(1)$ theory. However they do not enforce the other requirement, namely that the tensor field be symmetric and traceless, Eq.~(\ref{EQN_S_S_E_constaint}). 
We can gain more insight into the connection between these two spin liquids by generalising the analysis in terms of irrep fields given in the Section above. We find that, in the case of the HAF, low--energy fluctuations can be described by a rank--2 tensor field with the form \begin{equation} \label{EQN_SM_HAF_Electric} {\bf E}^\text{\sf HAF} = \begin{bmatrix} \frac{2}{\sqrt{3}}m_\mathsf{E}^1 -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & m_\mathsf{T_{1,-}}^z + m_\mathsf{T_2}^z& m_\mathsf{T_{1,-}}^y - m_\mathsf{T_2}^y\\ m_\mathsf{T_{1,-}}^z - m_\mathsf{T_2}^z & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 - m_\mathsf{E}^2 -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & m_\mathsf{T_{1,-}}^x + m_\mathsf{T_2}^x\\ m_\mathsf{T_{1,-}}^y + m_\mathsf{T_2}^y& m_\mathsf{T_{1,-}}^x - m_\mathsf{T_2}^x & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 + m_\mathsf{E}^2 -\sqrt{\frac{2}{3}}m_\mathsf{A_2} \end{bmatrix} . \end{equation} This satisfies the condition of zero vector charge [Eq.~(\ref{EQN_S_R2_Constraint})], \begin{equation} \partial_i E_{ij}^{\sf HAF} = \rho_j = 0 \; . \label{EQN_SM_HAF_Electric_Constraint} \end{equation} However, as anticipated by the arguments of Henley, ${\bf E}^{\sf HAF}$ also has a finite trace, and a finite anti--symmetric part, and so does not satisfy Eq.~(\ref{EQN_S_S_E_constaint}). 
We can separate the different contributions to ${\bf E}^{\sf HAF}$ as \begin{equation} {\bf E}^{\sf HAF} = {\bf E}^{\sf HAF}_{\sf sym.} + {\bf E}^{\sf HAF}_ {\sf antisym.} + {\bf E}^{\sf HAF}_{\sf trace} \end{equation} where the antisymmetric part is given by \begin{equation} {\bf E}^{\sf HAF}_ {\sf antisym.} = \begin{bmatrix} 0 & m_\mathsf{T_2}^z& - m_\mathsf{T_2}^y\\ - m_\mathsf{T_2}^z & 0 & m_\mathsf{T_2}^x\\ m_\mathsf{T_2}^y& - m_\mathsf{T_2}^x & 0 \end{bmatrix} , \label{eq:S.origin.antisymetric} \end{equation} and the finite trace comes from \begin{equation} {\bf E}^{\sf HAF}_{\sf trace} = \begin{bmatrix} -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & 0& 0\\ 0 & -\sqrt{\frac{2}{3}}m_\mathsf{A_2} & 0\\ 0& 0& -\sqrt{\frac{2}{3}}m_\mathsf{A_2} \end{bmatrix} . \label{eq:S.origin.trace} \end{equation} The remaining components of ${\bf E}^{\sf HAF}$ are identical to the symmetric, traceless rank--2 electric field ${\bf E}$ found in the R2--U1 theory [Eq.~(\ref{eq:S.31})], i.e. \begin{equation} {\bf E}^{\sf HAF}_{\sf sym.} = \begin{bmatrix} \frac{2}{\sqrt{3}}m_\mathsf{E}^1 & m_\mathsf{T_{1-}}^z & m_\mathsf{T_{1-}}^y \\ m_\mathsf{T_{1-}}^z & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 - m_\mathsf{E}^2 & m_\mathsf{T_{1-}}^x \\ m_\mathsf{T_{1-}}^y & m_\mathsf{T_{1-}}^x & -\frac{1}{\sqrt{3}}m_\mathsf{E}^1 + m_\mathsf{E}^2 \end{bmatrix} \; . \end{equation} Written in this way, it is clear that the finite antisymmetric part [Eq.~(\ref{eq:S.origin.antisymetric})] and the finite trace [Eq.~(\ref{eq:S.origin.trace})] originate from fluctuations of the irreps ${\bf m}_{\mathsf{T_2}}$ and $m_\mathsf{A_2}$, respectively. Introducing DM interactions of the appropriate form $D_A < D_B < 0$ [Eq.~(\ref{EQN_S_hamiltonian})] imposes an energy cost on fluctuations of ${\bf m}_{\mathsf{T_2}}$ and $m_\mathsf{A_2}$, and so enforces the ``missing'' constraint, Eq.~(\ref{EQN_S_S_E_constaint}). 
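This decomposition can be verified componentwise: building ${\bf E}^{\sf HAF}$ from arbitrary values of the irrep fields, the generic antisymmetric/trace/symmetric-traceless split reproduces Eqs.~(\ref{eq:S.origin.antisymetric}), (\ref{eq:S.origin.trace}) and the R2--U1 field of Eq.~(\ref{eq:S.31}). A NumPy sketch (the field values are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(7)
m1, m2, ma2 = rng.normal(size=3)        # m_E^1, m_E^2, m_A2
t1 = rng.normal(size=3)                 # m_T1- components (x, y, z)
t2 = rng.normal(size=3)                 # m_T2  components (x, y, z)
s3, s23 = np.sqrt(3.0), np.sqrt(2.0 / 3.0)

# E^HAF as given in the text
E_haf = np.array([
    [2/s3*m1 - s23*ma2,  t1[2] + t2[2],          t1[1] - t2[1]],
    [t1[2] - t2[2],      -m1/s3 - m2 - s23*ma2,  t1[0] + t2[0]],
    [t1[1] + t2[1],      t1[0] - t2[0],          -m1/s3 + m2 - s23*ma2],
])

# Generic split into antisymmetric, trace, and symmetric-traceless parts
E_anti = (E_haf - E_haf.T) / 2
E_trace = np.trace(E_haf) / 3 * np.eye(3)
E_sym = E_haf - E_anti - E_trace

# The antisymmetric part is carried by m_T2 alone ...
anti_expected = np.array([[0, t2[2], -t2[1]], [-t2[2], 0, t2[0]], [t2[1], -t2[0], 0]])
# ... the trace by m_A2 alone ...
trace_expected = -s23 * ma2 * np.eye(3)
# ... and the remainder is the R2-U1 electric field
sym_expected = np.array([
    [2/s3*m1, t1[2],        t1[1]],
    [t1[2],   -m1/s3 - m2,  t1[0]],
    [t1[1],   t1[0],        -m1/s3 + m2],
])
print(np.allclose(E_anti, anti_expected),
      np.allclose(E_trace, trace_expected),
      np.allclose(E_sym, sym_expected))
```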
This converts the $U(1) \times U(1) \times U(1)$ spin liquid of the HAF into the R2--U1 spin liquid of the breathing--pyrochlore model studied in this article. \section{Predictions for Neutron Scattering} The 4FPP is a unique pattern that differentiates the \mbox{R2--U1} from vector $U(1)$ gauge theory, which only has the conventional two-fold pinch points. The 4FPPs are most unambiguously presented in the correlation function of the irreducible-representation fields as discussed in the main text. These correlation functions are, however, not directly accessible in experiments. In magnetism, neutron scattering is widely used to measure spin-spin correlations of the form \begin{equation} S({\bf q}) = \sum_{\alpha, \beta, i, j} \left( \delta_{\alpha\beta}-\frac{q^\alpha q^\beta}{q^2} \right) \langle S_i^\alpha({\bf q}) S_j^\beta(-{\bf q}) \rangle , \end{equation} where $\alpha, \beta = x, y, z$ are spin-component indices and $i, j = 0, 1, 2, 3$ are sub-lattice site indices. Furthermore, with neutrons polarized in the direction of a unit vector $\hat{\bf v}$ perpendicular to the scattering plane, one can measure the spin-flip (SF) channel neutron scattering defined by \begin{equation} S({\bf q})_\text{SF} = \sum_{\alpha, \beta, i, j} (v_\perp^\alpha v_\perp^\beta) \langle S_i^\alpha({\bf q}) S_j^\beta(-{\bf q}) \rangle , \end{equation} where \begin{equation} \hat{\bf v}_\perp = \frac{\hat{\bf v}\times {\bf q}}{|\hat{\bf v}\times {\bf q}|} . \end{equation} One can also measure the non-spin-flip (NSF) channel defined by \begin{equation} S({\bf q})_\text{NSF} = \sum_{\alpha, \beta, i, j} (v^\alpha v^\beta) \langle S_i^\alpha({\bf q}) S_j^\beta(-{\bf q}) \rangle . \end{equation} Finally we show the spin structure factor of the $[hhk]$ plane as a complement to the $[h0k]$ plane neutron scattering result shown in the main text. 
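Since $\hat{\bf v}$ and $\hat{\bf v}_\perp$ span the plane transverse to ${\bf q}$ whenever ${\bf q}$ lies in the scattering plane, the SF and NSF channels together recover the full structure factor: $\delta_{\alpha\beta} - q^\alpha q^\beta / q^2 = v^\alpha v^\beta + v_\perp^\alpha v_\perp^\beta$. A quick numerical check (the vectors are illustrative):

```python
import numpy as np

v = np.array([0.0, 0.0, 1.0])        # neutron polarization, normal to the scattering plane
q = np.array([1.3, -0.4, 0.0])       # scattering vector in the plane, so q . v = 0

v_perp = np.cross(v, q)
v_perp /= np.linalg.norm(v_perp)     # v_perp = (v x q) / |v x q|

P_total = np.eye(3) - np.outer(q, q) / (q @ q)   # projector entering S(q)
P_nsf = np.outer(v, v)                            # NSF channel weight
P_sf = np.outer(v_perp, v_perp)                   # SF channel weight
print(np.allclose(P_total, P_nsf + P_sf))
```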
\begin{figure}[h] \includegraphics[width=0.45\textwidth]{Fig_MC_Big_1_hhk.png} \caption{4-Fold Pinch Points (4FPPs) in the spin structure factor in the $[hhk]$ plane of momentum space of the model [Eq.~\eqref{eq:H}] from MC simulations. The exchange parameters are from the idealized theoretical case $J_A = J_B = 1.0,\ D_A = -0.01,\ D_B = 0.0$, at $T = 2.5 \times10^{-3}\ J_A$. (a) Total structure factor. (b) Non-spin-flip (NSF) channel. (c) Spin-flip (SF) channel. The 4FPPs can be clearly observed in the SF channel, centered on [0, 0, 2] (and points related by symmetry), but weaker than in the $[h0k]$ plane. } \label{Fig_thy_SF_hhk} \end{figure} \begin{figure}[h] \includegraphics[width=0.45\textwidth]{Fig_MC_Big_2_hhk.png} \caption{4-Fold Pinch Points (4FPPs) in the spin structure factor in the $[hhk]$ plane of momentum space of the model (cf. Eq.~\eqref{eq:H}) from MC simulations. The exchange parameters are from the experimental case (cf. Eq.~\eqref{eq:experimental.parameter.set}) at $T = 252\ \text{mK}$. (a) Total structure factor. (b) Non-spin-flip (NSF) channel. (c) Spin-flip (SF) channel. The 4FPPs can be observed in the SF channel, centered on [0, 0, 2] (and points related by symmetry), but weaker than in the $[h0k]$ plane. } \label{fig:Sq.experimental.parameters_hhk} \end{figure} \end{document}
\section{Introduction} \label{sec:probabilitymodels} We introduce a probability model of evolution that has three purposes: as an abstract model of biological evolution; as a class of efficiently implementable genetic algorithms that are easy to analyse theoretically; and as a link between genetic algorithms and Bayesian non-parametric MCMC methods. The stationary distribution of the model can be expressed in closed form for arbitrary fitness functions: this enables us to investigate the behaviour of the model for different population sizes, mutation rates, and fitness scalings. We find two phase transitions that occur for all fitness functions. Our approach is most applicable to evolution by sexual rather than asexual reproduction. In any model of evolution under constant conditions, there is a temporal sequence of possibly overlapping populations. In the transition from each population to the next, one or more new individuals are `born', and the same number of individuals are removed from the population, or `die'. Each new generation is a population that depends only upon the previous parent population, so that the sequence of populations is a Markov chain. In a model with mutation, the Markov chain is irreducible and has a unique stationary distribution that is also known as the mutation-selection equilibrium. Typically we wish to know what the mutation-selection equilibrium is. Unfortunately the mutation-selection equilibrium is notoriously hard to characterize, even for apparently simple and elegant models of breeding, mutation, and selection. The reason is that in previous models of evolution with sexual reproduction, the Markov chain of populations is irreversible, so that there is no obvious method of finding the stationary distribution other than attempting to compute eigenvectors of the transition matrix directly, as done in \cite{vose1999simple}; these calculations are neither easy nor revealing for arbitrary fitness functions. 
For a reversible Markov chain, the stationary distribution may be found by verifying detailed balance conditions. In our model, introduced in \cite{watkins2014sex}, we start by writing the mutation-selection equilibrium in a convenient closed form, and then exhibit MCMC kernels that implement reversible Markov chains with this stationary distribution, and for which the proposal and acceptance algorithms are plausible abstract models of breeding, mutation, and selection. Each generation starts with a population of $n$ individuals. One new individual is `bred' -- that is, sampled conditionally on the existing population -- to produce an expanded population of $n+1$ individuals; from this expanded population, one individual is selected to be discarded, leaving a new population of size $n$ to start the next generation. This might appear similar to the Moran Process \cite{moran1962statistical} but in fact it is very different, as explained in section~\ref{sec:morandifference}. In previous evolutionary models such as \cite{crow1970introduction}, and in genetic algorithms such as \cite{holland1975adaptation}, the reproductive fitness of a genome is modelled as the genome's rate of breeding, and not as its probability of death. In the breeding phase of each generation, fit genomes are chosen to breed more frequently than unfit genomes: in these models, discarding individuals -- or `death' -- is modelled as random deletion from the population. Here we model breeding as conditional sampling in which all existing members of the population are treated equally, regardless of their fitness. We discard individuals according to their fitness: less-fit individuals are more likely to be discarded, so that they remain in the population for a shorter time, and they are therefore sampled less as a result of their shorter lifetime, and so contribute less to the new individuals that are `bred'. Thus differences in reproductive fitness are modelled as differences in longevity. 
This is a significant design choice in our models because breeding and selection are modelled separately, and this turns out to greatly simplify the analysis. As explained in detail in sections \ref{sec:breeding} and \ref{sec:selection}, careful choices in modelling breeding as conditional sampling, and in modelling fitness by stochastic rejection of the less fit, enable us to construct a reversible Markov chain of populations that is a form of Metropolis-Hastings process. Throughout the paper, we suppose there is a finite set ${\cal X}$ of possible genomes, and $\xi_1,\xi_2,\ldots$ denotes an exchangeable ${\cal X}$-valued stochastic process. For any population of $n$ genomes $\bx = (x_1, \ldots, x_n) \in {\cal X}^n$, we define \begin{equation}\label{eq:subtledef} P(\xi_1=x_1,\ldots,\xi_n=x_n)=:P_{\xi}(x_1,\ldots,x_n) =: P_\xi(\bx) . \end{equation} By definition of exchangeability, we have that for any permutation $\sigma$, \begin{equation*} P_\xi(x_1, \ldots, x_n) = P_\xi( x_{\sigma(1)}, \ldots , x_{\sigma(n)} ). \end{equation*} Given a population of genomes $\bx = (x_1,\ldots,x_n)$, we breed an $(n+1)$-th genome by sampling $x_{n+1}$ conditionally: \begin{equation*} x_{n+1} \sim P_\xi(\cdot \mid x_1, \ldots, x_n). \end{equation*} The `fitness' of a genome $x$ is denoted by $w(x)>0$, where $w$ is an arbitrary strictly positive function over $\mathcal{X}$. In the evolutionary models we propose, in each generation one individual is `bred' by conditional sampling from the existing population, and then an individual is discarded in a fitness-biased way, so that less-fit individuals are more likely to be discarded. The stationary distribution of populations factorises into the form: \begin{equation}\label{eq:stationary1} \underbrace{P_n(x_1, \ldots , x_n )}_\text{stationary distribution} =\frac{1}{Z_n} \underbrace{P_{\xi}(x_1, \ldots, x_n)}_\text{breeding term} ~ \underbrace{w(x_1) \cdots w(x_n)}_\text{fitness term}. 
\end{equation} The process is similar to, but not the same as, non-parametric Bayesian MCMC, and we make this connection explicit in section \ref{sec:BayesianMCMC}. A genetic algorithm has three basic parameters: population size, mutation rate, and fitness scaling. The most important results of this paper characterise the interacting effect of these parameters as population size $n\rightarrow \infty$. We particularly consider exchangeable processes $\xi$ that are (collections of) Polya urn processes, also known as Dirichlet-categorical processes, because these admit a notion of `mutation'. Conditional sampling from these processes is directly interpretable as a simplified model of sexual breeding with mutation, as explained in section \ref{sec:breeding}. We denote the parameters defining the Dirichlet prior(s) as $\alpha$. Since fitness $w>0$, we often write $w$ in terms of a function $\phi:\mathcal{X}\rightarrow\mathbb{R}$ such that $w(x) = \exp(-\phi(x) )$. We study the interaction of $n$, $\alpha$, and $\phi$ on the stationary distribution by letting population size $n\rightarrow\infty$ while the fitness of each genome $x$ scales with $n$ as $\exp(-n^{-\lambda}\phi(x))$, and $\alpha$ scales with $n$ as $n^{1-\lambda}\alpha$; in sections \ref{sec:largepopulationlimit} and \ref{sec:dirichletprior} we use the de Finetti representation of $\xi$ to derive limiting forms of the marginal distributions over $\mathcal{X}$ of \begin{equation*} P_n( x_1, \ldots, x_n \mid n^{1-\lambda} \alpha ; n^{-\lambda}\phi) ~~ \text{ as $n\rightarrow \infty$}. 
\end{equation*} We establish that with this rescaling, for any fitness function $\phi$ over $\mathcal{X}$, and for any Dirichlet-categorical process $\xi$, there are three different non-trivial population distributions in the limit as $n\rightarrow\infty$: \begin{description} \item[Constant mutation-rate limit]: $\lambda =0$, so that $\alpha$ scales as $n \alpha$ and $\phi$ is unscaled; \item[Low mutation, weak selection limit]: $\lambda \in (0,1)$, so that $\alpha$ scales as $n^{1-\lambda} \alpha$ and $\phi$ as $n^{-\lambda}\phi$; \item[Critical low mutation limit]: $\lambda=1$, so that $\alpha$ is unscaled, and $\phi$ scales as $\frac{\phi}{n}$. \end{description} The phase transitions at $\lambda=0$ and $\lambda=1$ are sharp. We characterise the limiting distributions in the case that there is a unique maximal element of the posterior distribution. As far as we are aware, these are the first results that give exact explicit expressions for stationary distributions of genetic algorithms with arbitrary fitness functions. The paper is organised as follows. In section \ref{sec:breeding} we give examples of exchangeable conditional sampling procedures that can be regarded as abstract models of breeding, and section \ref{sec:selection} gives examples of selection procedures that, together with any of the breeding procedures, will produce a reversible Markov chain of populations with the stationary distribution of equation \ref{eq:stationary1}. Section \ref{sec:noiseinvariance} establishes that the stationary distributions are invariant to multiplicative fitness noise; section \ref{sec:BayesianMCMC} shows that special cases of this process are forms of Bayesian inference by MCMC. This family of evolutionary algorithms might appear similar to the Moran Process -- but section~\ref{sec:morandifference} explains that the Moran process is quite different. 
Next, section \ref{sec:measurepn} establishes basic conditions on the convergence of the stationary distribution to a limit distribution as population size tends to infinity. Sections \ref{sec:largepopulationlimit} and \ref{sec:dirichletprior} characterise the limiting forms of the stationary distribution for large populations. These are the main results of this paper. We present computational experiments demonstrating our results in section \ref{sec:experiments}. Finally we discuss implications of our results for genetic algorithms and evolutionary modelling in section~\ref{sec:conclusions}. \section{Breeding and mutation} \label{sec:breeding} We model breeding as Gibbs sampling \cite{geman1984stochastic} from an exchangeable distribution; exchangeable Gibbs sampling is a standard technique in statistical nonparametric MCMC methods, for example \cite{neal2000markov,hjort2010bayesian}, but in that context it is of course not regarded as a model of breeding. The property of exchangeability of $P_\xi$ will be used in two ways: first, in section \ref{sec:selection} we will use it to establish detailed balance for several selection procedures, which establishes that the stationary distribution of populations is indeed as given in equation \ref{eq:stationary1}. Second, in section \ref{sec:measurepn} we use the de Finetti integral representation of $\xi$ to establish limit properties of the stationary distribution as $n\rightarrow\infty$. We now give examples of conditional sampling that can be regarded as plausible models of breeding. \paragraph{Dirichlet-Categorical Process.} In the simplest case, each `genome' consists of only one `gene' which can be one of $K$ possible alleles, that we denote by $\{1,\ldots,K\}$. The exchangeable process $\xi$ is the well known Polya urn model for a Dirichlet process with discrete base distribution. We recall the definition of this process. 
Let $\alpha = (\alpha_1, \ldots, \alpha_K)$, $\alpha_i>0$ be the prior parameters of the base distribution; we write $|\alpha| := \alpha_1 + \cdots + \alpha_K$. Let $\xi_1, \xi_2, \ldots$ be a random process over $\{ 1,\ldots,K\}$. Let $n\ge 0$. Given $\bx=x_1, \ldots, x_n$, we denote the number of $k$-s in the sequence by $n_k(\bx)$: \begin{equation}\label{fr} n_k=\sum_{i=1}^n I_{\{k\}}(x_i). \end{equation} The process is defined by the predictive rule \begin{equation*} P_\xi( \xi_{n+1} = k \mid x_1, \ldots, x_n, \alpha ) := \frac{n_k + \alpha_k}{n + |\alpha | }. \end{equation*} It follows that \begin{equation*} P_\xi( x_1, \ldots, x_n \mid \alpha ) = \frac{ \alpha_1 (\alpha_1 +1) \cdots (\alpha_1 + n_1 - 1) \cdots \alpha_K (\alpha_K +1) \cdots (\alpha_K + n_K - 1)}{|\alpha| ( |\alpha| + 1) \cdots (|\alpha| + n - 1) } \end{equation*} and $\xi_1, \xi_2, \ldots$ is infinitely exchangeable by inspection. By de Finetti's theorem $$P_\xi( x_1, \ldots, x_n \mid \alpha )=\int_{{\cal P}} q(x_1)\cdots q(x_n)\pi (dq),$$ where ${\cal P}=\{(q_1,\ldots,q_K): q_i\geq 0,\sum q_i=1\} $ is the set of all probability vectors (the simplex) and the prior measure $\pi$ is the Dirichlet distribution, i.e. $\pi={\rm Dir}(\alpha_1,\ldots,\alpha_K)$. This process is in a sense the central object of our study. \paragraph{Concentration parameter and mutation rate.} The concentration parameter ${|\alpha|} = \alpha_1 + \cdots + \alpha_K$ may be viewed as determining a mutation rate that depends on $n$. With $n$ balls in the urn, when a new ball is sampled, there is a probability $\frac{|\alpha|}{n+|\alpha|}$ that the ball will be sampled from the collection of `prior' balls, rather than from the actual balls in the urn. This probability is independent of the colours of the $n$ actual balls present: since a new colour may be introduced in this way, we regard this as analogous to a mutation.
The mutation rate $u$ is \begin{equation*} u = \frac{|\alpha|}{|\alpha| + n} ~~~ \text{or equivalently} ~~~ |\alpha| = \frac{nu}{1-u} \end{equation*} In our evolutionary processes, one new genome is sampled, and one genome is then discarded at each generation, so that $n$ remains constant. To define processes with the same mutation rate and different values of $n$, $\alpha$ must be adjusted to depend on $n$. Note that in this model, mutations only occur at birth, and -- importantly -- the distribution of mutations does not depend on the frequencies of different colours that are currently in the population; mutations are distributed according to a fixed prior distribution determined by the frequencies of `magic' balls in the urn. This is strictly less general than an alternative model in which breeding occurs as follows: a ball $x$ is sampled from the urn, and then a mutated ball $x^\dagger \sim P_M(\cdot \mid x )$ is conditionally sampled according to some mutation distribution $P_M$, and then $x$ and the new mutated ball $x^\dagger$ are both returned to the urn. But this alternative model would not necessarily be reversible, and we do not consider it further. In our model, we count as a mutation any ball which results from a draw of a `magic' ball: this definition is consistent with an extension of our model to Dirichlet processes with continuous base distributions, which we intend to consider in future work. \subsection{Complex genomes: direct product of Dirichlet processes} \label{sec:product} More complex evolutionary models such as genetic algorithms require more complex genomes. Suppose that each genome is a vector of $L$ genes, $x_i = (y^1_i , \ldots , y^L_i)$. Let $\xi$ be a direct product of $L$ independent exchangeable processes $\xi = ( \xi^1, \ldots, \xi^L )$. Then \begin{equation} P_\xi( x_1 , \ldots, x_n) := \prod_{j=1}^L P_{\xi^j}(y^j_1, \ldots, y^j_n) \end{equation} which clearly is exchangeable as well. 
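The product breeding step just defined can be sketched in a few lines of Python (the function names, and the encoding of genomes as tuples of integer alleles, are our own illustrative choices, not from the text): each locus is sampled by the Polya-urn predictive rule, and a draw from the prior (`magic') balls counts as a mutation.

```python
import random

def sample_allele(column, alpha):
    """Polya-urn predictive rule for one locus: sample
    y_{n+1} ~ P_xi(. | y_1, ..., y_n) with Dirichlet parameters alpha.

    column : current alleles at this locus (values in 0..K-1)
    alpha  : K positive pseudo-counts of the base distribution
    """
    n, a = len(column), sum(alpha)
    if random.random() < a / (n + a):
        # Draw from the prior ("magic") balls: this event is a mutation.
        return random.choices(range(len(alpha)), weights=alpha)[0]
    # Otherwise copy the allele of a uniformly chosen existing genome.
    return random.choice(column)

def breed(population, alphas):
    """n-way recombination: each locus of the child is sampled
    independently from the corresponding Dirichlet-categorical process."""
    return tuple(
        sample_allele([x[j] for x in population], alphas[j])
        for j in range(len(alphas))
    )
```

For example, with a population of three two-locus genomes and $\alpha_k=1$ at each locus, each component of the child is a fresh draw from the uniform base distribution with probability $|\alpha|/(n+|\alpha|)=2/5$, and a copy of a uniformly chosen parent's allele otherwise.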
Exchangeable sampling from $P_\xi(\cdot \mid x_1, \ldots, x_n)$, where each $x_i$ is a vector of discrete values, can be viewed as a model of sexual reproduction with the assumption of linkage equilibrium. A new vector $(y_{n+1}^ 1, \ldots, y_{n+1}^L)$ is sampled by, for $1\le j \le L$, sampling the $j$-th component $y_{n+1}^j \sim P_{\xi^{j}}(\cdot \mid y^j_1, \ldots, y^j_n)$ independently from the rest of the components. In words, each new element of the vector $x_{n+1}$ is either a copy of the corresponding element of a randomly chosen member of the existing population, or else a mutation. Instead of a new `child' genome being constructed by random recombination of two parent genomes, it is instead a random recombination of all $n$ existing genomes in the population, with mutations. This method of constructing new genomes by `$n$-way recombination' is a widely used approach in genetic algorithms, as used by \cite{baluja1995removing,baluja1997genetic,baum2001genetic} and others, and sexual reproduction with full linkage equilibrium is a standard simplified model of sexual reproduction in population genetics theory \cite{crow1970introduction,ewens2004mathematical}. The extension to Cartesian products of Dirichlet processes might appear rather simple because each component of a new genome is sampled independently of the others; however, this extension can lead to models of great complexity because the fitness function $w$, or equivalently $\phi$, can be an arbitrary function on $\mathcal{X}^L$, so that the stationary distribution need not be a product distribution. In genetic language, the fitness function can have arbitrary epistasis. Note that there are other discrete exchangeable distributions based on Dirichlet distributions that could also be used as $P_\xi$. 
A notable example is the discrete fragmentation-coagulation sequence process introduced in \cite{elliott2012scalable}; this was intended as a statistical model for imputing phasing in genetic analysis, but it could also be used as a breeding distribution for our purposes. \section{Selection} \label{sec:selection} Several MCMC sampling methods give the factorized stationary distribution of equation\ (\ref{eq:stationary1}) and at the same time are models of sexual reproduction that are as plausible as those used in evolutionary computation or in simplified models in population genetics. We suppose that each element $x \in {\cal X}$ has a strictly positive weight $w(x)$. In context, we will denote the weights $w(x_1), \ldots, w(x_n)$ as $w_1,\ldots,w_n$. \subsection{Single tournament selection} \label{sec:singletournamentselection} We suppose that when a new genome $x_{n+1}$ is `born' and added to the population, it competes to survive by having a tournament with another randomly selected member of the population, $x_i$ say. The probability that $x_{n+1}$ wins the tournament and ejects $x_i$ from the population is $\frac{w(x_{n+1})}{w(x_{n+1}) + w(x_i)}$. This probability is always well defined since $w$ is strictly positive. An equally valid tournament winning probability is that $x_{n+1}$ wins with probability $\min\{1, \frac{w(x_{n+1})}{w(x_i)}\}$. These two winning probabilities are the Barker and Metropolis variants, respectively, of the Metropolis-Hastings acceptance rule. The proof below establishes detailed balance for the first winning rule. The algorithm for performing one generation of breeding, mutation, and selection is: \begin{enumerate} \item Sample $ x_{n+1} \sim P_\xi( \cdot \mid x_1, \ldots, x_n ) $ \item Sample $i$ randomly from $\{ 1, \ldots, n\}$ \item With probability $\frac{w_{n+1}}{w_i + w_{n+1}}$ replace $x_i$ with $x_{n+1}$ and discard $x_i$, otherwise discard $x_{n+1}$.
\end{enumerate} Let $\bx$, $\bx^\prime$ be populations defined as \begin{align*} \bx &= x_1, \ldots , x_n\quad \bx^\prime = x_1, \ldots, x_{i-1}, x_{n+1}, x_{i+1}, \ldots, x_n \end{align*} \noindent Recall the measure $P_n(\bx)$ defined in (\ref{eq:stationary1}). We now show that this measure satisfies detailed balance. By exchangeability of $\xi$, we have: \begin{equation} P_\xi(\bx) P_\xi( x_{n+1} \mid \bx ) = P_\xi( x_1, \ldots , x_{n+1}) = P_\xi( \bx^\prime) P_\xi( x_i \mid \bx^\prime) \end{equation} \noindent Note that $x_{n+1}$ has a tournament against $x_i$ with probability $\frac{1}{n}$ and wins with probability $\frac{w_{n+1}}{w_i + w_{n+1}}$, so: \begin{align*} P_n( \bx ) P( \bx \to \bx^\prime ) &= P_n(\bx) \cdot P_\xi( x_{n+1} \mid \bx ) \cdot \frac{1}{n} \frac{w_{n+1}}{w_i + w_{n+1}} \\ &=\frac{1}{Z_n} P_\xi( \bx ) w_1 \cdots w_n \cdot P_\xi(x_{n+1} \mid\bx) \cdot \frac{1}{n} \frac{ w_{n+1}}{w_i + w_{n+1}}\\ &= \frac{1}{Z_n}P_\xi(x_1,\ldots, x_{n+1}) \cdot \frac{1}{n} \frac{w_1 \cdots w_{n+1}}{w_i + w_{n+1}} \\ &= \frac{1}{Z_n} P_\xi(\bx^\prime) P_\xi(x_i \mid \bx^\prime) \cdot \frac{1}{n} \frac{w_1 \cdots w_{n+1}}{w_i + w_{n+1}} \\ &= \frac{1}{Z_n} P_\xi(\bx^\prime) w_1 \cdots w_{i-1} w_{n+1} w_{i+1} \cdots w_n \cdot P_\xi( x_i \mid \bx^\prime ) \cdot \frac{1}{n} \frac{w_i}{w_i + w_{n+1}}\\ & = P_n(\bx^\prime) P( \bx^\prime \to \bx). \end{align*} \subsection{Inverse fitness selection: limit of many tournaments} \label{sec:inversefitnessselection} Suppose that many tournaments are fought, and each time the loser of the previous tournament fights another randomly chosen genome from the population. After many tournaments, and at a stopping time, the current loser is ejected. The current loser evolves according to an irreducible aperiodic Markov chain, and the limiting distribution of ejection is the stationary distribution of that chain: \begin{equation} P(\text{ eject $i$ }) = \frac{\frac{1}{w_i} }{\frac{1}{w_1} + \cdots + \frac{1}{w_{n+1}} }.
\end{equation} \noindent The algorithm for performing one generation of breeding and selection is then: \begin{enumerate} \item Sample $ x_{n+1} \sim P_\xi( \cdot \mid x_1, \ldots, x_n ) $ \item Sample $i$ from a discrete probability distribution over $\{1,\ldots,n+1\}$ with probabilities proportional to $\{ \frac{1}{w_1}, \ldots , \frac{1}{w_{n+1}}\}$ \item Discard $x_i$ \end{enumerate} This process too satisfies detailed balance. With $\bx$ and $\bx^\prime$ defined as above, note that: \begin{align*} P_n( \bx ) P( \bx \to \bx^\prime ) &= P_n(\bx) \cdot P_\xi( x_{n+1} \mid \bx ) \cdot \frac{ \frac{1}{w_i} }{ \frac{1}{w_1} + \cdots + \frac{1}{w_{n+1}} }\\ &= \frac{1}{Z_n} P_\xi( \bx ) w_1 \cdots w_n \cdot P_\xi(x_{n+1} \mid\bx) \cdot \frac{ \frac{1}{w_i} }{ \frac{1}{w_1} + \cdots + \frac{1}{w_{n+1}} }\\ &= \frac{1}{Z_n} P_\xi(x_1, \ldots, x_{n+1}) \frac{ w_1 \cdots w_{n+1}}{ w_{n+1} } \frac{ \frac{1}{w_i} }{ \frac{1}{w_1} + \cdots + \frac{1}{w_{n+1}} }\\ &= \frac{1}{Z_n} P_\xi(x_1, \ldots, x_{n+1}) \frac{ w_1 \cdots w_{n+1}}{ w_{i} } \frac{ \frac{1}{w_{n+1}} }{ \frac{1}{w_1} + \cdots + \frac{1}{w_{n+1}} }\\ &= P_n(\bx^\prime) \cdot P_\xi( x_i \mid \bx^\prime ) \cdot \frac{ \frac{1}{w_{n+1} }}{ \frac{1}{w_1} + \cdots + \frac{1}{w_{n+1}} }\\ & = P_n( \bx^\prime ) P( \bx ^\prime\to \bx ). \end{align*} Note in passing that \begin{equation} \mathbf{E}\{ \mbox{weight of ejected genome} \mid w_1, \ldots , w_{n+1} \} = \frac{ n+1}{ \frac{1}{w_1} + \cdots + \frac{1}{w_{n+1}} }. \end{equation} \noindent That is, the expected weight of the ejected genome is the harmonic mean of the weights of the genomes in the current population, including the newly added genome $x_{n+1}$.
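Both selection rules can be sketched concretely (a hypothetical `breed` callable, standing in for a sampler from $P_\xi(\cdot \mid x_1,\ldots,x_n)$, and the function names are our own, not from the text):

```python
import random

def step_single_tournament(population, w, breed):
    """One generation with single tournament selection: the newborn
    x_{n+1} fights a uniformly chosen x_i and wins with probability
    w(x_{n+1}) / (w(x_{n+1}) + w(x_i))."""
    child = breed(population)
    i = random.randrange(len(population))
    if random.random() < w(child) / (w(child) + w(population[i])):
        return population[:i] + [child] + population[i + 1:]
    return population

def step_inverse_fitness(population, w, breed):
    """One generation with inverse fitness selection: one of the n+1
    genomes (including the newborn) is discarded with probability
    proportional to 1/w."""
    pool = population + [breed(population)]
    inverse = [1.0 / w(x) for x in pool]
    i = random.choices(range(len(pool)), weights=inverse)[0]
    return pool[:i] + pool[i + 1:]
```

Iterating either step keeps the population size fixed at $n$; by the detailed-balance arguments of this section, both rules target the stationary distribution (\ref{eq:stationary1}).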
\paragraph{The case $n=1$ reduces to Metropolis-Hastings:} with a population of size 1, we have proposal distribution $P_\xi(x^\prime \mid x) P_\xi(x) = P_\xi(x,x^\prime) = P_\xi(x \mid x^\prime) P_\xi(x^\prime)$ and acceptance probability of $x^\prime$ given $x$ of $\frac{w(x^\prime)}{w(x) + w(x^\prime)}$, which gives a stationary distribution \begin{equation*} P_1(x) = \frac{P_\xi(x) w(x)}{\sum_x P_\xi(x) w(x) }. \end{equation*} \subsection{Exchangeable breeding of many offspring} \label{sec:breedmany} Another MCMC process which is also interpretable as an evolutionary algorithm breeds some arbitrary number $m$ of offspring to give a population of size $n+m$, and then from these selects $n$ genomes to form the next generation. The algorithm for a single generation is as follows: \begin{enumerate} \item Pick a random number $m$ of offspring to breed, and a random number $t$ of tournaments to conduct. Both $m$ and $t$ should be independent of the current population $x_1, \ldots, x_n$ \item Breed $x_{n+1}, \ldots , x_{n+m}$ by sequential exchangeable sampling; that is, let $x_{n+i} \sim P_\xi(\cdot \mid x_1, \ldots, x_{n+i-1})$ for $1\le i \le m$ \item Assign `survival tickets' to each of $x_1, \ldots, x_n$; the newly bred genomes $x_{n+1}, \ldots, x_{n+m}$ have as yet no survival tickets. \item Repeat $t$ times: \begin{enumerate} \item Uniformly sample from the population a genome $x_i$ which currently has a survival ticket, and a genome $x_j$ which currently does not have a survival ticket. \item Hold a tournament between $x_i$ and $x_j$; $x_i$ wins with probability $\frac{w(x_i)}{w(x_i) + w(x_j)}$. \item The winner of the tournament gets the survival ticket; after the tournament, the winner has the ticket and the loser does not. 
\end{enumerate} \item After $t$ tournaments have taken place, the $n$ genomes currently holding survival tickets are selected to be the new population $x^\prime_1, \ldots, x^\prime_n$; the genomes that are not holding tickets are discarded. \end{enumerate} \noindent The arguments of section \ref{sec:singletournamentselection} can readily be extended to show that this algorithm has the same stationary distribution (\ref{eq:stationary1}). \subsection{Invariance to fitness noise} \label{sec:noiseinvariance} In real life, survival depends not just on fitness but also on luck. Suppose that when each new `genome' is `born', it combines its own intrinsic fitness with an independent random amount of luck. It keeps this amount of luck, unchanged, throughout its life. Does individual luck at birth alter the stationary distribution? To formalise the question, let $\psi_1,\psi_2,\ldots,\psi_n$ be discrete random variables that represent `luck'. We consider the $\psi_i$ as discrete random variables because the formal derivations become far simpler. The random variable $\psi_i$ can be interpreted as the individual luck that multiplies the fitness of the $i$-th individual, so that the negative log fitness of the $i$-th individual is $\phi(x_i)+\psi_i$. The stationary distribution conditional on the luck variables is \begin{equation*} P_n(\bx|\psi_1,\ldots,\psi_n) = \frac{1}{Z'_n} P_\xi( \bx ) \exp \Big[ -\sum_{i=1}^n \big(\phi(x_i) + \psi_i\big) \Big] \end{equation*} The joint probability is \begin{align*} P_n(\bx;\psi_1,\ldots,\psi_n) &= \frac{1}{Z'_n} P_\xi(\bx) \exp \Big[ -\sum_{i=1}^n \big(\phi(x_i) + \psi_i\big) \Big] P(\psi_1,\ldots,\psi_n)\\ &=\frac{1}{Z'_n} P_\xi( \bx ) \exp \Big[ -\sum_{i=1}^n \phi(x_i)\Big]\exp\Big[-\sum_i \psi_i\Big] P(\psi_1,\ldots,\psi_n).
\end{align*} We see that the expression factorizes, and so $Z'_n=Z_n\cdot Z_n^{\psi}$, where $$Z_n=\sum_{\bx}P_\xi( \bx ) \exp \big[ -\sum_{i=1}^n \phi(x_i)\big],\quad Z^{\psi}_n=\sum_{\psi_1,\ldots,\psi_n}\exp\big[-\sum_i \psi_i\big] P(\psi_1,\ldots,\psi_n).$$ Thus the joint measure factorizes: $$P_n(\bx;\psi_1,\ldots,\psi_n)=P_n(\bx)P_n^{\psi}(\psi_1,\ldots,\psi_n),$$ where $$P_n^{\psi}(\psi_1,\ldots,\psi_n)={1\over Z_n^{\psi}}P(\psi_1,\ldots,\psi_n)\exp\big[-\sum_i \psi_i\big]$$ is a probability measure. Thus, after summing $\psi_1,\ldots,\psi_n$ out, we end up with $P_n(\bx)$: $$\sum_{\psi_1,\ldots,\psi_n}P_n(\bx;\psi_1,\ldots,\psi_n)=P_n(\bx).$$ It follows that the stationary distribution is unchanged by multiplicative noise that is independent of the genomes, for all population sizes. Also, considering $P(\psi_1,\ldots,\psi_n)$ as a prior, we see that the posterior measure is $P_n^{\psi}(\psi_1,\ldots,\psi_n)$, which is independent of $\bx$. \subsection{Fitness as likelihood: a connection with non-parametric Bayesian MCMC} \label{sec:BayesianMCMC} In Dirichlet Process mixture models, as described for example in \cite{neal2000markov,teh2010hierarchical}, items of data $d_1, \ldots, d_n$ are given, together with a likelihood function $l(x,d) = P(d\mid x)$, where $x \in \mathcal{X}$ is a discrete latent variable. The prior distribution over latent variables is given by the exchangeable process $\xi$, each item of data is associated with its corresponding latent variable, and the aim of MCMC fitting is to sample latent variables $x_1, \ldots, x_n$ from the distribution \begin{equation} \label{eq:likelihood} P_n(x_1, \ldots, x_n) = \frac{1}{Z_n} P_\xi(x_1, \ldots, x_n) P(d_1 \mid x_1) \cdots P(d_n \mid x_n) \end{equation} where $Z_n$ is an appropriate normalizing constant.
If we write \begin{equation*} w_j( x_i) := P(d_j \mid x_i) \end{equation*} then this distribution (\ref{eq:likelihood}) is produced by the MCMC algorithm of section \ref{sec:singletournamentselection} with the slight modification that the tournament between the new `genome' $x_{n+1}$ and the randomly selected $x_j$ is a victory for $x_{n+1}$ with probability $\frac{w_j(x_{n+1})}{w_j(x_j) + w_j(x_{n+1})}$. One might construct an evolutionary ``Just-So'' story as follows. On a rock in the ocean, there are $n$ niches, in each of which one member of the species $\xi$ can live; the fitness of $x$ living in niche $j$ is $w_j(x)$. Evolution occurs when a new individual, bred by `$n$-way recombination' of all $n$ parents, then challenges the occupant of a randomly chosen niche by fighting a tournament, after which the victor survives and takes over the niche. This gives a (fanciful) evolutionary interpretation to Bayesian latent variable models with exchangeable priors. \subsection{Differences from previous models} \label{sec:morandifference} The well-known Moran process was introduced by \cite{moran1962statistical}, but since then the term has broadened to include many other related genetic models. Our model has significant differences from the Moran process and its subsequent variants, which we now describe. First, our model implicitly includes mutation; the Moran model did not, and mutation had to be incorporated separately. To show the difference more clearly, consider the Polya urn scheme with $\alpha_i>0$ `prior' initial balls of colour $i$. Unlike in the Moran model, in our model the urn is not the same as the population. A ball is randomly chosen and returned to the urn with another ball of the same colour. This procedure is repeated $n$ times, and after that there are $n+|\alpha|$ balls in the urn, but $n$ elements in the population. Now, according to our breeding rule, a random ball is chosen again and returned with another ball of the same colour.
Observe that the prior balls are involved in the breeding process as much as the rest of the balls. {Whenever a prior ball is chosen during the breeding process, we call it a mutation. For example, if all $n$ population balls are currently white, and a black prior ball is chosen, then a black ball enters the population. In this way a colour (type) that has never been observed before among the $n$ population balls may suddenly appear; alternatively, a mutation may produce a colour that is already common in the population.} The main difference from the Moran model is that the prior balls do not take part in the selection step. Thus, before the tournament(s) start, the $|\alpha|$ prior balls ($\alpha_i$ balls of colour $i$) are removed from the urn, and so they will never be discarded. There are now $n+1$ balls in the urn and in the population, and one of them will be ejected according to our selection rule. Breeding then starts again, but first the previously removed $|\alpha|$ prior balls are put back into the urn. In this way, the Markov chain has no absorbing states, and no fixation occurs. Second, in the Moran model and in the variants we know of, differences in fitness are differences in the probability of being selected to breed, whereas in our model fitness is related to the probability of being selected for discard. This decision avoids complicating the model of breeding by including arbitrary fitness, which simplifies the analysis. Third, our model applies to any exchangeable distribution over finite sets of possible genomes. The most important instantiation we give here is the product of Dirichlet processes with finite support, but other complex finitely supported exchangeable distributions are possible. This means that our model can be applied to non-trivial genetic algorithms. The Moran model, in contrast, concerns the fixation probabilities of individual alleles.
Finally, the Moran model required one individual to be born and one to die in each generation, whereas our model also applies to exchangeable breeding of arbitrary numbers of offspring, as described in section \ref{sec:breedmany} above. Our model is a population generalisation of Metropolis-Hastings, and is more closely related to nonparametric Bayesian inference with distributions based on Dirichlet processes than it is to the Moran model. Forms of reversible genetic algorithm, or `evolutionary MCMC', that satisfy detailed balance were previously suggested by \cite{strens2003evolutionary}, \cite{ter2006markov}, and others, but in these models the genetic operators are not biologically relevant, and although they are described as `evolutionary', they have no relevance to modelling biological evolution; rather, they are proposal heuristics for Metropolis-Hastings. These methods are normally applied to continuous domains, but if they were applied to a discrete vector space, the stationary distribution in our notation would be: \begin{equation*} P_n(x_1, \ldots, x_n ) = \frac{1}{Z_n} w(x_1) \cdots w(x_n) \end{equation*} which differs from our stationary distribution in equation \ref{eq:stationary1} in that the breeding term is absent, or equivalently that $P_\xi$ is the uniform distribution over $\mathcal{X}$. In our examples, and in genetic models, $P_\xi$ is very far from the uniform distribution. \subsection{Designing a reversible evolutionary model} {Our larger aim is to develop a model of evolution that is sufficiently realistic to capture some of the computational power of natural evolution, but which is also simple and tractable for analysis.
To ensure that our model satisfies detailed balance and has a stationary distribution that factorises into a breeding and a selection term, we have made the following simplifying assumptions in addition to the usual simplifications of population genetics or genetic algorithms:} \begin{description} \item[Overlapping populations]: We believe that overlapping populations are necessary for reversibility. If full replacement of the population is enforced at each generation, there can be no guarantee that the population at time $t$ could be easily bred from the population at time $t+1$. In MCMC, state changes typically occur through proposing changes that may or may not be accepted; in an overlapping generations model, if a proposed change is not accepted, we continue with the same population as before. \item[$n$-way recombination]: in the product of Dirichlet processes breeding system, each new genome is bred from $n$ parents rather than from two parents selected from the population. We conjecture that this is necessary for exact reversibility because, with long genomes and many mutations, if a child is bred from two parents, then the child will be more similar to each of its parents than to other individuals in the population, so that `triples' of two parents and one child will be identifiable even in the stationary distribution. This breaks reversibility, since the direction of time can be determined by observing evolution in the stationary distribution. \item[Mutation as sampling]: We consider mutation as sampling from a base distribution of possible alleles. This model of mutation is not as general as those found in biology, where mutation probabilities need not be symmetric or reversible. \item[Fitness as lifetime]: All members of the population `breed' at the same rate, and differences in fitness affect only the expected lifetime of an individual.
This clearly differs from many types of natural selection, but it is also well known that many organisms continue to produce offspring throughout their lives, so that their total reproductive success depends on their lifetime, as well as on other factors. \end{description} It is beyond the scope of this article to argue further whether our model successfully abstracts some essential computational aspects of evolution with sexual reproduction: we present it merely as an abstraction of sexual evolution which is significantly more tractable to analyse than other apparently simple models. \section{The measure $P_n$} \label{sec:measurepn} The measure $P_n$ is our main object of interest. In this section we show that the marginal distributions of $P_n$ over the set of genotypes converge as the population size $n \rightarrow \infty$; in the next section we characterize these limits. Since $\xi$ is exchangeable, by de Finetti's theorem there exists a prior measure $\pi$ on the set ${\cal P}$ of all probability measures on ${\cal X}$ (the simplex) such that for every $\bx\in {\cal X}^n$ \begin{equation} P_{\xi}(\bx)=\int_{{\cal P}}\prod_{i=1}^n q(x_i)\pi(dq) \end{equation} so that we can write $P_n$ as follows: \begin{equation}\label{measure2} P_n(\bx)={1\over Z_n}\prod_{i=1}^n w(x_i)\int_{{\cal P}} \prod_{i=1}^n q(x_i)\pi(d q)={1\over Z_n}\int_{{\cal P}} \prod_{i=1}^n \big(q(x_i)w(x_i)\big)\pi(d q), \end{equation} where $Z_n$ is the normalizing constant. In order to analyze the measure, it is convenient to rewrite it as follows. First, let us introduce some notation: $$\langle q,w \rangle:=\sum_{k=1}^K q(k)w(k),\quad r_q(k):={w(k)q(k)\over \langle q,w \rangle},\quad k=1,\ldots,K.$$ Note that since $w(k)>0$ for all $k$, $r_q$ is correctly defined for every $q \in \mathcal{P}$. Thus $\langle q,w \rangle$ is the expected weight (under the measure $q$) and $r_q$ is a probability measure on ${\cal X}$.
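As a quick numerical sanity check of this notation (the vectors $q$ and $w$ below are arbitrary illustrative choices of ours):

```python
def reweight(q, w):
    """Compute <q, w> and the tilted measure r_q(k) = w(k) q(k) / <q, w>."""
    mean_w = sum(qk * wk for qk, wk in zip(q, w))
    return mean_w, [wk * qk / mean_w for qk, wk in zip(q, w)]

# With q = (0.5, 0.3, 0.2) and w = (1, 2, 4):
# <q, w> = 0.5 + 0.6 + 0.8 = 1.9, and r_q sums to 1, so r_q is again
# a probability vector on the K alleles, tilted towards high weight.
mean_w, r = reweight([0.5, 0.3, 0.2], [1.0, 2.0, 4.0])
```

The map $q \mapsto r_q$ simply tilts $q$ towards the alleles of higher weight; it plays a central role in the limit results below.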
Now \begin{equation}\label{eq:pn} P_n(\bx)={1\over Z_n} \int_{{\cal P}} \prod_{i=1}^n \big(q(x_i)w(x_i)\big)\pi(d q)={1\over Z_n}\int_{{\cal P}} \langle q,w \rangle^n \prod_{i=1}^n r_q(x_i) \pi (dq).\end{equation} Since $r_q$ is a probability measure, it holds that $\sum_{\bx} \prod_{i=1}^n r_q(x_i)=1$ and so the normalization constant for $P_n$ is $$Z_n=\sum_{\bx\in {\cal X}^n}\int_{{\cal P}} \langle q,w \rangle^n \prod_{i=1}^n r_q(x_i) \pi (dq)=\int_{{\cal P}} \langle q,w \rangle^n \pi (dq).$$ Finally note that (\ref{eq:pn}) can be rewritten more neatly by defining the measure \begin{equation}\label{eq:pi} d\bar{\pi}_n:={\langle q,w \rangle^n\over Z_n}d\pi.\end{equation} With $\bar{\pi}_n$, we have \begin{equation}\label{pibn} P_n(\bx)=\int_{{\cal P}}\prod_{i=1}^n r_q(x_i) \bar{\pi}_n (dq)\quad \forall \quad \bx\in {\cal X}^n. \end{equation} From (\ref{pibn}), it is easy to find all marginal distributions, namely for any $m=1,\ldots,n$ \begin{equation}\label{marginals} P_n(x_1,\ldots,x_m)=\int_{{\cal P}}\prod_{i=1}^m r_q(x_i) \bar{\pi}_n (dq)={1\over Z_n}\int_{{\cal P}} \langle q,w \rangle^{n-m} \prod_{i=1}^m q(x_i)w(x_i) \pi (dq). \end{equation} In particular, when $(X_1,\ldots,X_n)\sim P_n$, then \begin{align*} P(X_i=k)&=\int_{{\cal P}}r_q(k) \bar{\pi}_n (dq)={1\over Z_n}\int \langle q,w \rangle^{n-1} w(k) q(k) \pi (dq)\\ P(X_i=k,X_j=l)&=\int_{{\cal P}}r_q(k)r_q(l) \bar{\pi}_n (dq)={1\over Z_n}\int \langle q,w \rangle^{n-2}w(k)q(k)w(l)q(l) \pi (dq) \end{align*} and so on. It is important to observe that $P_n(x_1,\ldots,x_m)$ depends on $n$. \paragraph{The limit process.} We have defined for every $n$ the measure (\ref{pibn}) that describes the genotype distribution of an $n$-element population. Now the natural question is: do these measures converge (in some sense) as the population size $n$ grows? First we have to define the sense of convergence.
Since every measure $P_n$ is defined on a different domain (${\cal X}^n$), we cannot speak about standard (weak) convergence of measures. Instead, we ask about the existence of a limiting stochastic process. To explain the sense of convergence, consider that we have defined a triangular array of random variables: \begin{align*} &X_{1,1}\sim P_1\\ &(X_{2,1},X_{2,2})\sim P_2\\ &(X_{3,1},X_{3,2},X_{3,3})\sim P_3\\ &\cdots \\ &(X_{n,1},X_{n,2},\ldots,X_{n,n})\sim P_n\\ &\cdots \end{align*} We also know that the joint distribution of the first $m$ variables in every row depends on $n$. Therefore we ask: is there a stochastic process $X_1,X_2,\ldots$ so that for every $m$ the following convergence holds \begin{equation}\label{convm}(X_{n,1},\ldots, X_{n,m})\Rightarrow (X_{1},\ldots, X_{m})?\end{equation} According to Kolmogorov's existence theorem, the existence of a stochastic process is equivalent to the existence of (finite-dimensional) measures $P^*_m$ on the set ${\cal X}^m$, $m=1,2,\ldots$ that satisfy the following consistency conditions: for every $m$ and for every $(x_1,\ldots,x_m)\in {\cal X}^m$, it holds that $$\sum_{x_{m+1}}P_{m+1}^*(x_1,\ldots,x_m,x_{m+1})=P_{m}^*(x_1,\ldots,x_m).$$ If we also want (\ref{convm}) to be true, then for every $m$ and for every $(x_1,\ldots,x_m)\in {\cal X}^m$ the following convergences must hold: \begin{equation}\label{fin-dim} P_n(x_1,\ldots,x_m)\to P^*_m(x_1,\ldots,x_m),\quad \forall m,\quad \forall (x_1,\ldots,x_m)\in {\cal X}^m.\end{equation} We now present a general lemma that guarantees the convergence (\ref{fin-dim}). For full generality, we let $w$ also depend on $n$. Thus, we have weights $w_n$, and we define the measures $r_{q,n}$ as follows: $$ r_{q,n}(k):={w_n(k)q(k)\over \langle q,w_n \rangle} \quad \forall k\in {\cal X}.$$ We start with the following observation, proven in the appendix.
\begin{claim}\label{claim} If $w_n(i)\to w(i)$ $\forall i\in {\cal X}$, and $r_{q,n}$ and $r_q$ are defined with respect to $w_n$ and $w$, respectively, then the following uniform convergence holds: \begin{equation}\label{unif} \sup_{q\in {\cal P}}|r_{q,n}(k)-r_q(k)|\to 0. \end{equation} \end{claim} In the following lemma, $\bar{\pi}_n$ is an arbitrary sequence of probability measures on ${\cal P}$, not necessarily as in (\ref{eq:pi}). The measures $\bar{\pi}_n$ define $P_n$ as in (\ref{pibn}). \begin{lemma}\label{lemma1} Let $w_n(k)\to w(k)$ for every $k\in {\cal X}$. If there exists a probability measure $\bar{\pi}$ such that $\bar{\pi}_n\Rightarrow \bar{\pi}$, then for every $m$ there exists a probability measure $P_m^*$ on ${\cal X}^m$ so that (\ref{fin-dim}) holds. Moreover, for every $(x_1,\ldots,x_m)\in {\cal X}^m$ $$P^*_m(x_1,\ldots, x_m)=\int_{{\cal P}}\prod_{i=1}^m r_{q}(x_i)\bar{\pi}(dq),\quad {\rm where}\quad r_q(k)={w(k)q(k)\over \langle q,w \rangle},\quad k=1,\ldots,K $$ and the measures $P^*_m$, $m=1,2,\ldots$ satisfy the consistency conditions.\end{lemma} \begin{proof} For every $x_1,\ldots, x_m$, it follows from (\ref{unif}) that \begin{equation}\label{unif2} \sup_{q\in {\cal P}}|\prod_{i=1}^m r_{q,n}(x_i)-\prod_{i=1}^m r_q(x_i)|\to 0. \end{equation} Since the functions $$q\mapsto \prod_{i=1}^m r_{q,n}(x_i),\quad q\mapsto \prod_{i=1}^m r_q(x_i)$$ are bounded (by 1) continuous functions, from this uniform convergence together with $\bar{\pi}_n\Rightarrow \bar{\pi}$ it holds that $$P_n(x_1,\ldots,x_m)=\int_{{\cal P}}\prod_{i=1}^m r_{q,n}(x_i)\bar{\pi}_n(dq)\to \int_{{\cal P}}\prod_{i=1}^m r_{q}(x_i)\bar{\pi}(dq)=P^*_m(x_1,\ldots, x_m).$$ Clearly the $P_m^*$ are probability measures.
The consistency condition trivially holds, because $$\sum_{x_{m+1}}P^*_{m+1}(x_1,\ldots,x_{m+1})=\int_{{\cal P}}\sum_{x_{m+1}}\prod_{i=1}^{m+1} r_{q}(x_i)\bar{\pi}(dq)=\int_{{\cal P}}\prod_{i=1}^m r_{q}(x_i)\bar{\pi}(dq)=P^*_m(x_1,\ldots, x_m).$$\end{proof} \subsection{Frequencies: the measure $Q_n$} We now consider how to express the limit measure $P^*$ in terms of a limiting measure on the simplex $\mathcal{P}$. Recall $n_k(\bx)$ defined in (\ref{fr}) and let \begin{equation*} \mathbf{n}(\bx) := (n_1, \ldots, n_K ), \quad \text{where $n_i= n_i(\bx)$, for $i=1,\ldots,K$.} \end{equation*} Since $\xi$ is exchangeable, the probability $P_\xi(\bx)$ depends on the counts $\mathbf{n}(\bx)$ only, so \begin{equation*}\label{g} P_{\xi}(\bx)=:g(n_1,\ldots,n_K). \end{equation*} We may now write: \begin{equation}\label{definetti} g(n_1,\ldots,n_K)=P_{\xi}(\bx)=\int_{{\cal P}}\prod_{i=1}^n q(x_i)\pi(dq)=\int_{{\cal P}}\prod_{k=1}^K q(k)^{n_k}\pi(dq). \end{equation} In what follows, let us denote $$\mathbb{N}_n:=\{(n_1,\ldots,n_K): \sum_i n_i=n\}.$$ Observe that $$\sum_{(n_1,\ldots,n_K)\in \mathbb{N}_n}{n!\over n_1!\cdots n_K!} g(n_1,\ldots,n_K)=1.$$ \noindent Therefore, the measure $P_n$ can be defined on the set $\mathbb{N}_n$ as follows: \begin{align}\label{measuren} P_n({\bf n})& :={1\over Z_n}\frac{n!}{n_1!\cdots n_K!}g(n_1,\ldots,n_K)\prod_{k=1}^K \big( w_n(k)\big)^{n_k}={n!\over n_1! \cdots n_K!} \int_{{\cal P}} \prod_{k=1}^K \big(r_{q,n}(k)\big)^{n_k} \bar{\pi}_n (dq). \end{align} Considering the frequencies instead of the counts, we can define the corresponding measure on the simplex ${\cal P}$.
Let us denote that measure as $Q_n$, so that with ${\bf n}/n:=(n_1/n,\ldots, n_K/n)$ \begin{equation}\label{Qn} Q_n\biggl({{\bf n}\over n}\biggr):=P_n({\bf n}),\quad \forall {\bf n}\in \mathbb{N}_n.\end{equation} Thus $Q_n$ is a discrete measure $$Q_n=\sum_{{\bf n}\in \mathbb{N}_n} P_n({\bf n})\delta_{{\bf n}\over n}.$$ The advantage of $Q_n$ over the measure $P_n$ on $\mathbb{N}_n$ is that for any $n$, $Q_n$ is defined on the same domain ${\cal P}$, and so one can speak about the weak convergence of $Q_n$. In essence, the measure $P_n$ on ${\cal X}^n$, the measure $P_n$ on $\mathbb{N}_n$ and the measure $Q_n$ on ${\cal P}$ are all the same; only their domains differ. Since the measures $Q_n$ are defined on the same space (the simplex), it is natural to ask whether there exists a probability measure $Q^*$ such that $Q_n\Rightarrow Q^*$. It turns out that if the assumption of Lemma \ref{lemma1} holds, i.e. $\bar{\pi}_n\Rightarrow {\bar \pi}$ and $w_n\to w$ (pointwise), then the limit measure is actually ${\bar \pi}r^{-1}$, where $$r: {\cal P} \to {\cal P},\quad r(q)=r_q$$ and $r$ is defined with respect to the limit weight function $w$. Thus for a measurable $E\subset {\cal P}$, $${\bar \pi}r^{-1}(E)={\bar \pi}\big(r^{-1}(E)\big).$$ For example, if ${\bar \pi}=\delta_{q^*}$ (the measure is concentrated on one point), then $${\bar \pi}r^{-1}=\delta_{r(q^*)},$$ because $$\delta_{q^*}r^{-1}(E)=1\quad \Leftrightarrow \quad q^*\in r^{-1}(E)\quad \Leftrightarrow \quad r(q^*)\in E.$$ The following lemma is the counterpart of Lemma \ref{lemma1}. Again, $\bar{\pi}_n$ is an arbitrary sequence of probability measures on ${\cal P}$, $P_n$ are defined via $\bar{\pi}_n$ by (\ref{measuren}) and $Q_n$ via $P_n$ as in (\ref{Qn}). \begin{lemma}\label{lemma2} Let $w_n(k)\to w(k)$ for every $k\in {\cal X}$.
If there exists a probability measure $\bar{\pi}$ such that $\bar{\pi}_n\Rightarrow \bar{\pi}$, then $Q_n\Rightarrow {\bar \pi}r^{-1}$.\end{lemma} \begin{proof} Let $f: {\cal P}\to \mathbb{R}$ be a ($K$-variable) bounded continuous function. By the definition of weak convergence, it suffices to show that \begin{equation}\label{int} \int f(q) Q_n(dq)\to \int f(q) {\bar \pi}r^{-1}(dq)=\int f(r_q){\bar \pi}(dq),\end{equation} where the last equality holds by the change of variable formula. Note that \begin{align*} \int f(q)Q_n(dq)&=\sum_{{\bf n}\in \mathbb{N}_n} f\Big({{\bf n}\over n}\Big)P_n({\bf n})=\sum_{{\bf n}\in \mathbb{N}_n} f\Big({{\bf n}\over n}\Big){n!\over n_1! \cdots n_K!} \int \prod_{k=1}^K \big(r_{q,n}(k)\big)^{n_k} \bar{\pi}_n (dq)\\ &=\int \Big(\sum_{{\bf n}\in \mathbb{N}_n} f\Big({{\bf n}\over n}\Big) {n!\over n_1! \cdots n_K!} \prod_{k=1}^K \big(r_{q,n}(k)\big)^{n_k} \Big) \bar{\pi}_n(dq)\\ &=\int f_n(r_{q,n}(1),\ldots,r_{q,n}(K)) \bar{\pi}_n (dq)=\int f_n(r_{q,n}) \bar{\pi}_n (dq) ,\end{align*} where $$f_n(r_{q,n}):=f_n(r_{q,n}(1),\ldots,r_{q,n}(K)):=\sum_{(n_1,\ldots,n_K)\in \mathbb{N}_n} f\Big({n_1\over n},\ldots,{n_K\over n}\Big){n!\over n_1! \cdots n_K!} \prod_{k=1}^K \big(r_{q,n}(k)\big)^{n_k}$$ is the Bernstein polynomial evaluated at $r_{q,n}=(r_{q,n}(1),\ldots,r_{q,n}(K))$. It is easy to see and well known that for any vector $r\in {\cal P}$, $f_n(r)\to f(r)$; moreover, the convergence is uniform over ${\cal P}$: \begin{equation}\label{bern} \sup_{r\in {\cal P}}\Big|f_n(r)-f(r)\Big|\to 0.\end{equation} Since for every $q$, $r_{q,n}$ is a probability vector, $${n!\over n_1! \cdots n_K!} \prod_{k=1}^K \big(r_{q,n}(k)\big)^{n_k}\leq 1,$$ and since $f$ is bounded, we see that for every $n$, $q\mapsto f_n(r_{q,n})=:b_n(q)$ is a bounded continuous function. Also the function $q\mapsto f(r_{q})=:b(q)$ is a bounded continuous function. Then \begin{align*} \sup_q|b_n(q)-b(q)|&\leq \sup_q|f_n(r_{q,n})-f(r_{q,n})|+ \sup_q|f(r_{q,n})-f(r_{q})|.
\end{align*} By (\ref{unif}), $\sup_q|r_{q,n}(i)-r_q(i)|\to 0$ for every $i$. Then also $\sup_q\|r_{q,n}-r_q\|\to 0$. A continuous function on a compact space is uniformly continuous, so $$\sup_q|f(r_{q,n})-f(r_{q})| \to 0.$$ By (\ref{bern}), $$\sup_q|f_n(r_{q,n})-f(r_{q,n})|\leq \sup_r|f_n(r)-f(r)|\to 0.$$ Therefore, we have shown that $\sup_q|b_n(q)-b(q)|\to 0$, which together with $\bar{\pi}_n\Rightarrow \bar{\pi}$ implies that $$\int f(q)Q_n(dq)=\int b_n(q)\bar{\pi}_n(dq)\to \int b(q){\bar \pi}(dq)=\int f(r_{q}){\bar \pi}(dq).$$ \end{proof} \section{$P^*$ and $Q^*$ in the large population limit} \label{sec:largepopulationlimit} In what follows, let us rewrite the fitnesses in terms of $\phi(k) := - \ln( w(k))$, so that for any genome $k\in {\cal X}$, $w(k) = \exp(-\phi(k))$. Moreover, in order to increase the influence of the prior, we let the weights $w_n$ depend on $n$ in the following way: \begin{equation}\label{wn} w_n(k)=\exp[-{\phi(k)\over n^{\lambda}}], \quad k=1,\ldots,K, \end{equation} where $\lambda\geq 0$ and $0\leq \phi(1)<\phi(2)<\cdots <\phi(K)$. The case $\lambda=0$ corresponds to a fitness that is constant in the sense that it does not vary with $n$. Clearly, for every $k$, $w_n(k)\to w(k)$, and $\lambda$ controls the speed of that convergence. When $\lambda>0$, then $w(k)=1$, implying that in this case the mapping $r$ is the identity, i.e. $r_q=q$ for every $q$. Let us return to our original $\bar{\pi}_n$, defined as in (\ref{eq:pi}) with $w_n$: \begin{equation}\label{pinn} \bar{\pi}_n(E)=\int_E {\langle q,w_n \rangle^n\over Z_n}\pi (dq),\quad Z_n=\int \langle q,w_n \rangle^n \pi(dq).\end{equation} In this section we consider the case where the prior measure $\pi$ is independent of $n$, and the support of $\pi$ is the whole simplex ${\cal P}$. Since by assumption $w(1)>w(k)$ for every $k\geq 2$, the function $q\mapsto \langle q,w \rangle$ clearly has the unique maximizer $q^*:=(1,0,\ldots,0)$.
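Before stating the result, the effect of $\lambda$ can be previewed numerically. The following is a minimal sketch with hypothetical parameters that do not come from the text: $K=2$, $\phi=(0,\ln 2)$, the uniform prior on ${\cal P}$ (identified with $[0,1]$ via $q\mapsto q(1)$), and population size $n=10^4$. It discretizes (\ref{pinn}) and reports how much mass $\bar{\pi}_n$ puts near the maximizer $q^*$.

```python
import math

def pibar_mass_above(lam, n=10_000, threshold=0.9, grid=20_001, phi2=math.log(2)):
    """Mass that pi_bar_n puts on {q : q(1) > threshold}, for K = 2.

    Prior: uniform on the simplex (q(1) = q in [0, 1], q(2) = 1 - q).
    Weights: w_n(1) = 1 and w_n(2) = exp(-phi2 / n**lam), as in (wn);
    the density of pi_bar_n w.r.t. the prior is <q, w_n>^n / Z_n.
    """
    w2 = math.exp(-phi2 / n ** lam)
    total = above = 0.0
    for i in range(grid):
        q = i / (grid - 1)
        dens = (q + (1.0 - q) * w2) ** n              # <q, w_n>^n, always <= 1
        weight = 0.5 if i in (0, grid - 1) else 1.0   # trapezoid rule
        total += weight * dens
        if q > threshold:
            above += weight * dens
    return above / total

print(pibar_mass_above(0.5))  # ~1: the prior washes out, mass collapses on q*
print(pibar_mass_above(2.0))  # ~0.1: the fitness washes out, the uniform prior survives
print(pibar_mass_above(1.0))  # ~0.134: critical case, exponentially tilted prior
```

The three printed masses preview the three regimes: concentration on $q^*$, convergence back to the prior, and a nondegenerate tilted limit in between.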
The following theorem states that a phase transition occurs: when $0\leq \lambda <1$, then $\bar{\pi}_n\Rightarrow \delta_{q^*}$ and $P_n(x_1,\ldots,x_m)\to P^*(x_1,\ldots,x_m),$ where $$P^*(x_1,\ldots, x_m)=\prod_{i=1}^m r_{q^*}(x_i)=\prod_{i=1}^m q^*(x_i)=\left\{ \begin{array}{ll} 1, & \hbox{if for every $i$, $x_i=1$;} \\ 0, & \hbox{else,} \end{array} \right. $$ because in both cases (i.e. $\lambda=0$ and $\lambda\in (0,1)$), it holds that $r_{q^*}=q^*$. Thus the limit process $X_1,X_2,\ldots$ has only one realization: $1,1,\ldots$. In this case also $Q_n\Rightarrow \delta_{q^*}$. Thus, when $\lambda\in [0,1)$, only the fittest genotype survives and no other genotype has any chance, no matter what the prior says. In other words, the influence of the prior vanishes. \\ When $\lambda=1$, then $\bar{\pi}_n$ as well as $Q_n$ converge to a nondegenerate distribution, specified below, and also the limit measure $P^*$ is non-degenerate. \\ And finally, when $\lambda>1$, then $\bar{\pi}_n\Rightarrow \pi$, $Q_n\Rightarrow \pi$ and the measure $P^*$ is the law of the birth process $\xi$. In this case the influence of the fitness vanishes and only the prior matters -- the limit process equals the breeding process. \begin{theorem}\label{thm1} Let the fitness function be defined as in (\ref{wn}) and assume that the support of the prior $\pi$ is ${\cal P}$.
Then the following convergences hold: \begin{description} \item[1)] If $\lambda \in [0,1)$, then $\bar{\pi}_n\Rightarrow \delta_{q^*}$, $Q_n\Rightarrow \delta_{q^*}$ and (\ref{fin-dim}) holds with $$P^*(x_1,\ldots,x_m)=\prod_{i=1}^mq^*(x_i),\text{ where } q^*=(1,0,\ldots,0).$$ \item[2)] If $\lambda=1$, then $\bar{\pi}_n\Rightarrow {\bar \pi}$, $Q_n\Rightarrow {\bar \pi}$ and (\ref{fin-dim}) holds with $$P^*(x_1,\ldots,x_m)=\int \prod_{i=1}^mq(x_i)\bar{\pi}(dq),$$ where for every measurable $E\subset {\cal P}$, $${\bar \pi}(E)={1\over Z} \int_E \exp[-\langle \phi,q\rangle] \pi (dq),\quad Z=\int \exp[-\langle \phi,q\rangle] \pi (dq).$$ \item[3)] If $\lambda>1$, then $\bar{\pi}_n\Rightarrow {\pi}$, $Q_n\Rightarrow {\pi}$ and (\ref{fin-dim}) holds with $$P^*(x_1,\ldots,x_m)=P_{\xi}(x_1,\ldots,x_m).$$ \end{description} \end{theorem} \subsection{Proof of Theorem \ref{thm1}} Before proving the theorem, let us state a very useful preliminary result. Recall that the simplex ${\cal P}$ is a compact set. Let $f_n,f: {\cal P}\to \mathbb{R}^+$ be continuous (hence bounded) functions such that $f_n\to f$ uniformly, and let $m_n\to \infty$ be an increasing sequence. We are given a measure $\pi$ on ${\cal P}$, and we are interested in the asymptotic behavior of the measure $\nu_n$, where $$\nu_n(E):=\int_E h_n(q) \pi(dq),\quad h_n(q):={f_n^{m_n}(q)\over \int f_n^{m_n}(q) \pi(dq)}=\Big({f_n(q)\over \|f_n\|_{m_n}}\Big)^{m_n}.$$ Here we assume that $\int f_n^{m_n} d\pi<\infty$ for every $n$. If $\pi$ is a finite measure, then this condition automatically holds due to the boundedness of $f_n$. In what follows, let \begin{equation}\label{S} {\cal S}^*:=\{q\in {\cal P}: f(q)=\|f\|_{\infty}\},\quad {\cal S}_{\delta}^*:=\{q\in {\cal P}: f(q)> \|f\|_{\infty}-\delta\}.\end{equation} Here $\|f\|_{\infty}$ is the essential supremum of $f$ with respect to the $\pi$-measure. If $f$ is continuous and the support of $\pi$ is ${\cal P}$, then $\|f\|_{\infty}=\sup_q f(q)$.
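The concentration of $\nu_n$ on ${\cal S}_{\delta}^*$ is easy to see in one dimension. A minimal sketch, with hypothetical choices that are not taken from the text ($f(q)=1-(q-0.3)^2$ on $[0,1]$, $\pi$ the Lebesgue measure, $m_n=n$, $\delta=0.05$):

```python
def nu_mass(n, delta=0.05, grid=100_001):
    """nu_n(S*_delta) for f(q) = 1 - (q - 0.3)**2 on [0, 1], pi = Lebesgue.

    Here m_n = n, ||f||_inf = f(0.3) = 1, and
    S*_delta = {q : f(q) > 1 - delta} is a neighbourhood of the maximizer 0.3.
    """
    total = inside = 0.0
    for i in range(grid):
        q = i / (grid - 1)
        f = 1.0 - (q - 0.3) ** 2
        dens = f ** n              # density of nu_n, up to normalization
        total += dens
        if f > 1.0 - delta:
            inside += dens
    return inside / total

for n in (10, 100, 1000):
    print(round(nu_mass(n), 4))    # the mass of S*_delta increases towards 1
```

As $n$ grows, essentially all of the mass of $\nu_n$ sits on ${\cal S}_{\delta}^*$, which is exactly the behavior the next proposition formalizes.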
The proof of the following Proposition \ref{point2} is given in the appendix. \begin{proposition}\label{point2} Let $f_n\to f$ uniformly and let $\pi$ be a finite measure on ${\cal P}$. Then for every $\delta>0$, $\nu_n\big({\cal S}_{\delta}^*\big)\to 1$. If ${\cal S}^*=\{q^*\}$, then $\nu_n\Rightarrow \delta_{q^*}.$ \end{proposition} Besides Proposition \ref{point2}, the proof of Theorem \ref{thm1} is based on the following well-known observation: when $m\to \infty$, then \begin{equation}\label{exp} \sup_{q\in {\cal P}}\Big| \big\langle \exp[-{\phi\over m}], q \big\rangle^m - \exp[-\langle \phi, q \rangle]\Big|\to 0. \end{equation} \begin{proof} {\bf (Theorem \ref{thm1})} \begin{description} \item[1)] For $\lambda=0$, take $f_n(q)=f(q)=\langle w,q \rangle$. From Proposition \ref{point2}, it follows that $\bar{\pi}_n\Rightarrow \delta_{q^*}$. Since for any weight $w$, $r_{q^*}(k)=q^*(k)$, from Lemma \ref{lemma1}, it follows that $$P^*(x_1,\ldots, x_m)=\prod_{i=1}^m r_{q^*}(x_i)=\prod_{i=1}^m q^*(x_i).$$ Since $\delta_{q^*}r^{-1}=\delta_{r(q^*)}=\delta_{q^*}$, from Lemma \ref{lemma2}, it follows that $Q_n\Rightarrow \delta_{q^*}$. So, for $\lambda=0$, the statement is proven and we now consider the case $\lambda\in (0,1)$. Let $$f_n(q):=\langle \exp[-{\phi\over n^{\lambda}}], q \rangle^{n^{\lambda}},\quad f(q):=\exp[-\langle \phi, q \rangle ].$$ By (\ref{exp}), $\|f_n-f\|_{\infty}\to 0$. Since $\lambda\in (0,1)$, take $m_n=n^{1-\lambda}$. Then $$h_n(q):={\langle \exp[-{\phi\over n^{\lambda}}], q \rangle^{n}\over Z_n}$$ is the density of $\bar{\pi}_n$ with respect to $\pi$. Since $f$ is continuous, the set ${\cal S}^*$ in (\ref{S}) is $${\cal S}^*= \arg\max_{q\in {\cal P}} f(q)=\arg\min_{q\in {\cal P}}\langle \phi, q \rangle=\{(1,0,\ldots,0)\}=\{q^*\}.$$ By Proposition \ref{point2}, $\bar{\pi}_n\Rightarrow \delta_{q^*}$. As in the case $\lambda=0$, it follows that $Q_n\Rightarrow \delta_{q^*}$ and $P^*(x_1,\ldots,x_m)=1$ if and only if $x_1=\cdots=x_m=1$.
\item[2)] Since for any $q$ and any $n$, it holds that $$\langle \exp[-{\phi\over n}], q \rangle^n\leq e^{-\phi(1)}\leq 1,$$ we obtain from (\ref{exp}) and bounded convergence that for any measurable $E$ \begin{equation}\label{Ekoo} \int_E \langle \exp[-{\phi\over n}], q \rangle^n \pi (dq)\to \int_E \exp[-\langle \phi, q \rangle]\pi(dq).\end{equation} Recall $$\bar{\pi}(E)={1\over Z}{\int_E}\exp[-\langle \phi, q \rangle]\pi(dq),\quad {\rm where}\quad Z=\int \exp[-\langle \phi, q \rangle]\pi(dq).$$ Therefore, from (\ref{Ekoo}), it follows that when $\lambda=1$, we have $$\bar{\pi}_n(E)\to {\bar\pi}(E),$$ meaning that $\bar{\pi}_n\Rightarrow \bar{\pi}$ (even in a stronger sense). Since $r_q=q$, from Lemma \ref{lemma1}, it follows that the limits of $P_n(x_1,\ldots,x_m)$ are $$P^*(x_1,\ldots,x_m)=\int \prod_{i=1}^mq(x_i)\bar{\pi}(dq)={1\over Z}\int \prod_{i=1}^mq(x_i)\exp[-\langle \phi, q \rangle]\pi(dq).$$ Since $r$ is the identity function, by Lemma \ref{lemma2} the limit measure of the frequencies is $\bar{\pi}$, i.e. $Q_n\Rightarrow \bar{\pi}$. \item[3)] Since for any $q$, $$\langle \exp[-{\phi\over n^{\lambda}}], q \rangle^n\to 1,$$ by dominated convergence, again, for any measurable $E$ \begin{equation*}\label{Ekoo2} \int_E \langle \exp[-{\phi\over n^{\lambda}}], q \rangle^n \pi (dq)\to \pi(E).\end{equation*} Therefore $\bar{\pi}_n\Rightarrow \pi$. By Lemma \ref{lemma1}, the limits of $P_n(x_1,\ldots,x_m)$ are $$P^*(x_1,\ldots,x_m)=\int \prod_{i=1}^mq(x_i)\pi(dq),$$ so that the limit process is $\xi$. The convergence $Q_n\Rightarrow \pi$ follows from Lemma \ref{lemma2}. \end{description} \end{proof} We have seen that the critical case $\lambda=1$ is the only case where the prior and the fitnesses both determine the limit measure.
In this case, the limit process $X_1,X_2,\ldots$, governed by $P^*$, has marginals \begin{align*} P(X_i=k)&=P(X_1=k)=\int q(k) \bar{\pi}(dq)={1\over Z}\int e^{-\langle \phi,q \rangle}q(k)\pi(dq),\\ P(X_i=k,X_j=l)&=P(X_1=k,X_2=l)= {1\over Z}\int e^{-\langle \phi,q \rangle}q(k)q(l)\pi(dq).\end{align*} It is also interesting to point out that in the critical case $\lambda=1$, the measure $\bar{\pi}$ satisfies $$\bar{\pi}=\arg\min_{\pi'\in E}D(\pi\|\pi'),$$ where $E$ is a set of probability measures on ${\cal P}$, namely $E:=\{\pi': \int \langle \phi,q \rangle \pi'(dq)\geq c\}$, and $c>0$ is a constant. \section{Dirichlet prior} \label{sec:dirichletprior} In the current section we again consider the weights $w_n(k)$ as in (\ref{wn}), where $\lambda \in [0,1]$. We already know that in the case of constant priors, $\lambda<1$ means that the fitnesses prevail over the prior, and the limit measure is a degenerate one. Therefore, it is meaningful to consider non-constant priors whose influence increases at a suitable rate. Hence, in the present section, we consider Dirichlet priors \begin{equation}\label{dir} \pi_n={\rm Dir}(n^{1-\lambda}\alpha_1,\ldots, n^{1-\lambda}\alpha_K),\end{equation} where $\alpha:=(\alpha_1,\ldots,\alpha_K)$, $\alpha_k>0$ and $|\alpha|:=\sum_k \alpha_k$. The constant $\sum_k n^{1-\lambda}\alpha_k=|\alpha|n^{1-\lambda}$ is the so-called {\it concentration} or {\it precision} parameter: the bigger this parameter, the more the prior is concentrated around its expectation $(\alpha_1/|\alpha|,\ldots,\alpha_K/|\alpha|)$. Increasing the concentration parameter increases the influence of the prior, and it is now clear that the smaller $\lambda$ is, the bigger the prior influence must be. This justifies the choice of $n^{1-\lambda}$. The case $\lambda=1$ corresponds to the already studied case of constant priors, therefore we now consider the case $\lambda\in [0,1)$. The following theorem shows that a phase transition occurs again.
\begin{theorem}\label{thm2} Let the fitness function be defined as in (\ref{wn}) and the prior $\pi_n$ as in (\ref{dir}). Let $\bar{\pi}_n$ be defined as in (\ref{pinn}) with $\pi_n$ instead of $\pi$. Then the following convergences hold: \begin{description} \item[1)] If $\lambda=0$, then $\bar{\pi}_n\Rightarrow \delta_{q^*}$, where $q^*$ is the unique maximizer of the following function: \begin{equation}\label{measure1} \ln \langle e^{-\phi} ,q \rangle + \sum_{k=1}^K\alpha_k\ln q(k).\end{equation} Then $Q_n\Rightarrow \delta_{r^*}$, where $r^*=r_{q^*}$, so that $r^*(k)\propto q^*(k)w(k)$, and (\ref{fin-dim}) holds with $$P^*(x_1,\ldots,x_m)=\prod_{i=1}^mr^*(x_i).$$ \item[2)] If $\lambda\in (0,1)$, then $\bar{\pi}_n\Rightarrow \delta_{q^*}$, where $q^*$ is the unique maximizer of the following function: \begin{equation}\label{measure2} -\langle \phi,q\rangle + \sum_{k=1}^K\alpha_k \ln q(k). \end{equation} Then $Q_n\Rightarrow \delta_{q^*}$ and (\ref{fin-dim}) holds with $$P^*(x_1,\ldots,x_m)=\prod_{i=1}^m q^*(x_i).$$ \end{description} \end{theorem} Let us start by proving the uniqueness of the maximizers of (\ref{measure1}) and (\ref{measure2}). The proof of the following lemma is in the appendix.\vfil\break \begin{lemma}\label{sol}\hfil\break \begin{description} \item[1)] The function (\ref{measure1}) has a unique maximizer $q^*$, where \begin{equation}\label{solla1} q^*(k)={\alpha_k\over (1+|\alpha|)-{w(k)\over \theta}}, \quad k=1,\ldots,K, \end{equation} where $\theta>0$ is a parameter satisfying $\theta=\langle w ,q^* \rangle$.
\item[2)] The function (\ref{measure2}) has a unique maximizer $q^*$, where \begin{equation}\label{solla2} q^*(k)={ \alpha_k \over \phi(k)+|\alpha|-\theta},\quad k=1,\ldots,K, \end{equation} where $\theta>0$ is the parameter satisfying $\theta=\langle \phi,q^*\rangle.$ \end{description} \end{lemma} \paragraph{Proof of Theorem \ref{thm2}.} \begin{description} \item[1)] In the case $\lambda=0$, the measure $\bar{\pi}_n$ has the following density with respect to the Lebesgue measure: $$\bar{\pi}_n(q)={1\over Z_n }\langle e^{-\phi} ,q \rangle^{n}\cdot {1\over B(n\alpha)}\prod_{k=1}^K (q(k))^{n(\alpha_k-{1\over n})}={f_n(q)^n\over Z'_n},$$ where $$f_n(q):=\langle e^{-\phi} ,q \rangle\prod_{k=1}^K(q(k))^{(\alpha_k-{1\over n})},\quad Z'_n:=\int f^n_n(q)dq.$$ Clearly for every $q$, $f_n(q)\to f(q)$, where $$f(q)=\langle e^{-\phi} ,q \rangle\prod_{k=1}^K(q(k))^{\alpha_k}.$$ It is not hard to see that the convergence is uniform, i.e. $\sup_q |f_n(q)-f(q)|\to 0$. By {\bf 1)} of Lemma \ref{sol}, the function $f$ has the unique maximizer $q^*$ in (\ref{solla1}), i.e. ${\cal S}^*=\{q^*\}$. Now apply Proposition \ref{point2} with $\pi$ being the Lebesgue measure on ${\cal P}$ (hence $\pi$ is finite) and $m_n=n$, so that $$\nu_n(E)=\int_E h_n d q={1\over Z'_n} \int_E {f_n(q)^n }dq=\bar{\pi}_n(E).$$ Since all assumptions are fulfilled, we have $\bar{\pi}_n\Rightarrow \delta_{q^*}$. Since $w_n=w$, by Lemma \ref{lemma1} the limit process has finite-dimensional distributions $$P^*(x_1,\ldots,x_m)=\prod_{i=1}^mr^*(x_i),\quad \text{where}\quad r^*=r_{q^*},$$ so that the limit process $P^*$ corresponds to an i.i.d. sequence $X_1,X_2,\ldots$ with $X_1\sim r^*$. According to Lemma \ref{lemma2}, the frequencies $Q_n$ converge weakly to $\delta_{r^*}$, which is also quite obvious by the SLLN.
\item[2)] The proof is similar: $\bar{\pi}_n$ has density (with respect to the Lebesgue measure) $${f_n(q)^{m_n}\over Z'_n},\quad \text{where}\quad f_n(q)=\langle e^{-{\phi\over n^{\lambda}}} ,q \rangle^{n^{\lambda}}\prod_{k=1}^K (q(k))^{(\alpha_k-{1\over n^{1-\lambda}})},\quad m_n=n^{(1-\lambda)}.$$ Since $$\langle e^{-{\phi\over n^{\lambda}}} ,q \rangle^{n^{\lambda}}\to e^{-\langle \phi,q\rangle }$$ uniformly over $q$, the sequence $f_n$ converges uniformly to $$f(q)=e^{-\langle \phi,q\rangle }\prod_{k=1}^K (q(k))^{\alpha_k}.$$ By {\bf 2)} of Lemma \ref{sol}, the function $f$ has the unique maximizer $q^*$ in (\ref{solla2}). As in the case {\bf 1)}, it is easy to see that the assumptions of Proposition \ref{point2} are fulfilled with $\pi$ being the Lebesgue measure on ${\cal P}$ and $m_n=n^{1-\lambda}$, and so $\bar{\pi}_n\Rightarrow \delta_{q^*}$. In the present case, for every $k=1,\ldots,K$, $w_n(k)\to 1$ and so by Lemma \ref{lemma1}, the limit process $P^*$ is an i.i.d. process with distribution ${q^*}$ (because $r$ is the identity function). According to Lemma \ref{lemma2}, the frequencies $Q_n$ converge weakly to the measure $\delta_{q^*}$. \end{description} \subsection{Relation between $\lambda=0$ and $\lambda\in (0,1)$} From (\ref{exp}), it follows that \begin{equation}\label{exp2} \sup_{q\in {\cal P}}\Big| m\ln \langle \exp[-{\phi\over m}], q \rangle +\langle \phi, q \rangle\Big|\to 0, \end{equation} so that with $$f_m(q):=\ln \langle \exp[-{\phi\over m}], q \rangle+\sum_k {\alpha_k\over m} \ln q(k), \quad f(q)=-\langle \phi, q \rangle+\sum_k {\alpha_k} \ln q(k),$$ we have $$\sup_{q\in {\cal P}}|m f_m(q)-f(q)|\to 0.$$ Since $f(q)$ is as in (\ref{measure2}), it has the unique maximizer $q^*$ given in (\ref{solla2}). On the other hand, the maximizer of $m f_m(q)$ is the same as the maximizer of $f_m(q)$, which corresponds to (\ref{measure1}) where $\phi$ is replaced by $\phi/m$ and $\alpha$ is replaced by $\alpha/m$.
Let this unique maximizer be $q^*_m$. Since the functions $mf_m(\cdot)$ and $f(\cdot)$ are continuous, uniformly convergent, and each has a unique maximizer, it follows that $q^*_m \to q^*$ (in the usual sense, because ${\cal P}$ is compact). Thus, we have proven the following proposition. \begin{proposition}\label{propa} Let $$q^*_m=\arg\max_q \Big(\ln \langle \exp[-{\phi\over m}], q \rangle+\sum_k {\alpha_k\over m} \ln q(k)\Big)$$ and let $q^*$ be the maximizer of (\ref{measure2}). Let $r_m^*$ be the corresponding $r$-measure, i.e. $r_m^*(k)\propto q^*_m(k)\exp[-{\phi(k)\over m}].$ Then $q^*_m\to q^*$ and $r^*_m\to q^*$. \end{proposition} \subsection{Product of Dirichlet priors} Recall the setup in Subsection \ref{sec:product}. The set of genomes is now ${\cal X}^L=\overbrace{{\cal X}\times \cdots \times {\cal X}}^L$ and the breeding process is $\xi=(\xi^1,\ldots,\xi^L)$, where $\xi^l$ are independent exchangeable processes. We now assume that the prior of $\xi^l$ is $\pi^l={\rm Dir}(\alpha^l),$ where $\alpha^l=(\alpha_1^l,\ldots,\alpha_K^l)$, $l=1,\ldots,L$. In this model, $L$ different Polya urns are run independently. Let ${\cal P}^L$ be the set of $L$-fold product measures: $${\cal P}^L:=\{q^1\times \cdots \times q^L: q^j\in {\cal P}\},$$ where ${\cal P}$, as previously, stands for the $(K-1)$-dimensional simplex. Observe that ${\cal P}^L$ is a compact subset of the set of all possible probability measures on ${\cal X}^L$. Since the components of $\xi$ are independent, the prior $\pi$ of $\xi$ is the product of Dirichlet measures $\pi=\pi^1\times \cdots \times \pi^L$.
This means that the support of $\pi$ is ${\cal P}^L$ and for every element $q=q^1\times \cdots\times q^L\in {\cal P}^L$, the density is (with a slight abuse of notation, $\pi$ stands for the measure as well as for its density) $$\pi(q)=\prod_{l=1}^L\pi^l(q^l)={1\over B}\prod_{l=1}^L \prod_{k=1}^K \big(q^l(k)\big)^{\alpha^l_k-1},\quad B:=\prod_{l=1}^L B(\alpha^l).$$ The function $\phi$ is now defined on the set ${\cal X}^L$, and so for any $q\in {\cal P}^L$, $$\langle \phi,q \rangle=\sum_{(k_1,\ldots,k_L)\in {\cal X}^L}\phi(k_1,\ldots,k_L)q^1(k_1)\cdots q^L(k_L),$$ and $\langle e^{-\phi} ,q \rangle$ is defined similarly. When $\lambda=0$, the measure $\bar{\pi}_n$ has density $f_n(q)^n/Z'_n$, where $$f_n(q)=\langle e^{-\phi} ,q \rangle \prod_{l=1}^L\prod_{k=1}^K(q^l(k))^{(\alpha^l_k-{1\over n})},\quad Z'_n:=\int f^n_n(q)dq.$$ Clearly $f_n(q)$ converges uniformly to \begin{equation}\label{eq1} f(q):=\langle e^{-\phi} ,q \rangle \prod_{l=1}^L\prod_{k=1}^K(q^l(k))^{\alpha^l_k}.\end{equation} Similarly, when $\lambda\in (0,1)$, the measure $\bar{\pi}_n$ has density $f_n(q)^{m_n}/Z'_n$, where $m_n=n^{1-\lambda}$, $$f_n(q)=\langle e^{-{\phi\over n^{\lambda}}} ,q \rangle^{n^{\lambda}}\prod_{l=1}^L\prod_{k=1}^K(q^l(k))^{(\alpha^l_k-{1\over n^{1-\lambda}})},\quad Z'_n:=\int f^{m_n}_n(q)dq.$$ Again, $f_n(q)$ converges uniformly to \begin{equation}\label{eq2} f(q):=e^{-\langle \phi ,q \rangle} \prod_{l=1}^L\prod_{k=1}^K(q^l(k))^{\alpha^l_k}.\end{equation} When (\ref{eq1}) (resp.\ (\ref{eq2})) has a unique maximizer $q^*$, the statements of Theorem \ref{thm2} hold (the proof is the same): \begin{description} \item[1)] Suppose $\lambda=0$ and (\ref{eq1}) has the unique maximizer $q^*=q^1\times \cdots\times q^L$.
Then $\bar{\pi}_n\Rightarrow \delta_{q^*}$ and $Q_n\Rightarrow \delta_{r^*}$, where $$r^*(k_1,\ldots,k_L)\propto w(k_1,\ldots,k_L)q^1(k_1)\cdots q^L(k_L),$$ and $w(k_1,\ldots,k_L)=\exp[-\phi(k_1,\ldots,k_L)].$ Observe that the measure $r^*$ is not necessarily a product measure. Then also (\ref{fin-dim}) holds with $$P^*(x_1,\ldots,x_m)=\prod_{i=1}^mr^*(x_i),\quad x_i\in {\cal X}^L.$$ \item[2)] Suppose $\lambda\in (0,1)$ and (\ref{eq2}) has the unique maximizer $q^*=q^1\times \cdots\times q^L$. Then $\bar{\pi}_n\Rightarrow \delta_{q^*}$, $Q_n\Rightarrow \delta_{q^*}$ and (\ref{fin-dim}) holds with $$P^*(x_1,\ldots,x_m)=\prod_{i=1}^mq^*(x_i),\quad x_i\in {\cal X}^L.$$ \end{description} In the case $L>1$, the maximizer of (\ref{eq1}) and (\ref{eq2}) is not always unique. Whether it is unique or not depends on $\phi$ and on the vectors $\alpha^l$. Indeed, maximizing (\ref{eq1}) is equivalent to minimizing \begin{equation}\label{eq1ln} -\ln \big(\langle e^{-\phi} ,q \rangle \big)-\sum_{l=1}^L\sum_{k=1}^K \alpha_k^l \ln (q^l(k)), \end{equation} and $-\sum_{l=1}^L\sum_{k=1}^K \alpha_k^l \ln (q^l(k))$ is always a convex function. So, when the parameters $\alpha^l_k$ are big enough, the whole function (\ref{eq1ln}) becomes convex. The same argument holds for (\ref{eq2}). We shall present some sufficient conditions for the convexity of (\ref{eq1ln}), and of its analogue for (\ref{eq2}), in the case $K=L=2$ below. Recall: when a positive continuous function $f(q)$ has one maximizer $q^*$, then for any sequence $m_n\to \infty$, the measures $\nu_n$ with densities proportional to $f^{m_n}(q)$ converge weakly to $\delta_{q^*}$. When the function has, say, two maximizers, $q^*_1$ and $q^*_2$, then by Proposition \ref{point2}, for all disjoint open balls $B_1$ and $B_2$ such that $q^*_i\in B_i$, it still holds that $\nu_n(B_1)+\nu_n(B_2)\to 1$.
Thus, when the measures $\nu_n$ are weakly convergent, the limit measure is concentrated on $\{ q^*_1,q^*_2\}$, so that the limit measure must be $p \delta_{q^*_1}+(1-p)\delta_{q^*_2}$ for some $p\in[0,1]$. In this case $\nu_n(B_1)\to p$. However, the function $f$ might be such that the limits $\nu_n(B_i)$ do not exist. And even if they do exist (i.e. the measures $\nu_n$ are weakly convergent), the limits $p$ and $1-p$ might be arbitrary numbers in $[0,1]$, and hard to determine. Therefore the following theorem, adapted from \cite{HaarioSaksman} (Theorem 5.7 and a remark after it), might be very useful. \begin{theorem}\label{thmg} Suppose $K\subset \mathbb{R}^k$ is a compact non-empty subset and let $g: K\to [0,\infty)$ be a twice continuously differentiable function with finitely many minimum points $\{a_1,\ldots,a_r\}$, all located in the interior of $K$. Let, for every $i=1,\ldots, r$, the Hessian of $g$ at $a_i$ be positive definite. Given any increasing sequence $m_n$, define the sequence of measures $$\nu_n(E):= {1\over Z_n} \int_E \exp[-m_n\cdot g(x)]dx,\quad Z_n:=\int \exp[-m_n\cdot g(x)]dx.$$ Then $\nu_n\Rightarrow \nu$, where $$\nu=\sum_{i=1}^r p_i \delta_{a_i},\quad \text{ with } p_i\propto {1\over \sqrt{ \det H(a_i)}},$$ and $\det H(a_i)$ is the determinant of the Hessian of $g$ evaluated at $a_i$.\end{theorem} To apply the theorem in our case, let us first note that the $(K-1)$-dimensional simplex can be identified with the non-empty compact set $${\cal P}_K=\{(q(1),\ldots, q(K-1)): q(k)\geq 0, \sum_{k=1}^{K-1} q(k)\leq 1\}.$$ Therefore, our search space ${\cal P}^L$ can be considered as a subset of $\mathbb{R}^{L(K-1)}.$ This subset has non-empty interior. Clearly any maximizer of (\ref{eq1}) or (\ref{eq2}) has all components strictly positive, so that all maximizers of (\ref{eq1}) are interior points, and the same holds for (\ref{eq2}). We take $g(q)=-\ln f(q)$, where $f$ is as in (\ref{eq1}) or (\ref{eq2}).
Thus, for any $m$, $\exp[-m g(q)]=f^m(q)$, so that the measure defined in the statement of the theorem is $$\nu_n(E)\propto \int_E f^{m_n}(q)dq.$$ However, even when the measures $\nu_n$ converge weakly to a limit, it does not automatically follow that the measures $\bar{\pi}_n$ converge to the same limit, even if $f_n$ converges to $f$ uniformly. This convergence might depend on the speed of the uniform convergence; we leave this question for future study and proceed with an example instead. \subsubsection{The case $K=L=2$} Let us analyze more closely the case $K=2$ and $L=2$. Denote $q^1(1)=:z_1$ and $q^2(1)=:z_2$. Also denote $\alpha_k^1=:\alpha_k$ and $\alpha_k^2=:\beta_k$. The function (\ref{eq1ln}) is \begin{equation}\label{g1} g(z_1,z_2)=-\ln \big(\langle w,z \rangle\big)-\alpha_1 \ln z_1- \alpha_2 \ln (1-z_1) - \beta_1 \ln z_2- \beta_2 \ln (1-z_2),\end{equation} where $$\langle w,z \rangle= w(1,1)z_1z_2+w(1,2)z_1(1-z_2)+w(2,1)(1-z_1)z_2+w(2,2)(1-z_1)(1-z_2).$$ Thus with $w^*:=w(1,1)-w(1,2)-w(2,1)+w(2,2)$ and \begin{align*} \theta^1_1&:=w(1,1)z_2+w(1,2)(1-z_2),\quad \theta^1_2:=w(2,1)z_2+w(2,2)(1-z_2),\\ \theta^2_1&:=w(1,1)z_1+w(2,1)(1-z_1),\quad \theta^2_2:=w(1,2)z_1+w(2,2)(1-z_1), \end{align*} we obtain the Hessian \begin{align*} \left( \begin{array}{cc} {\partial^2 g\over \partial z_1^2} & {\partial^2 g\over \partial z_1 \partial z_2} \\ {\partial^2 g\over \partial z_2 \partial z_1} & {\partial^2 g\over \partial z_2^2} \\ \end{array} \right)=\left( \begin{array}{cc} {\alpha_1\over z_1^2}+{\alpha_2\over (1- z_1)^2} +{( \theta_1^1-\theta_2^1 )^2\over \langle w,z \rangle^2} & {(\theta_1^1-\theta_2^1)(\theta_1^2-\theta_2^2)-w^*\langle w,z \rangle\over \langle w,z \rangle^2}\\ {(\theta_1^1-\theta_2^1)(\theta_1^2-\theta_2^2)-w^*\langle w,z \rangle\over \langle w,z \rangle^2} & {\beta_1\over z_2^2}+{\beta_2\over (1-z_2)^2}+{(\theta_1^2-\theta_2^2)^2\over \langle w,z \rangle^2} \\ \end{array} \right).
\end{align*} The elements on the main diagonal are strictly positive, and therefore the matrix is positive definite if the determinant is positive, i.e. \begin{align*} \Big({\alpha_1\over z_1^2}+{\alpha_2\over (1- z_1)^2} +{( \theta_1^1-\theta_2^1 )^2\over \langle w,z \rangle^2}\Big) \Big({\beta_1\over z_2^2}+{\beta_2\over (1-z_2)^2}+{(\theta_1^2-\theta_2^2)^2\over \langle w,z \rangle^2}\Big) >\Big({(\theta_1^1-\theta_2^1)(\theta_1^2-\theta_2^2)\over \langle w,z \rangle^2}-{w^*\over \langle w,z \rangle}\Big)^2. \end{align*} Observe: $ \langle w,z \rangle \geq \min_{i,j}w(i,j)>0$, and so $$\big|{w^*\over \langle w,z \rangle}\big|\leq {|w^*|\over \min_{i,j}w(i,j)}.$$ On the other hand, for any $(z_1,z_2)\in [0,1]\times[0,1]$, $${\alpha_1\over z_1^2}+{\alpha_2\over (1-z_1)^2}\geq (\alpha_1^{1\over 3}+\alpha_2^{1\over 3})^3, \quad {\beta_1\over z_2^2}+{\beta_2\over (1-z_2)^2}\geq (\beta_1^{1\over 3}+\beta_2^{1\over 3})^3.$$ Therefore, it is not hard to see that when \begin{equation}\label{posd1} (\alpha_1^{1\over 3}+\alpha_2^{1\over 3})^3(\beta_1^{1\over 3}+\beta_2^{1\over 3})^3>\big({w^*\over \min_{i,j}w(i,j)}\big)^2,\end{equation} then the Hessian is always positive definite, i.e. the function $g$ in (\ref{g1}) is strictly convex, and the minimum is unique. In particular, the condition holds if $w^*=0$. Moreover, if $\alpha_1=\beta_1$, $\alpha_2=\beta_2$ and $w(1,2)=w(2,1)$, then under (\ref{posd1}) the unique solution $z_1,z_2$ satisfies $z_1=z_2$ (by symmetry). Indeed, in the symmetric case it holds: if $(z_1,z_2)$ is a solution, then so is $(z_2,z_1)$, and if the solution is unique, then it must be that $z_1=z_2$. If (\ref{posd1}) fails, then the function might be non-convex, and for small $\alpha_k$ and $\beta_k$ values it indeed is (provided $w^*\ne 0$); but evaluated at the minima, the Hessian might still be positive definite, and so Theorem \ref{thmg} might apply.
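Condition (\ref{posd1}) is also easy to probe numerically. A minimal sketch with hypothetical parameters chosen purely for illustration ($w(1,1)=1$, $w(1,2)=w(2,1)=0.5$, $w(2,2)=0.8$, $\alpha=\beta=(1,1)$; none of these values appear in the text), which checks (\ref{posd1}) and then evaluates the determinant of the Hessian of $g$ in (\ref{g1}) on a grid:

```python
# Hypothetical illustration: K = L = 2 with
# w(1,1) = 1, w(1,2) = w(2,1) = 0.5, w(2,2) = 0.8 and alpha = beta = (1, 1).
w11, w12, w21, w22 = 1.0, 0.5, 0.5, 0.8
a1, a2, b1, b2 = 1.0, 1.0, 1.0, 1.0
wstar = w11 - w12 - w21 + w22

# condition (posd1)
lhs = (a1 ** (1/3) + a2 ** (1/3)) ** 3 * (b1 ** (1/3) + b2 ** (1/3)) ** 3
rhs = (wstar / min(w11, w12, w21, w22)) ** 2
assert lhs > rhs        # 64 > 2.56, so g in (g1) should be strictly convex

def hessian_det(z1, z2):
    """Determinant of the Hessian of g in (g1) at an interior point (z1, z2)."""
    s = w11*z1*z2 + w12*z1*(1-z2) + w21*(1-z1)*z2 + w22*(1-z1)*(1-z2)
    t11 = w11*z2 + w12*(1-z2); t12 = w21*z2 + w22*(1-z2)
    t21 = w11*z1 + w21*(1-z1); t22 = w12*z1 + w22*(1-z1)
    h11 = a1/z1**2 + a2/(1-z1)**2 + (t11 - t12)**2 / s**2
    h22 = b1/z2**2 + b2/(1-z2)**2 + (t21 - t22)**2 / s**2
    h12 = ((t11 - t12)*(t21 - t22) - wstar*s) / s**2
    return h11*h22 - h12**2

dets = [hessian_det(i/100, j/100) for i in range(1, 100) for j in range(1, 100)]
print(min(dets) > 0)    # True: positive definite everywhere on the grid
```

Since the diagonal entries are always positive, a positive determinant on the whole grid is (numerical) evidence of the strict convexity guaranteed by (\ref{posd1}).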
Similarly, for (\ref{eq2}), \begin{equation}\label{g2} g(z)=-\ln f(z)= \langle \phi, z \rangle -\alpha_1 \ln z_1- \alpha_2 \ln (1-z_1) - \beta_1 \ln z_2- \beta_2 \ln (1-z_2),\end{equation} and so the Hessian is \begin{align*} \left( \begin{array}{cc} {\partial^2 g\over \partial z_1^2} & {\partial^2 g\over \partial z_1 \partial z_2} \\ {\partial^2 g\over \partial z_2 \partial z_1} & {\partial^2 g\over \partial z_2^2} \\ \end{array} \right)=\left( \begin{array}{cc} {\alpha_1\over z_1^2}+{\alpha_2\over (1- z_1)^2} & \phi^* \\ \phi^* & {\beta_1\over z_2^2}+{\beta_2\over (1-z_2)^2} \\ \end{array} \right), \end{align*} where $$\phi^*:=\phi(1,1)-\phi(1,2)-\phi(2,1)+\phi(2,2).$$ Since the elements on the main diagonal are strictly positive, the matrix is positive definite if and only if the determinant is positive: \begin{equation}\label{pdf} \big({\alpha_1\over z_1^2}+{\alpha_2\over (1- z_1)^2}\big)\big({\beta_1\over z_2^2}+{\beta_2\over (1-z_2)^2}\big)>(\phi^*)^2.\end{equation} Thus, when the following inequality holds \begin{equation}\label{posdef2} (\alpha_1^{1\over 3}+\alpha_2^{1\over 3})^3(\beta_1^{1\over 3}+\beta_2^{1\over 3})^3>(\phi^*)^2, \end{equation} then the Hessian is always positive definite and the function (\ref{g2}) is strictly convex, implying that the minimum is unique. If the Hessian is not positive definite everywhere, but (\ref{pdf}) holds at the minima, then Theorem \ref{thmg} applies. Again, when $\alpha=\beta$ and $\phi(1,2)=\phi(2,1)$, then under (\ref{posdef2}) the unique minimum is such that $z_1=z_2$. For example, when $\alpha=\beta=(2,2)$ and $\phi(1,1)=1, \phi(1,2)=\phi(2,1)=2,\phi(2,2)=3$, then $\phi^*=0$ and, therefore, (\ref{posdef2}) holds. This means that the minimum is unique with $z_1=z_2$, and one can verify that $$z_1=z_2={\sqrt{17}-3\over 2}\approx 0.561.$$ So the unique limit distribution in this case is $q\times q$, where $q=(z,1-z)$.
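The value $z_1=z_2=(\sqrt{17}-3)/2$ is easy to confirm numerically. A minimal sketch that grid-minimizes $g$ from (\ref{g2}) with the parameters of the example above ($\alpha=\beta=(2,2)$, $\phi(1,1)=1$, $\phi(1,2)=\phi(2,1)=2$, $\phi(2,2)=3$):

```python
import math

# phi and alpha = beta = (2, 2), exactly as in the example above
p11, p12, p21, p22 = 1.0, 2.0, 2.0, 3.0
a1, a2, b1, b2 = 2.0, 2.0, 2.0, 2.0

def g(z1, z2):
    """The function g in (g2) for K = L = 2."""
    phi_z = p11*z1*z2 + p12*z1*(1-z2) + p21*(1-z1)*z2 + p22*(1-z1)*(1-z2)
    return (phi_z - a1*math.log(z1) - a2*math.log(1-z1)
                  - b1*math.log(z2) - b2*math.log(1-z2))

steps = 500   # grid over the open unit square, spacing 1/500
best = min(((i/steps, j/steps) for i in range(1, steps) for j in range(1, steps)),
           key=lambda z: g(*z))

print(best)   # both coordinates within one grid step of (sqrt(17)-3)/2 = 0.5615...
```

The grid minimizer agrees with the closed-form solution up to the grid resolution.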
But when $\alpha=(2,3)$, $\beta=(3,2)$ and $\phi$ is as previously, then the minimum is again unique, but since $\alpha\ne \beta$, we now have $z_1\ne z_2$: $z_1=\sqrt{6}-2\approx 0.45,\quad z_2=\sqrt{7}-2\approx 0.645.$ This means that the unique limit distribution is $q^1\times q^2$, where $q^1=(\sqrt{6}-2,3-\sqrt{6})$, $q^2=(\sqrt{7}-2,3-\sqrt{7})$. When $\alpha=\beta=(2,2)$, $\phi(1,1)=4, \phi(1,2)=\phi(2,1)=2,\phi(2,2)=4$, then $\phi^*=4$, but (\ref{posdef2}) still holds and therefore there is a unique minimum: $z_1=0.5,z_2=0.5$. However, when $\phi$ is as previously, but $\alpha=\beta=(0.25,0.25)$, then (\ref{posdef2}) fails. It turns out that now the function is not convex and there are two minima: $$(z_1={2-\sqrt{2}\over 4},z_2={2+\sqrt{2}\over 4}),\quad (z_1={2+\sqrt{2}\over 4},z_2={2-\sqrt{2}\over 4}).$$ Observe that $z_2=1-z_1$. Thus the limit measures are $q^1\times q^2$ and $q^2\times q^1$, where $q^1=(z_1,z_2)$ and $q^2=(z_2,z_1)$. These two product measures are different. Finally, observe that in both cases (\ref{pdf}) holds, so that by Theorem \ref{thmg}, $$\nu_n\Rightarrow {1\over 2}\delta_{q^1\times q^2}+ {1\over 2}\delta_{q^2\times q^1}.$$ The function $$f_n(q)=\langle e^{-{\phi\over n^{\lambda}}},q\rangle^{n^{\lambda}} \prod_{l=1}^2\prod_{k=1}^2 (q^l(k))^{0.25-{1\over n^{1-\lambda}}}$$ is symmetric, i.e. $f_n(z_1,z_2)=f_n(z_2,z_1)$, and then $\bar{\pi}_n \Rightarrow {1\over 2}\delta_{q^1\times q^2}+ {1\over 2}\delta_{q^2\times q^1}$. Since now $r_q=q$, by Lemma \ref{lemma1}, (\ref{fin-dim}) holds, with $$P^*(x_1,\ldots,x_m)= {1\over 2}\prod_{i=1}^mq^1\times q^2(x_i)+{1\over 2}\prod_{i=1}^mq^2\times q^1(x_i),\quad x_i\in {\cal X}^L.$$ \section{Experiments} \label{sec:experiments} Let $K=2$, $\phi(1)=0$, $\phi(2)=\ln 6$, $\alpha_1=0.3$, $\alpha_2=0.7$. Let us find the limit measures $q^*$ as in Theorem \ref{thm2} in the following cases: $\lambda=0$, $\lambda\in (0,1)$ and $\lambda=1$.
\paragraph{\bf Case $\lambda=0$:} Then, as can easily be checked by verifying (\ref{solla1}), $q^*=({3/5},2/5)$. Since $\theta=\langle q^*,w \rangle=2/3$, the measure $r^*$ is as follows: $r^*(1)=w(1)q^*(1)/\theta=9/10$ and $r^*(2)=w(2)q^*(2)/\theta=1/10$. Therefore the limit process governed by $P^*$ is an i.i.d. process with measure $r^*$, and so, due to the weight function, the proportion of the first genotype has increased from 0.3 (according to the prior) to 0.9. Figure \ref{fig:graphlambda0} illustrates the convergence. \begin{figure}[h] \begin{center} \includegraphics[width=10cm,height=6cm]{graph1.pdf} \caption{\small Density histogram of the fraction of the first type in the population, for a Dirichlet prior $\alpha=(0.3, 0.7)$, fitness $\phi=(0,\ln 6)$, and $\lambda=0$, and three population sizes $10^2$, $10^3$, and $10^4$. The histograms were constructed by recording the fraction of the first type in the population over $10^8$ MCMC samples according to the process described in section~\ref{sec:inversefitnessselection}.} \label{fig:graphlambda0} \end{center} \end{figure} \paragraph{\bf Case $\lambda\in (0,1)$:} The solution of (\ref{solla2}) is \begin{equation}\label{lumi} q^*(1)={\ln (6)-1+\sqrt{(\ln (6)-1)^2+1.2 \ln(6)}\over 2\ln (6)}\approx 0.686. \end{equation} Now $r^*=q^*$, thus we see that the limit process governed by $P^*$ is an i.i.d. process with measure $q^*$, and so, due to the weight function, the proportion of the first genotype has increased from 0.3 (according to the prior) to 0.686. The increase is smaller than in the previous case. Figures \ref{fig:graphlambda25}, \ref{fig:graphlambda50} and \ref{fig:graphlambda75} illustrate the convergence for $\lambda=0.25,0.5,0.75$, respectively. We see that although the limit is the same, the speed of convergence depends very much on $\lambda$.
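Both limit measures above can be recovered by brute-force maximization of the corresponding one-dimensional objectives over $q=(q_1,1-q_1)$; a minimal sketch:

```python
import math

alpha = (0.3, 0.7)
phi = (0.0, math.log(6))
w = tuple(math.exp(-p) for p in phi)               # weights w = e^{-phi} = (1, 1/6)

# case lambda = 0: maximise ln <w,q> + sum_k alpha_k ln q(k)
def f0(q1):
    return (math.log(w[0]*q1 + w[1]*(1 - q1))
            + alpha[0]*math.log(q1) + alpha[1]*math.log(1 - q1))

q1 = max((k / 100000 for k in range(1, 100000)), key=f0)
assert abs(q1 - 3/5) < 1e-3                        # q* = (3/5, 2/5)
theta = w[0]*q1 + w[1]*(1 - q1)
assert abs(theta - 2/3) < 1e-3                     # theta = <q*, w> = 2/3
assert abs(w[0]*q1/theta - 9/10) < 1e-2            # r*(1) = 9/10

# case lambda in (0,1): maximise -<phi,q> + sum_k alpha_k ln q(k)
def f1(q1):
    return (-(phi[0]*q1 + phi[1]*(1 - q1))
            + alpha[0]*math.log(q1) + alpha[1]*math.log(1 - q1))

q1 = max((k / 100000 for k in range(1, 100000)), key=f1)
L6 = math.log(6)
closed_form = (L6 - 1 + math.sqrt((L6 - 1)**2 + 1.2*L6)) / (2*L6)  # Eq. (lumi)
assert abs(q1 - closed_form) < 1e-3 and abs(closed_form - 0.686) < 1e-3
```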
\begin{figure}[hp] \begin{center} \includegraphics[width=9cm,height=4 cm]{graph6.pdf} \caption{\small Case $\lambda=0.25$ : density histogram of the fraction of the first type in the population, for a Dirichlet prior $\alpha=(0.3, 0.7)$, fitness $\phi=(0,\ln 6)$, and $\lambda=0.25$, and population sizes $10^3$ and $10^4$. The histograms were constructed by recording the fraction of the fitter allele in the population over $10^8$ MCMC samples according to the process described in section\ \ref{sec:inversefitnessselection} }. \label{fig:graphlambda25} \end{center} \begin{center} \includegraphics[width=9cm,height=4 cm]{graph5.pdf} \caption{\small Case $\lambda=0.5$ : density histogram of the fraction of the first type in the population, for a Dirichlet prior $\alpha=(0.3, 0.7)$, fitness $\phi=(0,\ln 6)$, and population sizes $10^3$ and $10^4$. The histograms were constructed by recording the fraction of the fitter allele in the population over $10^8$ MCMC samples according to the process described in section\ \ref{sec:inversefitnessselection} }. \label{fig:graphlambda50} \end{center} \begin{center} \includegraphics[width=9cm,height=4 cm]{graph4.pdf} \caption{\small Case $\lambda=0.75$ : density histogram of the fraction of the first type in the population, for a Dirichlet prior $\alpha=(0.3, 0.7)$, fitness $\phi=(0,\ln 6)$, and population sizes $10^3$ and $10^4$. The histograms were constructed by recording the fraction of the fitter allele in the population over $10^{10}$ MCMC samples according to the process described in section\ \ref{sec:inversefitnessselection} }. 
\label{fig:graphlambda75} \end{center} \end{figure} Let $q^*_m=(q_m(1),q_m(2))$ be the maximizer of $$\ln \big\langle \exp[-{\phi\over m}], q \big\rangle+\sum_k {\alpha_k\over m} \ln q(k).$$ In our example $q_1(1)=3/5$ (the case $\lambda=0$) and $$q_m(1)={b_m+\sqrt{b_m^2+{12\over 10}(1+m)\big(1-(1/6)^{1\over m}\big)(1/6)^{1\over m}}\over 2(1+m) \big(1-(1/6)^{1\over m}\big)},\text{ where }b_m=(m+0.3)-(1/6)^{1\over m}(1.3+m).$$ According to Proposition \ref{propa}, as $m\to \infty$, $q_m(1)$ tends to $q^*(1)$ from (\ref{lumi}), and one can easily verify that this is indeed the case. \paragraph{\bf Case $\lambda=1$:} According to {\bf 2)} of Theorem \ref{thm1}, the limit measure $\bar{\pi}$ has a density with respect to the Lebesgue measure on $[0,1]$: $$\bar{\pi}(q)={1\over Z}\exp[-\ln 6\cdot q] (1-q)^{0.3-1}q^{0.7-1},$$ where $Z$ is the value of the moment generating function of a Beta(0.7,0.3)-distributed random variable evaluated at $-\ln 6$. In the density above, $q$ stands for $q(2)$. Therefore the limit proportion of the second genotype in the stochastic process governed by $P^*$ is a random variable whose distribution has the density $\bar{\pi}(q)$ stated above. Figure \ref{fig:graphlambda1} illustrates the convergence. \begin{figure}[h] \begin{center} \includegraphics[width=10cm,height=5 cm]{graph7.pdf} \caption{\small Case $\lambda=1$ : Density histogram (log scale) of the fraction of the first type in the population, for a Dirichlet prior $\alpha=(0.3, 0.7)$, fitness $\phi=(0,\ln 6)$, and population sizes $10^2$ and $10^3$. Note that because the concentration parameter is kept constant, the limit distribution is a reweighted Beta distribution, with infinities at 0 and 1: when $\lambda=1$, the prior is the same for all $N$, and the log-fitness term is reduced by a factor of $N$, so that $P_n$ never concentrates.
The histograms were constructed by recording the fraction of the fitter allele in the population over $10^9$ and $10^{10}$ MCMC samples respectively, according to the process described in section~\ref{sec:inversefitnessselection}.} \label{fig:graphlambda1} \end{center} \end{figure} \section{Conclusions} \label{sec:conclusions} We have constructed MCMC algorithms that are similar to existing genetic algorithms. The `breeding' consists of sampling from exchangeable distributions based on the Dirichlet distribution, and the `selection' is essentially Metropolis-Hastings. The sequence of populations forms a reversible Markov chain that satisfies detailed balance conditions. We have exhibited two possible sampling distributions; more elaborate exchangeable sampling distributions are possible. The entire MCMC procedure is a population generalisation of Metropolis-Hastings. As far as we are aware, this is the first plausible and general model of sexual reproduction that exactly satisfies detailed balance, and for which the stationary distribution can be written in closed form for arbitrary fitness functions. We also explored some properties of the stationary distribution, and showed that for any fitness function there are three non-trivial limiting distributions for large population sizes, with two phase transitions. This is a first step towards a more general understanding of the interaction of the population size, fitness scaling, and mutation rate in genetic algorithms and evolutionary models. Formulating a genetic model as an MCMC procedure opens a new research direction: using the many techniques developed in MCMC to achieve faster convergence to the stationary distribution using different MCMC kernels. We have shown that the stationary distribution is unaffected by multiplicative noise in fitness evaluations. This has been suggested by, for example, \cite{morse2016simple}, but our techniques allow a proof of this effect.
Finally, there is a more general conclusion from our analysis. For many years, since \cite{holland1975adaptation} and \cite{goldberg1989genetic}, a widely suggested folk-motivation for genetic algorithms has been that, because they are inspired by natural biological evolution, and because evolution has produced the variety of life on earth, genetic algorithms should be in some sense generally effective. Our analysis makes it clear that genetic algorithms are more closely related to conventional MCMC methods for non-parametric Bayesian inference than has previously been recognised. \section*{Appendix} \paragraph{Proof of Claim \ref{claim}} \begin{proof} First note that $$\sup_{q\in {\cal P}}|\langle w_n,q\rangle-\langle w,q \rangle|\leq \|w_n-w\|\|q\|\leq \|w_n-w\| \to 0.$$ Now use the fact that if $f_n, f, g_n,g$ are nonnegative functions such that $\sup_x |f_n(x)-f(x)|\to 0$, $\sup_x |g_n(x)-g(x)|\to 0$, $\inf_x g(x)=g_*>0$ and $f$, $g$ are bounded above, then with $g^*=\sup_x g(x)$ and $f^*=\sup_x f(x)$ \begin{align*} \sup_x \big|{f_n(x)\over g_n(x)}-{f(x)\over g(x)}\big|&=\sup_x \big|{f_n(x)g(x)-g_n(x)f(x)\over g_n(x)g(x)}\big|\\ &\leq \sup_x \big|{(f_n(x)-f(x))g(x)\over g_n(x)g(x)}\big|+\sup_x \big|{(g_n(x)-g(x))f(x)\over g_n(x)g(x)}\big|\\ &\leq \sup_x \big|{(f_n(x)-f(x))g^*\over g_n(x)g(x)}\big|+\sup_x \big|{(g_n(x)-g(x))f^*\over g_n(x)g(x)}\big|\to 0, \end{align*} because for $n$ big enough $g_n(x)g(x)\geq {g^2_*\over 2}$ for every $x$. Take $x=q$, $f_n(q)=w_n(k)q(k)$, $f(q)=w(k)q(k)$, $g_n(q)=\langle w_n,q\rangle$ and $g(q)=\langle w,q\rangle$. Then $g_*=w(K)>0$, $f^*=g^*=w(1)$ and so (\ref{unif}) follows.\end{proof} \subsection{Proof of Proposition \ref{point2}} Recall that $f_n$ and $f$ are continuous and bounded functions on ${\cal P}$, so that $\|f_n\|_{\infty}<\infty$ and $\|f\|_{\infty}<\infty$. By assumption, $\pi$ is a finite measure.
Since $f_n$ converges to $f$ uniformly, it follows that $\|f_n-f\|_{\infty}\to 0$ and so $\| f_n\|_{\infty}\to \| f\|_{\infty}.$ For every $m$, $$|\|f_n\|_m-\|f\|_m |\leq \|f_n-f\|_m\leq \pi({\cal P})^{1\over m}\|f_n-f\|_{\infty}\to 0.$$ Since $\|f\|_{m_n}\to \|f\|_{\infty}$, we have \begin{align*} \big|\|f_n\|_{m_n}-\|f\|_{\infty}\big|&\leq \big|\|f_n\|_{m_n}-\|f\|_{m_n}\big|+\big|\|f\|_{m_n}-\|f\|_{\infty}\big|\\ &\leq \|f_n-f\|_{m_n}+\big|\|f\|_{m_n}-\|f\|_{\infty}\big|\leq \pi({\cal P})^{1\over m_n}\|f_n-f\|_{\infty}+\big|\|f\|_{m_n}-\|f\|_{\infty}\big|\to 0. \end{align*} Now fix $\delta>0$. Recalling that ${\cal S}^*_{\delta}:=\{q: f(q) > \|f\|_{\infty}-\delta \}$, we have ${\cal S}-{\cal S}^*_{\delta}=\{q: f(q)\leq \|f\|_{\infty}-\delta\}. $ Define $\delta':=\delta /\|f\|_{\infty}$. Then \begin{align*} \sup_{q\in {\cal S}-{\cal S}^*_{\delta}} {f_n(q)\over \|f_n\|_{m_n}}&=\sup_{q\in {\cal S}-{\cal S}^*_{\delta}}{f(q)+(f_n(q)-f(q))\over \|f_n\|_{m_n}}= \sup_{q\in {\cal S}-{\cal S}^*_{\delta}}{f(q)+(f_n(q)-f(q))\over \|f\|_{\infty}}{\|f\|_{\infty}\over \|f_n\|_{m_n}}\\ &\leq \sup_{q\in {\cal S}-{\cal S}^*_{\delta}}{f(q)\over \|f\|_{\infty}}{\|f\|_{\infty}\over \|f_n\|_{m_n}}+{\|f_n-f\|_{\infty}\over \|f_n\|_{m_n}}\leq 1-{\delta'\over 2},\end{align*} provided $n$ is big enough. Thus, $$ \sup_{q\in {\cal S}-{\cal S}^*_{\delta}}h_n(q)\leq \Big(1-{\delta'\over 2}\Big)^{m_n}\to 0,$$ so that $\nu_n({\cal S}^{*}_{\delta})\to 1$. We now argue that for any $\epsilon>0$ there exists $\delta>0$ such that \begin{equation}\label{balls} {\cal S}^*_{\delta} \subset B(q^*,\epsilon), \end{equation} where $B(q^*,\epsilon)$ is a ball in the Euclidean sense. If, for some $\epsilon>0$, such a $\delta>0$ did not exist, then there would exist a sequence $(q_n)$ such that $f(q_n)\nearrow \|f\|_{\infty}$, but $\|q_n-q^*\|\geq \epsilon.$ Since ${\cal P}$ is compact, along a subsequence $q_{n'}\to q$ and by continuity $f(q)=\|f\|_{\infty}$.
On the other hand $\|q-q^*\|\geq\epsilon$, which contradicts the uniqueness of $q^*$. Therefore (\ref{balls}) holds and so, for any $\epsilon>0$, it holds that $\nu_n \big (B(q^*,\epsilon)\big )\to 1$. From the definition of weak convergence, it now follows that $\nu_n\Rightarrow \delta_{q^*}.$ \subsection{Proof of Lemma \ref{sol}} \begin{description} \item[1)] To find \begin{equation}\label{probleem1} q^*=\arg\max_{q\in {\cal P}} \,\,[ \ln \langle e^{-\phi} ,q \rangle + \sum_{k}\alpha_k\ln q(k)],\end{equation} we define the Lagrangian $$L(q,\beta)=\ln \langle e^{-\phi} ,q \rangle + \sum_{k}\alpha_k\ln q(k)-\beta (\sum_k q(k)-1)$$ (here $\beta$ is a scalar) and maximize $L(q,\beta)$ over $q> 0$ (all entries are positive). Taking partial derivatives with respect to $q(k)$, we have $${e^{-\phi(k)}\over \langle e^{-\phi} ,q \rangle}+{\alpha_k\over q(k)}=\beta,\quad \Rightarrow \quad {e^{-\phi(k)}q(k)\over \langle e^{-\phi} ,q \rangle}+{\alpha_k}=q(k) \beta \quad \forall k.$$ With $|\alpha|=\sum_k \alpha_k,$ we thus have $\beta=1+|\alpha|$, and so the solution $q^*$ satisfies the set of equalities \begin{equation}\label{q3} q^*(k)={1\over 1+|\alpha|}\Big({e^{-\phi(k)}q^*(k)\over \langle e^{-\phi} ,q^* \rangle}+\alpha_k\Big),\quad \forall k. \end{equation} Now with $w(k)=e^{-\phi(k)}$ define the parameter $\theta:=\langle w ,q^* \rangle$ and rewrite (\ref{q3}) as follows: \begin{equation}\label{theta} q^*(k)={\alpha_k \over (1+|\alpha|)-{w(k)\over \theta}} \quad k=1,\ldots,K.\end{equation} We see that amongst the probability vectors satisfying $\langle w ,q^* \rangle=\theta,$ the solution is unique.
Since $\alpha_k>0$ for every $k$, it is easy to see that there is only one parameter $\theta$ such that the right hand side of (\ref{theta}) is a probability measure: if $\theta'>\theta$, then for every $k$, we have $${\alpha_k \over (1+|\alpha|)-{w(k)\over \theta}}>{\alpha_k \over (1+|\alpha|)-{w(k)\over \theta'}}.$$ Therefore the solution of (\ref{probleem1}) is the unique vector $q^*$ given by (\ref{theta}), where $\theta=\langle w ,q^* \rangle$. \item[2)] To find \begin{equation}\label{probleem2} q^*=\arg\max_{q\in {\cal P}} [-\langle \phi,q\rangle + \sum_{k}\alpha_k \ln q(k)],\end{equation} we define the Lagrangian $$L(q,\beta)= -\langle \phi,q\rangle + \sum_{k}\alpha_k\ln q(k)-\beta (\sum_k q(k)-1).$$ Partial derivatives with respect to $q(k)$ give us the equalities $$-\phi(k)+{\alpha_k\over q(k)}=\beta \quad \forall k\quad \Rightarrow \quad -\langle \phi,q\rangle + |\alpha| =\beta.$$ Therefore, the equalities for $q^*(k)$ are \begin{equation}\label{q4} q^*(k)={\phi(k)q^*(k) - \alpha_k \over \langle \phi,q^*\rangle -|\alpha|}={\phi(k)q^*(k) - \alpha_k \over \theta -|\alpha|},\quad \theta:=\langle \phi,q^*\rangle. \end{equation} After rewriting (\ref{q4}), we obtain \begin{equation*}\label{solla2a} q^*(k)={ \alpha_k \over \phi(k)+|\alpha|-\theta},\quad k=1,\ldots,K. \end{equation*} Thus, there cannot be two solutions having the same $\theta$. As in case {\bf 1)}, it is easy to see that when $\alpha_k>0$ there is only one $\theta$ such that (\ref{solla2a}) sums to one. Therefore, the solution to the problem (\ref{probleem2}) is unique. Note that the solution is independent of $\lambda$. \end{description} \subsection*{Acknowledgments} The research is supported by Estonian Institutional research funding IUT34-5. \bibliographystyle{plain}
\section{Introduction}\label{sec:intro} Superpositions of two coherent states of an optical mode have for several decades attracted the attention of the scientific community as optical realizations of the famous ``Schr\"odinger cat'' state of a quantum system \cite{Dodonov74,Buzek92,Brune96,Ourjoumtsev06,Nielsen06,Sychev17,Yurke86,Kirchmair13,Vlastakis13}. There are two classes of these states with distinct properties and generation methods: (i) ``even'' and ``odd'' coherent states \cite{Dodonov74,Buzek92,Brune96,Ourjoumtsev06,Nielsen06,Sychev17}, and (ii) the Yurke-Stoler coherent state \cite{Yurke86,Kirchmair13,Vlastakis13}. The states of the first class contain either an even or an odd number of photons and are generated for a high-Q microwave cavity field by interaction with non-resonant Rydberg atoms \cite{Brune96}, or for a traveling-wave optical field by photon subtraction from a squeezed state \cite{Ourjoumtsev06,Nielsen06,Sychev17}. The states of the second class have a Poissonian distribution of photons and have recently been generated for a microwave cavity field coupled to a superconducting qubit via the nonlinear Kerr effect \cite{Kirchmair13,Vlastakis13}. The states of both classes have been extensively studied as models of decoherence \cite{Buzek92,Brune96,Horoshko98}, sources of quantum instabilities \cite{Kilin96,Horoshko00} and resources for quantum computation \cite{Jeong02,Ralph03,Mirrahimi14,Albert16}. Splitting these superpositions into two modes, one obtains entangled coherent states \cite{Sanders92,Hirota01,vanEnk01,Jeong01,Joo11,Reut17,Wang16}. The states of the first class create in this way quasi-Bell states \cite{Hirota01}, having the same structure as the usual Bell states of two qubits, but with two non-orthogonal coherent states as the basis for each mode.
These states can be applied to quantum metrology \cite{Joo11}, quantum tomography \cite{Reut17} and probabilistic quantum teleportation \cite{vanEnk01,Jeong01}, which is a key element of coherent-state quantum computation. While traveling-wave superpositions can be rather easily split by a beam-splitter, cavity fields require a more sophisticated technique, realized only recently for two coupled microwave cavities \cite{Wang16}. Increasing the number $N$ of coherent components in a single-mode superposition, one obtains ``multiple-component Schr\"odinger cats'' \cite{HarocheRaimond06}. Most attention has been paid to coherent states placed equidistantly on a circle \cite{HarocheRaimond06,Janszky93,Domokos94}, though other geometries are also possible \cite{Kilin95}. Multicomponent extensions of the second class \cite{Tanas91} are known as ``Kerr states'' and can be generated by the same Kerr effect as their two-component variants, as has been demonstrated recently in the microwave domain \cite{Kirchmair13,Vlastakis13}. Production of multicomponent states of the first class has been proposed for the cavity field \cite{Domokos94}, but not yet reported. These states are highly important for studying decoherence \cite{Zurek01}, and for application in quantum computation with qudits, quantum systems with more than two levels \cite{Kim15,Li17}. Splitting multiple-component Schr\"odinger cat states into two modes, one naturally arrives at entangled coherent states of high dimension \cite{vanEnk03,Kilin11}. The states of the first class create in this way generalizations of the quasi-Bell states, whose structure has been recently analyzed in detail \cite{Horoshko16}. These states have been proposed for enhancing sensitivity in quantum metrology \cite{Lee15}. The states of the second class, entangled Kerr states, can be used for quantum teleportation of high-dimensional systems \cite{vanEnk03}.
In this work we further develop the approach of Ref.~\cite{Horoshko16} and introduce explicitly entangled states of two optical modes, which we call ``generalized quasi-Bell states'' (GQBS). They are ``quasi-Bell'' because their basis vectors are not exactly orthogonal, the term having been coined in Ref.~\cite{Hirota01} for the $N=2$ case. At higher $N$ they generalise the quasi-Bell states to higher dimensions like generalised Bell states \cite{Cerf00,Horoshko07} do for the Bell states. GQBS create a natural basis for quantum information processing with coherent states of light. In particular, we show that the protocol of probabilistic quantum teleportation works for these states much better than for the entangled Kerr states, for which it was originally suggested \cite{vanEnk03}. \section{Generalized quasi-Bell states} We consider one mode of the electromagnetic field, for which we fix a set of $N$ coherent states $\{|\alpha_m\rangle, m=0,1,...,N-1\}$, placed equidistantly on the circle of radius $|\alpha_0|$, i.e., $\alpha_m=\alpha_0 e^{-i2\pi m/N}$, where $\alpha_0$ is an arbitrary non-zero complex number (see Fig.\ref{Fig1}). \begin{figure}[h] \begin{center} \includegraphics[width=0.7\columnwidth]{Fig1_States.eps} \caption{\label{Fig1} Coherent states on the circle of radius $|\alpha_0|$. 
Each coherent state is represented by a circle of radius $\frac12$, corresponding to the $\sigma$-area of its Wigner function.} \end{center} \end{figure} We are interested in coherent superpositions of these states having equal weights and a linear periodic relative phase: \begin{equation}\label{RICS} |\mathrm{c}_q\rangle = \frac1{N\sqrt{\tilde g(q)}}\sum_{m=0}^{N-1}e^{i2\pi mq/N}|\alpha_0 e^{-i2\pi m/N}\rangle, \end{equation} where $\tilde g(q)$ is the discrete Fourier transform, \begin{equation}\label{tildeg} \tilde g(k)=\frac1{N}\sum_{m=0}^{N-1}g(m)e^{-i2\pi km/N}=e^{-|\alpha_0|^2}\sum_{l=0}^\infty \frac{|\alpha_0|^{2(k+lN)}}{(k+lN)!}, \end{equation} of the first column of the Gram matrix $G_{mn}=\langle \alpha_m|\alpha_n\rangle = g(m-n)$, defined as \begin{equation}\label{g} g(m)=\exp\left\{|\alpha_0|^2\left(e^{i2\pi m/N}-1\right)\right\}. \end{equation} Each state of the set $\{|\alpha_m\rangle, m=0,1,...,N-1\}$ is produced from the previous one in this set by a rotation in phase space $|\alpha_m\rangle=U_N|\alpha_{m-1}\rangle$, described by the unitary operator \begin{equation}\label{U} U_N = e^{-i2\pi a^\dagger a/N}, \end{equation} where $a$ is the photon annihilation operator of the optical mode. The eigenvalues of $U_N$ are given by $e^{-i2\pi q/N}$. There are only $N$ different eigenvalues, for which we let $0\le q \le N-1$. The corresponding eigenstates are given by Eq.(\ref{RICS}): \begin{equation}\label{eigen} U_N|\mathrm{c}_q\rangle = e^{-i2\pi q/N}|\mathrm{c}_q\rangle. \end{equation} The last property allows us to call them rotationally-invariant circular states (RICS). They satisfy the orthonormality condition $\langle \mathrm{c}_q|\mathrm{c}_r\rangle = \delta_{qr}$.
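The two expressions for $\tilde g(k)$ in Eq.~(\ref{tildeg}) — the discrete Fourier transform of $g(m)$ and the Poisson-tail series — can be cross-checked numerically; $\tilde g(k)$ is the probability that a Poisson($|\alpha_0|^2$) photon number equals $k$ modulo $N$, so the $N$ values sum to one. A minimal sketch (the values $N=8$, $|\alpha_0|=1$ are arbitrary test choices):

```python
import cmath, math

N, a0 = 8, 1.0             # test parameters: N components, |alpha_0| = 1 (arbitrary)

def g(m):
    # Gram-matrix generator, Eq. (g)
    return cmath.exp(a0**2 * (cmath.exp(2j * math.pi * m / N) - 1))

def gt_dft(k):
    # discrete Fourier transform (1/N) sum_m g(m) e^{-i 2 pi k m / N}
    return sum(g(m) * cmath.exp(-2j * math.pi * k * m / N) for m in range(N)) / N

def gt_series(k, terms=10):
    # e^{-|a0|^2} sum_l |a0|^{2(k+lN)} / (k+lN)!  -- Poisson mass on residue k mod N
    return math.exp(-a0**2) * sum(a0**(2 * (k + l * N)) / math.factorial(k + l * N)
                                  for l in range(terms))

for k in range(N):
    assert abs(gt_dft(k) - gt_series(k)) < 1e-12
# the N values carry the total Poisson mass, consistent with <c_q|c_q> = 1
assert abs(sum(gt_series(k) for k in range(N)) - 1) < 1e-12
```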
Using the decomposition of a coherent state in the Fock basis, we rewrite the RICS, Eq.(\ref{RICS}), in the form \cite{Janszky93} \begin{equation}\label{cq} |\mathrm{c}_q\rangle = \frac{e^{-|\alpha_0|^2/2}}{\sqrt{\tilde g(q)}}\sum_{l=0}^{\infty}\frac{\alpha_0^{q+lN}}{\sqrt{(q+lN)!}}|q+lN\rangle, \end{equation} where $|q+lN\rangle$ is a Fock state with $q+lN$ photons. Eq.(\ref{cq}) shows that a RICS is a sum of Fock states with a number $m$ of photons such that $(m \mod N)=q$. In the case of $N=2$ this property reduces to a fixed parity of the photon number, peculiar to ``even'' and ``odd'' coherent states. In general, RICS are highly nonclassical: their distance from the set of classical states \cite{Bievre19} is lower-bounded by $\sqrt{1+2\langle n\rangle}-1$, where $\langle n\rangle = \langle\mathrm{c}_q|a^\dagger a|\mathrm{c}_q\rangle$ is the mean photon number. Thus, with a growing number of photons these states become harder to create and control. However, as we show, even for low values of the mean photon number they can be used for some protocols of quantum information processing. The states of Eq.~(\ref{cq}) have recently been considered in the case where all coherent states are almost orthogonal \cite{Kim15}, which implies sufficiently high $|\alpha_0|$ and sufficiently low $N$. In this case they are referred to as ``pseudo-number'' states, while the coherent states $|\alpha_m\rangle$ are the corresponding ``pseudo-phase'' states. They create two complementary bases for an $N$-level system (qudit), encoded into a subspace of the optical mode's Hilbert space. Note that the second basis is non-orthogonal for low values of $\alpha_0$. Now we consider the RICS $|\mathrm{c}_q\rangle$ with parameters $\{N,\sqrt{2}\alpha_0\}$ at one input of a 50:50 beam-splitter with the vacuum at the other one.
The state of the two output modes $A$ and $B$ of the beam-splitter is \begin{eqnarray}\label{out0} |\Phi_{q0}\rangle_{AB} &=& \frac{1}{N\sqrt{\tilde g_1(q)}}\sum_{m=0}^{N-1}e^{i2\pi qm/N}|\alpha_m\rangle_A|\alpha_m\rangle_B \\\nonumber &=& \sum_{k=0}^{N-1}\sqrt{\lambda_k(q)}|\mathrm{c}_k\rangle_A|\mathrm{c}_{q-k}\rangle_B, \end{eqnarray} where the second part represents a Schmidt decomposition with the Schmidt coefficients \cite{Horoshko16} \begin{equation}\label{lambda} \lambda_k(q) = \frac{\tilde g(k)\tilde g(q-k)}{\tilde g_1(q)}, \end{equation} and \begin{equation}\label{normpsi} \tilde g_1(q) = \sum_{k=0}^{N-1}\tilde g(k)\tilde g(q-k) = \frac1N\sum_{m=0}^{N-1}g^2(m)e^{-i2\pi qm/N}, \end{equation} from which it follows that the Schmidt coefficients sum to unity. Here and below all indices and integer arguments are taken modulo $N$. Now we apply the power $p$ of the rotation operator $U_N^p$ with $0\le p\le N-1$ to the mode $B$ of $|\Phi_{q0}\rangle_{AB}$ and obtain the state \begin{eqnarray}\label{GQBS} |\Phi_{qp}\rangle_{AB} &=& \frac{1}{N\sqrt{\tilde g_1(q)}}\sum_{m=0}^{N-1}e^{i2\pi qm/N}|\alpha_m\rangle_A|\alpha_{m+p}\rangle_B \\\nonumber &=& \sum_{k=0}^{N-1}\sqrt{\lambda_k(q)} e^{-i2\pi(q-k)p/N}|\mathrm{c}_k\rangle_A|\mathrm{c}_{q-k}\rangle_B, \end{eqnarray} which is, on the one hand, an extension of the quasi-Bell states \cite{Hirota01} to higher dimensions, and on the other hand, an extension of the generalized Bell states \cite{Cerf00,Horoshko07} to a non-orthogonal basis, and which we call GQBS. The two forms of this state presented in Eq.~(\ref{GQBS}) correspond to writing it in the non-orthogonal coherent basis with equal weights, or in the orthogonal RICS basis with non-equal weights.
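The Schmidt coefficients (\ref{lambda}) are easy to evaluate numerically: $\tilde g(k)$ is the mass that a Poisson($|\alpha_0|^2$) photon-number distribution puts on the residue $k$ modulo $N$, and the entanglement is the Shannon entropy of the $\lambda_k(q)$. A minimal sketch reproducing, for $N=8$, the behaviour discussed below (entropy maximal at $q=7$ for $|\alpha_0|=1$, and at $q=3$ with $E\approx 2.97$ for $|\alpha_0|=2$):

```python
import math

def poisson_mod(lam, N, nmax=120):
    # gt(k): mass of a Poisson(lam) photon number on residue k mod N, cf. Eq. (tildeg)
    p = [0.0] * N
    term = math.exp(-lam)                  # P(X = 0)
    for n in range(nmax):
        p[n % N] += term
        term *= lam / (n + 1)              # P(X = n+1) = P(X = n) * lam / (n+1)
    return p

def schmidt(q, gt):
    # lambda_k(q) = gt(k) gt(q-k) / gt1(q); dividing by the sum normalises to unity
    N = len(gt)
    lam = [gt[k] * gt[(q - k) % N] for k in range(N)]
    s = sum(lam)                           # = gt1(q)
    return [x / s for x in lam]

def entropy(p):
    # Shannon entropy (base 2) of the Schmidt coefficients
    return -sum(x * math.log2(x) for x in p if x > 0)

N = 8
for a0, q_max in ((1.0, 7), (2.0, 3)):
    E = [entropy(schmidt(q, poisson_mod(a0**2, N))) for q in range(N)]
    assert max(range(N), key=E.__getitem__) == q_max
    if a0 == 1.0:
        assert min(range(N), key=E.__getitem__) == 0   # near two-mode vacuum at q = 0

E3 = entropy(schmidt(3, poisson_mod(4.0, N)))
assert abs(E3 - 2.97) < 0.01               # almost maximal: log2(8) = 3
```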
Averaging out one mode one obtains the reduced density operator of the other one: \begin{equation}\label{rhoA} \rho_A(q) = \sum_{k=0}^{N-1}\lambda_k(q)|\mathrm{c}_k\rangle\langle\mathrm{c}_{k}|, \end{equation} which tends to the completely undetermined qudit state $\mathbb{I}/N$ in the limit of high $\alpha_0$. In this limit GQBS is maximally entangled, and its entanglement, defined as $E=-\Tr\{\rho_A\log_2\rho_A\}$, is $E=\log_2 N$. For lower values of $\alpha_0$ the reduced state, Eq.~(\ref{rhoA}), is not the completely undetermined one and the GQBS is non-maximally entangled. Entanglement of the state $|\Phi_{qp}\rangle_{AB}$ is given in this case by the Shannon entropy of the Schmidt coefficients $\lambda_k(q)$ \begin{equation}\label{E} E(q) = -\sum_{k=0}^{N-1}\lambda_k(q)\log_2\lambda_k(q), \end{equation} and is a function of $q$ (but not $p$). Schmidt coefficients $\lambda_k(q)$ are shown in Fig.~\ref{Fig2} for the case where each coherent state has on average just 1 photon. \begin{figure}[h] \begin{center} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q0.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q1.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q2.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q3.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q4.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q5.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q6.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a1N8_q7.eps} \caption{\label{Fig2} Schmidt coefficients $\lambda_k(q)$ of a GQBS with $N=8$ and $|\alpha_0|=1$. The entanglement $E$ characterizes the spread of the Schmidt coefficients. Maximal entanglement corresponds to the maximal spread of Schmidt coefficients at $q=7$. 
For $q=0$ the state is close to the two-mode vacuum with very low entanglement.} \end{center} \end{figure} The spread of coefficients in Fig.~\ref{Fig2} increases with $q$, reaching its maximum at $q=N-1$. Since the entanglement is given by the Shannon entropy of the Schmidt coefficients, it is minimal at $q=0$ and maximal at $q=N-1$. The same coefficients are shown in Fig.~\ref{Fig3} for the case where each coherent state has on average 4 photons. The dependence of entanglement on $q$ is more complicated in this case. The maximum is reached for $q=3$, and the corresponding state $|\Phi_{3p}\rangle$ for any $p$ is very close to a maximally entangled state, which is reflected in its entanglement of 2.97, very close to $\log_2N=3$. \begin{figure}[h!] \begin{center} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q0.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q1.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q2.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q3.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q4.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q5.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q6.eps} \includegraphics[width=0.49\columnwidth]{Fig_lambda_a2N8_q7.eps} \caption{\label{Fig3} Schmidt coefficients $\lambda_k(q)$ of a GQBS with $N=8$ and $|\alpha_0|=2$. In this case the maximal spread corresponds to $q=3$, the corresponding state being almost maximally entangled.} \end{center} \end{figure} \section{Teleportation of circular states of light} Quantum teleportation of optical states is a key quantum technique using an entangled state of two optical fields as a resource \cite{Furusawa98,Horoshko00b}.
Entangled two-mode states, GQBS, can be used for teleporting a superposition of states $|\alpha_m\rangle$ with arbitrary coefficients, known as ``circular state'' \cite{Horoshko16} and representing an optical qudit, by the protocol proposed by van Enk \cite{vanEnk03}. This protocol was initially devised for an entangled Kerr state as a resource, but, as we will see, it works even better for a GQBS. Since a GQBS is in general non-maximally-entangled, the teleportation has a non-unit probability of success. Below we reproduce the description of the teleportation protocol from Ref.~\cite{vanEnk03} with a replacement of the nonlocal state. \textit{ We suppose two parties, Alice and Bob, share a GQBS $|\Phi_{qp}\rangle_{AB}$, given by Eq.~(\ref{GQBS}), with even number of components $N=2L$. Alice possesses in mode $C$ a superposition of coherent states $|\alpha_l\rangle$ with arbitrary coefficients $Q_l$: \begin{equation}\label{psi} |\psi\rangle_{C} = \sum_{l=0}^{N-1}Q_l|\alpha_l\rangle_C, \end{equation} that she wishes to teleport to Bob. Alice first uses beam splitters to make $L=N/2$ ``diluted'' copies of both the state to be teleported (ending up in modes $C_k$ for $k=0,...,L-1$) and of her half of the entangled state (ending up in modes $A_k$ for $k=0,...,L-1$) by the process \begin{equation}\label{dilution} |\alpha_m\rangle|0\rangle^{\otimes L-1} \to |\alpha_m/\sqrt{L}\rangle^{\otimes L}. \end{equation} Then she applies the phase shift operator $U_N^k$ to the mode $A_k$ and, in order to perform her Bell measurement, subsequently combines the modes $C_k$ and $A_k$ on $L$ 50:50 beam splitters. If we call the output modes $G_k$ and $H_k$ for $k=0,...,L-1$, the resulting state is \begin{eqnarray}\label{tele} &&\frac{1}{N\sqrt{\tilde g_1(q)}}\sum_{m=0}^{N-1}\sum_{l=0}^{N-1}e^{i2\pi qm/N}Q_l |\alpha_{m+p}\rangle_B\\\nonumber &&\bigotimes_{k=0}^{L-1}|(\alpha_l-\alpha_{m+k})/\sqrt{2L}\rangle_{G_k} \,|(\alpha_l+\alpha_{m+k})/\sqrt{2L}\rangle_{H_k}\,. 
\end{eqnarray} Alice now performs photon-number measurements on all $2L=N$ output modes. She cannot find a nonzero number of photons in every mode. But suppose she finds nonzero numbers of photons in all but one mode, say, mode $H_k$. Then the only terms that survive the sums over $m$ and $l$ in Eq.~(\ref{tele}) are those for which $\alpha_l+\alpha_{m+k} = 0$, that is $e^{-i2\pi l/N}=-e^{-i2\pi(m+k)/N}$ or $m+k-l={L}\mod{N}$. The state at Bob's side reduces to \begin{equation}\label{teleBob} |\psi'\rangle_{B} = \sum_{l=0}^{N-1}e^{i2\pi q(L+l-k)/N}Q_l|\alpha_{L+l-k+p}\rangle_B. \end{equation} Alice communicates to Bob which mode contained no photons, and Bob then applies the appropriate unitary transformation. Here, with $H_k$ being the empty mode, he applies $U_N^{k-p-L}$ to his state to obtain \begin{equation}\label{tele3} |\psi''\rangle_{B} = e^{i2\pi q(L-k)/N}\sum_{l=0}^{N-1}e^{i2\pi ql/N}Q_l|\alpha_{l}\rangle_B. \end{equation} } Since the states $|\alpha_{l}\rangle$ are non-orthogonal, the phase factor under the sum cannot in the general case be removed by a unitary transformation. This means that for exact teleportation we need to choose $q=0$ for the entangled state used as a resource, which makes Bob's state identical to Alice's, $|\psi''\rangle=|\psi\rangle$. We also see that the value of $p$ does not affect the process of teleportation and is compensated at the final stage by the rotation operator, so that we can put $p=0$ from the beginning, and consider the state $|\Phi_{00}\rangle_{AB}$ as the optimal resource for quantum teleportation. The analysis of the previous section (see also Ref.~\cite{Horoshko16}) shows that the case of $q=0$ does not always correspond to maximal entanglement for given $N$ and $\alpha_0$, but employment of other GQBS for quantum teleportation is not possible if one requires an exact reproduction of the initial state.
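The selection rule enforced by Alice's measurement, $\alpha_l+\alpha_{m+k}=0$ precisely when $m+k-l=L \bmod N$, can be checked directly for amplitudes placed on the circle, $\alpha_m=\alpha_0 e^{-i2\pi m/N}$. A brute-force numerical sketch (the value of $\alpha_0$ is arbitrary):

```python
import numpy as np

N = 8          # number of components (even), L = N/2
L = N // 2
alpha0 = 2.0
# circular amplitudes alpha_m = alpha_0 * exp(-2 pi i m / N)
alpha = alpha0 * np.exp(-2j * np.pi * np.arange(N) / N)

# alpha_l + alpha_{m+k} = 0 holds exactly when m + k - l = L (mod N)
for l in range(N):
    for m in range(N):
        for k in range(L):
            vanishes = np.isclose(alpha[l] + alpha[(m + k) % N], 0.0)
            assert vanishes == ((m + k - l) % N == L)
print("selection rule verified for N =", N)
```

Diametrically opposite points on the circle are the only pairs of amplitudes summing to zero, which is exactly the condition $m+k-l=L \bmod N$.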
The occurrence of a zero-photon measurement outcome in more than one mode leads to protocol failure; thus the described protocol is probabilistic. The possibility of realizing an exact, although probabilistic, teleportation of an arbitrary circular state on the basis of GQBS is a serious advantage compared to the original teleportation scheme \cite{vanEnk03}, where the final state contains phase factors quadratic in $l$, which cannot be corrected by any unitary transformation. \section{Probability of success} The teleportation described in the previous section is probabilistic: it succeeds only if a zero number of photons is found in exactly one of the measured modes. If zero photons are found in two or more modes, the protocol fails. The probability of success in general depends on the input state. Let us find this probability for the case where the state to be teleported is one of the states $|\alpha_m\rangle$. Thanks to the rotational symmetry, we can choose this state as $|\psi\rangle_C=|\alpha_0\rangle_C$ without loss of generality. The shared entangled state which we consider is $|\Phi_{00}\rangle_{AB}$, the one providing an exact teleportation. For such a choice the multimode state, Eq.~(\ref{tele}), takes the form \begin{equation}\label{tele2} |\Psi\rangle = \frac{1}{N\sqrt{\tilde g_1(0)}}\sum_{m=0}^{N-1} |\alpha_{m}\rangle_B \bigotimes_{k=0}^{L-1}\left|\frac{\alpha_0-\alpha_{m+k}}{\sqrt{N}}\right\rangle_{G_k} \,\left|\frac{\alpha_0+\alpha_{m+k}}{\sqrt{N}}\right\rangle_{H_k}\,.
\end{equation} The probability of obtaining zero photons in the mode $H_k$ and a non-zero number of photons in the other $N-1$ measured modes is given by the average $\langle\Psi|\Gamma_{H_k}|\Psi\rangle$ of the projector \begin{equation}\label{Gamma} \Gamma_{H_k} = \bar\Pi_{G_0}\bar\Pi_{H_0}...\bar\Pi_{G_k}\Pi_{H_k}...\bar\Pi_{G_{L-1}}\bar\Pi_{H_{L-1}}, \end{equation} where $\Pi=|0\rangle\langle0|$ is the projector on the vacuum, and $\bar\Pi=\mathbb{I}-|0\rangle\langle0|$ is the projector on the non-vacuum state of the mode, whose label is indicated by the lower index. If we denote the summand in the right hand side of Eq.~(\ref{tele2}) by $|\Psi_m\rangle$, then it is easy to see that only the state $|\Psi_{L-k}\rangle$ makes a non-zero contribution to $\langle\Psi|\Gamma_{H_k}|\Psi\rangle$. Indeed, every other state has vacuum in a mode different from $H_k$ and gives zero when averaged with the projector $\bar\Pi$ of this mode. Thanks to the symmetry of all $N$ modes the total success probability is $N$ times the probability of the successful outcome in one mode. Finally we find \begin{eqnarray}\label{P} P_\mathrm{success} &=& N\langle\Psi_{L-k}|\Gamma_{H_k}|\Psi_{L-k}\rangle\\\nonumber &=& \frac{1}{N\tilde g_1(0)}\prod_{l=1}^{N-1}\left(1-e^{-|\alpha_0-\alpha_l|^2/N}\right). \end{eqnarray} This probability is shown in Fig.~\ref{fig:P} for different values of $N$. We see that a practical probability of success of 0.2 can be reached for the case of 4 components at $|\alpha_0|\approx1.15$, available in the optical domain \cite{Nielsen06,Sychev17}, and for the case of 8 components at $|\alpha_0|\approx3.1$, available in the microwave domain \cite{Vlastakis13}. \begin{figure}[h!] \begin{center} \includegraphics[width=\columnwidth]{Fig-Psuccess.eps} \caption{\label{fig:P} Probability of success for teleportation of a coherent state as a function of the coherent amplitude.
For a sufficiently large amplitude $|\alpha_0|$ the coherent states $|\alpha_m\rangle$ become almost orthogonal and the teleportation becomes almost deterministic. However, as the number of components $N$ grows, the critical amplitude at which the probability approaches 1 grows approximately as $|\alpha_0|\approx N$.} \end{center} \end{figure} In a similar way we can find the success probability of the original protocol of Ref.~\cite{vanEnk03}. The expression for this probability coincides with Eq.~(\ref{P}) if we omit the normalisation factor before the product. However, since at $|\alpha_0|>1$ we have $N\tilde g_1(0)\approx1$, for all practical values of the amplitude this probability is very close to that of the protocol based on GQBS. \section{Pseudo-phase state} In the regime where all coherent states in the superposition can be considered mutually orthogonal, the set of these states is a discrete Fourier transform of the set of RICS and vice versa. That is why the RICS can be considered pseudo-number states and the coherent states pseudo-phase states \cite{Kim15}. Measuring the number of photons in a RICS $|\mathrm{c}_q\rangle$, one obtains $q+lN$, as shown by Eq.~(\ref{cq}), which justifies its name of pseudo-number state. A coherent state has a rather well-defined phase, which justifies its name of pseudo-phase state. Both state sets are orthonormal in this regime. At lower $|\alpha_0|$ the set of RICS remains orthonormal, but the set of coherent states $|\alpha_k\rangle$ does not.
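The crossover between the two regimes can be made quantitative with the coherent-state overlap $|\langle\alpha|\beta\rangle| = e^{-|\alpha-\beta|^2/2}$; for states on the circle, the largest overlap is between neighbouring points. A small sketch (the amplitudes are chosen for illustration only):

```python
import numpy as np

def overlap_mag(a, b):
    """|<alpha|beta>| = exp(-|alpha - beta|^2 / 2) for coherent states."""
    return np.exp(-abs(a - b) ** 2 / 2)

def max_neighbour_overlap(alpha0, N):
    """Largest overlap between distinct states alpha_0 exp(-2 pi i m / N),
    attained by neighbouring points on the circle."""
    alpha = alpha0 * np.exp(-2j * np.pi * np.arange(N) / N)
    return overlap_mag(alpha[0], alpha[1])

# Low amplitude: strongly non-orthogonal; high amplitude: nearly orthogonal.
for alpha0 in (0.5, 2.0, 6.0):
    print(alpha0, max_neighbour_overlap(alpha0, N=8))
```

At $|\alpha_0|=0.5$ neighbouring states on an $N=8$ circle overlap by more than 90\%, while at $|\alpha_0|=6$ the overlap is negligible, consistent with the orthogonality regime described above.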
Following the approach of Pegg and Barnett \cite{Pegg89}, we can introduce a set of pseudo-phase states as the discrete Fourier transform of the set of the pseudo-number states: \begin{equation}\label{pseudo-phase} |\varphi_k\rangle = \frac1{\sqrt{N}}\sum_{q=0}^{N-1}e^{-i2\pi qk/N} |\mathrm{c}_q\rangle =\sum_{m=0}^{N-1}\tilde\delta_{mk}|\alpha_0 e^{-i2\pi m/N}\rangle, \end{equation} where \begin{equation}\label{delta} \tilde\delta_{mk} = \frac1{N^{3/2}} \sum_{q=0}^{N-1}\frac1{\sqrt{\tilde g(q)}}e^{i2\pi (m-k)q/N}, \end{equation} is an almost diagonal matrix, close to the Kronecker delta $\delta_{mk}$, with which it coincides at high $|\alpha_0|$, where $\tilde g(q)\to1/N$. The states $|\varphi_k\rangle$ are mutually orthogonal, $\langle\varphi_k|\varphi_l\rangle=\delta_{kl}$, and form a basis mutually unbiased with the RICS: $|\langle\varphi_k|\mathrm{c}_q\rangle|=1/\sqrt{N}$. Using these states as a basis, we can construct the true generalised Bell states \cite{Cerf00,Horoshko07} for two optical modes \begin{eqnarray}\label{GBS} |\tilde\Phi_{qp}\rangle_{AB} &=& \frac1{\sqrt{N}}\sum_{m=0}^{N-1}e^{i2\pi qm/N} |\varphi_m\rangle_A|\varphi_{m+p}\rangle_B \\\nonumber &=& \frac1{\sqrt{N}}\sum_{k=0}^{N-1} e^{-i2\pi(q-k)p/N}|\mathrm{c}_k\rangle_A|\mathrm{c}_{q-k}\rangle_B, \end{eqnarray} which can be used for standard deterministic teleportation of qudits. However, it is not clear how to generate the state of Eq.~(\ref{GBS}), while a GQBS can be produced by beam-splitting a multicomponent cat state, as shown by Eq.~(\ref{out0}). \section{Conclusion} We have introduced a class of entangled states of two optical modes, GQBS, which are direct generalisations of Bell states of two qubits to the case of $N$-level systems encoded into superpositions of coherent states on the circle. We have shown that these states can be written in orthogonal bases where they have non-uniform coefficients.
We have shown that an exact probabilistic teleportation of circular states of an optical mode is possible on the basis of these states, and that the GQBS $|\Phi_{00}\rangle_{AB}$ is the optimal resource for this purpose. The probability of teleportation success is reasonably high for currently available sizes of cat states in the optical and microwave domains, which allows us to hope for a possible experimental realisation of the proposed teleportation protocol. We have also discussed the true generalised Bell states of two optical modes and shown that they are related to nonclassical pseudo-phase states. The obtained results can be useful for various schemes of quantum metrology and quantum information processing on the basis of information encoding in superpositions of coherent states of light. \section*{Acknowledgments} The authors are grateful to Stephan de Bi\`evre for many fruitful discussions. This work was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 665148 (QCUMbER).
\section{Introduction} Recent years have seen remarkable progress in quantum information processing, with rapid advancement towards high-fidelity multi-qubit systems \cite{Ballance:2016aa,Barends:2014aa,Zajac439}, some of which are now even publicly available \cite{IBMQ,Rigetti}. This has enabled significant achievements in many aspects of quantum computation, such as first demonstrations of the building blocks of error correction and fault-tolerance, e.g.~\cite{Nigg302,Kelly:2015aa,Ofek:2016aa,Takita:2017aa,Linkee1701074,Rosenblum266,Hu:2019aa}. Concurrently, demonstrations of noisy intermediate-scale quantum algorithms \cite{Preskill:2018aa} that do not require full fault-tolerance, e.g.~\cite{Peruzzo:2014aa,OMalley:2016aa,Riste:2017aa,Kandala:2017aa,Colless:2018aa}, make real-world applications of quantum information processing a near-term possibility. In light of these achievements, the need for robust, accurate, and efficient validation and verification of quantum processors becomes ever more pressing. This is the natural domain of \emph{quantum state tomography} (QST) and \emph{quantum process tomography} (QPT). Respectively, QST and QPT seek to characterize the state of a quantum processor or the dynamical map of its evolution \cite{Nielsen00}. Unfortunately, na\"ive implementations of both QST and QPT require measurement of a number of observables that scales exponentially in the number of qubits. Practically, this scaling has limited full QST and QPT to small system sizes, e.g.~\cite{Walther:2005aa,Bialczak:2010aa}, though this can be improved using approximate characterizations \cite{Shabani:2011aa,Lanyon:2017aa}, or in situations with large amounts of symmetry \cite{Medeiros-de-Araujo:2014aa,Yokoyama:2013aa}. Further complicating QPT, the most error-prone operations are often state preparation and measurement (SPAM), which can overwhelm the intrinsic error in high-fidelity quantum processes and hinder their characterization.
Several SPAM-insensitive metrics exist, such as the widely-successful randomized benchmarking \cite{Emerson:2005aa,Knill:2008aa,Magesan:2011aa,Magesan:2012ab} and its variants \cite{Magesan:2012aa,Gambetta:2012aa,Carignan-Dugas:2015aa,Chasseur:2017aa,Wood:2018aa,Helsen:aa,Proctor:aa}, as well as gate-set tomography (GST) \cite{Merkel:2013aa,Greenbaum:aa,Blume-Kohout:2017aa}. Randomized benchmarking has the additional benefit of overcoming the exponential scaling of standard QPT, but at the cost of returning only a single number characterizing the quantum process. In this work, we present an approach to efficient QPT that reduces the exponential scaling to quadratic scaling, while still returning a full process matrix describing the quantum process. We propose the \emph{Pairwise Perturbative Ansatz} (PAPA), which describes the unknown quantum process as sequential two-qubit processes on all qubit pairs. We show how to fit the free parameters of our ansatz to data obtained from QPT of two-qubit subsets of the full system. When this data is provided by SPAM-insensitive tomography, such as GST, our approach becomes SPAM-insensitive as well as efficient. The paper is organized as follows. In section \ref{sec:Background} we provide background information on QPT and compare PAPA to existing QPT protocols. In sections \ref{sec:LA} and \ref{sec:Char} we describe PAPA in detail, and outline how to obtain the necessary tomographic data to obtain a PAPA characterization. In section \ref{sec:Sim} we benchmark the PAPA approach using numerical simulation, and finally in section \ref{sec:Conc} we present our conclusions. \section{Background} \label{sec:Background} A generic $N$-qubit quantum process, which we label as $\mathcal{E}$, has $16^N - 4^N$ free parameters, and the goal of QPT is to determine these free parameters. 
This makes na\"ive QPT an exponentially hard problem, as an exponential number of measurement settings (unique observables) are required to determine the free parameters. Even for small to modest $N$ this scaling is practically unfavorable, and QPT is very challenging experimentally. Process tomography can be rephrased as state tomography of the Choi dual-state (via the Choi-Jamiołkowski isomorphism), which is the state formed when the unknown process acts on one half of a maximally entangled state in a Hilbert space of dimension $2^{2N}$, given by \begin{align} \rho_{\mathcal{E}} &= \frac{1}{2^N}\sum_{\mu\nu}\ketbra{\psi_\mu}{\psi_\nu} \otimes \mathcal{E}\left(\ketbra{\psi_\mu}{\psi_\nu}\right), \end{align} where $\{\ket{\psi_\mu}\}$ is an orthonormal basis for $N$-qubit Hilbert space. Thus, one can use efficient state tomography methods for process tomography, such as compressed sensing \cite{Gross:2010aa,Gross:2011aa,Shabani:2011aa} and matrix-product-state (MPS) parameterizations \cite{Baumgratz:2013aa,Cramer:2010aa,Holzapfel:2015aa,Lanyon:2017aa}. Unfortunately, the matrix completion algorithms that underlie these approaches can themselves be inefficient in run-time. This issue can be circumvented using constrained approaches, as in Refs.~\cite{Cramer:2010aa,Lanyon:2017aa}, which restrict to pure state descriptions of the unknown quantum state. Both compressed sensing and MPS parameterizations implicitly assume an ansatz for the unknown quantum process: that it is either low rank (compressed sensing), or that it has a matrix product structure and thus no long-range correlations (MPS). Our pairwise perturbative ansatz assumes a different physical constraint on the unknown process: that it is intrinsically built from two-qubit processes on all pairs of qubits. Like the MPS approach, this implies that few-body QPT is sufficient to find a PAPA characterization of the unknown process.
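The Choi dual-state above is straightforward to build numerically for any channel given as a function on density matrices; a minimal sketch (the identity-channel example is ours, used only as a sanity check):

```python
import numpy as np

def choi_state(channel, dim):
    """Choi dual-state of a channel acting on dim x dim density matrices:
    rho_E = (1/dim) * sum_{mu,nu} |mu><nu| (x) E(|mu><nu|)."""
    rho = np.zeros((dim * dim, dim * dim), dtype=complex)
    for mu in range(dim):
        for nu in range(dim):
            E_munu = np.zeros((dim, dim), dtype=complex)
            E_munu[mu, nu] = 1.0          # basis operator |mu><nu|
            rho += np.kron(E_munu, channel(E_munu))
    return rho / dim

# The identity channel on one qubit gives the maximally entangled state
# |Phi+><Phi+| with |Phi+> = (|00> + |11>)/sqrt(2).
rho = choi_state(lambda m: m, dim=2)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.allclose(rho, np.outer(phi_plus, phi_plus.conj())))  # -> True
```

The unit trace of the result reflects trace preservation of the channel.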
Unlike an MPS, PAPA has no locality constraint on correlations, and allows for long-range correlations, though these come about only via local interactions between qubit pairs. Further, we will see in the next section that the PAPA constraint is physically motivated, unlike the low rank restriction of compressed sensing. \section{Ansatz for Process Tomography} \label{sec:LA} \begin{figure} \includegraphics[width=\columnwidth]{Schematic.pdf} \caption{{\bf a)} Pairwise Perturbative Ansatz (PAPA) tomography: for all qubit pairs, characterize the effective two-qubit process (Choi state $\sigma_{\mathcal{S}}$) when the unknown $N$-qubit process $\mathcal{E}$ occurs, and all other qubits start in the maximally mixed state. {\bf b)} three-qubit PAPA+GST: characterized two-qubit gate-sets are bootstrapped to a three-qubit gate-set via PAPA.} \label{fig:PAPA} \end{figure} We propose to restrict the unknown Choi state by assuming an ansatz for its form. This restricts the number of free parameters in the unknown process {\it a priori}, and therefore restricts the number of measurement settings required. We will assume an ansatz where the unknown $N$-qubit process is written as a composition of two-qubit processes, one for each qubit pair in the system. This is most easily expressed in terms of the super-operator matrix representation $\hat{\mathcal{E}}$ of the quantum process $\mathcal{E}$, as the series composition becomes a product of matrices. This has the general form \begin{align} \hat{\mathcal{E}} = \prod_{k=1,n=1}^{N-1,N-k}\hat{\mathcal{E}}_{k,n+k}, \label{eqn:2QPar} \end{align} where $\mathcal{E}_{k,n+k}$ is an arbitrary two-qubit process on qubits $k$ and $(k+n)$. The product runs over all pairs of qubits, of which there are $(N^2-N)/2$.
Each of the unknown two-qubit processes can be written as \begin{align} \nonumber\mathcal{E}_{k,n+k} = \sum^{16}_{i_{k,n},j_{k,n}}\chi_{i_{k,n}}^{j_{k,n}}\Big(&\mathcal{I}^{\otimes k-1}\otimes\mathcal{A}^{(k)}_{i_{k,n}}\otimes\mathcal{I}^{\otimes n-1}\\ &\otimes\mathcal{A}^{(k+n)}_{j_{k,n}}\otimes \mathcal{I}^{\otimes N-k-n}\Big), \label{eqn:2Qlocal} \end{align} where $\{\mathcal{A}^{(k)}_{i_{k,n}}\}$ is a complete basis for single-qubit processes and $\mathcal{I}$ is the identity process. $\chi_{i_{k,n}}^{j_{k,n}}$ is an element of the $\chi$-matrix describing the two-qubit process, and the summation variables $i_{k,n}$ and $j_{k,n}$ are subscripted to emphasize that they correspond to a particular qubit pair. There are many possible ans\"atze for an unknown quantum process \cite{Gross:2010aa,Gross:2011aa,Shabani:2011aa,Baumgratz:2013aa,Cramer:2010aa,Holzapfel:2015aa,Lanyon:2017aa}, but the form we have chosen is particularly well motivated physically. As it is the composition of two-qubit processes in sequence, it captures the natural two-body quantum operations that occur in a gate-based quantum computation. It can completely specify any \emph{ideal} gate operation (single-layer quantum circuit built from one and two-qubit gates), and will contain both single-qubit errors and correlated two-qubit errors as independent free parameters. It also describes processes that involve more than two qubits, but as combinations of two-qubit processes performed in sequence. Thus, it describes general processes in a perturbative fashion, built from one- and two-qubit processes. While each arbitrary two-qubit process described by Eq.~\eqref{eqn:2Qlocal} is parameterized in terms of a basis with $16^2$ elements, its $\chi$-matrix has only $16^2-4^2 = 240$ free parameters. There are ${{N}\choose{2}} = (N^2-N)/2$ two-qubit subsets, and so the total number of free parameters in our ansatz is $120(N^2-N)$.
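The quoted counts are easy to tabulate; a quick sketch comparing the generic process-matrix count $16^N-4^N$ against the PAPA count $120(N^2-N)$ (note the two coincide at $N=2$, where the ansatz is exact):

```python
def full_qpt_params(N):
    """Free parameters of a generic N-qubit process: 16^N - 4^N."""
    return 16 ** N - 4 ** N

def papa_params(N):
    """Free parameters of the pairwise ansatz: 240 per pair,
    with (N^2 - N)/2 pairs, i.e. 120 * (N^2 - N)."""
    return 120 * (N ** 2 - N)

# Exponential versus quadratic scaling:
for N in (2, 3, 5, 10):
    print(N, full_qpt_params(N), papa_params(N))
```

Already at $N=3$ the generic count (4032) exceeds the PAPA count (720), and the gap grows exponentially.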
As this scales quadratically with qubit number, PAPA is an efficient approach to QPT. QPT with PAPA consists of determining the $\chi$-matrix for each two-qubit process in the product in Eq.~\eqref{eqn:2QPar}. Inspired by the local tomography used in \cite{Cramer:2010aa,Lanyon:2017aa}, we will use the tomographic characterization of two-qubit processes on all pairs of qubits to determine these free parameters. In essence, from characterization of two-body processes, we bootstrap to a multi-qubit process of PAPA form. To compare the PAPA ansatz to two-qubit tomographic data, we must determine a notion of a two-qubit reduction of a process $\mathcal{E}$. This is most easily done in terms of the Choi state $\rho_\mathcal{E}$. For the two-qubit subset $\mathcal{S} = \{m, p\}$ this takes the form \begin{align} \rho_{\mathcal{S}}= \frac{1}{2^N}\sum_{\mu\nu}{\rm Tr}_{/\mathcal{S}}\left[\ketbra{\psi_\mu}{\psi_\nu}\right] \otimes {\rm Tr}_{/\mathcal{S}}\left[\mathcal{E}\left(\ketbra{\psi_\mu}{\psi_\nu}\right)\right], \label{eqn:RedMat} \end{align} where by ${\rm Tr}_{/\mathcal{S}}[\rho]$ we mean the partial trace of all qubits other than those in the set $\mathcal{S}$, and it is important to note that the partial trace is applied to both ``parts'' of the Choi state. Using the orthogonality of the $N$-qubit basis, we see that \begin{align} {\rm Tr}_{/\mathcal{S}}\left[\ketbra{\psi_\mu}{\psi_\nu}\right] = \delta_{\mu_{/\mathcal{S}}, \nu_{/\mathcal{S}}}\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}}, \end{align} where the indices $\mu_{\mathcal{S}}$ ($\mu_{/\mathcal{S}}$) are the subset of indices in $\mu$ that correspond to the qubits inside (outside) of the subset $\mathcal{S}$. 
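The partial trace ${\rm Tr}_{/\mathcal{S}}$ over the qubits outside $\mathcal{S}$ can be implemented by reshaping the density matrix into one tensor axis per ket and bra index and contracting the traced pairs; a sketch of one conventional implementation (the product-state example is ours):

```python
import numpy as np

def partial_trace(rho, keep, n_qubits):
    """Tr_{/S}: trace out every qubit of rho not listed in `keep`."""
    # axes after reshape: ket_0..ket_{n-1}, bra_0..bra_{n-1}
    t = rho.reshape([2] * (2 * n_qubits))
    m = n_qubits                       # current number of ket axes
    for q in sorted(set(range(n_qubits)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + m)   # contract ket_q with bra_q
        m -= 1
    d = 2 ** len(keep)
    return t.reshape(d, d)

# Tracing qubit 1 out of a two-qubit product state recovers qubit 0:
rhoA = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
rhoB = np.array([[0.3, 0.0], [0.0, 0.7]])
print(np.allclose(partial_trace(np.kron(rhoA, rhoB), keep=[0], n_qubits=2), rhoA))  # -> True
```

Tracing the qubits in descending order keeps the remaining axis indices stable, which is why the loop runs with `reverse=True`.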
Thus, the reduced Choi state of the unknown process can be written as \begin{align} \nonumber\rho_{\mathcal{S}}= \frac{1}{2^2}\sum_{\mu_{\mathcal{S}}\nu_{\mathcal{S}}}&\Big(\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}} \\ &\otimes {\rm Tr}_{/\mathcal{S}}\left[\mathcal{E}\left(\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}}\otimes\frac{\mathbb{I}_{N-2}}{2^{N-2}}\right)\right]\Big), \label{eqn:RedMat2} \end{align} where $\mathbb{I}_{N-2}$ is the identity matrix of dimension $2^{N-2}$. To determine the free parameters in the PAPA ansatz, for each pair of qubits we compare the two-qubit reduced Choi states described by Eq.~\eqref{eqn:RedMat2} to the corresponding experimentally characterized two-qubit Choi state. Operationally, this amounts to performing two-qubit QPT on the $(N^2-N)/2$ pairs of qubits. Each of the pairwise characterized two-qubit processes is described by $16^2-4^2 = 240$ complex numbers, which gives a total of $120(N^2-N)$ complex numbers describing the two-qubit process characterization of all pairs of qubits. Thus, we have exactly as many constraints (coming from experimental characterization) as there are free parameters in PAPA. This further motivates our choice of ansatz, as we have made use of all available data from two-qubit characterizations of the unknown multi-qubit process. In the following section we complete our description of PAPA tomography by describing what two-qubit processes must be characterized for each qubit pair in order to solve for the unknown parameters in our ansatz. \section{Characterizing The Two-Qubit Processes} \label{sec:Char} In the most general version of QPT, there is a completely unknown quantum process which one wishes to determine. Applying PAPA to this problem, the required two-qubit QPT is derived from the form of Eq.~\eqref{eqn:RedMat2}.
For a pair of qubits defined by the subset $\mathcal{S}$ we perform two-qubit QPT to characterize the effective process the qubits in $\mathcal{S}$ experience when the unknown process $\mathcal{E}$ is implemented on all $N$ qubits (with all other qubits initialized in the maximally mixed state), as depicted in Fig.~\ref{fig:PAPA}a). To see that Eq.~\eqref{eqn:RedMat2} describes a valid two-qubit process, we describe the unknown $N$-qubit process in a basis of $N$-qubit processes as \begin{align} \mathcal{E} = \sum_i \epsilon_i \bigotimes_k^N \Lambda_{i_k}, \end{align} where $\sum \epsilon_i = 1$. Substituting this expression into the partial trace in Eq.~\eqref{eqn:RedMat2}, we obtain (recall $\mathcal{S} = \{m,p\}$) \begin{align} &\nonumber {\rm Tr}_{/\mathcal{S}}\left[\mathcal{E}\left(\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}}\otimes\mathbb{I}_{N-2}\right)\right] \\ =&~\nonumber2^{N-2}\sum_i \epsilon_i \Lambda_{i_m}\otimes\Lambda_{i_p}\left(\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}}\right) {\rm Tr}\left[\bigotimes_k^N \Lambda_{i_k}\left(\frac{\mathbb{I}}{2}\right)\right]\\ =&~\nonumber2^{N-2}\sum_i \epsilon_i \Lambda_{i_m}\otimes\Lambda_{i_p}\left(\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}}\right)\\ \equiv&~2^{N-2}\Lambda_{\mathcal{S}}\left(\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}}\right), \end{align} where we have defined $\Lambda_{\mathcal{S}} = \sum_i\epsilon_i \Lambda_{i_m}\otimes\Lambda_{i_p}$. The reduced Choi state can then be written as \begin{align} \rho_{\mathcal{S}}= \frac{1}{2^2}\sum_{\mu_{\mathcal{S}}\nu_{\mathcal{S}}}\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}} \otimes\Lambda_{\mathcal{S}}\left(\ketbra{\psi_{\mu_{\mathcal{S}}}}{\psi_{\nu_{\mathcal{S}}}}\right), \label{eqn:RedMat3} \end{align} and it is clear that $\Lambda_{\mathcal{S}}$ must describe a valid quantum process. 
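A small supporting check: the spectator-qubit average that enters this derivation, the maximally mixed state as the uniform average over computational-basis preparations, can be verified in a few lines (the function name is ours):

```python
import numpy as np
from itertools import product

def average_logical_state(n_spectators):
    """Uniform average of all computational-basis preparations of the
    spectator qubits; equals the maximally mixed state I / 2^n."""
    d = 2 ** n_spectators
    avg = np.zeros((d, d))
    for bits in product([0, 1], repeat=n_spectators):
        idx = int("".join(map(str, bits)), 2)
        ket = np.zeros(d)
        ket[idx] = 1.0
        avg += np.outer(ket, ket)   # accumulate |bits><bits|
    return avg / d

print(np.allclose(average_logical_state(2), np.eye(4) / 4))  # -> True
```

This is the basis of the random-sampling alternative discussed next: sampling preparations uniformly and averaging the resulting data converges to the maximally mixed spectator state.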
In Eq.~\eqref{eqn:RedMat2} we see that the qubits outside the qubit pair of interest (the \emph{spectator} qubits) must be prepared in the maximally mixed state. If this is experimentally challenging, one can instead randomly sample spectator qubit preparations from the uniform distribution of the set of spectator qubit logical states. With sufficient sampling to generate accurate statistics, the normalized sum of the randomly sampled preparation states approaches the maximally mixed state for the spectator qubits. Thus, performing two-qubit QPT on the qubit pair of interest with spectator qubits prepared in a random logical state will characterize the desired effective process in Eq.~\eqref{eqn:RedMat2}. From two-qubit QPT we can obtain an experimentally characterized two-qubit Choi state, which we label $\sigma_{\mathcal{S}}$. We equate this to our reduced Choi state for the unknown process, $\rho_{\mathcal{S}}$, to determine the free parameters in the PAPA. In other words, we simultaneously solve the equations \begin{align} \rho_{\mathcal{S}} = \sigma_{\mathcal{S}}, \label{eqn:PAPAmain} \end{align} for every pair of qubits. Note that each $\rho_{\mathcal{S}}$ depends on the $\chi$-matrix elements for \emph{all} qubit pairs, i.e.~all $\chi_{i_{k,n}}^{j_{k,n}}$, not just the qubit pair of the subset $\mathcal{S}$. Thus, each two-qubit process characterization $\sigma_{\mathcal{S}}$ constrains the global process, not just the component of the ansatz on the qubits in ${\mathcal{S}}$. For this reason, we have labelled the reduced two-qubit processes as $\Lambda_{\mathcal{S}}$ to distinguish them from the two-qubit processes that construct the PAPA, $\mathcal{E}_{k,n+k}$ in Eq.~\eqref{eqn:2QPar}. \subsection{PAPA and Gate-Set-Tomography} The PAPA tomography approach described so far works well to obtain a bootstrapped description of an $N$-qubit process from characterization of the effective processes on all qubit pairs. 
However, often the problem at hand is not to characterize a completely unknown process, but to determine the actual process, $\mathcal{G}$, that occurs when we aim to implement a unitary gate, $\hat{G}$ (from here on we use calligraphic text for processes and Latin text for unitary gates). Extending this to an entire gate-set via Gate-Set Tomography (GST), we obtain a set of processes $\{\mathcal{G}_i\}$ corresponding to the experimental implementation of an ideal gate-set $\{\hat{G}_i\}$. GST has the further benefit of excluding state-preparation and measurement (SPAM) errors from the processes $\{\mathcal{G}_i\}$ \cite{Greenbaum:aa}. Note that for clarity we will use ``gate-set'' to refer to the processes $\{\mathcal{G}_i\}$, and ``ideal gate-set'' to refer to the unitary gates $\{\hat{G}_i\}$. Combining PAPA with GST, we can perform GST on all qubit pairs to obtain a characterized gate-set for each pair, and then use PAPA to bootstrap to descriptions of $N$-qubit processes. To see why this is useful, consider the three-qubit gate $\hat{X}\otimes\hat{Y}\otimes\hat{X}$. Given characterized gate-sets with the relevant two-qubit gates, one way to describe the three-qubit process would be \begin{align} \left(\hat{X}\otimes\hat{Y}\otimes\hat{X}\right)\rho\left(\hat{X}\otimes\hat{Y}\otimes\hat{X}\right) \rightarrow \mathcal{G}_{X_1Y_2}\left(\mathcal{G}_{I_2X_3}\left(\rho\right)\right) \end{align} where $\mathcal{G}_{AB}$ is the experimental process when we try to implement the gate $\hat{A}\otimes\hat{B}$. However, there is ambiguity in the correct decomposition of the three-qubit gate, and $\mathcal{G}_{X_1X_3}\left(\mathcal{G}_{Y_2I_3}\left(\rho\right)\right)$ would be an equally valid description of the process. An issue arises as it is unlikely that the constructed three-qubit processes from all possible two-qubit decompositions will agree with one another.
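In the error-free case the two decompositions do agree, which is easy to verify for the example gate; a small sketch (with noisy $\mathcal{G}$'s, the analogous products generically differ, which is exactly the ambiguity at issue):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

target = np.kron(np.kron(X, Y), X)   # ideal gate X (x) Y (x) X

# Two different two-qubit decompositions of the same ideal three-qubit gate:
decomp_a = np.kron(np.kron(X, Y), I) @ np.kron(np.kron(I, I), X)  # (X1 Y2) then (X3)
decomp_b = np.kron(np.kron(X, I), X) @ np.kron(np.kron(I, Y), I)  # (X1 X3) then (Y2)

print(np.allclose(decomp_a, target), np.allclose(decomp_b, target))  # -> True True
```

Once each factor is replaced by an imperfect experimental process, the two orderings no longer compose to the same map, motivating the global fit performed by PAPA.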
Using PAPA avoids this issue, as it finds the three-qubit process of PAPA form that best agrees with the pairwise characterized processes, i.e.~with $\mathcal{G}_{X_1Y_2}$, $\mathcal{G}_{Y_2X_3}$, and $\mathcal{G}_{X_1X_3}$. As such it captures context dependence between gate operations, such as when the effect on qubit 1 is different for the processes $\mathcal{G}_{X_1Y_2}$ and $\mathcal{G}_{X_1X_3}$. As an added benefit, one never has to implement the full $N$-qubit process, as one does when using PAPA without GST (as described in the previous section). Instead, from the characterized gate-sets on all qubit pairs, we can bootstrap to PAPA characterizations of the processes in an $N$-qubit gate-set, as represented in Fig.~\ref{fig:PAPA}b). \begin{figure}[!t] \begin{subfigure}{ \includegraphics[width=0.8\columnwidth]{3QProcess.pdf}}\end{subfigure} \begin{subfigure}{ \includegraphics[width=0.8\columnwidth]{2QProcesses.pdf}}\end{subfigure} \caption{{\bf a)} Simulated trace distance between the actual three-qubit process Choi state and either the PAPA-reconstructed Choi state (black +) or the ideal gate (red $\times$), see Eq.~\eqref{eqn:3QTD}. {\bf b)} The average trace distance between the reduced two-qubit Choi states, see Eq.~\eqref{eqn:2QTD}. Processes i), ii) are the all-identity gate of lengths 50 ns and 400 ns; iii), v), and vi) are $\hat{X}\otimes\hat{Y}\otimes\hat{X}$ of length 50 ns; iv) and vii) are ${\rm CNOT}_{12}\otimes\hat{\mathbb{I}}$ of length 400 ns. i), ii), iii), and iv) have single-qubit decoherence with $T_1 = T_2 = 50~\mu$s; v) and vii) have coherent error $\phi = 0.02$ and vi) has $\phi = 0.2$.} \label{fig:numsim} \end{figure} While PAPA can in principle return a characterization of any $N$-qubit gate, when we restrict the pairwise two-qubit QPT to GST, the PAPA+GST combination can only characterize a limited set of $N$-qubit gates. Which $N$-qubit gates can be characterized with PAPA+GST is detailed further in Appendix \ref{app:PAPAGST}.
The general requirement is that each two-qubit reduced process of the ideal $N$-qubit gate must be an incoherent mixture of two-qubit gates built from the ideal gate-set. For example, if the ideal gate is ${\rm CNOT}_{12}\otimes\hat{\mathbb{I}}$, then as shown in Appendix \ref{app:PAPAGST}, the ideal gates $\hat{Z}\otimes\hat{\mathbb{I}}$ and $\hat{\mathbb{I}}\otimes\hat{\mathbb{I}}$ need to be in the characterized gate-set for qubit pair 1-3. Decomposing an $N$-qubit gate this way implicitly assumes the errors that make the implemented process $\mathcal{G}$ distinct from the ideal gate $\hat{G}$ are not strongly specific to the implementation of $\mathcal{G}$. This is easily satisfied if the errors are gate-independent, but some kinds of gate-dependent error are tolerable, such as context dependence in simultaneous single-qubit gates. For the ${\rm CNOT}_{12}\otimes\hat{\mathbb{I}}$ gate considered previously, an example of a tolerable gate-dependent error would be a coherent error that occurs on qubit 1 both for an actual $\hat{Z}$-gate and for an effective $\hat{Z}$-gate (as occurs in the reduced process on qubit 1 for the ${\rm CNOT}_{12}$ gate). It is important to emphasize that neither of these issues is a limitation of PAPA, which can characterize any $N$-qubit process using pairwise two-qubit QPT, but of the two-qubit characterizations supplied to PAPA by GST. Nevertheless, there are many situations where PAPA+GST may be applicable, i.e.~the ideal-gate decomposition is possible and the errors can be assumed to be captured by PAPA+GST, as we explore in section \ref{sec:N12}. For situations where PAPA+GST is not possible, PAPA can inherit SPAM-insensitivity from other SPAM-insensitive process tomography such as that using randomized benchmarking \cite{Kimmel:2014aa,Johnson:2015aa,Roth:2018aa}.
\section{Simulation Tests of the Ansatz} \label{sec:Sim} \subsection{Noisy One- and Two-Qubit Gates} \label{sec:N12} To test our PAPA approach to multi-qubit QPT, we numerically simulate ``unknown'' three-qubit processes, and then reconstruct the PAPA characterization of these processes. We consider several example processes formed by one of the ideal three-qubit gates $\hat{\mathbb{I}}\otimes\hat{\mathbb{I}}\otimes\hat{\mathbb{I}}$, ${\rm CNOT}_{12}\otimes\hat{\mathbb{I}}$, or $\hat{X}\otimes\hat{Y}\otimes\hat{X}$, followed by an error process. For the error process we consider two cases of gate-independent error, either a coherent error described by single-qubit rotations on all three qubits \begin{align} &\hat{G}_{\rm Coh.~Error} = \hat{X}_\phi\otimes\hat{Y}_\phi\otimes\hat{X}_\phi \\ & \hat{X}_\phi = \cos(\phi)\hat{\mathbb{I}} + i\sin(\phi)\hat{X} \\ & \hat{Y}_\phi = \cos(\phi)\hat{\mathbb{I}} + i\sin(\phi)\hat{Y} \end{align} or single-qubit decay and pure dephasing implemented by their standard Kraus operator representations \cite{Nielsen00}. In standard PAPA reconstruction, pairwise two-qubit QPT is used to characterize the reduced two-qubit process, and obtain $\sigma_{\mathcal{S}}$ for each qubit pair. With PAPA+GST this is circumvented by using a GST characterized gate-set for each qubit pair to calculate $\sigma_{\mathcal{S}}$, provided the ideal reduced two-qubit process can be built from gates in the ideal gate-set. For the example three-qubit ideal gates chosen, the required two-qubit gates are contained in the ideal gate-set ${\rm CNOT}+\{\hat{\mathbb{I}},\hat{X},\hat{Y},\hat{Z}\}^{\otimes 2}$. We follow the PAPA+GST approach for our numerical tests, simulating the implementation of this gate-set on all qubit pairs, including the error process, and use results of these simulations as our GST reconstructed two-qubit gate-sets. We then use the characterized two-qubit gate-sets to calculate $\sigma_{\mathcal{S}}$ for each qubit pair. 
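As a minimal numerical sketch of the coherent-error model above (our own illustration, not the authors' simulation code), the rotations $\hat{X}_\phi$, $\hat{Y}_\phi$ and the resulting three-qubit error operator can be built directly from Pauli matrices:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(P, phi):
    # X_phi / Y_phi from the text: cos(phi) I + i sin(phi) P
    return np.cos(phi) * I2 + 1j * np.sin(phi) * P

phi = 0.02  # the smaller coherent-error angle used in the simulations
# G_CohError = X_phi (x) Y_phi (x) X_phi
G_err = np.kron(rot(X, phi), np.kron(rot(Y, phi), rot(X, phi)))
# a coherent error is unitary
assert np.allclose(G_err @ G_err.conj().T, np.eye(8))
```

The decoherence case would instead act through Kraus operators, giving a non-unitary channel.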
We outline our approach in explicit detail in Appendix \ref{app:PAPAGST}. We compare the PAPA+GST reconstruction for a noisy gate to the actual simulated noisy gate by calculating the trace distance between the Choi state of the PAPA-reconstructed three-qubit process, $\rho_\mathcal{E}$, and that for the actual process, $\rho^{\rm act}_\mathcal{E}$ \begin{align} {\rm Trace~Dist.} = \frac{1}{2}{\rm Tr}\left[\sqrt{\left(\rho_\mathcal{E}-\rho^{\rm act}_\mathcal{E}\right)^\dagger\left(\rho_\mathcal{E}-\rho^{\rm act}_\mathcal{E}\right)}\right]. \label{eqn:3QTD} \end{align} We also calculate the trace distance for each of the reconstructed two-qubit processes, comparing them to the actual two-qubit reduced processes \begin{align} {\rm Trace~Dist.} = \frac{1}{2}{\rm Tr}\left[\sqrt{\left(\rho_{{\mathcal S}}-\sigma_{\mathcal S}\right)^\dagger\left(\rho_{{\mathcal S}}-\sigma_{\mathcal S}\right)}\right]. \label{eqn:2QTD} \end{align} The results of this are shown in Fig.~\ref{fig:numsim} for the seven candidate processes listed in the caption. As the results show, the PAPA reconstructed process always improves upon the initial guess (ideal gate), both in terms of the trace distance for the full three-qubit process reconstruction, Fig.~\ref{fig:numsim}a), and the average of the trace distances for the two-qubit reconstructions, Fig.~\ref{fig:numsim}b). This improvement is typically around one order of magnitude, except in the case of the CNOT gate, which was the most difficult to reconstruct of the gates tested. The accuracy of the PAPA reconstructions of the simulated gates is set by the specifics of the classical numerical algorithm implemented (see Appendix \ref{app:Num} for details). If other algorithms \cite{Bolduc:2017aa,Knee:aa} more tailored to quantum process reconstruction are used with PAPA we expect significant improvements in accuracy and runtime are possible. 
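The trace distance of Eqs.~\eqref{eqn:3QTD} and \eqref{eqn:2QTD} equals half the sum of the singular values of the difference of the two Choi matrices, which gives a compact numerical implementation (a sketch of the standard formula, not code from the paper):

```python
import numpy as np

def trace_distance(rho, sigma):
    # (1/2) Tr sqrt((rho-sigma)^dag (rho-sigma)) = half the sum of the
    # singular values of (rho - sigma)
    return 0.5 * np.sum(np.linalg.svd(rho - sigma, compute_uv=False))

# orthogonal pure states sit at the maximal trace distance of 1
rho = np.diag([1.0, 0.0]).astype(complex)
sigma = np.diag([0.0, 1.0]).astype(complex)
assert np.isclose(trace_distance(rho, sigma), 1.0)
```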
\subsection{Coherent Error in the Cross-Resonance Gate} \begin{figure}[t] \includegraphics[width=\columnwidth]{CR_plot.pdf} \caption{Simulated trace distance between the three-qubit Choi state for the simulated CR-CNOT with coherent error and either the PAPA reconstructed Choi state (black +), or the ideal gate (red $\times$), as a function of over-rotation error (angle $\beta$) and stray $ZZ$-coupling (angle $\phi$), see Eq.~\eqref{eqn:UCR}.} \label{fig:CR} \end{figure} We also perform systematic testing of the PAPA approach by examining coherent error in a cross-resonance (CR) implementation of a CNOT gate \cite{Rigetti:2010aa,Chow:2011aa}, with the ideal gate taking the form ${\rm CNOT}_{12}\otimes \hat{\mathbb{I}}$. Referred to as a CR-CNOT, this ideal gate consists of the ideal CR-gate followed by single-qubit gates. The unitary describing the implemented gate in the presence of coherent error is given by $\hat{U}_{\rm CNOT} = \hat{U}^1_{-Z90}\hat{U}^{2}_{+X90}\hat{U}_{\rm CR}$, with $\hat{U}_{\pm\mu90}^j$ a rotation of qubit $j$ of angle $\pm 90^{\circ}$ around the $\mu$-axis (which we assume to be perfect), and \begin{align} \hat{U}_{\rm CR} = \exp\left(-i\left[\left(\frac{\pi}{2} + \beta\right)\frac{\hat{Z}\hat{X}\hat{\mathbb{I}}}{2} + \phi~\frac{\hat{\mathbb{I}}\hat{Z}\hat{Z}}{2}\right]\right), \label{eqn:UCR} \end{align} where for compactness of notation we have suppressed the tensor product symbols, such that $\hat{Z}\hat{X}\hat{\mathbb{I}} = \hat{Z}\otimes\hat{X}\otimes\hat{\mathbb{I}}$. In Eq.~\eqref{eqn:UCR}, the angles $\beta$ and $\phi$ quantify the coherent error, with $\beta$ the angle of over-rotation of the desired CR-interaction between qubits 1-2, and $\phi$ the angle quantifying the effect of spurious $ZZ$-coupling between qubits 2-3. We consider the echoed CR-pulse of Ref.~\cite{Corcoles:2013aa}, such that the only remaining $ZZ$-coupling is between the target and idle qubits (i.e.~2 and 3).
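The unitary of Eq.~\eqref{eqn:UCR} can be constructed numerically as follows (our own sketch; `expm_herm` is a hypothetical helper that exponentiates the Hermitian generator by eigendecomposition):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

def expm_herm(H):
    # exp(-i H) for Hermitian H via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

def U_CR(beta, phi):
    # generator of Eq. (UCR): (pi/2 + beta) ZXI/2 + phi IZZ/2
    H = (np.pi / 2 + beta) * kron3(Z, X, I2) / 2 + phi * kron3(I2, Z, Z) / 2
    return expm_herm(H)

U = U_CR(np.pi / 16, 1e-3)  # over-rotation and stray ZZ angles from the text
assert np.allclose(U @ U.conj().T, np.eye(8))  # unitary for any beta, phi
```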
We use values of $\beta$ between $\pi/16$ and $\pi/8$ radians, which produce non-ideal gates with trace-overlap fidelity of $95-99\%$, and values of $\phi$ between $10^{-3}$ and $4\times 10^{-3}$ radians. For a gate of 400 ns in duration, these values of $\phi$ correspond to spurious $ZZ$-couplings of $2.5-10$ kHz. From the decomposition of the effective two-qubit processes for the ideal CNOT gate given in Appendix \ref{app:PAPAGST}, it is clear that a CR-CNOT with coherent error does not satisfy the criteria for PAPA+GST. In particular, the error is strongly gate-dependent as it is intrinsic to the CR-interaction. For instance, no effect of the CR error would be seen in the implementation of the simultaneous single-qubit gate $\hat{Z}\otimes\hat{\mathbb{I}}$ on qubits 1-3. As such, we must apply standard PAPA, and simulate QPT on the effective process for each pair of qubits during the implemented CR-CNOT. For this we assume no SPAM error, and in practice similar results can be achieved by applying other SPAM-insensitive process tomography approaches to the CR-CNOT \cite{Kimmel:2014aa,Johnson:2015aa,Roth:2018aa}. The results of our simulations are shown in Fig.~\ref{fig:CR}. As can be seen, for all values of $\beta$ and $\phi$ tested the PAPA reconstruction is approximately an order of magnitude closer to the simulated unitary of Eq.~\eqref{eqn:UCR} than the ideal gate (used as the initial guess). Thus, PAPA is a useful technique for benchmarking the performance of experimentally relevant implementations of entangling gates, such as the CR-CNOT widely used in circuit QED \cite{Chow:2014aa}. \section{Conclusion} \label{sec:Conc} We have presented here an approach to efficient and SPAM-insensitive quantum process tomography that relies on fitting tomographic data to a constrained ansatz for the unknown quantum process.
Our physically motivated pairwise perturbative ansatz requires only two-qubit process tomography on all pairs of qubits, such that the total number of measurements scales only quadratically with qubit number. Further, our ansatz inherits SPAM-insensitivity from SPAM-insensitive two-qubit tomography, such as gate-set tomography \cite{Blume-Kohout:2017aa} or RB gate tomography \cite{Kimmel:2014aa,Johnson:2015aa,Roth:2018aa}. Testing via numerical simulations validates the usefulness of our tomographic approach on both a series of example gates, and the experimentally relevant CR-CNOT \cite{Chow:2011aa}. In typical cases, the resulting description of the unknown quantum process found by our ansatz is an order of magnitude more accurate than the naïve initial guess. In the future, we hope to improve the efficiency and accuracy of the classical algorithm underlying our reconstruction method \cite{Bolduc:2017aa,Knee:aa}. It is worth noting that while we have chosen to build our ansatz for an $N$-qubit process from two-qubit processes, similar ansätze can be created from $K$-qubit processes for any $K < N$. These have measurement resource requirements that scale as a polynomial of order $K$, and are therefore still asymptotically efficient. We have focussed on the case $K=2$ in our work as two-qubit process tomography is within current experimental capabilities. However, for larger system sizes, there will likely be an optimal $K>2$ that reduces the number of qubit subsets, given by $N\choose K$, while maintaining a small enough $K$ that $K$-qubit QPT is experimentally feasible. Finally, we comment briefly on the situations where PAPA may fail, and the fact that this actually gives useful information about the unknown process. Numerical reasons aside, PAPA reconstruction fails when the process being estimated is an operation that does not factor into two-body processes, or when non-Markovian noise is present.
As such, PAPA reconstruction can be used as a form of model testing for error processes that entangle more than two qubits, or for non-Markovian noise sources such as slow parameter drift. Similarly, PAPA+GST puts greater restrictions on the gate and context independence of the noise sources, and can be used as a model testing procedure for these error sources. This highlights the usefulness of ansatz-based approaches to QPT: even when they fail they provide useful information about the system. \acknowledgements The authors acknowledge useful discussions with David Poulin. This material is based upon work supported by the U.S. Army Research Office under Contract No: W911NF-14-C-0048. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. Army Research Office.
\section*{Introduction} The physics of the Multi-Regge limit of scattering amplitudes in Yang-Mills theory (see e.g. Refs.~\cite{RevLipatov97, RevDelDuca95} for a review) continues to attract a lot of attention both from the theoretical and phenomenological points of view. From the theory side, the list of applications of the Multi-Regge limit includes studies~\cite{Bartels:2008ce, Bartels:2008sc, Bartels:2014jya} of corrections to the Bern-Dixon-Smirnov all-order ansatz~\cite{Bern:2005iz} for scattering amplitudes in ${\cal N}=4$-supersymmetric Yang-Mills theory and the recently discovered connection~\cite{IRfactorRegge, Caron-Huot:2017zfo} between the Multi-Regge asymptotics of scattering amplitudes and the all-order structure of infrared divergences in {\it non-supersymmetric} QCD, summarized by the dipole formula~\cite{Becher:2009cu, Gardi:2009qi, Becher:2009qa, Gardi:2009zv}. From a more phenomenological perspective, scattering amplitudes in Multi-Regge Kinematics (MRK) form the basis for the celebrated Balitsky-Fadin-Kuraev-Lipatov (BFKL)~\cite{BFKL1, BFKL2, BFKL3} evolution equation, which allows one to resum the higher-order corrections to scattering amplitudes or observable cross-sections in QCD, enhanced by powers of $\log s/(-t)$, where $s$ is a (partonic) squared center-of-mass energy and $t$ is the typical $t$-channel momentum transfer in the process under consideration. In QCD, the kernel of the BFKL equation has been computed up to the Next-to-Leading Order (NLO) level~\cite{NLO-BFKL,NLOCiafaloni1,NLOCiafaloni2}, but its direct phenomenological application is impossible because the NLO correction turns out to be numerically large.
Logarithms coming from the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi~\cite{DGLAP1, DGLAP2, DGLAP3} evolution in the transverse-momentum scale~\cite{Salam98} and the running of the QCD coupling~\cite{Brodsky:1998kn} have been identified as the main sources of the perturbative instability of the BFKL kernel, and their consistent resummation is one of the central problems of BFKL phenomenology today, see e.g. Refs.~\cite{RGIBFKL, ChVeraRGI1, Ball:2017otu, Marzani:2007gk} for recent work in this direction. The property of exponentiation of loop corrections enhanced by $\log s/(-t)$ is known under the name of {\it Reggeization} of quarks and gluons in QCD, and it has been proven up to the Next-to-Leading Logarithmic approximation (NLLA)~\cite{ReggeProofLLA, ReggeProofNLLA, BogFad06}. It is natural to ask for a systematic tool which makes this property of QCD scattering amplitudes manifest. The gauge-invariant Effective Field Theory (EFT) for Multi-Regge processes in QCD~\cite{Lipatov95, LV} is such a tool. In the original papers~\cite{Lipatov95, LV}, the Reggeization of the gluon and quark was shown to hold in the framework of the EFT in the LLA. Computing loop corrections in this EFT, one finds a new type of divergences of loop integrals, the so-called {\it rapidity divergences}, which will be discussed in the forthcoming sections. Similar divergences also arise in the context of Soft-Collinear Effective Theory (SCET)\footnote{On the possibility to include BFKL effects and quark Reggeization into the framework of SCET see e.g. Refs.~\cite{Rothstein:2016bsq, Moult:2017xpp}.}~\cite{Becher:2011dz, Chiu:2012ir} and Transverse-Momentum-Dependent (TMD) factorization~\cite{CollinsQCD, Collins:2011ca}, and a few prescriptions to regularize them, including analytic regularization~\cite{Becher:2011dz}, $\delta$-regularization~\cite{Echevarria:2015byo} and ``tilted Wilson lines'' regularization~\cite{CollinsQCD, Collins:2011ca}, were introduced in the literature.
Among the mentioned prescriptions, only the last one does not modify the standard definition of the Wilson lines, and therefore it can be relatively straightforwardly applied to the High-Energy EFT~\cite{Lipatov95, LV}. The first such applications were performed in Refs.~\cite{Hentschinski:2011tz, Chachamis:2012cc, Chachamis:2012gh}, where one-loop corrections to the propagator of the Reggeized gluon and to the effective vertices of interaction of an on-shell quark and gluon with one Reggeized gluon were computed in the framework of the EFT, and found to be consistent with the results obtained earlier from QCD. Later, the two-loop Regge trajectory of the gluon was extracted from the rapidity-divergent part of the two-loop correction to the Reggeized gluon propagator in the EFT~\cite{Chachamis:2013hma} and was shown to coincide with the known QCD result. Also, the calculation of the one-loop correction to the propagator of the Reggeized quark and to the vertex of interaction of an on-shell photon with one Reggeized and one QCD quark ($Q\gamma q$-vertex) has been done in Ref.~\cite{gaQq-real-photon}, and it was shown that these results allow one to reproduce the Multi-Regge asymptotics of the positive-signature part of the one-loop QCD amplitude for the $\gamma\gamma\to q\bar{q}$ process, thus checking the consistency of the High-Energy EFT in the Reggeized quark sector. Further development of computational techniques within this approach, in particular going beyond the LLA or considering quantities with more than one scale of virtuality, is required for the EFT approach to be instrumental for the solution of the above-mentioned problems of BFKL physics. In the present paper we deal with the problem of the computation of one-loop corrections to Reggeon-Particle-Particle vertices containing one additional scale of virtuality, besides the transverse momentum of the Reggeized parton. Our primary example will be the one-loop correction to the $Q\gamma^\star q$-vertex with a space-like off-shell photon ($\gamma^\star$).
Besides the methodological significance of this calculation, these results will also become an integral part of the virtual NLO correction to the $F_2$ structure function of Deep Inelastic Scattering (DIS) at NLO of the Parton Reggeization Approach (PRA) to multi-scale hard processes at Hadron Colliders, see Ref.~\cite{NS_PRA} for an introduction. The real NLO correction for the DIS hard subprocess with a Reggeized gluon in the initial state has been computed in the PRA in Refs.~\cite{NS_DIS1, NS_DIS2}, where the problem of the definition of the unintegrated Parton Distribution Function at NLO is also discussed. The present paper is organized as follows. An introduction to the formalism of the High-Energy EFT and the problem of rapidity divergences is given in Sec.~\ref{Sec:EFT}. In Sec.~\ref{Sec:RDs-gen} we perform a general analysis of rapidity divergences in one-loop integrals with one and two external Reggeon lines, and in Sec.~\ref{Sec:Ints} we compute the necessary scalar integrals. The calculation of the one-loop correction to the $Q\gamma^\star q$-vertex proceeds in Sec.~\ref{Sec:1-loop-vert}, where the fate of spurious power-divergent terms is traced and it is shown that they cancel, leaving only a single-logarithmic rapidity divergence, as it should be according to the quark Reggeization hypothesis. Using the obtained result for the vertex correction, one can reconstruct the Multi-Regge asymptotics of the one-loop amplitude of DIS on a real photon target, and compare the result with a direct QCD calculation. The one-Reggeon exchange amplitude contributes both to the real and imaginary parts of the amplitude at this order, and it is considered in Sec.~\ref{Sec:comp-QCD}. In Sec.~\ref{Sec:Imag} the two-Reggeon contribution in the EFT is discussed, which contributes only to the imaginary part at one loop.
Finally, our conclusions are summarized in Sec.~\ref{Sec:Conclusions}, and in the \hyperlink{Sec:Appendix:A}{Appendix} the comparison of pole prescriptions for the $O(g_s^2)$ and $O(g_s^3)$ induced vertices arising from the Hermitian version of the Reggeon-gluon interaction~\cite{RevLipatov97, BondZubkov} with the results of Ref.~\cite{MH_PolePrescr} is presented. \section{Gauge-invariant Effective Field Theory for Multi-Regge processes in QCD} \label{Sec:EFT} In the limit of MRK for the $2\to 2+n$ partonic subprocess, all $2+n$ final-state particles are highly separated in rapidity, while their transverse momenta are small in comparison with the collision energy. The kinematic situation when one can identify a few groups of final-state partons, each of which occupies a relatively small rapidity interval in comparison with the rapidity gaps between the groups, is called Quasi-Multi-Regge Kinematics (QMRK). If one introduces the invariant mass $s_{ij}\sim e^{\Delta y_{ij}}$ for each pair of highly-separated final-state partons $i$ and $j$ in MRK (or each pair of clusters of partons in QMRK), and (anti-)symmetrizes the full QCD scattering amplitude w.r.t. substitutions $s_{ij}\leftrightarrow u_{ij}\simeq -s_{ij}$, obtaining in such a way the amplitude with {\it definite signature} in each of the $s_{ij}$-channels, then in the QMRK limit, such an object is shown~\cite{ReggeProofLLA, ReggeProofNLLA, BogFad06} to factorize into a product of certain universal gauge-invariant parts. An example of such factorization can be given with the help of the QMRK process depicted in Fig.~\ref{Fig:QMRK}.
Here the QMRK asymptotics of the part of the total QCD amplitude with positive signature in the $s_{12}=(p_1+q_2)^2$-channel and negative signature in the $s_{23}=(p_2+q_1)^2$-channel can be represented as a product of three {\it effective vertices}, corresponding to the production of the three clusters of final-state partons, and $t$-channel propagators of {\it Reggeized gluons} ($R_{\pm}$) and {\it Reggeized quarks} ($Q_{\pm}$), collectively called {\it Reggeons}. The Reggeized gluon is a scalar particle in the adjoint representation of the color group $SU(N_c)$, while the Reggeized quark is a Dirac fermion in the fundamental representation of the color group. The propagators of Reggeons are dressed by Regge factors $\left( (s_{12}/s_0)^{\omega_q(q_1^2)}+(-s_{12}/s_0)^{\omega_q(q_1^2)} \right)/2$ and $\left( (s_{23}/s_0)^{\omega_g(q_2^2)}+(-s_{23}/s_0)^{\omega_g(q_2^2)} \right)/2$, which determine the dependence of the amplitude on the energies $s_{12}$ and $s_{23}$, while the effective vertices depend only on an arbitrary {\it rapidity-factorization scale} $s_0$, and $\omega_{g/q}(q_i^2)$ are functions of $q_i^2$ and $\alpha_s$ called the {\it gluon} ({\it quark}) {\it Regge trajectories.} These functions contain explicit infrared divergences and in QCD they are currently known up to $O(\alpha_s^2)$. \begin{figure} \begin{center} \includegraphics[width=0.6\textwidth]{figures/MRK.pdf} \end{center} \caption{An example of a QMRK process with three clusters of final-state partons highly separated in rapidity. The Reggeized quark and Reggeized gluon in the $t$-channels are denoted by dashed lines. The effective vertices are denoted by shaded circles. }\label{Fig:QMRK} \end{figure} It is most convenient to describe the QMRK if one introduces the Sudakov decomposition of an arbitrary four-vector: \begin{equation} k^\mu=\frac{1}{2}\left(k_+n_-^\mu + k_-n_+^\mu \right) + k_T^\mu, \label{Eq:Sud-dec} \end{equation} where\footnote{The reader should note that, unlike the notation of e.g.
Ref.~\cite{Lipatov95}, we do not distinguish between covariant and contravariant light-cone components, so that the position of the indices $\pm$ in our formulas does not have any meaning. } $n^\mu_-=(n^-)^\mu=p_1^\mu/\sqrt{s}$, $n^\mu_+=(n^+)^\mu=p_2^\mu/\sqrt{s}$, $s=(p_1+p_2)^2$, $p_1^2=p_2^2=0$, so that $n_{\pm}^2=0$, $n_+n_-=2$ and $k_{\pm}=k^{\pm}=n_{\pm}k$, $n_\pm k_T=0$. In the center-of-mass frame of $p_1$ and $p_2$, the relation $k^{\pm}=k^0\pm k^3$ holds, and for the square of a four-vector one has $k^2=k_+k_- - {\bf k}_T^2$, while the rapidity of a particle is defined as $y=\log (k_+/k_-)/2$. By definition, the momentum $p_1$ has a large $p_1^+$ component, $p_2$ has a large $p_2^-$ component, and in QMRK only a small fraction of these large components is transferred to the cluster at central rapidities: \[ z_1=\frac{q_1^+}{p_1^+}\ll 1,\ \ z_2=\frac{q_2^-}{p_2^-}\ll 1. \] Consequently, on entrance to the central production vertex, the following scaling relations for the components of the momenta $q_{1,2}$ hold: \begin{eqnarray} |{\bf q}_{T1}|\sim q_1^+\sim O(z_1)\gg q_1^-\sim O(z_1^2), \nonumber \\ |{\bf q}_{T2}|\sim q_2^-\sim O(z_2)\gg q_2^+\sim O(z_2^2), \label{Eq:MRK-scalings} \end{eqnarray} so that one (so-called ``large'') light-cone component of these momenta is of the same order as the corresponding transverse momentum, while the other one is negligible and the momenta $q_{1,2}$ are necessarily off-shell: $q_{1,2}^2\simeq -{\bf q}_{T1,2}^2$. Effective vertices describe the interaction of Reggeons with ordinary QCD partons. They are gauge-invariant independently of each other, despite the fact that the Reggeons connected to them are off-shell. This property dictates the specific form of the interaction of particles and Reggeons in the EFT~\cite{Lipatov95, LV}, which contains Wilson lines. In QCD {\it at tree level,} the above-described {\it Regge-pole factorization} is true up to corrections suppressed by powers of $s_{ij}$.
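As a quick numerical sanity check of the light-cone conventions above (our own illustration, with an arbitrary test four-vector), one can verify the identity $k^2=k_+k_- - {\bf k}_T^2$ and the rapidity definition:

```python
import numpy as np

def lightcone(k):
    # k = (k0, k1, k2, k3); returns k+, k-, |k_T| with k± = k0 ± k3
    return k[0] + k[3], k[0] - k[3], np.hypot(k[1], k[2])

k = np.array([5.0, 1.0, 2.0, 3.0])  # arbitrary time-like test vector
kp, km, kT = lightcone(k)

ksq = k[0]**2 - k[1]**2 - k[2]**2 - k[3]**2   # Minkowski square
assert np.isclose(ksq, kp * km - kT**2)       # k^2 = k+ k- - kT^2

y = 0.5 * np.log(kp / km)                     # rapidity y = log(k+/k-)/2
```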
If loop corrections are taken into account, it holds in the LLA (when one takes all the terms proportional to $\alpha_s^n \log^n s_{ij}$ from all orders of perturbation theory) and in the NLLA (when one adds also the $\sim\alpha_s^{n+1} \log^n s_{ij}$ terms). Beyond the NLLA, Regge-pole factorization is violated by the Regge-cut contribution~\cite{DelDuca:2001gu, Caron-Huot:2017fxr, Fadin:2017nka} and by nonlinear effects of the interaction of multiple Reggeons in the $t$-channel, but it should be possible to formulate both of these effects in the language of the High-Energy EFT, see e.g. the recent Ref.~\cite{Hentschinski:2018rrf} on the non-linear effects. The construction of the High-Energy EFT~\cite{Lipatov95, LV} proceeds as follows. The axis of rapidity is sliced into a few intervals, corresponding to clusters of particles highly separated in rapidity. For the QCD gluon and quark fields ``living'' in each interval, a separate copy of the QCD Lagrangian: \begin{equation} L_{\rm QCD}(A_\mu, \psi_q)=-\frac{1}{2}{\rm tr}\left[G_{\mu\nu}G^{\mu\nu}\right]+\sum\limits_{q=1}^{n_F}\bar{\psi}_q (i\hat{D}-m_q)\psi_q, \label{Eq:L-QCD} \end{equation} is defined, where $\hat{D}=\gamma_\mu D^\mu$, $D_\mu=\partial_\mu+ig_s A_\mu$ is the covariant derivative, $A_\mu=A_\mu^a T^a$, $T^a$ are the (Hermitian) generators of the fundamental representation of $SU(N_c)$, $g_s$ is the coupling constant of QCD and $G_{\mu\nu}=-i\left[D_\mu,D_\nu\right]/g_s$ is the Yang-Mills field-strength tensor. The complete Lagrangian of the EFT takes the form: \begin{eqnarray} & L_{\rm eff}= L_{\rm kin.} + \sum\limits_i \left[ L_{\rm QCD}(A_\mu^{[y_i, y_{i+1}]}, \psi_q^{[y_i, y_{i+1}]})\right. \nonumber \\ & \left.
+ L_{\rm Rg}(A_\mu^{[y_i, y_{i+1}]}, R_+, R_-) + L_{\rm Qqg}(A_\mu^{[y_i, y_{i+1}]}, \psi_q^{[y_i, y_{i+1}]}, Q_+, Q_-) \right], \label{Eq:LEFT} \end{eqnarray} where the index $[y_i, y_{i+1}]$ of a field means that the real part of the rapidity of its momentum modes is restricted to lie within the interval $y_i\leq {\rm Re}(y) \leq y_{i+1}$. The kinetic part of the Lagrangian takes the form: \begin{equation} L_{\rm kin.}=4{\rm tr}\left[ R_+ \partial_T^2 R_- \right] + \overline{Q}_- \left(i\hat{\partial}_T\right) Q_+ + \overline{Q}_+ \left(i\hat{\partial}_T\right) Q_-, \label{Eq:L_kin} \end{equation} which leads to non-diagonal bare propagators, connecting the $R_+$ field with $R_-$: $-i/(2{q}_T^2)$, and the $Q_+$ field with $Q_-$: $i\hat{q}_T/{q_T^2}$, where $q_T$ is the transverse part of the four-momentum of the Reggeon. Due to the MRK conditions (\ref{Eq:MRK-scalings}), the fields of Reggeized gluons $R_{\pm}$ and quarks $Q_{\pm}$ are subject to the following kinematic constraints: \begin{eqnarray} \partial_+ R_- = \partial_- R_+ = 0, \label{Eq:kin-constr-R}\\ \partial_+ Q_- = \partial_- Q_+ = 0, \label{Eq:kin-constr-Q} \end{eqnarray} where $\partial_\pm=n_{\pm}^\mu\partial_\mu=2\partial/\partial x_{\mp}$. Also, to remove the Dirac structures associated with the ``small'' light-cone component of the Reggeon momentum, the following constraints are applied to the spinor structure of the fields of Reggeized quarks~\cite{LV}: \begin{equation} \hat{n}_\pm Q_{\mp} =0,\ \overline{Q}_{\pm} \hat{n}_{\mp}=0. \label{Eq:kin_cons_Q2} \end{equation} On the level of Feynman rules, the latter constraints are implemented by adding the following projectors to the propagators of Reggeized quarks: \begin{equation} \hat{P}_{\pm}=\frac{1}{4}\hat{n}_{\mp} \hat{n}_{\pm}. \label{Eq:Proj-Qpm} \end{equation} The kinematic constraints (\ref{Eq:kin-constr-R}) and (\ref{Eq:kin-constr-Q}) are not invariant w.r.t. local gauge transformations, and therefore the fields $R_{\pm}$ and $Q_{\pm}$ have to be gauge-invariant by themselves.
This requirement mostly fixes the form of the gauge-invariant interaction between Reggeons and QCD partons. The Hermitian form~\cite{RevLipatov97,BondZubkov} of the Lagrangian of interaction of Reggeized and QCD gluons~\cite{Lipatov95} can be written as follows: \begin{equation} L_{Rg}(x)=\frac{i}{g_s}{\rm tr}\left[R_+(x) \partial_\rho^2 \partial_- \left(W_x[A_-]-W_x^\dagger[A_-]\right) + R_-(x) \partial_\rho^2\partial_+ \left(W_x[A_+]-W_x^\dagger[A_+]\right) \right], \label{Eq:L-Rg} \end{equation} where $W_x[A_{\pm}]$ is a (past-pointing) half-infinite Wilson line, stretching in the $(+)$ or $(-)$ light-cone direction from the point $x$: \begin{eqnarray} W_x[A_{\pm}]&=& P\exp\left[\frac{-ig_s}{2} \int\limits_{-\infty}^{x_{\mp}} dx'_{\mp} A_{\pm}\left(x_{\pm}, x'_{\mp}, {\bf x}_{T}\right) \right] \nonumber \\ &=& 1-ig_s\left(\partial_\pm^{-1}A_{\pm} \right) + (-ig_s)^2\left(\partial_\pm^{-1}A_{\pm}\partial_\pm^{-1}A_{\pm}\right)+\ldots , \label{Eq:WL-def} \\ \overline{W}_x[A_{\pm}]&=& P\exp\left[\frac{-ig_s}{2} \int\limits^{+\infty}_{x_{\mp}} dx'_{\mp} A_{\pm}\left(x_{\pm}, x'_{\mp}, {\bf x}_{T}\right) \right] \nonumber \\ &=& 1-ig_s\left(\bar{\partial}_\pm^{-1}A_{\pm} \right) + (-ig_s)^2\left(\bar{\partial}_\pm^{-1}A_{\pm}\bar{\partial}_\pm^{-1}A_{\pm}\right)+\ldots , \label{Eq:WLbar-def} \end{eqnarray} where we have also defined the future-pointing Wilson line $\overline{W}_x[A_{\pm}]$, and the operators $\partial^{-1}_{\pm}$ and $\bar{\partial}^{-1}_{\pm}$ act as $\partial^{-1}_{\pm}f(x)=\int\limits_{-\infty}^{x^{\mp}} dx'_{\mp}/2\ f(x_{\pm},x'_{\mp},{\bf x}_T)$ and $\bar{\partial}^{-1}_{\pm}f(x)=\int\limits^{+\infty}_{x^{\mp}} dx'_{\mp}/2\ f(x_{\pm},x'_{\mp},{\bf x}_T)$, so that on the level of Feynman rules they correspond to Eikonal factors with a definite $i\varepsilon$-prescription: $-i/(k_{\pm}+i\varepsilon)$ and $i/(k_{\pm}-i\varepsilon)$, respectively.
Lagrangian (\ref{Eq:L-Rg}) generates an infinite sequence of {\it induced vertices} of interaction of the Reggeized gluon with $n$ QCD gluons. The simplest of them is the $R_+g$-transition vertex, corresponding to the $O(g_s^0)$-term in (\ref{Eq:L-Rg}): \begin{eqnarray} L_{Rg}\supset \frac{i}{g_s}{\rm tr}\left[ R_+ \partial_\rho^2 \partial_- (-2i g_s) \partial_-^{-1} A_- \right] \rightarrow \Delta_{+\mu}^{ab}(q)=(-iq^2)n^-_\mu\delta_{ab}, \end{eqnarray} where $q$ is the (incoming) momentum of the Reggeon, the factor of $2$ in the round brackets corresponds to taking into account both the $W$ and $W^\dagger$ terms in (\ref{Eq:L-Rg}), and the $R_-g$ vertex $\Delta_{-\mu}^{ab}$ can be obtained by the $-\leftrightarrow +$ replacement in the obtained expression. The $R_+gg$ induced vertex is generated by the $O(g_s)$-term in (\ref{Eq:L-Rg}): \begin{eqnarray} L_{Rg}\supset \frac{i}{g_s}{\rm tr}\left[ R_+ \partial_\rho^2 \partial_- (-g_s^2)\left( T^{b_1}T^{b_2}-T^{b_2} T^{b_1} \right) \partial_-^{-1}A_-^{b_1}\partial_-^{-1}A_-^{b_2} \right] = -i g_s \frac{if^{ab_1b_2}}{2} R^a_+ \partial_\rho^2 A_-^{b_1} \partial_-^{-1} A_-^{b_2} \nonumber \\ \rightarrow \Delta_{+\mu_1 \mu_2}^{ab_1 b_2}(q,k_1) = g_s (n^-_{\mu_1} n^-_{\mu_2}) \frac{q^2}{2} \left(\frac{f^{a b_1 b_2}}{k_2^-+i\varepsilon} + \frac{f^{a b_2 b_1}}{k_1^-+i\varepsilon} \right) = g_s q^2 (n^-_{\mu_1} n^-_{\mu_2}) \frac{f^{ab_1b_2}}{2} \left( \frac{1}{k^-_1+i\varepsilon} + \frac{1}{k^-_1-i\varepsilon} \right), \label{eq:Rgg-ind-vert} \end{eqnarray} where $k_1$ and $k_2$ are the (incoming) momenta of the gluons, the term $T^{b_2}T^{b_1}$ in the first line comes from $W^\dagger$, and in the second line we have used the condition $k_1^-+k_2^-=0$, which follows from the constraint (\ref{Eq:kin-constr-R}). One can see that the PV-prescription for the $1/k^-$-pole in the $Rgg$ induced vertex, which was advocated in Ref.~\cite{MH_PolePrescr}, follows from the Lagrangian of the $Rg$-interaction in the form~(\ref{Eq:L-Rg}).
The pole prescription for higher-order induced vertices is more complicated and also constrains the color structure of the vertex. In Ref.~\cite{MH_PolePrescr} the $i\varepsilon$ structure of induced vertices was derived from the requirement for the exchanges of Reggeized gluons in the $t$-channels to have a negative signature, and this prescription was later used in the calculations of Refs.~\cite{Hentschinski:2011tz, Chachamis:2012cc, Chachamis:2012gh, Chachamis:2013hma}. It turns out that induced vertices which satisfy all the conditions introduced in Ref.~\cite{MH_PolePrescr} can be obtained directly from the Lagrangian~(\ref{Eq:L-Rg}), without any additional adjustments. At least, we have verified this statement at the orders $O(g_s^2)$ and $O(g_s^3)$, see the \hyperlink{Sec:Appendix:A}{Appendix}. The Lagrangian of interaction of Reggeized quarks with QCD quarks and gluons has the form~\cite{LV}: \begin{equation} L_{\rm Qqg}(x)= - \overline{Q}_-(x) \frac{i\hat{\partial}}{2} \left( W_x^\dagger [A_+]+\overline{W}_x[A_+]\right) \psi(x) - \overline{Q}_+(x) \frac{i\hat{\partial}}{2} \left( W_x^\dagger [A_-] + \overline{W}_x[A_-] \right)\psi(x) + {\rm h. c.}, \label{Eq:L-Qq} \end{equation} where ${\rm h.c.}$ denotes the Hermitian-conjugate terms. When working with Reggeized quarks, it is most convenient to remove the $Q_{\pm}q$-transition vertex by the following shift of the quark fields: \[ \psi\to \psi + \hat{P}_+Q_+ + \hat{P}_-Q_-,\ \ \overline{\psi}\to \overline{\psi} + \overline{Q}_+ \hat{P}_- + \overline{Q}_- \hat{P}_+. \] After the shift, only vertices of the type $Q_{\pm}gq$, $Q_{\pm}ggq$, ..., $Q_+ g Q_-$, $Q_+ggQ_-$, etc.\ are left in the Lagrangian.
The $O(g_s)$ $Q_+gq$-vertex~\cite{FadinSherman76, FadinSherman77, BogFad06} is obtained from the following term in the sum of (\ref{Eq:L-Qq}) and (\ref{Eq:L-QCD}) after the shift: \begin{eqnarray} L_{\rm Qqg}+L_{\rm QCD}\supset i\overline{Q}_-\left[ ig_s \hat{A}+\frac{\hat{\overleftarrow{\partial}}}{2} \left(ig_s \partial^{-1}_- A_- - ig_s \bar{\partial}^{-1}_- A_-\right) \right] \psi \nonumber \\ \rightarrow g_sT^a\cdot \Gamma^{(0),a}_{-\mu}(q, k)=g_sT^a\left[ \gamma_\mu+ \frac{\hat{q} n_\mu^-}{2}\left( \frac{1}{k^-+i\varepsilon} + \frac{1}{k^- - i\varepsilon} \right) \right], \label{Eq:Qgq-vert} \end{eqnarray} where $k$ is the (incoming) gluon momentum and the index $(0)$ denotes that this is the LO result in $g_s$, without any loop corrections. One can see that the PV-prescription for the simplest $Qgq$ effective vertex is obtained from the Lagrangian (\ref{Eq:L-Qq}). In the following expressions, we will denote the PV-prescription for the pole by square brackets: \[ \frac{1}{[k^\pm]}=\frac{1}{2}\left( \frac{1}{k^\pm+i\varepsilon} + \frac{1}{k^\pm - i\varepsilon} \right). \] Analogously, one can obtain the Fadin-Sherman~\cite{FadinSherman76, FadinSherman77} $Q_+gQ_-$-vertex: \begin{equation} g_sT^a\cdot\Gamma^{(0),a}_{+\mu-}(q_1,k,q_2)=g_sT^a\left[ \gamma_\mu+ \frac{\hat{q}_1 n_\mu^-}{[k^-]} + \frac{\hat{q}_2 n_\mu^+}{[k^+]} \right],\label{Eq:FS-vertex} \end{equation} where $q_1$ and $q_2$ are the incoming momenta of $Q_+$ and $Q_-$, while $k$ is the incoming gluon momentum. Below, we will also need the couplings of photons to Reggeized quarks. They can be obtained from the vertices of interaction of Reggeized quarks with QCD quarks and gluons by replacing the corresponding factors $g_s T^a_{ij}$ with $ee_q\delta_{ij}$, where $e$ is the QED coupling constant and $e_q$ is the quark charge in units of the electron charge.
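As a side illustration (ours, not part of the original derivation), the PV bracket defined above is simply the real function $k/(k^2+\varepsilon^2)$, which tends to the principal value of $1/k$; a minimal numerical sketch:

```python
# Minimal sketch (ours): the PV bracket 1/[k] defined above equals
# (1/2)(1/(k+i*eps) + 1/(k-i*eps)) = k/(k^2 + eps^2), a real function
# approaching the principal value of 1/k as eps -> 0.
def pv_bracket(k, eps):
    return 0.5*(1.0/(k + 1j*eps) + 1.0/(k - 1j*eps))

k, eps = 0.7, 1e-3
val = pv_bracket(k, eps)
assert abs(val.imag) < 1e-12                      # the combination is real
assert abs(val.real - k/(k**2 + eps**2)) < 1e-12  # closed form
print(val.real)   # close to 1/k away from the pole at k = 0
```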
As will be discussed in detail in Sec.~\ref{Sec:RDs-gen}, in certain kinematics loop integrals containing Eikonal propagators exhibit a new type of divergence, which is not regulated by the usual dimensional regularization. These are the so-called {\it rapidity divergences}, and in the standard formulation of the effective action (\ref{Eq:LEFT}) such divergences are regulated by cutoffs on the real part of the rapidity of loop momenta, or of the momenta of real particles produced within a given cluster, $y_i\leq {\rm Re}(y) \leq y_{i+1}$, and they manifest themselves as terms proportional to $y_{i+1}-y_i$. The dependence on the regulators $y_i$ cancels between the contributions of neighboring clusters in each order of perturbation theory, building up terms which can finally be resummed into the Regge exponents $\exp[Y \omega_{q/g}(t)]$. However, for practical calculations, the cutoff regularization is rather inconvenient, especially in the multi-scale case. It may also interfere with such useful procedures as tensor reduction or integration-by-parts reduction of loop integrals, making them less straightforward to apply. In the context of SCET and TMD factorization, a few alternative approaches, preserving the explicit Lorentz covariance of the formalism, have been proposed. The analytic regularization is most convenient from the computational point of view and is widely applied in SCET calculations, but it has to be introduced on a diagram-by-diagram basis and is used mostly in the context of real corrections~\cite{Becher:2011dz}. The proposal of $\delta$-regularization in the form of Ref.~\cite{Echevarria:2015byo} is also rather attractive and is applicable to virtual corrections, but this approach modifies the standard definition of the Wilson line in a nontrivial way (see Eq.~5 of Ref.~\cite{Echevarria:2015byo}), destroying its classical gauge-transformation properties, so at the moment it is not clear if such a modification is applicable in the context of the high-energy EFT.
Finally, one should comment on the proposal of Ref.~\cite{vanHameren:2017hxx} to treat Eikonal propagators as standard propagators of a fictitious particle with the addition of a large light-like momentum. If, as proposed in Ref.~\cite{vanHameren:2017hxx}, one expands the well-known results for scalar one-loop integrals in the rapidity-regulator variable {\it after} the expansion in $\epsilon=(4-d)/2$, where $d$ is the dimension of space-time, then one generally obtains double logarithms of the rapidity regulator already at one loop, while from Reggeization one expects only single logarithms. Therefore the double logarithms should cancel. In the present work we first perform the asymptotic expansion in the rapidity regulator and keep the results exact in $\epsilon$ up to the point when all rapidity divergences, except the single logarithm, cancel away. Our analysis presented in Sec.~\ref{Sec:1-loop-vert} shows that this cancellation happens independently of the order of the expansion in the rapidity regulator vs. $\epsilon$, which might suggest that the approach of Ref.~\cite{vanHameren:2017hxx} is also correct. However, in the present paper we have chosen the probably less technically convenient, but still tractable, approach of {\it tilted Wilson lines}~\cite{CollinsQCD, Collins:2011ca, Hentschinski:2011tz, Chachamis:2012cc, Chachamis:2012gh, Chachamis:2013hma}, which has the advantage that it does not modify the gauge-transformation properties of Wilson lines and, if applied properly, allows one to preserve the gauge invariance of the effective action.
Rapidity divergences in real and virtual corrections will be regularized if one shifts the directions of the Wilson lines in the interaction part of the Lagrangian of the EFT (\ref{Eq:L-Rg}, \ref{Eq:L-Qq}) away from the light-cone, substituting: \begin{eqnarray} n^\pm_{\mu} \to \tilde{n}^{\pm}_\mu=n^{\pm}_\mu+r\cdot n^{\mp}_\mu,\ \ k_\pm \to \tilde{k}_{\pm}=k_{\pm}+r\cdot k_{\mp},\label{Eq:r-reg-def} \end{eqnarray} and therefore assigning finite rapidities $\mp \log r/2$ to the Wilson lines, where $0<r\ll 1$ is the rapidity regularization parameter and $\tilde{n}_{\pm}^2=4r$, $\tilde{n}_+\tilde{n}_-=2(1+r^2)$. This shift does not affect the gauge invariance of the Lagrangian of interaction of Reggeized quarks with QCD quarks and gluons (\ref{Eq:L-Qq}). However, the Lagrangian (\ref{Eq:L-Rg}) has to be further modified. Indeed, after regularization, the action contains the following term: \begin{eqnarray*} S_{Rg}\supset \int d^2 {\bf x}_T \int\limits_{-\infty}^{+\infty} \frac{dx_- dx_+}{2} {\rm tr} \left[ R_+(x) \tilde{\partial}_- \partial_\rho^2 \tilde{W}_{\tilde{x}_+} [ \tilde{A}_- ] \right] = \int d^2 {\bf x}_T \int\limits_{-\infty}^{+\infty} \frac{d\tilde{x}_- d\tilde{x}_+}{1-r^2} {\rm tr} \left[ R_+(x) \frac{\partial}{\partial \tilde{x}_+} \partial_\rho^2 \tilde{W}_{\tilde{x}_+} [ \tilde{A}_- ] \right], \end{eqnarray*} where we have passed from integration over $x_+$ and $x_-$ to integration over $\tilde{x}_+$ and $\tilde{x}_-$ with the Jacobian $1/(1-r^2)$, and we indicate explicitly that the Wilson line depends on $\tilde{x}_+$. Integrating by parts over $\tilde{x}_+$ one obtains: \begin{eqnarray*} S_{Rg}\supset \int d^2 {\bf x}_T \int\limits_{-\infty}^{+\infty} \left.
\frac{d\tilde{x}_-}{1-r^2} {\rm tr} \left[ R_+(x) \partial_\rho^2 \tilde{W}_{\tilde{x}_+} [ \tilde{A}_- ] \right] \right\vert_{\tilde{x}_+=-\infty}^{\tilde{x}_+=+\infty} - \int d^2 {\bf x}_T \int\limits_{-\infty}^{+\infty} \frac{d\tilde{x}_- d\tilde{x}_+}{1-r^2} {\rm tr} \left[ \left(\tilde{\partial}_-R_+(x)\right) \partial_\rho^2 \tilde{W}_{\tilde{x}_+} [ \tilde{A}_- ] \right]. \end{eqnarray*} When $\tilde{x}_+=+\infty$, the Wilson line in the first term stretches from $-\infty$ to $+\infty$, and such a Wilson line is invariant w.r.t. gauge transformations which become trivial at large distances from the origin. Invariance w.r.t. this subgroup of gauge transformations is sufficient for perturbation theory. The second term in the last expression is equal to zero at $r=0$ due to the kinematic constraint (\ref{Eq:kin-constr-R}). To nullify it at $r\neq 0$ we propose to modify the kinematic constraints (\ref{Eq:kin-constr-R}) and (\ref{Eq:kin-constr-Q}) as follows: \begin{eqnarray} \tilde{\partial}_+ R_- = \tilde{\partial}_- R_+ = 0, \label{Eq:r-kin-constr-R}\\ \tilde{\partial}_+ Q_- = \tilde{\partial}_- Q_+ = 0. \label{Eq:r-kin-constr-Q} \end{eqnarray} As was noted above, the modification of the kinematic constraint for Reggeized quarks is not necessary for gauge invariance, but it turns out that the calculation of many scalar integrals is simpler in such ``regularized'' MRK, so we propose to use the constraint (\ref{Eq:r-kin-constr-Q}) at least at the level of scalar integrals. It is instructive first to understand how the proposed regularization works at the level of real corrections.
Let us consider the regularized Lipatov $R_+gR_-$ vertex~\cite{BFKL1}: \begin{eqnarray} \tilde{\Delta}^{abc}_{+\mu-}(q_1,q_2)= \tilde{\Delta}_{+\rho}^{ad}(q_1) \frac{-i}{q_1^2} \tilde{\Delta}^{-\rho\mu}_{dbc}(q_2,q_1) + \tilde{\Delta}_{-\mu\rho}^{abd}(q_2) \frac{-i}{q_2^2} \tilde{\Delta}^{+\rho}_{dc}(q_1,q_2-q_1) \nonumber \\ + \tilde{\Delta}_{+\rho_1}^{ad_1}(q_1) \frac{-i}{q_1^2} \tilde{\Delta}_{-\mu\rho_2}^{abd_2}(q_2) \frac{-i}{q_2^2} (g_s f^{d_1 b d_2}) \gamma^{\rho_1\mu\rho_2}(q_1,q_2-q_1,-q_2) \nonumber \\ = g_s f^{abc} \left[ -(\tilde{n}_+\tilde{n}_-) \left( (q_1+q_2)_\mu + q_1^2 \frac{\tilde{n}^-_\mu}{\tilde{q}_2^-} + q_2^2\frac{\tilde{n}^+_\mu}{\tilde{q}_1^+} \right)+2\left( \tilde{q}_1^+\tilde{n}^-_\mu + \tilde{q}_2^-\tilde{n}^+_\mu \right) \right],\label{Eq:LV-reg} \end{eqnarray} where $\gamma_{\mu_1\mu_2\mu_3}(k_1,k_2,k_3)=g_{\mu_1\mu_2}(k_1-k_2)_{\mu_3}+g_{\mu_2\mu_3}(k_2-k_3)_{\mu_1}+g_{\mu_1\mu_3}(k_3-k_1)_{\mu_2}$, the momentum $q_1$ is incoming and $q_2$ is outgoing, so that the gluon momentum is $k=q_1-q_2$, and we have used the regularized kinematic constraints $\tilde{q}_1^-=\tilde{q}_2^+=0$. For the on-shell gluon with given $k^+$ and $k^-$, the regularized kinematic conditions are satisfied by: $q_1^+=(k^++rk^-)/(1-r^2)$, $q_1^-=-rq_1^+$, $q_2^-=-(k^-+rk^+)/(1-r^2)$ and $q_2^+=-rq_2^-$. It is easy to verify that the vertex (\ref{Eq:LV-reg}) satisfies the Slavnov-Taylor identity $(q_1-q_2)^\mu\tilde{\Delta}^{abc}_{+\mu -}(q_1,q_2)=0$ independently of $r$, which would not be the case if the modified kinematic conditions were not used. The square of the regularized vertex is: \[ (-g^{\mu\nu})\tilde{\Delta}^{abc_1}_{+\mu -}(q_1,q_2)\tilde{\Delta}^{abc_2}_{+\nu -}(q_1,q_2)=16N_c \delta_{c_1c_2}\frac{{\bf q}_{T1}^2 {\bf q}_{T2}^2}{{\bf k}_T^2} \left[f(y,r)+O(r) \right], \] where we have neglected $O(r)$-terms in the numerator and introduced the function $f(y,r)=(re^{-y}+e^y)^{-2}(re^{y}+e^{-y})^{-2}$ of $y=\log(k^+/k^-)/2$.
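As an independent sanity check (ours, not part of the paper's calculation), the transversality of the regularized vertex can be verified numerically in a few lines; below, light-cone components are defined by contraction with $n^\pm$ in the metric $(+,-,-,-)$:

```python
# Numerical sketch (ours): the regularized Lipatov vertex, Eq. (LV-reg),
# satisfies (q1-q2)^mu Delta_mu = 0 at finite r once the modified
# kinematic constraints nt^-.q1 = nt^+.q2 = 0 are imposed.
import random

def dot(a, b):   # Minkowski product, signature (+,-,-,-)
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def lincomb(*terms):   # sum of (coefficient, 4-vector) pairs
    return [sum(c*v[i] for c, v in terms) for i in range(4)]

r = 0.37                                        # finite rapidity regulator
n_plus, n_minus = [1, 0, 0, 1], [1, 0, 0, -1]
nt_plus  = lincomb((1, n_plus), (r, n_minus))   # tilted directions
nt_minus = lincomb((1, n_minus), (r, n_plus))
assert abs(dot(nt_plus, nt_plus) - 4*r) < 1e-12           # nt+^2 = 4r
assert abs(dot(nt_plus, nt_minus) - 2*(1 + r*r)) < 1e-12  # nt+.nt- = 2(1+r^2)

def from_lc(plus, minus, kx, ky):  # 4-vector from light-cone components
    return [(plus + minus)/2, kx, ky, (minus - plus)/2]

random.seed(1)
# incoming Reggeon q1 with nt^-.q1 = 0, i.e. minus = -r*plus:
P1 = random.uniform(1, 2)
q1 = from_lc(P1, -r*P1, random.random(), random.random())
# outgoing Reggeon q2 with nt^+.q2 = 0, i.e. plus = -r*minus:
M2 = random.uniform(1, 2)
q2 = from_lc(-r*M2, M2, random.random(), random.random())

nn = dot(nt_plus, nt_minus)
# Eq. (LV-reg) with the overall color factor g_s f^{abc} dropped:
delta = lincomb(
    (-nn, lincomb((1, q1), (1, q2))),
    (-nn*dot(q1, q1)/dot(nt_minus, q2), nt_minus),
    (-nn*dot(q2, q2)/dot(nt_plus, q1),  nt_plus),
    (2*dot(nt_plus, q1),  nt_minus),
    (2*dot(nt_minus, q2), nt_plus),
)
ward = dot(lincomb((1, q1), (-1, q2)), delta)
print(abs(ward))   # vanishes up to rounding for any such q1, q2
```

The identity holds exactly for arbitrary transverse momenta, even for an off-shell gluon, as long as the two constraints are imposed.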
The function $f(y,r)$ provides a smooth cutoff at $\mp\log r/2$ for the otherwise divergent integral over the rapidity of the gluon: \[ \int\limits_{-\infty}^{+\infty} dy\ f(y,r)= -\log r-1+O(r), \] so that the rapidity divergence manifests itself as a term $\sim\log r$. \section{Rapidity divergences at one loop} \label{Sec:RDs-gen} In this section we will derive general conditions for the appearance of rapidity divergences in one-loop scalar integrals with $n+1\geq 2$ external lines, among which $n_R=1$ or $2$ could be Reggeon lines. We will not consider integrals with the number of Eikonal propagators greater than $n_R$, since such integrals do not appear in the calculations described in the present paper; they will, however, arise from higher-order induced vertices. An example of an integral with one Eikonal propagator $1/\tilde{l}^+$ is given in the left diagram in Fig.~\ref{Fig:1-loop}, where the momentum of the Reggeon satisfies the modified kinematic constraint $\tilde{p}^+_n=0$. One notes that it does not matter to which end of the Eikonal propagator one attaches the Reggeon line, since $\tilde{n}_+(l+p_n)=\tilde{l}^+$. Nontrivial one-loop integrals with two external Reggeon lines will contain at least two Eikonal propagators, e.g. $1/\tilde{l}^+$ and $1/\tilde{l}^-$, and in this case one can have $m\leq n-1$ external lines connected to the loop in between the two Eikonal propagators, see the right diagram in Fig.~\ref{Fig:1-loop}. The momentum of the second Reggeon in this case satisfies the modified kinematic constraint $\tilde{k}_{m+1}^-=0$. We denote the loop momentum as $l$; the momenta which flow through the propagators can be represented as $l+p_i$, $i=0,\ldots, n$, where $p_i=\sum\limits_{j=1}^i k_j,\ p_0=0$ and $k_i$, $i=1,\ldots,n$ are the (incoming) momenta of the external particles.
In general, the integrals which we are going to study can be written as: \begin{eqnarray*} I_1&=&\int\frac{d^d l}{(l^2)^{1-\alpha_0}((l+p_1)^2)^{1-\alpha_1}\ldots ((l+p_n)^2)^{1-\alpha_n} (\tilde{n}^+l+i\varepsilon)^{1-\beta_1}}, \\ I_2&=&\int\frac{d^d l}{(l^2)^{1-\alpha_0}((l+p_1)^2)^{1-\alpha_1}\ldots ((l+p_n)^2)^{1-\alpha_n} (\tilde{n}^+l+i\varepsilon)^{1-\beta_1}(\tilde{n}^-l+\tilde{p}^-_m+i\varepsilon)^{1-\beta_2}}, \end{eqnarray*} where $d=4-2\epsilon$ and we have chosen the ``Euclidean'' pole prescriptions for the Eikonal poles, while the usual $+i\varepsilon$ prescription for the quadratic propagators is implied. Integrals with other prescriptions for the Eikonal poles can be related to these integrals by analytic continuation, as will be discussed in Sec.~\ref{Sec:Ints}. \begin{figure} \begin{center} \includegraphics[width=0.22\textwidth]{figures/1R-1Loop.pdf}\hspace{2cm} \includegraphics[width=0.28\textwidth]{figures/2R-1Loop-2s.pdf} \end{center} \caption{Scalar one-loop integrals with one (left diagram) and two (right diagram) external Reggeon lines. Eikonal propagators are denoted by double lines. \label{Fig:1-loop}} \end{figure} For the general discussion of $I_1$ and $I_2$ it is convenient to employ the ``mixed'' version of Feynman's parametrization, where the parameters for the Eikonal propagators $x_{1,2}\in[0,+\infty)$, while the parameters for the usual propagators $a_i\in[0,1]$, $i=0,\ldots, n$.
Up to an overall factor one can write: \begin{eqnarray*} I_1 \sim \int\limits_{0}^1 [d^{n+1}a] \int\limits_0^\infty \frac{dx_1}{x_1^{\beta_1}} \int d^dl\ \left[ \sum\limits_{i=0}^n a_i (l+p_i)^2 + x_1(\tilde{n}^+ l) \right]^{-n-2+\alpha+\beta_1}, \\ I_2 \sim \int\limits_{0}^1 [d^{n+1}a] \int\limits_0^\infty \frac{dx_1}{x_1^{\beta_1}} \frac{dx_2}{x_2^{\beta_2}} \int d^dl\ \left[ \sum\limits_{i=0}^n a_i (l+p_i)^2 + x_1(\tilde{n}^+ l) + x_2(\tilde{n}^- l+\tilde{p}_m) \right]^{-n-3+\alpha+\beta}, \end{eqnarray*} where $[d^{n+1}a]= \frac{da_0}{a_0^{\alpha_0}}\ldots \frac{da_n}{a_n^{\alpha_n}} \delta\left(1-\sum\limits_{i=0}^{n}a_i \right)$, $\alpha=\alpha_0+\ldots+\alpha_n$, $\beta=\beta_1+\beta_2$. After diagonalization of the quadratic form in square brackets and integration over $l$, $I_1$ and $I_2$ take the form: \begin{eqnarray} &&\hspace{-14mm}I_1\sim \int\limits_{0}^1 [d^{n+1}a] \int\limits_0^\infty \frac{dx_1}{x_1^{\beta_1}} \left[ {\cal D}+ x_1\sum\limits_{i=0}^{n} \tilde{p}^+_i a_i +rx_1^2 \right]^{-n-\epsilon-\alpha-\beta_1},\label{Eq:I1_par_r} \\ &&\hspace{-14mm}I_2\sim \int\limits_{0}^1 [d^{n+1}a] \int\limits_0^\infty \frac{dx_1}{x_1^{\beta_1}} \frac{dx_2}{x_2^{\beta_2}}\left[ {\cal D} + \sum\limits_{i=0}^{n}a_i(x_1\tilde{p}^+_i + x_2(\tilde{p}^-_i-\tilde{p}_m^-))+(1+r^2)x_1x_2 +r(x_1^2+x_2^2) \right]^{-n-1-\epsilon-\alpha-\beta}, \label{Eq:I2_par_r} \end{eqnarray} where ${\cal D}=-\frac{1}{2}\sum\limits_{i,j=0}^{n} a_i a_j (p_i-p_j)^2$ is the usual parametric polynomial, corresponding to the same integral but without Eikonal propagators. To study the rapidity divergences, one puts $r=0$; then the integrals over $x_1$ in $I_1$ and over $x_2$ in $I_2$ can be easily calculated: \begin{eqnarray} &&\left. I_1^{(+)}\right\vert_{r=0}\sim \int\limits_{0}^1 [d^{n+1}a] \left(\sum\limits_{i=0}^{n} p^+_i a_i\right)^{-1+\beta_1} {\cal D}^{-n+1-\epsilon-\alpha-\beta_1} , \label{Eq:I1_par_r-0}\\ && \left.
I_2^{(+-)}\right\vert_{r=0}\sim \int\limits_{0}^1 [d^{n+1}a] \int\limits_0^\infty \frac{dx_1}{x_1^{\beta_1}} \left(x_1+\sum\limits_{i=0}^{n}a_i(p^-_i-p_m^-) \right)^{-1+\beta_2} \left[{\cal D}+x_1 \sum\limits_{i=0}^{n-1}a_i p^+_i \right]^{-n-\epsilon-\alpha-\beta} . \label{Eq:I2_par_r-0} \end{eqnarray} In both cases, the rapidity divergence arises from the factors in circular brackets. Consider first the case when the powers of all propagators are equal to one, i.e.\ $\alpha_0=\ldots=\alpha_n=\beta_1=\beta_2=0$. Then for $I_1^{(+)}$ in the particular case $n=2$, due to the kinematic condition $p^+_n=0$, the sum in circular brackets in Eq.~(\ref{Eq:I1_par_r-0}) consists of only one term, $p_1^+a_1$, and the integral over $a_1$ is logarithmically divergent at $a_1\to 0$. This is the rapidity divergence, and it is not regularized by dimensional regularization, because ${\cal D}$ is finite when $a_1\to 0$. For $n>2$ no non-integrable singularity comes from the factor in circular brackets in Eq.~(\ref{Eq:I1_par_r-0}), because the corresponding sum contains more than one term. Integrals with $n=0$ and $n=1$ are actually more singular and will be considered in Sec.~\ref{Sec:Ints-pow}. Judging from Eqs.~(\ref{Eq:I1_par_r}) and (\ref{Eq:I1_par_r-0}), the rapidity divergence for $n=2$ can be removed in three ways. First, one can keep $r$ finite. Then the integral over $x_1$ does not produce a non-integrable singularity at $a_1\to 0$, as will be shown in Sec.~\ref{Sec:Ints}. Second, one can keep $r=0$, but set $\beta_1>0$ or $\alpha_1<0$. This corresponds to the analytic regularization of the rapidity divergence. Third, one can differentiate $I_1$ w.r.t. $k_1^2$ or $k_2^2$, thus introducing an additional factor of $a_1$ into the numerator and so removing the logarithmic divergence. Therefore the derivative of an integral w.r.t. an external scale typically does not contain a rapidity divergence and can be calculated at $r=0$. This convenient property will be exploited in Sec.~\ref{Sec:Ints-log}.
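A toy model (entirely ours, with arbitrary constants; not one of the actual loop integrals) illustrates the three options on the divergent integral $\int_0^1 da/a$:

```python
# Toy illustration (ours) of the three ways to remove the a1 -> 0
# logarithmic rapidity divergence of int_0^1 da/a discussed above.
import math

def midpoint(f, a, b, n=200000):   # simple midpoint quadrature
    h = (b - a)/n
    return h*sum(f(a + (i + 0.5)*h) for i in range(n))

r = 1e-3
# (1) keep r finite: the divergence becomes a logarithm of the regulator,
#     int_0^1 da/(a+r) = log((1+r)/r) ~ -log(r)
cutoff = midpoint(lambda a: 1.0/(a + r), 0.0, 1.0)
assert abs(cutoff - math.log((1.0 + r)/r)) < 1e-4
# (2) analytic regularization: with a weight a**(beta-1), beta > 0, the
#     r = 0 integral is int_0^1 da a**(beta-1) = 1/beta, i.e. the log(r)
#     is traded for a 1/beta pole (closed form, no quadrature needed)
# (3) an extra factor of a in the numerator (from differentiating w.r.t.
#     an external scale) makes the r -> 0 limit finite:
deriv = midpoint(lambda a: a/(a + r), 0.0, 1.0)
assert abs(deriv - (1.0 - r*math.log((1.0 + r)/r))) < 1e-4
print(cutoff, deriv)
```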
For Eq.~(\ref{Eq:I2_par_r-0}), the case $n=1$ is special. In this case $m=0$ and $p_1^-=0$ due to the kinematic constraint of the external Reggeon, so that the sum in circular brackets in Eq.~(\ref{Eq:I2_par_r-0}) consists of only one term, $x_1$, and the integral over $x_1$ is logarithmically divergent for $x_1\to 0$ if the powers of all propagators are equal to one. Since the function ${\cal D}$ does not depend on $x_1$, this divergence cannot be regularized by dimensional regularization. Again, the divergence is absent if $r>0$ (``tilted Wilson line'' regularization) or if $r=0$ but $\beta_2>0$ or $\beta_1<0$ (analytic regularization). One notices that, to analytically regularize the rapidity divergence, one always has to change the powers of the propagators forming the ``horizontal rungs'' of the ``ladder'', but this time both of these propagators are Eikonal. For $n>1$ there is no logarithmic divergence, because the sum in circular brackets in Eq.~(\ref{Eq:I2_par_r-0}) contains more than one term. From the analysis above one concludes that the kinematic constraints (\ref{Eq:kin-constr-R}) and (\ref{Eq:kin-constr-Q}) are necessary for the appearance of rapidity divergences in loop integrals. \section{Rapidity divergent scalar integrals} \label{Sec:Ints} \subsection{Scalar integrals with power rapidity divergences} \label{Sec:Ints-pow} In this section we will consider scalar integrals with one external Reggeon line, and one or two quadratic propagators. We will use the following normalization of the measure of integration over the loop momentum~\cite{1-loop-ints}: \[ [d^d l]=\frac{(\mu^2)^\epsilon d^d l}{i\pi^{d/2} r_\Gamma},\ r_\Gamma=\frac{\Gamma^2(1-\epsilon)\Gamma(1+\epsilon)}{\Gamma(1-2\epsilon)}=\frac{1}{\Gamma(1-\epsilon)}+O(\epsilon^3).
\] An integral with one Eikonal and one quadratic propagator is nonzero only if an external momentum $p$ is added in the quadratic propagator\footnote{Such integrals appear in the process of tensor reduction.}: \begin{equation} A_{[-]}(p)=\int\frac{[d^d l]}{(p+l)^2 [\tilde{l}^-]} = -\frac{\tilde{p}^-\ r^{-1+\epsilon} }{\cos(\pi \epsilon)} \frac{1}{2\epsilon (1-2\epsilon)} \left\lbrace \frac{\mu}{\tilde{p}^-} \right\rbrace^{2\epsilon}, \label{Eq:A0m} \end{equation} where $\left\lbrace \frac{\mu}{k} \right\rbrace^{2\epsilon}=\frac{1}{2}\left[ \left(\frac{\mu}{k+i\varepsilon} \right)^{2\epsilon} + \left(\frac{\mu}{-k+i\varepsilon} \right)^{2\epsilon} \right]$. The integral (\ref{Eq:A0m}) with the $1/(\tilde{l}^-+i\varepsilon)$-propagator is Euclidean, and one obtains the integral with the PV pole prescription by analytic continuation in $\tilde{p}^-$. The integral (\ref{Eq:A0m}) contains a rapidity divergence of the power type $\sim r^{-1+\epsilon}$. It is possible to treat such integrals in two ways. If one sets $r=0$, then the integral (\ref{Eq:A0m}) is actually scaleless and can be put to zero in dimensional regularization. But, as will be shown below, if one keeps $r>0$, the power divergences cancel between diagrams describing the same region in rapidity.
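The expansion $r_\Gamma = 1/\Gamma(1-\epsilon)+O(\epsilon^3)$ quoted in the normalization above is easily confirmed numerically (a quick check of ours, using the standard-library $\Gamma$-function):

```python
# Check (ours): r_Gamma = Gamma^2(1-eps) Gamma(1+eps) / Gamma(1-2eps)
# deviates from 1/Gamma(1-eps) only at O(eps^3), as stated in the text.
from math import gamma

def r_gamma(eps):
    return gamma(1 - eps)**2 * gamma(1 + eps) / gamma(1 - 2*eps)

eps = 0.01
diff = abs(r_gamma(eps) - 1/gamma(1 - eps))
assert diff < 5e-6                     # O(eps^3) difference
assert abs(r_gamma(eps) - 1) > 1e-3    # while r_Gamma - 1 itself is O(eps)
print(diff)
```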
The integral with two quadratic propagators: \[ B_{[-]}(p)=\int\frac{[d^d l]}{l^2 (p+l)^2 [\tilde{l}^-]}, \] will play an important role in our calculations. For non-light-like $p$, the leading terms of the asymptotic expansion of this integral for $r\ll 1$ can be written as: \begin{equation} B_{[-]}(p)= \frac{1}{p^- \epsilon^2} \left( \frac{\mu^2}{-p^2} \right)^\epsilon + \frac{1-2\epsilon }{\epsilon } \frac{rA_{[-]}(p)}{\tilde{p}_-^2} + \Delta B_{[-]}(-p^2, p_-) + O(r), \label{Eq:B0m} \end{equation} where we have expressed the contribution $\sim r^{\epsilon}$ in terms of the $A_{[-]}$ integral, and $\Delta B_{[-]}$ is the $O(r^{-\epsilon})$ contribution: \begin{equation} \Delta B_{[-]}(-p^2, p_-)=-\frac{1}{p_-}\left( \frac{p_-^2 \mu^2}{(-p^2)^2} \right)^\epsilon \frac{\Gamma^2(1-2\epsilon)\Gamma(1+2\epsilon)\cdot r^{-\epsilon}}{2\epsilon^2 \Gamma^2(1-\epsilon)}. \label{Eq:DB0m} \end{equation} For light-like $p$, the first and third terms in Eq.~(\ref{Eq:B0m}) are absent. These terms are also absent in the special case when $p^2<0$ but $p^-=0$. The expression for $B_{[-]}$ published by us in Ref.~\cite{gaQq-real-photon} applies only in these special cases. If $p^-=0$, the asymptotic expansion of the integral with the Euclidean pole prescription actually starts with a term $\sim r^{-1/2}$ (such a term can be found e.g.\ in Appendix 1 of Ref.~\cite{Chachamis:2012cc}, integral [MSQ1]), but this term cancels in the PV-prescription, and the asymptotic expansion of the integral with the PV-prescription starts at $O(r^{\epsilon})$. The contribution (\ref{Eq:DB0m}) was obtained by resummation of the infinite series of poles in the double Mellin-Barnes representation~\cite{Smirnov} for the integral (\ref{Eq:B0m}). Due to the fact that the $1/\epsilon^2$-pole cancels and the parametric representation for $B_{[-]}$ is only a two-fold integral, the numerical cross-check of the expression (\ref{Eq:DB0m}) is rather straightforward and does not require specialized methods such as sector decomposition~\cite{Heinrich:2008si}.
We have checked that indeed, for sufficiently small $r$, the difference between the numerical result for $B_{[-]}$ (with the Euclidean pole prescription) and the analytic result (\ref{Eq:B0m}) scales as $O(r)$ independently of the other parameters. \subsection{Scalar integrals with logarithmic rapidity divergences} \label{Sec:Ints-log} As follows from the analysis of Sec.~\ref{Sec:RDs-gen}, the simplest integral containing a logarithmic rapidity divergence is: \[ B_{[+-]}(p)=\int\frac{[d^d l]}{l^2 (p+l)^2 [\tilde{l}^+] [\tilde{l}^-]}, \] where $p_+=p_-=0$. The integral with the PV-prescription for both poles can be expressed in terms of the integral with the Euclidean pole prescription: \[ B_{(+-)}({\bf p}_T^2,r)=\int\frac{[d^d l]}{l^2 (p+l)^2 (\tilde{l}^++i\varepsilon) (\tilde{l}^-+i\varepsilon)}, \] as \begin{equation} B_{[+-]}(p)=\frac{1}{2}\left[B_{(+-)}({\bf p}_T^2,r) - e^{i\pi\epsilon} B_{(+-)}(-{\bf p}_T^2+i\varepsilon,-r+i\varepsilon) \right].\label{Eq:Bpm_AC} \end{equation} The parametric representation (with all Feynman parameters ranging from zero to infinity) for $B_{(+-)}$ has the form: \[ B_{(+-)}({\bf p}_T^2,r)=\frac{\Gamma(2+\epsilon)(\mu^2)^\epsilon}{r_\Gamma} \int\limits_{0}^\infty dx_1 (1+x_1)^{2\epsilon} \int\limits_0^\infty dy_2 \int\limits_{r y_2}^{y_2/r} \frac{dy_3}{1-r^2} \left[ x_1 {\bf p}_T^2 + y_2y_3 \right]^{-2-\epsilon}, \] where we have made the change of variables $y_2=x_2+rx_3$, $y_3=x_3+rx_2$ with the Jacobian $1/(1-r^2)$ in the original parametric representation. Taking the integral over $y_3$ one obtains that $B_{(+-)}=(J(r)-J(1/r))/(1-r^2)$, with: \[ J(r)=\frac{\Gamma(2+\epsilon)(\mu^2)^\epsilon}{r_\Gamma (1+\epsilon)} \int\limits_{0}^\infty dx_1 (1+x_1)^{2\epsilon} \int\limits_0^\infty \frac{dy_2}{y_2^{1-\alpha}} ({\bf p}_T^2 x_1 + ry_2^2)^{-1-\epsilon}, \] where we have introduced an intermediate regulator $\alpha>0$, the dependence on which should cancel between $J(r)$ and $J(1/r)$.
The intermediate regularization is needed since the integral $B_{(+-)}$ converges only if the range of integration over $y_3$ is finite. But we wish to represent the integral over the finite range in $y_3$ as a difference of two integrals, $J(1/r)$ and $J(r)$, with $y_3$ extending to $+\infty$, so the intermediate regularization is necessary to make $J(r)$ well-defined. The integral $J(r)$ is easily calculated in terms of $\Gamma$-functions, and the poles in $\alpha$ indeed cancel between $J(r)$ and $J(1/r)$, leading to the simple answer $B_{(+-)}({\bf p}_T^2,r)=\left(\mu^2/{\bf p}_T^2 \right)^\epsilon 2\log r/{\bf p}_T^2/\epsilon$, which after substitution into the analytic continuation formula (\ref{Eq:Bpm_AC}) leads to \begin{equation} B_{[+-]}({\bf p}_T^2)=\frac{1}{{\bf p}_T^2(1-r^2)}\left(\frac{\mu^2}{{\bf p}_T^2} \right)^\epsilon \frac{2\log r + i\pi}{\epsilon}. \label{Eq:B0pm} \end{equation} Another integral with a logarithmic rapidity divergence, identified in Sec.~\ref{Sec:RDs-gen}, is the integral with one external Reggeon line and three quadratic propagators: \[ C_{[-]}(t_1, Q^2, q^-)=\int\frac{[d^d l]}{l^2 (q_1+l)^2 (q_1+q+l)^2 [\tilde{l}^-]}, \] where $t_1=-q_1^2$, $Q^2=-q^2$, $(q+q_1)^2=0$ and $\tilde{q}_1^-=0$. We will first consider the case $Q^2=0$; then the on-shell condition for $q+q_1$ together with the modified kinematic constraint for $q_1$ is satisfied by $q_1^+=t_1/{q_-}+t_1^2r/q_-^3+O(r^2)$, $q_1^-=-rq_1^+$, and the parametric representation for $C_{[-]}$ with the Euclidean pole prescription takes the form: \[ C_{(-)}=-\frac{\Gamma(2+\epsilon)(\mu^2)^\epsilon}{r_\Gamma } \int\limits_0^\infty dx_1 dx_2 dx_3\ (1+x_1+x_2)^{2\epsilon} \left[ (t_1+O(r))x_1 + x_3(x_2q_- + r x_3) \right]^{-2-\epsilon}, \] where the $O(r)$-corrections to $t_1$ are independent of the Feynman parameters and can be neglected, since they lead only to $O(r)$ corrections to the integral.
The integral with the PV-prescription is obtained from this integral as: \begin{equation} C_{[-]}(t_1,Q^2,q_-)=\frac{1}{2}\left[ C_{(-)}(t_1,Q^2,q_- -i\varepsilon) - C_{(-)}(t_1,Q^2,-q_- -i\varepsilon) \right]. \label{Eq:Cm_AC} \end{equation} The anti-symmetrization of the integral $C_{(-)}$ w.r.t. the light-cone component $q_-$ in Eq.~(\ref{Eq:Cm_AC}) can be understood as the anti-symmetrization of the full amplitude w.r.t. the substitution $s\leftrightarrow -s$, and therefore the PV pole prescription indeed projects out the part of the amplitude with definite signature. The integral $C_{(-)}$ can be straightforwardly expanded in $r\ll 1$ using a one-fold Mellin-Barnes representation, which is not the case for the parametric integral obtained without the modified kinematic constraint. Then the result with the PV prescription is obtained using Eq.~(\ref{Eq:Cm_AC}): \begin{equation} C_{[-]}(t_1,Q^2=0,q^-)=\frac{1}{q^- t_1} \left(\frac{\mu^2}{t_1} \right)^\epsilon \frac{1}{\epsilon} \left[ \log r+i\pi - \log\frac{|q_-|^2}{t_1} -\psi(1+\epsilon)-\psi(1)+2\psi(-\epsilon) \right] + O(r^{1/2}). \label{Ch2:eq:C0m_1-scale} \end{equation} This integral was first calculated in Ref.~\cite{Hentschinski:2011tz}. To find the $Q^2$-dependence of $C_{[-]}$ we will use the observation of Sec.~\ref{Sec:RDs-gen} that its derivative w.r.t. any scale does not contain a rapidity divergence. Introducing the parameter $X=Q^2/t_1$, taking the derivative of the parametric representation w.r.t. $X$, and setting $r=0$, we obtain: \[ \left.\frac{\partial C_{(-)}}{\partial X}\right\vert_{r=0}=\frac{t_1 \mu^{2\epsilon} \Gamma(3+\epsilon)}{r_\Gamma} \int\limits_0^\infty dx_1 dx_2 dx_3\ x_1 (1+x_1+x_2)^{2\epsilon} \left[ t_1 x_1 (x_2+X) + q_- x_3 \right]^{-3-\epsilon}. \] This integral can be straightforwardly taken by direct integration over $x_1$, $x_2$ and $x_3$.
Introducing the function $I(X)= t_1 q_- \left(\frac{\mu^2}{t_1}\right)^{-\epsilon} \left[ C_{(-)}(X) - C_{(-)}(X=0) \right]_{r=0}$, with the boundary condition $I(0)=0$, one obtains: \[ \frac{\partial I}{\partial X} = \frac{2 X^{-1-\epsilon}}{\epsilon} - \frac{2}{\epsilon} \frac{1-X^{-\epsilon}}{1-X}, \] from which we find \begin{eqnarray*} I(X)&=&-\frac{2X^{-\epsilon}}{\epsilon^2} - \frac{2}{\epsilon} \int\limits_0^X \frac{(1-x^{-\epsilon}) dx}{1-x} = -\frac{2X^{-\epsilon}}{\epsilon^2} +2 \left[ -{\rm Li}_2(1-X)+\frac{\pi^2}{6} \right] + O(\epsilon). \end{eqnarray*} The leading $r$-dependence of the integral $C_{(-)}$ also contains an $O(r^{-\epsilon})$ term. This term is invisible to the analysis above, because the integral for $\partial C_{(-)}/\partial X$ converges at $\epsilon<0$, and this term vanishes for such $\epsilon$ in the limit $r\to 0$. As in the case of the $B_{[-]}$-integral, this term can be calculated via resummation of a series of poles in the double Mellin-Barnes representation for the integral with $r\neq 0$. Up to a sign and a dimensional factor, the $O(r^{-\epsilon})$ contribution to $C_{(-)}$ coincides with the similar contribution to Eq.~(\ref{Eq:B0m}), $-\Delta B_{[-]}(Q^2,q_-)/t_1$, so that the final result for $C_{[-]}$ at arbitrary values of $Q^2$ takes the form: \begin{equation} C_{[-]}(t_1, Q^2, q_-)=C_{[-]}(t_1,Q^2=0,q_-) + \left(\frac{\mu^2}{t_1} \right)^\epsilon \frac{I(Q^2/t_1)}{q_- t_1} -\frac{1}{t_1}\Delta B_{[-]}(Q^2,q_-)+O(r^{1/2}). \label{Eq:C0m_2-scales} \end{equation} The numerical cross-check of Eq.~(\ref{Eq:C0m_2-scales}) is more involved, because the parametric representation for this integral is three-fold and the $\Delta B_{[-]}$ contribution introduces a $1/\epsilon^2$ pole. To calculate numerically the exact $r$-dependence of $C_{(-)}$ to all orders in $\epsilon$, we have implemented the sector-decomposition algorithm~\cite{Heinrich:2008si}.
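The $\epsilon\to 0$ part of the function $I(X)$ introduced above can also be cross-checked directly (our sketch, with ${\rm Li}_2$ evaluated by its power series): the derivative of the finite piece $2\left[-{\rm Li}_2(1-X)+\pi^2/6\right]$ must reproduce the $O(\epsilon^0)$ term $-2\log X/(1-X)$ of $\partial I/\partial X$, the $2X^{-1-\epsilon}/\epsilon$ term being accounted for by $-2X^{-\epsilon}/\epsilon^2$.

```python
# Check (ours): d/dX of the finite part of I(X),
#   I_fin(X) = 2*(pi^2/6 - Li2(1-X)),
# equals -2*log(X)/(1-X), the eps -> 0 limit of the remaining term of dI/dX.
import math

def li2(z, terms=400):
    # power series Li2(z) = sum_{k>=1} z^k / k^2, convergent for |z| <= 1
    return sum(z**k / k**2 for k in range(1, terms + 1))

def i_fin(x):
    return 2*(math.pi**2/6 - li2(1 - x))

x, h = 0.5, 1e-5
numeric = (i_fin(x + h) - i_fin(x - h))/(2*h)   # central difference
exact = -2*math.log(x)/(1 - x)
assert abs(numeric - exact) < 1e-6
print(numeric, exact)
```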
The integral in front of $1/\epsilon^2$ was calculated analytically, and this contribution was subtracted from the analytic result~(\ref{Eq:C0m_2-scales}) for comparison with the numerical results. Numerically, with the help of the deterministic CUHRE algorithm of the CUBA library~\cite{CUBA}, we have calculated the exact $r$-dependence only for the $O(1/\epsilon)$ and finite parts of the integral $C_{(-)}$. \section{One-loop correction to the $Q\gamma^\star q$-vertex} \label{Sec:1-loop-vert} \subsection{Calculation of the correction, cancellation of power divergences} It was found that the difference between the analytic result (\ref{Eq:C0m_2-scales}) with the $O(1/\epsilon^2)$ contribution subtracted and the numerical results indeed scales as $O(r^{1/2})$ for sufficiently small $r$, which confirms Eq.~(\ref{Eq:C0m_2-scales}). In this section we will calculate the one-loop correction $\Gamma^{(1)}_{+\mu}$ to the $Q_+\gamma^\star q$-vertex with the off-shell photon. It is given by the EFT diagrams (1 -- 3) in Fig.~\ref{Fig:Q-ga-q_corr}. We will denote the incoming momentum of the Reggeized quark as $q_1$, $q_1^2=-t_1$, $q_1^-=0$, and the incoming momentum of the photon as $q$, $q^2=-Q^2$; the outgoing quark is on-shell, $(q+q_1)^2=0$, so we will consider the projection of this vertex onto the on-shell spinor of the QCD quark. To simplify the kinematics, we work in the reference frame where the off-shell photon has no transverse momentum and a ``large'' $q_-\gg \sqrt{Q^2}$, so that its virtuality is given by $q^2=q_+q_-=-Q^2$. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{./figures/v_corr+SE.pdf} \end{center} \caption{Diagrams contributing to the one-loop correction to the $Q\gamma^\star q$-vertex and the self-energy of the Reggeized quark (diagram 4).
Dashed lines denote Reggeized quarks.}\label{Fig:Q-ga-q_corr} \end{figure} The expressions for diagrams (1--3) are: \begin{eqnarray} {\cal M}_{\mu,ij}^{(1)}&=&e e_q \frac{g_s^2 C_F\delta_{ij}}{(2\pi)^D} \int d^d l\ \frac{\bar{u}(q+q_1)\gamma_\mu\hat{q}_1\gamma^{\rho} (\hat{q}_1+\hat{l})\tilde{\Gamma}^{(0)}_{+\rho}(q_1,l) \hat{P}_+}{q_1^2\ l^2\ (q_1+l)^2}, \label{Eq:M1} \\ {\cal M}^{(2)}_{\mu,ij}&=&e e_q \frac{g_s^2 C_F\delta_{ij}}{(2\pi)^D} \int d^d l\ \frac{\bar{u}(q+q_1)\gamma^\rho (\hat{q}_1+\hat{q}+\hat{l}) \gamma_\mu (\hat{q}_1+\hat{l})\tilde{\Gamma}^{(0)}_{+\rho}(q_1,l)\hat{P}_+ }{l^2\ (q_1+q+l)^2\ (q_1+l)^2}, \label{Eq:M2} \\ {\cal M}_{\mu,ij}^{(3)}&=&e e_q \frac{g_s^2 C_F\delta_{ij}}{(2\pi)^D} \int d^d l\ \frac{\bar{u}(q+q_1)\gamma^\rho (\hat{q}_1+\hat{q}+\hat{l}) \tilde{\Gamma}^{(0)}_{+\mu\rho}(q_1,q,l)\hat{P}_+ }{ l^2\ (q_1+q+l)^2}, \label{Eq:M3} \end{eqnarray} where $i$ and $j$ are the color indices of the quarks, $C_F=(N_c^2-1)/2N_c$, $\tilde{\Gamma}^{(0)}_{+\rho}$ is the regularized $Q_+gq$-vertex (\ref{Eq:Qgq-vert}), the $Q_+\gamma gq$-vertex $\tilde{\Gamma}^{(0)}_{+\mu\rho}(q_1,q,l)$ $=-\hat{q}_1(\tilde{n}_\mu^- \tilde{n}_\rho^-)/(q^- [\tilde{l}^-])$ can be derived from the $Q_+ggq$-vertex of the Lagrangian~(\ref{Eq:L-Qq}), and the projector $\hat{P}_+$ (\ref{Eq:Proj-Qpm}) enforces the constraint (\ref{Eq:kin_cons_Q2}) for the Dirac structure of the $Q_+$-field. For the evaluation of diagrams (1 -- 3) we reduced the tensor loop integrals down to scalar ones. Diagrams (1) and (2) can be separated into ``standard'' and ``Eikonal'' parts, where the latter contains the Eikonal denominator $[\tilde{l}^-]$. Diagram (3) has only an Eikonal part. Tensor reduction of the standard parts can be performed by the usual methods and tools, e.g.\ by the routines of the FeynCalc package~\cite{FeynCalc}.
The tensor reduction of integrals containing the Eikonal denominator is completely analogous to this procedure, except that now the integral also depends on the vector $\tilde{n}^-$, so this vector has to be added to the ansatz for the tensor structure. For example, for the rank-one three-point integral one has: \begin{equation} \int \frac{d^d l \cdot l^{\rho}}{D_1 D_2 D_3 D_4}= c_1\cdot (q_1+q)^{\rho} + c_2\cdot q_1^{\rho} + c_3\cdot \tilde{n}^{\rho}_- , \label{Eq:1-rank-ans} \end{equation} where $D_1=(q_1+q+l)^2$, $D_2=(q_1+l)^2$, $D_3=l^2, D_4=\tilde{l}^-$. Contracting Eq.~(\ref{Eq:1-rank-ans}) with the linearly-independent vectors which form the decomposition in the r.h.s. of this equation, one obtains a system of linear equations for the coefficients $c_i$. The r.h.s. of this system consists of linear combinations of integrals with scalar products in the numerator. All these scalar products can be expressed in terms of the denominators $D_i$, thus leading to linear combinations of integrals with some denominators canceled. Hence, solving the linear system for the $c_i$, one expresses them as linear combinations of scalar integrals with the same set of denominators $\{D_i\}$ or its subsets. At some stages of the tensor reduction it is necessary to shift the loop momentum by $q_1$ to restore the $l^2$ denominator. We assume that the Eikonal denominator is insensitive to these shifts, $\tilde{l}^-\to \tilde{n}^-(l\pm q_1)=\tilde{l}^-$, due to the modified kinematic constraint. In the case of the three-point function, the determinant of the above-mentioned linear system (the Gram determinant) is nonzero for $r\to 0$, so one can simplify the numerator of diagram (2), ignoring all $O(r)$ terms. This is also the case for diagram (3), but not for diagram (1), for which the Gram determinant is equal to $-4t_1r + O(r^2)$, so in this case one has to keep the $O(r)$ terms in the numerator.
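To illustrate the contraction step, denote $v_1=(q_1+q)$, $v_2=q_1$, $v_3=\tilde{n}^-$; the resulting linear system reads schematically (this is a restatement of the procedure just described, not an additional result):

```latex
\[
\sum\limits_{j=1}^{3} (v_i\cdot v_j)\, c_j
= \int \frac{d^d l\ (v_i\cdot l)}{D_1 D_2 D_3 D_4},
\qquad i=1,2,3,
\]
```

where, for example, $2(q_1\cdot l)=(q_1+l)^2-l^2-q_1^2=D_2-D_3-q_1^2$ and $\tilde{n}^-\cdot l=D_4$, so the right-hand sides reduce to linear combinations of scalar integrals with some of the denominators canceled, as stated above.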
The sum of the three diagrams can be expressed as: \[ {\cal M}_\mu^{(1)}+{\cal M}_\mu^{(2)}+{\cal M}_\mu^{(3)}= iee_q\bar{u}(q+q_1)\Gamma_{+\mu}^{(1)}(q_1,q), \] where the one-loop correction to the $Q\gamma^\star q$-vertex $\Gamma_{+\mu}^{(1)}(q_1,q)$ is naturally decomposed as: \[ \Gamma_{+\mu}^{(1)}(q_1,q)=C[\Gamma]\cdot\Gamma_{+\mu}^{(0)}(q_1,q)+C[\Delta^{(1)}]\cdot\Delta_{+\mu}^{(1)}(q_1,q)+C[\Delta^{(2)}]\cdot\Delta_{+\mu}^{(2)}(q_1,q), \] where $\Delta_{+\mu}^{(1)}(q_1,q)=\frac{\hat{q}}{q_-} \left( n_\mu^- - \frac{2(q_1)_\mu}{q_1^+} \right)$ and $\Delta^{(2)}_{+\mu}(q_1,q)=\frac{\hat{q}}{q_-} \left( n_\mu^- - \frac{q_\mu}{q^+} \right)$. All three Dirac structures satisfy the QED Ward identities: $q^\mu \Gamma_{\mu}^{(0)}(q_1,q)=0$, $q^\mu \Delta_\mu^{(1)}(q_1,q)=0$, $q^\mu \Delta_\mu^{(2)}(q_1,q)=0$, but only $\Gamma_{+\mu}^{(0)}$ and $\Delta_{+\mu}^{(1)}$ were found for the case of a real photon in Ref.~\cite{gaQq-real-photon}. The structure $\Delta_{+\mu}^{(2)}$ contributes only for non-zero $Q^2$, because the coefficient in front of it decreases as $O(Q^2)$. Also, it does not contribute to the $F_2$ structure function of DIS. The coefficients in front of the independent gauge-invariant structures are linear combinations of the following eight scalar integrals \[ B(q),\ B(q_1),\ A_{[-]}(q),\ B_{[-]}(q),\ B_{[-]}(q_1),\ B_{[-]}(q+q_1),\ C(-q_1^2,-q^2),\ C_{[-]}(-q_1^2,-q^2,q^-), \] where we have introduced a short-hand notation for the standard one-loop bubble and triangle integrals of Ref.~\cite{1-loop-ints}: $B(q)=B(-q^2)=I_2^{D}(q^2;0,0)$, $C(-q_1^2,-q^2)=I^{D}_3(0,q_1^2,q^2;0,0,0)$. The expressions for the coefficients in terms of scalar integrals are as follows: \begin{eqnarray} C[\Gamma]&=& -\frac{\bar{\alpha}_s C_F}{4\pi} \frac{1}{2} \left\{ \frac{[(d-8)Q^2+(d-6)t_1]B(t_1)-2(d-7)Q^2 B(Q^2)}{Q^2-t_1} \right. \nonumber \\ && \left.
-2\left[ (Q^2-t_1)C(t_1,Q^2)- q_- \left( t_1 C_{[-]}(t_1,Q^2,q_-)+(B_{[-]}(q)-B_{[-]}(q+q_1)) \right) \right] \right\}, \label{Eq:C-Gm} \\ C[\Delta^{(1)}]&=& -\frac{\bar{\alpha}_s C_F}{4\pi} \frac{(Q^2+t_1) }{2(Q^2-t_1)^2}\left[ \left( (d-2)Q^2-(d-4)t_1 \right)B(t_1) -2Q^2 B(Q^2) \right] ,\label{Eq:C-D1} \\ C[\Delta^{(2)}]&=& -\frac{\bar{\alpha}_s C_F}{4\pi} \frac{Q^2}{(Q^2-t_1)^2}\left[ \left( (d-6)t_1-(d-8)Q^2 \right)B(Q^2) +2(t_1-2Q^2) B(t_1) \right] ,\label{Eq:C-D2} \end{eqnarray} where $\bar{\alpha}_s=\frac{\mu^{-2\epsilon} g_s^2}{(4\pi)^{1-\epsilon}} r_\Gamma$ is the dimensionless strong-coupling constant, and the Gram-determinant singularity at $t_1=Q^2$ cancels once the expressions for the scalar integrals are substituted. We observe the following pattern of cancellations of power rapidity divergences in the coefficients $C[\ldots]$. The coefficient in front of the most singular integral $A_{[-]}(q)$ vanishes if the projector $\hat{P}_+$ in Eqns.~(\ref{Eq:M1}) -- (\ref{Eq:M3}) is kept in its place; otherwise the contribution of this integral is nonzero. The same cancellation happens for the coefficient in front of $B_{[-]}(q_1)$. The integrals $B_{[-]}(q)$ and $B_{[-]}(q+q_1)$ appear only in $C[\Gamma]$, and the coefficients in front of them {\it are equal and opposite in sign}, again if the projector $\hat{P}_+$ is present. This means that the $O(r^\epsilon)$ terms in Eq.~(\ref{Eq:B0m}) cancel between the integrals $B_{[-]}(q+q_1)$ and $B_{[-]}(q)$, and only the $Q^2$-dependent remainder of the latter integral is left. Apart from the $r$-independent term, the above-mentioned remainder contains the term $\Delta B_{[-]}(Q^2,q_-)$ (\ref{Eq:DB0m}), which is $O(r^{-\epsilon})$. Since the integral $B_{[-]}(q)$ enters together with $C_{[-]}(t_1,Q^2,q_-)$ in the combination $t_1 C_{[-]}(t_1,Q^2,q_-)+(B_{[-]}(q)-B_{[-]}(q+q_1))$ (see Eq.~(\ref{Eq:C-Gm})), the $\Delta B_{[-]}(Q^2,q_-)$-term cancels between these two integrals due to Eq.~(\ref{Eq:C0m_2-scales}).
As a result, no power divergences or $O(r^{\pm\epsilon})$-terms are left, and {the only rapidity divergence in the one-loop correction to the $\Gamma^{(0)}_{+\mu}$ vertex is the $O(\log r)$-divergence from the integral $C_{[-]}$.} In particular, this fact means that the $O(\log^2 r)$ terms, which would show up if we had expanded all integrals in $\epsilon$, should cancel out. One should check such a cancellation also in the approach of Ref.~\cite{vanHameren:2017hxx}. Substituting the explicit results for the Feynman integrals, exact in $\epsilon$, one can take the limit $Q^2\to 0$ for $\epsilon<0$ and reproduce our results for the on-shell-photon case~\cite{gaQq-real-photon}. Our answer in this limit can also be related to the results of Ref.~\cite{Kotsky:2002aq} for the one-loop correction to the $Qgq$-vertex obtained from QCD, although in that paper the gluon is on-shell while the quark is massive. After substitution of the explicit expressions for the scalar integrals and expansion in $\epsilon$, the coefficients take the form: \begin{eqnarray} C[\Gamma]&=& -\frac{\bar{\alpha}_s C_F}{4\pi}\left\{\frac{1}{\epsilon^2} + \frac{L_1+1}{\epsilon} + \left( \frac{1}{\epsilon} +\log \frac{\mu^2}{t_1} \right) (\log\bar{r}+i\pi) \right. \nonumber \\ &+&\left. \left[ -2{\rm Li}_2\left(1-\frac{Q^2}{t_1}\right) + L_1 + \frac{L_1^2-L_2^2}{2} - \frac{(2Q^2+t_1)L_2}{Q^2-t_1} - \frac{\pi^2}{6} + 3 \right]\right\}+O(r,\epsilon), \label{eq:C-Gm-Isub}\\ C[\Delta^{(1)}]&=& -\frac{\bar{\alpha}_s C_F}{4\pi} \frac{(Q^2+t_1)(Q^2(L_2-1)+t_1)}{(Q^2-t_1)^2}+O(r,\epsilon), \label{eq:C-D1-Isub} \\ C[\Delta^{(2)}]&=& -\frac{\bar{\alpha}_s C_F}{4\pi}\frac{2Q^2(Q^2-t_1-(2Q^2-t_1)L_2)}{(Q^2-t_1)^2}+O(r,\epsilon), \end{eqnarray} where $L_1=\log(\mu^2/Q^2)$, $L_2=\log(Q^2/t_1)$ and $\bar{r}= rQ^2/q_-^2$. The results obtained above can be used to compute the one-loop correction to the partonic coefficient of the $F_2$ DIS structure function in PRA.
In this approach, the one-loop correction to the partonic tensor corresponding to the subprocess $Q_+(q_1)+\gamma^\star(q)\to q$ has the form (see e.g. Eq.~(5) in Ref.~\cite{NS_DIS1}): \[ \hat{w}^{(1)}_{\mu\nu}=\frac{e^2e_q^2}{2}\cdot 2{\rm Re\ tr}\left[ (\hat{q}+\hat{q}_1) \Gamma^{(1)}_{+\mu} \frac{q_1^+ \hat{n}_-}{2} \Gamma^{(0)}_{+\nu} \right], \] where the factor $1/2$ accounts for the averaging over the quark's spin. Projecting this tensor on the $F_2$ structure function and dividing by the $d$-dimensional Born result for the partonic coefficient, one obtains the following expression for the one-loop correction factor to the DIS partonic coefficient in PRA: \begin{eqnarray} C^{(1)}_{2q}(Q^2, t_1, \bar{r})=\frac{\bar{\alpha}_s C_F}{2\pi} \left\{-\frac{1}{\epsilon^2} -\frac{1+L_1}{\epsilon} -\left(\frac{1}{\epsilon} + \log\frac{\mu^2}{t_1} \right)\log\bar{r} +\left(\frac{\pi^2}{6}-2-L_1\right)\right. \nonumber\\ \left. +\frac{L_2^2-L_1^2}{2}+2{\rm Li}_2\left(1-\frac{Q^2}{t_1} \right) -\frac{1}{(Q^2-t_1)^2} \left[ Q^2(Q^2-t_1)+(t_1^2-2Q^4)L_2 \right] +O(r,\epsilon) \right\}. \label{eq:F2-CV} \end{eqnarray} This result will be used for the calculation of the $F_2$ structure function at NLO in PRA. This calculation will allow us to arrange the scheme of NLO calculations in PRA in such a way that, for single-scale observables such as $F_2$, PRA reproduces the NLO results of the Collinear Parton Model up to higher orders in $\alpha_s(Q^2)$ and higher-twist terms; see Refs.~\cite{NS_DIS1, NS_DIS2} for more details. \subsection{One Reggeon exchange. Comparison with QCD} \label{Sec:comp-QCD} To check the result obtained above and to demonstrate the self-consistency of the EFT~\cite{Lipatov95, LV}, we will compare the Multi-Regge limit of a specific one-loop QCD amplitude with the result obtained from the EFT.
To work with a scalar quantity, we will consider the interference between the one-loop and tree-level amplitudes of the subprocess: \begin{equation} \gamma^\star(q)+\gamma(P)\to q(q+q_1)+\bar{q}(P-q_1),\label{Eq:ga_ga-q_q} \end{equation} with massless quarks, projected on the $F_2$ structure function for the indices of the off-shell photon. For convenience, we work in the center-of-mass frame of the momenta $P$ and $q$. In terms of the usual Bjorken variable $x_B=Q^2/2(Pq)$, the light-cone components of $P$ and $q$ can be expressed in this frame as: \begin{eqnarray*} q^+=-x_B P_+,\ q^-=\frac{Q^2}{x_B P_+},\ {\bf q}_T=0;\ P_+=\sqrt{\frac{Q^2}{x_B (1-x_B)}},\ P_-={\bf P}_T=0, \end{eqnarray*} so that in the MRK limit $x_B\to 0$: $P^+\to \infty$, $q^-\to\infty$, while $q^+\to 0$. First, using FeynArts~\cite{FeynArts} and FeynCalc~\cite{FeynCalc}, we calculate the interference of the one-loop amplitude of the process (\ref{Eq:ga_ga-q_q}), given by the diagrams in the left panel of Fig.~\ref{Fig:1-loop-QCD}, with its tree-level counterpart. We contract the indices of the real photon, while the indices of the virtual photon are projected on the $F_2$ structure function using the projector: \[ P_2^{\mu\nu}=\frac{Q^2}{(d-2) (Pq)} \left[ -g^{\mu\nu} + Q^2 (d-1)\frac{P^\mu P^\nu}{(Pq)^2} \right]. \] Then we divide out the $d$-dimensional Born result and expand the obtained expression in the limit $x_B\to 0$ (after the expansion in $\epsilon$); the leading term of this expansion is: \begin{eqnarray} F^{\rm (1,QCD)}_{2}(Q^2, t_1, x_B)=\frac{\bar{\alpha}_sC_F}{4\pi}\left\{ -\frac{2}{\epsilon^2} -\frac{3}{\epsilon}-\frac{2L_1}{\epsilon} + \left( \frac{2\pi^2}{3}-7-L_1^2-3L_1 \right) +\right. \nonumber \\ + \left(\frac{1}{\epsilon}+\log\frac{\mu^2}{t_1}\right)\left(\log \frac{1}{x_B^2}-2\pi i \right) +L_2^2+2{\rm Li}_2\left(1-\frac{Q^2}{t_1}\right) \nonumber \\ \left.
- \frac{1}{(Q^2-t_1)^2}\left[ Q^2(Q^2-t_1)+(3t_1^2-4Q^2 t_1)L_2 \right] \right\}+O(\epsilon, x_B), \label{eq:C_2-QCD} \end{eqnarray} and it should be reproduced by the EFT. \begin{figure} \begin{center} \parbox{0.5\textwidth}{\includegraphics[width=0.5\textwidth]{./figures/gaga_qq-eps-converted-to.pdf}}\hfill \parbox{0.4\textwidth}{\includegraphics[width=0.4\textwidth]{./figures/gaga-qq_EFT.pdf}} \end{center} \caption{Diagrams contributing to the amplitude of the process (\ref{Eq:ga_ga-q_q}) with massless quarks in QCD (left figure) and the one-Reggeon exchange contribution in the EFT (right figure).\label{Fig:1-loop-QCD}} \end{figure} In this section we consider the EFT contribution to the MRK limit of the amplitude of the process (\ref{Eq:ga_ga-q_q}) with one Reggeon in the $t$-channel. It can be constructed from three contributions (right panel of Fig.~\ref{Fig:1-loop-QCD}): \begin{eqnarray*} && {\cal M}_{\mu\rho}^{(1,+)} = -i\bar{u}(q+q_1) \Gamma_{+\mu}^{(1)} \frac{\hat{q}_1}{q_1^2} \Gamma_{-\rho}^{(0)} v(P),\ {\cal M}_{\mu\rho}^{(1,-)} = -i\bar{u}(q+q_1) \Gamma_{+\mu}^{(0)} \frac{\hat{q}_1}{q_1^2} \Gamma_{-\rho}^{(1)} v(P), \\ && {\cal M}_{\mu\rho}^{(1,\Sigma)}=\bar{u}(q+q_1) \Gamma_{+\mu}^{(0)} \frac{\hat{q}_1}{q_1^2}\Sigma(q_1) \frac{\hat{q}_1}{q_1^2} \Gamma_{-\rho}^{(0)} v(P),\ {\cal M}_{\mu\rho}^{(0)} = -i\bar{u}(q+q_1) \Gamma_{+\mu}^{(0)} \frac{\hat{q}_1}{q_1^2} \Gamma_{-\rho}^{(0)} v(P), \end{eqnarray*} where ${\cal M}_{\mu\rho}^{(1,+)}$ is the contribution of the one-loop correction to the $Q_+\gamma^\star q$-vertex, ${\cal M}_{\mu\rho}^{(1,-)}$ is the contribution of the one-loop correction to the $Q_-\gamma q$-vertex, and ${\cal M}_{\mu\rho}^{(1,\Sigma)}$ is the contribution of the one-loop correction $\Sigma(q_1)$ to the propagator of the Reggeized quark (diagram (4) in Fig.~\ref{Fig:Q-ga-q_corr}), which is related to the integral (\ref{Eq:B0pm}) and is given in Eq.~(16) of Ref.~\cite{gaQq-real-photon}.
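The light-cone parametrization of $P$ and $q$ given above can be checked symbolically. The following sketch (using Python's sympy, with the convention $a^2=a^+a^--{\bf a}_T^2$; variable names are ours) verifies that $q^2=-Q^2$ and that $x_B=Q^2/2(Pq)$ is reproduced:

```python
import sympy as sp

# Q2 = photon virtuality Q^2, xB = Bjorken variable (both positive)
Q2, xB = sp.symbols('Q2 x_B', positive=True)

Pp = sp.sqrt(Q2 / (xB * (1 - xB)))  # P_+ as given in the text; P_- = P_T = 0
qp = -xB * Pp                       # q^+
qm = Q2 / (xB * Pp)                 # q^-, with q_T = 0

q2 = sp.simplify(qp * qm)           # a^2 = a^+ a^- - a_T^2, here a_T = 0
Pq = sp.simplify(Pp * qm / 2)       # (Pq) = (P^+ q^- + P^- q^+)/2, P^- = 0

print(q2)                           # -> -Q2, i.e. q^2 = -Q^2
print(sp.simplify(Q2 / (2 * Pq)))   # -> x_B, consistent with x_B = Q^2/2(Pq)
```

In the MRK limit $x_B\to 0$ the expressions above indeed give $P^+\to\infty$ and $q^-\to\infty$ while $q^+\to 0$, as stated in the text.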
The tensor describing the interference of the one-loop and tree-level amplitudes for the one-Reggeon contribution in the EFT can be decomposed as: \begin{equation} W^{\rm (1, Q)}_{\mu\nu}=W_{\mu\nu}^{(1,+)}+W_{\mu\nu}^{(1,-)}-W_{\mu\nu}^{(1,\Sigma)}, \label{Eq:W-dec} \end{equation} where $W^{(1,+)}_{\mu\nu}=\sum\limits_{\rm spins} {\cal M}^{(1,+)}_{\mu\rho} (-g^{\rho\sigma}) {\cal M}^{(0)}_{\nu\sigma}$ and so on. The minus sign in front of the self-energy contribution in Eq.~(\ref{Eq:W-dec}) appears due to the necessity to ``localize'' the one-loop corrections to the effective vertices in rapidity~\cite{Hentschinski:2011tz,Chachamis:2012cc,Chachamis:2012gh,Chachamis:2013hma,gaQq-real-photon}. As noted at the end of Sec.~\ref{Sec:EFT}, our regularization of rapidity divergences in real corrections acts as a smooth cutoff at rapidities $\pm\log(1/r)/2$. It is instructive to extend this point of view also to the case of virtual corrections. Then the following picture emerges. The rapidity of the loop momentum in the ``backward'' $\Gamma_{+\mu}^{(1)}$-vertex is naturally bounded from below by a scale $\sim -\log q_-/\sqrt{t_1}$, while, due to the regularization of the $1/l^-$-pole, it is also bounded from above by $\log(1/r)/2$. For the ``forward'' vertex the situation is reversed: the rapidity of the gluon in the loop is bounded from above by the natural rapidity scale of this vertex, $\sim\log P_+/\sqrt{t_1}$, while from below it is cut off at $-\log(1/r)/2$ due to the regularization of the $1/l^+$-pole. Finally, for the ``central'' contribution, which is given by the Reggeized-quark self-energy containing both poles, the rapidity of the loop momentum is bounded from below by $-\log (1/r)/2$ and from above by $\log(1/r)/2$. To make the logarithmic divergences cancel, we should remove the double counting of the region $-\log (1/r)/2\leq y \leq \log (1/r)/2$ from the contributions of the ``forward'' and ``backward'' vertices, i.e. we should subtract the central contribution from each of them.
In total, we subtract $2W_{\mu\nu}^{(1,\Sigma)}$ and add back $W_{\mu\nu}^{(1,\Sigma)}$, thus obtaining the minus sign in front of this contribution in Eq.~(\ref{Eq:W-dec}). As a result of this procedure, the logarithmic rapidity divergences cancel between the ``forward'', ``backward'' and ``central'' contributions, and we are left with an $r$-independent result for $W^{\rm (1, Q)}_{\mu\nu}$. Projecting it on the $F_2$ structure function and discarding $O(x_B)$-terms, we exactly reproduce the real part of the QCD result (\ref{eq:C_2-QCD}), while the {\it imaginary part} of the QCD result is twice as large as the imaginary part of the one-Reggeon exchange (Regge-pole) contribution (\ref{Eq:W-dec})\footnote{The same is true for the imaginary part of the amplitude studied in our Ref.~\cite{gaQq-real-photon}. In Eq.~(21) of that reference, the imaginary part of the one-Reggeon exchange amplitude is written, and our statement there that it accounts for the full imaginary part of the amplitude is wrong. Instead, one should also consider the two-Reggeon exchange diagrams, as we do in Sec.~\ref{Sec:Imag}.}. The reason for this mismatch is that the one-Reggeon exchange diagrams, which we have taken into account, are responsible only for the {\it positive-signature part} of the amplitude of the process (\ref{Eq:ga_ga-q_q}), while the negative-signature part is given by diagrams with two Reggeons in the $t$-channel, as will be shown in the next section. \subsection{Two Reggeon exchange. Imaginary part} \label{Sec:Imag} In the EFT~\cite{Lipatov95,LV}, the negative-signature part of the amplitude of the process (\ref{Eq:ga_ga-q_q}), at the order we are considering, is given by the two-Reggeon contribution, depicted diagrammatically in Fig.~\ref{Fig:2-R}.
This quantity naturally factorizes into a product of two impact-factors, connected by the propagators of the Reggeized quark and gluon: \begin{equation} {\cal M}_{\mu\rho}^{(QR)}=\frac{g_s^2 C_F}{2} (ee_q)^2 \int \frac{d^{d-2}{\bf l}_T}{(2\pi)^d}\frac{A_\mu^+(l_T)\hat{l}_T A_\rho^-(l_T)}{{\bf l}_T^2 ({\bf q}_{T1} - {\bf l}_T)^2} ,\label{Eq:M_QR-fact} \end{equation} where the overall factor $1/2$ comes from the propagator of the Reggeized gluon~(\ref{Eq:L_kin}). In Eq.~(\ref{Eq:M_QR-fact}) we have decomposed the loop-momentum integration measure as $d^{d}l=d^{d-2}{\bf l}_T (dl_+ dl_-)/2$ and moved the integrations over the light-cone components $l_+$ and $l_-$ into the corresponding impact-factors. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{./figures/2-Reggeon.pdf} \end{center} \caption{Factorization of the two-Reggeon contribution into a product of impact-factors, connected by Reggeon propagators (left panel), and the structure of the impact-factor $A_+$ (right panel). The impact-factor $A_-$ contains diagrams of the same topology, up to the $+\leftrightarrow -$ exchange.}\label{Fig:2-R} \end{figure} The impact-factor $A_+$ is given by the three diagrams shown in Fig.~\ref{Fig:2-R}. The contribution of diagram (1) is: \[ A_\mu^{(+,1)}=i\int\limits_{-\infty}^{+\infty}\frac{dl_+}{\sqrt{2}}\frac{\bar{u}(q+q_1) \hat{n}_- \left(q_-\hat{n}_+/2+ (l_++q_+)\hat{n}_-/2+\hat{l}_T\right)\Gamma^{(0)}_{+\mu}(l,q) \hat{P}_+}{q_-l_+ -{\bf l}_T^2+i\varepsilon}, \] where the first factor $\hat{n}_-$ in the numerator comes from the $R_+q\bar{q}$-vertex: $iT^b \gamma^\rho\cdot (-i/q^2)\cdot\Delta^{ab}_{+\rho}(q)=-iT^a \hat{n}_-$, and, due to the MRK kinematic constraint, the $l^-$-component of the loop momentum does not propagate into the $A_+$ impact-factor, so the denominator of the quark propagator does not depend on it.
The numerator of $A_+$ simplifies to \[ 2q_- \bar{u}(q+q_1) \hat{P}_+ \gamma_\mu \hat{P}_+ = 2q_- \bar{u}(q+q_1) \Gamma_{+\mu}^{(0)}(q_1,q) \hat{P}_+, \] so it depends neither on $l^+$ nor on ${\bf l}_T$, and for the contribution of the first diagram we obtain the representation: \begin{equation} A_\mu^{(+,1)}= \left( \bar{u}(q+q_1) \Gamma_{+\mu}^{(0)}(q_1,q) \hat{P}_+\right)\int\limits_{-\infty}^{+\infty} \frac{i\sqrt{2}\ dl_+}{l_+-{\bf l}_T^2/q_- + i\varepsilon },\label{Eq:Ap_1} \end{equation} which is ill-defined by itself and becomes convergent only when combined with the other diagrams in Fig.~\ref{Fig:2-R}. The contribution of diagram (2) in Fig.~\ref{Fig:2-R} is equal to zero, because the $Q_+ q R_+$-vertex vanishes: \[ \Delta_{+\rho}(q_1-l)\frac{-ig^{\rho\sigma}}{(q_1-l)^2}\Gamma_{+\sigma}^{(0)}(l,q_1-l)\hat{P}^+=0. \] Finally, we have to ``localize in rapidity'' the impact-factor $A_+$ to avoid double-counting with the Regge-pole (positive-signature) contribution. To this end we subtract diagram (3) in Fig.~\ref{Fig:2-R} from the impact-factor. A similar procedure applies, e.g., in the case of the $qR_+R_+q$ impact-factor (see Sec.~3.2 of Ref.~\cite{MHThesis}), where an analogous subtraction removes the double counting between the contribution with two Reggeized gluons in the $t$-channel (positive signature) and the one-Reggeized-gluon contribution (negative signature).
Diagram (3) gives: \begin{eqnarray} A_\mu^{(+,3)}&=& -i \int\limits_{-\infty}^{+\infty}\frac{dl_+}{\sqrt{2}}\bar{u}(q+q_1) \Gamma_{+\mu}^{(0)}(q_1,q)\hat{P}_+ \frac{\hat{q}_1}{q_1^2} \hat{P}_+\Gamma^{(0)}_{++-}(l,l-q_1,-q_1)\hat{P}_+ \nonumber \\ &=& \left( \bar{u}(q+q_1) \Gamma_{+\mu}^{(0)}(q_1,q) \hat{P}_+\right) \int\limits_{-\infty}^{+\infty} \frac{i\sqrt{2}\ dl_+}{[l^+-q_1^+]}, \label{Eq:Ap_3} \end{eqnarray} where the $Q_+R_+Q_-$ vertex $\hat{P}_+ \Gamma^{(0)}_{++-}(q_1,q_2,q_3)\hat{P}_+$ is obtained from the $Q_+gQ_-$-vertex (\ref{Eq:FS-vertex}) as $\hat{P}_+ \Delta_{+\rho}(q_2) \times \frac{-ig^{\rho\sigma}}{q_2^2}\times $ $\Gamma_{+\sigma-}^{(0)}(q_1,q_2,q_3)\hat{P}_+ $ $= 2\hat{q}_{T3}/[l_+]$. Combining Eqs.~(\ref{Eq:Ap_1}) and (\ref{Eq:Ap_3}), one obtains: \begin{eqnarray*} A_\mu^{(+)}&=&A_{\mu}^{(+,1)}-A_{\mu}^{(+,3)}= \left( \bar{u}(q+q_1) \Gamma_{+\mu}^{(0)}(q_1,q) \hat{P}_+\right) \int\limits_{-\infty}^{+\infty} \frac{i\sqrt{2}\ dl_+ (l_+-q_1^+)\left( -q_1^++{\bf l}_T^2/q_-+i\varepsilon \right)+O(\varepsilon^2)}{\left(l_+-{\bf l}_T^2/q_- + i\varepsilon \right) (l_+-q_1^++i\varepsilon) (l_+-q_1^+-i\varepsilon)} \\ &=& \left( \bar{u}(q+q_1) \Gamma_{+\mu}^{(0)}(q_1,q) \hat{P}_+\right)\times \pi\sqrt{2}, \end{eqnarray*} which is independent of ${\bf l}_T$.
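The evaluation of the last $l_+$-integral can be made explicit with the Sokhotski--Plemelj formula $\frac{1}{x\pm i\varepsilon}={\rm P}\frac{1}{x}\mp i\pi\delta(x)$ (a schematic derivation; the pair of $\pm i\varepsilon$ poles at $l_+=q_1^+$ combines into a principal-value prescription):

```latex
\[
i\sqrt{2}\int\limits_{-\infty}^{+\infty} dl_+
\left[ \frac{1}{l_+-{\bf l}_T^2/q_-+i\varepsilon}
     - {\rm P}\,\frac{1}{l_+-q_1^+} \right]
= i\sqrt{2}\,(-i\pi) = \pi\sqrt{2},
\]
```

since the principal-value pieces of the two terms cancel in the difference, and only the $-i\pi\delta$ part of the first denominator survives the integration, which makes the ${\bf l}_T$-independence of the result manifest.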
Analogously, for the second impact-factor one obtains \[ A_\rho^{(-)}=\left( \hat{P}_+ \Gamma_{-\rho}^{(0)}(-q_1,P) v(P-q_1) \right)\times(-\pi\sqrt{2}), \] so for the two-Reggeon contribution we have: \begin{eqnarray} {\cal M}_{\mu\rho}^{(QR)}&=& {\cal M}_{\mu\rho}^{(0)} \frac{g_s^2 C_F}{2} \int \frac{d^{d-2}{\bf l}_T}{(2\pi)^d} \frac{\hat{l}_T\hat{q}_{T1} (-2i\pi^2)}{{\bf l}_T^2 ({\bf q}_{T1} - {\bf l}_T)^2} ={\cal M}_{\mu\rho}^{(0)}\times (-i\pi)\frac{\bar{\alpha}_s C_F}{4\pi} \frac{1}{\epsilon}\left(\frac{\mu^2}{{\bf q}_{T1}^2} \right)^\epsilon\nonumber \\ &=&{\cal M}_{\mu\rho}^{(0)}\times(-i\pi)\frac{\bar{\alpha}_s C_F}{4\pi} \left(\frac{1}{\epsilon}+\log\frac{\mu^2}{t_1}+O(\epsilon) \right),\label{Eq:neg-sign-Im} \end{eqnarray} which gives exactly the half of the imaginary part of expression (\ref{eq:C_2-QCD}) missing in the Regge-pole contribution (\ref{Eq:W-dec}). \section{Conclusions} \label{Sec:Conclusions} In the present paper, the one-loop correction to the effective vertex of the interaction of an off-shell photon with one Reggeized and one QCD quark has been computed using the formalism of the gauge-invariant EFT for Multi-Regge processes in QCD~\cite{Lipatov95, LV} (Sec.~\ref{Sec:1-loop-vert}) with the ``tilted Wilson lines'' prescription for the regularization of rapidity divergences. To this end, the scalar integrals $C_{[-]}$ and $B_{[-]}$ with additional scales of virtuality have been computed for the first time. The consistency of the result has been verified by comparison with the Regge limit of the $\gamma^\star \gamma\to q\bar{q}$-amplitude in Sec.~\ref{Sec:comp-QCD}, and the two-Reggeon contribution has been studied in Sec.~\ref{Sec:Imag}.
These results will be instrumental in the development of the Parton Reggeization Approach~\cite{NS_PRA,NS_DIS1,NS_DIS2} and other techniques of NLO calculations in $k_T$-factorization~\cite{vanHameren:2017hxx}, and serve as a nontrivial cross-check of the formalism of the effective action for Reggeized quarks~\cite{LV} for the case of amplitudes containing more than one scale of virtuality. \section*{Acknowledgements} The author would like to thank Andreas van Hameren, Martin Hentschinski, Berndt Kniehl and Vladimir Saleev for multiple discussions of different aspects of loop calculations in the framework of High-Energy Factorization, and the Alexander von Humboldt Foundation for awarding him the Research Fellowship for Postdoctoral Researchers. This work has also been supported in part by RFBR grant \# 18-32-00060. \section*{Appendix: Higher-order induced vertices from the Hermitian Reggeon-gluon interaction}\hypertarget{Sec:Appendix:A}{} In Ref.~\cite{MH_PolePrescr} a pole-prescription for the higher-order induced vertices has been proposed, satisfying the following properties: the $R_{\pm}g\ldots g$ induced vertex should be Bose-symmetric w.r.t. the QCD gluons and should not depend on the sign of $\varepsilon$ in the $i\varepsilon$ prescription. The latter property ensures that the single-Reggeon exchange in the EFT indeed possesses a definite signature. In this appendix we show that the effective vertices which can be derived from the Hermitian $Rg$-Lagrangian~(\ref{Eq:L-Rg}) satisfy both properties. At the moment we do not have a general proof of this statement, so we check it for the $O(g_s^2)$ and $O(g_s^3)$ induced vertices. Above we have shown that the Lagrangian~(\ref{Eq:L-Rg}) automatically leads to the PV-prescription for the Eikonal pole in the $R_+gg$ vertex.
The $R_+ggg$ and $R_+gggg$ induced vertices generated by the Lagrangian~(\ref{Eq:L-Rg}) read: \begin{eqnarray} \Delta_{+\mu_1\mu_2\mu_3}^{ab_1b_2b_3}=-ig_s^2q^2 (n_{\mu_1}^- n_{\mu_2}^- n_{\mu_3}^-) \sum\limits_{(i_1,i_2,i_3)\in S_3} \frac{{\rm tr}\left[T^a \left(T^{b_{i_1}}T^{b_{i_2}} T^{b_{i_3}} + T^{b_{i_3}}T^{b_{i_2}} T^{b_{i_1}}\right) \right]}{(k_{i_3}^- +i\varepsilon)(k_{i_3}^-+k_{i_2}^-+i\varepsilon)}, \label{Eq:R+_g3-vert} \\ \Delta_{+\mu_1\mu_2\mu_3\mu_4}^{ab_1b_2b_3b_4}=-ig_s^3q^2 (n_{\mu_1}^- n_{\mu_2}^- n_{\mu_3}^- n_{\mu_4}^-) \sum\limits_{(i_1,i_2,i_3,i_4)\in S_4} \frac{{\rm tr}\left[T^a \left(T^{b_{i_1}}T^{b_{i_2}} T^{b_{i_3}} T^{b_{i_4}} - T^{b_{i_4}} T^{b_{i_3}}T^{b_{i_2}} T^{b_{i_1}}\right) \right]}{(k_{i_4}^- +i\varepsilon)(k_{i_4}^-+k_{i_3}^-+i\varepsilon) (k_{i_4}^-+k_{i_3}^-+k_{i_2}^-+i\varepsilon)}, \label{Eq:R+_g4-vert} \end{eqnarray} where $q$ is the incoming momentum of the Reggeon, $k_{1,\ldots,4}$ are the incoming momenta of the QCD gluons, the summation goes over permutations of three or four objects, and due to the kinematic constraint (\ref{Eq:kin-constr-R}) the $(-)$-component of the gluon momenta is conserved: $k_1^-+k_2^-+k_3^-=0$ and $k_1^-+k_2^-+k_3^-+k_4^-=0$ for the $O(g_s^2)$ and $O(g_s^3)$ vertices, respectively. In Ref.~\cite{MH_PolePrescr} the $W^\dagger$-term in Eq.~(\ref{Eq:L-Rg}) is not included in the Lagrangian. Instead, the color structure of the induced vertices obtained from the $Rg$-Lagrangian without the $W^\dagger$-term is decomposed over a certain symmetry basis, and it is shown that only the terms in the subspace spanned by ``maximally-nested'' commutators, which in the notation of Ref.~\cite{MH_PolePrescr} are defined as: \[ [[[i_1,i_2],i_3],i_4]={\rm tr}\left\{T^a \left[\left[[T^{b_{i_1}},T^{b_{i_2}}], T^{b_{i_3}}\right], T^{b_{i_4}} \right] \right\} = \frac{-i}{2}f^{b_{i_1}b_{i_2}c_1} f^{c_1 b_{i_3}c_2} f^{c_2 b_{i_4} a}, \] have the desired properties.
Decomposing the color structure of Eqns.~(\ref{Eq:R+_g3-vert}) and (\ref{Eq:R+_g4-vert}) over the same basis, one finds that the only surviving color structures are the same nested commutators, together with the $S_3(1,2,3)$-structure in the case of Eq.~(\ref{Eq:R+_g3-vert}) and the $S_3([i_1,i_2],i_3,i_4)$-structures for Eq.~(\ref{Eq:R+_g4-vert}). Using the identities: \begin{eqnarray*} && \frac{1}{k_{i_1}^-\pm i\varepsilon}\frac{1}{k_{i_1}^-+k_{i_2}^-\pm i\varepsilon} + \frac{1}{k_{i_2}^-\pm i\varepsilon}\frac{1}{k_{i_1}^-+k_{i_2}^-\pm i\varepsilon} = \frac{1}{k_{i_1}^-\pm i\varepsilon}\frac{1}{k_{i_2}^-\pm i\varepsilon},\\ && \frac{1}{k^-+i\varepsilon}-\frac{1}{k^--i\varepsilon}=-2\pi i \delta(k^-), \end{eqnarray*} which hold in a distributional sense, one can show that the part of the induced vertices (\ref{Eq:R+_g3-vert}) and (\ref{Eq:R+_g4-vert}) proportional to the nested commutators reproduces the results of Ref.~\cite{MH_PolePrescr}, while the remnants, proportional to the other color structures, can be simplified as follows: \begin{eqnarray} \Delta_{+\mu_1\mu_2\mu_3}^{ab_1b_2b_3}&=&-ig_s^2q^2 (n_{\mu_1}^- n_{\mu_2}^- n_{\mu_3}^-)\left\{ [{\rm nested\ comm.}]-\frac{4}{3}\pi^2\delta(k_1^-)\delta(k_2^-) S_3(1,2,3) \right\}, \label{Eq:R+_g3-vert_simpl} \\ \Delta_{+\mu_1\mu_2\mu_3\mu_4}^{ab_1b_2b_3b_4}&=&-ig_s^3q^2 (n_{\mu_1}^- n_{\mu_2}^- n_{\mu_3}^- n_{\mu_4}^-)\left\{ [{\rm nested\ comm.}] \right. \nonumber\\ &-&8\pi^2\left[ d(k_2^-)\delta(k_3^-)\delta(k_4^-) S_3([1,2],3,4) + d(k_3^-)\delta(k_2^-)\delta(k_4^-) S_3([1,3],2,4) \right. \nonumber\\ &&\hspace{5mm} + d(k_4^-)\delta(k_2^-)\delta(k_3^-) S_3([1,4],2,3) + d(k_3^-)\delta(k_1^-)\delta(k_4^-) S_3([2,3],1,4) \nonumber\\ &&\hspace{4mm} \left. \left.
+ d(k_4^-)\delta(k_1^-)\delta(k_3^-) S_3([2,4],1,3)+d(k_4^-)\delta(k_1^-)\delta(k_2^-) S_3([3,4],1,2) \right] \right\}, \label{Eq:R+_g4-vert_simpl} \end{eqnarray} where the distribution $d(k)=1/(k-i\varepsilon)-i\pi\ {\rm sgn}(\varepsilon)\delta(k)$ has the property $d(-k)=-d(k)$, which ensures the Bose-symmetry of the induced vertex if one takes into account the conservation of the $k^-$-momentum component. We have checked that the color-symmetric terms in Eqs.~(\ref{Eq:R+_g3-vert_simpl}) and (\ref{Eq:R+_g4-vert_simpl}) are independent of the sign of $\varepsilon$ and are therefore compatible with the definite signature of the one-Reggeon exchange. If one assumes the modified kinematic constraints (\ref{Eq:r-kin-constr-R}), the regularization of rapidity divergences in Eqns.~(\ref{Eq:R+_g3-vert_simpl}) and (\ref{Eq:R+_g4-vert_simpl}) boils down to the replacements $k_i^-\to \tilde{k}_i^-$. In Ref.~\cite{Chachamis:2013hma} the rapidity-divergent part of the two-loop correction to the Reggeized-gluon propagator has been computed using the pole-prescription of Ref.~\cite{MH_PolePrescr}, and consistency with the known results for the two-loop Regge trajectory of the gluon has been demonstrated. However, there is no contradiction between this result and Eq.~(\ref{Eq:R+_g3-vert_simpl}), since the additional term in Eq.~(\ref{Eq:R+_g3-vert_simpl}) does not contribute to the rapidity divergence of the Reggeon propagator at two loops. In this term the Eikonal propagators are replaced by delta-functions, which kill the logarithmic rapidity divergence that could have come from each loop of the two-loop diagram (${\rm k}_1$) in Fig.~4 of Ref.~\cite{Chachamis:2013hma}. Therefore this term contributes neither to the $O(\alpha_s^2 \log^2 r)$ nor to the $O(\alpha_s^2 \log r)$ part, but possibly contributes to the finite part of the two-loop correction to the Reggeon propagator, which was beyond the scope of Ref.~\cite{Chachamis:2013hma}.
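Away from the poles, the algebraic skeleton of the first of the identities used above can be checked directly (an illustrative sympy computation; the full statement, including the $\pm i\varepsilon$ and $\delta$-function terms, of course holds only in the distributional sense):

```python
import sympy as sp

k1, k2 = sp.symbols('k1 k2')

# Skeleton of the eikonal identity for ordinary (non-distributional) values:
# 1/(k1 (k1+k2)) + 1/(k2 (k1+k2)) = 1/(k1 k2)
lhs = 1/(k1*(k1 + k2)) + 1/(k2*(k1 + k2))
rhs = 1/(k1*k2)

print(sp.simplify(lhs - rhs))  # -> 0
```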
Therefore, neither the prescription of Ref.~\cite{MH_PolePrescr} nor the use of the Hermitian form of the $Rg$-Lagrangian~(\ref{Eq:L-Rg}) is favored by the existing calculations, and further comparisons between scattering amplitudes in the EFT and QCD are required to decide which option is the correct one.
\section{Abstract} The position accuracy of Decawave Ultra-Wideband (UWB) transceivers is affected mainly by three factors: hardware delays, clock drift, and signal power. This article discusses the last two factors. The general approach to clock drift correction uses the phase-locked loop (PLL) integrator, which we show is subject to signal power variations and, therefore, is less suitable for clock drift correction. The general approach to the estimation of signal power correction curves requires additional measurement equipment. This article presents a new method for obtaining the curve without additional hardware, and a clock drift correction without the PLL integrator. Both correction methods were fused together to improve two-way ranging (TWR). \section{Introduction} In recent decades, autonomous systems have become omnipresent in almost every field of industry. Spending on robotics is expected to reach 67 billion US dollars by 2025, as compared to 11 billion in 2005 \cite{RobotMarket}. One of the most important tasks in robotics is the interaction between a robot and its environment. This task can only be accomplished if the location of the robot with respect to its environment is known. Visual sensors are very common for localization \cite{SLAM1,SLAM2}. In some cases, estimating the position in non-line-of-sight conditions is required. Radio-frequency (RF) sensors are able to operate in such conditions, but the outcome depends highly on the measurement principle, such as the received signal strength indicator (RSSI) \cite{RSSI}, fingerprinting \cite{fingerPrinting}, frequency-modulated continuous wave (FMCW) \cite{LPM} and UWB \cite{UWB_Limits}, as well as on techniques such as the angle of arrival \cite{angleOfarrival}, time of arrival \cite{TWR} or time difference of arrival \cite{TDOA}. Indoor positioning is, in general, a challenge for RF-based localization systems, since reflections can interfere with the main signal.
In contrast to narrowband signals, ultra-wideband (UWB) signals are more robust to fading \cite{fading1,fading2}. A common UWB system is the Decawave UWB transceiver \cite{Why_Decawave}, which is low cost and provides centimeter precision. The accuracy and precision of this chip are affected by three factors: hardware delays, clock drift, and signal power \cite{Decawave_clock,Signal_power}. This article discusses clock drift correction and the signal power error, which is specific to the Decawave UWB transceiver and significantly affects the accuracy of the position. The general approach to estimating the signal power dependency is to use ground truth data, which are provided by additional measurement equipment \cite{SignalPower_correction}. The clock drift error is caused by the different frequencies of the transceiver clocks. The general approach to Decawave UWB clock drift correction is to use the integrator of the phase-locked loop (PLL) \cite{PLL_1,PLL_2,Cico,PLL_Decawave2018}. In the following section, we explain that the general approach to clock drift correction is not suitable because the PLL is also affected by the signal power. Therefore, a more accurate method for clock drift correction is presented. The middle sections of this article discuss the estimation of the signal power correction curve without the need for additional hardware. To the best of our knowledge, a signal power correction curve has not previously been obtained by self-calibration. The last part of this article presents a two-way ranging (TWR) method that is able to use the correction methods for distance estimation. 
\begin{table}[H] \begin{centering} \caption{Notations used\label{tab:notations}} \ \\ \par\end{centering} \centering \begin{tabular}{|c|c|} \hline Notations & Definition\tabularnewline \hline \hline $T_{i}$ & Timestamp\tabularnewline \hline $\Delta T_{n,m}$ & Difference between two timestamps $T_{m}-T_{n}$\tabularnewline \hline $C_{n,m}$ & Clock drift with respect to the timestamps $n$ and $m$\tabularnewline \hline $E_{i}$ & Timestamp error due to signal power\tabularnewline \hline $Z$ & Hardware delay and signal power correction offset\tabularnewline \hline \end{tabular} \end{table} \section{Decawave UWB} Decawave transceivers are based on UWB technology and are compliant with the IEEE 802.15.4-2011 standard \cite{Decawave_Anaysis}. They support six frequency bands with center frequencies from 3.5 GHz to 6.5 GHz and data rates of up to 6.8 Mb/s. The bandwidth varies with the selected center frequency from 500 MHz up to 1,000 MHz. With higher bandwidth, the transmitted impulse becomes sharper. The timestamps for the positioning are provided by an estimation of the channel impulse response, which is obtained by correlating a known preamble sequence against the received signal and accumulating the result over a period of time. In contrast to narrowband signals, UWB is more resistant to multipath fading. Reflections cause an additional peak in the impulse response, and the probability that two peaks interfere with each other is small. The sampling of the signal is performed by an internal 64 GHz chip with 15 ps event-timing precision (corresponding to 4.496 mm). Because of general regulations, the transmit power density is limited to \textminus 41.3 dBm/MHz. These regulations are due to the high bandwidth occupied by the UWB transceiver. The maximum permissible power level is averaged over a 1 ms period; hence, the power can be increased for shorter message durations. The following experiments were carried out with the Decawave EVK1000. 
This board mainly consists of a DW1000 chip and an STM32 ARM processor. \section{Clock drift correction} In practice, it is not possible to manufacture exactly identical clock generators, so every transceiver has a slightly different clock frequency. Clock drift correction compensates for the difference between clock frequencies, not for the offset between current time values. \subsection{General approach} The general approach to clock drift correction is to use the PLL integrator \cite{PLL_1,PLL_2,Cico,PLL_Decawave2018}. Figure \ref{fig:PLL} shows an example of frequency demodulation by a PLL. The voltage-controlled oscillator (VCO) is set to the mid-position and the loop is locked in at the frequency of the carrier wave. Modulations on the carrier cause the VCO frequency to follow the incoming signal, so changes in the voltage correspond to the applied modulations. The difference between the received carrier frequency (VE) and the internal loop frequency (VI) can be observed in the integrator of the loop filter. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.5]{pll} \par\end{centering} \caption{Example of the phase-locked loop (PLL)\label{fig:PLL}} \end{figure} In Figure \ref{fig:IntegratorPLL}, the integrator output is presented. The test scenario is based on measurements obtained every 50 ms between two stationary transceivers. The difference between the two frequencies is about five parts per million. Reaching the final condition took up to 15 minutes. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.6]{frequenzy_sec} \par\end{centering} \caption{Integrator of the PLL\label{fig:IntegratorPLL}} \end{figure} The tests were repeated four times with another two stationary stations. Figure \ref{fig:IntegratorRestart} shows the filtered results of the obtained curves, provided by a 500-point moving average filter. The curve progression is deterministic. 
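The 500-point moving-average filtering used above is a simple convolution; a minimal sketch follows, with synthetic data standing in for the integrator readout (the values are illustrative, not measurements):

```python
import numpy as np

def moving_average(x, window=500):
    """Moving-average filter, as used to smooth the PLL integrator
    curves (window size matches the 500 points from the text)."""
    kernel = np.ones(window) / window
    # 'valid' keeps only positions where the window is fully inside the data
    return np.convolve(x, kernel, mode="valid")

# Hypothetical noisy integrator readout: constant level plus noise
rng = np.random.default_rng(0)
raw = 5.0 + 0.5 * rng.standard_normal(10_000)
smoothed = moving_average(raw, window=500)
```

The filtered curve is shorter than the input by one window length minus one sample, which is why the plotted curves start after an initial transient.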
\begin{figure}[H] \begin{centering} \includegraphics[scale=0.6]{restart_1meter_sec} \par\end{centering} \caption{Filtered integrator of the PLL, four times restarted\label{fig:IntegratorRestart}} \end{figure} Decawave indicates that the logarithmic increase of the integrator at the beginning is due to the warm-up of the crystal oscillator after activation, as graphically represented in Figure \ref{fig:DW1000-temperature-crystal}. This oscillator results from the combination of a quartz crystal and the circuitry within the DW1000-based design. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{pll_decawave} \par\end{centering} \caption{DW1000 temperature crystal oscillator warm-up\label{fig:DW1000-temperature-crystal} \cite{SignalPower_correction}, used with permission} \end{figure} In the following test scenario, the effect of the signal power on the integrator was investigated. Both the transmitter and receiver stations were stationary. The left side of Figure \ref{fig:Left:-Power_and_Integrator} shows the measured signal strength at the receiving station. After about 4,600 measurements, we configured the transmitter to reduce the signal power. The integrator of the receiver jumped to a new level after the signal power changed, indicating that distance changes between the transmitter and receiver would affect the integrator, and so affect the clock drift correction as well. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.3]{powerS_sec}\includegraphics[scale=0.3]{frequenzy_sec2} \par\end{centering} \caption{Left: Filtered received signal power. Right: Filtered integrator of the PLL \label{fig:Left:-Power_and_Integrator} } \end{figure} The reason for this dependency could be the analog phase detectors of the PLL, in which the loop gain $K_{D}$ is a function of amplitude, which affects the error signal $v_{e}(t)=K_{D}[\varPhi_{Out}(t)-\varPhi_{In}(t)]$, and so affects the pull-in time (the total time taken by the PLL to lock) as well. 
\subsection{Proposed approach for the clock drift correction} In this section, we present an alternative method for the clock drift correction, which is independent of the signal power. The measurement setup is presented in Figure \ref{fig:Measurement-setup}. All measurements and calibrations were conducted with Decawave EVK1000 boards. The station with the identification number (id) 2 is the transmitting station (TX). The receiving station (RX) has the identification number 1. The received signal power, as well as the timestamps, were obtained by reading the registers provided by the transceivers \cite{Signal_power,SignalPower_correction}. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{examplePic} \par\end{centering} \caption{Measurement setup \label{fig:Measurement-setup}} \end{figure} The general settings for the hardware setup can be found in Table \ref{tab:Test-settings} and the notations in Table \ref{tab:notations}. \begin{table}[H] \begin{centering} \begin{tabular}{|c|c|} \hline Channel & 2\tabularnewline \hline Center Frequency & 3993.6 MHz\tabularnewline \hline Bandwidth & 499.2 MHz\tabularnewline \hline Pulse repetition frequency & 64 MHz\tabularnewline \hline Preamble length & 128\tabularnewline \hline Data rate & 6.81 Mbps\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Test settings \label{tab:Test-settings}} \end{table} Figure \ref{fig:Alternative-clock-drift} shows a schematic diagram of the approach. TX sends three signals at times $T_{1}$, $T_{2}$, and $T_{3}$. The clocks of the transmitter and receiver are not synchronous. If the clocks have no drift, then both clocks have the same frequency and the difference $\Delta T_{1,2}=T_{2}-T_{1}$ should be the same for the transmitter and the receiver; otherwise, $\Delta T_{1,2}^{RX}\neq\Delta T_{1,2}^{TX}$. The same applies to $\Delta T_{1,3}$. 
If the clock of RX is running faster than that of TX, then $\Delta T_{1,3}^{RX}>\Delta T_{1,3}^{TX}$ and the clock drift error becomes $C_{1,2}=\Delta T_{1,2}^{RX}-\Delta T_{1,2}^{TX}$. Previously, the frequency difference between the two clocks was represented by the integrator of the PLL. After the warm-up time, the clocks reach their final frequencies and the clock error increases linearly. For short measurement periods, the clock drift error can be assumed to be linear even during the oscillator\textquoteright s warm-up. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{samePower} \par\end{centering} \caption{Alternative clock drift correction \label{fig:Alternative-clock-drift}} \end{figure} The main idea is for the clock drift error $C_{1,3}=\Delta T_{1,3}^{RX}-\Delta T_{1,3}^{TX}$ to be used for correcting the timestamp $T_{2}$ by simple linear interpolation. In Figure \ref{fig:Clock_drift_error}, three messages, P1, P2, and P3, with constant signal powers were sent. The delay between the messages was about 2 ms. The values are already filtered; hence, every point consists of the mean of 4,000 measurements. The right side of Figure \ref{fig:Clock_drift_error} shows the clock drift error $C_{1,2}=\Delta T_{1,2}^{RX}-\Delta T_{1,2}^{TX}$. Because of the long delay, the distance error resulting from the clock drift is about 1 m. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{samePower_db}\includegraphics[scale=0.4]{WithoutclockCorrection} \par\end{centering} \caption{Left: Signal strength. Right: Error due to clock drift\label{fig:Clock_drift_error}} \end{figure} In the next step, the clock drift error $C_{1,2}$ is corrected by the linear interpolation of $C_{1,3}$. \begin{equation} C_{1,2}^{'}=C_{1,2}-\frac{C_{1,3}}{\Delta T_{1,3}^{TX}}\cdotp\Delta T_{1,2}^{TX}\label{eq:Correction} \end{equation} The results are shown in Figure \ref{fig:Clock-drift-correction}. 
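Eq. \eqref{eq:Correction} amounts to removing the linearly accumulated drift from the middle timestamp; a minimal numerical sketch, with hypothetical timestamp values:

```python
def clock_drift_correction(t_tx, t_rx):
    """Corrected clock drift error C'_{1,2} of the middle message in a
    three-message burst, following Eq. (Correction) in the text.

    t_tx, t_rx: (T1, T2, T3) timestamps in transmitter / receiver time.
    """
    dt12_tx, dt13_tx = t_tx[1] - t_tx[0], t_tx[2] - t_tx[0]
    dt12_rx, dt13_rx = t_rx[1] - t_rx[0], t_rx[2] - t_rx[0]
    c12 = dt12_rx - dt12_tx              # drift error accumulated up to T2
    c13 = dt13_rx - dt13_tx              # drift error over the full burst
    return c12 - c13 / dt13_tx * dt12_tx  # linear interpolation of the drift

# Hypothetical example: RX clock runs 5 ppm fast, messages 1 ms apart
tx = (0.0, 1e-3, 2e-3)
rx = tuple(t * (1 + 5e-6) for t in tx)
residual = clock_drift_correction(tx, rx)  # ~0 for a purely linear drift
```

For a purely linear drift the residual vanishes, which is why the correction also works during the oscillator's warm-up as long as the burst is short.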
The correction requires only three messages, and the remaining average offset is about $-1.915\cdotp10^{-5}\,m$. The linear interpolation is also suitable for the warm-up phase. The implementation of the presented clock drift correction for the TWR is presented in Section \ref{sec:Two-way-ranging}. A position error caused by a constant velocity of the object is also corrected by the linear interpolation, due to the linear increase of the position error (pseudo clock drift). In practice, it is possible to obtain $\Delta T_{1,3}^{TX}\thickapprox1\,ms$. An acceleration high enough to cause an error greater than 5 mm would require almost 1,000 g $\left(10^{4}\,\frac{m}{s^{2}}\right)$. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{error} \par\end{centering} \caption{Results of clock drift correction $C_{1,2}^{'}$ \label{fig:Clock-drift-correction}} \end{figure} \section{Signal power correction} The next section discusses the signal power correction. It is known that the timestamp of the DW1000 is affected by the signal power: an increase in power causes a negative shift of the timestamp and vice versa. \subsection{General approach} Figure \ref{fig:Sigmoid} illustrates the reported distance error with respect to the received signal power. At a certain signal strength, the range bias effect should be zero. In Figure \ref{fig:Sigmoid}, the bias vanishes between \textminus 80 and \textminus 75 dBm. The correction curve is affected by system design elements, such as the printed circuit board, antenna gain, and pulse repetition frequency (PRF). The general approach to correction curve estimation is to compare the distance measurements with ground truth distances. This method has two disadvantages. First, additional measurement equipment is necessary. Second, every created curve applies to a pair of stations, not to every individual station. 
\begin{figure}[H] \begin{centering} \includegraphics[scale=0.35]{sigmoid} \par\end{centering} \caption{Effect of range bias on reported distance \cite{SignalPower_correction}\label{fig:Sigmoid}, used with permission} \end{figure} Figure \ref{fig:Estimated-RX-level} shows the relationship between the measured and correct signal strengths for different PRFs. The measured signal power is correct only for measurements smaller than \textminus 85 dBm. Knowledge of the difference between the measured and correct signal strengths can also be used for other measurement techniques, such as RSSI-based distance estimation. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.35]{decawave} \par\end{centering} \caption{Estimated RX level with respect to actual RX level \cite{Signal_power}\label{fig:Estimated-RX-level}, used with permission} \end{figure} \subsection{Proposed approach for the signal power correction} In the previous section, we discussed an alternative approach to clock drift correction with three messages (P1, P2, and P3). The following method is based on this concept, but the TX station changes the signal strength of the second message (P2). The left side of Figure \ref{fig:Cable-less-then} shows how the signal strengths of the first and last messages (P1 and P3) remain constant and only the signal strength of the second signal (P2) decreases after 1,000 measurements. Every measurement point is the result of the mean of 2,000 signals. The tests were conducted with a cable connection of 10 cm, and the transmitter decreased the signal gain with a step size of 3 dB. Figure \ref{fig:Clock-drift-correction} shows that, after the clock drift correction, the remaining error of $C_{1,2}^{'}$ (\ref{eq:Correction}) is close to zero. With the decreasing signal strength of P2, the error of $C_{1,2}$ increases; hence, it is possible to establish a dependency between the measured signal strength and the timestamp error. 
\begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{power}\includegraphics[scale=0.4]{errorDiff_pos}\ \par\end{centering} \caption{Left: Signal strength with cable. Right: Timestamp error with cable\label{fig:Cable-less-then}} \end{figure} In the following test scenario, the power calibration was repeated with an antenna and a distance of $1.5\,m$ between the RX and TX stations. The gain step size was reduced to $0.5\,dB$. Figure \ref{fig:Left:-Filtered-signal} shows the results of the filtered signal power calibration curve. The main difference between Decawave\textquoteright s curve, as shown in Figure \ref{fig:Sigmoid}, and our curve is that the zero line is unknown. This line marks the signal power at which the timestamp error is zero. The step size of the decreasing transmit signal power gain was constant, but the measured decreasing signal power curve for P2 was nonlinear because the measured signal power does not equal the correct signal power at high signal strengths, as shown in Figure \ref{fig:Estimated-RX-level}. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{B1_signal_power}\ \includegraphics[scale=0.4]{B1_signal_error_pos} \par\end{centering} \caption{Left: Signal strength with antenna. Right: Timestamp error with antenna \label{fig:Left:-Filtered-signal}} \end{figure} It is necessary to pay attention to the timing between the messages. With short delays between the messages, it is possible that they affect each other. This effect can be seen in the offset between P1 and P3 in Figure \ref{fig:Short-update-time}. In Figure \ref{fig:Left:-Filtered-signal}, a delay of 2 ms was used between the messages, and in Figure \ref{fig:Short-update-time}, a delay of 150 $\mu s$. 
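Once a calibration curve of this kind is available, applying it at run time is a table lookup with linear interpolation between calibration points; a sketch with a purely hypothetical calibration table (the numbers below are illustrative, not measured values):

```python
import numpy as np

# Hypothetical calibration table: measured RX power (dBm) vs. timestamp
# error expressed as distance (m). In the text, such a curve is obtained
# by sweeping the gain of the middle message P2 and reading the residual
# error C'_{1,2} after clock drift correction.
power_dbm = np.array([-95.0, -90.0, -85.0, -80.0, -75.0])
error_m   = np.array([ 0.30,  0.18,  0.08,  0.00, -0.05])

def power_error(measured_dbm):
    """Distance error for a measured signal power, by linear
    interpolation of the calibration curve (xp must be increasing)."""
    return np.interp(measured_dbm, power_dbm, error_m)

e = power_error(-82.5)  # midway between the -85 and -80 dBm points
```

Between the tabulated points the error is interpolated; outside the table, `np.interp` clamps to the endpoint values, which is a reasonable default for a monotone correction curve.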
\begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{zuKurzUpdateTime}\ \par\end{centering} \caption{Short update time\label{fig:Short-update-time}} \end{figure} It was previously mentioned that the measured signal strength equals the correct signal power only for small signal powers. Therefore, it is possible to use the very first measurements with small signal strengths to estimate the slope. The left side of Figure \ref{fig:Final-results-of} shows an estimated line based on the estimated slope. The results are the same as the curve obtained by Decawave, except that no additional measurement equipment is required and our curves can be obtained individually for every station. The right side of Figure \ref{fig:Final-results-of} illustrates the correction curve with respect to the signal power. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{power3}\includegraphics[scale=0.4]{error2} \par\end{centering} \caption{Final results of power correction \label{fig:Final-results-of}. Left: Measured vs. real signal power. Right: Signal power error curve} \end{figure} Even for the same hardware design, it is possible that the shape of the correction curve differs. In Figure \ref{fig:Final-results-with}, the final results of the power correction curve are obtained from another station. The calibration was repeated six times. The shapes of the curves are deterministic but different from those of the station above. Therefore, it makes sense to repeat the calibration for every individual station. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{db}\includegraphics[scale=0.4]{error3} \par\end{centering} \caption{Final results of power correction with several restarts\label{fig:Final-results-with}. Left: Measured vs. real signal power. 
Right: Signal power error curve} \end{figure} \section{Two-way ranging \label{sec:Two-way-ranging}} The following section describes how the presented clock drift and signal power corrections can be used for precise TWR. Figure \ref{fig:TWR} shows the concept for the TWR. The initial message is sent by the reference station at $T_{1}^{R}$ and received by the tag at $T_{1}^{T}$. This timestamp is affected by the signal power, which causes an error E1. After some delay caused by internal processing, the tag sends a response message at $T_{2}^{T}$. The reference station receives the response from the tag and saves the timestamp $T_{2}^{R}$, which is affected by the signal power error E2. In this example, the delay due to the hardware offset is not considered. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.45]{power1} \par\end{centering} \caption{Concept for two-way ranging \label{fig:TWR}} \end{figure} The time of flight between the reference station and the tag can be determined by the following formula. It is assumed that the distance between the two devices does not change between the timestamps $T_{1}^{R}$ and $T_{2}^{R}$. \begin{equation} T_{TOA}=\frac{\left(T_{2}^{R}-T_{1}^{R}\right)-\left(T_{2}^{T}-T_{1}^{T}\right)-E_{2}-E_{1}}{2} \end{equation} The values E1 and E2 can be obtained from the signal power correction curve. It should be taken into account that the signal power affects the tag and the reference station differently. The time difference $\Delta T_{1,2}^{R}$ increases with decreasing signal power. The zero lines for both the signal power and the hardware offset are unknown but constant; hence, both values are represented by the variable Z. In the previous section, we explained that the clock drift can be corrected with three messages. Figure \ref{fig:TWR-clock-drift} shows how this principle can be adapted for TWR. The last message is used to obtain the clock drift error $C_{1,3}=\Delta T_{1,3}^{R}-\Delta T_{1,3}^{T}$. 
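The time-of-flight formula above can be sketched as follows (all timestamps and power errors are hypothetical and expressed in seconds; the hardware offset Z is omitted, as in the text):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def twr_time_of_flight(t1_ref, t2_ref, t1_tag, t2_tag, e1, e2):
    """Two-way-ranging time of flight without clock drift correction:
    round-trip time at the reference minus the tag's processing delay,
    minus the signal-power timestamp errors E1 and E2, halved."""
    return ((t2_ref - t1_ref) - (t2_tag - t1_tag) - e2 - e1) / 2.0

# Hypothetical numbers: ~10 ns flight time (~3 m), 1 ms processing delay
tof = twr_time_of_flight(t1_ref=0.0, t2_ref=1e-3 + 20e-9,
                         t1_tag=5e-9, t2_tag=1e-3 + 5e-9,
                         e1=0.0, e2=0.0)
distance = tof * SPEED_OF_LIGHT  # ~3 m
```

With the long processing delay dominating the round-trip time, even a small clock frequency mismatch would corrupt the result, which is the motivation for adding the third message and the drift correction below.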
The signal power error E1 has no effect on the timestamp difference $\Delta T_{1,3}^{T}$. The final time-of-flight equation with the clock drift correction becomes: \begin{equation} T_{TOA}=0.5\cdotp\left(\Delta T_{1,2}^{R}-\Delta T_{1,2}^{T}-\left(\frac{C_{1,3}^{RT}}{\Delta T_{1,3}^{T}}\cdotp\left(\Delta T_{1,2}^{T}+E1\right)\right)-E_{2}-E_{1}\right)+Z \end{equation} \begin{figure}[H] \begin{centering} \includegraphics[scale=0.45]{power1_clockdrift} \par\end{centering} \caption{TWR clock drift correction \label{fig:TWR-clock-drift}} \end{figure} The results of the TWR with signal power and clock drift correction are illustrated in Figure \ref{fig:TWR-test}. The blue line represents the difference between laser distance measurements (ground truth) and the distances provided by the TWR. In addition, error bars are used to illustrate the standard deviation. The 11 distances range from $0.562\,m$ to $3.515\,m$. Every point results from the mean of 2,000 measurements. The unknown hardware offset, which causes the $0.3\,m$ offset, is not relevant in this example. The signal power error depends on the distance, and the clock drift error on time. If both effects are corrected properly, the resulting error bars should be as small as possible. The standard deviation of the error is $0.015\,m$. The small error bars show that the signal power and clock drift corrections are both sufficient. The antenna area was $0.0012\,m^{2}$; therefore, it is not possible to obtain ground truth data with a precision higher than a few centimeters. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{distanceChange_error} \par\end{centering} \caption{TWR test\label{fig:TWR-test}} \end{figure} \section{Conclusion} This article presents a new method for signal power and clock drift correction. It was shown that the curves obtained for the signal power correction can be highly accurate and deterministic, as well as provide individual results for every station. 
The signal power correction procedure can be performed once as a factory calibration. In addition to the estimation of the signal power correction curve, it was also possible to obtain the relationship between the measured and real signal powers. Knowing the relationship allows for better distance estimations with methods based on the signal strength. In contrast to the general approach, our clock drift correction is independent of the signal power and promises results with centimeter accuracy. The last part of the article explained how the signal power and clock drift correction are fused together to provide highly accurate TWR. \bibliographystyle{unsrt}
{ "timestamp": "2019-03-01T02:18:45", "yymm": "1902", "arxiv_id": "1902.11085", "language": "en", "url": "https://arxiv.org/abs/1902.11085" }
\section{Introduction: reformulating the $\mu$ problem for the LHC era} \label{sec:intro} Supersymmetry provides a solution to the Big Hierarchy problem-- why does the Higgs mass not blow up to the GUT/Planck scale-- via a neat cancellation of quadratic divergences which is required by extending the Poincare group of spacetime symmetries to its maximal structure\cite{hier,wss}. SUSY is also supported indirectly via the confrontation of data with virtual effects in that 1. the measured gauge couplings unify under Minimal Supersymmetric Standard Model (MSSM) renormalization group evolution (RGE)~\cite{drw}, 2. the measured value of $m_t$ falls in the range required for a radiatively-driven breakdown of electroweak symmetry~\cite{ir}, 3. the measured value of the Higgs boson mass falls squarely within the narrow allowed range required by the MSSM~\cite{mhiggs,h125} and 4. the measured values of $m_W$ and $m_t$ favor the MSSM with heavy superpartners~\cite{Heinemeyer:2006px}. In spite of these successes, so far no direct signal for SUSY has emerged at LHC leading to mass limits $m_{\tilde g}\gtrsim 2$ TeV and $m_{\tilde t_1}\gtrsim 1$ TeV while the rather large value of $m_h\simeq 125$ GeV also seemingly requires multi-TeV highly mixed top squarks~\cite{h125}. The new LHC Higgs mass measurement and sparticle mass limits seem to have exacerbated the so-called Little Hierarchy problem (LHP)~\cite{bs}: why doesn't the Higgs mass blow up to the soft SUSY breaking scale $m_{soft}\gtrsim $several TeV, or what stabilizes the apparent hierarchy $m_h\ll m_{soft}$? The LHP opens up the naturalness question: how can it be that the weak scale $m_{weak}\sim m_{W,Z,h}\sim 100$ GeV without unnatural fine-tunings of dimensionful terms in the MSSM Lagrangian? The most direct link between the magnitude of the weak scale and the SUSY Lagrangian comes from minimization of the MSSM Higgs potential to determine the Higgs field vevs~\cite{wss}. 
A straightforward calculation\cite{wss} reveals that \be m_Z^2/2= \frac{m_{H_d}^2+\Sigma_d^d-(m_{H_u}^2+\Sigma_u^u)\tan^2\beta} {\tan^2\beta -1}-\mu^2\simeq -m_{H_u}^2-\Sigma_u^u(\tilde t_{1,2})-\mu^2 \label{eq:mzs} \ee where $\tan\beta\equiv v_u/v_d$ is the ratio of Higgs field vevs, $\mu$ is the SUSY conserving Higgs/higgsino mass term and $m_{H_{u,d}}^2$ are soft SUSY breaking up- and down-Higgs mass terms. The $\Sigma_u^u$ and $\Sigma_d^d$ terms contain a large assortment of loop corrections (see the Appendix of Ref.~\cite{rns2} for expressions), the largest of which are usually the $\Sigma_u^u(\tilde t_{1,2})$ from the top-squark sector. We can see immediately from the right-hand side of Eq. \eqref{eq:mzs} that if, say, one contribution is far larger than $m_Z^2/2$, then another (unrelated) term will have to be fine-tuned to compensate so as to maintain $m_Z$ at its measured value. The {\it electroweak} fine-tuning measure $\Delta_{\rm EW}$ has been introduced~\cite{rns2,rns1}-- \be \Delta_{\rm EW}\equiv max |largest\ term\ on\ RHS\ of\ Eq.~\eqref{eq:mzs}|/(m_Z^2/2) \ee -- to quantify the weak-scale fine-tuning required to maintain $m_Z$ at its measured value. While a low value of $\Delta_{\rm EW}$ seems to be a necessary condition for naturalness within the MSSM, the question is: is it also sufficient? It is argued in Refs.~\cite{dew,mt,seige,arno} that for {\it correlated} ({\it i.e.} inter-dependent) soft terms, as should occur in any more fundamental theory such as SUGRA with a well-specified SUSY breaking sector, or in string theory, other measures such as $\Delta_{\rm HS}\simeq \delta m_h^2/m_h^2$ and $\Delta_{\rm BG}\equiv max_i|\frac{\partial\log m_Z^2}{\partial\log p_i}|$ (where the $p_i$ are fundamental model parameters) collapse to $\Delta_{\rm EW}$, so that $\Delta_{\rm EW}$ is sufficient as both an infra-red (IR) and ultra-violet (UV) fine-tuning measure. 
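As a concrete illustration of the measure, a minimal sketch that evaluates $\Delta_{\rm EW}$ from the dominant terms on the right-hand side of Eq. \eqref{eq:mzs} (the input values below are hypothetical weak-scale parameters, not fit results, and only the three dominant contributions are kept):

```python
MZ = 91.2  # Z boson mass in GeV

def delta_ew(m_Hu_sq, mu, sigma_uu):
    """Electroweak fine-tuning measure: the largest contribution to
    m_Z^2/2 = -m_Hu^2 - Sigma_u^u(stop) - mu^2, in absolute value,
    normalized to m_Z^2/2. Inputs in GeV (mu) and GeV^2 (the rest)."""
    terms = [-m_Hu_sq, -sigma_uu, -mu**2]
    return max(abs(t) for t in terms) / (MZ**2 / 2.0)

# A would-be natural point: |mu| ~ 200 GeV, m_Hu^2(weak) ~ -(200 GeV)^2
d = delta_ew(m_Hu_sq=-200.0**2, mu=200.0, sigma_uu=150.0**2)
```

For these inputs the largest term is $(200\ {\rm GeV})^2$, giving $\Delta_{\rm EW}\approx 10$, comfortably below the conservative naturalness bound $\Delta_{\rm EW}<30$ adopted in the text.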
In contrast, theories with multiple independent soft parameters may be susceptible to further fine-tunings which would otherwise cancel in a more fundamental theory. It should be recalled that in the multi-soft-parameter effective theories such as CMSSM/mSUGRA, NUHM2 etc., the various soft parameters are introduced to parametrize one's ignorance of the SUSY breaking sector such that some choice of soft parameters will reflect the true choice in nature. However, in no sense are the multi-soft-parameter theories expected to be fundamental. Thus, in this paper we will adopt $\Delta_{\rm EW}$ as a measure of naturalness in fundamental theories with the MSSM as the weak scale effective theory. In Ref.~\cite{upper}, it is shown that the fine-tuning already turns on for values of $\Delta_{\rm EW}\sim 20-30$. We will adopt a value of $\Delta_{\rm EW}<30$ as a conservative choice for natural models of SUSY. For a natural theory-- where $m_{W,Z,h}\sim 100$ GeV because the RHS contributions to Eq. \eqref{eq:mzs} are comparable to or less than the measured value of $m_Z^2/2$-- then evidently \begin{itemize} \item $m_{H_u}^2(weak) \sim -(100-300)^2$ GeV$^2$ and \item $|\mu |\sim 100-300$ GeV~\cite{ccn,bbh}, \item the largest of the radiative corrections (usually $\Sigma_u^u(\tilde t_{1,2})$) are not too large. \end{itemize} The first of these conditions pertains to the soft SUSY breaking sector. It can be achieved for multi-TeV values of high-scale soft terms (as required by LHC limits) by radiatively driving $m_{H_u}^2$ from large, seemingly unnatural high scale values to a natural value at the weak scale. Thus, a high scale value of $m_{H_u}^2(\Lambda =m_{GUT} )$ must be selected such that electroweak symmetry is barely broken. While this may seem to be a tuning in itself, such a selection seems to automatically emerge from SUSY within the string-landscape picture~\cite{DD,bbss}. 
In this scenario, there is a statistical draw towards large soft terms which must be balanced by the anthropic requirement that EW symmetry be properly broken and with a weak scale magnitude not too far from its measured value\cite{don}. The balance between these two tendencies pulls $m_{H_u}^2(m_{GUT} )$ to such large values that EW symmetry is barely broken. The third of the above conditions-- that $\Sigma_u^u(\tilde t_{1,2})\sim 100-300$ GeV-- is achieved for third generation squark soft terms in the several TeV range along with a large trilinear soft term $A_t$ (as is expected in gravity-mediation models). These same conditions which reduce the $\Sigma_u^u(\tilde t_{1,2})$ values also increase the Higgs mass to its measured value $m_h\sim 125$ GeV~\cite{rns1,rns2}. The second condition-- that the superpotential $\mu$ parameter is of order the weak scale-- brings up the famous SUSY $\mu$ problem~\cite{Polonsky:1999qd}: since $W_{\rm MSSM}\ni\mu H_u H_d$ is SUSY preserving, naively one expects the dimensionful parameter $\mu$ to be of order $m_P\simeq 2.4\times 10^{18}$ GeV while phenomenology requires $\mu\sim m_{weak}$. In this paper, we focus attention on the SUSY $\mu$ problem as occurs in gravity-mediation. The SUSY $\mu$ problem in gauge-mediated supersymmetry breaking (GMSB) is summarized in Ref.~\cite{GR}. 
In GMSB, since the trilinear soft terms are expected to be tiny, sparticle masses must become huge, with highly unnatural contributions to the weak scale, in order to accommodate a light Higgs boson with $m_h\simeq 125$ GeV~\cite{djouadi,bbm}.\footnote{ We also do not consider SUSY models with non-holomorphic soft terms\cite{ross} or multiple $\mu$ terms; it is not clear whether such models have viable UV completions\cite{nelson,martin}.} There are two parts to solving the SUSY $\mu$ problem: \begin{itemize} \item First, one must forbid the appearance of $\mu$, usually via some symmetry such as Peccei-Quinn (PQ) or, better, a continuous or discrete gauge or $R$-symmetry, and then \item re-generate $\mu$ at the much lower weak scale, $|\mu |\sim 100-300$ GeV (the lower the more natural), via some mechanism such as symmetry breaking. \end{itemize} Many solutions to the SUSY $\mu$ problem have been proposed, and indeed in Sec. \ref{sec:rev} we will review twenty of these. In most of these solutions, the goal (for gravity-mediation) was to re-generate $\mu\sim m_{3/2}$, where $m_{3/2}$ is the gravitino mass which arises from SUGRA breaking and which sets the mass scale for the soft SUSY breaking terms\cite{sugra}. When many of these $\mu$ solutions were proposed-- well before the LHC era-- it was commonly accepted that $m_{3/2}\sim m_{weak}$, which would also solve the SUSY naturalness problem. However, in light of the above discussion, the SUSY $\mu$ problem needs a reformulation for the LHC era: any solution to the SUSY $\mu$ problem should first forbid the appearance of $\mu$, but then re-generate it at the weak scale, {\it which is now hierarchically smaller than the soft breaking scale}: \begin{equation} |\mu |\sim m_{weak}\sim 100-300\ {\rm GeV}\ll m_{soft}\sim {\rm multi-TeV}\lesssim m_{3/2} . 
\label{eq:muLHP} \end{equation} Our goal in this paper is to review various proposed solutions to the SUSY $\mu$ problem and confront them with the Little Hierarchy as established by LHC data and as embodied by Eq. \ref{eq:muLHP}. While many solutions can be {\it tuned} to maintain the Little Hierarchy, others may offer compatibility with or even a mechanism to generate Eq. \ref{eq:muLHP}. Thus, present LHC data may be pointing to favored solutions to the SUSY $\mu$ problem which may be reflective of the way nature actually works. With this end in mind, in Sec. \ref{sec:rev} we will review a variety of mechanisms which have been offered as solutions to the SUSY $\mu$ problem. We organize the twenty solutions according to: \begin{itemize} \item solutions from supergravity/superstring constructions, \item extended MSSM solutions, \item solutions from an extra local $U(1)^\prime$ and \item solutions involving Peccei-Quinn (PQ) symmetry and axions. \end{itemize} Many of these solutions tend to relate the $\mu$ parameter to the scale of soft SUSY breaking, which would place the $\mu$ parameter well above the weak scale and thus require significant EW fine-tuning. One such example is the original Kim-Nilles (KN)~\cite{kn} model (Subsec. \ref{ssec:kn}) which generates a $\mu$ parameter $\mu\sim v_{PQ}^2/m_P$ and relates $v_{PQ}\sim m_{hidden}$ (where $m_{hidden}$ is a mass scale associated with hidden sector SUGRA breaking) and thus obtains $\mu\sim v_{PQ}^2/m_P \sim m_{hidden}^2/m_P \sim m_{3/2}$. However, the LHP can also be accommodated by allowing for $v_{PQ} \ll m_{hidden}$ so that $\mu \ll m_{3/2}$. While KN allows this possibility to be implemented ``by hand'', the later MSY~\cite{msy}, CCK~\cite{cck} and SPM~\cite{spm} models (Subsec. \ref{ssec:radpq}) implement radiative PQ breaking as a consequence of SUSY breaking with the result that $v_{PQ} \ll m_{hidden}$ and hence $\mu\ll m_{soft}$~\cite{radpq}. 
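To get a feel for the numbers (a rough illustration, taking $\lambda_\mu\sim 1$ and $m_P\simeq 2.4\times 10^{18}$ GeV), the KN-type relation gives
\be
\mu\sim \frac{v_{PQ}^2}{m_P}\sim\begin{cases} 40\ {\rm GeV}, & v_{PQ}\sim 10^{10}\ {\rm GeV},\\ 4\ {\rm TeV}, & v_{PQ}\sim 10^{11}\ {\rm GeV},\end{cases}
\ee
so that a natural value $\mu\sim 200$ GeV corresponds to $v_{PQ}\sim 2\times 10^{10}$ GeV, illustrating quantitatively how $v_{PQ}\ll m_{hidden}$ translates into $\mu\ll m_{3/2}$.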
A prominent criticism of the $\mu$ solutions based on the existence of a global PQ or discrete symmetry is that such symmetries are incompatible with gravity at high scales~\cite{nohair,wormhole,suss,dob,km_r}, {\it i.e.} that including the presence of gravity could spoil any global or discrete symmetries which may be postulated. In Subsec. \ref{sssec:grav}, we discuss possible ways around the gravity spoliation of global or discrete symmetries. The MBGW model~\cite{bgw2} (Subsec. \ref{ssec:mbgw}) adopts a gravity-safe PQ symmetry thanks to a more fundamental discrete gauge symmetry $\mathbb{Z}_{22}$ and also generates PQ breaking from SUSY breaking, albeit not radiatively. An attractive alternative to the discrete or continuous gauge symmetry resides in the possibility of a discrete or continuous $R$ symmetry. Several discrete $R$-symmetries are possible which are anomaly-free (up to a Green-Schwarz term), forbid the $\mu$ parameter and other dangerous proton decay operators, and are consistent with an underlying grand unification structure\cite{lrrrssv1,lrrrssv2}. Such discrete $R$-symmetries are expected to arise from compactification of extra dimensions in string theory. The $\mathbb{Z}_4^R$ symmetry stands out as a particularly simple approach that also leads to exact $R$-parity conservation. If one seeks to relate a gravity-safe PQ solution to the strong CP problem with a solution to the $\mu$ problem, then two hybrid models based on $\mathbb{Z}_{24}^R$ are examined (Subsec. \ref{ssec:hybrid}). In this case, the PQ symmetry arises as an accidental approximate global symmetry which emerges from the more fundamental discrete $R$ symmetry. Here, the PQ breaking is generated through a large negative soft term and not radiatively. In Sec. \ref{sec:exp} we discuss the issue of experimental testability and distinguishability of various solutions to the $\mu$ problem. In Sec. 
\ref{sec:conclude}, we present a convenient Table \ref{tab:overview} which summarizes our review. Then we draw some final conclusions. Some pedagogical reviews providing an in-depth overview of supersymmetric models of particle physics can be found in Refs.~\cite{wss}. \section{A review of some solutions to the SUSY $\mu$ problem} \label{sec:rev} In this Section, we review some solutions to the SUSY $\mu$ problem. In the solutions reviewed here, the $\mu$-term is typically generated by breaking the symmetry which originally prohibits the $\mu$-term at tree level. Depending on the source of such symmetry breaking, we categorize the solutions according to 1. those from supergravity/superstring models, 2. those from (visible-sector) extensions of the MSSM, 3. those including an extra local $U(1)^\prime$ and 4. those which also include a solution to the strong CP problem with Peccei-Quinn symmetry breaking. \subsection{Solutions in supergravity/string construction} \subsubsection{Giudice-Masiero (GM)} \label{ssec:gm} In supergravity models the K\"ahler function $G=K+\log |W|^2$ is written in terms of the real K\"ahler potential $K$ and the holomorphic superpotential $W$. If we posit some symmetry (PQ or $R$-symmetry are suggested in Ref. \cite{gm}) to forbid the usual MSSM $\mu$ term, then one may regenerate it via the Higgs fields coupling to hidden sector fields $h_m$ via non-renormalizable terms in $K$~\cite{gm}: \be K\ni H_u^\dagger H_u+H_d^\dagger H_d +\left(\frac{\lambda_\mu}{m_P}H_u H_d h^\dagger +h.c.\right) . \ee If we arrange for SUSY breaking in the hidden sector, then the auxiliary component of $h$ develops a vev $\langle F_h\rangle\sim m_{hidden}^2$ so that the gravitino gets a mass $m_{3/2}\sim m_{hidden}^2/m_P$. A $\mu$ term is generated of order \be \mu_{\rm eff}=\lambda_{\mu}\frac{\langle F_h^*\rangle}{m_P}\sim \lambda_{\mu} m_{hidden}^2/m_P\sim \lambda_{\mu} m_{3/2}\sim m_{soft} . 
\ee Thus, in the GM case, the $\mu$ parameter arises which is typically of order the soft breaking scale unless the coupling $\lambda_{\mu}$ is suppressed at the $\sim 0.01-0.1$ level. \subsubsection{Casas-Munoz (CM)} \label{ssec:cm} Casas and Munoz~\cite{cm} propose a string theory inspired solution to the SUSY $\mu$ problem. In string theory, dimensionful couplings such as $\mu$ are already forbidden by the scale invariance of the theory so no new symmetries are needed to forbid it. They begin with a superpotential of the form \be W=W_0+\lambda_{\mu}W_0H_uH_d/m_P^2 \label{eq:cm_W} \ee where $W_0$ is the usual superpotential of the MSSM (but without the $\mu$ term) along with the hidden sector component which is responsible for SUSY breaking: $W_0=W_0^{vis}(z_i)+W_0^{hid}(h_m)$ where the $z_i$ comprise visible sector fields while the $h_m$ denote hidden sector fields. While the scale-variant $\mu$ term is forbidden in $W_0^{vis}$, the non-renormalizable contribution in Eq. \eqref{eq:cm_W} is certainly allowed and, absent any symmetries which could forbid it, probably mandatory. Under, for instance, $F$-term SUSY breaking in the hidden sector, then $W_0^{hid}$ gains a vev $\langle W_0^{hid}\rangle\sim m_{hidden}^2m_P$ (as is easy to see in the simplest Polonyi model for SUSY breaking with $W_{Polonyi}=m_{hidden}^2(h+\beta m_P)$ where $\beta$ is a dimensionless constant). Under these conditions, then a $\mu$ term develops with \be \mu_{\rm eff}\sim \lambda_{\mu}m_{hidden}^2/m_P\sim \lambda_{\mu} m_{3/2} \sim m_{soft}. \label{eq:cm_mu} \ee Ref.~\cite{cm} goes on to show that the CM solution can easily emerge in models of SUSY breaking due to hidden sector gaugino condensation at some intermediate mass scale $\Lambda_h$ (where then we would associate $m_{hidden}^2\simeq \Lambda_h^3/m_P$). A benefit of the CM solution is that it should be consistent with any stringy UV completion~\cite{saul} as it avoids the presence of some global (PQ) symmetry. 
A possible drawback to CM is that the $\mu$ term is naturally expected to be of order $m_{soft}$ instead of $m_{weak}$ unless $\lambda_{\mu}$ is suppressed (as in GM). One way to falsify the CM solution would be to discover a DFSZ-like axion with consistent mass and coupling values. Such a discovery would exclude the second term in Eq. \eqref{eq:cm_W} since it would violate the PQ symmetry. \subsubsection{$\mu$ and a big hierarchy from approximate $R$-symmetry} \label{ssec:Rsym} In string theory models, approximate $R$-symmetries are expected to develop from overall Lorentz symmetry of the 10-dimensional spacetime when compactified to four dimensions. Under a continuous $U(1)_R$ symmetry, the superspace co-ordinates transform non-trivially and hence so do the bosonic and fermionic components of superfields. Thus, these symmetries can be linked to overall Lorentz symmetry where also bosons and fermions transform differently. Under exact $R$-symmetry and supersymmetry, then the superpotential $\mu$ term is forbidden since the gauge-invariant bilinear term of Higgs pair $H_uH_d$ carries zero $R$-charge while the superpotential must have $R_W=+2$. However, $H_uH_d$ may couple to various other superfields $\phi_i$ which carry non-trivial $R$-charges so that \be W\ni P_\mu(\phi_i )H_uH_d \ee where $P_\mu (\phi_i )$ is a sum over monomials in the fields $\phi_i^n$. Unbroken $R$-symmetry requires a vanishing $\langle P_\mu (\phi_i )\rangle$ but if the $R$-symmetry is approximate then non-vanishing $P_\mu (\phi_i )$ contributions will develop at higher orders in powers of the field vevs $\langle (\phi_i/m_P)\rangle \lesssim 1$. Thus, a mild hierarchy in the field vevs $\langle \phi_i/m_P\rangle \lesssim 1$, when raised to higher powers $\langle (\phi_i/m_P)^{n_i}\rangle \ll 1$, can generate a much larger hierarchy of scales~\cite{kappl}. 
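A simple numerical example illustrates the power of this mechanism (the particular values of $\langle\phi\rangle$ and $n$ here are chosen purely for illustration): suppose a single field with $\langle\phi\rangle/m_P\sim 10^{-2}$ enters $P_\mu$ only at order $n=8$; then
\be
\mu\sim \left\langle\left(\frac{\phi}{m_P}\right)^{8}\right\rangle m_P\sim 10^{-16}\, m_P\sim 240\ {\rm GeV} ,
\ee
so a mild hierarchy in the vev, raised to a modest power, spans the entire Planck-to-weak hierarchy.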
In this solution to the $\mu$ problem, which is essentially a UV completion of the CM solution, then $\mu\sim m_{3/2}\sim \langle W\rangle$ is expected to arise. \subsubsection{Solution via the discrete $R$-symmetry $\mathbb {Z}_4^R$} \label{ssec:Z4R} A particularly attractive way to solve the $\mu$ problem in some string constructions is via a discrete Abelian $R$-symmetry $\mathbb{Z}_4^R$~\cite{cckR,hnp,DineR}. Such $R$-symmetries may arise as discrete remnants of the Lorentz symmetry of extra dimensional ($d=10$) models upon compactification to $d=4$. In Ref.~\cite{babu}, the $\mathbb{Z}_4^R$ symmetry was invoked to forbid the $\mu$ term as well as dimension-4 baryon- and lepton-number violating operators while dangerous dimension-5 operators leading to proton decay are highly suppressed~\cite{lrrrssv1,lrrrssv2}. The desirable Weinberg neutrino mass operator is allowed. The $\mathbb{Z}_4^R$ charges are assigned so that all anomalies cancel by including Green-Schwarz terms (and extra $R$-charged singlets for gravitational anomalies). The $R$-charge assignments for the discrete $R$-symmetry $\mathbb{Z}_4^R$ are shown in the second row of Table \ref{tab:Z4R}. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $L_i$ & $U_i^c$ & $D_i^c$ & $E_i^c$ & $N_i^c$ \\ \hline $\mathbb{Z}_{4}^R$ charge & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline \end{tabular} \caption{ $\mathbb{Z}_{4}^R$ charge assignments for various superfields of the LRRRSSV model\cite{lrrrssv1}. } \label{tab:Z4R} \end{center} \end{table} The charge assignments are consistent with embedding the matter superfields into a single ${\bf 16}$ of $SO(10)$ while the split Higgs multiplets would arise from Wilson-line breaking of gauge symmetry. 
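It is instructive to check the Table \ref{tab:Z4R} assignments explicitly. Since the superpotential must carry $\mathbb{Z}_4^R$ charge $2\ ({\rm mod}\ 4)$, one finds, for example,
\be
QH_uU^c:\ 1+0+1=2 \qquad {\rm and}\qquad LH_uLH_u:\ 1+0+1+0=2\quad ({\rm allowed}),
\ee
\be
H_uH_d:\ 0+0=0 \qquad {\rm and}\qquad QQQL:\ 1+1+1+1\equiv 0\quad ({\rm forbidden}),
\ee
so the MSSM Yukawa couplings and the Weinberg neutrino mass operator are allowed while the $\mu$ term and the dangerous dimension-5 proton decay operators are forbidden by the exact symmetry (and hence highly suppressed after its breaking), as claimed.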
The $\mathbb{Z}_4^R$ symmetry may be broken via non-perturbative effects such as gaugino condensation breaking of SUGRA in the hidden sector so that a gravitino mass $m_{3/2}$ is induced along with soft terms $m_{soft}\sim m_{3/2}$. A $\mu$ term may arise via GM (Sec. \ref{ssec:gm}) and/or CM (Sec.~\ref{ssec:cm}) so that $\mu\sim\langle W\rangle/m_P^2\sim m_{3/2}\sim m_{soft}$. Although the discrete $\mathbb{Z}_4^R$ $R$-symmetry is broken, the discrete matter/$R$-parity remains unbroken so that the LSP remains absolutely stable. This sort of solution to the $\mu$ problem is expected to be common in heterotic string models compactified on an orbifold~\cite{lrrrssv2}. Other possibilities for $\mathbb{Z}_N^R$ with $N>4$ also occur\cite{lrrrssv2} and in fact any $N$ value is possible under anomaly cancellations provided one includes additional exotic matter into the visible sector~\cite{Harigaya:2013vja}. A further concern is that a spontaneously broken discrete symmetry may lead to formation of domain walls in the early universe which could dominate the present energy density of the universe~\cite{sikivie,Larsson:1996sp,dine}. For the case of gravity mediation, the domain walls would be expected to form around the SUSY breaking scale $T\sim 10^{12}$ GeV. However, if inflation persists to lower temperatures, then the domain walls may be inflated away. It is key to observe that many mechanisms of baryogenesis are consistent with inflation persisting down to temperatures of $T\sim 10^6$ GeV~\cite{baryo}. \subsubsection{String instanton solution} \label{ssec:instanton} In string theory models, it is possible for superpotential terms to arise from non-perturbative instanton effects. These are particularly well suited for open strings in braneworld scenarios such as IIA and IIB string theory. 
Intriguing applications of stringy instanton effects include the generation of Majorana neutrino mass terms, generation of Yukawa couplings and generation of the $\mu$ term in the superpotential~\cite{ibanez_uranga,gw}. In some D-brane models which include the MSSM at low energy, then the superpotential $\mu$ term may be forbidden by $U(1)$ symmetries but then it is generated non-perturbatively via non-gauge $D$-brane instanton effects. In this case, then a $\mu$ term of the form \be W\sim \exp (-S_{\rm cl})M_s H_u H_d \ee can be induced where then $\mu\simeq \exp (-S_{\rm cl})M_s$ and $M_s$ is the string mass scale. The exponential suppression leads to the possibility of a $\mu$ term far below the string scale. Of course, in this case one might expect the $\mu$ term to arise at any arbitrary mass scale below the string scale rather than fortuitously at the weak scale. If the $\mu$ term does arise at the weak scale from stringy instanton effects, then that value may act as an attractor such that soft terms like $m_{H_u}^2$ are pulled statistically to large values by the string theory landscape, but not so large that EW symmetry doesn't break. Then the weak scale value of $m_{H_u}^2$ is of comparable (negative) magnitude to $\mu$ (the naturalness condition) to ensure a universe with anthropically required electroweak symmetry breaking~\cite{bbss}. \subsubsection{Mu solution in ${\rm G_2MSSM}$} \label{ssec:g2mssm} In Ref.~\cite{kane} (Acharya {\it et al.}), the authors consider 11-dimensional $M$-theory compactified on a manifold of $G_2$ holonomy, and derive various phenomenological implications. They consider fields living in multiplets of $SU(5)$ so the doublet-triplet splitting problem is present. As opposed to string theory models compactified on orbifolds, in $M$-theory the matter fields live only in four dimensions so a different solution to the $\mu$ problem is required. 
Witten suggested the existence of an additional discrete symmetry which forbids the $\mu$ term from appearing but which allows the Higgs triplets to gain large enough masses so as to evade proton decay constraints~\cite{witten}. In Ref.~\cite{kane_mu}, it is shown that a $\mathbb{Z}_4$ symmetry is sufficient to forbid the $\mu$ term and other dangerous RPV operators while allowing massive Higgs triplets. The $\mathbb{Z}_4$ discrete symmetry is assumed to be broken via moduli stabilization so that a small $\mu$ term develops. In the $G_2MSSM$, the gravitino gains mass from non-perturbative effects (such as gaugino condensation) in the hidden sector so that $m_{3/2}\sim \Lambda_h^3/m_P^2\sim 10-200$ TeV. Matter scalar soft masses are expected at $m_\phi\sim m_{3/2}$ so should be very heavy (likely unnatural in the context of Eq. \eqref{eq:mzs}). In contrast, gauginos gain mass from the gauge kinetic function which depends on the vevs of moduli fields, so they are expected to be much lighter: $m_{\lambda}\sim$ TeV scale; in fact these may have dominant AMSB contributions~\cite{amsb} (with comparable moduli-mediated SUSY breaking contributions) so that the wino may be the lightest of the gauginos. The dominant contribution to the $\mu$ parameter arises from K\"ahler contributions \`a la Giudice-Masiero and is expected to be $\mu\sim c\frac{\langle S_i\rangle}{m_P}m_{3/2}\sim 0.1 m_{3/2}$ (where $c$ is some constant $\sim 1$): thus it is suppressed compared to scalar soft masses, but perhaps comparable to gaugino masses. \subsection{Extended MSSM-type solutions} \subsubsection{NMSSM: Added singlet with $\mathbb{Z}_3$ discrete symmetry} \label{ssec:nmssm} The case of adding an additional visible-sector gauge singlet superfield $S$ to the MSSM leads to the next-to-minimal SSM or NMSSM~\cite{nmssm}. 
Some motivation for the NMSSM can originate in string theory models such as heterotic orbifolds where the $\mu$-term arises as an effective term from couplings of the Higgs pair to a singlet field~\cite{saul}. Without imposing any symmetry to forbid singlet couplings, we can write a generic NMSSM superpotential as follows: \be W_{NMSSM}=W_{MSSM}(\mu =0 )+\lambda_\mu S H_u H_d+ \xi_F S +\frac{1}{2}\mu_SS^2 + \frac{1}{3}\kappa S^3 \label{eq:gen_nmssm} \ee and corresponding soft terms \be {\cal L}_{soft}^{NMSSM}={\cal L}_{soft}^{MSSM}-(a_\lambda S H_uH_d+B\mu H_uH_d +\frac{1}{3}a_\kappa S^3 +\frac{1}{2}b_SS^2+t S+c.c.)-m_S^2|S|^2 . \label{eq:NMSSMsoft} \ee Here $W_{MSSM}(\mu=0)$ denotes the superpotential for the MSSM but without the $\mu$-term. The tadpole $t$ in Eq. \eqref{eq:NMSSMsoft} may have destabilizing quadratic divergences and must be suppressed~\cite{bagger_etc}. A $\mathbb{Z}_3$ discrete symmetry is usually imposed wherein chiral superfields transform as $\phi\rightarrow e^{2\pi i/3}\phi$, which sends the dimensionful couplings $\xi_F$, $\mu$, $\mu_S$, $B\mu$, $b_S$ and $t$ to zero (only cubic couplings are allowed) at the expense of possibly introducing domain walls into the early universe after the electroweak phase transition~\cite{nmssm_domain}. (Some means of avoiding domain walls are proposed in Refs.~\cite{Abel:1996cr}.) By minimizing the scalar potential, now including the new singlet scalar $S$, then vevs $v_u$, $v_d$ and $v_s$ are induced. An effective $\mu$ term emerges with \be \mu_{\rm eff}=\lambda_\mu v_s . \ee An attractive alternative to the (perhaps ad hoc) $\mathbb{Z}_3$ as the $\mu$-forbidding symmetry would be one of the anomaly-free discrete $R$-symmetries $\mathbb{Z}_4^R$ or $\mathbb{Z}_8^R$~\cite{lrrrssv2}. Like the $\mathbb{Z}_3$ discrete symmetry, the $\mathbb{Z}_8^R$ symmetry also forbids the dangerous divergent tadpole term. 
The $\mathbb{Z}_4^R$ symmetry would allow the linear singlet term, but it can be argued that in the effective theory the linear term appears when the fields with which the singlet field is coupled acquire VEVs. If these fields belong to the hidden sector, then the coupling will be suppressed by some high mass scale ranging as high as $m_P$ in the case of gravity-mediation. In this case the linear singlet term will be present but it will be highly suppressed~\cite{lrrrssv2}. Thus, all the advantages of the $\mathbb{Z}_3$ discrete symmetry can be obtained by imposing instead either a $\mathbb{Z}_4^R$ or $\mathbb{Z}_8^R$ symmetry: this then avoids the disadvantages--ad-hocness and introduction of domain walls into the early universe after electroweak phase transition-- inherent in the $\mathbb{Z}_3$ discrete symmetry. The added singlet superfield $S$ in the NMSSM leads to new scalar and pseudoscalar Higgs fields which can mix with the usual MSSM Higgses for $v_s\sim v_{u,d}$. So far, LHC Higgs coupling measurements favor a SM-like Higgs so one might expect $v_s\gg v_{u,d}$ which may lead one to an unnatural value of $\mu_{\rm eff}$. The superfield $S$ also contains a spin-$1\over 2$ singlino $\tilde{s}$ which may mix with the usual neutralinos and might even be the LSP~\cite{balazs}. In the NMSSM, an additional Higgs quartic potential term is generated from the $F$-term of the singlet superfield, and thus the SM-like Higgs mass 125~GeV is explained more easily without introducing large one-loop corrections. This feature can make the NMSSM more attractive to those who are uncomfortable with an MSSM Higgs of mass $m_h\simeq 125$ GeV\cite{Hall:2011aa}. \subsubsection{nMSSM} \label{sssec:nMSSM} An alternative singlet extension of the MSSM is the Nearly-Minimal Supersymmetric Standard Model (nMSSM) (also sometimes called Minimal Nonminimal Supersymmetric Standard Model or MNSSM)~\cite{Panagiotakopoulos:1999ah,Panagiotakopoulos:2000wp}. 
The nMSSM, like the NMSSM, solves the $\mu$ problem via an added singlet superfield $S$. But in the nMSSM, the model is founded on a discrete $R$-symmetry either $\mathbb{Z}_5^R$ or $\mathbb{Z}_7^R$. Discrete $R$-charge assignments for $\mathbb{Z}_5^R$ are shown in Table \ref{tab:nMSSM}. The tree level superpotential is given by \be W_{nMSSM}\ni \lambda_\mu SH_uH_d+f_uQH_uU^c+f_dQH_d D^c+f_{\ell}LH_dE^c+f_{\nu}LH_uN^c +\frac{1}{2} M_N N^cN^c \nonumber \\ \ee so that unlike the NMSSM with $\mathbb{Z}_3$ symmetry, the $\kappa S^3$ term is now forbidden. This is why the model is touted as a more minimal extension of the MSSM. The discrete $R$ symmetry is broken by SUSY breaking effects in gravity-mediation. Then, in addition to the above terms, an effective potential tadpole contribution \be W_{nMSSM}^{tad}\ni \xi_F S \ee is induced at six-loop or higher level where $\xi_F\sim m_{3/2}^2$ (along with a corresponding soft SUSY breaking term). Due to lack of the discrete global $\mathbb{Z}_3$ symmetry, the nMSSM then avoids the domain wall and weak scale axion problems that might afflict the NMSSM. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|ccccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $U_i^c$ & $D_i^c$ & $L_i$ & $E_i^c$ & $N^c$ & $S$ \\ \hline $\mathbb{Z}_5^R$ & 2 & 2 & 4 & 6 & 6 & 4 & 6 & 6 & 3 \\ \hline \end{tabular} \caption{Charge assignments for various superfields of nMSSM with a $\mathbb{Z}_5^R$ discrete $R$-symmetry. } \label{tab:nMSSM} \end{center} \end{table} Like the NMSSM, the nMSSM will include added scalar and pseudoscalar Higgs particles along with a fifth neutralino. However, due to lack of the $S$ self-coupling term and presence of the tadpole term, the mass eigenstates and couplings of the added matter states will differ from the NMSSM~\cite{Dedes:2000jp,Menon:2004wv,Barger:2006dh,Barger:2006kt,Cao:2009ad}. 
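The structure of $W_{nMSSM}$ can be verified directly from the charges of Table \ref{tab:nMSSM}: with the superpotential required to carry $\mathbb{Z}_5^R$ charge $2\ ({\rm mod}\ 5)$, the allowed terms satisfy
\be
SH_uH_d:\ 3+2+2\equiv 2,\qquad QH_uU^c:\ 4+2+6\equiv 2,\qquad N^cN^c:\ 6+6\equiv 2 ,
\ee
while $H_uH_d$ ($2+2\equiv 4$), $S^2$ ($3+3\equiv 1$) and $S^3$ ($3+3+3\equiv 4$) all carry the wrong charge: this is precisely why the $\mu$ term and the NMSSM-like $\kappa S^3$ self-coupling are absent at tree level.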
The lightest neutralino in the nMSSM is very light, typically below 50 GeV, though it is difficult to push below 30 GeV due to the dark matter relic density constraint. Since the neutralinos are so light, it is very likely that a chargino will decay into either an MSSM-like $\chi_2^0$ or a singlino $\chi_1^0$, giving rise to a five-lepton final state. A further decay of the neutralino can give rise to a seven-lepton final state. These kinds of multilepton events are more likely in the nMSSM than in the NMSSM. Also, since in the nMSSM the neutralino can be so light, deviations in Higgs boson $h$ decay branching fractions become more likely than in the case of the NMSSM~\cite{Barger:2006dh,Barger:2006kt}. \subsubsection{Mu-from-nu SSM ($\mu\nu$SSM)} \label{ssec:munuSSM} The $\mu$-from-$\nu$SSM ($\mu\nu$SSM)~\cite{LopezFogliani:2005yw} is in a sense a more minimal version of the NMSSM in that it makes use of the gauge singlet right-handed neutrino superfields $N^c_i$ to generate a $\mu$ term. The $\mu\nu$SSM first requires a $\mathbb{Z}_3$ symmetry to forbid the usual $\mu$ term (and also a usual Majorana neutrino mass term $M_iN^cN^c$). The superpotential is given by \bea W &\ni &f_uQH_uU^c+f_dQH_d D^c+f_{\ell}LH_dE^c+f_{\nu}LH_uN^c \nonumber \\ & +& \lambda_{\mu i} N^c_i H_u H_d+{1\over 3}\kappa_{ijk}N_i^cN_j^cN_k^c . \eea If the scalar component $\tilde{\nu}_{Ri}$ of one of the RHN superfields $N_i^c$ gains a weak scale vev, then an effective $\mu$ term develops: \bea \mu_{\rm eff} = \lambda_{\mu i} \langle\tilde{\nu}_{Ri}\rangle \eea along with a weak scale Majorana neutrino mass term $M_{Njk}\sim\kappa_{ijk}\langle\tilde{\nu}_{Ri}\rangle$. By taking small enough neutrino Yukawa couplings, then a weak scale see-saw develops which can accommodate the measured neutrino masses and mixings. 
The $\mu\nu$SSM develops bilinear $R$-parity violating terms via the superpotential $f_\nu LH_uN^c$ term so that the lightest $\mu\nu$SSM particle is not stable and doesn't comprise dark matter: $\tilde{\chi}_1^0\rightarrow W^{(*)}\ell$ and other modes. As an alternative, a gravitino LSP is suggested with lifetime longer than the age of the universe: it could decay as $\tilde{G}\rightarrow\nu\gamma$ and possibly yield gamma ray signals from the sky~\cite{Choi:2009ng}. The phenomenology of the $\mu\nu$SSM also becomes more complex: now the neutrinos inhabit the same mass matrix as neutralinos, leptons join charginos in another mass matrix and Higgs scalars and sneutrinos inhabit a third mass matrix (albeit with typically small mixing effects). Collider signals are strongly modified from usual MSSM expectations~\cite{Fidalgo:2011ky}. While the $\mu\nu$SSM may be considered the most minimal model to solve the $\mu$ problem, it suffers the same $\mathbb{Z}_3$ domain wall problem as the NMSSM (and perhaps the same routes to avoidance~\cite{Abel:1996cr}). Also, in the context of GUTs, the role that the $N_i^c$ field plays in the {\bf 16}-dimensional spinor of $SO(10)$ would have to be abandoned. \subsection{$\mu$ from an extra local $U(1)^\prime$} \label{ssec:u1} In this class of models~\cite{cvetic,xt,mw,cp,arvanitaki}, a SM singlet superfield $S$ is introduced which is charged under a new $U(1)^\prime$ gauge interaction, so terms with mass dimensions in Eq.~\eqref{eq:gen_nmssm} are forbidden. Due to the $U(1)^\prime$ gauge charges of $S$, the cubic coupling $S^3$ is also absent. We will see below three representative realizations of this class of model. \subsubsection{CDEEL model} \label{sssec:CDEEL} Cvetic-Demir-Espinosa-Everett-Langacker~\cite{cvetic} (CDEEL) propose a $U(1)^\prime$ extended gauge symmetry model as emblematic of fermionic orbifold string compactifications. 
While the usual $\mu$ term is forbidden by the extended gauge symmetry, the superpotential term \be W\ni\lambda_{\mu} S H_u H_d \ee is allowed; under $U(1)^\prime$ breaking, $S$ develops a vev $\langle S\rangle\sim m_{weak}$ such that a $\mu$ term $\mu_{\rm eff}=\lambda_{\mu}\langle S\rangle$ is generated along with an additional weak scale $Z^\prime$ gauge boson. Forbidding the $\mu$ term via a gauge symmetry avoids the gravity spoliation/global symmetry problem. In addition, the $\mu$ term is linked to EW symmetry breaking and this would be expected to occur at $m_{weak}$ rather than $m_{soft}$. The $U(1)^\prime$ breaking can occur either via large soft SUSY breaking trilinear couplings or via radiative corrections driving certain mass-squared terms negative. A way to test this class of models, in the exotica decoupling limit, is to search for new $Z^\prime$ gauge bosons with exotic decays to light higgsinos~\cite{cp}. To maintain anomaly cancellation, a variety of (intermediate scale) exotic quark and lepton fields must be introduced along with extra SM gauge singlets. If these new states come in GUT representations, then gauge coupling unification can be maintained. A set of possible $U(1)^\prime$ gauge charges is listed in Table~\ref{tab:cp}. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $U_i^c$ & $D_i^c$ & $L_i$ & $E_i^c$ & $S$ \\ \hline $(2\sqrt{10})Q^\prime$ & -2 & -3 & 1 & 1 & 2 & 2 & 1 & 5 \\ \hline \end{tabular} \caption{Charge assignments for various superfields of a $U(1)^\prime$ model~\cite{cp,mw}. } \label{tab:cp} \end{center} \end{table} \subsubsection{sMSSM model} \label{sssec:sMSSM} An alternative $U(1)^\prime$-extended MSSM (abbreviated as sMSSM)\cite{Erler:2002pr,Han:2004yd} also solves the $\mu$ problem by invoking multiple SM singlet superfields charged under $U(1)^\prime$ symmetry. 
In this model, a visible-sector singlet field $S$ directly couples to Higgs doublets but avoids stringent constraints on having an additional weak scale $Z^\prime$ gauge boson by introducing as well a {\it secluded sector} containing three additional singlets $S_1,\ S_2,\ S_3$ charged under $U(1)^\prime$. The superpotential is given by \be W_{sMSSM}\ni\lambda_{\mu} S H_u H_d+\lambda_s S_1S_2S_3 \ee so that the secluded sector has a nearly $F$- and $D$-flat scalar potential. The $U(1)^\prime$ and electroweak symmetry breaking then occurs as a result of SUSY breaking $A$-terms. Then the secluded sector scalars can obtain vevs much larger than the weak scale; if also the trilinear singlet coupling $\lambda_s$ is small, then the additional $Z^\prime$ essentially decouples. Nonetheless, additional Higgs and singlinos appear in the weak scale effective theory so that this model phenomenologically resembles the nMSSM (described in Subsec.~\ref{sssec:nMSSM}), which has very different manifestations from what is expected from the CDEEL $U(1)^\prime$ model. \subsubsection{HPT model} \label{sssec:HPT} The Hundi-Pakvasa-Tata (HPT) model~\cite{xt} also solves the SUSY $\mu$ problem by positing an additional $U(1)^\prime$ gauge symmetry in a supergravity context. The $U(1)^\prime$ charges of the multiplets in the HPT scheme are shown in Table \ref{tab:hpt}. With these $U(1)^\prime$ charge assignments, the $\mu$ term is forbidden in the superpotential but (unlike in the CDEEL model) a dim-4 term as $\mu$ solution \`a la Kim-Nilles is allowed: \be W\ni\lambda_{\mu} S^2 H_u H_d/m_P . \ee The $U(1)^\prime$ gauge symmetry also forbids trilinear RPV couplings and dangerous $p$-decay operators. When the $U(1)^\prime$ breaks (at an intermediate scale $Q\sim 10^{11}$ GeV), the $S$ field acquires a vev to yield an effective $\mu$ parameter of the required magnitude. 
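For orientation on magnitudes (a rough estimate, taking $\langle S\rangle$ of order the $U(1)^\prime$ breaking scale), the Kim-Nilles-type operator gives
\be
\mu_{\rm eff}=\lambda_{\mu}\frac{\langle S\rangle^2}{m_P}\sim \lambda_{\mu}\,\frac{(10^{11}\ {\rm GeV})^2}{2.4\times 10^{18}\ {\rm GeV}}\sim \lambda_{\mu}\times 4\ {\rm TeV} ,
\ee
so that modest couplings $\lambda_{\mu}\sim 0.05$ yield $\mu_{\rm eff}\sim 200$ GeV, of the required weak scale magnitude.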
A distinctive feature of the HPT model is that a bilinear RPV (bRPV) term $LH_u$ is allowed at the right magnitude so as to generate phenomenologically-allowed neutrino masses~\cite{valle}. The desired pattern of neutrino masses and mixing angles is also accommodated through radiative corrections. The bRPV leads to an unstable lightest neutralino which decays via $\tilde{\chi}_1^0\rightarrow \ell W^{(*)}$ or $\nu Z^{(*)}$ and may lead to displaced vertices in collider events. Dark matter must then be composed of some other particles ({\it e.g.} axions). Also, the $U(1)^\prime$ is broken at the intermediate scale $Q\sim 10^{11}$ GeV so that the additional $Z^\prime$ has a mass far beyond any collider reach. Since solving the $\mu$ problem as well as generating the neutrino mass scale of suitable order requires introduction of a new gauge group $U(1)^\prime$, care must be taken so that associated anomalies are cancelled. Anomaly cancellation requires introducing various additional exotic fields including color triplet $K_i$ and $K_i^\prime$ states. The lightest of these leads to stable weak-scale exotic hadrons which may also yield highly-ionizing tracks at collider experiments. In the HPT scheme, gauge coupling unification may be upset. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $U_i^c$ & $D_i^c$ & $L_i$ & $E_i^c$ & $S$ \\ \hline $Q^\prime$ & 25 & -31 & 0 & -25 & 31 & 2 & 29 & 3 \\ \hline \end{tabular} \caption{Charge assignments for various superfields of the HPT $U(1)^\prime$ supergravity model~\cite{xt}. } \label{tab:hpt} \end{center} \end{table} \subsection{Solutions related to Peccei-Quinn symmetry breaking} In this Subsection, we examine natural $\mu$-term solutions related to the PQ symmetry used to solve the strong CP problem. In this class of models, the $\mu$-term is forbidden by the PQ symmetry, but generated once the PQ symmetry is spontaneously broken. 
Then the model also provides a solution to the strong CP problem and generates axion dark matter. In Subsec.~\ref{ssec:kn}, \ref{ssec:ckn}, and \ref{ssec:king}, we review $\mu$-term generation models with various sources of PQ breaking. Meanwhile, imposing a global symmetry raises the `quality' issue for the symmetry, which may spoil the PQ solution to the strong CP problem, since global symmetries are not protected from quantum gravity effects. In Subsec.~\ref{sssec:grav}, we discuss a criterion for protecting the PQ solution to the strong CP problem, and in Subsec.~\ref{sssec:gra_rsym} we present examples based on discrete $R$-symmetries which satisfy the gravity-safety criterion and can be considered as generating an accidental, approximate PQ symmetry. Also, we review the natural Higgs-flavor-democracy (HFD) solution, which contains an approximate PQ symmetry arising from a discrete symmetry, in Subsec.~\ref{sssec:hfd}. Finally, we review $\mu$-term generation by breaking of PQ symmetry from SUSY breaking: radiative breaking of PQ symmetry (Subsec.~\ref{ssec:radpq}), and breaking of an accidental approximate PQ symmetry, arising from a gauged $U(1)_R$ symmetry (Subsec.~\ref{ssec:CCL}) or from a $\mathbb{Z}_{22}$ discrete gauge symmetry (Subsec.~\ref{ssec:mbgw}), by a large negative trilinear term. \subsubsection{Kim-Nilles solution} \label{ssec:kn} Kim and Nilles (KN)~\cite{kn} presented the first formulation of the SUSY $\mu$ problem along with a proposed solution. In KN, it is proposed that there exists a global Peccei-Quinn (PQ) symmetry $U(1)_{PQ}$, which is needed in the first place as a solution to the strong CP problem.
The PQ symmetry is implemented in the context of the supersymmetrized version of the DFSZ~\cite{dfsz} axion model\footnote{In the DFSZ axion model~\cite{dfsz}, the SM is extended to include two Higgs doublets which then couple to singlets which contain the axion.} wherein the Higgs multiplets carry PQ charges {\it e.g.} $Q_{PQ}(H_u)=Q_{PQ}(H_d)=-1$ so that the $\mu$ term is forbidden by the global $U(1)_{PQ}$. Next, the Higgs multiplets are coupled via a non-renormalizable interaction to a SM gauge singlet field $X$ which carries a PQ charge $Q_{PQ}(X)=+2/(n+1)$: \begin{equation} W_{\mu}\ni \frac{\lambda_{\mu}}{m_P^n}X^{n+1}H_uH_d \end{equation} for $n\ge 1$. It is arranged to spontaneously break PQ by giving the $X$ field a vev $\langle X\rangle$, which also generates a (nearly) massless axion $a$ which solves the strong CP problem. To obtain cosmologically viable axions, with $\langle X\rangle \sim 10^{11}$ GeV and $m_P \simeq 2.4\times 10^{18}$ GeV, a $\mu$ parameter of the order of $m_{3/2}$ is obtained only if $n = 1$ (for which $Q_{PQ}(X) = +1$). The matter superfields also carry appropriate PQ charges so as to allow the MSSM trilinear superpotential terms: see Table \ref{tab:kn}. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $L_i$ & $U_i^c$ & $D_i^c$ & $E_i^c$ & $X$ & $Y$ & $Z$ \\ \hline PQ charge & $-1$ & $-1$ & $+1$ & $+1$ & 0 & 0 & 0 & $+1$ & $-1$ & 0\\ \hline \end{tabular} \caption{PQ charge assignments for various superfields of the KN model with $n=1$. One may add multiples of weak hypercharge or $B-L$ to these, so their values are not unique. } \label{tab:kn} \end{center} \end{table} The intermediate PQ breaking scale can be generated from a PQ superpotential of the form: \be W_{PQ}=\lambda_{PQ}Z\left( XY-v_{PQ}^2\right).
\label{eq:knW} \ee The scalar components of $X$ and $Y$ develop vevs $\langle X\rangle =\langle Y\rangle = v_{PQ}$ such that a $\mu$ term is generated: \be \mu = \lambda_\mu \langle X\rangle^2/m_P . \ee This value of the $\mu$ term $\mu\sim \lambda_{\mu}v_{PQ}^2/m_P$ is to be compared to the soft breaking scale in models of gravity-mediation: $m_{soft}\sim m_{3/2}\sim m_{hidden}^2/m_P$. Here, $v_{PQ}$ is identified as $v_{PQ} \sim m_{hidden}$ and thus $\mu$ is obtained as $\mu \sim m_{3/2}$. But a value $\mu\sim m_{weak}\ll m_{soft}\sim m_{3/2}$ can be accommodated for $v_{PQ}< m_{hidden}$, {\it i.e.} if the scale of PQ breaking lies somewhat below the mass scale associated with hidden sector SUSY breaking.\footnote{ In models with SUSY breaking arising from {\it e.g.} gaugino condensation at an intermediate scale $\Lambda_h$, then $m_{3/2}\sim \Lambda_h^3/m_P^2$, in which case we would define $m_{hidden}^2 \sim \Lambda_h^3/m_P$.} \footnote{ Ref.~\cite{mafi} presents a more complete ultraviolet theory which includes a mechanism to obtain $v_{PQ}$ at the intermediate scale through the introduction of a chiral superfield on the hidden brane, yielding an ultraviolet suppressed term on the hidden brane which gives rise to $\mu\sim m_{weak}$ when SUSY is broken on the hidden brane through the shining mechanism \cite{nima}.} A virtue of the KN solution is that it combines a solution to the strong CP problem with a solution to the SUSY $\mu$ problem which also allows for a Little Hierarchy. A further benefit is that it provides an additional dark matter particle-- namely the DFSZ~\cite{dfsz} axion-- to co-exist with the (thermally under-produced) higgsino-like WIMP from natural SUSY. Thus, dark matter is expected to be comprised of a WIMP/axion admixture~\cite{az1,bbc}. For the lower range of the PQ scale $v_{PQ}$, the dark matter tends to be axion dominated with typically 10-20\% WIMPs by mass density~\cite{dfsz2}.
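For orientation, we can evaluate the KN estimate above numerically; the numbers below are merely illustrative, taking $v_{PQ}\sim 10^{11}$ GeV and $m_P\simeq 2.4\times 10^{18}$ GeV as quoted earlier: \be \mu\sim \frac{\lambda_{\mu}v_{PQ}^2}{m_P}\simeq \lambda_{\mu}\,\frac{(10^{11}\ {\rm GeV})^2}{2.4\times 10^{18}\ {\rm GeV}}\simeq 4\times 10^{3}\,\lambda_{\mu}\ {\rm GeV}, \ee so that a weak scale value $\mu\sim 100-200$ GeV requires either $\lambda_{\mu}\sim {\rm few}\times 10^{-2}$ or $v_{PQ}$ somewhat below $10^{11}$ GeV, in accord with the condition $v_{PQ}<m_{hidden}$ discussed above.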
For larger $v_{PQ}$ values, non-thermal processes such as saxion and axino~\cite{bci} decay augment the WIMP abundance. For even larger values of $v_{PQ}$, the higgsino-like WIMPs are overproduced, and one typically runs into BBN constraints from late-decaying neutral particles (saxions and axinos) or overproduction of relativistic axions from saxion decay, which contribute to the effective number of neutrino species $N_{eff}$ (found to be $N_{eff}=3.13\pm 0.32$ in the recent Particle Data Group tabulation~\cite{pdb}). In the context of the DFSZ model embedded within the MSSM, the presence of higgsinos in the $a\gamma\gamma$ triangle diagram is expected to reduce the axion-photon-photon coupling to levels below present sensitivity, making the SUSY DFSZ axion very challenging to detect~\cite{axpaper}. \subsubsection{Chun-Kim-Nilles} \label{ssec:ckn} In the CKN model~\cite{ckn}, it is assumed that SUSY is broken in the hidden sector due to gaugino condensation $\langle \lambda\lambda\rangle\sim \Lambda_h^3\sim (10^{13}$ GeV$)^3$ in the presence of a hidden $SU(N)_h$ gauge group. Furthermore, there may be vector-like hidden sector quark chiral superfields present, $Q$ and $Q^c$, which transform as $N$ and $N^*$ under $SU(N)_h$. The Higgs and hidden quark superfields carry PQ charges as in Table~\ref{tab:ckn}: \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|ccccccc} multiplet & $H_u$ & $H_d$ & $Q$ & $Q^c$ & $Q_i$ & $U_i^c$ & $D_i^c$ \\ \hline PQ charge & $-1$ & $-1$ & $1$ & $1$ & 0 & 1 & 1 \\ \hline \end{tabular} \caption{PQ charge assignments for various superfields of the CKN model. } \label{tab:ckn} \end{center} \end{table} This allows for the presence of a superpotential term \be W_{CKN}\ni \frac{\lambda_{\mu}}{m_P}QQ^cH_uH_d .
\ee Along with gauginos condensing at a scale $\Lambda_h$ to break SUGRA with $m_{3/2}\sim \Lambda_h^3/m_P^2$, the hidden sector scalar squarks condense at a scale $\Lambda <\Lambda_h$ to break the PQ symmetry and to generate a $\mu$ term \be \mu_{\rm eff}\sim \lambda_{\mu} \Lambda^2/m_P . \ee Thus, this model provides a framework for $\mu <m_{soft}$. It also generates a DFSZ axion to solve the strong CP problem along with a string model-independent (MI) axion which could provide a quintessence solution for the cosmological constant (CC)~\cite{Kim:2009cp}. The CC arises from the very low mass MI axion field slowly settling to the minimum of its potential. \subsubsection{BK/EWK solution linked to inflation and strong CP} \label{ssec:king} In Ref's~\cite{BasteroGil:1997vn,EytonWilliams:2004bm}, a model is proposed with superpotential \be W_{EWK} \ni \lambda_\mu \phi H_u H_d+\kappa \phi N^2 \ee where the $\phi$ field plays the role of the inflaton and the $N$ field is a waterfall field leading to hybrid inflation in the early universe~\cite{hybridI}. Although the model appears similar to the NMSSM, it is based on a PQ rather than a $\mathbb{Z}_3$ symmetry, with charges as in Table \ref{tab:king}. Thus, it avoids the NMSSM domain wall problems which arise from a postulated global $\mathbb{Z}_3$ symmetry. Augmenting the scalar potential with soft breaking terms, the $\phi$ and $N$ fields gain vevs of order some intermediate scale $Q\sim 10^{12}$ GeV, so that the Yukawa couplings $\lambda_\mu$ and $\kappa$ are of order $10^{-10}$. Such tiny Yukawa couplings might arise from type-I string theory constructs~\cite{EytonWilliams:2005bg}. To fulfill the inflationary slow-roll conditions, the field $\phi$ must have a mass below $5-10$ MeV, leading to a reheat temperature of $1-10$ GeV. Domain walls from breaking of the PQ symmetry are inflated away.
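As a quick consistency check of the scales just quoted (the numbers are illustrative): with an intermediate scale vev and a string-suppressed coupling, \be \mu=\lambda_{\mu}\langle\phi\rangle\sim 10^{-10}\times 10^{12}\ {\rm GeV}\sim 100\ {\rm GeV}, \ee so a weak scale $\mu$ term emerges directly from the product of the intermediate scale vev and the tiny Yukawa coupling.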
\begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccc} multiplet & $H_u$ & $H_d$ & $\phi$ & $N$ \\ \hline PQ charge & $-1$ & $-1$ & $+2$ & $-1$ \\ \hline \end{tabular} \caption{PQ charge assignments for various superfields of the EWK model. } \label{tab:king} \end{center} \end{table} \subsubsection{Global symmetries and gravity} \label{sssec:grav} It is well known that gravitational effects violate global symmetries, as has been considered via black hole ``no hair'' theorems~\cite{nohair} and wormhole effects~\cite{wormhole}. In such cases, it has been questioned whether the PQ mechanism can be realistic once one includes gravity or embeds the SUSY PQ theory into a UV complete string framework~\cite{suss,km_r,dob}. Indeed, Kamionkowski and March-Russell~\cite{km_r} (KMR) considered the effect on the axion potential of gravitational operators in the scalar potential such as \be V(\phi )\ni \frac{g}{m_P^{2m+n-4}}|\phi|^{2m}\phi^n+h.c. +c \ee involving PQ charged fields $\phi$. In the case of $2m+n=5$, {\it i.e.} a term suppressed by a single power of $m_P$, these gravitational terms would displace the minimum of the PQ axion potential such that the QCD $CP$ violating term $G_{\mu\nu A}\tilde{G}_A^{\mu\nu}$ settles to a non-zero minimum, thus destroying the PQ solution to the strong CP problem. To maintain $\bar{\theta}\lesssim 10^{-10}$, KMR calculated that all gravitational operators contributing to the axion potential should be suppressed by at least $(1/m_P)^8$. This is indeed a formidable constraint! \begin{figure}[tbp] \centering \includegraphics[height=0.25\textheight]{figDiscrGlob07} \caption{Kim diagram~\cite{kn_de,kimrev} where the column represents an infinite sequence of Lagrangian terms obeying a gravity-safe discrete symmetry while the row represents an infinite sequence of terms obeying the global symmetry. The green-region terms are gravity-unsafe while the red-region terms violate the global symmetry.
The lavender terms are gravity-safe and obey the global symmetry. \label{fig:kim}} \end{figure} To avoid such terms, additional symmetries are required~\cite{kn3}. In string theory, it is known that discrete symmetries arising from gauge symmetries are gravity-safe, as are other discrete symmetries or $R$-symmetries arising from string compactification. In Fig. \ref{fig:kim} the Kim diagram is shown~\cite{kn_de,kimrev}. The red/lavender column denotes an infinite set of Lagrangian terms in the model under consideration which obey some exact, gravity-safe, discrete symmetry. Of this set of terms, the few lower order terms, denoted by the lavender region, obey an exact global symmetry, understood here to be the PQ symmetry whose breaking yields the QCD axion. The red-shaded terms obey the discrete symmetry but violate any global symmetry. The green/lavender row denotes the full, infinite set of global symmetry terms, of which the green-shaded terms are not gravity-safe. If the discrete symmetry is strong enough, then the gravity-unsafe terms will be sufficiently suppressed. The global PQ symmetry is thus expected to be only approximate, and the question is whether it is sufficiently protected so as to be gravity-safe; some additional gravity-safe symmetry is required to ensure the PQ mechanism is robust. \subsubsection{Gravity-safe symmetries: gauge symmetries or $R$-symmetries, continuous or discrete} \label{sssec:gra_rsym} Given that global symmetries are not gravity-safe (and hence not fundamental), it is common to turn to gauge symmetries as a means to forbid the $\mu$ term. Some models based on an extra local $U(1)^\prime$ were examined in Subsec. \ref{ssec:u1}.
Some problems with this approach emerge in that one has to suitably hide any new gauge bosons associated with the extra gauge symmetry, and one must also typically introduce (and hide) extra exotic matter which may be needed to ensure anomaly cancellation. In addition, such exotic matter may destroy the desirable feature of gauge coupling unification should the new exotica not appear in complete GUT multiplets. An alternative approach is to introduce {\it discrete} gauge symmetries~\cite{kn3,CL}. Such $\mathbb{Z}_M$ symmetries may emerge from a local $U(1)^\prime$ when a charge $M$ object (charged under the new $U(1)^\prime$) condenses at very high energy, leaving a discrete $\mathbb{Z}_M$ gauge symmetry in the low energy effective theory. Since the $\mathbb{Z}_M$ emerges from a local gauge theory, it remains gravity-safe. In Subsec. \ref{ssec:mbgw}, the MBGW model~\cite{bgw2}, which is based on a $\mathbb{Z}_{22}$ discrete gauge symmetry, is examined. The model is found to be anomaly-free under the $\mathbb{Z}_{22}$, which is used not only to forbid the $\mu$ term but also to generate the PQ symmetry needed to solve the strong CP problem. The lowest order PQ violating term allowed by the $\mathbb{Z}_{22}$ is sufficiently suppressed, so that PQ arises as an accidental approximate global symmetry, thereby rendering the model gravity-safe. The $\mathbb{Z}_{22}$ discrete gauge charges of the multiplets turn out not to be consistent with GUTs, which should be manifested at some level in the high energy theory. Also, the presence of a charge 22 object which condenses at some high energy scale may not be very plausible and might be inconsistent with the UV completion of the theory ({\it i.e.} lie in the swampland). Continuous or discrete $R$-symmetries offer a further choice for gravity-safe symmetries. A solution using a continuous $U(1)_R$ symmetry was examined in Subsec. \ref{ssec:Rsym}.\footnote{See also Ref.
\cite{Choi:2010xf}.} In the interest of minimality, it is noted that continuous $R$ symmetries are not consistent with the particle content of just the MSSM~\cite{dreiner}. Then it is also of interest to examine the possibility of discrete remnant $R$-symmetries $\mathbb{Z}_N^R$ which arise upon compactification of the full Lorentz symmetry of 10-$d$ string theories. $R$-symmetries are characterized by the fact that superspace co-ordinates $\theta$ carry non-trivial $R$-charge: in the simplest case, $Q_R(\theta )=+1$ so that $Q_R( d^2\theta ) =-2$. For the Lagrangian ${\cal L}\ni \int d^2\theta W$ to be invariant under $\mathbb{Z}_N^R$-symmetry, the superpotential $W$ must carry $Q_R(W)= 2 + $integer multiples of $N$. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|ccccc} multiplet & $\mathbb{Z}_{4}^R$ & $\mathbb{Z}_{6}^R$ & $\mathbb{Z}_{8}^R$ & $\mathbb{Z}_{12}^R$ & $\mathbb{Z}_{24}^R$ \\ \hline $H_u$ & 0 & 4 & 0 & 4 & 16 \\ $H_d$ & 0 & 0 & 4 & 0 & 12 \\ $Q$ & 1 & 5 & 1 & 5 & 5 \\ $U^c$ & 1 & 5 & 1 & 5 & 5 \\ $E^c$ & 1 & 5 & 1 & 5 & 5 \\ $L$ & 1 & 3 & 5 & 9 & 9 \\ $D^c$ & 1 & 3 & 5 & 9 & 9 \\ $N^c$ & 1 & 1 & 5 & 1 & 1 \\ \hline \end{tabular} \caption{Derived MSSM field $R$ charge assignments for various anomaly-free discrete $\mathbb{Z}_{N}^R$ symmetries which are consistent with $SU(5)$ or $SO(10)$ unification (from Lee {\it et al.} Ref.~\cite{lrrrssv2}). } \label{tab:R} \end{center} \end{table} These remnant discrete $R$-symmetries $\mathbb{Z}_N^R$-- if sufficiently strong-- can forbid lower order operators in powers of $1/m_P$ which would violate putative global symmetries such as PQ. Such a built-in mechanism from string theory may enable the PQ symmetry to be strong enough to support the axion solution to the strong CP problem. 
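As an illustration of this $R$-charge bookkeeping, consider the $\mathbb{Z}_4^R$ column of Table \ref{tab:R}, with $Q_R(\theta)=+1$ so that allowed superpotential terms must carry $Q_R=2$ modulo 4: \bea Q_R(H_uH_d)&=&0+0=0\ \ ({\rm forbidden}),\nonumber\\ Q_R(QH_uU^c)&=&1+0+1=2\ \ ({\rm allowed}),\nonumber\\ Q_R(LH_uLH_u)&=&1+0+1+0=2\ \ ({\rm allowed}), \eea so the $\mu$ term is forbidden while the up-type Yukawa couplings and the Weinberg neutrino mass operator survive.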
Since the $R$-symmetry is necessarily supersymmetric (it acts on superspace co-ordinates), this is another instance of how the implementation of the axion solution to the strong CP problem is enhanced and made more plausible by the presence of supersymmetry. However, not all possible $R$-symmetries are suitable candidates for a fundamental symmetry. Table \ref{tab:R} (as derived in Ref's~\cite{lrrrssv1,lrrrssv2}) shows the $R$-symmetries, along with the $R$-charges of the multiplets, which are consistent with either $SU(5)$ or $SO(10)$ unification, are anomaly-free (allowing for a Green-Schwarz term), forbid the $\mu$ term, forbid the $R$-parity violating and dimension-five proton decay operators, and hence can serve as a fundamental symmetry. In fact, the $\mathbb{Z}_N^R$ symmetries of Table \ref{tab:R} have been shown to be the {\it only} anomaly-free symmetries which allow for fermion masses and suppress the $\mu$ term while maintaining consistency with GUTs. As a bonus, they allow for neutrino masses while forbidding the $R$-parity violating and dangerous proton decay operators. Implementation of the discrete $R$-symmetries is only possible in extra-dimensional GUTs, making their implementation in string compactifications very natural~\cite{cfr}. \subsubsection{Natural Higgs-Flavor-Democracy (HFD) solution to $\mu$ problem} \label{sssec:hfd} In Ref.~\cite{jkim}, the $\mu$ problem is solved by introducing additional identical Higgs doublet superfields to those of the MSSM. The theory then contains a direct product of discrete interchange symmetries $S_{2}(H_u)\times S_{2}(H_d)$. This is {\it Higgs flavor democracy} (HFD). Besides solving the $\mu$ problem, this mechanism also gives rise to an approximate PQ symmetry and hence a light QCD axion, thereby solving the strong CP problem whilst avoiding the gravity spoliation problem. The HFD discrete symmetry can be found in several string theory models.
\textit{HFD:} One starts by introducing two pairs of Higgs doublets at the GUT scale $m_G$, namely \{$H_u^{(1)}$, $H_d^{(1)}$\} and \{$H_u^{(2)}$, $H_d^{(2)}$\}. However, the weak scale MSSM requires only one pair of Higgs doublets: \{$H_u$, $H_d$\}. If, at the GUT scale, the two pairs of Higgs doublets $H_u$ = \{$H_u^{(1)}$, $H_u^{(2)}$\} and $H_d$ = \{$H_d^{(1)}$, $H_d^{(2)}$\} are indistinguishable, then there must exist the permutation symmetries $S_2(H_u)\times S_2(H_d)$. Then the higgsino mass matrix has the democratic form \[ \left( \begin{array}{cc} $m_G$/2 & $m_G$/2 \\ $m_G$/2 & $m_G$/2 \\ \end{array} \right) . \] The mass eigenvalues are $m_G$ and 0. Hence, the Higgs pair of the weak scale MSSM emerges as massless. Still, the MSSM construction requires a massive Higgs pair at the weak scale with mass value $\mu$. To fulfill this criterion, the HFD must be broken, and this mechanism results in $\mu\approx\mathcal{O}$(TeV). \textit{Generation of $\mu$:} The minimal K\"ahler potential is considered as $K = \Phi_i \Phi_i^{\dagger}$ where $\Phi_i$ ($i=1,2$) is a doublet under the gauge group, such as the Higgs superfield, and $X_i$ and $\bar{X_i}$ ($i=1,2$) are singlets under the gauge group. Both $\Phi_i$ and $X_i$ and the corresponding barred fields obey the $S_2\times S_2$ symmetry. $X^{(0)}$ and $\bar{X}^{(0)}$ are SM singlet fields containing a very light QCD axion for $10^9$ GeV $\leq$ $v_{PQ}$ $\leq$ $10^{12}$ GeV. With this construct, the $S_2(L) \times S_2(R)$ symmetric nonrenormalizable term is: \begin{equation} W^{(nonrenormalizable)} = \sum_{i,j=1,2} \Bigg( \frac{X^{(i)}\bar{X}^{(j)}}{m_P} \Bigg) H_u^{(i)} H_d^{(j)} + \sum_{ij} \sum_{kl} \Bigg( \frac{X^{(i)}\bar{X}^{(j)}}{m_P} \Bigg) H_u^{(k)} H_d^{(l)} \label{eq:HFD2} \end{equation} With the HFD breaking minimum at $\langle X_1 \rangle$ = $\langle \bar{X_1} \rangle$ = $v_{PQ}$ and $\langle X_2 \rangle$ = $\langle \bar{X_2} \rangle$ = 0, Eq.
\eqref{eq:HFD2} becomes \begin{equation} W^{(nonrenormalizable)} = \frac{\lambda_\mu v_{PQ}^2}{2m_P}(H_u^{(0)}+H_u^{(M_G)})(H_d^{(0)}+H_d^{(M_G)}) \label{eq:HFD3} \end{equation} This choice of HFD breaking minimum occurs spontaneously. Thus we obtain $\mu$ = $\frac{\lambda_\mu v_{PQ}^2}{2m_P}$. With $10^{10}$ GeV $\leq$ $v_{PQ}$ $\leq$ $10^{12}$ GeV and $\lambda_\mu$ $\approx$ $\mathcal{O}$(1), we obtain $\mu$ $\approx$ $\mathcal{O}(0.1-10^3 \text{ TeV})$. The Little Hierarchy can be accommodated for the lower range of $v_{PQ}$ or if $\lambda_\mu <1$. \textit{Light QCD axion:} Integrating out the heavy fields in Eq. \eqref{eq:HFD3}, one obtains \begin{equation} W = \frac{\lambda_\mu X^{(0)} \bar{X}^{(0)}}{2m_P}H_u^{(0)}H_d^{(0)} . \label{eq:HFD4} \end{equation} The PQ charges of the Higgs multiplets are obtained from their interaction with the quarks, and the PQ charges of $X^{(0)}$ and $\bar{X}^{(0)}$ are defined by Eq. \eqref{eq:HFD4}. Thus, a term $m_{3/2}\frac{\lambda^2}{4m_P^2}\frac{1}{M_G}H_uH_d(XX^c)^2$ is obtained which violates PQ and hence adds a tiny correction to $\mu$. Here, $M_G$ is the GUT scale higgsino mass. Hence, the PQ symmetry emerges as an approximate symmetry, thereby giving rise to a light QCD axion which does not suffer from the gravity-spoliation problem. \subsubsection{Radiative PQ breaking from SUSY breaking} \label{ssec:radpq} The above models are particularly compelling in that they include supersymmetry, which solves the gauge hierarchy problem, but also include the axion solution to the strong CP problem of QCD. In addition, they allow for the required Little Hierarchy of $\mu\ll m_{soft}$. A drawback to the KN model is that it inputs the PQ scale ``by hand'' via the superpotential Eq.~\eqref{eq:knW}. It is desirable if the PQ scale can be generated via some mechanism; furthermore, the emergence of three {\it intermediate mass scales} in nature-- the hidden sector SUSY breaking scale, the PQ scale and the Majorana neutrino scale-- begs for some common origin.
A model which accomplishes this was first proposed by Murayama, Suzuki and Yanagida (MSY)~\cite{msy}. In radiative PQ breaking models, the MSSM superpotential is \be W_{MSSM}=\sum_{i,j=1}^{3}\left[ ({\bf f}_u )_{ij}Q_iH_u U_j^c+ ({\bf f}_d )_{ij}Q_iH_d D_j^c+({\bf f}_e )_{ij}L_iH_d E_j^c+({\bf f}_{\nu} )_{ij} L_iH_u N_j^c \right] \ee where we explicitly include the right hand neutrino superfields $N_i$ and the generation indices $i,j$ run from $1-3$. To this, we add a PQ superpotential containing new PQ-charged fields $X$ and $Y$ of the form \be W_{PQ}\ni \frac{1}{2}h_{ij}XN_i^cN_j^c+\frac{f}{m_P}X^3Y+W_{\mu} \label{eq:Wmsy} \ee and where \be W_{\mu}^{MSY} =\frac{g_{MSY}}{m_P}XYH_uH_d , \label{eq:msy} \ee where the PQ charges $Q_{PQ}(matter)=1/2$, $Q_{PQ}(Higgs)=-1$, $Q_{PQ}(X)=-1$ and $Q_{PQ}(Y)=3$. Along with the MSY superpotential terms, we include the corresponding soft SUSY breaking terms \bea V_{MSY}& \ni & m_X^2|\phi_X|^2+m_Y^2|\phi_Y|^2+m_{N_i}^2|\phi_{N_i}|^2\nonumber \\ &+& \left( \frac{1}{2}h_iA_i\phi_{N_i}^2\phi_X+\frac{f}{m_P}A_f\phi_X^3\phi_Y+ \frac{g_{MSY}}{m_P}A_gH_uH_d\phi_X\phi_Y +h.c.\right). \eea For simplicity, we assume a diagonal coupling $h_{ij}= h_i\delta_{ij}$. The model may be defined as applicable at the reduced Planck scale $m_P\simeq 2.4\times 10^{18}$ GeV and the corresponding Renormalization Group Equations (RGEs) can be found in Ref.~\cite{msy} at 1-loop and Ref.~\cite{radpq} at 2-loop order. Under RG evolution, the large Yukawa coupling(s) $h_i$ push the soft mass $m_X^2$ to negative values at some intermediate mass scale resulting in the radiatively-induced breakdown of PQ symmetry as a consequence of SUSY breaking. The scalar potential consists of the terms $V=V_F+V_D+V_{\rm soft}$. The Higgs field directions can be ignored since these develop vevs at much lower energy scales. Then the relevant part of the scalar potential is just \be V_F\ni \frac{|f|^2}{m_P^2}|\phi_X^3|^2+\frac{9|f|^2}{m_P^2}|\phi_X^2\phi_Y|^2 . 
\ee Augmenting this with $V_{\rm soft}$, we minimize $V$ at a scale $Q=v_{PQ}$ to find the vevs of $\phi_X$ and $\phi_Y$ ($v_X$ and $v_Y$): \bea 0&=& \frac{9|f|^2}{m_P^2}|v_X^2|^2v_Y +f^*\frac{A_f^*}{m_P}v_X^{*3}+m_Y^2v_Y \label{eq:minQ}\\ 0&=& \frac{3|f|^2}{m_P^2}|v_X^2|^2v_X+\frac{18|f|^2}{m_P^2}|v_X|^2|v_Y|^2v_X +3f^*\frac{A_f^*}{m_P}v_X^{*2}v_Y^*+m_X^2v_X . \label{eq:minP} \eea The first of these may be solved for $v_Y$. Substituting into the second, we find a polynomial for $v_X$ which may be solved numerically. The potential has two minima in the $v_X$-$v_Y$ plane, symmetrically located with respect to the origin. For practical purposes, we use the notation $v_X$=$|v_X|$ and $v_Y$=$|v_Y|$. The fields $\phi_X$ and $\phi_Y$ obtain vevs $v_X$ and $v_Y$ at the intermediate mass scale, taken here to be $v_{PQ}=\sqrt{v_X^2+9v_Y^2}$. The corresponding axion decay constant is given by $f_a = \sqrt{2}v_{PQ}$.\footnote{For axion interactions, the actual decay constant is $f_A\equiv f_a/N_{DW}$ where $N_{DW}$ is the domain wall number.} A DFSZ-like axion $a$ arises as the pseudo-Goldstone boson of spontaneous PQ breaking, thus solving the strong CP problem. A $\mu$ parameter, which is originally forbidden by the PQ symmetry, is generated with a value \be \mu_{\rm eff}=g_{MSY}\frac{v_Xv_Y}{m_P} \ee and a Majorana neutrino mass, also initially forbidden by the PQ symmetry, is generated at \be M_{N_i}=h_i|_{Q=v_X} v_X . \ee Since the $\mu$ term depends on an arbitrary coupling $g_{MSY}$, one may obtain any desired value of $\mu$ for particular $v_X$ and $v_Y$ vevs by suitably adjusting $g_{MSY}$. However, if the required values of $g_{MSY}$ are very different from unity, {\it i.e.} $g_{MSY}\gg 1$ or $g_{MSY}\ll 1$, we might need to introduce an additional physical scale to explain the $\mu$ term. To generate a value of $\mu =150$ GeV, the values of $g_{MSY}$ shown in Fig. \ref{fig:gMSY} are required, depending on the assumed values of $m_{3/2}$ and $h(m_P)$.
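For reference, the first minimization condition Eq.~\eqref{eq:minQ} is linear in $v_Y$ and may be solved in closed form: \be v_Y=-\frac{f^*A_f^*v_X^{*3}/m_P}{9|f|^2|v_X|^4/m_P^2+m_Y^2}, \ee which, upon substitution into Eq.~\eqref{eq:minP}, yields the polynomial in $v_X$ that is then solved numerically.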
The virtues of this model then include: \begin{itemize} \item it is supersymmetric, thus stabilizing the Higgs sector and allowing for a gauge hierarchy, \item it solves the strong CP problem via a DFSZ-like axion $a$, \item it presents a unified treatment of the three intermediate mass scales, where the PQ and Majorana neutrino scales arise as a consequence of SUSY breaking and \item it allows for a Little Hierarchy $\mu\ll m_{soft}$ for the case where $v_{PQ}< m_{hidden}$. \end{itemize} Detailed numerical calculations in the MSY model have been carried out in Ref.~\cite{radpq}. There, it is found that for generic $W_{\mu}^{MSY}$ couplings $g_{MSY}\sim 0.1-1$, a $\mu$ parameter $\mu\sim 100-200$ GeV can easily be generated from TeV-scale soft breaking terms. Furthermore, since the $\mu$ term sets the mass scale for the $W,Z,h$ boson masses and is itself determined by the PQ vevs $v_X$ and $v_Y$, the axion mass $m_a\simeq 0.48 f_{\pi}m_{\pi}/f_a=6.25\times 10^{-3}\ {\rm GeV}^2/f_a$ is related to the Higgs mass $m_h$ and the higgsino masses $m_{\widetilde\chi^{\pm}_1,\widetilde\chi^0_{1,2}}\sim \mu$. The required PQ charges for the MSY model are listed in Table \ref{tab:radpq}. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|ccc} multiplet & MSY & CCK & SPM \\ \hline $H_u$ & $-1$ & $-1$ & $-1$ \\ $H_d$ & $-1$ & $-1$ & $-1$ \\ $Q$ & $+1/2$ & $3/2$ & $+1/2$ \\ $L$ & $+1/2$ & $3/2$ & $+5/6$ \\ $U^c$ & $+1/2$ & $-1/2$ & $+1/2$ \\ $D^c$ & $+1/2$ & $-1/2$ & $+1/2$ \\ $E^c$ & $+1/2$ & $-1/2$ & $+1/6$ \\ $N^c$ & $+1/2$ & $-1/2$ & $+1/6$ \\ $X$ & $-1$ & $+1$ & $-1/3$ \\ $Y$ & $+3$ & $-3$ & $+1$ \\ \hline \end{tabular} \caption{PQ charge assignments for various superfields of the MSY, CCK and SPM models of radiative PQ breaking.
} \label{tab:radpq} \end{center} \end{table} \begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{gMSYx} \caption{Value of $g$ which is needed in the MSY model to generate $\mu =150$ GeV from a gravitino mass $m_{3/2}$ and a GUT coupling $h$. We also show some contours of $v_{PQ}$. \label{fig:gMSY}} \end{figure} \begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{gCCKx} \caption{Value of $g$ which is needed in the CCK model to generate $\mu =150$ GeV from a gravitino mass $m_{3/2}$ and a GUT coupling $h$. We also show some contours of $v_{PQ}$. \label{fig:gCCK}} \end{figure} \begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{gSPMx} \caption{Value of $g$ which is needed in the SPM model to generate $\mu =150$ GeV from a gravitino mass $m_{3/2}$ and a GUT coupling $h$. We also show some contours of $v_{PQ}$. \label{fig:gSPM}} \end{figure} Other closely related models make different choices for which fields enter into $W_{\mu}$. We can also have: \bea W_{\mu}^{CCK} &=&\frac{g_{CCK}}{m_P}X^2 H_uH_d\ \ \ \ {\rm or}\ \\ W_{\mu}^{SPM} &=&\frac{g_{SPM}}{m_P}Y^2 H_uH_d . \eea The above three possibilities for $W_{\mu}$ correspond to Ref's~\cite{msy} (MSY), \cite{cck} (CCK) and \cite{spm} (SPM). The corresponding PQ charges for the three radiative PQ breaking models are listed in Table \ref{tab:radpq}. We also list in Fig's \ref{fig:gCCK} and \ref{fig:gSPM} the values of $g_{CCK}$ and $g_{SPM}$ which are needed to generate a value of $\mu\simeq 150$ GeV. For a given value of $h(m_P)$ and $m_{3/2}$, typically $g_{CCK}< g_{MSY}<g_{SPM}$. The MSY model has the interesting feature that the PQ charge assignments are consistent with $SO(10)$ unification. We also remark that all three models can easily generate weak scale values of $\mu$ from multi-TeV values of $m_{3/2}$: {\it i.e.} $\mu\ll m_{3/2}$, so that a Little Hierarchy is naturally generated.
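To attach numbers to the axion mass relation quoted above (the value of $f_a$ here is purely illustrative): \be m_a\simeq \frac{6.25\times 10^{-3}\ {\rm GeV}^2}{f_a}\simeq 6.25\times 10^{-14}\ {\rm GeV}\simeq 62\ \mu{\rm eV}\ \ {\rm for}\ f_a=10^{11}\ {\rm GeV}, \ee illustrating how the PQ vevs, which also fix $\mu$, would point to a definite range of axion masses.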
{\it Gravity safety of radiative PQ breaking models:} An important issue for the radiative PQ breaking models is whether the required PQ symmetry is actually gravity-safe and whether it may emerge from any of the aforementioned $\mathbb{Z}_N^R$ symmetries. We have examined whether or not the three radiative PQ breaking models of Table \ref{tab:radpq} (CCK, MSY and SPM) can be derived from any of the more fundamental $\mathbb{Z}_N^R$ symmetries in Table \ref{tab:R}~\cite{grav}. In almost all cases, the $hXN^cN^c$ operator is disallowed: then there is no large Yukawa coupling present to drive the PQ soft term $m_X^2$ negative, so the PQ symmetry is not broken. And since the PQ symmetry does not allow for a Majorana mass term $\frac{1}{2}M_NN^cN^c$, no see-saw scale can be developed. One exception is the MSY model under the $\mathbb{Z}_4^R$ symmetry with charge assignments $Q_R(X)=0$ and $Q_R(Y)=2$: then a $YH_uH_d$ term is allowed which would generate a $\mu$ term of order the intermediate scale. Also, without considering any specific $R$-charges for the fields $X$ and $Y$, we can see that the $R$-charges for $X$ and $Y$ must be such that the term $XYH_uH_d$ is allowed; since the $R$-charges of $H_u$ and $H_d$ are 0, a term $MXY$ would then always be allowed, and this term breaks PQ and is not gravity-safe. A second exception is SPM under the $\mathbb{Z}_6^R$ symmetry with charges $Q_R(X)=0$ and $Q_R(Y)=2$: then operators like $Y^4/m_P$ are allowed which break PQ but are not sufficiently suppressed so as to be gravity-safe. Furthermore, we can see in this model that the $R$-charge of $Y$ is such that terms like $M^2Y$, which break PQ, are always allowed but are not gravity-safe. Thus, we conclude that while the radiative PQ breaking models are indeed compelling and can address all three intermediate scales in a unified framework, the required PQ symmetry does not appear to be gravity-safe.
\subsubsection{CCL model from gauged $U(1)_R$ symmetry} \label{ssec:CCL} In the model of Choi, Chun and Lee~\cite{Choi:2010xf} (CCL), the $\mu$ term is generated in a manner similar to the SPM model~\cite{spm}, but with the difference that the fundamental symmetry is a gauged $U(1)_R$ symmetry from which the PQ symmetry arises as an accidental, approximate symmetry. The superpotential for CCL is \bea W_{CCL}&=&f_uQH_uU^c+f_dQH_dD^c+f_eLH_dE^c+f_{\nu}LH_uN^c\\ &+&\lambda_{\mu}\frac{Y^2H_uH_d}{m_P}+\kappa X^3Y/m_P+\lambda_NX^nN^cN^c/2m_P^{n-1} , \label{eq:WCCL} \eea with $U(1)_R$ and $PQ$ charges for the $n=2$ case given in Table~\ref{tab:CCL}. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $L_i$ & $U_i^c$ & $D_i^c$ & $E_i^c$ & $N_i^c$ & X & Y \\ \hline $U(1)_R$ charge & $4$ & $4$ & $-\frac{4}{3}$ & $-\frac{4}{3}$ & $-\frac{2}{3}$& $-\frac{2}{3}$ & $-\frac{2}{3}$ & $-\frac{2}{3}$ & $\frac{5}{3}$ & $-3$ \\ \hline PQ charge & $3$ & $3$ & $-3$ & $-2$ & $0$ & $0$ & $-1$ & $-1$ & $1$ & $-3$ \\ \hline \end{tabular} \caption{$U(1)_R$ and PQ charge assignments for various superfields of the CCL model for $n=2$. } \label{tab:CCL} \end{center} \end{table} The singlets $X$ and $Y$ get their VEVs at the intermediate scale when the PQ symmetry is broken via a large (relative to $m_{3/2}$) negative trilinear soft term contribution to the scalar potential, thereby giving rise to $\mu\sim m_{soft}$. The $U(1)_R$ gauge boson has a mass of order the compactification scale so the low energy theory is that of the MSSM. Because the fundamental symmetry of CCL is a gauged $U(1)_R$ symmetry, the phenomenology of this model is dictated by a hierarchy of soft terms $m_{1/2}\gg m_{scalars}>m_{3/2}$ ($m_{1/2}$: gaugino mass). 
Scalar soft masses are fixed in terms of $U(1)_R$ $D$-terms and typically lead to large negative $m_{H_u}^2$ at the weak scale, which then requires a large, unnatural $\mu$ term that would violate the $\mu\ll m_{soft}$ Little Hierarchy. The gravitino or the RH sneutrino turns out to be the LSP and hence ends up as the cold dark matter candidate: if the neutrino is Majorana type then the gravitino is the LSP, while if the neutrino is Dirac type then the RH sneutrino is the LSP. \subsubsection{MBGW model of PQ breaking from SUSY breaking} \label{ssec:mbgw} The Martin-Babu-Gogoladze-Wang (MBGW) model~\cite{spm,bgw2} begins with a superpotential \bea W&=&f_uQH_uU^c+f_dQH_dD^c+f_eLH_dE^c+f_{\nu}LH_uN^c\\ &&+\frac{1}{2}M_RN^cN^c+\lambda_{\mu}\frac{X^2H_uH_d}{m_P}+\lambda_2\frac{(XY)^2}{m_P} \label{eq:mbgw} \eea which is augmented by soft SUSY breaking terms \be V_{soft}\ni m_X^2|\phi_X|^2+m_Y^2|\phi_Y|^2+\left(\lambda_2C\frac{(\phi_X\phi_Y)^2}{m_P} +h.c.\right) \ee so that the scalar potential is \be V_{MBGW}=V_F+V_{soft} \ee with \be V_F\ni 4\frac{\lambda_2^2}{m_P^2}|\phi_X\phi_Y|^2\left( |\phi_X|^2+|\phi_Y|^2\right). \ee The scalar potential admits non-zero minima in the fields $\phi_X$ and $\phi_Y$ for sufficiently large negative $C$. The scalar potential for the case of $m_X=m_Y\equiv m_s=10^4$ GeV and $C=-3.5\times 10^4$ GeV is shown in Fig. \ref{fig:Vbgw}. \begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{vxy_one.png} \caption{Scalar potential $V_{MBGW}$ versus $\phi_X$ and $\phi_Y$ for $m_s=10^4$ GeV and $C=-3.5\times 10^4$ GeV. \label{fig:Vbgw}} \end{figure} It is found in Ref.~\cite{bgw2} that the model admits a remnant $\mathbb{Z}_{22}$ discrete gauge symmetry which is anomaly free up to Green-Schwarz terms and forbids lower order operators which would lead to gravitational instability. Besides the terms in Eq.~\eqref{eq:mbgw}, the lowest order PQ-violating term in the superpotential is $\frac{Y^{11}}{m_P^8}$: thus this model is gravity safe according to the KMR criterion. 
An approximate PQ symmetry emerges as an accidental consequence of the discrete $\mathbb{Z}_{22}$ gauge symmetry. The $\mathbb{Z}_{22}$ and PQ charges are listed in Table \ref{tab:bgw}. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $L_i$ & $U_i^c$ & $D_i^c$ & $E_i^c$ & $N_i^c$ & $X$ & $Y$ \\ \hline $\mathbb{Z}_{22}$ charge & $22$ & $18$ & $3$ & $11$ & $19$ & $1$ & $15$ & $11$ & $13$ & $20$ \\ \hline PQ charge & $-1$ & $-1$ & $+1$ & $+1$ & $0$ & $0$ & $0$ & $0$ & $+1$ & $-1$ \\ \hline \end{tabular} \caption{$\mathbb{Z}_{22}$ and PQ charge assignments for various superfields of the MBGW model. } \label{tab:bgw} \end{center} \end{table} By taking $\langle \phi_X\rangle\equiv v_X$ and $\langle\phi_Y\rangle \equiv v_Y$, the scalar potential minimization conditions read \bea 0 &=& 2\frac{\lambda_2}{m_P}C^*v_Xv_Y^2+m_X^2v_X+4\frac{\lambda_2^2}{m_P^2}\left( v_Xv_Y^2(v_X^2+v_Y^2)+v_X^3v_Y^2\right)\\ 0 &=& 2\frac{\lambda_2}{m_P}C^*v_X^2v_Y+m_Y^2v_Y+4\frac{\lambda_2^2}{m_P^2}\left( v_X^2v_Y(v_X^2+v_Y^2)+v_X^2v_Y^3\right). \eea A simplifying assumption of $m_X^2=m_Y^2\equiv m_s^2$ and $v_X=v_Y\equiv v_s$ leads to \be v_s^2=\frac{-C\pm\sqrt{C^2-12m_s^2}}{12\lambda_2}m_P \ee so that the $\mu$ term is \be \mu_{MBGW}\simeq\lambda_\mu\frac{v_s^2}{m_P} \ee with $v_s^2\simeq \frac{|C|}{12\lambda_2}m_P$. Taking $m_s\simeq m_{3/2}=10^4$ GeV with $\mu =150$ GeV and $C=-3.5\times 10^4$ GeV leads to $v_s\simeq v_{PQ}\simeq 10^{11}$ GeV for $\lambda_2=0.7$ and $\lambda_{\mu}\simeq 0.036$. Thus, the MBGW model admits a Little Hierarchy $\mu\ll m_{3/2}$ whilst generating the PQ scale $v_{PQ}\sim 10^{11}$ GeV (which generates mainly axion dark matter with a smaller portion of higgsino-like WIMPs~\cite{bbc,dfsz2,axpaper}). The allowed range of MBGW model parameter space is shown in Fig. \ref{fig:bgw} where we show contours of $\lambda_{\mu}$ values which lead to $\mu =150$ GeV. 
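As a numerical cross-check of the MBGW benchmark point quoted above, the short script below evaluates $v_s$ and $\mu$ (a sketch, not part of the original analysis; it assumes the reduced Planck mass $m_P\simeq 2.4\times 10^{18}$ GeV, although $\mu$ itself is independent of this choice since $m_P$ cancels in $\lambda_\mu v_s^2/m_P$):

```python
import math

# MBGW benchmark point from the text; the reduced Planck mass value
# m_P ~ 2.4e18 GeV is an assumption of this sketch.
m_P = 2.4e18       # GeV
m_s = 1.0e4        # GeV, soft mass m_s ~ m_{3/2}
C = -3.5e4         # GeV, trilinear soft parameter
lam2 = 0.7
lam_mu = 0.036

# Exact roots of the minimization condition:
#   v_s^2 = (-C +/- sqrt(C^2 - 12 m_s^2)) m_P / (12 lam2)
disc = C**2 - 12.0 * m_s**2
vs2_plus = (-C + math.sqrt(disc)) / (12.0 * lam2) * m_P
vs2_minus = (-C - math.sqrt(disc)) / (12.0 * lam2) * m_P

# Leading approximation v_s^2 ~ |C| m_P / (12 lam2) used in the text
vs2_approx = abs(C) / (12.0 * lam2) * m_P
v_s = math.sqrt(vs2_approx)
mu_approx = lam_mu * vs2_approx / m_P  # m_P cancels: mu = lam_mu |C| / (12 lam2)

print(f"v_s ~ {v_s:.2e} GeV, mu ~ {mu_approx:.0f} GeV")
# v_s ~ 1.00e+11 GeV, mu ~ 150 GeV
```

The two exact roots bracket this estimate ($\mu\simeq 129$ and $171$ GeV for the same $\lambda_\mu$), consistent with the quoted $v_{PQ}\simeq 10^{11}$ GeV.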
\begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{ms_c_3.png} \caption{Value of $\lambda_{\mu}$ required for $\mu =150$ GeV in the $m_{3/2}$ vs. $-C$ plane of the MBGW model. \label{fig:bgw}} \end{figure} As mentioned previously, the MBGW model appears gravity-safe under the $\mathbb{Z}_{22}$ discrete gauge symmetry. The discrete gauge symmetry $\mathbb{Z}_M$ might arise if a charge $Me$ field condenses and is integrated out of the low energy theory while charge $e$ fields survive (see Krauss and Wilczek, Ref.~\cite{grav}). While the ensuing low energy theory should be gravity safe, for the case at hand one might wonder at the plausibility of a condensation of a charge 22 object and whether it might occupy the so-called {\it swampland}~\cite{swamp} of theories not consistent with a UV completion in string theory. In addition, the charge assignments~\cite{bgw2} are not consistent with $SU(5)$ or $SO(10)$ grand unification, which may be expected at some level in a more fundamental theory. Alternatively, it is worth checking whether MBGW is gravity-safe under any of the discrete $R$-symmetries listed in Table \ref{tab:R}. To check gravity safety, we note that additional superpotential terms of the form $\lambda_3X^pY^q$ may be allowed for given $\mathbb{Z}_N^R$ charge assignments and powers $p$ and $q$. Such terms will typically break the PQ symmetry and render the model not gravity safe if the scalar potential $V(\phi )$ includes terms which are not suppressed by at least eight powers of $1/m_P$~\cite{km_r}. The largest dangerous scalar potential terms develop from interference between $\lambda_2 (XY)^2/m_P$ and $\lambda_3 X^pY^q/m_P^{p+q-3}$ when constructing the scalar potential $V_F=\sum_{\hat{\phi}}|\partial W/\partial\hat{\phi}|_{\hat{\phi}\rightarrow\phi}^2$ (here, the $\hat{\phi}$ label chiral superfields with $\phi$ being their leading components). 
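The power counting behind this criterion can be made explicit (a brief sketch, consistent with the hybrid-model operators discussed below). Taking the couplings real, the dangerous cross term in $|\partial W/\partial\hat X|^2$ between $\lambda_2(XY)^2/m_P$ and $\lambda_3X^pY^q/m_P^{p+q-3}$ scales as
\be
V_F\supset \frac{4p\,\lambda_2\lambda_3}{m_P^{p+q-2}}\,{\rm Re}\left(\bar\phi_X\bar\phi_Y^2\,\phi_X^{p-1}\phi_Y^{q}\right)\sim\frac{|\phi |^{p+q+2}}{m_P^{p+q-2}}\,,
\ee
so PQ-violating contributions to the scalar potential are suppressed by $m_P^{-(p+q-2)}$; the requirement of suppression by at least eight powers of $1/m_P$ then translates into $p+q\geq 10$.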
We find the MBGW model to be not gravity safe under any of the $\mathbb{Z}_N^R$ discrete $R$-symmetries of Table \ref{tab:R}. \subsection{Hybrid models of PQ breaking from SUSY breaking} \label{ssec:hybrid} In this Subsection, we review three models which combine approaches where PQ symmetry breaking is triggered by SUSY breaking and where a gravity-safe accidental approximate PQ symmetry might emerge from a discrete $R$-symmetry. \begin{itemize} \item These models are obtained by adopting a hybrid approach~\cite{grav} between the radiative breaking models and the MBGW model. \item In the radiative breaking models, a Majorana neutrino scale is generated as the PQ field $X$ develops a VEV. However, in the hybrid models, the Majorana mass term $MN^cN^c/2$ is allowed but is not generated through PQ breaking-- similar to the MBGW model. \item In the radiative breaking models, intermediate PQ and Majorana neutrino scales develop as a consequence of intermediate scale SUSY breaking and the running of a soft SUSY breaking mass term to negative squared values. In contrast, in the MBGW model and in the hybrid models, PQ breaking is triggered by large negative soft terms instead of radiative breaking. \end{itemize} The three hybrid models are described below: \subsubsection{Hybrid CCK Model} The superpotential for the hybrid CCK model (hyCCK) is given by~\cite{grav}: \bea W_{hyCCK}&\ni &f_uQH_uU^c+f_dQH_d D^c+f_{\ell}LH_dE^c+f_{\nu}LH_uN^c+ M_N N^cN^c/2\nonumber \\ & +& fX^3Y/m_P+\lambda_\mu X^2 H_uH_d/m_P . \eea Thus, when the PQ symmetry breaks, the $\mu$ parameter is obtained as \bea \mu_{\rm eff} = \lambda_\mu \langle X\rangle^2/m_P . \eea We have checked that the hyCCK model is not gravity-safe under the $\mathbb{Z}_N^R$ symmetries for $N=4,6,8$ or 12. However, it does turn out to be gravity-safe under $\mathbb{Z}_{24}^R$ symmetry with the $\mathbb{Z}_{24}^R$ charge and PQ charge assignments as shown in Table \ref{tab:hcck}. 
\begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $L_i$ & $U_i^c$ & $D_i^c$ & $E_i^c$ & $N_i^c$ & X & Y \\ \hline $\mathbb{Z}_{24}^R$ charge & 16 & 12 & 5 & 9 & 5 & 9 & 5 & 1 & -1 & 5 \\ \hline PQ charge & -1 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & -3 \\ \hline \end{tabular} \caption{ $\mathbb{Z}_{24}^R$ and PQ charge assignments for various superfields of the hyCCK model. } \label{tab:hcck} \end{center} \end{table} The scalar potential for hyCCK is found to be \bea V = [fA_f\frac{\phi_X^3\phi_Y}{m_P}+h.c.] + m_X^2|\phi_X|^2 + m_Y^2|\phi_Y|^2 + \frac{f^2}{m_P^2}[9|\phi_X|^4|\phi_Y|^2 + |\phi_X|^6] \eea and is shown in Fig. \ref{fig:Vgspq} vs. scalar field values $\phi_X$ and $\phi_Y$. For large negative values of the soft term $A_f$, a $\mathbb{Z}_{24}^R$ and $PQ$ breaking minimum develops. \begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{vxy_phix_phiy_hybrid_cck.png} \caption{Scalar potential $V_{hyCCK}$ versus $\phi_X$ and $\phi_Y$ for $m_X=m_Y\equiv m_{3/2}=10$ TeV, $f=1$ and $A_f=-35.5$ TeV. \label{fig:Vgspq}} \end{figure} The lowest order PQ violating terms in the superpotential are $X^8Y^2/m_P^7$, $X^4Y^6/m_P^7$ and $Y^{10}/m_P^7$, which implies that the lowest order PQ breaking term in the scalar potential is suppressed by $1/m_P^8$. Therefore, this model satisfies the KMR condition for being gravity-safe. The allowed range of hyCCK model parameter space is shown in Fig. \ref{fig:GSPQ} where we show contours of $\lambda_{\mu}$ values which lead to $\mu =200$ GeV in the $m_{3/2}$ vs. $-A_f$ plane for $f=1$. We also show several representative contours of $v_{PQ}$ values. Values of $\lambda_{\mu}\sim 0.015-0.2$ are generally sufficient for a natural $\mu$ term and are easily consistent with soft mass $m_{soft}\sim m_{3/2}\sim 2-30$ TeV as indicated by LHC searches. 
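For completeness, the quartic and sextic pieces of the hyCCK scalar potential above follow directly from the single superpotential term $fX^3Y/m_P$ via $V_F=\sum_{\hat{\phi}}|\partial W/\partial\hat{\phi}|^2$:
\be
\left|\frac{\partial W}{\partial \hat X}\right|^2+\left|\frac{\partial W}{\partial \hat Y}\right|^2=\left|\frac{3f\phi_X^2\phi_Y}{m_P}\right|^2+\left|\frac{f\phi_X^3}{m_P}\right|^2=\frac{f^2}{m_P^2}\left(9|\phi_X|^4|\phi_Y|^2+|\phi_X|^6\right)\,,
\ee
while the trilinear $A_f$ term and the mass terms $m_X^2$, $m_Y^2$ arise from soft SUSY breaking.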
We also note that for $m_{3/2}\sim 5-20$ TeV, then $v_{PQ}\sim 10^{11}$ GeV which corresponds to the sweet spot for axion cold dark matter. \begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{lamda_mu_vpq_hybrid_cck_mu_200.png} \caption{Representative values of $\lambda_{\mu}$ required for $\mu =200$ GeV in the $m_{3/2}$ vs. $-A_f$ plane of the hyCCK model for $f=1$. We also show several contours of $v_{PQ}$. \label{fig:GSPQ}} \end{figure} \subsubsection{Hybrid SPM Model} The superpotential for the hybrid SPM model (hySPM) is given by~\cite{Choi:2010xf,grav} \bea W_{hySPM}&\ni &f_uQH_uU^c+f_dQH_d D^c+f_{\ell}LH_dE^c+f_{\nu}LH_uN^c+M_NN^cN^c/2\nonumber \\ & +& fX^3Y/m_P+\lambda_\mu Y^2 H_uH_d/m_P . \eea In this case, when PQ symmetry breaks, the $\mu$ parameter is generated to be \bea \mu_{\rm eff} = \lambda_\mu \langle Y\rangle^2/m_P . \eea This model also turns out to be not gravity-safe under $\mathbb{Z}_N^R$ symmetries for $N=4,6,8$ and 12 but is gravity-safe for $\mathbb{Z}_{24}^R$ symmetry. The gravity-safe $\mathbb{Z}_{24}^R$ charge and PQ charge assignments are shown in Table \ref{tab:hspm}. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{c|cccccccccc} multiplet & $H_u$ & $H_d$ & $Q_i$ & $L_i$ & $U_i^c$ & $D_i^c$ & $E_i^c$ & $N_i^c$ & X & Y \\ \hline $\mathbb{Z}_{24}^R$ charge & 16 & 12 & 5 & 9 & 5 & 9 & 5 & 1 & 5 & -13 \\ \hline PQ charge & -1 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & -1/3 & 1 \\ \hline \end{tabular} \caption{ $\mathbb{Z}_{24}^R$ and PQ charge assignments for various superfields of the hySPM model. 
} \label{tab:hspm} \end{center} \end{table} The scalar potential is obtained similarly to that in the hyCCK model, with the only difference being that now the lowest order PQ violating terms in the superpotential are $Y^8X^2/m_P^7$, $Y^4X^6/m_P^7$ and $X^{10}/m_P^7$, which means that the lowest order PQ breaking terms in the scalar potential are suppressed by $1/m_P^8$ so that the hySPM model also satisfies the KMR condition for being gravity-safe. The allowed range of hySPM model parameter space is shown in Fig. \ref{fig:spm} where we show contours of $\lambda_{\mu}$ values which lead to $\mu =150$ GeV in the $m_{3/2}$ vs. $-A_f$ plane for $f=1$. We also show several representative contours of $v_{PQ}$ values. \begin{figure}[tbp] \centering \includegraphics[height=0.4\textheight]{lamda_mu_vpq_hybrid_spm.png} \caption{Representative values of $\lambda_{\mu}$ required for $\mu =150$ GeV in the $m_{3/2}$ vs. $-A_f$ plane of the hySPM model for $f=1$. We also show several contours of $v_{PQ}$. \label{fig:spm}} \end{figure} \subsubsection{Hybrid MSY model} The superpotential in the hybrid MSY model (hyMSY) is given by~\cite{grav}: \bea W_{hyMSY}&\ni &f_uQH_uU^c+f_dQH_d D^c+f_{\ell}LH_dE^c+f_{\nu}LH_uN^c+M_NN^cN^c/2\nonumber \\ & +& fX^3Y/m_P+\lambda_\mu XY H_uH_d/m_P . \eea However, we have checked that the hyMSY model does not satisfy the KMR condition for being gravity-safe under any of the $R$-symmetries listed in Table \ref{tab:R}. \section{Are the various $\mu$ solutions experimentally distinguishable?} \label{sec:exp} An important question arises: are the various solutions to the SUSY $\mu$ problem experimentally testable and experimentally distinguishable from one another? Obviously, one important consequence is the existence of weak scale SUSY (WSS) so that if WSS is disproved, then the whole discussion on the origin of the $\mu$ term is moot. The main {\it raison d'\^etre} for SUSY is to stabilize the weak scale in the presence of quantum corrections. 
In addition, WSS provides a natural mechanism for electroweak symmetry breaking. Naturalness means no severe fine-tuning of the parameters involved in determining the magnitude of the weak scale, which we take to mean no fine-tuning in Eq. \eqref{eq:mzs}. Upper limits have been derived on sparticle masses within the context of unified SUSY models with no fine-tuning~\cite{rns2,upper,Baer:2018hpb} ({\it i.e.} $\Delta_{EW}\lesssim 30$). These typically imply $m_{\tilde g}\lesssim 6$ TeV, $m_{\tilde t_1}\lesssim 3$ TeV and $|\mu |\lesssim 360$ GeV. To explore such high sparticle masses, about 15 ab$^{-1}$ of $pp$ collisions at $\sqrt{s}\gtrsim 27$ TeV is required for a hadron collider~\cite{Baer:2018hpb}, or $\sqrt{s}\gtrsim 720$ GeV is needed for an $e^+e^-$ collider~\cite{Baer:2014yta}. If no sparticles are seen at such colliders, then SUSY as we understand it would no longer be a viable hypothesis for stabilization of the weak scale. Some of the $\mu$ solutions are expected to give rise to only the MSSM as the weak scale effective theory. In this case, it may be difficult to distinguish, for instance, a GM solution from a CM solution. In the case of the G2MSSM solution, distinctive mass relations amongst sparticles are expected to occur which could support or disfavor such explanations~\cite{Acharya:2012tw}. In addition to weak scale SUSY, several models-- KN, CKN, EWK, HFD, the radiative PQ models (MSY, CCK, SPM), MBGW and the hybrid models-- predict a SUSY DFSZ axion. Recent searches for axions at axion haloscope experiments~\cite{admx} have reached the non-SUSY DFSZ coupling strengths for a narrow range of $m_a$ possibilities. However, the SUSY DFSZ axion-- by virtue of including higgsinos in the $a\gamma\gamma$ triangle vertex-- has a much smaller coupling~\cite{axpaper}. It is not clear whether present technology has the capability to probe such tiny $a\gamma\gamma$ couplings. 
In the event that a thorough search can be made for SUSY DFSZ axions over their allowed range of masses and coupling strengths, (non)observation of axions could rule out or verify this class of $\mu$ problem solutions. A related test could be the measurement of a diminished abundance of higgsino-like WIMPs, which would indicate whether additional dark matter particles such as axions are required. Several of the $\mu$ solutions also require additional distinctive particles. The NMSSM solution requires the presence of additional scalar and pseudoscalar Higgs bosons and a fifth neutralino arising from the NMSSM singlino. For many NMSSM parameter choices, some deviations in the $h$ boson coupling strengths are expected~\cite{nmssm,balazs}. The $U(1)^\prime$ $\mu$ solutions also include distinctive new particle predictions. The CDEEL model~\cite{cvetic} requires the presence of an additional weak scale $Z^\prime$ boson which could decay to higgsinos as well as SM particles~\cite{arvanitaki,cp}. For the HPT model~\cite{xt}, the $Z^\prime$ is expected to be far beyond any collider reach projections. Instead, for HPT, one expects bilinear RPV leading to distinctive collider signatures and altered expectations for dark matter. Also, in these models one may expect the presence of stable weak scale exotic hadrons or other exotica which arise from the requirement of anomaly cancellation. \section{Conclusions} \label{sec:conclude} In this paper, we have re-examined the SUSY $\mu$ problem with perspective gained from experimental results from LHC through Run 2 with 150 fb$^{-1}$ of data. The two parts to any SUSY $\mu$ solution are to 1. first forbid the $\mu$ term, perhaps via some symmetry, and then 2. regenerate it, perhaps via symmetry breaking. The new perspective from LHC and the naturalness issue is that $\mu$ should be generated of order $m_{weak}\sim m_{W,Z,h}\sim 100-300$ GeV whilst the soft SUSY breaking terms likely inhabit the multi-TeV regime. 
Thus, a Little Hierarchy (LH) should now be included in SUSY $\mu$ solutions where $|\mu |\ll m_{soft}$. This is different from pre-LHC expectations where solutions sought to generate $|\mu |\simeq m_{soft}$. To gain an updated perspective on the SUSY $\mu$ problem, we examined twenty solutions. These solutions are summarized in Table \ref{tab:overview} where we list each solution and how it may admit a LH, whether it also addresses the strong CP problem, whether it is gravity-safe, its relation to neutrino masses (Standard see-saw or other) and any distinctive experimental consequences. While all solutions have the capacity to be consistent with the LH (usually by adjusting some arbitrary constant $\lambda_{\mu}$), some actually generate $\mu\sim m_{weak}\ll m_{soft}$ with $\lambda_\mu\sim 1$ (such as the radiative PQ breaking models MSY, CCK and SPM). Also, early attempts to solve the SUSY $\mu$ problem could appeal to an underlying global symmetry such as PQ to suppress the $\mu$ term. It soon became clear that such global symmetries are not consistent with an ultra-violet completion which includes gravity effects since gravitational interactions don't respect global symmetries. Continuous ($U(1)^\prime$) or discrete gauge symmetries are gravity-safe but usually require the addition of perhaps unwanted exotica in order to preserve anomaly-freedom. The more recent emergence of discrete $R$-symmetries~\cite{lrrrssv1,lrrrssv2}, which can arise from compactification of extra dimensions in string theory, seems to provide the cleanest suppression symmetry for the $\mu$ term. A delineation of anomaly-free (including a GS term) $\mathbb{Z}_N^R$ symmetries which are consistent with $SO(10)$ or $SU(5)$ unification (thus preserving gauge coupling unification) offers perhaps the most compelling solutions for the first half of the SUSY $\mu$ problem. 
For $N=4,6,8,12$ and 24, these symmetries forbid $\mu$ along with RPV trilinear terms and dimension-5 $p$-decay operators whilst allowing the required Yukawa couplings and neutrino mass operators. Of these, the $\mathbb{Z}_4^R$ stands out as both simple and compelling. It should probably now replace $R$-parity as a standard pillar upon which the MSSM is constructed. If one also seeks to simultaneously solve the strong CP problem, then the $\mathbb{Z}_{24}^R$ symmetry works in the hybrid models to suppress unwanted superpotential terms while providing the underlying fundamental symmetry from which a global PQ can emerge as an accidental, approximate symmetry which is gravity-safe. Several other solutions also have their roots in stringy behavior (CM, $U(1)^\prime$, instanton, G2MSSM). If the naturalness edict is followed-- which requires $|\mu |$ not too far from $m_{weak}\sim 100$ GeV-- then one expects thermally-underproduced higgsino-like WIMPs as (part of) dark matter. If the natural WIMP abundance is enhanced by non-thermal processes to make up the entirety of dark matter, then they become excluded by a combination of direct and indirect WIMP detection experiments~\cite{Baer:2018rhs}. Thus, additional dark matter beyond WIMPs then seems to be required. The axion is a highly motivated candidate to make up the remaining bulk of dark matter. To gain accord with the requirements of cold dark matter, a gravity-safe solution to the strong CP problem and a solution to the SUSY $\mu$ problem (while also suppressing dangerous $p$-decay operators and allowing for see-saw neutrino masses), then the hybrid models based on $\mathbb{Z}_{24}^R$ discrete $R$-symmetry stand out as a rather complete answer. Overall, the SUSY $\mu$ problem has generated a rich panoply of solutions over the past 35 years. 
To begin the process of selecting amongst them or building others, it is of the essence to first discover SUSY and then to proceed with precision measurements of the SUSY spectra along with any exotica to gain insight into which if any of the solutions best describes nature. Future collider and dark matter experiments should go a long way towards selecting amongst or ruling out these various solutions and other solutions perhaps yet to come. \begin{table}[!htb] \renewcommand{\arraystretch}{1.2} \begin{center} \begin{tabular}{|c|ccccc|} \hline model & admit LH? & strong CP? & gravity safe? & see-saw? & exp. cons.\\ \hline GM & small $\lambda_{\mu}$ & $\times$ & $--$ & $SNSS$ & MSSM \\ \hline CM & small $\lambda_{\mu}$ & $\times$ & $--$ & $SNSS$ & MSSM \\ \hline $R$-sym & $(v_i/m_P)^{n_i}\ll 1$ & $\times$ & $?$ & $SNSS$ & MSSM \\ \hline $\mathbb{Z}_4^R$ & small $\lambda_{\mu}$ & $\times$ & $--$ & $SNSS$ & MSSM \\ \hline Instanton & small $e^{-S_{cl}}$ & $\times$ & $--$ & $SNSS$ & MSSM \\ \hline $G_2MSSM$ & $\langle S_i\rangle/m_P\ll 1$ & $\times$ & $--$ & $SNSS$ & $G_2MSSM$ \\ \hline NMSSM & small $\lambda_{\mu}$ & $\times$ & $--$ & $SNSS$ & extra Higgs/neutralino\\ \hline nMSSM & small $\lambda_{\mu}$ & $\times$ & $--$ & $SNSS$ & extra Higgs/neutralino\\ \hline $\mu\nu$SSM & small $\lambda_{\mu}$ & $\times$ & $--$ & $bRPV$ & $bRPV$, mixings\\ \hline $U(1)^\prime$ (CDEEL) & small $\lambda_{\mu}$ & $\times$ & $--$ & $SNSS$ & $Z^\prime$ \\ \hline sMSSM & small $\lambda_{\mu}$ & $\times$ & $--$ & $SNSS$ & extra Higgs/neutralino\\ \hline $U(1)^\prime$ (HPT) & small $\lambda_{\mu}$ & $\times$ & $--$ & $bRPV$ & $bRPV$, stable heavy hadrons \\ \hline KN & $v_{PQ}<m_{hidden}$ & $\surd$ & $?$ & $SNSS$ & DFSZ axion\\ \hline CKN & $\Lambda <\Lambda_h$ & $\surd$ & $?$ & $SNSS$ & DFSZ axion\\ \hline BK/EWK & $\lambda_\mu\sim 10^{-10}$ & $\surd$ & $?$ & $SNSS$ & DFSZ axion\\ \hline $\rm HFD$ & $v_{PQ}< m_{hidden}$ & $\surd$ & $?$ & $SNSS$ & MSSM \\ \hline MSY/CCK/SPM & $v_{PQ}< 
m_{hidden}$ & $\surd$ & $\times$ & $RadSS$ & DFSZ axion \\ \hline CCL & small $\lambda_{\mu}$ & $\surd$ & $?$ & $several$ & DFSZ axion, $\tilde G$ or $\tilde\nu$ LSP \\ \hline BGW & small $\lambda_{\mu}$ & $\surd$ & $\mathbb{Z}_{22}$ & $SNSS$ & DFSZ axion \\ \hline Hybrid CCK/SPM & small $\lambda_{\mu}$ & $\surd$ & $\mathbb{Z}_{24}^R$ & $SNSS$ & DFSZ axion \\ \hline \end{tabular} \caption{Summary of twenty solutions to the SUSY $\mu$ problem, listing for each 1. how it admits a Little Hierarchy (LH), 2. whether it solves the strong CP problem ($\surd$) or not ($\times$), 3. whether it is expected to be gravity-safe, 4. whether neutrino masses arise via the standard neutrino see-saw (SNSS) or otherwise, and 5. some experimental consequences. } \label{tab:overview} \end{center} \end{table} \section*{Acknowledgments} We thank H. Serce for help in the early stages of this project. This work was supported in part by the US Department of Energy, Office of High Energy Physics. The work of KJB was supported by IBS under project code IBS-R018-D1.
\section{Introduction}\label{sec:introduction} Image morphing amounts to computing a visually appealing transition between two images such that image features in the reference image are mapped to corresponding image features in the target image whenever possible. A particular model for image morphing known as image metamorphosis was proposed by Miller, Trouv\'e, and Younes~\cite{MiYo01,TrYo05,TrYo05a}. It is based on the flow of diffeomorphism model and the large deformation diffeomorphic metric mapping (LDDMM), which dates back to the work of Arnold, Dupuis, Grenander and others \cite{Ar66a,ArKh98,DuGrMi98,BeMiTr05,JoMi00,MiTrYo02,VS09,VRRC12}. From the perspective of the flow of diffeomorphism model, each point of the reference image is transported to the target image in an energetically optimal way such that the image intensity is preserved along the trajectories of the pixels. The metamorphosis model additionally allows for image intensity modulations along the trajectories by incorporating the magnitude of these modulations, which is reflected by the integrated squared material derivative of the image trajectories as a penalization term in the energy functional. Recently, the metamorphosis model has been extended to images in reproducing kernel Hilbert spaces~\cite{RY16}, to functional shapes~\cite{CCT16} and discrete measures~\cite{RY13}. For a more detailed exposition of these models we refer the reader to \cite{Younes2010,MTY15} and the references therein. A variational time discretization of the metamorphosis model for square-integrable images $L^2(\Omega,\mathbb{R}^m)$ was proposed in \cite{BeEf14}. Furthermore, existence of discrete geodesic paths and the Mosco--convergence of the time discrete to the time continuous metamorphosis model were proven. In \cite{NPS17}, the time discrete model was extended to the set of images~$L^2(\Omega,\mathcal{H})$, where $\mathcal{H}$ denotes a finite dimensional Hadamard manifold. 
Recall that Hadamard manifolds are Hadamard spaces with a special Riemannian structure having non-positive sectional curvature (for details see below). In \cite{Bac14}, it is revealed that many concepts of Banach spaces can be generalized to Hadamard spaces, which are therefore a proper choice for the analytical treatment of algorithms for manifold-valued images. They are at the same time very relevant for the processing of manifold-valued images in different applications. Throughout the past years, manifold-valued images have received increased attention (see e.g.~\cite{BaBeStWe16,CiHiSc18,LeStKoCr13,DeStWe14,BeLa17}). Some prominent applications for Hadamard manifold-valued images are the following: \begin{itemize}[label=--] \item Diffusion tensor magnetic resonance imaging is an image acquisition method that incorporates in vivo magnetic resonance images of biological tissues driven by local molecular diffusion. The range space of the resulting images is frequently the space of symmetric and positive semidefinite matrices \cite{BaMa94,ChTs04,FeJo07,VaBe13}. \item Retina data is commonly modeled as images with values in the manifold of univariate non-degenerate Gaussian probability distributions endowed with the Fisher metric~\cite{JeVe14,BePe16}. This space is isometric to the hyperbolic space, which can be exploited numerically. \end{itemize} In this paper, we prove convergence of the manifold-valued time discrete geodesic paths to geodesic paths in the proposed manifold-valued metamorphosis model, which coincides with the original metamorphosis energy functional in the Euclidean space. The proof of convergence in \cite{BeEf14} imports as an essential ingredient a representation formula for images via integration of the weak material derivative along motion paths for the time continuous metamorphosis model in the Euclidean setting. Here, we no longer make use of such a representation formula. 
Our convergence result can thus be considered a stronger result, even in the case of images as pointwise maps into a Euclidean space. \medskip \paragraph{Notation} Throughout this paper, we assume that the image domain~$\Omega\subset\mathbb{R}^n$ for $n\in\{2,3\}$ has Lipschitz boundary. Henceforth, we denote time continuous operators by calligraphic letters and time discrete operators by normal letters. We use standard notation for Lebesgue and Sobolev spaces on the image domain $\Omega$, i.e.~$L^p(\Omega)$ and $H^m(\Omega)=W^{m,2}(\Omega)$. The associated norms are denoted by $\|\cdot\|_{L^p(\Omega)}$ and $\|\cdot\|_{H^m(\Omega)}$, respectively, and the seminorm in $H^m(\Omega)$ is given by $|\cdot|_{H^m(\Omega)}$. For any $f,g\in H^m(\Omega)$, $m\geq 1$, we set \[ D^m f\cdot D^m g=\sum_{i_1,\ldots,i_m=1}^n\frac{\partial^m f}{\partial x_{i_1}\cdots\partial x_{i_m}}\cdot\frac{\partial^m g}{\partial x_{i_1}\cdots\partial x_{i_m}}\,, \qquad \left|D^m f\right|=\left(D^m f\cdot D^m f\right)^{\frac{1}{2}}\,. \] Then, the Sobolev (semi-)norm is defined as \[ |f|_{H^m(\Omega)}=\|D^m f\|_{L^2(\Omega)}\,,\qquad \|f\|_{H^m(\Omega)}=\left(\sum_{j=0}^m|f|_{H^j(\Omega)}^2\right)^\frac{1}{2}\,. \] The symmetric part of a matrix~$A\in\mathbb{R}^{l,l}$ is denoted by $A^\mathrm{sym}$, i.e.~$A^\mathrm{sym}=\frac{1}{2}(A+A^\top)$. We denote by $GL^+(n)$ the elements of $GL(n)$ with positive determinant, by $\mathds{1}$ the identity matrix, and by $\mathrm{Id}$ the identity map. \paragraph{Mosco--convergence} We conclude this section with a brief review of Mosco--convergence, which can be seen as a generalization of $\Gamma$--convergence. For further details we refer the reader to \cite{dalMaso93,Mosco69}. \begin{definition}[Mosco--convergence]\label{def:MoscoConv} Let $(X,d)$ be a Hadamard space and let $\{J_k\}_{k \in \mathbb{N}}$ and $J$ be functionals mapping from $X$ to $\overline{\mathbb{R}}$. 
Then the sequence $J_k$ is said to converge to $J$ in the sense of Mosco w.r.t.~the topology induced by $d$ if the following holds: \begin{enumerate} \item \label{MoscoItem1} For every sequence $\{x_k\}_{k \in \mathbb{N}} \subset X$ with $x_k \rightharpoonup x\in X$ it holds \begin{equation}\label{eq:Mosco1}\tag{liminf-inequality} J(x) \leq \liminf_{k \to \infty} J_k(x_k)\,. \end{equation} \item For every $x \in X$ there exists a recovery sequence $\{x_k\}_{k \in \mathbb{N}} \subset X$ such that $x_k \to x\in X$ and \begin{equation}\label{eq:Mosco2}\tag{limsup-inequality} J(x) \geq \limsup_{k \to \infty} J_k(x_k)\,. \end{equation} \end{enumerate} If in \ref{MoscoItem1} the strong convergence of $x_k$ to $x$ in the topology induced by $d$ is required, then $J_k$ is said to $\Gamma$-converge to $J$ w.r.t.~the topology induced by $d$. \end{definition} This paper is organized as follows: In \cref{sec:preliminaries}, we briefly recall some preliminaries of Hadamard manifolds as well as the metamorphosis model in the Euclidean case and its time discretization on Hadamard manifolds. Then, in \cref{sec:manifoldMetamorphosis} the novel manifold-valued metamorphosis model is introduced and the equivalence to the original metamorphosis model in the case of Euclidean spaces is proven. \Cref{sec:Extension} is devoted to the temporal extension of all relevant quantities as required for the convergence proof. Finally, \cref{sec:Mosco} contains the precise statement of the convergence result in the manifold-valued case. \section{Review and preliminaries}\label{sec:preliminaries} In this section, we briefly present some preliminaries of Hadamard manifolds, a short introduction to the metamorphosis model in the Euclidean setting, and the manifold-valued time discrete metamorphosis model \cite{BeEf14, NPS17}. 
\subsection{Hadamard manifolds} In what follows, a short introduction to Hadamard manifolds is provided and the space of H\"older continuous functions on Hadamard manifolds is analyzed. For further details we refer the reader to the books \cite{Bac14,BH1999,Jost97}. \paragraph{Hadamard manifolds} \begin{figure}[htb] \begin{tikzpicture} \coordinate[label=below left:$\bar p$] (ep) at (0,0); \coordinate[label=above left:$\bar x$] (ex) at (2,2); \coordinate[label=above left:$\bar r$] (er) at (4,4); \coordinate[label=below:$\bar y$] (ey) at (3,1); \coordinate[label=below:$\bar q$] (eq) at (6,2); \draw [thick] (ep) -- (er) -- (eq) -- (ep); \draw [thick] (ex) -- (ey); \fill (ep) circle [radius=2pt]; \fill (eq) circle [radius=2pt]; \fill (er) circle [radius=2pt]; \fill (ex) circle [radius=2pt]; \fill (ey) circle [radius=2pt]; \node[anchor= west] at (1,-0.5) {Euclidean space $\mathbb{R}^2$}; \node[anchor= west] at (1,-1.0) {$\bar x = \bar p + s (\bar r-\bar p),\; \bar y = \bar p + s (\bar q-\bar p)$}; \coordinate[label=below left:$p$] (ep) at (8,0); \coordinate[label=above left:$x$] (ex) at (10,1.6); \coordinate[label=above left:$r$] (er) at (12,4); \coordinate[label=below:$y$] (ey) at (10.8,1.27); \coordinate[label=below:$q$] (eq) at (14,2); \draw [thick] (ep) to [bend right=10] (er); \draw [thick] (ep) to [bend left=10] (eq); \draw [thick] (er) to [bend right=30] (eq); \draw [thick] (ex) to [bend right=30] (ey); \node[anchor= west] at (10,-0.5) {Hadamard manifold}; \node[anchor= west] at (10,-1.0) {$x = \gamma_{{p,r}}(s),\; y = \gamma_{{p,q}}(s)$}; \fill (ep) circle [radius=2pt]; \fill (eq) circle [radius=2pt]; \fill (er) circle [radius=2pt]; \fill (ex) circle [radius=2pt]; \fill (ey) circle [radius=2pt]; \end{tikzpicture} \caption{Comparison triangle in the Euclidean space~$\mathbb{R}^2$ and geodesic triangle on a Hadamard manifold (Figure adapted from \cite[Figure~1.1]{Bac14}).} \label{fig:ComparisonTriangle} \end{figure} A metric space $(X,d)$ is \emph{geodesic} if 
every two points $x,y \in X$ are connected by a shortest geodesic curve $\gamma_{{x,y}}\colon [0,1] \to X$, which can be arclength parametrized, i.e. \begin{equation} d \bigl(\gamma_{{x,y}}(s),\gamma_{{x,y}}(t) \bigr) = \lvert s - t\rvert d \bigl(\gamma_{{x,y}}(0),\gamma_{{x,y}}(1) \bigr) \label{eq:distancePropertyGeodesics} \end{equation} for every $s,t\in[0,1]$ such that $\gamma_{{x,y}}(0)=x$ and $\gamma_{{x,y}}(1)=y$. A \emph{geodesic triangle}~$\triangle(p,q,r)$ in a geodesic space~$(X,d)$ is composed of the vertices $p,q,r\in X$ and three geodesics joining these points. The corresponding comparison triangle~$\triangle(\bar p,\bar q,\bar r)$ (which is unique up to isometries) is a triangle in the Euclidean space~$\mathbb{R}^2$ with vertices~$\bar p,\bar q,\bar r\in \mathbb{R}^2$ such that the three line segments have the same side lengths as the corresponding geodesics of~$\triangle(p,q,r)$, i.e. \[ d(p,q)=\Vert\bar p-\bar q\Vert\,,\quad d(p,r)=\Vert\bar p-\bar r\Vert\,,\quad d(r,q)=\Vert\bar r-\bar q\Vert\,. \] A complete geodesic space $(\mathcal{H},d)$ is called a \emph{Hadamard space} if for every geodesic triangle $\triangle(p,q,r)\in\mathcal{H}$ and $x \in \gamma_{{p,r}}$, $y \in \gamma_{{q,r}}$ we have $d(x,y) \leq\Vert\bar x - \bar y\Vert$, where $\bar x, \bar y$ are the corresponding points in the comparison triangle $\triangle(\bar p, \bar q,\bar r)\in \mathbb{R}^2$ (see \cref{fig:ComparisonTriangle}). Geodesic spaces satisfying the latter property are also called CAT(0) spaces. By~\cite[Proposition~1.1.3 and Corollary~1.2.5]{Bac14} the geometric CAT(0) condition is equivalent to $(\mathcal{H},d)$ being a complete geodesic space with \begin{equation}\label{eq:reshet0} d^2(x,v) + d^2(y,w) \le d^2(x,w) + d^2(y,v) + 2d(x,y)d(v,w) \end{equation} for every $x,y,v,w \in \mathcal{H}$. 
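As a sanity check, the quadruple inequality \eqref{eq:reshet0} can be verified numerically in a concrete Hadamard space. The following minimal Python sketch (using the Poincar\'e upper half-plane with its standard hyperbolic distance; the point sampling and tolerance are ad hoc choices) evaluates the defect of \eqref{eq:reshet0} for random quadruples:

```python
import math
import random

def dist_h2(p, q):
    # Geodesic distance in the Poincare upper half-plane, a model Hadamard manifold.
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1.0 + ((x2 - x1) ** 2 + (y2 - y1) ** 2) / (2.0 * y1 * y2))

def reshetnyak_defect(x, y, v, w):
    # Right-hand side minus left-hand side of the quadruple inequality;
    # non-negative in every CAT(0) space.
    d = dist_h2
    lhs = d(x, v) ** 2 + d(y, w) ** 2
    rhs = d(x, w) ** 2 + d(y, v) ** 2 + 2.0 * d(x, y) * d(v, w)
    return rhs - lhs

random.seed(0)
sample = lambda: (random.uniform(-3.0, 3.0), random.uniform(0.1, 3.0))
defects = [reshetnyak_defect(sample(), sample(), sample(), sample())
           for _ in range(10000)]
print(min(defects) >= -1e-9)  # True: the inequality holds up to rounding
```

The same experiment in the Euclidean plane yields defects that vanish identically for degenerate quadruples, reflecting that \eqref{eq:reshet0} is an equality-free strengthening of the parallelogram-type identities of Hilbert spaces.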
The most prominent examples of Hadamard spaces are Hilbert spaces and \emph{Hadamard manifolds}, which are defined as complete simply connected Riemannian manifolds with non-positive sectional curvature. Hyperbolic spaces and the manifold of positive definite matrices with the affine invariant metric are examples of Hadamard manifolds. Throughout this paper, we exclusively consider finite dimensional Hadamard manifolds, on which any two points are always joined by a geodesic curve. Recall that the Hopf--Rinow Theorem ceases to be true for general infinite dimensional manifolds \cite{Kl95}. A function $f\colon \mathcal{H} \rightarrow \mathbb{R}$ is \emph{convex} if for every $x,y \in \mathcal{H}$ the function $f \circ \gamma_{{x,y}}$ is convex, i.e. \[ f\bigl( \gamma_{{x,y}}(t) \bigr) \le (1-t) f\bigl( \gamma_{{x,y}} (0) \bigr) + t f \bigl(\gamma_{{x,y}}(1)\bigr) \] for all $t \in [0,1]$. In Hadamard spaces the distance is \emph{jointly convex} \cite[Proposition 1.1.5]{Bac14}, i.e.~for two geodesics $\gamma_{{x_1,x_2}},\gamma_{{y_1,y_2}}$ and $t\in[0,1]$ the relation \begin{equation}\label{eq:ConvDist} d\bigl(\gamma_{{x_1,x_2}}(t),\gamma_{{y_1,y_2}}(t)\bigr) \le (1-t)d(x_1,y_1)+td(x_2,y_2) \end{equation} holds true. Thus, geodesics are in particular uniquely determined by their endpoints. For a bounded sequence $\{x_n\}_{n\in\mathbb{N}}\subset \mathcal{H}$, the function $w \colon \mathcal{H} \to [0, +\infty)$ defined by \begin{equation}\label{eq:WeakConv} w(x;\, \{x_n\}_{n\in\mathbb{N}}) \coloneqq \limsup_{n\to\infty} d^2(x, x_n) \end{equation} has a unique minimizer, which is called the \emph{asymptotic center of $\{x_n\}_{n\in\mathbb{N}}$} \cite[p.~58]{Bac14}. A sequence $\{x_n\}_{n\in\mathbb{N}}$ is said to \emph{converge weakly to a point $x \in \mathcal{H}$} if it is bounded and $x$ is the asymptotic center of each subsequence of $\{x_n\}_{n\in\mathbb{N}}$ \cite[p.~103]{Bac14}. 
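The joint convexity \eqref{eq:ConvDist} can likewise be checked numerically on the second example mentioned above, the manifold of positive definite matrices with the affine invariant metric, where the geodesic between $A$ and $B$ has the closed form $\gamma_{A,B}(t)=A^{1/2}(A^{-1/2}BA^{-1/2})^tA^{1/2}$. A minimal Python sketch (the random matrices, matrix sizes and tolerances are ad hoc choices):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, inv, logm, sqrtm

def random_spd(rng, n=3):
    # Random symmetric positive definite matrix.
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def d_spd(A, B):
    # Affine invariant distance d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F.
    S = inv(sqrtm(A))
    return np.linalg.norm(logm(S @ B @ S), 'fro')

def geodesic(A, B, t):
    # gamma_{A,B}(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}.
    R = sqrtm(A)
    Rinv = inv(R)
    return np.real(R @ fractional_matrix_power(Rinv @ B @ Rinv, t) @ R)

rng = np.random.default_rng(1)
A1, A2, B1, B2 = (random_spd(rng) for _ in range(4))
for t in (0.25, 0.5, 0.75):
    lhs = d_spd(geodesic(A1, A2, t), geodesic(B1, B2, t))
    rhs = (1.0 - t) * d_spd(A1, B1) + t * d_spd(A2, B2)
    assert lhs <= rhs + 1e-8
print("joint convexity of the distance verified")
```

The assertion mirrors \eqref{eq:ConvDist} with $\gamma_{{x_1,x_2}}$ and $\gamma_{{y_1,y_2}}$ replaced by the two matrix geodesics.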
Then, the notions of proper and (weakly) lower semicontinuous functions are defined analogously to the Hilbert space setting. Next, we endow $\mathcal{H}$ with its Borel $\sigma$-algebra $\mathcal{B}$ and consider measurable maps from the open and bounded set~$\Omega \subset \mathbb R^n$ into $\mathcal{H}$. A measurable map $f\colon\Omega\to\mathcal{H}$ belongs to $\mathcal{L}^p(\Omega,\mathcal{H})$, $p \in [1,\infty]$, if \[ \mathrm{d}_p(f,f_a) < \infty \] for any constant mapping $f_a(\omega)=a$ with $a\in \mathcal{H}$, where $\mathrm{d}_p$ is defined for two measurable maps $f$ and $g$ by \[ \mathrm{d}_p(f,g) \coloneqq \begin{cases} \left(\displaystyle\int_{\Omega}d^p(f(\omega),g(\omega))\,\mathrm{d}\omega\right)^{\frac{1}{p}}\,,\quad & p\in[1,\infty)\,,\\ \operatorname{ess\,sup}_{\omega\in\Omega}d(f(\omega),g(\omega))\,, & p =\infty\,. \end{cases} \] Using the equivalence relation $f\sim g$ if $\mathrm{d}_p(f,g) = 0$, the space $L^p(\Omega,\mathcal{H})\coloneqq \mathcal{L}^p(\Omega,\mathcal{H})/ \sim$ equipped with $\mathrm{d}_p$ becomes a complete metric space. In the case $p=2$ this space is a Hadamard space \cite[Proposition 1.2.18]{Bac14}. Finally, for $f,g$ in the \emph{weighted Bochner space} $L^2((0,1),L^2(\Omega,\mathcal{H}),w)$ with weight $w\in C^0([0,1] \times \Omega,[c_1,c_2])$, $0<c_1<c_2$, the metric is given by \[ \mathrm{d}_2^2(f,g)=\int_0^1\int_\Omega d(f(t,x),g(t,x))^2 w(t,x)\,\mathrm{d} x\,\mathrm{d} t\,. \] In the context of paths with finite path energy in the space of manifold-valued images, we observe H\"older continuity in time, which enables pointwise evaluations in time, in particular for $t=0$ and $t=1$. Next, we restate a density result given in \cite[Theorem 2]{NPS17}, which we slightly sharpen by noting that the functions $h_k$ in the proof are actually Lipschitz continuous. \begin{theorem}\label{thm:denselip} Let $(\mathcal{H},d)$ be a locally compact Hadamard space. 
Then the set of Lipschitz continuous functions mapping from $\Omega$ to $\mathcal{H}$ is dense in $L^p(\Omega,\mathcal{H})$ for $p \in [1,\infty)$. \end{theorem} Another classical property of Lebesgue spaces also transfers to the Hadamard setting: \begin{lemma}\label{lemm:convergenceAE} Let $f_k \in L^2((0,1), L^2(\Omega, \mathcal{H}),w)$ be a convergent sequence with limit $f$. Then there exists a subsequence which converges a.e.~as $k \to \infty$. \end{lemma} \begin{proof} The Chebyshev inequality implies the convergence in measure; then we can apply \cite[Theorem 5.2.7 (i)]{Malliavin2012}. \end{proof} Next, we define subsets of H\"older continuous functions with fixed parameters $\alpha\in(0,1]$ and $L>0$ by \[ A_{\alpha,L,w} \coloneqq \left\{f \in L^2((0,1), L^2(\Omega,\mathcal{H}),w) \colon \mathrm{d}_2(f(s),f(t)) \leq L \vert t -s \vert^{\alpha} \,\, \forall t,s \in [0,1]\right\}\,. \] \begin{theorem}\label{thm:LipClos} The set $A_{\alpha,L,w}$ is closed and convex. In particular, $A_{\alpha,L,w}$ is weakly closed. \end{theorem} \begin{proof} First, we show closedness. By \Cref{lemm:convergenceAE} we get an a.e.~convergent subsequence. Assume there exists a point $t\in[0,1]$ at which this subsequence does not converge. Then, we can choose $s \in [0,1]$ arbitrarily close to $t$ with $\mathrm{d}_2(f_k(s),f(s))\to 0$ as $k\to \infty$. This implies \[ \mathrm{d}_2(f_k(t),f_l(t))\leq 2L\vert t-s\vert^{\alpha}+\mathrm{d}_2(f_k(s),f_l(s)) \] for all $k,l$ sufficiently large, which proves that the sequence $f_k(t)$ is a Cauchy sequence. The H\"older continuity of $f$ follows directly by approximation arguments. Second, we show convexity. 
For $f_1,f_2\in A_{\alpha,L,w}$ and the connecting geodesic \[ [0,1]\ni r\mapsto\gamma_{{f_1,f_2}}^r\in L^2((0,1), L^2(\Omega,\mathcal{H}), w) \] we obtain by the convexity of the Hadamard metric \begin{align*} &\mathrm{d}_2\left(\gamma_{{f_1,f_2}}^{r}(s),\gamma_{{f_1,f_2}}^{r}(t)\right) =\mathrm{d}_2\left(\gamma_{{f_1(s),f_2(s)}}^{r},\gamma_{{f_1(t),f_2(t)}}^{r}\right)\\ \leq &(1-r)\mathrm{d}_2(f_1(s),f_1(t)) + r \mathrm{d}_2(f_2(s),f_2(t)) \leq L \vert t-s \vert^{\alpha}\,. \end{align*} Finally, the weak closedness in the Bochner space follows by \cite[Lemma~3.2.1]{Bac14}. \end{proof} The following lemma is exploited in the proof of the convergence result \cite[Corollary~3]{NPS17}, \cite[Lemma~2.2.2]{NPS18}: \begin{lemma}\label{lemm:stet_norm} Let $(\mathcal{H},d)$ be a locally compact Hadamard space. For fixed $p\in [1,\infty)$ let $f \in L^p(\Omega,\mathcal{H})$ and $\{ Y_j\}_{j\in\mathbb{N}}\subset C^1(\overline\Omega,\overline\Omega)$ be a sequence of diffeomorphisms such that $\vert \mathrm{det} (D Y_j) \vert^{-1} \leq C$ for all $j \in \mathbb{N}$, which converges to a diffeomorphism~$ Y$ in $(L^\infty(\Omega))^n$. Then, \[ \limsup_{j \to \infty} \mathrm{d}_p(f\circ Y_j, f \circ Y) = 0\,. \] If in addition $ Y_j$ converges to $ Y$ in $(C^{1,\alpha}(\overline\Omega))^n$, then $\limsup_{j \to \infty}\mathrm{d}_p(f\circ ( Y_j)^{-1},f\circ Y^{-1}) = 0$. \end{lemma} The generalization of this result to the space $L^2((0,1), L^2(\Omega,\mathcal{H}))$ is straightforward. Using the triangle inequality, we obtain the following corollary, which generalizes in the same way. \begin{corollary}\label{cor:stet_norm_2} Let the assumptions from \cref{lemm:stet_norm} hold true and let $\{f_j\}_{j \in \mathbb{N}}\subset L^p(\Omega,\mathcal{H})$, $p \in [1,\infty)$, be a sequence which converges to $f$ in $L^p(\Omega,\mathcal{H})$. 
Then, \[ \limsup_{j\to\infty}\mathrm{d}_p(f_j\circ Y_j,f\circ Y)=0 \quad\text{and}\quad\limsup_{j\to\infty}\mathrm{d}_p(f_j\circ(Y_j)^{-1},f\circ Y^{-1})=0\,. \] \end{corollary} \subsection{Metamorphosis model in Euclidean case} In this subsection, we briefly introduce the space of images $I\colon\Omega \to \mathbb{R}$ with a Riemannian structure from the perspective of the flow of diffeomorphisms model and the metamorphosis model. For further details we refer the reader to the literature mentioned in \cref{sec:introduction}. \paragraph{Flow of diffeomorphisms} In the flow of diffeomorphisms model, the temporal evolution of each pixel of the reference image along a trajectory is determined by a \emph{family of diffeomorphisms $(Y(t))_{t\in[0,1]}\colon\overline\Omega\rightarrow\mathbb{R}^n$} such that the brightness is preserved. The \emph{brightness constancy assumption} is mathematically reflected by a vanishing material derivative~$\frac{D}{\partial t}I=\dotI+v\cdot\nablaI$ along a motion path $(I(t))_{t\in [0,1]}$ in the space of images, where $v(t)=\dot Y(t)\circ Y^{-1}(t)$ denotes the time-dependent \emph{Eulerian velocity}. Then, one defines the metric and the right-invariant path energy associated with this family of diffeomorphisms as follows \[ g_{Y(t)}(\dot Y(t),\dot Y(t))=\int_\Omega L[v(t),v(t)]\,\mathrm{d} x\,,\qquad \boldsymbol{\mathcal{E}}((Y(t))_{t\in [0,1]})=\int^1_0 g_{Y(t)}(\dot Y(t),\dot Y(t))\,\mathrm{d} t\,. \] Throughout this paper, we consider the \emph{higher order operator} \begin{equation} L[v(t),v(t)]=\tfrac{\lambda}{2}(\tr\varepsilon[v])^2+\mu\tr(\varepsilon[v]^2)+\gamma|D^m v|^2\,, \label{eq:ellipticOperator} \end{equation} where $\varepsilon[v]=(\nabla v)^\mathrm{sym}$ refers to the symmetric part of the gradient and $m>1+\frac{n}{2}$ as well as $\lambda,\mu,\gamma>0$ are fixed constants. 
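For intuition, the quadratic form \eqref{eq:ellipticOperator} can be evaluated for a discretized velocity field by finite differences. The following Python sketch (the grid resolution, the sample field, and the constants $\lambda,\mu,\gamma$ are ad hoc assumptions; $m=3$ satisfies $m>1+\frac{n}{2}$ for $n=2$) approximates the dissipation $\int_\Omega L[v,v]\,\mathrm{d} x$:

```python
import numpy as np

lam, mu, gam, m, n = 1.0, 1.0, 1.0, 3, 2  # m > 1 + n/2 for n = 2

h = 1.0 / 128
x = np.arange(0.0, 1.0 + h, h)
X, Y = np.meshgrid(x, x, indexing='ij')
# smooth velocity field on Omega = (0,1)^2 vanishing on the boundary
v = np.stack([np.sin(np.pi * X) * np.sin(np.pi * Y),
              (X * (1.0 - X) * Y * (1.0 - Y)) ** 2])

def all_partials(f, order):
    # all partial derivatives of the given order via repeated finite differences
    out = [f]
    for _ in range(order):
        out = [np.gradient(g, h, axis=ax) for g in out for ax in range(n)]
    return out

grad = np.stack([np.stack(all_partials(v[k], 1)) for k in range(n)])  # grad[k, i] = d_i v_k
eps = 0.5 * (grad + grad.transpose(1, 0, 2, 3))                       # eps[v] = (grad v)^sym
tr_eps = eps[0, 0] + eps[1, 1]
tr_eps_sq = np.einsum('ij...,ji...->...', eps, eps)                   # tr(eps[v]^2)
Dm_sq = sum(d ** 2 for k in range(n) for d in all_partials(v[k], m))  # |D^m v|^2
integrand = 0.5 * lam * tr_eps ** 2 + mu * tr_eps_sq + gam * Dm_sq
dissipation = integrand.sum() * h * h                                 # Riemann sum over Omega
print(f"approximate dissipation: {dissipation:.3f}")
```

Up to discretization error, the first two terms reproduce the viscous part and the last term the higher order regularization of \eqref{eq:ellipticOperator}.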
This particular choice of the operator~$L$ originates from fluid mechanics, where the metric~$g_{Y(t)}$ refers to a \emph{viscous dissipation} in a multipolar fluid model as described in \cite{NeSi91,GrRi64,GrRi64a}. If $Y_A$ and $Y_B$ are diffeomorphisms and the energy $\boldsymbol{\mathcal{E}}$ is finite for a general path $(Y(t))_{t\in[0,1]}$ with $Y(0)=Y_A$ and $Y(1)=Y_B$, then, by the $H^m(\Omega)$-coercivity of the metric $g_{Y(t)}$, the path automatically consists of diffeomorphisms. In addition, an energy minimizing velocity field~$v$ exists such that $\frac{\,\mathrm{d}}{\,\mathrm{d} t}Y(t,\cdot)=v(t,Y(t,\cdot))$ for every $t\in[0,1]$, see \cite{DuGrMi98}. Furthermore, the corresponding path~$I$ for two input images $I_A,I_B\in L^2(\Omega)$ has the particular form $I(t,\cdot)=I_A\circ Y^{-1}(t,\cdot)$. In what follows, we investigate diffeomorphisms induced by velocity fields in the space \[ \mathcal V \coloneqq H^m(\Omega, \mathbb{R}^n) \cap H^1_0(\Omega, \mathbb{R}^n)\,. \] The following theorem relates the norm of the induced flow to the integrated norm of the associated velocity field. \begin{theorem}\label{thm:DiffeoVelo} Let $v\in L^2((0,1),\mathcal V)$ be a velocity field. Then there exists a global flow $Y \in H^1([0, 1], H^m(\Omega)^n)$ such that \begin{equation} \begin{array}{rcl} \displaystyle\frac{\,\mathrm{d}}{\,\mathrm{d} t}Y(t,x)&=&v(t,Y(t,x))\,,\\[0.5em] Y(0,x)&=&x\,, \end{array} \label{eq:contFlow} \end{equation} for a.e.~$x \in \Omega$ and a.e.~$t \in[0,1]$. In particular, $Y(t,\cdot)$ is a homeomorphism for all $t \in [0, 1]$. 
Further, for $\alpha \in [0, m - 1 - \frac{n}{2})$ the following estimate holds \begin{align} \Vert Y \Vert_{C^0([0,1],C^{1,\alpha}(\overline{\Omega}))} + \Vert Y^{-1} \Vert_{C^0([0,1],C^{1,\alpha}(\overline{\Omega}))} \leq C \exp \left(C\int_0^1 \Vert v(s,\cdot)\Vert_{C^{1,\alpha}(\overline{\Omega})} \,\mathrm{d} s\right)\,.\label{eq:Gronwall} \end{align} If $\alpha>0$, then the constant $C$ depends on $\int_0^1 \Vert \nabla v(s,\cdot)\Vert_{C^{0,\alpha}(\overline{\Omega})} \,\mathrm{d} s$. Finally, the solution operator $L^2((0,1),\mathcal V) \to C^0([0,1], C^1(\overline{\Omega}, \mathbb{R}^n))$ assigning a flow $Y$ to every velocity field~$v$ is continuous w.r.t.~the weak topology in $L^2((0,1),\mathcal V)$ and the $L^\infty([0,1],C^{0}(\overline{\Omega}))$-topology for $Y$. \end{theorem} \begin{proof} Aside from \cref{eq:Gronwall} the theorem corresponds to \cite[Theorem 1 and Theorem 9]{TrYo05a}. The estimate for the first term in \cref{eq:Gronwall} follows from \cite[Lemma 7]{TrYo05a} and relies on Gronwall's inequality when considering the $C^0([0,1],C^1(\overline\Omega))$-norm on the left-hand side and the $H^m(\Omega)$-norm in the exponent on the right-hand side. The generalization to the $C^{1,\alpha}(\overline\Omega)$-norm in the exponent is straightforward. In what follows, we sketch the proof of \eqref{eq:Gronwall} when employing the $C^0([0,1],C^{1,\alpha}(\overline\Omega))$-norm on the left-hand side. Let $i\in\{1,\dots,n\}$, $t\in(0,1)$ and $x,y\in\Omega$. Taking into account the aforementioned result we can assume $\Vert Y\Vert _{C^0([0,1],C^1(\overline\Omega))}\leq C(v)\coloneqq C\exp\left(C\int_0^1\Vert v(s,\cdot)\Vert _{C^{1,\alpha}(\overline\Omega)}\,\mathrm{d} s\right)$. 
Then, a nonrigorous computation yields \begin{align*} &\quad\left\vert \tfrac{\,\mathrm{d}}{\,\mathrm{d} t}\vert \partial_i Y(t,x)-\partial_i Y(t,y)\vert \right\vert \leq\vert \nabla v(t, Y(t,x))\cdot\partial_i Y(t,x)-\nabla v(t, Y(t,y))\cdot\partial_i Y(t,y)\vert \\[1ex] &\leq \vert \nabla v(t, Y(t,x))-\nabla v(t, Y(t,y))\vert \ \vert \partial_i Y(t,x)\vert + \vert \nabla v(t, Y(t,y))\vert \ \vert \partial_i Y(t,x)-\partial_i Y(t,y)\vert \\[1ex] &\leq C\Vert \nabla v(t,\cdot)\Vert _{C^{0,\alpha}(\overline\Omega)}\Vert Y\Vert _{C^0([0,1],C^1(\overline\Omega))}^{1+\alpha}\vert x-y\vert ^\alpha+ \Vert \nabla v(t,\cdot)\Vert _{C^0(\overline\Omega)}\vert \partial_i Y(t,x)-\partial_i Y(t,y)\vert \\[1ex] &\leq C\,C(v)^{1+\alpha}\Vert \nabla v(t,\cdot)\Vert _{C^{0,\alpha}(\overline\Omega)}\vert x-y\vert ^\alpha+\Vert \nabla v(t,\cdot)\Vert _{C^0(\overline\Omega)}\vert \partial_i Y(t,x)-\partial_i Y(t,y)\vert \,. \end{align*} Thus, \begin{align*} &\quad\frac{\,\mathrm{d}}{\,\mathrm{d} t}\left(\exp\left(-\int_0^t\Vert \nabla v(s,\cdot)\Vert _{C^0(\overline\Omega)}\,\mathrm{d} s\right)\vert \partial_i Y(t,x)-\partial_i Y(t,y)\vert \right)\\ &\leq C\, C(v)^{1+\alpha}\Vert \nabla v(t,\cdot)\Vert _{C^{0,\alpha}(\overline\Omega)}\exp\left(-\int_0^t\Vert \nabla v(s,\cdot)\Vert _{C^0(\overline\Omega)}\,\mathrm{d} s\right)\vert x-y\vert ^\alpha\\ &\leq C\, C(v)^{1+\alpha}\Vert \nabla v(t,\cdot)\Vert _{C^{0,\alpha}(\overline\Omega)}\vert x-y\vert ^\alpha\,. 
\end{align*} The integration of both sides w.r.t.~$t$ yields \begin{align*} &\quad\exp\left(-\int_0^t\Vert \nabla v(s,\cdot)\Vert _{C^0(\overline\Omega)}\,\mathrm{d} s\right)\vert \partial_i Y(t,x)-\partial_i Y(t,y)\vert \\ &\leq C\, C(v)^{1+\alpha}\int_0^1\Vert \nabla v(t,\cdot)\Vert _{C^{0,\alpha}(\overline\Omega)}\,\mathrm{d} t\vert x-y\vert ^\alpha\,, \end{align*} which bounds the first term in \eqref{eq:Gronwall} and the second term is estimated similarly by noting that $ Y^{-1}(t,\cdot)$ is the flow associated with the (backward) motion field $-v(1-t,\cdot)$. This proof can be further generalized to $C^0([0,1],C^{k,\alpha}(\overline\Omega))$-norms provided that $m$ is sufficiently large. \end{proof} \begin{remark}\label{rem:DiffeoVeloRem} Analogous results hold when replacing $\mathcal V$ by $C^{1,\alpha}(\overline{\Omega})$ with zero boundary condition \cite[Chapter~8]{Younes2010}. Furthermore, the mapping $v \mapsto Y_v$ is Lipschitz continuous in $v$, i.e. \[\Vert Y_v(t,\cdot) - Y_{\tilde v}(t, \cdot) \Vert_{C^0(\overline\Omega)} \leq \bigl(1+C\exp(C)\bigr) \int_{0}^{t} \Vert v(s,\cdot) - \tilde v(s,\cdot)\Vert_{C^0(\overline\Omega)} \,\mathrm{d} s,\] where $C=\int_{0}^{t}\Vert v(s,\cdot)\Vert_{C^1(\overline\Omega)} \,\mathrm{d} s$ \cite[(8.16)]{Younes2010}. \end{remark} \paragraph{Metamorphosis} The metamorphosis model can be regarded as a generalization of the flow of diffeomorphisms model, in which the brightness constancy assumption is replaced by a quadratic penalization of the material derivative, which in particular allows for intensity modulations along the trajectories. 
Thus, as a first attempt one could define the metric and the path energy in the metamorphosis model associated with the family of images $(I(t))_{t\in[0,1]}\colon\overline\Omega\rightarrow\mathbb{R}$ and a penalization parameter~$\delta>0$ as follows \begin{equation} g(\dotI,\dotI)=\min_{v:\overline\Omega\rightarrow\mathbb{R}^n}\int_\Omega L[v,v]+{\frac{1}{\delta}} \left(\frac{D}{\partial t}I\right)^2\,\mathrm{d} x\,,\qquad \boldsymbol{\mathcal{E}}(I)=\int_0^1g(\dotI(t),\dotI(t))\,\mathrm{d} t\,. \label{eq:firstMetamorphosis} \end{equation} This shows that the flow of diffeomorphisms model can be seen as the limit case of the metamorphosis model for $\delta\to0$. However, there are two major problems related to \eqref{eq:firstMetamorphosis}. Clearly, in general, paths in the space of images do not exhibit any smoothness properties---neither in space nor in time. Thus, the evaluation of the material derivative~$(\frac{D}{\partial t}I)^2$ is not well-defined. Moreover, since different pairs of velocity fields~$v$ and material derivatives~$\frac{D}{\partial t}I$ can imply the same time derivative of the image path~$\dotI$, the restriction to equivalence classes of pairs~$(v,\frac{D}{\partial t}I)$ is required, where two pairs are equivalent if and only if they induce the same temporal change of the image path~$\dotI$. To tackle both problems, Trouv\'e and Younes~\cite{TrYo05a} proposed a nonlinear geometric structure in the space of images~$L^2(\Omega)\coloneqq L^2(\Omega,\mathbb{R})$. In detail, for a given velocity field~$v\in L^2((0,1), \mathcal V)$ and an image path~$I\in L^2((0,1),L^2(\Omega))$ the material derivative is replaced by the function $z\in L^2((0,1),L^2(\Omega))$ known as the \emph{weak material derivative}, which is uniquely determined by \[ \int_0^1\int_\Omega\eta z\,\mathrm{d} x\,\mathrm{d} t=-\int_0^1\int_\Omega(\partial_t\eta+\div(v\eta))I\,\mathrm{d} x\,\mathrm{d} t \] for $\eta\in C^{\infty}_c((0,1)\times\Omega)$. 
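For smooth $I$ and $v$ the weak formulation is consistent with the classical material derivative from the flow of diffeomorphisms model. A short formal computation (assuming enough smoothness to integrate by parts; no boundary terms arise since $\eta$ is compactly supported) gives

```latex
\int_0^1\int_\Omega\eta z\,\mathrm{d} x\,\mathrm{d} t
  =-\int_0^1\int_\Omega(\partial_t\eta+\div(v\eta))I\,\mathrm{d} x\,\mathrm{d} t
  =\int_0^1\int_\Omega\eta\left(\partial_t I+v\cdot\nabla I\right)\,\mathrm{d} x\,\mathrm{d} t\,,
```

so that $z=\partial_t I+v\cdot\nabla I$ a.e., which is exactly the material derivative $\frac{D}{\partial t}I$ whenever the latter exists.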
Moreover, for all $I\in L^2(\Omega)$ the associated \emph{tangent space $T_I L^2(\Omega)$} is defined as $T_I L^2(\Omega)=\{I\}\times W/N_{I}$, where $W= \mathcal V \times L^2(\Omega)$ and \[ N_{I}=\left\{w=(v,z)\in W:\int_\Omega z\eta+I\div(\eta v)\,\mathrm{d} x=0 \ \forall\eta\in C^{\infty}_c(\Omega)\right\}\,. \] As usual, the associated \emph{tangent bundle} is given by $TL^2(\Omega)=\bigcup_{I\in L^2(\Omega)}T_{I}L^2(\Omega)$. Then, following Trouv\'e and Younes, a \emph{regular path in the space of images} (denoted by $I\in H^1([0,1],L^2(\Omega))$) is a curve $I\in C^0([0,1],L^2(\Omega))$ such that there exists a measurable path $\gamma\colon[0,1]\rightarrow TL^2(\Omega)$ with bounded $L^2$-norm in space and time and $\pi(\gamma)=I$, where $\pi(I,\overline{(v,z)})=I$ refers to the projection onto the image manifold and $(I,\overline{(v,z)})$ denotes the equivalence class, such that \[ -\int_0^1\int_\Omega I\partial_t\eta\,\mathrm{d} x\,\mathrm{d} t=\int_0^1\int_\Omega z\eta+I\div(\eta v)\,\mathrm{d} x\,\mathrm{d} t \] for all $\eta\in C^{\infty}_c((0,1)\times\Omega)$. In this paper, we use the alternative definition of the weak material derivative \[ I(s,Y(s,\cdot))-I(t,Y(t,\cdot)) = \displaystyle\int_t^s z(r,Y(r,\cdot))\,\mathrm{d} r \qquad\text{for all }t<s \in [0,1] \] for a given flow $Y$ \cite{TrYo05a}. Finally, if we assume the $\mathcal V$-coercivity of the operator~$L$, then the \emph{path energy in the metamorphosis model} for a regular path $I \in H^1([0,1],L^2(\Omega))$ is defined as \begin{equation} \boldsymbol{\mathcal{E}}(I)=\int_0^1\inf_{\overline{(v,z)}\in T_{I(t)} L^2(\Omega)}\int_\Omega L[v,v]+{\frac{1}{\delta}} z^2\,\mathrm{d} x\,\mathrm{d} t\,. 
\label{eq:DefinitionPathenergy} \end{equation} The existence of energy minimizing paths in the space of images (known as \emph{geodesic curves}), i.e.~solutions of the boundary value problem \[ \min\{\boldsymbol{\mathcal{E}}(\tildeI):\ \tildeI\in H^1([0,1],L^2(\Omega)),\ \tildeI(0)=I_A,\ \tildeI(1)=I_B\} \] for fixed images $I_A,I_B\in L^2(\Omega)$, is proven in \cite{TrYo05a}. In addition, one can prove the existence of a minimizing pair $(v,z)\in T_{I(t)} L^2(\Omega)$. We remark that all results of this paper can be easily generalized to the space of multichannel or color images~$L^2(\Omega,\mathbb{R}^C)$ for $C\geq 2$ color channels with minor modifications. \subsection{Manifold-valued time discrete metamorphosis model} Now, we recall the time discrete metamorphosis model for manifold-valued images, whose convergence is studied in this paper. The model itself was thoroughly analyzed in \cite{NPS17} and extends the variational time discretization of the classical metamorphosis model proposed in \cite{BeEf14}. Fix $\gamma,\delta,\epsilon>0$ and $m>1+\frac{n}{2}$, and let $\mathcal{H}$ be any finite dimensional Hadamard manifold. For two manifold-valued images~$I,\tildeI\in L^2(\Omega,\mathcal{H})$ and an \emph{admissible deformation} \[ \varphi\in \mathcal{A}_\epsilon=\left\{\varphi\in H^m(\Omega,\Omega):\det D\varphi>\epsilon\text{ in }\Omega, \varphi=\mathrm{Id}\text{ on }\partial\Omega\right\} \] the \emph{time discrete energy} for pairs of images is defined as \[ \boldsymbol{R}(I,\tildeI) =\inf_{\varphi\in\mathcal{A}_\epsilon}\boldsymbol{R}(I,\tildeI,\varphi)\,, \] where \[ \boldsymbol{R}(I,\tildeI,\varphi)= \int_\Omega \mathrm{W}(D\varphi(x))+\gamma\lvert D^m\varphi(x)\rvert^2\,\mathrm{d} x +{\frac{1}{\delta}}\mathrm{d}_2^2(I,\tildeI\circ \varphi) \] for an \emph{elastic energy density~$\mathrm{W}$}. Here, $\mathrm{d}_2^2(\cdot, \cdot)$ replaces the squared $L^2$-norm in the time discrete metamorphosis model. 
The energy~$\boldsymbol{R}$ can be considered as a numerically feasible approximation of the squared Riemannian distance in the underlying image space~\cite{RuWi12b}. Throughout this paper, we assume that $\mathrm{W}$ satisfies the following conditions: \begin{enumerate}[label=(W\arabic*)] \item\label{W1} $\mathrm{W}\in C^4(\mathrm{GL}^+(n),\mathbb{R}_0^+)$ is polyconvex. \item\label{W2} There exist constants $C_{\mathrm{W},1},C_{\mathrm{W},2},r_\mathrm{W}>0$ such that for all $A\in\mathrm{GL}^+(n)$ the following growth estimates hold true: \begin{align} \mathrm{W}(A)&\geq C_{\mathrm{W},1}\vert A^\mathrm{sym}-\mathds{1}\vert ^2\,, &\text{if }\vert A-\mathds{1}\vert <r_\mathrm{W}\,, \label{eq:energy3}\\ \mathrm{W}(A)&\geq C_{\mathrm{W},2}\,, &\text{if }\vert A-\mathds{1}\vert \geq r_\mathrm{W}\,. \label{eq:energy4} \end{align} \item\label{W3} The energy density admits the following representation at $\mathds{1}$: \begin{align} \mathrm{W}(\mathds{1})&=0\,,\quad D\mathrm{W}(\mathds{1})=0\,,\label{eq:energy1}\\ \frac12 D^2\mathrm{W}(\mathds{1})(A,A) &=\frac{\lambda}{2}(\tr A)^2+\mu\tr\left(\left(A^\mathrm{sym}\right)^2\right)\,. \label{eq:energy2} \end{align} \end{enumerate} The assumption~\ref{W1} is required for the lower semicontinuity of the energy functional. Furthermore, \ref{W2} enforces the convergence of the optimal deformations to the identity in the limit~$K\rightarrow\infty$. Finally, \ref{W3} ensures the compatibility of~$\mathrm{W}$ with the elliptic operator~$L$ (cf.~\eqref{eq:ellipticOperator}). Note that \ref{W1} and \ref{W3} are identical to \cite[(W1) and (W3)]{BeEf14}. We recall that in \cite[(W2)]{BeEf14} a growth estimate of the form \begin{equation} \mathrm{W}(A)\geq C(\det A)^{-s}-C \label{eq:oldW2} \end{equation} for $s>n-1$ and a positive constant~$C$ instead of \ref{W2} is assumed. This modification additionally requires essentially bounded images in order to ensure that the deformations are homeomorphic. 
However, in order to work in the Hadamard space of square-integrable images, we have to use \ref{W2} instead, which in particular results in diffeomorphic deformations. The \emph{time discrete path energy} for $K+1$ images $\boldsymbol{I}=(I_0,\ldots,I_K)\in(L^2(\Omega,\mathcal{H}))^{K+1}$, $K\geq 2$, is defined as the weighted sum of the discrete energies $\boldsymbol{R}$ evaluated at consecutive images, i.e. \begin{equation} \boldsymbol{J}_K (\boldsymbol{I}) \coloneqq \inf_{\boldsymbol{\varphi} \coloneqq (\varphi_1,\dots,\varphi_K) \in \mathcal{A}_\epsilon^K} \left\{ \boldsymbol{J}_K(\boldsymbol{I}, \boldsymbol{\varphi}) \coloneqq K \sum_{k = 1}^{K} \boldsymbol{R}(I_{k-1},I_k,\varphi_k) \right\}\,. \label{eq:d_path} \end{equation} For two fixed images $I_A=I_0,\, I_B=I_K\in L^2(\Omega,\mathcal{H})$ a $(K+1)$-tuple $\boldsymbol{I}=(I_0,\ldots,I_K)\in(L^2(\Omega,\mathcal{H}))^{K+1}$ is called a \emph{discrete geodesic curve} if \[ \boldsymbol{J}_K (\boldsymbol{I})\leq \boldsymbol{J}_K ((I_0,\tildeI_1,\ldots,\tildeI_{K-1},I_K)) \] for all $(\tildeI_1,\ldots,\tildeI_{K-1})\in(L^2(\Omega,\mathcal{H}))^{K-1}$. The existence of discrete geodesic curves has been shown in \cite[Section~3]{NPS17} under different assumptions on the energy density. However, by slightly altering these proofs, the existence can also be verified under the assumptions \ref{W1}-\ref{W3} (cf.~also \cite{Effland17}). Note that in general neither the discrete geodesic curve nor the associated set of deformations is uniquely determined. The Mosco--convergence of a temporal extension of $\boldsymbol{J}_K$ to $\boldsymbol{\mathcal{E}}$ in the Euclidean case was proven in \cite{BeEf14}. 
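To make the discrete quantities concrete, the following Python sketch evaluates $\boldsymbol{R}$ and $\boldsymbol{J}_K$ in the simplest scalar setting $\Omega=(0,1)$, $\mathcal{H}=\mathbb{R}$. The one dimensional toy density $\mathrm{W}$, the sample images and the hand-picked deformations are ad hoc assumptions; in particular, the deformations are admissible but not minimizers:

```python
import numpy as np

lam, mu, gam, delta, m = 1.0, 1.0, 1e-3, 1.0, 3
h = 1.0 / 512
x = np.arange(0.0, 1.0 + h, h)

def W(a):
    # toy 1D density compatible with (W3): W(1) = 0, W'(1) = 0, W''(1) = lam + 2 mu
    return (0.5 * lam + mu) * (a - 1.0) ** 2

def R(I, J, phi):
    # pairwise energy R(I, J, phi) on Omega = (0,1) with images as grid functions
    Dphi = np.gradient(phi, h)
    Dm_phi = phi
    for _ in range(m):
        Dm_phi = np.gradient(Dm_phi, h)
    mismatch = np.sum((I - np.interp(phi, x, J)) ** 2) * h  # d_2^2(I, J o phi)
    return np.sum(W(Dphi) + gam * Dm_phi ** 2) * h + mismatch / delta

def J_K(images, phis):
    K = len(images) - 1
    return K * sum(R(images[k], images[k + 1], phis[k]) for k in range(K))

bump = lambda c: np.exp(-200.0 * (x - c) ** 2)
K = 4
# a bump transported from 0.3 to 0.5 in K steps
images = [bump(0.3 + 0.2 * k / K) for k in range(K + 1)]
# admissible deformations: identity plus a small displacement fixing the boundary
phis = [x + (0.25 / K) * np.sin(np.pi * x) for _ in range(K)]
print(f"J_K = {J_K(images, phis):.4f}")
```

For the constant image path with identity deformations the energy vanishes, consistent with $\boldsymbol{R}(I,I,\mathrm{Id})=0$.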
\begin{figure}[tb] \includegraphics[width=0.49\linewidth]{Images/Result_5/Morph_1.jpg} \hfill \includegraphics[width=0.49\linewidth]{Images/Result_5/Morph_5.jpg} \caption{Two slices of DTI images from the CAMINO dataset \cite{Camino}, where the diffusion tensors are visualized as ellipsoids color-coded with respect to the geometric anisotropy. } \label{fig:inputImages} \end{figure} \begin{figure}[tb] \resizebox{\linewidth}{!}{ \begin{tikzpicture} \begin{scope}[shift={(0,0)}] \edef\currentCnt{0} \foreach \j in {1,...,5}{ \node[anchor=south west] at (\currentCnt,0) {\includegraphics[width=0.19\linewidth]{Images/Result_5/Morph_\j.jpg}}; \FPeval{\jj}{clip(\j-1)} \node[anchor=south west] at (\currentCnt+.1,1.4) {$I_{4,\jj}$}; \pgfmathparse{\currentCnt+3.1} \xdef\currentCnt{\pgfmathresult} } \end{scope} \begin{scope}[shift={(0,-2.2)}] \edef\currentCnt{0} \foreach \j in {1,...,5}{ \node[anchor=south west] at (\currentCnt,0) {\includegraphics[width=0.19\linewidth]{Images/Result_9/Morph16_\j.jpg}}; \FPeval{\jj}{clip(\j-1)} \node[anchor=south west] at (\currentCnt+.1,1.4) {$I_{8,\jj}$}; \pgfmathparse{\currentCnt+3.1} \xdef\currentCnt{\pgfmathresult} } \end{scope} \begin{scope}[shift={(0,-4.2)}] \edef\currentCnt{0} \foreach \j in {6,...,9}{ \node[anchor=south west] at (\currentCnt,0) {\includegraphics[width=0.19\linewidth]{Images/Result_9/Morph16_\j.jpg}}; \FPeval{\jj}{clip(\j-1)} \node[anchor=south west] at (\currentCnt+.1,1.4) {$I_{8,\jj}$}; \pgfmathparse{\currentCnt+3.1} \xdef\currentCnt{\pgfmathresult} } \end{scope} \begin{scope}[shift={(0,-6.4)}] \edef\currentCnt{0} \foreach \j in {2,...,6}{ \node[anchor=south west] at (\currentCnt,0) {\includegraphics[width=0.19\linewidth]{Images/Result_17/Morph16_\j.jpg}}; \FPeval{\jj}{clip(\j-1)} \node[anchor=south west] at (\currentCnt+.1,1.4) {$I_{16,\jj}$}; \pgfmathparse{\currentCnt+3.1} \xdef\currentCnt{\pgfmathresult} } \end{scope} \begin{scope}[shift={(0,-8.4)}] \edef\currentCnt{0} \foreach \j in {7,...,11}{ \node[anchor=south 
west] at (\currentCnt,0) {\includegraphics[width=0.19\linewidth]{Images/Result_17/Morph16_\j.jpg}}; \FPeval{\jj}{clip(\j-1)} \node[anchor=south west] at (\currentCnt+.1,1.4) {$I_{16,\jj}$}; \pgfmathparse{\currentCnt+3.1} \xdef\currentCnt{\pgfmathresult} } \end{scope} \begin{scope}[shift={(0,-10.4)}] \edef\currentCnt{0} \foreach \j in {12,...,16}{ \node[anchor=south west] at (\currentCnt,0) {\includegraphics[width=0.19\linewidth]{Images/Result_17/Morph16_\j.jpg}}; \FPeval{\jj}{clip(\j-1)} \node[anchor=south west] at (\currentCnt+.1,1.4) {$I_{16,\jj}$}; \pgfmathparse{\currentCnt+3.1} \xdef\currentCnt{\pgfmathresult} } \end{scope} \draw [very thick] (0.1,0) -- (15.5,0); \draw [very thick] (0.1,-4.2) -- (15.5,-4.2); \end{tikzpicture} } \caption{Time discrete geodesic paths for $K=4,8,16$ (the input images for $K=16$ are not depicted).} \label{fig:input} \end{figure} \Cref{fig:input} shows different discrete geodesic paths for $K=4,\,8,\,16$ connecting two slices of DTI images given in \Cref{fig:inputImages}. In particular, one experimentally observes an indication of convergence for increasing $K$. \section{Manifold-valued metamorphosis model}\label{sec:manifoldMetamorphosis} In this section, we propose a (time continuous) metamorphosis energy functional~$\boldsymbol{\mathcal{J}}$ for manifold-valued images in~$L^2(\Omega,\mathcal{H})$, where $\mathcal{H}$ is a finite dimensional Hadamard manifold. This energy is identified in \cref{sec:Mosco} as the Mosco--type limit of the above time discrete path energy. A straightforward generalization of the weak notion of the material derivative $z$ is unfeasible. Hence, this is replaced by an inequality relating distances between images along the motion path and the associated scalar material derivative $z$. 
We prove the equivalence of this novel energy functional with the classical metamorphosis model for $\mathbb{R}^C$-valued images, where the scalar material derivative coincides with the norm of the classical material derivative. The \emph{manifold-valued metamorphosis energy functional~$\boldsymbol{\mathcal{J}}\colon L^2((0,1) \times\Omega,\mathcal{H})\to[0,\infty]$} is defined as follows \begin{equation}\label{eq:c_path} \boldsymbol{\mathcal{J}} (I) \coloneqq \inf_{(v,z) \in \mathcal{C}(I)} \int_0^{1}\int_\Omega L[v,v]+{\frac{1}{\delta}} z^2\,\mathrm{d} x\,\mathrm{d} t\,. \end{equation} Here, $\mathcal{C}(I)$ is the set of pairs $(v,z) \in L^2((0, 1), \mathcal V) \times L^2((0,1), L^2(\Omega))$ such that the flow $Y$ defined by \begin{equation} \label{eq:ODESys1} \begin{array}{rll} \displaystyle{\frac{\,\mathrm{d}}{\,\mathrm{d} t}}Y(t,x)&=v(t,Y(t,x))& \text{for } (t,x) \in [0,1]\times\Omega\,,\\[.5em] Y(0,x)&= x & \text{for } x\in \Omega \end{array} \end{equation} satisfies for all $t<s \in [0,1]$ the inequality \begin{equation}\label{eq:ODESys2} d\big(I(t,Y(t,\cdot)),I(s,Y(s,\cdot))\big) \leq\displaystyle{\int_t^s}z(r,Y(r,\cdot))\,\mathrm{d} r\,. \end{equation} For a given image curve~$t\mapsto I(t,Y(t,\cdot))$ the associated tangential vectors at different times are in general contained in different tangent spaces. In particular, the norm of the material derivative $z$ at time $t$ depends on the image path. The definition of the material derivative via the variational inequality~\eqref{eq:ODESys2} avoids this technical difficulty. Let us now verify the equivalence of both versions of the metamorphosis model for $\mathbb{R}^C$-valued images.
In the classical model, the ($C$-dimensional) material derivative $\tilde z$ is defined via the equation \begin{equation}\label{eq:tildez} I(s,Y(s,\cdot))-I(t,Y(t,\cdot)) = \displaystyle\int_t^s \tilde z(r,Y(r,\cdot))\,\mathrm{d} r \end{equation} for all $t<s \in [0,1]$, whereas the scalar material derivative $z$ obeys the inequality \begin{equation}\label{eq:z} \vert I(t,Y(t,\cdot))-I(s,Y(s,\cdot))\vert\leq\displaystyle\int_t^s z(r,Y(r,\cdot))\,\mathrm{d} r\,. \end{equation} In fact, the equivalence is already implied by the following proposition. \begin{proposition} For every $z$ fulfilling \eqref{eq:z} there exists a $\tilde z$ fulfilling \eqref{eq:tildez} with $z \geq |\tilde z|$. Vice versa, for every $\tilde z$ fulfilling \eqref{eq:tildez} there exists a $z$ fulfilling \eqref{eq:z} with $z = |\tilde z|\,$. \end{proposition} \begin{proof} For given $\tilde z$ the result follows from the triangle inequality by choosing $z = \vert \tilde z \vert$. To prove the converse, let $z$ solve \eqref{eq:z}. Taking the $L^2$-norm on both sides implies \begin{align*} \Vert I(t,Y(t,\cdot))-I(s,Y(s,\cdot))\Vert_{L^2(\Omega)} \leq \int_t^s \Vert z(r,Y(r,\cdot)) \Vert_{L^2(\Omega)}\,\mathrm{d} r\,, \end{align*} i.e.~the function~$t\mapsto I(t,Y(t,x))$ belongs to $AC^2([0,1],L^2(\Omega))$ in the sense of \cite[Definition~1.1.1]{Ambrosio}. Using \cite[Remark~1.1.3]{Ambrosio} one can additionally infer the a.e.~differentiability with derivative $Z\in L^2((0,1),L^2(\Omega))$ such that \[ I(t,Y(t,x))-I(0,Y(0,x))=\int_0^t Z(r,x)\,\mathrm{d} r = \int_0^t \tilde z(r,Y(r,x))\,\mathrm{d} r \] with $\tilde z(r,x) \coloneqq Z(r,X(r,x))$. Here, $X(r,\cdot)$ is the spatial inverse of $Y(r,\cdot)$. Now set \[ B=\left\{ (r,x)\in[0,1]\times\Omega\;:\;z(r,Y(r,x)) < |\tilde z(r,Y(r,x))|\right\} \] and assume that the Lebesgue measure of $B$ is strictly positive. Note that $B$ can be approximated with finite unions of disjoint semi-open cuboids \cite[Theorem 1.4]{SteSha05}.
By taking into account \cite[Theorem~1.1.2/Remark~1.1.3]{Ambrosio} one gets for every such cuboid $[t_1,t_2)\times D\subset[0,1]\times\Omega$ that \[ \int_{t_1}^{t_2} \int_D |\tilde z(t,Y(t,x))|^2 \,\mathrm{d} x\,\mathrm{d} t \leq \int_{t_1}^{t_2} \int_D z(t,Y(t,x))^2 \,\mathrm{d} x \,\mathrm{d} t\,. \] Combining this estimate with the dominated convergence theorem we conclude \[ \int_B |\tilde z(t,Y(t,x))|^2 \,\mathrm{d} x \,\mathrm{d} t \leq \int_B z(t,Y(t,x))^2 \,\mathrm{d} x \,\mathrm{d} t\,. \] This yields a contradiction to the definition of the set~$B$. Hence, $z\geq |\tilde z|$ a.e.~in $t$ and $x$. \end{proof} \section{Temporal extension operators}\label{sec:Extension} In this section, we propose temporal extensions of all relevant quantities required for the convergence proof of the time discrete metamorphosis model, which in particular allow for an explicit solution of the optimality conditions~\eqref{eq:ODESys1} and~\eqref{eq:ODESys2}. We remark that the subsequent construction is similar to \cite{BeEf14} with two major modifications, namely the definitions of the interpolated image sequence~\eqref{eq:DefUk} and the weak material derivative~\eqref{eq:materialDerivativeDef}, which are related to the manifold structure. For fixed $K \in \mathbb{N}$, let a discrete image path $\boldsymbol{I}_K =(I_{K,0}, \dots, I_{K,K}) \in L^2(\Omega, \mathcal{H})^{K+1}$ be given. The existence of the corresponding optimal deformations $\boldsymbol \varphi_K = ({\varphi}_{K,1}, \dots, {\varphi}_{K,K}) \in \mathcal{A}_\epsilon^K$ satisfying \eqref{eq:d_path} is proven in \cite[Section~3]{NPS17}. We refer to $\tau = K^{-1}$ as the \emph{time step size}, and the image~$I_{K,k}$ is associated with the \emph{time step $t_{K,k} = k\tau$}, $k=0,\dots,K$. For $k=1,\dots,K$, we define the \emph{discrete transport map $y_{K,k} \colon [t_{K,k-1}, t_{K,k}] \times \overline{\Omega} \to \overline{\Omega}$} as \begin{equation} y_{K,k}(t,x) \coloneqq x + (t - t_{K,k-1})K(\varphi_{K,k}(x) - x)\,.
\end{equation} If \[ \max_{k=1,\dots K}\Vert \varphi_{K,k} - \mathrm{Id} \Vert_{C^{1,\alpha}(\overline{\Omega})} < 1\,, \] we can use \cite[Theorem 5.5-1/Theorem 5.5-2]{Cia88} to infer that $\det(Dy_{K,k}(t,\cdot))>0$ holds and that $y_{K,k}(t,\cdot)$ is invertible with inverse $x_{K,k}(t,\cdot)$. The validity of this assumption is proven below and is tacitly assumed for all further considerations. Next, we define the \emph{extension operator $I^{int}_K \colon L^2(\Omega, \mathcal{H})^{K+1} \times \mathcal{A}_\epsilon^K \to L^2((0,1),L^2(\Omega,\mathcal{H}))$}, which is given for $t\in[t_{K,k-1}, t_{K,k})$ and a.e.~$x\in\Omega$ by \begin{equation}\label{eq:DefUk} I^{int}_K(\boldsymbol{I}_K,\boldsymbol{\varphi}_K)(t,x) \coloneqq \gamma_{{I_{K,k-1},I_{K,k}\circ \varphi_{K,k}}}\big(K(t-t_{K,k-1})\big)\big(x_{K,k}(t,x)\big)\,. \end{equation} Thus, $I^{int}_K$ uniquely describes for given $\boldsymbol{I}_K$ and $\boldsymbol{\varphi}_K$ a blending on the manifold along the transport path governed by $y_{K,k}$. In what follows, we set $w_{K,k}=K(\varphi_{K,k}-\mathrm{Id})$ and define the \emph{piecewise constant (in time) velocity} $w_K=w_K(\boldsymbol \varphi_K) \in L^2((0,1),\mathcal V)$ as \[ w_K(\boldsymbol \varphi_K)\big\vert _{[t_{K,k-1}, t_{K,k})} \coloneqq w_{K,k}\,. \] Furthermore, we define the \emph{discrete velocity field} $v_K\colon \mathcal V^K \to L^2((0,1), C^{1,\alpha}(\overline{\Omega}))$, \[ v_K(\boldsymbol \varphi_K)(t,x) \coloneqq K(\varphi_{K,k} - \mathrm{Id} )(x_{K,k}(t,x)) \] for $t\in[t_{K,k-1}, t_{K,k})$ and a.e.~$x\in\Omega$, which is constant along time discrete paths. Note that the extension operator $v_K$ merely admits a $C^{1,\alpha}$-regularity. 
To see this, we note that the concatenation of two H\"older continuous functions $f\in C^{1,\alpha}(\overline{\Omega})$ and $g \in C^{1,\alpha}(\overline{\Omega},\overline{\Omega})$ is again H\"older continuous~\cite[Lemma~2.2]{BHS2005} and the estimate \[ \Vert f \circ g \Vert_{C^{1,\alpha}(\overline{\Omega})} \leq C\Vert f \Vert_{C^{1,\alpha}(\overline{\Omega})}\left(1+\Vert g \Vert_{C^{1,\alpha}(\overline{\Omega})}\right)^2 \] easily follows. Taking into account \cite[Theorem 2.1]{BHS2005} we infer that $x_{K,k}(t,\cdot) \in C^{1,\alpha}(\overline{\Omega})$ and \[ D(x_{K,k}(t,\cdot)) = K^{-1}\mathrm{Inv} \left( K^{-1}\mathds{1} + (t - t_{K,k-1})(D\varphi_{K,k} - \mathds{1}) (x_{K,k}(t,\cdot))\right)\,, \] where $\mathrm{Inv}\colon GL(n) \to GL(n)$ denotes the smooth inversion operator. Since $\Omega$ is bounded and $x_{K,k}(t,\cdot)$ is a diffeomorphism, we get \begin{equation} \Vert x_{K,k}(t,\cdot) \Vert_{C^{1,\alpha}(\overline{\Omega})} \leq C\left(1 + K^{-1}\max_{k=1,\dots K}\Vert \varphi_{K,k} - \mathrm{Id} \Vert_{C^{1,\alpha}(\overline{\Omega})}\right)\,. \label{eq:BoundInverse} \end{equation} This implies that $v_K(t,\cdot) \in C^{1,\alpha}(\overline{\Omega})$ and \begin{equation} \Vert v_K (t,\cdot) \Vert_{C^{1,\alpha}(\overline{\Omega})} \leq C \Vert w_{K,k}(t,\cdot) \Vert_{C^{1,\alpha}(\overline{\Omega})}\left(1 + K^{-1}\Vert w_{K,k}(t,\cdot) \Vert_{C^{1,\alpha}(\overline{\Omega})}\right)^2\,. \label{eq:velocityEstimate} \end{equation} As a last preparatory step, we define the \emph{discrete path $Y_K \colon [0,1] \times \overline{\Omega} \to\overline{\Omega}$}, which is the concatenation of all small diffeomorphisms $y_{K,k}$ along the motion path. In detail, the mapping is defined for $t \in [0, t_{K,1}]$ by $Y_K(t,x) \coloneqq y_{K,1}(t,x)$ and then recursively for $k=2,\dots,K$ and $t \in (t_{K,k-1}, t_{K,k}]$ by \[ Y_K(t,x) \coloneqq y_{K,k}\left(t,Y_K(t_{K,k-1},x)\right) \] for all $x \in \Omega$. The spatial inverse of $Y_K$ is denoted by $X_K$. 
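The recursive construction of the discrete path $Y_K$ from the interval maps $y_{K,k}$ is easy to mirror numerically. The following Python sketch (a toy one-dimensional illustration with placeholder deformations; none of these names appear in the formal development) checks the two defining properties, namely $Y_K(0,\cdot)=\mathrm{Id}$ and $Y_K(t_{K,k},\cdot)=\varphi_{K,k}\circ\dots\circ\varphi_{K,1}$.

```python
import numpy as np

def y_k(t, x, phi_k, t_prev, K):
    # y_{K,k}(t, x) = x + (t - t_{K,k-1}) * K * (phi_k(x) - x): linear in t,
    # equal to the identity at t_{K,k-1} and to phi_k at t_{K,k}.
    return x + (t - t_prev) * K * (phi_k(x) - x)

def Y(t, x, phis, K):
    # Concatenated discrete path: apply the fully traversed maps
    # phi_1, ..., phi_{k-1}, then the partial map y_{K,k} on the
    # current interval (t_{K,k-1}, t_{K,k}].
    tau = 1.0 / K
    k = min(int(np.ceil(t / tau)), K) if t > 0 else 1
    for j in range(k - 1):
        x = phis[j](x)
    return y_k(t, x, phis[k - 1], (k - 1) * tau, K)

K = 4
# Placeholder "deformations" of [0, 1]: small smooth perturbations of the identity.
phis = [lambda x, a=0.05 * (j + 1): x + a * np.sin(np.pi * x) for j in range(K)]

x = np.linspace(0.0, 1.0, 11)
assert np.allclose(Y(0.0, x, phis, K), x)        # Y_K(0, x) = x
z = x.copy()
for phi in phis:
    z = phi(z)
assert np.allclose(Y(1.0, x, phis, K), z)        # Y_K(1, .) = phi_K o ... o phi_1
```

At intermediate times $Y_K$ interpolates linearly between consecutive compositions, exactly as in the definition of $y_{K,k}$.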
Finally, we define the \emph{material derivative} $z_K \in L^2((0,1), L^2(\Omega))$ for $t \in [t_{K,k-1}, t_{K,k})$ as \begin{equation} z_K(t,x) \coloneqq K d\big(I_{K,k-1}(x_{K,k}(t,x)),I_{K,k}\circ \varphi_{K,k} (x_{K,k}(t,x))\big)\,. \label{eq:materialDerivativeDef} \end{equation} In the following proposition it is shown that the temporal extensions of the images, the velocities, the material derivatives and the discrete paths indeed constitute a time continuous solution to \eqref{eq:ODESys1} and \eqref{eq:ODESys2}. \begin{proposition}[Admissible extension]\label{prop:admExtension} For $\boldsymbol {I}_K\in L^2(\Omega, \mathcal{H})^{K+1}$ and corresponding optimal deformations $\boldsymbol{\varphi}_K \in \mathcal{A}_\epsilon^K$, the tuple $(I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K),v_K(\boldsymbol{\varphi}_K), Y_K, z_K)$ is a solution to \eqref{eq:ODESys1} and \eqref{eq:ODESys2}. \end{proposition} \begin{proof} By definition it holds that $Y_K(0,x) = x$ for all $x \in \Omega$. For $t \in (t_{K,k-1}, t_{K,k})$ and $x \in \Omega$ we get \[ \frac{\,\mathrm{d}}{\,\mathrm{d} t} Y_K(t,x) = \frac{\,\mathrm{d}}{\,\mathrm{d} t} y_{K,k}(t,Y_K(t_{K,k-1},x)) = K(\varphi_{K,k} - \mathrm{Id} )(Y_K(t_{K,k-1},x))=v_K(\boldsymbol{\varphi}_K)(t,Y_K(t,x))\,. \] Therefore, $Y_K$ is a solution of \eqref{eq:ODESys1} in a weak sense according to \cref{rem:DiffeoVeloRem}. A short computation shows for $s\leq t\in [t_{K,k-1}, t_{K,k}]$ that \begin{align*} &d\big(I^{int}_K(\boldsymbol{I}_K,\boldsymbol{\varphi}_K)(t,Y_K(t,x)), I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K)(s,Y_K(s,x))\big)\\ =& d\Bigl(\gamma_{{I_{K,k-1},I_{K,k}\circ \varphi_{K,k}}}(K(t-t_{K,k-1}))(Y_K(t_{K,k-1},x)),\\ &\quad \quad \quad \gamma_{{I_{K,k-1},I_{K,k}\circ \varphi_{K,k}}}(K(s-t_{K,k-1}))(Y_K(t_{K,k-1},x))\Bigr)\\ =& K (t-s)d\Bigl(I_{K,k-1}(Y_K(t_{K,k-1},x)),I_{K,k}\circ \varphi_{K,k}(Y_K(t_{K,k-1},x))\Bigr)\\ \leq & \int_s^t z_K(r,Y_K(r,x)) \,\mathrm{d} r\,.
\end{align*} If $s$ and $t$ are not in the same time interval, we can use the triangle inequality multiple times. This concludes the proof. \end{proof} The next lemma allows us to bound the $H^m(\Omega)$-norm of the displacements by a function solely depending on the energy~$\boldsymbol{R}$. \begin{lemma}\label{lemm:growthControl} Under the assumptions~\ref{W1} and \ref{W2} there exists a continuous and monotonically increasing function $\theta:\mathbb{R}^+_0\rightarrow\mathbb{R}^+_0$ with $\theta(0)=0$ such that \[ \Vert\varphi-\mathrm{Id}\Vert_{H^m(\Omega)}\leq\theta\left(\boldsymbol{R}(I,\tilde I,\varphi)\right) \] for all $I,\tilde I\in L^2(\Omega, \mathcal{H})$ and all $\varphi\in\mathcal{A}_\epsilon$. Furthermore, $\theta(x)\leq C(x+x^2)^\frac{1}{2}$ for a constant $C>0$. \end{lemma} \begin{proof} Set $\overline{\boldsymbol{R}}=\boldsymbol{R}(I,\tilde I,\varphi)$. The Gagliardo--Nirenberg interpolation inequality \cite{Nir1966} implies \begin{equation} \Vert\varphi-\mathrm{Id}\Vert_{H^m(\Omega)}\leq C\left(\Vert\varphi-\mathrm{Id}\Vert_{L^2(\Omega)}+\vert \varphi-\mathrm{Id}\vert _{H^m(\Omega)}\right)\,. \label{eq:mGrowthDisplacement} \end{equation} The $H^m(\Omega)$-seminorm of the displacement can be controlled as follows \begin{equation} \vert \varphi-\mathrm{Id}\vert _{H^m(\Omega)}=\vert \varphi\vert _{H^m(\Omega)}\leq\sqrt{\tfrac{\overline{\boldsymbol{R}}}{\gamma}}\,, \label{eq:displacementHigherOrderControl} \end{equation} which is implied by the definition of $\overline{\boldsymbol{R}}$. Since $\varphi\in H^m(\Omega,\Omega)$, this already shows for $\alpha\in(0,m-1-\frac{n}{2})$ that \begin{equation} \Vert\varphi-\mathrm{Id}\Vert_{C^{1,\alpha}(\overline\Omega)}\leq C\Vert\varphi-\mathrm{Id}\Vert_{H^m(\Omega)}\leq C+C\sqrt{\overline{\boldsymbol{R}}}\,.
\label{eq:displacementControl} \end{equation} To control the lower order term appearing on the right-hand side of~\eqref{eq:mGrowthDisplacement}, we first define the set $\Omega'=\{x\in\Omega:\vert D\varphi(x)-\mathrm{Id}\vert <r_\mathrm{W}\}$. Then, by using \eqref{eq:energy3} and \eqref{eq:energy4} we obtain \[ \vert \Omega\backslash\Omega'\vert C_{\mathrm{W},2}\leq\int_\Omega\mathrm{W}(D\varphi)\,\mathrm{d} x\leq\overline{\boldsymbol{R}}\,, \] which implies $\vert \Omega\backslash\Omega'\vert \leq\frac{\overline{\boldsymbol{R}}}{C_{\mathrm{W},2}}$. Hence, by taking into account the embedding $H^m(\Omega)\hookrightarrow C^1(\overline\Omega)$, Korn's inequality as well as \eqref{eq:displacementControl} we deduce \begin{align} \int_\Omega\vert (D\varphi)^\mathrm{sym}-\mathds{1}\vert ^2\,\mathrm{d} x &= \int_{\Omega'}\vert (D\varphi)^\mathrm{sym}-\mathds{1}\vert ^2\,\mathrm{d} x+\int_{\Omega\backslash\Omega'}\vert (D\varphi)^\mathrm{sym}-\mathds{1}\vert ^2\,\mathrm{d} x\notag\\ &\leq \int_\Omega\frac{\mathrm{W}(D\varphi)}{C_{\mathrm{W},1}}\,\mathrm{d} x+\vert \Omega\backslash\Omega'\vert \left(C+C\sqrt{\overline{\boldsymbol{R}}}\right)^2\notag\\ &\leq\frac{\overline{\boldsymbol{R}}}{C_{\mathrm{W},1}}+\frac{\overline{\boldsymbol{R}}}{C_{\mathrm{W},2}}\left(C+C\overline{\boldsymbol{R}}\right)\,. \label{eq:convergenceLowerLTwo} \end{align} Thus, the lemma follows by combining \eqref{eq:mGrowthDisplacement}, \eqref{eq:displacementHigherOrderControl} and \eqref{eq:convergenceLowerLTwo}. \end{proof} \section{Convergence of time discrete geodesic paths}\label{sec:Mosco} In this section, we prove the convergence of time discrete geodesic paths to a time continuous minimizer of \eqref{eq:c_path}. 
Indeed, we prove the full \ref{eq:Mosco1} of the definition of Mosco--convergence, but only construct recovery sequences in the context of the \ref{eq:Mosco2} for specific paths in the space of Hadamard--valued images \eqref{eq:admPath}, which suffices to establish the convergence of time discrete minimizers. We give a comprehensive proof of the convergence result: although the general procedure follows the Mosco--convergence proof in the Euclidean setting \cite{BeEf14}, there are major differences due to the manifold setting, which are highlighted throughout the proof. In what follows, we pass to subsequences several times; to increase readability, we frequently refrain from relabeling subsequences when this is clear from the context. As a first step, we extend the functional~$\boldsymbol{J}_K \colon L^2(\Omega, \mathcal{H})^{K+1} \to [0, \infty]$ to an operator $\boldsymbol{\mathcal{J}}_K \colon L^2([0,1],L^2(\Omega,\mathcal{H}))\to[0, \infty]$ by \[ \boldsymbol{\mathcal{J}}_K (I) = \begin{cases} \boldsymbol{J}_K(\boldsymbol{I}_K,\boldsymbol{\varphi}_K)\,, & \text{if }I = I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K),\text{ where } \boldsymbol{\varphi}_K\text{ is a minimizer of \eqref{eq:d_path}}\,,\\ \infty\,, &\text{else}\,. \end{cases} \] The ingredients for the convergence proof are the \ref{eq:Mosco1} (\Cref{thm:liminf}) and the existence of a recovery sequence for specific paths (\Cref{thm:limsup}). \begin{theorem}[liminf-inequality]\label{thm:liminf} Under the assumptions \ref{W1}, \ref{W2} and \ref{W3} the time discrete path energy $\boldsymbol{\mathcal{J}}_K$ satisfies the \ref{eq:Mosco1} for $\boldsymbol{\mathcal{J}}$ w.r.t.\ the $L^2(\Omega,\mathcal{H})$-topology. \end{theorem} \begin{proof} Let $I_K\in L^2([0,1],L^2(\Omega,\mathcal{H}))$ be a sequence which converges weakly to an image path~$I\in L^2([0,1],L^2(\Omega,\mathcal{H}))$.
If we exclude the trivial case $\liminf_{K \to \infty} \boldsymbol{\mathcal{J}}_K(I_K) = \infty$ and possibly pass to a subsequence (without relabeling), we may assume \[ \boldsymbol{\mathcal{J}}_K(I_K) \leq \overline{\mathcal{J}} < \infty \] for all $K\in \mathbb{N}$. By definition of $\boldsymbol{\mathcal{J}}_K$ this directly implies $I_K = I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K)$ with $\boldsymbol{I}_K =(I_{K,0}, \ldots, I_{K,K}) \in L^2(\Omega, \mathcal{H})^{K+1}$ and $\boldsymbol \varphi_K =(\varphi_{K,1}, \ldots, \varphi_{K,K}) \in \mathcal{A}_\epsilon^K$. In particular, by incorporating \cref{lemm:growthControl} we deduce \begin{equation}\label{eq:growthControl} \max_{k\in\{1,\dots,K\}}\Vert\varphi_{K,k}-\mathrm{Id}\Vert_{C^{1,\alpha}(\overline\Omega)} \leq C\max_{k\in\{1,\dots,K\}}\Vert\varphi_{K,k}-\mathrm{Id}\Vert_{H^m(\Omega)} \leq C\theta(\overline{\mathcal{J}} K^{-1})\leq CK^{-\frac{1}{2}}\,. \end{equation} Note that for $K$ sufficiently large $Y_K$, $X_K$, $v_K$ and $z_K$ exist due to \cref{sec:Extension}.\medskip \paragraph{1. Lower semicontinuity of the weak material derivative} Let us remark that this step resembles the first step of the proof in the Euclidean setting, with the squared $L^2$-norm replaced by the squared distance in the Hadamard manifold. A straightforward computation shows \begin{align} \int_0^1 \int_\Omega z_K^2\,\mathrm{d} x\,\mathrm{d} t = &\sum_{k=1}^K \int_{t_{K,k-1}}^{t_{K,k}} \int_{\Omega} K^2 d\big(I_{K,k-1}(x_{K,k}(t,x)),I_{K,k}\circ \varphi_{K,k} (x_{K,k}(t,x))\big)^2 \,\mathrm{d} x \,\mathrm{d} t\notag\\ = &\sum_{k=1}^K \int_{t_{K,k-1}}^{t_{K,k}} \int_{\Omega} K^2 d\big(I_{K,k-1}(x),I_{K,k}\circ \varphi_{K,k} (x)\big)^2 \det(Dy_{K,k}(t,x)) \,\mathrm{d} x \,\mathrm{d} t\,. \label{eq:sourcerelation} \end{align} Next, we want to bound the difference between $\det(Dy_{K,k})$ and $1$ in the $L^\infty$-norm.
To this end, note that \[ Dy_{K,k}(t,x) = \mathds{1} + K(t-t_{K,k-1})(D\varphi_{K,k}(x) - \mathds{1})\,, \] which together with the local Lipschitz continuity of the determinant implies \[ \Vert \det(Dy_{K,k}(t,x)) - 1 \Vert_{L^\infty([t_{K,k-1},t_{K,k})\times \Omega)} \leq C \Vert \varphi_{K,k} - \mathrm{Id} \Vert_{C^{1,\alpha}(\overline{\Omega})}\,. \] Hence, we can deduce \begin{align*} &\left \vert \sum_{k=1}^K K^2 \int_{t_{K,k-1}}^{t_{K,k}} \int_{\Omega} d\bigl(I_{K,k-1}(x),I_{K,k}\circ \varphi_{K,k}(x)\bigr)^2(\det(Dy_{K,k}(t,x)) -1 )\,\mathrm{d} x \,\mathrm{d} t \right \vert \\ \leq &{\frac{1}{\delta}} \overline{\mathcal{J}} C \max_{k=1,\dots,K}\Vert \varphi_{K,k} - \mathrm{Id} \Vert_{C^{1,\alpha}(\overline{\Omega})} \leq {\frac{1}{\delta}} \overline{\mathcal{J}} C \max_{k=1,\dots,K}\Vert \varphi_{K,k} - \mathrm{Id} \Vert_{H^{m}(\Omega)}\,. \end{align*} Taking into account \eqref{eq:growthControl} this ultimately leads to \[ \lim_{K \to \infty} \int_0^1 \int_\Omega z_K^2\,\mathrm{d} x\,\mathrm{d} t = \lim_{K \to \infty} K\sum_{k=1}^K \int_{\Omega} d\bigl(I_{K,k-1}(x),I_{K,k}\circ \varphi_{K,k}(x)\bigr)^2 \,\mathrm{d} x\,. \] This also shows the uniform boundedness of $z_K \in L^2((0,1), L^2(\Omega))$, which implies the existence of a weakly convergent subsequence with limit $z \in L^2((0,1), L^2(\Omega))$. Hence, using the weak lower semicontinuity of the norm one arrives at \begin{align*} \int_0^1 \int_\Omega z^2\,\mathrm{d} x\,\mathrm{d} t &\leq \liminf_{K \to \infty} \int_0^1\int_\Omega z_K^2\,\mathrm{d} x\,\mathrm{d} t\\ &=\liminf_{K \to \infty} K\sum_{k=1}^K \int_{\Omega} d\bigl(I_{K,k-1}(x),I_{K,k}\circ \varphi_{K,k}(x)\bigr)^2 \,\mathrm{d} x\,. \end{align*} \paragraph{2. Lower semicontinuity of the viscous dissipation} We highlight that this step differs from the corresponding step appearing in \cite{BeEf14} due to the modification of the assumption~\ref{W2}, although the overall structure persists.
Note that the velocity fields~$v_K=v_K(\boldsymbol \varphi_K)$ are not necessarily in $L^2((0,1),\mathcal V)$. However, the sequence $w_K=w_K(\boldsymbol \varphi_K) \in L^2((0,1),\mathcal V)$ is uniformly bounded in $L^2((0,1),\mathcal V)$. To see this, we first assume that $K$ is sufficiently large such that $\max_{k=1,\ldots,K}\|D\varphi_{K,k}-\mathds{1}\|_{C^0(\overline\Omega)}<r_\mathrm{W}$ (see \ref{W2}), which is possible due to \eqref{eq:growthControl}. Then, using Korn's inequality, the Poincar\'e inequality as well as \ref{W2} we obtain \begin{align*} \int_0^1\int_\Omega|w_K|^2\,\mathrm{d} x\,\mathrm{d} t &\leq C\sum_{k=1}^K\int_{t_{K,k-1}}^{t_{K,k}}\int_\Omega K^2|(D\varphi_{K,k})^\mathrm{sym}-\mathds{1}|^2\,\mathrm{d} x\,\mathrm{d} t\\ &\leq CK\sum_{k=1}^K\int_\Omega\frac{\mathrm{W}(D\varphi_{K,k})}{C_{\mathrm{W},1}}\,\mathrm{d} x \leq\frac{C\overline{\mathcal{J}}}{C_{\mathrm{W},1}}\,,\\ \int_0^1\int_\Omega|D^mw_K|^2\,\mathrm{d} x\,\mathrm{d} t &=\sum_{k=1}^K\int_{t_{K,k-1}}^{t_{K,k}}\int_\Omega K^2|D^m(\varphi_{K,k}-\mathrm{Id})|^2\,\mathrm{d} x\,\mathrm{d} t\\ &=\sum_{k=1}^K K\int_\Omega |D^m\varphi_{K,k}|^2\,\mathrm{d} x \leq\frac{\overline{\mathcal{J}}}{\gamma}\,. \end{align*} The Gagliardo--Nirenberg inequality implies the uniform boundedness of the sequence~$w_K$ in $L^2((0,1),\mathcal V)$. By passing to a subsequence (again labeled in the same way) we can deduce $w_K\rightharpoonup v\in L^2((0,1),\mathcal{V})$ for $K\rightarrow\infty$. It remains to verify that \begin{equation}\label{eq:liminfL} \int_0^1 \int_\Omega L[v,v] \,\mathrm{d} x\,\mathrm{d} t \leq \liminf_{K \to \infty} K \sum_{k=1}^K \int_{\Omega} \mathrm{W}(D \varphi_{K,k}) + \gamma \vert D^m \varphi_{K,k}\vert^2 \,\mathrm{d} x\,.
\end{equation} The second order Taylor expansion around $t_{K,k-1}$ of the function $t\mapsto \mathrm{W}(\mathds{1}+(t-t_{K,k-1})Dw_{K,k})$ evaluated at $t=t_{K,k}$ yields \begin{align} \mathrm{W}(D\varphi_{K,k})=&\mathrm{W}(\mathds{1})+ K^{-1}D\mathrm{W}(\mathds{1})(Dw_{K,k}) +\frac{1}{2K^2}D^{2}\mathrm{W}(\mathds{1})(Dw_{K,k},Dw_{K,k})+r_{K,k}\notag\\ =&K^{-2}\left(\frac{\lambda}{2}\left(\tr(\varepsilon[w_{K,k}])\right)^2+\mu\tr(\varepsilon[w_{K,k}]^2)\right)+r_{K,k}\,. \label{eq:TaylorEnergy} \end{align} Here, the lower order terms vanish due to \eqref{eq:energy1} and the last equality follows from \eqref{eq:energy2}. The remainder satisfies $\vert r_{K,k}\vert\leq CK^{-3}\vert Dw_{K,k}\vert ^3$ if $K$ is large enough due to \cref{eq:growthControl}. Then, \begin{align*} &K\sum_{k=1}^K\int_\Omega\mathrm{W}(D\varphi_{K,k})+\gamma \vert D^m \varphi_{K,k}\vert ^2\,\mathrm{d} x \\ =& K^{-1}\sum_{k=1}^K\int_\Omega\frac{\lambda}{2}(\tr(\varepsilon[w_{K,k}]))^2+\mu\tr(\varepsilon[w_{K,k}]^2)+\gamma\vert D^mw_{K,k}\vert ^2\,\mathrm{d} x + K\sum_{k=1}^K\int_\Omega r_{K,k}\,\mathrm{d} x\,, \end{align*} and the remainder is of order $K^{-\frac{1}{2}}$. To see this, we apply \eqref{eq:growthControl}, \cref{lemm:growthControl} and the uniform bound on the energy to deduce \begin{align*} &\quad K\sum_{k=1}^K\int_\Omega\vert r_{K,k}\vert\,\mathrm{d} x\leq CK\sum_{k=1}^K\int_\Omega K^{-3}\vert Dw_{K,k}\vert ^3\,\mathrm{d} x\\ &\leq CK\max_{k=1,\dots,K}\Vert \varphi_{K,k}-\mathrm{Id}\Vert _{C^1(\overline\Omega)} \sum_{k=1}^K\Vert \varphi_{K,k}-\mathrm{Id}\Vert _{H^m(\Omega)}^2\\ &\leq CK\theta(\overline{\mathcal{J}} K^{-1})\sum_{k=1}^K\theta\bigl(\boldsymbol{R}(I_{K,k-1},I_{K,k},\varphi_{K,k})\bigr)^2 \leq CK^\frac{1}{2}\sum_{k=1}^K\boldsymbol{R}(I_{K,k-1},I_{K,k},\varphi_{K,k}) \leq C\overline{\mathcal{J}} K^{-\frac{1}{2}}\,. 
\end{align*} Finally, a standard weak lower semicontinuity argument \cite{Da08} shows \begin{align*} &\quad\liminf_{K\to \infty}K\sum_{k=1}^K\int_\Omega\mathrm{W}(D\varphi_{K,k})+\gamma\vert D^m\varphi_{K,k}\vert ^2\,\mathrm{d} x\\ &=\liminf_{K\to \infty} \int_0^1 \int_\Omega \frac{\lambda}{2}(\tr \varepsilon[w_K])^2+\mu\tr(\varepsilon[w_K]^2)+\gamma\vert D^m w_K\vert ^2 \,\mathrm{d} x \,\mathrm{d} t\\ &\geq\int_0^1 \int_\Omega \frac{\lambda}{2}(\tr \varepsilon[v])^2+\mu\tr(\varepsilon[v]^2)+\gamma\vert D^m v\vert ^2 \,\mathrm{d} x \,\mathrm{d} t\,, \end{align*} which implies the weak lower semicontinuity of the path energy along the sequence~$\{I_K\}_{K\in\mathbb{N}}$.\medskip \paragraph{3. Verification of the admissibility of the limit} Finally, it remains to verify that $(I,v,Y,z)$ for a suitable $Y$ is a solution of \eqref{eq:ODESys1} and \eqref{eq:ODESys2}. We have already pointed out that the manifold-valued metamorphosis energy functional necessitates a variational inequality, which results in significant modifications of this step compared to \cite{BeEf14}. Let $\tilde Y$ denote the solution of \begin{align*} \frac{\,\mathrm{d}}{\,\mathrm{d} t}\tilde Y(t,x)& =v(t,\tilde Y(t,x))&&\text{for }(t,x) \in [0,1]\times\Omega\,,\\[.5em] \tilde Y(0,x)&= x && \text{for } x\in \Omega\,, \end{align*} which exists due to \cref{thm:DiffeoVelo}. Furthermore, \eqref{eq:velocityEstimate} and the uniform boundedness of $w_K \in L^2((0,1),\mathcal V)$ imply that the sequence $v_K$ is uniformly bounded in $L^2((0,1),C^{1,\alpha}(\overline{\Omega}))$. Then, incorporating \cref{rem:DiffeoVeloRem} one can infer that $Y_K$ is uniformly bounded in $C^{0}([0,1],C^{1,\alpha}(\overline{\Omega}))$, and by exploiting H\"older's inequality we can even show that the sequence is uniformly bounded in $C^{0,\frac{1}{2}}([0,1],C^{1,\alpha}(\overline{\Omega}))$. 
Hence, by using the compact embedding of H\"older spaces, the sequence $Y_K$ converges strongly to some $Y$ in $C^{0,\beta}([0,1],C^{1,\beta}(\overline{\Omega}))$ for some $\beta \in (0,\min(\frac{1}{2},\alpha))$. It is left to verify that $\tilde{Y} = Y$. To this end, we denote the solutions associated with $w_K$ by $\tilde{Y}_K$. Then, \[ \Vert Y - \tilde Y \Vert_{C^0([0,1]\times \overline{\Omega})} \leq \Vert Y - Y_K \Vert_{C^0([0,1]\times \overline{\Omega})} + \Vert Y_K - \tilde{Y}_K \Vert_{C^0([0,1]\times \overline{\Omega})} + \Vert \tilde{Y}_K - \tilde Y \Vert_{C^0([0,1]\times \overline{\Omega})}\,. \] Here, the first term converges to zero as shown above and the last term converges to zero by \cref{thm:DiffeoVelo}. Then, we can estimate as follows \begin{align} \Vert Y_K-\tilde{Y}_K\Vert_{C^0([0,1]\times \overline{\Omega})} &\leq C\sum_{k=1}^K\int_{t_{K,k-1}}^{t_{K,k}}\Vert w_{K,k}(s,x_{K,k}(s,\cdot))-w_{K,k}(s,\cdot)\Vert_{C^{0}(\overline{\Omega})}\,\mathrm{d} s\label{eq:YK1}\\ &\leq C\sum_{k=1}^K \int_{t_{K,k-1}}^{t_{K,k}}\Vert w_{K,k}(s,\cdot)\Vert_{H^m(\Omega)}\Vert y_{K,k}(s,\cdot)-\mathrm{Id}\Vert_{C^{0}(\overline{\Omega})}\,\mathrm{d} s\notag\\ &\leq C\Vert w_K\Vert_{L^2((0,1),H^m(\Omega))}\max_{k=1,\dots,K}\Vert\varphi_{K,k}-\mathrm{Id} \Vert_{C^{0}(\overline{\Omega})}\,.\notag \end{align} Here, the first inequality can be deduced from \cref{rem:DiffeoVeloRem}. Furthermore, the second inequality follows from the Lipschitz control of $x\mapsto w_{K,k}(s,x_{K,k}(s,x))-w_{K,k}(s,x)$, where the Lipschitz constant is bounded by $C\Vert w_{K,k}(s,\cdot)\Vert_{H^m(\Omega)}$, and the third results from the Cauchy--Schwarz estimate. The uniform control of $w_K$ and \eqref{eq:growthControl} imply $Y = \tilde{Y}$ and by H\"older's inequality $Y \in C^{0,\frac{1}{2}}([0,1],C^{1,\alpha}(\overline{\Omega}))\,$. Finally, $X_K$ is uniformly bounded in $C^{0,\frac{1}{2}}([0,1],C^{1,\alpha}(\overline{\Omega}))$ due to \cref{rem:DiffeoVeloRem}.
Thus, \eqref{eq:ODESys1} is fulfilled. Next, note that \begin{align*} &\int_{\Omega} d\big(I_K(t,Y_K(t,x)), I_K(s,Y_K(s,x))\big)^2 \,\mathrm{d} x \leq \int_{\Omega} \left(\int_t^s z_K (r, Y_K(r,x) ) \,\mathrm{d} r \right)^2 \,\mathrm{d} x\\ \leq& \vert s-t\vert \int_{\Omega} \int_t^s z_K (r, Y_K(r,x) )^2 \,\mathrm{d} r \,\mathrm{d} x\,. \end{align*} By the uniform boundedness of $z_K$ in $L^2((0,1), L^2(\Omega))$ we obtain that $I_K\circ Y_K\in A_{\frac{1}{2},L,\vert \det DY \vert}$ for some appropriate~$L$. In what follows, we prove the weak convergence of a subsequence of $I_K\circ Y_K$ to $I\circ Y\in A_{\frac{1}{2},L,\vert \det DY \vert}$. To this end, we observe \begin{align*} \limsup_{K \to \infty}\mathrm{d}_2(I_K,I)^2 &= \limsup_{K \to \infty} \int_0^1\int_\Omega d(I_K,I)^2 \,\mathrm{d} x\,\mathrm{d} t \\ &=\limsup_{K \to \infty} \int_0^1 \int_{\Omega} d\big(I_K(t,Y_K(t,x)),I(t,Y_K(t,x))\big)^2 \vert \det DY_K \vert \,\mathrm{d} x \,\mathrm{d} t\\ &= \limsup_{K \to \infty} \int_0^1 \int_{\Omega} d\big(I_K(t,Y_K(t,x)),I(t,Y(t,x))\big)^2 \vert \det DY \vert \,\mathrm{d} x \,\mathrm{d} t\,, \end{align*} where we used the transformation formula, the uniform convergence of $DY_K$, the metric triangle inequality and the convergence of $I(t,Y_K(t,x))$ to $I(t,Y(t,x))$ (see \cref{lemm:stet_norm}). To sum up, this proves the weak convergence of $I_K\circ Y_K$ according to \cref{eq:WeakConv} and by \cref{thm:LipClos}, the limit is also contained in $A_{\frac{1}{2},L,\vert \det DY \vert}$. Finally, it remains to verify \eqref{eq:ODESys2}. Assume there exist $s<t\in[0,1]$ such that the set \[ B \coloneqq\left\{x\in\Omega\colon d\big(I(s,Y(s,x)),I(t,Y(t,x))\big)>\int_s^t z(r,Y(r,x)) \,\mathrm{d} r\right\} \] has positive Lebesgue measure.
From the joint convexity of the metric $d(\cdot,\cdot)$ and the distance on $L^2(B,\mathcal{H})$ one observes that the mapping $(I,\tilde I)\mapsto\left(\int_B d\bigl(I(s,x),\tilde I(t,x)\bigr)^2 \,\mathrm{d} x\right)^{\frac{1}{2}}$ is continuous and convex on $A_{\frac{1}{2},L,\vert \det DY \vert}$. Now, this implies weak lower semicontinuity \cite[Lemma 3.2.3]{Bac14} and we obtain \begin{align*} &\int_B d\big(I(s,Y(s,x)),I(t,Y(t,x))\big)\,\mathrm{d} x\leq\liminf_{K \to \infty} \int_B d\big(I_K(s,Y_K(s,x)),I_K(t,Y_K(t,x))\big) \,\mathrm{d} x\\ \leq &\liminf_{K \to \infty} \int_B \int_s^t z_K(r,Y_K(r,x)) \,\mathrm{d} r\,\mathrm{d} x =\int_B \int_s^t z(r,Y(r,x)) \,\mathrm{d} r\,\mathrm{d} x\,, \end{align*} where the last equality follows from the weak convergence of $z_K$ combined with the strong convergence of $Y_K$, which also implies the weak convergence of $z_K \circ Y_K$. This yields a contradiction and concludes the proof of the \ref{eq:Mosco1}. \end{proof} \begin{theorem}[Recovery sequence]\label{thm:limsup} Let $I_A,I_B\in L^2(\Omega,\mathcal{H})$ be fixed input images and $z\in L^2((0,1),L^2(\Omega,\mathbb{R}^+_0))$. Furthermore, let $v \in L^2((0,1), \mathcal V)$ be a velocity field with corresponding global flow~$Y \in H^1([0, 1], H^m(\Omega, \Omega))$ related by~\eqref{eq:contFlow}. Assume that \begin{equation} d\big(I_{A}(x),I_{B}\circ Y(1,x)\big) \leq \int_0^1 z(s,Y(s,x)) \,\mathrm{d} s \label{eq:assumptionZ} \end{equation} holds for a.e.~$x\in\Omega$ and that \ref{W1}, \ref{W2} and \ref{W3} hold true. Assume further that the image path~$I\in L^2([0,1],L^2(\Omega,\mathcal{H}))$ is given by \begin{equation} I(t,Y(t,x)) = \gamma_{{I_{A}(x),I_{B}\circ Y(1,x)}}(\alpha(t,x))\,, \label{eq:admPath} \end{equation} where $\alpha(t,x) \coloneqq \int_0^t z(s,Y(s,x)) \,\mathrm{d} s \, (\int_0^1 z(s,Y(s,x)) \,\mathrm{d} s)^{-1}$, which is set to zero if the integral in the denominator vanishes. Then there exists a recovery sequence such that the \ref{eq:Mosco2} in \Cref{def:MoscoConv} is valid.
\end{theorem} \begin{remark} It is easy to verify that $I(t,Y(t,\cdot)) \in L^2(\Omega,\mathcal{H})$ for a.e.~$t\in[0,1]$, which immediately implies $t\mapsto I(t,Y(t,\cdot))\in L^2([0,1], L^2(\Omega,\mathcal{H}))$ using continuity arguments. The parametrization of the geodesic $\gamma_{{I_{A}(x),I_{B}\circ Y(1,x)}}$ via $\alpha(t,x)$ can be regarded as the generalization of a linear blending of $\mathbb{R}^C$-valued images along motion paths considered in the classical metamorphosis model. \end{remark} \begin{proof} We proceed in several steps.\medskip \paragraph{1. Construction of the recovery sequence} Note that the definitions of the piecewise constant velocity fields and the deformations coincide with the corresponding construction in the proof of \cite{BeEf14}, whereas the constructions of the image sequence based on the weak material derivative and the reconstructed flow field significantly differ. The velocity field~$v$ corresponding to $Y$ is approximated by piecewise constant functions $w_K \in L^2((0,1),\mathcal V)$ given by $w_K(t,x)=w_{K,k}(x)$ for $t \in [t_{K,k-1},t_{K,k})$ and a.e.~$x\in\Omega$, where \[ w_{K,k}(x) = \Xint-_{t_{K,k-1}}^{t_{K,k}} v(s,x) \,\mathrm{d} s\,. \] A standard argument shows that the sequence $w_K$ converges to $v$ in $L^2((0,1),\mathcal V)$ \cite{FonLeo07}. Moreover, we define a sequence of diffeomorphisms $\boldsymbol{\varphi}_K=(\varphi_{K,1},\ldots,\varphi_{K,K})\in H^m(\Omega,\mathbb{R}^n)^K$ by \[ \varphi_{K,k} = \mathrm{Id} + K^{-1} w_{K,k}\,. 
\] A straightforward computation leads to \begin{align} &\max_{k \in \{1,\dots,K\}} \Vert \varphi_{K,k} - \mathrm{Id} \Vert_{C^1(\overline \Omega)} = \max_{k \in \{1,\dots,K\}} K^{-1} \left\Vert\Xint-_{t_{K,k-1}}^{t_{K,k}}v(s,\cdot) \,\mathrm{d} s \right\Vert_{C^1(\overline \Omega)}\notag\\ \leq& \max_{k \in \{1,\dots,K\}} C K^{-1} \Xint-_{t_{K,k-1}}^{t_{K,k}} \Vert v(s,\cdot)\Vert_{H^m(\Omega)} \,\mathrm{d} s \leq CK^{-\frac{1}{2}} \left(\int_0^1 \Vert v(s,\cdot)\Vert_{H^m(\Omega)}^2 \,\mathrm{d} s \right)^\frac{1}{2}\,.\label{eq:growthControl2} \end{align} As before, choosing $K$ sufficiently large ensures $\boldsymbol{\varphi}_K \in \mathcal{A}_\epsilon^K$. Hence, we can use the construction from \cref{sec:Extension} to obtain $v_K$, $Y_K$ and $X_K$. Additionally, the same arguments as in the third part of the proof of \cref{thm:liminf} establish the regularity and the convergence results of the flows associated with $v_K$. Then, to define an approximation of $I$ we use the geodesic connection between $I_A$ and $I_B \circ Y(1,\cdot)$ along the path $Y_K$ with the parametrization $t\mapsto \min(1,\alpha_K(t,x))$ as follows \begin{equation}\label{eq:approxSolution} G_K(t,Y_K(t,x)) = \gamma_{{I_{A}(x),I_{B}\circ Y(1,x)}}\bigl(\min\left(1,\alpha_K(t,x)\right)\bigr)\,, \end{equation} where for $\beta_K = \max(\Vert z(\cdot,Y_K(\cdot,\cdot)) - z(\cdot,Y(\cdot,\cdot)) \Vert_{L^2((0,1)\times \Omega)},K^{-1})$ the function \[ \alpha_K(t,x) = \begin{cases} 0 &\text{if }\int_0^1 z(s,Y(s,x))\,\mathrm{d} s=0,\\ \frac{\int_0^t z(s,Y_K(s,x)) \,\mathrm{d} s}{\sqrt{\beta_K}}&\text{if }0<\int_0^1 z(s,Y(s,x))\,\mathrm{d} s \leq \sqrt{\beta_K},\\[.5em] \frac{\int_0^t z(s,Y_K(s,x)) \,\mathrm{d} s}{\int_0^1 z(s,Y(s,x))\,\mathrm{d} s}&\text{else}, \end{cases} \] lies in $L^2(\Omega,\mathbb{R})$. This specific form of $\alpha_K$ is needed in the last step of the proof to identify the limit of the recovery sequence.
It is easy to see that $\beta_K\to 0$ and $\int_0^t z(s,Y_K(s,x)) \,\mathrm{d} s \to \int_0^t z(s,Y(s,x)) \,\mathrm{d} s$ for $K\to \infty$. Hence, $\alpha_K(t,\cdot) \to \alpha(t,\cdot)$ in $L^2(\Omega)$ for a.e.~$t\in[0,1]$ as $K\rightarrow\infty$, which is proven below. Finally, the recovery sequence is defined as $I_K = I^{int}_K\left(\boldsymbol{I}_K,\boldsymbol{\overline{\varphi}}_K\right)$, where \[ \boldsymbol{I}_K=\left(I_{K,0},I_{K,1},\dots,I_{K,K}\right)=\left(G_K(t_{K,0},\cdot),\dots,G_K(t_{K,K},\cdot)\right) \] and $\boldsymbol{\overline{\varphi}}_K$ refers to a vector of corresponding optimal deformations.\medskip \paragraph{2. Verification of the limsup-inequality} Note that this step is very similar to the corresponding step in \cite{BeEf14} with modifications necessitated by the manifold structure. A straightforward computation reveals \begin{align*} &\mathcal{J}_K(I_K)=\boldsymbol{J}_K(\boldsymbol{I}_K,\overline{\boldsymbol{\varphi}}_K)\leq\boldsymbol{J}_K[\boldsymbol{I}_K,\boldsymbol{\varphi}_K]\\ =&K\sum_{k=1}^K\int_\Omega\mathrm{W}(D\varphi_{K,k})+\gamma\vert D^m\varphi_{K,k}\vert^2+{\frac{1}{\delta}} d\big(I_{K,k-1},I_{K,k}\circ\varphi_{K,k}\big)^2\,\mathrm{d} x\,. \end{align*} The estimate \eqref{eq:growthControl2} for $K$ sufficiently large as well as the uniform boundedness of $Y_K$ in $C^{0,\frac12}([0,1],C^{1,\alpha}(\overline\Omega))$ imply via a Taylor expansion \begin{align} &\quad\max_{k\in\{1,\ldots,K\}}\sup_{t\in[t_{K,k-1},t_{K,k})}\left\Vert 1-\det(Dx_{K,k}(t,\cdot))\right\Vert _{C^0(\overline\Omega)}\notag\\ &\leq\max_{k\in\{1,\ldots,K\}}\sup_{t\in[t_{K,k-1},t_{K,k})} \left\Vert \frac{\det(Dy_{K,k}(t,\cdot))-\det(Dy_{K,k}(t_{K,k-1},\cdot))}{\det(Dy_{K,k}(t,\cdot))}\right\Vert _{C^0(\overline\Omega)} \leq CK^{-\frac{1}{2}}\,. 
\label{eq:detGrowth} \end{align} Taking into account \eqref{eq:approxSolution}, \eqref{eq:distancePropertyGeodesics} and \eqref{eq:assumptionZ}, we deduce for a.e.~$x\in\Omega$ that \begin{align} &d\big(I_{K,k-1}\circ Y_K(t_{K,k-1},x),I_{K,k}\circ Y_K(t_{K,k},x)\big)\notag\\ \leq& d\big(I_{A}(x),I_{B}\circ Y(1,x)\big)\left\vert \alpha_K(t_{K,k},x) - \alpha_K(t_{K,k-1},x)\right\vert\leq \int_{t_{K,k-1}}^{t_{K,k}}z(s,Y_K(s,x))\,\mathrm{d} s\,. \label{eq:zEstimate} \end{align} Hence, for any $k=1,\ldots,K$ we infer from \eqref{eq:detGrowth}, \eqref{eq:zEstimate} and Jensen's inequality that \begin{align} &\int_\Omega d\big(I_{K,k-1},I_{K,k}\circ\varphi_{K,k}\big)^2\,\mathrm{d} x\notag\\ =&\int_\Omega d\big(I_{K,k-1}\circ Y_K(t_{K,k-1},x),I_{K,k}\circ Y_K(t_{K,k},x)\big)^2\det(DY_K(t_{K,k-1},x))\,\mathrm{d} x\notag\\ \leq&\int_\Omega\left(\int_{t_{K,k-1}}^{t_{K,k}}z(s,Y_K(s,x))\,\mathrm{d} s\right)^2\det(DY_K(t_{K,k-1},x))\,\mathrm{d} x\notag\\ \leq&\frac{1}{K}\int_{t_{K,k-1}}^{t_{K,k}}\int_\Omega z^2(s,x)\det(Dx_{K,k}(s,x))\,\mathrm{d} x\,\mathrm{d} s\notag\\ \leq&\frac{1}{K}\left(1+CK^{-\frac{1}{2}}\right)\int_{t_{K,k-1}}^{t_{K,k}}\int_\Omega z^2(s,x)\,\mathrm{d} x\,\mathrm{d} s\,.\label{eq:zBoundIII} \end{align} Furthermore, the same Taylor argument as in \cref{eq:TaylorEnergy} implies \begin{align} &\quad\int_\Omega\mathrm{W}(D\varphi_{K,k})+\gamma\vert D^m\varphi_{K,k}\vert ^2 \,\mathrm{d} x \leq K^{-2}\int_\Omega L[w_{K,k},w_{K,k}]\,\mathrm{d} x + CK^{-3}\int_\Omega \vert Dw_{K,k}\vert ^3\,\mathrm{d} x\,. 
\label{eq:remainderEllipticOperator} \end{align} A direct application of Jensen's inequality shows \begin{align} \int_\Omega L[w_{K,k},w_{K,k}]\,\mathrm{d} x =&\int_\Omega\frac{\lambda}{2}\left(\tr\left(\varepsilon\left[\Xint-_{t_{K,k-1}}^{t_{K,k}}v(s,x)\,\mathrm{d} s\right]\right)\right)^2\notag\\ &+\mu\tr\left(\varepsilon\left[\Xint-_{t_{K,k-1}}^{t_{K,k}}v(s,x)\,\mathrm{d} s\right]^2\right) +\gamma\left\vert D^m\Xint-_{t_{K,k-1}}^{t_{K,k}}v(s,x)\,\mathrm{d} s\right\vert ^2\,\mathrm{d} x\notag\\ \leq& K\int_\Omega\int_{t_{K,k-1}}^{t_{K,k}}L[v,v]\,\mathrm{d} t\,\mathrm{d} x\,. \label{eq:JensenEllipticOperator} \end{align} The derivation of a bound for the remainder of the Taylor expansion in \eqref{eq:remainderEllipticOperator} incorporates a Sobolev embedding theorem, Jensen's inequality and $v\in L^2((0,1),\mathcal{V})$, and is similar to \cref{eq:TaylorEnergy}: \begin{align*} \Vert w_{K,k}\Vert _{C^1(\overline\Omega)}^2&\leq C\sum_{l=1}^K\Vert w_{K,l}\Vert _{H^m(\Omega)}^2 \leq C\sum_{l=1}^K\Xint-_{t_{K,l-1}}^{t_{K,l}}\Vert v(t,\cdot)\Vert _{H^m(\Omega)}^2\,\mathrm{d} t\\ &\leq CK\int_0^1\Vert v(t,\cdot)\Vert _{H^m(\Omega)}^2\,\mathrm{d} t\leq CK\,. \end{align*} Hence, $\max_{k=1,\ldots,K}\Vert w_{K,k}\Vert _{C^1(\overline\Omega)}\leq CK^{\frac12}$, which yields, again in combination with Jensen's inequality, the estimate \begin{align} &\sum_{k=1}^K \int_\Omega \vert Dw_{K,k}\vert^3\,\mathrm{d} x \leq\max_{k=1,\ldots,K}\Vert w_{K,k}\Vert _{C^1(\overline\Omega)}\sum_{l=1}^K\int_\Omega\left\vert \Xint-_{t_{K,l-1}}^{t_{K,l}}Dv(t,x)\,\mathrm{d} t\right\vert ^2\,\mathrm{d} x\notag\\ \leq& C K^{\frac12}K\sum_{l=1}^K\int_\Omega\int_{t_{K,l-1}}^{t_{K,l}}\vert Dv(t,x)\vert ^2\,\mathrm{d} t\,\mathrm{d} x\leq CK^\frac{3}{2}\,.
\label{eq:estimateRemainderGrowth} \end{align} Altogether, by taking into account the estimates \eqref{eq:zBoundIII}, \eqref{eq:remainderEllipticOperator}, \eqref{eq:JensenEllipticOperator} and \eqref{eq:estimateRemainderGrowth} we conclude \begin{align*} \mathcal{J}_K(I_K) &\leq\boldsymbol{J}_K(\boldsymbol{I}_K,\boldsymbol{\varphi}_K)= K\sum_{k=1}^K\int_\Omega\mathrm{W}(D\varphi_{K,k})+\gamma\vert D^m\varphi_{K,k}\vert ^2+{\frac{1}{\delta}} d\big(I_{K,k-1},I_{K,k}\circ\varphi_{K,k}\big)^2\,\mathrm{d} x\\ &\leq\sum_{k=1}^K\left(\int_{t_{K,k-1}}^{t_{K,k}}\int_\Omega L[v,v]+CK^{-1}\vert Dw_{K,k}\vert ^3+ {\frac{1}{\delta}}\left(1+CK^{-\frac{1}{2}}\right)z^2(t,x)\,\mathrm{d} x\,\mathrm{d} t\right)\\ &\leq\int_0^1\int_\Omega L[v,v]+{\frac{1}{\delta}} z^2(t,x)\,\mathrm{d} x\,\mathrm{d} t+CK^{-\frac{1}{2}}+C{\frac{1}{\delta}} K^{-\frac{1}{2}}=\mathcal{J}[I]+\mathcal{O}(K^{-\frac{1}{2}})\,, \end{align*} which readily implies the \ref{eq:Mosco2}.\medskip \paragraph{3. Identification of the recovery sequence limit} It remains to verify the convergence $I_K\to I$ in $L^2([0,1],L^2(\Omega,\mathcal{H}))$ as $K\to\infty$. To this end, we prove that every subsequence has a convergent subsequence with limit~$I$. This step substantially differs from the corresponding step in the proof of \cite{BeEf14}. Note that $Y_K$ has a convergent subsequence in $C^{0,\beta}([0,1],C^{1,\beta}(\overline{\Omega}))$ with limit $Y$, where $\beta$ is chosen as in \cref{thm:liminf}. With these results at hand, we are able to use \cref{lemm:stet_norm} in the Bochner space setting to conclude \[ \Vert z(\cdot,Y_K(\cdot,\cdot)) - z(\cdot,Y(\cdot,\cdot)) \Vert_{L^2((0,1)\times \Omega)}\to 0 \] as $K\to\infty$. 
Setting $\Omega_K = \{x \in \Omega \colon 0 < \int_0^1 z(s,Y(s,x)) \,\mathrm{d} s \leq \sqrt{\beta_K} \}$ we infer for a.e.~$t\in[0,1]$ that \begin{align} &\int_\Omega \left(\alpha_K(t,x) - \alpha(t,x)\right)^2\,\mathrm{d} x\notag\\ \leq & \int_{\Omega_K} \left (\alpha_K(t,x) - \alpha(t,x)\right)^2\,\mathrm{d} x + \frac{\Vert z(\cdot,Y_K(\cdot,\cdot)) - z(\cdot,Y(\cdot,\cdot)) \Vert^2_{L^2((0,1)\times \Omega)}}{\beta_K}\notag\\ \leq & \int_{\Omega_K} 2\left (\frac{\int_0^t z(s,Y(s,x)) \,\mathrm{d} s}{\sqrt{\beta_K}} - \frac{\int_0^t z(s,Y(s,x)) \,\mathrm{d} s}{\int_0^1 z(s,Y(s,x))\,\mathrm{d} s}\right)^2\,\mathrm{d} x + 3\beta_K\notag\\ \leq & 2 \vert\Omega_K\vert + 3\beta_K\rightarrow 0\,.\label{eq:ConvAlpha} \end{align} Here, the second inequality results from an insertion of $\beta_K^{-\frac{1}{2}}\int_0^t z(s,Y(s,x)) \,\mathrm{d} s$ and the triangle inequality. Next, we prove the convergence of a subsequence of $G_K(\cdot,Y_K(\cdot,\cdot))$ to $I(\cdot,Y(\cdot,\cdot))$, which in combination with \cref{cor:stet_norm_2} implies the convergence of this subsequence of $G_K$ to $I$ in $L^2((0,1),L^2(\Omega,\mathcal{H}))$. To this end, using \eqref{eq:distancePropertyGeodesics} we estimate \begin{align} &\mathrm{d}_2\big(G_K(\cdot,Y_K(\cdot,\cdot)), I(\cdot,Y(\cdot,\cdot))\big)^2\notag\\ \leq &\int_0^1 \int_\Omega d\big(I_{A}(x),I_{B} \circ Y(1, x)\big)^2 \left\vert \min\bigl(1,\alpha_K(t,x)\bigr)-\alpha(t,x)\right\vert^2 \,\mathrm{d} x \,\mathrm{d} t \,, \label{eq:ConvGeodesic} \end{align} for which a subsequence converges to zero by \cref{eq:ConvAlpha}, \cref{lemm:convergenceAE} and the dominated convergence theorem. In what follows, we restrict to this subsequence. Using the convergence of $G_K$, we can show that also $I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K)$ converges to $I$. 
This follows directly from the estimate \begin{align*} &\mathrm{d}_2\big(G_K(\cdot,Y_K(\cdot,\cdot)),I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K)(\cdot,Y_K(\cdot,\cdot))\big)^2\\ =&\sum_{k=1}^K \int_{t_{K,k-1}}^{t_{K,k}}\int_\Omega d\big(G_K(s,Y_K(s,x)),I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K)(s,Y_K(s,x))\big)^2\,\mathrm{d} x \,\mathrm{d} s\\ \leq & C \sum_{k=1}^K\Bigg( \int_{t_{K,k-1}}^{t_{K,k}} \int_\Omega K^{-2}z^2(s,Y_K(s,x)) \,\mathrm{d} x \,\mathrm{d} s\\ &\hspace{3em}+ \int_{t_{K,k-1}}^{t_{K,k}}\int_\Omega d\big(I_{K,k}(Y_K(t_{K,k-1},x)), I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K)(s,Y_K(s,x))\big)^2 \,\mathrm{d} x\,\mathrm{d} s\Bigg)\\ \leq &CK^{-2} \Vert z(\cdot,Y_K(\cdot,\cdot))\Vert_{L^2((0,1) \times \Omega)}^2\,. \end{align*} Here, for the second inequality we exploit the argument from \eqref{eq:zEstimate} for the geodesic distance $d(G_K(s,Y_K(s,x)),I_{K,k}(Y_K(t_{K,k-1},x)))$ together with a Cauchy--Schwarz inequality and the last inequality follows from the same arguments together with the definition of $I^{int}_K$ \eqref{eq:DefUk}. It remains to show that $\mathrm{d}_2\big(I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K),I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\overline{\varphi}}_K)\big) \to 0$. We get \begin{align} \mathrm{d}_2(I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\varphi}_K),I^{int}_K(\boldsymbol{I}_K, \boldsymbol{\overline{\varphi}}_K))^2 \leq &C \sum_{k=1}^{K}\int_{t_{K,k-1}}^{t_{K,k}} \int_\Omega d(I_{K,k-1}\circ x_{K,k}, I_{K,k-1}\circ\overline{x}_{K,k})^2\notag\\ &+ d(I_{K,k}\circ \varphi_{K,k}\circ x_{K,k},I_{K,k}\circ \overline{\varphi}_{K,k}\circ\overline{x}_{K,k})^2\,\mathrm{d} x\,\mathrm{d} t\,, \label{eq:limsup1} \end{align} where we used the convexity of the metric $d$ \eqref{eq:ConvDist} and the fact $K(t-t_{K,k-1}) \leq 1$ for all $t\in[t_{K,k-1}, t_{K,k})$.
Here, $\overline{y}_{K,k}$ and $\overline{x}_{K,k}$ denote the discrete transport map associated with $\overline{\varphi}_{K,k}$ and the inverse map, respectively. To prepare this, we first apply \cref{thm:denselip} and approximate the input images $I_{A}$ and $I_{B}$ by Lipschitz continuous functions $I_{A,j}$ and $I_{B,j}$, where the Lipschitz bound is assumed to be smaller than $Cj$ and the maximum value is bounded by $Cj^{\frac{1}{2}}$. We define spatially Lipschitz continuous approximations of $\alpha_K$ by $\alpha_{K,j}(t,\cdot) = \alpha_{K}(t,\cdot)\ast \kappa_j$, where $\{\kappa_j\}_{j\in\mathbb{N}}\subset C_c^\infty(\mathbb{R}^n)$ is a family of non-negative mollifiers with mass $1$ such that $\alpha_{K,j}$ is Lipschitz continuous with Lipschitz constant bounded by $Cj^{\frac{1}{2}}$ independently of $K$. Analogously, we approximate $\alpha$ by $\alpha_j$. The estimate \cref{eq:ConvAlpha} implies \begin{equation} \Vert \alpha_{K,j}(t,\cdot) - \alpha_{j}(t,\cdot) \Vert_{L^2(\Omega)} \leq C\Vert \alpha_{K}(t,\cdot) - \alpha(t,\cdot) \Vert_{L^2(\Omega)} \leq C(2\vert\Omega_K\vert + 3\beta_K)\,. \label{eq:EstMol} \end{equation} Next, $G_K$ is approximated by \[ G_{K,j}(t,Y_K(t,x)) = \gamma_{{I_{A,j}(x),I_{B,j}\circ Y(1,x)}}\left(\min(1,\alpha_{K,j}(t,x))\right)\,, \] which is spatially Lipschitz continuous with Lipschitz constant $Cj$. To verify this, it suffices to prove the Lipschitz continuity of the function $G_{K,j}(t,Y_K(t,\cdot))$ for all~$t\in[0,1]$ since $X_K$ is uniformly bounded in $C^{0,\frac{1}{2}}([0,1],C^{1,\alpha}(\overline{\Omega}))$. 
For $x_1,x_2 \in \Omega$ we get \begin{align*} &d\big(G_{K,j}(t,Y_K(t,x_1)),G_{K,j}(t,Y_K(t,x_2))\big)\\ \leq &d\big(G_{K,j}(t,Y_K(t,x_1)),\tilde G_{K,j}(t,x_1,x_2)\big) + d\big(\tilde G_{K,j}(t,x_1,x_2),G_{K,j}(t,Y_K(t,x_2))\big)\\ \leq &d\big(I_{A,j}(x_1),I_{A,j}(x_2)\big) + d\big(I_{B,j}\circ Y_K(1,x_1),I_{B,j}\circ Y_K(1,x_2)\big)\\ &+\vert\alpha_{K,j}(t,x_1)-\alpha_{K,j}(t,x_2)\vert d\big(I_{A,j}(x_1),I_{B,j}\circ Y_K(1,x_1)\big)\\ \leq& Cj\vert x_1-x_2\vert\,. \end{align*} Here, we use for comparison \[ \tilde G_{K,j}(t,x_1,x_2) = \gamma_{{I_{A,j}(x_1),I_{B,j}\circ Y(1,x_1)}}\bigl(\min(1,\alpha_{K,j}(t,x_2))\bigr) \] on the discrete curve $t\mapsto Y_K(t,x_1)$. Next, we approximate $I_{K,k}$ by $I_{K,k,j}(x) \coloneqq G_{K,j}(t_{K,k},x)$ and show uniform convergence in $K,k$ of the associated approximation error to zero. To this end, we verify that \begin{align*} \mathrm{d}_2(I_{K,k,j},I_{K,k}) \leq & C \mathrm{d}_2\left(\gamma_{{I_{A},I_{B}\circ Y(1,\cdot)}}\bigl(\min(1,\alpha_{K,j}(t_{K,k},\cdot))\bigr), \gamma_{{I_{A},I_{B}\circ Y(1,\cdot)}}\bigl(\min(1,\alpha_{K}(t_{K,k},\cdot))\bigr)\right)\\ \leq & C \left(\int_\Omega d(I_A(x),I_B \circ Y(1,x))^2 \min\left(1,\vert \alpha_{K,j}(t_{K,k},x) - \alpha_{K}(t_{K,k},x) \vert^2\right) \,\mathrm{d} x \right)^{\frac12}\\ \leq & C\left(\epsilon + C_\epsilon \esssup_{t\in [0,1]} \Vert \alpha_{K,j}(t,\cdot) - \alpha_{K}(t,\cdot) \Vert_{L^2(\Omega)}\right)\,, \end{align*} where we first use the transformation formula together with the uniform boundedness of the determinants and then split the domain $\Omega$ into a set, where $d(I_A(x),I_B \circ Y(1,x))^2\leq C_\epsilon$, and the remainder. By the monotone convergence theorem, the remainder can be assumed to be smaller than a fixed $\epsilon$ with $C_\epsilon$ chosen sufficiently large. We observe that $\esssup_{t\in [0,1]} \Vert \alpha_{K,j}(t,\cdot) - \alpha_{K}(t,\cdot) \Vert_{L^2(\Omega)}$ converges to zero for $j\to\infty$ uniformly in $K$.
Thus, the approximation error converges to zero uniformly in $K$ and $k$. Now, we are able to prove the convergence of the first integral in \eqref{eq:limsup1}. To this end, we fix $j= \min \{l \in \mathbb{N}\colon l \geq K^{\frac{1}{4}}\}$. Then, \begin{align*} &\sum_{k=1}^{K}\int_{t_{K,k-1}}^{t_{K,k}} \int_\Omega d\big(I_{K,k-1}\circ x_{K,k}(t,x), I_{K,k-1}\circ \overline{x}_{K,k}(t,x)\big)^2 \,\mathrm{d} x \,\mathrm{d} t\\ \leq &C \sum_{k=1}^{K}\int_{t_{K,k-1}}^{t_{K,k}} \int_\Omega d\big(I_{K,k-1}\circ x_{K,k}(t,x), I_{K,k-1,j}\circ x_{K,k}(t,x)\big)^2 + \\[-1ex] & \hspace{6.9em} d\big(I_{K,k-1,j}\circ x_{K,k}(t,x), I_{K,k-1,j}\circ \overline{x}_{K,k}(t,x)\big)^2 + \\ & \hspace{6.9em} d\big(I_{K,k-1,j}\circ \overline{x}_{K,k}(t,x), I_{K,k-1}\circ \overline{x}_{K,k}(t,x)\big)^2 \,\mathrm{d} x \,\mathrm{d} t\,. \end{align*} Here, the first and the last term converge to zero as $K \to \infty$ by using the transformation formula and the uniform boundedness of the determinants. For the second term we obtain \begin{align*} &\sum_{k=1}^{K}\int_{t_{K,k-1}}^{t_{K,k}} \int_\Omega d\big(I_{K,k-1,j}\circ x_{K,k}(t,x), I_{K,k-1,j}\circ \overline{x}_{K,k}(t,x)\big)^2 \,\mathrm{d} x \,\mathrm{d} t\\ \leq & C\sum_{k=1}^{K}\int_{t_{K,k-1}}^{t_{K,k}} \sup_{x \in \Omega}\left(d\big(I_{K,k-1,j}\circ x_{K,k}(t,x), I_{K,k-1,j}\circ \overline{x}_{K,k}(t,x)\big)^2\right)\,\mathrm{d} t\\ \leq & Cj^2 \max_{k=1,\dots,K} \Vert \mathrm{Id} - \overline{x}_{K,k} \circ y _{K,k} \Vert_{C^0([t_{K,k-1},t_{K,k})\times \overline{\Omega})}^2\\ \leq &CK^{\frac{1}{2}}\max_{k=1,\dots,K} \Vert \overline{y}_{K,k} - y _{K,k} \Vert_{C^0([t_{K,k-1},t_{K,k})\times \overline{\Omega})}^2\\ \leq& CK^{\frac{1}{2}} \max_{k=1,\dots,K} \left\{ \Vert \overline{\varphi}_{K,k} - \mathrm{Id} \Vert_{C^1(\overline{\Omega})}^2, \Vert \varphi _{K,k} - \mathrm{Id} \Vert_{C^1(\overline{\Omega})}^2 \right\} \leq CK^{-\frac{1}{2}}\,. 
\end{align*} For the last inequality we incorporate \eqref{eq:growthControl2} for $\varphi _{K,k}$ and \eqref{eq:growthControl} for $\overline{\varphi}_{K,k}$. The convergence of the second integral in \eqref{eq:limsup1} follows by an analogous reasoning, which concludes the proof. \end{proof} We conclude this section with the desired convergence statement for discrete geodesic paths. \begin{theorem}[Convergence of discrete geodesic paths]\label{thm:convergence} Let $I_A,I_B\in L^2(\Omega,\mathcal{H})$ and suppose that the assumptions \ref{W1}, \ref{W2} and \ref{W3} hold true. For every $K\in\mathbb{N}$ let $I_K$ be a minimizer of $\boldsymbol{\mathcal{J}}_K $ subject to $I_K(0)=I_A$ and $I_K(1)=I_B$. Then, a subsequence of $\{I_K\}_{K\in \mathbb{N}}$ converges weakly in $L^2([0,1],L^2(\Omega,\mathcal{H}))$ to a minimizer of the continuous path energy $\boldsymbol{\mathcal{J}}$ as $K\rightarrow\infty$, and the associated sequence of discrete energies converges to the minimal continuous path energy. \end{theorem} \begin{proof} By a comparison argument we deduce that the path energy $\boldsymbol{\mathcal{J}}_K$ is bounded by $\overline{\mathcal{J}} = {\frac{1}{\delta}}\mathrm{d}_2(I_A,I_B)^2$. For optimal vectors of images~$\boldsymbol{I}_K$ and deformations~$\boldsymbol{\varphi}_K$ we apply the construction of the extension in time from \cref{sec:Extension}. In particular, $\boldsymbol{J}_K(\boldsymbol{I}_K,\boldsymbol{\varphi}_K) \leq \overline{\mathcal{J}}$ for all $K \in \mathbb{N}$. Using \eqref{eq:BoundInverse} and \cref{eq:sourcerelation} we conclude that $z_K$ is uniformly bounded in $L^2((0,1)\times \Omega)$. Then, \cref{rem:DiffeoVeloRem} together with \eqref{eq:velocityEstimate} and \cref{eq:growthControl} imply the uniform boundedness of $Y_K$, $X_K$ in $C^{0}([0,1],C^{1,\alpha}(\overline{\Omega}))$.
By incorporating inequality \eqref{eq:ODESys2}, we obtain for a constant function~$f_a(x) = a$ with $a \in \mathcal{H}$ that \[ \mathrm{d}_2(I_K(t,\cdot),f_a) \leq C \big(\mathrm{d}_2\big(I_K(t,Y(t,\cdot)),I_A\big) + \mathrm{d}_2(I_A,f_a) \big) \leq C\big(\Vert z_K \Vert_{L^2((0,1)\times \Omega)} + 1 \big)\,. \] Therefore, $\{I_K\}_{K\in\mathbb{N}}$ is uniformly bounded in $L^\infty([0,1],L^2(\Omega,\mathcal{H}))$ and a subsequence converges weakly to some $I\in L^2([0,1],L^2(\Omega,\mathcal{H}))$ in $L^2([0,1],L^2(\Omega,\mathcal{H}))$. Now, we assume that there exists an image path $\tilde I\in L^2([0,1],L^2(\Omega,\mathcal{H}))$ with corresponding optimal tuple $(\tilde I,\tilde v, \tilde Y,\tilde z)$ satisfying \eqref{eq:ODESys1} and \eqref{eq:ODESys2} such that \begin{equation} \boldsymbol{\mathcal{J}}[\tilde I]<\boldsymbol{\mathcal{J}}[I]\,. \label{eq:gammaContradiction} \end{equation} Note that such a tuple exists due to the weak lower semi-continuity and the weak closedness of \eqref{eq:ODESys1} and \eqref{eq:ODESys2}, which follows by similar arguments as in step~3 of the proof of \cref{thm:liminf}. Without restriction we assume that $\tilde I$ has the form \[ \tilde I(t,\tilde Y(t,x)) = \gamma_{{I_{A}(x),I_{B}\circ \tilde Y(1,x)}}\left(\frac{\int_0^t \tilde z(s,\tilde Y(s,x)) \,\mathrm{d} s} {\int_0^1 \tilde z(s,\tilde Y(s,x)) \,\mathrm{d} s}\right)\,. \] Then, using the $\limsup$-estimate shown in \cref{thm:limsup}, there exists a sequence $\{\tilde I_K\}_{K\in\mathbb{N}}\subset L^2((0,1),L^2(\Omega,\mathcal{H}))$ satisfying $\limsup_{K\rightarrow\infty}\boldsymbol{\mathcal{J}}_K [\tilde I_K]\leq\boldsymbol{\mathcal{J}}[\tilde I]$. Thus, we obtain \begin{equation}\label{eq:limit} \boldsymbol{\mathcal{J}}[I]\leq\liminf_{K\rightarrow\infty}\boldsymbol{\mathcal{J}}_K [I_K] \leq\limsup_{K\rightarrow\infty}\boldsymbol{\mathcal{J}}_K [\tilde I_K]\leq\boldsymbol{\mathcal{J}}[\tilde I]\,, \end{equation} which contradicts \eqref{eq:gammaContradiction}.
Hence, $I$ minimizes the continuous path energy over all admissible image paths. Finally, the discrete path energies converge to the limiting path energy along a subsequence, i.e.~$\lim_{K\rightarrow\infty}\boldsymbol{\mathcal{J}}_K [I_K]=\boldsymbol{\mathcal{J}}[I]$, which again follows from \cref{eq:limit} by using $\tilde I = I$. \end{proof} \section*{Acknowledgments} It is a pleasure to thank Johannes Persch and Gabriele Steidl for many enlightening discussions and inspirations. S.~Neumayer acknowledges funding by the Research Training Group 1932, project area P3. A.~Effland and M.~Rumpf acknowledge support of the Collaborative Research Center 1060 and the Hausdorff Center for Mathematics, both funded by the German Science Foundation. A.~Effland additionally acknowledges support from the European Research Council under the Horizon 2020 program, ERC starting grant HOMOVIS, No.~640156.
\section{INTRODUCTION} \blfootnote{The research reported in this paper has been partly supported by the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center SCCH.} \noindent Multi-camera systems are widely used in motion capture, stereo vision, 3D reconstruction, surveillance and sports tracking. With smartphones now ubiquitous, events are frequently captured by multiple devices. Many multi-view algorithms assume temporal synchronization. The problem of multiple video synchronization is often solved by triggering the cameras with a shared signal. This solution has disadvantages: it is costly and may restrict the distance between the cameras. Cheaper cameras and smartphones do not have a hardware trigger input at all. \begin{figure}[h] \centering{}\includegraphics[width=1\columnwidth]{figures/flashes_96_96_77_78}\caption{\label{fig:4flash_artefacts}Four cameras with rolling shutter sensors capturing a scene when a photographic flash was fired. Part of the image rows integrated light from the flash. The leading and trailing edges are easily detectable and on the ice rink also clearly visible. The edges serve as very precise synchronization points. } \end{figure} Content-based synchronization can be performed offline and places no requirements on the data acquisition. It has received steady attention over the last 20 years \cite{Stein1999,Caspi2002,Tresadern2003,Lei2006,Padua2010}. Some of the methods require calibrated cameras, trackable objects or a laboratory setting, or are limited to two cameras. The vast majority of the methods require overlapping views. For the analysis of high-speed phenomena, a very precise synchronization is critical. The problem of precise sub-frame synchronization was addressed in \cite{Caspi2006,Tresadern2009,Dai2006}.
We propose a very simple yet sub-millisecond accurate method for video data with abrupt lighting changes captured by rolling shutter cameras. Such lighting changes could be induced, for example, by photographic flashes, creative lighting at cultural events or simply by turning on a light source. In controlled conditions, it is easy to produce the necessary lighting changes with a stock camera flash. It is very likely that an existing multi-view imaging system uses rolling shutter sensors or that a set of multi-view videos from the public was captured by rolling shutter cameras. The expected image sensor shipment share for CMOS in 2015 was 97\% \cite{IHSInc2012}. Most of the CMOS sensors are equipped with rolling shutter image capture. The proposed method's assumptions are limited to: \begin{itemize} \item a few abrupt lighting changes affecting most of the observed scene, and \item cameras with rolling shutter sensors. \end{itemize} The method does not require an overlapping field of view, and the cameras can be heterogeneous with different frame rates and resolutions. The proposed method works with frame timestamps instead of frame numbers. This means that the method is robust to dropped frames. When the lighting changes abruptly during a rolling shutter frame exposure, the transition edge can be reliably detected in multiple cameras and used as a sub-frame synchronization point. An example of captured frames with an abrupt lighting change caused by a single photographic flash is shown in Figure~\ref{fig:4flash_artefacts}. Let us illustrate the importance of precise sub-frame synchronization with an example of tracking ice hockey players and a puck. Players can quickly reach a speed of $\unitfrac[7]{m}{s}$ \cite{Farlinger2007} and the puck $\unitfrac[36]{m}{s}$ \cite{Worobets2006}. When we consider a $\unit[25]{fps}$ frame rate, a player can travel $\unit[28]{cm}$, and a puck can move $\unit[1.44]{m}$ in the $\unit[40]{ms}$ duration of one frame.
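These per-frame displacements follow directly from multiplying speed by frame duration; a quick check (speeds and frame rate taken from the text):

```python
# Displacement per frame at 25 fps (one frame lasts 40 ms).
FPS = 25
frame_duration_s = 1.0 / FPS  # 0.04 s

player_speed_mps = 7.0   # fast skating player
puck_speed_mps = 36.0    # shot puck

player_displacement_m = player_speed_mps * frame_duration_s  # 0.28 m = 28 cm
puck_displacement_m = puck_speed_mps * frame_duration_s      # 1.44 m
```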
When a synchronization is accurate only up to whole frames, the mentioned uncertainties can lead to poor multi-view tracking performance. \begin{figure*} \centering{}\includegraphics[width=1\textwidth]{figures/flash_with_profiles_c1_f1765}\caption{\label{fig:frame_with_profiles}Detection of an abrupt lighting change. A photographic flash fired during the acquisition of the frame $I_{n}$. The flash duration is shorter than the frame duration. Only the lines that were integrating light when the flash was illuminating the scene were affected. The red dotted lines mark the leading and trailing edges of the bright region. The profiles on the right show pixel intensity changes in the frame before the abrupt change and in the frame with the change. } \end{figure*} \section{RELATED WORK} \noindent The computer vision community has paid steady attention to the video synchronization problem. The issue has been approached from multiple directions. Synchronization at acquisition time is done either by special hardware or by using a computer network to synchronize time or directly trigger the cameras. A more general approach is video content-based synchronization. The advantage is that it does not have special acquisition requirements. We already mentioned a number of content-based methods. We will review the works that make use of a rolling shutter sensor or photographic flashes, which are the most relevant to our method. \cite{Wilburn2004} constructed a rolling shutter camera array that was able to acquire images at 1560 fps. The cameras were hardware synchronized, and the rolling shutter effect was mitigated by outputting slices of a spatio-temporal image volume. \cite{Bradley2009} approach the rolling shutter image capture in two ways. First, they acquire images with stroboscopic light in a laboratory setting, and extract and merge only rows affected by a light pulse that possibly span two consecutive frames.
By changing the frequency and duration of the flashes they effectively create a virtual exposure time and a virtual frame rate. The second investigated approach merges two consecutive frames by weighted warping along optical flow vectors. This is similar to the spatio-temporal volume slice approach. Cinematography-focused methods for the rolling shutter sensor acquisition were studied by \cite{Hudon}. They analyse stroboscopic light artefacts for the purpose of image reconstruction. \cite{Atcheson2008} applied the rolling shutter flash-based synchronization and the spatio-temporal volume slice approach to capture gas flows for a 3D reconstruction. The use of photographic flashes for synchronization appeared, to our knowledge, first in \cite{Shrestha2006}. They find a translation between two video sequences by matching sequences of detected flashes. The final synchronization is accurate to whole frames. None of the rolling shutter or flash-based approaches known to us pays attention to dropped frames or camera clock drift. \section{METHOD} \noindent The inputs for the synchronization algorithm are frame timestamps extracted from video files or network streams and detected transition edges of abrupt lighting changes. We will refer to the transition edges as \textit{synchronization events} or simply \textit{events}. We find synchronization transformations $s^{c}(f,r)\rightarrow t^{\mathrm{ref}}$ for all cameras $c$ (except a reference camera $c_{\mathrm{ref}}$) that map each camera's temporal position $(f, r)$ to the reference camera time $t^{\mathrm{ref}}$. The temporal position is defined by a frame, row pair $\left(f,r\right)$. The situation is presented in Figure~\ref{fig:synchronization}. To correctly model the sub-frame accurate synchronization transformation we have to take into account missing frames, different frame rates, a drift of the image sensor clocks and hidden \emph{dark rows} in image sensors.
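As a minimal sketch (not the paper's implementation), the mapping $s^{c}(f,r)\rightarrow t^{\mathrm{ref}}$ can be modelled as an affine transformation on top of rolling shutter row timing; the class below and its field names (\texttt{drift}, \texttt{offset}) are illustrative, and the affine parameters are assumed to have already been estimated from matched synchronization events:

```python
from dataclasses import dataclass

@dataclass
class CameraSync:
    """Illustrative affine model s(f, r) -> t_ref for one camera:
    t_ref = drift * t_local(f, r) + offset."""
    timestamps: list[float]  # per-frame timestamps t_f extracted from the container
    frame_duration: float    # nominal frame duration T_frame in seconds
    r0: int                  # hidden rows read out before the active area
    rh: int                  # active (effective) rows
    r1: int                  # hidden rows read out after the active area
    drift: float = 1.0       # clock drift relative to the reference camera
    offset: float = 0.0      # temporal offset relative to the reference camera

    def local_time(self, f: int, r: int) -> float:
        # Sub-frame time of row r in frame f: the row's position within the
        # full (hidden + active + hidden) readout gives its fraction of T_frame.
        frac = (self.r0 + r) / (self.r0 + self.rh + self.r1)
        return self.timestamps[f] + frac * self.frame_duration

    def to_reference(self, f: int, r: int) -> float:
        return self.drift * self.local_time(f, r) + self.offset
```

Because the model indexes extracted timestamps rather than assuming $t_f = f/\mathrm{fps}$, a dropped frame simply appears as a larger gap between consecutive timestamps and does not corrupt the mapping.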
\begin{figure} \centering{}\includegraphics[width=1\columnwidth]{figures/synchronization}\caption{\label{fig:synchronization}Sub-frame synchronization of the cameras $c_{1}$ and $c_{2}$ with respect to the reference camera $c_{\mathrm{ref}}$. Frame rates, resolutions and temporal shifts between cameras differ. The short black lines on the sides of frame rectangles represent image rows. We find an affine transformation $s^{c}(f,r)\rightarrow t^{\mathrm{ref}}$ for every camera $c$ that maps a time point specified by a frame number $f$ and a row number $r$ to the reference camera time $t^{\mathrm{ref}}$. The dotted lines show a mapping of time instants when rows in $c_1$ and $c_2$ are captured to the reference camera time. } \end{figure} \subsection{On Time Codes} An ideal timing of video frames assumes a stable frame rate $\mathrm{fps}$ and no skipped frames. The time of the first row exposure of a frame $i$ is then $t(i)=i\cdot\frac{1}{\mathrm{fps}}+t_{0}, i \in \{0, 1, \dots\}$. Unfortunately, this is not true for most real-world video sequences. The most common deviation from the ideal timing is a dropped frame caused by high CPU load on the encoding system. When a frame is not encoded before the next one is ready, it has to be discarded. Almost all video sources provide frame timestamps or frame durations. This information is necessary to maintain very precise synchronization over tens of minutes. We will briefly present frame timing extraction from the MP4 container format and the RTP streaming protocol. Video container files encapsulate image data compressed by a video codec. The timing data is stored in the container metadata. The MP4\footnote{officially named MPEG-4 Part 14} file format is based on Apple QuickTime. Frame timestamps are encoded in \emph{Duration} and \emph{Time Scale Unit} entries. The \emph{Time Scale Unit} is defined as ``the number of time units that pass per second in its time coordinate system''.
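As an illustration, assuming the per-frame \emph{Duration} entries and the \emph{Time Scale Unit} have already been parsed from the container metadata, timestamps in seconds can be accumulated as follows (the helper function is hypothetical, not part of any MP4 library):

```python
def frame_timestamps(durations, time_scale):
    """Accumulate per-frame durations, given in time-scale units,
    into frame timestamps in seconds (first frame at t = 0)."""
    timestamps = []
    t = 0
    for d in durations:
        timestamps.append(t / time_scale)
        t += d
    return timestamps

# Example: a 90 kHz time scale at 25 fps gives 3600 units per frame;
# a dropped frame appears as a doubled duration, not a missing entry.
ts = frame_timestamps([3600, 3600, 7200, 3600], 90000)
# ts == [0.0, 0.04, 0.08, 0.16]
```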
A frequent streaming protocol is the Real-time Transport Protocol (RTP). The codec-compressed video data is split into chunks and sent, typically over UDP, to a receiver. Every packet has an RTP header where the \emph{Timestamp} entry defines the time of the first frame in the packet in units specific to the carried payload: video, audio or other. For video payloads, the \emph{Timestamp} frequency is set to $\unit[90]{kHz}$. \subsection{Rolling Shutter\label{subsec:Rolling-Shutter}} Historically, cameras were equipped with various shutter systems. Among the mechanical shutters, prevalent were the focal plane shutters, where two curtains move in one direction, and the diaphragm shutters, where a number of thin blades uncover a circular aperture. The electronic shutters implemented in image sensors are either global or rolling. CCD type image sensors are equipped with a global shutter, but are already being phased out of the market. Most of the CMOS sensors have a rolling shutter. Recently, a global shutter for CMOS sensors was introduced, but consumer products are still rare. All shutter types except the global shutter exhibit some sort of image distortion: typically, different regions of the sensor (or film) integrate light at different times, or the exposure time differs. An image sensor equipped with a rolling shutter integrates light into the pixel rows sequentially. In a CMOS sensor with a rolling shutter, the electrical charge integrated in all pixels cannot be read out at once. The readout has to be done row by row. For illustration see Figure~\ref{fig:rolling_shutter}. To preserve a constant exposure time for all pixels on the sensor, the exposure starts have to be sequential, exactly as the readouts are. This means that every row captures the imaged scene at a slightly different moment. Typically, a majority of the row exposure time is shared by spatially close rows \cite{ONSemiconductor:MT9P031,Sony:IMX322LQJ-C}.
\begin{figure} \centering{}\includegraphics[width=1\columnwidth]{figures/rolling_shutter}\caption{\label{fig:rolling_shutter}The figure illustrates row exposure in time. The green rectangles represent the time spans when a row is integrating light. In rolling shutter sensors the rows do not start to integrate light at the same time. Instead, the integration begins sequentially with small delays. } \end{figure} To properly compute the start time of a row exposure we have to take into account hidden pixels around the active pixel area. Most image sensors use the hidden pixels to reduce noise and fix colour interpretation at the sensor edges (Figure~\ref{fig:hidden_pixels}). This means that there is a delay, proportional to $R_{0}+R_{1}$, between reading out the last row of a frame and the first row of the next one. Camera or image sensor specifications often include \textit{total} and \textit{effective} pixel counts. The difference between the two values is the number of hidden pixels. \begin{figure} \centering{}\includegraphics[width=1\columnwidth]{figures/hidden_pixels}\caption{\label{fig:hidden_pixels}The active pixel matrix on an image sensor is surrounded by a strip of hidden pixels, sometimes also called ``dark'' pixels. They serve for black colour calibration or to avoid edge effects when processing colour information stored in the Bayer pattern \cite{ONSemiconductor:MT9P031}. The rolling shutter model (Equation \ref{eq:subframe_timing}) assigns a sub-frame time to a row $r$.} \end{figure} Now it is straightforward to compute the sub-frame time for a frame $f$ and a row $r$ as \begin{equation} t(f,r)=t_{f}+\frac{R_{0}+r}{R_{0}+R_{h}+R_{1}}\cdot T_{\mathrm{frame}},\label{eq:subframe_timing} \end{equation} where $R_{0}$, $R_{h}$, $R_{1}$ are the row counts specified in Figure~\ref{fig:hidden_pixels}, $t_{f}$ is the frame timestamp and $T_{\mathrm{frame}}$ is the nominal frame duration.
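Equation \ref{eq:subframe_timing} translates directly into code. A minimal sketch in Python; the row counts in the usage example are hypothetical values in the spirit of the cited datasheets:

```python
def subframe_time(t_f, r, T_frame, R0, Rh, R1):
    """Sub-frame capture time of row r in a frame with timestamp t_f:
    the row offset is a fraction of the nominal frame duration T_frame,
    shifted by the R0 hidden rows read out before the active area."""
    return t_f + (R0 + r) / (R0 + Rh + R1) * T_frame
```

For a hypothetical sensor with $R_0 = R_1 = 8$ hidden rows and $R_h = 1080$ active rows at $\unit[25]{fps}$, the first and last active rows of one frame are captured about $\unit[39]{ms}$ apart, leaving the expected readout gap before the next frame starts.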
The constants $R_{0}$ and $R_{1}$ can be found in the image sensor datasheet, or the summary value $R_{0}+R_{h}+R_{1}$ of total sensor lines can be estimated, as demonstrated in Subsection \ref{subsec:Synchronization}. \subsection{Abrupt Lighting Changes\label{subsec:Abrupt-Light-Changes}} Abrupt lighting changes are trivially detectable and are suitable for sub-frame synchronization with rolling shutter sensors. The only requirement is that the majority of the observed scene receives light from the source. Many multi-view recordings already fulfil the requirement. Professional sports photographers commonly use flashes mounted on sports arena catwalks to capture photos during indoor matches\footnote{\url{http://www2.ljworld.com/news/2010/mar/21/behind-lens-story-behind-those-flashing-lights-nca/}}; mobile phone or DSLR flashes are used at many social occasions that are recorded. Creative rapidly changing lighting is frequent at cultural events such as concerts. For photographic flashes (Figures~\ref{fig:4flash_artefacts}, \ref{fig:rolling_shutter_event}), it is possible to detect both leading and trailing edges. A flash duration is typically one order of magnitude shorter than a frame duration: flashes produce light for $\nicefrac{1}{1000}$ to $\nicefrac{1}{200}$ of a second, in contrast to the $\unit[40]{ms}$ frame duration of a $\unit[25]{fps}$ recording. An example profile of the light intensity captured by a rolling shutter sensor is shown in Figure \ref{fig:diff_profile}. The shape of the profile is formed by two processes. The exponential form of the transition edges corresponds to the physical properties of the lighting source. The partially affected rows at the start and end of an event contribute a linear ramp to the profile shape.
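The flash detection idea can be sketched with NumPy (a simplified illustration, not the exact implementation: grayscale frames are assumed, and a half-maximum rule stands in for the edge-row search):

```python
import numpy as np

def detect_events(frames, threshold=30.0):
    """Detect abrupt lighting changes (e.g. photographic flashes).

    frames: sequence of grayscale images as 2-D arrays.
    Returns (frame index, row) pairs for the leading transition edges.
    """
    events = []
    prev_profile = None
    for f, img in enumerate(frames):
        # median intensity of every image row
        profile = np.median(np.asarray(img, dtype=float), axis=1)
        if prev_profile is not None:
            d = profile - prev_profile  # difference profile
            if d.max() > threshold:
                # leading edge row: first row exceeding half of the peak
                r = int(np.argmax(d > d.max() / 2))
                events.append((f, r))
        prev_profile = profile
    return events
```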
\begin{figure} \centering{}\includegraphics[width=1\columnwidth]{figures/rolling_shutter_event}\caption{\label{fig:rolling_shutter_event}A short abrupt lighting event, e.g., a photographic flash, affects only part of the frame rows (in red) due to the rolling shutter capture. Rows depicted half filled in green and red are being captured at the time of the lighting change. Such rows integrate light of the lighting event only partially. The longer the exposure time, the more rows capture the onset of an event.} \end{figure} \begin{figure} \centering{}\includegraphics[width=1\columnwidth]{figures/diff_profile_c2_f1476}\caption{\label{fig:diff_profile}Median line intensity difference between consecutive frames at the moment of a flash. Rows in the range 950-1700 were captured while the photographic flash illuminated the scene. The exponential character of the leading and trailing edges is related to the physical process of the capacitor discharge in a flashtube.} \end{figure} The detection of the abrupt lighting changes is robust and straightforward. As we require that the lighting affects most of the scene, the maximum of the difference of median line intensities for a frame shows distinct peaks, see Figure~\ref{fig:event_detection}. We simply threshold the values to get the frames with the \textit{events}. We use the leading edge as the \textit{synchronization event}. The \textit{event} row is found in the differences of median line intensity profiles, see Figure~\ref{fig:frame_with_profiles}. The method is summarized in Algorithm \ref{alg:event_detection}. \begin{figure} \centering{}\includegraphics[width=1\columnwidth]{figures/feature1d_with_detections_cam2}\caption{\label{fig:event_detection}Detection of the abrupt lighting changes. A median line intensity profile is computed for every frame. Then the profiles in consecutive frames are subtracted. The difference maxima for a range of frames are plotted above. The clearly visible peaks correspond to the lighting changes.
We threshold the values and detect the events marked on the plot by the red dots.} \end{figure} \begin{algorithm} \SetKw{KwInSet}{in} \SetKw{KwWhere}{where} \KwIn{image sequences} \KwOut{synchronization events} \ForEach{camera}{ \ForEach{frame}{ $m_f$ := line median intensity (frame) \; $m_f \in \mathbb{N}^n$, where $n$ is frame height } \ForEach(compute difference profiles){frame}{ $d_f:=m_{f}-m_{f-1}$ \; $d_f \in \mathbb{Z}^n$, where $n$ is frame height } \For{f \KwInSet $ \{ f \mid \max(d_f)>\mathrm{threshold} \}$}{ $r$ := find rising edge row in $d_f$ \; event := $(f, r)$ \; } } \KwRet{events} \caption{\label{alg:event_detection}Detection of synchronization events} \end{algorithm} \subsection{Synchronization\label{subsec:Synchronization}} We model the time transformation $s^{c}(f,r)\rightarrow t^{\mathrm{ref}}$ from a camera $c$ to a reference camera $c_{\mathrm{ref}}$ as~an~affine mapping similar to \cite{Padua2010}. A substantial difference is that we operate on timestamps instead of frame numbers. The transformation maps the \textit{events} detected in camera $c$ to the same \textit{events} in the time of a~reference camera $c_{\mathrm{ref}}$. The dominant part of the transformation $s^{c}(f,r)$ is a temporal shift between cameras $c$ and $c_{\mathrm{ref}}$. A synchronization model consisting of a constant temporal shift alone is usable only for short sequences. We found experimentally that camera clocks maintain a stable frame duration, but the reported time units are not precisely equal. This deficiency is known as clock drift. We compensate for the drift by the linear component of the transformation.
The proposed transformation is \begin{equation}\label{eq:transformation} s(f,r;\alpha,\beta)=\alpha t_{f}+\beta+r\cdot\frac{T_{\mathrm{frame}}}{R}, \end{equation} where $\alpha$ is the camera clock drift compensation, $\beta$ is the temporal shift, $f$ is the frame number, $r$ is the row number, $t_{f}$ is the frame acquisition timestamp and $R = R_{0}+R_{h}+R_{1}$ is the total number of sensor rows. The goal of the synchronization is to find $s^{c}(f,r;\alpha^{c},\beta^{c})$ for all cameras in $C=\left\{ c_{1},c_{2},\ldots,c_{n}\right\} $ except for a reference camera $c_{\mathrm{ref}}$. For an event observed in cameras $c$ and $c_{\mathrm{ref}}$ at $\left(f^{c},r^{c}\right)$ and $\left(f^{c_{\mathrm{ref}}},r^{c_{\mathrm{ref}}}\right)$, the synchronized camera time and the reference camera time should be equal: \begin{equation} s^{c}(f^{c},r^{c};\alpha^{c},\beta^{c})=t^{c_{\mathrm{ref}}}(f^{c_{\mathrm{ref}}},r^{c_{\mathrm{ref}}}).\label{eq:synch_eq_reference} \end{equation} We have demonstrated how to detect abrupt lighting changes in Subsection \ref{subsec:Abrupt-Light-Changes}. In the next step, we manually align time in cameras $c$ and $c_{\mathrm{ref}}$ up to whole frames, e.g., for the first matching event, and automatically match the rest of the events to get: \begin{eqnarray} E^{c,c_{\mathrm{ref}}} & = & \bigg\{\left\{ \left(f_{1}^{c},r_{1}^{c}\right),\left(f_{1}^{c_{\mathrm{ref}}},r_{1}^{c_{\mathrm{ref}}}\right)\right\} \label{eq:matching_events}\\ & ,..., & \left\{ \left(f_{k}^{c},r_{k}^{c}\right),\left(f_{k}^{c_{\mathrm{ref}}},r_{k}^{c_{\mathrm{ref}}}\right)\right\} \bigg\}.\nonumber \end{eqnarray} Now we can construct an overdetermined system of Equations \ref{eq:synch_eq_reference} for the $k$ pairs of matching events $E^{c,c_{\mathrm{ref}}}$. The least squares solution gives the unknowns $\alpha^{c},\beta^{c}$.
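The least squares step for one camera pair can be sketched with NumPy (an illustration under the assumption that the \textit{time per image row} values are known, e.g. from the datasheets; the function and array names are ours):

```python
import numpy as np

def fit_synchronization(t_c, r_c, t_ref, r_ref, T_row_c, T_row_ref):
    """Estimate the drift compensation alpha and temporal shift beta of
    the transformation above from k matched events, in the least
    squares sense.

    t_c, r_c: frame timestamps and event rows in camera c (length-k arrays)
    t_ref, r_ref: the same events in the reference camera
    T_row_c, T_row_ref: known time-per-image-row constants
    """
    # alpha * t_c + beta = t_ref + r_ref * T_row_ref - r_c * T_row_c
    b = t_ref + r_ref * T_row_ref - r_c * T_row_c
    A = np.column_stack([t_c, np.ones_like(t_c)])
    (alpha, beta), *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha, beta
```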
Optionally, the sensor properties $T^{c}_{\mathrm{row}}:=\nicefrac{T^{c}_{\mathrm{frame}}}{R^{c}}$ and $T^{c_{\mathrm{ref}}}_{\mathrm{row}}:=\nicefrac{T^{c_{\mathrm{ref}}}_{\mathrm{frame}}}{R^{c_{\mathrm{ref}}}}$ can also be estimated when they are not available in the image sensor datasheets. When synchronizing more than two cameras, one system of equations for all cameras has to be constructed to estimate the reference camera \textit{time per image row} $T^{c_{\mathrm{ref}}}_{\mathrm{row}}$ jointly. We summarize the synchronization process in Algorithm \ref{alg:synchronization}. The single global time for a frame $f$ and row $r$ is computed using Equation \ref{eq:subframe_timing} for the reference camera and using Equation \ref{eq:transformation} for the other cameras. \begin{algorithm} \SetKw{KwInSet}{in} \SetKw{KwWhere}{where} \KwIn{frame timestamps, detected synchronization events, reference camera $c_{\mathrm{ref}}$} \KwOut{synchronization parameters} \ForEach{$\{ c \in C \mid c \neq c_{\mathrm{ref}} \}$}{ $E^{c,c_{\mathrm{ref}}}$ := match events in $c$ and $c_{\mathrm{ref}}$ \; \ForEach{event \KwInSet $E^{c,c_{\mathrm{ref}}}$}{ $\left\{ \left(f^{c},r^{c}\right),\left(f^{c_{\mathrm{ref}}},r^{c_{\mathrm{ref}}}\right)\right\} := \mathrm{event} $ \; $t^c := $ timestamp for frame $f^{c}$ \; $t^{c_{\mathrm{ref}}} := $ timestamp for frame $f^{c_{\mathrm{ref}}}$ \; add equation: $\alpha^c t^c+\beta^c + r^c \cdot T^c_{\mathrm{row}} = t^{c_{\mathrm{ref}}}+r^{c_{\mathrm{ref}}}\cdot T^{c_{\mathrm{ref}}}_{\mathrm{row}}$ \; to the system of equations \; } } solve the system in a least squares sense \; \KwRet{$\{\alpha^{c},\beta^{c}, T^{c}_{\mathrm{row}} \mid c \in C $ $\mathrm{and} $ $c \neq c_{\mathrm{ref}} \}, T^{c_{\mathrm{ref}}}_{\mathrm{row}}$} \caption{\label{alg:synchronization}Multi-camera Synchronization} \end{algorithm} \section{DATA\label{sec:DATA}} \noindent The ice hockey data consists of one complete USA versus Russia match captured by 4 cameras.
The company Amden s.r.o. provided us with the data recorded at the International Ice Hockey Federation World Championship 2015. Example images from the cameras are shown in Figure~\ref{fig:4flash_artefacts}. Cameras 1 and 2 observe the ice rink from the sides; cameras 3 and 4 focus on the defending and attacking zones, that is, from the blue lines to the ends of the rink. The camera pairs 1 and 2, and 3 and 4 are identical models with the same lenses. Cameras 1 and 2 are Axis P1428E models with resolution $\unit[3840\times2160]{px}$; cameras 3 and 4 are equipped with the Axis P1354 model with resolution $\unit[1280\times720]{px}$. The data was delivered in the Matroska file format and later converted to mp4. The frame timestamps were extracted using the \texttt{ffprobe} command line utility included in the ffmpeg package. \section{EXPERIMENTS} \noindent Subsection~\ref{subsec:Abrupt-Light-Changes} and Algorithm~\ref{alg:event_detection} describe the method to detect \textit{synchronization events}. We processed the first 5 minutes of the ice hockey match in the four video streams and detected 18, 22, 13 and 15 flashes in cameras 1, 2, 3 and 4 respectively. For the sake of simplicity, we omitted the flashes that crossed the frame boundary. The \textit{event} distribution is depicted in Figure~\ref{fig:events_not_synchronized}. \begin{figure} \centering{}\includegraphics[width=1\columnwidth]{figures/events_not_synchronized_1_2_3_4}\caption{\label{fig:events_not_synchronized}Flashes detected in cameras 1-4. The temporal position of the \textit{events} is always in the camera specific time. The inter-camera shift is clearly visible, unlike the clock drift. The drift error accumulates slowly and is not noticeable in this visualization.} \end{figure} We performed two experiments: first, we synchronized the four cameras jointly by solving a single system of equations; second, we synchronized camera pairs independently.
The results are presented in Table \ref{tab:results_joint} and Table \ref{tab:results_pairs}. The deviation of the synchronized time from the reference time for the detected \textit{events}, which in the ideal case should be 0, can be interpreted as a measure of the method's accuracy. The standard deviation of the synchronization errors is \unit[0.5]{ms} for the joint synchronization and in the range from \unit[0.3]{ms} to \unit[0.5]{ms} for the camera pairs. We can claim that our method is sub-millisecond accurate. We validated the found sub-frame synchronization with an observation of a high-speed object in overlapping views. A puck is present in two consecutive frames in camera 1 and, in the time between them, in camera 3. We interpolated the puck position in camera 1 to the time of the puck in camera 3. The puck position in camera 3 and the interpolated position should be the same. Figure \ref{fig:puck_interpolation} shows that the interpolated puck position is close to the real one from camera 3. \begin{table*} \caption{\label{tab:results_joint}Synchronization parameters and errors for a system of four cameras. Camera 1 was selected as the reference camera $c_{\mathrm{ref}}$ and cameras 2, 3 and 4 were synchronized to the reference camera time. The found parameters of the synchronization transformations (Eq. \ref{eq:transformation}) are presented in the table below. The \textit{time per image row} for the reference camera is \unit[0.0154]{ms}. The clock drift is presented in column three as the number of rows per second that needs to be corrected to maintain synchronization. The standard deviation of the synchronized time from the reference time for the \textit{synchronization events} is presented in the last column.
} \centering{}% \newcolumntype{d}[1]{D{.}{.}{#1} } \begin{tabular}{r r d{2} d{2} d{4} d{2}} \toprule cameras $c_{\mathrm{ref}}$, $c$ & \multicolumn{1}{c}{$1-\alpha^{c}$} & \multicolumn{1}{c}{drift (in $\nicefrac{\mathrm{lines}}{\mathrm{s}}$)} & \multicolumn{1}{c}{shift (in ms)} & \multicolumn{1}{c}{$T^c_{\mathrm{row}}$ (in ms)} & \multicolumn{1}{c}{std error (in ms)}\tabularnewline \midrule 1 2 & 8.39 $\times10^{-6}$ & -0.56 & 6066.7 & 0.015 & 0.49 \\ 1 3 & -3.12 $\times10^{-6}$ & 0.08 & -37500.2 & 0.0394 & 0.44 \\ 1 4 & -8.35 $\times10^{-6}$ & 0.2 & -23858.7 & 0.0414 & 0.44 \\ \bottomrule \end{tabular} \end{table*} \begin{table*} \caption{\label{tab:results_pairs}Synchronization parameters and errors for independent pairs of cameras. For a detailed description see Table \ref{tab:results_joint}.} \centering{}% \newcolumntype{d}[1]{D{.}{.}{#1} } \begin{tabular}{r r d{2} d{2} d{4} d{4} d{2}} \toprule $c_{\mathrm{ref}}$, $c$ & \multicolumn{1}{c}{$1-\alpha^{c}$} & \multicolumn{1}{c}{drift (in $\nicefrac{\mathrm{lines}}{\mathrm{s}}$)} & \multicolumn{1}{c}{shift (in ms)} & \multicolumn{1}{c}{$T^{c_{\mathrm{ref}}}_{\mathrm{row}}$ (in ms)} & \multicolumn{1}{c}{$T^c_{\mathrm{row}}$ (in ms)} & \multicolumn{1}{c}{std error (in ms)}\tabularnewline \midrule 1 2 & 8.47 $\times10^{-6}$ & -0.57 & 6067.49 & 0.0159 & 0.0148 & 0.45 \\ 1 3 & -8.55 $\times10^{-6}$ & 0.22 & -37500.7 & 0.0158 & 0.0396 & 0.42 \\ 1 4 & -7.04 $\times10^{-6}$ & 0.17 & -23859 & 0.0151 & 0.0417 & 0.39 \\ 2 3 & -14.52 $\times10^{-6}$ & 0.37 & -43567.9 & 0.0149 & 0.0397 & 0.39 \\ 2 4 & -17.37 $\times10^{-6}$ & 0.42 & -29926 & 0.015 & 0.0416 & 0.33 \\ 3 4 & -10.12 $\times10^{-6}$ & 0.2 & 13642.3 & 0.0477 & 0.05 & 0.27 \\ \bottomrule \end{tabular} \end{table*} \begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{figures/puck_visualization_svg} \par\end{centering} \caption{\label{fig:puck_interpolation}Synchronization validation. A moving blurred puck is visible in two synchronized cameras.
We show three overlaid images of the same puck: two consecutive frames in camera 1 and a single frame in camera 3. The acquisition time of the puck for all 3 frames was computed considering the frame $f$ and row $r$ of the puck centroid. Knowing the puck acquisition times, it is possible to interpolate a position of the puck in camera 1 for the time of acquisition in camera 3. The interpolated puck position in camera 1 and the real position in camera 3 should be equal. The situation is redrawn on the left; on the right, the image data are visualized in the green and blue channels for camera 1 and in the red channel for camera 3. The interpolated position in camera 1 is depicted as a black circle on the left and a white circle on the right. The interpolated position and the real position in camera 3 are partially overlapping.} \end{figure} We implemented the system in Python with the help of the NumPy, Matplotlib and Jupyter packages \cite{Hunter2007,Perez2007,VanderWalt2011}. \section{CONCLUSIONS } \noindent We have presented and validated a sub-frame time model and a synchronization method for the rolling shutter sensor. We use photographic flashes as sub-frame synchronization \textit{events} that enable us to find the parameters of an affine synchronization model. The differences of the synchronized time at the \textit{events}, which should ideally be $0$, are in the range from 0.3 to 0.5 milliseconds. We validated the synchronization method by interpolating a puck position between two frames in one camera and checking it against the real position in the other camera. We published\footnote{\url{http://cmp.felk.cvut.cz/~smidm/flash_synchronization}} the synchronization code as an easy-to-use Python module, and the paper itself is available in an executable form that allows anybody to reproduce the results and figures. \section*{\uppercase{Acknowledgements}} \noindent Both authors were supported by SCCH GmbH under Project 830/8301434C000/13162.
Jiří Matas has been supported by the Technology Agency of the Czech Republic research program TE01020415 (V3C -- Visual Computing Competence Center). We would like to thank Amden s.r.o. for providing the ice hockey video data. \bibliographystyle{apalike}
\section{Introduction}\label{sec:introduction} \input{intro.tex} \section{Literature Review} \input{literature_review} \section{Preliminaries} To enlarge the distance between original and reconstructed data in our DR system, we utilize the structure of the Generative Adversarial Network (GAN) \cite{Goodfellow2014} for data manipulation and a deep auto-encoder \cite{Baldi2012} as a reconstruction method. The following sections briefly review the auto-encoder and the GAN. \subsection{Auto-encoder} An auto-encoder is an artificial neural network aimed at learning a lower-dimensional representation of data in an unsupervised manner. Auto-encoders can be used for denoising image data and for dimension reduction. An auto-encoder can be implemented by two fully connected neural network components: an encoder and a decoder. The encoder and decoder perform reverse operations. The encoder input is normally the original data, while the decoder output is expected to be similar to the input data. The middle layer, which has a smaller number of neurons than the input layer, is extracted as a representation of the dimensionally-reduced data. The auto-encoder training process can be described as a minimization problem of a loss function: \begin{equation*} \mathcal{L}(x,g(f(x))) \end{equation*} where $x$ is the input data, $f(\cdot)$ is the encoder, $g(\cdot)$ is the decoder and $\mathcal{L}$ is a loss function, which could be the mean squared error between input and output. \subsection{GAN} Generative Adversarial Nets aim at approximating the distribution $p_d$ of a dataset via a generative model over data $x$. A GAN simultaneously trains two models, a generator $G$ and a discriminator $D$, in which the inputs of $G$ are sampled from a prior distribution $p_z(z)$. $G$ generates fake samples similar to the real samples. At the same time, $D$ is trained to differentiate between fake samples and real samples and send feedback to $G$ for improvement.
A GAN can be formulated as a two-player minimax game with the value function $V(G,D)$: \begin{equation*} \begin{split} \min_{G}\max_{D} V(G,D) = & \ E_{x\sim p_{d}(x)}[\log(D(x))] + \\ & \ E_{z\sim p_{z}(z)}[\log(1 - D(G(z)))] \end{split} \end{equation*} In practice, the two GAN components, the \textit{generator} and the \textit{discriminator}, are built on two fully connected neural networks. The loss function of $G$ drives it to reduce the accuracy of $D$. Meanwhile, the loss function of $D$ drives it to increase its accuracy in differentiating fake samples from real samples. These two components are trained until the discriminator cannot distinguish between generated samples and real samples. \section{Problem} \input{problem} \section{$\epsilon$-Dimension Reduction Privacy ($\epsilon$-DR Privacy)} \label{theory} \input{def} \section{Methodology} \label{methodology} \input{method.tex} \section{Experiments and Discussion} \input{experiment.tex} \section{Conclusion} In this paper, we introduce a mathematical tool, $\epsilon$-DR Privacy, to evaluate privacy-preserving mechanisms. We also propose a non-linear dimension reduction framework. This framework projects data onto a lower-dimensional domain in which it prevents data reconstruction and preserves data utility. The dimensionally-reduced data can be used effectively for machine learning tasks such as classification. In future work, we plan to extend the framework to adapt to different types of data, such as time series and categorical data. We will apply different metrics to compute the distance other than the $l_2$ norm and investigate the framework on several applications in security systems and collaborative data-contribution systems. \section*{Acknowledgment} This work is sponsored by the DARPA Brandeis program under agreement number N66001-15-C-4068.
The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense. \section{$\epsilon$-Dimension Reduction Privacy ($\epsilon$-DR Privacy)} In this section, we introduce Dimension Reduction Privacy (DR-Privacy) and give a formal definition of $\epsilon$-DR Privacy to mathematically quantify and evaluate mechanisms designed to preserve DR-Privacy via dimension reduction. DR-Privacy aims to achieve privacy preservation via dimension reduction, which refers to transforming the data into a lower-dimensional subspace so as to conceal the private information while preserving the underlying probabilistic characteristics, which can be utilized for machine learning purposes. To quantify DR-Privacy and to guide the design of such DR functions, we define $\epsilon$-DR Privacy as follows. {\bf Definition 1: ($\epsilon$-DR Privacy)} A Dimension Reduction Function $F(\cdot)$ satisfies $\epsilon$-DR Privacy if for each i.i.d. $m$-dimension input sample $x$ drawn from the same distribution $D$, and for a certain distance measure $dist(\cdot)$, we have \begin{equation} \begin{split} \label{eq:modularity} E[|dist(x, \hat{x})|] \geq \epsilon \end{split} \end{equation} where $\epsilon \geq 0$, $x^{\prime}=F(x)$, $\hat{x}=R(x^{\prime})$, and $R(\cdot)$ is the Reconstruction Function. For instance, as shown in Fig.~\ref{fig:DRP}, given original data $x$, our framework utilizes a certain dimension reduction function $F(x)$ to transform the original data $x$ into the transformed data $x^{\prime}$. The adversaries aim to design a corresponding reconstruction function $R(x^{\prime})$ such that the reconstructed data $\hat{x}$ would be close/similar to the original data $x$.
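Definition 1 can be checked empirically on a sample of data. A minimal sketch in Python, where \texttt{F} and \texttt{R} stand for any candidate reduction and reconstruction functions (illustrative names) and the $l_2$ norm serves as the distance measure:

```python
import numpy as np

def satisfies_eps_dr(x, F, R, eps):
    """Empirical test of the eps-DR Privacy condition (Definition 1):
    the mean l2 distance between samples and their reconstructions
    R(F(x)) must be at least eps.

    x: (n, m) array of n samples; F: reduction; R: reconstruction.
    """
    x_hat = R(F(x))
    dists = np.linalg.norm(x - x_hat, axis=1)
    return float(dists.mean()) >= eps
```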
DR-Privacy aims to design and develop dimension reduction functions such that the distance between the original data and its reconstructed data is large enough to protect the privacy of the data owner. \subsection{Experiment Setup} \textit{The Extended Yale Face Database B} contains 2,470 grayscale images of 38 human subjects under different illumination conditions and their identity labels. In this dataset, the image size is 168x192 pixels. In our experiment, we crop these images to 168x168 pixels. The AT\&T dataset has 400 face images of 40 subjects. We crop the image size to 92x92 pixels. All pixel values are scaled to the range of [0,1]. We randomly select 20\% of each subject's images for the testing dataset. The Generator and Re-constructor in Figure \ref{fig:eGAN} are implemented as convolutional neural networks. Each of them has two convolutional layers and two fully connected hidden layers. The Discriminator and Classifier are built on fully connected neural networks with three hidden layers. The hyperbolic tangent function is used as the activation function for the hidden layers. Each component is trained in 300 local iterations, and the entire system is trained in 1000 iterations with a learning rate of 0.01 for the global loop. The target distribution is drawn from a Gaussian distribution (with a covariance value of 0.05 and a mean equal to the average of the training data). For the single-level authentication system in the scenario, we consider half of the subjects in the dataset valid to access the database system while the rest are invalid. We randomly divide the dataset into two groups of subjects and label their images \{1\} or \{0\} depending on their access permission. For the multi-level authentication system experiments, we divide the subjects into four groups and eight groups so that the authentication server becomes a four-class or an eight-class classifier, respectively.
\subsection{Utility} We use the accuracy metric to evaluate the utility of the dimensionally-reduced data. The testing dataset is tested with the classifier in our framework. Figure \ref{fig:acc} illustrates the accuracies for different numbers of dimensions from one to seven. Overall, the accuracies improve when the number of dimensions increases. The accuracy for the single-level authentication system on YaleB starts from 97\% with one dimension and reaches 100\% with only five dimensions. As the number of dimensions is reduced from 28,224 (168x168) to 5, we can achieve a compression ratio of 5,644 and yet achieve 100\% accuracy. In eight-level and four-level authentication, we can achieve accuracies of 97\% and 96\% with seven dimensions. Testing on the AT\&T dataset, we could achieve 77\% accuracy with only one dimension and 96\% with seven dimensions for single-level authentication. This implies we could gain a compression ratio of 4,032 (from 92x92 to 7 dimensions) and maintain a high accuracy. These figures for the four-level and eight-level authentication systems at seven dimensions are 90\% and 81\% respectively. As shown in Figure \ref{fig:acc}, our method achieves very high accuracy with a low number of dimensions. \subsection{Privacy} Figure \ref{fig:distresult} illustrates the average distances between original images and reconstructed images (taken from the output of the Re-constructor) on testing data with different $\epsilon$ constraints for seven dimensions and single-level authentication. The achieved distances (red lines) are always larger than the hyper-parameter $\epsilon$ (black dotted lines) where $\epsilon$ is less than 0.063 for YaleB and 0.0384 for AT\&T. Due to the fact that the Re-constructor is trained using the training dataset (we assume the adversary can access the model and the training data), our framework can only force the distance within a certain range.
Specifically, the distances vary from 0.059 to 0.063 for YaleB and from 0.03839 to 0.03841 for AT\&T. The intersection between the red line and the dotted black line points out the largest distance our framework can achieve. The method satisfies $\epsilon$-DR privacy with $\epsilon$ values of 0.063 for YaleB and 0.0384 for AT\&T. Figure \ref{fig:reconstruction} demonstrates some samples and their corresponding reconstructions in single-level authentication with seven dimensions. The reconstructed images are nearly identical to one another, thus making it visually hard to recognize the identity of an individual. \section{Comparison to GAP\cite{GAP}} In this section, we compare our framework with GAP. We attempt to visualize AutoGAN-DRP and GAP to highlight their similarities and differences. We also show our experiment results of the two methods on the same dataset. Our work shares many similarities with GAP, such as utilizing the minimax algorithm of Generative Adversarial Nets, applying state-of-the-art convolutional neural nets for image datasets, and considering the $l_2$ norm distance (i.e., distortion in GAP, privacy measurement in AutoGAN-DRP) between the original data and possibly exposed data. Specifically, both GAP and AutoGAN-DRP consider the difference between original and exposed images. This difference is understood as the \textit{distortion} between original and perturbed images in GAP and the \textit{distance} between original and reconstructed images in AutoGAN-DRP. Both the \textit{distance} and the \textit{distortion} are computed with the $l_2$ norm distance. In this context, the distance and distortion refer to the same measurement and have the same meaning. To be consistent, we use the term \textit{distance} for this measurement in the rest of this section. Figure \ref{fig:AGDRPPvsGAP} illustrates the visualization of AutoGAN-DRP and GAP.
In AutoGAN-DRP, privacy is assessed by how well an adversary can reconstruct the original data and is measured by the distance between original and reconstructed data. The dimensionally-reduced data is reconstructed using a state-of-the-art neural network (an Auto-encoder). The larger the distance, the more privacy is achieved. Further, if the reconstructed images are blurry, privacy is preserved since it is hard to visually determine an individual's identity. Data utility is quantified by the accuracy of the identity classification task over the dimensionally-reduced data, which captures the most significant data information. Meanwhile, GAP perturbs images under a certain distortion constraint to achieve privacy. It evaluates data utility by the classification accuracy of the non-private label and assesses privacy by the classification accuracy of the private label. Similar to AutoGAN-DRP, high distortion is likely to yield a high level of privacy. In GAP, however, high distortion might dramatically reduce the classification accuracy of the non-private label. This can be caused by high correlation between private and non-private labels. This difference enables AutoGAN-DRP to preserve more utility than GAP at the same distortion level, as the experimental results (displayed in Figure \ref{fig:genki}) reveal. In the experiment, we reproduce a prototype of the Transposed Convolutional Neural Nets Privatizer (TCNNP) in GAP using materials and source code provided by \cite{GAP}. We also modify our framework to make it as similar to TCNNP as possible. Specifically, a combination of two convolutional layers with ReLU activations and two fully connected layers is used to implement the Generator, similar to TCNNP. Our Classifier is constructed from two convolutional layers and two fully connected hidden layers, similar to the Adversary in GAP. We also test our framework on GENKI, the same dataset used by GAP.
The utility is evaluated by the accuracy of facial expression classification (a binary classification task). It should be noted that our framework has been shown to work on different datasets with multi-class classification, which is more challenging and comprehensive. Figure \ref{fig:genki} shows the accuracy results of GAP and AutoGAN-DRP on the GENKI dataset. AutoGAN-DRP achieves distances ranging from 0.037 to 0.039 for dimensions from one to seven. At the same range of distance (distortion per pixel), GAP achieves an accuracy of only 72\%, while AutoGAN-DRP attains accuracies from 77\% to 91\% for different numbers of dimensions. It is evident that our method achieves higher accuracy than GAP at the same distortion level. \subsection{AutoGAN-based Dimension Reduction for Privacy Preserving (AutoGAN-DRP)} To tackle the problem, we propose a deep learning framework that transforms face images into lower-dimensional vectors which are hard to reconstruct accurately. The dimensionally-reduced data can be sent to the authentication server as an authentication request. We model the adversary as a re-constructor implemented with a Deep Auto-encoder. To prevent full reconstruction of the images, the framework utilizes the discriminator of a GAN \cite{Goodfellow2014} to direct the reconstructed data towards a desired target distribution. In this work, the target distribution is a Gaussian whose mean is the average of the whole training data. After the transformation projects the data into a lower-dimensional domain, the re-constructor can only partially reconstruct the data. Therefore, the adversary might not be able to recognize an individual's identity. To maintain data utility, we use feedback from a classifier. The entire framework thus enlarges the distance between the original data and its reconstruction to preserve individual privacy while retaining significant data information.
The dimension-reduction transformation model is extracted from the framework and provided to clients for reducing the dimensionality of their facial images. The classification model is used in an authentication center that classifies whether an authentication request is valid to grant access \{1\} or not \{0\}. \begin{algorithm}[h] \caption{Algorithm for stochastic gradient descent training of $\epsilon$-DR privacy.} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Training dataset $D$. \\Parameter: learning rates $\alpha_r,\alpha_d,\alpha_c,\alpha_g $, training steps $n_r,n_d,n_c,n_g$ \\ A constraint for $\epsilon$-DR \ENSURE Transformation Model \\ \textit{Initialization.} \FOR {number of global training iterations} \STATE Randomly sample a mini-batch from the target distribution with label $\boldsymbol{t}$.\\ \STATE Randomly sample a mini-batch of data $\boldsymbol x $ with corresponding labels $\boldsymbol{y}$ \FOR{$i = 0 $ to $n_r$ iterations} \STATE Update the Re-constructor and Generator:\\ $ \varphi_{i+1} = \varphi_{i} - \alpha_r \nabla_\varphi{\mathcal{L}_R(\varphi_{i},\theta_{i} ,\boldsymbol{x} ) } $\\ $\theta_{i+1} = \theta_{i} - \alpha_r \nabla_\theta{\mathcal{L}_R(\varphi_{i},\theta_{i} ,\boldsymbol{x} ) } $ \ENDFOR \FOR{$j = 0 $ to $n_d$ iterations} \STATE Update the Discriminator parameters:\\ $ \omega_{j+1} = \omega_{j} - \alpha_d \nabla_\omega{\mathcal{L}_D(\omega_{j} ,\boldsymbol{x,t} ) } $ \ENDFOR \FOR{$k = 0 $ to $n_c$ iterations} \STATE Update the Classifier parameters:\\ $ \phi_{k+1} = \phi_{k} - \alpha_c \nabla_\phi{\mathcal{L}_C(\phi_{k} ,\boldsymbol{x,y} ) } $ \ENDFOR \FOR{$l = 0 $ to $n_g$ iterations} \STATE Update the Generator parameters:\\ $\theta_{l+1} = \theta_{l} - \alpha_g \nabla_\theta{\mathcal{L}_G(\theta_{l} ,\boldsymbol{x,t,y} ) }$ \ENDFOR \ENDFOR \RETURN \end{algorithmic} \label{alg} \end{algorithm} \begin{figure*}[ht!]
\begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{supplement/yale_acc_cp} \captionsetup{justification=centering} \caption{ Yale\_B} \label{fig:yale_acc} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{supplement/att_acc_cp} \captionsetup{justification=centering} \caption{AT\&T} \label{fig:att_acc} \end{subfigure} \caption{Accuracy for Different Numbers of Reduced Dimensions} \label{fig:acc} \end{figure*} \begin{figure*}[ht!] \begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{supplement/ep_yale_cp} \captionsetup{justification=centering} \caption{ Yale\_B } \label{fig:distresultyale} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=\linewidth]{supplement/ep_att_cp} \captionsetup{justification=centering} \caption{ AT\&T} \label{fig:distresultatt} \end{subfigure} \caption{Distance Measurement Results \{7 dimensions, single-level\}} \label{fig:distresult} \end{figure*} We formulate the problem as follows: let $D$ be the public training dataset, and let $(x_i, y_i)$ denote its $i$-th sample, where each sample $x_i$ has $d$ features and a ground-truth label $y_i$. The system aims to learn a dimension-reduction transformation $F(\cdot)$ which maps the data from $d$ dimensions to $d'$ dimensions, where $d' \ll d$. Let $D'$ be the dataset in the lower-dimensional domain. The dimensionally-reduced data should retain enough significant information to work with different types of machine learning tasks, and should resist reconstruction of, or inference about, the data owner's information. Our proposed framework learns a DR function $F(\cdot)$ that preserves privacy at a certain value of $\epsilon$, as evaluated by $\epsilon$-DR privacy; a larger distance implies a higher level of privacy. Figure~\ref{fig:eGAN} represents our learning system, in which $D'$ is generated by the Generator $G$.
$D'$ can be classified by a classifier $C$ and can resist the data reconstruction of an aggressive attack implemented by a trainable Re-constructor $R$. We use a binary classifier for the single-level authentication system and a multi-class classifier for the multi-level authentication systems. The Discriminator $D$ is a neural network that plays a minimax game with $G$. The Discriminator aims to differentiate the reconstructed data from a target distribution and sends feedback to the Generator. The Generator is trained to drive the reconstructed data distribution toward the target distribution, which ensures a distance between the reconstructed and original data. To enlarge this distance, the selected target distribution should differ from the original data distribution. The problem becomes finding an optimal solution for the Generator, as shown in (\ref{eqn:G_loss}): \begin{equation} \begin{aligned} \theta^* = \argmin{\theta} ( &\alpha\min_{\phi}{\mathcal{L}_C} - \beta\min_{\omega}{\mathcal{L}_D}\\ & -\gamma\min_{\varphi}{\mathcal{L}_R} + \mathcal{C}(\epsilon)) \end{aligned} \label{eqn:G_loss} \end{equation} where $\alpha, \beta, \gamma$ are the weights of the components in the objective function and can be freely tuned, and $\mathcal{C}(\epsilon)$ is a constraint function with respect to the hyper-parameter $\epsilon$. The Re-constructor plays the role of an aggressive adversary that attempts to reconstruct the original data; $R$ is trained on known data. The loss function of $R$ is the mean squared error between the original training data and the reconstructed data, as displayed in (\ref{R_loss}): \begin{align} \mathcal{L}_R = \frac{1}{n}\sum_{i=1}^n{(x_i - \hat x_i)^2} \label{R_loss} \end{align} The classifier $C$ maintains the performance of the classification task in the lower-dimensional domain and sends feedback to $G$. The classifier loss function (\ref{C_loss}) is the cross entropy of the class target $y$ and the predicted class $\hat y$, and $\mathcal{L}_D$ (\ref{D_loss}) is the cross-entropy loss of the Discriminator.
\begin{equation} \mathcal{L}_C=-\sum_{i=1}^n\sum_{j=1}^m y_{ij} \log(\hat y_{ij}) \quad y,\hat y \in \{0,1,..,m-1\} \label{C_loss} \end{equation} \begin{equation} \mathcal{L}_D = -\sum_{i=1}^n{(t_i\log(\hat t_i) + (1 - t_i)\log(1 - \hat t_i))} \quad t,\hat t \in \{0,1\} \label{D_loss} \end{equation} where $m$ denotes the number of classes and $n$ denotes the number of samples. \subsection{Optimization With Constraint} In order to meet a certain level of distance, we use $\epsilon$ as a hyper-parameter to be tuned in the system. We use a constrained optimization method to impose the constraint $\epsilon$ on the Generator as part of its objective function, choosing the exterior penalty method as our solution. Consider a constrained problem: $$\min_{x} f(x) \qquad \text{s.t.} \; x \in \Omega $$ It can be approximated by the unconstrained problem: $$ \min_{x}\; f(x)+ \gamma \mathcal{C}(x) $$ where $\mathcal{C}: \mathbb{R}^n \rightarrow \mathbb{R}$ is a penalty function and $\gamma$ is a penalty parameter; $\mathcal{C}$ is continuous, $\mathcal{C}(x) \geq 0 $ for all $x \in \mathbb{R}^n$, and $\mathcal{C}(x) = 0$ iff $x \in \Omega $. The penalty term in the loss function of $G$ becomes \begin{equation} \mathcal{C} = \gamma\cdot\max(0,\,dist-\epsilon)\end{equation} where $dist$ is the distance between the original data and the reconstructed data. \subsection{Algorithms} Algorithm \ref{alg} describes how we train the entire framework. The framework contains four components, which are trained one by one. First, similar to training an auto-encoder, the Re-constructor is trained for $n_r$ iterations and the Generator parameters are concurrently updated while the other components are fixed. Second, the Discriminator is trained while the others are fixed. Third, the Classifier is trained for $n_c$ iterations. Fourth, the Generator is trained for $n_g$ iterations.
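The exterior penalty added to the Generator objective is simple enough to transcribe directly; a small Python illustration (the function name is ours) showing that the term vanishes while the distance respects the constraint and grows linearly beyond it:

```python
def exterior_penalty(dist, eps, gamma):
    """Exterior penalty term C = gamma * max(0, dist - eps) from the text:
    zero while dist <= eps, growing linearly once the constraint boundary
    is crossed, as in the standard exterior penalty approximation."""
    return gamma * max(0.0, dist - eps)
```

For example, with $\epsilon = 0.063$ and $\gamma = 10$, a distance of 0.05 incurs no penalty, while a distance of 0.08 incurs a penalty of 0.17.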
We repeat this training process until it reaches a set number of global training iterations or the weights are very close to their previous state. \subsection{Problem statement} We introduce the problem through the practical scenario mentioned in Section \ref{sec:introduction}. Staff members of a company request access to its data resources, such as websites and data servers, with different privileges through a face recognition access control system. The system collects each member's facial features and sends them to a central server to determine his/her access eligibility. We consider a system with three privilege settings (i.e., single-level, four-level, eight-level) corresponding to three member groupings. We assume the authentication server is semi-honest (it follows the protocol but might be used to infer personal information). An adversary in the authentication center could reconstruct the face features to obtain plain-text face images and determine members' identities. \subsection{Threat model} In the above scenario, we consider a very strong adversary who has access to the model and the training dataset and attempts to reconstruct the original face images to infer a specific member's identity. Our attack model is depicted in Figure \ref{fig:attackmodel}. The adversary uses the training data and the transmitted facial features to identify a member by reconstructing the original face with an Auto-encoder. Rather than using fully connected neural networks, we implement the Auto-encoder with convolutional neural networks to create an effective model for image datasets. Our goal is to design a dimension reduction method that reduces the data dimensionality while resisting full reconstruction of the original data.
\section{Introduction} The Beijing-Arizona Sky Survey (BASS) is a new g- and r-band imaging survey conducted by the National Astronomical Observatory of China (NAOC) and Steward Observatory \citep{zou+etal+2017+aj}. BASS uses the 2.3 m Bok telescope at Kitt Peak, where the 90Prime camera is installed at the prime focus. The camera is composed of four 4096${\times}$4032 CCDs and provides a field of view of about 1 $deg^{2}$. The depths of the single-epoch images are about 23.4 mag and 22.9 mag for g and r, respectively. The typical exposure time is about 60 s, depending on the meteorological conditions. BASS, the Dark Energy Camera Legacy Survey \citep{blum+16}, and the MOSAIC z-band Legacy Survey \citep{silva+16} constitute the Dark Energy Spectroscopic Instrument (DESI) imaging surveys. BASS will survey about 5400 $deg^{2}$ in the northern Galactic cap and provide spectroscopic targets for DESI. In addition, BASS will provide unique science opportunities for studying Galactic halo substructures, satellite dwarf galaxies around the Milky Way, galaxy clusters, high-redshift quasars, and so on \citep{zou+etal+2017+aj}. Table~\ref{tab0} summarizes the specifications of the Bok telescope and the CCD detector. \begin{table} \caption{\label{tab0}Specifications of the Bok telescope and CCD detector} \centering \begin{tabular}{rr} \hline Telescope &2.3 m Bok\\ \hline F-Ratio &f/2.98\\ Diameter of primary mirror &230 cm\\ Absolute pointing &<3${\arcsec}$ rms\\ CCD field of view &1.08${\degr}$${\times}$1.03${\degr}$\\ Size of CCD array &4096${\times}$4032\\ Size of pixel &${15\mu}{\times}{15\mu}$\\ Approximate angular extent per pixel &0${\farcs}$454/pixel\\ Typical exposure time &60 s\\ \hline \end{tabular} \end{table} However, the astrometry of BASS is unsatisfactory.
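As a rough consistency check of Table~\ref{tab0}, the quoted pixel scale follows from the optics: plate scale $\approx 206265''\times$ (pixel size) / (focal length), with focal length = aperture $\times$ f-ratio. A back-of-the-envelope computation (values taken from the table; the exact scale depends on the as-built optics):

```python
# Back-of-the-envelope check of the Bok/90Prime pixel scale (Table 1 values).
ARCSEC_PER_RADIAN = 206265.0

primary_diameter_m = 2.30   # 230 cm primary mirror
f_ratio = 2.98              # f/2.98
pixel_size_m = 15e-6        # 15 micron pixels

focal_length_m = primary_diameter_m * f_ratio            # about 6.85 m
pixel_scale = ARCSEC_PER_RADIAN * pixel_size_m / focal_length_m

# pixel_scale comes out near 0.451 arcsec/pixel, consistent to about 1%
# with the quoted 0.454 arcsec/pixel.
```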
The astrometric solutions of BASS are derived by cross-identifying objects in the frames with the SDSS and 2MASS star catalogues \citep{zhou+16}. In the first data release of BASS, the median positional errors of stars from 9000 frames in right ascension and declination are about 0.10 arcsec and 0.11 arcsec, respectively \citep{zou+etal+2017+aj}. Several factors affect the accuracy of the astrometric solution, such as the precision of the source positional measurements, the solution of the GD, the positional precision of the reference stars, and so on. Among these factors, the solution of the GD is an important one. In general, there are two ways to derive the GD. The direct way would be to observe an astrometric flat field for which we have prior knowledge of the positions of all the stars in some absolute and accurate system \citep{Anderson+2003}; the distortion would then be obtained from the residuals. However, such a standard calibration field is usually not available due to the lack of precision and star density. Although Gaia Data Release 1 (DR1) \citep{gaia+16b,gaia+16a} provides sufficient positional precision and star density, the lack of proper motions for most of its stars can induce non-negligible errors. With the release of Gaia DR2 \citep{gaia+2018}, it can be expected that good results will be achieved. Another way is to take advantage of the instrument itself to calibrate. Anderson et al. (\citeyear{Anderson+2006}) applied the self-calibration method to wide-field ground-based images and achieved good precision. Peng et al. (\citeyear{Peng+etal+2012}) presented an alternative self-calibration method and successfully applied it to observations of some natural satellites \citep{Peng+etal+2015,Wang+etal+2017,Peng+etal+2017}.
The self-calibration method is therefore practical and effective, as the derivation of the GD is free from the catalogue errors in star positions and proper motions, although additional observations of a calibration field must be made. According to Peng et al. (\citeyear{Peng+etal+2012}), the observations of the calibration field should be made at different offsets in a dithered pattern of either "+" or "$\#$". The offset between any two neighboring exposures should be appropriate, and its size depends on the change of the GD pattern. The field of view of the Bok telescope is quite wide, so more requirements must be met for the offsets (more reference stars, smaller offsets, and so on). However, during the exposures of the same sky field for BASS, the offset between two neighboring exposures is about 10 arcmin in right ascension or in declination; see Figure~\ref{Fig1}. Due to the insufficient overlapping area, the derived lookup table is inadequate and the positional precision of the stars is affected. Two third-order polynomials were therefore tried beyond the numerical GD model. The results show that the positional precision of the stars is improved greatly after the third-order polynomial GD correction. Here, the averaged GD in a small box (see Section 3.1) is called the numerical GD, and the GD derived from a polynomial is called the analytical GD. Moreover, a precise pixel positional measurement is fundamental. During the pixel positional measurement, a two-dimensional Gaussian fit was used to obtain the precise positions of the stars \citep{liz+09}, as we have done in our previous work \citep{Peng+etal+2015,Wang+etal+2015,Wang+etal+2017}. Besides, in order to correct the remaining smaller-scale systematic positional errors, an additional lookup table correction was found useful. \begin{figure} \centering \includegraphics[width=8cm, angle=0]{f1.eps} \caption{Dithered observational scheme for the CCD frames in 2016.
The 49 images are organized in a 7${\times}$7 array. The unit is degrees. The observational scheme for 2017 is similar to that of 2016.} \label{Fig1} \end{figure} The contents of this paper are organized as follows. In Section 2, the observations are described. Section 3 presents the solution of the GD models. In Section 4, we analyze the relative positions of the chips. Finally, in Section 5, conclusions are drawn. \section{Observations with different filters} \label{sect:Obs} Before the observations on each night, the x-axis of each CCD chip is made nearly parallel to the real equator by a star-trailing operation. Two sets of observations were then taken, in 2016 and 2017; see Table~\ref{tab1}. All the observations were made with the 2.3 m Bok telescope at Kitt Peak. The site (IAU code 691) is at longitude E248${\degr}$ 24${\arcmin}$ 3.6${\arcsec}$, latitude N31${\degr}$ 57${\arcmin}$ 28.7${\arcsec}$, and height 2079.8 m above sea level. The exposure time for each CCD frame is 60 s and the pixel scale is about 0.454${\arcsec}$. The observations in 2016 were made at high galactic latitude, and in each frame a small number of reference stars ($\sim$~300) are available. The center of the observed field is at E162.05${\degr}$ in right ascension and N28.65${\degr}$ in declination. The observations in 2017 were made at low galactic latitude, and plenty of reference stars ($\sim$~3200) are available in each frame. The center of the field is at E147.35${\degr}$ in right ascension and N1${\degr}$ in declination. An exposure includes four frames, and a total of 4${\times}$87 frames of CCD images were obtained in a dithered pattern (see Figure~\ref{Fig1}). There are inter-chip gaps in the center of the array; see Figure~\ref{Fig2} (\citealt{zou+etal+2017+aj}). The Bok telescope has an equatorial mount, and other information about the telescope and the observations can be found in the FITS headers. A sample FITS header is given in the Appendix.
\begin{table} \centering \begin{minipage}[]{90mm} \caption[]{Observations for the calibration field. Column 2 lists the number of CCD frames and column 3 lists the filter.\label{tab1}}\end{minipage} \setlength{\tabcolsep}{1pt} \small \begin{tabular}{ccccccc} \\ \hline\noalign{\smallskip} Obs Date&Calibration fields No.&Filter\\ \hline\noalign{\smallskip} 2016-01-17 & 4${\times}$49&r\\ 2017-03-05 & 4${\times}$38&g\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=6cm, angle=0]{f2.eps} \caption{Layout of the CCD array. There are four CCDs: CCD$\#$1, CCD$\#$2, CCD$\#$3 and CCD$\#$4, and four identifiers are listed at the corners of each CCD. An exposure includes four CCD frames. (see Figure 1 in \citealt{zou+etal+2017+aj})} \label{Fig2} \end{figure} \section{Solving for the GD models progressively} The reduction procedures were carried out following \citet{Peng+etal+2012}. First, a numerical GD model was derived. Then an analytical GD model was tried, since the residuals of the stars were still large after the numerical GD correction. After the analytical GD correction, the positional precision of the stars improved greatly. Finally, an additional lookup table was set up, as the residual GD did not converge to the preset threshold. More details are given below. \subsection{The derivation of the numerical GD model} \label{sect:Red} The derivation of the numerical GD model mainly involves the following steps: (1) First, information such as the date of observation and the exposure time is extracted automatically from the FITS header by our own software (\citealt{Peng+etal+2017,Wang+etal+2017}). After some preprocessing (de-biasing and flat-field correction), the pixel positions of the stars in each CCD frame are measured with a two-dimensional Gaussian fit; this differs from our past practice, in which SExtractor (\citealt{Bertin+1996}) was used for the pixel positional measurement.
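The idea behind the two-dimensional Gaussian fit can be illustrated with a simplified, noise-free sketch: for a circular Gaussian profile, $\log I$ is quadratic in the pixel coordinates, so a linear least-squares fit recovers the sub-pixel centre in closed form (the actual measurement additionally handles noise, background and ellipticity; the helper name is ours):

```python
import numpy as np

def gaussian_centroid(image):
    """Estimate a star's sub-pixel centre by fitting a 2-D Gaussian.

    For a noise-free circular Gaussian, log(I) is quadratic in (x, y), so a
    linear least-squares fit recovers the centre in closed form.  This is a
    simplified stand-in for the full two-dimensional Gaussian fit.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y, z = xx.ravel(), yy.ravel(), np.log(image.ravel())
    # Design matrix for log(I) = c0 + c1*x + c2*y + c3*x^2 + c4*y^2
    A = np.column_stack([np.ones_like(z), x, y, x ** 2, y ** 2])
    c = np.linalg.lstsq(A, z, rcond=None)[0]
    # Centre of the quadratic: x0 = -c1/(2 c3), y0 = -c2/(2 c4)
    return -c[1] / (2.0 * c[3]), -c[2] / (2.0 * c[4])

# Synthetic star with true centre (10.3, 7.8) and sigma = 2 pixels
yy, xx = np.mgrid[0:21, 0:21]
star = np.exp(-((xx - 10.3) ** 2 + (yy - 7.8) ** 2) / (2 * 2.0 ** 2))
x0, y0 = gaussian_centroid(star)
```

On this synthetic star the fit recovers the true centre to numerical precision.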
As the solution of the GD can be affected by the poor measurement precision of overly bright stars (brighter than 14 mag and saturated), these stars are not considered. For each CCD frame in 2016, about 100 stars are rejected and about 200 stars are used. For each CCD frame in 2017, about 200 stars are rejected and about 3000 stars are used. The stars in each frame are then matched with the stars in the Gaia DR2 catalogue using a fast matching algorithm (\citealt{ren+2010}). The pixel positional measurement and the matching process are performed automatically by our own software. (2) For each chip, we adopted the center of the CCD array as the tangential point on the tangential plane of the image. The standard coordinates of the stars are then obtained by the central projection (\citealt{green1985}). Next, the positional residuals (observed minus averaged; O-A) of the stars can be obtained after a four-parameter transformation. As an accurate GD model requires high precision of the star positions, the topocentric apparent position and atmospheric refraction for each matched star in each CCD frame are also taken into account during the computation of the standard coordinates. (3) Owing to the plentiful reference stars available in 2017, each CCD frame is divided into 1024 equal-area boxes, each of size 128${\times}$126 pixels. Each CCD frame in 2016 is divided into 256 boxes of 256${\times}$252 pixels each. The averaged GD in each box was then obtained, cancelling out the catalogue errors and compressing the measurement errors. After ten to twelve iterations the numerical GD converged and was derived; in fact, the derived numerical GD model is a lookup table. Figure~\ref{FigGD16} and Figure~\ref{FigGD17} show the numerical GD patterns for the Bok telescope in 2016 and 2017, respectively. We can see that the telescope suffers from serious GD, especially in the corners. The maximum GD is about 22 pixels ($\sim$~10.18 arcsec).
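Step (3) above amounts to binning the (O-A) residuals and averaging them per box; a numpy sketch with hypothetical inputs (the 32${\times}$32 grid of 128${\times}$126-pixel boxes reproduces the 1024-box division of the 2017 frames; the iteration over frames and the catalogue-error cancellation are omitted):

```python
import numpy as np

def numerical_gd_lookup(x, y, dx, dy, nx=32, ny=32, width=4096, height=4032):
    """Average the stars' (O-A) residuals over an nx-by-ny grid of boxes.

    x, y   : pixel positions of the matched stars
    dx, dy : their positional residuals after the plate transformation
    Returns two (ny, nx) lookup tables holding the mean residual per box,
    i.e. a numerical GD model (NaN where a box contains no star).
    """
    ix = np.clip((x / (width / nx)).astype(int), 0, nx - 1)
    iy = np.clip((y / (height / ny)).astype(int), 0, ny - 1)
    gd_x = np.full((ny, nx), np.nan)
    gd_y = np.full((ny, nx), np.nan)
    for j in range(ny):
        for i in range(nx):
            inside = (ix == i) & (iy == j)
            if inside.any():
                gd_x[j, i] = dx[inside].mean()
                gd_y[j, i] = dy[inside].mean()
    return gd_x, gd_y
```

Averaging many stars per box suppresses the random measurement errors, which is what makes the lookup table usable as a distortion model.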
Figure~\ref{Fig5} and Figure~\ref{Fig6} show the positional residuals of the stars with respect to the magnitude in 2016 and 2017, respectively. We can see that the positional residuals are largely suppressed after the numerical GD correction. \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f3.eps} \caption{GD for the Bok 2.3 m telescope. The GD was derived from the observations in 2016 and the r filter was used. The maximum GD (Max) and the median GD (Med) are listed in units of pixels and a factor of 20 is used to exaggerate the magnitude of the GD vectors.}\label{FigGD16} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f4.eps} \caption{GD for the Bok 2.3 m telescope. The GD was derived from the observations in 2017 and the g filter was used. The maximum GD (Max) and the median GD (Med) are listed in units of pixels and a factor of 20 is used to exaggerate the magnitude of the GD vectors.}\label{FigGD17} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f5.eps} \caption{(O-A) residuals of the stars in 2016. The dark points represent the residuals before the numerical GD correction, where the four-parameter transformation is used, and the red ones represent the residuals after the numerical GD correction.}\label{Fig5} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f6.eps} \caption{(O-A) residuals of the stars in 2017. The dark points represent the residuals before the numerical GD correction, where the four-parameter transformation is used, and the red ones represent the residuals after the numerical GD correction.}\label{Fig6} \end{figure} \subsection{The derivation of the analytical GD model} By using the numerical GD correction, the precision of the star positions is improved. However, the positional residuals of the stars are still large.
Specifically, we found that the residuals were not randomly distributed with respect to the stars' pixel positions (see Figure~\ref{Fig5_1} and Figure~\ref{Fig5_2}). Besides, the GD pattern in 2017 appears to be divided into several blocks (see Figure~\ref{FigGD17}). The analytical GD model was therefore tried beyond the numerical GD model. \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f5_1.eps} \caption{Positional residuals of the stars with respect to the pixel positions in 2016.}\label{Fig5_1} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f5_2.eps} \caption{Positional residuals of the stars with respect to the pixel positions in 2017.}\label{Fig5_2} \end{figure} As noted above, the previous GD pattern is a numerical model in which each derived GD value is an average over a box of 128${\times}$126 or 256${\times}$252 pixels. The averaged GD in such a small box cannot represent the GD pattern finely, given the severity of the distortion. In order to depict the GD pattern in more detail, two third-order polynomials were used to fit the previous numerical GD. Higher-order polynomials were also tried, but the results show that the positional residuals of the stars are almost the same as those of the third-order polynomial fit, so the third-order polynomial fit was chosen. The positional residuals of the stars for the higher-order polynomial fits will be given later (see Table 5).
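Fitting the two third-order polynomials to the box-centre GD values is a linear least-squares problem; a minimal numpy sketch (function and variable names are ours) is:

```python
import numpy as np

def fit_third_order_gd(x_obs, y_obs, gd_x, gd_y):
    """Least-squares fit of two third-order polynomials to the numerical GD.

    x_obs, y_obs : pixel coordinates of the box centres
    gd_x, gd_y   : numerical GD values at those centres
    Returns coefficient vectors a (for Delta X) and b (for Delta Y) in the
    term order x^3, x^2 y, x y^2, y^3, x^2, x y, y^2, x, y, 1.
    """
    # Normalize pixel positions to [-1, 1], as in the text
    x = (x_obs - 2048.0) / 2048.0
    y = (y_obs - 2016.0) / 2016.0
    A = np.column_stack([x ** 3, x ** 2 * y, x * y ** 2, y ** 3,
                         x ** 2, x * y, y ** 2, x, y, np.ones_like(x)])
    a = np.linalg.lstsq(A, gd_x, rcond=None)[0]
    b = np.linalg.lstsq(A, gd_y, rcond=None)[0]
    return a, b
```

The returned coefficient order matches $a_1$ to $a_{10}$ (and $b_1$ to $b_{10}$) of the polynomials written out below.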
Here, the detailed third-order polynomials are shown by the following equations: \begin{equation}\label{poly} \begin{aligned} {\Delta X} &= a_{1}x^3 + a_{2}x^2y + a_{3}xy^2 + a_{4}y^3+a_{5}x^2+a_{6}xy+a_{7}y^2\\ & +a_{8}x+a_{9}y+a_{10}\\ {\Delta Y} &= b_{1}x^3 + b_{2}x^2y + b_{3}xy^2 + b_{4}y^3+b_{5}x^2+b_{6}xy+b_{7}y^2\\ & +b_{8}x+b_{9}y+b_{10} \end{aligned} \end{equation} where ($\mit \Delta X$, $\mit \Delta Y$) is the solved GD for the center of each box, ($\mit x$, $\mit y$) is its normalized pixel coordinate between -1 and 1, and $\mit a$$_{i}$ and $\mit b$$_{i}$ ($\mit i$=1 to 10) are the coefficients of the polynomials. We use the normalized pixel positions: \begin{center} $x=(x_{obs}-2048)/2048$\\ $y=(y_{obs}-2016)/2016$ \end{center} This normalization makes it easier to recognize the size of the contribution of each term to the solution. After fitting, the coefficients of the polynomials were obtained and the analytical GD model was derived; an iterative procedure then follows. Table~\ref{tab2} lists the solved coefficients of the third-order polynomials in 2016 and 2017, and Figure~\ref{Fig6_1} and Figure~\ref{Fig6_2} show the positional residuals of the stars with respect to the magnitude. From these figures we can see that the positional precision of the stars is improved greatly. The final GD correction reaches a positional precision level of $\sim$~0.05 pixel ($\sim$~22 mas) in each coordinate, and most of the GD is removed. The precision of the GD correction can thus be defined as the star positional residual after the GD correction. \begin{table*} \begin{minipage}[]{180mm} \caption[]{The coefficients of the analytical model for the observations in 2016 and 2017. Column 1 lists the designation of each CCD chip and column 2 lists the date. Column 3 lists the direction ($\emph{a}$ or $\emph{b}$).
Column 4 to 13 list the coefficients of the polynomials.\label{tab2}} \end{minipage} \centering \setlength{\tabcolsep}{1pt} \small \begin{tabular}{cccrrrrrrrrrr} \hline\noalign{\smallskip} CCD&date&&1$\quad$$\quad$&2$\quad$$\quad$&3$\quad$$\quad$&4$\quad$$\quad$&5$\quad$$\quad$&6$\quad$$\quad$&7$\quad$$\quad$&8$\quad$$\quad$&9$\quad$$\quad$&10$\quad$\\ \hline\noalign{\smallskip} $\#$1&2016$\quad$&$\emph{a}$$\quad$& 1.897$\quad$& 0.090$\quad$& 1.857$\quad$& 0.020$\quad$&-6.256$\quad$&-3.953$\quad$&-2.039$\quad$&-0.658$\quad$& 2.335$\quad$& 2.754\\ &2016$\quad$&$\emph{b}$$\quad$& 0.024$\quad$& 1.875$\quad$& 0.068$\quad$& 2.065$\quad$&-1.962$\quad$&-4.186$\quad$&-5.744$\quad$& 6.392$\quad$&-1.371$\quad$& 2.559\\ $\#$2&2016$\quad$&$\emph{a}$$\quad$& 1.880$\quad$&-0.068$\quad$& 1.856$\quad$&-0.021$\quad$&-6.264$\quad$& 3.694$\quad$&-2.041$\quad$& 0.038$\quad$&-2.298$\quad$& 2.757\\ &2016$\quad$&$\emph{b}$$\quad$&-0.068$\quad$& 1.848$\quad$&-0.057$\quad$& 1.909$\quad$& 1.820$\quad$&-4.195$\quad$& 5.373$\quad$&-5.936$\quad$&-1.052$\quad$&-2.388\\ $\#$3&2016$\quad$&$\emph{a}$$\quad$& 1.889$\quad$&-0.090$\quad$& 1.855$\quad$&-0.037$\quad$& 5.871$\quad$&-3.973$\quad$& 1.913$\quad$&-1.820$\quad$&-1.426$\quad$&-2.585\\ &2016$\quad$&$\emph{b}$$\quad$&-0.027$\quad$& 1.874$\quad$&-0.085$\quad$& 1.904$\quad$&-1.978$\quad$& 3.896$\quad$&-5.799$\quad$&-6.813$\quad$&-1.791$\quad$& 2.582\\ $\#$4&2016$\quad$&$\emph{a}$$\quad$& 1.886$\quad$& 0.087$\quad$& 1.865$\quad$& 0.035$\quad$& 5.906$\quad$& 3.687$\quad$& 1.937$\quad$&-0.153$\quad$& 2.474$\quad$&-2.604\\ &2016$\quad$&$\emph{b}$$\quad$& 0.036$\quad$& 1.878$\quad$& 0.082$\quad$& 1.910$\quad$& 1.831$\quad$& 3.932$\quad$& 5.354$\quad$& 5.114$\quad$&-0.825$\quad$&-2.385\\ $\#$1&2017$\quad$&$\emph{a}$$\quad$& 1.942$\quad$& 0.084$\quad$& 1.857$\quad$& 0.015$\quad$&-6.333$\quad$&-4.049$\quad$&-2.044$\quad$&-1.198$\quad$& 4.493$\quad$& 2.790\\ &2017$\quad$&$\emph{b}$$\quad$& 0.023$\quad$& 1.885$\quad$& 0.074$\quad$& 
1.889$\quad$&-2.020$\quad$&-4.200$\quad$&-5.877$\quad$& 4.389$\quad$&-1.613$\quad$& 2.630\\ $\#$2&2017$\quad$&$\emph{a}$$\quad$& 1.926$\quad$&-0.030$\quad$& 1.867$\quad$&-0.005$\quad$&-6.308$\quad$& 3.668$\quad$&-2.025$\quad$&-0.383$\quad$&-4.012$\quad$& 2.775\\ &2017$\quad$&$\emph{b}$$\quad$&-0.002$\quad$& 1.881$\quad$&-0.027$\quad$& 1.873$\quad$& 1.813$\quad$&-4.150$\quad$& 5.364$\quad$&-4.091$\quad$&-1.589$\quad$&-2.390\\ $\#$3&2017$\quad$&$\emph{a}$$\quad$& 1.922$\quad$&-0.032$\quad$& 1.829$\quad$&-0.019$\quad$& 5.987$\quad$&-4.069$\quad$& 1.929$\quad$&-1.559$\quad$&-4.398$\quad$&-2.636\\ &2017$\quad$&$\emph{b}$$\quad$&-0.009$\quad$& 1.861$\quad$&-0.031$\quad$& 1.830$\quad$&-2.051$\quad$& 3.919$\quad$&-5.922$\quad$&-4.093$\quad$&-1.329$\quad$& 2.655\\ $\#$4&2017$\quad$&$\emph{a}$$\quad$& 1.911$\quad$& 0.052$\quad$& 1.878$\quad$& 0.007$\quad$& 6.010$\quad$& 3.643$\quad$& 1.941$\quad$&-0.852$\quad$& 3.845$\quad$&-2.648\\ &2017$\quad$&$\emph{b}$$\quad$& 0.013$\quad$& 1.898$\quad$& 0.049$\quad$& 1.832$\quad$& 1.810$\quad$& 3.913$\quad$& 5.316$\quad$& 3.663$\quad$&-1.612$\quad$&-2.373\\ \noalign{\smallskip}\hline \end{tabular} \end{table*} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f6_1.eps} \caption{Positional residuals of the stars with respect to the magnitude in 2016. The black points represent the residuals after the numerical GD correction and the red ones after the analytical GD correction.}\label{Fig6_1} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f6_2.eps} \caption{Positional residuals of the stars with respect to the magnitude in 2017. 
The black points represent the residuals after the numerical GD correction and the red ones after the analytical GD correction.}\label{Fig6_2} \end{figure} \subsection{Setting up an additional lookup table} Although the precision of a star's reduced position is greatly improved after the analytical GD correction, during the iterative solution of the analytical GD model the numerical GD does not converge to the preset threshold (for example, 0.01 pixel). Therefore, an additional lookup table was set up, and the corresponding lookup table correction was applied after the analytical GD correction. After the lookup table correction, the numerical GD converged quickly during the iterative solution. Figure~\ref{Fig6_3} and Figure~\ref{Fig6_4} show the analytical GD patterns for the Bok telescope in 2016 and 2017, respectively. From Figure~\ref{Fig6_4} we can see that the blocks found in Figure~\ref{FigGD17} have disappeared. In order to show the result of the GD correction, the final residual patterns after the analytical GD correction and the lookup table correction are also shown in Figure~\ref{Fig11_1} and Figure~\ref{Fig11_2}, where the magnitude of the residual is magnified by a factor of 20000. \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f6_3.eps} \caption{GD for the Bok 2.3 m telescope in 2016. The GD was derived from the analytical GD model and the additional lookup table in 2016. The maximum GD (Max) and the median GD (Med) are listed in units of pixels and a factor of 20 is used to exaggerate the magnitude of the GD vectors.} \label{Fig6_3} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f6_4.eps} \caption{GD for the Bok 2.3 m telescope in 2017. The GD was derived from the analytical GD model and the additional lookup table in 2017.
The maximum GD (Max) and the median GD (Med) are listed in units of pixels and a factor of 20 is used to exaggerate the magnitude of the GD vectors.}\label{Fig6_4} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f11_1.eps} \caption{Residuals for the Bok 2.3 m telescope in 2016. The maximum GD (Max) and the median GD (Med) are listed in units of pixels and a factor of 20000 is used to exaggerate the magnitude of the GD vectors.} \label{Fig11_1} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f11_2.eps} \caption{Residuals for the Bok 2.3 m telescope in 2017. The maximum GD (Max) and the median GD (Med) are listed in units of pixels and a factor of 20000 is used to exaggerate the magnitude of the GD vectors.}\label{Fig11_2} \end{figure} In addition, Figure~\ref{Fig7_1} and Figure~\ref{Fig7_2} show the positional residuals of the stars after the analytical GD correction and the additional lookup table correction. Table~\ref{tab3} shows the mean and standard deviation of the stars' positional residuals obtained with the different GD models. From Figure~\ref{Fig7_1}, Figure~\ref{Fig7_2} and Table~\ref{tab3} we can see that the positional precision of the stars is further improved. Furthermore, the positional residuals of the stars with respect to the pixel positions are also shown in Figure~\ref{Fig8_1} and Figure~\ref{Fig8_2}. From these two figures, we can see that the positional residuals are distributed almost randomly after the analytical GD correction and the additional lookup table correction. However, a small jump is seen in the right panel of Figure~\ref{Fig8_1}, and further study is needed in the future. Table~\ref{tab4} shows the mean and standard deviation of the stars' positional residuals using polynomial fits of different orders.
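For readers who wish to experiment with the model, the third-order correction of Equation~(\ref{poly}) is simple to evaluate once the 20 coefficients are known. The following Python sketch is illustrative only: the chip geometry follows the normalization defined above, and the coefficient values in the usage example are placeholders, not the solved values of Table~\ref{tab2}.

```python
def normalize(x_obs, y_obs):
    # Normalized pixel coordinates in [-1, 1], as defined in the text:
    # x = (x_obs - 2048)/2048, y = (y_obs - 2016)/2016.
    return (x_obs - 2048.0) / 2048.0, (y_obs - 2016.0) / 2016.0

def gd_correction(x_obs, y_obs, a, b):
    """Evaluate the third-order GD model (Delta X, Delta Y) at one pixel.

    a, b are length-10 coefficient sequences ordered as in the equation:
    x^3, x^2*y, x*y^2, y^3, x^2, x*y, y^2, x, y, 1.
    """
    x, y = normalize(x_obs, y_obs)
    terms = (x**3, x**2 * y, x * y**2, y**3, x**2, x * y, y**2, x, y, 1.0)
    dx = sum(c * t for c, t in zip(a, terms))
    dy = sum(c * t for c, t in zip(b, terms))
    return dx, dy
```

At the chip center the normalized coordinates vanish, so only the constant terms $a_{10}$ and $b_{10}$ contribute; this gives a quick sanity check of the coefficient ordering.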
Figure~\ref{Fig9_1} shows the standard deviation of each star's positional residual in 2016 in right ascension and declination, respectively, and Figure~\ref{Fig9_2} shows the standard deviation in 2017. In Figure~\ref{Fig9_1} and Figure~\ref{Fig9_2}, the stars that appeared more than twice are included, and a smooth line is also drawn for every 500 points. In short, from these figures we can see that after the GD correction, the positional measurement precision for the bright stars is better than 20 mas. The positional measurement precision in 2017 is even better for most stars. \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f7_1.eps} \caption{Positional residuals of the stars with respect to the magnitude in 2016 in right ascension and declination, respectively. The black points represent the residuals of the stars after the analytical GD correction and the additional lookup table correction.}\label{Fig7_1} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f7_2.eps} \caption{Positional residuals of the stars with respect to the magnitude in 2017 in right ascension and declination, respectively. The black points represent the residuals of the stars after the analytical GD correction and the additional lookup table correction.}\label{Fig7_2} \end{figure} \begin{table} \centering \begin{minipage}[]{90mm} \caption[]{Statistics of the positional residuals for all stars using different GD models for each data set. Column 1 lists the date and column 2 lists the GD model. The following columns list the mean positional residual and its standard deviation (SD) in right ascension and declination, respectively.
All units are in arcseconds.\label{tab3}} \end{minipage} \setlength{\tabcolsep}{1pt} \small \begin{tabular}{ccccccccccccc} \\ \hline\noalign{\smallskip} Date& GD model &$<$O-A$>$& SD & $<$O-A$>$& SD \\ & &\multicolumn{2}{c}{RA}&\multicolumn{2}{c}{Dec.} \\ \hline\noalign{\smallskip} 2016-01-17&Analytical&0.000&0.019&0.000&0.023\\ &Analytical+lookup table&0.000&0.018&0.000&0.021\\ 2017-03-05&Analytical&0.000&0.019&0.000&0.021\\ &Analytical+lookup table&0.000&0.018&0.000&0.020\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{table} \centering \begin{minipage}[]{90mm} \caption[]{Statistics of the positional residuals for all stars after the analytical GD correction and the additional lookup table correction, using polynomial fits of different orders. Column 1 lists the date and column 2 lists the order of the polynomial fit. The following columns list the mean positional residual and its standard deviation (SD) in right ascension and declination, respectively. All units are in arcseconds.\label{tab4}} \end{minipage} \setlength{\tabcolsep}{1pt} \small \begin{tabular}{ccccccccccccc} \\ \hline\noalign{\smallskip} Date& Orders &$<$O-A$>$& SD & $<$O-A$>$& SD \\ & &\multicolumn{2}{c}{RA}&\multicolumn{2}{c}{Dec.} \\ \hline\noalign{\smallskip} 2016-01-17&2-order&0.000&0.035&0.000&0.039\\ &3-order&0.000&0.018&0.000&0.021\\ &4-order&0.000&0.018&0.000&0.021\\ &5-order&0.000&0.018&0.000&0.021\\ 2017-03-05&2-order&0.000&0.047&0.000&0.049\\ &3-order&0.000&0.018&0.000&0.020\\ &4-order&0.000&0.018&0.000&0.020\\ &5-order&0.000&0.018&0.000&0.020\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f8_1.eps} \caption{Positional residuals of the stars with respect to the pixel positions in 2016.
The dark points represent the residuals after the analytical GD correction and the additional lookup table correction.}\label{Fig8_1} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f8_2.eps} \caption{Positional residuals of the stars with respect to the pixel positions in 2017. The dark points represent the residuals after the analytical GD correction and the additional lookup table correction.}\label{Fig8_2} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f9_1.eps} \caption{Standard deviation of the positional residual for each star after the analytical GD correction and the additional lookup table correction with respect to the magnitude in 2016. A Savitzky-Golay smooth line is drawn for every 500 points.}\label{Fig9_1} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f9_2.eps} \caption{Standard deviation of the positional residual for each star after the analytical GD correction and the additional lookup table correction with respect to the magnitude in 2017. A Savitzky-Golay smooth line is drawn for every 500 points.}\label{Fig9_2} \end{figure} \section{Relative positions of the chips} \label{sect:diss} With an accurate GD pattern available, the relative positions of the chips in the CCD array can be determined. As the center of the CCD array was adopted as the tangential point on the tangential plane, the standard coordinates for each pixel can be computed and the inter-chip gaps can be obtained. We use CCD$\#$2 and CCD$\#$3 as references to compute the inter-chip gap for each observational set (see Figure 2). Because the inter-chip gaps were derived by extrapolating the final GD model, positions far from the reference chip's boundary are not reliable for measuring the gaps. As such, only twelve points were used to compute the positional changes.
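Once the facing edges of two chips are expressed in a common distortion-corrected frame, the gap and relative roll angle follow from elementary geometry. The sketch below is a minimal illustration of that step; the plate scale and the corner coordinates in the test are assumed round numbers, not the calibrated values behind Table~\ref{tab6}.

```python
import math

PLATE_SCALE = 0.45  # arcsec per pixel; an assumed value, not the calibrated one

def edge_angle(p1, p2):
    # Orientation of a chip edge given its two endpoints (pixels).
    return math.atan2(p2[1] - p1[1], p2[0] - p1[0])

def gap_and_roll(ref_edge, other_edge):
    """Horizontal gap (arcsec) and relative roll angle (degrees).

    ref_edge / other_edge: ((x1, y1), (x2, y2)) endpoints of the two
    facing, nominally vertical edges in a common distortion-corrected
    frame.
    """
    mean_x = lambda e: 0.5 * (e[0][0] + e[1][0])
    gap_px = mean_x(other_edge) - mean_x(ref_edge)
    roll = math.degrees(edge_angle(*other_edge) - edge_angle(*ref_edge))
    return gap_px * PLATE_SCALE, roll
```

With perfectly parallel vertical edges the roll angle is zero and the gap is just the mean horizontal separation scaled to arcseconds.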
When CCD$\#$2 is used as the reference, the pixel positions for corners 5, 6, and 8 are (0.5, 4031.5), (4031.5, 4031.5) and (4031.5, -0.5). When CCD$\#$3 is used as the reference, the pixel positions for corners 9, 11, and 12 are (0.5, 4031.5), (0.5, -0.5) and (4031.5, -0.5). The pixel positions for the other corners are listed in Table~\ref{tab5}. Figure~\ref{Fig10_1} shows the change of the inter-chip gap when CCD$\#$2 is used as the reference in 2016 and 2017, respectively, and Figure~\ref{Fig10_2} shows the change of the inter-chip gap when CCD$\#$3 is used as the reference. From Figure~\ref{Fig10_1} and Figure~\ref{Fig10_2} we can see there was a slight shift in the inter-chip gap over 14 months. Furthermore, Figure~\ref{Fig10_3} shows the change of the inter-chip gap and Figure~\ref{Fig10_4} shows the change of the roll angles between 2016 and 2017 with respect to CCD$\#$2 (chosen as reference). The statistics of the inter-chip gaps and the roll angles in 2016 and 2017 are also shown in Table~\ref{tab6}. From Table~\ref{tab6} we can see that the change of the roll angle over 14 months is no more than 0.1 degree and the roll angles appear to be stable. \begin{table*} \centering \begin{minipage}[]{180mm} \caption[]{The positions of the points at the corners (see Figure 2). Columns 2 to 9 show the mean pixel positions and their corresponding standard deviations. The top panel uses CCD$\#$2 as the reference and the bottom one CCD$\#$3 as the reference.
All units are in pixels.\label{tab5}} \end{minipage} \setlength{\tabcolsep}{1pt} \small \begin{tabular}{lccccccccc} \\ \hline\noalign{\smallskip} Corner&\multicolumn{2}{c}{3}& \multicolumn{2}{c}{4}&\multicolumn{2}{c}{13}& \multicolumn{2}{c}{15} \\ Date& $<$mean$>$& SD & $<$mean$>$& SD& $<$mean$>$& SD & $<$mean$>$& SD\\ \hline\noalign{\smallskip} 2016&( -7.029,4177.380)&(0.063,0.059)&(4024.407,4178.564)&(0.069,0.089)&(4472.624,4012.683)&(0.042,0.242)&(4464.065,-17.983)&(0.034,0.059)\\ 2017&(-10.275,4173.382)&(0.049,0.095)&(4020.632,4175.989)&(0.059,0.092)&(4467.475,4009.499)&(0.033,0.052)&(4460.441,-21.430)&(0.038,0.056)\\ \noalign{\smallskip}\hline \\ \hline Corner&\multicolumn{2}{c}{2}& \multicolumn{2}{c}{4}&\multicolumn{2}{c}{13}& \multicolumn{2}{c}{14} \\ Date& $<$mean$>$& SD & $<$mean$>$& SD& $<$mean$>$& SD & $<$mean$>$& SD\\ \hline\noalign{\smallskip} 2016&(-363.701,4053.247)&(0.039,0.067)&(-366.320,22.462)&(0.032,0.193)&(24.878,-148.552)&(0.063,0.073)&(4056.526,-135.165)&(0.051,0.082)\\ 2017&(-362.698,4052.952)&(0.040,0.048)&(-365.073,22.179)&(0.042,0.060)&(20.084,-141.796)&(0.066,0.100)&(4050.305,-133.761)&(0.063,0.102)\\ \noalign{\smallskip}\hline \end{tabular} \end{table*} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f10_1.eps} \caption{The change of the inter-chip gap when CCD$\#$2 is used as the reference. The left panel shows the change in horizontal and the right panel shows the change in vertical. The black line shows the change in 2016 and the red line shows the change in 2017.}\label{Fig10_1} \end{figure} \begin{figure} \centering \centering \includegraphics[width=8.5cm, angle=0]{f10_2.eps} \caption{The change of the inter-chip gap when CCD$\#$3 is used as the reference. The left panel shows the change in horizontal and the right panel shows the change in vertical. 
The black line shows the change in 2016 and the red line shows the change in 2017.}\label{Fig10_2} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f10_3.eps} \caption{The change of the inter-chip gap in 2016 and 2017. The left panel shows the change of the vertical gap with respect to CCD$\#$2, and the right panel shows the change of the horizontal gap.}\label{Fig10_3} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm, angle=0]{f10_4.eps} \caption{The change of the roll angle in 2016 and 2017. The left panel shows the change of chip$\#$1 with respect to CCD$\#$2, and the right panel shows the change of chip$\#$4 with respect to CCD$\#$2.}\label{Fig10_4} \end{figure} \begin{table} \centering \begin{minipage}[]{90mm} \caption[]{Statistics of the inter-chip gaps and the roll angles. ``16gap'' is the abbreviation of the gap in 2016 and ``16angle'' is the abbreviation of the angle in 2016. Columns 2 to 9 show the averaged values and the standard deviations of the gaps and the angles. CCD$\#$2 and CCD$\#$3 are used as references. Rows 1 to 2 are in units of arcseconds; the mean values in rows 3 to 4 are in degrees, and their standard deviations are in arcminutes.\label{tab6}} \end{minipage} \setlength{\tabcolsep}{1pt} \small \begin{tabular}{lccccccccc} \\ \hline\noalign{\smallskip} Date& $<$mean$>$& SD & $<$mean$>$& SD& $<$mean$>$& SD & $<$mean$>$& SD\\ Parameter&\multicolumn{2}{c}{CCD1(CCD2)}& \multicolumn{2}{c}{CCD4(CCD3)}&\multicolumn{2}{c}{CCD4(CCD2)}& \multicolumn{2}{c}{CCD1(CCD3)} \\ \hline\noalign{\smallskip} 16gap&66.13&0.03&64.20&0.03&168.58&0.01&165.09&0.02\\ 17gap&64.63&0.04&62.38&0.04&166.53&0.01&164.63&0.02\\ 16angle&0.013&0.06&0.190&0.02&89.879&0.12&89.961&0.06\\ 17angle&0.037&0.03&0.115&0.02&89.901&0.00&89.965&0.00\\ \noalign{\smallskip}\hline \end{tabular} \end{table} \section{Conclusion} \label{sect:con} Although the Bok telescope provides favourable astrometric properties, the GD must be solved carefully.
The distortion solution for BASS on the Bok telescope differs considerably from the previous approach, for which a numerical GD model was sufficient. Instead of the numerical GD model, the analytical GD model and the additional lookup table were used to improve the astrometry of BASS. After the analytical GD correction and the additional lookup table correction are applied, the internal agreement, or precision, of the stellar positions is estimated at about 20 mas, or even better, in each direction. This improvement taps the astrometric potential of the Bok telescope and enables research on reference frames, brown dwarfs, parallaxes, proper motions, and related topics. \section*{Acknowledgements}\emph{} We acknowledge the support of the staff of the National Astronomical Observatory of China and the staff of the Bok telescope at Steward Observatory. This work was supported by the Joint Research Fund in Astronomy (U1431227) under cooperative agreement between the National Natural Science Foundation of China (NSFC) and Chinese Academy of Sciences (CAS), by the National Natural Science Foundation of China (Grant No. 11703008, 11873026), by the Natural Science Foundation of Guangdong Province, China (Grant No. 2016A030313092), by the Opening Project of Guangdong Province Key Laboratory of Computational Science at the Sun Yat-sen University, and partly by the Fundamental Research Funds for the Central Universities. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \bibliographystyle{mnras}
\section{Infinite Nested Pochhammer Sums} \label{sec:1} The goal of this article is to find and prove identities of the following form: \begin{eqnarray*} \sum\limits_{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n \sum\limits_{i=1}^n \frac{\sum\limits_{j=1}^i \frac{1}{j^2}}{i}}{(n+1)!}&=&3 \zeta _3,\\ \sum\limits_{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n \sum\limits_{i=1}^n \frac{\sum\limits_{j=1}^i \frac{1}{j}}{i^3}}{(n+1)!}&=&\frac{2 \mylog{2}^4}{3}-4 \zeta _3 \mylog{2}+\frac{2}{3} \pi ^2 \mylog{2}^2+16 p_4-\frac{13 \pi ^4}{180},\\ \sum\limits_{n=1}^{\infty } \frac{\left(\frac{1}{4}\right)_n \sum\limits_{i=1}^n \frac{1}{i^2}}{(n+1)!}&=&\frac{7 \pi ^2}{18}-\frac{16 C}{3}-6 \mylog{2}^2+2 \pi \mylog{2},\\ \sum\limits_{n=1}^{\infty } \frac{\left(\frac{1}{3}\right)_n \left(\sum\limits_{i=1}^n \frac{1}{3i+1}-\sum\limits_{i=1}^n \frac{1}{3i+2}\right)}{(n+1)!}&=&\frac{\pi }{\sqrt{3}}-\frac{3}{4}-\frac{\sqrt{3 \pi } \Gamma \left(\frac{5}{6}\right)}{\sqrt[3]{2} \Gamma \left(\frac{1}{3}\right)},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n \sum\limits_{i=1}^n \frac{\sum\limits_{j=1}^i \frac{1}{2 j+1}}{2 i+1}}{(2n+1)^2 n!}&=&\frac{1}{96} \pi \left(4 \pi ^2 \mylog{2}-72 \mylog{2}^2+56 \mylog{2}^3-9 \zeta_{3}\right),\\ \sum\limits_{n=1}^{\infty } \frac{ \left(\frac{1}{3}\right)_n\sum\limits_{i=1}^n \frac{\sum\limits_{j=1}^{i} \frac{\sum\limits_{k=1}^{j} \frac{\sum\limits_{l=1}^{k} \frac{\sum\limits_{m=1}^{l} \frac{1}{m}}{l}}{k}}{j}}{i}}{(n+1)!}&=&180 \zeta_{5}-\frac{\pi ^5}{\sqrt{3}},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n \sum _{i=1}^n \frac{1}{i^9}}{(n+1)!}&=&-\frac{2339 \pi ^8 \mylog{2}}{453600}-\frac{79 \pi ^6 \mylog{2}^3}{5670}-\frac{1}{75} \pi ^4 \mylog{2}^5-\frac{8}{945} \pi ^2 \mylog{2}^7+\frac{8 \mylog{2}^9}{2835}\\ &&-\frac{79 \pi ^6 \zeta _{3}}{3780}-\frac{1}{5} \pi ^4 \mylog{2}^2 \zeta _{3}-\frac{4}{9} \pi ^2 \mylog{2}^4 \zeta _{3}+\frac{16}{45} \mylog{2}^6 \zeta _{3}-\frac{4}{3} \pi ^2 \mylog{2} \zeta _{3}^2\\ &&+\frac{16}{3} \mylog{2}^3 \zeta
_{3}^2+\frac{8 \zeta _{3}^3}{3}-\frac{3 \pi ^4 \zeta _{5}}{10}-4 \pi ^2 \mylog{2}^2 \zeta_{5}+8 \mylog{2}^4 \zeta _{5}+48 \mylog{2} \zeta _{3} \zeta _{5}\\ &&-6 \pi ^2 \zeta _{7}+72 \mylog{2}^2 \zeta _{7}+\frac{340 \zeta _{9}}{3},\\ \end{eqnarray*} where $(x)_n$ denotes the Pochhammer symbol, $\mylog{k}:=\log(k),\ \zeta_k:=\sum_{n=1}^\infty\frac{1}{n^k}$ and $C$ denotes Catalan's constant. Note that similar identities were given in~\cite{Liu:2019,BinomialSumIdentities}. Such identities are of interest in physics: in particular, such sums have been studied in order to perform calculations of higher order corrections to scattering processes in particle physics \cite{Ablinger:2013eba,Davydychev:2001,Davydychev:2003mv,Fleischer:1998nb,Jegerlehner:2003,Kalmykov:2000qe,Kalmykov:2007dk,Ogreid:1997bx,Weinzierl:2004bn}. Moreover, similar identities were also considered in \cite{Borwein:2001,Borwein:2000,Lehmer:1985,ZhiWei:2014,Zucker:1985}, and there is a connection to Ap\'ery's proof of the irrationality of $\zeta(3)$ (see \cite{Borwein:1987,Weinzierl:2004bn,ZhiWei:2014,Zucker:1985}). While \cite{BinomialSumIdentities} basically deals with sums of the form $$\sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n}{n!}f(n) \textnormal{ and } \sum _{n=1}^{\infty } \frac{\left(-\frac{1}{2}\right)_n}{n!}f(n),$$ we are going to consider a much wider class of sums within the scope of this paper. In addition, we will state a general computer algebra method to evaluate a large class of sums in terms of nested integrals. Moreover, we will prove a structural theorem about when such a sum can be expressed in terms of so-called \textit{cyclotomic polylogarithms}. The main purpose of this article is to present methods which can be automated; hence, not all identities presented in this paper are new. To make more precise which class of sums we are considering, some definitions are in order.
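Identities of this kind can be checked numerically before any formal proof is attempted. The sketch below sums the third identity above term by term, updating the ratio $(1/4)_n/(n+1)!$ incrementally; the value of Catalan's constant is hard-coded, and the truncation point is chosen only for illustration.

```python
import math

def lhs(N):
    """Partial sum of  sum_{n>=1} (1/4)_n * S_2(n) / (n+1)!  up to n = N."""
    total, s2 = 0.0, 0.0
    term = 0.25 / 2.0                 # (1/4)_1 / 2!
    for n in range(1, N + 1):
        s2 += 1.0 / n**2              # S_2(n) = sum_{i<=n} 1/i^2
        total += term * s2
        term *= (n + 0.25) / (n + 2)  # step (1/4)_n/(n+1)! -> (1/4)_{n+1}/(n+2)!
    return total

CATALAN = 0.9159655941772190
rhs = (7 * math.pi**2 / 18 - 16 * CATALAN / 3
       - 6 * math.log(2)**2 + 2 * math.pi * math.log(2))
```

The terms fall off like $n^{-7/4}$, so a few hundred thousand terms already agree with the closed form to better than $10^{-3}$.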
Let $r\in\mathbb N$ and let $a_i,c_i\in\mathbb N$ and $b_i\in\mathbb N_0$ for $1\leq i\leq r$; then we call $\S{(a_1,b_1,c_1),\ldots,(a_r,b_r,c_r)}n$, defined as \begin{eqnarray}\label{cyclosum} \S{(a_1,b_1,c_1),\ldots,(a_r,b_r,c_r)}n:=\sum_{i_1=1}^n\frac{1}{(a_1 i_1+b_1)^{c_1}}\sum_{i_2=1}^{i_1}\frac{1}{(a_2 i_2+b_2)^{c_2}}\cdots\sum_{i_r=1}^{i_{r-1}}\frac{1}{(a_r i_r+b_r)^{c_r}}, \end{eqnarray} a \textit{cyclotomic harmonic sum} (compare~\cite{Ablinger:2013jta,Ablinger:2013hcp,Ablinger:2013eba,Ablinger:2011te}) of depth $r$. Note that if $a_i=1$ and $b_i=0$ for $1\leq i\leq r$ we write \begin{eqnarray}\label{hsum} \S{c_1,c_2,\ldots,c_r}n:=\S{(1,0,c_1),(1,0,c_2),\ldots,(1,0,c_r)}n, \end{eqnarray} and we call $\S{c_1,c_2,\ldots,c_r}n$ a \textit{multiple harmonic sum} (see, e.g.,~\cite{Ablinger:2013hcp,Ablinger:2013cf,Bluemlein1999,Bluemlein2000,Vermaseren1998}). The sums we are considering take the form \begin{eqnarray}\label{Pochhammersum} \sum_{n=1}^\infty \frac{(p)_n}{(an+b)^c(n+d)!}f(n), \end{eqnarray} where $a,b,c,d\in\mathbb N_0,\ p\in\mathbb R$ and $f(n)$ is a cyclotomic harmonic sum. We will refer to~(\ref{Pochhammersum}) as a \textit{Pochhammer sum}. We are going to find representations of these Pochhammer sums in terms of special classes of integrals (that are similar to the iterated integrals in~\cite{Ablinger:2014} and correspond to the iterated integrals in~\cite{BinomialSumIdentities}). These classes of integrals are iterated integrals over \textit{hyperexponential} functions.
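The nested sums of Equation~(\ref{cyclosum}) are straightforward to evaluate exactly; a minimal (unoptimized) Python sketch using rational arithmetic:

```python
from fractions import Fraction

def cyclo_S(triples, n):
    """Cyclotomic harmonic sum S_{(a1,b1,c1),...,(ar,br,cr)}(n),
    following the recursive nesting of the definition, with exact rationals."""
    if not triples:
        return Fraction(1)
    (a, b, c), rest = triples[0], triples[1:]
    # Outer sum over i, with the remaining triples summed up to i.
    return sum(cyclo_S(rest, i) / Fraction(a * i + b) ** c
               for i in range(1, n + 1))
```

For instance, `cyclo_S([(1, 0, 1)], n)` is the harmonic number $S_1(n)$, and `cyclo_S([(1, 0, 1), (1, 0, 2)], n)` is the sum $S_{1,2}(n)$ appearing in the first identity of this section.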
More precisely, a function $f(x)$ is called \textit{hyperexponential} if $$\frac{f^\prime(x)}{f(x)}=q(x),$$ where $q(x)$ is a rational function in $x.$ Then an \textit{iterated integral} over the hyperexponential functions $f_1(x),f_2(x),\ldots,f_k(x)$ is defined recursively by $$ \GL{f_1(\tau),f_2(\tau),\cdots,f_k(\tau),x}=\int_0^xf_1(\tau_1)\GL{f_2(\tau),\cdots,f_k(\tau),\tau_1}d\tau_1, $$ with the special case $\GL{x}=1.$ Since some letters might have a non-integrable singularity at the base point $x=0$ we consistently define $$ \GL{f(\tau),x}:=\int_0^x \left(f(t)-\frac{c}{t}\right)dt+c\log(x), $$ where $c$ takes the unique value such that the integrand on the right hand side is integrable at $t=0.$ It is important to note that this definition preserves the derivative $\frac{d}{dx}\GL{f(\tau),x}~=~f(x).$ In general, we set \begin{eqnarray*} \GL{f_1(\tau),\ldots,f_j(\tau),x}&:=&\int_0^x \left(f_1(t)\GL{f_2(\tau),\ldots,f_j(\tau),t}-\sum_{i=0}^kc_i\frac{\log(t)^i}{t}\right)dt\\&&+\sum_{i=0}^k \frac{c_i}{i+1}\log(x)^{i+1}, \end{eqnarray*} where $k$ and $c_0,\ldots,c_k$ are chosen to remove any non-integrable singularity. Again the result is unique and retains $$\frac{d}{dx}\GL{f_1(\tau),\ldots,f_j(\tau),x}=f_1(x)\GL{f_2(\tau),\ldots,f_j(\tau),x}.$$ In the following we will define a subclass of iterated integrals (compare~\cite{Ablinger:2011te}). For $a \in \mathbb N_0$ and $b \in \mathbb N_0$ with $b < \varphi(a)$ whenever $a\neq0,$ where $\varphi$ denotes Euler's totient function, we define \begin{eqnarray} &&f_a^b:(0,1)\mapsto \mathbb R\nonumber\\ &&f_a^b(x)=\left\{ \begin{array}{ll} \frac{1}{x}, & \textnormal{if }a=b=0 \\ \frac{x^b}{\Phi_a(x)}, & \textnormal{otherwise}, \end{array} \right.
\nonumber \end{eqnarray} where $\Phi_a(x)$ denotes the $a$th cyclotomic polynomial, e.g.,\ the first cyclotomic polynomials are given by \begin{eqnarray*} \Phi_1(x) &=& x - 1 \\ \Phi_2(x) &=& x + 1 \\ \Phi_3(x) &=& x^2 + x + 1 \\ \Phi_4(x) &=& x^2 + 1 \\ \Phi_5(x) &=& x^4 + x^3 + x^2 + x+ 1~~\text{etc.} \end{eqnarray*} Now, let $m_i=(a_i,b_i) \in \mathbb N_0^2,$ $b_i<\varphi(a_i)$ for $a_i\neq0;$ for $x\in (0,1)$ we define \textit{cyclotomic polylogarithms} recursively as follows (compare, e.g.,~\cite{Ablinger:2011te}): \begin{eqnarray} \H{}{x}&=&1,\nonumber\\ \H{m_1,\ldots,m_k}{x} &=&\left\{ \begin{array}{ll} \frac{1}{k!}(\log{x})^k,& \textnormal{if }m_1=\cdots=m_k=(0,0)\\ &\\ \int_0^x{f_{a_1}^{b_1}(y) \H{m_2,\ldots,m_k}{y}dy},& \textnormal{otherwise}. \end{array} \right. \nonumber \end{eqnarray} We call $k$ the weight of a cyclotomic polylogarithm and in case the limit exists we extend the definition to $x=1$ and write $$ \H{m_1,\ldots,m_k}{1}:=\lim_{x\to1}\H{m_1,\ldots,m_k}{x}. $$ Note that restricting the alphabet to the letters $(0,0),(1,0)$ and $(2,0)$ leads to \textit{harmonic polylogarithms}~\cite{Remiddi:1999ew}. The proposed strategy to find and prove Pochhammer sum identities (following the method proposed in~\cite{BinomialSumIdentities}) reads as follows: \begin{description}\label{genmethod} \item[Step 1:] Rewrite the sums in terms of nested integrals. \item[Step 2:] Rewrite the integrals in terms of cyclotomic polylogarithms (see~\cite[Section~4]{BinomialSumIdentities}). \item[Step 3:] Provide a sufficiently strong database to eliminate relations among these cyclotomic polylogarithms and find reduced expressions (see Section~4). \end{description} This article focuses on Step~1, and we will present three different possibilities to find integral representations of Pochhammer sums. In order to accomplish this task, we view infinite sums as specializations of generating functions~\cite{BinomialSumIdentities,Ablinger:2014}.
Namely, if we are given an integral representation of the generating function of a sequence, then we can obtain an integral representation for the infinite sum over that sequence if the limit $x \to 1$ can be carried out. This approach to infinite sums can be summarized by the following formula: \[ \sum_{i=1}^\infty f(i) = \lim_{x\to1}\sum_{i=1}^\infty x^if(i). \] For details on Step~2 (implemented in the command \texttt{SpecialGLToH} in \texttt{HarmonicSums}) and on Step~3 we refer to~\cite{BinomialSumIdentities}. It has to be mentioned that we computed and used relation tables of harmonic polylogarithms at one up to weight~12; for cyclotomic polylogarithms of cyclotomy~4 and~6 we computed and used relation tables of cyclotomic polylogarithms at one up to weight~6. The size of these tables amounts to several gigabytes. Note that the full strategy has been implemented in the Mathematica package {\tt HarmonicSums}\footnote{The package {\tt HarmonicSums} can be downloaded at\\ \url{http://www.risc.jku.at/research/combinat/software/HarmonicSums}.}\cite{HarmonicSums}.
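For letters that remain integrable on $[0,x]$, individual cyclotomic polylogarithms can also be checked by direct quadrature, independently of any table. The following sketch is a naive iterated-Simpson evaluator, not the method of \texttt{HarmonicSums}; it knows only the first few cyclotomic polynomials and makes no attempt to handle the singular letters such as $(0,0)$ at the base point.

```python
import math

PHI = {1: lambda x: x - 1, 2: lambda x: x + 1,
       3: lambda x: x * x + x + 1, 4: lambda x: x * x + 1}

def letter(a, b, x):
    # f_a^b(x) = x^b / Phi_a(x); the singular letter (0,0) is excluded here.
    return x ** b / PHI[a](x)

def H(word, x, steps=200):
    """Naive iterated composite-Simpson evaluation of H_{m1,...,mk}(x)
    for letters that are finite at the endpoints (e.g. a >= 2)."""
    if not word:
        return 1.0
    if x == 0.0:
        return 0.0
    (a, b), rest = word[0], word[1:]
    h = x / steps
    def g(t):
        return letter(a, b, t) * H(rest, t, steps)
    total = g(0.0) + g(x)
    for i in range(1, steps):
        total += (4.0 if i % 2 else 2.0) * g(i * h)
    return total * h / 3.0
```

For example $H_{(2,0)}(x)=\log(1+x)$, $H_{(4,0)}(1)=\pi/4$, and the shuffle relation gives $H_{(2,0),(2,0)}(1)=\log(2)^2/2$; the quadrature reproduces all three to high accuracy, though the recursion recomputes inner integrals and scales exponentially with the weight.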
To complete this introduction we define a number of constants that will appear throughout this article: \begin{tabular}{lll} $\mylog{2}:=\log(2)$ & $\mylog{3}:=\log(3)$ & $\zeta_3:=S_{3}(\infty );$\\ $\zeta_5:=S_{5}(\infty );$ & $\zeta_7:=S_{7}(\infty );$ &$\zeta_9:=S_{9}(\infty );$\\ $\zeta_{11}:=S_{11}(\infty );$&$C:= \text{Catalan};$ & $p_ 4:= \text{Li}_ 4\left(\frac{1}{2}\right);$\\ $p_ 5:= \text{Li}_ 5\left(\frac{1}{2}\right);$ & $p_ 6:= \text{Li}_ 6\left(\frac{1}{2}\right);$ & $p_ 7:= \text{Li}_ 7\left(\frac{1}{2}\right);$\\ $p_ 8:= \text{Li}_ 8\left(\frac{1}{2}\right);$ & $p_ 9:= \text{Li}_ 9\left(\frac{1}{2}\right);$ & $s_1:=S_{-5,-1}(\infty );$\\ $s_2:=S_{5,-1,-1}(\infty );$ & $s_3:=S_{-5,1,1}(\infty );$ & $s_4:=S_{5,3}(\infty );$\\ $s_5:=S_{-7,-1}(\infty );$ & $s_6:=S_{-5,-1,-1,-1}(\infty );$ & $s_7:=S_{-5,-1,1,1}(\infty );$\\ $h_1:=H_{(3,0),(0,0)}(1);$ & $h_2:=H_{(3,0),(0,0),(1,0)}(1); $ & $h_3:=H_{(3,0),(0,0),(0,0),(0,0)}(1); $\\ $h_4:=H_{(3,0),(0,0),(1,0),(1,0)}(1); $ & $h_5:=H_{(5,1)}(1); $ & $h_6:=H_{(5,3)}(1); $\\ $h_7:=H_{(5,1),(0,0)}(1); $ & $h_8:=H_{(5,2),(0,0)}(1); $ \end{tabular} Here we extend the definition~(\ref{hsum}) to negative indices by $$\S{c_1,c_2,\ldots,c_r}n:=\sum_{i_1=1}^n\frac{\sign{c_1}^{i_1}}{i_1^{\abs{c_1}}}\sum_{i_2=1}^{i_1}\frac{\sign{c_2}^{i_2}}{i_2^{\abs{c_2}}}\cdots\sum_{i_r=1}^{i_{r-1}}\frac{\sign{c_r}^{i_r}}{i_r^{\abs{c_r}}}.$$ Note that these constants do not possess any further relations induced by the algebraic properties given in~\cite[Section~4]{BinomialSumIdentities}, namely shuffle, stuffle, multiple argument and duality relations. In the following sections we will use different methods to compute integral representations of the generating function. In Section~\ref{sec:2} we will use holonomic closure properties, while in Sections~\ref{sec:3} and~\ref{sec:4} we will use rewrite rules.
In Section~\ref{sec:4} we will consider a subclass of Pochhammer sums for which we can directly find representations in terms of cyclotomic polylogarithms, i.e.,\ we do not have to deal with Step~2 of the proposed strategy. \section{Using Closure Properties of Holonomic Functions to Derive Generating Functions} \label{sec:2} In the following we repeat important definitions and properties (compare~\cite{Ablinger:2014,InvMellin,KauersPaule:2011}). Let $\mathbb K$ be a field of characteristic~0. A function $f=f(x)$ is called \textit{holonomic} (or \textit{D-finite}) if there exist polynomials $p_d(x),p_{d-1}(x),\ldots,p_0(x)\in \mathbb K[x]$ (not all $p_i$ being $0$) such that the following holonomic differential equation holds: \begin{equation} p_d(x)f^{(d)}(x)+\cdots+p_1(x)f'(x)+p_0(x)f(x)=0. \end{equation} We emphasize that the class of holonomic functions is rather large due to its closure properties. Namely, if we are given two such differential equations that contain holonomic functions $f(x)$ and $g(x)$ as solutions, one can compute holonomic differential equations that contain $f(x)+g(x)$, $f(x)g(x)$ or $\int_0^x f(y)dy$ as solutions. In other words, any composition of these operations over known holonomic functions $f(x)$ and $g(x)$ is again a holonomic function $h(x)$. In particular, if the holonomic differential equations are given for the inner building blocks $f(x)$ and $g(x)$, the holonomic differential equation of $h(x)$ can also be computed.\\ Of special importance is the connection to recurrences. A sequence $(f_n)_{n\geq0}$ with $f_n\in\mathbb K$ is called holonomic (or \textit{P-finite}) if there exist polynomials $p_d(n),p_{d-1}(n),\ldots,p_0(n)\in \mathbb K[n]$ (not all $p_i$ being $0$) such that the holonomic recurrence \begin{equation} p_d(n)f_{n+d}+\cdots+p_1(n)f_{n+1}+p_0(n)f_n=0 \end{equation} holds for all $n\in\mathbb N$ (from a certain point on).
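As a tiny example of a P-finite sequence, the harmonic numbers $f_n=S_1(n)$ satisfy $(n+2)f_{n+2}-(2n+3)f_{n+1}+(n+1)f_n=0$, which follows directly from $f_{n+1}-f_n=1/(n+1)$. An exact check in Python:

```python
from fractions import Fraction

def S1(n):
    # Harmonic number S_1(n) = sum_{i=1}^n 1/i, computed exactly.
    return sum(Fraction(1, i) for i in range(1, n + 1))

def recurrence_holds(n):
    # Holonomic recurrence (n+2) f_{n+2} - (2n+3) f_{n+1} + (n+1) f_n = 0.
    return ((n + 2) * S1(n + 2) - (2 * n + 3) * S1(n + 1)
            + (n + 1) * S1(n)) == 0
```

Using rationals instead of floats makes the verification exact rather than approximate, which matters when a recurrence is being guessed from data.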
In the following we utilize the fact that holonomic functions are precisely the generating functions of holonomic sequences: if $f(x)$ is holonomic, then the coefficients $f_n$ of the formal power series expansion $$f(x) = \sum\limits_{n=0}^{\infty} f_n x^n$$ form a holonomic sequence. Conversely, for a given holonomic sequence $(f_n)_{n\geq0}$, the function defined by the above sum (i.e.,\ its generating function) is holonomic (this is true in the sense of formal power series, even if the sum has a zero radius of convergence). Note that given a holonomic differential equation for a holonomic function $f(x)$ it is straightforward to construct a holonomic recurrence for the coefficients of its power series expansion. For a recent overview of this holonomic machinery and further literature we refer to~\cite{KauersPaule:2011}. Since cyclotomic sums are holonomic sequences with respect to~$n$ and the iterated integrals we consider are holonomic functions with respect to~$x,$ we can use holonomic closure properties to derive integral representations of Pochhammer sums. Suppose we are given a Pochhammer sum $$\sum_{n=1}^\infty \frac{(p)_n}{(an+b)^c(n+d)!}g(n),$$ where $g(n)$ is a cyclotomic sum. We proceed as proposed on page~\pageref{genmethod}: define $$f_n:=\frac{(p)_n}{(an+b)^c(n+d)!}g(n)$$ and try to find an iterated integral representation of $$f(x):=\sum_{n=1}^\infty x^n f_n$$ using the following steps: \begin{enumerate} \item Compute a holonomic recurrence equation for $(f_n)_{n\geq0}.$ \item Compute a holonomic differential equation for $f(x).$ \item Compute initial values for the differential equation. \item Solve the differential equation to get a closed form representation for $f(x).$ \end{enumerate} This procedure is implemented in the package \texttt{HarmonicSums}\ and can be called by $$ \textbf{ComputeGeneratingFunction}\left[\frac{(p)_n}{(an+b)^c(n+d)!}g(n),x,\{n,1,\infty \}\right].
$$ We will succeed in finding a closed form representation for $f(x)$ in terms of iterated integrals if we can find a full solution set of the derived differential equation. The command~\texttt{ComputeGeneratingFunction} internally uses the differential equation solver implemented in~\texttt{HarmonicSums}, which finds all solutions of holonomic differential equations that can be expressed in terms of iterated integrals over hyperexponential alphabets~\cite{InvMellin,Ablinger:2014,Bronstein,Singer:99,Petkov:92}; these solutions are called d'Alembertian solutions~\cite{Abramov:94}. In addition, for differential equations of order two, it finds all solutions that are Liouvillian~\cite{InvMellinKovacic,Kovacic,Singer:99}. If we succeed in finding a closed form representation for $f(x)$ in terms of iterated integrals, we proceed with Step~2 and Step~3 of the proposed strategy. Hence we send $x\to 1$ and try to transform these iterated integrals to expressions in terms of cyclotomic polylogarithms, and finally we use relations between cyclotomic polylogarithms at one to derive an expression in terms of known constants. The Pochhammer sum \begin{eqnarray}\label{GeneralExampleSum} \sum _{n=1}^{\infty } \frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!} \end{eqnarray} will serve as a representative example to illustrate all three different methods that are presented in this article. First, we work out the sum using the method presented above. \begin{example}\label{GeneralExample} We consider the sum~(\ref{GeneralExampleSum}) and start by deriving a recurrence for $$f_n:=\frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!};$$ we find: \begin{eqnarray*} &&(2+n) (4+n)^2 (1+2 n) (3+2 n) f_{n}-2 (1+n) (5+n)^2 (3+2 n) (5+2 n) f_{n+1}\\&&+4 (1+n) (2+n) (3+n) (6+n)^2 f_{n+2}=0.
\end{eqnarray*} Using the closure properties of holonomic functions we find the following differential equation \begin{eqnarray*} &&96 f(x)+3 (-250+343 x) f'(x)+3 \left(144-590 x+481 x^2\right) f''(x)\\&&+x \left(352-942 x+599 x^2\right) f^{(3)}(x)+8 x^2 \left(9-20 x+11 x^2\right) f^{(4)}(x)\\&&+4 (-1+x)^2 x^3 f^{(5)}(x)=0, \end{eqnarray*} satisfied by $$\sum _{n=1}^{\infty } x^n\frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}.$$ We can solve this differential equation for example using the differential equation solver implemented in \texttt{HarmonicSums}: \begin{eqnarray*} \textbf{SolveDE}\bigl[&&\hspace{-0.7cm}96 f[x]+3 (-250+343 x) f'[x]+3 \left(144-590 x+481 x^2\right) f''[x]\\&&\hspace{-0.7cm}+x \left(352-942 x+599 x^2\right) f^{(3)}[x]+8 x^2 \left(9-20 x+11 x^2\right) f^{(4)}[x]\\ &&\hspace{-0.7cm}+4 (-1+x)^2 x^3 f^{(5)}[x]==0,f[x],x\bigr]. \end{eqnarray*} By checking initial values we find \small \begin{eqnarray} &&\frac{1}{7350 x^3}\Biggl( -1776+808 x-319 x^2-888 x^3-600 x^4+6656 \biggl[\text{G}\left(\frac{\sqrt{1-\tau }}{\tau };x\right)-\text{G}\left(\frac{1}{\tau };x\right)\biggr]\nonumber\\ &&+3360\Biggl[ \text{G}\left(\frac{1}{\tau },\frac{1}{\tau };x\right)-\text{G}\left(\frac{1}{\tau },\frac{\sqrt{1-\tau }}{\tau };x\right) +\text{G}\left(\frac{\sqrt{1-\tau }}{\tau },\frac{1}{1-\tau };x\right)+ \text{G}\left(\frac{\sqrt{1-\tau }}{\tau },\frac{1}{\tau };x\right)\nonumber\\ &&- \text{G}\left(\frac{\sqrt{1-\tau }}{\tau },\frac{\sqrt{1-\tau }}{\tau };x\right)\biggr] +4\sqrt{1-x}\biggl(\left(404-218 x-111 x^2-75 x^3\right) \biggl[\text{G}\left(\frac{1}{1-\tau };x\right)\nonumber\\ &&+ \text{G}\left(\frac{1}{\tau };x\right)- \text{G}\left(\frac{\sqrt{1-\tau }}{\tau };x\right)\biggr]+222 \left(2-x-x^2\right)\biggr) \Biggr). 
\label{ExampleGLRep} \end{eqnarray} \normalsize At this point we send $x\to 1$ and use the command \texttt{SpecialGLToH} in \texttt{HarmonicSums}\ to derive an expression in terms of cyclotomic polylogarithms (compare~\cite[Section 3]{BinomialSumIdentities}). This leads to \begin{eqnarray*} &&-\frac{9367}{7350}-\frac{3328 H_{(0,0)}(1)}{3675}+\frac{8}{35} H_{(0,0)}(1){}^2-\frac{64 H_{(2,0)}(1)}{3675}-\frac{32}{35} H_{(2,0)}(1){}^2-\frac{16}{35} H_{(0,0),(0,0)}(1)\\&&-\frac{32}{35} H_{(0,0),(1,0)}(1)-\frac{32}{35} H_{(2,0),(0,0)}(1)+\frac{64}{35} H_{(2,0),(1,0)}(1)+\frac{64}{35} H_{(2,0),(2,0)}(1). \end{eqnarray*} Finally, we can use relations between cyclotomic polylogarithms at one (compare~\cite[Section 4]{BinomialSumIdentities}) to derive \begin{eqnarray}\label{ExampleResult} \sum _{n=1}^{\infty } \frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}=\frac{-9367+560 \pi ^2+6720 \mylog{2}^2-128 \mylog{2}}{7350}. \end{eqnarray} Note that in the last step of this example we are actually only dealing with harmonic polylogarithms (see~\cite{Remiddi:1999ew}). \end{example} Let us now list several identities that could be computed using this method: \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{\left(\frac{1}{3}\right)_n S_{1,1,1}(n)}{(n+1)!}&=&18 \zeta _3-\frac{\pi ^3}{\sqrt{3}},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{3}\right)_n S_2(n)}{(n+1)!}&=&\frac{5 \pi ^2}{16}+\frac{27 h_1}{8}+\frac{3}{8} \sqrt{3} \pi l_3-\frac{27 l_3^2}{16},\\ \sum\limits_{n=1}^{\infty } \frac{\left(\frac{1}{4}\right)_n \S{2}{n}}{(n+1)!}&=&-\frac{16 C}{3}+\frac{7 \pi ^2}{18}-6 \mylog{2}^2+2 \pi \mylog{2},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{(2,1,1)}(n)}{(2 n+1)^2 n!}&=&\frac{1}{4} \pi \mylog{2} (3 \mylog{2}-2),\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{(2,1,1),(2,1,1)}(n)}{(2n+1)^2 n!}&=&\frac{1}{96} \pi \left(4 \pi ^2 \mylog{2}-72 \mylog{2}^2+56 \mylog{2}^3-9 \zeta_{3}\right). 
\end{eqnarray*} Several formulas that can be found in~\cite{Liu:2019} can also be discovered and proved using the described method. Here we are going to list some of them: \begin{eqnarray*} \sum _{n=0}^{\infty } \frac{\left(\frac{1}{2}\right)_n \left(S_1(n){}^2-S_2(n)\right)}{(n+1)!}&=&8 l_2^2+\frac{2 \pi ^2}{3},\\ \sum _{n=0}^{\infty } \frac{\left(\frac{1}{2}\right)_n \left(S_1(n){}^3-3 S_1(n) S_2(n)+2 S_3(n)\right)}{(n+1)!}&=&24 \zeta _3+16 l_2^3+4 \pi ^2 l_2,\\ \sum _{n=0}^{\infty } \frac{\left(\frac{1}{4}\right)_n \left(S_1(n){}^3-3 S_1(n) S_2(n)+2 S_3(n)\right)}{(n+1)!}&=&-96 C l_2+16 \pi C+72 \zeta _3+36 l_2^3-18 \pi l_2^2\\&&+13 \pi ^2 l_2-\frac{9 \pi ^3}{2},\\ \sum _{n=0}^{\infty } \frac{\left(\frac{1}{4}\right)_n \left(S_1(n){}^2-S_2(n)\right)}{(n+1)!}&=&288 C l_2+48 \pi C+216 \zeta _3+108 l_2^3+54 \pi l_2^2\\&&+39 \pi ^2 l_2+\frac{27 \pi ^3}{2},\\ \sum _{n=0}^{\infty } \frac{\left(\frac{1}{4}\right)_n \left(S_1(n){}^2-S_2(n)\right)}{(n+1)!}&=&-\frac{32 C}{3}+12 l_2^2-4 \pi l_2+\frac{13 \pi ^2}{9},\\ \sum _{n=0}^{\infty } \frac{\left(\frac{3}{4}\right)_n \left(S_1(n){}^2-S_2(n)\right)}{(n+1)!}&=&32 C+36 l_2^2+12 \pi l_2+\frac{13 \pi ^2}{3},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_{n-1} \left(S_1(n){}^2-2 S_1(n)+S_2(n)\right)}{n!}&=&8. \end{eqnarray*} Note that this method can also be used to compute integral representations of sums of the form \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{x^n (3)_n S_{3}(n)}{n^2 n!}. \end{eqnarray*} Here we find \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{x^n (3)_n S_{3}(n)}{n^2 n!}&=&H_{(0,0),(1,0),(0,0),(0,0),(1,0)}(x)-\frac{3 \text{Li}_2(x){}^2}{4}-\frac{\text{Li}_3(x)}{2 (-1+x)}\\&&-\frac{3}{2} \log (1-x) \text{Li}_3(x)+\frac{3 \text{Li}_4(x)}{2}+\text{Li}_5(x).
\end{eqnarray*} Sending, for instance, $x\to \frac{1}{2}$ we get: \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)^n (3)_n S_{3}(n)}{n^2 n!}&=&-\frac{\pi^4}{192}-\frac{\pi^2l_2}{12}+\frac{\pi^4l_2}{288}-\frac{1}{16}\pi^2l_2^2+\frac{l_2^3}{6}-\frac{5}{72}\pi^2l_2^3+\frac{l_2^4}{16}+\frac{11l_2^5}{120}\\ &&+\frac{3p_4}{2}+3l_2p_4+4p_5+\frac{7\zeta_3}{8}-\frac{7\pi^2\zeta_3}{48}+\frac{21l_2\zeta_3}{16}+\frac{7}{8}l_2^2\zeta_3-\frac{81\zeta_5}{64}. \end{eqnarray*} Finally, we consider \begin{eqnarray}\label{S11Example} \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{11}(n)}{(n+1)!}. \end{eqnarray} Proceeding as proposed, we find a differential equation of order 16: \small \begin{eqnarray*} &&430080 f(x)+210 (-4096+1592275 x) f'(x)\\ &&+42 \left(-33554432-6356812 x+407269601 x^2\right) f''(x)\\ &&+\left(671088640-33746963856 x-8037305736 x^2+192200072453 x^3\right) f^{(3)}(x)\\ &&+x \left(11047661360-204994450032 x-61653276602 x^2+771941124781 x^3\right) f^{(4)}(x)\\ &&+13 x^2 \left(3812823056-38280317036 x-13991732902 x^2+109483643797 x^3\right) f^{(5)}(x)\\ &&+26 x^3 \left(3555308396-22952549314 x-9866689087 x^2+53652573053 x^3\right) f^{(6)}(x)\\ &&+572 x^4\left(152216474-697858881 x-344085550 x^2+1394066246 x^3\right) f^{(7)}(x)\\ &&+143 x^5 \left(323583896-1123610312 x-623388464 x^2+1975831409 x^3\right) f^{(8)}(x)\\ &&+143 x^6 \left(103854560-285705072 x-175728306 x^2+451597351 x^3\right) f^{(9)}(x)\\ &&+286 x^7 \left(10492016-23636810 x-15928139 x^2+34105982 x^3\right) f^{(10)}(x)\\ &&+3 x^8 \left(130094536-246156812 x-180008400 x^2+328091581 x^3\right) f^{(11)}(x)\\ &&+x^9 \left(32842216-53242200 x-41920782 x^2+66163633 x^3\right) f^{(12)}(x)\\ &&+x^{10} \left(1762640-2487876 x-2095294 x^2+2904173 x^3\right) f^{(13)}(x)\\ &&+2 x^{11} \left(28900-35986 x-32239 x^2+39703 x^3\right) f^{(14)}(x)\\ &&+4 x^{12} \left(262-291 x-276 x^2+305 x^3\right) f^{(15)}(x)\\ &&+8 (-1+x)^2 x^{13} (1+x) f^{(16)}(x)=0.
\end{eqnarray*} \normalsize Solving this differential equation is possible but takes quite some time, so this indicates that we might look for more efficient methods to find generating function representations for Pochhammer sums of that kind. In the following sections we will introduce rewrite rules, which will allow us to compute generating function representations of Pochhammer sums without having to solve differential equations. \section{Using Rewrite Rules to derive Generating Functions} \label{sec:3} In this section we are going to state rewrite rules which will allow us to find integral representations of the generating functions of Pochhammer sums without having to solve differential equations. We will summarize these rewrite rules in the following lemmas. We start with the base cases where there is no inner sum present: \begin{lemma}\label{GLBaseCase} Let ${\mathbb K}$ be a field of characteristic 0. Then the following identities hold in the ring ${\mathbb K}[[x]]$ of formal power series with $a,c\in\mathbb N$ and $b,d\in\mathbb Z$: \begin{eqnarray} \sum_{n=1}^\infty x^n\frac{(p)_n}{(n+d)!} &=& \frac{(1-x)^{d-p}}{x^d}(p)_{-d},\ d<0, \label{eq:GenFunPochhammerSum01}\\ \sum_{n=1}^\infty x^n\frac{(p)_n}{n!} &=& (1-x)^{-p}-1,\label{eq:GenFunPochhammerSum02}\\ \sum_{n=1}^\infty x^n\frac{(p)_n}{(n+d)!} &=& (1-x)^{d-p}\frac{p}{d!x^d}\int_0^x (1-t)^{p-d-1}t^d dt,\ d>0, \label{eq:GenFunPochhammerSum03}\\ \sum_{n=1}^\infty x^n\frac{(p)_n}{(a\,n+b)^c(n+d)!} &=& \frac{x^{-\frac{b}{a}}}{a}\int_0^xt^{\frac{b}{a}-1}\sum_{n=1}^\infty t^n \frac{(p)_n}{(a\,n+b)^{c-1}(n+d)!}dt.\label{eq:GenFunPochhammerSum04} \end{eqnarray} \end{lemma} In case an inner sum is present we will make use of the following three lemmas.
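The base cases of Lemma~\ref{GLBaseCase} are easy to spot-check numerically by truncated summation. The Python sketch below (my own illustration; the values $p=1/2$ and $x=0.3$ are arbitrary choices) verifies the $d=0$ identity $\sum_{n\geq1}x^n(p)_n/n!=(1-x)^{-p}-1$ as well as the instance $d=-1$ of~(\ref{eq:GenFunPochhammerSum01}), accumulating each term iteratively so that the rapidly growing factor $(p)_n$ never appears on its own:

```python
p, x = 0.5, 0.3    # arbitrary test values with |x| < 1

# d = 0: sum_{n>=1} x^n (p)_n / n! = (1-x)^(-p) - 1
term, s0 = 1.0, 0.0
for n in range(1, 200):
    term *= x * (p + n - 1) / n          # one factor of x^n (p)_n / n! at a time
    s0 += term

# d = -1: sum_{n>=1} x^n (p)_n / (n-1)! = (1-x)^(-1-p) x (p)_1 = p x (1-x)^(-1-p)
term, s1 = 1.0, 0.0
for n in range(1, 200):
    term *= x * (p + n - 1) / (1 if n == 1 else n - 1)   # (n-1)! in the denominator
    s1 += term

print(abs(s0 - ((1 - x) ** (-p) - 1)) < 1e-12,
      abs(s1 - p * x * (1 - x) ** (-1 - p)) < 1e-12)
```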
\begin{lemma}\label{GLc0dl0} Let ${\mathbb K}$ be a field of characteristic 0 and let $f:\mathbb N \to {\mathbb K}.$ Then the following identity holds in the ring ${\mathbb K}[[x]]$ of formal power series with $d<0$: \begin{eqnarray} &&\sum_{n=1}^\infty x^n\frac{(p)_n}{(n+d)!}\sum_{i=1}^nf(i) = \label{eq:GenFunPochhammerSum2}\\ &&\hspace{1cm}=\frac{(1-x)^{d-p}}{x^d}\left((p)_{-d}\sum_{i=1}^{-d}f(i)+\int_0^x\frac{(1-t)^{p-d-1}}{t^{1-d}}\sum_{n=1}^\infty t^n \frac{(p)_n}{(n+d-1)!}f(n)dt\right).\nonumber \end{eqnarray} \end{lemma} \begin{proof} Both sides satisfy the following initial value problem for $y(x),$ which has a unique solution near $x=0:$ \begin{eqnarray*} y'(x)-\frac{px-d}{(1-x)x}y(x)&=&\frac{1}{(1-x)x}\sum_{n=1}^{\infty}x^n\frac{(p)_n}{(n+d-1)!}f(n),\\ y(0)&=&0. \end{eqnarray*} \end{proof} \begin{lemma}\label{GLc0dgeq0} Let ${\mathbb K}$ be a field of characteristic 0 and let $f:\mathbb N \to {\mathbb K}.$ Then the following identity holds in the ring ${\mathbb K}[[x]]$ of formal power series with $d\geq0$: \begin{eqnarray} &&\sum_{n=1}^\infty x^n\frac{(p)_n}{(n+d)!}\sum_{i=1}^nf(i) =\label{eq:GenFunPochhammerSum1}\\ &&\hspace{2cm}=\frac{(1-x)^{d-p}}{x^d}\int_0^xt^{d-1}(1-t)^{p-d-1}\sum_{n=1}^\infty t^n \frac{(p)_n}{(n+d-1)!}f(n)dt.\nonumber \end{eqnarray} \end{lemma} \begin{proof} Both sides satisfy the following initial value problem for $y(x),$ which has a unique solution near $x=0:$ \begin{eqnarray*} y'(x)-\frac{p\,x-d}{(1-x)x}y(x)&=&\frac{1}{(1-x)x}\sum_{n=1}^{\infty}x^n\frac{(p)_n}{(n+d-1)!}f(n),\\ y(0)&=&0. 
\end{eqnarray*} \end{proof} \begin{lemma}\label{GLcgeq1} Let ${\mathbb K}$ be a field of characteristic 0 and let $f:\mathbb N \to {\mathbb K}.$ Then the following identity holds in the ring ${\mathbb K}[[x]]$ of formal power series with $a,c\in\mathbb N$ and $b,d\in\mathbb Z$: \begin{eqnarray} &&\sum_{n=1}^\infty x^n\frac{(p)_n}{(a\,n+b)^c(n+d)!}\sum_{i=1}^nf(i)=\label{eq:GenFunPochhammerSum3}\\ &&\hspace{2cm}=\frac{x^{-\frac{b}{a}}}{a}\int_0^xt^{\frac{b}{a}-1}\sum_{n=1}^\infty t^n \frac{(p)_n}{(a\,n+b)^{c-1}(n+d)!}\sum_{i=1}^nf(i)dt.\nonumber \end{eqnarray} \end{lemma} \begin{proof} Both sides satisfy the following initial value problem for $y(x),$ which has a unique solution near $x=0:$ \begin{eqnarray*} y'(x)-\frac{b}{a\,x}y(x)&=&\frac{1}{a\,x}\sum_{n=1}^{\infty} x^n\frac{(p)_n}{(a\,n+b)^{c-1}(n+d)!}\sum_{i=1}^nf(i),\\ y(0)&=&0. \end{eqnarray*} \end{proof} Note that formulas related to the previous lemmas concerning binomial sums can be found in~\cite{Ablinger:2014}. Let us now, for the second time, consider~(\ref{GeneralExampleSum}) and illustrate how the previous lemmas can be used as rewrite rules to find integral representations of Pochhammer sums. \begin{example} We again look for a closed form representation in terms of iterated integrals of $$\sum _{n=1}^{\infty } x^n \frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}.$$ We start by using Lemma~\ref{GLcgeq1} twice: \begin{eqnarray*} \sum _{n=1}^{\infty } x^n\frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}&=&x^{-3}\int_0^xt^2 \sum _{n=1}^{\infty } t^n\frac{S_1(n) \left(-\frac{1}{2}\right)_n}{(3+n) (n-1)!}dt\\ &=&x^{-3}\int_0^xt^{-1} \int_0^tu^2 \sum _{n=1}^{\infty } u^n\frac{S_1(n) \left(-\frac{1}{2}\right)_n}{(n-1)!}dudt.
\end{eqnarray*} Now we apply Lemma~\ref{GLc0dl0} followed by applying~(\ref{eq:GenFunPochhammerSum04}) and~(\ref{eq:GenFunPochhammerSum01}): \begin{eqnarray*} &&\sum _{n=1}^{\infty } x^n\frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}\\ &&\hspace{1cm}=x^{-3}\int_0^xt^{-1} \int_0^t\frac{u^3}{\sqrt{1-u}}\biggl(\int_0^u\frac{1}{v^2\sqrt{1-v}}\sum _{n=1}^{\infty } v^n\frac{\left(-\frac{1}{2}\right)_n}{n(n-2)!} dv-\frac{1}{2}\biggr)dudt\\ &&\hspace{1cm}=x^{-3}\int_0^xt^{-1} \int_0^t\frac{u^3}{\sqrt{1-u}}\biggl(\int_0^u\frac{1}{v^2\sqrt{1-v}}\int_0^v\frac{1}{w}\sum_{n=1}^\infty w^n \frac{(-\frac{1}{2})_n}{(n-2)!}dw dv-\frac{1}{2}\biggr)dudt\\ &&\hspace{1cm}=x^{-3}\int_0^xt^{-1} \int_0^t\frac{u^3}{\sqrt{1-u}}\biggl(\int_0^u\frac{1}{v^2\sqrt{1-v}}\int_0^v \frac{-w}{4(1-w)^{\frac{3}{2}}} dw dv-\frac{1}{2}\biggr)dudt. \end{eqnarray*} At this point we rewrite the expression in terms of iterated integrals (this can be done by hand or by using the command \texttt{GLIntegrate} of \texttt{HarmonicSums}) and arrive again at~(\ref{ExampleGLRep}); hence we can proceed as in Example~\ref{GeneralExample} to obtain \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}=\frac{-9367+560 \pi ^2+6720 \mylog{2}^2-128 \mylog{2}}{7350}. \end{eqnarray*} \end{example} Note that this method is implemented in the package \texttt{HarmonicSums}\ using the command \texttt{PochhammerSumToGL}. Calling \begin{eqnarray*} \textbf{PochhammerSumToGL}\left[\frac{\left(-\frac{1}{2}\right)_n S_1(n)}{(3+n)^2 (-1+n)!},x,\{n,1,\infty \}\right] \end{eqnarray*} will immediately give~(\ref{ExampleGLRep}).
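The closed form can also be confirmed numerically: the summand of~(\ref{GeneralExampleSum}) decays like $n^{-5/2}\ln n$, so a few thousand terms already match~(\ref{ExampleResult}) to several digits. A minimal Python check (truncation point and tolerance are my choices; the ratio $(-1/2)_n/(n-1)!$ is updated iteratively to avoid overflow):

```python
import math

total, r, s1 = 0.0, -0.5, 0.0     # r tracks (-1/2)_n / (n-1)!
for n in range(1, 4001):
    if n > 1:
        r *= (n - 1.5) / (n - 1)  # step for the ratio (-1/2)_n / (n-1)!
    s1 += 1.0 / n                 # harmonic number S_1(n)
    total += r * s1 / (3 + n) ** 2

closed = (-9367 + 560 * math.pi**2 + 6720 * math.log(2) ** 2
          - 128 * math.log(2)) / 7350
print(abs(total - closed) < 1e-4)
```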
Reconsidering~(\ref{S11Example}) we find \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{11}(n)}{(n+1)!}&=& -4 \text{G}\left(\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau},\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau},\frac{\sqrt{1-\tau }-1}{\tau };1\right)\\ &&+2 \text{G}\left(\frac{1}{\tau},\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau},\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau },\frac{1}{\tau},\frac{\sqrt{1-\tau }-1}{\tau };1\right)\\ &=&-\frac{677 \pi ^{10} \mylog{2}}{475200}-\frac{2339 \pi ^8 \mylog{2} ^3}{680400}-\frac{79\pi ^6 \mylog{2}^5}{28350}-\frac{2 \pi ^4 \mylog{2}^7}{1575}-\frac{4 \pi ^2 \mylog{2}^9}{8505}+\frac{16 \mylog{2}^{11}}{155925}\\ &&-\frac{2339 \pi ^8 \zeta_{3}}{453600}-\frac{79 \pi ^6 \mylog{2}^2 \zeta_{3}}{1890}-\frac{1}{15} \pi ^4 \mylog{2}^4 \zeta_{3}-\frac{8}{135} \pi ^2 \mylog{2}^6 \zeta_{3}+\frac{8}{315} \mylog{2}^8\zeta_{3}\\ &&-\frac{1}{5} \pi ^4 \mylog{2} \zeta_{3}^2-\frac{8}{9} \pi ^2 \mylog{2}^3 \zeta_{3}^2+\frac{16}{15} \mylog{2}^5 \zeta_{3}^2-\frac{4 \pi ^2 \zeta_{3}^3}{9}+\frac{16}{3} \mylog{2}^2 \zeta_{3}^3-\frac{79 \pi ^6 \zeta_{5}}{1260}\\ &&-\frac{3}{5} \pi ^4 \mylog{2}^2 \zeta_{5}-\frac{4}{3} \pi ^2 \mylog{2}^4\zeta_{5}+\frac{16}{15} \mylog{2}^6 \zeta_{5}-8 \pi ^2 \mylog{2} \zeta_{3} \zeta_{5}+32 \mylog{2}^3 \zeta_{3} \zeta_{5}+24 \zeta_{3}^2 \zeta_{5}\\ &&+72 \mylog{2} \zeta_{5}^2-\frac{9 \pi ^4 \zeta_{7}}{10}-12 \pi ^2 \mylog{2}^2 \zeta_{7}+24 \mylog{2}^4\zeta_{7}+144 \mylog{2} \zeta_{3} \zeta_{7}-\frac{170 \pi ^2 \zeta_{9}}{9}\\ &&+\frac{680}{3} \mylog{2}^2 \zeta_{9}+372 \zeta_{11}. \end{eqnarray*} Note that all the identities listed in Section~\ref{sec:2} can also be computed using rewrite rules. But using these rewrite rules turns out to be much more efficient. 
We are now going to list several additional identities that could be computed with the help of this command: \begin{eqnarray} \sum _{n=1}^{\infty } \frac{\left(\frac{1}{3}\right)_n S_3(n)}{(n+1)!}&=&-\frac{5\pi^3}{32\sqrt{3}}+\frac{9}{16}\sqrt{3}\pi h_1-\frac{15\pi^2l_3}{32}-\frac{81h_1l_3}{16}-\frac{9}{32}\sqrt{3}\pi l_3^2\nonumber\\ &&+\frac{27l_3^3}{32}+6\zeta_3,\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{1,1,1,1,1}(n)}{(n+1)!}&=&60 \zeta_{5},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{3}\right)_n S_{1,1,1,1,1}(n)}{(n+1)!}&=&180 \zeta_{5}-\frac{\pi ^5}{\sqrt{3}},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_4(n)}{n (n+1)!}&=&-\frac{\pi ^4}{20}-\frac{2}{3} \pi ^2 l_2^2-\frac{4}{9} \pi ^2 l_2^3+\frac{4 l_2^4}{3}+\frac{8 l_2^5}{15}+16 l_2 p_4+16 p_5\nonumber\\ &&-\frac{7 \pi ^2 \zeta _3}{12}+8 l_2 \zeta _3+7 l_2^2 \zeta _3-\frac{63 \zeta _5}{8},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{3,1}(n)}{(n+1)! n^3}&=&\frac{13 \pi ^4}{180}+\frac{\pi ^6}{189}+\frac{199 \pi ^6 l_2}{7560}-\frac{2}{3} \pi ^2 l_2^2-\frac{4}{27} \pi ^4 l_2^3-\frac{2 l_2^4}{3}+\frac{8}{45} \pi ^2 l_2^5\nonumber\\ &&-16 p_4+\frac{16}{3} \pi ^2 l_2 p_4+\frac{16 \pi ^2 p_5}{3}+24 s_1+\frac{200 l_2 s_1}{7}+\frac{136 s_2}{7}\nonumber\\ &&-\frac{200 s_3}{7}-\frac{3 \pi ^2 \zeta _3}{2}+\frac{29 \pi ^4 \zeta _3}{168}+4 l_2 \zeta _3-3 \pi ^2 l_2 \zeta _3+\frac{5}{6} \pi ^2 l_2^2 \zeta _3\nonumber\\ &&-\frac{3}{2} l_2^4 \zeta _3-36p_4 \zeta _3+\frac{9 \zeta _3^2}{8}-\frac{243}{7} l_2 \zeta _3^2+\frac{75 \zeta _5}{4}-\frac{2935 \pi ^2 \zeta _5}{168}\nonumber\\ &&-9 l_2 \zeta _5-\frac{111}{2} l_2^2 \zeta _5+\frac{12685 \zeta _7}{112},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_3(n)}{(n+1)!
n^5}&=&\frac{37\pi^4}{360}+\frac{89\pi^6}{5040}-\frac{63031\pi^8}{3024000}+\frac{2}{3}\pi^2\mylog{2} +\frac{37}{180}\pi^4\mylog{2}+\frac{463\pi^6\mylog{2}}{7560}\nonumber\\ &&+\frac{1}{3}\pi^2\mylog{2}^2+\frac{47}{180}\pi^4\mylog{2}^2+\frac{1079\pi^6\mylog{2}^2}{7560}-\frac{8\mylog{2}^3}{3}+\frac{2}{9}\pi^2\mylog{2}^3+\frac{47}{270}\pi^4\mylog{2}^3-\frac{\mylog{2}^4}{3}\nonumber\\ &&-\frac{5}{18}\pi^2\mylog{2}^4+\frac{19}{180}\pi^4\mylog{2}^4-\frac{2\mylog{2}^5}{15}-\frac{1}{9}\pi^2\mylog{2}^5+\frac{2\mylog{2}^6}{5}-\frac{1}{27}\pi^2\mylog{2}^6+\frac{4\mylog{2}^7}{35}+\frac{\mylog{2}^8}{35}\nonumber\\ &&-8p_4-\frac{4}{3}\pi^2 p_4+\frac{4}{9}\pi^4p_4+16\mylog{2}^2p_4+16p_5+\frac{8}{3}\pi^2p_5+32\mylog{2}p_5\nonumber\\ &&-32\mylog{2}^2p_5-\frac{16}{3}\pi^2p_6-128\mylog{2}p_6+64\mylog{2}^2p_6-128p_7+384\mylog{2}p_7\nonumber\\ &&+1100s_5-\frac{134}{3}\pi^2s_1-32\mylog{2}s_1-104\mylog{2}^2s_1+\frac{6939}{40}s_4-16s_3 \nonumber\\ &&+32s_2-64\mylog{2}s_2+128s_6+80s_7-4\zeta_{3}+\frac{7\pi^2\zeta_{3}}{12}-\frac{13\pi^4\zeta_{3}}{60}-7\mylog{2}\zeta_{3}\nonumber\\ &&-\frac{271}{90}\pi^4\mylog{2}\zeta_{3}-7\mylog{2}^2\zeta_{3}-\frac{20}{9}\pi^2\mylog{2}^3\zeta_{3}+\frac{4}{3}\mylog{2}^5\zeta_{3}-160p_5\zeta_{3}-\frac{43\pi^2\zeta_{3}^2}{2}\nonumber\\ &&-9\mylog{2}\zeta_{3}^2+32\mylog{2}^2\zeta_{3}^2-\frac{203\zeta_{5}}{8}-\frac{249\pi^2\zeta_{5}}{16}-\frac{203}{4}\mylog{2}\zeta_{5}-\frac{361}{12}\pi^2\mylog{2}\zeta_{5}\nonumber\\ &&-\frac{203}{4}\mylog{2}^2\zeta_{5}+\frac{201}{2}\mylog{2}^3\zeta_{5}+\frac{393\zeta_{3}\zeta_{5}}{8}+\frac{3955\zeta_{7}}{16}-\frac{11533}{16}\mylog{2}\zeta_{7}\nonumber\\ &&+640p_8+48\mylog{2}s_3. \end{eqnarray} To conclude this section we consider the sum $$\sum _{n=1}^{\infty } \frac{\left(-\frac{1}{2}\right)_n S_{(3,1,1)}(n)}{(n-1)!
n}.$$ We find that it equals \begin{eqnarray}\label{notcyclo} \frac{1-2 \text{G}\left(\frac{\sqrt{1-\tau }}{1-\tau ^{1/3}};1\right)+2 \text{G}\left(\frac{\sqrt{1-\tau }}{1+\tau ^{1/3}+\tau ^{2/3}};1\right)-33 \text{G}\left(\sqrt{1-\tau } \tau ^{1/3};1\right)-2 \text{G}\left(\frac{\sqrt{1-\tau } \tau ^{1/3}}{1+\tau ^{1/3}+\tau ^{2/3}};1\right)}{45}. \end{eqnarray} Here we fail to transform the iterated integrals into cyclotomic polylogarithms; however, since the integrals are simple enough, we can evaluate the integrals in~(\ref{notcyclo}), for example by using \texttt{Mathematica}, and find the result \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{\left(-\frac{1}{2}\right)_n S_{(3,1,1)}(n)}{(n-1)! n}=-\frac{4 \pi ^{3/2}}{27 \sqrt{3} \Gamma \left(\frac{5}{3}\right) \Gamma \left(\frac{11}{6}\right)}. \end{eqnarray*} Another example where we fail to transform the iterated integrals into cyclotomic polylogarithms but can still evaluate the integrals is: \begin{eqnarray*} \sum _{n=1}^{\infty } \frac{\left(\frac{1}{3}\right)_n \left(S_{(3,1,1)}(n)-S_{(3,2,1)}(n)\right)}{(n+1)!}&=&-\frac{3}{4}+\frac{\pi }{\sqrt{3}}-\frac{\sqrt{3 \pi } \Gamma \left(\frac{5}{6}\right)}{\sqrt[3]{2} \Gamma \left(\frac{1}{3}\right)}. \end{eqnarray*} In the following section we will consider a subclass of Pochhammer sums, for which we will always be able to derive a representation in terms of cyclotomic polylogarithms.
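The first $\Gamma$-function evaluation can be checked numerically as well. In the Python sketch below the cyclotomic sum is read as $S_{(3,1,1)}(n)=\sum_{k=1}^n(3k+1)^{-1}$ (our reading of the notation, which is defined elsewhere in the literature); since the terms of the outer sum only decay like $n^{-3/2}$ up to logarithms, the comparison uses a loose tolerance:

```python
import math

total, r, s = 0.0, -0.5, 0.0      # r tracks (-1/2)_n / (n-1)!
for n in range(1, 10**6 + 1):
    if n > 1:
        r *= (n - 1.5) / (n - 1)  # step for the ratio (-1/2)_n / (n-1)!
    s += 1.0 / (3 * n + 1)        # cyclotomic sum S_{(3,1,1)}(n)
    total += r * s / n

closed = -4 * math.pi**1.5 / (27 * math.sqrt(3)
         * math.gamma(5 / 3) * math.gamma(11 / 6))
print(abs(total - closed) < 1e-2)
```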
\section{Using Rewrite Rules to directly derive Generating Functions in terms of Cyclotomic Polylogarithms} \label{sec:4} In this section we will deal with a subclass of the Pochhammer sums, namely we restrict the inner sum to be a multiple harmonic sum and we set $p=1/q$ with $q\in\mathbb Z\setminus\{0\}$ and $a=1$ in~(\ref{Pochhammersum}) i.e.,\ we are considering sums of the form \begin{eqnarray}\label{PochhammersumsH} \sum_{n=1}^\infty \frac{(p)_n}{(n+b)^c(n+d)!}\S{c_1,c_2,\ldots,c_r}n, \end{eqnarray} where $c,c_i\in\mathbb N,\ b,d\in\mathbb Z$ and $p=\frac{1}{q}$ with $q\in\mathbb Z\setminus\{0\}.$ Considering a Pochhammer sum in this subclass we could again use the rewrite rules presented in Section~\ref{sec:3} to find an integral representation; however, we can also use the following lemmas. These new rewrite rules will directly lead to cyclotomic polylogarithms. We again start with the base cases where no inner sum is present (compare Lemma~\ref{GLBaseCase}): \begin{lemma}\label{lemma:GenFunPochhammerSumToH1} Let ${\mathbb K}$ be a field of characteristic 0. Then the following identities hold in the ring ${\mathbb K}[[x]]$ of formal power series with $c\in\mathbb N$ and $b,d\in\mathbb Z$: \begin{eqnarray} \sum_{n=1}^\infty x^n\frac{(p)_n}{(n+d)!} &=& -\frac{(1-x)^{d-p}x^{-d}\,p}{\abs{p}\, d!}\int_1^{(1-x)^{\abs{p}}} \frac{(1-t^{\frac{1}{\abs{p}}})^{d}}{t^{1-\sign{p}}\;t^{\frac{d}{\abs{p}}}} dt,\ d>0, \label{eq:GenFunPochhammerSumToH03}\\ \sum_{n=1}^\infty x^n\frac{(p)_n}{(n+b)^c(n+d)!} &=& \frac{-1}{\abs{p}\, x^{b}}\int_1^{(1-x)^{\abs{p}}}\frac{\left(1-t^{\frac{1}{\abs{p}}}\right)^{b-1}}{t^{1-\frac{1}{\abs{p}}}}\sum_{n=1}^\infty \frac{(p)_n\left(1-t^{\frac{1}{\abs{p}}}\right)^n}{(n+b)^{c-1}(n+d)!}dt.\label{eq:GenFunPochhammerSumToH04} \end{eqnarray} \end{lemma} In the cases where there is an inner multiple harmonic sum present we can refine the Lemmas~\ref{GLc0dl0},~\ref{GLc0dgeq0} and~\ref{GLcgeq1} and get the following result.
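A concrete instance of~(\ref{eq:GenFunPochhammerSumToH04}) can be tested numerically. In the Python sketch below the parameter choices $p=1/2,$ $b=2,$ $c=1,$ $d=0,$ $x=1/2$ and the Simpson quadrature are ours; for $p=1/2$ one has $1/|p|=2,$ so the factor $1/t^{1-1/|p|}$ is simply $t$:

```python
import math

p, b, x = 0.5, 2, 0.5    # test instance: a = 1, c = 1, d = 0

# Left-hand side: sum_{n>=1} x^n (1/2)_n / ((n+2) n!), summed iteratively
term, lhs = 1.0, 0.0
for n in range(1, 200):
    term *= x * (p + n - 1) / n
    lhs += term / (n + b)

def F(u, N=200):
    """Inner sum with c lowered to 0: sum_{n>=1} u^n (1/2)_n / n!."""
    t, s = 1.0, 0.0
    for n in range(1, N):
        t *= u * (p + n - 1) / n
        s += t
    return s

# Right-hand side: -(1/(|p| x^b)) * int_1^{(1-x)^{|p|}} (1-t^2)^{b-1} t F(1-t^2) dt
a_, b_ = 1.0, math.sqrt(1 - x)        # integration runs from 1 down to (1-x)^{1/2}
m = 1000                              # Simpson's rule with m (even) subintervals
h = (b_ - a_) / m
g = lambda t: (1 - t * t) ** (b - 1) * t * F(1 - t * t)
integral = g(a_) + g(b_) + sum((4 if i % 2 else 2) * g(a_ + i * h) for i in range(1, m))
integral *= h / 3
rhs = -integral / (p * x**b)

print(abs(lhs - rhs) < 1e-8)
```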
\begin{lemma}\label{lemma:GenFunPochhammerSumToH2} Let ${\mathbb K}$ be a field of characteristic 0 and let $f:\mathbb N \to {\mathbb K}.$ Then the following identities hold in the ring ${\mathbb K}[[x]]$ of formal power series with $c\in\mathbb N,\ b,d\in\mathbb Z$ and $\S{m_1,\ldots,m_r}n$ a multiple harmonic sum: \begin{eqnarray} &&c=0,\ d<0:\nonumber\\ &&\sum_{n=1}^\infty x^n\frac{(p)_n}{(n+d)!} \S{m_1,\ldots,m_r}n=\label{eq:GenFunPochhammerSumToH05}\\ &&\frac{(1-x)^{d-p}}{x^{d}}\left((p)_{-d}s(-d)-\frac{1}{\abs{p}}\int_1^{(1-x)^{\abs{p}}} \frac{\left(1-t^{\frac{1}{\abs{p}}}\right)^{d-1}}{t^{1-\sign{p}}\;t^{\frac{d}{\abs{p}}}}\sum_{n=1}^\infty\frac{\left(1-t^{\frac{1}{\abs{p}}}\right)^n (p)_n}{(n+d-1)!n^{m_1}}\bar{s}(n) dt\right); \nonumber\\ &&c=0,\ d\geq0:\nonumber\\ &&\sum_{n=1}^\infty x^n\frac{(p)_n}{(n+d)!} \S{m_1,\ldots,m_r}n= \label{eq:GenFunPochhammerSumToH06}\\ &&-\frac{(1-x)^{d-p}x^{-d}}{\abs{p}}\int_1^{(1-x)^{\abs{p}}} \frac{\left(1-t^{\frac{1}{\abs{p}}}\right)^{d-1}}{t^{1-\sign{p}}\;t^{\frac{d}{\abs{p}}}}\sum_{n=1}^\infty\frac{\left(1-t^{\frac{1}{\abs{p}}}\right)^n(p)_n}{(n+d-1)!n^{m_1}}\bar{s}(n) dt; \nonumber\\ &&c>0:\nonumber\\ &&\sum_{n=1}^\infty x^n\frac{(p)_n}{(n+b)^c(n+d)!} \S{m_1,\ldots,m_r}n=\label{eq:GenFunPochhammerSumToH07}\\ &&-\frac{x^{-b}}{\abs{p}}\int_1^{(1-x)^{\abs{p}}}t^{\frac{1}{\abs{p}}-1}\left(1-t^{\frac{1}{\abs{p}}}\right)^{b-1}\sum_{n=1}^\infty \left(1-t^{\frac{1}{\abs{p}}}\right)^n \frac{(p)_n}{(n+b)^{c-1}(n+d)!}s(n)dt.\nonumber \end{eqnarray} Here we use the abbreviations $s(n):=\S{m_1,\ldots,m_r}n$ and $\bar{s}(n):=\S{m_2,\ldots,m_r}n.$ \end{lemma} \begin{proof} For all these equalities it is possible to find an initial value problem, which has a unique solution near $x=0$ and is satisfied by both sides of the respective equation. 
\end{proof} Note that the factors arising in the integrands on the right-hand sides of the equations in Lemma~\ref{lemma:GenFunPochhammerSumToH1} and Lemma~\ref{lemma:GenFunPochhammerSumToH2} are of the form $t^i$ or $(1-t^i)^k$ for $i,k\in\mathbb Z,$ hence integrating over these integrands will lead to cyclotomic polylogarithms. Therefore the Pochhammer sums of the form~(\ref{PochhammersumsH}) will be expressible in terms of cyclotomic polylogarithms, and we can state the following structural theorem. \begin{theorem} Any sum of the form \begin{eqnarray} \sum_{n=1}^\infty \frac{\left(\frac{1}q\right)_n}{(n+b)^c(n+d)!}\S{c_1,c_2,\ldots,c_r}n, \end{eqnarray} where $c,c_i\in\mathbb N,\ b,d\in\mathbb Z$ and $q\in\mathbb Z\setminus\{0\},$ can be expressed in terms of cyclotomic polylogarithms at one. \end{theorem} Let us now, for the third time, consider~(\ref{GeneralExampleSum}) and illustrate how the previous lemmas can be used as rewrite rules to directly find a representation in terms of cyclotomic polylogarithms. \begin{example} We seek a closed form representation in terms of cyclotomic polylogarithms of $$\sum _{n=1}^{\infty } x^n \frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!},$$ so we use~(\ref{eq:GenFunPochhammerSumToH07}) twice: \begin{eqnarray*} &&\sum _{n=1}^{\infty } x^n\frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}=\\ &&\hspace{2cm}=-\frac{2}{x^3} \int_1^{\sqrt{1-x}} t \left(1-t^2\right)^2 \sum _{n=1}^{\infty } \frac{\left(1-t^2\right)^n \left(-\frac{1}{2}\right)_n S_1(n)}{(3+n) (n-1)!} \, dt\\ &&\hspace{2cm}=\frac{4}{x^3} \int_1^{\sqrt{1-x}} \frac{t}{1-t^2} \int_1^t u \left(1-u^2\right)^2 \sum _{n=1}^{\infty } \frac{\left(1-u^2\right)^n \left(-\frac{1}{2}\right)_n S_1(n)}{(n-1)!} \, du \, dt.
\end{eqnarray*} Now we apply~(\ref{eq:GenFunPochhammerSumToH05}) followed by applying~(\ref{eq:GenFunPochhammerSumToH04}) and~(\ref{eq:GenFunPochhammerSum01}): \begin{eqnarray*} &&\sum _{n=1}^{\infty } x^n\frac{\left(-\frac{1}{2}\right)_nS_1(n) }{(3+n)^2 (n-1)!}\\ &&\hspace{1cm}=-\frac{4}{x^3} \int_1^{\sqrt{1-x}} \frac{t}{1-t^2} \int_1^t \left(1-u^2\right)^3 \left(2 \int_1^u \frac{\sum\limits_{n=1}^{\infty } \frac{\left(1-v^2\right)^n \left(-\frac{1}{2}\right)_n}{n (n-2)!}}{\left(1-v^2\right)^2} \, dv+\frac{1}{2}\right) \, du \, dt\\ &&\hspace{1cm}=\frac{4}{x^3} \int_1^{\sqrt{1-x}} \frac{t}{1-t^2} \int_1^t \left(1-u^2\right)^3 \left(\int_1^u \frac{\int_1^v \frac{4w \sum\limits _{n=1}^{\infty } \frac{\left(1-w^2\right)^n \left(-\frac{1}{2}\right)_n}{(n-2)!}}{1-w^2} \, dw}{\left(1-v^2\right)^2} \, dv-\frac{1}{2}\right) \, du\, dt\\ &&\hspace{1cm}=\frac{4}{x^3} \int_1^{\sqrt{1-x}} \frac{t}{1-t^2} \int_1^t \left(1-u^2\right)^3 \left(\int_1^u \frac{\int_1^v \frac{w^2-1}{w^2} \, dw}{\left(1-v^2\right)^2} \, dv-\frac{1}{2}\right) \, du \, dt. \end{eqnarray*} Now we can send $x\to1$ and rewrite this expression directly in terms of cyclotomic harmonic polylogarithms (again this can be done by hand or by using the command \texttt{GLIntegrate} of \texttt{HarmonicSums}) and arrive again at \begin{eqnarray}\label{ExpampleHRep} -\frac{9367}{7350}-\frac{64 H_{(2,0)}(1)}{3675}-\frac{32}{35} H_{(0,0),(1,0)}(1)-\frac{32}{35} H_{(2,0),(0,0)}(1)+\frac{64}{35} H_{(2,0),(1,0)}(1). \end{eqnarray} Finally, we can again use relations between cyclotomic polylogarithms at one to derive~(\ref{ExampleResult}). \end{example} Note that this is implemented in the command \texttt{PochhammerSumToH}, so calling \begin{eqnarray*} \textbf{PochhammerSumToH}\left[\frac{\left(-\frac{1}{2}\right)_n S_1(n)}{(3+n)^2 (-1+n)!},x,\{n,1,\infty \}\right]/.x\to1 \end{eqnarray*} will immediately give~(\ref{ExpampleHRep}). 
To conclude, we are going to list several identities that could be computed with the help of this command (note that these identities could also have been computed using the methods presented in the previous sections): \begin{eqnarray} \sum _{n=1}^{\infty } \frac{\left(\frac{1}{5}\right)_n S_1(n)}{(n+1)!}&=&\frac{25 h_6}{4},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{5}\right)_n S_2(n)}{(n+1)!}&=&\frac{875 h_6^2}{48}+\frac{125}{12} \sqrt{5} h_6^2+\frac{125 h_7}{16}+\frac{125 h_8}{8},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{2,2,2}(n)}{(n+1)!}&=&\frac{2 \pi ^6}{189}-\frac{9 \zeta _3^2}{4}-\frac{15 l_2 \zeta _5}{2},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{2,2}(n)}{(n+2)!}&=&\frac{2 \pi ^2}{9}+\frac{\pi ^4}{45}-\frac{8 l_2^2}{3}-\zeta _3-2 l_2 \zeta _3,\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{2}\right)_n S_{2,2}(n)}{n (n+2)!}&=&-\frac{\pi ^2}{9}-\frac{2 \pi ^4}{45}+\frac{4 l_2^2}{3}+\frac{\zeta _3}{2}+4 l_2 \zeta _3+\frac{15 \zeta _5}{16},\\ \sum _{n=1}^{\infty } \frac{\left(\frac{1}{3}\right)_n S_{2,2}(n)}{(n+3)!}&=&\frac{9}{320}+\frac{3 \sqrt{3} \pi }{320}+\frac{51 \pi ^2}{512}+\frac{11 \pi ^3}{128 \sqrt{3}}+\frac{91 \pi ^4}{5760}+\frac{1377 h_1}{1280}\nonumber\\ &&-\frac{27}{128} \sqrt{3} \pi h_1-\frac{135 \pi ^2 h_1}{128}+\frac{243h_2}{64}+\frac{27}{64} \sqrt{3} \pi h_2+\frac{1539 h_3}{128}\nonumber\\ &&-\frac{243 h_4}{16}-\frac{27 l_3}{320}+\frac{153 \sqrt{3} \pi l_3}{1280}+\frac{11 \pi ^3 l_3}{128 \sqrt{3}}-\frac{27}{128} \sqrt{3} \pi h_1 l_3\nonumber\\ &&+\frac{243 h_2l_3}{64}-\frac{1377 l_3^2}{2560}-\frac{81 \zeta _3}{64}+\frac{39}{64} \sqrt{3} \pi \zeta _3-\frac{81 l_3 \zeta _3}{64}. \end{eqnarray} \subsection*{Acknowledgements} The author would like to thank C. Schneider for useful discussions.
\section{Introduction} With the development of imaging technology, current hyperspectral sensors can fully portray the surface of the Earth using hundreds of continuous and narrow spectral bands, ranging from the visible spectrum to the short-wave infrared spectrum. The generated hyperspectral image (HSI) is often considered as a three-dimensional cube. The first two are spatial dimensions, which record the location of each object. The third one is the spectral dimension, which captures the spectral signature (reflective or emissive properties) of each material in different bands along the electromagnetic spectrum \cite{ghamisi2017}. Using such rich information, HSIs have been widely applied to various applications, such as land cover/land use classification, precision agriculture, and change detection. For these applications, one basic but important procedure is HSI classification, whose goal is to assign a class label to each pixel. In order to acquire accurate classification results, numerous methods have been proposed. For example, one can directly consider the rich spectral signature as features and feed them into advanced classifiers, such as support vector machine (SVM) \cite{mountrakis2011}, random forest \cite{belgiu2016}, and extreme learning machine \cite{li2015}. However, due to the dense spectral sampling of HSIs, there may exist some redundant information among adjacent spectral bands. This easily leads to the so-called curse of dimensionality (the Hughes effect), which causes a sudden drop in classification accuracy when the high number of spectral channels is not balanced by a sufficient number of training samples. Therefore, a large number of works have been proposed to mine discriminative features from the high-dimensional spectral signature \cite{jia2013}.
Popular models include principal component analysis (PCA), linear discriminant analysis (LDA) \cite{liao2013,hang2016,hang2017}, and graph embedding \cite{lunga2014,hang2018,zhao2018land}. Besides, representation-based models have also been applied to HSI classification in recent years. In \cite{chen2011} and \cite{fang2014}, sparse representation was adopted to learn discriminative features from HSIs. Similarly, collaborative representation was also widely explored \cite{li2014},\cite{liw2014}. In these models, an input spectral signature is usually represented by a linear combination of atoms from a dictionary, and the classification result can be derived from the reconstruction residual without the need to train extra classifiers, which is often time-consuming. Although the aforementioned models have demonstrated their effectiveness in the field of HSI classification, there still exist some drawbacks to address. For the traditional feature extraction models, we need to pre-define a mining criterion (e.g., maximizing the between-class scatter matrix in LDA), which heavily depends on the domain knowledge and experience of experts. For the representation-based models, their goal is to reconstruct the input signal, leading to sub-optimal representations for classification. Additionally, all of them can be considered as shallow-layer models, which limits their potential to learn high-level semantic features. Recently, deep learning \cite{lecun2015},\cite{schmidhuber2015}, a highly active research topic in machine learning, has shown great superiority in many fields of computer vision \cite{krizhevsky2012,he2016,girshick2014,ren2015} and natural language processing \cite{collobert2011},\cite{sutskever2014}. The goal of deep learning is to learn nonlinear, high-level semantic features from data in a hierarchical manner. Due to the effects of multi-path scattering and the heterogeneity of sub-pixel constituents, HSIs often lie in a nonlinear, complex feature space.
Deep learning can be naturally adopted to deal with this issue \cite{zhang2016},\cite{zhu2017}. In the past few years, many deep learning models have been successfully applied to HSI classification. For example, in \cite{chen2014,tao2015,ma2016}, the autoencoder model has been used to learn deep features directly from the high-dimensional spectral signature. Similar to the autoencoder, the deep belief network was also explored to extract spectral features \cite{chen2015,zhou2017,zhong2017}. However, both of them belong to fully-connected networks, which contain large numbers of parameters to train. Different from them, convolutional neural networks (CNNs) have local connection and weight sharing properties, thus largely reducing the number of training parameters \cite{li2017hyperspectral,zhao2017object,zhang2018}. In \cite{hu2015}, Hu \textit{et al.} proposed to use a one-dimensional CNN to learn and represent the spectral information. This model consists of an input layer, a convolutional layer, a pooling layer, a fully-connected layer, and an output layer. The whole model is trained in an end-to-end manner, thus achieving satisfactory results for HSI classification. Besides spectral information, HSIs also have rich spatial information. How to combine them has been an active research topic in the field of HSI classification \cite{he2018},\cite{ghamisi2018}. One potential method is to extend the spectral classification model into its spectral-spatial counterpart. For instance, in \cite{chen2016,li2017,shi2018}, a three-dimensional CNN was employed for the spectral-spatial classification of HSIs. However, due to the simultaneous convolution operators in both the spectral and spatial domains, the computational complexity is dramatically increased. In addition, the number of trainable parameters in three-dimensional CNNs is also a problem.
In order to perform three-dimensional convolution, the dimensionality of the input and the dimensionality of the kernel (filter) should be equal. This heavily increases the number of parameters. Another candidate method for spectral-spatial classification is the one based on two-branch networks. One branch is for spectral classification and the other one for spatial classification. In \cite{yang2017,xu2018,hao2018}, a one-dimensional CNN or autoencoder was used to learn spectral features and a two-dimensional CNN was designed to learn spatial features. These two kinds of features are then integrated via feature-level fusion or decision-level fusion. For two-dimensional CNNs, only a few principal components are extracted and used as inputs, thus reducing the computational cost compared to three-dimensional CNNs. Most existing models can be considered as vector-based methodologies. Recently, a few works attempted to regard HSIs as sequential data, so recurrent neural networks (RNNs) were naturally used to learn features. In \cite{wu2017}, Wu \textit{et al.} proposed using an RNN to extract spectral features from HSIs. In \cite{liu2017} and \cite{zhou2018}, a variant of RNN using long short-term memory (LSTM) units was designed to learn spectral-spatial features from HSIs. In \cite{zhou2018Integrating}, another variant of RNN using gated recurrent units (GRUs) was employed. Compared to the widely explored CNN models, RNNs have several advantages. For example, the key component of CNNs is the convolutional operator. Due to its limited kernel size, a one-dimensional CNN can only learn local spectral dependencies while easily ignoring the effects of non-adjacent spectral bands. In contrast, RNNs, especially those using GRUs or LSTM units, input spectral bands one by one via recurrent operators, thus capturing relationships across all spectral bands.
Besides, RNNs often have fewer parameters to train than CNNs, so they are more efficient in the training and inference phases. Despite their powerful ability to learn from sequential data, current RNN-related models often simply feed the whole set of spectral bands into the network, which may not fully exploit the redundant and complementary properties of HSIs. The redundant information between adjacent spectral bands increases the computational burden of RNNs without improving the classification results. Sometimes such redundancy may even reduce the classification accuracy, since it increases within-class variances and decreases between-class variances in the feature space. Besides, it may also increase the difficulty of learning complementary information. To address these issues, we propose a cascaded RNN model using gated recurrent units (GRUs) in this paper. This model mainly consists of two RNN layers. The first RNN layer focuses on reducing the redundant information of adjacent spectral bands. The reduced representations are then fed into the second RNN layer to learn their complementary features. Besides, in order to improve the discriminative ability of the learned features, we design two strategies for the proposed model. Finally, we also extend the proposed model to its spectral-spatial version by incorporating some convolutional layers. The major contributions of this paper are summarized as follows. \begin{enumerate} \item We propose a cascaded RNN model with GRUs for HSI classification. Compared to the existing RNN-related models, our model can sufficiently consider the redundant and complementary information of HSIs via two RNN layers. The first one reduces redundancy and the second one learns complementarity. These two layers are integrated together to generate an end-to-end trainable model.
\item In order to learn more discriminative features, we design two strategies to construct connections between the first RNN layer and the output layer. The first strategy is the weighted fusion of features from the two layers, and the second one is the weighted combination of different loss functions from the two layers. Their weights can be adaptively learned from the data itself. \item To capture the spectral and spatial features simultaneously, we further extend the proposed model to its spectral-spatial counterpart. A few convolutional layers are integrated into the proposed model to learn spatial features from each band, and these features are then combined together via recurrent operators. \end{enumerate} The rest of this paper is organized as follows. Section II describes the details of the proposed models, including a brief introduction to RNNs, and the structure of the proposed model as well as its modifications. The descriptions of the data sets and the experimental results are given in Section III. Finally, Section IV concludes this paper. \section{Methodology} \begin{figure*}[!t] \centering \includegraphics[scale = 0.6]{figure1.pdf}\\ \caption{Flowchart of the proposed model.}\label{RNNs} \end{figure*} As shown in Fig.$~$\ref{RNNs}, the proposed cascaded RNN model mainly consists of four steps. For a given pixel, we firstly divide it into different spectral groups. Then, for each group, we consider the spectral bands in it as a sequence, which is fed into an RNN layer to learn features. After that, the learned features from each group are again regarded as a sequence and fed into another RNN layer to learn their complementary information. Finally, the output of the second RNN layer is connected to a softmax layer to derive the classification result. \subsection{Review of RNN} RNN has been widely used for sequential data analysis, such as speech recognition and machine translation \cite{sutskever2014},\cite{graves2013}.
Assume that we have sequential data $\mathbf{x} = (\mathbf{x}_{1}, \mathbf{x}_{2}, \cdots, \mathbf{x}_{T})$, where $\mathbf{x}_{t}, t\in\{1,2,\cdots,T\}$, represents the information at the $t$-th time step. When applying an RNN to HSI classification, $\mathbf{x}_{t}$ corresponds to the spectral value at the $t$-th band. For an RNN, the output of the hidden layer at time $t$ is \begin{equation}\label{hidden} \mathbf{h}_{t} = \phi(\mathbf{W}_{hi}\mathbf{x}_{t} + \mathbf{W}_{hh}\mathbf{h}_{t-1} + \mathbf{b}_{h}) \end{equation} where $\phi$ is a nonlinear activation function such as the logistic sigmoid or hyperbolic tangent, $\mathbf{b}_{h}$ is a bias vector, $\mathbf{h}_{t-1}$ is the output of the hidden layer at the previous time step, and $\mathbf{W}_{hi}$ and $\mathbf{W}_{hh}$ denote the weight matrices from the current input layer to the hidden layer and from the previous hidden layer to the current hidden layer, respectively. From this equation, we can observe that the contextual relationships in the time domain are constructed via a recurrent connection. Ideally, $\mathbf{h}_{T}$ can capture most of the temporal information of the sequence. For classification tasks, $\mathbf{h}_{T}$ is often fed into an output layer, and the probability that the sequence belongs to the $i$-th class can be derived by using a softmax function. These processes can be formulated as \begin{equation} \begin{aligned} & \qquad \mathbf{O}_{T} = \mathbf{W}_{oh}\mathbf{h}_{T} + \mathbf{b}_{o}\\ & P(\tilde{y}=i|\boldsymbol{\theta},\mathbf{b}) = \frac{e^{\boldsymbol{\theta}_{i}\mathbf{O}_{T}+b_{i}}}{\sum_{j=1}^{C}e^{\boldsymbol{\theta}_{j}\mathbf{O}_{T}+b_{j}}} \end{aligned} \end{equation} where $\mathbf{b}_{o}$ is a bias vector, $\mathbf{W}_{oh}$ is the weight matrix from the hidden layer to the output layer, $\boldsymbol{\theta}$ and $\mathbf{b}$ are parameters of the softmax function, and $C$ is the number of classes to discriminate.
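The recurrence in Equation$~$(\ref{hidden}) can be sketched in plain Python with toy dimensions; all weight values below are illustrative, not from the paper:

```python
import math

def rnn_step(x_t, h_prev, W_hi, W_hh, b_h):
    """One step of the recurrence: h_t = tanh(W_hi x_t + W_hh h_{t-1} + b_h)."""
    H = len(h_prev)
    return [
        math.tanh(
            sum(W_hi[i][j] * x_t[j] for j in range(len(x_t)))
            + sum(W_hh[i][j] * h_prev[j] for j in range(H))
            + b_h[i]
        )
        for i in range(H)
    ]

# Toy setup: one spectral value per step, hidden size 2.
W_hi = [[0.5], [-0.3]]
W_hh = [[0.1, 0.2], [0.0, 0.4]]
b_h = [0.0, 0.1]

h = [0.0, 0.0]
for x_t in ([0.2], [0.7], [0.5]):  # a short "spectral sequence"
    h = rnn_step(x_t, h, W_hi, W_hh, b_h)
# h now plays the role of h_T, which would be fed to the softmax output layer.
```

The same hidden state is reused across all steps, which is exactly the recurrent connection that lets $\mathbf{h}_{T}$ accumulate information from the whole sequence.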
All of these weight parameters in Equations (1) and (2) can be trained using the following loss function \begin{equation}\label{Loss} \mathcal{L}=-\frac{1}{N}\sum_{i=1}^{N}[y_{i}\textmd{log}(\tilde{y}_{i})+ (1-y_{i})\textmd{log}(1-\tilde{y}_{i})] \end{equation} where $N$ is the number of training samples, and $y_{i}$ and $\tilde{y}_{i}$ are the true label and the predicted label of the $i$-th training sample, respectively. This function can be optimized using the backpropagation through time (BPTT) algorithm. \subsection{Cascaded RNNs}\label{CasRNNs} HSIs can be described as a three-dimensional matrix $\mathbf{X}\in \mathfrak{R}^{m\times n\times k}$, where $m$, $n$ and $k$ represent the width, height and number of spectral bands, respectively. For a given pixel $\mathbf{x}\in \mathfrak{R}^{k}$, we can consider it as a sequence whose length is $k$, so an RNN can be naturally employed to learn spectral features. However, HSIs often contain hundreds of bands, making $\mathbf{x}$ a very long sequence. Such a long sequence increases the training difficulty, since the gradients tend to either vanish or explode \cite{chung2014}. To address this issue, one popular method is to design a more sophisticated activation function by using gating units such as the LSTM unit and the GRU \cite{cho2014}. Compared to the LSTM unit, the GRU has fewer parameters \cite{chung2014}, which may be more suitable for HSI classification, where the number of training samples is usually limited. Therefore, we select the GRU as the basic unit of the RNN in this paper. The core components of the GRU are two gating units that control the flow of information inside the unit.
Instead of using Equation$~$(\ref{hidden}), the activation of the hidden layer for band $t$ is now formulated as \begin{equation} \mathbf{h}_{t} = (1-u_{t})\mathbf{h}_{t-1} + u_{t}\tilde{\mathbf{h}}_{t} \end{equation} where $u_{t}$ is the update gate, which can be derived by \begin{equation} u_{t} = \sigma(w_{u}x_{t} + \mathbf{v}_{u}\mathbf{h}_{t-1}) \end{equation} where $\sigma$ is a sigmoid function, $w_{u}$ is a weight value, and $\mathbf{v}_{u}$ is a weight vector. Similarly, $\tilde{\mathbf{h}}_{t}$ can be computed by \begin{equation} \tilde{\mathbf{h}}_{t} = \tanh(\mathbf{w}x_{t} + \mathbf{V}(\mathbf{r}_{t}\odot \mathbf{h}_{t-1})) \end{equation} where $\odot$ denotes an element-wise multiplication, and $\mathbf{r}_{t}$ is the reset gate, which can be derived by \begin{equation} \mathbf{r}_{t} = \sigma(\mathbf{w}_{r}x_{t} + \mathbf{V}_{r}\mathbf{h}_{t-1}) \end{equation} Due to the dense spectral sampling of hyperspectral sensors, adjacent bands in HSIs have some redundancy while non-adjacent bands have some complementarity. In order to take such information into account comprehensively, we propose a cascaded RNN model. Specifically, we divide the spectral sequence $\mathbf{x}$ into $l$ sub-sequences $\mathbf{z} = (\mathbf{z}_{1}, \mathbf{z}_{2}, \cdots, \mathbf{z}_{l})$, each of which consists of adjacent spectral bands. Except for the last sub-sequence $\mathbf{z}_{l}$, each sub-sequence has length $d = \textmd{floor}(k/l)$, the largest integer less than or equal to $k/l$. Thus, the $i$-th sub-sequence $\mathbf{z}_{i}, i\in\{1,2,\cdots,l\}$, is comprised of the following bands \begin{align} \mathbf{z}_{i} = \left\{ \begin{array}{ll} (x_{(i-1)\times d+1}, \cdots, x_{i\times d}), & \hbox{if}\: \;i\neq l,\\ (x_{(i-1)\times d+1}, \cdots, x_{k}), & \hbox{otherwise}. \end{array} \right.
\end{align} \begin{figure} \centering \includegraphics[scale=0.6]{Improve1.pdf}\\ \caption{The first improvement strategy.}\label{Improve1} \end{figure} Then, we feed each sub-sequence into a first-layer RNN. These RNNs have the same structure and share parameters, thus reducing the number of parameters to train. In the sub-sequence $\mathbf{z}_{i}$, each band has an output from the GRU. We use the output of the last band as the final feature representation for $\mathbf{z}_{i}$, which can be denoted as $\mathbf{F}_{i}^{(1)}\in \mathfrak{R}^{H_{1}}$, where $H_{1}$ is the size of the hidden layer in the first-layer RNN. After that, we can combine $\mathbf{F}_{i}^{(1)}, i\in\{1,2,\cdots,l\}$ together to generate another sequence $\mathbf{F} =(\mathbf{F}_{1}^{(1)}, \mathbf{F}_{2}^{(1)}, \cdots, \mathbf{F}_{l}^{(1)})$ whose length is $l$. This sequence is fed into the second-layer RNN to learn the complementary information. Similar to the first-layer RNNs, we also use the output of the GRU at the last step $l$ as the learned feature $\mathbf{F}^{(2)}$. To get a classification result for $\mathbf{x}$, we input $\mathbf{F}^{(2)}$ into an output layer whose size equals the number of candidate classes $C$. Both RNN layers have many weight parameters. We choose Equation$~$(\ref{Loss}) as the loss function and use the BPTT algorithm to optimize them simultaneously. \subsection{Improvement for Cascaded RNNs} \begin{figure} \centering \includegraphics[scale=0.6]{Improve2.pdf}\\ \caption{The second improvement strategy.}\label{Improve2} \end{figure} As described in subsection \ref{CasRNNs}, the second-layer RNN is directly connected to the output layer, so it may be optimized better than the first-layer RNNs. However, the performance of the first-layer RNNs will affect the second-layer RNN.
In order to improve the discriminative ability of $\mathbf{F}^{(2)}$, an intuitive method is to construct relations between the first-layer RNNs and the output layer. Here, we propose two strategies to achieve this goal. The first strategy is based on the feature-level connection shown in Fig.$~$\ref{Improve1}. Instead of feeding only the output of the second-layer RNN into the output layer, we feed all the output features from the first- and second-layer RNNs in a weighted concatenation manner. Specifically, the input of the output layer is computed as follows \begin{equation}\label{feature-level} \tilde{\mathbf{F}} = [w_{1}^{(1)}\mathbf{F}_{1}^{(1)}, w_{2}^{(1)}\mathbf{F}_{2}^{(1)},\cdots, w_{l}^{(1)}\mathbf{F}_{l}^{(1)}, w^{(2)}\mathbf{F}^{(2)}] \end{equation} where $w_{i}^{(1)}\in\mathfrak{R}^{1}, i\in\{1,2,\cdots,l\}$ are fusion weights for the first-layer RNNs, and $w^{(2)}\in\mathfrak{R}^{1}$ is the fusion weight for the second-layer RNN. These weights can be integrated into the whole network and their optimal values are automatically learned from the data. As in the original two-layer RNN model, we also use Equation$~$(\ref{Loss}) to construct the loss function and the BPTT algorithm to optimize it. \begin{figure*} \centering \includegraphics[scale=0.6]{figure4.pdf}\\ \caption{Flowchart of spectral-spatial cascaded RNN model.}\label{SSRNN} \end{figure*} Different from the first improvement strategy, our second strategy is based on the output-level connection. As shown in Fig.$~$\ref{Improve2}, we feed the features extracted by the first-layer RNNs into their own output layers, so that they can learn more discriminative features. Combining these features together using the second-layer RNN will then result in a better $\mathbf{F}^{(2)}$. In particular, for each $\mathbf{F}_{i}^{(1)}, i\in\{1,2,\cdots,l\}$, we can input it into an output layer and construct a loss function $L_{i}^{(1)}, i\in\{1,2,\cdots,l\}$.
Meanwhile, we also input $\mathbf{F}^{(2)}$ into an output layer and construct another loss function $L^{(2)}$. After that, a weighted summation method can be used to combine them together, which can be formulated as \begin{equation}\label{output-level} \tilde{L} = \frac{1}{l}\sum_{i=1}^{l}w_{i}^{(1)}L_{i}^{(1)} + w^{(2)}L^{(2)} \end{equation} where $w_{i}^{(1)}\in\mathfrak{R}^{1}$ and $w^{(2)}\in\mathfrak{R}^{1}$ are fusion weights, and $L_{i}^{(1)}$ and $L^{(2)}$ are derived from Equation$~$(\ref{Loss}). The final loss function $\tilde{L}$ can be optimized by using the BPTT algorithm. In the prediction phase, we can remove the output layers of the first-layer RNNs and use the output from the second-layer RNN as the final classification result. \subsection{Spectral-spatial Cascaded RNNs} Due to atmospheric effects, instrument noise, and natural spectral variations, materials from the same class may have very different spectral responses, while those from different classes may have similar spectral responses. If we only use the spectral information, the resulting classification maps will have many outliers, which is known as the ``salt and pepper'' phenomenon. As a three-dimensional cube, HSIs also have rich spatial information, which can be used as a complement to address this issue. Among numerous deep learning models, CNNs have demonstrated their superiority in spatial feature extraction. In \cite{chen2016}, a typical two-dimensional CNN is designed to extract spatial features from HSIs. The input of this model is the first principal component of the HSI. Inspired by the two-dimensional CNN model, we extend the cascaded RNN model to its spectral-spatial version by adding some convolutional layers. Fig.$~$\ref{SSRNN} shows the flowchart of the proposed spectral-spatial cascaded RNN model. For a given pixel $\mathbf{x}\in \mathfrak{R}^{k}$, we select a small cube $\hat{\mathbf{x}}\in \mathfrak{R}^{\omega\times\omega\times k}$ centered at it.
Then, we split this cube into $k$ matrices $\hat{\mathbf{x}}_{i}\in \mathfrak{R}^{\omega\times\omega}, i\in\{1,2,\cdots, k\}$ across the spectral domain. Each $\hat{\mathbf{x}}_{i}$ is fed into several convolutional layers to learn spatial features. Following \cite{chen2016}, we also use three convolutional layers, and the first two layers are followed by pooling layers. The input size $\omega\times\omega$ is $27\times 27$. The sizes of the three convolutional filters are $4\times 4\times 32$, $5\times 5\times 64$ and $4\times 4\times 128$, respectively. After these convolutional operators, each $\hat{\mathbf{x}}_{i}$ generates a $128$-dimensional spatial feature $\mathbf{s}_{i}$. Similar to the cascaded RNN model, we can also consider $\mathbf{s}=(\mathbf{s}_{1},\mathbf{s}_{2},\cdots,\mathbf{s}_{k})$ as a sequence whose length is $k$. This sequence is divided into $l$ sub-sequences, which are subsequently fed into the first-layer RNNs to reduce the redundancy inside each sub-sequence. The outputs from the first-layer RNNs are combined again to generate another sequence, which is fed into the second-layer RNN to learn complementary information. Compared to the cascaded RNN model, the spectral-spatial cascaded RNN model is deeper and more difficult to train. Therefore, we propose a transfer learning method to train it. Specifically, we first pre-train the convolutional layers using all of $\hat{\mathbf{x}}_{i}, i\in\{1,2,\cdots, k\}$. We replace the two-layer RNNs with an output layer whose size is the number of classes $C$. Besides, we assume that the label of $\hat{\mathbf{x}}_{i}$ equals the label of its corresponding pixel $\mathbf{x}$. Then, we have $N\times k$ samples, which are used to train the convolutional layers. After that, the weights of these convolutional layers are fixed and the $N$ training samples are used again to train the two-layer RNNs. Finally, the whole network is fine-tuned based on the learned parameters.
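The band-grouping rule shared by both cascaded models (the first $l-1$ sub-sequences have length $d=\textmd{floor}(k/l)$, and the last one takes all remaining bands) can be sketched in plain Python; the function name is illustrative:

```python
def split_bands(spectrum, l):
    """Split a length-k band sequence into l sub-sequences: the first
    l-1 groups have d = floor(k/l) bands, the last takes the remainder."""
    k = len(spectrum)
    d = k // l  # floor(k/l)
    groups = [spectrum[i * d:(i + 1) * d] for i in range(l - 1)]
    groups.append(spectrum[(l - 1) * d:])
    return groups

# e.g. a 103-band Pavia University spectrum with l = 10 sub-sequences:
groups = split_bands(list(range(103)), 10)
# -> nine groups of 10 bands and a final group of 13 bands
```

Every band lands in exactly one group, so concatenating the groups recovers the original spectrum.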
\section{Experiments} \subsection{Data Description} Our experiments are conducted on two HSIs, which are widely used to evaluate classification algorithms. \textit{Indian Pines Data}: The first data set was acquired by the AVIRIS sensor over the Indian Pine test site in northwestern Indiana, USA, on June 12, 1992. The original data set contains 224 spectral bands. We utilize 200 of them after removing four bands containing zero values and 20 noisy bands affected by water absorption. The spatial size of the image is $145\times145$ pixels, and the spatial resolution is 20 m. The numbers of training and test pixels are reported in Table$~$\ref{IPData}. Fig.$~$\ref{RGBIP} shows the false-color image, as well as the training and test maps of this data set. \begin{table} \centering \caption{Numbers of training and test pixels used in the Indian Pines data set.}\label{IPData} \scalebox{0.9}{ \begin{tabular}{cccc} \hline Class No. & Class Name & Training & Test\\ \hline \hline 1 & Corn-notill & 50 & 1384 \\ 2 & Corn-mintill & 50 & 784 \\ 3 & Corn & 50 & 184 \\ 4 & Grass-pasture & 50 & 447 \\ 5 & Grass-trees & 50 & 697 \\ 6 & Hay-windrowed & 50 & 439 \\ 7 & Soybean-notill & 50 & 918 \\ 8 & Soybean-mintill & 50 & 2418 \\ 9 & Soybean-clean & 50 & 564 \\ 10 & Wheat & 50 & 162 \\ 11 & Woods & 50 & 1244 \\ 12 & Building-grass-trees & 50 & 330 \\ 13 & Stone-steel-towers & 50 & 45 \\ 14 & Alfalfa & 15 & 39 \\ 15 & Grass-pasture-mowed & 15 & 11 \\ 16 & Oats & 15 & 5 \\ \hline \hline - & Total & 695 & 9671 \\ \hline \end{tabular} } \end{table} \begin{figure} \centering \includegraphics[scale = 0.6]{RGBIP.pdf}\\ \caption{Visualization of the Indian Pines data. (a) False-color image. (b) Training data map. (c) Test data map.}\label{RGBIP} \end{figure} \textit{Pavia University Scene Data}: The second data set was acquired by the ROSIS sensor during a flight campaign over Pavia, northern Italy, on July 8, 2002.
The original image was recorded with 115 spectral channels ranging from 0.43 $\mu m$ to 0.86 $\mu m$. After removing noisy bands, 103 bands are used. The image size is $610\times340$ pixels with a spatial resolution of 1.3 m. There are nine classes of land covers, each with more than 1000 labeled pixels. The numbers of training and test pixels are listed in Table$~$\ref{PUSData}. Their corresponding distribution maps are shown in Fig.$~$\ref{RGBPUS}. \begin{table} \centering \caption{Numbers of training and test pixels used in the Pavia University data set.}\label{PUSData} \scalebox{0.9}{ \begin{tabular}{cccc} \hline Class No. & Class Name & Training & Test\\ \hline \hline 1 & Asphalt & 548 & 6631 \\ 2 & Meadows & 540 & 18649 \\ 3 & Gravel & 392 & 2099 \\ 4 & Trees & 524 & 3064 \\ 5 & Metal sheets& 265 & 1345 \\ 6 & Bare Soil & 532 & 5029 \\ 7 & Bitumen & 375 & 1330 \\ 8 & Bricks & 514 & 3682 \\ 9 & Shadows & 231 & 947 \\ \hline \hline - & Total & 3921 & 42776 \\ \hline \end{tabular} } \end{table} \begin{figure} \centering \includegraphics[scale = 0.5]{RGBPUS.pdf}\\ \caption{Visualization of the Pavia University data. (a) False-color image. (b) Training data map. (c) Test data map.}\label{RGBPUS} \end{figure} \subsection{Experimental Setup} In order to highlight the effectiveness of our proposed models, we compare them with SVM, a one-dimensional CNN (1D-CNN), a two-dimensional CNN (2D-CNN), and the original RNN using GRUs (RNN). For simplicity, the cascaded RNN model using GRUs is abbreviated as CasRNN; the two improved versions of CasRNN based on feature-level and output-level connections are abbreviated as CasRNN-F and CasRNN-O, respectively; and the spectral-spatial CasRNN is abbreviated as SSCasRNN. The details of these models are summarized as follows. \begin{enumerate} \item SVM: The input of SVM is the original spectral signature. We choose the Gaussian kernel as its kernel function.
The penalty parameter and the spread of the Gaussian kernel are selected from a candidate set $\{10^{-3}, 10^{-2}, \cdots, 10^{3}\}$ using a fivefold cross-validation method. \item 1D-CNN: The structure of 1D-CNN is the same as that in \cite{hu2015}. It contains an input layer, a convolutional layer with 20 kernels whose size is $11\times1$, a max-pooling layer whose kernel size is $3\times1$, a fully-connected layer with 100 hidden nodes, and an output layer. \item 2D-CNN: The structure of 2D-CNN is the same as that in \cite{chen2016}, which consists of three convolutional layers and two max-pooling layers. Please refer to Table IX in \cite{chen2016} for its design details. \item RNN: The GRU is used as the basic unit of the RNN. The number of hidden nodes is chosen from a candidate set $\{2^{4}, 2^{5}, \cdots, 2^{10}\}$ via a fivefold cross-validation method. \end{enumerate} The deep learning models are constructed with the PyTorch framework. To optimize them, we use a mini-batch stochastic gradient descent algorithm. The batch size, the learning rate, and the number of training epochs are set to 64, 0.001, and 300, respectively. For SVM, we use the LIBSVM package in MATLAB. All of the experiments are implemented on a personal computer with an Intel Core i7-4790 3.60 GHz processor, 32 GB RAM, and a GTX TITAN X graphics card. The classification performance of each model is evaluated by the overall accuracy (OA), the average accuracy (AA), the per-class accuracy, and the Kappa coefficient. OA is the ratio of the number of correctly classified pixels to the total number of pixels in the test set, AA is the average of the accuracies over all classes, and Kappa is the percentage of agreement corrected by the amount of agreement that would be expected purely by chance.
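The three metrics can be computed directly from a confusion matrix; a minimal self-contained sketch in Python (the matrix values below are made up for illustration):

```python
def classification_metrics(cm):
    """OA, AA, and Kappa from a square confusion matrix cm, where
    cm[i][j] counts test pixels of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(n))
    oa = diag / total
    # AA: mean of per-class accuracies (diagonal over true-class row sum).
    aa = sum(cm[i][i] / sum(cm[i]) for i in range(n)) / n
    # Expected chance agreement from row (true) and column (predicted) marginals.
    col = [sum(cm[i][j] for i in range(n)) for j in range(n)]
    pe = sum(sum(cm[i]) * col[i] for i in range(n)) / total**2
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Toy 2-class example: 85 of 100 test pixels correctly classified.
oa, aa, kappa = classification_metrics([[50, 10], [5, 35]])
# oa = 0.85, aa = (50/60 + 35/40)/2, kappa = (0.85 - 0.51)/0.49
```

Kappa discounts the agreement that a random assignment with the same class proportions would already achieve, which is why it is lower than OA here.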
\subsection{Parameter Analysis} \begin{figure} \centering \includegraphics[scale=0.4]{CasRNNHiddenNum.pdf}\\ \caption{Performance of the CasRNN model with different sizes of hidden layers on the Indian Pines data (Left) and the Pavia University data (Right).}\label{CasRNNHiddenNum} \end{figure} \begin{figure} \centering \includegraphics[scale=0.4]{SSCasRNNHiddenNum.pdf}\\ \caption{Performance of the SSCasRNN model with different sizes of hidden layers on the Indian Pines data (Left) and the Pavia University data (Right).}\label{SSCasRNNHiddenNum} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{SubsequenceNumIP.pdf}\\ \caption{Performance of different models on the Indian Pines data with different sub-sequence numbers $l$. }\label{SubsequenceNumIP} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{SubsequenceNumPUS.pdf}\\ \caption{Performance of different models on the Pavia University data with different sub-sequence numbers $l$. }\label{SubsequenceNumPUS} \end{figure} The proposed models have three important hyperparameters: the number of sub-sequences $l$, and the sizes of the hidden layers in the first-layer and second-layer RNNs. To test their effects on classification performance, we first fix $l$ and select the size of the hidden layers from a candidate set $\{16,32,64,128,256,384\}$. Then, we fix the size of the hidden layers and choose $l$ from another set $\{2,4,6,\cdots,16,18,20\}$. Since the same hyperparameter values are used for CasRNN and its two improved versions (i.e., CasRNN-F and CasRNN-O), we only demonstrate the performance of CasRNN here, shown in Fig.$~$\ref{CasRNNHiddenNum}. In this three-dimensional diagram, the first two axes (named $Hidden1$ and $Hidden2$) correspond to the numbers of hidden nodes in the first-layer RNN and the second-layer RNN, respectively, while the third axis represents the classification accuracy OA.
From this figure, we can observe that when $Hidden1\geq32$ and $Hidden2\geq128$, CasRNN achieves better OA on the Indian Pines data than with the other values. The best OA appears when $Hidden1=128$ and $Hidden2=256$. For the Pavia University data, OA varies more than on the Indian Pines data, but we can still find the best value when $Hidden1=256$ and $Hidden2=16$. Similarly, Fig.~\ref{SSCasRNNHiddenNum} shows the OA values achieved by SSCasRNN using different hidden sizes. We can see that the optimal parameter values are $Hidden1=128, Hidden2=256$ for the Indian Pines data, and $Hidden1=256, Hidden2=256$ for the Pavia University data, respectively. \begin{table*} \centering \caption{Classification results (\%) of different models on the Indian Pines data.}\label{IPResults} \scalebox{0.9}{ \begin{tabular}{ccccccccc} \hline Class No. & SVM & 1D-CNN & RNN & CasRNN & CasRNN-F & CasRNN-O & 2D-CNN & SSCasRNN \\ \hline \hline 1 & 64.31 & 61.34 & 64.74 & 68.35 & 68.93 & 68.21 & 82.51 & \textbf{86.99} \\ 2 & 70.92 & 60.33 & 61.35 & 64.80 & 67.60 & 67.35 & 88.14 & \textbf{98.72} \\ 3 & 84.78 & 80.43 & 74.46 & 77.17 & 83.70 & 85.87 & \textbf{100} & \textbf{100} \\ 4 & 91.05 & 89.04 & 83.45 & 91.50 & 90.60 & 89.93 & \textbf{94.85} & 94.41 \\ 5 & 85.94 & 90.53 & 77.04 & 79.34 & 80.49 & 80.92 & 85.80 & \textbf{97.42} \\ 6 & 93.62 & 96.13 & 87.70 & 92.03 & 92.94 & 92.94 & 99.77 & \textbf{100} \\ 7 & 69.17 & 72.11 & 76.03 & 74.84 & 78.54 & 79.30 & 82.35 & \textbf{87.15} \\ 8 & 52.90 & 54.47 & 60.79 & 67.41 & 67.49 & 66.91 & 73.86 & \textbf{85.98} \\ 9 & 76.60 & 75.71 & 61.17 & 65.60 & 67.02 & 65.43 & 86.00 & \textbf{87.23} \\ 10 & 97.53 & 99.83 & 93.21 & 95.06 & 96.91 & 98.15 & \textbf{100} & \textbf{100} \\ 11 & 77.49 & 80.87 & 81.67 & 83.28 & 90.03 & 86.09 & 94.53 & \textbf{97.51} \\ 12 & 73.33 & 78.48 & 55.45 & 54.85 & 67.88 & 54.55 & 97.27 & \textbf{99.70} \\ 13 & \textbf{100} & 91.11 & 86.67 & 93.33 & 95.56 & 93.33 & \textbf{100} & \textbf{100} \\ 14 & 87.18 & 94.87 & 69.23 & 76.92 & 84.61 & 76.92 &
97.44 & \textbf{100} \\ 15 & 90.91 & 90.91 & 90.91 & 90.91 & 90.91 & 90.91 & \textbf{100} & \textbf{100} \\ 16 & \textbf{100} & \textbf{100} & 80.00 & \textbf{100} & \textbf{100} & 80 & \textbf{100} & \textbf{100} \\ \hline \hline OA & 70.55 & 70.79 & 69.82 & 73.49 & 75.85 & 74.60 & 85.43 & \textbf{91.79} \\ AA & 82.23 & 82.23 & 75.24 & 79.71 & 82.70 & 79.80 & 92.66 & \textbf{95.94} \\ Kappa & 66.90 & 67.07 & 65.87 & 69.91 & 72.57 & 71.19 & 83.49 & \textbf{90.62} \\ \hline \end{tabular} } \end{table*} \begin{figure*} \centering \includegraphics[scale = 0.85]{figure11.pdf}\\ \caption{Classification maps of the Indian Pines data using different models. (a) SVM. (b) 1D-CNN. (c) RNN. (d) CasRNN. (e) CasRNN-F. (f) CasRNN-O. (g) 2D-CNN. (h) SSCasRNN. }\label{MapsIP} \end{figure*} \begin{table*} \centering \caption{Classification results (\%) of different models on the Pavia University data.}\label{PUSResults} \scalebox{0.9}{ \begin{tabular}{ccccccccc} \hline Class No. & SVM & 1D-CNN & RNN & CasRNN & CasRNN-F & CasRNN-O & 2D-CNN & SSCasRNN \\ \hline \hline 1 & 84.74 & 80.94 & 81.51 & 82.34 & 83.56 & 83.52 & 77.39 & \textbf{89.82} \\ 2 & 64.50 & 70.37 & 62.58 & 67.13 & 70.65 & 71.37 & \textbf{98.89} & 96.06 \\ 3 & 72.56 & 77.32 & 64.65 & 60.51 & 68.75 & 64.51 & 56.74 & \textbf{78.89} \\ 4 & 97.13 & 85.93 & \textbf{98.89} & 98.63 & 98.11 & 98.43 & 92.75 & 95.89 \\ 5 & 99.55 & 99.70 & 99.26 & 99.41 & 99.55 & 99.33 & 99.78 & \textbf{100} \\ 6 & \textbf{93.30} & 93.26 & 88.90 & 84.97 & 88.29 & 89.08 & 47.27 & 57.67 \\ 7 & 91.28 & \textbf{95.41} & 92.63 & 90.60 & 76.54 & 91.13 & 80.08 & 80.53 \\ 8 & 91.99 & 84.47 & 91.04 & 92.23 & 86.04 & 93.54 & \textbf{96.69} & 96.80 \\ 9 & 95.56 & 92.08 & 95.35 & 94.40 & 95.35 & 94.72 & \textbf{96.30} & 95.99 \\ \hline \hline OA & 78.75 & 79.55 & 76.58 & 78.03 & 79.56 & 80.86 & 86.18 & \textbf{90.30} \\ AA & 87.85 & 86.61 & 86.09 & 85.58 & 85.21 & 87.29 & 82.88 & \textbf{87.97} \\ Kappa & 73.62 & 74.28 & 71.02 & 72.55 & 74.31 & 75.93 & 81.22 
& \textbf{86.26} \\ \hline \end{tabular} } \end{table*} \begin{figure*} \centering \includegraphics[scale = 0.85]{figure12.pdf}\\ \caption{Classification maps of the Pavia University data using different models. (a) SVM. (b) 1D-CNN. (c) RNN. (d) CasRNN. (e) CasRNN-F. (f) CasRNN-O. (g) 2D-CNN. (h) SSCasRNN. }\label{MapsPUS} \end{figure*} Fig.~\ref{SubsequenceNumIP} and Fig.~\ref{SubsequenceNumPUS} evaluate the effects of $l$ on classifying the Indian Pines and the Pavia University data sets, respectively. In these figures, different colors represent different models: CasRNN, CasRNN-F, CasRNN-O and SSCasRNN. As $l$ increases, the OAs achieved by these models tend to first increase and then decrease. Given the same $l$, SSCasRNN significantly outperforms the other three models. For the Indian Pines data, the maximal OAs of the four models appear at the same $l$, so the optimal $l$ is set to 10 for all of them. Different from the Indian Pines data, the four models have different optimal $l$ values on the Pavia University data. As shown in Fig.~\ref{SubsequenceNumPUS}, the optimal $l$ value is 4 for SSCasRNN, and 8 for the other three models. \subsection{Performance Comparison} In this section, we report quantitative and qualitative results of our proposed models and compare them with other state-of-the-art models. Table~\ref{IPResults} reports the detailed classification results of different models on the Indian Pines data, including OA, AA, Kappa and the per-class accuracy. Bold font in each row denotes the best result. Several conclusions can be drawn from this table. First, if we directly input the whole spectral bands into RNN, its OA, AA and Kappa values are 69.82\%, 75.24\% and 65.87\%, respectively, which are all lower than those achieved by the SVM and 1D-CNN models. This indicates that RNN cannot fully explore the long-term spectral sequence of HSIs.
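A minimal numpy sketch of the cascade idea analyzed above: the spectrum is split into $l$ sub-sequences, a first-layer RNN summarizes each of them, and a second-layer RNN fuses the $l$ summaries into one spectral feature. Plain tanh units stand in for the GRUs, the weights are random and untrained, and all sizes are illustrative.

```python
import numpy as np

def rnn_layer(seq, W, U, h0):
    """Vanilla RNN: h_t = tanh(W x_t + U h_{t-1}); returns the final state."""
    h = h0
    for x in seq:
        h = np.tanh(W @ x + U @ h)
    return h

def cascaded_rnn(spectrum, l, hidden1=4, hidden2=6, seed=0):
    """Encode each of l sub-sequences with a first-layer RNN, then fuse
    the l summary vectors with a second-layer RNN."""
    rng = np.random.default_rng(seed)   # random weights: a sketch, untrained
    bands = len(spectrum)
    assert bands % l == 0, "for simplicity assume l divides the band number"
    step = bands // l
    W1 = rng.normal(size=(hidden1, 1)); U1 = rng.normal(size=(hidden1, hidden1))
    W2 = rng.normal(size=(hidden2, hidden1)); U2 = rng.normal(size=(hidden2, hidden2))
    # First layer: one summary vector per sub-sequence of length bands / l.
    summaries = [rnn_layer(spectrum[i * step:(i + 1) * step].reshape(-1, 1),
                           W1, U1, np.zeros(hidden1)) for i in range(l)]
    # Second layer: a short sequence of l summaries -> one spectral feature.
    return rnn_layer(summaries, W2, U2, np.zeros(hidden2))
```

Each RNN now only has to propagate state over short sequences (length $B/l$ and $l$ instead of $B$), which is the motivation for the cascade.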
On the contrary, considering the redundant and complementary properties of the spectral signature, our proposed CasRNN model improves the performance of RNN by about 4 percentage points, thus outperforming SVM and 1D-CNN. Second, compared to CasRNN, CasRNN-F and CasRNN-O obtain better results, which validates the effectiveness of the two improvement strategies. In terms of per-class accuracy, CasRNN-F improves almost all classes in comparison with CasRNN, so it might be more powerful than CasRNN-O on the Indian Pines data. Third, compared to the spectral classification models, 2D-CNN significantly improves the classification results by about 10 percentage points. This means that the consideration of spatial information is very important on the Indian Pines data, because there are many large and homogeneous objects, as shown in Fig.~\ref{RGBIP}(c). By incorporating the spatial information into the CasRNN model, our proposed SSCasRNN model further increases the performance to above 90\%. Besides, it obtains the highest accuracies in 15 different classes, which clearly confirms the effectiveness of SSCasRNN. In addition to the quantitative results, we also visualize the classification results of different models in Fig.~\ref{MapsIP}. Different colors in this figure correspond to different classes. Compared to the ground-truth map in Fig.~\ref{RGBIP}(c), the spectral classification models (i.e., SVM, 1D-CNN, RNN, CasRNN, CasRNN-F and CasRNN-O) have many outliers in their classification maps due to the spectral variability of materials. This phenomenon can be alleviated by 2D-CNN, because it makes use of the spatial contextual information instead of the spectral information. For homogeneous regions, especially large objects, 2D-CNN performs very well. However, it easily results in an over-smoothing problem, especially for small objects, as demonstrated in Fig.~\ref{MapsIP}(g).
Different from 2D-CNN and the spectral models, SSCasRNN takes advantage of spectral and spatial information simultaneously. As shown in Fig.~\ref{MapsIP}(h), it has significantly fewer outliers than the spectral models, and retains more boundary details of objects than 2D-CNN. Table~\ref{PUSResults} and Fig.~\ref{MapsPUS} show the classification results of different models on the Pavia University data. Similar conclusions can be drawn from them. For the spectral models, CasRNN is better than RNN, while CasRNN-F and CasRNN-O are superior to CasRNN. All of these models exhibit the ``salt and pepper'' phenomenon in their classification maps. Compared to the best spectral model, 2D-CNN improves OA and Kappa by more than 5 percentage points. In addition, it generates fewer outliers and leads to a more homogeneous classification map. Nevertheless, without using the spectral information, its performance is not very high, and the classification map is easily over-smoothed. Combining the spectral and spatial information, our proposed SSCasRNN model alleviates these issues. It improves OA from 86.18\% to 90.30\%, and preserves more details in the classification map. However, in comparison with the Indian Pines data, the classification results achieved by SSCasRNN are still not very high. One possible reason is that there exist many small objects in the Pavia University data, which increases the difficulty of exploring spatial features. \section{Conclusions} In this paper, we proposed a cascaded RNN model for HSI classification. Compared to the original RNN model, our proposed model can fully explore the redundant and complementary information of the high-dimensional spectral signature. Based on it, we designed two improvement strategies by constructing connections between the first-layer RNN and the output layer, thus generating more discriminative spectral features.
Additionally, considering the importance of spatial information, we further extended the proposed model into its spectral-spatial version to learn spectral and spatial features simultaneously. To test the effectiveness of the proposed models, we compared them with several state-of-the-art models on two widely used HSIs. The experimental results demonstrate that the cascaded RNN model obtains higher performance than RNN, and that its modifications further improve the performance. Besides, we also thoroughly evaluated the effects of different hyperparameters on the classification performance of the proposed models, including the hidden sizes and the number of sub-sequences. In the future, more experiments will be conducted to validate the effectiveness of our proposed models. In addition, more powerful spectral-spatial models will be explored. Since the sizes and shapes of different objects vary, using patches or cubes of the same size as inputs easily leads to the loss of spatial information.
\section{Introduction} The fact that physical theories simultaneously aim at explaining phenomena already observed, and need to be, at least in principle, experimentally falsifiable, puts the measurement process at the centre of attention. The statistics of single measurement outcomes, and the nature of correlations between them, depend on the physical theory describing the process of generating the outcomes. It cannot be a surprise that access to systems exhibiting richer statistics of measurement outcomes enables one to improve performance in different tasks related to information processing. For example, the non-existence of a hidden-variable model for observed measurement outcomes, attested by a violation of the CHSH inequality, is a sufficient condition for a string of outcomes to be intrinsically random, i.e. to exhibit randomness that cannot be explained by our lack of knowledge about the system \cite{Pironio2010}. The intrinsic randomness of quantum correlations can be used for the secure establishment of a cryptographic key \cite{Bennett1984}, even in a situation when one relies solely on the measurement statistics, without the need to trust that the system has been prepared, and the measurements performed, in a specific way \cite{Acin2007}. Furthermore, a gap in effectiveness between quantum and classical resources is present in communication complexity tasks \cite{Buhrman2010}. The qualitative difference between classical and quantum systems, through the notion of Bell non-locality \cite{Bell1964}, can be quantified in two equivalent frameworks: Bell inequalities and non-locality games. In the game setting, the correlators present in the Bell inequality may be assigned some desired values, and the task for the physical system is to satisfy all the given constraints with the highest probability, with respect to a previously known probability distribution of measurement settings, which are interpreted as questions addressed to the parties sharing the physical system.
There exist games whose optimal classical and quantum strategies are proven to yield different winning probabilities. The most famous example is the game associated with the CHSH Bell inequality. Taking this into account, it is of vital importance to calculate the quantum and classical values achievable for a given non-locality game. In this paper, we propose a family of non-locality unique games for two parties, defined on a square lattice on an arbitrary surface. The games show a lot of similarities with the error correction codes proposed for fault tolerant quantum computation by Kitaev \cite{Kitaev2003}. We exploit this structure to calculate their classical values in polynomial time (while this is an NP-hard problem in general) for $d=2$ measurement outcomes. Furthermore, due to these geometrical properties we are also able to establish a classification of games into equivalence classes with respect to local relabeling of measurement outcomes, and to study the role which periodic boundary conditions can play in determining the classical and quantum values of these games. We assume that the periodic lattice has an even number of cells in order to allow for the bipartite structure needed for a non-locality game; otherwise, the construction can be used to define a contextuality game. We also point out important differences between the Kitaev codes and our games. The above is achieved through a representation of non-locality games in graph form. The graph description of non-locality games, its basic properties, as well as general notation and the calculation of the games' classical and quantum values, are introduced in Section \ref{Pre}. Sections \ref{2}, \ref{3}, \ref{sub:4+} are devoted to the classification of games for gradually increasing numbers of possible measurement outcomes, provided that the winning conditions are described by a specific group of permutations.
Section \ref{6} contains a description of the procedure for calculating classical values for $d=2$, and a discussion of possible extensions to higher dimensions, one being a generalization of the method for $d=2$, and the second one based on a unique representation of each game with respect to a chosen maximal spanning tree. Section \ref{ex} is devoted to analytic and numerical studies of the role which periodic boundary conditions can play in determining the classical and quantum values for the family of games. We conclude with Section \ref{con}. \section{Preliminaries}\label{Pre} \subsection{Non-locality games for 2 parties} The setting of the game is the following: a referee asks a question $x$ from the set $A=\{A_{1},\dots,A_{|A|}\}$ to one part of the spatially separated system, Alice, and a question $y$ from the set $B=\{B_{1},\dots,B_{|B|}\}$ to another party, Bob. Then Alice and Bob return answers $a\in\{0,\dots,d-1\}$ and $b\in\{0,\dots,d-1\}$, respectively. The case of different numbers of possible answers for Alice and Bob can also be described in this way, with a subset of answers not used by one party. The parties cannot communicate after receiving the questions, and they return answers such that the probability of winning the game is maximized, with uniform probability of different pairs of questions to appear. The winning conditions (i.e., the set of accepted answers for given questions) are known to both parties. The choice of answers can be based on a pre-established classical strategy (then the maximal winning probability is called the classical value of the game) or on outcomes of measurements on a shared quantum state (with the maximal winning probability called the quantum value of the game). The dimension of the underlying quantum systems and measurement operators can in principle be arbitrary. This makes calculating the quantum value difficult.
It is still not known if this is possible for an arbitrary game, although semidefinite programs can be used to compute upper bounds \cite{Navascues2015}. Therefore, one is interested in a special class of non-locality games, called \textit{unique games}. They are defined by the property that for every pair of questions and an answer by Alice, $(x,y,a)$, there exists exactly one answer $b$ that satisfies the winning condition of the game. In other words, for each pair of questions the constraint which the winning answers must satisfy is defined by a pre-established permutation $\pi$ of the set of answers; the players win iff $b=\pi(a)$. For this class of games, it has been shown that the quantum value can be approximated to a constant factor in polynomial time \cite{Kempe2010}. Furthermore, in the scenario of an XOR game, where $A_{A}=A_{B}=2$ and the winning conditions depend solely on $a\oplus b$ for a given $(x,y)$, the quantum value can be computed exactly in polynomial time \cite{Cleve2004, Wehner2010} due to the Tsirelson theorem \cite{Tsirelson1980}. On the other hand, calculating the classical value of a non-locality game is a vertex labeling problem, and as such, for every positive constant $\delta$, there is always $A_{A}=A_{B}$ large enough such that it is NP-hard even to decide whether a given game has a classical strategy satisfying all winning conditions, or there is no strategy satisfying more than a $\delta$ fraction of the winning conditions \cite{Arora1998,Arora1998B,Raz1998}. If the Unique Games Conjecture \cite{Khot2002} is true, then the above applies as well to unique non-locality games, with the modification that the task is to distinguish between the existence of a strategy satisfying almost all winning conditions, and the non-existence of a strategy satisfying more than a small fraction of them. For unique games in general, and XOR games in particular, it is known that calculating their exact classical values is an NP-hard problem \cite{Hastad2001}.
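For small games, the classical value can be computed exactly by brute-force enumeration of deterministic strategies; the sketch below (names ours) does this for the CHSH game mentioned above, whose classical value is $3/4$ while the quantum value is $\cos^2(\pi/8)\approx 0.854$.

```python
from itertools import product

def classical_value_xor(win):
    """Exact classical value of a two-question, two-answer game by brute
    force: enumerate all deterministic strategies a(x), b(y), with the
    four question pairs (x, y) asked uniformly at random.
    win(x, y, a, b) -> True iff the answers satisfy the winning condition."""
    best = 0.0
    for a0, a1, b0, b1 in product((0, 1), repeat=4):
        a, b = (a0, a1), (b0, b1)
        wins = sum(win(x, y, a[x], b[y]) for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4.0)
    return best

# CHSH winning condition: the players win iff a XOR b == x AND y.
chsh = lambda x, y, a, b: (a ^ b) == (x & y)
```

For CHSH, `classical_value_xor(chsh)` returns 0.75: every deterministic strategy fails on at least one of the four question pairs. The same enumeration over $d^{|A|}d^{|B|}$ strategies is what becomes infeasible for large games, motivating the polynomial-time procedure developed later for $d=2$.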
In this paper we propose a class of two-party non-locality unique games for arbitrary local dimensions $d=A_{A}=A_{B}$. This class is defined on bipartite graphs, with vertices of the graph corresponding to questions asked by the referee, and edges labeled by permutations between measurement outcomes that satisfy the winning condition. \subsection{Graph description} \label{sec:labeledgraphs} Below we introduce basic notions associated with the graph representation of non-locality games. In this representation, the vertices of a graph correspond to questions asked by the referee (measurements). Two vertices are connected by an edge iff both corresponding measurements can be performed simultaneously. A function $K:E(G)\mapsto S_d$ assigns to each edge a permutation of the set of $d$ elements. These permutations represent the desired correlations between measurement outcomes. If the graph is connected and bipartite, then each of the two independent sets of vertices corresponds to measurements performed by a distinct party. Then the labeled graph directly corresponds to a non-locality game. The classification provided in the further chapters applies to generalized XOR games, which have been investigated in detail in \cite{Rosicka2016}. XOR games are characterized by binary measurement outcomes ($A_{A}=A_{B}=2$) with correlations/anticorrelations demanded between outcomes of selected parties. In the generalized version of a XOR game, we allow for $A_{A}=A_{B}=d$ possible outcomes from the set $\{0, \dots, d-1\}$. Constraints on the edge connecting vertices $u$ and $v$ are defined by permutations from the symmetric group $S_{n}$ acting on the set $\{0, \dots, n-1\}$. We will focus on games in which these permutations belong to the set $L_n=\{\tilde{\pi}_i:\tilde{\pi}_i(x)=i-x\mod n\}$ or $L_n'=\{\tilde{\sigma}_i:\tilde{\sigma}_i(x)=x+i \mod n\}.$ A graph together with an edge labeling will be referred to as a \textit{labeled} graph.
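The two families $L_n$ and $L_n'$ can be realized directly as tuples; this sketch (encoding ours) checks that every element of $L_n$ is an involution, and that composing two elements of $L_n$ yields a cyclic shift from $L_n'$.

```python
def L(n):
    """Reflections x -> i - x (mod n), one permutation per i."""
    return [tuple((i - x) % n for x in range(n)) for i in range(n)]

def L_prime(n):
    """Cyclic shifts x -> x + i (mod n), one permutation per i."""
    return [tuple((x + i) % n for x in range(n)) for i in range(n)]

def compose(p, q):
    """(p o q)(x) = p(q(x)), with permutations stored as tuples."""
    return tuple(p[q[x]] for x in range(len(p)))
```

Since $\tilde{\pi}_i(\tilde{\pi}_j(x)) = i - j + x \bmod n$, two reflections compose to a shift, so $L_n \cup L_n'$ is closed under composition (it is the dihedral group acting on outcomes).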
If all permutations assigned to the edges are equal to their inverses, we will talk about an undirected labeled graph. In this paper, we will be interested in connected bipartite graphs defined on a square lattice, but many of the results presented here do not depend on the type of the connected bipartite graph. \begin{figure}[h!] \includegraphics[scale=0.25]{fig1.pdf} \caption{\label{fig:1} Examples of cycles for $d=3$: a) good, b) ugly, c) bad. Colors correspond to permutations from the group $L_3=\{\tilde{\sigma}_i:\tilde{\sigma}_i(x)=i-x \mod 3\}$, with $\tilde{\sigma}_0$ (red) preserving value 0, $\tilde{\sigma}_1$ (blue) preserving value 2, $\tilde{\sigma}_2$ (green) preserving value 1. $x$ is the input of the permutation, whereas $i$ labels permutations.} \end{figure} We will start with a characterization of the amount of classicality for a XOR game ($d=2$). For this we will use the notion of consistency. An assignment $k:V(G)\mapsto \{0,\dots,d-1\}$ of measurement outcome values to vertices is \textit{consistent} if it has no contradiction on any edge of the graph, i.e. for every edge $uv$ the relation between the outcomes on its vertices is given by the permutation labeling this edge, $k(v)=\pi(k(u))$. A connected labeled graph can have no more than $d$ consistent assignments, as assigning a value to one vertex determines the values of all its neighbors. We will say that a labeled graph (or its subgraph) is \textit{good} if it has $d$ consistent vertex assignments and \textit{bad} if no assignment is consistent. If the number of consistent assignments is larger than $0$ but less than $d$, we say that the graph is \textit{ugly} (see Fig. \ref{fig:1}). Every consistent assignment defines a deterministic strategy for the given game, which allows the players to win with probability 1. If a labeled graph has no consistent assignment, then no such strategy exists for the game.
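The consistency check just described amounts to propagating an outcome value from one vertex across the graph and counting how many of the $d$ possible starting values survive; a minimal sketch (encoding and names ours):

```python
def inverse(p):
    """Inverse of a permutation stored as a tuple."""
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def classify_labeled_graph(n, edges, d):
    """Classify a connected labeled graph on vertices 0..n-1 as 'good'
    (d consistent assignments), 'ugly' (1..d-1) or 'bad' (none).
    edges: list of (u, v, perm); an assignment k is consistent on the
    edge iff k[v] == perm[k[u]]."""
    adj = {u: [] for u in range(n)}
    for u, v, p in edges:
        adj[u].append((v, tuple(p)))      # forward:  k[v] = p(k[u])
        adj[v].append((u, inverse(p)))    # backward: k[u] = p^{-1}(k[v])
    consistent = 0
    for start in range(d):                # fixing k[0] determines the rest
        k, stack, ok = {0: start}, [0], True
        while stack and ok:
            u = stack.pop()
            for v, p in adj[u]:
                want = p[k[u]]
                if v not in k:
                    k[v] = want
                    stack.append(v)
                elif k[v] != want:        # contradiction on an edge
                    ok = False
                    break
        consistent += ok
    return {d: "good", 0: "bad"}.get(consistent, "ugly")
```

The propagation visits each edge a constant number of times per starting value, so the whole classification runs in $O(d\,|E|)$ time.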
Thus, good and ugly graphs will describe games that can be won by strategies in which a state of the system is purely classical, and proper measurements just reveal the properly correlated values, whereas in games represented by bad graphs one can expect that quantum strategies may outperform classical ones. \subsection{Equivalence of labeled graphs} The notion of \textit{equivalence} between two games has to be properly defined in the language of their graph representation. \textit{We will say that two games are equivalent iff the corresponding labeled graphs are equivalent.} We say that two labeled graphs are equivalent iff one can be obtained from the other through: \begin{enumerate} \item an isomorphism of the underlying graphs \item changing the direction of an edge and replacing the permutation on this edge with its inverse \item switching operations $s(v,\sigma)$, which changes the labels on all edges incident with the vertex $v$ as follows: \begin{enumerate} \item if $\overrightarrow{uv}\in E(G),$ we replace $K(\overrightarrow{uv})=\pi$ with $K'(\overrightarrow{uv})=\sigma\pi$, \item if $\overrightarrow{vu}\in E(G),$ we replace $K(\overrightarrow{vu})=\pi$ with $K'(\overrightarrow{vu})=\pi\sigma^{-1}$. \end{enumerate} \end{enumerate} Each of the above operations can be interpreted as renaming the inputs and/or outputs. It follows that equivalent games have equal classical and quantum winning probabilities. In this paper, however, we will largely focus on the equivalence between different labelings on the same graph. We say that two labelings of a graph are \textit{equivalent} iff one can be obtained from the other through switches. It is clear that any two games defined on the same graph with equivalent labelings must be equivalent. \begin{figure}[h!] 
\includegraphics[scale=0.25]{fig2.pdf} \caption{\label{fig:2}Examples of equivalent games for $d=2$: a) equivalence in terms of switches (relabeling measurement outputs), b) equivalence in terms of graph isomorphism (relabeling measurements). Black and red edges represent permutations $\sigma_0=I$ and $\sigma_1=(01)$, respectively. $X$ denotes vertices where switches $\tilde{\sigma}_1$ are applied. } \end{figure} For games represented by a labeled graph on a planar grid, their classification will depend only on the \textit{local} structure of the graph. More precisely, the equivalence of two labelings of such a graph will be determined by the set and type of cycles defined on cells of the grid. By a cell we mean here a cycle that does not contain any other cycles (in a square lattice, this is a cycle with four edges). A bad cell will be referred to as a \textit{defect}. For grids on surfaces other than the plane (e.g., a torus), in order to classify the corresponding non-locality games we will have to take into account cycles arising from the topological structure. This is similar to the way in which classes of homology of error paths have to be taken into account in topological error correction codes in order to describe a logical state of a code, and we will comment on the observed similarities and differences between the two. \section{Classification of non-locality games for $d=2$}\label{2} The group of permutations of two elements does not have any non-trivial subgroups and consists only of the identity and transposition operations: $S_{2}=\{Id,(01)\}$. This group is an example of a permutation group of the form $L_d'$, which, along with the group $L_d$, will be subjected to a more detailed analysis for $d>2$ in the following chapters. Proofs of the theorems will be based on the concept of a canonical representation of a graph, used in \cite{Zaslavsky1982} to prove equivalent statements for signed graphs (which are functionally identical to labeled graphs with $d=2$ outcomes).
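For $d=2$, the switching operation and the class of a cycle can be sketched in a few lines (encoding ours: an edge label is 0 for $I$ and 1 for $(01)$). A switch flips exactly two edges of any cycle passing through the switched vertex, so the parity of a cycle is a switch invariant.

```python
def switch(labels, v):
    """Switch s(v, (01)) for d = 2: flip the label of every edge incident
    with v. labels maps frozenset({u, w}) -> 0 (identity) or 1 ((01))."""
    return {e: (x ^ 1 if v in e else x) for e, x in labels.items()}

def cycle_class(labels, cycle):
    """Parity of transpositions along a closed cycle: 0 = good, 1 = bad."""
    return sum(labels[frozenset({cycle[i], cycle[(i + 1) % len(cycle)]})]
               for i in range(len(cycle))) % 2
```

Running switches at every vertex of a 4-cycle carrying a single transposition leaves the cycle class equal to 1, in line with the invariance argument.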
Later, we generalize this line of reasoning to the case of higher $d$. \begin{figure}[h!] \includegraphics[scale=0.25]{fig3.pdf} \caption{\label{figura3}Transition from a labeled graph (left) to its canonical form (right) by switches ($\tilde{\sigma}_{1}\in L_{2}$) applied to vertices marked with 'X'. A selected spanning tree encircled in brown.} \end{figure} A \textit{spanning tree} of a graph is a subgraph containing all of its vertices and some of the edges, such that there is exactly one path connecting any two vertices of the graph. We use this to define the \textit{canonical representation} of a game. The canonical representation of a game $(G,K)$ with respect to the spanning tree $T$ is a game on the same graph with a labeling equivalent to $K$ such that the $I$ permutation is assigned to all edges of $T$. This is similar to a concept introduced in \cite{Zaslavsky1982} for signed graphs. Later, we shall use a generalized version for graphs labeled with $S_d$ for an arbitrary $d$. It is clear from the definitions that one can always obtain a canonical representation of any game through switches (see Fig. \ref{figura3}). For $d=2$, the equivalence classes of graphs are uniquely determined by their canonical representation: \begin{prop}\label{Th_Equiv} Two labelings of the same graph with $d=2$ outcomes are equivalent iff the corresponding games have the same canonical representation. The canonical representation can be defined with respect to any spanning tree. \end{prop} \begin{figure}[h!] \includegraphics[scale=0.2]{fig4_other_version.pdf} \caption{\label{figura4}A transformation of permutations due to the application of the switch $s(v,\pi)$.} \end{figure} Before we move to the proof, let us stress that Proposition {\ref{Th_Equiv}} is valid for games defined on surfaces with and without boundary conditions, as in both these cases it is possible to transform a game to its canonical form. \begin{proof} ($\Leftarrow$ part).
The same canonical representation of two games implies that one can be transformed into the other by performing switches that bring one of them to the common canonical form, and then transform the canonical form into the other game. Therefore, the games are equivalent. ($\Rightarrow$ part). It follows from the fact that, for $d=2$, a game has only one canonical representation with respect to a given spanning tree. To see this, let us notice that $S_{2}=L_{2}'=L_{2}$, and show how the structure of labelings changes due to local permutations. Let $\overrightarrow{uv}$ be an edge originally labeled with a permutation $\sigma$ ($\sigma(u)=v$, where we abuse the notation and denote by $u, v$ the labels of the vertices). If we apply a switch, then $\sigma$ changes to $\pi\sigma$ for the switch $s(v,\pi)$, and to $\sigma\pi^{-1}$ for $s(u,\pi)$ (see Fig. \ref{figura4}). According to this rule, any switch applied on one vertex belonging to an edge outside the spanning tree and aimed at changing the permutation assigned to this edge would have to be accompanied by an inverse transformation on neighboring vertices that are connected through the spanning tree. But the inverse of $(01)$ is $(01)$. Therefore, we have to apply the same permutation on every other vertex of the spanning tree when we construct the new canonical representation of the game, so that the $Id$ permutations on the spanning tree remain unaffected. But this would imply performing a switch on both ends of every edge not in the spanning tree, and as permutations belonging to $S_{2}$ commute, we have $(01)\sigma(01)^{-1}=\sigma$, and the permutations $\sigma\in\{Id, (01)\}$ assigned to all such edges remain unchanged. Thus the canonical representation will remain the same. It follows that two labelings of a graph with $S_2$ are equivalent if and only if they have a shared canonical representation with respect to an arbitrary spanning tree.
\end{proof} Notice that the first part of the proof does not depend on the number of outputs. Hence we have the following result. \begin{cor}\label{Cor_1} If two games for any $d$ have the same canonical representation (on an arbitrarily selected spanning tree), then they are equivalent. \end{cor} For $d=2$, the following holds as well. \begin{theorem}\label{Th1} Two labelings of a graph with $S_2$ are equivalent iff they have the same set of bad cycles. \end{theorem} Note that the above is the definition of equivalence for signed graphs in \cite{Harary1953}. In the simple case of $d=2$, every bad cycle contains an odd number of transpositions. \begin{proof} ($\Rightarrow$ part). For an arbitrary cycle, let $\sigma$ be the composition of all permutations along the cycle. If we perform a switch on an arbitrary vertex of the cycle, then $\sigma=\sigma_1\sigma_2$ becomes $\sigma_1\pi^{-1}\pi\sigma_2$ (or $\pi\sigma\pi^{-1}$ if the switch was on the starting vertex, but the permutations in $S_2$ commute, so $\pi\sigma\pi^{-1}=\sigma_1\pi^{-1}\pi\sigma_2$). Since $\sigma_1\pi^{-1}\pi\sigma_2=\sigma_1\sigma_2=\sigma$, no switch can change the permutation $\sigma$. Thus any two equivalent labelings have the same set of bad cycles. ($\Leftarrow$ part). It follows from Corollary \ref{Cor_1} that if two labelings are not equivalent, then their canonical representations with respect to the same spanning tree are different. Different canonical representations imply different sets of bad cycles, because one can always find a differentiating cycle. Let $e$ be an edge which differs between the two canonical representations. Any cycle consisting of $e$ and some edges of the spanning tree is good in one of the games and bad in the other (see Fig. \ref{figura5}). \end{proof} \begin{figure}[h!]
\includegraphics[scale=0.25]{fig5.pdf} \caption{\label{figura5}Games with different canonical representations for $d=2$, and the difference in the set of bad cycles: $\sigma_{1}\neq \sigma_{2}$.} \end{figure} Bad cycles in a grid can have two origins -- they can arise as a result of the existence of defects, or can have a non-local character, i.e. they are not a function of the defects in the graph. These two origins have distinctive implications for the classification of games, therefore we will present this classification separately for planar games and games on surfaces other than the plane. \subsection{Planar graphs for $d=2$} We will show that for a planar graph with $d=2$ all bad cycles arise from defects. Let us note that for $d=2$ the cycles can have only two classes: good and bad. A good cycle is a cycle with defect class $I$ (or $0$) and a bad cycle is a cycle with defect class $(01)$ (or $1$). \begin{figure}[h!] \includegraphics[scale=0.45]{fig6.pdf} \caption{\label{figura6} The defect class of a cell is associated with the value $x$ of $\tilde{\sigma}_{x}=\sigma_3^{-1}\sigma_2^{-1}\sigma_1\sigma_0$, $\tilde{\sigma}_{x}\in L'_{3}$.} \end{figure} Proposition \ref{Tdodawanie_n2} provides an easy way of finding the defect class of a larger cycle based on the classes of the cells contained within. For a labeling with $d=2$ outcomes, the class of every cell is either $0$ or $1$, which implies the following. \begin{cor}\label{cor_bad} For a planar graph labeled with $S_2$, a cycle is bad if it contains an odd number of bad cells; otherwise it is good. \end{cor} By Theorem \ref{Th1} we know that for $d=2$ two labelings are equivalent iff they have the same set of bad cycles. From Proposition \ref{Tdodawanie_n2} we see that for a planar graph the set of bad cycles is uniquely associated with the set of defects. Therefore, we have \begin{cor}\label{cor_2} Two labelings of a planar graph with $S_2$ are equivalent iff they have the same set of defects. \end{cor} \begin{figure}[h!]
\includegraphics[scale=0.2]{fig8.pdf} \caption{\label{figura8} A network with cyclic boundary conditions on the left/right and bottom/top boundaries, and a graph for $d=2$. $\sigma_{1}$, $\sigma_{2}$ -- bad cycles stemming from defects. Bad cycles of red permutations along the green and blue lines -- two cycles characterizing the same equivalence class of the game (the transition between the cycles can be performed through $\tilde{\sigma}_{1}$ switches applied to the vertices denoted by 'X'). A bad cycle along the purple line characterizes a different equivalence class.} \end{figure} \subsection{Graphs with periodic boundary conditions for $d=2$}\label{periodic2} If we admit periodic boundary conditions on the grid, i.e. place the grid on some surface other than the plane, then bad cycles can arise due to lines of $(01)$ permutations in the dual lattice that give rise to no defects (see Fig. \ref{figura8}). Fig. \ref{figura9} shows how the periodicity of boundary conditions creates an opportunity for non-local paths of errors to arise -- they constitute paths from the center to the exterior of the continuously deformed dual graph. The effect of taking into account periodic boundary conditions is depicted in Fig. \ref{figura10}. If there are no boundary conditions, the edges of the graph can be divided into a spanning tree (brown) and a remaining set of black edges. Corollary \ref{cor_2} states that two games on a plane for $d=2$ are equivalent iff they have the same set of defects. If the graph is driven to a canonical representation with respect to the spanning tree depicted, defects are determined only by the black edges. However, introducing a periodic boundary condition (in one or more possible directions) implies the addition of a column/row of additional edges. Naturally, labelings of these new edges will potentially give rise to new defects.
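For $d=2$ the defect class of a cell is simply the parity of the $(01)$ labels on its four boundary edges, so the interplay of defects and defect-free lines of permutations can be checked numerically. Below is a minimal sketch (the data layout and helper name are ours, not from the paper): edge labels are bits, $0$ for $I$ and $1$ for $(01)$, on a $k\times k$ grid with periodic boundary conditions.

```python
# d = 2 labeling of a k x k grid with periodic boundary conditions.
# h[r][c] labels the edge (r,c)-(r,(c+1)%k); v[r][c] labels the edge
# (r,c)-((r+1)%k,c); a label is 0 for I and 1 for the transposition (01).
def cell_classes(h, v, k):
    """Class of cell (r,c) = XOR (composition in S_2) of its 4 edges."""
    return [[h[r][c] ^ h[(r + 1) % k][c] ^ v[r][c] ^ v[r][(c + 1) % k]
             for c in range(k)] for r in range(k)]

k = 4
h = [[0] * k for _ in range(k)]
v = [[0] * k for _ in range(k)]

# A single (01) edge creates a pair of neighboring defects:
v[0][0] = 1
classes = cell_classes(h, v, k)
defects = [(r, c) for r in range(k) for c in range(k) if classes[r][c]]

# A full cut of (01) edges winding around the torus creates no defects,
# although the labeling is not equivalent to the identity labeling:
for c in range(k):
    v[0][c] = 1
```

The defect pair marks the endpoints of an error path of length one, while the defect-free cut is exactly one of the non-contractible lines of $(01)$ permutations discussed above.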
Furthermore, as these new edges bypass the spanning tree of the graph, they can all be labeled by $(01)$ permutations without creating a single defect. Such chains of permutations cannot be removed or contracted to a point by switches. We will refer to them as \textit{loops}. \begin{figure}[h!] \includegraphics[scale=0.18]{Rys1bis2.pdf} \caption{\label{figura9} Periodicity of boundary conditions (a) enables a continuous transformation of a game to the forms (b), (d). Both the red and blue sets of $(01)$ permutations lead to no defects. While the blue permutations can be erased by applying switches on the vertices labeled with $X$ (b), which contracts the associated path in the dual lattice to a point (c), the red permutations remain, joining the center of the dual lattice with its exterior (e) -- applying a selected choice of switches (d) gives rise to an orange line (e) with the same end points as the initial, red line. } \end{figure} \begin{figure}[h!] \includegraphics[scale=0.35]{Rys3.pdf} \caption{\label{figura10} Due to boundary conditions applied along the directions marked by yellow/green arrows, the graph structure can be divided into the spanning tree $T$ (brown), black edges within the enlarged tree $T^L$ and yellow/green edges joining the boundaries and bypassing the spanning tree. Cells with black and brown edges contribute to the equivalence class of the graph through the defects they carry. Defects can be obtained on cells with yellow/green edges as well, but furthermore the yellow/green edges can be collectively labeled by $(01)$ permutations without creating a defect, despite the fact that such a labeling is not equivalent to labeling all edges with $I$. } \end{figure} \begin{figure}[h!] \includegraphics[scale=0.3]{Rys10b.pdf} \caption{\label{figura10b} Removal of a path connecting two opposite boundaries without periodicity conditions by switches.} \end{figure} Pairs of defects are also typically connected by similar chains of $(01)$ permutations.
We can think of those chains of permutations as paths and cycles in the dual lattice. They can be seen as an analogue of the paths describing logical operators in a Kitaev code on a torus (see Fig. \ref{figura10b}). However, in the case of the planar grid there is a notable difference between the game and the code defined by the same labeled graph. In Kitaev codes it is not possible to remove a path of errors on the sharp boundary of the code (because removal can only be performed by applying stabilizer operators, and no stabilizer operator acts on a single qubit, as in this setting this would violate the requirement that the stabilizers commute with each other). On the other hand, on the graph associated with a $d=2$ non-local game it is possible to remove such a path. The possibility of destroying a path connecting two boundaries without periodicity conditions is already a consequence of the existence of a spanning tree that joins all the boundaries without periodic conditions (cf. Fig. \ref{figura10}). Let us focus on equivalence conditions for two non-planar games. Each of the games can be characterized by a set of $(01)$ permutations labeling the edges. These correspond to a set of paths in the dual lattice and we will call them \textit{error paths}, by analogy with quantum error correction codes. We will prove the following: \begin{theorem}\label{Th8} Two labelings $K_1, K_2$ of a graph are equivalent iff the labeling $K$ defined as $K(e)=K_1(e)K_2(e)^{-1}$ for all edges is equivalent to $K_{I}$ defined as $K_{I}(e)=I$ for all $e$. \end{theorem} \begin{proof} Let $K(e)=K_1(e)K_2(e)^{-1}$ be equivalent to $K_{I}.$ By the definition of equivalent labelings, one can transform $K$ into $K_{I}=KK^{-1}$ through switches. If the same switches are applied to $K_1$, it is transformed into $K_1K^{-1}=K_2$. Now assume that the labelings are equivalent. Then $K_2$ can be obtained from $K_1$ through switches.
Applying the same set of switches to the labeling $K$ transforms it into $K_{I}.$ \end{proof} An easy way to check that the labeling $K$ is equivalent to $K_{I}$ is the following. \begin{obs} Let $P$ be the set of error paths of a labeling $K$ with $S_2$. The labeling is equivalent to $K_{I}$ iff the number of loops in each homology class is even and all other error paths are contractible to a point. \end{obs} \begin{proof} A pair of loops within the same homology class can be annihilated through switches by moving one onto the other. Obviously, an error path is considered contractible if and only if it can be removed by switches. On the other hand, an unpaired loop cannot be removed by switches and thus a labeling with an odd number of loops in some homology class is not equivalent to $K_{I}.$ \end{proof} From the above we see that a necessary condition for two games to be equivalent is that they have the same set of defects. Otherwise, error paths in $P$ will be uncontractible to a point. The remaining mechanism for generating uncontractible paths in $P$ is solely associated with the topology of the surface on which the graph is defined. Namely, a lattice (e.g. on a cylinder or a torus) can allow for the construction of paths that cannot be contracted to a point by a continuous transformation -- a set of paths with this property, which can be transformed into each other, is called a \textit{homology class}. Therefore, games with error paths belonging to different homology classes are not equivalent. From the definition of equivalence of labelings we see that, as local transformations change neither the homology classes of the paths nor the positions of defects, these have to be the same for games belonging to the same equivalence class. Therefore \begin{prop} Two labelings $K_1$, $K_2$ of a grid are equivalent iff they have the same set of defects and the labeling $K=K_1K_2^{-1}$ contains no loops which cannot be annihilated by switches.
\end{prop} The role of the topological properties of the surface on which the graph is defined in the equivalence between two games is based on the fact that switches which transform equivalent graphs into each other can deform error paths only in a continuous manner. Therefore, two equivalent games cannot have different sets of loops within each of the homology classes admitted by the geometry. This property does not depend on $d$, and therefore will be crucial for the classification of games with a higher number of outcomes. \section{Classification of non-locality games for $d=3$}\label{3} For games with $d=3$ possible measurement outcomes (and the same number of different correlations that can be demanded for outcomes of a pair of measurements), the group $S_{3}$ can be generated by $L_{3}$, where $L_d=\{\tilde{\pi}_i:\tilde{\pi}_i(x)=i-x \mod d\}$. A composition of an even number of permutations from $L_d$ forms a permutation belonging to a subset $L_d'=\{\tilde{\sigma}_i:\tilde{\sigma}_i(x)=i+x \mod d\}$ of $S_{d}$. Also, $S_3=L_3\cup L_3'$. Note that for $d=2$, this structure degenerates, as $L_{2}=L_{2}'$. Before proceeding to the classification of these games, we prove some useful statements about graphs labeled by permutations from $L_{d}$ and $L_{d}'$. \begin{obs} \label{Oequal} For a bipartite graph labeled with permutations from $L_{d}$ ($L_{d}'$), there exists an equivalent labeling with permutations from $L_{d}'$ ($L_{d}$). \end{obs} \begin{proof} Because the graph is bipartite, we can divide its vertices into two disjoint sets. Applying switches with permutations from $L_{d}$ to the vertices from one of these sets will transform the permutations from $L_{d}$ ($L_{d}'$) that label the edges into permutations from $L_{d}'$ ($L_{d}$). \end{proof} For games that can be described by labeled graphs on square lattices, we consider situations where all edges are labeled with permutations from $L_{d}$, or equivalently from $L_{d}'$.
This implies that the defect class of each cell is a permutation from $L_{d}'$. Nevertheless, many of the results can be easily generalized to different types of lattices or to all planar graphs. Furthermore, Observation \ref{Oequal} implies that the results can also be (indirectly) applied to graphs labeled with $L_d.$ In the case of $d>2$, because the group $L_{d}'$ is larger than $L_{2}'$, each cell can have one of $d$ different defect classes. It follows from the proof of Proposition \ref{Tdodawanie_n2} that if two graphs labeled by $L_{d}'$ have the same sets of defects of each class, then they have the same sets of cycles of each class. Lemmas \ref{L1} and \ref{L2} from the Appendix, applied to $d=3$, are useful for deriving the theorem for equivalence of games with correlations defined by permutations from $L_{3}$ or $L_{3}'$. \begin{theorem} \label{Tn3} Two labelings with $L_{3}'$ are equivalent iff either the defect class of each cell is the same for both labelings or the set of defects of class $x$ for one labeling is the set of defects of class $-x$ for the other labeling, for every $x\in\{0,1,2\}$. \end{theorem} \begin{proof} ($\Rightarrow$ part). Two labelings are equivalent if we can obtain one from the other through switches. Select a cell of defect class $x$ and let $v_0$ denote the starting vertex of the characteristic permutation. We apply a switch $s(v,\pi)$: \begin{enumerate} \item If $v\neq v_0$, then the defect class of the cell remains unchanged, due to Lemma \ref{L1} and Lemma \ref{L2}. \item If $v = v_0$ and $\pi=\tilde{\sigma}_{i}\in L_3',$ then the class of the cell remains unchanged, due to Lemma \ref{L1}. \item If $v = v_0$ and $\pi=\tilde{\pi}_{i}\in L_3,$ then the defect class of the cell after the switch is changed into $-x$, due to Lemma \ref{L2}.
\end{enumerate} Since $S_3=L_3\cup L_3',$ a switch can only change the defect class of a cell from $x$ to $-x.$ If a switch by $\pi\in L_3$ is applied to some vertex $v$ of $G$, we obtain a labeling in which some edges are labeled with $L_3$. In order to return to $L_3'$ we must switch all vertices adjacent to $v$ by some permutation $\pi\in L_3.$ This shows that if we switch a vertex by some $\pi\in L_3,$ we must also apply such switches to all vertices of the graph. Hence, the defect class of every cell is changed from $x$ to $-x.$ Otherwise we only switch by permutations $\sigma\in L_3'$ and the classes of all cells remain unchanged. \end{proof} \begin{proof} ($\Leftarrow$ part) Same as in the proof of Theorem \ref{T:any_n}, which can be found in the Appendix. \end{proof} \section{Classification of non-locality games for $d\geq 4$} \label{sub:4+} In Theorem \ref{T:any_n} we present a generalization of Theorem \ref{Tn3} to graphs with an arbitrary number of outcomes, labeled with $L_{d}'$. \begin{theorem} \label{T:any_n} Two labelings $K,L$ of a connected planar graph with permutations from $L_{d}'$ are equivalent iff there exists a permutation $\pi\in S_d$ such that for every cell $c$ of the graph we have $cl(c,L)=\pi cl(c,K)\pi^{-1}$, where $cl(c,L)$ is the defect class of a cell $c$ in labeling $L$. \end{theorem} The proof of this theorem can be found in the Appendix. From the above we see that any transformation which connects two equivalent labelings with $L_{d}'$ has to be composed of switches $s(v,\sigma_{1}(v)\pi\sigma_{2}(v))$, such that $\sigma_{1}, \sigma_{2}\in L_{d}'$ and may depend on the vertex $v$, while $\pi$ is the same for all vertices and in general does not have to belong to $L_{d}'$. In Section \ref{periodic2} we provided conditions for equivalence of labelings with $L_{2}'$ on different surfaces. Here we generalise these results to arbitrary $d$.
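The class flip $x\rightarrow -x$ above is just conjugation: for any $\tilde{\pi}_i\in L_d$ and $\tilde{\sigma}_x\in L_d'$ one has $\tilde{\pi}_i\tilde{\sigma}_x\tilde{\pi}_i^{-1}=\tilde{\sigma}_{-x}$. A small sketch (helper names are ours, with permutations stored as tuples) verifies this, together with closure of $L_d'$, for $d=3$:

```python
# Permutations of {0,...,d-1} stored as tuples: p sends x to p[x].
# sigma(i) realizes x -> x+i mod d (the set L_d'), and
# pi(i) realizes x -> i-x mod d (the set L_d).
d = 3

def sigma(i):
    return tuple((x + i) % d for x in range(d))

def pi(i):
    return tuple((i - x) % d for x in range(d))

def compose(p, q):
    """(p o q)(x) = p(q(x))"""
    return tuple(p[q[x]] for x in range(d))

def inverse(p):
    inv = [0] * d
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

# L_d' is closed under composition, and conjugating sigma_x by any
# element of L_d flips its class: pi_i sigma_x pi_i^{-1} = sigma_{-x}.
for i in range(d):
    for x in range(d):
        assert compose(sigma(i), sigma(x)) == sigma((i + x) % d)
        conj = compose(compose(pi(i), sigma(x)), inverse(pi(i)))
        assert conj == sigma((-x) % d)
```

Since every element of $L_d$ is an involution, the same check also confirms that switching by $\tilde{\pi}_i$ at every vertex maps all defect classes $x$ to $-x$ simultaneously, as used in the proof above.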
A switch $s(v,\tilde{\sigma}_{i}\in L_{d}')$ on an arbitrary vertex of the network continuously deforms the labelings, in the sense that, in a convention in which permutations on the adjacent edges point from the vertex to the exterior, every permutation $\tilde{\sigma}_{x}$ on an edge is shifted: $\tilde{\sigma}_{x}\rightarrow\tilde{\sigma}_{i}\tilde{\sigma}_{x}=\tilde{\sigma}_{i+x}$. Let us define an operation of adding labelings $L$, $K$ from $L_{d}'$ of a given graph $G=(V,E)$. The sum $M=K+L$ is a labeling in which $M(e)=K(e)L(e)$ for every $e\in E.$ The difference $K-L$ will denote the sum $K+L^{-1}.$ \begin{lemma} \label{addswitch} Let $K_s:E\mapsto L_d'$ be a labeling obtained from $K_{I}$ through a single switch $s(v,\pi).$ Then the labeling $M=K+K_s$ is equivalent to $K:E\mapsto L_d'$ and adding $K_s$ to a labeling has the same effect as performing the switch $s(v,\pi).$ \end{lemma} \begin{proof} The labeling $K_s$ assigns $I$ to all edges not incident with the vertex $v$, $\pi$ to all edges $vu$ coming out of $v$ and $\pi^{-1}$ to all edges $uv$ going into $v$. It follows that \begin{equation} M(e)= \left\{\begin{array}{lcr} K(e)\pi & \textrm{if} & e = vu\\ K(e)\pi^{-1} & \textrm{if} & e = uv\\ K(e) & & \textrm{otherwise.} \end{array}\right. \end{equation} Since the permutations in $L_d'$ commute, $K(e)\pi^{-1} = \pi^{-1}K(e)$ and thus $M$ is the labeling obtained from $K$ through the switch $s(v,\pi).$ \end{proof} From this, we have the following. \begin{cor} \label{addswitches} Let $K_1,K_2:E\mapsto L_d'$ be two labelings of a graph. $K_2$ can be obtained from $K_{I}$ through switching by permutations from $L_d'$ iff $K_1+K_2$ is equivalent to $K_1.$ \end{cor} Therefore, we arrive at the following equivalence criterion for all planar games with permutations from $L_{d}'$ for arbitrary $d$: \begin{theorem} \label{Th15} Let $K_1$ and $K_2$ be two labelings of a graph with $L_d'$.
The labelings are equivalent iff there exists a labeling $M$ and a permutation $\pi$ such that $K_1=\pi(K_2)+M,$ where $M$ is equivalent to $K_{I}$ and $\pi(K)$ denotes the labeling obtained from $K$ by applying the switches $s(v,\pi)$ to all vertices $v\in V.$ \end{theorem} \begin{proof} $(\Rightarrow)$ Let $K_1$ and $K_2$ be two equivalent labelings. It follows from Corollary \ref{cor:any_n} that there exists a permutation $\pi$ such that $K_1$ and $\pi(K_2)$ have the same set of defects of each class. We can now transform $\pi(K_2)$ into $K_1$ using only switches from $L_d'.$ Thus, by Corollary \ref{addswitches}, we have $K_1=\pi(K_2)+M$ for some labeling $M$ equivalent to $K_{I}.$ $(\Leftarrow)$ Now assume that $K_1=\pi(K_2)+M,$ where $M$ is a labeling obtained from $K_{I}$ through switches with permutations from $L_d'.$ This means that $\pi(K_2)$ can be transformed into $K_1$ through the same set of switches. Thus, $\pi(K_2)$ is equivalent to $K_1.$ Since $\pi(K_2)$ is equivalent to $K_2,$ it means that $K_1$ and $K_2$ are equivalent. \end{proof} From the above we can obtain yet another equivalence condition, analogous to the one given in Theorem \ref{Th8} for labelings with $d=2$. \begin{cor} \label{minus} Two labelings $K_1, K_2:E(G)\mapsto L_d'$ are equivalent iff there exists a permutation $\pi\in S_d$ such that the labeling $K=K_1-\pi(K_2)$ is equivalent to $K_{I}.$ \end{cor} In order to characterize equivalence classes of labelings with $L_{d}'$ permutations on a surface of genus $1$ or more, we make use of topological properties of the graph. First we define the \textit{enlarged spanning tree} $T^{L}$ as follows: \begin{dfn} For a graph $G$, let us enlarge a set of edges $T^{0}$ forming a spanning tree $T(G)$ by adding an edge which belongs to a cell such that this edge is the only member of the cell not belonging to $T$. Update the set $T^{0}$ with this enlarged set.
The above procedure is applied repeatedly until there is no cell that contains only one edge not belonging to $T^{0}$. The resulting set $T^{L}=T^{0}$ will be called the enlarged spanning tree of $T$. \end{dfn} Note that if the graph $G$ is planar, then $T^{L}=G$. On a torus/surface with a higher genus some edges of the graph remain outside of $T^{L}$. \begin{lemma} \label{L:tree+} The complement $G-T^{L}$ of the enlarged spanning tree $T^{L}$ is composed of all edges of the graph that do not belong to $T$ and belong to some nontrivial loops. $G-T^L$ contains exactly one nontrivial loop in each homology class. \end{lemma} As an analogue of the notion of the class of a cell, we define the class of a loop on the complement of $T^{L}$ to be equal to the index of the permutation from the set $\tilde{\sigma}_{0},\tilde{\sigma}_{1},\tilde{\sigma}_{2},\dots,\tilde{\sigma}_{d-1}$ that labels the edges belonging to the loop. \begin{theorem}\label{Prop17} The equivalence class of a game with permutations from $L_{d}'$, defined on a graph with a spanning tree $T$ on a surface with genus $g$, is completely characterized by: \begin{itemize} \item sets of cells of classes $cl=1,2,\dots,d-1$, \item sets of loops on the complement of $T^{L}$ of classes $cl=1,2,\dots,d-1$, \\ \\ where all the above sets are described modulo a chosen permutation $\pi\in S_{d}$, $\pi\notin L_{d}'$, that maps permutations from $L_{d}'$ into $L_{d}'$. Such a permutation maps defect classes and permutations on loops belonging to the complement of $T^{L}$ in the same way: $cl\rightarrow \pi cl \pi^{-1}$, $\tilde{\sigma}_{i}\rightarrow \pi\tilde{\sigma}_{i}\pi^{-1}$. The number of non-equivalent loop configurations is equal to $d^{2g}$.
\end{itemize} \end{theorem} In the proof of the above theorem we will use the fact that the canonical form of an $L_{d}'$ game associated with a given spanning tree $T$ depends on the permutations that belong to $T^{L}$ but not to $T$ (they are fixed, up to a permutation $\pi$ applied to every vertex, by the classes of defects), as well as on the permutations that belong to the complement of $T^{L}$ (they are partially fixed, up to a permutation $\pi$ applied to every vertex, by the classes of defects, but also provide some additional characteristic of a game arising from periodic boundary conditions). \section{Classical values from defects and loops}\label{6} In this section we discuss some methods for calculating the classical value of a game based on the properties of the corresponding labeled graph. On planar graphs, the classical value is given by the error correction algorithm for the Kitaev code. A modification of the algorithm is necessary when periodic boundary conditions are enforced. Since the sets of defects and loops of each class are enough to define a game, one can use these properties to calculate its classical and quantum values. The values are winning probabilities given by the formula \begin{equation}\label{win} p_{win}=\max_{P}\sum_{x\in A,y\in B}p(x,y)\sum_{a,b\in [0,\dots,d-1]}V(ab|xy)p(ab|xy). \end{equation} Above, $V(ab|xy)$ is the winning condition, taking value $1$ iff $\pi_{xy}(a)=b$, where $\pi_{xy}$ is the permutation between the responses for a pair of questions, fixed by the unique game, $p(ab|xy)$ is the conditional probability of obtaining a pair of answers $(a,b)$ given questions $(x,y)$, and $p(x,y)$ is the distribution of questions. Below, we take it to be uniform. Maximization is performed over families of conditional probabilities. In the classical case, the optimization can be performed over \textit{deterministic} local hidden variable models, i.e.
where $\sum_{b}p(ab|xy)=p(a|x)$ for every $x$ and $y$, and $p(a'|x)=1$ for a selected response $a'$ \cite{Fine1982,Brunner2014} (the same applies to Bob's responses). Therefore, for the games investigated in this paper, classical values can be calculated through a search over all possible classical assignments of values $[0,\dots,d-1]$ to the nodes of the corresponding graph. For an optimal labeling, the classical value is $p_{win,cl}=\frac{|E(G)|-\beta_{C}}{|E(G)|}$, where $|E(G)|$ is the number of pairs of questions (the number of edges of the corresponding graph), and $\beta_{C}$ is the number of contradictions, i.e. the number of winning conditions $\pi_{xy}$ that are not satisfied by the optimal labeling. Here we show that in the case of $d=2$ outputs the algorithm for calculating the classical value can be efficient. The method is similar to those used in error correction and relies on connecting pairs of defects with the shortest possible paths in the dual lattice to minimize the number of contradictions. We also consider the case with $3$ or more outputs and propose an analogous algorithm. In this case defects may have to be gathered into sets of more than two and connected with minimum Steiner trees in the dual lattice instead of shortest paths. A \textit{Steiner tree} for a given set $S$, as defined in \cite{Hwang1992}, is a tree which connects all vertices in $S$. Figure \ref{defectsets} provides examples of such paths and trees. First notice that the labeling $K_{I}:E\mapsto L_d'$ which assigns $I$ to all edges of the graph gives rise to no contradictions. In fact, every labeling with no contradictions is equivalent to $K_{I}$. We can use this fact to prove the following lemma. \begin{lemma} For every labeling $K$ of a graph $G$ there exists an equivalent labeling $K_{o}$ such that $K_o(e)\neq I$ for exactly $\beta_C(G,K)$ edges. No labeling equivalent to $K$ assigns $I$ to more than $\left|E(G)\right|-\beta_C(G,K)$ edges.
\end{lemma} \begin{proof} Notice that a labeling $K$ has no contradictions if and only if it is equivalent to $K_{I}$. It follows that $\beta_C(G,K)=k$ if and only if changing the labels of a certain set $S$ of $k$ edges results in a labeling $K'$ equivalent to $K_{I}$. The labeling $K_{I}$ can be obtained from $K'$ through a specific set of switches. Applying the same set of switches to $K$ results in a labeling $K_o$ which assigns $I$ to an edge $e$ if and only if $e\notin S.$ Obviously, any labeling in which the identity is assigned to more than $\left|E(G)\right|-\beta_C(G,K)$ edges has fewer than $\beta_C(G,K)$ contradictions and thus it is not equivalent to $K$. \end{proof} We will call $K_o$ an \textit{optimal labeling} and many of our methods will involve finding an optimal labeling for a given configuration of defects and loops. The simplest case is a torus (or higher-genus surface) with no defects. In this case for each loop of length $k$ the graph contains a set of $k$ disjoint bad cycles, each containing one edge of the loop. Even if two loops intersect, the cycles associated with those loops are still edge-disjoint. Thus, the contradiction number is the total length of all loops. This does not depend on the number of outputs or the classes of the loops. \subsection{Perfect matching approach for $d=2$} Now we consider a game with $d=2$ outputs and no loops. In this type of game, optimal labelings are defined by sets of paths in the dual lattice $G^*$ such that a) each path connects two defects and b) the total number of edges in the set of paths is as small as possible. Such a set is equivalent to a perfect matching with maximum weight in a weighted graph $G'$ defined as follows. \begin{enumerate} \item $G'$ is a complete graph and $V(G')$ is the set of defects of $(G,K).$ \item The weight of an edge $uv$ is $w(uv)=\mathrm{diam}(G^*)-d(u,v)$, where $d(u,v)$ is the length of a shortest path in the dual lattice connecting the defects $u$ and $v$.
\end{enumerate} In short, the method consists of the following steps: \begin{enumerate} \item Find the distance (i.e. the length of the shortest path) between each pair of defects; \item Choose the pairs so that: a) every defect has a pair and b) the total length of the paths is minimized. \end{enumerate} Finding a maximum weight perfect matching is a well-studied problem and a polynomial time algorithm for solving it can be found in \cite{Ed65}. It is a generalization of a method described in \cite{Kuhn55} and \cite{Mun57}. The method relies on Berge's lemma \cite{Berge57}, which allows us to increase the size of a non-maximum matching using augmenting paths, as well as on a technique known as blossom shrinking. A blossom is defined as an odd cycle in $G'$ with a maximum matching. Shrinking all blossoms in the graph, i.e. replacing them with single vertices whose neighborhood is the same as the neighborhood of the blossom, allows us to apply the algorithm for bipartite graphs to all graphs. \begin{figure}[h!] \includegraphics[scale=0.2]{defectpairs.pdf} \includegraphics[scale=0.2]{defectsets.pdf} \caption{\label{defectsets} a) Pairs of defects connected by paths for $d=2$, b) Sets of defects connected by trees for $d=3$.\\Notice that for $d=2$ all defects are of class $1$ and thus a pair of defects always adds up to $0 \mod 2$. } \end{figure} In the case of labelings on a non-planar grid, where loops are possible, some additional steps are needed to determine the classical value. From the above method we obtain a minimal labeling $M$ with the smallest set of non-identity edges possible for a given configuration of defects. However, it is still possible that $M$ is not equivalent to the original labeling $K$. In this case, we have $K-M = L$, where $L$ is a labeling with no defects, but containing one or more non-trivial loops.
Thus, to find an optimal labeling equivalent to $K$, we must add the necessary loops to $M.$ In order to ensure that the resulting labeling is indeed optimal, we must add a set $S$ of loops in such a way that: \begin{enumerate} \item $S\equiv L$, \item $\left|S-M\right|+\left|M-S\right|$ is minimized. \end{enumerate} This way we obtain a labeling $K_o$, which is equivalent to $K$. We now show that this labeling is indeed optimal. Suppose some labeling $K_o'$ is equivalent to $K$ and has fewer non-identity edges than $K_o.$ Such a labeling is obviously also equivalent to $K_o$ and, as such, differs from $M$ by the same set of loops. Thus, $K_o'$ is a labeling obtained from $M$ by adding a set $S'\equiv S$. But if $S$ minimizes $\left|S-M\right|+\left|M-S\right|$, then $K_o'$ cannot have fewer non-identities than $K_o.$ \subsection{Methods for $d\geq 3$} A generalization of the above method can most likely be used for $d\geq 3.$ In this case the defects are not paired, as they were for $d=2$, but grouped into sets of up to $d$ elements such that the classes of defects in each set add up to $0 \mod d$. Here the steps are: \begin{enumerate} \item Find all minimal sets of defects which add up to $0$ mod $d$; \item For every such set, calculate the minimum number of edges in a tree connecting them in the dual lattice; \item Choose the sets so that: a) every defect is in exactly one set and b) the total length of the trees is minimized. \end{enumerate} Note that the above generalization resembles the generalization of the decoding algorithm for qubit surface codes to qudit systems \cite{Watson2015}. However, the first step of our algorithm is applied globally to the whole structure of the graph, in contrast with grouping defects into local clusters of increasing size, which in \cite{Watson2015} is performed in the spirit of the renormalization-group approach \cite{Bravyi2011}.
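The first of the steps above, finding the minimal zero-sum sets of defects, can be brute-forced on small instances. Below is a sketch (function and variable names are ours, not from the paper): a set of defect classes is minimal if it sums to $0 \bmod d$ while no proper subset does.

```python
# Step 1 of the d >= 3 method: enumerate all minimal index sets of
# defects whose classes sum to 0 mod d.  Brute force over subsets;
# intended only as an illustration for small instances.
from itertools import combinations

def minimal_zero_sum_sets(classes, d):
    """classes[i] in {1,...,d-1} is the defect class of defect i."""
    n = len(classes)
    zero_sum = [s for r in range(2, n + 1)
                for s in combinations(range(n), r)
                if sum(classes[i] for i in s) % d == 0]
    # keep only sets with no proper zero-sum subset
    return [s for s in zero_sum
            if not any(set(t) < set(s) for t in zero_sum)]

# For d = 2 every defect has class 1, so the minimal sets are exactly
# the pairs -- recovering the perfect-matching picture:
pairs = minimal_zero_sum_sets([1, 1, 1, 1], 2)
```

The subsequent steps then select an exact cover by such sets minimizing the total Steiner-tree length in the dual lattice, which is where the hardness discussed below enters.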
The final step of the algorithm is equivalent to finding a maximum weight perfect matching in the weighted hypergraph $G'$ defined as follows: \begin{enumerate} \item $V(G')$ is the set of all defects in $(G,K).$ \item A set $S\subset V(G')$ is an edge iff it is a minimal set such that $\sum_{x\in S}cl(x)=0$ (mod $d$). \item The weight of an edge is $w(S)=T(G^*)-St(S)$, where $St(S)$ is the minimum length of a Steiner tree for the set $S$ and $T(G^*)$ is the number of edges in the spanning tree of the dual lattice. \end{enumerate} By a perfect matching in a hypergraph we mean a set $M$ of edges such that each vertex belongs to exactly one edge in $M$. One problem with such a generalization is properly defining the hypergraph equivalent of a blossom. The problem of Steiner trees is also NP-hard in general, which may increase the complexity of the algorithm for large numbers of outputs. Nevertheless, an interesting implication is the fact that the hypergraph $G'$ is guaranteed to have a perfect matching. \subsection{Spanning tree approach} Another approach, which may be better for games with a large number of outputs, relies on the following result. \begin{lemma} Every optimal labeling $K_o$ of a grid $G$ is a canonical representation for some spanning tree of $G$. \end{lemma} \begin{proof} Let $K_o$ be an optimal labeling of a grid $G$. First, remove any edges from $G$ which define a loop in $K_o.$ Next, remove an arbitrary set of edges which defines a loop in each direction in which there is no loop in $K_o$. The remaining graph $H$ is a planar grid. The set $S$ of edges $e\in E(H)$ such that $K_o(e)\neq I$ corresponds to a certain set $S^*$ of edges in the dual lattice $H^*$ of $H$. The set $S^*$ is a subset of the edge set of a spanning tree $T$ of $H^*$. If we remove all edges in $H$ corresponding to edges of $T$, what remains is a spanning tree $T_1$ of $H$ such that $K_o$ assigns $I$ to all of its edges. 
Since any spanning tree of $H$ is also a spanning tree of $G$, it follows that $K_o$ is the canonical representation with respect to the spanning tree $T_1$. \end{proof} It is easy to see that a game has only one canonical representation for any given spanning tree. This representation is easy to obtain from the tree itself using the tree enlarging procedure from Section \ref{sub:4+}. Unfortunately, not every canonical representation is an optimal labeling for a given set of defects and loops. Therefore, to find the optimal labeling we need to compare canonical representations with respect to different spanning trees. The number of spanning trees grows exponentially with the number of vertices in the grid, so this brute-force method has exponential complexity. However, unlike the algorithm based on matchings, its complexity does not depend on the number of outputs. \section{Examples}\label{ex} Below we show how local and non-local conditions, associated with the topology of a given game, can influence the maximal probabilities of winning the games with classical and quantum resources. Classical values can be calculated exactly using the methods described in the previous section. In the quantum case, the optimization in (\ref{win}) is performed over all states $|\Psi\rangle$ in a Hilbert space and sets of orthogonal projective measurements on Alice's and Bob's subsystems such that $p(ab|xy)=\langle\Psi |M_{x}^{a}M_{y}^{b}|\Psi\rangle$, and $[M_{x}^{a},M_{y}^{b}]=0$. Because the dimension of the Hilbert space is not fixed, every POVM measurement can be described in the above form. The task of finding the optimal protocol saturating the quantum value is equivalent to finding a Bell operator $S=\sum_{a,b,x,y}V(ab|xy)M_{x}^{a}M_{y}^{b}$ with maximal norm. An upper bound on the quantum value can be found by solving a semidefinite problem resulting from relaxing the conditions on the set of quantum probabilities satisfying the restrictions of a particular game.
An infinite series of these relaxations, in the form of the so-called NPA hierarchy \cite{Nav2008} of semidefinite programs, was shown to give values converging to the quantum value as defined above. Here, we calculate the upper bound $Q^{\uparrow}$ on quantum values by solving the first-level problem of the hierarchy, i.e. by optimizing (\ref{win}) under the restriction that the matrix $\Gamma$ with entries $\Gamma_{ij}=\langle\Psi|O_{i}^{\dagger}O_{j}|\Psi\rangle$ is positive semidefinite, where the set $O=\{Id\}\cup\{M_{x}^{a}\}_{a,x}\cup\{M_{y}^{b}\}_{b,y}$ is composed of single orthogonal projective operators (and the identity). A lower bound $Q^{\downarrow}$ is calculated with the so-called see-saw algorithm \cite{Werner2001}. The algorithm, for a given dimension of the Hilbert space, is based on a sequence of semidefinite programs that look for an optimal Bell operator $S$, exploiting the linearity of $S$ with respect to the measurements of Alice when the measurements of Bob are fixed (and vice versa). It starts from randomly chosen measurement operators for Alice and Bob and an initial state shared between them, and then iterates three steps: i) for given measurement operators, it finds a state that leads to the maximal winning probability; ii) it optimizes the measurement operators of Alice for the given quantum state and measurement operators of Bob; iii) it optimizes the measurement operators of Bob for the given quantum state and measurement operators of Alice. At each step, an SDP program is invoked to solve the optimization problem. For a given initial random configuration, this three-step process is repeated 20 times, and the lower bound on the quantum value presented here is calculated as the maximum over 20 independent runs of the program. While it is not known whether, for a fixed dimension of the Hilbert space, the see-saw algorithm converges to the global maximum, in the qubit case we obtain exact quantum values of the games, as the upper and lower bounds coincide.
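Step i) of this loop, finding the best state for fixed measurements, amounts to computing the principal eigenvector of the (Hermitian, finite-dimensional) Bell operator $S$. A minimal pure-Python sketch of that single step, using power iteration on an illustrative stand-in symmetric matrix (a real implementation solves an SDP at each step, as described above):

```python
import math

def principal_eigenpair(S, iters=200):
    """Power iteration: approximate the largest eigenvalue of a real
    symmetric matrix S and its eigenvector.  In the see-saw, with S a
    Bell operator for fixed measurements, the eigenvector is the state
    |psi> maximizing <psi|S|psi>."""
    n = len(S)
    v = [1.0] + [0.0] * (n - 1)  # arbitrary starting vector
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * S[i][j] * v[j] for i in range(n) for j in range(n))
    return lam, v

# stand-in "Bell operator" (not from the games above): eigenvalues 3 and 1
lam, psi = principal_eigenpair([[2.0, 1.0], [1.0, 2.0]])
assert abs(lam - 3.0) < 1e-9
```

Steps ii) and iii) have no such closed form and genuinely require an SDP solver.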
We found that the analytical upper bound on the quantum value of a $d$-outcome XOR game \cite{Ramanathan2016} coincides with the one calculated by an SDP in the first level of the NPA hierarchy (for games with periodic boundary conditions), or exceeds it (for games without periodic boundary conditions). Therefore, its values are not reported. We start with the problem of boundary conditions for an XOR game ($n=2$). In Fig. \ref{fig:grid} the grids a) and b) without boundary conditions are equivalent. Since a) is equivalent to a grid with the identity assigned to all edges, it contains no contradictions. Thus, both the classical and the quantum winning probabilities for this graph are equal to $1$. \begin{figure}[h!] \includegraphics[scale=0.30]{example_grid2.pdf} \caption{A $4\times 4$ grid with and without periodic boundary conditions. Blue edges are associated with the $I$ permutation, while red edges are associated with the permutation $(01)$. No defects are present. In (d), $k+1$ homologically non-trivial bad cycles cross the red edges.} \label{fig:grid} \end{figure} On the torus a loop can occur which cannot be removed by switches. Hence the graphs in Fig. \ref{fig:grid} c) and d) are not equivalent. In the case of a $k\times k$ torus with one loop and no defects, $\beta_C=k$, regardless of the number of outputs or the class of the loop. This is because the graph contains a set of $k$ disjoint chordless bad cycles, one for each edge of the loop. These cycles behave like defects and cannot be removed by switches. Since the cycles are disjoint, one edge needs to be changed in order to make each cycle good. Hence, the classical winning probability for this game is $p=\frac{\left|E\right|-\beta_C}{\left|E\right|}=\frac{2k^2-k}{2k^2}=\frac{2k-1}{2k}$. \begin{figure}[h!] \includegraphics[scale=0.4]{example_lines.pdf} \caption{Two non-locality games of different sizes with periodic boundary conditions, with permutations $\sigma_{x}$ on red edges, $\sigma_{y}$ on black edges, and Identities on blue edges.
Classical and quantum values are given in Table \ref{Tabel1}. Nodes corresponding to different questions $A_{i}$ and $B_{j}$, $i,j=1,\dots,8$, are marked on the smaller graph.} \label{fig:lines} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.30]{example_bipartite.pdf} \caption{The grids from Fig. \ref{fig:grid} depicted as bipartite graphs.} \label{fig:bipartite} \end{figure} In general, if we consider a torus with two possible loops of permutations $\sigma_{x}, \sigma_{y}\neq Id$ (Fig. \ref{fig:lines}), we obtain games for which the classical values do not depend on the particular type of non-trivial permutation, but solely on its presence (see Table \ref{Tabel1}). For such games, the upper quantum bound calculated from the first level of the NPA hierarchy, which is lower than 1 due to the periodic boundary conditions, coincides with the analytic quantum bound introduced in \cite{Ramanathan2016}. However, for non-periodic boundaries the latter bound exceeds 1. Classical values, as well as both bounds on quantum values, show an additive behavior: the introduction of a new path connecting the boundaries leads to their decrease by a value that does not depend on the presence of other paths of permutations. For example, if by $F(n,x,y)$ we denote any of $C(n,x,y)$, $Q^{\uparrow}(n,x,y)$, $Q^{\downarrow}(n,x,y)$, i.e. the classical value of a game or one of its quantum bounds as specified in Table \ref{Tabel1}, we have $1-F(n,1,1)=\left(1-F(n,1,0)\right)+\left(1-F(n,0,1)\right)$, which does not depend on the size of the lattice.
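For the classical values, this additivity can be checked directly against the $n=2$ entries of Table \ref{Tabel1}. A quick sanity check (the numbers are copied from the table; the helper names are ours):

```python
# Classical values C(n=2, x, y) copied from Table 1,
# for the type a) and type b) games of Fig. fig:lines.
C_a = {(1, 0): 0.875, (0, 1): 0.875, (1, 1): 0.75}
C_b = {(1, 0): 0.875, (0, 1): 0.9375, (1, 1): 0.8125}

def deficit(F, x, y):
    """1 - F(x, y): how far the value falls below a sure win."""
    return 1.0 - F[(x, y)]

for F in (C_a, C_b):
    # a loop in each direction lowers the value by independent amounts
    assert abs(deficit(F, 1, 1) - (deficit(F, 1, 0) + deficit(F, 0, 1))) < 1e-9
```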
\begin{widetext} \begin{table} \small \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{3}{*}{n} & \multirow{3}{*}{\rotatebox{90}{type}} & \multicolumn{8}{c|}{x,y} \\ \cline{3-10} & & \multicolumn{2}{c|}{1,0} &\multicolumn{2}{c|}{0,1}& \multicolumn{2}{c|}{1,1}& \multicolumn{2}{c|}{1,2}\\ \cline{3-10} & & $C$ & $Q$ &$C$ & $Q$ &$C$ & $Q$ &$C$ & $Q$ \\\cline{1-10} \multirow{4}{*}{2} & \multirow{2}{*}{a)} & \multirow{2}{*}{0.875} & 0.926666/ & \multirow{2}{*}{0.875} & 0.926666/ & \multirow{2}{*}{0.75} & 0.853553/& - & - \\ & & & 0.926666 & & 0.926666 & & 0.853553 & - & - \\ \cline{2-10} &\multirow{2}{*}{b)} & \multirow{2}{*}{0.875} & 0.926666/ & \multirow{2}{*}{0.9375} & 0.980970/ & \multirow{2}{*}{0.8125} & 0.907747/& - & - \\ & & & 0.926666 & & 0.980970 & & 0.907747 & - & - \\ \cline{1-10} \multirow{4}{*}{3} &\multirow{2}{*}{a)} & \multirow{2}{*}{0.875} & 0.915578/ & \multirow{2}{*}{0.875} & 0.915578/ & \multirow{2}{*}{0.75} & 0.831812/& \multirow{2}{*}{0.75} & 0.833062 / \\ & & & 0.955342 & & 0.955342 & & 0.910684 & & 0.910684 \\ \cline{2-10} &\multirow{2}{*}{b)} & \multirow{2}{*}{0.875} & 0.915357/ & \multirow{2}{*}{0.9375} & 0.977439 / & \multirow{2}{*}{0.8125} & 0.893144 /& \multirow{2}{*}{0.8125} & 0.891906 / \\ & & & 0.955342 & & 0.988642 & & 0.943984 & & 0.943984 \\ \cline{1-10} \hline \multirow{3}{*}{n} & \multirow{3}{*}{\rotatebox{90}{type}} & \multicolumn{8}{c|}{x,y} \\ \cline{3-10} & & \multicolumn{2}{c|}{2,1} &\multicolumn{2}{c|}{0,2}& \multicolumn{2}{c|}{2,0}& \multicolumn{2}{c|}{2,2}\\ \cline{3-10} & & $C$ & $Q$ &$C$ & $Q$ &$C$ & $Q$ &$C$ & $Q$ \\\cline{1-10} \multirow{4}{*}{2} & \multirow{2}{*}{a)} & \multirow{2}{*}{-} & - & \multirow{2}{*}{-} & - & \multirow{2}{*}{-} & -& - & - \\ & & & -& & - & & -& - & - \\ \cline{2-10} &\multirow{2}{*}{b)} & \multirow{2}{*}{-} & - & \multirow{2}{*}{-} & - & \multirow{2}{*}{-} &-& - & - \\ & & & -& & -& & - & - & - \\ \cline{1-10} \multirow{4}{*}{3} &\multirow{2}{*}{a)} & \multirow{2}{*}{0.75} & 0.832910 / & 
\multirow{2}{*}{0.875} & 0.915578 / & \multirow{2}{*}{0.875} & 0.915578 /& \multirow{2}{*}{0.75} & 0.833325 / \\ & & & 0.910684 & & 0.955342 & & 0.955342 & & 0.910684 \\ \cline{2-10} &\multirow{2}{*}{b)} & \multirow{2}{*}{0.8125} & 0.892333/ & \multirow{2}{*}{0.9375} & 0.977439 / & \multirow{2}{*}{0.9375} & 0.915520 /& \multirow{2}{*}{0.8125} & 0.891906 / \\ & & & 0.943984 & & 0.988642 & & 0.955342 & & 0.943984 \\ \cline{1-10} \end{tabular} \caption{Classical (\textit{C}) and quantum (\textit{Q}) values of the non-locality games from Fig. \ref{fig:lines}, for given $x,y$ defining the permutations $\sigma_{x}$ and $\sigma_{y}$ forming loops connecting the periodic boundaries. For the quantum case, the upper $Q^{\uparrow}$ and lower $Q^{\downarrow}$ bounds are shown in the format $Q^{\downarrow}/Q^{\uparrow}$; they coincide for $n=2$. Classical and quantum values do not depend on the type of the non-trivial permutation. Increasing the lattice length along the direction of a loop of non-trivial permutations does not affect the classical and quantum values.}\label{Tabel1} \end{table} \end{widetext} \begin{figure}[h!] \includegraphics[scale=0.40]{line_edge.pdf} \caption{Two non-locality games with periodic boundary conditions, with permutations $\sigma_{x}$ on red edges, $\sigma_{y}$ on black edges, $\sigma_{z}$ on green edges, and Identities on blue edges. Quantum values are given in Tables \ref{Tabel2} and \ref{Tabel3}.} \label{fig:line_edge} \end{figure} On the other hand, permutations that do not form closed loops influence quantum values in a non-additive manner (Fig. \ref{fig:line_edge}a). While both quantum bounds for $n=2$ coincide, their properties depend on the positions of the respective permutations: for $n=2$ we have $1-Q(n=2,x=1,\textbf{y=0, z=1})=\left(1-Q(n=2,x=1,\textbf{y=0, z=0})\right)+\left(1-Q(n=2,x=0,\textbf{y=0, z=1})\right)$, while $1-Q(n=2,x=1,\textbf{y=1, z=0})\neq\left(1-Q(n=2,x=1,\textbf{y=0, z=0})\right)+\left(1-Q(n=2,x=0,\textbf{y=1, z=0})\right)$ (see Table \ref{Tabel2}).
Furthermore, we also see that $Q^{\downarrow}$ does not distinguish between the $n=2$ and $n=3$ games in settings where only one permutation is present. Classical values for the game depend only on the number of non-trivial permutations present, which equals $\beta_{C}$, so we have $C=\frac{32-\beta_{C}}{32}=0.90625, 0.9375, 0.96875$ for $3,2,1$ permutations, respectively. This indicates that going to a higher dimension may open a gap between the classical and quantum values, as is visible for the cases with a single permutation present. Finally, we show that the non-additive behavior of $Q^{\uparrow}$ is present for games with periodic boundary conditions even when a loop of permutations is added to a game already hosting a non-trivial permutation on one of its edges (see Fig. \ref{fig:line_edge}b). Table \ref{Tabel3} shows the bounds on quantum values, while the classical values are independent of $n$ and equal to $0.96875, 0.875, 0.84375, 0.8125$ for a single permutation, a loop connecting the boundaries, a loop with a single permutation, and a loop with 2 permutations present, respectively. Not only do we have $1-Q^{\uparrow}(n,x=1,y=1,z=0)\neq\left(1-Q^{\uparrow}(n,x=1,y=0,z=0)\right)+\left(1-Q^{\uparrow}(n,x=0,y=1,z=0)\right)$, but a closed loop of permutations completely overshadows the presence of single-edge permutations for $n=2$. It is also visible from the above results that games that are equivalent according to Theorem \ref{Prop17} are characterized by the same bounds on quantum values. According to Theorem \ref{Prop17}, all games with permutation types, both local and in the form of non-contractible loops, modified by the same permutation $\pi\in S_{d}$ with $\pi\notin L_{d}'$, are equivalent.
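These classical values follow from simple edge counting via $C=\frac{|E|-\beta_C}{|E|}$; the same formula reproduces the $\frac{2k-1}{2k}$ value quoted earlier for the $k\times k$ torus with a single loop. A quick check (the helper function is our own illustrative wrapper; $|E|=32$ for the games of Fig. \ref{fig:line_edge}, as stated above):

```python
from fractions import Fraction

def classical_value(num_edges, beta_c):
    """C = (|E| - beta_C)/|E|: each of the beta_C irremovable bad
    cycles forces at least one losing question pair (edge)."""
    return Fraction(num_edges - beta_c, num_edges)

# games of Fig. line_edge: |E| = 32, beta_C = number of permutations
assert [float(classical_value(32, b)) for b in (1, 2, 3)] == [0.96875, 0.9375, 0.90625]

# k x k torus with a single loop and no defects: |E| = 2k^2, beta_C = k
k = 4
assert classical_value(2 * k * k, k) == Fraction(2 * k - 1, 2 * k)
```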
\begin{widetext} \begin{table} \small \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline n & \multicolumn{10}{c|}{} \\ \cline{1-11} \multirow{12}{*}{2} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,0,0 & 0,1,0 & 0,0,1 & 2,0,0 & 0,2,0 & 0,0,2 &0,1,1 & 0,2,1 & 0,1,2 & 0,2,2 \\\cline{2-11} & 0.968750/& 0.968750/& 0.968750/& -/& -/& -/& 0.949843/& -& -& -\\ & 0.968750& 0.968750& 0.968750& - & - & -& 0.949843& -& -& -\\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,0 & 2,1,0 & 1,2,0 & 2,2,0 & 1,0,1 & 2,0,1 &1,0,2 & 2,0,2 & 1,1,1 & 2,2,2 \\\cline{2-11} & 0.937842/& -& -& -& 0.937500/& -& -& -& 0.922388/& -\\ & 0.937842& -& -& -& 0.937500& -& -& -& 0.922388& -\\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,2 & 1,2,1 & 2,1,1 & 2,2,1 & 2,1,2 & 1,2,2 & & & & \\\cline{2-11} & -& -& -& -& -& -& & & & \\ & -& -& -& -& -& -& & & & \\\cline{1-11} \multirow{12}{*}{3} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,0,0 & 0,1,0 & 0,0,1 & 2,0,0 & 0,2,0 & 0,0,2 &0,1,1 & 0,2,1 & 0,1,2 & 0,2,2 \\\cline{2-11} & 0.968750/& 0.968750/& 0.968750/& 0.968750/& 0.968750/& 0.968750/& 0.942724/& 0.937500/& 0.937500/& 0.942724/\\ & 0.977378& 0.977378& 0.977378& 0.977378& 0.977378& 0.977378& 0.968772& 0.945183& 0.945183& 0.968772\\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,0 & 2,1,0 & 1,2,0 & 2,2,0 & 1,0,1 & 2,0,1 &1,0,2 & 2,0,2 & 1,1,1 & 2,2,2 \\\cline{2-11} & 0.937500/& 0.937500/& 0.937500/& 0.937500/& 0.937500/& 0.937500/& 0.937500/& 0.937500/& 0.913291/& 0.913290/\\ & 0.958457& 0.951486& 0.951486&0.958457& 0.955486& 0.954088& 0.954088& 0.955486& 0.951016& 0.951016\\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,2 & 1,2,1 & 2,1,1 & 2,2,1 & 2,1,2 & 1,2,2 & & & & \\\cline{2-11} & 0.906250/& 0.906250 /& 0.912307/& 0.906250 /& 0.906250/& 0.912307/& & & & \\ & 0.925112& 0.920754& 0.941841& 0.925112& 0.920754& 0.941841& & & & \\\cline{1-11} \end{tabular} \caption{Quantum values for a) of Fig. \ref{fig:line_edge}. 
For comparison: classical values are $0.96875, 0.9375, 0.90625$ for $1,2,3$ non-trivial permutations present, respectively. }\label{Tabel2} \end{table} \begin{table} \small \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline n & \multicolumn{10}{c|}{} \\ \cline{1-11} \multirow{12}{*}{2} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,0,0 & 0,1,0 & 0,0,1 & 2,0,0 & 0,2,0 & 0,0,2 &0,1,1 & 0,2,1 & 0,1,2 & 0,2,2 \\\cline{2-11} & 0.926777/& 0.968750/& 0.968750/& -& -& -& 0.937842 /& -/& -/& -/\\ & 0.926777& 0.968750& 0.968750& - & - & -& 0.937842& -& -& -\\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,0 & 2,1,0 & 1,2,0 & 2,2,0 & 1,0,1 & 2,0,1 &1,0,2 & 2,0,2 & 1,1,1 & 2,2,2 \\\cline{2-11} & 0.896670/& -& -& -& 0.896670/& -& -& -& 0.866172/& -\\ & 0.896670& -& -& -& 0.896670& -& -& -& 0.866173& -\\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,2 & 1,2,1 & 2,1,1 & 2,2,1 & 2,1,2 & 1,2,2 & & & & \\\cline{2-11} & -& -& -& -& -& -& & & & \\ & -& -& -& -& -& -& & & & \\\cline{1-11} \multirow{12}{*}{3} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,0,0 & 0,1,0 & 0,0,1 & 2,0,0 & 0,2,0 & 0,0,2 &0,1,1 & 0,2,1 & 0,1,2 & 0,2,2 \\\cline{2-11} & 0.915578/& 0.968750 /&0.968750 /&0.915577 /&0.968750 /&0.968750 /&0.937500 /&0.937500 /&0.937500 /&0.937500 /\\ & 0.955342 & 0.977378& 0.977378 & 0.955342& 0.977378& 0.977378& 0.958457& 0.951486& 0.951486&0.958457 \\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,0 & 2,1,0 & 1,2,0 & 2,2,0 & 1,0,1 & 2,0,1 &1,0,2 & 2,0,2 & 1,1,1 & 2,2,2 \\\cline{2-11} & 0.884399/ &0.884399/ &0.884399/ &0.884398 / &0.884399/ & 0.884399/ &0.884399/ &0.884399/ &0.853247 / &0.853225 /\\ & 0.933402& 0.933402& 0.933402& 0.933402&0.933402& 0.933402& 0.933402& 0.933402 &0.914745 &0.914745\\\cline{2-11} & \multicolumn{10}{c|}{x,y,z} \\ \cline{2-11} & 1,1,2 & 1,2,1 & 2,1,1 & 2,2,1 & 2,1,2 & 1,2,2 & & & & \\\cline{2-11} &0.853209 / &0.853210/ &0.853251/ &0.853210/ &0.853209/ &0.853248/ & && & \\ & 0.908438&
0.908438&0.914745&0.908438&0.908438&0.914745&& & &\\\cline{1-11} \end{tabular} \caption{Quantum values for b) of Fig. \ref{fig:line_edge}. For comparison: classical values are $0.96875, 0.875, 0.84375, 0.8125$ for a single permutation, a loop connecting the boundaries, a loop with a single permutation, and a loop with 2 permutations present, respectively.}\label{Tabel3} \end{table} \end{widetext} \section{Discussion and conclusions}\label{con} The similarity between the properties of the presented family of non-locality games and the Kitaev error correction codes is not complete: as was shown for the $d=2$ case and a square lattice on a plane, a chain of anticorrelations in the dual lattice joining two opposite boundaries cannot be removed by local operations (i.e. by multiplying it by generators of the stabilizer group), yet its analogue can be erased in the game setting by local relabelings of the measurement outcomes. Nevertheless, for $d=2$ and in the case of non-periodic boundary conditions of the lattice, the same algorithm can be used for calculating the most probable chain of errors in the quantum error correction code and the classical value of the non-locality game, respectively. Periodic boundary conditions do not impose any significant challenges, neither in the game classification nor in the computation of the classical value of a game, and the problems can be addressed with a modification of the algorithm used for the non-periodic scenario. The non-contractible loops of permutations appear to have independent impacts on quantum values, as the exact quantum values (for $d=2$) and both bounds on the quantum values (for $d=3$) show an additive behavior. This resembles the commutation relation between logical operators acting in the codespace of the stabilizer error correction code, and enables us to conjecture that this is indeed a property of the quantum set of correlations in the described setting.
The non-additive behavior emerges only in the presence of local features of the game -- defects, which, in the associated error correction picture, correspond to excitations of the state of the system out of the codespace, occurring due to local errors. Due to the construction of the games on a square lattice, for a fixed surface the classical value cannot go to zero in the asymptotic limit of the number of questions asked; it is lower bounded by $\frac{|T(G)|}{|E(G)|}$, with $|E(G)|$ being the number of edges of the graph representing the game, and $|T(G)|$ the number of edges belonging to its maximal spanning tree. Nevertheless, by constructing games with relatively short maximal spanning trees, i.e. by utilizing surfaces with high genus and small minimal lengths of curves within each homotopy class, one could try to minimize the classical value, and obtain a high quantum/classical gap in high $d$, provided the quantum values increase with the number of possible outcomes. \section*{Acknowledgments} The authors would like to thank Justyna \L{}odyga and Waldemar K\l{}obus for useful discussions at the early stage of the project, and Anubhav Chaturvedi for helpful remarks and sharing his expertise in semidefinite programming. This work was supported by National Science Centre, Poland, grant OPUS 9, 2015/17/B/ST2/01945. \section{Appendix} In this paper we attempt to classify the labelings of a grid graph with permutations from the set $L_d'=\{\tilde{\sigma_i}:\tilde{\sigma_i}(x)=x+i\mod d\}$. We say that two labeled graphs $(G_1,K_1)$ and $(G_2,K_2)$ are \emph{equivalent} if one can be obtained from the other by means of the following operations. \begin{enumerate} \item Isomorphism between $G_1$ and $G_2,$ \item In a directed graph, replacing an arc $\overrightarrow{uv}$ labeled with $\pi$ with an arc $\overrightarrow{vu}$ labeled with $\pi^{-1},$ \item Switching operations $s(v,\sigma)$ for any vertex $v\in V_{G_1}$ and permutation $\sigma\in S_d,$ defined as follows.
For every vertex $u\in N_{G}(v)$: \begin{enumerate} \item if $\overrightarrow{uv}\in E(G),$ we replace $K(\overrightarrow{uv})=\pi$ with $K'(\overrightarrow{uv})=\sigma\pi$, \item if $\overrightarrow{vu}\in E(G),$ we replace $K(\overrightarrow{vu})=\pi$ with $K'(\overrightarrow{vu})=\pi\sigma^{-1}$. \end{enumerate} \end{enumerate} However, in this paper we mostly consider an equivalence between different labelings of the same graph. We understand two labelings of a graph $G$ to be equivalent iff one can be obtained from the other by means of a series of switches $s(v,\sigma).$ It is clear that if the labelings $K_1, K_2$ of a graph $G$ are equivalent, then the labeled graphs $(G,K_1)$ and $(G,K_2)$ are also equivalent. \begin{dfn} A \textit{cell} of a lattice $G$ is a cycle which does not contain any other cycles within it. In the square lattice specifically, a cell is a cycle with four edges. \\By a \textit{defect} we mean a cell which is a bad cycle, i.e. has no consistent vertex assignment. \end{dfn} \begin{dfn} The \textit{defect class} $cl(C,K)$ of a cycle $C=(v_0,e_1,v_1,...,e_k,v_0)$ in a labeled grid $(G,K)$ is the composition of all permutations assigned to the edges of the cycle, beginning from the starting vertex $v_0$ and moving in a counterclockwise direction, i.e. $cl(C,K)=K(e_1)K(e_2)...K(e_k)$. \end{dfn} Conveniently, the defect class of a larger cycle can be calculated from the classes of the cells within the cycle. In Proposition \ref{Tdodawanie_n2} we show that for $\tilde{\sigma}_{x}\in L_{d}'$, the number $x$ uniquely determines the defect class of the cell (in general, though, the defect class of a cell should be identified with the permutation itself). Similarly, the defect class of any cycle is defined by the composition of all permutations assigned to the edges of the cycle. By $D_x$ we denote the set of all defects of class $x$ in the labeled graph.
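Since all permutations in $L_d'$ commute and compose by addition of their indices mod $d$, defect classes can be computed by signed integer sums along a cycle. A minimal illustrative sketch (the two-cell grid, its edge orientations and labels are our own choice, not taken from the paper's figures), which also checks the addition property proved in Proposition \ref{Tdodawanie_n2}:

```python
D = 3  # number of outputs; a label i stands for sigma_i(x) = x + i mod D

# two adjacent unit squares a-b-e-d and b-c-f-e sharing the arc b->e;
# orientations and labels are an arbitrary illustrative choice
edges = {('a', 'b'): 1, ('b', 'c'): 2, ('a', 'd'): 0, ('b', 'e'): 1,
         ('c', 'f'): 2, ('d', 'e'): 0, ('e', 'f'): 1}

def cycle_class(cycle):
    """Defect class of a cycle (given as a vertex list): traversing an
    arc against its orientation contributes the inverse label -i mod D."""
    total = 0
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        total += edges[(u, v)] if (u, v) in edges else -edges[(v, u)]
    return total % D

left = cycle_class(['a', 'b', 'e', 'd'])     # class of the left cell
right = cycle_class(['b', 'c', 'f', 'e'])    # class of the right cell
outer = cycle_class(['a', 'b', 'c', 'f', 'e', 'd'])
assert outer == (left + right) % D  # classes of cells add up, mod D
```

The shared arc is traversed in opposite directions by the two cells, so its contributions cancel, which is exactly the mechanism behind the proposition.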
The notion of the class of a cycle does not depend on the choice of the starting point $v_{0}$: \begin{prop} In a graph labeled with $L_{d}'$, the defect class of a cycle does not depend on the starting point. \end{prop} \begin{proof} Changing the starting point used in the definition of the defect class will change the order of the permutations $\tilde{\sigma}_{i}\in L_{d}'$ that $cl(C,K)$ is composed of, but this will not affect $cl(C,K)$, because all permutations in $L_{d}'$ commute with each other. \end{proof} \begin{prop} \label{Tdodawanie_n2} In a planar graph labeled with $L_d'$, the class of a cycle is the sum of the classes of all cells within the cycle. \end{prop} \begin{proof} Let $C$ and $C'$ be two cycles in a planar graph which have two vertices $a$ and $b$ in common (see Fig. \ref{figura7}). We denote the composition of all permutations on the $a-b$ path not belonging to $C'$ as $\pi_1$, on the $b-a$ path not belonging to $C$ as $\pi_3$, and on the $b-a$ path belonging to both cycles as $\pi_2$. Then we have $\Pi_C=\pi_2\pi_1$ and $\Pi_{C'}=\pi_3\pi_2^{-1}.$ The composition of permutations on the outer cycle $\hat{C}$ (containing both $C$ and $C'$) can be written as $\Pi_{\hat{C}}=\pi_3\pi_1=\pi_3\pi_2^{-1}\pi_2\pi_1=\Pi_C\Pi_{C'}.$ If the classes of $C$ and $C'$ are $x$ and $y$, respectively, we have $\tilde{\sigma}_y\tilde{\sigma}_x(a)=y+(x+a)=\tilde{\sigma}_{x+y}(a).$ Thus, if a cycle is formed on the boundary of two cycles of classes $x$ and $y$ that share a common edge, its class is $x+y$. \end{proof} \begin{figure}[h!] \includegraphics[scale=0.35]{fig7.pdf} \caption{\label{figura7} Addition of defects of class $x$ and $y$.} \end{figure} \begin{dfn} By a (nontrivial) \textit{loop} we understand a series of edges in a grid such that the corresponding edges in the dual lattice form a cycle which is not contractible to a point. Typically we will refer to loops consisting of non-identity edges.
\end{dfn} Any loop of length $k$ which exists in a grid with no defects defines a set of $k$ bad cycles in the graph, all of which have the same defect class. We will refer to the defect class of those cycles as the class $Cl(L)$ of the loop. On a higher genus surface, two labelings $K_1$ and $K_2$ with the same sets of defects of each class may be nonequivalent. In such cases the labeling $K_1-K_2$ contains no defects, but it has at least one loop. It would be convenient to define the equivalence classes based solely on the configuration of defects and loops within a labeling. However, unlike defects, loops may not actually be an inherent property of a given labeling. It is possible for $K_1-K_2$ to contain a loop even if no loop can be identified in either $K_1$ or $K_2$. Which labeling in this case can be said to possess a loop? This is necessarily a matter of convention. For every possible configuration of defects one must choose one default labeling $D$ with the minimum number of non-identity edges. We will say that this labeling has no loops. Any labeling $K$ with the same configuration of defects as $D$ will be said to have a loop of class $x$ iff the labeling $K-D$ has such a loop. Note that $K-D$ is a labeling with no defects. Thus, it is equivalent to a labeling in which all non-identity edges form unambiguous loops. \subsection{Classification} \begin{lemma} \label{L1} If two graphs labeled by permutations from $L_{d}'$ can be transformed into each other by applying local permutations from $L_{d}'$, then these two labeled graphs have the same set of defects of each type, i.e. every cell has the same class in both graphs. \end{lemma} \begin{proof} Let $\Pi=\tilde{\sigma}_{x}\in{L_{d}'}$ be the defect class of a selected cell.
A switch $s(v,\tilde{\sigma}_y)$ changes the class of the cell as $\Pi\rightarrow\tilde{\sigma}_{y}\tilde{\sigma}_{x}\tilde{\sigma}_{y}^{-1}=\tilde{\sigma}_{y}\tilde{\sigma}_{y}^{-1}\tilde{\sigma}_{x}=\Pi$ if $v$ is the starting vertex of $\Pi$, and as $\Pi=\tilde{\sigma_{i}}\tilde{\sigma_{j}}\rightarrow\tilde{\sigma_{i}}\tilde{\sigma}_{y}\tilde{\sigma}_{y}^{-1}\tilde{\sigma_{j}}=\Pi$ otherwise. \end{proof} \begin{lemma} \label{L2} Applying a switch $s(v,\tilde{\pi}_{y})$, where $\tilde{\pi}_{y}\in L_d$, to a vertex of a cell with defect class $x$ and labeled with $L_{d}'$ will change the defect class of the cell into $-x$ if the switch was applied to the starting vertex, and will not affect the defect class otherwise. \end{lemma} \begin{proof} Because for all $\tilde{\pi}_{x}\in L_{d}$ we have $\tilde{\pi}_{x}^{-1}=\tilde{\pi}_{x}$, the application of the switch to the starting vertex will change the cell's defect class in the following way: $\Pi=\tilde{\sigma}_{x}\rightarrow\tilde{\pi}_{y}\tilde{\sigma}_{x}\tilde{\pi}_{y}=\tilde{\sigma}_{-x}$, because $\tilde{\pi}_y\tilde{\sigma}_x\tilde{\pi}_{y}(a) = y - (x + (y - a)) = -x + a.$ If the switch was applied to a different vertex, then we have $\Pi=\tilde{\sigma}_x=\tilde{\sigma}_i\tilde{\sigma}_j\rightarrow\tilde{\sigma}_i\tilde{\pi}_y\tilde{\pi}_y^{-1}\tilde{\sigma}_j=\tilde{\sigma}_x=\Pi$. \end{proof} \begin{proof}[Proof of Theorem \ref{T:any_n}] ($\Rightarrow$) If two labelings are equivalent, then one can be transformed into the other by switches. Assume without loss of generality that a permutation $\sigma_{1}\in L_d'$ on the edge $e=v_1v_2$ is transformed into $\sigma_{2}\in L_{d}'$ by switches $s(v_1,\eta)$ and $s(v_2,\pi)$: $\sigma_{2}=\pi\sigma_{1}\eta^{-1}$.
Such a transformation changes the defect class of the cell $c_2$ with starting point $v_{2}$ from $cl(c_{2},K)$ into $\pi cl(c_{2},K) \pi^{-1}$, whereas the defect class of the cell $c_1$ with starting point $v_{1}$ is changed from $cl(c_{1},K)$ into $\eta cl(c_{1},K) \eta^{-1}$. Notice that $\eta=\sigma_{2}^{-1}\pi\sigma_{1}$, where $\sigma_{1},\sigma_{2}^{-1}\in L_{d}'$ and, as such, they do not change the class of the cell. Now we see that the defect class of the cell $c_{1}$ also changes into $\pi cl(c_{1},K) \pi^{-1}$. This implies $cl(c,L)=\pi cl(c,K)\pi^{-1}$ for all cells. ($\Leftarrow$) Let $K,L:E\mapsto L_d'$ be labelings such that for every cell $c$ we have $cl(c,K)=\pi cl(c,L)\pi^{-1}$, where $\pi\in S_d$ is the same for all cells. When we start from the labeling $K$ and apply the permutation $\pi$ at all vertices, we obtain an equivalent labeling $\pi(K)$ such that $cl(c,\pi(K))=cl(c,L)$ for every cell $c$. For every labeling with $L_{d}'$, we can obtain a canonical representation using only switches with permutations from $L_{d}'$. Such switches do not change the defect class of any cell. Furthermore, for a given spanning tree there is only one labeling with a given set of defects and all edges of the spanning tree labeled with the identity. Thus, the labelings $\pi(K)$ and $L$ have a shared canonical representation, and are therefore equivalent. \end{proof} Note that the $(\Rightarrow)$ part of the proof does not depend on the graph being planar. Hence, we have the following corollary. \begin{cor} \label{cor:any_n} Let $K$ and $L$ be two labelings of the same graph. If the labelings are equivalent, then there exists a permutation $\pi$ such that $cl(c,K)=\pi^{-1}cl(c,L)\pi$ for every cell $c$ of the graph.
\end{cor} \begin{proof}[Proof of Lemma \ref{L:tree+}] It is obvious that all edges of $G-T^L$ belong to $G-L,$ as $T$ is a subgraph of $T^L.$ Now let us consider the dual lattice $G^*.$ The graph $G^*-T^{L*}$ has no vertices of degree $1$, as they were all absorbed into $T^{L*}$ during the enlarging procedure. It follows that every edge of $G^*-T^{L*}$ belongs to a cycle and thus, every edge of $G-T^L$ belongs to a loop. Any contractible loop in $G$ divides its edge set into two disconnected components. Since $T^L$ is a connected graph, all loops contained in $G-T^L$ must be noncontractible. The spanning tree $T$ contains no cycle. It follows that there is a loop in every homology class with no edges in $T.$ Since every cell belonging to such a loop has at least two edges not belonging to $T$, the loop does not get absorbed into $T^L.$ Thus, $G-T^L$ contains a loop from every homology class. Finally, two loops belonging to the same homology class can be continuously transformed into one another. But in this case these two loops would separate the graph into two non-connected subgraphs. This, however, is impossible, since $T^L$ is connected. Therefore, for a given spanning tree $T$ and genus $g$ of the surface, there are $2g$ unique closed loops in the complement of $T^{L}$. \end{proof} In the proof of Theorem \ref{Prop17} we will use the fact that the canonical form of an $L_{d}'$ game associated with a given spanning tree $T$ depends on the permutations that belong to $T^{L}$ but not to $T$ (they are fixed, up to a permutation $\pi$ applied to every vertex, by the classes of the defects), as well as on the permutations that belong to the complement of $T^{L}$ (they are partially fixed, up to a permutation $\pi$ applied to every vertex, by the classes of the defects, but also provide some additional characteristics of the game arising from the periodic boundary conditions).
\begin{proof}[Proof of Theorem \ref{Prop17}] First assume that there exists a permutation $\pi\in S_d$ such that $cl(c,K_1)=\pi cl(c,K_2)\pi^{-1}$ for every cell $c$ of the grid and $Cl(l,K_1)=\pi Cl(l,K_2)\pi^{-1}$ for every loop $l$. Let $T$ be a spanning tree of the graph. There is clearly only one way to label the enlarged tree $T^{L}$ such that all edges of $T$ are assigned the identity and the classes of all cells are preserved. The permutations assigned to the edges of $G-T^L$ must also be the same; if they were not, the classes of the loops in the two labelings would be different. Thus $K_1$ and $\pi(K_2)$ have a shared canonical representation, which means that they are equivalent. Since $\pi(K_2)$ is equivalent to $K_2$, it follows that $K_1$ and $K_2$ are equivalent. Now assume that $K_1$ and $K_2$ are equivalent. As shown before, $K_2$ can be obtained from $K_1$ by applying a switch $s(v,\sigma_v\pi)$ to each vertex, where $\sigma_v\in L_d'$ and $\pi\in S_d-L_d'$ is the same for all vertices. As shown before, the switch $s(v,\sigma_v\pi)$ changes the defect class of a cycle from $\sigma$ to $\pi^{-1}\sigma\pi$ if $v$ is the starting point of the cycle and leaves it unchanged otherwise. This applies to contractible cycles on the surface, which arise from defects, as well as to noncontractible cycles, which give rise to nontrivial loops. Combined with the definition of the class of a loop, this shows that the set of switches which transforms $K_1$ into $K_2$ changes the classes of defects and loops in exactly the same way, namely $cl(c,K_1)=\pi cl(c,K_2)\pi^{-1}$ for every cell $c$ and $Cl(l,K_1)=\pi Cl(l,K_2)\pi^{-1}$ for every loop $l$. \end{proof}
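The switching calculus used throughout the appendix can also be checked numerically for $L_d'$ labelings: a switch $s(v,\tilde{\sigma}_y)$ adds $y$ to the labels of arcs entering $v$ and subtracts $y$ from the labels of arcs leaving $v$, leaving every cell class unchanged, as in Lemma \ref{L1}. A minimal sketch on a single square cell (labels and orientations are an arbitrary illustrative choice):

```python
D = 5  # a label i stands for sigma_i(x) = x + i mod D

def cell_class(edges, cycle):
    """Class of a cell given as a vertex list; arcs traversed against
    their orientation contribute the inverse label -i mod D."""
    total = 0
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        total += edges[(u, v)] if (u, v) in edges else -edges[(v, u)]
    return total % D

def switch(edges, v, y):
    """s(v, sigma_y) for L_d' labels: an arc u->v gets sigma_y * pi,
    i.e. label pi + y; an arc v->u gets pi * sigma_y^{-1} = pi - y."""
    out = {}
    for (a, b), lab in edges.items():
        if b == v:
            lab = (lab + y) % D
        elif a == v:
            lab = (lab - y) % D
        out[(a, b)] = lab
    return out

square = {('a', 'b'): 2, ('b', 'c'): 4, ('d', 'c'): 1, ('a', 'd'): 3}
before = cell_class(square, ['a', 'b', 'c', 'd'])
after = cell_class(switch(square, 'b', 3), ['a', 'b', 'c', 'd'])
assert before == after  # switches from L_d' preserve the defect class
```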
\section{Introduction} \label{sec:introduction} Nowadays, the demand for high-performance structures with superior mechanical properties and chemical stability in engineering applications has increased significantly. Among such cellular structures, porous materials, with excellent properties such as light weight, outstanding energy absorption and heat resistance, have been extensively employed in various fields of engineering including aerospace, automotive, biomedical and other areas \cite{tampieri2001porosity,pompe2003functionally, lefebvre2008porous,betts2012benefits, smith2012steel}. However, the existence of internal pores leads to a significant reduction in the structural stiffness \cite{xia2013effects}. In order to overcome this shortcoming, reinforcement with carbonaceous nanofillers such as carbon nanotubes (CNTs) \cite{iijima1991helical, liew2015mechanical} and graphene platelets (GPLs) \cite{mittal2015review, papageorgiou2017mechanical} in the porous materials is an excellent and practical choice to strengthen their mechanical properties. More importantly, this reinforcement also aims to maintain their potential for lightweight structures \cite{groven2012solution,duarte2015effective}. In comparison with CNTs, GPLs have demonstrated great potential to become a good candidate for reinforcement \cite{rafiee2009enhanced,zaman2012carbon}, since GPLs have superior mechanical properties, a lower manufacturing cost, a larger specific surface area and a two-dimensional geometry. In order to increase the performance of structures, functionally graded (FG) porous structures reinforced by GPLs have been proposed in the literature to obtain the desired mechanical properties by modifying the sizes and the density of the internal pores in different directions as well as the dispersion patterns of GPLs \cite{hassani2012production,hangai2013compression,he2014preparation}.
In terms of numerical analysis, a large number of investigations have been conducted to study the influences of internal pores and GPLs on the behaviours of structures under various conditions. Based on Timoshenko beam theory and the Ritz method, Kitipornchai et al. \cite{kitipornchai2017free} examined the free vibration and elastic buckling of FG porous beams reinforced with GPLs, while Chen et al. \cite{chen2017nonlinear} studied their nonlinear free vibration and postbuckling behaviours. Yang et al. \cite{yang2018buckling} utilized the first-order shear deformation plate theory (FSDT) and the Chebyshev-Ritz method to study the uniaxial, biaxial and shear buckling as well as the free vibration of FG porous plates reinforced with GPLs uniformly or non-uniformly distributed in the metal matrix. Based on isogeometric analysis (IGA), Li et al. \cite{li2018isogeometric} analysed the static, free vibration and buckling behaviours of FG porous plates reinforced by GPLs using both first- and third-order shear deformation plate theories. Combining the Galerkin method with the fourth-order Runge-Kutta approach, Li et al. \cite{li2018nonlinear} studied the nonlinear vibration and dynamic buckling of sandwich FG porous plates reinforced by GPLs resting on a Winkler-Pasternak elastic foundation.\\ On the other hand, piezoelectric materials have also been extensively applied to build advanced smart structures for modern industrial products. One of their excellent and essential features is the ability to convert between electrical and mechanical energy, known as the piezoelectric effect and its converse \cite{wang1997static}. Regarding the analysis of plate structures integrated with piezoelectric layers, many studies have been conducted to predict their behaviours \cite{he2001active, liew2003modelling,ebrahimi2008free, talebitooti2016optimal, selim2016active,phung2017nonlinear}.
In addition, piezoelectric FG carbon nanotube reinforced composite (FG-CNTRC) plates have also attracted remarkable attention from researchers. Alibeigloo \cite{alibeigloo2013static,ALIbeigloo2014free} investigated the static and free vibration behaviours of FG-CNTRC plates and cylindrical panels embedded in thin piezoelectric layers using the three-dimensional theory of elasticity. Using FSDT and the von K\'arm\'an strain assumptions, Rafiee et al. \cite{rafiee2014non} investigated the nonlinear parametric instability of initially imperfect piezoelectric FG-CNTRC plates under combined electrical and thermal loadings. Sharma et al. \cite{sharma2016smart} then investigated the active vibration control of FG-CNTRC rectangular plates with piezoelectric sensor and actuator layers using FEM based on FSDT. Selim et al. \cite{selim2017active} studied the free vibration behaviour and active vibration control of FG-CNTRC plates with piezoelectric layers using an element-free IMLS-Ritz model based on Reddy's higher-order shear deformation theory. Nguyen-Quang et al. \cite{nguyen2018isogeometric} studied the dynamic response of laminated CNTRC plates integrated with piezoelectric layers using IGA and HSDT. Recently, Malekzadeh et al. \cite{malekzadeh2018vibration} employed the transformed differential quadrature method for the free vibration analysis of FG eccentric annular plates reinforced with GPLs and integrated with piezoelectric layers.\\ It is known that different basis functions are applied for the approximation of the geometry and the solution in the framework of traditional FEM, which leads to errors in the computational process. To improve the accuracy of solutions as well as reduce computational costs, IGA \cite{hughes2005isogeometric}, which employs non-uniform rational B-spline (NURBS) basis functions as shape functions, was introduced.
The main idea of IGA is to use the same basis functions to describe the geometric model and to approximate the solution field. IGA has been successfully applied to various fields of engineering and science. In comparison with the standard FEM, NURBS-based IGA provides better accuracy and reliability for various engineering problems, especially for those with complex geometries, as reported in the literature. The fundamentals of IGA and reviews of its development are presented in the established literature \cite{cottrell2009isogeometric, nguyen2015isogeometric}. Nevertheless, the implementation of the NURBS-based IGA approach is often not easy, as the basis functions are not confined to a single element but instead span several elements. To overcome this obstacle, Borden et al. \cite{borden2011isogeometric} proposed the B\'ezier extraction, which represents the NURBS basis functions in terms of a Bernstein polynomial basis defined over $C^0$-continuous isogeometric B\'ezier elements. By employing Bernstein polynomials, which are similar to the Lagrangian basis functions, as the basis functions in B\'ezier extraction, the implementation of IGA becomes analogous to the traditional FEM. As a result, the IGA approach can easily be embedded in most existing FEM codes while naturally retaining its advantages.\\ In the context of plate theories, a great number of theories have been introduced and developed to estimate the responses of plate structures under different conditions. While the classical plate theory (CPT), or Kirchhoff-Love theory, shows its drawbacks in the analysis of thick plates, the FSDT, which is applicable to both thin and thick plates, requires an appropriate shear correction factor. To overcome these shortcomings, several higher-order shear deformation plate theories (HSDTs) have been proposed in the literature \cite{reddy1984simple,senthilnathan1987buckling,karama2003mechanical,thai2014isogeometric,vu2018enhanced}.
Nevertheless, these HSDTs require $C^1$-continuity of the generalized displacement field, which leads to second-order derivatives of the deflection in the stiffness formulation. Therefore, several $C^0$-continuous elements were proposed \cite{shankara1996c0element,kant2002analytical,nguyen2017polygonal}.\\ As previously mentioned, most studies of plates integrated with piezoelectric layers address only a core layer composed of FGM or FG-CNTRC. Furthermore, the geometrically nonlinear static and dynamic analyses of piezoelectric FG plates under various loading types are still somewhat limited. In this study, in order to fill this gap in the literature, the geometrically nonlinear static and transient responses of piezoelectric FG plates whose core layer is composed of FG porous materials reinforced by GPLs are investigated using B\'ezier extraction in IGA and the $C^0$-type HSDT. More importantly, the active control of the geometrically nonlinear static and dynamic responses of the FG porous plates, including the effect of structural damping, is investigated based on a closed-loop control scheme with piezoelectric sensors and actuators. By using the B\'ezier extraction, IGA preserves the element structure, which allows the IGA approach to be integrated conveniently into existing FEM routines. For the material distribution, the core layer is constituted by a combination of two porosity distributions and three dispersion patterns of GPLs along the plate thickness, while piezoelectric layers are perfectly bonded to both the top and bottom surfaces of the plate. Newmark's integration scheme in combination with the Newton-Raphson iterative procedure is utilized for the geometrically nonlinear static and dynamic analyses. Several verification studies are also conducted to demonstrate the accuracy and stability of the present method.
The influence of specific parameters such as porosity distributions, porosity volume fractions, GPL dispersion patterns and input voltages on the nonlinear behaviours of the plate is addressed and discussed in detail through various numerical examples.\\ The outline of this paper is as follows. Section \ref{sec:Theoretical_formulations} provides the material models and the variational and approximate formulations of the piezoelectric FG porous plates reinforced by GPLs based on the $C^0$-type HSDT. Section \ref{sec:Active_Control} describes the active control algorithm. Section \ref{sec:num_exam} presents the numerical examples for the geometrically nonlinear static and transient analyses as well as the active control of the piezoelectric FG porous plates reinforced with GPLs, before some concluding remarks are given in Section \ref{sec:conclusion}.\\ \section{Theoretical formulations} \label{sec:Theoretical_formulations} \subsection{Material models of the FG porous plate reinforced with GPLs} We consider an FG plate model whose core layer is made of metal foam reinforced by GPLs and which is integrated with piezoelectric layers, as depicted in Fig. \ref{fig:Model_plates}. The length, width and total thickness of the piezoelectric FG porous plate are denoted by $a$, $b$ and $h=h_c+2h_p$, respectively, in which $h_c$ and $h_p$ are the thicknesses of the porous core and the piezoelectric layers, respectively. The porous core layer is constituted by combining two different porosity distribution types and three GPL dispersion patterns along the thickness direction of the plate, as depicted in Fig. \ref{fig:porositydis}.
The material properties, including Young's modulus, shear modulus and mass density, through the thickness of the porous core layer corresponding to the two porosity distribution types can be expressed as \begin{linenomath*} \begin{equation} \left\{ \begin{array}{l} E\left( z \right) = {E_1}\left[ {1 - {e_0}\lambda \left( z \right)} \right],\\ G\left( z \right) = E\left( z \right)/\left[ {2\left( {1 + \nu\left( z \right)} \right)} \right],\\ \rho \left( z \right) = {\rho _1}\left[ {1 - {e_m}\lambda \left( z \right)} \right], \label{eqn:materialpro} \end{array} \right. \end{equation} \end{linenomath*} where \begin{linenomath*} \begin{equation} \lambda \left( z \right) = \left\{ {\begin{array}{*{20}{l}} {\cos\left( {\pi z/h_c} \right)},&\textrm {Porosity distribution 1}\\ {\cos\left( {\pi z/2h_c + \pi /4} \right)},&\textrm {Porosity distribution 2} \label{eqn:porositydis} \end{array}} \right. \end{equation} \end{linenomath*} in which $E_1$ and $\rho_1$ denote the maximum values of Young's modulus and mass density in the thickness direction of the porous core layer, respectively. Meanwhile, the coefficient of porosity $e_0$ is determined by \begin{linenomath*} \begin{equation} {e_0} = 1 - E'_2/E'_1, \end{equation} \end{linenomath*} where $E'_1$ and $E'_2$ stand for the maximum and minimum values of Young's modulus of the porous core layer without GPLs, as shown in Fig. \ref{fig:porositydis}. Based on the Gaussian random field (GRF) scheme \cite{roberts2001elastic}, the mechanical properties of closed-cell cellular solids can be given as \begin{linenomath*} \begin{equation} \frac{{E\left( z \right)}}{{{E_1}}} = {\left( {\frac{{\rho \left( z \right)/{\rho _1} + 0.121}}{{1.121}}} \right)^{2.3}} \; \textrm{for} \; \left( {0.15 < \frac{{\rho \left( z \right)}}{{{\rho_1}}} < 1} \right). \end{equation} \end{linenomath*} Then, the mass density coefficient $e_m$ in Eq.
\ref{eqn:materialpro} can be determined as \begin{linenomath*} \begin{equation} {e_m} = \frac{{1.121\left( {1 - \sqrt[{2.3}]{{1 - {e_0}\lambda \left( z \right)}}} \right)}}{{\lambda \left( z \right)}}. \end{equation} \end{linenomath*} Also according to the closed-cell GRF scheme \cite{roberts2002computation}, Poisson's ratio $\nu(z)$ is determined by \begin{linenomath*} \begin{equation} \nu \left( z \right) = 0.221p' + {\nu _1}\left( {0.342p{'^2} - 1.21p' + 1} \right), \end{equation} \end{linenomath*} where $\nu_1$ represents the Poisson's ratio of the metal without internal pores, and $p'$ is given as \begin{linenomath*} \begin{equation} {p}' = 1.121\left( {1 - \sqrt[{2.3}]{{1 - {e_0}\lambda \left( z \right)}}} \right). \end{equation} \end{linenomath*} The volume fraction of GPLs, which varies along the thickness direction of the plate according to the three dispersion patterns illustrated in Fig. \ref{fig:porositydis}, is given as \begin{linenomath*} \begin{equation} {V_{GPL}} = \left\{ {\begin{array}{*{20}{l}} {{S_{i1}}\left[ {1 - \cos \left( {\pi z/h_c} \right)} \right]}, &\textrm {Pattern A}\\ {{S_{i2}}\left[ {1 - \cos \left( {\pi z/2h_c + \pi /4} \right)} \right]},& \textrm{Pattern B}\\ {{S_{i3}}}, & \textrm{Pattern C} \end{array}} \right. \end{equation} \end{linenomath*} where $S_{i1}$, $S_{i2}$ and $S_{i3}$ are the maximum values of the GPL volume fraction, in which $i = 1,2$ correspond to the two porosity distributions. The weight fraction of GPLs is related to its volume content as follows \begin{linenomath*} \begin{equation} \frac{{{\Lambda _{GPL}}{\rho _m}}}{{{\Lambda _{GPL}}{\rho _m} + {\rho _{GPL}} - {\Lambda _{GPL}}{\rho _{GPL}}}} \times \int_{ - {h_c}/2}^{{h_c}/2} {\left[ {1 - {e_m}\lambda \left( z \right)} \right]} dz = \int_{ - {h_c}/2}^{{h_c}/2} {{V_{GPL}}\left[ {1 - {e_m}\lambda \left( z \right)} \right]} dz.
\end{equation} \end{linenomath*} The effective Young's modulus $E_1$ of the porous core layer reinforced with GPLs, in the absence of internal pores, is determined by the Halpin-Tsai micromechanics model \cite{affdl1976halpin, tjong2013recent} as \begin{linenomath*} \begin{equation} {E_1} = \frac{3}{8}\left( {\frac{{1 + \xi _L\eta _L{V_{GPL}}}}{{1 - \eta _LV_{GPL}}}} \right){E_m} + \frac{5}{8}\left( {\frac{{1 + \xi _W\eta _W{V_{GPL}}}}{{1 - \eta _WV_{GPL}}}} \right){E_m}, \end{equation} \end{linenomath*} in which \begin{linenomath*} \begin{equation} \begin{array}{l} \xi _L = \frac{{2{l_{GPL}}}}{{{t_{GPL}}}},\;\xi _W = \frac{{2{w_{GPL}}}}{{{t_{GPL}}}},\; \eta _L= \frac{{\left( {{E_{GPL}}/{E_m}} \right) - 1}}{{\left( {{E_{GPL}}/{E_m}} \right) + \xi _L}},\;\eta _W = \frac{{\left( {{E_{GPL}}/{E_m}} \right) - 1}}{{\left( {{E_{GPL}}/{E_m}} \right) + \xi _W}}, \end{array} \end{equation} \end{linenomath*} where $w_{GPL}$, $l_{GPL}$ and $t_{GPL}$ are the average width, length and thickness of the GPLs, respectively, while $E_{GPL}$ and $E_m$ are the Young's moduli of the GPLs and the metal matrix, respectively. Finally, $\rho_1$ and $\nu_1$, the mass density and Poisson's ratio of the GPL-reinforced metal matrix without internal pores, can be determined based on the rule of mixtures \cite{nakamura2000determination} \begin{linenomath*} \begin{equation} {\rho _1} = {\rho _{GPL}}{V_{GPL}} + {\rho _m}{V_m}, \end{equation} \end{linenomath*} \begin{linenomath*} \begin{equation} {\nu _1} = {\nu _{GPL}}{V_{GPL}} + {\nu _m}{V_m}, \end{equation} \end{linenomath*} where the mechanical properties of the GPLs and the metal matrix are denoted with the subscripts $GPL$ and $m$, respectively, and $V_{GPL}$ and $V_m=1-V_{GPL}$ denote the volume fractions of the GPLs and the metal matrix, respectively.
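The material model above forms a simple pipeline: the Halpin-Tsai model gives the pore-free modulus $E_1$, the rule of mixtures gives $\rho_1$ and $\nu_1$, and the porosity functions $\lambda(z)$, $e_m$ and $\nu(z)$ then grade the properties through the thickness. A minimal numerical sketch of that pipeline follows; the copper-like matrix and GPL property defaults are illustrative placeholders, not values taken from this paper.

```python
import math

def effective_core_properties(z, h_c, e_0, V_GPL,
                              E_m=130e9, nu_m=0.34, rho_m=8960.0,
                              E_GPL=1.01e12, nu_GPL=0.186, rho_GPL=1062.5,
                              l_GPL=2.5e-6, w_GPL=1.5e-6, t_GPL=1.5e-9,
                              distribution=1):
    """Effective properties E(z), G(z), rho(z), nu(z) of the porous core.

    Material constants above are illustrative placeholders; z must avoid
    the points where lambda(z) = 0 (division by zero in e_m).
    """
    # Halpin-Tsai micromechanics model for the pore-free matrix modulus E_1
    xi_L, xi_W = 2.0 * l_GPL / t_GPL, 2.0 * w_GPL / t_GPL
    eta_L = (E_GPL / E_m - 1.0) / (E_GPL / E_m + xi_L)
    eta_W = (E_GPL / E_m - 1.0) / (E_GPL / E_m + xi_W)
    E_1 = (3.0 / 8.0 * (1.0 + xi_L * eta_L * V_GPL) / (1.0 - eta_L * V_GPL)
           + 5.0 / 8.0 * (1.0 + xi_W * eta_W * V_GPL) / (1.0 - eta_W * V_GPL)) * E_m

    # Rule of mixtures for the pore-free density and Poisson's ratio
    V_m = 1.0 - V_GPL
    rho_1 = rho_GPL * V_GPL + rho_m * V_m
    nu_1 = nu_GPL * V_GPL + nu_m * V_m

    # Porosity distribution function lambda(z)
    if distribution == 1:
        lam = math.cos(math.pi * z / h_c)
    else:
        lam = math.cos(math.pi * z / (2.0 * h_c) + math.pi / 4.0)

    # Mass density coefficient e_m from the closed-cell GRF scheme
    e_m = 1.121 * (1.0 - (1.0 - e_0 * lam) ** (1.0 / 2.3)) / lam

    # Graded properties through the thickness
    E_z = E_1 * (1.0 - e_0 * lam)
    rho_z = rho_1 * (1.0 - e_m * lam)
    p = 1.121 * (1.0 - (1.0 - e_0 * lam) ** (1.0 / 2.3))
    nu_z = 0.221 * p + nu_1 * (0.342 * p ** 2 - 1.21 * p + 1.0)
    G_z = E_z / (2.0 * (1.0 + nu_z))
    return E_z, G_z, rho_z, nu_z
```

As expected from Eq. \ref{eqn:materialpro}, raising the porosity coefficient $e_0$ at fixed $z$ reduces the local stiffness $E(z)$.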
\subsection{Weak form of the governing equations} The governing equations of motion for the piezoelectric FG plate can be obtained by applying Hamilton's variational principle \cite{hwang1993finite}, which can be expressed as follows \begin{linenomath*} \begin{equation} \delta \int_{{t_1}}^{{t_2}} {Ldt = 0}, \label{eqn:galerkin} \end{equation} \end{linenomath*} in which $t_1$ and $t_2$ denote the initial and final times, respectively. Meanwhile, $L$ is the general energy functional, comprising the kinetic energy, the strain energy, the dielectric energy and the external work, which is expressed as follows \begin{linenomath*} \begin{equation} L = \frac{1}{2}\int_\Omega {\left( {\rho {{{\bf{\dot u}}}^T}{\bf{\dot u}} - {{\bf{\boldsymbol{\sigma} }}^T}\boldsymbol{\varepsilon} + {{\bf{D}}^T}{\bf{E}}} \right)} {\rm{d}}\Omega + \int_{{\Gamma _s}} {{{\bf{u}}^T}{{\bf{f}}_s}} {\rm{d}}{\Gamma _s} - \int_{{\Gamma _\phi }} {{{\phi} }{{\bf{q}}_s}} {\rm{d}}{\Gamma _\phi } + \sum {{{\bf{u}}^T}{{\bf{F}}_p}} - \sum {{\phi} {{\bf{Q}}_p}}, \label{eqn:total_Ener} \end{equation} \end{linenomath*} where $\rho$ denotes the mass density; $\bf u$ and $\dot{\bf u}$ represent the mechanical displacement and velocity field vectors; $\phi$ represents the electric potential; ${\bf f}_s$ and ${\bf F}_p$ denote the external mechanical surface and concentrated load vectors; ${\bf q}_s$ and ${\bf Q}_p$ indicate the external surface and point charges, respectively; $\Gamma_s$ and $\Gamma_\phi$ denote the external mechanical and electrical loading surfaces, respectively.\\ Then, the variational form of the equations of motion can be expressed as follows \begin{linenomath*} \begin{equation} \begin{array}{l} \int_{{t_1}}^{{t_2}} {\int_\Omega {\left( {\rho {\bf{\ddot u}}\delta {\bf{u}}^T - {\bf{\boldsymbol{\sigma} }}^T\delta {\boldsymbol\varepsilon} + {\bf{D}}^T\delta {\bf{E}}} \right)} } {\rm{d}}\Omega {\rm{dt + }}\int_{{t_1}}^{{t_2}} {\int_{{\Gamma _s}}
{{{\bf{f}}_s}\delta {\bf{u}}^T} } {\rm{d}}{\Gamma _s}{\rm{dt - }}\int_{{t_1}}^{{t_2}} {\int_{{\Gamma _\phi }} {{{\bf{q}}_s}\delta {\phi} } } {\rm{d}}{\Gamma _\phi }{\rm{dt}}\\ \\ {\rm{ + }}\int_{{t_1}}^{{t_2}} {\sum {{{\bf{F}}_p}} } \delta {{\bf{u}}^T}dt - \int_{{t_1}}^{{t_2}} {\sum {{{\bf{Q}}_p}} } \delta \phi dt = 0. \end{array} \label{eqn:varia_form} \end{equation} \end{linenomath*} In this study, the linear constitutive relations of the FG porous plate reinforced by GPLs with piezoelectric layers can be presented as follows \cite{wang2001vibration} \begin{linenomath*} \begin{equation} \left[ {\begin{array}{*{20}{c}} {\boldsymbol{\sigma }}\\ {\bf{D}} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\bf{c}}&{ - {{\bf{e}}^T}}\\ {\bf{e}}&{\bf{g}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {\bar {\boldsymbol\varepsilon} }\\ {\bf{E}} \end{array}} \right], \end{equation} \end{linenomath*} in which $\bar {\boldsymbol\varepsilon} = {{\rm{[}}{{\boldsymbol{\varepsilon }}},\;{{\boldsymbol{\gamma}}}{\rm{]}}^T}$ and ${\boldsymbol{\sigma }}$ represent the strain and stress vectors, respectively; $\bf D$ denotes the dielectric displacement vector; $\bf e$ represents the piezoelectric stress constant matrix; and $\bf g$ indicates the dielectric constant matrix. Meanwhile, the electric field vector $\bf E$, which is derived from the electric potential field $\phi$, can be defined as \cite{tzou1990distributed} \begin{linenomath*} \begin{equation} {\bf{E}} = - {\rm{grad}}\phi = - \nabla \phi.
\label{eqn: elec_potent} \end{equation} \end{linenomath*} \\ The material constant matrix $\bf c$ is defined as \begin{linenomath*} \begin{equation} {\bf{c}} = \left[ {\begin{array}{*{20}{c}} {\bf{A}}&{\bf{B}}&{\bf{L}}&{\bf{0}}&{\bf{0}}\\ {\bf{B}}&{\bf{G}}&{\bf{F}}&{\bf{0}}&{\bf{0}}\\ {\bf{L}}&{\bf{F}}&{\bf{H}}&{\bf{0}}&{\bf{0}}\\ {\bf{0}}&{\bf{0}}&{\bf{0}}&{{{\bf{A}}^s}}&{{{\bf{B}}^s}}\\ {\bf{0}}&{\bf{0}}&{\bf{0}}&{{{\bf{B}}^s}}&{{{\bf{D}}^s}} \end{array}} \right], \end{equation} \end{linenomath*} in which \begin{linenomath*} \begin{equation} \begin{array}{l} \left( {\bf A,\bf B,\bf G,\bf L,\bf F,\bf H} \right) = \int_{ - h_c/2}^{h_c/2} {\left( {1,z,{z^2},f(z),zf(z),{f^2}(z)} \right)}Q^b_{ij}dz,\\ \left( {{\bf A}_{}^s,{\bf B}_{}^s,{\bf D}_{}^s} \right) = \int_{ - h_c/2}^{h_c/2} {(1,f'(z),{{f'}^2}(z))}Q^s_{ij}dz, \end{array} \label{eqn:matmatrices1} \end{equation} \end{linenomath*} with \begin{linenomath*} \begin{equation} {\bf Q}^b = \frac{{{E_e}}}{{1 - \nu _e^2}}\left[ {\begin{array}{*{20}{c}} 1&{{\nu _e}}&0\\ {{\nu _e}}&1&0\\ 0&0& {\frac {1 - \nu _e}{2}} \end{array}} \right],\;\;{\bf Q}^s = \frac{{{E_e}}}{{2(1 + \nu _e^{})}}\left[ {\begin{array}{*{20}{c}} 1&0\\ 0&1 \end{array}} \right], \end{equation} \end{linenomath*} where $E_e$ and $\nu_e$ are the effective Young's modulus and Poisson's ratio, respectively. \subsection{Approximation of the mechanical displacement field} \subsubsection{$C^0$-type higher-order shear deformation theory} \label{sec:plateTheory} Consider a plate occupying the domain ${\bf V}=\Omega\times(\frac{-h}{2},\frac{h}{2})$, in which $\Omega\in \mathbb{R}^2$.
Based on the higher-order shear deformation theory \cite{aydogdu2009new}, the displacement field at an arbitrary point in the plate can be presented as follows \begin{linenomath*} \begin{equation} {\bf{u}}(x,y,z) = {\bf{u}}^0(x,y) + z{\bf{u}}^1(x,y) + f(z){\bf{u}}^2(x,y), \label{eqn:dispstrain} \end{equation} \end{linenomath*} where \begin{linenomath*} \begin{equation} {\bf{u}} = \left\{ {\begin{array}{*{20}{c}} u\\v\\w \end{array}} \right\},{\rm{ }}{{\bf{u}}^0} = \left\{ {\begin{array}{*{20}{c}} {{u_0}}\\{{v_0}}\\{{w_0}} \end{array}} \right\},\,{\rm{ }}\,{{\bf{u}}^1} = - \left\{ {\begin{array}{*{20}{c}} {{w_{0,x}}}\\{{w_{0,y}}}\\0 \end{array}} \right\},{\rm{ }}{{\bf{u}}^2} = \left\{ {\begin{array}{*{20}{c}} {{\theta _x}}\\{{\theta _y}}\\0 \end{array}} \right\}, \label{eqn:dispstrain1} \end{equation} \end{linenomath*} in which $u_0$, $v_0$ and $w_0$ are the displacement components in the $x$, $y$ and $z$ directions, while $\theta_x$ and $\theta_y$ are the rotation components about the $y$- and $x$-axes, respectively. The comma subscripts $x$ and $y$ denote differentiation with respect to the $x$ and $y$ directions, respectively, and $f(z)$ is a function of the $z$-coordinate which describes the distribution of the shear strains and stresses through the plate thickness, as listed in \cite{nguyen2016general}. In this work, the well-known third-order function proposed by Reddy, $f(z)=z-\frac{4z^3}{3h^2}$ \cite{reddy2000analysis}, is utilized.\\ In order to avoid higher-order derivatives in the approximation formulations and to conveniently impose the boundary conditions, additional variables are introduced as follows \begin{linenomath*} \begin{equation} w_{0,x}=\beta_x,\;w_{0,y}=\beta_y. \label{eqn:dispstrain2} \end{equation} \end{linenomath*} Then, substituting Eq. \ref{eqn:dispstrain2} into Eq.
\ref{eqn:dispstrain1}, one obtains \begin{linenomath*} \begin{equation} {{\bf{u}}^0} = {\left\{ {\begin{array}{*{20}{c}} {{u_0}}\\{{v_0}}\\{{w_0}} \end{array}} \right\}},{{\bf{u}}^1} = - {\left\{ {\begin{array}{*{20}{c}} {{\beta _x}}\\{{\beta _y}}\\0 \end{array}} \right\}},{{\bf{u}}^2} = {\left\{ {\begin{array}{*{20}{c}} {{\theta _x}}\\{{\theta _y}}\\0 \end{array}} \right\}.} \label{eqn:dispstrain3} \end{equation} \end{linenomath*} It can be seen that the strain fields derived from Eq. \ref{eqn:dispstrain3} only require $C^0$-continuity of the generalized displacements. Therefore, this theory is called the $C^0$-type higher-order shear deformation theory ($C^0$-HSDT).\\ The Green strain tensor of a bending plate can be expressed in compact form as follows \begin{linenomath*} \begin{equation} {\varepsilon _{ij}} = \frac{1}{2}\left( {\frac{{\partial {u_i}}}{{\partial {x_j}}} + \frac{{\partial {u_j}}}{{\partial {x_i}}} + \frac{{\partial {u_k}}}{{\partial {x_i}}}\frac{{\partial {u_k}}}{{\partial {x_j}}}} \right).
\label{eqn:Green_strain} \end{equation} \end{linenomath*} Employing the von K\'arm\'an assumptions, the strain-displacement relations can be rewritten as \begin{linenomath*} \begin{subequations} \begin{align} \boldsymbol{\varepsilon}={\left\{ {{\varepsilon _{xx}},{\varepsilon _{yy}},{\gamma _{xy}}} \right\}^T} = {{\boldsymbol{\varepsilon}} _0} + z{{\boldsymbol{\kappa} }_1} + {f(z)}{{\boldsymbol{\kappa} }_2},\\ {\boldsymbol{\gamma}} = {\left\{ {{\gamma _{xz}},{\gamma _{yz}}} \right\}^T} = {{\boldsymbol{\varepsilon}}_s} + {f'(z)}{{\boldsymbol{\kappa}}_s}, \label{eqn:strainplane} \end{align} \end{subequations} \end{linenomath*} where \begin{linenomath*} \begin{equation} \begin{array}{l} {{\boldsymbol{\varepsilon} }_0} = \left\{ {\begin{array}{*{20}{c}} {{u_{0,x}}}\\ {{v_{0,y}}}\\ {{u_{0,y}} + {v_{0,x}}} \end{array}} \right\} + \frac{1}{2}\left\{ {\begin{array}{*{20}{c}} {w_{,x}^2}\\ {w_{,y}^2}\\ {2{w_{,x}}{w_{,y}}} \end{array}} \right\} = {\boldsymbol{\varepsilon} }_0^L + {\boldsymbol{\varepsilon} }_0^{NL},\\ {{\boldsymbol{\kappa} }_1} = - \left\{ {\begin{array}{*{20}{c}} {{\beta _{x,x}}}\\ {{\beta _{y,y}}}\\ {{\beta _{x,y}} + {\beta _{y,x}}} \end{array}} \right\},{{\boldsymbol{\kappa} }_2} = \left\{ {\begin{array}{*{20}{c}} {{\theta _{x,x}}}\\ {{\theta _{y,y}}}\\ {{\theta _{x,y}} + {\theta _{y,x}}} \end{array}} \right\},\\ {{\boldsymbol{\varepsilon}}_s} = \left\{ {\begin{array}{*{20}{c}} {{w_{0,x}} - {\beta _x}}\\ {{w_{0,y}} - {\beta _y}} \end{array}} \right\},{\boldsymbol{\kappa}_{s}} = \left\{ {\begin{array}{*{20}{c}} {{\theta _x}}\\ {{\theta _y}} \end{array}} \right\}, \end{array} \label{eqn:extract_disp} \end{equation} \end{linenomath*} where the nonlinear strain component is expressed as follows \begin{linenomath*} \begin{equation} {\boldsymbol{\varepsilon} }_0^{NL}=\frac{1}{2}\left[ {\begin{array}{*{20}{c}} {{w_{,x}}}&0\\ 0&{{w_{,y}}}\\ {{w_{,y}}}&{{w_{,x}}} \end{array}} \right]\left\{ {\begin{array}{*{20}{c}}
{{w_{,x}}}\\ {{w_{,y}}} \end{array}} \right\} = \frac{1}{2}{\boldsymbol\Theta} {\bf{\Lambda }}. \end{equation} \end{linenomath*} \subsubsection{Isogeometric analysis based on B\'ezier extraction of NURBS} \label{sec:IGA_Section} \setcounter{secnumdepth}{4} \paragraph{B-spline and NURBS basis functions} In one-dimensional (1D) space, the B-spline basis functions are defined by a knot vector in the parametric space, ${\bf{\Xi }} = \left\{ {{\xi _1},{\xi _2},...,{\xi _{n + p + 1}}} \right\}$, where ${\xi _i}\;(i = 1,...,n + p + 1)$ denotes the $i$th knot, while $n$ and $p$ are the number of basis functions and the polynomial order, respectively. For a given knot vector $\bf{\Xi }$, the B-spline basis functions are defined recursively as \begin{linenomath*} \begin{equation} {N_{i,0}}\left( \xi \right) = \left\{ {\begin{array}{*{20}{l}} 1,& \textrm{if}\;\;{{\xi _i} \le \xi < {\xi _{i + 1}}},\\ 0,& \textrm{otherwise}, \end{array}} \right. \end{equation} \end{linenomath*} \begin{linenomath*} \begin{equation} {N_{i,p}}\left( \xi \right) = \frac{{\xi - {\xi _i}}}{{{\xi _{i + p}} - {\xi _i}}}{N_{i,p - 1}}\left( \xi \right) + \frac{{{\xi _{i + p + 1}} - \xi }}{{{\xi _{i + p + 1}} - {\xi _{i + 1}}}}{N_{i + 1,p - 1}}\left( \xi \right), \textrm { for} \; p>0. \end{equation} \end{linenomath*} Then, B-spline curves can be constructed by taking a linear combination of the B-spline basis functions and the control points ${{\bf P}_i}\left( {i = 1,2,...,n} \right)$ as \begin{linenomath*} \begin{equation} {\bf T}\left( \xi \right) = \sum\limits_{i=1}^n {{\bf P}_i{N_{i,p}}\left( \xi \right)}. \end{equation} \end{linenomath*} \\ In two-dimensional (2D) space, the B-spline basis functions can also be obtained by taking the tensor product of two 1D basis functions.
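The Cox-de Boor recursion above maps directly to a short routine. The following sketch (with 0-indexed storage of the knot vector and the usual convention of dropping terms with a zero denominator at repeated knots) evaluates $N_{i,p}(\xi)$ and a B-spline curve with scalar control points; it is an illustration, not the paper's implementation.

```python
def bspline_basis(i, p, xi, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(xi).

    `knots` stores the knot vector Xi = {xi_1, ..., xi_{n+p+1}} 0-indexed;
    fractions with a zero denominator are dropped by convention.
    """
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    val = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        val += (xi - knots[i]) / d1 * bspline_basis(i, p - 1, xi, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        val += (knots[i + p + 1] - xi) / d2 * bspline_basis(i + 1, p - 1, xi, knots)
    return val

def bspline_curve(xi, ctrl_pts, p, knots):
    """Evaluate T(xi) = sum_i P_i N_{i,p}(xi) for scalar control points."""
    return sum(ctrl_pts[i] * bspline_basis(i, p, xi, knots)
               for i in range(len(ctrl_pts)))
```

For the open knot vector $\{0,0,0,1,1,1\}$ with $p=2$ the recursion reproduces the quadratic Bernstein polynomials, so the three basis values at $\xi=0.5$ are $0.25$, $0.5$ and $0.25$ and sum to one (partition of unity).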
Similarly, the B-spline surfaces are expressed by \begin{linenomath*} \begin{equation} {\bf S}\left( {\xi ,\eta } \right) = \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^m {{\bf P}_{i,j}{N_{i,p}}} } \left( \xi \right){M_{j,q}}\left( \eta \right) = {\bf P}^T{\bf N}\left( {\xi ,\eta } \right), \label{eqn:B-spli_sur} \end{equation} \end{linenomath*} in which ${N_{i,p}}$ and ${M_{j,q}}$ represent the basis functions of orders $p$ and $q$ in the $\xi$ and $\eta$ directions, corresponding to the knot vectors ${\bf{\Xi }} = \left\{ {{\xi _1},{\xi _2},...,{\xi _{n + p + 1}}} \right\}$ and ${\bf H} = \left\{ {{\eta _1},{\eta _2},...,{\eta _{m + q + 1}}} \right\}$, respectively.\\ Since B-spline basis functions are unable to exactly describe some conic shapes such as circles, cylinders, ellipsoids and spheres, NURBS have been introduced based on B-splines and a set of weights. Accordingly, the NURBS basis functions can be expressed as \begin{linenomath*} \begin{equation} {R_{i,j}}\left( {\xi ,\eta } \right) = \frac{{{N_{i,p}}\left( \xi \right){M_{j,q}}\left( \eta \right){w_{i,j}}}}{{\sum\limits_{{\hat i}=1}^n {\sum\limits_{{\hat j}=1}^m {{N_{{{\hat i}},p}}\left( \xi \right){M_{{{\hat j}},q}}\left( \eta \right){w_{\hat i,\hat j}}}}}}, \end{equation} \end{linenomath*} where ${w_{i,j}}$ represents the weight values. Then, the NURBS surfaces can be determined as follows \begin{linenomath*} \begin{equation} {\bf S}\left( {\xi ,\eta } \right) = \sum\limits_{i = 1}^n {\sum\limits_{j = 1}^m {{R_{i,j}}\left( {\xi ,\eta } \right)} } {{\bf P}_{i,j}}. \end{equation} \end{linenomath*} \setcounter{secnumdepth}{4} \paragraph{B\'ezier extraction of NURBS} The major purpose of B\'ezier extraction is to replace the NURBS basis functions with the $C^0$-continuous Bernstein polynomial basis functions defined over B\'ezier elements, which have an element structure similar to that of the standard FEM.
By using Bernstein polynomials as the basis functions in B\'ezier extraction, the IGA approach can be performed straightforwardly and integrated into most available FEM codes. It is well known that a B-spline basis function of order $p$ has $C^{p-k}$ continuity across an element boundary, in which $k$ represents the multiplicity of the corresponding knot in the knot vector. Therefore, $C^0$-continuity can be obtained by inserting knots into the B-spline basis until $k = p$. Accordingly, a new knot $\bar \xi \in \left[ {{\xi _k},{\xi _{k + 1}}} \right]$ with $k>p$ is inserted into the original knot vector ${\bf{\Xi }} = \left\{ {{\xi _1},{\xi _2},...,{\xi _{n + p + 1}}} \right\}$. As a result, a new set of control points is obtained, expressed as follows \cite{hughes2005isogeometric} \begin{linenomath*} \begin{equation} {\bar {\bf P}_i} = \left\{ {\begin{array}{*{20}{l}} {{{\bf {P}}_1}},\\ {{\alpha _i}{{\bf {P}}_i}{\rm{ + (1 - }}{\alpha _i}){{\bf {P}}_{i - 1}}},\\ {{{\bf {P}}_n}}, \end{array}} \right.{\rm{ }}\begin{array}{*{20}{l}} {{\rm{ }}i = 1},\\ {1 < i < n + 1},\\ {i = n + 1}, \end{array} \end{equation} \end{linenomath*} where \begin{linenomath*} \begin{equation} {\alpha _i} = \left\{ {\begin{array}{*{20}{l}} 1, &1\le i \le k - p,\\ \frac{{\bar \xi - {\xi _i}}}{{{\xi _{i + p}} - {\xi _i}}},&{k - p + 1 \le i \le k},\\ 0,&{i \ge k + 1}, \end{array}} \right.
\end{equation} \end{linenomath*} in which ${\bf P}_i$ and ${\bar {\bf P}_i}$ are the original and new control points, respectively.\\ Then, the B\'ezier extraction operator can be determined by using the new set of knots $\left\{ {{{\bar \xi }_1},{{\bar \xi }_2},...,{{\bar \xi }_{n + 1}}} \right\}$ as follows \cite{borden2011isogeometric, do2017limit} \begin{linenomath*} \begin{equation} {\bf C}^j = \left[ {\begin{array}{*{20}{c}} {{\alpha _1}}&{1 - {\alpha _2}}&0& \ldots &{}&{}&0\\ 0&{{\alpha _2}}&{1 - {\alpha _3}}&0& \ldots &{}&0\\ 0&0&{{\alpha _3}}&{1 - {\alpha _4}}&0& \ldots &0\\ \vdots &{}&{}&{}&{}&{}&{}\\ 0& \ldots &{}&{}&0&{{\alpha _{n + j - 1}}}&{1 - {\alpha _{n + j}}} \end{array}} \right]. \end{equation} \end{linenomath*} \\ Applying the B\'ezier extraction operator ${\bf C}^j$, the new set of B\'ezier control points ${\bf P}^b$ associated with the Bernstein polynomial basis can be determined as follows \cite{thomas2015bezier} \begin{linenomath*} \begin{equation} {\bf P}^b = {\bf C}^T\bf P, \label{eqn:new_cont_point} \end{equation} \end{linenomath*} where the whole B\'ezier extraction operator $\bf C$ is defined as \begin{linenomath*} \begin{equation} {\bf C} = {\prod\nolimits_{j = 1}^n {{{\bf C}^j}}}. \end{equation} \end{linenomath*} It should be noted that the geometry does not change after new knots are inserted into the original knot vector.
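The knot-insertion coefficients $\alpha_i$ and the control-point update above can be sketched as follows for scalar control points; the routine stores the paper's 1-indexed knots $\xi_i$ in a 0-indexed list and is an illustrative sketch only.

```python
def knot_insertion_alphas(xbar, k, p, knots):
    """Coefficients alpha_i for inserting xbar into the span [xi_k, xi_{k+1}).

    `knots` is 0-indexed, so the paper's xi_i is knots[i-1]; returns the
    n+1 alphas needed for the new control points (paper's i = 1, ..., n+1).
    """
    n = len(knots) - p - 1  # number of basis functions / control points
    alphas = []
    for i in range(1, n + 2):
        if i <= k - p:
            alphas.append(1.0)               # alpha_i = 1
        elif i <= k:                         # alpha_i = (xbar-xi_i)/(xi_{i+p}-xi_i)
            alphas.append((xbar - knots[i - 1])
                          / (knots[i + p - 1] - knots[i - 1]))
        else:
            alphas.append(0.0)               # alpha_i = 0
    return alphas

def insert_knot(xbar, k, p, knots, ctrl_pts):
    """New control points P_bar after one knot insertion (scalar points)."""
    alphas = knot_insertion_alphas(xbar, k, p, knots)
    new_pts = [ctrl_pts[0]]                  # P_bar_1 = P_1
    for i in range(2, len(ctrl_pts) + 1):    # paper's 1 < i < n+1
        a = alphas[i - 1]
        new_pts.append(a * ctrl_pts[i - 1] + (1.0 - a) * ctrl_pts[i - 2])
    new_pts.append(ctrl_pts[-1])             # P_bar_{n+1} = P_n
    return new_pts
```

For example, inserting $\bar\xi=0.5$ into the quadratic knot vector $\{0,0,0,1,1,1\}$ (span index $k=3$) gives $\alpha = (1, 0.5, 0.5, 0)$, and the new control polygon is $(P_1, \tfrac{P_1+P_2}{2}, \tfrac{P_2+P_3}{2}, P_3)$, leaving the curve itself unchanged.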
As a result, the B-spline surface can also be expressed in terms of Bernstein polynomials and B\'ezier control points as \begin{linenomath*} \begin{equation} {\bf S}\left( {\xi ,\eta } \right) = \sum\limits_{i=1}^n {\sum\limits_{j=1}^m {{B_{i,j}}\left( {\xi ,\eta } \right){\bf P}_{i,j}^b} } = {\left( {{\bf P}^b} \right)^T}{\bf B}\left( {\xi ,\eta } \right), \label{eqn:Bezier_sur} \end{equation} \end{linenomath*} in which the 2D Bernstein polynomials ${\bf B}\left( {\xi ,\eta } \right)$ in terms of the parametric coordinates $\xi$ and $\eta$ are defined recursively as \begin{linenomath*} \begin{equation} \begin{array}{l} {B_{i,j,p}}\left( {\xi ,\eta } \right) = \frac{1}{4}\left( {1 - \xi } \right)\left( {1 + \eta } \right){B_{i,j - 1,p - 1}}\left( {\xi ,\eta } \right) + \frac{1}{4}\left( {1 - \xi } \right)\left( {1 - \eta } \right){B_{i,j,p - 1}}\left( {\xi ,\eta } \right) + \\ \frac{1}{4}\left( {1 + \xi } \right)\left( {1 - \eta } \right){B_{i - 1,j,p - 1}}\left( {\xi ,\eta } \right) + \frac{1}{4}\left( {1 + \xi } \right)\left( {1 + \eta } \right){B_{i - 1,j - 1,p - 1}}\left( {\xi ,\eta } \right), \end{array} \end{equation} \end{linenomath*} where \begin{linenomath*} \begin{equation} {B_{1,1,0}}\left( {\xi ,\eta } \right) = 1,\;{\rm{ }}{B_{i,j,p}}\left( {\xi ,\eta } \right) = 0\; \left( {i,j < 1\; \textrm{ or }\; i{\rm{, }}j{\rm{ > {p+1}}}} \right). \end{equation} \end{linenomath*} Comparing Eq. \ref{eqn:B-spli_sur} with Eq. \ref{eqn:Bezier_sur} yields the following relation \begin{linenomath*} \begin{equation} {\left( {{\bf P}^b} \right)^T}{\bf B}\left( {\xi ,\eta } \right) = {\bf P}^T{\bf N}\left( {\xi ,\eta } \right). \label{eqn:relation_2B} \end{equation} \end{linenomath*} According to Eq. \ref{eqn:new_cont_point} for the 2D case, the B-spline basis functions in Eq.
\ref{eqn:relation_2B} can be rewritten based on Bernstein polynomials as follows \begin{linenomath*} \begin{equation} {\bf N}\left( {\xi ,\eta } \right) = {\bf CB}\left( {\xi ,\eta } \right). \label{eqn:new_Nurbs} \end{equation} \end{linenomath*} Based on Eq. \ref{eqn:new_Nurbs}, the NURBS basis functions can be presented by Bernstein polynomials as follows \begin{linenomath*} \begin{equation} {\bf R}\left( {\xi ,\eta } \right) = \frac{\bf W}{{W\left( {\xi ,\eta } \right)}}{\bf N}\left( {\xi ,\eta } \right) = \frac{\bf W}{{W\left( {\xi ,\eta } \right)}}{\bf CB}\left( {\xi ,\eta } \right), \label{eqn: NURBS_basic_func} \end{equation} \end{linenomath*} where $\bf{W}$ denotes the diagonal matrix of the local NURBS weights. Meanwhile, the weight function $W\left( {\xi ,\eta } \right)$ is expressed with the Bernstein basis functions as follows \begin{linenomath*} \begin{equation} W\left( {\xi ,\eta } \right) = {\left( {{\bf C}^T{\bf w}} \right)^T}{\bf B}\left( {\xi ,\eta } \right) = {\left( {{\bf w}^b} \right)^T}{\bf B}\left( {\xi ,\eta } \right), \end{equation} \end{linenomath*} where $\bf{w}$ and $\bf{w}^b$ are the weights for the NURBS and B\'ezier bases, respectively. The relation between the B\'ezier control points and the NURBS ones is described by \begin{linenomath*} \begin{equation} {\bf P}^b = {\left( {{\bf W}^b} \right)^{ - 1}}{\bf C}^T{\bf {WP}}. \end{equation} \end{linenomath*} \setcounter{secnumdepth}{4} \paragraph{B\'ezier extraction of NURBS for FG porous plate formulations} Based on the B\'ezier extraction of NURBS, the mechanical displacement field ${\bf u}\left( {\xi ,\eta } \right)$ of the FG porous plate can be approximated as follows \begin{linenomath*} \begin{equation} {\bf{u}}\left( {\xi ,\eta } \right) = \sum\limits_{A = 1}^{m \times n} {R_A^e\left( {\xi ,\eta } \right)} {\bf d}_A, \label{eqn:appx_mech_disp} \end{equation} \end{linenomath*} in which $m\times n$ represents the number of basis functions.
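To make the construction concrete, the following sketch evaluates the recursive 2D Bernstein basis and the rational basis ${\bf R} = \frac{{\bf W}}{W(\xi,\eta)}{\bf C}{\bf B}$; the identity extraction operator and unit weights are illustrative assumptions (an element already in Bézier form):

```python
import numpy as np

def bernstein_2d(p, xi, eta):
    """2D Bernstein polynomials B_{i,j,p} on [-1,1]^2 via the recursion above.
    Returns a (p+1, p+1) array; the extra zero row/column absorbs indices < 1."""
    B = np.zeros((p + 2, p + 2))
    B[1, 1] = 1.0                  # B_{1,1,0} = 1 (1-based indices)
    for q in range(1, p + 1):
        Bn = np.zeros_like(B)
        for i in range(1, q + 2):
            for j in range(1, q + 2):
                Bn[i, j] = 0.25 * ((1 - xi) * (1 + eta) * B[i, j - 1]
                                 + (1 - xi) * (1 - eta) * B[i, j]
                                 + (1 + xi) * (1 - eta) * B[i - 1, j]
                                 + (1 + xi) * (1 + eta) * B[i - 1, j - 1])
        B = Bn
    return B[1:, 1:]

def nurbs_from_bernstein(C, w, B):
    """Rational basis values R = W C B / W(xi, eta), with W(xi, eta) = (C^T w)^T B."""
    N = C @ B                       # B-spline values from the extraction operator
    return w * N / (w @ N)

p = 2
B = bernstein_2d(p, 0.3, -0.7).ravel()   # Bernstein values at one parametric point
C = np.eye((p + 1) ** 2)                 # hypothetical: identity extraction
w = np.ones((p + 1) ** 2)                # unit weights reduce NURBS to B-splines
R = nurbs_from_bernstein(C, w, B)
```

Both the Bernstein values and the rational basis sum to one at every parametric point, which is a quick consistency check on an implementation.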
Meanwhile, ${R}_A^e\left( {\xi ,\eta } \right)$ denotes a NURBS basis function which is presented in Eq. \ref{eqn: NURBS_basic_func}; ${\bf d}_A = {\left\{ {\begin{array}{*{20}{c}}{{u_{0A}}}&{{v_{0A}}}&{{w_{0A}}}&{{\beta _{xA}}}&{{\beta _{yA}}}&{{\theta _{xA}}}&{{\theta _{yA}}} \end{array}} \right\}^T}$ is the vector of the nodal degrees of freedom associated with the control point A.\\ By substituting Eq. \ref{eqn:appx_mech_disp} into Eq. \ref{eqn:extract_disp}, the in-plane and shear strains can be expressed as \begin{linenomath*} \begin{equation} {\left[ {{\boldsymbol{\varepsilon }},{\boldsymbol{\gamma }}} \right]^T} = \sum\limits_{A = 1}^{m \times n} {\left( {{\bf{B}}_A^L + \frac{1}{2}{\bf{B}}_A^{NL}} \right)} {{\bf{d}}_A}, \end{equation} \end{linenomath*} in which ${\bf{B}}_A^L = {\left[ {\begin{array}{*{20}{c}} {{\bf{B}}_A^1}&{{\bf{B}}_A^2}&{{\bf{B}}_A^3}&{{\bf{B}}_A^{s1}}&{{\bf{B}}_A^{s2}} \end{array}} \right]^T}$, where \begin{linenomath*} \begin{equation} \begin{array}{l} {\bf B}^1 = {\left[ {\begin{array}{*{20}{c}} {{R_{A,x}}}&0&0&0&0&0&0\\ 0&{{R_{A,y}}}&0&0&0&0&0\\ {{R_{A,y}}}&{{R_{A,x}}}&0&0&0&0&0 \end{array}} \right]},\;{\bf B}^2 = - {\left[ {\begin{array}{*{20}{c}} 0&0&0&{{R_{A,x}}}&0&0&0\\ 0&0&0&0&{{R_{A,y}}}&0&0\\ 0&0&0&{{R_{A,y}}}&{{R_{A,x}}}&0&0 \end{array}} \right]},\\ \\ {\bf B}^3 = {\left[ {\begin{array}{*{20}{c}} 0&0&0&0&0&{{R_{A,x}}}&0\\ 0&0&0&0&0&0&{{R_{A,y}}}\\ 0&0&0&0&0&{{R_{A,y}}}&{{R_{A,x}}} \end{array}} \right]},\\ \\ {\bf B}^{s1} = {\left[ {\begin{array}{*{20}{c}} 0&0&{{R_{A,x}}}&{ - {R_A}}&0&0&0\\ 0&0&{{R_{A,y}}}&0&{ - {R_A}}&0&0 \end{array}} \right]},\;{\bf B}^{s2} = {\left[ {\begin{array}{*{20}{c}} 0&0&0&0&0&{{R_A}}&0\\ 0&0&0&0&0&0&{{R_A}}\end{array}} \right]}, \end{array} \end{equation} \end{linenomath*} and ${\bf{B}}_A^{NL} = {\boldsymbol\Theta} {\boldsymbol\Lambda}$, in which \begin{linenomath*} \begin{equation} {\boldsymbol\Theta} = {\left[ {\begin{array}{*{20}{c}} w_{A,x}&0\\ 0&w_{A,y}\\ w_{A,y}&w_{A,x}\\ \end{array}} 
\right]},\;{\boldsymbol\Lambda}= {\left[ {\begin{array}{*{20}{c}} 0&0&R_{A,x}&0&0&0&0\\ 0&0&R_{A,y}&0&0&0&0 \end{array}} \right]}. \end{equation} \end{linenomath*} \subsection{Approximation of the electric potential field} By discretizing the piezoelectric layer into a finite number of sublayers along the thickness, the electric potential field on each layer is then approximated. Accordingly, in each sublayer, the electric potential variation is considered to be linear and is approximated through the thickness as follows \cite{wang2004finite} \begin{linenomath*} \begin{equation} {\phi ^i}(z) = {\bf R}_\phi ^i{\boldsymbol\phi} ^i, \end{equation} \end{linenomath*} where ${\bf R}_\phi ^i$ is the shape function of the electric potential, which is determined from Eq. \ref{eqn: NURBS_basic_func} with $p = 1$. Meanwhile, ${\boldsymbol\phi} ^i = \left[ {\begin{array}{*{20}{c}} {{\phi ^{i - 1}}},&{{\phi ^i}} \end{array}} \right]$ with $(i = 1,2,...,{n_{sub}})$ denotes the electric potentials at the two surfaces of the $i$-th sublayer, where $n_{sub}$ represents the number of piezoelectric sublayers.\\ In each sublayer element, the values of the electric potential are assumed to be equal at the same height along the $z$-direction \cite{wang2001vibration}. Therefore, the electric field $\bf E$ in each sublayer element, which is presented in Eq. \ref{eqn: elec_potent}, can be expressed as follows \begin{linenomath*} \begin{equation} {\bf{E}} = - \nabla {\bf R}_\phi ^i{\boldsymbol\phi} ^i = - {\bf{B}}_\phi ^{}{\boldsymbol\phi} ^i, \label{eqn:approx_elec} \end{equation} \end{linenomath*}\\ where \begin{linenomath*} \begin{equation} {\bf{B}}_\phi ^{} = \left\{ {\begin{array}{*{20}{c}} 0&0&{\frac{1}{{{h_p}}}} \end{array}} \right\}^T.
\end{equation} \end{linenomath*} \\ Finally, the stress piezoelectric constant matrix $\bf e$, the strain piezoelectric constant matrix $\bf k$ and the dielectric constant matrix $\bf g$ can be determined by \cite{wang2004finite} \begin{linenomath*} \begin{equation} {\bf{e}} = \left[ {\begin{array}{*{20}{c}} 0&0&0&0&{{e_{15}}}\\ 0&0&0&{{e_{15}}}&0\\ {{e_{31}}}&{{e_{32}}}&{{e_{33}}}&0&0 \end{array}} \right],\;{\bf k} = \left[ {\begin{array}{*{20}{c}} 0&0&0&0&{{k_{15}}}\\ 0&0&0&{{k_{15}}}&0\\ {{k_{31}}}&{{k_{32}}}&{{k_{33}}}&0&0 \end{array}} \right],\;{\bf{g}} = \left[ {\begin{array}{*{20}{c}} {{p_{11}}}&0&0\\ 0&{{p_{22}}}&0\\ 0&0&{{p_{33}}} \end{array}} \right]. \end{equation} \end{linenomath*} \subsection{Governing equation of motion} By substituting Eqs. \ref{eqn:appx_mech_disp} and \ref{eqn:approx_elec} into Eq. \ref{eqn:varia_form}, the final form of the elementary governing equation can be obtained and expressed as follows \cite{wang2004finite} \begin{linenomath*} \begin{equation} \left[ {\begin{array}{*{20}{c}} {{{\bf{M}}_{uu}}}&{\bf 0}\\ {\bf 0}&{\bf 0} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{\ddot{\bf d}}}\\ {\ddot{\boldsymbol\phi}} \end{array}} \right] + \left[ {\begin{array}{*{20}{c}} {{{\bf{K}}_{uu}}}&{{{\bf{K}}_{u\phi }}}\\ {{{\bf{K}}_{\phi u}}}&{{{-\bf{K}}_{\phi \phi }}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {\bf{d}}\\ {\boldsymbol\phi} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\bf{f}}\\ {\bf{Q}} \end{array}} \right], \label{eqn:General_FEM} \end{equation} \end{linenomath*} where \begin{linenomath*} \begin{equation} \begin{array}{l} {{\bf{K}}_{uu}} = \int_\Omega {({{\bf B}^L+{\bf B}^{NL})}^T{\bf c}({{\bf B}^L+\frac{1}{2}{\bf B}^{NL})}d\Omega },\;{{\bf{K}}_{\phi \phi }} = \int_\Omega {{\bf B}_\phi ^T{\bf g}{{\bf B}_\phi }d\Omega },\\ \\ {{\bf{K}}_{u\phi }} = \int_\Omega {{({\bf B}^L)}^T{{\bf \tilde{e}}^T}{{\bf B}_\phi }d\Omega
},\;{{\bf{K}}_{\phi u}} = {\bf{K}}_{u\phi }^T,\; {\bf{M}_{uu}}{\rm{ = }}\int_\Omega {{{{\bf{\tilde N}}}^T}{\bf m}{\bf{\tilde N}}d\Omega },\;{\bf{f}} = \int_\Omega {{{\bar q}_0}{\bf{\bar N}}{\rm{d}}\Omega }, \end{array} \label{eqn:General_FEM1} \end{equation} \end{linenomath*} in which \begin{linenomath*} \begin{equation} \begin{array}{l} {\bf \tilde {e}} = [\begin{array}{*{20}{c}} {{\bf e}_m^T}&{z{\bf e}_m^T}&{f\left( z \right){\bf e}_m^T}&{{\bf e}_s^T}&{f'\left( z \right){\bf e}_s^T} \end{array}],\\ \\ {\bf{\tilde N}} = [\begin{array}{*{20}{c}}{{\bf N}^0}&{{\bf N}^1}&{{\bf N}^2}\end{array}]^T,\;\bar{\bf N} = \left[ {\begin{array}{*{20}{c}} 0&0&{{R_A}}&0&0&0&0 \end{array}} \right], \end{array} \end{equation} \end{linenomath*} where \begin{linenomath*} \begin{equation} \begin{array}{l} {\bf e}_m = \left[ {\begin{array}{*{20}{c}} 0&0&0\\ 0&0&0\\ {{e_{31}}}&{{e_{32}}}&{{e_{33}}} \end{array}} \right],{\bf e}_s = \left[ {\begin{array}{*{20}{c}} 0&{{e_{15}}}\\ {{e_{15}}}&0\\ 0&0 \end{array}} \right], {\bf N}^0 = \left[ {\begin{array}{*{20}{c}} {{R_A}}&0&0&0&0&0&0\\ 0&{{R_A}}&0&0&0&0&0\\ 0&0&{{R_A}}&0&0&0&0 \end{array}} \right],\\ \\{\bf N}^1 = - \left[ {\begin{array}{*{20}{c}} 0&0&0&{{R_A}}&0&0&0\\ 0&0&0&0&{{R_A}}&0&0\\ 0&0&0&0&0&0&0 \end{array}} \right],\;{\rm{ }} {\bf N}^2 = \left[ {\begin{array}{*{20}{c}} 0&0&0&0&0&{{R_A}}&0\\ 0&0&0&0&0&0&{{R_A}}\\ 0&0&0&0&0&0&0 \end{array}} \right] \end{array} \end{equation} \end{linenomath*} and \begin{linenomath*} \begin{equation} {\bf m} = \left[ {\begin{array}{*{20}{c}} {{I_1}}&{{I_2}}&{{I_4}}\\ {{I_2}}&{{I_3}}&{{I_5}}\\ {{I_4}}&{{I_5}}&{{I_6}} \end{array}} \right] \end{equation} \end{linenomath*} in which the mass inertia terms ${I}_i$ with $(i=1:6)$ are given as \begin{linenomath*} \begin{equation} \left( {I}_1,{I}_2,{ I}_3,{I}_4,{I}_5,{I}_6 \right) = \int_{ - h/2}^{h/2}{\rho (z)\left( {1,z,{z^{\rm{2}}},f(z),zf(z),{f^2}(z)} \right)}{\rm{d}}z. 
\end{equation} \end{linenomath*} \\ Since the electric field $\bf E$ exists only along the $z$-direction, ${\bf K}_{u\phi}$ in Eq. \ref{eqn:General_FEM1} can be rewritten as \begin{linenomath*} \begin{equation} {{\bf{K}}_{u\phi }} = \int_\Omega {\left( {{{\left( {{\bf B}^1} \right)}^T}{\bf e}_m^T{\bf B}_\phi + z{{\left( {{\bf B}^2} \right)}^T}{\bf e}_m^T{\bf B}_\phi + f(z){{\left( {{\bf B}^3} \right)}^T}{\bf e}_m^T{\bf B}_\phi} \right)} d\Omega. \end{equation} \end{linenomath*} Now, substituting the second equation into the first one of Eq. \ref{eqn:General_FEM}, one obtains \begin{linenomath*} \begin{equation} {{\bf{M}}_{uu}}{\bf{\ddot d}} + \left( {{{\bf{K}}_{uu}}+ {{\bf{K}}_{u\phi }}{\bf{K}}_{\phi \phi }^{ - 1}{{\bf{K}}_{\phi u}}} \right){\bf{d}} = {\bf{F}} + {{\bf{K}}_{u\phi }}{\bf{K}}_{\phi \phi }^{ - 1}{\bf{Q}}. \label{eqn:Total_Eqn} \end{equation} \end{linenomath*} \section{Active control analysis} \label{sec:Active_Control} In this section, a piezoelectric FG porous plate, as depicted in Fig. \ref{fig:Plate_Control}, is considered for the active control of the static and dynamic responses of the FG plates. Whereas the bottom layer is a piezoelectric sensor labeled with the subscript $s$, the top layer represents a piezoelectric actuator denoted with the subscript $a$. The combination of the displacement feedback control \cite{wang2001vibration}, which allows the piezoelectric actuator to generate the charge, and the velocity feedback control \cite{hwang1993finite,lam1997finite,liu2004static}, which provides a velocity component based on an appropriate electronic circuit, is utilized in this study. Furthermore, a consistent method \cite{liu2004static,hong1999modeling} which can predict the dynamic responses of the piezoelectric FG plate is also applied.
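The condensation of the electric potential that leads to Eq. \ref{eqn:Total_Eqn} can be sketched numerically; the dense NumPy sub-matrices below are hypothetical placeholders for the assembled blocks, and ${\bf K}_{\phi u} = {\bf K}_{u\phi}^T$ is used as stated in the text:

```python
import numpy as np

def condense_electric_dofs(K_uu, K_uphi, K_phiphi, f, Q):
    """Eliminate phi = K_phiphi^{-1} (K_phiu d - Q) from the coupled system,
    giving the effective stiffness K_uu + K_uphi K_phiphi^{-1} K_phiu and
    the effective load f + K_uphi K_phiphi^{-1} Q."""
    K_phiu = K_uphi.T
    K_eff = K_uu + K_uphi @ np.linalg.solve(K_phiphi, K_phiu)
    F_eff = f + K_uphi @ np.linalg.solve(K_phiphi, Q)
    return K_eff, F_eff

# tiny illustrative blocks (2 mechanical DOFs, 1 electric DOF)
K_uu = np.array([[4.0, 1.0], [1.0, 3.0]])
K_uphi = np.array([[1.0], [0.5]])
K_phiphi = np.array([[2.0]])
K_eff, F_eff = condense_electric_dofs(K_uu, K_uphi, K_phiphi, np.zeros(2), np.zeros(1))
```

Since the correction term is of the form ${\bf A}{\bf K}_{\phi\phi}^{-1}{\bf A}^T$, the condensed stiffness stays symmetric whenever ${\bf K}_{uu}$ and ${\bf K}_{\phi\phi}$ are.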
Two constant gains $G_d$ and $G_v$ of the displacement and velocity feedback control, respectively, are adopted in order to couple the input actuator voltage vector ${\boldsymbol\phi}_a$ and the output sensor voltage vector ${\boldsymbol\phi}_s$ as follows \cite{liu2004static} \begin{linenomath*} \begin{equation} {\boldsymbol\phi}_a=G_d{\boldsymbol\phi}_s+G_v{\dot{\boldsymbol\phi}}_s. \label{eqn:Phi_a} \end{equation} \end{linenomath*} \\ Assuming that there is no external charge $\bf Q$, the generated potential from the piezoelectric sensor layer can be obtained from the second equation of Eq. \ref{eqn:General_FEM} \begin{linenomath*} \begin{equation} {\boldsymbol\phi} _s = {\left[ {{\bf{K}}_{\phi \phi }^{ - 1}} \right]_s}{\left[ {{\bf{K}}_{\phi u}^{}} \right]_s}{{\bf{d}}_s}, \label{eqn:Phi_s} \end{equation} \end{linenomath*} while the sensor charge resulting from the deformation is determined by \begin{linenomath*} \begin{equation} {{\bf{Q}}_s} = {\left[ {{\bf{K}}_{\phi u}} \right]_s}{{\bf{d}}_s}. \end{equation} \end{linenomath*} This means that, when the FG plate structure deforms, electric charges are generated and gathered in the sensor layer because of the piezoelectric effect. These electric charges are then amplified through a closed-loop control and converted into a voltage signal before being applied to the actuator layer. Due to the converse piezoelectric effect, strains and stresses arise in the structure, which can be exploited to actively control the dynamic response of the FG porous plate.\\ By substituting Eqs. \ref{eqn:Phi_a} and \ref{eqn:Phi_s} into the second equation in Eq.
\ref{eqn:General_FEM}, one obtains \begin{linenomath*} \begin{equation} {{\bf{Q}}_a} = {\left[ {{\bf{K}}_{\phi u}^{}} \right]_a}{{\bf{d}}_a} - {G_d}{\left[ {{\bf{K}}_{\phi \phi }^{}} \right]_a}{\left[ {{\bf{K}}_{\phi \phi }^{ - 1}} \right]_s}{\left[ {{\bf{K}}_{\phi u}^{}} \right]_s}{{\bf{d}}_s} - {G_v}{\left[ {{\bf{K}}_{\phi \phi }^{}} \right]_a}{\left[ {{\bf{K}}_{\phi \phi }^{ - 1}} \right]_s}{\left[ {{\bf{K}}_{\phi u}^{}} \right]_s}{{\bf{\dot d}}_s}. \label{eqn:Qa} \end{equation} \end{linenomath*} Then, substituting Eq. \ref{eqn:Qa} into Eq. \ref{eqn:Total_Eqn} yields \begin{linenomath*} \begin{equation} {\bf{M\ddot d}} + {\bf{C\dot d}} + {{\bf{K}}^*}{\bf{d}} = {\bf{F}}, \label{eqn:Total_FEMwithC} \end{equation} \end{linenomath*} in which \begin{linenomath*} \begin{equation} {{\bf{K}}^*} = {\bf{K}}_{uu}^{} + {G_d}{\left[ {{\bf{K}}_{u\phi }^{}} \right]_a}{\left[ {{\bf{K}}_{\phi \phi }^{ - 1}} \right]_s}{\left[ {{\bf{K}}_{\phi u}^{}} \right]_s}, \end{equation} \end{linenomath*} and $\bf C$ is the active damping matrix which is expressed as \begin{linenomath*} \begin{equation} {\bf{C}} = {G_v}{\left[ {{\bf{K}}_{u\phi }} \right]_a}{\left[ {{\bf{K}}_{\phi \phi }^{ - 1}} \right]_s}{\left[ {{\bf{K}}_{\phi u}^{}} \right]_s}. \end{equation} \end{linenomath*} Considering the effect of the structural damping, Eq. \ref{eqn:Total_FEMwithC} can be rewritten as \begin{linenomath*} \begin{equation} {\bf{M\ddot d}} + {({\bf C}+{\bf C}_R)\dot {\bf d}} + {{\bf{K}}^*}{\bf{d}} = {\bf{F}}, \end{equation} \end{linenomath*} in which ${\bf C}_R$ denotes the Rayleigh damping matrix, defined as a linear combination of $\bf M$ and ${\bf K}_{uu}$ as follows \begin{linenomath*} \begin{equation} {\bf C}_R=\alpha_R{\bf M}+\beta_R{\bf K}_{uu}, \end{equation} \end{linenomath*} where $\alpha_R$ and $\beta_R$ are Rayleigh damping coefficients that can be determined from experiments.
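A dense-matrix sketch of the closed-loop quantities ${\bf K}^*$, ${\bf C}$ and ${\bf C}_R$ defined above, together with the usual two-frequency fit for the Rayleigh coefficients (the sub-matrix values are hypothetical; `_a` and `_s` mark actuator and sensor blocks as in the text):

```python
import numpy as np

def closed_loop_matrices(K_uu, M, K_uphi_a, K_phiphi_s, K_phiu_s,
                         Gd, Gv, alpha_R=0.0, beta_R=0.0):
    """K* = K_uu + Gd [K_uphi]_a [K_phiphi^-1]_s [K_phiu]_s,
    C = Gv [K_uphi]_a [K_phiphi^-1]_s [K_phiu]_s, C_R = alpha_R M + beta_R K_uu."""
    S = K_uphi_a @ np.linalg.solve(K_phiphi_s, K_phiu_s)
    return K_uu + Gd * S, Gv * S, alpha_R * M + beta_R * K_uu

def rayleigh_coefficients(omega_1, omega_2, zeta_1, zeta_2):
    """Fit alpha_R, beta_R to target modal damping ratios at two reference
    frequencies: zeta_i = alpha_R / (2 omega_i) + beta_R omega_i / 2."""
    A = 0.5 * np.array([[1.0 / omega_1, omega_1],
                        [1.0 / omega_2, omega_2]])
    return np.linalg.solve(A, np.array([zeta_1, zeta_2]))
```

Larger gains $G_d$ and $G_v$ stiffen and damp the closed-loop system, which is exactly the trend observed in the numerical examples below.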
In this study, the procedure used to determine the Rayleigh damping coefficients follows \cite{chowdhury2003computation}.\\ \section{Numerical examples} \label{sec:num_exam} In this study, the Newton-Raphson iterative procedure \cite{reddy2014introduction} is employed to obtain the solutions of the nonlinear problems. Accordingly, within each time step, iterations starting from the solution of the previous step are repeated until the solutions converge. For the geometrically nonlinear dynamic analysis of the FG plate under various dynamic loadings, in which the governing equations depend on both time and the unknown displacement vector, Newmark's integration scheme \cite{newmark1959method} is adopted. In all numerical examples, the PZT-G1195N piezoelectric material is employed, perfectly bonded to the top and bottom surfaces of the FG plate structure, and the adhesive layers are ignored. \subsection{Validation analysis} In this section, various numerical studies regarding the geometrically nonlinear static and dynamic analyses of isotropic as well as piezoelectric FG square plates are carried out in order to demonstrate the accuracy and stability of the present approach. Firstly, a fully clamped (CCCC) isotropic square plate is considered to show the validity of the present formulation for the geometrically nonlinear analysis. The plate is subjected to a uniformly distributed load while the width-to-thickness ratio $(a/h)$ is taken equal to $100$. The material properties of the plate are $E=3\times10^7$ psi and $\nu=0.316$. In this example, the normalized central deflection and load parameter are defined as $\overline{w}=w/h$ and $P=q_0a^4/(Eh^2)$, respectively.
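The time-stepping procedure described above can be sketched for the linear case (Newmark average acceleration, $\beta = 1/4$, $\gamma = 1/2$); in the nonlinear problems each step additionally wraps the solve in Newton-Raphson iterations on the residual, which is omitted here for brevity:

```python
import numpy as np

def newmark_linear(M, C, K, F, d0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark stepping for M d'' + C d' + K d = F(t); returns the displacement
    history. Linear sketch only (no Newton-Raphson corrector)."""
    d, v = d0.astype(float).copy(), v0.astype(float).copy()
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ d)          # initial acceleration
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    hist = [d.copy()]
    for n in range(1, nsteps + 1):
        t = n * dt
        rhs = (F(t)
               + M @ (d / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
               + C @ (gamma * d / (beta * dt) + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        d_new = np.linalg.solve(K_eff, rhs)
        a_new = (d_new - d) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        d, a = d_new, a_new
        hist.append(d.copy())
    return np.array(hist)
```

With the default parameters the scheme is unconditionally stable and second-order accurate; an undamped oscillator released from rest returns to its initial displacement after one period, which makes a convenient self-check.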
Table \ref{tab:ISO_CCCC} presents the normalized central deflections of the isotropic square plate, which are compared with those of Levy's analytical solution \cite{levy1942square}, Urthaler and Reddy's mixed FEM using FSDT \cite{urthaler2008mixed} and Nguyen et al.'s IGA based on the refined plate theory (RPT) \cite{nguyen2017geometrically}. It can be observed that the proposed results are in good agreement with the existing analytical solution as well as with other approximate results.\\ Next, in order to verify the accuracy of the proposed approach for the geometrically nonlinear transient analysis, a fully simply supported (SSSS) orthotropic square plate subjected to a uniform load with $q_0=1.0$ MPa is considered in this example. The material properties and the geometry of the plate are as follows: Young's moduli $E_1=525$ GPa, $E_2=21$ GPa, shear moduli $G_{12}=G_{23}= G_{13}=10.5$ GPa, Poisson's ratio $\nu=0.25$, mass density $\rho=800$ kg/m$^3$, length of the plate $L=0.25$ m and thickness $ h=5$ mm. Fig. \ref{fig:Nonlinear_Iso} depicts the geometrically nonlinear transient response of the square plate subjected to the uniform load. It can be seen that the present results are in excellent agreement with those obtained from the finite strip method reported by Chen et al. \cite{chen2000nonlinear}.\\ Last but not least, a cantilever piezoelectric FG square plate is examined in detail to demonstrate the accuracy and validity of the present method for the static analysis of FG plates integrated with piezoelectric layers. The FG plate, which is bonded by two piezoelectric layers on both the upper and the lower surfaces, is made of aluminum oxide and Ti-6A1-4V materials whose properties are given in Table \ref{tab:Table_Material}. In this study, the rule of mixtures \cite{nakamura2000determination} is utilized to describe the distribution of the ceramic and metal phases in the core layer.
The plate has a side length $a=b=0.4$ m while the thicknesses of the FG core layer and each piezoelectric layer are $h_c=5$ mm and $h_p=0.1$ mm, respectively. The cantilever piezoelectric FG plate is simultaneously subjected to a uniformly distributed load with $q_0$=100 N/m$^2$ and various input voltage values. The centerline linear deflections of the piezoelectric FG square plate are plotted in Fig. \ref{fig:Compare_CSDSG3} while the tip node deflections are also listed in Table \ref{tab:Deflec_Tip} for various material indices $n$. The results generated by the proposed method are compared with those reported in \cite{nguyen2017analysis} using a cell-based smoothed discrete shear gap method (CS-DSG3) based on FSDT. It can be observed that the results obtained by the present formulation generally agree well with the reference solutions. \\ In the next part, investigations into the geometrically nonlinear static and dynamic responses of the piezoelectric FG porous square plate reinforced by GPLs will be presented. \subsection{Geometrically nonlinear static analysis} \label{sec:Geo_Static} Firstly, the geometrically nonlinear static analysis of a piezoelectric FG plate subjected to a uniform load with load parameter $\overline{q} = q_0\times 10^3$ is addressed. A SSSS piezoelectric FG square plate made of aluminum oxide and Ti-6A1-4V has a side length $a =b= 0.2$ m, FG core layer thickness $h_c= 2$ mm and piezoelectric layer thickness $h_p = 0.1$ mm. Fig. \ref{fig:Nonlinear_Deflec} illustrates the influence of the material index $n$ on the normalized linear and nonlinear central deflections of the piezoelectric FG plates under mechanical load. As can be seen, increasing the material index $n$ gradually decreases the deflection of the piezoelectric FG plate. The largest deflection is obtained for $n=0$, where the plate consists only of Ti-6A1-4V, which lowers the bending stiffness.
Furthermore, the central deflections of the geometrically nonlinear analysis are always smaller than those of the linear one, and this difference reduces with the increase of the material index.\\ Next, a SSSS piezoelectric FG plate with a porous core layer, constituted by combining the two porosity distribution types and the three GPL dispersion patterns, is considered in this example. The piezoelectric FG plate is subjected to a sinusoidally distributed load defined as $q=q_0\sin(\pi x/a)\sin(\pi y/b)$ in which $q_0=1.0$ MPa. The plate has a side length $a =b= 0.4$ m, FG porous core layer thickness $h_c= 20$ mm and piezoelectric layer thickness $h_p = 1$ mm. In this study, copper is chosen as the metal matrix, whose material properties are given in Table \ref{tab:Table_Material}, while the dimensions of GPLs are ${l_{GPL}} = 2.5\;\mu m$, ${w_{GPL}} = 1.5\;\mu m$, ${t_{GPL}} = 1.5\;nm$. Fig. \ref{fig:Compare_P12} examines the influence of the porosity coefficients on the nonlinear deflection of the piezoelectric FG porous plate with GPL dispersion pattern $A$ $(\Lambda_{GPL}=1.0\; wt.\;\%)$ for the two porosity distribution types, respectively. It can be observed that an increase of the porosity coefficient leads to an increase of the nonlinear deflection of the FG porous plate, since a higher density of internal pores reduces the stiffness of the plate structure. In addition, Fig. \ref{fig:Compare_PatternGPL} depicts the effect of the weight fraction and the GPL dispersion patterns on the nonlinear deflection of the piezoelectric FG porous plate with $e_0=0.2$ for the two porosity distribution types, respectively. It can be observed that the effective stiffness of the FG porous core layer is greatly strengthened when adding a small amount of GPLs ($\Lambda_{GPL}=1.0\;wt.\;\% )$ into the metal matrix, as evidenced by the decrease in the nonlinear deflection of the FG plate.
More importantly, the reinforcing effect of GPLs also depends significantly on the dispersion of GPLs in the material matrix. Accordingly, for the same weight fraction of GPLs, dispersion pattern $A$, where GPLs are dispersed symmetrically with respect to the midplane of the porous core layer, achieves the smallest nonlinear deflection while the asymmetric dispersion pattern $B$ provides the largest one. For further illustration, Fig. \ref{fig:Non_P1_G123} depicts the variation of the nonlinear deflection of the piezoelectric FG porous plate reinforced by GPLs, constituted by porosity distribution 1 and the three different GPL dispersion patterns, corresponding to a load parameter of 10.\\ The combined influences of the two porosity distribution types and the three GPL dispersion patterns on the nonlinear deflection of the piezoelectric FG porous plate with $\Lambda_{GPL}=1.0\; wt.\;\%$ and $e_0=0.4$ are also investigated. As evidently depicted in Fig. \ref{fig:Nonlinear_P_G}, among all the considered associations, the combination of porosity distribution 1 and GPL dispersion pattern $A$ obtains the best reinforcing performance in the geometrically nonlinear static analysis of the piezoelectric FG porous plate. This indicates that plate structures in which the internal pores are distributed around the midplane and GPLs are dispersed near the top and bottom surfaces can provide the optimum reinforcement. \\ \subsection{Geometrically nonlinear dynamic analysis} \label{sec:Transient} In this part, the geometrically nonlinear dynamic responses of a CCCC piezoelectric FG porous plate reinforced by GPLs are studied. The dimensions and the material properties of the FG plate are the same as in the previous example.
The plate is assumed to be subjected to time-dependent sinusoidally distributed transverse loads expressed as $q=q_0\sin(\pi x/a)\sin(\pi y/b)F(t)$, where $F(t)$ is defined as \begin{linenomath*} \begin{equation} {F}\left( t \right) = \left\{ {\begin{array}{ll} {\left\{ {\begin{array}{ll} 1, & 0 \le t \le {t_1}\\ 0, & t > {t_1} \end{array}} \right.} & {{\rm{Step\; load}}}\\ {\left\{ {\begin{array}{ll} {1 - t/{t_1}}, & 0 \le t \le {t_1}\\ 0, & t > {t_1} \end{array}} \right.} & {{\rm{Triangular \;load}}}\\ {\left\{ {\begin{array}{ll} {{\rm{sin}}\left( {\pi t/{t_1}} \right)}, & 0 \le t \le {t_1}\\ 0, & t > {t_1} \end{array}} \right.} & {{\rm{Sinusoidal\; load}}}\\ {{e^{ - \gamma t}},} & {{\rm{Explosive\; blast\; load}}} \end{array}} \right. \end{equation} \end{linenomath*} in which $q_0=100$ MPa, $\gamma=330\;{\rm s}^{-1}$ and the time history $F(t)$ is plotted in Fig. \ref{fig:Loading}.\\ Fig. \ref{fig:Transient_e} illustrates the influence of the porosity coefficient on the nonlinear transient response of the piezoelectric FG porous plate with porosity distribution 1 and dispersion pattern $A$ $(\Lambda_{GPL}=1.0\; wt.\;\%)$ under step and sinusoidal loads, respectively. It can be seen that increasing the porosity coefficient increases the amplitude of the transverse deflection of the FG porous plate, while the period of motion seems unaffected. It can be concluded that the presence of porosities in the core layer of the FG plate reduces its resistance to external excitation. Furthermore, Fig.
\ref{fig:Transient_GPL} demonstrates the influence of the weight fraction and the dispersion pattern of GPLs on the nonlinear transient response of the piezoelectric FG porous plate with $e_0=0.2$ and porosity distribution 2 corresponding to triangular and explosive blast loads, respectively. As expected, a smaller magnitude of deflection is obtained when the weight fraction of GPLs in the metal matrix increases. Again, the dispersion of GPLs in the metal matrix also affects the reinforcing performance of the structure, with dispersion pattern $A$ providing the smallest deflection magnitude.\\ Next, the combined influences of the porosity distribution types and the GPL dispersion patterns on the nonlinear dynamic response of the piezoelectric FG plate are also examined, as indicated in Fig. \ref{fig:Transient_P_GPL}. For this specific example, the porous core layer of the piezoelectric plate has porosity coefficient $e_0=0.4$ and GPL weight fraction $\Lambda_{GPL}=1.0\; wt.\;\%$. As clearly demonstrated in Fig. \ref{fig:Transient_P_GPL}, the combination of porosity distribution 1 and GPL dispersion pattern $A$ always provides the best reinforcement, as evidenced by the smallest amplitude of the deflection. Moreover, the linear and nonlinear dynamic responses of the FG porous plate with porosity distribution 2 $(e_0=0.3)$ and GPL dispersion pattern $C$ ($\Lambda_{GPL}=1.0\; wt.\;\%$) under triangular and sinusoidal loads are also considered and depicted in Fig. \ref{fig:Compare_LN}. As can be observed, the geometrically nonlinear responses generally exhibit smaller deflection magnitudes and shorter periods of motion. \\ \subsection{Active control of static and dynamic responses} In this section, the active control of the static and dynamic responses of the FG porous plate reinforced by GPLs using integrated sensors and actuators is investigated.
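For reference, the four load time factors $F(t)$ used in the transient examples above admit a direct sketch:

```python
import numpy as np

def load_factor(t, t1, kind, gamma=330.0):
    """Time factor F(t): step, triangular and sinusoidal loads act on [0, t1]
    and vanish afterwards; the explosive blast decays as exp(-gamma * t)."""
    if kind == "step":
        return 1.0 if 0.0 <= t <= t1 else 0.0
    if kind == "triangular":
        return 1.0 - t / t1 if 0.0 <= t <= t1 else 0.0
    if kind == "sinusoidal":
        return float(np.sin(np.pi * t / t1)) if 0.0 <= t <= t1 else 0.0
    if kind == "blast":
        return float(np.exp(-gamma * t))
    raise ValueError(f"unknown load kind: {kind}")
```

The full transverse load is then $q = q_0\sin(\pi x/a)\sin(\pi y/b)\,F(t)$, evaluated at each time step of the integration scheme.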
Firstly, the active control of the linear static responses of a SSSS FG plate subjected to a uniformly distributed load with $q_0$=100 N/m$^2$ is investigated to verify the accuracy of the proposed approach. The FG plate, composed of Ti-6A1-4V and aluminum oxide materials with material index $n=2$, has a side length $a=b= 0.2$ m while the thicknesses of the FG core layer and each piezoelectric layer are taken to be 1 mm and 0.1 mm, respectively. Fig. \ref{fig:Plot_Gd_compare} illustrates the linear static deflections of the FG plate for various displacement feedback control gains $G_d$. It can be observed that the present results agree well with the reference solution reported in \cite{nguyen2017analysis}, which employed the CS-DSG3 based on FSDT. As expected, when the displacement feedback control gain $G_d$ increases, the linear static deflection of the FG plate decreases. Furthermore, the active control of the linear dynamic responses of the FG plate is also investigated based on a constant velocity feedback control gain $G_v$ and a closed-loop control. In this specific example, the FG plate is initially subjected to a uniform load $q_0$=100 N/m$^2$ and then the load is suddenly removed. In this study, modal superposition is adopted in order to reduce the computational cost and the first six modes are considered in the modal space analysis, while the initial modal damping ratio for each mode is assumed to be $0.8\%$. Fig. \ref{fig:Plot_Lin_Tra_Gd} shows the linear dynamic responses of the central deflection of the FG plate. The results generated by the present method agree well with the reference solution \cite{nguyen2017analysis}.\\ Next, the active control of the nonlinear static responses of the SSSS FG porous plate reinforced with GPLs is further investigated in this part.
The FG plate combining porosity distribution 1 and GPL dispersion pattern $A$, which provides the best structural performance, is selected for this study. The material properties of the FG porous plate are the same as in Section \ref{sec:Geo_Static}. The plate has a side length $a =b= 0.4$ m, FG porous core layer thickness $h_c= 20$ mm and piezoelectric layer thickness $h_p = 1$ mm, and is under a sinusoidally distributed load defined as $q=q_0\sin(\pi x/a)\sin(\pi y/b)$ with $q_0=1.0$ MPa. Fig. \ref{fig:Plot_Non_Gd} depicts the nonlinear static deflection of the FG porous plate reinforced by GPLs with porosity coefficient $e_0=0.4$ and GPL weight fraction $\Lambda_{GPL}=1.0\; wt.\;\%$ for various displacement feedback control gains. It can be observed that the deflection of the FG porous plate decreases significantly as the displacement feedback control gain increases.\\ In the last example, the active control of the geometrically nonlinear dynamic responses of the CCCC FG porous plate reinforced by GPLs is conducted. The plate has both length and width equal to $0.2$ m, with core layer thickness $h_c= 10$ mm and piezoelectric layer thickness $h_p = 0.1$ mm. The FG plate with porosity distribution 1 $(e_0=0.4)$ and dispersion pattern $A$ $(\Lambda_{GPL}=1.0\; wt.\;\%)$ is subjected to sinusoidally distributed transverse loads which are the same as those in Section \ref{sec:Transient}. Fig. \ref{fig:Control_Transient_GPL} illustrates the nonlinear dynamic responses of the central deflection of the FG plate for various velocity feedback control gains $G_v$. It can be observed that, when the control gain $G_v$ is zero, corresponding to the uncontrolled case, the nonlinear dynamic response of the FG porous plate still attenuates with respect to time since the effect of the structural damping is considered in this study.
More importantly, the geometrically nonlinear dynamic response can be suppressed faster when higher velocity feedback control gain values are applied. As a result, depending on the specific case, the responses of the FG porous plate structures, including the deflection, the oscillation time, or both, can be controlled as desired by designing an appropriate value for the velocity feedback control gain. It should be noted that the feedback control gain values cannot be increased without limit since piezoelectric materials have their own breakdown voltage values. In addition, Fig. \ref{fig:Plot_LN_Compare} depicts the influence of the velocity feedback control gain $G_v$ on the linear and nonlinear responses of the CCCC FG porous square plate subjected to a step load. As expected, the geometrically nonlinear dynamic responses exhibit smaller deflection magnitudes and shorter periods of motion. \section{Conclusions} \label{sec:conclusion} In this study, the IGA based on the B\'ezier extraction and the $C^0$-HSDT was successfully presented for the geometrically nonlinear static and dynamic analyses of FG porous plates reinforced with GPLs and integrated with piezoelectric layers. The equations of motion were derived based on the $C^0$-HSDT in conjunction with the von K\'arm\'an strain assumptions. Whereas the mechanical displacement field was approximated using the $C^0$-HSDT based on the B\'ezier extraction of NURBS, the electric potential field was considered as a linear function through the thickness of each piezoelectric sublayer. Two porosity distributions and three dispersion patterns of GPLs with various related parameters were exhaustively investigated through numerical examples. The control algorithms based on the constant displacement and velocity feedbacks were utilized to control the geometrically nonlinear static and dynamic responses of the FG porous plate reinforced with GPLs.
Through the present numerical results, several major remarks can be drawn: \begin{itemize} \item By applying Bernstein polynomials as basis functions in the B\'ezier extraction, the IGA approach can easily be integrated into most existing FEM frameworks while its advantages are effectively maintained. \item After adding a small amount of GPLs into the metal matrix, the stiffness of the structures is significantly improved, while an increase of the porosity coefficient leads to a decrease of the reinforcing effect. Furthermore, the distribution of porosities and GPLs in the metal matrix also significantly affects the reinforcing performance of the structures. Among all the combinations, the association of porosity distribution type 1, with internal pores concentrated on the midplane, and GPL dispersion pattern $A$, where GPLs are dispersed around the top and bottom surfaces, yields the best reinforcing performance. \item For control of the geometrically nonlinear static responses of the FG porous plates, two effective algorithms were considered: input voltage control with opposite signs applied across the thickness of the two piezoelectric layers, and the displacement feedback control algorithm. In addition, the dynamic response of the FG porous plate can be effectively suppressed by the velocity feedback control algorithm. \item Finally, combining the advantages of porous architecture and GPL reinforcement in material matrices is a good choice for producing advanced ultra-light, high-strength structures in engineering. \end{itemize} \section*{Acknowledgement} The support provided by RISE-project BESTOFRAC (734370)–H2020 is gratefully acknowledged. \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} \label{intro} Clusters of galaxies represent the highest-density peaks of the matter distribution in the Universe. Forming at the intersection of cosmic filaments, they grow hierarchically through continuous accretion of material. Composed of dark matter (DM; 85\%), ionised hot gas in the intracluster medium (ICM; 12\%), and stars ($\sim 3\%$), their matter content reflects that of the Universe. Their distribution in mass and redshift, and its evolution, allow us to probe both the physics of structure formation through gravitational collapse and the underlying cosmology in which this process takes place \citep[e.g.][]{all11,kra12}. Thus together with the redshift, the mass of a cluster is its most fundamental property. X-ray follow-up of objects in the {\it R\"ontgensatellit} (ROSAT) catalogues\footnote{\url{http://www.mpe.mpg.de/xray/wave/rosat/index.php}} allowed significant progress to be made on obtaining cosmological constraints from cluster number counts \cite[e.g.][]{bor01,rb02,Vikhlinin09b,man10} and baryon fraction \cite[e.g.][]{Allen08}. From the beginning, such studies consistently indicated a low matter density, with a mean normalized matter density $\Omega_{\rm m}\sim0.3$, and a matter fluctuation amplitude $\sigma_{8} \sim 0.7-0.8$. However, while the first cosmological constraints from Sunyaev-Zeldovich (SZ) cluster number count studies broadly confirmed these findings \cite[][]{rei13,has13,PCXX2014}, the high statistical precision of the {\it Planck}\footnote{\url{https://www.cosmos.esa.int/web/planck}} Cosmic Microwave Background (CMB) measurements revealed an up to $\sim 2\sigma$ difference in the measurement of the key parameter $\sigma_8$ \citep{PCXX2014}. A number of physical effects have been advanced to explain this discrepancy, including invoking `new physics' (a massive neutrino component), but Occam's Razor would suggest that the simplest explanation lies in uncertainties in the cluster mass scale. 
A number of different methods can be used to obtain individual cluster masses. The most commonly used are galaxy kinematics (the use of galaxy orbits as tracers of the underlying potential), X-ray and SZ observations (using the distribution of the ICM as a probe of the potential), and lensing (using distortions of background galaxies to probe the intervening mass distribution). Each method has its inherent assumptions, and much work has gone into using numerical simulations to explore the possible biases that these assumptions might introduce into the final mass estimation. When cluster surveys are used to trace the growth of structure and samples are defined for use as cosmological probes, it is not possible to obtain individual masses for every object. Furthermore, one must understand the probability that a cluster of a given mass is detected with a given value of the survey observable $\mathcal{O}$ (generally the X-ray or SZ signal, and more recently, the total optical richness), i.e. the relationship between $\mathcal{O}$ and the mass and the scatter about this relation. It is common practice to calibrate such a relationship for a limited number of objects, and then apply the resulting scaling law to the full sample. This approach has been successfully applied to a number of cluster samples. It requires accurate mass estimates of the calibration sample, and understanding of how the calibration and survey sample(s) map to the underlying population (i.e. knowledge of the sample selection function). While these uncertainties can be built into the marginalisation over cosmological parameters, tighter parameter constraints go hand in hand with our understanding of these issues. The mass scale is thus fundamental for the study of clusters. This review aims to take stock of the current status of cluster mass estimation methods and their impact on cosmological parameter estimation using the cluster population, and to address the prospects for future improvements.
\section{Theoretical insights from cosmological simulations} Cosmological simulations have been a workhorse for making predictions for the structure and shape of dark matter haloes for more than twenty years \citep[see e.g.][for reviews]{kra12,pla15}. Moreover, the abundance and clustering properties of dark matter haloes that form in the concordance cold dark matter (CDM) models are the standard against which observations are compared in order to derive cosmological constraints. Modern hydrodynamic simulations further provide insights into the effects of baryons on the dark matter halo properties, and on the internal structure of gas and stars within the dominant dark matter potential. In this Section we summarise a number of important insights that numerical simulations have provided for the interpretation of observational data. The most commonly-used definition of mass, derived from theoretical studies but now used almost universally, is the three-dimensional mass enclosed within a given radius $R_{\Delta}$ inside which the mean interior density is $\Delta$ times the critical mass density, $\rho_{\rm c}(z)$, at the redshift of the cluster. Alternatively, one can use $\Delta$ times the mean mass density $\rho_{\rm m}(z)=\Omega_{\rm m}(z)\,\rho_{\rm c}(z)$. The standard notation expresses these quantities as \begin{eqnarray} M_{\Delta \rm c}&=&\frac{4\pi}{3}\, \Delta \rho_{\rm c}\,(z)\, R_{\Delta \rm c}^3, \nonumber \\ M_{\Delta \rm m}&=&\frac{4\pi}{3}\, \Delta \rho_{\rm m}\,(z)\, R_{\Delta \rm m}^3. \label{eq:delta} \end{eqnarray} One sometimes simply uses $M_\Delta$ and $R_\Delta$ for the former case. Commonly-used values of $\Delta$ in observational studies include 2500 (corresponding to the central parts of the halo), 500 (roughly equivalent to the virialised region that is accessible to the current generation of X-ray telescopes), and 200 (corresponding approximately to the `virial' radius). 
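The overdensity definitions above can be made concrete with a short numerical sketch. The snippet below (a minimal illustration, not taken from the review; the flat-$\Lambda$CDM parameters $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm m}=0.3$ are assumed for the example) converts a mass $M_{\Delta \rm c}$ into the corresponding radius $R_{\Delta \rm c}$:

```python
# Sketch: convert an overdensity mass to its radius via
# M = (4*pi/3) * Delta * rho_c(z) * R^3.  Cosmological parameters
# below are illustrative assumptions.
import math

G = 4.301e-9          # gravitational constant [Mpc km^2 s^-2 Msun^-1]
H0 = 70.0             # Hubble constant [km/s/Mpc] (assumed)
Omega_m = 0.3         # mean matter density parameter (assumed)

def rho_crit(z):
    """Critical density at redshift z in Msun/Mpc^3 (flat LCDM)."""
    Ez2 = Omega_m * (1 + z) ** 3 + (1 - Omega_m)   # E^2(z)
    return 3 * (H0 ** 2) * Ez2 / (8 * math.pi * G)

def r_delta(M, z, Delta=500):
    """Radius [Mpc] enclosing a mean density Delta * rho_c(z)."""
    return (3 * M / (4 * math.pi * Delta * rho_crit(z))) ** (1.0 / 3.0)

# A 5e14 Msun cluster at z = 0.1 has R500 of roughly 1.2 Mpc.
print(r_delta(5e14, 0.1))
```

The same function with $\Delta=200$ or $2500$ gives the other commonly used radii.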
\subsection{Dark matter density profiles} \subsubsection{NFW model} The mass and internal structure of galaxy clusters reflect the properties of primordial density perturbations and the nature of the dark matter. In the standard hierarchical CDM scenario of cosmic structure formation, numerical simulations predict that dark matter haloes spanning a wide mass range can be well described by a universal mass density profile \citep{NFW96,NFW97}. The so-called Navarro-Frenk-White (NFW) profile is expressed in the form: \begin{equation} \rho_{\rm NFW}(R)=\frac{\rho_s}{(R/R_s)(1+R/R_s)^2}, \label{eq:rho_nfw} \end{equation} where $\rho_s$ is the central density parameter and $R_s$ is the scale radius that divides the two distinct regimes of asymptotic mass density slopes $\rho\propto R^{-1}$ and $R^{-3}$. The NFW profile is fully specified by two parameters: $M_\Delta$ and the halo concentration $c_\Delta=R_\Delta/R_s$. The three-dimensional spherical mass, $M_\Delta$, enclosed by the radius, $R_\Delta$, is given by \begin{equation} M_{\rm NFW}(<R_\Delta)=\frac{4\pi \rho_s R_\Delta^3}{c_\Delta^3}\left[ \ln(1+c_\Delta)-\frac{c_\Delta}{1+c_\Delta}\right]. \label{eq:MNFW} \end{equation} As the central density reflects the mean density of the Universe at the time of formation, haloes with increasing mass are expected to have lower mass concentration at a given redshift \citep[e.g.][]{NFW97,Bullock01,Gao04b,Dolag04,Duffy08,Stanek10,Klypin11,Bhattacharya13,Meneghetti14,Ludlow14,Diemer15}. Numerical simulations usually describe the relation between the mass and the NFW concentration (i.e. the $c-M$ relation) for simulated haloes using a power-law function \cite[e.g.][]{Bhattacharya13, Meneghetti14, Diemer15}. This relation exhibits large intrinsic scatter for a given halo mass owing to the wide distribution in formation times \citep[e.g.][]{neto07} and the evidence that not all systems are well described by an NFW model \citep[e.g.][]{jin00}.
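The enclosed NFW mass of Eq.~\ref{eq:MNFW} can be evaluated in a few lines. In the sketch below (illustrative values: the mass, radius and concentration are assumptions for the example, not fitted quantities), $\rho_s$ is eliminated by normalising to the total mass $M_\Delta$:

```python
# Sketch of the NFW enclosed-mass profile: M(<r) is proportional to
# mu(r/Rs) = ln(1+x) - x/(1+x), normalised so that M(<R_Delta) = M_Delta.
# Parameter values are illustrative assumptions.
import math

def m_nfw(r, M_delta, R_delta, c):
    """Mass enclosed within r for an NFW halo of mass M_delta,
    radius R_delta and concentration c = R_delta / R_s."""
    mu = lambda x: math.log(1 + x) - x / (1 + x)   # NFW mass function
    Rs = R_delta / c
    return M_delta * mu(r / Rs) / mu(c)

M500, R500, c500 = 5e14, 1.2, 3.0    # Msun, Mpc, assumed concentration
print(m_nfw(R500, M500, R500, c500))       # recovers M500 by construction
print(m_nfw(0.5 * R500, M500, R500, c500)) # about half the mass inside 0.5*R500
```

The halo sparsity discussed below follows directly from such a profile as the ratio of enclosed masses at two overdensity radii.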
Recently, \citet{Diemer15} have proposed that a seven-parameter, double power-law functional form, parameterised by peak height and the slope of the linear matter power spectrum, can describe concentrations in the fiducial $\Lambda$CDM cosmology with $5\%$ accuracy. Although non-baryonic dark matter exceeds baryonic matter by a factor of $\Omega_{\rm DM}/\Omega_{\rm b} \approx 6$ on average, the gravitational field in the central regions of galaxies is dominated by stars. In the hierarchical galaxy formation model the stars are formed in the condensations of cooling baryons in the halo centre. As the baryons condense in the centre, they pull the dark matter particles inward, thereby increasing their density in the central region. The response of dark matter to baryonic infall has traditionally been calculated using the model of adiabatic contraction \citep{eggen62}, which has also been tested and/or calibrated numerically using both idealised \citep{ryden_gunn87,blumenthal86} and cosmological simulations \citep{gnedin04, rudd08, Duffy08, velliscig14,shir18}. \subsubsection{Einasto model} Recent high-resolution N-body simulations \citep[e.g.][]{Navarro04,Gao12} indicate that an Einasto profile \citep{Einasto65} better describes the spherically averaged mass density profile for dark matter haloes than the NFW profile. The Einasto profile has the form: \begin{eqnarray} \rho_{\rm Einasto}=\rho_{-2} \exp\left(-\frac{2}{\alpha}\left[\left(\frac{R}{R_{-2}}\right)^\alpha-1\right]\right) \label{eq:Einasto} \end{eqnarray} where $\alpha$ is a shape parameter that describes the degree of curvature of the profile, and $\rho_{-2}$ and $R_{-2}$ are the mass density and the scale radius at which the logarithmic slope is $-2$, respectively. The NFW model corresponds to the $\alpha\sim0.18$ case of the Einasto profile.
The spherical mass enclosed within $R_\Delta$ is given by \begin{eqnarray} & & M_\Delta= 4\pi \rho_{-2}\, R_{-2}^3 \frac{1}{\alpha} \left(\frac{2}{\alpha}\right)^{-3/\alpha} \nonumber \\ & & \qquad \times \exp\left(2/\alpha \right)\left[ \Gamma\left(\frac{3}{\alpha}\right)-\Gamma\left(\frac{3}{\alpha}, \frac{2}{\alpha}\left(\frac{R_\Delta}{R_{-2}}\right)^\alpha\right)\right], \end{eqnarray} where $\Gamma(x)$ and $\Gamma(a,x)$ are the gamma function and the upper incomplete gamma function, respectively. The Einasto profile is specified by the three parameters $M_\Delta$, $c_\Delta=R_\Delta/R_{-2}$, and $\alpha$. \subsubsection{Sparsity} An alternative to a parameterised description of the dark matter profile is the use of the halo sparsity, $s_{\Delta}$. This quantity measures the ratio of halo masses at two different overdensities: \begin{equation} s_{\Delta_1\, \Delta_2} = M_{\Delta_1} / M_{\Delta_2} \end{equation} and has been recently proposed as a new cosmological probe for galaxy clusters \citep{balmes14,corasaniti18}. If the halo follows an NFW profile, the sparsity and concentration are directly related. However, halo sparsity has the key feature that the ensemble average value at a given redshift exhibits much smaller scatter than that of the mass concentration, and does not require any modelling of the mass density profile, but only mass measurements within two overdensities. It is thus also an attractive quantity for comparison with observations. \subsection{The shape and distribution of dark matter and gas} Although the above discussion assumes spherical symmetry, the CDM model predicts that cluster-size dark matter haloes are generally triaxial and are elongated along the direction of their most recent major mergers \citep[e.g.][]{thomas98,jing_suto02,hopkins05,kasun_evrard05,bett07,gottloeber_yepes07}.
The degree of triaxiality is correlated with the halo formation time \citep[e.g.][]{allgood06}, suggesting that at a given epoch more massive haloes are more triaxial. For the same reason, triaxiality is sensitive to the linear structure growth function and is higher in cosmological models in which haloes form more recently \citep{maccio08}. Furthermore, inclusion of baryons in simulations modifies the shapes of cluster-size dark matter haloes, causing them to become rounder due to gas dissipation associated with galaxy formation processes \citep[e.g.][]{kazantzidis04}. \begin{figure*} \hfill\resizebox{\hsize}{!}{\includegraphics[width=0.465\textwidth]{fig1a.pdf} \hfill \includegraphics[width=0.40\textwidth]{fig1b.pdf}} \caption{\emph{Left:} Random-to-thermal pressure ratio for simulated clusters from the $\Omega_{500}$ simulation \citep[reproduced from][]{Nelson14}. The various curves show the median non-thermal pressure profiles sorted by a proxy for the mass accretion rate $\Gamma$ defined as the difference in mass between $z=0.5$ and $z=0$ \citep{Diemer14}. \emph{Right:} Hydrostatic mass bias $b_{M}=(M_{\rm HSE}-M_{\rm true})/M_{\rm true}$ as a function of radius for a sample of 29 clusters simulated with the SPH code \texttt{GADGET-3} \citep[reproduced from][]{biffi16}. The two panels show the radial profiles of $b_M$ sorted by the system core state, where CC indicates cool core and NCC denotes non-cool core objects (top), and dynamical state (bottom). The shaded areas show the dispersion around the median.} \label{fig:hse_sim} \end{figure*} \citet{lau11} showed that gas traces the shape of the underlying potential rather well outside the core, as expected if the gas were in hydrostatic equilibrium (HE hereafter) in the cluster potential, but that the gas and potential shapes differ significantly at smaller radii. 
These simulations further suggest that with radiative cooling, star formation and stellar feedback (CSF), intracluster gas outside the cluster core ($R\gtrsim 0.1\,R_{500}$) is more spherical compared to non-radiative simulations, while in the core the gas in the CSF runs is more triaxial and has a distinctly oblate shape. The latter reflects the ongoing cooling of gas, which settles into a thick oblate ellipsoid as it loses thermal energy. In the CSF runs, the difference reflects the fact that gas is partly rotationally supported. In non-radiative simulations the difference between gas and potential shape at small radii is due to random gas motions, which make the gas distribution more spherical than the equipotential surfaces. Results are similar for unrelaxed clusters but with considerable scatter. In both CSF and non-radiative runs, the gravitational potential was found to be much more spherical than the DM. Stochastic feedback from a central active galactic nucleus (AGN) will also heat and redistribute the gas in the core regions \citep[e.g.][]{lebrun14,tru18}. Because of the shallower potential wells of lower mass systems, such feedback has a stronger effect on their gas distribution, leading to a radial and mass dependent modification of the gas content in the core regions. \subsection{ICM energy budget and departures from equilibrium} \label{sec:icm_energy} The deep potential well of galaxy clusters compresses the collapsing baryons (consisting mostly of pristine hydrogen and helium with densities of $\sim 10^{-3}$ particles cm$^{-3}$), and heats them to temperatures of $10^7$ K ($\sim 1$ keV) and above. Given its high temperature, the ICM emits in X-rays principally via thermal Bremsstrahlung, with a continuum emission typically following $\epsilon \propto n_{\rm gas}^2 T^{1/2}$. Inverse Compton scattering of CMB photons by ICM electrons produces the SZ effect that is observed in millimetric bands \citep{sun72}.
The SZ signal is proportional to the electron pressure integrated along the line-of-sight. The spatial distribution and thermodynamical properties of the ICM depend on the underlying dark matter potential and the merging history of a cluster. From a general point of view, the dynamics of an inviscid collisional gas follows the Euler equation, \begin{equation} \frac{\partial \mathbf{v}}{\partial t}+(\mathbf{v}\cdot\nabla)\, \mathbf{v}+\frac{1}{\rho}\nabla P = -\nabla\Phi \label{eq:euler} \end{equation} where $\mathbf{v}$ denotes the three-dimensional velocity field, $P$ and $\rho$ are the gas pressure and density, and $\Phi$ is the cluster gravitational potential. After a few sound crossing times, of the order of $10^9$ years, the ICM is expected to reach HE, and the kinetic energy thermalises, such that the pressure support should be dominated by the thermal pressure ($P\approx P_{\rm th}$). The velocity field becomes negligible and the Euler equation reduces to the HE equation, \begin{equation} \frac{1}{\rho} \frac{d P_{\rm th}}{d R} = -\frac{GM(<R)}{R^2}, \label{eq:hee} \end{equation} \noindent where $G$ is the gravitational constant. Under this assumption, the mass profile can be reconstructed from the radial profiles of ICM thermodynamic quantities (see Sect. \ref{sec:X-ray}). \begin{figure*} \includegraphics[width=0.6\textwidth]{fig2a.pdf} \hfill \includegraphics[width=0.4\textwidth]{fig2b.pdf} \caption{\emph{Left:} A measure of the predicted inhomogeneity in the gas density and temperature distributions. The Figure shows the average width of the distribution of gas density (top) and temperature (bottom) within spherical shells for five different simulation setups \citep[reproduced from][]{rasia14}. The colours refer to the following runs: non-radiative SPH (red), non-radiative AMR (brown), SPH with cooling and star formation (blue), AMR with cooling and star formation (green), and SPH with AGN feedback (black).
\emph{Right:} Radial profiles of gas clumping factor $C=\left(\langle \rho^2\rangle/\langle\rho\rangle^2\right)^{1/2}$ estimated from non-radiative AMR simulations of a set of massive clusters \citep[reproduced from][]{vazza13}. The simulated systems are sorted into relaxed (black), merging (blue), and post-merger (red) categories. } \label{fig:inhomogeneities} \end{figure*} However, residual random gas motions can produce a non-negligible contribution to balance the gravitational field, which causes an underestimation of the true mass when the energy is assumed to be fully thermalised. The total pressure balancing gravity can be described as the sum of thermal pressure and random kinetic pressure, \begin{equation} P_{\rm tot} \simeq P_{\rm th}+\frac{1}{3}\, \rho\, \sigma_v^2, \end{equation} where $\sigma_v$ denotes the velocity dispersion of isotropically moving gas particles (i.e. turbulent motions). More generally, by integrating Eqn.~\ref{eq:euler} over a radial shell, the total enclosed mass within the radius $R$ can be written as \begin{equation} M(<R) = M_{\rm therm} + M_{\rm rand} + M_{\rm rot} + M_{\rm cross}+ M_{\rm stream} +M_{\rm accel}, \label{eq:mhse_tot} \end{equation} where the expressions for all six terms are given in \citet{Lau13}. The first term in the equation is the hydrostatic mass (see Eqn.~\ref{eq:mhe}). The second and third terms represent the pressure support induced by random and rotational gas motions, respectively. The fourth and fifth terms describe the contribution from cross and stream motions; the final term is the acceleration. Each of these additional terms introduces corrections to the HE assumption, and each needs to be properly understood in order to derive accurate masses from the hydrostatic method.
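The hydrostatic reconstruction of Eqn.~\ref{eq:hee} can be sketched numerically. The example below is a minimal illustration (not a method from the review): it assumes an isothermal beta-model gas atmosphere, for which the HE mass has the closed form $M(<R) = 3\beta k T R^{3} / [G \mu m_{\rm p} (R^{2}+r_{\rm c}^{2})]$, and checks that a finite-difference pressure gradient recovers it; all parameter values are illustrative.

```python
# Sketch: hydrostatic mass M(<R) = -R^2/(G*rho) * dP/dR for an
# isothermal beta-model atmosphere (assumed toy profile and parameters).
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
m_p = 1.673e-27        # proton mass [kg]
kT = 5.0 * 1.602e-16   # 5 keV in joules (assumed temperature)
mu, beta = 0.6, 0.66                       # mean molecular weight, slope
r_c = 0.2 * 3.086e22                       # core radius: 0.2 Mpc in metres

def rho_gas(r, rho0=1e-23):
    """Isothermal beta-model gas density [kg/m^3]; rho0 cancels in M."""
    return rho0 * (1 + (r / r_c) ** 2) ** (-1.5 * beta)

def m_hse(r, dr=1e18):
    """Numerical HE mass [kg]; pressure P = rho * kT / (mu * m_p)."""
    dPdr = (rho_gas(r + dr) - rho_gas(r - dr)) / (2 * dr) * kT / (mu * m_p)
    return -r ** 2 / (G * rho_gas(r)) * dPdr

R = 3.086e22           # 1 Mpc in metres
analytic = 3 * beta * kT / (G * mu * m_p) * R ** 3 / (R ** 2 + r_c ** 2)
print(m_hse(R) / 1.989e30, analytic / 1.989e30)  # both ~3.5e14 Msun here
```

Real analyses use measured density and temperature profiles in place of the toy beta-model, but the differentiation step is the same.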
Given the difficulty of directly measuring gas motions in the ICM\footnote{Existing experimental constraints and future prospects on gas motions in the ICM are discussed in detail in another chapter of this series (Simionescu et al. 2019).}, numerical simulations have been widely exploited to set constraints on the relative importance of each of these terms \citep{rasia06,Nagai07,Vazza09,Lau09,nelson12,battaglia13,Suto13,rasia14,Nelson14,Nelson14b,biffi16,shi15,shi16}. Most studies consistently predict that random residual gas motions (i.e. turbulence) dominate the required correction, independent of the dynamical state. The amplitude of the turbulent pressure support, however, varies from cluster to cluster, with predictions in the range of $10-30\%$ at $R_{500}$ depending on the mass accretion histories of clusters \citep{Nelson14,shi15,shi16}. Bulk motions and acceleration provide an important contribution only in merging clusters. Note that the acceleration term is very small within the virialised regions of galaxy clusters, but becomes a non-negligible and irreducible mass bias in merging clusters or the outskirts of all clusters \citep{Lau13,Suto13,Nelson14b}. Figure~\ref{fig:hse_sim} shows the radial profiles of non-thermal pressure and hydrostatic mass bias from two different sets of simulations \citep{Nelson14,biffi16}. Both studies predict a trend of increasing non-thermal-to-thermal pressure ratio with radius, and hydrostatic mass biases ranging from $<5\%$ in the core to $\sim30\%$ at $R_{500}$. Both studies also find a dependence of the predicted hydrostatic mass bias on the cluster dynamical state and accretion rate, the non-thermal pressure contribution being on average higher in highly accreting systems. The relatively low values of the non-thermal pressure derived from the X-COP data (in Sect.~\ref{sec:fgasxcop}) are consistent with the expectation that relaxed clusters have lower levels of non-thermal pressure support.
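The effect of a radially increasing non-thermal pressure fraction on the hydrostatic mass can be illustrated with a toy calculation. The sketch below is loosely inspired by the simulation trends just discussed but uses invented numbers: it assumes a beta-model shape for the *total* pressure and a non-thermal fraction $\alpha(r)=\alpha_0 (r/R_{500})^{0.8}$, then evaluates the bias incurred when only the thermal part is differentiated.

```python
# Toy sketch (made-up profile and numbers): hydrostatic mass bias
# b = M_HSE/M_true - 1 when P_th = (1 - alpha(r)) * P_tot is used in
# place of P_tot.  The r^2/(G*rho) prefactor cancels in the ratio.
alpha0 = 0.2                          # non-thermal fraction at r500 (assumed)
beta, r_c = 0.66, 0.2                 # beta-model slope, core radius [r500 units]

P_tot = lambda r: (1 + (r / r_c) ** 2) ** (-1.5 * beta)   # total pressure shape
alpha = lambda r: alpha0 * r ** 0.8                       # non-thermal fraction
P_th = lambda r: (1 - alpha(r)) * P_tot(r)                # thermal part only

def deriv(f, r, h=1e-5):
    """Central finite difference."""
    return (f(r + h) - f(r - h)) / (2 * h)

r = 1.0                                # evaluate at r500
bias = deriv(P_th, r) / deriv(P_tot, r) - 1.0
print(bias)   # negative: thermal-only pressure underestimates the true mass
```

For these assumed numbers the bias at $R_{500}$ comes out near $-10\%$ to $-15\%$, of the same order as the simulation predictions quoted above, though the exact value is entirely set by the toy inputs.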
Future work should focus on detailed understanding of the nature of gas flows in the density-stratified ICM in cluster outskirts \citep{shi18,vazza18}. \subsection{The presence of gas inhomogeneities} \label{sec:icm_inhom} In practice, interpretation of X-ray measurements of the thermodynamical properties of the ICM may be complicated by the presence of structure and inhomogeneities in the gas temperature and density distributions \citep{Mazzotta04,vikhlinin06b}. Unfortunately, numerical simulations have not yet converged on what the typical level of temperature inhomogeneities in the ICM should be, as the result appears to depend substantially on the adopted physical and computational setup. The two main hydrodynamical solvers in numerical simulations of clusters are Smoothed Particle Hydrodynamics (SPH) and Adaptive Mesh Refinement (AMR). \citet{rasia14} compared the predicted level of temperature anisotropies in five sets of numerical simulations featuring both SPH and AMR hydrodynamical solvers, and for different implementations of baryonic physics (non-radiative, cooling and star formation, and AGN feedback). The predicted level of temperature inhomogeneity ranges from 5 to 25\% (see Fig. \ref{fig:inhomogeneities}). Similarly, the gas density determined from X-ray observations of the ICM may be biased by the presence of inhomogeneities in the gas distribution of the ICM. Over-dense regions exhibit an enhanced X-ray signal because of the $\rho^2$ dependence of the emissivity, which boosts the estimated gas density towards high values \citep{mathiesen99}. The overestimation of the gas density is usually quantified by the clumping factor $C=\left(\langle \rho^2\rangle/\langle\rho\rangle^2\right)^{1/2}\geq1$. Numerical studies predict that the clumping factor $C$ should increase from values close to 1 in the central regions to $1.2-1.3$ around $R_{200}$ \citep{nagai07b,vazza13,roncarelli13,zhu13,planelles17}, with substantial scatter from one system to another. 
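The definition of the clumping factor can be illustrated with a toy volume of gas: even a small volume fraction of over-dense clumps drives $C$ noticeably above unity. The clump fraction and density contrast below are invented numbers for the example, not simulation results.

```python
# Toy sketch of C = (<rho^2> / <rho>^2)^(1/2) within one radial shell:
# 99% of the volume is smooth gas at the mean density, 1% is clumps ten
# times denser (assumed numbers).
import random

random.seed(1)
rho = [10.0 if random.random() < 0.01 else 1.0 for _ in range(100000)]

mean = sum(rho) / len(rho)
mean_sq = sum(x * x for x in rho) / len(rho)
C = (mean_sq / mean ** 2) ** 0.5
print(C)   # ~1.3: the rho^2-weighted (X-ray) density overshoots the mean
```

This mirrors why the $\rho^2$ dependence of the X-ray emissivity biases the inferred density high in clumpy regions.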
As an example, the right-hand panel of Fig. \ref{fig:inhomogeneities} shows the radial profiles of the clumping factor in a set of 20 massive clusters simulated with the AMR code {\sc Enzo} and sorted according to their dynamical state, showing that the ICM in merging systems is on average more clumpy than in relaxed objects \citep{vazza13}. The HE equation in the presence of clumping should be modified by the gradient of the clumping factor \citep{roncarelli13}, neglect of which can cause biases of $\sim5\%$ on the reconstructed masses. Note that the effect of clumping on the gas fraction is expected to be larger, as it biases simultaneously the gas mass high and the hydrostatic mass low. The corresponding values of $f_{\rm gas}$ can be overestimated by $\sim10\%$ at $R_{500}$ \citep{eckert+15}. \subsection{Baryon budget} \label{sec:depletion} \subsubsection{Total baryonic content} Because of their large mass and deep gravitational potential, the total baryon content in galaxy clusters is expected to reflect that of the Universe as a whole \citep{White93,Evrard97,Kravtsov05}. The total baryon fraction $f_b=(M_{\rm gas}+M_{\star})/M_{\rm tot}$ should thus match the cosmic baryon fraction estimated from primordial nucleosynthesis and the CMB power spectrum. Simulations using different hydrodynamical solvers and baryonic physics substantially agree in predicting that the depletion of baryons within $R_{200}$ during the hierarchical formation process should be small \citep[$\sim5\%$,][]{Planelles13,Sembolini13,lebrun14,wu15,nifty1,nifty2,hahn17,barnes17,barnes18,lovell18}. In particular, \citet{nifty1} resimulated the region surrounding a massive cluster ($M_{\rm vir}=1.1\times10^{15}M_\odot$) with 13 different hydrodynamical codes from the exact same initial conditions and compared the output. The comparison includes classical SPH ({\sc Gadget-2}), advanced SPH ({\sc Gadget-3}), AMR ({\sc Art, Ramses}), and moving-mesh ({\sc Arepo}) codes. 
In the left-hand panel of Fig. \ref{fig:depletion} the baryon fraction of the simulated cluster is shown as a function of radius. While in the central regions and out to $\sim300$ kpc the various codes do not converge, around $R_{500}$ and beyond they agree within a few percent. Thus, the baryon fraction within $R_{200}$ is very robustly predicted by numerical simulations, independent of the exact input physics or the numerical scheme. \begin{figure*}[t] \includegraphics[width=0.48\textwidth]{fig3a.pdf} \hfill \includegraphics[width=0.51\textwidth]{fig3b.pdf} \caption{\textit{Left:} Baryon fraction profiles for a massive galaxy cluster resimulated with 13 different codes from the same initial condition. The dotted vertical line indicates the value of $R_{500}$. Figure from \citet{nifty1}. \textit{Right:} ICM gas fraction profiles as a function of halo mass in simulations implementing various prescriptions for baryonic physics (non-radiative, red; cooling and star formation, yellow; AGN feedback, blue, green and magenta). The AGN feedback models range from gentle, self-regulated feedback (AGN 8.0, blue) to more bursty and energetic injection (AGN 8.7, magenta). The black points show a compilation of observed ICM gas fraction measurements. Figure from \citet{lebrun14}.} \label{fig:depletion} \end{figure*} \subsubsection{Effect of non-gravitational processes} \label{sec:fbar_ngrav} Various works have also studied the impact of baryonic physics (cooling, star formation, supernova and AGN feedback) on the depletion of baryons within the virial radius. In the case where a large amount of non-gravitational energy is injected within the ICM, the gaseous atmosphere expands and a global depletion of baryons within the virial radius may occur. AGN feedback is the main source of non-gravitational energy in the ICM \citep[see][for a review]{mcnamara07}. 
The baryon depletion caused by AGN feedback is known to be important for haloes with masses below $\sim 10^{14}M_\odot$ \citep{Planelles13,lebrun14,wu15,lovell18}, for which the baryon budget falls short of the cosmic value by a factor $\sim2$. The right-hand panel of Fig. \ref{fig:depletion} from \citet{lebrun14} shows the hot gas fraction in several sets of numerical simulations implementing various prescriptions for baryonic physics (non-radiative, cooling and star formation, and three models for AGN feedback) and compares the results with published datasets. While the non-radiative run predicts little depletion for haloes in the range $10^{13}-10^{15}M_\odot$, the runs implementing additional physics largely differ for haloes of $M_{500}\leq10^{14}M_\odot$. Note that the run including cooling and star formation but no AGN feedback suffers from the overcooling problem, and predicts stellar fractions that are largely in excess of the measured values. At high mass ($M_{500}\sim10^{15}M_\odot$), all but the most extreme AGN feedback model converge to a very similar value for the hot gas fraction, indicating that baryonic effects are subdominant. \subsection{Mass estimates from mass proxies and scaling relations} \label{sec:scaling} The gravitational potential of galaxy clusters can be probed through observations of the ICM in X-rays and the SZ effect, or through the richness in optical/NIR wavelengths. One expects simple scaling relations between the mass and global ICM properties such as the X--ray luminosity, $L_{\rm X}$, or the SZ Compton parameter $Y_{\rm SZ}$, and galaxy content. More specifically, the simplest models of structure formation, based on simple gravitational collapse, predict that galaxy clusters constitute a self-similar population. 
As discussed above (Eqn.~\ref{eq:delta}), the virialised part of a cluster corresponds roughly to a fixed density contrast ($\Delta \sim 500$) as compared to the critical density of the Universe, $\rho_{\rm c}(z) $ at the redshift in question: \begin{equation} \frac{M_{\Delta}}{ \frac{4\pi}{3}\,R_{\Delta}^{3} } = \Delta\,\rho_{\rm c}(z) \label{eq:rho} \end{equation} \noindent with a strong similarity in the internal structure of virialised dark matter haloes within the corresponding radius, $R_{\Delta}$. This reflects the fact that there is no characteristic scale in the gravitational collapse. The gas properties directly follow from the dark matter properties, assuming that the gas evolution is purely driven by gravitation, i.e. by the evolution of the dark matter potential. The internal gas structure is universal, as is the case for the dark matter. The gas mass fraction $f_{\rm gas}$ reflects the Universal value, since the gas 'follows' the collapse of the dark matter. It is thus constant: \begin{equation} \frac{M_{{\rm gas}, \Delta}}{M_{\Delta}} = f_{\rm gas} = {\rm const.} \end{equation} \noindent Furthermore, as the gas is roughly in HE in the potential of the dark matter, the virial theorem gives: \begin{equation} T_{\rm X} = \beta \frac{G \mu {\rm m_{\rm p}}\,M_{\Delta} }{R_{\Delta}} \label{eq:Tvir} \end{equation} \noindent where $\mu$ is the mean molecular weight in amu for an ionised plasma, $m_{\rm p}$ is the proton mass, $T_{\rm X}$ is the gas mean temperature, and $\beta$ is a normalization factor which depends on the cluster internal structure. Since this structure is universal, $\beta$ is a constant, independent of redshift $z$ and cluster mass. Each cluster can therefore be defined by two parameters only: its mass and its redshift. From the basic equations, Eqn.~\ref{eq:rho}-\ref{eq:Tvir}, one can derive a scaling law for each physical property, $Q$, of the form $Q\propto A(z)M_{\Delta}^{\alpha}$, that relates it to the redshift and mass. 
The evolution factor, $A(z)$, in the scaling relations is due to the evolution of the mean dark matter (and thus gas) density, which varies with the critical density of the Universe, $ \overline{\rho_{\rm gas}} \propto \overline{\rho_{\rm DM}}= \Delta \rho_{\rm c}(z) \propto E^{2}(z)$. For instance, the gas mass scales as $M_{{\rm gas}, \Delta} \propto M_{\Delta}$, and the temperature as $T_{\rm X} \propto E^{2/3}(z) M_{\Delta}^{2/3}$. The integrated SZ signal $Y_{\rm SZ}$, or its X-ray equivalent $Y_{\rm X}=M_{{\rm gas}, \Delta}\,T_{\rm X}$, introduced by \citet{kra06}, scales as $Y_{\rm SZ} \propto E^{2/3}(z)\,M_{\Delta}^{5/3}$, while the (bolometric) X--ray luminosity scales as $L_{\rm X} \propto E^{7/3}(z)\,M_{\Delta}^{4/3}$. \begin{figure*} \hbox{ \includegraphics[width=0.48\textwidth, keepaspectratio]{fig4a.pdf} \hfill \includegraphics[width=0.48\textwidth, keepaspectratio]{fig4b.pdf} } \caption{Relation between the SZ signal (left) or the X--ray luminosity (right) and the mass from the numerical simulations of \citet{pike14}. Observational data points are from \citet{PEPXI} and \citet{pratt09}. The relations are plotted for different implementations of the gas physics: the non-radiative model (NR; red crosses), the cooling and star formation model (CSF; blue stars), the supernova feedback model (SFB; green diamonds) and the AGN feedback models (magenta triangles). These relations are compared to observations (black line and black crosses). $Y_{\rm SZ}$ is proportional to the total thermal energy, and the relation between $Y_{\rm SZ}$ and the mass depends weakly on cooling and feedback processes. The scatter is tight, reflecting the similarity in shape of the pressure profiles. In contrast, the X-ray luminosity depends on the square of the density and is dominated by the core properties. It is very sensitive to gas physics and presents a large scatter at a given mass, reflecting the large scatter in the scaled density profile in the core of clusters. 
} \label{fig:scalingtheor} \end{figure*} These scaling relations then allow estimation of the mass through so-called mass proxies, i.e. global physical properties that are directly related to the mass but easier to measure. However, there are intrinsic limitations to these 'cheap' mass estimates. Even in the simplest, purely gravitational model, the normalisation of the relations depends on the formation history, and must be derived from numerical simulations. Furthermore, each relation has an intrinsic scatter due to individual cluster formation histories \citep{poo07,yu15}. Even more importantly, non-gravitational physics (cooling and galaxy/AGN feedback) affects the normalisation, slope, scatter, and evolution of each relation. In particular, as $L_{\rm X}$ and $Y_{\rm SZ}$ depend on the gas content, the slope of the $L_{\rm X}$--$M$ and $Y_{\rm SZ}$--$M$ relations is directly affected by the mass dependence of the baryon depletion (Sec.~\ref{sec:fbar_ngrav}). A large numerical simulation effort has been undertaken to understand how these scaling relations depend on the gas physics \citep[e.g.][]{pike14, planelles14,lebrun14,tru18}, including their scatter and evolution \citep{lebrun17}. There is now a consensus that AGN feedback is a key ingredient of realistic models. A key recent advance is the development of new cosmological hydrodynamical simulations, with calibrated sub-grid feedback models, that are able to reproduce the observed gas and stellar properties of local clusters \citep{mccarthy17}. The most robust mass proxies correspond to the lowest-scatter relations that depend as little as possible on the gas physics. In this respect, the SZ signal, proportional to the integral of the pressure, or equivalently to the total thermal energy of the gas, is generally believed to be particularly well-behaved \citep[e.g.][]{das04,mot05}. 
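In practice, using a mass proxy amounts to inverting the corresponding scaling relation. A minimal sketch, inverting a self-similar relation of the form $Y_{\rm SZ} = Y_\star\,E^{2/3}(z)\,(M/M_\star)^{5/3}$; the normalisations \texttt{y\_star} and \texttt{m\_star} and the cosmological parameters are illustrative assumptions, not fitted values:

```python
import math

def E(z, omega_m=0.3, omega_l=0.7):
    """Dimensionless Hubble parameter E(z) for an assumed flat LCDM cosmology."""
    return math.sqrt(omega_m * (1.0 + z)**3 + omega_l)

def mass_from_ysz(y_sz, z, y_star=1e-4, m_star=3e14, alpha=5.0 / 3.0):
    """Invert Y = y_star * E(z)^(2/3) * (M / m_star)^alpha for M (in Msun).
    y_star and m_star are hypothetical normalisations, for illustration only."""
    return m_star * (y_sz / (y_star * E(z)**(2.0 / 3.0)))**(1.0 / alpha)

# at z = 0, a cluster with Y = y_star recovers M = m_star by construction
m = mass_from_ysz(1e-4, 0.0)
```

The same inversion applies to any power-law proxy relation; only the normalisation, slope, and evolution factor change.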
The SZ signal and the corresponding pressure profiles beyond the core are mostly governed by the characteristics of the underlying potential well, with a weak dependence on dynamical state and on the poorly-understood non-gravitational physics. $Y_{\rm SZ}$ is thus considered to be a robust, low-scatter mass proxy. In contrast, the X--ray luminosity is more complex. The X--ray flux is sensitive to core properties, which present a large scatter and a strong dependence on dynamical state (Fig.\,\ref{fig:scalingtheor}). The gas mass may be a good mass proxy for the most massive systems, but not at low mass, where it is strongly dependent on galaxy feedback. A recent comparison of the properties of various mass proxies, seen from the point of view of numerical simulations, can be found in \citet{lebrun17}. Improved understanding of the covariances among different observables is one of the important steps toward improving the constraining power of upcoming multi-wavelength cluster surveys \citep[e.g.][]{sta10,shir16}. \section{Observational mass estimation methods} \label{sec:mem} In this Section, we discuss the principal methods that are used to estimate individual cluster masses. Each method is briefly described, along with its underlying assumptions, and the various systematic uncertainties and potential biases that can be encountered in translating the observation into a mass measurement are discussed. \subsection{Kinematics} The first estimate of the mass of a cluster of galaxies \citep{Zwicky33,Zwicky37} was obtained by applying the virial theorem to the distribution of cluster galaxies in projected phase-space (PPS), and it was based on the assumption that galaxies are unbiased tracers of the cluster mass distribution. If this assumption is relaxed, the virial mass estimate can vary by an order of magnitude or more \citep{Merritt87,Wolf+10}. 
It is therefore important not to make any assumption on the relative distribution of the cluster mass and the galaxies, even if several studies have shown that red, passive galaxies do indeed trace the total mass profile of clusters \citep[e.g.][]{vanderMarel+00,BG03}. Observational selection tends to make the bias in the spatial distribution stronger than the bias in the velocity distribution \citep{Biviano+06}, so it is more robust to estimate a cluster mass directly from the velocity distribution of cluster galaxies, using a scaling relation \citep[e.g.][]{Evrard+08,Munari+13,Ntampaka+15}, rather than using the virial theorem. The intrinsic scatter of the mass-velocity dispersion relation is $\leq 5$\%, but observational effects (see Sect.~\ref{sss:pbms}) increase the scatter to $\sim 40$\% \citep{WCS10,SMBD13}. \subsubsection{Methods} \label{sss:kinmet} If $\gtrsim 100$ tracers of the cluster gravitational potential are available, cluster masses and mass profiles can be determined without any assumption about the spatial and/or velocity distribution of the tracers relative to the mass. One possibility is to relate the observed PPS distribution of galaxies to their intrinsic phase-space distribution via \citep[see, e.g.,][]{DM92} \begin{eqnarray} & & g(R,v_{\rm los}) = \nonumber \\ & & \qquad 2\int_R^\infty \frac{r\,d r}{(r^2-R^2)^{1/2}} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(E,L) \,d v_\theta \,d v_R \,, \end{eqnarray} where $r$ is the radial distance from the cluster centre in 3D, $(v_R,v_{\theta})$ are Cartesian components of the velocity along the polar coordinates $(R,\theta)$ in the plane of the sky, and $v_{\rm los}$ is the line-of-sight velocity component (i.e. the one we observe via the redshift measurement). The intrinsic phase-space distribution $f$ is expressed in terms of the energy $E$ and angular momentum $L$. The gravitational potential is related to $f(E,L)$ through the Poisson equation. 
Since the shape of the $f(E,L)$ distribution function is not known from theory, it is generally estimated for haloes extracted from cosmological simulations \citep{Wojtak+08,Wojtak+09}. The $f(E,L)$ method has been used to estimate the mass profiles $M(r)$ of 41 nearby clusters by \citet{WL10} and a stack of sixteen $z=0.17-0.55$ clusters by \citet{vanderMarel+00}. Another widely adopted method for the $M(r)$ determination is to search for a solution of the Jeans equation for a collisionless system of galaxies in dynamical equilibrium, \begin{equation}\label{e:Jeans} G \, M(r) = - r \langle v_r^2 \rangle \left( \frac{d \ln \nu}{d \ln r} + \frac{ d \ln \langle v_r^2 \rangle}{d \ln r} +2 \beta \right), \end{equation} where $\nu(r)$ is the cluster 3D galaxy number density profile, and $\langle v_r^2 \rangle$ is the mean squared radial velocity component, which reduces to the radial velocity dispersion $\sigma_r$ in the absence of bulk motions. $\beta(r)$ is the velocity anisotropy profile, \begin{equation} \beta(r) \equiv 1 - \frac{\langle v_{\theta}^2 \rangle}{\langle v_r^2 \rangle}, \end{equation} where $\langle v_{\theta}^2 \rangle$ is the mean squared velocity component along one of the two tangential directions in spherical coordinates, which reduces to the tangential velocity dispersion $\sigma_{\theta}$ in the absence of bulk motions. Since most clusters of galaxies do not rotate \citep{HL07}, it is usually assumed that the two tangential components of the velocity are identical. The Abel integral equation relates $\nu(r)$ to the observable (projected) galaxy number density profile $N(R)$ \citep{BT87}, under the assumption of spherical symmetry. On the other hand, one cannot directly determine $\sigma_r(r)$ from the observable $\sigma_{\rm{los}}(R)$, since knowledge of $\beta(r)$ is required. This is the so-called \textit{mass-anisotropy degeneracy} \citep[MAD hereafter,][]{BM82} and it is the critical point of this method. 
To solve the MAD, one can use the mean $\beta(r)$ of cluster-size haloes extracted from cosmological simulations \citep{MBM10,lau10}; $M(r)$ then follows directly from the observables in a non-parametric approach \citep{MB10,Wolf+10}. Other possibilities to solve the MAD problem are to use the fourth-order (kurtosis) Jeans equation \citep{Lokas02,LM03,RF13}, or to separately solve the Jeans equation for different tracers, e.g. early-type and late-type galaxies, since they may have different $\beta(r)$ for the same $M(r)$ \citep{Battaglia+08,BP09}. The Jeans equation has been used to determine $M(r)$ of many individual clusters or stacks of several clusters \citep[e.g.][]{CYE97,BG03,LM03,KBM04,BP09,Lemze+09}. \texttt{MAMPOSSt} \citep[\textit{Modelling of Anisotropy and Mass Profiles of Observed Spherical Systems,}][]{MBB13} is a hybrid method that solves the Jeans equation (Eqn.~\ref{e:Jeans}) to compute the probability of observing a galaxy in a given $(R,v_{\rm{los}})$ position in PPS, by assuming models for $M(r)$ and $\beta(r)$ and a shape (e.g. a Gaussian) for the 3D velocity distribution (and not, as is usually done, for the line-of-sight velocity). The probability of observing a galaxy with velocity $v_{\rm{los}}$ at the projected radius $R$ is: \begin{eqnarray} & & p(v_{\rm{los}}|R) = (2\pi)^{-1/2}\, \nonumber \\ & & \times \int_0^\infty (\nu/ \sigma_{\rm{los}})\,\exp [-v_{\rm{los}}^2/(2\,\sigma_{\rm{los}}^2)] \, dz \, / \, \int_0^\infty \nu\, dz \ , \label{e:mamp} \end{eqnarray} where $z$ is the direction of the line-of-sight. The 3D number density profile $\nu(r)$ comes from the observed (projected) number density profile $N(R)$ via the Abel integral. 
The line-of-sight velocity dispersion comes from $\sigma_r$ and $\beta$ via: \begin{equation} \sigma_{\rm{los}}^2(R,r) = [1-\beta(r)\,(R/r)^2]\;\sigma_r^2(r) \ , \label{e:slos} \end{equation} and $\sigma_r$ is obtained from $\beta(r)$ and $M(r)$ as a solution of the Jeans Eqn.~(\ref{e:Jeans}) \citep{vanderMarel94}, \begin{equation} \nu \, \sigma_r^2 = G \int_{r}^{\infty} \nu(\xi) M(\xi)/\xi^2 \, \exp [ 2 \int_{r}^{\xi} \beta \, d \eta/\eta] \, d \xi\, . \label{e:sr} \end{equation} The best-fit parameters of the input models for $M(r)$ and $\beta(r)$ are obtained by maximising the product of the $p(v_{\rm{los}}|R)$ probabilities. \texttt{MAMPOSSt} has been used to determine several individual or stacked cluster mass profiles \citep{Biviano+13,MBM14,Durret+15,Balestra+16,Biviano+16,Verdugo+16,Biviano+17a,Biviano+17b}. In combination with gravitational lensing (see Sect.~\ref{ss:lensing}), \texttt{MAMPOSSt} has also been used to constrain the nature of gravity \citep{Pizzuti+16,Pizzuti+17} and the equation of state of dark matter \citep{Sartoris+14}. The \texttt{Caustic} method \citep{DG97,Diaferio99,Serra+11} has been developed to estimate cluster masses beyond the virial region, i.e. outside the domain of validity of the methods described above, which all rely on the assumption of dynamical equilibrium. This method defines the caustic by identifying steep density gradients in PPS along the velocity axis. N-body simulations show that the caustic amplitude ${\cal A}(R)$ can be used to estimate $M(r)$ via \begin{equation} G \, [M(r)-M(r_0)]=\int_{r_0}^r {\cal A}^2(x) \, {\cal F}_{\beta}(x) \, dx \, . \label{eq:cau} \end{equation} ${\cal F}_{\beta}(x)$ is a radially varying function of both $\beta(r)$ and the gravitational potential itself. Eqn.~\ref{eq:cau} can be solved by assuming a constant value for ${\cal F}_{\beta}$. 
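Under the constant-${\cal F}_{\beta}$ approximation, Eqn.~\ref{eq:cau} reduces to a simple cumulative integral of the squared caustic amplitude. A sketch using trapezoidal integration; the units and ${\cal F}_{\beta}=0.6$ are assumptions for illustration:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def caustic_mass(r, amp, m_r0, f_beta=0.6):
    """Cumulative mass profile M(r) from the caustic amplitude, solving
    G [M(r) - M(r0)] = integral of F_beta * A^2(x) dx with constant F_beta.
    r in kpc (with r[0] = r0), amp = A(r) in km/s, masses in Msun."""
    integrand = f_beta * amp**2 / G
    dm = np.concatenate(
        ([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))))
    return m_r0 + dm

# constant amplitude A = 1000 km/s between 1 and 2 Mpc, with M(r0) from
# e.g. a Jeans-type analysis inside r0 (value here is illustrative)
r = np.linspace(1000.0, 2000.0, 101)
m_profile = caustic_mass(r, np.full_like(r, 1000.0), 5e14)
```

The boundary term $M(r_0)$ makes explicit why the method is usually combined with an interior mass estimate, as described in the text.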
This assumption is violated within the virial region, leading to a mass over-estimate \citep{Serra+11}, but it is a valid one outside the virial region, where numerical simulations indicate ${\cal F}_{\beta} \approx 0.5-0.7$ \citep[][and references therein]{GMK13}. Since Eqn.~(\ref{eq:cau}) is differential in $M(r)$, one can obtain the mass profile out to a given radius $r_0$ by another technique, e.g. \texttt{MAMPOSSt}, and then use the \texttt{Caustic} method to determine $M(r>r_0)$ \citep{BG03,Biviano+13}. The \texttt{Caustic} method has been extensively used to determine cluster mass profiles \citep[e.g.][]{BG03,Biviano+13,Geller+14,Guennou+14}. \subsubsection{Sources of systematic uncertainty} \label{sss:pbms} In Sect.~\ref{sss:kinmet} we have already mentioned the systematic uncertainties that are specific to each individual method. Here we discuss how these and other issues propagate into systematic effects in the resulting mass estimate. The typical level of systematic uncertainty in cluster mass estimates inherent in current methods, assuming typical data-sets of $\sim 100$ cluster members, is summarised below in percentages (a value of 0 means the bias can be fully corrected):\\ \noindent\begin{minipage}{\columnwidth} \begin{itemize} \item mass-anisotropy degeneracy \dotfill $8\%$ \item uncertainty in ${\cal F}_{\beta}$ \dotfill $15\%$ (Caustic-method specific) \item dynamical equilibrium \dotfill $30\%$ (irrelevant for Caustic method) \item interlopers \dotfill $10\%$ \item spatial incompleteness \dotfill $0\%$ \item triaxiality \dotfill $30\%$ \end{itemize} \end{minipage} \\ \paragraph{\bf Dynamical equilibrium:} Deviation from dynamical equilibrium can result from ongoing major mergers between a cluster and a massive accreting subcluster. 
\citet{TNM10} find that a cluster-subcluster collision may lead to a factor $\sim 2$ mass over-estimate from kinematics, for $\sim 1$ Gyr after each core passage of the subcluster, but only if the collision axis is aligned with the line-of-sight. Most of the effects of the collision are erased after a dynamical timescale. Observationally, deviation from dynamical equilibrium can be identified by the analysis of the shape of the velocity distribution of cluster galaxies \citep{Biviano+06,RLT11,RPHL18}. \paragraph{\bf Interlopers:} Interlopers can be defined in two ways: (1) galaxies that are located in projection within a given radius from the cluster centre, but are outside the sphere of the same radius, or (2) galaxies that are unbound to the cluster. While interloper-removal techniques have become increasingly sophisticated with time \citep[e.g.][]{YV77,Fadda+96,Wojtak+07,MBB13}, it is impossible to reduce contamination by interlopers to zero and, at the same time, retain all the real members in the sample \citep{MBM10}. Comparisons with numerical simulations indicate that contamination by interlopers and incompleteness of real cluster members tend to overestimate cluster masses at the low-mass end and underestimate cluster masses at the high-mass end \citep{Wojtak+18}. \paragraph{\bf Spatial incompleteness:} A particular observational set-up (e.g. caused by fiber collisions or slit positioning) can cause a spatially-dependent incompleteness of the spectroscopic sample. If not properly corrected, this incompleteness induces an error in the determination of $\nu(r)$. On the other hand, the velocity distribution of cluster galaxies is mildly, if at all, affected by spatially-dependent incompleteness. If the incompleteness cannot be corrected, it is more robust to base the cluster mass estimate on the velocity distribution only \citep{Biviano+06}, or to use the complete photometric sample with background subtraction to estimate $\nu(r)$ \citep[see, e.g.][]{Biviano+13}. 
\paragraph{\bf Triaxiality:} All clusters are triaxial, and the velocity dispersion tensor is elongated along the same direction as the galaxy spatial distribution. If the cluster major axis is aligned along (perpendicular to) the line-of-sight, the observed velocity dispersion will be higher (lower) than the average of the three components of the velocity dispersion tensor \citep{Wojtak13}. The cluster projection angle relative to the cluster major axis is generally very difficult to determine \citep{Sereno07}, so triaxiality becomes an important source of (systematic) error in the cluster mass estimate, up to $\sim 60$\%, but generally much lower than this \citep{Wojtak13}. Stacking several clusters together is a simple and effective way of getting rid of the triaxiality problem \citep[e.g.][]{vanderMarel+00}. \begin{figure*} \includegraphics[width=0.33\textwidth, keepaspectratio]{fig5a.pdf} \includegraphics[width=0.33\textwidth, keepaspectratio]{fig5b.pdf} \includegraphics[width=0.33\textwidth, keepaspectratio]{fig5c.pdf} \caption{Left to right: Reconstructed gas density, temperature, and pressure profiles from the REXCESS sample \citep{Pratt10,arn10}, plotted as a function of $R_{500}$. The profiles are colour-coded as a function of dynamical state as defined in \citet{pratt09}: cool core (blue), morphologically disturbed (red), cool core and morphologically disturbed (green) and intermediate (black). } \label{fig:physprof} \end{figure*} \subsection{X-ray and hybrid SZ} \label{sec:X-ray} Upon reaching equilibrium, the thermodynamical properties of the ICM satisfy the HE relation between the ICM pressure $P_{\rm gas}$, the ICM density $\rho_{\rm gas} = \mu m_p n_{\rm gas}$ and the potential (see Eqn. \ref{eq:hee}). We discuss in Sect.~\ref{sec:icm_energy} the insights gained from numerical simulations for when the condition of HE is not satisfied, and what this implies for the mass reconstruction. 
Measurement of cluster masses in X-rays, through the hydrostatic assumption, gained significant traction after the launch of ROSAT in 1990, owing to the easy availability of spatially resolved density profile observations. \subsubsection{Method} \label{sec:xraymethod} Assuming a spherically symmetric distribution, one can write the HE equation: \begin{equation} M_{\rm tot}(<r) = - \frac{r \, P_{\rm gas}}{\mu m_{\rm u} G \, n_{\rm gas}} \frac{d \log P_{\rm gas}}{d \log r}, \label{eq:mhe} \end{equation} where $G$ is the gravitational constant, $m_{\rm u} = 1.66 \times 10^{-24}$ g is the atomic mass unit, and $\mu= \rho_{\rm gas} / (m_{\rm u} n_{\rm gas}) \approx (2 X +0.75 Y +0.56 Z)^{-1} \approx 0.6$ is the mean molecular weight in atomic mass unit for an ionized plasma; $X$, $Y$ and $Z$ being the mass fraction for hydrogen, helium and other elements, respectively. For a typical metallicity of 0.3 times Solar abundance, and assuming the abundance table of \citet{ag89}, $X+Y+Z=1$, with $X \approx 0.716$ and $Y \approx 0.278$. Assuming the ICM follows the equation of state for a perfect gas ($P_{\rm gas} = k T_{\rm gas} n_{\rm gas}$, where $k$ is the Boltzmann constant), the directly-observable physical quantities in X-rays are the radial density $n_{\rm gas}$ and temperature $T_{\rm gas}$ of the plasma (e.g. Fig.~\ref{fig:physprof}). The gas density can be obtained from the geometrical deprojection of the X-ray surface brightness, extracted in thin annuli. Corrections for contaminating gas clumps can be obtained by masking substructures (which are spatially resolved with XMM-{\it Newton}\ and {\it Chandra}), and by measuring the azimuthal median, instead of the azimuthal mean \citep{zhu13,mor13,eckert+15}. The radial gas temperature distribution is built from spectra extracted in annuli that are wider than those used for the surface brightness, and can be modelled with an absorbed thermal component. 
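For a given logarithmic pressure slope, Eqn.~\ref{eq:mhe} can be evaluated directly using $P_{\rm gas}/n_{\rm gas} = k T_{\rm gas}$. A minimal sketch in cgs units; the input temperature, radius, and slope in the example are illustrative values, not measurements:

```python
G_CGS = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2
M_U   = 1.66e-24   # atomic mass unit, g
KEV   = 1.602e-9   # erg per keV
MSUN  = 1.989e33   # solar mass, g
MPC   = 3.086e24   # cm per Mpc

def hydrostatic_mass(kT_kev, r_mpc, dlnP_dlnr, mu=0.6):
    """M(<r) in Msun from the HE equation (ideal gas, P/n = kT):
    M = -kT r / (G mu m_u) * dln P / dln r."""
    return -kT_kev * KEV * r_mpc * MPC * dlnP_dlnr / (G_CGS * mu * M_U) / MSUN

# e.g. a 5 keV cluster with logarithmic pressure slope -3 at r = 1 Mpc
m_he = hydrostatic_mass(5.0, 1.0, -3.0)   # a few 10^14 Msun
```

The linear dependence on both $kT$ and the logarithmic slope makes explicit how temperature calibration offsets and slope uncertainties propagate into the mass.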
A relatively recent development is the availability of spatially-resolved SZ electron pressure profiles, $P_{\rm gas}$, which can be obtained from geometric deprojection of the azimuthally-averaged integrated Comptonization parameter $y$ \citep[e.g.][]{mro09,planck13,sayers+16,romero+17,rup18}. Joint deprojection of SZ and X-ray images can be applied to avoid the use of X-ray spectroscopic data \citep[e.g.][]{lar06,ameglio07,adam16,rup17,shitanishi+18,ghi18b}, and also for total mass estimates \citep[e.g.][]{ameglio09,adam16,rup17,rup18}. \begin{figure*} \includegraphics[width=\textwidth, keepaspectratio]{fig6.pdf} \caption{3D total mass profiles of the clusters in the sample analysed by \citet{bartalucci18} at $z \sim 1$, with several mass distribution reconstruction methods overplotted.} \label{fig:bmprof} \end{figure*} The two main approaches adopted to solve Eqn.~\ref{eq:mhe} are known as the {\it backward} and {\it forward} methods. In the {\it backward} method, a parametric mass model is assumed and combined with the gas density profile to predict a gas temperature profile $T$, which is then compared to the measured $T_{\rm m}$ in the spectral analysis. In the {\it forward} method, functional forms are fitted to the deprojected gas density or surface brightness profiles, and to the temperature \citep[e.g.][]{pra02,vik06,pointecouteau05} or pressure profiles \citep[e.g.][]{pra16,ghi18a,ettori18}, with no assumptions on the form of the gravitational potential. In all cases, the procedure takes into account projection and PSF effects (the latter can be neglected for {\it Chandra}\ data). The HE equation (Eqn.~\ref{eq:mhe}) is then applied to evaluate the radial distribution of the mass. More details on the traditional methods applied to X-ray data to estimate the mass profile can be found in \citet{ettori+13}. The forward method has several variants, as described by \citet{bartalucci18}. 
The fully parametric method \citep[e.g.][]{vik06} directly uses the best-fitting analytic density and temperature or pressure models. Another approach, the non-parametric-like method \citep[e.g.][]{pra16}, uses parametric models only to correct the observed temperature profiles for projection and PSF effects, and to smooth the density gradients. The various methods have been compared by \citet{bartalucci17} and \citet{bartalucci18}, who showed that the density profile is exceptionally robust both to the method used for its reconstruction and to instrumental systematic effects. They found that mass profile estimates are also insensitive to the reconstruction method in the radial range of the measured temperature profile. On the other hand, the mass uncertainty does depend on the method, with fully parametric methods yielding the smallest uncertainties. The mass estimate also depends on the method when extrapolation is required, especially in the case of irregular profiles (see Fig.~\ref{fig:bmprof}). If SZ data are available, the likelihood can also include a comparison with $T = P_{\rm SZ} /n_{\rm gas}$. This method takes advantage of the larger radial extent of the SZ signal in constraining the mass profile model \citep[e.g.][]{planck13,ghi18a, ettori18}. As the combination with SZ data does not need spectroscopic temperature measurements, this method also allows for hydrostatic mass profile estimates out to higher redshift \citep{rup18}. \subsubsection{Sources of systematic uncertainty} The hydrostatic mass estimate depends on the direct measurement of the gas density profile from X-ray data, combined with the radial profile of either the X-ray spectroscopic temperature, or the SZ-derived pressure of the ICM. 
Any bias on these measurements propagates into systematic effects on the resulting mass estimate, which can be roughly summarised in percentages as follows:\\ \noindent\begin{minipage}{\columnwidth} \begin{itemize} \item assumption of spherical symmetry \dotfill few \% \item hydrostatic mass bias \dotfill $<10$--$30\%$ \item gas temperature inhomogeneities \dotfill few to $10$--$15\%$ \item gas clumping \dotfill few \% \item absolute X-ray temperature calibration \dotfill $15$--$20\%$ \end{itemize} \end{minipage} \\ \paragraph{\bf Spherical assumption:} The biases induced by the assumption of spherical symmetry were investigated by \citet{buo12}, who found that while the mean bias is small ($\lesssim 1\%$), substantial variations can occur on a cluster-by-cluster basis, depending on the exact viewing orientation. \paragraph{\bf Hydrostatic mass bias:} The fundamental assumption of the X-ray and SZ analyses described above is that the gas is in HE in the dark matter potential. As discussed in Sect.~\ref{sec:icm_energy}, numerical simulations are unanimous in predicting that such an assumption is likely to lead to an underestimate of the mass due to neglect of bulk motions and turbulence in the ICM. The effect will naturally be most important in dynamically disturbed systems (up to $\sim 30\%$), and least important in relaxed objects ($\lesssim 10\%$). The actual magnitude of this so-called `hydrostatic bias' is difficult to ascertain both numerically (see Sect.~\ref{sec:icm_inhom}) and observationally, although great progress has recently been made in this area and is discussed below in Sect.~\ref{sec:stateoftheart}. \paragraph{\bf Gas temperature inhomogeneities:} An issue that can potentially affect the X-ray analysis is the presence of temperature inhomogeneities in the gas (Sect.~\ref{sec:icm_inhom}). 
If a significant amount of cool gas is present, then a single temperature fit will be biased towards lower temperatures, which will in turn have an impact on the mass estimate. Usually, X-ray spectra are accumulated within annular regions and their spectral shape is fitted assuming that the gas temperature within the considered shell is uniform. Essentially all X-ray instruments used thus far for estimating ICM temperatures possess an effective area that peaks around 1~keV and declines steeply above $\sim5$ keV. This renders current X-ray telescopes intrinsically more sensitive to the gas in the temperature range $\approx1-3$ keV \citep{Mazzotta04,vikhlinin06b}. If the gas distribution within a given shell is multiphase, the X-ray spectra fitted assuming that the plasma is single-phase should in principle underestimate the mean gas-mass-weighted temperature. This effect can be enhanced in cases where the cooler gas phase coincides with infalling substructures, which feature an increased gas density with respect to their environment and thus contribute strongly to the total X-ray flux. The percentages listed above come from numerical simulations \citep{rasia14}, but estimates of the effect vary widely owing to differences in numerical schemes and input physics. For example, simulations with heat conduction always predict more homogeneous temperature distributions, minimising any bias, while simulations lacking non-gravitational input from supernovae and AGN predict long-lasting, dense cool cores that will strongly contribute to any bias. While most observational studies attempt to make allowances for temperature inhomogeneities by masking detected structures and taking the line-of-sight gradient into account with a spectral-like temperature weighting while deprojecting, there may remain an additional source of temperature inhomogeneities which current studies cannot pinpoint. \paragraph{\bf Gas clumping:} A further issue is gas clumping, i.e. 
deviations from an isotropic distribution induced by substructures, which can potentially bias measurements of the gas density, and thus the mass, when the X-ray signal is azimuthally averaged over concentric annuli. Current limits from X-ray observations \citep{eckert+15,morandi13,urban14,tchernin16} agree with the predictions from numerical simulations \citep[e.g.][]{roncarelli13}. Observational constraints on gas clumping are described in detail in the chapter of this series related to galaxy cluster outskirts \citep{wal19}. \paragraph{\bf Absolute X-ray calibration:} A final issue is the absolute calibration of X-ray instrumentation. In recent years, it has become apparent that ICM temperatures estimated with different X-ray instruments (in particular XMM-{\it Newton}\ and \emph{Chandra}) show a systematic offset \citep{nevalainen10,Mahdavi13,Martino14,donahue14,schellenberger15}. The observed differences result from the calibration of the effective area of the two telescopes, which is inconsistent at the 5--10\% level. In Fig.~\ref{fig:hse_systematics}, taken from \citet{schellenberger15}, we show a comparison between spectroscopic temperatures measured with XMM-{\it Newton}\ and \emph{Chandra} for the same regions. While for temperatures below 5 keV the offset between the two is small, at higher temperatures the measurements differ by 15--20\%. Hydrostatic masses estimated with X-ray data only are principally proportional to the gas temperature. To first approximation, the corresponding uncertainty propagates directly to the hydrostatic mass (for a mass at a fixed overdensity, the scaling is roughly $M \propto T^{1.5}$), although the hydrostatic mass also depends on the temperature gradient. In this context, we note that \citet{Martino14} actually found very good agreement (within 2\%) between hydrostatic masses estimated from {\it Chandra}\ and XMM-{\it Newton}. 
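The impact of such a calibration offset can be gauged from the approximate $T^{1.5}$ scaling quoted above. A sketch, assuming a fixed effective slope (the slope value is an approximation, and the gradient term is ignored):

```python
def mass_ratio_from_temperature(t_ratio, slope=1.5):
    """Mass ratio implied by a temperature calibration ratio, assuming
    M propto T^slope at fixed overdensity (an approximate scaling)."""
    return t_ratio**slope

# a 15-20% temperature offset translates to a ~23-31% mass offset
r15 = mass_ratio_from_temperature(1.15)
r20 = mass_ratio_from_temperature(1.20)
```

This illustrates why percent-level cross-calibration of effective areas matters for cluster cosmology: the mass offset is amplified relative to the temperature offset.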
\begin{figure} \includegraphics[width=1.025\columnwidth]{fig7.pdf} \hfill \caption{ICM gas temperatures measured with the three XMM-{\it Newton}\ instruments (MOS1, MOS2, and PN) plotted against temperatures measured with \emph{Chandra}/ACIS for the same regions. The black line is the one-to-one relation. Reproduced from \citet{schellenberger15}.} \label{fig:hse_systematics} \end{figure} One possible way of investigating the issue is to compare in a systematic way the spectroscopic X-ray temperatures with the temperatures estimated by combining the gas density with the pressure measured through the SZ effect, $\eta_T=T_{\rm X}n_{\rm X}/P_{\rm SZ}$. \citet{bourdin17} measured $\eta_T=1.02_{-0.03}^{+0.02}$ with a low-redshift XMM-{\it Newton}\ sample. A very similar value, $\eta_T=1.04\pm0.08$, was reported for the X-COP sample \citep{ghi18b}. \citet{adam17} performed a detailed comparison of temperatures in the hot cluster MACS\,J0717.5+3745 between XMM-{\it Newton}, \emph{Chandra} and \emph{NIKA}, and found that the joint X/SZ temperatures lie somewhat in between, $\eta_T\sim0.9$ and $\eta_T\sim1.1$ for XMM-{\it Newton}\ and \emph{Chandra}, respectively. The statistical quality of such comparisons is expected to increase substantially in the near future given the growing number of systems with available deep SZ data. \subsection{Weak lensing analysis} \label{ss:lensing} The deep potential well of a galaxy cluster weakly and coherently distorts the shapes of background galaxies through the differential deflection of light rays. A statistical treatment of the coherent distortion pattern allows us to measure cluster masses without assumptions about their physical nature or dynamical state. This is the well-known weak gravitational lensing (WL hereafter) effect, which has recently become extremely competitive as a means to estimate cluster masses. 
In general, wide-field cameras installed on large ground-based telescopes are the best instruments for weak-lensing analysis: e.g. Suprime-Cam and Hyper Suprime-Cam (HSC) on the Subaru telescope, MegaCam on the Canada-France-Hawaii Telescope, and the Dark Energy Camera (DECam) on the Victor M. Blanco 4-m Telescope. Specifically, large mirrors allow galaxies out to $z\sim1$ to be observed in short observing times, while wide field-of-view cameras cover clusters out to the virial radius with superb image quality. The advent of prime-focus cameras installed on large-mirror telescopes has enabled tremendous progress in weak-lensing analysis over the last decade. \subsubsection{Method} Images of background source galaxies are distorted by the tidal gravitational field of the cluster. This distortion is expressed by the complex shear, $\gamma\equiv \gamma_1 + i \gamma_2$. The complex shear is related to the convergence $\kappa$, through \begin{eqnarray} \gamma (\mbox{\boldmath $\theta$}) = \frac{1}{\pi}\int d^2 \mbox{\boldmath $\theta$}' D(\mbox{\boldmath $\theta$}-\mbox{\boldmath $\theta$}')\kappa(\mbox{\boldmath $\theta$}') \end{eqnarray} with \begin{eqnarray} D(\theta)=\frac{\theta_2^2-\theta_1^2-2i\theta_1\theta_2}{|\theta|^4}, \end{eqnarray} where \mbox{\boldmath $\theta$} is an apparent angular position. Here, the convergence $\kappa$ is the dimensionless projected mass density, given as \begin{eqnarray} \kappa(\mbox{\boldmath $\theta$})=\frac{\Sigma(\mbox{\boldmath $\theta$})}{\Sigma_{\rm cr}(z_{\rm l},z_{\rm s})}, \label{eq:kappa} \end{eqnarray} with the dimensional projected mass density $\Sigma$ and the critical surface mass density \begin{eqnarray} \Sigma_{\rm cr}(z_l,z_s)=\frac{c^2}{4\pi G}\frac{D_{\rm s}}{D_{\rm l}D_{\rm ls}}. 
\label{eq:sigma_cr} \end{eqnarray} Here $D_{\rm l}$ is the angular diameter distance to the lens, and $D_{\rm s}$ and $D_{\rm ls}$ are the angular diameter distances from the observer to the sources and from the lens to the sources, respectively. The complex ellipticity of individual galaxies is defined as \citep{Bartelmann01}, \begin{eqnarray} \varepsilon=\frac{Q_{11}-Q_{22}+2iQ_{12}}{Q_{11}+Q_{22}+2(Q_{11}Q_{22}-Q_{12}^2)^{1/2}} \end{eqnarray} \citep[e.g.][]{KSB,Okabe08,Kitching08,Oguri12,Heymans12,Miller13,Applegate14,Umetsu14, Hoekstra15,Okabe16b,DESWL16,Okura18} or \begin{eqnarray} \chi=\frac{Q_{11}-Q_{22}+2iQ_{12}}{Q_{11}+Q_{22}} \end{eqnarray} \citep[e.g.][]{Bernstein02,Hirata03,HSCWL1styr}, where $Q_{ij}$ is the quadrupole moment of the surface brightness. The observed ellipticities, $\varepsilon$ and $\chi$, are distorted by the gravitational lensing effect and, in the weak limit ($\kappa\ll 1$ and $\gamma\ll 1$), transform as $\varepsilon \rightarrow \varepsilon_s+g$ and $\chi\rightarrow \chi_s+2g$, where the subscript $s$ denotes the intrinsic (unlensed) ellipticity and $g$ is the reduced shear \begin{eqnarray} g=\frac{\gamma}{1-\kappa}. \end{eqnarray} Since the gravitational lensing signals in the central regions of galaxy clusters are relatively strong, one in general uses the reduced shear, $g$, rather than the shear $\gamma$ for cluster mass measurements. Assuming that orientations of intrinsic ellipticity are random ($\langle \varepsilon_s \rangle=0$ and $\langle \chi_s \rangle=0$ ), the gravitational lensing signal can be measured by an ensemble average of background galaxies, $\langle g \rangle \simeq \langle \gamma \rangle \simeq \langle \varepsilon \rangle \simeq \langle \chi \rangle /2$. The statistical uncertainty of the shear component, $\sigma_g \simeq (\langle \varepsilon_s^2 \rangle/N)^{1/2}$, decreases as the number of background galaxies, $N$, increases. Therefore, weak-lensing analysis requires a large number of background galaxies. 
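The averaging argument above can be illustrated with a minimal Monte Carlo sketch: drawing intrinsic ellipticities with random orientations, adding a constant reduced shear in the weak limit ($\varepsilon \rightarrow \varepsilon_s + g$), and verifying that the ensemble mean recovers $g$ with a statistical error $\sigma_g \simeq (\langle \varepsilon_s^2 \rangle/N)^{1/2}$. The per-component intrinsic dispersion of 0.28 is an illustrative assumption, not a value from the text.

```python
import math
import random

def simulate_mean_ellipticity(g1, g2, n_gal, sigma_int=0.28, seed=1):
    """Draw intrinsic ellipticities with random orientations, apply a
    constant reduced shear in the weak limit (eps -> eps_s + g), and
    return the ensemble-averaged ellipticity components."""
    rng = random.Random(seed)
    s1 = s2 = 0.0
    for _ in range(n_gal):
        # intrinsic ellipticity: Gaussian per component (random orientation)
        e1 = rng.gauss(0.0, sigma_int)
        e2 = rng.gauss(0.0, sigma_int)
        s1 += e1 + g1
        s2 += e2 + g2
    return s1 / n_gal, s2 / n_gal

# Expected statistical error per shear component: sigma_int / sqrt(N)
g1_hat, g2_hat = simulate_mean_ellipticity(0.03, -0.01, 200000)
sigma_g = 0.28 / math.sqrt(200000)
```

With $N = 2\times10^5$ galaxies, $\sigma_g \approx 6\times10^{-4}$, so a percent-level shear is recovered at high significance; shrinking $N$ degrades the recovery as $N^{-1/2}$.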
Weak-lensing observables do not provide direct estimates of three-dimensional masses of clusters because the lensing signal probes the two-dimensional projected mass distribution. One therefore estimates $M_\Delta$ by fitting a three-dimensional model to the data. For this purpose, a tangential distortion profile as a function of cluster-centric radius is widely used in weak-lensing mass measurements. This quantity is computed in a given annulus by azimuthally averaging the measured galaxy ellipticities. In recent studies, the dimensional quantity $\Delta \Sigma_+$ is widely used rather than the dimensionless $g_+$, thanks to recent improvements in photometric redshifts. The tangential components of reduced shear in the $i$-th radial bin are estimated as \begin{eqnarray} \langle \Delta \Sigma_+ \rangle (R_i) =\frac{\sum_n \Sigma_{{\rm cr},n} \varepsilon_{+,n}\,w_n}{\sum_n w_n}, \label{eq:DSimga_+} \end{eqnarray} where the subscript $n$ denotes the $n$-th galaxy located in the $i$-th annulus, spanning $R_1<R<R_2$, and $w_{n} \propto \Sigma_{{\rm cr},n}^{-2}$ is the weighting function accounting for the intrinsic ellipticity and shape measurement errors. Here we use the notation $R$ for projected radii and $r$ for three-dimensional radii. When $\Sigma_{\rm cr}$ is computed by integrating the full probability function, $P(z)$, $\Sigma_{\rm cr} \equiv \langle \Sigma_{{\rm cr}}^{-1}\rangle^{-1}$, where the bracket denotes the average over redshifts. The projected distance from a given cluster centre, $R_i$, is defined by the weighted harmonic mean radius of the sparsely distributed galaxies \begin{eqnarray} R_i=\frac{\sum_n w_n}{\sum_n w_n R_n^{-1}}, \end{eqnarray} \citep{Okabe16b}. 
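The two binned estimators above can be sketched as follows. This is a minimal illustration assuming weights $w_n \propto \Sigma_{{\rm cr},n}^{-2}$ only (the intrinsic-ellipticity and shape-error contributions to the weights are omitted for brevity); the function names are ours, not from any analysis pipeline.

```python
def delta_sigma_plus(eps_plus, sigma_cr):
    """Weighted estimator <Delta Sigma_+> = sum(Sigma_cr,n * eps_+,n * w_n) / sum(w_n),
    with w_n proportional to Sigma_cr,n^-2 (shape-noise terms omitted)."""
    w = [1.0 / sc**2 for sc in sigma_cr]
    num = sum(wi * sci * ei for wi, sci, ei in zip(w, sigma_cr, eps_plus))
    return num / sum(w)

def harmonic_mean_radius(radii, weights):
    """Weighted harmonic mean radius R_i = sum(w_n) / sum(w_n / R_n)."""
    return sum(weights) / sum(wi / r for wi, r in zip(weights, radii))
```

For equal $\Sigma_{\rm cr}$ the first estimator reduces to $\Sigma_{\rm cr}\langle\varepsilon_+\rangle$, as expected; the harmonic-mean radius down-weights the outer edge of an annulus relative to an arithmetic mean.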
When one corrects the measured values using the multiplicative calibration bias $m$ for individual galaxies \citep{Heymans06,Massey07}, the measured ensemble shear becomes $\Delta \Sigma_{+,i} \rightarrow \Delta \Sigma_{+,i}/(1+K_i)$, where the correction factor, $K$, is described by \begin{eqnarray} K_i=\frac{\sum_n m_n w_n}{\sum_n w_n}.\label{eq:Kcor} \end{eqnarray} When one computes the tangential shear profile in comoving coordinates, all the equations are computed with the conversion factors of $\Sigma_{\rm cr}^{\rm c}\equiv \Sigma_{\rm cr}(1+z_l)^{-2}$ and $R^{\rm c}\equiv (1+z_l) R$. Given the tangential shear profile, the log-likelihood is expressed by \begin{eqnarray} -2\ln {\mathcal L}&=&\ln(\det(C_{ij})) + \\ &&\sum_{i,j}(\Delta \Sigma_{+,i} - f_{{\rm model}}(R_i))C_{ij}^{-1} (\Delta \Sigma_{+,j} - f_{{\rm model}}(R_j)), \nonumber \label{eq:likelihood} \end{eqnarray} where the subscripts $i$ and $j$ denote the $i$-th and $j$-th radial bins. Here, $f_{\rm model}$ is the reduced shear prediction for a specific mass model. The covariance matrix, $C$, in Eqn.~\ref{eq:likelihood} is given by: \begin{eqnarray} C&=&C_g+C_s+C_{{\rm LSS}}+C_{\rm int},\label{eq:C_WL} \end{eqnarray} \citep[e.g.][]{Gruen15,Umetsu16,Miyatake18}. Here $C_g$, $C_s$, and $C_{\rm LSS}$ are the shape noise, the photometric redshift error, and the covariance matrix of uncorrelated large-scale structure (LSS) along the line-of-sight \citep{Schneider98}, respectively. 
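Evaluating the Gaussian log-likelihood above requires both the log-determinant and a quadratic form in the inverse covariance; both are conveniently obtained from a Cholesky factorisation $C = LL^{T}$. The following is a minimal, self-contained sketch (pure Python, small matrices only; the function names are ours):

```python
import math

def cholesky(C):
    """Lower-triangular L with C = L L^T, for a symmetric positive-definite C."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def solve_lower(L, b):
    """Forward substitution: solve L y = b."""
    y = [0.0] * len(b)
    for i in range(len(b)):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    return y

def neg2_log_like(data, model, C):
    """-2 ln L = ln det(C) + r^T C^{-1} r, with r = data - model."""
    r = [d - m for d, m in zip(data, model)]
    L = cholesky(C)
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(len(L)))
    y = solve_lower(L, r)  # y = L^{-1} r, so r^T C^{-1} r = y^T y
    return logdet + sum(v * v for v in y)
```

For a diagonal covariance this reduces to the familiar $\chi^2$ plus the sum of log-variances, which provides an easy sanity check.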
The covariance matrix of uncorrelated large-scale structure (LSS), $C_{\rm LSS}$, along the line-of-sight \citep{Schneider98} is given by \begin{eqnarray} C_{{\rm LSS},ij}=\Sigma_{\rm cr}^2(z_l,\langle z_s \rangle) \int \frac{ldl}{2\pi} C^{\kappa\kappa}(l) J_2(l R_i/D_l) J_2(l R_j/D_l), \label{eq:CovLSS} \end{eqnarray} where $J_2$ is the second-order Bessel function of the first kind \citep{Hoekstra03} and $C^{\kappa\kappa}(l)$ is the weak-lensing power spectrum, obtained by \begin{eqnarray} C^{\kappa\kappa}(l)= \frac{9H_0^2\Omega_m^2}{4c^4} \int^{\chi_s}_0 d\chi \left(\frac{\chi_s-\chi}{\chi_s a(\chi)}\right)^2 P_{\rm nl}(l/\chi;z), \end{eqnarray} \citep[e.g.][]{Schneider98,Hoekstra03}. Here, $\chi_s$ is the comoving distance for the source at the average source redshift, $\langle z_s \rangle$. The latter is calculated by $\mathcal L$ (Eqn. \ref{eq:Lz}) averaged over the radial range of the tangential shear profile. $P_{\rm nl}$ is the non-linear matter power spectrum \citep[e.g.][]{Smith03,Eisenstein98}. $C_{\rm int}$ accounts for the intrinsic variations of projected cluster mass profiles such as halo triaxiality and the presence of correlated haloes \citep{Gruen15}. The intrinsic covariance becomes a significant component of the uncertainty budget of WL mass measurements as the mass increases and the data quality improves. Since this component depends strongly on the assumed priors and realisations, its applicability and limitations for a given dataset should be considered carefully. Each paper generally specifies which components of the covariance matrix are included, and careful reading is needed to understand the assumptions underlying each analysis. 
The model for the dimensional reduced tangential shear, $f_{\rm model}$, is expressed by \begin{eqnarray} f_{\rm model}(R_i)=\frac{\bar{\Sigma}_{\rm model}(<R_i)-\Sigma_{\rm model}(R_i) }{1-{\mathcal L}_i \Sigma_{\rm model}(R_i)}, \label{eq:fmodel} \end{eqnarray} with \begin{eqnarray} \mathcal{L}=\frac{\sum_n \Sigma_{{\rm cr},n}^{-1} w_n}{\sum_n w_n}.\label{eq:Lz} \end{eqnarray} Here, $\bar{\Sigma}$ and $\Sigma$ are the averaged surface mass density within the radius and the local surface mass density at the radius, respectively. The average source redshift, $\langle z_s \rangle$, is calculated by $\mathcal L$. The denominator describes the non-linear correction in terms of the reduced tangential shear, and can also be expanded as $(1-{\mathcal L} \Sigma_{\rm model} )^{-1}\simeq 1 + {\mathcal L} \Sigma_{\rm model}$. \subsubsection{Sources of systematic uncertainty} A weak-lensing analysis is generally composed of four steps: shape measurement, estimation of photometric redshifts, selection of background galaxies, and mass modelling. Systematic errors inherent in these steps can be roughly summarised in percentage terms as follows: \\ \noindent\begin{minipage}{\columnwidth} \begin{itemize} \item accuracy of shape measurements \dotfill $\sim$ a few - $10$ \% \item accuracy of photometric redshifts \dotfill $\lesssim$ sub - a few \% \item background galaxies in shape catalogues \dotfill $\lesssim40$ \% \item mass modelling \dotfill $\sim 10$ \%. \end{itemize} \end{minipage} \\ The first three systematic errors depend on the technical details adopted in each paper and/or the data quality. The last is related to both the assumed mass models and intrinsic cluster physics, such as the distribution of internal structures, halo triaxiality, the outer slope of the dark matter halo, and the surrounding environment. A key effort in recent cluster weak-lensing studies is the control of these systematics. 
\paragraph{\bf Shape measurements:} Shape measurement methods can be categorised into several types: moment measurements in real space or Fourier space, model fitting through a maximum likelihood method or a Bayesian approach, and machine learning \citep[e.g.][]{KSB,Mandelbaum15}. The anisotropic PSF ellipticities can be decomposed into three components: optical aberration, atmospheric turbulence and chip misalignment \citep{Hamana13}, of which the optical aberration is the major contributor. The PSF anisotropy is corrected via the stellar ellipticity $\varepsilon^*$, where an asterisk denotes stellar objects. A good star and galaxy separation is essential in this procedure. Since both galaxies and stars are sparsely distributed over images, a model function of the distortion patterns, such as bi-polynomial functions, is computed by fitting the stellar ellipticities. However, the isotropic PSF correction cannot be tested with the imaging data themselves; thus, mock analyses of simulated images are essential to assess the reliability of shape measurement techniques for small, faint galaxies \citep[see][for details]{Heymans06,Massey07,Bridle10,Kitching12,Kitching13,Mandelbaum15,Mandelbaum17}. The STEP programme \citep{Heymans06,Massey07} introduced a formula to describe the accuracy of measurement pipelines, defined by \begin{eqnarray} g-g_{\rm input}=mg_{\rm input}+c \end{eqnarray} where $g$ and $g_{\rm input}$ are the measured and input shear, $m$ is a multiplicative bias and $c$ is a residual additive term. The multiplicative bias determined from such pipeline tests can then be used to correct the measured shear (Eqn. \ref{eq:Kcor}) if necessary. Potential systematic uncertainties can be examined using cross-correlations between maps derived from different quantities \citep[e.g.][]{Vikram15,Oguri18}: E-mode and B-mode maps with galaxy ellipticities, raw stellar ellipticities, residual stellar ellipticities, and foreground galaxies. 
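The STEP calibration relation $g - g_{\rm input} = m\,g_{\rm input} + c$ is a straight-line fit of the shear residual against the input shear. A minimal least-squares sketch (ours, not the STEP pipeline itself):

```python
def fit_bias(g_input, g_measured):
    """Least-squares fit of (g_meas - g_in) = m * g_in + c, returning the
    multiplicative bias m and additive bias c (STEP convention)."""
    n = len(g_input)
    resid = [gm - gi for gm, gi in zip(g_measured, g_input)]
    xbar = sum(g_input) / n
    ybar = sum(resid) / n
    sxx = sum((x - xbar) ** 2 for x in g_input)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(g_input, resid))
    m = sxy / sxx
    c = ybar - m * xbar
    return m, c
```

Applying this to a pipeline's measured shears over a grid of input shears directly yields the correction factor used in Eqn. \ref{eq:Kcor}.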
\paragraph{\bf Photometric redshifts:} It is important to estimate the redshifts of background galaxies because the lensing efficiency is proportional to $\beta\equiv D_{\rm ls}/{D_{\rm s}}$ (Eqns. \ref{eq:kappa} and \ref{eq:sigma_cr}) at a fixed cluster redshift ($z_l$). This quantity is significantly affected by changing source redshifts for objects at $z\gtrsim 0.4$. As it is not realistic to obtain spectroscopic redshifts for all background galaxies, photometric redshifts (photo-$z$) are always used. Typically, weak-lensing analysis of individual clusters uses two- or three-band imaging due to limitations in observing time. Such a minimal combination of bands cannot, in principle, be used to estimate photometric redshifts; instead, one determines them by matching the magnitudes of source galaxies to those in photometric redshift catalogues in the literature \citep[e.g.][]{Okabe10b,Oguri12,Hoekstra12,Applegate14,Hoekstra15,Okabe16b,Umetsu16}. To be more precise, the value of $\beta$ of the $i$-th background galaxy ($\beta_i$) or the average value ($\langle \beta \rangle$) in the target fields is computed by taking into account adequate weights assigned to the background galaxies. The most widely-used external photometric catalogue is from the COSMOS survey \citep{Ilbert13}, for which the limiting magnitude $i'\simeq 27.5$ is sufficiently deep for galaxies used in weak-lensing analysis. The COSMOS photometric redshifts, based on thirty bands, are well calibrated against spectroscopic values. Some papers also compute $\beta_i$ or $\langle \beta \rangle$ using the full probability function $P(z)$ from the COSMOS photometric catalogue. The photometric redshift distribution can also be computed from pointed observations with five- or four-band imaging \citep[e.g.][]{Applegate14,Ziparo16}, and wide area galaxy surveys, e.g. 
the Canada France Hawaii Telescope Legacy Survey \citep[CFHTLS;][]{CHFTPhotoz06}, the Hyper SuprimeCam Survey \citep[HSC-SSP;][]{Tanaka18}, the Dark Energy Survey \citep[DES;][]{DESPhotoz14,DESPhotoz18}, and the Kilo-Degree Survey \citep[KiDS;][]{KIDSPhozoz17}. The advantage of this method is that the photo-$z$ are estimated using the same data as those used for the shape measurements. An accurate characterisation of the true underlying redshift distribution of galaxies is one of the major challenges. The photo-$z$ estimations are roughly categorised into two types: template-fitting methods \citep[e.g.][]{Arnouts99,Ilbert13}, and machine-learning methods \citep[e.g.][]{ANNz,MLZ14}. The two approaches are complementary to each other \citep[e.g.][]{Salvato18}. To test the performance of photo-$z$ estimations, one can use a few standard metrics such as bias (a systematic offset between photometric and spectroscopic redshifts), scatter, and outlier fraction. Given five broad-band filters, a sub-percent level bias between photo- and spectro-$z$ is typically achieved \citep[e.g.][]{CHFTPhotoz06,Tanaka18, DESPhotoz14,DESPhotoz18, KIDSPhozoz17}, with a scatter of a few percent after $N\sigma$ clipping. Since weak-lensing analysis of galaxy clusters uses a large number of galaxies at $z\sim1$, the statistical uncertainty of the average photometric redshift is reduced by $\sim N^{-1/2}$, where $N\gtrsim \mathcal O(10^4)$ is the number of background galaxies. Therefore, the statistical uncertainty of cluster masses caused by photo-$z$ estimations is typically at the sub-percent level. Such a photo-$z$ uncertainty effect on cluster mass measurements can be considered in the error covariance matrix (Eqn. \ref{eq:C_WL}). 
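The lensing efficiency $\beta = D_{\rm ls}/D_{\rm s}$ discussed above can be sketched numerically: in a flat universe, comoving distances add, so $\beta = 1 - D_{\rm C}(z_l)/D_{\rm C}(z_s)$. The sketch below assumes a flat $\Lambda$CDM cosmology with illustrative parameters ($H_0 = 70$ km/s/Mpc, $\Omega_m = 0.3$); these values and the function names are our assumptions, not taken from the text.

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, H0=70.0, Om=0.3, n_steps=2000):
    """Flat LCDM comoving distance [Mpc], by trapezoidal integration of
    dz / E(z), with E(z) = sqrt(Om (1+z)^3 + OL)."""
    OL = 1.0 - Om
    f = lambda zz: 1.0 / math.sqrt(Om * (1.0 + zz) ** 3 + OL)
    dz = z / n_steps
    s = 0.5 * (f(0.0) + f(z)) + sum(f(i * dz) for i in range(1, n_steps))
    return (C_KM_S / H0) * s * dz

def lensing_efficiency(z_l, z_s, **cosmo):
    """beta = D_ls / D_s = 1 - D_C(z_l) / D_C(z_s) in a flat universe."""
    return 1.0 - comoving_distance(z_l, **cosmo) / comoving_distance(z_s, **cosmo)
```

For a lens at $z_l = 0.2$, $\beta$ rises steeply with source redshift and then flattens, which is why the mass calibration is sensitive to errors in the source redshift distribution mainly for sources near the cluster redshift.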
\paragraph{\bf Background Galaxy Selection and dilution effects:} Contamination of background catalogues by unlensed galaxies leads to an underestimate of the weak-lensing signal, and thus it is of critical importance to obtain a secure background catalogue. The main source of contamination by unlensed galaxies is faint galaxies belonging to the cluster, rather than foreground objects \citep{Broadhurst05}. Since the number density of cluster galaxies increases with decreasing projected cluster-centric radius, the ratio of cluster galaxies to background galaxies, $f_{\rm mem}$, also increases towards the centre, thereby diluting the shear signal more at smaller radii than at larger radii and resulting in a significant underestimate of the concentration parameter of the universal mass density profile. This is often referred to as a dilution effect \citep[e.g.][]{Broadhurst05,Umetsu10,Umetsu14,Umetsu15,Okabe10b,Okabe13,Okabe16b,Medezinski10,Medezinski17}. The number of cluster members increases as cluster richness increases, while the ratio of cluster member galaxies to background galaxies decreases as cluster redshift decreases \citep{Okabe16}. Therefore, the dilution effect is a redshift-, richness-, and radially-dependent phenomenon. \citet{Okabe16b} have shown that dilution of lensing signals for massive clusters at $z\sim0.2$ can reach up to $\sim40\%$ at small radii, which is significantly larger than the systematic errors of shape measurements and photometric redshifts. Therefore, background selection is the dominant source of systematic bias in weak-lensing measurements of galaxy clusters. Corrections for dilution can take the form of the so-called `boost factor correction', which attempts to correct lensing signals for a number density excess, $(1+n_{\rm non-bkg}/n_{\rm bkg})$, under the assumption of a radially uniform distribution of background galaxies \citep[e.g.][]{Applegate14,Hoekstra15,Melchior17}. 
However, the assumption of a flat observed number density profile of background galaxies ignores magnification bias \citep[e.g.][]{Broadhurst95,Umetsu14} -- i.e.\ the depletion of the number density of background galaxies at small radii due to lensing magnification. In addition, as the boost factor correction and the concentration parameter are highly degenerate at small radii, this approach cannot constrain the overall mass profiles of galaxy clusters. Another approach is to obtain a pure background source catalogue using colour information \citep[e.g.][]{Okabe08,Umetsu14,Umetsu15,Okabe10b,Okabe13,Okabe16b,Medezinski10,Medezinski18} or photometric redshifts \citep{Applegate14,Medezinski18}. The basic idea is to select a colour-space region in which the contribution from cluster member galaxies is negligible, by monitoring the consistency of the information from colour, lensing signal, and the external photometric redshift catalogue. The advantages of this technique are that consistency between the different datasets of galaxy colour, lensing information and photometric redshifts can be assessed; that the purity of the background sample can be quantitatively controlled; and that no assumption about specific cluster mass models or the radial distribution of background galaxies is required. \citet{Okabe16} have shown that lensing signals corrected by the boost factor, under the assumption of a uniform background distribution, are significantly underestimated compared to those derived from the pure background catalogue. A final method is to use photometric redshifts directly computed from the same dataset, simply selecting with the criterion $z_s>z_{\rm min}$. Here $z_{\rm min}$ is the minimum redshift defined by the authors. 
With the full probability function, $P(z)$, background galaxies can also be defined as \begin{eqnarray} p_0 < \int_{z_{\rm min}}^\infty P(z) dz, \end{eqnarray} where the condition requires that the probability of lying beyond $z_{\rm min}$ exceeds the threshold for background selection, $p_0$ \citep[e.g.][]{Heymans12,Applegate14,Medezinski18}. \paragraph{\bf Mass modelling of the tangential shear profile:} Given mass models, such as NFW (Eqn. \ref{eq:rho_nfw}) and Einasto (Eqn. \ref{eq:Einasto}), one can analytically or numerically compute the local $\Sigma(R)$ and the averaged surface mass density within a radius, $\bar{\Sigma}(<R)$, by integration of the three-dimensional mass density along the line-of-sight. The NFW form has an analytic expression for $\Sigma$ and $\bar{\Sigma}$ \citep{Bartelmann96}, while the other models mentioned above require numerical integration. In addition to the cluster halo model, the projected mass density of the outer density profiles (i.e. a two-halo term) can also be considered when tangential shear profiles extend far into the outskirts of galaxy clusters \citep{Oguri11a,Oguri11b}. Such a two-halo term is sometimes shown in stacked lensing profiles. The haloes of real clusters are not perfectly spherical, but have many subhaloes and a triaxial structure. Lensing-projection bias caused by such intrinsic properties induces bias and scatter into weak-lensing mass measurements. For instance, if the major axis of a triaxial halo is aligned along the line-of-sight, this leads to an overestimate of the halo concentration \citep{Oguri04b}. The presence of massive subhaloes enhances the local surface mass density and consequently leads to an underestimate of the tangential shear \citep{Okabe14a}. Since both the angular resolution and the signal-to-noise ratio of a tangential shear profile of an individual cluster are relatively low, it is very difficult to uncover all the internal properties through lensing information alone. 
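The analytic NFW expressions for $\Sigma$ and $\bar{\Sigma}$ mentioned above \citep{Bartelmann96} can be sketched as follows, using the standard closed forms (as given, e.g., by Wright \& Brainerd 2000) in units of $r_s$ and $\rho_s$. The difference $\bar{\Sigma}(<R)-\Sigma(R)$ is precisely the numerator of $f_{\rm model}$ in Eqn. \ref{eq:fmodel}. The implementation is a sketch assuming that parameterisation of the NFW profile:

```python
import math

def sigma_nfw(R, rs, rho_s):
    """Local projected NFW surface mass density Sigma(R)."""
    x = R / rs
    if abs(x - 1.0) < 1e-6:
        f = 1.0 / 3.0
    elif x < 1.0:
        f = (1.0 - 2.0 / math.sqrt(1.0 - x * x)
             * math.atanh(math.sqrt((1.0 - x) / (1.0 + x)))) / (x * x - 1.0)
    else:
        f = (1.0 - 2.0 / math.sqrt(x * x - 1.0)
             * math.atan(math.sqrt((x - 1.0) / (x + 1.0)))) / (x * x - 1.0)
    return 2.0 * rs * rho_s * f

def mean_sigma_nfw(R, rs, rho_s):
    """Mean surface density within R; Delta Sigma = mean - local."""
    x = R / rs
    if abs(x - 1.0) < 1e-6:
        h = 1.0 + math.log(0.5)
    elif x < 1.0:
        h = math.log(0.5 * x) + math.acosh(1.0 / x) / math.sqrt(1.0 - x * x)
    else:
        h = math.log(0.5 * x) + math.acos(1.0 / x) / math.sqrt(x * x - 1.0)
    return 4.0 * rs * rho_s * h / (x * x)
```

A useful self-check is that numerically integrating $2\pi\!\int_0^R \Sigma(R')R'\,dR' / (\pi R^2)$ reproduces `mean_sigma_nfw`, confirming consistency of the two closed forms.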
\citet{Becker11} have estimated weak-lensing masses using the tangential shear profile of simulated haloes considering the shape noise only, and found that the bias and scatter in $M_{500}$ for massive clusters are $\sim -5\%$ and $\sim 30\%$, respectively. \citet{Oguri11b} have shown using numerical simulations that weak-lensing masses, $M_{\rm vir}$, are underestimated by up to $5-10\%$, and that the choice of the outer boundary for fitting affects mass estimates. \citet{Meneghetti10} have compared weak-lensing masses at three overdensities of $\Delta=2500, 500$ and $200$ with the input true mass from numerical simulations, and found that the mean masses agree with the input value but there is $\sim16\%$ scatter among realisations. \citet{Okabe16b} have shown, based on a method to adaptively choose the radial ranges for fitting, that the geometric mean of WL masses agrees with the input masses for massive clusters with $\sim5\%$ scatter. Since the assumed set-up parameters for observing conditions, such as cluster mass ranges and redshifts, the number density of background galaxies, and the fitting method, are all different in the literature, it is difficult to quantitatively compare results. \begin{figure*} \begin{center} \includegraphics[width=0.7\textwidth]{fig8.pdf} \caption{Weak lensing total mass comparisons at $\Delta=500$ for LoCuSS \citep{Okabe16b}, CCCP \citep{Hoekstra15}, WtG \citep{Applegate14}, and CLASH \citep{Umetsu16}.} \label{fig:Mwl_com} \end{center} \end{figure*} \section{Recent advances} \label{sec:stateoftheart} In this Section, we discuss recent advances in lensing and X-ray methods in addressing the various outstanding questions and problems outlined above. We also discuss recent mass measurement comparisons between methods. 
\subsection{Lensing} \subsubsection{Results from new samples} In lensing, possibly the most significant recent advance is the ready availability of mass measurements and profile shape parameters for moderately-large samples (many 10s) of objects. In this context, weak-lensing mass measurements for individual clusters have been carried out by several projects, e.g. the Local Cluster Substructure Survey \citep[LoCuSS;][]{Okabe10b,Okabe13,Okabe16b}, the Canadian Cluster Comparison Project \citep[CCCP;][]{Hoekstra12,Hoekstra15}, the Cluster Lensing And Supernova survey with Hubble \citep[CLASH;][]{Merten14,Umetsu14,Umetsu16}, and Weighing the Giants \citep[WtG;][]{vonderLinden14a,Kelly14,Applegate14}. The LoCuSS project presented Subaru weak-lensing mass measurements of 50 clusters, selected by X-ray luminosity from the RASS, in the redshift range of $0.15-0.3$. The CCCP project compiled CFHT data of 50 clusters at redshifts $0.15 < z < 0.55$; 30 out of 50 clusters were selected to have {\it ASCA} X-ray temperatures of $kT > 5$ keV. CLASH presented results for a sample of 16 X-ray-regular and 4 high magnification galaxy clusters at $0.19<z<0.69$, combined with Subaru and {\it HST} data. WtG carried out Subaru weak-lensing analysis for 51 of the most X-ray luminous galaxy clusters at $0.15<z<0.7$. The weak-lensing analysis philosophies of the four projects exhibit some notable differences, as summarised in Table \ref{tab:comp}. The LoCuSS project \citep{Okabe16b} made a pure background catalogue by checking the consistency between colour, lensing strength and photo-$z$ in the colour-magnitude plane. They treated mass and concentration as free parameters and carried out tangential shear fitting with various combinations of radial ranges and numbers of bins, choosing a set close to the average mass and concentration, because the sparse distribution of background galaxies and intrinsic cluster properties such as substructures might affect mass estimates. 
The CCCP project \citep{Hoekstra15} selected background galaxies in colour-magnitude planes, adopted a boost factor correction, and restricted the fit to $0.5-2h_{70}^{-1}$ Mpc to minimise lensing bias \citep{Becker11}. They assumed the mass-concentration relation of \citet{Dutton14} because of the radial range of the fit. The uncertainty in the determination of photometric redshifts is the largest source of systematic error for their mass estimates. The CLASH project \citep{Umetsu16} selected background galaxies in the colour-colour plane and combined information on the tangential shear profile, the magnification bias, and the projected mass estimated by {\it HST} strong lensing for the mass measurements. They did not employ a boost factor to compensate for contamination of their background galaxy catalogues. The halo concentration for the NFW model was treated as a free parameter. Their covariance error matrix is composed of the shape noise, photo-$z$ error, uncorrelated LSS lensing, and the intrinsic scatter. The WtG \citep{Applegate14} selected background galaxies in the colour-magnitude plane and corrected tangential shear profiles ($0.75-3h_{70}^{-1}\,{\rm Mpc}$) with a boost factor profile using priors from the X-ray gas distribution of \citet{Mantz10}. They assumed a concentration parameter of $c_{200}=4$. The uncertainty of mean source redshifts is negligible in their analysis. They also selected background galaxies using the full probability function of the photometric redshifts for a subsample of clusters with five-band imaging. \begin{table*} \caption{Summary of weak-lensing analysis methods of LoCuSS \citep{Okabe16b}, CCCP \citep{Hoekstra15}, CLASH \citep{Umetsu16} and WtG \citep{Applegate14}. Column {\it Method} denotes either tangential shear fitting ($g_+$), or joint fitting using tangential shear profiles, strong-lens and the magnification bias (SL, $g_+$ \& $\mu$). 
Column {\it Calibration factor} is the shear-calibration factor, with `Yes' indicating that such a factor was applied to the shear signal before fitting mass models, and `No' indicating otherwise. Column {\it Boost factor} is the number-density correction factor accounting for imperfect background selection -- Yes/No indicates whether or not this factor was calculated and applied to the data. Column {\it Radial bins} gives the choice of radial binning scheme for the fitting of the shear profile. $c_\Delta$ states whether the concentration parameter was a free parameter in the fit, or fixed, or scaling with the mass. Noise denotes the treatment of the covariance matrix in the fitting (Eqn. \ref{eq:CovLSS}). } \label{tab:comp} \begin{center} {\small \begin{tabular}{ccccccc} \hline \hline Name & {\small Method} & {\small Calibration} & {\small Boost} & {\small Radial} & $c_\Delta$ & Noise\\ & & {\small factor} & {\small factor} & {\small bins} & & \\ \hline LoCuSS & $g_+$ & No/Yes & No & Adaptive & Free & $C_g+C_s+C_{\rm LSS}$\\ CCCP & $g_+$ & Yes & Yes & Fixed & Scaling & $C_g+C_s$\\ CLASH & SL, $g_+$ \& $\mu$ & Yes & No & Fixed & Free & $C_g+C_s+C_{\rm LSS}+C_{\rm int}$\\ WtG & $g_+$ & Yes & Yes & Fixed & Fixed & $C_g$\\ \hline \end{tabular} } \end{center} \end{table*} Comparison of the cluster mass measurements between these different surveys is of paramount importance for cluster cosmology experiments. Mass comparisons can be expressed in terms of the geometric mean $\exp\left(\langle \ln(Y/X)\rangle\right)$, or by fitting with the lognormal quantities ($\ln X$ and $\ln Y$), because the two quantities are interchangeable. \citet{Hoekstra15}, \citet{Umetsu16}, and \citet{Okabe16b} found that the latest weak-lensing masses of CCCP, CLASH and LoCuSS are in excellent agreement, within $\sim5\%$, and that the WtG masses are somewhat larger than the others ($\sim10-15\%$). A comparison of $M_{500}$ is shown in Figure \ref{fig:Mwl_com}. 
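The geometric-mean comparison statistic used above is simple to compute: averaging $\ln(Y/X)$ over the common clusters and exponentiating yields the typical mass ratio between two surveys. A minimal sketch (the masses shown in the test are made-up illustrative numbers, not survey values):

```python
import math

def geometric_mean_ratio(masses_x, masses_y):
    """exp(<ln(Y/X)>): the geometric-mean mass ratio between two surveys
    measured over the same clusters. Equivalent to fitting an offset in
    the lognormal quantities ln X and ln Y."""
    logs = [math.log(y / x) for x, y in zip(masses_x, masses_y)]
    return math.exp(sum(logs) / len(logs))
```

The geometric mean is preferred over a simple arithmetic mean of ratios because it is symmetric under swapping $X$ and $Y$ (the ratio simply inverts), matching the lognormal treatment of mass errors.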
All the projects use only two- or three-band imaging; nevertheless, weak-lensing masses estimated from different methods show good agreement, which is promising for future weak-lensing surveys of galaxy clusters. \citet{Okabe16b} pointed out that the mass discrepancy between WtG and the others is caused by a shallow number density profile for the boost factor. They found a number density excess in the boost-factor profile even outside $R_{200}$, which may incorrectly enhance lensing signals and overestimate cluster masses. \begin{figure*} \includegraphics[width=0.425\textwidth]{fig9a.pdf} \hfill \includegraphics[width=0.5375\textwidth]{fig9b.pdf} \caption{({\it Left}): The observed distribution of the concentration parameters $c_{\rm 200}$ as a function of the cluster masses $M_{\rm 200}$ for 50 clusters \citep{Okabe16b}. The errors denote 68\% confidence intervals. The thick and thin lines (red) are the best-fit function and the errors, respectively. The dashed blue, dotted green and dotted-dashed magenta lines are the mean mass-concentration relation from recent numerical simulations of \citet{Bhattacharya13}, \citet{Diemer14} and \citet{Meneghetti14} at $z_l=0.23$, respectively. ({\it Right}): constraints on the mass-concentration relation for shear-selected clusters \citep{Miyazaki18}. The open and filled circles denote the halo concentration computed with and without the dilution effect, respectively. The filled triangle shows the results for 16 X-ray-selected clusters at an average redshift of $0.34$ obtained from a strong and WL analysis of \citet{Umetsu17}. The filled square is from \citet{Okabe16b}, estimated from 50 X-ray luminous (LoCuSS) clusters at redshifts between 0.15 and 0.3. The filled diamond shows the results for a sample of four strong-lensing selected superlens clusters at an average redshift of $0.32$ from a strong and WL analysis of \citet{Umetsu11}. 
} \label{fig:MC} \end{figure*} \subsubsection{Mass and concentration} \paragraph{\bf NFW models} A weak-lensing study is a powerful direct way to constrain the mass-concentration relation, because tangential shear profiles computed from wide-field data easily cover the entire radial extent of galaxy clusters, in contrast to X-ray observations which typically cover out to $\sim R_{500}$. The purity of background galaxies in shape catalogues is the most important issue for studies of the mass-concentration relation. \citet{Okabe13} have shown that the concentration parameter is significantly underestimated by the inclusion of unlensed cluster member galaxies in a shape catalogue. The contamination from member galaxies should be kept at the $1\%$ level; otherwise, the concentration parameter is underestimated \citep{Okabe10b}. The CLASH project \citep{Umetsu14,Merten15} has shown, through a joint shear and magnification study and a strong- and weak-lensing study, that the concentration for 20 X-ray clusters at $z\sim0.35$ is in good agreement with a recent prediction \citep{Meneghetti14}. \citet{Okabe16b} have found that the mass-concentration relation for 50 X-ray selected clusters at $z\sim0.23$ is in good agreement with those of three independent numerical simulations (the left panel of Figure \ref{fig:MC}). A fitting formula for the mass-concentration relation should take account of the correlation between the errors on concentration and mass by calculating the error covariance matrix. The intrinsic scatter of halo concentration could be considered if necessary. \citet{Cibirka17} have carried out a stacked lensing analysis for 27 richness-selected galaxy clusters at $z\sim0.5$ and found good agreement with expectations for shape and evolution. 
\citet{Miyazaki18} have discovered 67 galaxy clusters through peak-finding in weak-lensing mass maps reconstructed from the high number density of background galaxies ($n_g\sim25$\,[arcmin$^{-2}$]) of the HSC-SSP survey \citep{HSC1styr}. The clusters in the resulting catalogue are referred to as `shear-selected clusters', and represent one of the first applications to the HSC of this potentially powerful selection method, which is complementary to X-ray, SZ and optical selection. They have carried out a stacked lensing analysis and found that the halo concentration for shear-selected clusters agrees well with that for X-ray selected clusters. This indicates that shear-selected clusters are less biased by halo orientation, in contrast to the high concentration parameters found for strong-lensing selected clusters \citep[e.g.][]{Broadhurst05,Umetsu11}. Current observational studies probe the relation between mass and concentration only in narrow redshift ranges. This is purely a consequence of dataset limitations. On-going and future optical surveys such as the DES \citep[][]{DES16}, HSC-SSP \citep[][]{HSC1styr} and the Large Synoptic Survey Telescope \citep[LSST;][]{Ivezic08} will detect large samples of galaxy clusters in wide mass and redshift ranges, and will enable us to constrain the redshift evolution of the mass-concentration relation. Moreover, sample selection biases will be investigated in detail. \begin{figure*} \includegraphics[width=0.525\hsize]{fig10a.pdf} \hfill \includegraphics[width=0.42\hsize]{fig10b.pdf} \caption{({\it Left}): Comparison of models to the ensemble-averaged surface mass density $\Sigma(R)$ \citep{Umetsu16} (black squares) obtained for 16 X-ray-selected clusters. Models for which the probability $p$ corresponding to the $\chi^2$ value is higher than 0.05 are shown with solid lines, while those with $p<0.05$ are shown with dashed lines. The blue, red and magenta solid curves show the best-fit NFW, NFW plus two-halo term, and Einasto profiles, respectively.
The lower panel shows the deviations in units of $\sigma$ of the best-fit profiles with respect to the observed $\Sigma$ profile. ({\it Right}): $\alpha$-mass relation \citep{Okabe16b}. The cross denotes the best-fit parameters and the contours show the 68.3\%, 95.4\%, and 99.7\% confidence levels. Blue dashed and green dotted lines are from \citet{Dutton14} and \citet{Gao08}, respectively. } \label{fig:Einasto} \end{figure*} \paragraph{\bf Einasto models} As a next step for mass modelling, one aims to measure the shape parameter, $\alpha$, of the \citet{Einasto65} profile (Eqn. \ref{eq:Einasto}), which describes the spherically averaged mass density profile of simulated haloes better than the NFW profile \citep{Navarro04}. Since it is very difficult to distinguish between the NFW and Einasto profiles with the tangential profiles of individual clusters, one in general adopts the NFW model for individual mass measurements. On the other hand, a stacked lensing profile \citep[e.g.][]{Okabe10b,Okabe13,Umetsu14,Umetsu16,Umetsu17,Okabe16b,Cibirka17,Miyazaki18} is a powerful route to constraining the average mass density profile. First, the average distortion or projected mass profiles are less sensitive to internal substructures and the asphericity of the individual cluster mass distributions, and also to uncorrelated large-scale structure along the same line-of-sight. This is because these structures are averaged out via the stacking, under the assumption that the universe is statistically homogeneous and isotropic. Second, stacking procedures improve the signal-to-noise ratio of lensing profiles. Since the lensing signals at larger radii can then be detected, one usually models a main halo plus a two-halo term. \citet{Umetsu16} have computed the stacked $\Sigma$ profile for 16 X-ray selected clusters and constrained $\alpha=0.232_{-0.038}^{+0.042}$ (left panel of Figure \ref{fig:Einasto}).
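The distinction between the two profile families can be made concrete with a short numerical sketch: both are normalised so that the logarithmic density slope equals $-2$ at the scale radius, and they differ mainly in how the slope varies away from it. The parameter values below are illustrative, not fits to any dataset.

```python
import numpy as np

def rho_nfw(r, rho_s=1.0, r_s=1.0):
    """NFW profile: rho_s / [(r/r_s) (1 + r/r_s)^2]."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def rho_einasto(r, rho_2=1.0, r_2=1.0, alpha=0.23):
    """Einasto profile: rho_-2 * exp(-(2/alpha) [(r/r_-2)^alpha - 1])."""
    x = r / r_2
    return rho_2 * np.exp(-2.0 / alpha * (x ** alpha - 1.0))

def log_slope(profile, r, eps=1e-4):
    """Numerical logarithmic slope d ln(rho) / d ln(r)."""
    return (np.log(profile(r * (1 + eps))) -
            np.log(profile(r * (1 - eps)))) / (2 * eps)

# Both profiles have slope -2 at their scale radius; the Einasto
# profile rolls over more gradually towards the centre:
for r in (0.03, 1.0, 30.0):
    print(f"r = {r:5.2f}: NFW slope = {log_slope(rho_nfw, r):+.2f}, "
          f"Einasto slope = {log_slope(rho_einasto, r):+.2f}")
```

Because the slopes differ appreciably only well inside the scale radius, where lensing constraints on individual clusters are weakest, the two profiles are hard to distinguish without stacking, as noted in the text.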
\citet{Okabe16b} have compared the relation between the shape parameter and mass for 50 X-ray selected clusters with predictions of numerical simulations \citep{Dutton14,Gao08}, and found that they are in agreement with each other (right panel of Figure \ref{fig:Einasto}). More precise observational constraints on the density profile shape of clusters, including on the mass dependence of the Einasto profile parameters, await larger cluster samples from on-going or future surveys. \subsection{X-rays and hybrid SZ} \begin{figure*} \includegraphics[bb=260 0 530 207, clip,scale=1.,width=0.55\textwidth, keepaspectratio]{fig11a.pdf} \hfill \includegraphics[width=0.445\textwidth, keepaspectratio]{fig11b.pdf} \caption{{\it Left:} Mass profiles for a sample of five clusters at $z\sim1$ derived by \citet{bartalucci18} using XMM-{\it Newton}\ and {\it Chandra}\ data (coloured lines), compared to local REXCESS mass profiles (grey lines). {\it Right:} Relative errors (median, and 1st and 3rd quartiles) on the mass (red dots) and NFW concentration (black diamonds) estimated in the following studies: P05 \citep{pointecouteau05}, V06 \citep{vik06}, E10 \citep{ettori10}, A16 \citep{amodeo16}, E18 \citep{ettori18}, B18 \citep{bartalucci18}. } \label{fig:Mz} \end{figure*} \subsubsection{Hydrostatic mass and mass profiles } The launch of XMM-{\it Newton}\ and {\it Chandra} opened the way to precise spatially resolved X-ray spectroscopy, enabling measurement of both the gas density and the temperature profiles, and thus the total mass profile using the HE equation. \citet{pointecouteau05} and \citet{vik06}, using respectively XMM-{\it Newton}\ and {\it Chandra}, measured high-precision mass profiles for small samples ($\sim 10$) of local ($z<0.15$) relaxed clusters ($M_{\Delta}>10^{14} \mathrel{M_\odot}$).
\citet{buo07} extended this work into the group regime \citep[see also][]{gas07}, while \citet{ettori10} studied a larger sample, albeit with lower precision, using the XMM-{\it Newton}\ archive (44 clusters at $z<0.3$). The consistent picture that emerges from these observations is that the dark matter profile is indeed cuspy. Fits with parametric models usually reject profiles with a finite core or are inconclusive \citep[see also][]{buo04,voi06}. Generally, self-similarity of shape is also evident from all techniques, although there is no quantitative assessment of the intrinsic scatter. All quantitative tests of $\Lambda$CDM predictions are based on parametric profile fitting with the NFW profile. X-ray determinations of the $c-M$ relation are consistent with theoretical predictions, and have even been used to provide independent constraints on $\Omega_{m}$ and $\sigma_8$ \citep{buo07,ettori10}. When constraints can be put on more general profiles, such as the generalised NFW or Einasto profiles \citep[e.g.][]{voi06,man16}, the central logarithmic slope has been found to be consistent with unity, i.e. with an NFW profile. More recent studies have pushed the measurements for relaxed clusters to higher redshifts, e.g. the studies of \citet[][34 relaxed clusters at $0.06<z<0.7$]{sch07}, \citet[][40 relaxed objects at $0.1 < z < 1.1$]{man16}, or \citet[][an archival sample at $0.4<z<1.2$]{amodeo16}, but the individual mass profiles in these studies generally have large uncertainties, particularly at the highest redshifts. The evolution factor of the corresponding $c-M$ relations, expressed as $(1+z)^\alpha$, is consistent with theoretical expectations, but with large uncertainties ($\alpha = 0.71\pm0.52$, and $\alpha= 0.12 \pm 0.61$, respectively). 
\citet[][see Fig.~\ref{fig:Mz}]{bartalucci18} have recently reconstructed the hydrostatic mass profiles of the five most massive ($M_{500} > 5 \times 10^{14} M_{\odot}$) SZ-selected clusters at high redshift ($z\sim1$), combining deep observations from XMM-{\it Newton}\ and {\it Chandra}. Using both forward and backward methods, they investigated halo shape parameters such as sparsity and concentration, measured to high accuracy. Comparing to local clusters, they found hints of evolution in the central regions (or of selection effects). The total baryonic content is in agreement with the cosmic value at $R_{500}$. Comparison with numerical simulations shows that the mass distribution and concentration are in line with expectations. \citet{bartalucci18} also investigated the sparsity of their sample, finding good agreement with expectations \citep[see also][]{corasaniti18}. Typical uncertainties on the NFW concentration as a function of redshift are illustrated in the right hand panel of Fig.~\ref{fig:Mz}. \subsubsection{New results from combination with SZ} As detailed in Sect.~\ref{sec:xraymethod}, a recent observational development is the ready availability of spatially-resolved SZ electron pressure profiles, which can be obtained from geometrical deprojection of the azimuthally-averaged integrated Comptonization parameter. The power of the SZ effect is that it directly measures the line-of-sight pressure. However, measurement of other key thermodynamic quantities such as temperature and entropy requires access to the gas density. This is trivial to obtain from X-ray imaging. Previous studies \citep[e.g.][]{bas10,planck13} have been limited to a few massive local systems due to the intrinsic faintness of the SZ signal and the 1-2 orders of magnitude difference in angular resolution between X-ray and SZ observations.
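The X-ray/SZ combination rests on the HE equation, $M_{\rm HE}(<r) = -r^2/(G\rho_{\rm gas})\,{\rm d}P/{\rm d}r$, with the density from X-ray imaging and the pressure from SZ deprojection. A minimal sketch, using toy profile shapes and placeholder parameter values (not fits to any cluster), is:

```python
# Hydrostatic mass from gas density (X-ray) and pressure (SZ):
#   M_HE(<r) = -r^2 / (G * rho_gas(r)) * dP/dr
# Toy profile shapes with placeholder parameters, in cgs units.

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33     # solar mass [g]

def rho_gas(r, rho0=1e-26, rc=3e23, beta=0.65):
    """Toy beta-model gas density [g cm^-3]; r in cm."""
    return rho0 * (1.0 + (r / rc) ** 2) ** (-1.5 * beta)

def pressure(r, p0=1e-10, rp=6e23, gamma=2.5):
    """Toy pressure profile [erg cm^-3]; r in cm."""
    return p0 * (1.0 + (r / rp) ** 2) ** (-gamma / 2.0)

def m_he(r, eps=1e-4):
    """Hydrostatic mass within r [Msun], using a numerical dP/dr."""
    dpdr = (pressure(r * (1 + eps)) - pressure(r * (1 - eps))) / (2 * eps * r)
    return -r ** 2 / (G * rho_gas(r)) * dpdr / MSUN

r = 4e24  # ~1.3 Mpc in cm, a typical R500 scale (illustrative)
print(f"M_HE(<r) ~ {m_he(r):.1e} Msun")
```

With these placeholder profiles the enclosed mass comes out at a cluster-like $\sim10^{15}\,M_{\odot}$; in a real analysis the density and pressure would of course be the deprojected observational profiles, with uncertainties propagated.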
\begin{figure*} \includegraphics[width=0.435\textwidth, keepaspectratio]{fig12a.pdf} \hfill \includegraphics[bb=25 0 525 395 ,clip,scale=1.,width=0.53\textwidth, keepaspectratio]{fig12b.pdf} \caption{{\it Left:} Relative statistical errors on the hydrostatic masses measured at $R_{200}$ in the X-COP sample from \citet{ettori18}. {\it Right:} HE mass profile of PSZ2\,G$144.83+25.11$ at redshift $z = 0.58$, derived from XMM-{\it Newton}\ density and temperature profiles (triangle), compared to the mass profile derived from the XMM-{\it Newton}\ density combined with the NIKA2 pressure profile (dark blue region). Figure from \citet{rup18}. } \label{fig:MSZ} \end{figure*} For local systems ($z<0.2$), in the most recent works published on the hydrostatic mass, improved data quality and refined analysis techniques have made it possible to reach statistical uncertainties on the reconstructed mass of about 10\% at $R_{200}$ (see Fig.~\ref{fig:MSZ}). In nearby ($z<0.1$) massive systems, the X-COP collaboration \citep[e.g.][]{ghi18b, ettori18, eckert18} has been able to reconstruct hydrostatic mass profiles out to $2 R_{500}$ by combining X-ray and SZ data. They find that (i) the NFW model provides, on average, the best fit to the observed radial profiles of relaxed massive nearby systems, with relative errors at $R_{200}$ lower than 10\% (see Fig.~\ref{fig:MSZ}), (ii) alternative models of gravity that do not require any dark matter contribution (such as MOND or Emergent Gravity) show significant tensions when compared with the prediction from the HE equation, (iii) estimates of the dark matter distribution obtained for the same objects with different techniques (e.g. lensing, galaxy dynamics, scaling laws) are consistent with the hydrostatic mass, with differences of the order of 15\%. At higher redshifts ($z>0.5$), the new sensitive, high-resolution SZ instruments such as NIKA2 and Mustang/Mustang2 are potential game-changers.
For example, the angular resolution of NIKA2 is comparable to that of XMM-{\it Newton}, over a $6.5'$ diameter field of view, finally opening the way to effective exploitation of the X-ray/SZ synergy. As an example, \citet{rup18} recently published a novel non-parametric X-ray/SZ analysis of the cluster PSZ2\,G$144.83+25.11$ at $z = 0.58$. The 150 GHz image at $< 18^{\prime \prime}$ resolution showed a clear extension to the SW that may be a merging subclump. Excluding this region, the radial profiles resulting from the combination of the density from XMM-{\it Newton}\ and the SZ pressure from NIKA2 and {\it Planck}\ were in excellent agreement with those obtained from the X-ray data alone (see Fig.~\ref{fig:MSZ}). The resulting hydrostatic mass profile provides constraints competitive with those from the X-ray-only analysis. \begin{figure*} \includegraphics[width=0.49\textwidth]{fig13a.pdf} \hfill \includegraphics[width=0.49\textwidth]{fig13b.pdf} \caption{\emph{Left:} Hydrostatic gas fraction profiles $f_{\rm gas,HE}(R)=M_{\rm gas}(<R)/M_{\rm HE}(<R)$ for 12 clusters in the X-COP sample \citep[reproduced from][]{eckert18}. The dashed and dash-dotted vertical lines represent the position of $R_{500}$ and $R_{200}$, respectively. The horizontal shaded area shows the cosmic baryon fraction from \emph{Planck} CMB \citep{P15XIII}. \emph{Right:} Non-thermal pressure fraction at $R_{500}$ and $R_{200}$ inferred by comparing the measured hydrostatic gas fraction of X-COP clusters with the expectations, taking into account baryon depletion and stellar fraction. The blue and green lines and shaded areas are the predictions for the random-to-thermal pressure fraction from two sets of numerical simulations \citep{Nelson14,Cui18}.} \label{fig:xcop_fgas} \end{figure*} \subsubsection{Baryon budget and gas fraction} As described in Sect. \ref{sec:depletion}, galaxy clusters are expected to be fair archives of the baryon budget in the Universe.
\emph{Planck} data constrain the cosmic baryon fraction with a statistical precision of just 2\% \citep[$f_{b}=0.156\pm0.003$,][]{P15XIII}. Thus, the baryon fraction of massive clusters within their virial radius is in principle known with a high level of accuracy, and measurements are highly sensitive to the accuracy of the estimated mass. The ICM contains the vast majority of the baryons, with stars within galaxies and intracluster light typically contributing $1-1.5\%$ of the total mass within $R_{200}$ \citep[e.g.][]{gonzalez07,gonzalez13,Leauthaud12,lagana13,coupon15,eckert16,chiu17}. Measurements of the ICM gas fraction using hydrostatic mass estimates typically infer gas fractions of 10-15\% within $R_{500}$ \citep{Vikhlinin06,Allen08,ettori09,Pratt10,mantz14}, in good agreement with the expected cosmic baryon budget. However, recent observations extending measurements out to $R_{200}$ and beyond have reported excesses in the gas fraction over the cosmic value when using hydrostatic masses \citep{Simionescu11,Kawaharada10,Ichikawa13,ghi18a}. Such differences disappear when computing gas fractions with weak-lensing masses \citep{Okabe14a}. On the other hand, several studies have also reported hydrostatic gas fractions consistent with expectations all the way out to the virial radius \citep{tchernin16,Walker12a,Walker13}. Recently, \citet{eckert18} reported ICM gas fractions estimated using the HE assumption for the X-COP sample, a sample of 12 clusters with deep XMM-{\it Newton}\ X-ray and \emph{Planck} SZ data out to $R_{200}$ and beyond. In the left-hand panel of Fig. \ref{fig:xcop_fgas} we show the gas fraction profiles estimated through a joint fit to XMM-{\it Newton}\ and \emph{Planck} SZ data. With the exception of one system, A2319, for which substantial non-thermal pressure support was detected \citep[][]{ghi18a}, all measurements converge towards a gas fraction at the virial radius that is very close to the expected baryon budget. 
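The logic of such comparisons can be sketched numerically: if the non-thermal pressure fraction $\alpha = P_{\rm rand}/P_{\rm tot}$ is radially constant, the hydrostatic and true gas fractions are related by $f_{\rm gas,HE}/f_{\rm gas,true} = (1-\alpha)^{-1}$, so a measured hydrostatic gas fraction can be inverted for $\alpha$. The depletion factor, stellar fraction, and measured gas fraction below are placeholder values, not X-COP measurements.

```python
# Inverting a hydrostatic gas fraction for the non-thermal pressure
# fraction alpha = P_rand / P_tot, assuming alpha is radially constant:
#   f_gas,HE / f_gas,true = (1 - alpha)**-1
# Depletion factor, stellar fraction and the measured f_gas,HE are
# placeholder values for illustration.

F_B_COSMIC = 0.156   # Planck cosmic baryon fraction (P15XIII)
Y_B = 0.85           # illustrative baryon depletion factor
F_STAR = 0.012       # illustrative stellar fraction

def nonthermal_fraction(fgas_he):
    """Infer alpha from a measured hydrostatic gas fraction."""
    fgas_true = Y_B * F_B_COSMIC - F_STAR  # expected true gas fraction
    return 1.0 - fgas_true / fgas_he

print(f"alpha = {nonthermal_fraction(0.135):.3f}")
```

An apparent excess of the hydrostatic gas fraction over the expected value thus maps directly onto a non-zero non-thermal pressure fraction.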
\subsection{Non-thermal pressure, feedback, and the validity of the hydrostatic assumption} \subsubsection{Constraints from X-ray and SZ observations} \label{sec:fgasxcop} The gas fraction can be used to put constraints on the non-thermal pressure support if it is assumed that the deviations from the expected true (i.e. Universal) value originate from random isotropic gas motions (see Eqn. \ref{eq:mhse_tot}). The ratio of hydrostatic to true gas fraction is related to the non-thermal pressure fraction $\alpha=P_{\rm rand}/P_{\rm tot}$ as \citep[see Sect. 3.2 of][]{eckert18}, \begin{equation} \frac{f_{\rm gas,HE}}{f_{\rm gas,true}}=\left(1-\frac{P_{\rm th}\,R^2}{(1-\alpha)\, \rho_{\rm gas} GM_{\rm HE}}\frac{d\alpha}{dR}\right)(1-\alpha)^{-1}. \end{equation} Taking into account the expected depletion of baryons induced by hydrodynamical processes and the stellar fraction, constraints on the amount of pressure in the form of random gas motions can be obtained. In the right-hand panel of Fig. \ref{fig:xcop_fgas} we show the inferred non-thermal pressure fraction for the 12 X-COP clusters, which is then compared with the predictions from two different sets of numerical simulations ($\Omega_{500}$, \citealt{Nelson14}; The300, \citealt{Cui18}). The median non-thermal pressure fraction is 6\% at $R_{500}$ and 10\% at $R_{200}$, which can be translated into a typical Mach number $\mathcal{M}_{3D}=\sigma_{v}/c_s\approx0.33$ at $R_{500}$. While the ICM gas fraction is very sensitive to the hydrostatic mass bias, one may ask whether the assumption that the true baryon fraction matches the cosmic baryon fraction up to small ($\sim5\%$) corrections could itself be violated. This can occur if a large amount of non-gravitational energy is injected within the ICM, in particular by AGN feedback (see Sect. \ref{sec:depletion}). Given the measured gas fractions for the X-COP sample (see Fig.
\ref{fig:xcop_fgas}), a large hydrostatic bias ($>20\%$) would imply that a substantial amount of baryons have been driven outside of the virial radius even for the most massive local clusters. These systems contain a total thermal energy of several $10^{63}$ ergs, implying that feedback energies in excess of $10^{62}$ ergs are required to substantially deplete the overall baryon fraction. Such an energy input corresponds to an overall AGN luminosity of $\sim10^{45}$ ergs/s injected continuously over 10 Gyr, assuming 100\% coupling with the ICM and neglecting cooling losses. The recent high spectral resolution results from {\it Hitomi} have provided an unprecedented view of gas motions in the Perseus cluster \citep{hitomi16,hitomi18}. Although the purpose of these observations was to obtain constraints on the interaction between the central AGN and the surrounding ICM, these unique data have given insight into the level of turbulence close to the core of Perseus. They have shown that even in the presence of the AGN the turbulent line broadening is rather modest ($164 \pm 10$ km s$^{-1}$). Better constraints will be obtained from XRISM and Athena (Sect.~\ref{forward:TNG}). The above illustrates that the constraints on departures from HE and the gas depletion due to feedback are linked on a fundamental level, and can be used more as a consistency check than as an absolute constraint. For example, an extreme HE bias of $\sim 60\%$, as would be suggested from the tension between {\it Planck} SZ cluster counts and CMB, would imply a level of gas depletion that is completely at odds with reasonable feedback prescriptions in cosmological simulations. The two issues should thus be addressed self-consistently in both observations and simulations. \subsubsection{Constraints from X-ray and optical observations} \label{sec:xrayWL} A comparison of HE and WL masses for a large number of clusters is a useful route to test the validity of the HE assumption. 
The mass bias, $b_{\rm WL}$, relative to the WL mass, can be estimated by the geometric mean for targeted clusters, \begin{eqnarray} 1-b_{\rm WL}= \exp \left[ \left(\sum_i \ln \left(\frac{M_{\rm HE}}{M_{\rm WL}}\right)_i w_i\right)\left(\sum_i w_i\right)^{-1} \right], \end{eqnarray} where $w_i$ is a weight function ($w_i=1$ for a uniform weight), or equivalently by fitting the lognormal quantities. \citet{Mahdavi13} compared X-ray masses with weak-lensing masses for 50 CCCP clusters and found that the average mass ratio of X-ray to WL masses is $1-b_{\rm WL}=0.88\pm0.05$ at $R_{500}$. \citet{Hoekstra15} subsequently updated the CCCP WL masses and reported masses on average $19\%$ higher. Thus, applying a factor of 1.19 to the denominator of the CCCP mass ratio implies an average mass ratio of $1-b_{\rm WL}\sim 0.74$. \citet{Smith16} have compiled 50 LoCuSS clusters at $0.15 < z < 0.3$ and found a mean ratio of X-ray to lensing mass of $1-b_{\rm WL}=0.95\pm0.05$, where the X-ray masses \citep{Martino14} used the spectroscopic-like temperature \citep{Mazzotta04} and the WL masses are from Subaru/Suprime-Cam \citep{Okabe16b}. We note that the \citet{Martino14} X-ray masses are on average $\sim14\%$ higher than those of \citet{Mahdavi13}. \citet{Applegate16} have investigated the X-ray to lensing mass ratio for 12 relaxed clusters from the WtG project, using WL masses \citep{Applegate14} and {\it Chandra} masses. They reported $1-b_{\rm WL}=0.967^{+0.063}_{-0.092}$ and $1.059^{+0.092}_{-0.096}$ at $R_{2500}$ and $R_{500}$, respectively. \citet{Siegel18} carried out a joint analysis of {\it Chandra}\ X-ray observations, Bolocam thermal SZ observations, HST strong-lensing data, and Subaru/Suprime-Cam weak-lensing data for 6 CLASH clusters, and constrained the non-thermal pressure fraction at $R_{500}$ to be $<0.11$ at 95\% confidence.
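The weighted geometric-mean estimator above translates directly into code; the mass values in the example are hypothetical, for illustration only.

```python
import numpy as np

def one_minus_b(m_he, m_wl, w=None):
    """Weighted geometric mean of the M_HE / M_WL ratios:
    1 - b_WL = exp[ (sum_i w_i ln(M_HE/M_WL)_i) / (sum_i w_i) ]."""
    m_he = np.asarray(m_he, dtype=float)
    m_wl = np.asarray(m_wl, dtype=float)
    w = np.ones_like(m_he) if w is None else np.asarray(w, dtype=float)
    return np.exp(np.sum(w * np.log(m_he / m_wl)) / np.sum(w))

# Hypothetical masses in units of 1e14 Msun (not real measurements):
m_he = [5.2, 7.9, 4.1, 6.5]
m_wl = [6.0, 8.8, 4.8, 7.1]
print(f"1 - b_WL = {one_minus_b(m_he, m_wl):.3f}")
```

The geometric mean is used because mass ratios are approximately lognormally distributed, so averaging in the log is the natural choice.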
A recent analysis of a sample of 16 massive clusters by \citet{mau16} suggested that the mass profiles obtained independently from X-ray hydrostatic and caustic (Sect.~\ref{sss:kinmet}) methods agree to better than 20\% on average across the radial range probed by the observations. At $R_{500}$, they measure a mass ratio $M_{\rm X} / M_{\rm C} \gtrsim 0.9$, implying a low or zero value of the hydrostatic bias if the caustic masses are assumed to be equivalent to the true mass. Interestingly, \citet{mau16} found no dependence of the $M_{\rm X} / M_{\rm C}$ scatter on dynamical state. To further illustrate the above, we compare in Fig.~\ref{fig:Mx_com} the X-ray mass measurements from LoCuSS \citep{Martino14}, WtG \citep{Applegate16} and CLASH \citep{Siegel18} at $\Delta=2500$. Since the CCCP X-ray masses \citep{Mahdavi13} are measured within radii determined by the WL mass measurement, we do not include them in the comparison. The {\it Chandra} and XMM-{\it Newton}\ results are denoted by open and solid symbols, respectively. The scatter between the different X-ray mass measurements is significantly larger than that between the WL mass measurements (Fig. \ref{fig:Mwl_com}). \citet{Sereno2015b}, compiling WL and X-ray masses from the literature, have found that the intrinsic scatter of HE masses ($\sim20-30$ per cent) is larger than that of WL masses ($\sim 10-15$ per cent). The {\it Chandra} masses are generally systematically higher than those from XMM-{\it Newton}\ due to the absolute temperature calibration issues described above. The WtG masses are also systematically higher than those of other projects. \begin{figure*} \begin{center} \includegraphics[angle=90,width=0.75\textwidth]{fig14.pdf} \caption{X-ray mass comparisons of LoCuSS (diamonds), WtG (triangles) and CLASH (circles) at $\Delta=2500$.
Open and solid symbols are from {\it Chandra} and XMM-{\it Newton}\ observations.} \label{fig:Mx_com} \end{center} \end{figure*} \subsection{Halo triaxiality} Halo asphericity and orientation induce significant scatter in the projected lensing signals. Simultaneous modelling of the mass, concentration, shape, and orientation using lensing data and/or independent data has been proposed by various papers \citep[e.g.][]{Oguri05,DeFilippis05,Sereno07,Corless09,Sereno11,Sereno13b,Umetsu15}. The previous review by \citet{Limousin13} gives a good summary of the technique. Lensing information probes the structure and morphology of the matter distribution in projection. X-ray observations provide us with the characteristic size and orientation of the ICM in the sky plane. The elongation of the ICM along the line-of-sight can be constrained from the combination of X-ray and thermal SZ observations, because the two signals depend differently on the gas density. Therefore, the triaxial model can be constrained by combining these complementary data. Recently, \citet{Sereno13b} have developed a parametric triaxial framework to combine and couple independent morphological constraints from lensing and X-ray/SZ data, using minimal geometric assumptions. \citet{Umetsu15} applied the technique to A1689 and found that the mass distribution is elongated with an axis ratio of $\sim 0.7$ in projection, and that the thermal gas pressure contributes $\sim 60\%$ of the equilibrium pressure needed to balance the gravitational potential. \citet{Chiu18} have carried out a three-dimensional triaxial analysis \citep[see also][]{Umetsu18} for 20 CLASH clusters and obtained a joint ensemble constraint on the minor-to-major axis ratio $q=0.652_{-0.078}^{+0.162}$. Assuming priors on the axis ratios derived from numerical simulations, they found that the inferred degree of triaxiality favours a prolate geometry for cluster haloes.
Based on a full 3D analysis of lensing, X-ray and SZ measurements, \citet{Sereno18} have measured the shapes of X-ray selected CLASH clusters and found them to be in good agreement with dark-matter-only numerical simulations. \begin{figure*} \includegraphics[width=0.48\textwidth]{fig15a.pdf} \hfill \includegraphics[width=0.48\textwidth]{fig15b.pdf} \caption{({\it Left}): The observed $M_{500}$--$Y_{\rm X}$ relation from XMM-{\it Newton}\ observations of ten relaxed clusters \citep[][red points with $1\sigma$ uncertainty envelope]{arn07} compared to the predicted relation from numerical simulations including cooling and galaxy feedback \citep[green dot-dash line, true mass; solid green line, HE mass, from][]{Nagai07}. The observed relations from {\it Chandra}\ are also shown \citep{Nagai07,mau07}. ({\it Right}): Ratio of the hydrostatic and the weak lensing mass estimates as a function of mass, from \citet{Hoekstra15}. The CCCP sample yields an average value of $0.76 \pm 0.05$ (dark hatched region), while the average for the WtG measurements is $0.62 \pm 0.04$ (pink region). } \label{fig:proxycomp} \end{figure*} \subsection{Mass estimates from mass proxies} The mass estimate is essential both when using clusters to constrain cosmology, and for studying structure formation physics. Mass proxies play an important role in this context. They can provide statistically more precise mass measurements, especially at high redshift and low mass, as compared to WL or hydrostatic masses, and in some cases the masses from mass proxies may even be less biased (e.g. for highly disturbed systems far from HE). We recall also that in cluster surveys, objects are never detected directly through their mass, but through their observable baryon signature (i.e. through a mass proxy). The calibration of the corresponding mass proxy scaling relations is always needed to understand the selection function (i.e. the probability that a cluster of a given mass is detected via its given baryon signal).
This step is necessary even if individual masses were available for all objects in subsequent follow up, in order to relate the theoretical mass function to the observed number counts. There is a vast amount of literature on the subject of scaling relations, a discussion of which is beyond the scope of the current paper. A recent observational review can be found in \citet[][]{gio13}. We summarise here some important recent advances. \begin{itemize} \item The precise measurements available from XMM-{\it Newton}\ and {\it Chandra} have enabled excellent convergence in X-ray scaling relations, calibrated using hydrostatic masses of well-chosen samples of relaxed clusters, to minimise the HE bias. This is illustrated in the left hand panel of Fig.~\ref{fig:proxycomp}, which shows that the $M_{500}$--$Y_{\rm X}$ relations from \citet{vik06} and \citet{arn07} are consistent at the $1\sigma$ level, with a normalisation that differs by less than $5\%$. These measurements have also allowed exploration of the scatter about the scaling relations (for relaxed objects), confirming that the X-ray luminosity is a high-scatter mass proxy except when the core is excised \citep[][]{mau07,pratt09,man18}, and that $Y_{\rm X}$ is a low-scatter proxy \citep{arn07}. \item These scaling relations were then exploited for the cosmological analysis of the X-ray selected sample of \citet{vik09} and the new SZ samples from {\it Planck}. \citet{PCXX2014} combined the $M_{500}$--$Y_{\rm X}$\ relation obtained on a sample of {\it relaxed} clusters with masses derived from the HE equation \citep{arn10}, and the $Y_{\rm X}$--$Y_{\rm SZ}$ relation calibrated on the sub-sample of 71 {\it Planck} clusters in the cosmological sample with archival XMM-{\it Newton}\ data. They introduced a mass bias parameter, $b$, allowing for any difference between the X-ray determined masses and the true cluster halo mass: $M_{\rm HE,X} = (1-b)\,M_{\Delta}$.
This factor encompasses {\it all} systematics in our knowledge of the exact relationship between the SZ signal and the mass \citep[see the extensive discussion in the Appendix of ][]{PCXX2014}. Such a difference can arise from cluster physics, such as a violation of HE or temperature structure in the gas, or from observational effects, essentially instrumental calibration. Even with a fiducial $(1-b)=0.8$ motivated by numerical simulations, this calibration yielded the well-known tension between cosmology from cluster number counts and the {\it Planck} CMB cosmology. \item This generated a large effort to recalibrate the relation between {\it Planck} $Y_{\rm SZ}$ and mass using next-generation WL data from CCCP, WtG, LoCuSS, CLASH, etc., as described above in Sect.~\ref{sec:xrayWL}. The resulting $1-b$ from these lensing data is summarised in Table~2 of \citet{planckSZ15}, and ranges from $0.688\pm0.072$ to $0.780\pm0.092$, with systematic differences between studies at the $\sim 10\%$ level \citep[][see right-hand panel of Fig.~\ref{fig:proxycomp}]{Hoekstra15}. \item In parallel, a large effort has been undertaken on the calibration of optical mass proxies based on the richness, exploiting large-area optical surveys such as SDSS. These studies have developed new, robust mass proxies based on richness \citep[e.g.][]{ryk12,ryk14}, and calibrated them using WL stacking techniques \citep[e.g.][]{roz09}. \item Another key effort has been the critical comparison of various mass estimates, obtained from proxies and/or from direct lensing and/or X-ray analysis, and published in the literature \citep[e.g.][]{roz14a,ser15a,ser15b,gro16}. These studies use the various samples to better understand the relation between different proxies and their associated biases, and have proposed new calibrations based on an approach that aims for consistency between different analyses \citep{roz14b,ser17}.
These studies have underlined the necessity to properly take into account the covariance between various quantities, and the need for treatment of the Eddington/Malmquist bias effects due to the scatter about the relations. \end{itemize} \begin{figure*} \includegraphics[width=0.45\textwidth]{fig16a.pdf} \hfill \includegraphics[width=0.535\textwidth]{fig16b.pdf} \caption{({\it Left}): Illustration of the importance of selection biases in scaling relations, and the need to take them into account in cosmological analyses using the cluster population. \citet{gil17} showed that neglect of selection effects in their sample would lead to a 40 per cent underestimate in the mass for a given luminosity. The solid line in this plot is the estimated relation when selection effects are accounted for. Red points in this plot represent relaxed systems; blue points represent disturbed objects. The green line and shaded area represent the best-fitting relation of \citet{man10b} and the corresponding $1\sigma$ uncertainty. ({\it Right}): The normalisation of the $M_{500}$--$Y_{\rm SZ}$ relation depends on the underlying cosmological prior \citep{boc15}.} \label{fig:biases} \end{figure*} The last point is especially important when calibrating the relation between the mass and the survey observable using external mass estimates of a sub sample. This is illustrated in the left hand panel of Fig.~\ref{fig:biases}, which shows the importance of the Malmquist bias in the survey observable. Here \citet{gil17}, analysing a complete sample of 34 luminous X-ray clusters, found that not correcting for selection biases would result in a 40 per cent underestimation of the mass of a cluster at a given luminosity. As this bias depends on both the cluster mass function and the survey selection function, it is becoming more common to perform a fully-consistent joint analysis of the scaling laws and cosmological constraints \citep[e.g.][]{man14,deh16}. 
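The origin of this bias can be illustrated with a toy Monte Carlo (all numbers illustrative): near a survey threshold, objects scattered above the mean relation are preferentially selected, so the mean observable at fixed mass in the selected sample is biased high; equivalently, the mass at a given observable is underestimated if the selection is ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (all numbers illustrative): a scaling relation
# ln L = ln M + scatter, a steeply declining mass function, and a
# hard survey cut in luminosity.
n = 200_000
ln_m = rng.exponential(scale=1.0, size=n)       # toy steep mass function
ln_l = ln_m + rng.normal(0.0, 0.4, size=n)      # lognormal scatter

sel = ln_l > 2.0                                # survey selects ln L > 2

# Up-scattered objects near the threshold are preferentially selected,
# biasing the mean ln(L/M) of the selected sample high:
bias_all = np.mean(ln_l - ln_m)
bias_sel = np.mean(ln_l[sel] - ln_m[sel])
print(f"mean ln(L/M): full sample = {bias_all:+.3f}, "
      f"selected sample = {bias_sel:+.3f}")
```

Because more low-mass than high-mass haloes exist, far more objects scatter upward across the cut than downward, which is why the effect grows with both the scatter and the steepness of the mass function.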
However, there is a certain degeneracy between the normalisation of the scaling laws and the cosmology, as discussed further below. In these analyses, it is critically important to understand where the constraints are really coming from when imposing consistency on a multi-dimensional data set. An example is shown in the right-hand panel of Fig.~\ref{fig:biases}, taken from \citet{boc15}, which illustrates how the normalisation of the scaling relation depends on the prior on the underlying cosmology. \section{Impact on cosmology with clusters} \label{sec:impact} \subsection{Clusters as cosmological probes and the sensitivity to the mass scale} The methods used to estimate cosmological parameters with clusters include use of the baryon fraction (and its evolution), cluster number counts (and their evolution), and the internal cluster shape. All of these approaches require a robust and precise mass estimate. The first two methods have so far provided the most competitive constraints, so in this Section we summarise the nature of these studies, and discuss their sensitivity to the mass scale. \subsubsection{The baryon fraction} In galaxy clusters, the relative amount of baryons and dark matter should be close to the cosmic baryon fraction $\Omega_{\rm b}/\Omega_{\rm m}$, provided that the measurement has been performed over a sufficiently large volume inside which the effects of baryonic physics can be neglected. Using X-ray observations, the dominant component of the baryon budget can be well constrained and combined with total mass measurements (e.g. from the lensing signal, or from assuming HE of the X-ray emitting plasma in the gravitational potential) to recover the gas mass fraction $f_{\rm gas} = M_{\rm gas}/M_{\rm tot}$. When combined with the total stellar content to give the total baryon fraction, $f_{\rm b}$, some fundamental cosmological parameters can be constrained.
This is because, as first proposed in \citet{White93}, the cosmic matter density $\Omega_{\rm m}$ is equal to $Y_{\rm b} \Omega_{\rm b} / f_{\rm b}$, where $Y_{\rm b}$ is the depletion parameter indicating the fraction of cosmic baryons that fall into the cluster halo as estimated from hydrodynamical cosmological simulations \citep[see e.g.][]{Planelles13}. In addition, if $f_{\rm gas}$ is adopted as a standard ruler and assumed to be constant as a function of cosmic time in the `correct' cosmology, constraints can be obtained on the dark energy component $\Omega_{\Lambda}$ \citep[see][]{sasaki96}. Here the cosmological constraints come from the dependence of the observed $f_{\rm gas}$ value on the angular distance: $f_{\rm gas} \propto (D_{\rm A}(z))^{3/2}$ for X-rays, and $f_{\rm gas} \propto D_{\rm A}(z)$ for SZ. These methods all need a robust and precise calibration of the total gravitating mass. The errors on $\Omega_{\rm m}$ and $D_{\rm A}(z)^{3/2}$ depend linearly on the mass uncertainty entering $f_{\rm gas}$, and can be translated into a corresponding accuracy on $\Omega_{\Lambda}$ via the functional dependence of $D_{\rm A}(\Omega_{\rm m},\Omega_{\Lambda})$ in the $z$ range under consideration. Applications of the methods have typically relied on X-ray masses derived from the HE equation (which have better statistical precision than lensing masses) and are thus directly sensitive to corresponding systematic effects in the HE mass estimates, in particular the HE bias. Furthermore, a good understanding of the depletion factor and its evolution is required. This quantity is expected to be more robust for massive systems, where gravity dominates the energy budget over other physical phenomena such as AGN and SN feedback and gas cooling. To minimise these systematics, the methods have been essentially applied using the most massive relaxed systems \citep[e.g.][]{lar06,Allen08,ettori09,mantz14}.
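The $\Omega_{\rm m}$ constraint and the linear propagation of the mass error can be sketched numerically; all input values below are representative placeholders, not taken from any specific analysis:

```python
# Sketch of the baryon-fraction constraint Omega_m = Y_b * Omega_b / f_b.
# All numbers are illustrative placeholders, not from a specific analysis.
Y_b = 0.85            # baryon depletion factor from simulations (assumed)
Omega_b_h2 = 0.0224   # baryon density prior from BBN/CMB (assumed)
h = 0.70              # Hubble parameter (assumed)
f_b = 0.13            # measured total baryon fraction (hypothetical)
sigma_fb = 0.013      # its uncertainty, dominated by the M_tot estimate

Omega_b = Omega_b_h2 / h**2
Omega_m = Y_b * Omega_b / f_b
# Omega_m inherits the fractional error of f_b (hence of M_tot) linearly:
sigma_Om = Omega_m * sigma_fb / f_b
print(f"Omega_m = {Omega_m:.2f} +/- {sigma_Om:.2f}")  # -> 0.30 +/- 0.03
```

The sketch makes the sensitivity explicit: a 10\% systematic error on the total mass translates directly into a 10\% systematic error on $\Omega_{\rm m}$, before any uncertainty in $Y_{\rm b}$ is included.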
Conversely, the methods can provide a consistency check of the mass estimates in galaxy clusters, if the cosmological parameters are adopted from independent techniques (such as modelling of the temperature anisotropies in the CMB, or SN). Indeed, a knowledge of $\Omega_{\rm b}$ and $\Omega_{\rm m}$, combined with the measurements of $M_{\rm gas}$, which is one of the better constrained quantities from X-ray observations, limits the level of systematics on the measurement of $M_{\rm tot}$ (see Sect.~\ref{sec:fgasxcop}). \begin{figure*} \begin{center} \includegraphics[scale=0.7]{fig17.pdf} \caption{Constraints on $\sigma_8$ at $\Omega_{\rm M}=0.3$ from the cluster mass function (sometimes combined with $f_{\rm gas}$ constraints) are shown with blue symbols. The standard deviation ($=0.033$) and standard error of the mean ($=0.012$) around the unweighted mean ($=0.789$) of all seven independent cluster analyses are shown as light and dark blue shaded bands, respectively. Also shown are constraints from WL/cosmic shear/galaxy clustering (green symbols) and from CMB (red symbols). Details on all the constraints are provided in Sections~\ref{impact:results} and \ref{impact:summary}. Note that analysis details differ for the various works. Adapted from \citet{sr17b}.} \label{fig:impact:S8} \end{center} \end{figure*} \subsubsection{The mass function} The mass function, defined as the number of haloes of a given mass per unit volume, can be written as: \begin{equation} \label{eq:mf} \frac{dN}{dM}(M,z)=f(\sigma_{\rm M})\,\frac{\rho_{\mathrm m}(z=0)}{ M}\, \frac{ d\ln\sigma_{\rm M}^{-1} }{dM} \,, \end{equation} where ${\rho}_\mathrm{m}(z=0)$ is the mean matter density at $z=0$, and $\sigma_{\rm M}$ is the rms amplitude of the density perturbations on the mass scale $M$ \citep{jen01,tin08}.
Its dependence on mass and redshift can be written as: \begin{eqnarray} & & \sigma_{\rm M} (M,z) = \sigma_{\rm M} (M,z=0) D_{\rm grow}(z) \nonumber \\ & & \qquad {\rm with} ~~ \sigma_{\rm M} (M,z=0) \sim \sigma_{\rm 8} \left(\frac{M}{M_8}\right)^\alpha \end{eqnarray} where $D_{\rm grow}(z)$ is the growth factor, and considering that the present-day amplitude $ \sigma_{\rm M} (M,z=0) $ is close to a power law at cluster scales, with $\alpha \sim -1/3$. The logarithmic derivative term in Eqn.~\ref{eq:mf} is approximately constant, and the mass function depends on mass and $\sigma_{8}$ essentially as: \begin{equation} \frac{dN}{dM} \propto f(\sigma_{8}\, M^\alpha)/M \end{equation} The mass function is thus very sensitive to $\sigma_8$, via the exponential behaviour of the function \begin{equation} f(\sigma)\propto\left[1+(\sigma/b)^{-a}\right] \exp(-c/\sigma^2). \end{equation} The determination of $\sigma_{8}$ will thus essentially be degenerate with any mass bias, expressed as $M_{\rm obs}=(1-b)\,M_{\rm true}$, along the degeneracy line $\sigma_{8} (1-b)^{-\alpha }= {\rm cst}$, i.e. $\sigma_8 \propto (1-b)^{-1/3}$, or $(1-b) \propto 1/\sigma_{8}^3$. \subsection{Recent results on cosmological constraints from galaxy clusters and their dependence on the mass determination} \label{impact:results} In the last $\sim$5 years there has been significant progress in cluster cosmology, including new samples selected in different wavebands and improved treatments of cluster masses. In the following we call a mass calibration procedure `internal' if mass measurements are available for (a subset of) the clusters in the sample used for the cosmological tests; we call it `external' if the mass calibration is based on other clusters, e.g. a scaling relation from the literature. As discussed above, the cosmological parameters $\sigma_8$ and $\Omega_{\rm M}$ are particularly strongly affected by systematic mass uncertainties; we therefore list those constraints as well.
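Before turning to the individual measurements, the mass-bias degeneracy derived above can be checked numerically: two calibrations that assume different values of $(1-b)$ predict identical counts as a function of the observed mass provided $\sigma_8 (1-b)^{1/3}$ is held fixed. In the sketch below, the multiplicity-function parameters and pivot mass are illustrative, not a fitted model:

```python
import math

# Toy multiplicity function f(sigma); parameter values are illustrative only
def f_sigma(sigma, a=1.97, b_fit=1.0, c=1.23):
    return (1.0 + (sigma / b_fit) ** (-a)) * math.exp(-c / sigma**2)

alpha = -1.0 / 3.0    # local slope of sigma_M(M) at cluster scales
M8 = 2.0e14           # hypothetical pivot mass [M_sun]

def counts(sigma8, one_minus_b, M_obs):
    # Correct the observed mass with the assumed bias, then evaluate f
    M_true = M_obs / one_minus_b
    return f_sigma(sigma8 * (M_true / M8) ** alpha)

# Two points on the degeneracy line sigma8 * (1-b)^(1/3) = const
s8_a, omb_a = 0.77, 0.80
omb_b = 0.58
s8_b = s8_a * (omb_a / omb_b) ** (1.0 / 3.0)   # ~0.86

for M in (3e14, 6e14, 1e15):
    assert math.isclose(counts(s8_a, omb_a, M), counts(s8_b, omb_b, M))
```

Assuming a larger bias (smaller $1-b$) therefore drives the inferred $\sigma_8$ up, by the factor $\left((1-b)_a/(1-b)_b\right)^{1/3}$.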
Figure~\ref{fig:impact:S8} shows galaxy cluster constraints on $\sigma_8$ from the last $\sim$5 years, plus selected other constraints (cosmic shear/galaxy clustering and CMB). The Figure shows results from the following studies. \begin{itemize} \item {\bf X-ray samples:} \label{impact:results:X} \paragraph{\citet{pcg16,ppm18}:} These mass function constraints are based on the XXL sample of 178 X-ray selected galaxy clusters detected out to $z=1$. A mostly internal WL mass calibration of a subset of clusters is performed. For $\Omega_{\rm M}=0.3$ the authors find approximately $\sigma_8=0.752 \pm 0.060$. These uncertainties are large because they use a large prior on $h=0.7\pm 0.1$, use only the redshift distribution (which peaks around $0.3<z<0.5$), and allow for a large prior on the evolution of cluster luminosities in the scaling laws ($L(T,z)/L(T,z=0) = E(z)^{0.47\pm 0.68}$). \paragraph{\citet{sr17b}:} The sample of the X-ray brightest clusters in the sky is used (HIFLUGCS, 64 clusters with $\bar{z}=0.05$). The focus of this work is not on using large numbers of clusters but on taking advantage of very high-quality data for all objects in the sample (i.e., 100\% internal mass `calibration'). This includes on average about 100 ks of {\it Chandra}\ data and 200 cluster galaxy velocities per cluster. Hydrostatic masses are taken from \citet{sr17a} and dynamical masses from \citet{zrs16}. Furthermore, a comparison to {\it Planck}\ `SZ masses' is undertaken. They find $\sigma_8(\Omega_{\rm M}/0.3)^{0.5} = 0.793_{-0.026}^{+0.029}$ for their default results combining the mass function and $f_{\rm gas}$, and $\sigma_8(\Omega_{\rm M}/0.3)^{0.5} = 0.759^{+0.040}_{-0.042}$ when using the mass function alone. \paragraph{\citet{mva15}:} The mass function is determined using 224 clusters from RASS-selected samples, extending to $z=0.5$. {\it Chandra}\ and ROSAT X-ray data for 94 clusters are used for gas mass determinations.
Weak gravitational lensing data from the WtG project are used directly for the 27 internal clusters and for 23 further clusters through the (therefore partially external) gas mass--total mass relation. Furthermore, the $f_{\rm gas}$ constraints described in the paragraph below are incorporated. Shear profiles are compared to an NFW model with $c=4$ for the lensing mass determination; the cosmology-dependence of the predicted NFW model is accounted for. They find $\sigma_8(\Omega_{\rm M}/0.3)^{0.17} = 0.81 \pm 0.03$. \paragraph{\citet{mantz14}:} The gas mass fraction, $f_{\rm gas}$, for a sample of 40 massive, relaxed clusters is determined at a range of redshifts. The nearby clusters are used to constrain $\Omega_{\rm M}$ and the apparent evolution of the gas mass fraction to constrain dark energy parameters. Gas masses are obtained from {\it Chandra}\ X-ray observations; total masses from X-ray hydrostatic analysis and, for a subset of 12 clusters, weak gravitational lensing observations from the WtG project. The radial range 0.8--1.2\,$R_{2500}$ is exploited for these measurements. The cosmology-dependence of the $f_{\rm gas}$ measurements is taken into account self-consistently, including modelling the radial dependence of $f_{\rm gas}$ as an {\it average} power law in the relevant range. The mass calibration is internal. They find $\Omega_{\rm M} = 0.27 \pm 0.04$. \begin{figure*} \begin{minipage}[c]{0.62\textwidth} \vspace{0pt} \includegraphics[width=\textwidth]{fig18.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.35\textwidth} \vspace{0pt} \caption{Dependence of the constraints on cosmological parameters $\sigma_8$ and $\Omega_{\rm m}$ on the normalisation of the $M_{500}$--$Y_{\rm SZ}$ relation. Each coloured contour represents the cosmological constraints for a given prior on the mass normalisation. The red arrow indicates the effect of an increasing normalisation of the $M_{500}$--$Y_{\rm SZ}$ scaling relation. Reproduced from \citet{planckSZ15,arn17}.
} \label{fig:mydependence} \end{minipage} \end{figure*} \paragraph{\citet{mam16}:} The authors study the mass distribution of 40 relaxed clusters from the Weighing the Giants sample. They find consistency with a simple NFW profile in the central parts, but also significant scatter. \paragraph{\citet{b17reflex7}:} The cluster mass function is determined at the 10\% level of uncertainty over the mass range $3\times 10^{12} - 5 \times 10^{14} M_{\odot}$ by fitting the observed cluster X-ray luminosity distribution from the REFLEX~II galaxy cluster survey. The authors conclude that about 14\% (4.4\%) of the matter in the Universe is bound in clusters with a mass larger than $10^{13} (10^{14}) M_{\odot}$, and that it is unlikely that any cluster with a mass $M_{\Delta} \gtrsim 10^{15} M_{\odot}$ is present at $z>1$. \paragraph{\citet{b17noras1}:} The NORAS II galaxy cluster survey is presented, based on X-ray data from the northern part of the RASS down to a flux limit of $1.8 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ (0.1--2.4 keV), containing 860 objects with a median redshift of 0.1. The authors constrain $\sigma_8$ and $\Omega_{\rm m}$ using the X-ray luminosity, finding results that are in agreement with their previous findings \citep[][]{b14reflex4}. \\ \item {\bf SZ samples:} \label{impact:results:SZ} \paragraph{\citet{planckSZ15}:} The cluster sample consists of 439 clusters selected from {\it Planck}\ data. Internal mass calibrations based on gravitational lensing are applied, including as baseline the CCCP \citep{hhm15} constraints. For their baseline assumptions (CCCP+BAO+BBN) they find $\sigma_8(\Omega_{\rm M}/0.31)^{0.3} = 0.774 \pm 0.034$. \citet{planckSZ15} explored several different mass normalisations, including for the first time a calibration based on CMB lensing \citep{mel15}. Figure~\ref{fig:mydependence} illustrates the dependence of the cosmological constraints on the mass normalisation.
These results refine and confirm the previous results of \citet{PCXX2014}, for which the (internal) mass calibration came from an $M_{500}$--$Y_{\rm SZ}$ relation calibrated from X-ray data, with a bias estimated from comparison to numerical simulations, as discussed extensively in the Appendix of \citet{PCXX2014}. \paragraph{\citet{deh16}:} For the mass function constraints 377 clusters with $z>0.25$ are selected from the South Pole Telescope (SPT) survey\footnote{\url{https://pole.uchicago.edu/spt/}}. An external WL mass calibration is applied as well as an additional constraint from {\it Chandra}\ data for 82 clusters. Note that the central value and uncertainties in the arXiv version (arXiv:1603.06522v1) differ from those in the published version. Here, we use the results from the published version: $\sigma_8(\Omega_{\rm M}/0.27)^{0.3} = 0.797 \pm 0.031$. This analysis has been expanded recently to internal WL mass calibration in \citet{bds18}, who find $\sigma_8(\Omega_{\rm M}/0.3)^{0.2} = 0.766 \pm 0.025$. \paragraph{\citet{has13}:} The mass function is determined using 15 Atacama Cosmology Telescope (ACT)\footnote{\url{https://act.princeton.edu/}} clusters, using external mass calibration. We show the BBN+H0+ACTcl(B12) constraints from their Tab.~3, which assume a \emph{fixed} scaling relation: $\sigma_8(\Omega_{\rm M}/0.27)^{0.3} = 0.848 \pm 0.032$.\\ \item {\bf Optical samples:} \label{impact:results:opt} \paragraph{\citet{crs18}:} Clusters selected from a redMaPPer \citep{rrb14} search of the Sloan Digital Sky Survey (DR8) are used. Weak lensing mass profiles stacked in richness bins from the same data are employed for internal mass calibration. They find $\sigma_8(\Omega_{\rm M}/0.3)^{0.5} = 0.79^{+0.05}_{-0.04}$.\\ \item {\bf Other constraints shown in Fig.~\ref{fig:impact:S8}:} \paragraph{\citet{hoh18}:} Cosmic shear constraints from the Subaru Hyper Suprime-Cam first-year data. They find $\sigma_8(\Omega_{\rm M}/0.3)^{0.5} = 0.780^{+0.030}_{-0.033}$. 
\paragraph{\citet{hkv18,hvh17,vjj18}:} Cosmic shear constraints from the VST KiDS survey. They find, respectively, $\sigma_8(\Omega_{\rm M}/0.3)^{0.5} = 0.737\pm 0.040$, $0.745\pm 0.039$, and $0.800_{-0.027}^{+0.029}$. \paragraph{\citet{DES18}:} DES Y-1 constraints from galaxy clustering and WL. Note that the central value and uncertainties in the arXiv version (arXiv:1708.01530v1) differ from those in the published version. Here, we use the results from the published version: $\sigma_8(\Omega_{\rm M}/0.3)^{0.5} = 0.773_{-0.020}^{+0.026}$. \paragraph{\citet{P18VI,P15XIII}:} Planck 2018 (VI) and Planck 2015 (XIII) constraints from the CMB. For the former, we show the {TT,TE,EE+lowE+lensing} results from the `Combined' column of their Table~1: $\sigma_8(\Omega_{\rm M}/0.3)^{0.5} = 0.830 \pm 0.013$. For the latter, we show the TT,TE,EE+lowP+lensing results\footnote{Note that because of this choice these constraints are tighter than the ones shown in \citet{sr17b}.} from their Tab.~4: $\sigma_8(\Omega_{\rm M})^{0.5} = 0.4553 \pm 0.0068$. \paragraph{\citet{hlk13}:} WMAP9 constraints from the CMB. We show the values quoted in their Section~5: $\sigma_8(\Omega_{\rm M})^{0.5} = 0.434 \pm 0.029$. \end{itemize} \subsection{Summary and Interpretation} \label{impact:summary} Figure~\ref{fig:impact:S8} leads us to draw the following conclusions: \begin{itemize} \item All the recent cluster constraints agree surprisingly well with each other within the uncertainties despite the fact that they differ dramatically in selection, mass treatment, and analysis. One apparent slight exception is the ACT result for the fixed default scaling relation; however, on the one hand, this is expected statistically given seven independent constraints and, on the other hand, as \citet{has13} describe in their paper (see, in particular, their Fig.~14) choosing a different fixed scaling relation (their `UPP') would in fact bring $\sigma_8$ \emph{below} the mean of all the cluster results.
\item The standard deviation of the cluster results is comparable to the typical uncertainty of the individual results, which can be viewed as an indication that confirmation bias is small among the cluster results. \item The cosmic shear/galaxy clustering results agree with the cluster results within their uncertainties. \item The CMB constraints from WMAP agree with the cluster and cosmic shear results. \item The {\it Planck}\ CMB constraints are close to all of the above, but outside the standard error of the mean of the cluster results. \end{itemize} The general agreement between different probes seems healthy, and one could argue that within the statistical expectations and the still reasonably small ($\ll$$25$) total number of constraints, there is nothing to be excited about. However, progress in physics (also) comes from measurement disagreements, and, initially, these are typically small. Let us briefly outline possible interpretations of the slight tension between the mean cluster constraints ($\sigma_8=0.789\pm 0.012$) and the \citet{P18VI} CMB constraints ($\sigma_8=0.830\pm 0.013$). The first suspect is unaccounted-for systematic effects. For both clusters and CMB these might come from, e.g. instrumental calibration or modelling issues. For galaxy clusters, further sources of systematic uncertainty include cluster selection effects and the mass determination, and this review article indeed focusses on the cosmological impact of the latter. Other, more exotic explanations can also be brought to bear. In terms of physics and cosmology, a summed neutrino mass higher than the minimum mass, $\sim$$0.06$ eV, might help alleviate the tension. It is also interesting to speculate about possible more exotic physics that might be causing this tension. For instance, a modification of gravity, a self-interacting dark matter component, warm dark matter, or a dark energy component that interacts with dark matter would all change the predicted mass function.
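The size of this tension, and the mass bias that would be needed to remove it, follow from simple arithmetic, treating the two measurements as independent Gaussians and using the $(1-b) \propto 1/\sigma_8^3$ degeneracy from the mass function (a rough estimate, ignoring correlations and non-Gaussian tails):

```python
import math

s8_cl, e_cl = 0.789, 0.012     # unweighted mean of the cluster results
s8_cmb, e_cmb = 0.830, 0.013   # Planck 2018 CMB value

# Gaussian, independent-errors estimate of the discrepancy
n_sigma = (s8_cmb - s8_cl) / math.hypot(e_cl, e_cmb)
print(f"tension: {n_sigma:.1f} sigma")                  # -> tension: 2.3 sigma

# Rescaling of (1-b) that would move the cluster sigma8 onto the CMB value,
# via the mass-function degeneracy (1-b) propto 1/sigma8^3
rescale = (s8_cl / s8_cmb) ** 3
print(f"(1-b) must shrink by a factor {rescale:.2f}")   # -> 0.86
```

For example, a fiducial $(1-b)=0.8$ would need to become $\approx$$0.69$, which happens to lie within the range of the lensing calibrations quoted in Table~2 of \citet{planckSZ15}; this is only indicative, since the real analyses marginalise over $\Omega_{\rm M}$ and other parameters simultaneously.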
\section{The future} \label{forward} \subsection{New surveys and samples} \label{forward:new_survs} \subsubsection{X-ray} \label{forward:new_survs:X} \paragraph{X-ray surveys:} The ROSAT All-Sky Survey (RASS) was performed almost 30 years ago and many cluster cosmology samples have been derived from it (e.g. Section~\ref{impact:results:X}). New opportunities based on this venerable dataset include running more sophisticated source detection algorithms, as well as combining the X-ray data with a wealth of new multiwavelength data, particularly in the optical/IR and sub-mm/mm (SZ) regimes. As an example of the former, \citet{xrp18} recently showed that galaxy groups and clusters with unusually extended surface brightness profiles were missed in previous RASS cluster surveys. They achieved this by employing a dedicated source detection pipeline particularly sensitive to extended low surface brightness emission, while many of the previous RASS source catalogues were constructed using detection methods optimised for point sources. If missed clusters are not accounted for, their number density will be underestimated, resulting in underestimated values for $\Omega_{\rm M}$ and/or $\sigma_8$ (e.g., Fig.~10 in \citealt{sr17b}); i.e. an effect qualitatively similar to that of a systematic underestimate of cluster masses. Whether or not the unaccounted-for missed fraction is large enough to significantly affect cosmological parameter constraints derived from previous RASS cluster samples is not yet clear. Examples of the multiwavelength studies include \citet{sbv04} and \citet{tma18}, who combined RASS data in a matched filter approach for joint detection with SDSS and {\it Planck}\ data, respectively. They showed that detection probability, purity, and source identification can be improved in such an approach.
With the still-functioning satellites XMM-{\it Newton}\ and {\it Chandra}, progress is being made particularly in surveys for new clusters serendipitously detected using archival XMM-{\it Newton}\ observations. Recent examples include, e.g. the XMM-{\it Newton}\ Cluster Survey \citep[XCS-DR1;][]{mrh12}, the XMM-{\it Newton}\ CLuster Archive Super Survey \citep[XCLASS; ][]{csp12,rcs17}, and XXL \citep{pcg16,ppm18}. These surveys have by now detected well over 1\,000 new clusters. Given that these surveys are typically much deeper but cover a much smaller area than the RASS, the recovered cluster populations are typically at higher redshifts and/or have lower masses. \paragraph{Mass and mass proxy calibration:} A large amount of effort continues to be put into follow-up and archival studies of the mass and calibration of the mass proxy relations. Some examples of recent \emph{Chandra} and XMM-{\it Newton}\ archival and/or dedicated follow-up of {\it Planck}-, SPT-, and ACT-selected clusters include \citet{PEPXI}, \citet{PIPIII}, \citet{PCXX2014}, \citet{lfj17}, \citet{bcm18}, \citet{msb13}, and of RASS-selected clusters (e.g. eeHIFLUGCS, \citealt{r17}). The high-quality X-ray data enable not only many cluster studies but also the determination of hydrostatic masses, as well as very precise mass proxies such as the gas mass, temperature, or the $Y_{\rm X}$ parameter. One outcome of these studies has been that the comparison of {\it Planck}\ SZ-selected clusters with X-ray selected clusters \citep{ros16,ros17,and17,lfj17} has indicated that there may be a tendency in X-ray surveys to preferentially detect clusters with a centrally-peaked morphology, which are more luminous at a given mass, and on average more relaxed. This raises concerns about how representative the X-ray selected samples, used to define our current understanding of cluster physics and to calibrate numerical simulations, have been.
In addition, mass comparisons between various samples are hampered by the heterogeneous nature of the data. A large effort is now ongoing, in the form of a multi-year heritage project on the XMM-{\it Newton}\ satellite, to obtain homogeneous X-ray and lensing mass estimates up to $R_{500}$, of a well-defined {\it Planck}\ SZ-selected sample of more than 100 clusters. \subsubsection{Microwave} \label{forward:new_survs:SZ} The recent arrival of large-area SZ galaxy cluster surveys, by ACT \citep[][]{mar11,has13}, SPT \citep[][]{SPTSZ15} and {\it Planck}\ \citep{esz,psz1,psz2}, has resulted in a rapid growth in the number of known, massive galaxy clusters, especially at $z>0.5$ -- a regime which was classified as `high redshift' by the galaxy cluster community as recently as a decade ago due to the paucity of known systems at these distances. To date, of order a few thousand massive galaxy clusters have been detected through blind surveys using the thermal SZ effect. SZ surveys have the advantage over more traditional cluster-detection methods (e.g. X-ray, optical, near-IR) in that they are roughly redshift independent, selecting only on mass. There is also a strong complementarity in the coverage of the mass-redshift plane between the new SZ surveys and their X-ray counterparts. The spatial resolution of ACT and SPT ($\sim 1^\prime$) allows them to probe the cluster population above a nearly constant mass threshold up to very high redshift, $z\sim1.5$, but their smaller area limits the number of high mass objects. The lower spatial resolution of {\it Planck} ($\sim 5^\prime$) is offset by its being the first all-sky blind survey since the {\it ROSAT} All Sky Survey (RASS). Although less sensitive, it is uniquely suited to finding high mass, high redshift systems. In the future, one way to improve the cluster mass calibration is through lensing of the CMB \citep{mel15,planckSZ15,bax18}.
CMB-cluster lensing offers a robust and accurate way to constrain galaxy cluster masses, especially at high redshift ($z>1$) where optical lensing measurements are challenging. With CMB lensing we expect to improve the mass uncertainty to 3\% for upcoming experiments such as AdvACT and SPT-3G, and to 1\% for next generation CMB experiments such as CMB-S4 (discussed below). \subsubsection{Lensing and optical/IR} \label{forward:new_survs:opt} \begin{table*} \caption{Wide-field survey properties \citep{HSCWL1styr,DES1stWL,KIDS1stWL}. $^{(a)}$ surveys, $^{(b)}$ bands, $^{(c)}$ planned survey area, $^{(d)}$ limiting magnitude, $^{(e)}$ the number of background galaxies for weak-lensing analysis, and $^{(f)}$ typical seeing-size for shape measurements}\label{table:OIRsurveys} \begin{center} \begin{tabular}{cccccc} \hline Survey$^{(a)}$ & Bands$^{(b)}$ & Area$^{(c)}$ & $m_{\rm lim}^{(d)}$ & $n_{g}^{(e)}$ & Seeing$^{(f)}$ \\ & & (deg$^2$) & (ABmag) & (arcmin$^{-2}$) & (arcsec) \\ \hline HSC-SSP & $grizy$ & 1400 & $r\sim26.4$ ($S/N=5$) & $\sim25$ & $\sim0.58$ \\ DES & $grizY$ & 5000 & $r\sim23.3$ ($S/N=10$) & $\sim7$ & $\sim0.9$ \\ KiDS & $ugri$ & 1500 & $r\sim24.9$ ($S/N=5$) & $\sim7$ & $\sim0.66$ \\ \hline \end{tabular} \end{center} \end{table*} On-going imaging surveys (HSC-SSP, DES, and KiDS) enable identification of galaxy clusters using luminous red galaxies and weak-lensing mass maps, and/or enable direct weak-lensing mass measurements of galaxy clusters. The respective survey properties are summarised in Table \ref{table:OIRsurveys}. The HSC-SSP survey\footnote{\url{https://hsc.mtk.nao.ac.jp/ssp/}} \citep{HSC1styrOverview} is an ongoing wide-field imaging survey using the HSC \citep{Miyazaki18HSC}, a new prime focus camera on the 8.2m-aperture Subaru Telescope, and is composed of three layers of different depths (Wide, Deep and UltraDeep). The Wide layer is designed to obtain five-band ($grizy$) imaging over $1400$~deg$^2$.
The HSC-SSP survey has both excellent imaging quality ($\sim$$0.7^{\prime\prime}$ seeing in $i$-band) and deep observations ($r\simlt26$~AB~mag). \citet{Oguri18} constructed a CAMIRA cluster catalogue from the HSC-SSP S16A dataset covering $\sim 240$~deg$^2$ using the CAMIRA algorithm \citep{Oguri14b}, which is a red-sequence cluster finder based on stellar population synthesis model fitting. The catalogue contains $\sim 1900$ clusters at $0.1<z<1.1$. \citet{Miyazaki18} have searched for galaxy clusters based on a weak-lensing analysis of the $\sim$160 deg$^2$ area and discovered 65 shear-selected clusters whose signal-to-noise ratio in the weak-lensing mass map is higher than 4.7. The DES survey\footnote{\url{https://www.darkenergysurvey.org/}} \citep{DES16Overview} covers a 5000 deg$^2$ area of the southern sky using the new Dark Energy Camera (DECam) mounted on the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. \citet{Rkyoff16} have applied a photometric red-sequence cluster finder (redMaPPer) to 150 deg$^2$ of Science Verification data and found $\sim800$ clusters at $0.2<z<0.9$. The Kilo-Degree Survey\footnote{\url{http://kids.strw.leidenuniv.nl/}} \citep[KiDS;][]{KIDS1stDR} covers 1500 deg$^2$ using the VLT Survey Telescope (VST), located at the ESO Paranal Observatory. \citet{Bellagamba18} have developed the Adaptive Matched Identifier of Clustered Objects (AMICO) algorithm and applied it to $\sim440\,{\rm deg}^2$ of survey data. They found $\sim8000$ galaxy cluster candidates in the redshift range $0.1<z<0.8$ down to $S/N>3.5$, with a purity approaching 95\% over the entire redshift range. The optical and weak-lensing cluster finders are complementary to the ICM observations through the SZ and X-ray methods. Future multi-wavelength comparison of optically selected, shear-selected, X-ray and SZ clusters will give detailed insights into cluster physics and the sample selection functions.
\begin{figure*} \includegraphics[width=\textwidth]{fig19.pdf} \caption{Expected constraints on cosmological parameters from {\it eROSITA} (cluster mass function plus clustering) and their dependence on the mass calibration precision. For this purpose, the $L_{\rm X}$--$M$ relation is modelled as a power law with four parameters. The different grey shades illustrate increasing precision coming from direct mass measurements. Light grey means the parameters are completely unconstrained, while black shows the constraints when fixing them (see inset on the right figure). As the authors describe, the improvement is strongest for the $\Omega_{\rm M}$--$\sigma_8$ plane and weakest for the $w_0$--$w_a$ plane. Reproduced from \citet{prp18} with permission.} \label{fig:forward:eROSITA-L-M} \end{figure*} \subsection{Next-generation data} \label{forward:TNG} \subsubsection{eROSITA} \label{forward:TNG:eROSITA} \emph{eROSITA}\footnote{\url{https://www.mpe.mpg.de/eROSITA}} is the main instrument onboard the Spektrum-Roentgen-Gamma satellite to be launched in 2019 \citep{pab14}. It will perform eight X-ray all-sky surveys resulting in at least 20 times higher sensitivity than the RASS \citep{mpb12}. The primary science driver is the study of dark energy with galaxy clusters. \emph{eROSITA} is expected to detect about 100,000 galaxy clusters \citep{ppr12,prp18,crr18}. For a small subsample ($\sim$2\,000 clusters) precise gas temperatures will be measured directly from the survey data \citep{brm14,hsc17}. Competitive constraints on dark energy are expected: e.g., $\Delta w_0 = \pm 0.07$ and $\Delta w_a = \pm 0.25$ in an optimistic scenario with accurate mass calibration down to the low-mass galaxy group regime \citep[][see Fig.~\ref{fig:forward:eROSITA-L-M}]{prp18}, making \emph{eROSITA} one of the first Stage IV dark energy experiments.
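The importance of percent-level mass calibration at this statistical power can be seen from a quick propagation through the $\sigma_8$--mass-bias degeneracy discussed in Section~\ref{sec:impact} (a rough sketch, ignoring the correlated shift in $\Omega_{\rm M}$):

```python
# If cluster masses are systematically underestimated by 5% and left
# uncorrected, the inferred sigma8 is multiplied by (1-b)^(1/3)
# (rough sketch; the correlated shift in Omega_M is ignored here).
mass_bias = 0.05
shift = 1.0 - (1.0 - mass_bias) ** (1.0 / 3.0)
print(f"sigma8 biased low by {100 * shift:.1f}%")   # -> 1.7%
```

A shift of this size is comparable to or larger than the statistical precision expected from a sample of $\sim$$10^5$ clusters, consistent with the significant parameter bias found in the simulated analysis shown below.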
Mock light cones based on cosmological simulations and including, e.g., expected \emph{eROSITA} photon count rates, are publicly available \citep{zfp18}. The dependence of {\it eROSITA} cosmological constraints on the mass calibration accuracy is shown in Fig.~\ref{fig:forward:L-M_bia}. \begin{figure*} \begin{minipage}[c]{0.65\textwidth} \vspace{0pt} \includegraphics[bb=270 20 700 450,clip,scale=0.55]{fig20.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.35\textwidth} \vspace{0pt} \caption{Illustrative predicted \emph{eROSITA} constraints on $\Omega_{\rm M}$ and $\sigma_8$ from the cluster mass function and their dependence on the mass calibration accuracy. For the blue contours unbiased mass estimates are assumed, while for the red ones a systematic mass bias of 5\% is simulated. Shown are the 68\% and 95\% credibility levels as dark and light shades, respectively. One notes that, for \emph{eROSITA}, this mass bias results in a significant (at the 95\% credibility level) bias for these cosmological parameters. Figure credit: Katharina Borm. } \label{fig:forward:L-M_bia} \end{minipage} \end{figure*} As discussed in Section~\ref{impact:results}, precise and accurate internal mass calibration is important. For \emph{eROSITA}, the plan is to rely particularly on weak gravitational lensing mass calibration using data from, e.g., the VST, DECam, and HSC surveys, and later from Euclid and LSST \citep{mpb12,gmd18}. \subsubsection{Euclid and LSST} The LSST\footnote{\url{https://www.lsst.org/}} \citep{LSST08} will cover $20,000\,{\rm deg}^2$ south of $+15$ deg, with a limiting magnitude of $26.9$ during ten years of operation. The LSST will have first light in 2020, and start the operations phase in 2022. LSST will construct a large catalogue of clusters detected through their member galaxy population to redshift $z\sim1.2$. Mass calibration using WL mass measurements will enable measurement of the cluster mass function.
\citet{LSST18} have reported observational requirements for a dark energy analysis consistent with the Dark Energy Task Force definition of a Stage IV dark energy experiment, using conservative assumptions based on current observational resources. \begin{figure*}[t] \begin{center} \includegraphics[width=0.495\textwidth]{fig21a.pdf} \hfill \includegraphics[width=0.495\textwidth]{fig21b.pdf} \caption{ {\it Left:} Uncertainties on $\Omega_M$, $A_s$, and the sum of neutrino masses from an SZ cluster catalogue obtained with CMB-S4, in combination with constraints from S4 primary and lensing power spectra, as well as {\it Planck}\ temperature and polarisation at $\ell<30$. Results are shown in the absence of lensing mass estimates (black ellipses), and for lensing masses computed using only polarisation (red ellipses) and temperature and polarisation (green ellipses). The results are marginalised over all other cosmological parameters as well as the cluster nuisance parameters. Figure taken from \citet{Louis17}. {\it Right:} The $1\sigma$ uncertainty on the neutrino mass obtained when marginalising over $\Lambda$CDM, $\Lambda$CDM+$w_0$ and $\Lambda$CDM$+w_0+w_a$, from tSZ clusters detected using a CMB Stage-4 telescope with $1'$ beam FWHM at 150 GHz. Constraints are shown for a mass calibration that combines internal CMB and optical WL. Figure taken from \citet{Madhavacheril17}. } \label{fig:CMB-S4} \end{center} \end{figure*} The Euclid survey\footnote{\url{https://www.euclid-ec.org/}} \citep{Euclid11,Euclid16} is a space-based optical/near-infrared survey mission that will operate at L2. The Euclid wide survey will cover $15,000\,{\rm deg}^2$ with limiting magnitude $24-24.5$ during six years of operations; the deep survey will cover $40\,{\rm deg}^2$ at about two magnitudes deeper. The Euclid survey will collect shapes and photometric redshifts for $1.5\times10^{9}$ background galaxies ($\sim30$ arcmin$^{-2}$) available for WL mass measurements, and spectra for $5\times10^7$ galaxies.
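As a back-of-the-envelope sanity check (not from the cited papers; a short Python snippet assuming only the survey area and source counts quoted above), the surface density of background galaxies follows directly from the total counts and area:

```python
# Back-of-the-envelope check: Euclid wide-survey source density.
# Assumptions (quoted figures): 1.5e9 background galaxies with shapes
# and photometric redshifts over 15,000 deg^2; 1 deg^2 = 3600 arcmin^2.
n_gal = 1.5e9
area_deg2 = 15_000
area_arcmin2 = area_deg2 * 3600.0

density = n_gal / area_arcmin2  # galaxies per arcmin^2
print(f"{density:.1f} galaxies per arcmin^2")  # ~27.8, consistent with ~30
```

The result, about 28 galaxies per arcmin$^2$, is consistent with the $\sim30$ arcmin$^{-2}$ quoted in the text.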
The Euclid surveys will map the three-dimensional distribution of dark and luminous matter up to $z\sim2$. Euclid will find $\sim 2 \times10^5$ clusters with S/N greater than 5 at $0.2<z<2$ \citep{Sartoris16}, especially at $z>1$ thanks to the near-infrared bands. Ground-based telescopes will help by collecting optical-band photometry for an accurate estimation of photometric redshifts in combination with the Euclid bands. The on-going survey data from DES, HSC-SSP, and KiDS will be used for this purpose. The CFIS survey using the CFHT will cover $\sim5000$ deg$^2$ in the r-band and $\sim10000$ deg$^2$ in the u-band by Jan.\ 31st 2020. The depth will be 24.1 mag in the r-band and 23.6 mag in the u-band, defined as the $10\sigma$ aperture magnitude for point sources within a $2''$ diameter. The Javalambre-Euclid Deep Imaging Survey (JEDIS-g) using the JST/T250 (Javalambre Survey Telescope) will collect g-band data over $\sim5000$ deg$^2$ of the northern sky in common with the CFIS survey. In both Euclid and LSST, cluster mass measurements will be dramatically improved statistically by the increase in the number of background galaxies. As a result, systematic biases on shape measurements and photometric redshifts will largely dominate over statistical errors. The main challenge for cluster mass measurements is the control and minimisation of these systematic uncertainties. The resulting cosmological constraints from the cluster mass function will be complementary to cosmic-shear cosmology.
\subsubsection{Simons Observatory and CMB-S4} \label{sec:futureCMB} The next decade of CMB survey instruments will continue the progression to larger detector counts and additional bands from $\sim$30--300 GHz, promising to dramatically increase the statistical sample of SZ observations of galaxy clusters as well as to facilitate the separation of the tSZ, kSZ and rSZ contributions in hundreds of high-mass systems\footnote{For more details, see the review by \citet{mro19} in this volume.}. Advanced ACTPol, the current generation instrument on ACT, and SPT-3G, the current generation camera on SPT, will also see some upgrades that will allow them to detect on the order of thousands of SZ-selected clusters; both are in the field and operating. SPT-3G, for example, is predicted to find $\sim5000$ clusters at a signal-to-noise ratio $\geq 4.5$ \citep{Benson14}. The ACT 6-metre telescope will join the Simons Observatory, and both SPT and ACT will likely become part of CMB-S4. The Simons Observatory\footnote{\url{https://simonsobservatory.org/}} \citep[S.O.;][]{simons} will combine several existing CMB experiments in the Atacama desert, and add a new 6-metre telescope with a similar optical design to CCAT-prime, with an anticipated first light in 2021. Looking further ahead, CMB-S4\footnote{\url{https://cmb-s4.org/}} will likely add up to three 6-metre antennas of similar design to the S.O. and CCAT-prime 6-m telescopes, and several more lower resolution 1-metre class antennas \citep{CMB-S4}, with an anticipated first light in 2028. S.O. and CMB-S4 will find on the order of $10^4$ and $10^5$ galaxy clusters, respectively, including tens to thousands of high-$z$ clusters \citep{Louis17}, through the thermal SZ effect. Figure~\ref{fig:CMB-S4} shows that CMB-S4 will place competitive and independent constraints on cosmological parameters \citep{Louis17}, including e.g. the sum of neutrino masses \citep{Madhavacheril17}.
\begin{figure*}[tbp] \includegraphics[clip=true,trim=0.0cm 0.0cm 0.0cm 0.0cm,width=0.45\textwidth]{fig22a.pdf} \hfill \includegraphics[clip=true,trim=1.cm 1.0cm 0.0cm 0.0cm,width=0.47\textwidth]{fig22b.pdf} \caption{Mock SXS / XRISM analysis of a relaxed galaxy cluster extracted from hydrodynamical simulations. {\it Left:} Mock XRISM image in a $5-10$~keV band. The region shown is about $2.6$~Mpc across, and the dotted circle indicates $R_{500}$. {\it Right panel:} XRISM/Resolve measurements of the gas velocity dispersion as a function of radius with $10$, $30$, $100$, $300$, $500$~ksec exposures. The black lines represent the true mass-weighted 3-D (solid) and projected (dashed) gas velocity dispersion profiles. From \citet{nagai13}.} \label{fig:mock-xarm} \end{figure*} \subsubsection{XRISM and Athena} Gas motions in galaxy clusters play an important role in determining the properties of the ICM and in cosmological parameter estimation through X-ray and SZ effect observations of galaxy clusters. Recently, the Hitomi X-ray satellite has provided the first direct measurements of gas motions in the core of the Perseus Cluster \citep{hitomi16}. 
Upcoming X-ray missions equipped with high-spectral-resolution X-ray micro-calorimeters, such as XRISM and Athena, will continue Hitomi's legacy by measuring ICM motions through the Doppler shifting and broadening of emission lines in a larger number of galaxy clusters, and at larger radii.\footnote{Measurements will also be possible in the outskirts of high-redshift clusters using high-resolution SZ spectral imaging observations, through tSZ pressure fluctuations and direct kSZ measurements of internal gas motions \citep[see the review by][in this volume]{mro19}.} To assess the feasibility and future prospects of directly measuring the random and bulk gas motions at large cluster-centric radii in X-ray observations, \citet{nagai13} performed an analysis of mock {\it Hitomi} Soft X-ray Spectrometer (SXS; analogous to Resolve) spectra of a relaxed galaxy cluster extracted from cosmological numerical simulations, and found that a detailed characterization of the gas velocity profile out to beyond $r\approx R_{2500}$ will require of order 500~ksec of XRISM time, with a significant exposure spent on the outermost radial bins. However, if this significant investment of observing time is made, the gas velocity is recovered in good agreement with the 3D (deprojected) mass-weighted velocity dispersion profile up to $r\approx R_{500}$ (see Figure \ref{fig:mock-xarm}), which enables us to correct for the hydrostatic mass bias of galaxy clusters. On the other hand, perhaps counterintuitively at first, the XRISM mock measurement is slightly ($\sim 30$--$50$~km~s$^{-1}$) smaller than the \textit{projected} mass-weighted gas velocity dispersion in a given radial bin, although the mock observations should probe the integrated motions along the line of sight. This difference occurs because the measured velocity is spectrally weighted, and hence the inner regions, where the gas density is higher but the gas velocity is smaller, carry a higher weight.
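The density weighting can be illustrated with a toy two-shell model (the numbers below are purely illustrative and are not taken from the simulations): since X-ray emissivity scales roughly as the square of the gas density, a dense, kinematically quiet inner shell pulls the spectrally weighted dispersion below the mass-weighted value along the same line of sight.

```python
import numpy as np

# Toy line of sight through two gas shells (illustrative values only):
# a dense, kinematically quiet inner shell and a tenuous, turbulent outer one.
density = np.array([1.0, 0.3])       # relative gas density n
sigma_v = np.array([150.0, 400.0])   # 1D velocity dispersion per shell [km/s]
mass = density                       # equal volumes -> mass weight ~ n

# Mass-weighted vs emission-weighted (X-ray emissivity ~ n^2) dispersions.
w_mass = mass / mass.sum()
w_emis = density**2 / (density**2).sum()

sigma_mass = np.sqrt(np.sum(w_mass * sigma_v**2))
sigma_emis = np.sqrt(np.sum(w_emis * sigma_v**2))
print(sigma_mass, sigma_emis)  # the emission-weighted value comes out lower
```

With these numbers the emission-weighted dispersion ($\approx 184$~km~s$^{-1}$) falls well below the mass-weighted one ($\approx 233$~km~s$^{-1}$), mimicking the sign of the effect seen in the mocks.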
Also, \citet{Ota18} showed that XRISM is capable of measuring non-thermal pressure provided by bulk and random motions and correcting for the hydrostatic mass bias. Going forward, this synergy between numerical simulations, mock observations, and real data will need to be employed frequently in order to correctly interpret the measurements and uncover any potential biases or significant projection effects. Some details of the upcoming missions include the following. \begin{itemize} \item The X-Ray Imaging and Spectroscopy Mission (XRISM), formerly XARM (the X-ray Astronomy Recovery Mission, \citealt{tmt18}) is planned as a successor of the Hitomi satellite, and will carry a high spectral resolution X-ray microcalorimeter (Resolve), which is identical to the SXS. \item Athena\footnote{\url{https://www.the-athena-x-ray-observatory.eu/}} \citep{nbb13} is ESA's second Large Mission after the planetary mission Juice, and before the gravitational wave mission LISA. Details of cluster science expectations are provided in \citet{pra13,epd13,csh13}. The expected launch is in the early 2030s. The major increase in collecting area together with the superb spectral resolution of the micro-calorimeter instrument (X-IFU) will allow us to map turbulence and bulk motions in the inner regions \citep{ron18}. Together with measurements of these quantities in the outer parts of clusters, it is expected that we will be able to set tight limits on hydrostatic mass bias from non-thermal pressure support resulting from gas velocities. Furthermore, the wide field-of-view of the active pixel sensor (WFI) results in outstanding cluster survey capabilities. \end{itemize} \subsubsection{Studies} X-ray satellite missions under study which would have an impact on cluster science include: \begin{itemize} \item AXIS, the Advanced X-ray Imaging Satellite\footnote{\url{http://axis.astro.umd.edu/}} \citep{axis}, is a Probe-class mission under study for the 2020 Decadal. 
The concept combines $\sim 0.4^{\prime \prime}$ imaging across a $24^\prime \times 24^\prime$ field of view, with CCD-type spectral resolution. \item Lynx\footnote{\url{https://www.lynxobservatory.com/}} \citep{lynx} is a flagship mission concept under study for the 2020 NASA Decadal, for launch around the middle of the 2030s. A large collecting area ($\sim 2$ m$^2$ at 1~keV), high resolution ($\lesssim 1^{\prime\prime}$) imaging, and high-resolution spectroscopy ($\lesssim 2$ eV), combined with survey capability at CCD-type spectral resolution, will push spatially-resolved measurements of clusters and proto-clusters out to $z\sim3$. \end{itemize} \section{Summary} Galaxy cluster mass measurements are fundamental to our understanding of structure formation, and to the use of the cluster population and its evolution to constrain cosmological parameters. As illustrated in this review, there are a number of different methods with which to measure the mass of galaxy clusters, making use of galaxy velocity dispersions, WL, X-ray, and X-ray/SZ observations. In recent years, great progress has been made in all of these methods. Better observations have allowed more precise masses to be obtained; larger samples have allowed statistical analysis; inter-comparison between methods has enabled us to better understand systematic effects. A large parallel effort has been undertaken by the theoretical community, using numerical simulations to explore the assumptions and biases inherent in each mass estimation method. In individual systems, mass profile measurements from {\it all} methods now indicate no significant deviation from the cusped dark matter profile shape predicted from numerical simulations, from local systems up to the most distant observations currently possible, at $z\sim1$. 
Furthermore, the systematic effects inherent in each method are now well-identified: lack of dynamical equilibrium and triaxiality in optical; the hydrostatic bias in X-rays; the purity of background catalogues and triaxiality in lensing. A fully consistent picture of the mass of galaxy clusters is necessary for both understanding their formation and evolution, and for their use as cosmological probes. Critically, the methods are fully independent, so that inter-comparison of their results can help to better understand the assumptions and biases inherent in each one. Further progress will require mass measurements of a large sample of clusters, selected in as unbiased a manner as possible, using all available methods. Extension to lower masses is also necessary, both to better understand cluster formation and evolution, and also to exploit the rich data sets that will be available from new surveys in optical (HSC, LSST, DES, Euclid), X-ray ({\it eROSITA}), and SZ (Simons Observatory, CMB-S4). The combination of methods, such as using X-ray and SZ observations of similar angular resolution, will allow extension of mass measurements to higher redshifts. In the future, measurement of bulk motions and turbulence in the inner regions of nearby systems will be possible with XRISM and Athena, and in the outskirts and in high-redshift systems with high-resolution SZ imaging. These new surveys in optical, X-ray, SZ, and lensing will yield new samples and allow us to probe selection effects and reveal the properties of the true underlying population. Further progress is expected given the wealth of current and forthcoming data. \begin{acknowledgements} This work was initiated during a visit to the International Space Science Institute (ISSI) in Bern and we acknowledge ISSI’s hospitality. GWP and MA acknowledge funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement No. 340519. 
SE acknowledges financial contribution from the contracts NARO15 ASI-INAF I/037/12/0, ASI 2015-046-R.0, ASI-INAF n.2017-14-H.0 and funding from the European Union's Horizon 2020 Programme under the AHEAD project (grant agreement n. 654215). DN acknowledges Yale University for granting a triennial leave and the Max-Planck-Institut f\"ur Astrophysik for hospitality when this work was carried out. THR acknowledges support from the German Aerospace Agency (DLR) with funds from the Ministry of Economy and Technology (BMWi) through grant 50 OR 1514. \end{acknowledgements} \bibliographystyle{apj}
\section{Introduction} In recent years, a variety of smart speakers have been deployed and achieved great success, such as Google Home, Amazon Echo, and Tmall Genie, which facilitate goal-oriented dialogues and help users to accomplish their tasks through voice interactions. Natural language understanding (NLU) is critical to the performance of goal-oriented spoken dialogue systems. NLU typically includes the intent classification and slot filling tasks, aiming to form a semantic parse for user utterances. Intent classification focuses on predicting the intent of the query, while slot filling extracts semantic concepts. Table~\ref{tab:example} shows an example of intent classification and slot filling for the user query ``Find me a movie by Steven Spielberg''. \begin{table}[ht] \begin{center} \scalebox{0.9}{ \begin{tabular}{l| l l} \hline \textbf{Query} & \multicolumn{2}{l}{Find me a movie by Steven Spielberg} \\ \hline \multirow{3}{*}{\textbf{Frame}} & \textbf{Intent} & find\_movie \\ & \multirow{2}{*}{\textbf{Slot}} & genre = movie \\ & & directed\_by = Steven Spielberg \\ \hline \end{tabular} } \end{center} \caption{An example from user query to semantic frame.} \label{tab:example} \end{table} Intent classification is a classification problem that predicts the intent label $y^i$, and slot filling is a sequence labeling task that tags the input word sequence $x =(x_1, x_2, \cdots, x_T )$ with the slot label sequence $y^s = (y^s_1 , y^s_2 , \cdots, y^s_T )$. Recurrent neural network (RNN) based approaches, particularly gated recurrent unit (GRU) and long short-term memory (LSTM) models, have achieved state-of-the-art performance for intent classification and slot filling.
Recently, several joint learning methods for intent classification and slot filling were proposed to exploit and model the dependencies between the two tasks and improve the performance over independent models~\citep{DBLP:conf/slt/GuoTYZ14,DBLP:conf/interspeech/Hakkani-TurTCCG16,DBLP:conf/interspeech/LiuL16,DBLP:conf/naacl/GooGHHCHC18}. Prior work has shown that attention mechanism~\citep{DBLP:journals/corr/BahdanauCB14} helps RNNs to deal with long-range dependencies. Hence, attention-based joint learning methods were proposed and achieved the state-of-the-art performance for joint intent classification and slot filling~\citep{DBLP:conf/interspeech/LiuL16,DBLP:conf/naacl/GooGHHCHC18}. Lack of human-labeled data for NLU and other natural language processing (NLP) tasks results in poor generalization capability. To address the data sparsity challenge, a variety of techniques were proposed for training general purpose language representation models using an enormous amount of unannotated text, such as ELMo~\citep{DBLP:conf/naacl/PetersNIGCLZ18} and Generative Pre-trained Transformer (GPT)~\citep{DBLP:techreport/ge1ne8r}. Pre-trained models can be fine-tuned on NLP tasks and have achieved significant improvement over training on task-specific annotated data. More recently, a pre-training technique, Bidirectional Encoder Representations from Transformers (BERT)~\citep{DBLP:journals/corr/abs-1810-04805}, was proposed and has created state-of-the-art models for a wide variety of NLP tasks, including question answering (SQuAD v1.1), natural language inference, and others. However, there has not been much effort in exploring BERT for NLU. 
The technical contributions in this work are twofold: 1) we explore the BERT pre-trained model to address the poor generalization capability of NLU; 2) we propose a joint intent classification and slot filling model based on BERT and demonstrate that the proposed model achieves significant improvement on intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on several public benchmark datasets, compared to attention-based RNN models and slot-gated models. \section{Related work} Deep learning models have been extensively explored in NLU. According to whether intent classification and slot filling are modeled separately or jointly, we categorize NLU models into independent modeling approaches and joint modeling approaches. Approaches for intent classification include CNN~\citep{DBLP:conf/emnlp/Kim14,DBLP:conf/nips/ZhangZL15}, LSTM~\citep{DBLP:conf/interspeech/RavuriS15}, attention-based CNN~\citep{DBLP:conf/interspeech/ZhaoW16}, hierarchical attention networks~\citep{DBLP:conf/naacl/YangYDHSH16}, adversarial multi-task learning~\citep{DBLP:conf/acl/LiuQH17}, and others. Approaches for slot filling include CNN~\citep{DBLP:conf/interspeech/Vu16}, deep LSTM~\citep{DBLP:conf/slt/YaoPZYZS14}, RNN-EM~\citep{DBLP:conf/nlpcc/PengYJW15}, encoder-labeler deep LSTM~\citep{DBLP:journals/corr/KurataXZY16}, and joint pointer and attention~\citep{DBLP:conf/acl/ZhaoF18}, among others. Joint modeling approaches include CNN-CRF~\citep{DBLP:conf/asru/XuS13}, RecNN~\citep{DBLP:conf/slt/GuoTYZ14}, joint RNN-LSTM~\citep{DBLP:conf/interspeech/Hakkani-TurTCCG16}, attention-based BiRNN~\citep{DBLP:conf/interspeech/LiuL16}, and the slot-gated attention-based model~\citep{DBLP:conf/naacl/GooGHHCHC18}. \section{Proposed Approach} We first briefly describe the BERT model~\citep{DBLP:journals/corr/abs-1810-04805} and then introduce the proposed joint model based on BERT. Figure~\ref{fig:bert} illustrates a high-level view of the proposed model.
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{BERT} \caption{A high-level view of the proposed model. The input query is ``play the song little robin redbreast''.} \label{fig:bert} \end{figure} \subsection{BERT} The model architecture of BERT is a multi-layer bidirectional Transformer encoder based on the original Transformer model~\citep{DBLP:conf/nips/VaswaniSPUJGKP17}. The input representation is a concatenation of WordPiece embeddings~\citep{DBLP:journals/corr/WuSCLNMKCGMKSJL16}, positional embeddings, and the segment embedding. Specifically, for single-sentence classification and tagging tasks, the segment embedding carries no distinguishing information. A special classification embedding ([CLS]) is inserted as the first token and a special token ([SEP]) is added as the final token. Given an input token sequence ${\vect x} = (x_1,\dots, x_T)$, the output of BERT is ${\mat H} = ({\vect h}_1,\dots, {\vect h}_T)$. The BERT model is pre-trained with two objectives on large-scale unlabeled text, i.e., masked language modeling and next sentence prediction. The pre-trained BERT model provides a powerful context-dependent sentence representation and can be used for various target tasks, such as intent classification and slot filling, through the fine-tuning procedure, similar to how it is used for other NLP tasks.
\begin{table*}[ht] \begin{center} \begin{tabular}{l c c c c c c} \hline \multirow{2}{*}{\textbf{Models}} & \multicolumn{3}{c}{\textbf{Snips}} & \multicolumn{3}{c}{\textbf{ATIS}} \\ & \textbf{Intent} & \textbf{Slot} & \textbf{Sent} & \textbf{Intent} & \textbf{Slot} & \textbf{Sent} \\ \hline RNN-LSTM~\citep{DBLP:conf/interspeech/Hakkani-TurTCCG16} & 96.9 & 87.3 & 73.2 & 92.6 & 94.3 & 80.7 \\ Atten.-BiRNN~\citep{DBLP:conf/interspeech/LiuL16} & 96.7 & 87.8 & 74.1 & 91.1 & 94.2 & 78.9 \\ Slot-Gated~\citep{DBLP:conf/naacl/GooGHHCHC18} & 97.0 & 88.8 & 75.5 & 94.1 & 95.2 & 82.6 \\ \hline Joint BERT & \textbf{98.6} & \textbf{97.0} & \textbf{92.8} & {97.5} & \textbf{96.1} & {88.2} \\ Joint BERT + CRF & {98.4} & {96.7} & {92.6} & \textbf{97.9} & {96.0} & \textbf{88.6}\\ \hline \end{tabular} \end{center} \caption{NLU performance on Snips and ATIS datasets. The metrics are intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy (\%). The results for the first group of models are cited from~\citet{DBLP:conf/naacl/GooGHHCHC18}.} \label{tab:result} \end{table*} \subsection{Joint Intent Classification and Slot Filling} BERT can be easily extended to a joint intent classification and slot filling model. Based on the hidden state of the first special token ([CLS]), denoted ${\vect h}_1$, the intent is predicted as: \begin{align} y^i = \mathrm{softmax}({\mat W}^i {\vect h}_1 + {\vect b}^i) \,, \end{align} For slot filling, we feed the final hidden states of other tokens $\vect{h}_2,\dots,\vect{h}_T$ into a softmax layer to classify over the slot filling labels. To make this procedure compatible with the WordPiece tokenization, we feed each tokenized input word into a WordPiece tokenizer and use the hidden state corresponding to the first sub-token as input to the softmax classifier. 
\begin{align} y^s_n = \mathrm{softmax}({\mat W}^s {\vect h}_n + {\vect b}^s) \,, \quad n \in \{1,\dots,N\}, \end{align} \noindent where \({\vect h}_n\) is the hidden state corresponding to the first sub-token of word \(x_n\). To jointly model intent classification and slot filling, the objective is formulated as: \begin{align} p(y^i, y^s|{\vect x}) = p(y^i|{\vect x}) \prod_{n=1}^{N}{p(y^s_n|{\vect x})} \,. \end{align} The learning objective is to maximize the conditional probability $p(y^i, y^s|{\vect x})$. The model is fine-tuned end-to-end by minimizing the cross-entropy loss. \subsection{Conditional Random Field} Slot label predictions are dependent on predictions for surrounding words. It has been shown that structured prediction models, such as conditional random fields (CRF), can improve slot filling performance. \citet{DBLP:conf/acl/ZhouX15} improved semantic role labeling by adding a CRF layer on top of a BiLSTM encoder. Here we investigate the efficacy of adding a CRF for modeling slot label dependencies, on top of the joint BERT model. \section{Experiments and Analysis} We evaluate the proposed model on two public benchmark datasets, ATIS and Snips. \subsection{Data} The ATIS dataset~\citep{DBLP:conf/slt/TurHH10}, widely used in NLU research, includes audio recordings of people making flight reservations. We use the same data division as~\citet{DBLP:conf/naacl/GooGHHCHC18} for both datasets. The training, development and test sets contain 4,478, 500 and 893 utterances, respectively. There are 120 slot labels and 21 intent types in the training set. We also use Snips~\citep{DBLP:journals/corr/abs-1805-10190}, which is collected from the Snips personal voice assistant. The training, development and test sets contain 13,084, 700 and 700 utterances, respectively. There are 72 slot labels and 7 intent types in the training set.
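The joint architecture and objective described in the previous section can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' released implementation: the BERT encoder output is simulated with a random tensor, the label counts are Snips' 7 intents and 72 slot labels, and for simplicity the slot head is applied to all non-[CLS] positions rather than only to first sub-tokens.

```python
import torch
import torch.nn as nn

class JointHead(nn.Module):
    """Intent and slot softmax classifiers on top of BERT hidden states."""
    def __init__(self, hidden_size: int, num_intents: int, num_slots: int,
                 dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.intent_classifier = nn.Linear(hidden_size, num_intents)
        self.slot_classifier = nn.Linear(hidden_size, num_slots)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden) from a BERT-style encoder;
        # position 0 holds the [CLS] token's hidden state h_1.
        h = self.dropout(hidden_states)
        intent_logits = self.intent_classifier(h[:, 0])  # intent from [CLS]
        slot_logits = self.slot_classifier(h[:, 1:])     # slots from the rest
        return intent_logits, slot_logits

# Joint objective: sum of the two cross-entropy losses, i.e. maximizing
# p(y^i|x) * prod_n p(y^s_n|x) from the text above.
batch, seq_len, hidden = 2, 8, 768
head = JointHead(hidden, num_intents=7, num_slots=72)  # Snips label counts
hs = torch.randn(batch, seq_len, hidden)               # stand-in for BERT output
intent_logits, slot_logits = head(hs)

intent_gold = torch.randint(0, 7, (batch,))
slot_gold = torch.randint(0, 72, (batch, seq_len - 1))
ce = nn.CrossEntropyLoss()
loss = ce(intent_logits, intent_gold) + \
       ce(slot_logits.reshape(-1, 72), slot_gold.reshape(-1))
```

In a full implementation, the first-sub-token alignment from the WordPiece tokenizer determines which hidden states feed the slot classifier, and the summed loss is backpropagated through the encoder for end-to-end fine-tuning.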
\subsection{Training Details} We use the English uncased BERT-Base model\footnote{\url{https://github.com/google-research/bert}}, which has 12 layers, a hidden size of 768, and 12 attention heads. BERT is pre-trained on the BooksCorpus (800M words)~\citep{DBLP:conf/iccv/ZhuKZSUTF15} and English Wikipedia (2,500M words). For fine-tuning, all hyper-parameters are tuned on the development set. The maximum sequence length is 50. The batch size is 128. Adam~\cite{DBLP:journals/corr/KingmaB14} is used for optimization with an initial learning rate of 5e-5. The dropout probability is 0.1. The maximum number of epochs is selected from [1, 5, 10, 20, 30, 40]. \subsection{Results} \begin{table}[ht!] \begin{center} \begin{tabular}{l c c c} \hline \textbf{Model} & \textbf{Epochs} & \textbf{Intent} & \textbf{Slot} \\ \hline Joint BERT & 30 & 98.6 & 97.0 \\ No joint & 30 & 98.0 & 95.8 \\ Joint BERT & 40 & 98.3 & 96.4 \\ Joint BERT & 20 & 99.0 & 96.0 \\ Joint BERT & 10 & 98.6 & 96.5 \\ Joint BERT & 5 & 98.0 & 95.1 \\ Joint BERT & 1 & 98.0 & 93.3 \\ \hline \end{tabular} \end{center} \caption{Ablation analysis on the Snips dataset.} \label{tab:ablation} \end{table} \begin{table*}[ht] \begin{center} \scalebox{0.9}{ \begin{tabular}{l p{15cm}} \hline \textbf{Query} & need to see \textbf{mother joan of the angels} in one second \\ \hline \hline \multicolumn{2}{l}{Gold, predicted by joint BERT correctly} \\ \hline \textbf{Intent} & SearchScreeningEvent \\ \textbf{Slots} & O O O B-movie-name I-movie-name I-movie-name I-movie-name I-movie-name B-timeRange I-timeRange I-timeRange \\ \hline \hline \multicolumn{2}{l}{Predicted by Slot-Gated Model~\cite{DBLP:conf/naacl/GooGHHCHC18}} \\ \hline \textbf{Intent} & BookRestaurant \\ \textbf{Slots} & O O O B-object-name I-object-name I-object-name I-object-name I-object-name B-timeRange I-timeRange I-timeRange \\ \hline \end{tabular} } \end{center} \caption{A case in the Snips dataset.} \label{tab:case} \end{table*} Table~\ref{tab:result} shows the model performance as slot filling
F1, intent classification accuracy, and sentence-level semantic frame accuracy on the Snips and ATIS datasets. The first group consists of the state-of-the-art joint intent classification and slot filling baselines: the sequence-based joint model using BiLSTM~\citep{DBLP:conf/interspeech/Hakkani-TurTCCG16}, the attention-based model~\citep{DBLP:conf/interspeech/LiuL16}, and the slot-gated model~\citep{DBLP:conf/naacl/GooGHHCHC18}. The second group includes the proposed joint BERT models. As can be seen from Table~\ref{tab:result}, the joint BERT models significantly outperform the baseline models on both datasets. On Snips, joint BERT achieves intent classification accuracy of 98.6\% (from 97.0\%), slot filling F1 of 97.0\% (from 88.8\%), and sentence-level semantic frame accuracy of 92.8\% (from 75.5\%). On ATIS, joint BERT achieves intent classification accuracy of 97.5\% (from 94.1\%), slot filling F1 of 96.1\% (from 95.2\%), and sentence-level semantic frame accuracy of 88.2\% (from 82.6\%). Joint BERT+CRF replaces the softmax classifier with a CRF layer and performs comparably to joint BERT, probably because the self-attention mechanism in the Transformer already models the label structures sufficiently. Compared to ATIS, Snips includes multiple domains and has a larger vocabulary. For the more complex Snips dataset, joint BERT achieves a large gain in sentence-level semantic frame accuracy, from 75.5\% to 92.8\% (22.9\% relative). This demonstrates the strong generalization capability of the joint BERT model, considering that it is pre-trained on large-scale text from mismatched domains and genres (books and Wikipedia). On ATIS, joint BERT also achieves a significant improvement in sentence-level semantic frame accuracy, from 82.6\% to 88.2\% (6.8\% relative). \subsection{Ablation Analysis and Case Study} We conduct ablation analysis on Snips, as shown in Table~\ref{tab:ablation}.
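As a brief aside, the relative gains quoted for the results above follow from the usual definition $(\mathrm{new}-\mathrm{old})/\mathrm{old}$; a one-line Python check using the sentence-level accuracies from the text:

```python
# Relative improvement in sentence-level semantic frame accuracy,
# (new - old) / old, using the Snips and ATIS numbers quoted in the text.
snips = (92.8 - 75.5) / 75.5 * 100
atis = (88.2 - 82.6) / 82.6 * 100
print(f"Snips: {snips:.1f}% relative, ATIS: {atis:.1f}% relative")
# -> Snips: 22.9% relative, ATIS: 6.8% relative
```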
Without joint learning, the accuracy of intent classification drops to 98.0\% (from 98.6\%), and the slot filling F1 drops to 95.8\% (from 97.0\%). We also compare the joint BERT model with different numbers of fine-tuning epochs. The joint BERT model fine-tuned with only 1 epoch already outperforms the first group of models in Table~\ref{tab:result}. We further select a case from Snips, shown in Table~\ref{tab:case}, illustrating how joint BERT outperforms the slot-gated model~\cite{DBLP:conf/naacl/GooGHHCHC18} by exploiting the language representation power of BERT to improve the generalization capability. In this case, ``mother joan of the angels'' is wrongly predicted by the slot-gated model as an object name, and the intent is also wrong. However, joint BERT correctly predicts the slot labels and intent because ``mother joan of the angels'' is a movie entry in Wikipedia. The BERT model was pre-trained partly on Wikipedia and possibly learned this information for this rare phrase. \section{Conclusion} We propose a joint intent classification and slot filling model based on BERT, aiming at addressing the poor generalization capability of traditional NLU models. Experimental results show that our proposed joint BERT model outperforms BERT models that model intent classification and slot filling separately, demonstrating the efficacy of exploiting the relationship between the two tasks. Our proposed joint BERT model achieves significant improvement on intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on the ATIS and Snips datasets over previous state-of-the-art models. Future work includes evaluations of the proposed approach on other large-scale and more complex NLU datasets, and exploring the efficacy of combining external knowledge with BERT.
\section{Introduction} Fock's nonlinear relativity is characterized by the following coordinate transformation \cite{Fock} \begin{equation} t^{\prime }=\frac{\gamma(t-ux/c^2)}{\alpha_R},\quad x^{\prime }=\frac{ \gamma(x-ut)}{\alpha_R},\quad y^{\prime }= \frac{y}{\alpha_R},\quad z^{\prime} =\frac{z}{\alpha_R}, \end{equation} where $ \gamma =(1-u^{2}/c^{2})^{-1/2} $, $ {\alpha _{R}}=1+\left[(\gamma -1)ct-\gamma xu/c\right]/R $, and $R$ is the universe radius. This transformation was recently reproduced from appropriate deformed Poisson brackets \cite{BF} by using a procedure analogous to that of \cite{Ghosh-Pal}, where the coordinate transformation of Deformed Special Relativity \cite{Ameliano1, Ameliano2, Mag-Smol1, Mag-Smol2} is recovered. It was then shown that, in addition to the constant $c$, which represents the speed of light in the limit $R\rightarrow\infty$, transformation (1) also leaves the universe radius invariant. It defines the so-called $R$-Minkowski spacetime. Contrary to earlier versions of Fock's nonlinear relativity, this approach involving deformed Poisson brackets generated a transformation of the momentum under which the contraction $p_{\mu}x^{\mu}$ is invariant, thereby allowing a coherent description of plane waves in the context of the Fock transformation. After first quantization, these Poisson brackets were replaced by the following commutation relations \begin{align} \left[x^\mu,x^\nu\right] & = 0, \\ \left[x^\mu,p^\nu\right] & = - i \hbar \eta^{\mu\nu} + \frac{i \hbar}{R}\eta^{0\nu}x^\mu, \\ \left[p^\mu,p^\nu\right] & = -\frac{i \hbar}{R}[p^\mu\eta^{0\nu}-p^\nu\eta^{\mu0}], \end{align} constituting the phase space algebra of the $R$-Minkowski spacetime \cite{FB1}. Here $\eta^{\mu\nu} = \mathrm{diag}(+1,-1,-1,-1)$ and $\mu ,\nu =0,1,2,3$.
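As a quick symbolic cross-check (not part of the cited derivations; a SymPy sketch of the statement that $c$ is the speed of light in the limit $R\rightarrow\infty$), transformation (1) reduces to the ordinary Lorentz boost when $R\to\infty$, since $\alpha_R\to 1$:

```python
import sympy as sp

t, x, u, c, R = sp.symbols('t x u c R', positive=True)
gamma = 1 / sp.sqrt(1 - u**2 / c**2)
alpha_R = 1 + ((gamma - 1) * c * t - gamma * x * u / c) / R

# Fock transformation (1) for t' and x'.
t_prime = gamma * (t - u * x / c**2) / alpha_R
x_prime = gamma * (x - u * t) / alpha_R

# As R -> oo, alpha_R -> 1 and the ordinary Lorentz boost is recovered.
lim_t = sp.limit(t_prime, R, sp.oo)
lim_x = sp.limit(x_prime, R, sp.oo)
print(sp.simplify(lim_t - gamma * (t - u * x / c**2)))  # 0
print(sp.simplify(lim_x - gamma * (x - u * t)))         # 0
```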
The above $R$-algebra was completed by commutation relations involving the angular momentum components, \begin{equation} \textbf{J}^{\mu \nu }=x^{\mu }p^{\nu }-x^{\nu }p^{\mu }, \end{equation} which satisfy the following relations \begin{align} \left[x^{\mu}, \textbf{J}^{\nu\lambda}\right] & = -i\hbar \left(\eta^{\mu\lambda}x^{\nu}-\eta^{\mu\nu}x^{\lambda}\right) + \frac{i\hbar}{R}x^{\mu} \left(\eta^{0\lambda}x^{\nu}-\eta^{0\nu}x^{\lambda}\right), \\ \left[p^{\mu}, \textbf{J}^{\nu\lambda}\right] & = i\hbar \left(\eta^{\nu\mu}p^{\lambda}-\eta^{\lambda\mu}p^{\nu}\right)+ \frac{i\hbar}{R}p^{\mu} \left(\eta^{\nu 0}x^{\lambda}-\eta^{\lambda 0}x^{\nu}\right),\\ \left[\textbf{J}^{\mu\nu}, \textbf{J}^{\lambda\sigma}\right] & = i\hbar \left(\eta^{\nu\lambda}\textbf{J}^{\mu\sigma}- \eta^{\nu\sigma}\textbf{J}^{\mu\lambda} -\eta^{\mu\lambda}\textbf{J}^{\nu\sigma}+\eta^{\mu\sigma}\textbf{J}^{\nu\lambda}\right). \end{align} The corresponding first Casimir operator was constructed and allowed the derivation of the Klein-Gordon equation, which turned out to be the Klein-Gordon equation in de Sitter spacetime expressed in its conformal metric \cite{FB1}. This result established a correspondence between the $R$-Minkowski spacetime and de Sitter spacetime. This correspondence was further confirmed when Dirac's equation was derived in $R$-Minkowski spacetime \cite{FB2} and turned out to be exactly the Dirac equation in the conformally de Sitter spacetime. Elsewhere, Dyson \cite{Dyson} claimed that Feynman derived Maxwell's equations and the Lorentz force from Newton's law of motion and the commutator between position and velocity. 
Tanimura \cite{Tanimura} formulated the relativistic version of Feynman's derivation by starting from \begin{align} \left[x^\mu,x^\nu \right]&=0, \\ \left[x^\mu,\dot{x}^\nu \right]&=-\frac{i\hbar}{m}\eta^{\mu\nu}, \\ m\ddot{x}^{\mu} &= F^{\mu}(x,\dot{x}) \end{align} and by defining the electromagnetic tensor as \begin{equation} F^{\mu\nu} \equiv -\frac{m^{2}}{i\hbar q} \left[\dot{x}^{\mu},\dot{x}^{\nu} \right], \end{equation} where $x^{\mu}=x^{\mu}(\tau)$, $\tau$ is a parameter with a dimension of time, the dot refers to the derivative with respect to $\tau$ and $q$ is the charge of a particle of mass $m$. He derived the relations \begin{align} 0 & = \partial^{\lambda}F^{\mu\nu}+ \partial^{\nu}F^{\lambda\mu}+\partial^{\mu}F^{\nu\lambda}, \\ F^{\mu} & = qF^{\mu\nu} \dot{x}_{\nu}, \end{align} which represent respectively the first group of Maxwell's equations and the Lorentz force. Later, B\'{e}rard et al. \cite{BGM1,BGM2,BM} developed this approach and showed how the magnetic monopole appears as a consequence of the restoration of the Lorentz algebra symmetry. Also, Harikumar et al. \cite{H1,H2} established the Deformed Special Relativistic version of this formulation, up to first order in the deformation parameter, by using the commutation relations of the $\kappa$-Minkowski spacetime \cite{Ghosh-Pal}. In this extension, it was shown that the $\kappa$-deformed Maxwell equations preserve the electric-magnetic duality and predict the manifestation of the magnetic monopole. In this paper, we generalize the use of the Feynman method to Fock's nonlinear relativity. In section 2, we establish, up to first order in the deformation, the deformed Maxwell equations and Lorentz force in the $R$-Minkowski spacetime. In section 3, we restore the $R$-Lorentz symmetry by modifying the momentum generators and therefore prove the presence of the Dirac magnetic monopole. Section 4 is devoted to the conclusion. 
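For orientation, the key special relativistic step can be made explicit (a brief restatement of Tanimura's argument, added here for the reader): relation (10) gives $\left[\dot{x}^{\mu},x^{\alpha}\right]=i\hbar\eta^{\mu\alpha}/m$, so for any function of position alone,

```latex
\left[\dot{x}^{\mu},F^{\nu\lambda}(x)\right]
  =\left[\dot{x}^{\mu},x^{\alpha}\right]\partial_{\alpha}F^{\nu\lambda}
  =\frac{i\hbar}{m}\,\partial^{\mu}F^{\nu\lambda},
```

and inserting definition (12) into the Jacobi identity $\sum_{\mathrm{cyclic}}\left[\dot{x}^{\mu},\left[\dot{x}^{\nu},\dot{x}^{\lambda}\right]\right]=0$ then immediately yields the homogeneous equations (13).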
\section{Maxwell's equations in $R$-Minkowski spacetime} In order to establish Maxwell's equations in the context of Fock's nonlinear relativity, let us first search for the analogues of relations (10) and (12) in the framework of $R$-Minkowski spacetime. For this purpose, let us recall that it was shown in \cite{BF} that the following canonical variables \begin{align} X^{\mu} & = \left(1-\frac{x^0}{R}\right)^{-1}x^{\mu} , \\ P^{\mu} & = \left(1-\frac{x^0}{R}\right)p^{\mu} \end{align} obey the laws of special relativity. Thus, if we replace the Poisson brackets with commutators, we have \begin{align} \left[ X^{\mu}, X^{\nu}\right] & = 0, \\ \left[ X^{\mu}, P^{\nu}\right] & = -i \hbar \eta^{\mu\nu}, \\ \left[ P^{\mu}, P^{\nu}\right] & = 0. \end{align} At first order in $1/R$, relation (15) gives \begin{equation} X^{\mu} = \left(1+\frac{x^0}{R}\right)x^{\mu} \end{equation} and then \begin{equation} \dot{X}^{\mu} = \dot{x}^{\mu} + \frac{1}{2R} \left(\dot{x}^{0}x^{\mu} + x^{\mu}\dot{x}^{0} + x^{0}\dot{x}^{\mu} + \dot{x}^{\mu} x^{0} \right), \end{equation} where the symmetrization operation was performed. Let us begin with the case where the electromagnetic field is absent. Using the fact that \begin{equation} P^{\mu}=m\dot{X}^{\mu}, \end{equation} relations (18) and (19) allow us to write \begin{align} \left[ X^{\mu}, \dot{X}^{\nu} \right] & = - \frac{i \hbar}{m} \eta^{\mu\nu} , \\ \left[ \dot{X}^{\mu}, \dot{X}^{\nu} \right] & = 0. \end{align} Using expressions (20) and (21) in (23), we obtain \begin{eqnarray} \left[ x^{\mu}, \dot{x}^{\nu} \right] + \frac{1}{R} \left\{ \frac{3}{2} x^{0} \left[ x^{\mu}, \dot{x}^{\nu} \right] + \frac{1}{2} \left[ x^{\mu}, \dot{x}^{0} \right] x^{\nu} \right. \hskip30mm&& \nonumber \\ \left. + \frac{1}{2} x^{\nu} \left[ x^{\mu}, \dot{x}^{0} \right] + \frac{1}{2} \left[ x^{\mu}, \dot{x}^{\nu} \right] x^{0} + \left[ x^{0}, \dot{x}^{\nu} \right] x^{\mu} \right\} = - \frac{i \hbar}{m} \eta^{\mu\nu}. 
\end{eqnarray} The zeroth order solution of this last equation is obtained by taking the limit $R\rightarrow \infty$ \begin{equation} \left[ x^{\mu}, \dot{x}^{\nu} \right]_{(0)} = - \frac{i \hbar}{m} \eta^{\mu\nu}. \end{equation} At first order, all the commutators between braces in (25) can be replaced by their zeroth-order values since they are preceded by the factor $1/R$. Then, we deduce that \begin{equation} \left[ x^{\mu}, \dot{x}^{\nu} \right] = - \frac{i \hbar}{m} \eta^{\mu\nu} + \frac{i \hbar}{mR} \left( 2x^{0} \eta^{\mu\nu} + x^{\nu}\eta^{\mu0} + x^{\mu}\eta^{0\nu} \right). \end{equation} In the same way, the use of (21) in (24) leads to \begin{equation} \left[ \dot{x}^{\mu}, \dot{x}^{\nu} \right] = \frac{i \hbar}{Rm} \left( \dot{x}^{\mu}\eta^{0\nu} - \dot{x}^{\nu}\eta^{\mu 0} \right). \end{equation} Contrary to the special relativistic case, we see that this commutator has a non-zero value. By using (16), we can check that this result is compatible with relation (4). Let us now move on to the case where the electromagnetic field is present. As in the special relativistic case, the commutator $\left[ X^{\mu}, \dot{X}^{\nu} \right]$ keeps the same expression in the absence or in the presence of the electromagnetic field. In fact, taking into account relation (17), even if we use in (18), instead of (22), the following expression \begin{equation} P^{\mu}=m\dot{X}^{\mu} + \frac{q}{c} A^{\mu}(X), \end{equation} we reproduce (23), $A^{\mu}$ being the four-potential. For the same reason, if we follow the same procedure as above to determine the commutator $\left[ x^{\mu}, \dot{x}^{\nu} \right]$ in the presence of the electromagnetic field, we reproduce expression (27). Concerning the commutator $\left[ \dot{x}^{\mu}, \dot{x}^{\nu} \right]$, definition (12) allows us to observe the following properties in the special relativistic case: \begin{enumerate} \item The electromagnetic tensor is antisymmetric: $F^{\mu\nu}=-F^{\nu\mu}$. 
\item In the absence of the electromagnetic field, the commutator $\left[ \dot{x}^{\mu}, \dot{x}^{\nu} \right]$ vanishes, which is compatible with the fact that $\left[ p^{\mu}, p^{\nu} \right]=0$. \item The electromagnetic tensor does not depend on $\dot{x}^{\lambda}$ since $\left[F^{\mu\nu}, x^{\lambda} \right]=0$. \end{enumerate} \noindent In the context of Fock's nonlinear relativity, definition (12) does not work since the second and the third of the above properties are not satisfied. However, if we define \begin{equation} F^{\mu\nu} \equiv -\frac{m^{2}}{i\hbar q} \left[\dot{x}^{\mu},\dot{x}^{\nu} \right] + \frac{m}{qR} \left( \dot{x}^{\mu}\eta^{0\nu} - \dot{x}^{\nu}\eta^{\mu 0} \right) \nonumber, \end{equation} meaning that \begin{equation} \left[\dot{x}^{\mu},\dot{x}^{\nu} \right] = \frac{i\hbar}{Rm} \left(\eta^{0\nu}\dot{x}^{\mu} - \eta^{\mu 0}\dot{x}^{\nu} \right) -\frac{i\hbar q}{m^{2}}F^{\mu\nu}, \end{equation} we observe that $F^{\mu\nu}$ keeps its antisymmetric property and that the commutator $\left[\dot{x}^{\mu},\dot{x}^{\nu} \right]$ reduces, in the absence of the electromagnetic field, to expression (28) established above. We can also check that $F^{\mu\nu}$ does not depend on $\dot{x}^{\lambda}$. 
In fact, substituting expression (30) in the following Jacobi identity \begin{equation} \left[\left[\dot{x}^{\mu},\dot{x}^{\nu} \right], x^{\lambda}\right] = - \left[ \left[x^{\lambda}, \dot{x}^{\mu} \right], \dot{x}^{\nu} \right] - \left[\left[\dot{x}^{\nu}, x^{\lambda} \right], \dot{x}^{\mu}\right] \nonumber, \end{equation} we deduce that \begin{eqnarray} \left[ F^{\mu\nu}, x^{\lambda} \right] = \frac{m^{2}}{i \hbar q} \left\{ \left[ \left[x^{\lambda}, \dot{x}^{\mu} \right], \dot{x}^{\nu} \right] + \left[\left[\dot{x}^{\nu}, x^{\lambda} \right], \dot{x}^{\mu} \right] \right\} \hskip20mm&& \nonumber \\ + \frac{m}{qR} \left\{ \eta^{0\nu} \left[\dot{x}^{\mu}, x^{\lambda} \right] - \eta^{\mu 0} \left[\dot{x}^{\nu}, x^{\lambda} \right] \right\} . \end{eqnarray} With the use of (27), it is easy to show that $\left[F^{\mu\nu}, x^{\lambda} \right]=0$, meaning that $F^{\mu\nu}$ does not depend on $\dot{x}^{\lambda}$. In order to establish the first group of Maxwell's equations, as in \cite{Tanimura}, our starting point is the following Jacobi identity \begin{equation} \left[\dot{x}^{\mu},\left[\dot{x}^{\nu}, \dot{x}^{\lambda}\right] \right] + \left[ \dot{x}^{\lambda}, \left[\dot{x}^{\mu} , \dot{x}^{\nu} \right] \right] + \left[\dot{x}^{\nu}, \left[\dot{x}^{\lambda} , \dot{x}^{\mu}\right] \right] = 0. \end{equation} Relations (27) and (2) indicate that $\left[x^{\lambda},\left[\dot{x}^{\nu}, x^{\alpha}\right] \right] =0$. This allows us to write \begin{equation} \left[\dot{x}^{\mu}, F^{\nu\lambda}\right] = \frac{\partial F^{\nu\lambda}}{\partial x^{\alpha}} \left[\dot{x}^{\mu}, x^{\alpha}\right]. \end{equation} Using relation (30) in (32) and taking into account (33), we deduce that \begin{eqnarray} \partial^{\lambda}F^{\mu\nu}+ \partial^{\nu}F^{\lambda\mu}+\partial^{\mu}F^{\nu\lambda} \hskip63mm&& \nonumber \\ = \frac{1}{R} \left\{ -2 \left(\eta^{0\lambda}F^{\mu\nu} + \eta^{\mu 0}F^{\nu\lambda}+ \eta^{0\nu}F^{\lambda\mu}\right) \right. \hskip13mm&& \nonumber \\ \left. 
+ x^{\alpha}\partial_{\alpha} \left( \eta^{0\lambda}F^{\mu\nu} + \eta^{\mu 0}F^{\nu\lambda}+ \eta^{0\nu}F^{\lambda\mu} \right) \right. \hskip8mm&& \nonumber \\ \left. + x^{\mu}\partial^{0}F^{\nu\lambda} + x^{\nu}\partial^{0}F^{\lambda\mu} + x^{\lambda}\partial^{0}F^{\mu\nu} \right\}. \end{eqnarray} This equation represents the first group of Maxwell's equations, up to the first order, in $R$-Minkowski spacetime. Contracting two indices gives \begin{eqnarray} \partial_{\mu}F^{\mu\lambda}+ \partial_{\mu}F^{\lambda\mu} = \frac{1}{R} \left\{ -2 \left( F^{0\lambda} + F^{\lambda 0} \right) \right. \hskip47mm&& \nonumber \\ \left. + x_{\mu}\left( \partial^{\mu}F^{0\lambda}+ \partial^{\mu}F^{\lambda 0} \right) + x_{\mu}\left( \partial^{0}F^{\mu\lambda}+ \partial^{0}F^{\lambda \mu} \right) \right\}. \end{eqnarray} By a procedure analogous to that used in the context of special relativity \cite{BGM1}, let us define the four-current in the following manner \begin{equation} \mu_{0}J^{\lambda} = - \partial_{\mu}F^{\lambda\mu} + \frac{1}{R} \left(-2\xi F^{\lambda 0} + \sigma x_{\mu} \partial^{\mu}F^{\lambda 0} + \omega x_{\mu} \partial^{0}F^{\lambda \mu} \right), \end{equation} where $\xi$, $\sigma$ and $\omega$ are free constants whose values will be fixed below. Because of the antisymmetric feature of the tensor $F^{\mu\nu}$, relation (35) allows us to write the second group of Maxwell's equations as \begin{equation} \partial_{\mu}F^{\mu\lambda} + \frac{1}{R} \left(2\xi F^{0 \lambda } - \sigma x_{\mu} \partial^{\mu}F^{0 \lambda} - \omega x_{\mu} \partial^{0}F^{\mu \lambda} \right) = \mu_{0}J^{\lambda}. \end{equation} In order to determine the exact values of $\xi$, $\sigma$ and $\omega$, we will impose that the well-known electric-magnetic duality be a symmetry of the above Maxwell's equations in the absence of sources. 
For this purpose, let us introduce the electric and the magnetic fields, \begin{equation} E^{i}=cF^{i0}, \hskip35mm B_{i}=\varepsilon_{ijk}F^{jk}/2, \end{equation} where $\varepsilon_{ijk}$ is the Levi-Civita antisymmetric tensor $(\varepsilon_{123}=1)$. From (34) and (37), we can deduce that \begin{align} \frac{1}{c} \overrightarrow{\nabla} \wedge \overrightarrow{E} + \partial_{0} \overrightarrow{B} + \frac{1}{R} \left[ 2\overrightarrow{B} - x^{0} \partial_{0} \overrightarrow{B} - x^{\alpha}\partial_{\alpha} \overrightarrow{B} + \frac{1}{c} \overrightarrow{r} \wedge \partial_{0} \overrightarrow{E} \right] & = \overrightarrow{0} ,\\ \overrightarrow{\nabla}\cdot\overrightarrow{B} + \frac{1}{R} \overrightarrow{r}\cdot \partial_{0} \overrightarrow{B} & = 0 , \\ \overrightarrow{\nabla}\cdot\overrightarrow{E} + \frac{\omega}{R} \overrightarrow{r}\cdot \partial_{0} \overrightarrow{E} & = c\mu_{0}{j^{0}} , \\ \overrightarrow{\nabla} \wedge \overrightarrow{B} -\frac{1}{c} \partial_{0} \overrightarrow{E} - \frac{1}{R} \left[ \frac{2\xi}{c}\overrightarrow{E} - \frac{\omega}{c} x^{0} \partial_{0} \overrightarrow{E} \right. \hskip27mm&& \nonumber \\ \left. - \frac{\sigma}{c} x^{\alpha}\partial_{\alpha} \overrightarrow{E} - \omega \overrightarrow{r} \wedge \partial_{0} \overrightarrow{B} \right] & = \mu_{0}\overrightarrow{j}. \end{align} In the absence of sources, it is easy to show that the invariance of the above equations under the electric-magnetic duality \begin{equation} \overrightarrow{E} \mapsto c\overrightarrow{B}, \ \ \ \ \ \ \ \ \ \ \ \ \overrightarrow{B} \mapsto - \frac{1}{c}\overrightarrow{E} \end{equation} requires $\xi=\sigma=\omega=+1$. With these values, (37) represents the second group of Maxwell's equations in $R$-Minkowski spacetime, up to the first order. Let us now move on to the Lorentz force. 
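To see how the duality fixes these constants (an explicit check we add for clarity), apply (43) to the sourceless form of equation (39); one obtains

```latex
\overrightarrow{\nabla}\wedge\overrightarrow{B}
 -\frac{1}{c}\,\partial_{0}\overrightarrow{E}
 -\frac{1}{R}\left[\frac{2}{c}\overrightarrow{E}
 -\frac{1}{c}\,x^{0}\partial_{0}\overrightarrow{E}
 -\frac{1}{c}\,x^{\alpha}\partial_{\alpha}\overrightarrow{E}
 -\overrightarrow{r}\wedge\partial_{0}\overrightarrow{B}\right]=\overrightarrow{0},
```

which coincides with the sourceless form of (42) precisely when $\xi=\sigma=\omega=+1$; similarly, (40) maps onto the sourceless form of (41) when $\omega=+1$.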
As in special relativity, the force is given by $F^{\mu}=m\ddot{x}^{\mu}$ and then \begin{equation} \left[x^{\mu},F^{\nu} \right] = m \left[x^{\mu},\ddot{x}^{\nu}\right] = m \frac{d}{d\tau}\left[x^{\mu},\dot{x}^{\nu} \right] - m \left[\dot{x}^{\mu},\dot{x}^{\nu} \right]. \end{equation} Using (27) and (30), we obtain \begin{equation} \left[x^{\mu},F^{\nu} \right] = \frac{i\hbar q}{m} F^{\mu\nu} + \frac{2i\hbar }{R} \left(\eta^{\mu\nu} \dot{x}^{0} + \eta^{\mu 0} \dot{x}^{\nu} \right). \end{equation} At zeroth order, we have \begin{equation} \frac{i\hbar q}{m} F^{\mu\nu} = \left[x^{\mu},F^{\nu} \right]_{(0)} = \left[x^{\mu},\dot{x}^{\alpha}\right]_{(0)} \frac{\partial F^{\nu} }{\partial \dot{x}^{\alpha}} = -\frac{i\hbar }{m} \frac{\partial F^{\nu} }{\partial \dot{x}_{\mu}}, \end{equation} which gives \begin{equation} F^{\nu} = <q F^{\nu\mu} \dot{x}_{\mu} > + G^{\nu}(x), \end{equation} where $G^{\nu}$ is an arbitrary function of $x$ and the symbol $<...>$ refers to symmetrization. Thus, equation (45) indicates that the expression at the first order for $F^{\nu}$ will take the form \begin{equation} F^{\nu} = <q F^{\nu\mu} \dot{x}_{\mu} > + G^{\nu}(x) + \frac{1}{R} H^{\nu}(x,\dot{x}), \end{equation} where $H^{\nu}$ is independent of $R$. As $F^{\mu\nu}$ does not depend on $\dot{x}$, using this last expression of $F^{\nu}$ and taking into account (27), we obtain \begin{equation} \left[x^{\mu},F^{\nu} \right] = \frac{i\hbar q}{m} F^{\mu\nu} + \frac{i\hbar q}{mR} \left(2x^{0}F^{\nu\mu} + \eta^{\mu 0} x_{\lambda} F^{\nu\lambda} + F^{\nu 0} x^{\mu} \right) + \frac{1}{R} \left[x^{\mu},H^{\nu} \right]. \end{equation} Identifying (45) and (49), we deduce \begin{equation} \left[x^{\mu},H^{\nu} \right] = 2i\hbar \left(\eta^{\mu\nu} \dot{x}^{0} + \eta^{\mu 0} \dot{x}^{\nu} \right) - \frac{i\hbar q}{m} \left(2x^{0}F^{\nu\mu} + \eta^{\mu 0} x_{\lambda} F^{\nu\lambda} + F^{\nu 0} x^{\mu} \right). 
\end{equation} In (49), the commutator $\left[x^{\mu},H^{\nu} \right]$ can be replaced by its zeroth-order value since it is preceded by the factor $1/R$. At this order, we have \begin{equation} \left[ \dot{x}^{\mu},\left[x^{\nu},\dot{x}^{\lambda} \right]_{(0)} \right]_{(0)} = 0 \end{equation} and therefore \begin{equation} \left[x^{\mu},H^{\nu} \right]_{(0)} = \left[x^{\mu},\dot{x}^{\lambda} \right]_{(0)}\frac{\partial H^{\nu}}{\partial \dot{x}^{\lambda}} = - \frac{i\hbar }{m} \frac{\partial H^{\nu}}{\partial \dot{x}_{\mu}}. \end{equation} With the use of (50), we find \begin{equation} \frac{\partial H^{\nu}}{\partial \dot{x}_{\mu}} = -2m \left(\eta^{\mu\nu} \dot{x}^{0} + \eta^{\mu 0} \dot{x}^{\nu} \right) + q \left(2x^{0}F^{\nu\mu} + \eta^{\mu 0} x_{\lambda} F^{\nu\lambda} + F^{\nu 0} x^{\mu} \right), \end{equation} and we then deduce \begin{eqnarray} H^{\nu}(x, \dot{x})= - m \left(\dot{x}^{\nu} \dot{x}^{0} + \dot{x}^{0} \dot{x}^{\nu}\right) \hskip45mm&& \nonumber \\ + < 2q x^{0}F^{\nu\mu} \dot{x}_{\mu} + q F^{\nu\lambda} x_{\lambda} \dot{x}_{0} + q F^{\nu 0} x^{\mu}\dot{x}_{\mu} >. \end{eqnarray} With this result, expression (48) turns out to be \begin{eqnarray} F^{\nu} = <q F^{\nu\mu} \dot{x}_{\mu} > + G^{\nu}(x) + \frac{1}{R} \left[ -m \left(\dot{x}^{\nu} \dot{x}^{0} + \dot{x}^{0} \dot{x}^{\nu}\right) \right. \hskip20mm&& \nonumber \\ \left. + < 2q x^{0}F^{\nu\mu} \dot{x}_{\mu} + q F^{\nu\lambda} x_{\lambda} \dot{x}_{0} + q F^{\nu 0} x^{\mu}\dot{x}_{\mu} > \right]. \end{eqnarray} Contrary to the DSR case \cite{H1}, the Lorentz force in $R$-Minkowski spacetime contains a corrective term proportional to $1/R$ at first order. \section{The $R$-Lorentz symmetry} Now, we will focus on some consequences of the above extension of Maxwell's equations on the $R$-Lorentz symmetry. First, we consider the $R$-Lorentz Lie algebra in the absence of the electromagnetic field. In this case, we will denote the angular momentum components by $J^{\mu\nu}$. 
Multiplying equation (16) on the left by $\left(1+x^{0}/R\right)$, and using successively (22) and (21), we find at first order in $1/R$ the following expression for the momentum \begin{equation} p^{\mu} = m\dot{x}^{\mu} + \frac{m}{R} \left(x^{\mu}\dot{x}^{0} + 2 x^{0}\dot{x}^{\mu}\right). \end{equation} Using this expression in (5), the angular momentum becomes \begin{equation} J^{\mu\nu} = m\left( 1+ \frac{2x^{0}}{R}\right) \left(x^{\mu}\dot{x}^{\nu} - x^{\nu}\dot{x}^{\mu}\right). \end{equation} The symmetrization operation allows us to write \begin{eqnarray} J^{\mu\nu} = \frac{m}{2} \left( x^{\mu}\dot{x}^{\nu} + \dot{x}^{\nu} x^{\mu} - x^{\nu}\dot{x}^{\mu} - \dot{x}^{\mu} x^{\nu}\right) + \frac{m}{3R} \left(2x^{0}x^{\mu}\dot{x}^{\nu} + x^{0}\dot{x}^{\nu}x^{\mu} \right. \hskip8mm&& \nonumber \\ \left. + x^{\mu}\dot{x}^{\nu}x^{0} + 2\dot{x}^{\nu}x^{\mu}x^{0} - 2x^{0} x^{\nu}\dot{x}^{\mu} - x^{0} \dot{x}^{\mu}x^{\nu} - x^{\nu}\dot{x}^{\mu}x^{0} - 2\dot{x}^{\mu}x^{\nu}x^{0} \right). \end{eqnarray} From relation (27), it is easy to show that $\left[x^{\mu},\dot{x}^{\nu} \right]= \left[x^{\nu},\dot{x}^{\mu} \right]$ and then to deduce that \begin{equation} x^{\mu}\dot{x}^{\nu} - x^{\nu}\dot{x}^{\mu} = \dot{x}^{\nu}x^{\mu} - \dot{x}^{\mu}x^{\nu}. \end{equation} It follows that \begin{equation} J^{\mu\nu} = m\left( x^{\mu}\dot{x}^{\nu} - x^{\nu}\dot{x}^{\mu}\right) + \frac{m}{R} \left[ x^{0}\left( x^{\mu}\dot{x}^{\nu} - x^{\nu}\dot{x}^{\mu}\right) + \left( x^{\mu}\dot{x}^{\nu} - x^{\nu}\dot{x}^{\mu}\right) x^{0} \right]. 
\end{equation} Again, the use of (27) allows us to write \begin{equation} \dot{x}^{\mu} x^{0} = \frac{i\hbar}{m} \eta^{0\mu} - \frac{i\hbar}{mR} \left(3x^{0}\eta^{0\mu} + x^{\mu} \right) + x^{0}\dot{x}^{\mu} \end{equation} with which the angular momentum finally takes the following simple form \begin{equation} J^{\mu\nu} = m\left(1+ \frac{2x^{0}}{R} \right)\left( x^{\mu}\dot{x}^{\nu} - x^{\nu}\dot{x}^{\mu}\right) + \frac{i\hbar}{R} \left( x^{\mu} \eta^{0\nu} - x^{\nu} \eta^{0\mu}\right). \end{equation} Thus, with the help of (27) and (28), we can show after some calculations that \begin{align} \left[x^{\mu},J^{\nu\lambda} \right] & = i\hbar \left(\eta^{\mu\nu} x^{\lambda} - \eta^{\mu\lambda} x^{\nu} \right) + \frac{i\hbar}{R} \left( \eta^{0\lambda}x^{\mu}x^{\nu} - \eta^{0\nu} x^{\mu}x^{\lambda}\right), \\ \left[\dot{x}^{\mu},J^{\nu\lambda} \right] & = i\hbar \left(\eta^{\mu\nu} \dot{x}^{\lambda} - \eta^{\mu\lambda} \dot{x}^{\nu} \right) - \frac{i\hbar}{R} x^{\mu} \left(\eta^{0\nu} \dot{x}^{\lambda} - \eta^{0\lambda} \dot{x}^{\nu} \right) \nonumber \\ & \hskip4mm - \frac{i\hbar}{R} \left(\eta^{0\nu} x^{\lambda} - \eta^{0\lambda} x^{\nu} \right) \dot{x}^{\mu} - \frac{\hbar^{2}}{Rm} \left(\eta^{0\lambda}\eta^{\mu\nu} - \eta^{0\nu} \eta^{\mu\lambda} \right), \\ \left[J^{\mu\nu},J^{\lambda\sigma}\right] & = i\hbar \left(\eta^{\nu\lambda}J^{\mu\sigma} - \eta^{\nu\sigma}J^{\mu\lambda} - \eta^{\mu\lambda}J^{\nu\sigma} + \eta^{\mu\sigma}J^{\nu\lambda}\right). \end{align} We observe that the commutators (63) and (65) have the same forms as in (6) and (8), while (64) can be derived from (7). In the presence of the electromagnetic field, we will denote by $M^{\mu\nu}$ the contribution of the field to the angular momentum and therefore write \begin{equation} \textbf{J}^{\mu\nu} = J^{\mu\nu} + M^{\mu\nu}. \end{equation} Also, for the commutator $\left[\dot{x}^{\mu},\dot{x}^{\nu} \right]$, it is expression (30) which must be used instead of (28). 
Then, the above $R$-Lorentz Lie algebra turns out to be \begin{align} \left[x^{\mu}, \textbf{J}^{\nu\lambda}\right] & = -i\hbar\left(\eta^{\mu\lambda}x^{\nu}-\eta^{\mu\nu}x^{\lambda}\right) + \frac{i\hbar}{R}x^{\mu}\left(\eta^{0\lambda}x^{\nu}-\eta^{0\nu}x^{\lambda}\right) + \left[x^{\mu},M^{\nu\lambda}\right], \\ \left[\dot{x}^{\mu},\textbf{J}^{\nu\lambda} \right] & = i\hbar \left(\eta^{\mu\nu} \dot{x}^{\lambda} - \eta^{\mu\lambda} \dot{x}^{\nu} \right) - \frac{i\hbar}{R} x^{\mu} \left(\eta^{0\nu} \dot{x}^{\lambda} - \eta^{0\lambda} \dot{x}^{\nu} \right) \nonumber \\ & \hskip4mm - \frac{i\hbar}{R} \left(\eta^{0\nu} x^{\lambda} - \eta^{0\lambda} x^{\nu} \right) \dot{x}^{\mu} - \frac{\hbar^{2}}{Rm} \left(\eta^{0\lambda}\eta^{\mu\nu} - \eta^{0\nu} \eta^{\mu\lambda} \right) \nonumber \\ & \hskip4mm - \frac{i\hbar q}{m} \left(1+ \frac{2x^{0}}{R}\right) \left(x^{\nu}F^{\mu\lambda} - x^{\lambda}F^{\mu\nu}\right) + \left[\dot{x}^{\mu},M^{\nu\lambda}\right], \\ \left[\textbf{J}^{\mu\nu},\textbf{J}^{\lambda\sigma}\right] & = i\hbar \left(\eta^{\nu\lambda}J^{\mu\sigma} - \eta^{\nu\sigma}J^{\mu\lambda} - \eta^{\mu\lambda}J^{\nu\sigma} + \eta^{\mu\sigma}J^{\nu\lambda}\right) \nonumber \\ & \hskip4mm + i\hbar q \left(1+ \frac{4x^{0}}{R}\right) \left(x^{\mu}x^{\sigma}F^{\nu\lambda} - x^{\mu}x^{\lambda}F^{\nu\sigma} - x^{\nu}x^{\sigma}F^{\mu\lambda} \right. \nonumber \\ & \hskip4mm \left. + x^{\nu}x^{\lambda}F^{\mu\sigma}\right) + \left[J^{\mu\nu}, M^{\lambda\sigma}\right] + \left[M^{\mu\nu}, J^{\lambda\sigma}\right] + \left[M^{\mu\nu}, M^{\lambda\sigma}\right]. 
\end{align} In order to restore the $R$-algebra constituted by (6), (7) and (8), represented at first order in the absence of the electromagnetic field by (63), (64) and (65), we will impose: \begin{align} & [x^{\mu}, M^{\nu\lambda}] = 0, \\ & [\dot{x}^{\mu}, M^{\nu\lambda}] -\frac{i\hbar q}{m}\left(1+\frac{2x^{0}}{R}\right) \left(x^{\nu}F^{\mu\lambda}-x^{\lambda}F^{\mu\nu}\right) = 0, \\ & i\hbar \left(-\eta^{\nu\lambda}M^{\mu\sigma} + \eta^{\nu\sigma}M^{\mu\lambda} + \eta^{\mu\lambda}M^{\nu\sigma} - \eta^{\mu\sigma}M^{\nu\lambda}\right) \nonumber \\ & \hskip2mm + i\hbar q \left(1+ \frac{4x^{0}}{R}\right) \left(x^{\mu}x^{\sigma}F^{\nu\lambda} - x^{\mu}x^{\lambda}F^{\nu\sigma} - x^{\nu}x^{\sigma}F^{\mu\lambda} + x^{\nu}x^{\lambda}F^{\mu\sigma}\right) \hskip1mm \nonumber \\ & \hskip34mm + \left[J^{\mu\nu}, M^{\lambda\sigma}\right] + \left[M^{\mu\nu}, J^{\lambda\sigma}\right] + \left[M^{\mu\nu}, M^{\lambda\sigma}\right] = 0. \end{align} This last relation is obtained from (69) by substituting the components $J^{\mu\nu}$ by $\textbf{J}^{\mu\nu} - M^{\mu\nu}$ and requiring (8) to hold. Condition (70) implies that $M^{\mu\nu}$ does not depend on $\dot{x}$. It follows that the last commutator in (72) vanishes. The other two can be determined by using expression (62) of $J^{\mu\nu}$ \begin{eqnarray} \left[J^{\mu\nu}, M^{\lambda\sigma}\right] + \left[M^{\mu\nu}, J^{\lambda\sigma}\right] = m\left(1+ \frac{2x^{0}}{R} \right) \left\{ x^{\mu}\left[\dot{x}^{\nu} , M^{\lambda\sigma}\right] \right. \hskip19mm&& \nonumber \\ \left. - x^{\nu}\left[\dot{x}^{\mu} , M^{\lambda\sigma}\right] - x^{\lambda}\left[\dot{x}^{\sigma} , M^{\mu\nu}\right] + x^{\sigma}\left[\dot{x}^{\lambda} , M^{\mu\nu}\right] \right\}. 
\end{eqnarray} Condition (71) determines the commutator $[\dot{x}^{\mu}, M^{\nu\lambda}]$ and therefore allows us to write \begin{eqnarray} \left[J^{\mu\nu}, M^{\lambda\sigma}\right] + \left[M^{\mu\nu}, J^{\lambda\sigma}\right] = 2i\hbar q \left(1+ \frac{4x^{0}}{R} \right) \left[ x^{\mu}x^{\lambda}F^{\nu\sigma} \right. \hskip24mm&& \nonumber \\ \left. - x^{\mu}x^{\sigma}F^{\nu\lambda} - x^{\nu}x^{\lambda}F^{\mu\sigma} + x^{\nu}x^{\sigma}F^{\mu\lambda} \right]. \end{eqnarray} Using this result in condition (72), we obtain \begin{eqnarray} \eta^{\nu\lambda}M^{\mu\sigma} - \eta^{\nu\sigma}M^{\mu\lambda} - \eta^{\mu\lambda}M^{\nu\sigma} + \eta^{\mu\sigma}M^{\nu\lambda}\hskip44mm & \nonumber \\ = q \left(1+ \frac{4x^{0}}{R} \right) \left[ x^{\mu}x^{\lambda}F^{\nu\sigma} - x^{\mu}x^{\sigma}F^{\nu\lambda} - x^{\nu}x^{\lambda}F^{\mu\sigma} + x^{\nu}x^{\sigma}F^{\mu\lambda} \right]. \end{eqnarray} Let us consider the spatial components by substituting, in the last relation, the indices $(\mu, \nu, \lambda, \sigma)$ by $(i,j,k,l)$ respectively. Then, by contracting the obtained equation with $\eta_{il}$, we get \begin{equation} M^{jk}=q\left(1+\frac{4x^{0}}{R}\right)(x^{k}x_{l}F^{jl}-x_{l}x^{l}F^{jk}+x^{j}x_{l}F^{lk}), \end{equation} where we have used the fact that $M^{\mu\nu}$ is, from its definition (66), antisymmetric. Let us define \begin{equation} M_{i} \equiv \frac{1}{2} \varepsilon_{ijk} M^{jk} = \frac{1}{2}\varepsilon_{ijk} \ q\left(1+\frac{4x^{0}}{R}\right)(x^{k}x_{l}F^{jl}-x_{l}x^{l}F^{jk}+x^{j}x_{l}F^{lk}), \end{equation} $M_{i}$ is called the magnetic angular momentum \cite{BGM2}. From the second equation in (38), we have \begin{equation} F^{ij} = \varepsilon^{kij} B^{k}. 
\end{equation} The use of this relation in (77) leads to \begin{equation} M^{i}=q\left(1+\frac{4x^{0}}{R}\right)(x^{j}B^{j})x^{i}, \end{equation} which can be written as \begin{equation} \overrightarrow{M}=q\left(1+\frac{4x^{0}}{R}\right)\left(\overrightarrow{r}\cdot\overrightarrow{B}\right)\overrightarrow{r}. \end{equation} On the other hand, as $M^{jk}$ does not depend on $\dot{x}$ and taking into account (27), we have \begin{align} [\dot{x}^{i}, M^{jk}] & = [\dot{x}^{i}, x^{\mu}] \frac{\partial M^{jk}}{\partial x^{\mu}} \nonumber \\ & = \frac{i\hbar}{m}\left(1-\frac{2x^{0}}{R}\right)\frac{\partial M^{jk}}{\partial x_{i}} - \frac{i\hbar}{mR}\frac{\partial M^{jk}}{\partial x_{0}} x^{i}. \end{align} In the case where $\overrightarrow{B}$ does not depend on $x^{0}$, (76) indicates that the zeroth order of $M^{jk}$ does not depend on $x^{0}$. Thus, taking into account condition (71), relation (81) allows us to write, at first order, \begin{equation} \frac{\partial M^{jk}}{\partial x_{i}}=q \left(1+\frac{4x^{0}}{R}\right)(x^{j}F^{ik}-x^{k}F^{ij}). \end{equation} Multiplying this last equation by $\varepsilon^{ljk}$ and using (77) and (78), we obtain \begin{equation} \frac{\partial M^{l}}{\partial x_{i}}= q \left(1+\frac{4x^{0}}{R}\right) \left(B^{l}x^{i} - \overrightarrow{r}\cdot \overrightarrow{B} \delta^{li} \right). \end{equation} With the help of (79), this last relation gives \begin{equation} \frac{\partial B^{j}}{\partial x_{i}}x^{j}x_{l} = B^{i}x_{l} + B_{l}x^{i}. \end{equation} As in \cite{BGM2}, the solution of this equation is \begin{equation} B^{i} = \frac{\mu_{0} g}{4\pi} \frac{x^{i}}{r^{3}}, \end{equation} where $r=\left(x^{j}x^{j}\right)^{1/2}$ and $g$ is an integration constant. It is easy to check that $\overrightarrow{\nabla}\cdot\overrightarrow{B}=\mu_{0}g\delta(\overrightarrow{r})$. This means that $g$ can be identified with the magnetic charge and that equation (85) describes the Dirac magnetic monopole. 
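As an independent numerical sanity check (ours, not part of the derivation), the field (85) is divergence-free away from the origin, as expected for a point monopole; a simple central-difference estimate, with the prefactor $\mu_{0}/4\pi$ absorbed into $g$, illustrates this:

```python
import math

def monopole_B(q, i, g=1.0):
    """Component B^i = g x^i / r^3 of the monopole field (85), prefactor absorbed in g."""
    r = math.sqrt(sum(c * c for c in q))
    return g * q[i] / r ** 3

def div_B(p, h=1e-5):
    """Central-difference estimate of div B at a point p away from the origin."""
    total = 0.0
    for i in range(3):
        qp, qm = list(p), list(p)
        qp[i] += h  # forward-shifted point along axis i
        qm[i] -= h  # backward-shifted point along axis i
        total += (monopole_B(qp, i) - monopole_B(qm, i)) / (2.0 * h)
    return total
```

At any point off the origin the estimate vanishes to numerical precision, consistent with $\overrightarrow{\nabla}\cdot\overrightarrow{B}=\mu_{0}g\delta(\overrightarrow{r})$.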
Substituting this expression of $B^{i}$ in (80), we obtain the magnetic angular momentum \begin{equation} \overrightarrow{M}=\frac{\mu_{0}qg}{4\pi}\left(1+\frac{4x^{0}}{R}\right) \frac{\overrightarrow{r}}{r}. \end{equation} We also observe that in a stationary case, the electric field keeps its usual form without any corrective term. In fact, on the one hand, condition (71) allows us to write \begin{equation} [\dot{x}^{0}, M^{ij}] = \frac{i\hbar q}{m}\left(1+\frac{2x^{0}}{R}\right) \left(x^{i}F^{0j}-x^{j}F^{0i}\right), \end{equation} and on the other hand, from (77) and (86), we obtain \begin{equation} M^{ij}=\varepsilon^{ijk} M^{k}=\frac{\mu_{0}qg}{4\pi}\varepsilon^{ijk} \left(1+\frac{4x^{0}}{R}\right) \frac{x^{k}}{r}, \end{equation} with which we can deduce \begin{equation} [\dot{x}^{0}, M^{ij}] = [\dot{x}^{0}, x^{l}] \frac{\partial M^{ij}}{\partial x^{l}}=0. \end{equation} Relations (87) and (89) indicate that \begin{equation} x^{i}F^{0j}-x^{j}F^{0i} = 0, \end{equation} which leads to the following usual expression for the electric field: \begin{equation} E^{i} = Kq \frac{x^{i}}{r^{3}}. \end{equation} The last observation concerns the component $M^{0i}$. If we substitute in (75) the indices $(\mu, \nu, \lambda, \sigma)$ by $(0,j,i,j)$ respectively, we can show that $M^{0i}=0$. \section{Conclusion} Up to first order in the deformation, we have derived Maxwell's equations in $R$-Minkowski spacetime by using a generalized version of Feynman's method. The electric-magnetic duality symmetry is imposed to fix some arbitrary parameters appearing in the present approach. We have also established the expression of the Lorentz force in this context. The $R$-Lorentz algebra, established in \cite{BF}, is restored in the presence of the electromagnetic field by adding to the angular momentum the electromagnetic field contribution which generates the Dirac magnetic monopole. 
Contrary to the DSR case \cite{H1}, the magnetic angular momentum is affected by the deformation parameter. We would also like to add that, unlike earlier works in which the form (85) of the Dirac magnetic monopole was obtained without imposing time independence despite its similarity with the electrostatic field, in the present paper this condition is necessary to reach expression (85).
\section{Algorithm}\label{sec:algorithm} \subsection{Matrix Estimation \& Hard Singular Value Thresholding}\label{sec:alg_ME_HSVT} Our proposed algorithm to solve the error-in-variable regression problem relies on a ``blackbox'' matrix estimation procedure as an important subroutine, which we define as follows: \begin{definition} \label{def:matrix_estimation} A matrix estimation algorithm, denoted as $\text{ME}: \mathbb{R}^{N \times p} \rightarrow \mathbb{R}^{N \times p}$, takes as input a matrix $\boldsymbol{Z}$, which is a partially observed, noisy version of $\boldsymbol{A}$, and outputs an estimate $\widehat{\boldsymbol{A}}$. \end{definition} \paragraph{HSVT.} For concreteness, we describe one of the most commonly used matrix estimation subroutines, hard singular value thresholding (HSVT). For any $\lambda > 0$, we define the map $\text{HSVT}_{\lambda}: \mathbb{R}^{N \times p} \to \mathbb{R}^{N \times p}$, which shaves off the input matrix's singular values below $\lambda$. Specifically, given $\boldsymbol{B} = \sum_{i=1}^{N \wedge p} \sigma_i x_i y_i^T$, we define \begin{align} \label{eq:prox_matrix} \text{HSVT}_{\lambda}(\boldsymbol{B}) &= \sum_{i = 1}^{N \wedge p} \sigma_i \mathbb{1}(\sigma_i \ge \lambda) x_i y_i^T, \end{align} where $\mathbb{1}$ denotes the indicator function. \paragraph{HSVT with missing data.} Let $\boldsymbol{Z}$ have the following singular value decomposition: \[ \boldsymbol{Z} = \sum_{i=1}^{N \wedge p} s_i u_i v_i^T. \] Let $\widehat{\rho}$ denote the proportion of observed entries in $\boldsymbol{Z}$\footnote{More precisely, we define $\widehat{\rho} = \frac{1}{Np} \sum_{i=1}^{N} \sum_{j=1}^p \mathbb{1}(Z_{ij} \text{ observed}) \vee \frac{1}{Np}$.}. 
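For illustration, the thresholding map \eqref{eq:prox_matrix} admits a minimal numpy sketch (ours, not part of the formal development):

```python
import numpy as np

def hsvt(B, lam):
    """Hard singular value thresholding: keep only singular values >= lam."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    keep = s >= lam  # boolean mask selecting the retained singular triplets
    return (U[:, keep] * s[keep]) @ Vt[keep, :]
```

For a matrix that is approximately rank $r$, choosing $\lambda$ between the $r$-th and $(r{+}1)$-th singular values returns its best rank-$r$ approximation.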
Using an HSVT subroutine, we define the estimator of $\boldsymbol{A}$ as \begin{align}\label{eq:HSVT_missing_data} \widehat{\bA} &= \frac{1}{\widehat{\rho}} \text{HSVT}_{\lambda^*}(\boldsymbol{Z}) = \frac{1}{\widehat{\rho}} \sum_{i=1}^{N \wedge p} s_i \mathbb{1}(s_i \ge \lambda^*) u_i v_i^T. \end{align} \subsection{``Error-in-Variable'' Regression via Matrix Estimation} We can now formally state our ``Matrix Estimation Regression'' method in Algorithm \ref{alg:main_algorithm}. \begin{algorithm} [h] \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$(Y^{\Omega}, \boldsymbol{Z} ) \in \mathbb{R}^{n \times 1} \times \mathbb{R}^{N \times p}$} \Output{$(\widehat{Y}, \widehat{\bA}) \in \mathbb{R}^{N \times 1} \times \mathbb{R}^{N \times p}$} \begin{algorithmic}[1] \item De-noise and impute $\boldsymbol{Z}$ to obtain $\widehat{\bA} = \text{ME}(\boldsymbol{Z})$. \item Let $\widehat{\bA}^{\Omega}$ be the sub-matrix formed from the rows of $\widehat{\bA}$ associated with $Y^{\Omega}$. \item Perform linear regression: $\widehat{\beta} \in \argmin_{\beta} \big\| Y^{\Omega} - \widehat{\bA}^{\Omega} \beta \big\|_2^2$. \item Define $\widehat{Y} = \widehat{\bA} \widehat{\beta}$. \end{algorithmic} \caption{Matrix Estimation Regression}\label{alg:main_algorithm} \end{algorithm} \begin{remark} We let $\widehat{\beta} = \big( \widehat{\bA}^{\Omega} \big)^{\dagger} Y^{\Omega}$. In the classical regime ($n \geq p$), this reduces to the least squares solution. In the high-dimensional setup ($n \ll p$), this yields one solution in the row span of $\widehat{\bA}^{\Omega}$. \end{remark} Our main result (Theorem \ref{thm:mse_train_general}) holds for any matrix estimation algorithm that bounds the max column sum error (MCSE) (refer to Definition \ref{def:MCSE}). However, given that MCSE is a nonstandard error metric for matrix estimation, we instantiate it for HSVT in Theorem \ref{thm:mcse_train_hsvt}.
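As a concrete illustration, Algorithm \ref{alg:main_algorithm} with the HSVT subroutine of \eqref{eq:HSVT_missing_data} can be sketched in a few lines of NumPy. This is a minimal sketch under assumptions of our choosing: missing entries are encoded as \texttt{NaN}, the function names are illustrative, and no claim is made about the choice of the threshold.

```python
import numpy as np

def hsvt(B, lam):
    """Hard singular value thresholding: zero out singular values below lam."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    keep = s >= lam
    return (U[:, keep] * s[keep]) @ Vt[keep]

def me_regression(Y_obs, obs_rows, Z, lam):
    """Matrix Estimation Regression with an HSVT subroutine (illustrative sketch).

    Z        : noisy covariate matrix with missing entries encoded as NaN.
    Y_obs    : responses observed at the row indices obs_rows.
    Returns predictions Y_hat for all N rows and the de-noised estimate A_hat.
    """
    N, p = Z.shape
    mask = ~np.isnan(Z)
    rho_hat = max(mask.mean(), 1.0 / (N * p))   # observed fraction, floored at 1/(Np)
    A_hat = hsvt(np.where(mask, Z, 0.0), lam) / rho_hat  # de-noise, impute, rescale
    beta_hat = np.linalg.pinv(A_hat[obs_rows]) @ Y_obs   # least squares via pseudo-inverse
    return A_hat @ beta_hat, A_hat
```

Using the pseudo-inverse implements the remark above: it gives the least squares solution when $n \ge p$ and a minimum-norm solution in the row span of $\widehat{\bA}^{\Omega}$ when $n \ll p$.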
\section{Proof of Theorem \ref{thm:mcse_train_hsvt}} \label{sec:mse_train_hsvt} First, we show that the four conditioning events hold with high probability by proving Lemmas \ref{lemma:E1}, \ref{lemma:E2}, \ref{lemma:E3}, and \ref{lemma:E4} in Section \ref{sec:conditioning_events}. Then we complete the proof of Theorem \ref{thm:mcse_train_hsvt} in Section \ref{sec:proof_mcse}. \subsection{Completing the Proof of Theorem \ref{thm:mcse_train_hsvt}}\label{sec:proof_mcse} \begin{proof} Let $E \triangleq \mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \cap \mathcal{E}_4$. Recall the definition of $\text{MCSE}_{\Omega}(\widehat{\bA})$ from \eqref{eq:mcse} in Definition \ref{def:MCSE}. Observe that \begin{align} \text{MCSE}_{\Omega}(\widehat{\bA}) &= \mathbb{E} \max_{j \in [p]} \norm{\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}}_2^2 \nonumber\\ &= \mathbb{E} \bigg[ \max_{j \in [p]} \norm{\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}}_2^2 \cdot \mathbb{1}(E)\bigg] + \mathbb{E} \left[ \max_{j \in [p]} \norm{\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}}_2^2 \cdot \mathbb{1}(E^c)\right]. \label{eq:MCSE_decomp} \end{align} In the rest of the proof, we upper bound the two terms in \eqref{eq:MCSE_decomp} separately. \paragraph{Upper bound on the first term in \eqref{eq:MCSE_decomp}.} Recall from \eqref{eq:MCSE_bound} in Theorem \ref{thm:mcse_whp} that conditioned on $E$, \begin{align*} \max_{j \in [p]} \Big\| \widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j} \Big\|_2^2 &\leq \frac{C (K_{\alpha} + \Gamma)^2}{\rho^2} \left( \frac{ \Delta^2 N }{ \rho^2( \tau_{r} - \tau_{r+1} )^2} + r \right) \log^{\frac{2}{\alpha}}(Np) + 2\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2^2, \end{align*} where $C > 0$ is an absolute constant.
Observe that $\Prob{E } \leq 1$ and that \begin{align} \mathbb{E} \bigg[ \max_{j \in [p]} \norm{\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}}_2^2 \cdot \mathbb{1}(E)\bigg] &= \mathbb{E} \bigg[ \max_{j \in [p]} \norm{\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}}_2^2 ~\Big|~ E~ \bigg] \Prob{E } \nonumber\\ &\leq \frac{C (K_{\alpha} + \Gamma)^2}{\rho^2} \left( \frac{ \Delta^2 N }{ \rho^2( \tau_{r} - \tau_{r+1} )^2} + r \right) \log^{\frac{2}{\alpha}}(Np) + 2\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2^2. \label{eqn:term1_upper} \end{align} \paragraph{Upper bound on the second term in \eqref{eq:MCSE_decomp}.} To begin with, we note that for any $j \in [p]$, \[ \norm{\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot,j}}_2 \le \norm{ \widehat{\bA}_{\cdot, j} }_2 + \| \boldsymbol{A}_{\cdot, j} \|_2 \] by the triangle inequality. By the model assumption, the covariates are bounded (Property \ref{prop:bounded_covariates}) and $\norm{ \boldsymbol{A}_{\cdot, j} }_2 \leq \Gamma \sqrt{N}$ for all $j \in [p]$. Recall from \eqref{eq:HSVT_missing_data} and Algorithm \ref{alg:main_algorithm} that for any $j \in [p]$, \[ \widehat{\bA}_{\cdot, j} = \frac{1}{\widehat{\rho}} \text{HSVT}_{\lambda}\big(\boldsymbol{Z} \big)_{\cdot,j} \] for a given threshold $\lambda$. Therefore, \[ \| \widehat{\bA}_{\cdot, j} \|_2 = \frac{1}{\widehat{\rho}} \big\| \text{HSVT}_{\lambda}\big(\boldsymbol{Z} \big)_{\cdot,j} \big\|_2 \stackrel{(a)}{\le} Np \big\| \text{HSVT}_{\lambda}\big(\boldsymbol{Z} \big)_{\cdot,j} \big\|_2 \stackrel{(b)}{\le} Np \| \boldsymbol{Z}_{\cdot,j} \|_2. \] Here, (a) follows from $\widehat{\rho} \geq \frac{1}{Np}$ (see the footnote in Algorithm \ref{alg:main_algorithm}); and (b) follows from Lemma \ref{lemma:HSVT_contraction} -- the $\text{HSVT}$ operator is a contraction on the columns.
Combining the above bounds, we obtain \begin{align} \max_{j \in [p]} \| \widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot,j} \|_2 &\le \max_{j \in [p]} \| \widehat{\bA}_{\cdot, j} \|_2 + \max_{j \in [p]} \, \| \boldsymbol{A}_{\cdot, j} \|_2 \nonumber \\&\le Np ~ \max_{j \in [p]} \| \boldsymbol{Z}_{\cdot,j} \|_2 + \Gamma \sqrt{N} \nonumber \\&\le \big(N^{\frac{3}{2}} p + \sqrt{N} \big)\Gamma + N^{\frac{3}{2}}p \max_{ij} \abs{\eta_{ij}} \nonumber \\&\le 2N^{\frac{3}{2}} p \Big( \Gamma + \max_{ij} \abs{\eta_{ij}} \Big) \label{eq:proof_mcse_1} \end{align} because $\max_{j \in [p]} \| \boldsymbol{Z}_{\cdot,j} \|_2 \leq \sqrt{N} \max_{i,j} \abs{Z_{ij}} \leq \sqrt{N} \max_{i,j} \abs{A_{ij} + \eta_{ij}} \leq \sqrt{N} \big( \Gamma + \max_{i,j} \abs{\eta_{ij}} \big)$. Now we apply the Cauchy-Schwarz inequality to $ \mathbb{E} \big[ \max_{j \in [p]} \|\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}\|_2^2 \cdot \mathbb{1}(E^c)\big]$ to obtain \begin{align} \mathbb{E} \Big[ \max_{j \in [p]} \big\|\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}\big\|_2^2 \cdot \mathbb{1}(E^c)\Big] &\leq \mathbb{E} \Big[ \max_{j \in [p]} \big\|\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}\big\|_2^4\Big]^{\frac{1}{2}} \cdot \mathbb{E} \Big[ \mathbb{1}(E^c)\Big]^{\frac{1}{2}} \nonumber\\ &= \mathbb{E} \Big[ \max_{j \in [p]} \big\|\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}\big\|_2^4\Big]^{\frac{1}{2}} \cdot \Prob{ E^c }^{\frac{1}{2}} \nonumber\\ &\stackrel{(a)}{\leq} 4 N^3 p^2 \mathbb{E} \Big[ \Big( \Gamma + \max_{ij} \abs{\eta_{ij}} \Big)^4 \Big]^{\frac{1}{2}} \cdot \Prob{ E^c }^{\frac{1}{2}} \nonumber\\ &\stackrel{(b)}{\leq} 8\sqrt{2} N^3 p^2 \Big( \Gamma^4 + \mathbb{E} \big[ \max_{ij} \abs{\eta_{ij}}^4 \big] \Big)^{\frac{1}{2}} \cdot \Prob{ E^c }^{\frac{1}{2}} \nonumber\\ &\stackrel{(c)}{\leq} 8\sqrt{2} N^3 p^2 \Big( \Gamma^2 + \mathbb{E} \big[ \max_{ij} \abs{\eta_{ij}}^4 \big]^{\frac{1}{2}} \Big) \cdot \Prob{ E^c }^{\frac{1}{2}}.
\label{eq:proof_mcse_2} \end{align} Here, (a) follows from \eqref{eq:proof_mcse_1}; and (b) follows from Jensen's inequality: \begin{align*} \mathbb{E} \Big[ \Big( \Gamma + \max_{ij} \abs{\eta_{ij}} \Big)^4 \Big] &= \mathbb{E} \bigg[ \Big( \frac{1}{2} \big( 2 \Gamma + 2 \max_{ij} \abs{\eta_{ij}} \big) \Big)^4 \bigg] \leq \mathbb{E} \bigg[ \frac{1}{2} \Big( \big( 2 \Gamma \big)^4 + \big( 2 \max_{ij} \abs{\eta_{ij}} \big)^4 \Big) \bigg]\\ &= 8 \mathbb{E} \Big[ \Gamma^4 + \max_{ij} \abs{\eta_{ij}}^4 \Big] = 8 \Big( \Gamma^4 + \mathbb{E} [ \max_{ij} \abs{\eta_{ij}}^4 ] \Big) ; \end{align*} and (c) follows from the trivial inequality: $\sqrt{A + B} \leq \sqrt{A} + \sqrt{B}$ for any $A, B \geq 0$. Now it remains to find an upper bound for $\mathbb{E} \big[ \max_{ij} \abs{\eta_{ij}}^4 \big]$. Note that for any $\alpha >0$ and $\theta \geq 1$, $\eta_{ij}$ being a $\psi_{\alpha}$-random variable implies that $\big| \eta_{ij}\big|^{\theta}$ is a $\psi_{\alpha/\theta}$-random variable. With the choice of $\theta =4 $, we have that \begin{align} \mathbb{E} \max_{ij} \abs{\eta_{ij}}^4 &\le C' K_\alpha^4 \log^{\frac{4}{\alpha}}(Np) \label{eq:proof_mcse_3} \end{align} for some $C' > 0$ by Lemma \ref{lemma:max_subg} (also see Remark \ref{rem:max_psialpha}). Inserting \eqref{eq:proof_mcse_3} into \eqref{eq:proof_mcse_2} yields \begin{align} \mathbb{E} \Big[ \max_{j \in [p]} \big\|\widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j}\big\|_2^2 \cdot \mathbb{1}(E^c)\Big] &\leq 8\sqrt{2} N^3 p^2 \Big( \Gamma^2 + {C'}^{1/2} K_\alpha^2 \log^{\frac{2}{\alpha}}(Np) \Big) \cdot \Prob{ E^c }^{\frac{1}{2}} \nonumber\\ &\stackrel{(a)}{\leq } 32 \Big( \Gamma^2 + {C'}^{1/2} K_\alpha^2 \log^{\frac{2}{\alpha}}(Np) \Big) \frac{1}{N p^{3/2}}.
\label{eqn:term2_upper} \end{align} Here, (a) follows from the following observation: \begin{align} \label{eq:prob_e_complement} \Prob{ E^c } \leq \Prob{\mathcal{E}_1^c} + \Prob{\mathcal{E}_2^c} + \Prob{\mathcal{E}_3^c} + \Prob{\mathcal{E}_4^c} \leq \frac{8}{N^8p^7}, \end{align} which can be obtained by applying the union bound to the upper bounds stated in Remarks \ref{rem:E1}, \ref{rem:E2}, and Lemmas \ref{lemma:E3}, \ref{lemma:E4}. \paragraph{Concluding the Proof.} Thus, combining \eqref{eqn:term1_upper} and \eqref{eqn:term2_upper} in \eqref{eq:MCSE_decomp} gives the desired bound: \begin{align} \text{MCSE}_{\Omega}(\widehat{\bA}) &\le \delta(N,p) + \frac{C}{N p^{3/2}} \Big(\Gamma^2 + K^2_\alpha \log^{\frac{2}{\alpha}}(Np) \Big). \end{align} Note that we let $C = 32 \cdot \big( 1 \vee {C'}^{1/2} \big)$, which is independent of $N$ and $p$. \end{proof} \section{Proof of Corollary \ref{cor:mse_train_hsvt}, Propositions \ref{prop:low_rank_finite_sample} and \ref{prop:geo_decay_finite_sample}}\label{sec:proof_case_studies} \subsection{Proof of Corollary \ref{cor:mse_train_hsvt}} \begin{proof} Recall from Theorem \ref{thm:mcse_train_hsvt} that for \[ \delta(N, p) := \frac{C_1 (K_{\alpha} + \Gamma)^2}{\rho^2} \left( \frac{ \Delta^2 N }{ \rho^2( \tau_{r} - \tau_{r+1} )^2} + r \right) \log^{\frac{2}{\alpha}}(Np) + 2\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2^2, \] the MSE is bounded as follows: \begin{align*} \text{MSE}_{\Omega}(\widehat{Y}) \le \frac{1}{n} \left( \norm{\beta^*}_1^2 \left[\delta(N,p) + \frac{C_2 }{N p^{3/2}} \Big(\Gamma^2 + K^2_\alpha \log^{\frac{2}{\alpha}}(Np) \Big) \right] + 2 \sigma^2 r \right), \end{align*} where $C_1, C_2 \ge 0$ are absolute constants.
\\ \noindent Under the assumption that $n = \Theta(N)$, collecting all terms associated with $K_\alpha, \Gamma, \gamma, \norm{\beta^*}_1, \sigma$ and bounding them by $\text{poly} ( K_\alpha, \Gamma, \gamma, \norm{\beta^*}_1, \sigma )$ as defined in the statement of Corollary \ref{cor:mse_train_hsvt}, the bound above simplifies to \begin{align*} \text{MSE}_{\Omega}(\widehat{Y}) &\le \frac{\text{poly} ( K_\alpha, \Gamma, \gamma, \norm{\beta^*}_1, \sigma )}{n} \cdot \left( \frac{n \Delta^2}{ \rho^4 (\tau_r - \tau_{r+1})^2} + \frac{r}{\rho^2} + E^2 + \frac{1}{N p^{3/2}} + r \right) \cdot \log^{\frac{2}{\alpha}}(np) \\ &\le \frac{\text{poly} ( K_\alpha, \Gamma, \gamma, \norm{\beta^*}_1, \sigma)}{n} \left( \frac{n \Delta^2}{ \rho^4 (\tau_r - \tau_{r+1})^2} + E^2 + \frac{r}{\rho^2} \right) \cdot \log^{\frac{2}{\alpha}}(np). \end{align*} Noting that $\Delta^2 \le \text{poly} ( K_\alpha, \Gamma, \gamma ) \cdot ( n \rho^2 + p) \cdot \log^3(np)$ and simplifying the above terms completes the proof. \end{proof} \subsection{Proof of Proposition \ref{prop:low_rank_finite_sample}} \begin{proof} We begin by setting the bound in Corollary \ref{cor:mse_train_hsvt} to be less than $\delta$ for any $\delta > 0$, which gives the following: \begin{align} n &\ge \frac{C}{\delta} \cdot \max\left \{\frac{r}{\rho^2}, E^2 \right \} \cdot \log^5(p) \label{eq:n_condition} \\ \delta &\ge \frac{C (n \rho^2 + p)}{\rho^4 (\tau_{r} - \tau_{r+1})^2} \cdot \log^5(np), \label{eq:delta_condition} \end{align} where $E = \max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2$. From Condition (1), we have that $\tau_{r+1} = 0$, rendering $E = 0$. Using Conditions (1)-(4), it follows that $\tau_r = (\frac{C_1 n p}{r})^{1/2}$, where $C_1$ is an absolute constant. Thus, we have for some $C_2 > 0$, \begin{align*} \frac{C_2 (n \rho^2 + p)}{\rho^4 (\tau_{r} - \tau_{r+1})^2} &= \frac{C_1}{\rho^4} \cdot \frac{r (n \rho^2 + p)}{np} = C_1 r \cdot \Big( \frac{1}{\rho^2 p} + \frac{1}{\rho^4 n} \Big).
\end{align*} Substituting this $C_2$ for $C$ in \eqref{eq:delta_condition} yields $\delta \ge C_1 r \cdot \left( \frac{1}{\rho^2 p} + \frac{1}{\rho^4 n} \right) \cdot \log^5(np)$. Solving this inequality for $n$ and $p$, we obtain \begin{align*} n &\ge \frac{C r }{\delta \rho^4} \log^5(p) \quad \text{ and } \quad p \ge \frac{C r }{\delta \rho^2} \log^5(n). \end{align*} Comparing the above inequalities with \eqref{eq:n_condition} (recalling that $E = 0$), we observe that $n \ge \frac{C r }{\delta \rho^4} \log^5(p)$ is the most restrictive condition, and this observation completes the proof. \end{proof} \subsection{Proof of Proposition \ref{prop:geo_decay_finite_sample}} \begin{proof} Given Condition (5) of Proposition \ref{prop:geo_decay_finite_sample} and Corollary \ref{cor:mse_train_hsvt}, it suffices to show that each of the following terms decays to $0$ in order to establish Proposition \ref{prop:geo_decay_finite_sample}: (i) $\frac{r}{n \rho^2} \log^5(np)$; (ii) $\frac{\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2^2}{n} \log^5(np)$; and (iii) $\frac{1}{n}\frac{(n\rho)^2 + np} {\rho^4 (\tau_{r} - \tau_{r+1})^2} \log^5(np)$. Throughout these derivations, we will abuse notation wherein the value of $C$ changes line-by-line, but remains a universal constant. \paragraph{First term.} Recall that the matrix $\boldsymbol{A}$ of interest is only approximately low-rank, so $r$ does not stand for $\rank(\boldsymbol{A})$. Rather, $r$ is a tunable parameter, which is determined according to the level of descriptive power the user wants. It is easy to observe that $\frac{r}{n \rho^2} \log^5(np) = o(1)$ if and only if \begin{equation}\label{eq:r_upper.1} n \gg \frac{r}{\rho^2}\log^5(np) \qquad\text{or equivalently,}\qquad r \ll \big( n \rho^2 \log^{-5}(np) \big).
\end{equation} \paragraph{Second term.} Turning our attention to the second term, we have for any $j \in [p]$, \begin{align*} \frac{1}{n}\norm{\boldsymbol{E}_{\cdot, j}}^2 &= \frac{1}{n} \bigg\| \bigg(\sum^N_{i = r + 1} \tau_i \mu_i \nu^T_i \bigg) e_j \bigg\|^2 = \frac{1}{n} \bigg\|\sum^N_{i = r + 1} \tau_i \mu_i (\nu^T_i e_j) \bigg\|^2 \\ &\stackrel{(a)}= \frac{1}{n} \sum^N_{i = r + 1} \tau_i^2 (\nu^T_i e_j)^2 \\ &\stackrel{(b)}\le \frac{1}{n} \sum^N_{i = r + 1} \tau^2_1 \theta^{2(i -1)} (\nu^T_i e_j)^2 \\ &\stackrel{(c)}\le \frac{C_1 Np}{n} \sum^N_{i = r + 1} \theta^{2(i -1)} (\nu^T_i e_j)^2 \\ &\stackrel{(d)}\le \frac{C_1 Np}{np} \sum^N_{i = r + 1} \theta^{2(i -1)} \\ &\stackrel{(e)}\le C \theta^{2r}. \end{align*} Here, (a) follows from the orthonormality of the (left) singular vectors; (b) follows from Condition (2); (c) follows from Condition (1); (d) follows from Condition (4); (e) follows from Condition (3) and an application of the geometric series. Observe that $C\theta^{2r} \log^5(np) = O \big( \frac{1}{\mathsf{polylog} ~p} \big)$ if and only if \begin{equation}\label{eq:r_lower} r \geq C'\frac{ \log \log (np) }{\log\big( \frac{1}{\theta} \big)} \end{equation} for a sufficiently large $C' > 0$. \paragraph{Third term.} Observing the third term, we see that \begin{align} \frac{1}{n}\frac{(n\rho)^2 + np}{\rho^4 (\tau_{r} - \tau_{r+1})^2} \log^5(np) &\stackrel{(a)}= \frac{n\rho^2 + p}{ C_1 Np \theta^{2(r-1)} \rho^4 (1 - \theta)^2} \log^5(np) \nonumber\\ &\stackrel{(b)}= C \bigg( \frac{1}{\theta^{2(r-1)} p \rho^2 }\log^5(np) + \frac{1}{ \theta^{2(r-1)} n \rho^4 } \log^5(np) \bigg) \label{eqn:terms_geometric} \end{align} where (a) follows from Condition (2); and (b) follows from Condition (3). Next, we investigate under what condition the two terms in \eqref{eqn:terms_geometric} vanish as $n \to \infty$.
First, we can verify that $\frac{1}{\theta^{2(r-1)} p \rho^2 }\log^5(np) \ll 1$ if and only if \begin{equation}\label{eq:r_upper.2} \rho \gg \frac{ C }{ \theta^{r-1} \sqrt{p}}\log^{\frac{5}{2}}(np) \qquad\text{or equivalently,}\qquad r \ll \frac{ C }{\log\big( \frac{1}{\theta}\big)} \log \bigg( \frac{p \rho^2}{\log(np)} \bigg). \end{equation} Second, we can see in the same manner that $\frac{1}{ \theta^{2(r-1)} n \rho^4 } \log^5(np) = o(1)$ if and only if \begin{equation}\label{eq:r_upper.3} n \gg \frac{ C }{ \theta^{2(r-1)} \rho^4} \log^5(np) \qquad\text{or equivalently,}\qquad r \ll \frac{ C }{\log\big( \frac{1}{\theta}\big)} \log \bigg( \frac{n \rho^4}{\log(np)} \bigg). \end{equation} \paragraph{Concluding the proof.} Note that the condition stated in \eqref{eq:r_upper.1} is much less restrictive than that imposed by \eqref{eq:r_upper.3}. Therefore, the first term, i.e., $\frac{r}{n \rho^2} \log^5(np)$, vanishes as long as the third term vanishes. Assume that $n \ll p$ and \begin{equation}\label{eqn:r_choice} r = C_2\frac{ \log \log (p) }{\log \big( \frac{1}{\theta} \big)} \end{equation} for a sufficiently large $C_2 > 0$. Then the second term, $\frac{\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2^2}{n} \log^5(np)$, is $O(1/\mathsf{polylog} ~ p)$. Lastly, if \[ \rho \gg \frac{ C }{ \theta^{r-1} \sqrt{p}}\log^{\frac{5}{2}}(p) \qquad\text{and}\qquad n \gg C \frac{ 1 }{ \theta^{2(r-1)} \rho^4} \log^5(p), \] then the third term also becomes $o(1)$. Realizing that \[ \theta^{-(r-1)} = e^{(r-1) \log\big( \frac{1}{\theta} \big)} \leq \big( \log(p) \big)^{C_2} \] completes the proof. \end{proof} \section{Proof of Theorem \ref{thm:mse_test_hsvt}} \label{sec:mse_test_hsvt} We first prove a useful connection relating low-rank covariate matrices with sparse regression vectors. This proposition will be useful in proving Lemma \ref{lemma:rademacher_complexity_equality}.
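Before turning to the formal statement, we note that this rank-sparsity connection is easy to check numerically. The following NumPy sketch uses a random instance of our own choosing (it is illustrative only, not part of the formal development): it builds a rank-$r$ matrix, generates measurements from a dense coefficient vector, and recovers an $r$-sparse vector that reproduces the same measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, r = 20, 10, 3
# Rank-r matrix X: its first r columns are linearly independent (almost surely).
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))
v = rng.normal(size=p)        # arbitrary dense coefficient vector
M = X @ v                     # vector of linear measurements

# Construct an r-sparse v_star supported on the first r columns:
# M lies in the column span of X, which equals the span of X[:, :r].
coef, _, _, _ = np.linalg.lstsq(X[:, :r], M, rcond=None)
v_star = np.zeros(p)
v_star[:r] = coef

assert np.allclose(X @ v_star, M)     # same measurements
assert np.count_nonzero(v_star) <= r  # at most r nonzeros
```

This mirrors the proof: solving a least squares problem on the $r$ linearly independent columns plays the role of collapsing the coefficients $c_i(j)$ onto the first $r$ coordinates.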
\begin{proposition} \label{prop:low_rank_sparsity} Let $\boldsymbol{X}$ be any real-valued $n \times p$ matrix. Let $M$ denote an $n$-dimensional vector of linear measurements, i.e., $M = \boldsymbol{X} v$ where $v \in \mathbb{R}^p$. If $\emph{rank}(\boldsymbol{X}) = r$, then there exists a $v^* \in \mathbb{R}^p$ such that $M = \boldsymbol{X} v^*$ and $\norm{v^*}_0 \le r$. \end{proposition} \begin{proof} Without loss of generality, assume that $\{\boldsymbol{X}_{\cdot, 1}, \dots, \boldsymbol{X}_{\cdot, r}\}$ form a collection of $r$ linearly independent vectors. Then, for any $i \in \{r+1, \dots, p\}$, there exists some $c(i) \in \mathbb{R}^r$ such that \begin{align} \label{eq:low_rank} \boldsymbol{X}_{\cdot, i} &= \sum_{k=1}^r c_k(i) \boldsymbol{X}_{\cdot, k}. \end{align} We know $M = \boldsymbol{X} v = \sum_{i=1}^p v_i \boldsymbol{X}_{\cdot, i}$. It suffices to show that there exists an $r$-sparse vector $v^*$ such that $M = \sum_{i=1}^r v^*_i \boldsymbol{X}_{\cdot, i}$. Using \eqref{eq:low_rank}, we have \begin{align*} M &= \sum_{i=1}^p v_i \boldsymbol{X}_{\cdot, i} \\ &= \sum_{i=1}^r v_i \boldsymbol{X}_{\cdot, i} + \sum_{j=r+1}^p v_j \boldsymbol{X}_{\cdot, j} \\ &= \sum_{i=1}^r v_i \boldsymbol{X}_{\cdot, i} + \sum_{j=r+1}^p v_j \Big( \sum_{i=1}^r c_i(j) \boldsymbol{X}_{\cdot, i} \Big) \\ &= \sum_{i=1}^r v_i \boldsymbol{X}_{\cdot, i} + \sum_{i=1}^r \boldsymbol{X}_{\cdot, i} \Big( \sum_{j=r+1}^p c_i(j) v_j \Big) \\ &= \sum_{i=1}^r \Big(v_i + \sum_{j=r+1}^p c_i(j) v_j \Big) \boldsymbol{X}_{\cdot, i} . \end{align*} Defining $v^*_i = v_i + \sum_{j=r+1}^p c_i(j) v_j$ completes the proof. \end{proof} \begin{lemma} \label{lemma:mcdiarmid_condition} Let $\Omega = \{i_1, \dots, i_n\}$ and $\Omega' = \{i_1, \dots, i'_j, \dots, i_n\}$ such that $\Omega$ and $\Omega'$ differ only in their $j$-th elements. Let \begin{align} \label{eq:generalization_error} \phi(\Omega) &= \sup_{\beta \in \mathcal{F}} \Big( \mathcal{E}(\beta) - \widehat{\cE}_{\Omega}(\beta) \Big).
\end{align} For any $\beta \in \mathcal{F}$, suppose $\max_{i \in [N]} \ell(S_i; \beta) \in [0, c]$ for some $c > 0$. Then, \begin{align*} \abs{ \phi(\Omega) - \phi(\Omega') } &\le \frac{c}{n}. \end{align*} \end{lemma} \begin{proof} Here, we will show that \[ \phi(\Omega) = \sup_{\beta \in \mathcal{F}} \left(\mathcal{E}(\beta) - \widehat{\cE}_{\Omega}(\beta) \right) \] satisfies the conditions necessary to invoke McDiarmid's Inequality. We begin by noting that for any real-valued functions $f_1, f_2$, $\sup_x f_1(x) - \sup_x f_2(x) \le \sup_x (f_1(x) - f_2(x))$. Hence, \begin{align} \phi(\Omega) - \phi(\Omega') &= \sup_{\beta \in \mathcal{F}} \Big(\mathcal{E}(\beta) - \widehat{\cE}_{\Omega}(\beta) \Big) - \sup_{\beta \in \mathcal{F}} \Big(\mathcal{E}(\beta) - \widehat{\cE}_{\Omega'} (\beta) \Big) \nonumber \\ &\le \sup_{\beta \in \mathcal{F}} \left( \mathcal{E}(\beta) - \widehat{\cE}_{\Omega}(\beta) - \mathcal{E}(\beta) + \widehat{\cE}_{\Omega'} (\beta) \right) \nonumber \\ &= \sup_{\beta \in \mathcal{F}} \left( \widehat{\cE}_{\Omega'} (\beta) - \widehat{\cE}_{\Omega}(\beta) \right) \nonumber \\ &= \sup_{\beta \in \mathcal{F}} \frac{1}{n} \left( \ell(S_{i'_j}; \beta) - \ell(S_{i_j}; \beta) \right) \le \frac{c}{n}, \label{eq:sup_upper_bound} \end{align} where the final equality holds because $\Omega$ and $\Omega'$ differ by only one element; and the final inequality holds because $\ell(\cdot; \beta) \in [0,c]$ for any $\beta \in \mathcal{F}$. Using a similar argument, we can prove that $\phi(\Omega') - \phi(\Omega) \le c/n$, and therefore $\abs{\phi(\Omega) - \phi(\Omega')} \le c/n$. \end{proof} \subsection{Proof of Lemma \ref{lemma:E5}} \begin{proof} Using the arguments that led to \eqref{eq:max_error_bound.term1}, we have that \begin{align*} \widehat{\bA}_{i, \cdot} \beta &= \widehat{\bA}_{i, \cdot} \beta_r = \sum_{j \in I_{\beta_r}} \widehat{\bA}_{ij} \cdot (\beta_r)_j \le r B \max_{ j \in I_{\beta_r}} | \widehat{A}_{ij} |.
\end{align*} Applying Lemma \ref{lemma:HSVT_contraction} and conditioning on the event $\mathcal{E}_2$, we claim (for some $C > 0$) \begin{align*} \max_{j \in I_{\beta_r}} | \widehat{A}_{ij} | &\le \frac{C}{\rho} \Big( \Gamma + \max_{ j \in I_{\beta_r}} \abs{\eta_{ij}} \Big), \end{align*} where $\rho$ satisfies Condition 3 of Theorem \ref{thm:mcse_train_hsvt}. Recall that Property \ref{prop:covariate_noise_structure} states that for all $(i,j)$, $\eta_{ij}$ is a $\psi_\alpha$-random variable; hence, for any $t \ge 0$, \begin{align*} \mathbb{P} \left\{ \abs{\eta_{ij}} \ge t \right\} &\le 2 \exp(- t^{\alpha} / K_\alpha^{\alpha}). \end{align*} Taking the maximum over all $i \in [N]$ and $j \in I_{\beta_r}$, and applying a union bound, we obtain \begin{align*} \mathbb{P} \left\{ \max_{i \in [N], j \in I_{\beta_r}} \abs{\eta_{ij}} \ge t \right\} &\le 2 r N \exp(- t^{\alpha} / K_\alpha^{\alpha}). \end{align*} Let $t = K_\alpha \log^{\frac{1}{\alpha}}(r (Np)^9)$. Then for any $\beta \in \mathcal{F}$, we have with probability at least $1 - \frac{2 }{(Np)^8}$, \begin{align*} \max_{i \in [N]} \widehat{\bA}_{i, \cdot} \beta &\le \frac{C r B}{\rho} \left( \Gamma + 9 K_\alpha \log^{\frac{1}{\alpha}}(rNp) \right). \end{align*} Moreover, by Property \ref{prop:bounded_covariates} and H\"older's inequality, we have \begin{align*} \max_{i \in [N]}\abs{\boldsymbol{A}_{i, \cdot} \beta^*} &\le \max_{i \in [N]} \norm{\boldsymbol{A}_{i, \cdot}}_{\infty} \, \norm{\beta^*}_1 \le \Gamma \norm{\beta^*}_1. \end{align*} Observing that $( \widehat{\bA}_{i, \cdot} \beta - \boldsymbol{A}_{i, \cdot} \beta^*)^2 \le 2(\widehat{\bA}_{i, \cdot} \beta)^2 + 2(\boldsymbol{A}_{i, \cdot} \beta^*)^2$ and combining the above results (and taking the maximum over all $i \in [N]$) completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:E6}} \begin{proof} Under the event $\mathcal{E}_5$, we know that $\ell(\cdot; \beta) \in [0, c(N,p)]$ for all $\beta \in \mathcal{F}$.
Lemma \ref{lemma:mcdiarmid_condition} then allows us to apply McDiarmid's Inequality (Lemma \ref{lem:mcdiarmid}), which gives \begin{align*} \mathbb{P} \{ \phi(\Omega) - \mathbb{E}_{\Omega} \phi(\Omega) \ge t_1 \} &\le \exp(-t_1^2 n /c(N,p)). \end{align*} Setting $t_1 = \sqrt{ 8 \, c(N,p) \log(Np) / n}$ completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:rademacher_bound}} \begin{proof} Let $\Omega' = \{i'_1, \dots, i'_n\}$ be a ``ghost sample'', i.e., $\Omega'$ is an independent set of $n$ locations sampled uniformly at random and without replacement from $[N]$. Moreover, let $S' = \{(\widehat{\bA}_{i'_k}, \boldsymbol{A}_{i'_k} \beta^*)\}_{k=1:n}$. Observe that $\mathcal{E}(\beta) = \mathbb{E}_{\Omega'} [\widehat{\cE}_{\Omega'} (\beta) ]$ and $\widehat{\cE}_{\Omega}(\beta) = \mathbb{E}_{\Omega'} [\widehat{\cE}_{\Omega}(\beta) ]$. Thus, \begin{align*} \mathbb{E}_{\Omega} \phi(\Omega) &= \mathbb{E}_{\Omega} \left[ \sup_{\beta \in \mathcal{F}} \left(\mathcal{E}(\beta) - \widehat{\cE}_{\Omega}(\beta) \right) \right] \\ &= \mathbb{E}_{\Omega} \left[ \sup_{\beta \in \mathcal{F}} \left( \mathbb{E}_{\Omega'} \left[ \widehat{\cE}_{\Omega'} (\beta) -\widehat{\cE}_{\Omega}(\beta) \right] \right) \right] \\ &\le \mathbb{E}_{\Omega, \Omega'} \left[ \sup_{\beta \in \mathcal{F}} \left( \widehat{\cE}_{\Omega'} (\beta) -\widehat{\cE}_{\Omega}(\beta) \right) \right] \\ &= \mathbb{E}_{\Omega, \Omega'} \left[ \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n \left( \ell(S'_{i_k}; \beta) - \ell(S_{i_k}; \beta) \right) \right], \end{align*} where the inequality follows by the convexity of the supremum function and Jensen's Inequality. To proceed, we will use the ghost sampling technique. Recall that the entries of $\Omega$ and $\Omega'$ were drawn uniformly at random from $[N]$. As a result, $\ell(S'_{i_k}; \beta) - \ell(S_{i_k}; \beta)$ and $\ell(S_{i_k}; \beta) - \ell(S'_{i_k}; \beta)$ have the same distribution. 
Further, introducing i.i.d. Rademacher random variables $\sigma_k$, which take the values $1$ and $-1$ with equal probability, we have \[ \mathbb{E}_{\Omega, \Omega'} \left[ \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n \left( \ell(S'_{i_k}; \beta) - \ell(S_{i_k}; \beta) \right) \right] = \mathbb{E}_{\sigma, \Omega, \Omega'} \left[ \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n \sigma_k \left( \ell(S'_{i_k}; \beta) - \ell(S_{i_k}; \beta) \right) \right]. \] Combining the above relation with the fact that the supremum of a sum is bounded above by the sum of suprema, we obtain \begin{align*} \mathbb{E}_{\Omega} \phi(\Omega) &\le \mathbb{E}_{\sigma, \Omega, \Omega'} \left[ \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n \sigma_k \left( \ell(S'_{i_k}; \beta) - \ell(S_{i_k}; \beta) \right) \right] \\ &\le \mathbb{E}_{\sigma, \Omega, \Omega'} \left[ \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n \sigma_k \ell(S'_{i_k}; \beta) + \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n -\sigma_k \ell(S_{i_k}; \beta) \right] \\ &= \mathbb{E}_{\sigma, \Omega} \left[ \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n \sigma_k \ell(S_{i_k}; \beta) \right] + \mathbb{E}_{\sigma, \Omega'} \left[ \sup_{\beta \in \mathcal{F}} \frac{1}{n} \sum_{k=1}^n \sigma_k \ell(S'_{i_k}; \beta) \right] \\&= 2 \cdot R_n(\ell(\cdot; \beta) \circ \mathcal{F}), \end{align*} where the second-to-last equality holds because $\sigma_k$ is a symmetric random variable. \end{proof} \subsection{Proof of Lemma \ref{lemma:rademacher_complexity_equality}} \begin{proof} We begin by restating similar arguments made in the derivation of \eqref{eq:max_error_bound.term1}, and recalling the definition of $\mathcal{F}_r$ given by \eqref{eq:linear_family_sparse}. Given that $\widehat{\bA}$ has rank $r$, any sub-matrix $\widehat{\bA}^{\Omega}$ formed from the collection of $n$ arbitrary rows of $\widehat{\bA}$ must also have rank at most $r$.
By Proposition \ref{prop:low_rank_sparsity}, for any hypothesis $\beta \in \mathcal{F}$ and $\widehat{\bA}^{\Omega}$, there exists an $r$-sparse vector $\beta_r$ such that $\widehat{\bA}^{\Omega} \beta = \widehat{\bA}^{\Omega} \beta_r$. Therefore, \begin{align*} R_n(\ell(\cdot; \beta) \circ \mathcal{F}) &= \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}} \Bigg( \frac{1}{n} \sum_{k=1}^n \sigma_k \ell(S_{i_k}; \beta) \Bigg) \Bigg] \\ &= \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}} \Bigg( \frac{1}{n} \sum_{k=1}^n \sigma_k (\widehat{\bA}_{i_k, \cdot} \beta - \boldsymbol{A}_{i_k, \cdot} \beta^*)^2 \Bigg) \Bigg] \\ &\stackrel{(a)}= \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta_r \in \mathcal{F}_r} \Bigg( \frac{1}{n} \sum_{k=1}^n \sigma_k (\widehat{\bA}_{i_k, \cdot} \beta_r - \boldsymbol{A}_{i_k, \cdot} \beta^*)^2 \Bigg) \Bigg] \\ &= R_n(\ell(\cdot; \beta) \circ \mathcal{F}_r). \end{align*} We highlight again that (a) follows from Proposition \ref{prop:low_rank_sparsity} since $\ell(\cdot; \beta)$ depends only on the product value, say $\widehat{\bA}_{i_k, \cdot} \beta$, as opposed to the hypothesis $\beta$ in isolation. Hence, the equality in (a) holds since it suffices to consider the space of $r$-sparse $p$-dimensional vectors $\mathcal{F}_r$ as opposed to the entire $p$-dimensional Euclidean space $\mathcal{F}$. \end{proof} \subsection{Proof of Lemma \ref{lemma:rademacher_complexity_linear_functions_sparse}} \begin{proof} Let $I_{\beta} = \{i \in [p]: \beta_i \neq 0\}$ denote the index set for the nonzero elements of $\beta \in \mathbb{R}^p$. For any vector $v \in \mathbb{R}^p$, we write $v_{I_{\beta}}$ for the vector that retains the values of $v$ on $I_{\beta}$ and takes the value $0$ otherwise.
Then, \begin{align*} R_n(\mathcal{F}_{r}) &= \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}_r} \Bigg( \frac{1}{n} \sum_{i=1}^n \sigma_i \langle \alpha_i , \beta \rangle \Bigg) \Bigg] \\ &= \frac{1}{n} \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}_r} \Bigg( \langle \sum_{i=1}^n \sigma_i \alpha_i, \beta \rangle \Bigg) \Bigg] \\ &= \frac{1}{n} \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}_r} \Bigg( \sum_{j \in I_{\beta}} \beta_j \Big( \sum_{i=1}^n \sigma_i \alpha_i \Big)_j \Bigg) \Bigg] \\ &\stackrel{(a)} \le \frac{1}{n} \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}_r} \Big\| \beta_{I_{\beta}} \Big\|_2 \cdot \Big\| \Big(\sum_{i=1}^n \sigma_i \alpha_i \Big)_{I_{\beta}} \Big\|_2 \Bigg] \\ &\stackrel{(b)} \le \frac{\sqrt{r} B}{n} \mathbb{E}_{\sigma, \Omega} \Bigg[ \Big\| \Big(\sum_{i=1}^n \sigma_i \alpha_i \Big)_{I_{\beta}} \Big\|_2 \Bigg] \\ &\stackrel{(c)} \le \frac{\sqrt{r} B}{n} \left( \mathbb{E}_{\sigma, \Omega} \Bigg[ \Big(\sum_{i=1}^n \sigma_i \alpha_i \Big)_{I_{\beta}}^T \Big(\sum_{k=1}^n \sigma_k \alpha_k \Big)_{I_{\beta}} \Bigg] \right)^{1/2} \\ &= \frac{\sqrt{r} B}{n} \left( \mathbb{E}_{\Omega} \Bigg[ \sum_{i=1}^n \Big\| (\alpha_i )_{I_{\beta}} \Big\|_2^2 \Bigg] \right)^{1/2} \\ &\le \frac{\sqrt{r} B}{n} \left(n r \max_{i \in [n]} \norm{(\alpha_i )_{I_{\beta}}}_{\infty}^2 \right)^{1/2} \\ &= \frac{r B}{\sqrt{n}} \max_{i \in [n]} \norm{ \alpha_i }_{\infty}. \end{align*} Note that (a) makes use of the Cauchy-Schwarz Inequality, (b) follows from the boundedness assumption in the Lemma statement, and (c) applies Jensen's Inequality. \end{proof} \section{Main Results}\label{sec:results} We state our main results and discuss consequences for noisy and missing data. The proof of Theorem \ref{thm:mse_train_general} is presented in Appendix \ref{sec:appendix_noisy_regression_via_MCSE}. Proofs of the remaining theorems can be found in Section \ref{sec:proof_sketch}.
\subsection{Prediction Error Bounds for General Matrix Estimation Methods} \label{sec:results_general} We present Theorem \ref{thm:mse_train_general}, which bounds the training MSE (refer to \eqref{eq:mse_train}) for a general matrix estimation algorithm. It is important to note that this quantity is also key in bounding the testing MSE (refer to \eqref{eq:mse_test}), as seen in Theorem \ref{thm:mse_test_hsvt}. As a tool for analysis, we introduce an auxiliary notion of error, the so-called max column sum error (MCSE). \begin{definition}\label{def:MCSE} For an estimator $\widehat{\bA} \in \mathbb{R}^{N \times p}$ of $\boldsymbol{A}$ and a set $\Omega \subset [N]$, we define the max column sum error (\emph{MCSE}) of $\widehat{\bA}$ over $\Omega$ as \begin{align} \label{eq:mcse} \emph{MCSE}_{\Omega}(\widehat{\bA}) &= \mathbb{E} \left[ \max_{j \in [p]} \sum_{i \in \Omega} (\widehat{A}_{ij} - A_{ij})^2 \right]. \end{align} \end{definition} It is easily seen that $\textrm{MCSE}$ is a stronger metric than the conventional Frobenius norm bound\footnote{Let $\boldsymbol{X} = [X_{ij}]$ be an $m \times n$ matrix. Let $\widehat{\bX} = [\widehat{X}_{ij}]$ be an estimator of $\boldsymbol{X}$ with $\text{MSE}(\widehat{\bX}) = \mathbb{E}[\frac{1}{mn} \sum_{i=1}^m \sum_{j=1}^n (\widehat{X}_{ij} - X_{ij})^2 ]$ denoting the Frobenius norm error. Then, $\text{MCSE}(\widehat{\bX}) \ge \text{MSE}(\widehat{\bX})$.}; thus, any known Frobenius norm lower bounds immediately hold for the MCSE as well. See Appendix \ref{sec:appendix_noisy_regression_via_MCSE} for details on the MCSE metric. \\ \noindent The following theorem provides a general upper bound on the training MSE of our estimate $\widehat{Y} = \widehat{\bA} \widehat{\beta}$ of the underlying signal $\mathbb{E} Y = \boldsymbol{A} \beta^*$ in terms of the model parameter $\|\beta^*\|_1$, the variance of the response noise $\sigma$, and properties of the covariate estimate $\widehat{\bA}$.
\begin{theorem}[Training MSE for general ME methods] \label{thm:mse_train_general} Let $\Omega = \{ i \in [N] : Y_i \text{ is observed} \}$ and $\widehat{Y}$ be the estimator of $Y$ produced by a general ``matrix estimation regression'' method described in Algorithm \ref{alg:main_algorithm}. Assume Property \ref{prop:observation_noise_structure} holds. Then, the training prediction error of our algorithm satisfies \begin{align} \label{eq:mse_upper_generic} \emph{MSE}_{\Omega}(\widehat{Y}) &\le \frac{1}{|\Omega|} \left( \norm{\beta^*}_1^2 \cdot \emph{MCSE}_{\Omega}(\widehat{\bA}) + 2 \sigma^2 \mathbb{E} [\emph{rank}(\widehat{\bA}^{\Omega})] \right). \end{align} \end{theorem} The proof of Theorem \ref{thm:mse_train_general} is deferred until Appendix \ref{sec:appendix_noisy_regression_via_MCSE}. \subsection{Prediction Error Bounds for HSVT} \label{sec:results_hvst} Here, we instantiate our matrix estimation subroutine with HSVT and provide non-asymptotic upper bounds on both the training and testing MSE. Specifically, we upper bound $\text{MCSE}_{\Omega}(\widehat{\bA})$ for HSVT in Theorem \ref{thm:mcse_train_hsvt}; combining this with Theorem \ref{thm:mse_train_general} yields an upper bound on the training error, cf. Corollary \ref{cor:mse_hsvt}. We also analyze the generalization error to provide an upper bound on the testing MSE, which can be found in Theorem \ref{thm:mse_test_hsvt}. \subsubsection{Training Error} If we apply HSVT in the de-noising procedure of Algorithm \ref{alg:main_algorithm}, then we obtain an explicit upper bound on the MCSE, as stated in Theorem \ref{thm:mcse_train_hsvt}. In order to bound the MCSE of HSVT, we first describe the role of the thresholding hyper-parameter $\lambda^*$. Let \begin{equation}\label{eqn:svd_A} \boldsymbol{A} = \sum_{i=1}^{N \wedge p} \tau_i \mu_i \nu_i^T \end{equation} be the singular value decomposition of $\boldsymbol{A}$, with its singular values arranged in descending order.
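In words, the threshold $\lambda^*$ splits this SVD into the singular directions that are kept and those that are discarded. A minimal numpy sketch of the split and of the resulting rank (illustrative sizes; a Gaussian matrix stands in for $\boldsymbol{A}$, and the threshold is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 40, 25
A = rng.normal(size=(N, p))
U, tau, Vt = np.linalg.svd(A, full_matrices=False)  # tau is descending

lam = np.median(tau)                              # illustrative threshold
keep = tau >= lam
A_star = (U[:, keep] * tau[keep]) @ Vt[keep]      # kept part of the SVD
E = (U[:, ~keep] * tau[~keep]) @ Vt[~keep]        # discarded (residual) part
r = int(keep.sum())                               # number of retained directions

assert np.allclose(A, A_star + E)                 # exact decomposition
assert np.linalg.matrix_rank(A_star) == r
```

The two reconstructed matrices play the roles of the principal and residual parts of the covariate matrix in the analysis that follows.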
We reserve $\tau_i$ to denote the $i$-th singular value of $\boldsymbol{A}$ throughout this exposition. We may partition the principal components of $\boldsymbol{A}$ at the threshold $\lambda^*$ as \begin{align}\label{eq:SVD_A_matrix} \boldsymbol{A}^*(\lambda^*) = \sum_{i=1}^{N \wedge p} \tau_i \Ind{ \tau_i \geq \lambda^*} \mu_i \nu_i^T \quad\text{and}\quad \boldsymbol{E}(\lambda^*) = \sum_{i =1}^{N \wedge p} \tau_i \Ind{ \tau_i < \lambda^*} \mu_i \nu_i^T. \end{align} Then, $\boldsymbol{A} = \boldsymbol{A}^*(\lambda^*) + \boldsymbol{E}(\lambda^*)$. We define $r(\lambda^*, \boldsymbol{A}) = \text{rank}(\boldsymbol{A}^*(\lambda^*))$, i.e., \begin{equation}\label{eqn:r_lambda} r(\lambda^*, \boldsymbol{A}) := \max\left\{ i \in [N \wedge p]: \tau_i \geq \lambda^* \right\}. \end{equation} Before we present our results, we define a quantity to which we will refer multiple times in our theorem statements: \begin{align} \label{eq:Delta} \Delta &= \sqrt{N \rho}\sqrt{ \rho \gamma^2 + (1-\rho) \Gamma^2 } + 2 C(\alpha) \sqrt{p} (K_{\alpha} + \Gamma) \Big( 1 + 9 \log(Np) \Big)^{\frac{1}{\alpha}} \sqrt{ \log(Np) }, \end{align} where $C(\alpha) > 0$ is an absolute constant that depends only on $\alpha$. \begin{theorem}[Main Theorem 1: MCSE upper bound for HSVT]\label{thm:mcse_train_hsvt} Given $\boldsymbol{Z} \in \mathbb{R}^{N \times p}$ and $\lambda^* \geq 0$, let $\widehat{\bA} = \frac{1}{\widehat{\rho}}\emph{HSVT}_{\lambda^*}(\boldsymbol{Z})$ and $\widehat{Y} = \widehat{\bA} \widehat{\beta}$. Let $r = r(\lambda^*, \boldsymbol{A})$ as defined above.
Suppose the following conditions hold: \begin{enumerate} \item Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure} for some $\alpha \ge 1$, \ref{prop:covariate_noise_variance}, \ref{prop:masking_noise_structure}, and \ref{prop:observation_noise_structure} \item $\lambda^*$ satisfies $\rho \tau_{r+1} + \Delta < \lambda^* < \rho \tau_{r} - \Delta$ \item $\rho \geq \frac{64 \log(Np)}{Np}$. \end{enumerate} We define \begin{align*} \delta(N, p) &:= \frac{C_1 (K_{\alpha} + \Gamma)^2}{\rho^2} \left( \frac{ \Delta^2 N }{ \rho^2( \tau_{r} - \tau_{r+1} )^2} + r \right) \log^{\frac{2}{\alpha}}(Np) + 2\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2^2, \end{align*} where $C_1$ is an absolute constant. Then there exists an absolute constant $C_2 > 0$ such that \begin{align}\label{eq:MCSE_train_bound} \emph{MCSE}_{\Omega}(\widehat{\bA}) &\le \delta(N,p) + \frac{C_2}{N p^{3/2}} \Big(\Gamma^2 + K^2_\alpha \log^{\frac{2}{\alpha}}(Np) \Big). \end{align} \end{theorem} Theorem \ref{thm:mcse_train_hsvt} follows as an immediate consequence of Theorem \ref{thm:mcse_whp}. The full details of the proof (including the technical Lemmas used in the proof and their proofs) can be found in Appendix \ref{sec:mse_train_hsvt}. \begin{corollary} [Training MSE for HSVT] \label{cor:mse_hsvt} Suppose the conditions of Theorem \ref{thm:mcse_train_hsvt} hold. Then for some $C_2 > 0$ (the same constant as in Theorem \ref{thm:mcse_train_hsvt}), \begin{align} \label{eq:mcse_train_hsvt_bound} \emph{MSE}_{\Omega}(\widehat{Y}) &\le \frac{1}{n} \left( \norm{\beta^*}_1^2 \left[ \delta(N,p) + \frac{C_2}{N p^{3/2}} \Big(\Gamma^2 + K^2_\alpha \log^{\frac{2}{\alpha}}(Np) \Big) \right] + 2 \sigma^2 r \right). \end{align} \end{corollary} \begin{proof} The result follows immediately from plugging \eqref{eq:MCSE_train_bound} of Theorem \ref{thm:mcse_train_hsvt} into \eqref{eq:mse_upper_generic} of Theorem \ref{thm:mse_train_general}.
\end{proof} \subsubsection{Interpretation of Training Error Results} We now provide an interpretation of Corollary \ref{cor:mse_hsvt} with two exemplar scenarios. For that purpose, we present Corollary \ref{cor:mse_train_hsvt}, a simplified version of Corollary \ref{cor:mse_hsvt} that succinctly conveys the essence of the resulting training error bound. The proofs of Corollary \ref{cor:mse_train_hsvt} and Propositions \ref{prop:low_rank_finite_sample} and \ref{prop:geo_decay_finite_sample} can be found in Appendix \ref{sec:proof_case_studies}. \begin{corollary}[Simplified Version of Corollary \ref{cor:mse_hsvt}] \label{cor:mse_train_hsvt} Let the conditions of Theorem \ref{thm:mcse_train_hsvt} hold. Suppose $n = \Theta(N)$. Let $E = \max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2$ and $C = \emph{poly}(K_\alpha, \Gamma, \gamma, \sigma, \norm{\beta^*}_1)$. Then, \begin{align} \label{eq:mse_train_hsvt_simple} \emph{MSE}_{\Omega}(\widehat{Y}) &\le \frac{C}{n} \left( \frac{r}{\rho^{2}} + E^2 + \frac{ (n\rho)^2 + np}{\rho^4 (\tau_r - \tau_{r+1})^2} \right) \cdot \log^5(np). \end{align} \end{corollary} \paragraph{Case 1: Low Rank, Evenly Spaced Singular Values.} Here, we assume the underlying covariate matrix $\boldsymbol{A}$ is low-rank and its signal is spread evenly amongst its nonzero singular values $\tau_i$. \begin{proposition} \label{prop:low_rank_finite_sample} Let the conditions of Theorem \ref{thm:mcse_train_hsvt} hold. Suppose: (1) $\emph{rank}(\boldsymbol{A}) = r$; (2) $\tau_r = \Theta(\tau_1)$; (3) $\norm{\boldsymbol{A}}_F = C_1 \sqrt{N p}$ where $C_1 > 0$; (4) $n = \Theta(N)$. Let $C = \emph{poly}(K_\alpha, \Gamma, \gamma, \sigma, \norm{\beta^*}_1)$. If \begin{align} \label{eq:low_rank_finite_sample} \rho \gg C \cdot \sqrt{r} \cdot \frac{\log^5(n)}{\sqrt{p}} \quad \text{and} \quad n & \gg C \cdot \frac{r}{\rho^{4}} \cdot \log^5(p), \end{align} then $\emph{MSE}_{\Omega}(\widehat{Y}) \rightarrow 0$ as $p \rightarrow \infty$.
\end{proposition} \paragraph{Case 2: Geometrically Decaying Singular Values.} We now let $\boldsymbol{A}$ be an approximately low-rank matrix with geometrically decaying singular values. Let $e_j \in \mathbb{R}^p$ denote the $j$-th canonical basis vector. Recall that $\mu_i, \nu_i$ are the left and right singular vectors of $\boldsymbol{A}$, respectively, for $i \in [N \wedge p]$. \begin{proposition} \label{prop:geo_decay_finite_sample} Let the conditions of Theorem \ref{thm:mcse_train_hsvt} hold. Suppose: (1) $\tau_1 = C_1 \sqrt{Np}$ for some $C_1 > 0$; (2) $\tau_k = \tau_1 \theta^{k - 1}$ for $k \in [N]$ where $\theta \in (0, 1)$; (3) $n = \Theta(N)$; (4) $\nu_i^T e_j = O(p^{-1/2})$ for all $i \in \{r+1, \dots, N\}$ and all $j \in [p]$; (5) $n \ll p$; and (6) $r = C_2\frac{ \log \log (p) }{\log \left( 1/\theta \right)}$ for a sufficiently large $C_2 > 0$. Let $C, C' = \emph{poly}(K_\alpha, \Gamma, \gamma, \sigma, \norm{\beta^*}_1)$. If \begin{equation}\label{eq:geo_finite_sample} \rho \gg C \cdot \frac{ 1 }{ \sqrt{p}} \cdot \log^{\frac{5}{2} + C_2}(p) \quad\text{and}\quad n \gg C' \cdot \frac{ 1 }{ \rho^4} \cdot \log^{5 + C_2}(p), \end{equation} then $\emph{MSE}_{\Omega}(\widehat{Y}) \rightarrow 0$ as $p \rightarrow \infty$. \end{proposition} \begin{remark} We provide justification for Condition (4) in Proposition \ref{prop:geo_decay_finite_sample}. From the proof of the proposition, we see that $\frac{1}{n}\| \boldsymbol{E}_{\cdot, j} \|^2 = \frac{1}{n} \sum^N_{i=r+1} \tau_i^2 (\nu_i^T e_j)^2$. Suppose that at least one $\nu_i$ for $i \in \{r+1, \dots, N\}$ is aligned with $e_j$, a canonical basis vector in $\mathbb{R}^p$, for some $j \in [p]$, i.e., $(\nu_i^T e_j)^2 = \Theta(1)$. Then, it is easily seen that $\frac{1}{n}\| \boldsymbol{E}_{\cdot, j} \|^2$ scales as $p \theta^{2r}$. Hence, a certain structural assumption is needed to avoid this predicament. Assuming an ``incoherence''-type structural assumption, as is common in the literature (cf.
\cite{candes2007sparsity}), can help us achieve $\frac{1}{n}\| \boldsymbol{E}_{\cdot, j} \|^2 = o \big( p \big)$ for all $j$. Condition (4) is exactly such an assumption, where only the mass of the \textbf{residual right singular vectors} associated with the \textbf{geometrically decaying tail} singular values (i.e., $\tau_i$ for $i \in \{r+1, \dots, N\}$) needs to be evenly distributed amongst their entries. Note that we do not impose any structural assumptions on the descriptive singular vectors (i.e., $\nu_i$ for $i \leq r$). \end{remark} \subsubsection{Testing Error} We now proceed to demonstrate how instantiating our meta-algorithm with HSVT (Algorithm \ref{alg:main_algorithm}) affects our ability to learn and generalize. Let $\Omega = \{i_1, \dots, i_n\}$ be a set of $n$ locations chosen uniformly at random and without replacement from $[N]$\footnote{In most setups, generalization error bounds are stated for samples drawn i.i.d. We instead sample our locations uniformly at random and without replacement; however, as argued in \cite{barak2016noisy}, the difference between the two sampling models is negligible.}. As previously stated, our goal is to show that our hypothesis (defined by $\widehat{\beta}$) is close to the unknown expected response value associated with all data points in $\boldsymbol{A}$. We sketch the proof of Theorem \ref{thm:mse_test_hsvt} in Section \ref{sec:proofs_testing_error}. Full details are found in Appendix \ref{sec:mse_test_hsvt}. \begin{theorem} [Main Theorem 2: Testing MSE for HSVT] \label{thm:mse_test_hsvt} Given $\boldsymbol{Z} \in \mathbb{R}^{N \times p}$ and $\lambda^* \geq 0$, let $\widehat{\bA} = \frac{1}{\widehat{\rho}} \emph{HSVT}_{\lambda^*}(\boldsymbol{Z})$ and $\widehat{Y} = \widehat{\bA} \widehat{\beta}$. Let $r = r(\lambda^*, \boldsymbol{A})$ as defined above.
Suppose the following conditions hold: \begin{enumerate} \item Conditions of Theorem \ref{thm:mcse_train_hsvt} \item There exists a constant $B > 0$ such that for any hypothesis $\beta \in \mathbb{R}^p$ (including $\widehat{\beta}$), $\norm{\beta}_{\infty} \le B$. \end{enumerate} Let $C = \emph{poly}(K_{\alpha}, \Gamma, B, \norm{\beta^*}_1)$. Then, \begin{align} \label{eq:mse_test_hsvt} \emph{MSE}(\widehat{Y}) &\le \emph{MSE}_{\Omega}(\widehat{Y}) + \frac{C r^2}{\rho^2} \cdot \frac{ \log^2(rNp)}{ \sqrt{n}}. \end{align} Here, $\emph{MSE}_{\Omega}(\widehat{Y})$ is bounded as in \eqref{eq:mcse_train_hsvt_bound}\footnote{More precisely, the bounds are equivalent up to constant factors, i.e., only the value of $C_2$ in \eqref{eq:mcse_train_hsvt_bound} changes.}. \end{theorem} \subsection{Technical Results (of Independent Interest)} \label{sec:results_technical} In this subsection, we present two important technical contributions, Theorems \ref{thm:spectral_norm_noise_matrix_bound} and \ref{thm:mcse_whp}, both of which are utilized in the derivation of Theorem \ref{thm:mcse_train_hsvt}. These results can also be lifted and applied in more general settings and, thus, could be of interest in their own right. \subsubsection{Spectral Norm Bounds for Random Matrices} \begin{theorem}[Main Technical Result 1]\label{thm:spectral_norm_noise_matrix_bound} Suppose Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure} for some $\alpha \ge 1$, \ref{prop:covariate_noise_variance}, and \ref{prop:masking_noise_structure} hold. Then for any $\delta_1 > 0$, \begin{align} \norm{\boldsymbol{Z} - \rho \boldsymbol{A}} &\leq \sqrt{N\rho}\sqrt{ \rho \gamma^2 + (1-\rho) \Gamma^2 } \nonumber \\ &\quad + C(\alpha) \sqrt{1+\delta_1}\sqrt{p} (K_{\alpha} + \Gamma) \Big( 1 + \big(2 + \delta_1 \big) \log(Np) \Big)^{\frac{1}{\alpha}} \sqrt{ \log(Np) } \end{align} with probability at least $ 1 - \frac{2}{N^{1 + \delta_1} p^{\delta_1}}$.
Here, $C(\alpha)$ is an absolute constant that depends only on $\alpha$. \end{theorem} We sketch the proof of Theorem \ref{thm:spectral_norm_noise_matrix_bound} in Section \ref{sec:proof_sketch_spectral_norm_noise_matrix_bound}. Full details are found in Appendix \ref{sec:appendix_spectral_norm_noise_matrix_bound}. \subsubsection{High-probability Max Column $\ell_2$-norm Error Bound via HSVT} \begin{theorem}[Main Technical Result 2] \label{thm:mcse_whp} Given $\boldsymbol{Z} \in \mathbb{R}^{N \times p}$ and $\lambda^* \geq 0$, let $\widehat{\bA} = \frac{1}{\widehat{\rho}}\emph{HSVT}_{\lambda^*}(\boldsymbol{Z})$. Let $r = r(\lambda^*, \boldsymbol{A})$ as defined above. Suppose the following conditions hold: \begin{enumerate} \item Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure} for some $\alpha \ge 1$, \ref{prop:covariate_noise_variance}, and \ref{prop:masking_noise_structure} \item $\lambda^*$ satisfies $\rho \tau_{r+1} + \Delta < \lambda^* < \rho \tau_{r} - \Delta$ \item $\rho \geq \frac{64 \log(Np)}{Np}$. \end{enumerate} Then, with probability at least $1 - \frac{8}{N^8p^7}$, \begin{align}\label{eq:MCSE_bound} \max_{j \in [p]} \Big\| \widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j} \Big\|_2^2 &\leq \frac{C (K_{\alpha} + \Gamma)^2}{\rho^2} \left( \frac{ \Delta^2 N }{ \rho^2( \tau_{r} - \tau_{r+1} )^2} + r \right) \log^{\frac{2}{\alpha}}(Np) + 2\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2^2. \end{align} Here, $C > 0$ is an absolute constant. \end{theorem} We sketch the proof of Theorem \ref{thm:mcse_whp} in Section \ref{sec:proof_sketch_mcse_whp}. Full details are found in Appendix \ref{sec:appendix_noisy_regression_via_MCSE}. \subsection{Useful Comparisons}\label{ssec:compare} Here, we compare Propositions \ref{prop:low_rank_finite_sample} and \ref{prop:geo_decay_finite_sample} against a few well-known results in the high-dimensional error-in-variables regression literature.
In order to facilitate a proper comparison, we first state the model assumptions made in \cite{loh_wainwright}. \cite{cocolasso} operates in a similar setting and builds upon the work of \cite{loh_wainwright} (see Section \ref{sec:lit_review} for more details). As previously mentioned, \cite{loh_wainwright} assumes $\beta^*$ is $r$-sparse and the covariates $\boldsymbol{A}$ are corrupted such that only $\boldsymbol{Z}$ is observed. However, they assume that $\boldsymbol{Z}$ is generated by either an additive {\bf or} a multiplicative (with missing data as a special instance) noise model; it is important to note that they do not consider both noise models simultaneously. In the former, they assume that $\boldsymbol{Z} = \boldsymbol{A} + \boldsymbol{H}$, where (1) $\boldsymbol{A}$ and $\boldsymbol{H}$ are random matrices with i.i.d. mean-zero sub-gaussian rows ($\alpha = 2$) satisfying $\norm{\boldsymbol{A}_{i,\cdot}}_{\psi_2} \le K_2(\boldsymbol{A})$ and $\norm{\boldsymbol{H}_{i,\cdot}}_{\psi_2} \le K_2(\boldsymbol{H})$, and (2) the noise covariance matrix $\Sigma_{\boldsymbol{H}} = \mathbb{E} \boldsymbol{H}^T \boldsymbol{H}$ is known\footnote{Although the authors of \cite{loh_wainwright} argue that $\Sigma_{\boldsymbol{H}}$ can be estimated from data, it is unclear how to obtain a data-driven estimate of $\Sigma_{\boldsymbol{H}}$ when only one data set is readily available. However, if multiple replicates of the data are accessible, then a data-driven estimation procedure is achievable.}. Under these assumptions, consistent $\ell_2$ estimation of $\beta^*$ is achieved if \begin{align} \label{eq:loh_wainwright_n_additive} n \ge C \max \left \{ \frac{K^2_2(\boldsymbol{A}) + K^2_2(\boldsymbol{H})}{\tau_{\min}^2}, 1 \right\} r \log(p), \end{align} where $\tau_\min$ denotes the minimum singular value of $\boldsymbol{A}$. In the setting of multiplicative i.i.d.
Bernoulli noise (randomly missing data), the authors make the same assumptions on $\boldsymbol{A}$, rendering $\boldsymbol{Z}$ the missing data matrix. Consistent estimation of $\beta^*$ is achieved if (for some $C > 0$) \begin{align} \label{eq:loh_wainwright_n_missing_data} n \ge C \max \left \{ \frac{K^4_2(\boldsymbol{A})}{\rho^4 \tau_{\min}^2 }, 1 \right\} r \log(p). \end{align} Again, we highlight that their algorithm changes based on the noise model assumption, i.e., they design different plug-in estimators for different scenarios. \cite{cocolasso} proposes a convex formulation of Lasso to handle measurement errors in the covariates $\boldsymbol{A}$ (assumed to be deterministic). Note that this setup differs from \cite{loh_wainwright}, which utilizes a non-convex $\ell_1$-penalization and assumes the rows of $\boldsymbol{A}$ are drawn i.i.d. from some fixed distribution. However, both works design algorithms that depend on plug-in estimators tailor-made for different settings; in fact, the authors of \cite{cocolasso} use the estimators proposed in \cite{loh_wainwright}. Moreover, the same assumptions in the additive and i.i.d. missing data models are made. Consequently, both works achieve comparable statistical error bounds. Although our aim is to minimize the prediction error, we can compare our sample complexity results in Propositions \ref{prop:low_rank_finite_sample} and \ref{prop:geo_decay_finite_sample} against \cite{loh_wainwright} (since similar bounds are derived in other works, cf. \cite{cocolasso}, \cite{tsybakov_2}). With respect to the model setup, we (1) make no assumptions on $\beta^*$; (2) allow $\boldsymbol{H}$ to be a random matrix with independent $\psi_\alpha$-rows (this includes sub-exponential noise); (3) allow $\boldsymbol{Z}$ to be simultaneously corrupted by noise and missing data.
Algorithmically, we (1) do not require knowledge of $\| \beta^* \|_2$ or $\Sigma_{\boldsymbol{H}}$; (2) do not change our estimation procedure based on the model assumptions. For a fairer comparison, however, one can imagine that the rows of our covariate matrix $\boldsymbol{A}$ were sampled i.i.d. from an isotropic distribution, similar to that described in \cite{loh_wainwright}. The conditions of Proposition \ref{prop:low_rank_finite_sample} then apply, and its sample complexity matches \eqref{eq:loh_wainwright_n_additive} and \eqref{eq:loh_wainwright_n_missing_data} up to $\log^4(p)$ factors, as seen by \eqref{eq:low_rank_finite_sample}. It is worth noting that we have an identical dependence on $\rho$. Another difference to highlight is the dependence on $\tau^{-2}_\min$ in \eqref{eq:loh_wainwright_n_additive} and \eqref{eq:loh_wainwright_n_missing_data}. This leads to a weak result for the setup in Proposition \ref{prop:geo_decay_finite_sample} with geometrically decaying singular values, since $\tau_\min$ gets arbitrarily small as $N$ and $p$ grow. However, as \eqref{eq:geo_finite_sample} of Proposition \ref{prop:geo_decay_finite_sample} indicates, our algorithm and associated analysis do not suffer in this setup with regard to sample complexity. Specifically, by applying HSVT, our bound in Corollary \ref{cor:mse_train_hsvt} demonstrates how the choice of $\lambda^*$ leads to a precise tradeoff between the signal captured and the model misspecification, as seen through $\max_{j \in [p]} \norm{ \boldsymbol{E}_{\cdot, j} }_2$ and $(\tau_r - \tau_{r+1})^{-2}$. In short, despite less restrictive assumptions and our algorithm having minimal knowledge of the underlying model, we achieve comparable sample complexity bounds and guarantee prediction consistency (both in- and out-of-sample). While we do not learn $\beta^*$ specifically, our analysis allows us to accurately predict response values associated with \textit{noisy, missing} covariates outside the sample set $\Omega$.
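The tradeoff just described rests on a spectral separation between signal and noise: for a rank-$r$ signal with Frobenius norm $\Theta(\sqrt{Np})$, the retained singular values scale like $\sqrt{Np/r}$, while the spectrum of an independent-entry noise matrix concentrates at scale $\sqrt{N} + \sqrt{p}$, so a threshold between the two exists. A quick numerical illustration (Gaussian noise and illustrative sizes, standing in for the general $\psi_\alpha$ setting):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, r = 400, 200, 3

# rank-r signal with Frobenius norm Theta(sqrt(N p))
A = rng.normal(size=(N, r)) @ rng.normal(size=(r, p)) / np.sqrt(r)
H = rng.normal(size=(N, p))   # additive noise matrix

s_signal = np.linalg.svd(A, compute_uv=False)
s_noise = np.linalg.svd(H, compute_uv=False)

# signal singular values ~ sqrt(Np/r) dominate the noise edge ~ sqrt(N)+sqrt(p)
assert s_signal[r - 1] > s_noise[0]               # a valid threshold exists
assert s_noise[0] < 1.5 * (np.sqrt(N) + np.sqrt(p))
```

This gap is what the condition $\rho \tau_{r+1} + \Delta < \lambda^* < \rho \tau_r - \Delta$ formalizes in our theorems, with $\Delta$ playing the role of the noise spectral edge.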
\section{Proof Sketches} \label{sec:proof_sketch} In this section, we provide proof sketches for our main theorems in this work\footnote{We do not sketch the proof of the first main theorem, Theorem \ref{thm:mcse_train_hsvt}, because it follows as a direct consequence of Theorem \ref{thm:mcse_whp}. However, its proof can be found in Appendix \ref{sec:mse_train_hsvt}.}, i.e., Theorems \ref{thm:mse_test_hsvt}, \ref{thm:spectral_norm_noise_matrix_bound} and \ref{thm:mcse_whp}. The order in which we present these sketches has been chosen so as to allow for a sequential reading of the proofs. \subsection{Supporting Lemmas of Theorem \ref{thm:spectral_norm_noise_matrix_bound}: Spectral Norm Bound for Random Matrices} \label{sec:proof_sketch_spectral_norm_noise_matrix_bound} {\bf Outline.} We begin by presenting Proposition \ref{prop:spectral_upper_bound}, which holds for general random matrices $\boldsymbol{W} \in \mathbb{R}^{N \times p}$. We note that this result depends on two quantities: (1) $\norm{ \mathbb{E} \boldsymbol{W}^T \boldsymbol{W}}$ and (2) $\norm{\boldsymbol{W}_{i, \cdot}}_{\psi_\alpha}$ for all $i \in [N]$. We then instantiate $\boldsymbol{W} := \boldsymbol{Z} - \rho \boldsymbol{A}$ and present Lemmas \ref{lemma:masked_noise_operator_norm} and \ref{lemma:masked_noise_row_norm}, which bound (1) and (2), respectively, for our choice of $\boldsymbol{W}$. Theorem \ref{thm:spectral_norm_noise_matrix_bound} follows immediately from the above results (the proofs of which are found in Appendix \ref{sec:appendix_spectral_norm_noise_matrix_bound}). \begin{proposition}\label{prop:spectral_upper_bound} Let $\boldsymbol{W} \in \mathbb{R}^{N \times p}$ be a random matrix whose rows $\boldsymbol{W}_{i, \cdot}$ ($i \in [N]$) are independent $\psi_{\alpha}$-random vectors for some $\alpha \geq 1$.
Then for any $\delta_1 > 0$, \begin{align*} \norm{\boldsymbol{W}} \leq \norm{ \mathbb{E} \boldsymbol{W}^T \boldsymbol{W} }^{1/2} + C(\alpha) \sqrt{(1+\delta_1) p } \max_{i \in [N]}\norm{ \boldsymbol{W}_{i, \cdot} }_{\psi_\alpha} \Big( 1 + \big(2 + \delta_1 \big) \log(Np) \Big)^{\frac{1}{\alpha}} \sqrt{ \log(Np) } \end{align*} with probability at least $ 1 - \frac{2}{N^{1 + \delta_1}p^{\delta_1}}$. Here, $C(\alpha)>0$ is an absolute constant that depends only on $\alpha$. \end{proposition} \begin{lemma} \label{lemma:masked_noise_operator_norm} Assume Property \ref{prop:masking_noise_structure} holds. Then, \begin{align} \norm{\mathbb{E} (\boldsymbol{Z} - \rho \boldsymbol{A})^T(\boldsymbol{Z} - \rho \boldsymbol{A})} &\le \rho(1-\rho) \max_{j \in [p] } \norm{ \boldsymbol{A}_{\cdot, j} }_2^2 + \rho^2 \norm{ \mathbb{E} \boldsymbol{H}^T \boldsymbol{H} }. \end{align} \end{lemma} \begin{lemma} \label{lemma:masked_noise_row_norm} Assume Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure}, and \ref{prop:masking_noise_structure} hold. Then for any $\alpha \geq 1$ with which Property \ref{prop:covariate_noise_structure} holds, we have \begin{align} \norm{ \boldsymbol{Z}_{i,\cdot} - \rho \boldsymbol{A}_{i, \cdot} }_{\psi_{\alpha}} \le C(K_{\alpha} + \Gamma) \qquad\text{for all }~i \in [N], \end{align} where $C > 0$ is an absolute constant. \end{lemma} \subsubsection{Completing the Proof of Theorem \ref{thm:spectral_norm_noise_matrix_bound}} \begin{proof} The proof is complete by plugging the results of Lemmas \ref{lemma:masked_noise_operator_norm} and \ref{lemma:masked_noise_row_norm} into Proposition \ref{prop:spectral_upper_bound} with $\boldsymbol{W} := \boldsymbol{Z} - \rho \boldsymbol{A}$ and applying Properties \ref{prop:bounded_covariates} and \ref{prop:covariate_noise_variance}.
\end{proof} \subsection{Supporting Lemmas of Theorem \ref{thm:mcse_whp}: High-probability Max Column $\ell_2$-Error via HSVT} \label{sec:proof_sketch_mcse_whp} \paragraph{Outline.} We begin by defining a useful operator $\varphi$. Then, we present Lemma \ref{lemma:column_error}, which is a conditional, but deterministic, precursor of Theorem \ref{thm:mcse_whp}. In order to achieve our desired result in Theorem \ref{thm:mcse_whp} from Lemma \ref{lemma:column_error}, we define four conditioning events of relevance and show that these events occur with high probability. Finally, we prove Theorem \ref{thm:mcse_whp} by amalgamating the above results. The proofs of all Lemmas can be found in Appendix \ref{sec:appendix_noisy_regression_via_MCSE}. \paragraph{Notation.} Consider a matrix $\boldsymbol{B} \in \mathbb{R}^{N \times p}$ such that $\boldsymbol{B} = \sum_{i=1}^{N \wedge p} \sigma_i(\boldsymbol{B}) x_i y_i^T$. With a specific choice of $\lambda \geq 0$, we can define a function $\varphi^{\boldsymbol{B}}_{\lambda}: \mathbb{R}^{N} \to \mathbb{R}^{N}$ as follows: for any vector $w \in \mathbb{R}^N$, \begin{align} \label{eq:prox_vector} \varphi^{\boldsymbol{B}}_{\lambda}(w) &= \sum_{i = 1}^{N \wedge p} \mathbb{1}(\sigma_i (\boldsymbol{B}) \ge \lambda) x_i x_i^T w. \end{align} Note that $\varphi^{\boldsymbol{B}}_{\lambda}$ is a linear operator and depends on the tuple $(\boldsymbol{B}, \lambda)$; more precisely, on the singular values and the left singular vectors of $\boldsymbol{B}$, as well as the threshold $\lambda$. If $\lambda = 0$, then we adopt the shorthand notation $\varphi^{\boldsymbol{B}} = \varphi_{0}^{\boldsymbol{B}}$. \begin{lemma} \label{lemma:column_error} Given $\boldsymbol{Z} \in \mathbb{R}^{N \times p}$ and $\lambda^* \geq 0$, let $\widehat{\bA} = \frac{1}{\widehat{\rho}}\emph{HSVT}_{\lambda^*}(\boldsymbol{Z})$, and let $r = r(\lambda^*, \boldsymbol{A})$.
Suppose that \begin{enumerate} \item $\norm{ \boldsymbol{Z} - \rho \boldsymbol{A}} \leq \Delta $ for some $\Delta \geq 0$ \item $ \frac{1}{\varepsilon} \rho \le \widehat{\rho} \le \varepsilon \rho$ for some $\varepsilon \ge 1$ \item $\lambda^*$ satisfies $\rho \tau_{r+1} + \Delta < \lambda^* < \rho \tau_r - \Delta$. \end{enumerate} Then for any $j \in [p]$, \begin{align} \Big\| \widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j} \Big\|_2^2 &\leq \frac{4\varepsilon^2}{\rho^2} \frac{\Delta^2}{ \rho^2( \tau_r - \tau_{r+1})^2} \big\| \boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} \big\|_2^2 \nonumber\\ &\quad + \frac{4\varepsilon^2}{\rho^2} \Big \| \varphi^{\boldsymbol{A}^*}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) \Big \|_2^2 + 2 (\varepsilon-1)^2 \| \boldsymbol{A}_{\cdot, j} \|_2^2 \nonumber \\ &\quad + \frac{2 \Delta^2}{ \rho^2( \tau_r - \tau_{r+1})^2} \norm{ \boldsymbol{A}^*_{\cdot, j} }_2^2 + 2\, \norm{ \boldsymbol{E}_{\cdot, j} }_2^2.
\label{eq:main_MCSE_inequality} \end{align} \end{lemma} \paragraph{High probability events for conditioning.} We define the following four events: \begin{align} \mathcal{E}_1 &:= \bigg\{ \norm{ \boldsymbol{Z} - \rho \boldsymbol{A}} \leq \sqrt{ N \rho } \sqrt{ \rho \gamma^2 + (1-\rho) \Gamma^2 } \nonumber\\ &\qquad\qquad\qquad\quad + 2 C(\alpha) \sqrt{p} (K_{\alpha} + \Gamma) \Big( 1 + 9 \log(Np) \Big)^{\frac{1}{\alpha}} \sqrt{ \log(Np) } \bigg\} \label{eqn:cE_1} \\ \mathcal{E}_2 &:= \Bigg\{ \bigg(1 - \sqrt{\frac{16 \log (Np)}{ Np \rho}}\bigg) \rho \le \widehat{\rho} \le \frac{1}{1 - \sqrt{\frac{16 \log (Np)}{ Np \rho}}} \rho \Bigg\} \label{eqn:cE_2} \\ \mathcal{E}_3 &:= \bigg\{ \max_{j \in [p]} \Big \| \boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} \Big \|_2^2 \leq 9C (K_{\alpha} + \Gamma)^2 N \log^{\frac{2}{\alpha}}(Np)\bigg\} \label{eqn:cE_3} \\ \mathcal{E}_4 &:= \bigg\{ \max_{j \in [p]} \Big \| \varphi^{\boldsymbol{A}^*}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) \Big \|_2^2 \leq 9C (K_{\alpha} + \Gamma)^2 r \log^{\frac{2}{\alpha}}(Np)\bigg\}. \label{eqn:cE_4} \end{align} Here, $C(\alpha)$ is the same absolute constant that appears in Theorem \ref{thm:spectral_norm_noise_matrix_bound}, and $C > 0$ is an absolute constant. The proofs of Lemmas \ref{lemma:E1}, \ref{lemma:E2}, \ref{lemma:E3}, and \ref{lemma:E4} can be found in Appendix \ref{sec:conditioning_events}. \subparagraph{Observation 1: $\mathcal{E}_1$ occurs with high probability.} \begin{lemma}\label{lemma:E1} Suppose that Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure} for $\alpha \ge 1$, \ref{prop:covariate_noise_variance}, and \ref{prop:masking_noise_structure} hold.
Then for any $\delta_1 > 0$, \begin{align*} \norm{\boldsymbol{Z} - \rho \boldsymbol{A}} &\leq \sqrt{N \rho} \sqrt{ \rho \gamma^2 + (1-\rho) \Gamma^2 } \\ &\qquad+ C(\alpha) \sqrt{1+\delta_1}\sqrt{p} (K_{\alpha} + \Gamma) \Big( 1 + \big(2 + \delta_1 \big) \log(Np) \Big)^{\frac{1}{\alpha}} \sqrt{ \log(Np) } \end{align*} with probability at least $ 1 - \frac{2}{N^{1 + \delta_1} p^{\delta_1}}$. Here, $C(\alpha)>0$ is an absolute constant that depends only on $\alpha$. \end{lemma} \begin{remark}\label{rem:E1} By letting $\delta_1 = 7$ in Lemma \ref{lemma:E1}, we have that $\Prob{\mathcal{E}_1^c} \leq \frac{2}{N^8 p^7}$. \end{remark} \subparagraph{Observation 2: $\mathcal{E}_2$ occurs with high probability.} \begin{lemma}\label{lemma:E2} Suppose that Property \ref{prop:masking_noise_structure} holds. Then for any $\varepsilon > 1$, \begin{align*} \Prob{ \frac{1}{\varepsilon} \rho \le \widehat{\rho} \le \varepsilon \rho } \geq 1 - 2 \exp \left( - \frac{(\varepsilon - 1)^2}{2 \varepsilon^2} Np\rho \right). \end{align*} \end{lemma} \begin{remark}\label{rem:E2} Let $\varepsilon = \left(1 - \sqrt{\frac{16 \log (Np)}{ Np \rho}} \right)^{-1} $ in Lemma \ref{lemma:E2}. Then, $\Prob{\mathcal{E}_2^c} \leq \frac{2}{N^8p^8}$. \end{remark} \subparagraph{Observation 3: $\mathcal{E}_3$ and $\mathcal{E}_4$ occur with high probability.} \begin{lemma}\label{lemma:E3} Suppose Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure}, and \ref{prop:masking_noise_structure} hold. Then, \[ \Prob{\mathcal{E}_3^c} \leq \frac{2}{N^8p^8}. \] \end{lemma} \begin{lemma}\label{lemma:E4} Suppose Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure}, and \ref{prop:masking_noise_structure} hold. Then, \[ \Prob{\mathcal{E}_4^c} \leq \frac{2}{N^8p^8}.
\] \end{lemma} \subsubsection{Completing the Proof of Theorem \ref{thm:mcse_whp}}\label{sec:proof_thm_MCSE} \begin{proof} Let $E$ denote the event where the desired inequality, \eqref{eq:MCSE_bound}, holds. Conditioned on $\mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \cap \mathcal{E}_4$---see \eqref{eqn:cE_1}, \eqref{eqn:cE_2}, \eqref{eqn:cE_3}, \eqref{eqn:cE_4}---the desired inequality holds deterministically by Lemma \ref{lemma:column_error} (after simplifying \eqref{eq:main_MCSE_inequality}\footnote{We choose $\varepsilon = \bigg(1 - \sqrt{\frac{16 \log (Np)}{ Np \rho}}\bigg)^{-1}$; see the definition of $\mathcal{E}_2$ in \eqref{eqn:cE_2}. Observe that we can upper bound $\varepsilon \leq 2$ regardless of the problem size $(N, p)$ because we assumed $\rho \geq \frac{64 \log(Np)}{Np}$.} using \eqref{eqn:cE_1}, \eqref{eqn:cE_2}, \eqref{eqn:cE_3}, \eqref{eqn:cE_4}). By the law of total probability and the union bound, we have \begin{align*} \Prob{E^c} &= \Prob{E^c \big| \mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \cap \mathcal{E}_4 } \cdot \Prob{\mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \cap \mathcal{E}_4}\\ &\qquad + \Prob{E^c \big| \mathcal{E}_1^c \cup \mathcal{E}_2^c \cup \mathcal{E}_3^c \cup \mathcal{E}_4^c } \cdot \Prob{\mathcal{E}_1^c \cup \mathcal{E}_2^c \cup \mathcal{E}_3^c \cup \mathcal{E}_4^c}\\ &\leq \Prob{E^c \big| \mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \cap \mathcal{E}_4 } + \Prob{\mathcal{E}_1^c \cup \mathcal{E}_2^c \cup \mathcal{E}_3^c \cup \mathcal{E}_4^c}\\ &\leq \Prob{\mathcal{E}_1^c} + \Prob{\mathcal{E}_2^c} + \Prob{\mathcal{E}_3^c} + \Prob{\mathcal{E}_4^c}. \end{align*} The first conditional probability vanishes by the argument above, and combining Remarks \ref{rem:E1} and \ref{rem:E2} with Lemmas \ref{lemma:E3} and \ref{lemma:E4} gives $\Prob{E^c} \leq \frac{2}{N^8 p^7} + \frac{6}{N^8 p^8} \leq \frac{8}{N^8 p^7}$, as claimed. \end{proof} \subsection{Supporting Lemmas of Theorem \ref{thm:mse_test_hsvt}: Testing MSE Bound for HSVT}\label{sec:proofs_testing_error} The proof of Theorem \ref{thm:mse_test_hsvt} follows a somewhat standard approach to establishing generalization error bounds using Rademacher complexity (cf. \cite{Bartlett_2003} and references therein).
We note two important contributions: (1) relating our notion of generalization error to the standard definitions; (2) arguing that the Rademacher complexity of our matrix estimation regression algorithm (using HSVT) can be identified with the Rademacher complexity of regression with $\ell_0$-regularization. \medskip \noindent{\bf Outline.} Before we prove Theorem \ref{thm:mse_test_hsvt}, we begin by introducing some useful notation. Then, we define two conditioning events of relevance and show that these events occur with high probability. Lemma \ref{lemma:rademacher_bound} then bounds the expected generalization error in terms of the Rademacher complexity of the class of squared loss functions for linear predictors. Next, Lemma \ref{lemma:rademacher_complexity_equality} allows us to equivalently analyze the Rademacher complexity of squared loss functions under $r$-sparse linear predictors, whose upper bound is computed in Lemma \ref{lemma:rademacher_complexity_linear_functions_sparse}. Finally, the proof of Theorem \ref{thm:mse_test_hsvt} combines the above lemmas to achieve a bound on the test error. Note that the proofs of the following lemmas can be found in Appendix \ref{sec:mse_test_hsvt}. \medskip \noindent{\bf Notation. } Let $S = \{(\widehat{\bA}_{i_k, \cdot}, \boldsymbol{A}_{i_k, \cdot} \beta^*)\}_{k=1:n}$ denote the set of $n$ de-noised covariate observations and expected response value tuples. For any hypothesis $\beta \in \mathbb{R}^p$, we denote the squared loss as $\ell(S_{i_k}; \beta) = (\widehat{\bA}_{i_k, \cdot} \beta - \boldsymbol{A}_{i_k, \cdot} \beta^*)^2$. For any sample set $\Omega = \{i_1, \dots, i_n\} \subset [N]$, we define the training (empirical) error under the hypothesis $\beta$ as \begin{align} \label{eq:train_error} \widehat{\cE}_{\Omega}(\beta) &= \sum_{k=1}^n \Big( \widehat{\bA}_{i_k, \cdot} \beta - \boldsymbol{A}_{i_k, \cdot} \beta^* \Big)^2 = \sum_{k=1}^n \ell(S_{i_k}; \beta).
\end{align} Similarly, we define the test error \begin{align} \label{eq:test_error} \mathcal{E}(\beta) &= \frac{1}{N} \sum_{i=1}^N \Big( \widehat{\bA}_{i, \cdot} \beta - \boldsymbol{A}_{i, \cdot} \beta^* \Big)^2. \end{align} We will define the generalization error as the supremum of the gap between \eqref{eq:train_error} and \eqref{eq:test_error} over a hypothesis class $\mathcal{H}$, i.e., the generalization error is defined as \begin{align} \label{eq:generalization_error_general} \phi(\Omega) = \sup_{\beta \in \mathcal{H}} \left( \mathcal{E}(\beta) - \widehat{\cE}_{\Omega}(\beta) \right). \end{align} Let $\widehat{\mathcal{A}} = \{\widehat{\bA}_{1, \cdot}, \dots, \widehat{\bA}_{N, \cdot}\}$ denote the rows of $\widehat{\bA}$. We define \begin{align} \label{eq:linear_family} \mathcal{F} = \{f: \widehat{\mathcal{A}} \rightarrow \mathbb{R}: \exists \beta \in \mathbb{R}^p \text{ s.t. } f(\alpha) = \langle \beta, \alpha \rangle \text{ with } \norm{\beta}_{\infty} \le B, \forall \alpha \in \widehat{\mathcal{A}} \} \end{align} as the family of linear predictors. Similarly, we define the family of $r$-sparse linear predictors as \begin{align} \label{eq:linear_family_sparse} \mathcal{F}_r = \{f: \widehat{\mathcal{A}} \rightarrow \mathbb{R}: \exists \beta \in \mathbb{R}^p \text{ s.t. } f(\alpha) = \langle \beta, \alpha \rangle \text{ with } \norm{\beta}_{\infty} \le B, \norm{\beta}_0 \le r, \forall \alpha \in \widehat{\mathcal{A}} \}. \end{align} Note that for every $f_1 \in \mathcal{F}$ and $f_2 \in \mathcal{F}_r$, we can associate a unique $\beta_1 \in \mathbb{R}^p$ and $\beta_2 \in \mathbb{R}^p$, respectively.
The associated Rademacher complexities of the class of squared-loss functions $\ell(\cdot; \cdot)$ under $\mathcal{F}$ and $\mathcal{F}_r$ are defined to be \begin{align} R_n(\ell(\cdot; \cdot)\circ \mathcal{F}) &= \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}} \Bigg( \frac{1}{n} \sum_{k=1}^n \sigma_k \ell(S_{i_k}; \beta) \Bigg) \Bigg] \end{align} and \begin{align} R_n(\ell(\cdot; \cdot)\circ \mathcal{F}_r) &= \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}_r} \Bigg( \frac{1}{n} \sum_{k=1}^n \sigma_k \ell(S_{i_k}; \beta) \Bigg) \Bigg], \end{align} respectively. Here, $\sigma_k$ are independent Rademacher random variables uniformly chosen from $\{-1, 1\}$. \paragraph{High probability events for conditioning.} We define the following two events: \begin{align} \mathcal{E}_5 &= \left \{ \max_{i \in [N]} \ell(S_i; \beta) \le c(N,p), \,\, \forall \beta \in \mathcal{F} \right \} \label{eqn:cE_5} \\ \mathcal{E}_6 &= \left \{ \phi(\Omega) \le \mathbb{E}_{\Omega} \left[ \phi(\Omega) \right] + \sqrt{ \frac{ 8\, c(N,p) \log(Np)}{n}} \right\} \label{eqn:cE_6}. \end{align} Here, $c(N, p) = C \cdot \left( \frac{r^2 B^2}{\rho^2} \left( \Gamma + K_\alpha \log^{\frac{1}{\alpha}}(rNp) \right)^2 +\Gamma^2 \norm{\beta^*}_1^2 \right) $, where $C > 0$ is an absolute constant. \subparagraph{Observation 5: $\mathcal{E}_5$ occurs with high probability.} \begin{lemma}\label{lemma:E5} Let $\text{rank}(\widehat{\bA}) = r$. Assume Properties \ref{prop:bounded_covariates} and \ref{prop:covariate_noise_structure} hold. Then, if $\mathcal{E}_2$ occurs, \begin{align} \mathbb{P} \{\mathcal{E}_5^c \} &\le \frac{2}{N^8 p^8}. \end{align} \end{lemma} \subparagraph{Observation 6: $\mathcal{E}_6$ occurs with high probability.} \begin{lemma}\label{lemma:E6} Suppose $\mathcal{E}_5$ occurs. Then, \begin{align} \mathbb{P} \{\mathcal{E}_6^c \} &\le \frac{1}{N^8 p^8}. 
\end{align} \end{lemma} \subparagraph{Helpful Rademacher complexity lemmas.} \begin{lemma} \label{lemma:rademacher_bound} Let $\phi(\Omega)$ be defined as in \eqref{eq:generalization_error_general}. Then, \begin{align*} \mathbb{E}_{\Omega} \left[ \phi(\Omega) \right] &\le 2 R_n(\ell(\cdot; \cdot)\circ \mathcal{F}). \end{align*} \end{lemma} \begin{lemma} \label{lemma:rademacher_complexity_equality} Let $\emph{rank}(\widehat{\bA}) = r$. Then, \begin{align} \label{eq:rademacher_relation} R_n(\ell(\cdot; \cdot)\circ \mathcal{F}) &= R_n(\ell(\cdot; \cdot)\circ \mathcal{F}_r). \end{align} \end{lemma} \begin{lemma} \label{lemma:rademacher_complexity_linear_functions_sparse} Let $\emph{rank}(\widehat{\bA}) = r$. Let there exist a constant $B > 0$ such that for any hypothesis $\beta \in \mathbb{R}^p$, $\norm{\beta}_{\infty} \le B$. Then for any $(\alpha_{1}, \dots, \alpha_{n}) \in \widehat{\mathcal{A}}^n $, \begin{align} R_n(\mathcal{F}_r) &= \mathbb{E}_{\sigma, \Omega} \Bigg[ \sup_{\beta \in \mathcal{F}_r} \Bigg( \frac{1}{n} \sum_{i=1}^n \sigma_i \langle \alpha_{i}, \beta \rangle \Bigg) \Bigg] \le \frac{r B}{\sqrt{n}} \max_{i \in [n]} \norm{\alpha_i}_{\infty}. \end{align} \end{lemma} \vspace{10pt} \subsubsection{Completing Proof of Theorem \ref{thm:mse_test_hsvt}} \begin{proof} Let $E \triangleq \mathcal{E}_1 \cap \mathcal{E}_2 \cap \mathcal{E}_3 \cap \mathcal{E}_4 \cap \mathcal{E}_5 \cap \mathcal{E}_6$. Then, we have \begin{align} \label{eq:test_error.term0} \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) \right] &= \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) \cdot \mathbb{1}(E) \right] +\mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) \cdot \mathbb{1}(E^c) \right]. \end{align} We will bound each term on the right-hand side of \eqref{eq:test_error.term0} separately. \vspace{10pt} \noindent {\bf Upper bound on first term in \eqref{eq:test_error.term0}.} Suppose the event $E$ occurs.
First, observe that \begin{align*} \mathcal{E}(\widehat{\beta}) &\le \widehat{\cE}_{\Omega}(\widehat{\beta}) + \sup_{\beta \in \mathcal{F}} \Big( \mathcal{E}(\beta) - \widehat{\cE}_{\Omega}(\beta) \Big) = \widehat{\cE}_{\Omega}(\widehat{\beta}) + \phi(\Omega) \\ &\le \widehat{\cE}_{\Omega}(\widehat{\beta}) + \mathbb{E}_{\Omega} \left[ \phi(\Omega) \right] + \sqrt{ \frac{ 8 \, c(N,p) \log(Np)}{n}} \\ &\stackrel{(a)} \le \widehat{\cE}_{\Omega}(\widehat{\beta}) + 2 R_n(\ell(\cdot; \cdot)\circ \mathcal{F}) + \sqrt{ \frac{ 8 \,c(N,p) \log(Np)}{n}} \nonumber \\ &\stackrel{(b)} = \widehat{\cE}_{\Omega}(\widehat{\beta}) + 2 R_n(\ell(\cdot; \cdot)\circ \mathcal{F}_r) + \sqrt{ \frac{ 8 \, c(N,p) \log(Np)}{n}}, \end{align*} where (a) follows from Lemma \ref{lemma:rademacher_bound} and (b) follows from Lemma \ref{lemma:rademacher_complexity_equality}. We then use Lemmas \ref{lemma:rademacher_composition} and \ref{lemma:rademacher_complexity_linear_functions_sparse} to obtain \begin{align*} R_n(\ell(\cdot; \cdot)\circ \mathcal{F}_r) &\le 2 \sqrt{c(N, p)} R_n (\mathcal{F}_r) \le \frac{2 \sqrt{c(N,p)} r B}{\sqrt{n}} \max_{i \in [n]} \| \widehat{\bA}_{i, \cdot} \|_{\infty}. \end{align*} Combining Property \ref{prop:bounded_covariates} with the contraction property of the $\text{HSVT}$ operator (Lemma \ref{lemma:HSVT_contraction}), \begin{align*} \max_{i \in [n]} \| \widehat{\bA}_{i, \cdot} \|_{\infty} &\le \frac{C_1}{\rho} \max_{i \in [n]} \norm{\boldsymbol{Z}_{i, \cdot}}_{\infty} \le \frac{C_1}{\rho} \max_{i \in [n], j \in [p]} \abs{A_{ij} + \eta_{ij}} \le \frac{C_1}{\rho} \Big(\Gamma + \max_{i \in [n], j \in [p]} \abs{\eta_{ij}} \Big) \end{align*} for some $C_1 > 0$. 
Using the above results, we obtain \begin{align*} \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) \cdot \mathbb{1}(E) \right] &= \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) ~\Big|~ E ~ \right] \cdot \mathbb{P} \{ E \} \le \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) ~\Big|~ E ~ \right] \\ &\le \mathbb{E} \left[ \widehat{\cE}_{\Omega}(\widehat{\beta}) ~\Big|~ E ~ \right] + \frac{C_1 r B}{\rho} \sqrt{ \frac{c(N, p)}{n}} \left(\Gamma + \mathbb{E} \left[ \max_{ij} \abs{\eta_{ij}} \right] \right) + \sqrt{ \frac{ 8\, c(N,p) \log(Np)}{n}}. \end{align*} We begin by observing that the first term on the right-hand side of the above inequality corresponds to the training error defined in \eqref{eq:mcse_train_hsvt_bound} up to constant factors since we are now also conditioning on events $\mathcal{E}_5$ and $\mathcal{E}_6$; in particular, only the value of $C_2$ in \eqref{eq:mcse_train_hsvt_bound} changes. Further, by Lemma \ref{lemma:max_subg}, we know that \begin{align} \label{eq:cor.2} \mathbb{E} \left[ \max_{i \in [n], j \in [p]} \abs{\eta_{ij}} \right]&\le C_2 K_\alpha \log^{\frac{1}{\alpha}}(np) \end{align} for some $C_2 > 0$. Putting everything together, we arrive at the following inequality: \begin{align} \label{eq:test_error.term1} \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) \cdot \mathbb{1}(E) \right] &\le \text{MSE}_{\Omega}(\widehat{Y}) + \frac{C_3 r B}{\rho} \sqrt{ \frac{c(N, p)}{n}} \left(\Gamma + K_\alpha \log^{\frac{1}{\alpha}}(np) \right) + \sqrt{ \frac{ 8 \, c(N,p) \log(Np)}{n}}, \end{align} where $C_3$ is a universal positive constant. Again, we highlight that the value of $\text{MSE}_{\Omega}(\widehat{Y})$ in \eqref{eq:test_error.term1} is equivalent to \eqref{eq:mcse_train_hsvt_bound} after redefining the value of $C_2$ within \eqref{eq:mcse_train_hsvt_bound}. 
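As a sanity check on the Rademacher-complexity steps used above, the empirical complexity of the $r$-sparse linear class can be estimated by Monte Carlo and compared against the bound of Lemma \ref{lemma:rademacher_complexity_linear_functions_sparse}. A minimal sketch (all sizes, the seed, and the covariate distribution are illustrative assumptions); it uses the fact that for $\norm{\beta}_{\infty} \le B$, $\norm{\beta}_0 \le r$ the supremum is attained by placing $\pm B$ on the $r$ coordinates with the largest $|\sum_k \sigma_k \alpha_{kj}|$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r, B = 40, 10, 3, 1.0
alpha = rng.standard_normal((n, p))       # hypothetical de-noised covariate rows

def sup_term(sigma):
    # sup over ||beta||_inf <= B, ||beta||_0 <= r of (1/n) sum_k sigma_k <alpha_k, beta>:
    # the objective is linear in beta under box constraints, so the optimum puts
    # +/-B on the r coordinates with the largest |sum_k sigma_k * alpha_kj|.
    s = alpha.T @ sigma                    # shape (p,)
    top = np.sort(np.abs(s))[-r:]
    return B * top.sum() / n

# Monte Carlo estimate of R_n(F_r) over Rademacher draws sigma in {-1, +1}^n.
draws = [sup_term(rng.choice([-1.0, 1.0], size=n)) for _ in range(2000)]
R_hat = float(np.mean(draws))

# Upper bound from the lemma: r * B / sqrt(n) * max_i ||alpha_i||_inf.
bound = r * B / np.sqrt(n) * np.max(np.abs(alpha))
```

On this synthetic instance the estimate comes out well below the lemma's bound, as expected.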
\vspace{10pt} \noindent {\bf Upper bound on second term in \eqref{eq:test_error.term0}.} We begin with the following trivial bound on the prediction error: \begin{align*} \mathcal{E}(\widehat{\beta}) &= \frac{1}{N} \sum_{i=1}^N \left(\widehat{\bA}_{i, \cdot} \widehat{\beta} - \boldsymbol{A}_{i, \cdot} \beta^* \right)^2 \le \max_{i \in [N]} \left(\widehat{\bA}_{i, \cdot} \widehat{\beta} - \boldsymbol{A}_{i, \cdot} \beta^* \right)^2 \le 2 \max_{i \in [N]} \left[ \left(\widehat{\bA}_{i, \cdot} \widehat{\beta} \right)^2 + \left( \boldsymbol{A}_{i, \cdot} \beta^* \right)^2 \right]. \end{align*} We will proceed to bound each term on the right-most side of the above inequality. Since $\text{rank}(\widehat{\bA}) = r$, any sub-matrix $\widehat{\bA}^{\Omega}$ formed from the collection of $n$ arbitrary rows of $\widehat{\bA}$ must also have rank at most $r$. By Proposition \ref{prop:low_rank_sparsity}, for any hypothesis $\beta \in \mathcal{F}$ (see \eqref{eq:linear_family} for the definition of $\mathcal{F}$) and $\widehat{\bA}^{\Omega}$, there exists an $r$-sparse vector $\beta_r$ such that $\widehat{\bA}^{\Omega} \beta = \widehat{\bA}^{\Omega} \beta_r$; in particular, let $\widehat{\beta}_r$ denote such an $r$-sparse counterpart of $\widehat{\beta}$. Let $I_{\widehat{\beta}_r} = \{j \in [p]: (\widehat{\beta}_r)_j \neq 0\}$ denote the index set for the nonzero elements of $\widehat{\beta}_r$. Using the above facts (in addition to Condition 2), we have \begin{align} \label{eq:max_error_bound.term1} \widehat{\bA}_{i, \cdot} \widehat{\beta} = \widehat{\bA}_{i, \cdot} \widehat{\beta}_r = \sum_{j \in I_{\widehat{\beta}_r}} \widehat{\bA}_{ij} \cdot (\widehat{\beta}_r)_j \le r B \max_{ j \in I_{\widehat{\beta}_r}} | \widehat{A}_{ij} |. \end{align} Combining the assumption $\widehat{\rho} \geq \frac{1}{Np}$ (see the footnote in Algorithm \ref{alg:main_algorithm}), Property \ref{prop:bounded_covariates}, and Lemma \ref{lemma:HSVT_contraction}, we can bound \[ | \widehat{A}_{ij} | \le Np \, \abs{Z_{ij}} \le Np \left( \Gamma + \abs{ \eta_{ij} } \right).
\] Moreover, by Property \ref{prop:bounded_covariates} and H\"older's inequality, we have \begin{align} \label{eq:max_error_bound.term2} \abs{\boldsymbol{A}_{i, \cdot} \beta^*} \le \norm{\boldsymbol{A}_{i, \cdot}}_{\infty} \, \norm{\beta^*}_1 \le \Gamma \norm{\beta^*}_1. \end{align} Combining the above results, we obtain \begin{align*} \mathcal{E}(\widehat{\beta}) &\le 2 \Big[ r B N p \Big( \Gamma + \max_{i \in [N], j \in I_{\widehat{\beta}_r}} \abs{\eta_{ij}} \Big) \Big]^2 + 2 \Gamma^2 \norm{\beta^*}_1^2. \end{align*} Applying the Cauchy-Schwarz inequality gives the following inequality: \begin{align*} \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) \cdot \mathbb{1}(E^c) \right] &\le \mathbb{E} \left[ \mathcal{E}(\widehat{\beta})^2 \right]^{\frac{1}{2}} \cdot \mathbb{E} \left[ \mathbb{1}(E^c) \right]^{\frac{1}{2}} \\ &= \mathbb{E} \left[ \mathcal{E}(\widehat{\beta})^2 \right]^{\frac{1}{2}} \cdot \mathbb{P} \{ E^c \}^{\frac{1}{2}} \\ &\le 2 \sqrt{2} \, \mathbb{E} \left[ \left( r B N p \Big( \Gamma + \max_{i \in [N], j \in I_{\widehat{\beta}_r}} \abs{\eta_{ij}} \Big) \right)^4 + \Gamma^4 \norm{\beta^*}_1^4 \right]^{\frac{1}{2}} \cdot \mathbb{P} \{ E^c \}^{\frac{1}{2}} \\ &= 2 \sqrt{2} \, \left[ \left( r B N p \right)^4 \mathbb{E} \Big( \Gamma + \max_{i \in [N], j \in I_{\widehat{\beta}_r}} \abs{\eta_{ij}} \Big) ^4 + \Gamma^4 \norm{\beta^*}_1^4 \right]^{\frac{1}{2}} \cdot \mathbb{P} \{ E^c \}^{\frac{1}{2}} \\ &\le 2 \sqrt{2} \,\left[ (rBNp)^2 \, \mathbb{E} \Big[ \Big(\Gamma + \max_{i \in [N], j\in I_{\widehat{\beta}_r}} \abs{\eta_{ij}} \Big)^4 \Big]^{\frac{1}{2}} + \Gamma^2 \norm{\beta^*}_1^2 \right] \cdot \mathbb{P} \{ E^c \}^{\frac{1}{2}} \\ &\le 2 \sqrt{2} \,\left[ 8 (rBNp)^2 \, \Big(\Gamma^4 + \mathbb{E} [\max_{i \in [N], j\in I_{\widehat{\beta}_r}} \abs{\eta_{ij}}^4 ] \Big)^{\frac{1}{2}} + \Gamma^2 \norm{\beta^*}_1^2 \right] \cdot \mathbb{P} \{ E^c \}^{\frac{1}{2}} \\ &\le 2 \sqrt{2} \,\left[ 8 (rBNp)^2 \, \Big(\Gamma^2 + \mathbb{E} [\max_{i \in [N], j\in
I_{\widehat{\beta}_r}} \abs{\eta_{ij}}^4 ]^{\frac{1}{2}} \Big)+ \Gamma^2 \norm{\beta^*}_1^2 \right] \cdot \mathbb{P} \{ E^c \}^{\frac{1}{2}}. \end{align*} Note that for any $\alpha >0$ and $\theta \geq 1$, $\big| \eta_{ij}\big|^{\theta}$ is a $\psi_{\alpha/\theta}$-random variable since $\eta_{ij}$ is a $\psi_{\alpha}$-random variable. With the choice of $\theta =4 $, we have \begin{align} \mathbb{E} \Big[ \max_{i\in[N], j\in I_{\widehat{\beta}_r}} \abs{\eta_{ij}}^4 \Big] &\le C_4 K_\alpha^4 \log^{\frac{4}{\alpha}}(rN) \label{eq:proof_test_error} \end{align} for some $C_4 > 0$ by Lemma \ref{lemma:max_subg} (also see Remark \ref{rem:max_psialpha}). Further, by a simple application of De Morgan's law and the union bound, we have \begin{align*} \mathbb{P} \{E^c\} &\le \mathbb{P} \{\mathcal{E}_1^c\} + \mathbb{P} \{\mathcal{E}_2^c\} + \mathbb{P} \{\mathcal{E}_3^c\} + \mathbb{P} \{\mathcal{E}_4^c\} + \mathbb{P} \{\mathcal{E}_5^c\} + \mathbb{P} \{\mathcal{E}_6^c\} \le \frac{11}{N^8 p^7}. \end{align*} Putting everything together, \begin{align} \label{eq:test_error.term2} \mathbb{E} \left[ \mathcal{E}(\widehat{\beta}) \cdot \mathbb{1}(E^c) \right] &\le C_5 \,\left[ (rBNp)^2 \, \Big(\Gamma^2 + K_\alpha^2 \log^{\frac{2}{\alpha}}(rN) \Big)+ \Gamma^2 \norm{\beta^*}_1^2 \right] \cdot \frac{1}{N^4 p^{7/2}} \nonumber \\ &\le C_5 \,\left[ (rB)^2 \, \Big(\Gamma^2 + K_\alpha^2 \log^{\frac{2}{\alpha}}(rN) \Big)+ \Gamma^2 \norm{\beta^*}_1^2 \right] \cdot \frac{1}{N^2 p^{3/2}}, \end{align} where $C_5$ is an absolute constant.
\vspace{10pt} \noindent {\bf Concluding the proof.} Plugging \eqref{eq:test_error.term1} and \eqref{eq:test_error.term2} into \eqref{eq:test_error.term0} gives the following bound: \begin{align*} \text{MSE}(\widehat{Y}) &\le \text{MSE}_{\Omega}(\widehat{Y}) + \frac{C_3 r B}{\rho} \sqrt{ \frac{c(N, p)}{n}} \left(\Gamma + K_\alpha \log^{\frac{1}{\alpha}}(np) \right) + \sqrt{ \frac{ 8 \, c(N,p) \log(Np)}{n}} \\ & + \frac{C_5}{N^2 p^{3/2}} \,\left[ (rB)^2 \, \Big(\Gamma^2 + K_\alpha^2 \log^{\frac{2}{\alpha}}(rN) \Big)+ \Gamma^2 \norm{\beta^*}_1^2 \right] . \end{align*} Observe that $c(N,p) = \text{poly}(K_\alpha, \Gamma, B, \norm{\beta^*}_1) \cdot \frac{r^2 \log^2(rNp)}{\rho^2} $. Thus, \begin{align*} \text{MSE}(\widehat{Y}) &\le \text{MSE}_{\Omega}(\widehat{Y}) + \frac{\text{poly}(K_\alpha, \Gamma, B, \norm{\beta^*}_1)}{\rho^2} \cdot \frac{r^2 \log^2(rNp)}{\sqrt{n}} \\ &+ \frac{\text{poly}(K_\alpha, \Gamma, B, \norm{\beta^*}_1)}{\rho} \cdot \frac{r \log^{\frac{3}{2}}(rNp)}{\sqrt{n}} + \text{poly}(K_\alpha, \Gamma, B, \norm{\beta^*}_1) \cdot \frac{r^2 \log^2(rN)}{N^2 p^{3/2}} \\ &\le \text{MSE}_{\Omega}(\widehat{Y}) + \frac{C_6 r^2}{\rho^2} \cdot \frac{ \log^2(rNp)}{\sqrt{n}}, \end{align*} where $C_6 = \text{poly}(K_{\alpha}, \Gamma, B, \norm{\beta^*}_1)$. This completes the proof. \end{proof} \section{Proof of Theorem \ref{thm:mcse_whp}} \label{sec:appendix_noisy_regression_via_MCSE} As outlined in Section \ref{sec:proof_sketch_mcse_whp}, we prove Theorem \ref{thm:mcse_whp} by utilizing Lemma \ref{lemma:column_error} and some conditioning events that ensure certain ``nice'' properties hold. First, in \ref{sec:more_svt}, we state some important properties of HSVT, i.e., we present a helper lemma and define an operator induced by HSVT that acts on the column space. Then, we prove Lemma \ref{lemma:column_error} in \ref{sec:key_lemma}. 
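Before the formal statements, a quick numerical illustration of HSVT may be helpful (the sizes, noise level, and seed below are illustrative assumptions, not values from the text): thresholding the singular values of $\boldsymbol{Z}$ at any level strictly between its $r$-th and $(r+1)$-st singular values returns a rank-$r$ estimate, which is exactly what Lemma \ref{lemma:thresholding_rank} formalizes via Weyl's inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, r = 60, 40, 3                        # hypothetical problem sizes

# Rank-r "signal" A, observed through small additive noise as Z.
A = rng.standard_normal((N, r)) @ rng.standard_normal((r, p))
Z = A + 0.05 * rng.standard_normal((N, p))

def hsvt(M, lam):
    """Hard singular value thresholding: keep only singular values >= lam."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    keep = s >= lam
    return (U[:, keep] * s[keep]) @ Vt[keep, :]

s = np.linalg.svd(Z, compute_uv=False)     # singular values, descending
lam = 0.5 * (s[r - 1] + s[r])              # any threshold strictly between s_r and s_{r+1}
A_hat = hsvt(Z, lam)
rank_after = np.linalg.matrix_rank(A_hat)
```

With the noise well separated from the signal spectrum, exactly $r$ singular values survive the threshold, so `rank_after` equals $r$.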
\subsection{More on HSVT}\label{sec:more_svt} \subsubsection{Rank after HSVT} \begin{lemma}\label{lemma:thresholding_rank} Given $\boldsymbol{Z} \in \mathbb{R}^{N \times p}$ and $\lambda^* \geq 0$, let $\widehat{\bA} = \frac{1}{\widehat{\rho}}\emph{HSVT}_{\lambda^*}(\boldsymbol{Z})$. If $\lambda^*$ satisfies $\rho\tau_{r+1} + \| \boldsymbol{Z} - \rho \boldsymbol{A} \| < \lambda^* < \rho\tau_r - \| \boldsymbol{Z} - \rho \boldsymbol{A} \|$, then $r = \emph{rank}(\widehat{\bA}) = r(\lambda^*, \boldsymbol{A})$. \end{lemma} \begin{proof} We may write \[ \boldsymbol{Z} = \rho \boldsymbol{A} + \big( \boldsymbol{Z} - \rho \boldsymbol{A} \big). \] Recall that $s_i$ are the singular values of $\boldsymbol{Z}$. We have \begin{align*} s_{r} &\ge \rho \tau_r - \| \boldsymbol{Z} - \rho \boldsymbol{A} \|, \quad \text{and} \quad s_{r+1} \le \rho\tau_{r+1} + \| \boldsymbol{Z} - \rho \boldsymbol{A} \| \end{align*} by Weyl's inequality between singular values (Lemma \ref{lem:weyls}). Hence, $s_{r+1} < \lambda^* < s_r$, so exactly $r$ singular values of $\boldsymbol{Z}$ survive the thresholding and $\emph{rank}(\widehat{\bA}) = r$. \end{proof} \subsubsection{Column Operator Induced by HSVT}\label{sec:induced_operator} \begin{lemma} \label{lemma:column_representation} Let $\boldsymbol{B} \in \mathbb{R}^{N \times p}$ and $\lambda \geq 0$ be given. Then for any $j \in [p]$, \begin{align} \varphi^{\boldsymbol{B}}_{\lambda} \big( \boldsymbol{B}_{\cdot,j} \big) = \emph{HSVT}_{\lambda}\big(\boldsymbol{B} \big)_{\cdot,j}.
\end{align} \end{lemma} \begin{proof} By \eqref{eq:prox_vector} and the orthonormality of the left singular vectors, \begin{align*} \varphi^{\boldsymbol{B}}_{\lambda} \big( \boldsymbol{B}_{\cdot,j} \big) &= \sum_{i=1}^{N \wedge p} \mathbb{1}(\sigma_i (\boldsymbol{B}) \ge \lambda) x_i x_i^T \boldsymbol{B}_{\cdot, j} = \sum_{i = 1}^{N \wedge p} \mathbb{1}(\sigma_i (\boldsymbol{B}) \ge \lambda) x_i x_i^T \Big( \sum_{i'=1}^{N \wedge p} \sigma_{i'} (\boldsymbol{B}) x_{i'} y_{i'} \Big)_{\cdot, j} \\ &= \sum_{i, i' = 1}^{N \wedge p} \sigma_{i'}(\boldsymbol{B}) \mathbb{1}(\sigma_i (\boldsymbol{B}) \ge \lambda) x_i x_i^T x_{i'} (y_{i'})_{j} = \sum_{i, i' = 1}^{N \wedge p} \sigma_{i'}(\boldsymbol{B}) \mathbb{1}(\sigma_i (\boldsymbol{B}) \ge \lambda) x_i \delta_{i i'} (y_{i'})_{j} \\ &= \sum_{i = 1}^{N \wedge p} \mathbb{1} (\sigma_i (\boldsymbol{B}) \ge \lambda) \sigma_i(\boldsymbol{B}) x_i (y_i)_j \\ &= \text{HSVT}_{\lambda}(\boldsymbol{B})_{\cdot, j}. \end{align*} This completes the proof. \end{proof} \begin{remark} Suppose we have missing data. Then the estimator $\widehat{\bA}$ has the following representation: \[ \widehat{\bA} = \frac{1}{\widehat{\rho}} \emph{HSVT}_{\lambda^*}(\boldsymbol{Z}) = \frac{1}{\widehat{\rho}} \sum_{i=1}^{N \wedge p} s_i \mathbb{1}(s_i \ge \lambda^*) u_i v_i^T. \] By Lemma \ref{lemma:column_representation}, we note that \begin{align}\label{eq:repsentation_A_hat} \widehat{\bA}_{\cdot, j} = \frac{1}{\widehat{\rho}} \varphi^{\boldsymbol{Z}}_{\lambda^*}(\boldsymbol{Z}_{\cdot, j}). \end{align} \end{remark} \subsubsection{HSVT Operator is a Contraction}\label{sec:hsvt_contraction} \begin{lemma}\label{lemma:HSVT_contraction} Let $\boldsymbol{B} \in \mathbb{R}^{N \times p}$ and $\lambda \geq 0$ be given.
Then for any $j \in [p]$, \[ \norm{\emph{HSVT}_{\lambda}\big(\boldsymbol{B} \big)_{\cdot,j}}_2 \le \norm{\boldsymbol{B}_{\cdot,j} }_2. \] \end{lemma} \begin{proof} By \eqref{eq:prox_vector} and Lemma \ref{lemma:column_representation}, we have \begin{align*} \norm{\text{HSVT}_{\lambda}\big( \boldsymbol{B} \big)_{\cdot,j} }_2^2 &= \norm{\varphi^{\boldsymbol{B}}_{\lambda} \big( \boldsymbol{B}_{\cdot,j} \big) }_2^2 = \norm{\sum_{i=1}^{N \wedge p} \mathbb{1}(\sigma_i (\boldsymbol{B}) \ge \lambda) x_i x_i^T \boldsymbol{B}_{\cdot, j} }_2^2 \\ &\stackrel{(a)}= \sum_{i=1}^{N \wedge p} \mathbb{1}(\sigma_i (\boldsymbol{B}) \ge \lambda) \big( x_i^T \boldsymbol{B}_{\cdot, j} \big)^2 \le \sum_{i=1}^{N \wedge p} \big( x_i^T \boldsymbol{B}_{\cdot, j} \big)^2 \\ &\stackrel{(b)}= \norm{ \sum_{i=1}^{N \wedge p} x_i x_i^T \boldsymbol{B}_{\cdot, j} }_2^2 = \norm{\boldsymbol{B}_{\cdot,j} }_2^2. \end{align*} Note that (a) and (b) use the orthonormality of the left singular vectors, and the final equality holds because $\boldsymbol{B}_{\cdot, j}$ lies in the column space of $\boldsymbol{B}$, which is contained in $\text{span}\{x_1, \ldots, x_{N \wedge p}\}$. \end{proof} \subsection{Proof of Lemma \ref{lemma:column_error} -- the Key Lemma for Theorem \ref{thm:mcse_whp}}\label{sec:key_lemma} \begin{proof} First, we recall three conditions assumed in the lemma that will be used in the proof: \begin{enumerate} \item $\norm{ \boldsymbol{Z} - \rho \boldsymbol{A}} \leq \Delta $ for some $\Delta \geq 0$. \item $ \frac{1}{\varepsilon} \rho \le \widehat{\rho} \le \varepsilon \rho$ for some $\varepsilon \ge 1$. \item $\lambda^* $ satisfies $\rho \tau_{r+1} + \Delta < \lambda^* < \rho \tau_r - \Delta$. \end{enumerate} Here, $r = r(\lambda^*, \boldsymbol{A})$. We prove the lemma in three steps. \paragraph{Step 1.} Fix a column index $j \in [p]$.
Observe that \begin{equation*} \widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j} = \Big( \widehat{\bA}_{\cdot, j} - \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}_{\cdot, j} \big) \Big) + \Big( \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}_{\cdot, j} \big) - \boldsymbol{A}_{\cdot, j} \Big). \end{equation*} Note that Condition 3 implies $\text{rank}(\widehat{\bA}) = r$ by Lemma \ref{lemma:thresholding_rank}. By definition (see \eqref{eq:prox_vector}), we have that $\varphi_{\lambda^*}^{\boldsymbol{Z}}: \mathbb{R}^N \to \mathbb{R}^N$ is the projection operator onto the span of the top $r$ left singular vectors of $\boldsymbol{Z}$, namely, $\text{span}\big\{ u_1, \ldots, u_r \big\}$. Therefore, \[ \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{A}_{\cdot, j}) - \boldsymbol{A}_{\cdot, j} \in \text{span}\{u_1, \ldots, u_r \}^{\perp} \] and by \eqref{eq:repsentation_A_hat} (using Lemma \ref{lemma:column_representation}), \[ \widehat{\bA}_{\cdot, j} - \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{A}_{\cdot, j}) = \frac{1}{\widehat{\rho}}\varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{Z}_{\cdot, j}) - \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{A}_{\cdot, j}) \in \text{span}\{u_1, \ldots, u_r \}.
\] Hence, $ \langle \widehat{\bA}_{\cdot, j} - \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{A}_{\cdot, j}), \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{A}_{\cdot, j}) - \boldsymbol{A}_{\cdot, j} \rangle = 0$ and \begin{equation}\label{eq:column_error} \Big\| \widehat{\bA}_{\cdot, j} - \boldsymbol{A}_{\cdot, j} \Big\|_2^2 = \Big\| \widehat{\bA}_{\cdot, j} - \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}_{\cdot, j} \big) \Big\|_2^2 + \Big\| \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}_{\cdot, j} \big) - \boldsymbol{A}_{\cdot, j} \Big\|_2^2 \end{equation} by the Pythagorean theorem. It remains to bound the terms on the right-hand side of \eqref{eq:column_error}. \paragraph{Step 2.} We begin by bounding the first term on the right-hand side of \eqref{eq:column_error}. Again applying Lemma \ref{lemma:column_representation}, we can rewrite \begin{align*} \widehat{\bA}_{\cdot, j} - \varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{A}_{\cdot, j}) &= \frac{1}{\widehat{\rho}}\varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{Z}_{\cdot, j}) - \varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{A}_{\cdot, j}) = \varphi_{\lambda^*}^{\boldsymbol{Z}} \Big(\frac{1}{\widehat{\rho}} \boldsymbol{Z}_{\cdot, j} - \boldsymbol{A}_{\cdot, j} \Big)\\ &= \frac{1}{\widehat{\rho}} \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} ) + \frac{\rho - \widehat{\rho}}{\widehat{\rho}} \varphi_{\lambda^*}^{\boldsymbol{Z}}( \boldsymbol{A}_{\cdot, j} ).
\end{align*} Using the Parallelogram Law (or, equivalently, combining the Cauchy-Schwarz and AM-GM inequalities), we obtain \begin{align} \norm{\widehat{\bA}_{\cdot, j} - \varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{A}_{\cdot, j})}_2^2 &= \norm{\frac{1}{\widehat{\rho}} \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} ) + \frac{\rho - \widehat{\rho}}{\widehat{\rho}} \varphi_{\lambda^*}^{\boldsymbol{Z}}( \boldsymbol{A}_{\cdot, j}) }_2^2 \nonumber\\ &\leq 2 \, \norm{\frac{1}{\widehat{\rho}} \varphi_{\lambda^*}^{\boldsymbol{Z}} (\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} ) }_2^2 + 2 \, \norm{ \frac{\rho - \widehat{\rho}}{\widehat{\rho}} \varphi_{\lambda^*}^{\boldsymbol{Z}}( \boldsymbol{A}_{\cdot, j} )}_2^2 \nonumber\\ &\leq \frac{2}{\widehat{\rho}^2} \norm{\varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j})}_2^2 + 2 \Big( \frac{\rho - \widehat{\rho}}{\widehat{\rho}}\Big)^2 \| \boldsymbol{A}_{\cdot, j} \|_2^2 \nonumber\\ &\leq \frac{2\varepsilon^2}{\rho^2} \norm{\varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j})}_2^2 + 2 (\varepsilon-1)^2 \| \boldsymbol{A}_{\cdot, j} \|_2^2. \label{eqn:term.1a} \end{align} The last inequality holds because Condition 2 implies $\frac{1}{\widehat{\rho}} \leq \frac{\varepsilon}{\rho}$ and $\left( \frac{\rho - \widehat{\rho}}{\widehat{\rho}} \right)^2 \leq (\varepsilon-1)^2$.
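The two HSVT facts driving this step, the column representation of Lemma \ref{lemma:column_representation} and the contraction property of Lemma \ref{lemma:HSVT_contraction}, can be verified numerically. A minimal sketch (matrix size, threshold, and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 6))           # hypothetical matrix
lam = 1.0                                 # hypothetical threshold

U, s, Vt = np.linalg.svd(B, full_matrices=False)
keep = s >= lam
hsvt_B = (U[:, keep] * s[keep]) @ Vt[keep, :]
P = U[:, keep] @ U[:, keep].T             # projector onto retained left singular vectors

# Column representation: column j of HSVT(B) equals the projection of B's
# column j onto the span of the retained left singular vectors.
col_identity = np.allclose(P @ B, hsvt_B)

# Contraction: HSVT never increases a column's Euclidean norm.
contracts = bool(np.all(np.linalg.norm(hsvt_B, axis=0)
                        <= np.linalg.norm(B, axis=0) + 1e-12))
```

Both checks hold for any matrix and threshold, since `P` is an orthogonal projection.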
Note that the first term of \eqref{eqn:term.1a} can be further decomposed using the Parallelogram Law; recalling $\boldsymbol{A} = \boldsymbol{A}^*(\lambda^*) + \boldsymbol{E}(\lambda^*)$, we have \begin{align} &\norm{\varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j})}_2^2 \nonumber\\ &\qquad\le 2 \, \Big \| \varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) - \varphi^{\boldsymbol{A}^*}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) \Big \|_2^2 +2 \, \Big \| \varphi^{\boldsymbol{A}^*}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) \Big \|_2^2. \label{eq:tricky} \end{align} We now bound the first term on the right-hand side of \eqref{eq:tricky}. First, we apply the Davis-Kahan $\sin \Theta$ Theorem (see \cite{davis1970rotation, wedin1972perturbation}) to arrive at the following inequality: \begin{align}\label{eq:davis_kahan} \big\| \mathcal{P}_{u_1, \ldots, u_r} - \mathcal{P}_{\mu_1, \ldots, \mu_r} \big\|_2 &\leq \frac{\| \boldsymbol{Z} - \rho \boldsymbol{A} \|}{\rho \tau_r - \rho \tau_{r+1}} \leq \frac{\Delta}{ \rho( \tau_r - \tau_{r+1})} \end{align} where $\mathcal{P}_{u_1, \ldots, u_r}$ and $\mathcal{P}_{\mu_1, \ldots, \mu_r}$ denote the projection operators onto the span of the top $r$ left singular vectors of $\boldsymbol{Z}$ and $\boldsymbol{A}^*$, respectively. Here, we utilized Condition 1 to bound $\| \boldsymbol{Z} - \rho \boldsymbol{A} \| \leq \Delta$.
Then it follows that \begin{align*} \Big \| \varphi_{\lambda^*}^{\boldsymbol{Z}}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) - \varphi^{\boldsymbol{A}^*}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) \Big \|_2 &\le \big\| \mathcal{P}_{u_1, \ldots, u_r} - \mathcal{P}_{\mu_1, \ldots, \mu_r} \big\|_2 \big\| \boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} \big\|_2 \nonumber\\ &\le \frac{\Delta}{ \rho( \tau_r - \tau_{r+1})} \big\| \boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} \big\|_2. \end{align*} Combining these inequalities, we have \begin{align} \Big\| \widehat{\bA}_{\cdot, j} - \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}_{\cdot, j} \big) \Big\|_2^2 &\leq \frac{4\varepsilon^2}{\rho^2} \frac{\Delta^2}{ \rho^2( \tau_r - \tau_{r+1})^2} \big\| \boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j} \big\|_2^2 \nonumber\\ &\quad + \frac{4\varepsilon^2}{\rho^2} \Big \| \varphi^{\boldsymbol{A}^*}(\boldsymbol{Z}_{\cdot, j} - \rho \boldsymbol{A}_{\cdot, j}) \Big \|_2^2 + 2 (\varepsilon-1)^2 \| \boldsymbol{A}_{\cdot, j} \|_2^2. \label{eq:term_step2} \end{align} \paragraph{Step 3.} We now bound the second term of \eqref{eq:column_error}.
Again recalling $\boldsymbol{A} = \boldsymbol{A}^*(\lambda^*) + \boldsymbol{E}(\lambda^*)$ and using \eqref{eq:davis_kahan}, we obtain \begin{align} \norm{\varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}_{\cdot, j} \big) - \boldsymbol{A}_{\cdot, j} }_2^2 &= \norm{ \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}^*_{\cdot, j} + \boldsymbol{E}_{\cdot, j} \big) - \boldsymbol{A}^*_{\cdot, j} - \boldsymbol{E}_{\cdot, j} }_2^2 \nonumber\\ &\leq 2 \, \norm{\varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}^*_{\cdot, j} \big) - \boldsymbol{A}^*_{\cdot, j} }_2^2 + 2 \,\norm{ \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{E}_{\cdot, j} \big) - \boldsymbol{E}_{\cdot, j}}_2^2 \nonumber\\ &= 2 \, \norm{\varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{A}^*_{\cdot, j} \big) - \varphi^{\boldsymbol{A}^*} \big( \boldsymbol{A}^*_{\cdot, j} \big) }_2^2 + 2 \,\norm{ \varphi_{\lambda^*}^{\boldsymbol{Z}} \big( \boldsymbol{E}_{\cdot, j} \big) - \boldsymbol{E}_{\cdot, j}}_2^2 \nonumber\\ &\leq 2 \, \norm{\mathcal{P}_{u_1, \ldots, u_r} - \mathcal{P}_{\mu_1, \ldots, \mu_r} }^2 \norm{ \boldsymbol{A}^*_{\cdot, j} }_2^2 + 2 \,\norm{ \boldsymbol{E}_{\cdot, j} }_2^2 \nonumber \\ &\le \frac{2 \Delta^2}{ \rho^2( \tau_r - \tau_{r+1})^2} \norm{ \boldsymbol{A}^*_{\cdot, j} }_2^2 + 2\, \norm{ \boldsymbol{E}_{\cdot, j} }_2^2. \label{eq:term_step3} \end{align} \noindent Inserting \eqref{eq:term_step2} and \eqref{eq:term_step3} back into \eqref{eq:column_error} completes the proof.
\end{proof} \subsection{Proof of $\mathcal{E}_1, \mathcal{E}_2, \mathcal{E}_3, \mathcal{E}_4$ Being High-probability Events}\label{sec:conditioning_events} \subsubsection{Proof of Lemma \ref{lemma:E1}} \begin{proof} Observe that $\norm{ \bA_{\cdot, j} }_2^2 \leq N \Gamma^2$ when Property \ref{prop:bounded_covariates} holds, and $\norm{ \mathbb{E} \boldsymbol{H}^T \boldsymbol{H} } \leq N \gamma^2 $ when Property \ref{prop:covariate_noise_variance} holds. By Theorem \ref{thm:spectral_norm_noise_matrix_bound}, we know that for any $\delta_1 > 0$, \begin{align*} \norm{\boldsymbol{Z} - \rho \boldsymbol{A}} &\leq \sqrt{ N \rho } \sqrt{ \rho \gamma^2 + (1-\rho) \Gamma^2 }\\ &+ C(\alpha) \sqrt{1+\delta_1}\sqrt{p} (K_{\alpha} + \Gamma) \Big( 1 + \big(2 + \delta_1 \big) \log(Np) \Big)^{\frac{1}{\alpha}} \sqrt{ \log(Np) } \end{align*} with probability at least $ 1 - \frac{2}{N^{1 + \delta_1} p^{\delta_1}}$. \end{proof} \subsubsection{Proof of Lemma \ref{lemma:E2}} \begin{proof} Recall that we define $\widehat{\rho}$ in Section \ref{sec:alg_ME_HSVT} as \[ \widehat{\rho} = \frac{1}{Np} \sum_{i=1}^{N} \sum_{j=1}^p \mathbb{1}(\bZ_{ij} \text{ observed}) \vee \frac{1}{Np}. \] By the binomial Chernoff bound, for $\varepsilon > 1$, \begin{align*} \Prob{ \widehat{\rho} > \varepsilon \rho } &\leq \exp\left( - \frac{(\varepsilon - 1 )^2}{\varepsilon + 1} Np \rho \right), \quad\text{and}\\ \Prob{ \widehat{\rho} < \frac{1}{\varepsilon} \rho } &\leq \exp \left( - \frac{(\varepsilon - 1)^2}{2 \varepsilon^2} Np \rho \right). \end{align*} By the union bound, \[ \Prob{ \frac{1}{\varepsilon} \rho \le \widehat{\rho} \le \varepsilon \rho } \geq 1 - \Prob{ \widehat{\rho} > \varepsilon \rho } - \Prob{ \widehat{\rho} < \frac{1}{\varepsilon} \rho }. \] Noticing that $\varepsilon + 1 < 2 \varepsilon < 2 \varepsilon^2$ for all $\varepsilon > 1$ completes the proof.
\end{proof} \subsubsection{Two Helper Lemmas for the Proof of Lemmas \ref{lemma:E3} and \ref{lemma:E4}} \begin{lemma} \label{lemma:masked_noise_col_norm} Assume Properties \ref{prop:bounded_covariates}, \ref{prop:covariate_noise_structure}, and \ref{prop:masking_noise_structure} hold. Then for any $\alpha \geq 1$ with which Property \ref{prop:covariate_noise_structure} holds, \[ \norm{ \bZ_{\cdot, j} - \rho \bA_{\cdot, j} }_{\psi_{\alpha}} \le C(K_{\alpha} + \Gamma), \qquad\forall j \in [p], \] where $C > 0$ is an absolute constant. \end{lemma} \begin{proof} Observe that \begin{align*} \norm{ \bZ_{\cdot, j} - \rho \bA_{\cdot, j}}_{\psi_{\alpha}} &= \sup_{u \in \mathbb{S}^{N-1}} \norm{ u^T \big( \bZ_{\cdot, j} - \rho \bA_{\cdot, j} \big)}_{\psi_{\alpha}}\\ &= \sup_{u \in \mathbb{S}^{N-1}} \norm{ u^T \big( \bZ - \rho \bA \big) e_j }_{\psi_{\alpha}}\\ &= \sup_{u \in \mathbb{S}^{N-1}} \norm{ \sum_{i=1}^N u_i \big( \bZ_{i, \cdot} - \rho \bA_{i, \cdot} \big) e_j }_{\psi_{\alpha}}\\ &\stackrel{(a)}\leq C \sup_{u \in \mathbb{S}^{N-1}} \left( \sum_{i=1}^N u_i^2 \norm{ \big( \bZ_{i, \cdot} - \rho \bA_{i, \cdot} \big) e_j }_{\psi_{\alpha}}^2 \right)^{1/2}\\ &\leq C \max_{i \in [N]} \norm{ \bZ_{i, \cdot} - \rho \bA_{i, \cdot} }_{\psi_{\alpha}}, \end{align*} where (a) follows from Lemma \ref{lem:ind_sum}. Then the conclusion follows from Lemma \ref{lemma:masked_noise_row_norm}. \end{proof} \begin{lemma}\label{lem:norm_psi_alpha} Let $W_1, \ldots, W_n$ be a sequence of $\psi_{\alpha}$-random variables for some $\alpha \geq 1$. For any $t \geq 0$, \[ \Prob{ \sum_{i=1}^n W_i^2 > t } \leq 2 \sum_{i=1}^n \exp \left( - \left( \frac{t}{n \| W_i \|_{\psi_{\alpha}}^2 }\right)^{\alpha/2} \right).
\] \end{lemma} \begin{proof} Note that $ \sum_{i=1}^n W_i^2 > t$ implies that there exists at least one $i \in [n]$ with $W_i^2 > \frac{t}{n}$. By the union bound, \begin{align*} \Prob{ \sum_{i=1}^n W_i^2 > t } &\leq \sum_{i=1}^n \Prob{W_i^2 > \frac{t}{n}} = \sum_{i=1}^n \Prob{|W_i| > \sqrt{\frac{t}{n}}} \leq \sum_{i=1}^n 2 \exp \left( - \left( \frac{t}{n \| W_i \|^2_{\psi_{\alpha}} }\right)^{\alpha/2} \right). \end{align*} \end{proof} \subsubsection{Proof of Lemma \ref{lemma:E3}} \begin{proof} Fix $j \in [p]$. Let $e_i \in \mathbb{R}^N$ denote the $i$-th canonical basis vector of $\mathbb{R}^N$ (column vector representation). Note that \[ \Big \| \bZ_{\cdot, j} - \rho \bA_{\cdot, j} \Big \|_2^2 = \sum_{i=1}^N \Big( e_i^T \big( \bZ_{\cdot, j} - \rho \bA_{\cdot, j} \big) \Big)^2 \] and $e_i^T \big( \bZ_{\cdot, j} - \rho \bA_{\cdot, j} \big)$ is a $\psi_{\alpha}$-random variable with $\norm{ e_i^T \big( \bZ_{\cdot, j} - \rho \bA_{\cdot, j} \big) }_{\psi_{\alpha}} \leq \norm{ \bZ_{\cdot, j} - \rho \bA_{\cdot, j} }_{\psi_{\alpha}} $. By Lemma \ref{lemma:masked_noise_col_norm}, $ \norm{ \bZ_{\cdot, j} - \rho \bA_{\cdot, j} }_{\psi_{\alpha}} \leq C(K_{\alpha} + \Gamma)$ for all $j \in [p]$. By Lemma \ref{lem:norm_psi_alpha} and the union bound, \begin{align*} \Prob{\mathcal{E}_3^c} &\leq \sum_{j=1}^p \Prob{ \Big \| \bZ_{\cdot, j} - \rho \bA_{\cdot, j} \Big \|_2^2 > 9C^2 (K_{\alpha} + \Gamma)^2 N \log^{\frac{2}{\alpha}}(Np) }\\ &\leq 2 \sum_{j=1}^p \sum_{i=1}^N \exp\left( -9 \log(Np) \right)\\ &= \frac{2}{N^8p^8}. \end{align*} \end{proof} \subsubsection{Proof of Lemma \ref{lemma:E4}} \begin{proof} Recall that $\text{rank}(\bA^*(\lambda^*)) = r$.
We write \[ \Big \| \varphi^{\bA^*}(\bZ_{\cdot, j} - \rho \bA_{\cdot, j}) \Big \|_2^2 = \sum_{i=1}^{r} \Big( u_i^T (\bZ_{\cdot, j} - \rho \bA_{\cdot, j}) \Big)^2, \] where $u_1, \ldots, u_{N \wedge p}$ denote the left singular vectors of $\bA^*$. The proof has the same structure as that of Lemma \ref{lemma:E3}, with $u_1, \ldots, u_{r}$ in place of $e_1, \ldots, e_N$. \end{proof} \section{Useful Theorems}\label{sec:useful_theorems} \subsection{Bounding $\psi_{\alpha}$-norm} \begin{lemma} \label{lemma:sum_of_subgaussians} {\bf Sum of independent sub-gaussian random variables.} \\ Let $X_1, \dots, X_n$ be independent, mean zero, sub-gaussian random variables. Then $\sum_{i=1}^n X_i$ is also a sub-gaussian random variable, and \begin{align} \Big \| \sum_{i=1}^n X_i\Big \|_{\psi_2}^2 &\le C \sum_{i=1}^n \norm{X_i}_{\psi_2}^2, \end{align} where $C$ is an absolute constant. \end{lemma} \begin{lemma} \label{lemma:subgauss_subexp} {\bf Product of sub-gaussians is sub-exponential.}\\ Let $X$ and $Y$ be sub-gaussian random variables. Then $XY$ is sub-exponential. Moreover, \begin{align} \norm{XY}_{\psi_1} &\le \norm{X}_{\psi_2} \norm{Y}_{\psi_2}. \end{align} \end{lemma} \subsection{Concentration Inequalities for Random Variables} \begin{lemma}\label{lem:general_bernsteins} {\bf Bernstein's inequality.}\\ Let $X_1, X_2, \dots, X_N$ be independent, mean zero, sub-exponential random variables. Let $S = \sum_{i=1}^N X_i$. Then for every $t > 0$, we have \begin{align} \mathbb{P} \{ \abs{S} \ge t \} \le 2 \exp \left( -c \min \left[\frac{t^2}{\sum^N_{i=1} \norm{X_i}^2_{\psi_1}}, \frac{t}{\max_i \norm{X_i}_{\psi_1}} \right] \right). \end{align} \end{lemma} \begin{lemma} \label{lem:mcdiarmid} {\bf McDiarmid's inequality.}\\ Let $x_1, \dots, x_n$ be independent random variables taking on values in a set $A$, and let $c_1, \dots, c_n$ be positive real constants.
If $\phi: A^n \rightarrow \mathbb{R}$ satisfies \begin{align*} \sup_{x_1, \dots, x_n, x_i' \in A} \abs{\phi(x_1, \dots, x_i, \dots, x_n) - \phi(x_1, \dots, x'_i, \dots, x_n)} &\le c_i, \end{align*} for $1 \le i \le n$, then \begin{align*} \mathbb{P} \Big\{ \abs{\phi(x_1, \dots, x_n) - \mathbb{E} \phi(x_1, \dots, x_n)} \ge \epsilon \Big\} &\le 2 \exp\left(\frac{-2\epsilon^2}{\sum_{i=1}^n c_i^2}\right). \end{align*} \end{lemma} \subsubsection{Upper Bound on the Maximum Absolute Value in Expectation} \begin{lemma} \label{lemma:max_subg} {\bf Maximum of a sequence of random variables. } \\ Let $X_1, X_2, \dots, X_n$ be a sequence of random variables, which are not necessarily independent, and satisfy $\mathbb{E}[X_i^{2p}]^{\frac{1}{2p}} \le K p^{\frac{\beta}{2}}$ for some $K, \beta >0$ and all $i$. Then, for every $n \ge 2$, \begin{align} \mathbb{E} \max_{i \le n} \abs{X_i} &\le C K \log^{\frac{\beta}{2}}(n). \end{align} \end{lemma} \begin{remark}\label{rem:max_psialpha} Lemma \ref{lemma:max_subg} implies that if $X_1, \ldots, X_n$ are $\psi_{\alpha}$ random variables with $\| X_i \|_{\psi_{\alpha}} \leq K_{\alpha}$ for all $i \in [n]$, then \begin{align*} \mathbb{E} \max_{i \le n} \abs{X_i} &\le C K_{\alpha} \log^{\frac{1}{\alpha}}(n). \end{align*} \end{remark} \subsection{Other Useful Lemmas} \begin{lemma} {\bf Perturbation of singular values (Weyl's inequality).} \label{lem:weyls}\\ Let $\boldsymbol{A}$ and $\boldsymbol{B}$ be two $m \times n$ matrices. Let $k = m \wedge n$. Let $\lambda_1,\dots, \lambda_k$ be the singular values of $\boldsymbol{A}$ in decreasing order, repeated by multiplicities, and let $\tau_1, \dots, \tau_k$ be the singular values of $\boldsymbol{B}$ in decreasing order, repeated by multiplicities. Let $\delta_1, \dots, \delta_k$ be the singular values of $\boldsymbol{A} - \boldsymbol{B}$, in any order but still repeated by multiplicities. Then, \begin{align*} \max_{1 \le i \le k} \abs{ \lambda_i - \tau_i} &\le \max_{1 \le i \le k} \abs{ \delta_i}.
\end{align*} \end{lemma} \begin{lemma} \label{lemma:rademacher_composition} {\bf Lipschitz composition of Rademacher averages.} \\ Let $\mathcal{Y}$ refer to the response space, where $\mathcal{Y}$ need not be finite. Suppose $\mathcal{F} \subset [a,b]^{\mathcal{X}}$ and $L: \mathcal{Y} \times \mathbb{R} \rightarrow [0, \infty)$ is a loss such that $L(y, \cdot)$ is $C$-Lipschitz for all $y \in \mathcal{Y}$. Then for all $S = \{(X_1, Y_1), \dots, (X_n, Y_n)\}$, \begin{align} R_n(L \circ \mathcal{F}) &\le C \cdot R_n(\mathcal{F}), \end{align} where $L \circ \mathcal{F} = \{(x, y) \mapsto L(y, f(x)) \mid f \in \mathcal{F}\}$. \end{lemma} \section{Introduction} \label{sec:intro} \paragraph{Overview.} We consider error-in-variable regression in the high-dimensional regime. Let $\boldsymbol{A} \in \mathbb{R}^{N \times p}$ and $\beta^* \in \mathbb{R}^{p}$ denote the (unobserved) covariates and model parameters, respectively. Let $Y = \boldsymbol{A} \beta^* + \epsilon \in \mathbb{R}^N$ denote the vector of responses. We denote $\Omega \subset [N]$, where $\abs{\Omega} = n \le N$, as the subset of observed indices. Given a ``sample set'' $\Omega$, we observe $Y^{\Omega} = \{Y_i: i \in \Omega\}$, i.e., a subset of size $n$ of all the response variables $Y$. Rather than observing $\boldsymbol{A}$, we are given access to $\boldsymbol{Z} \in \mathbb{R}^{N \times p}$, where $\boldsymbol{Z}$ is a sparse, noisy version of $\boldsymbol{A}$. Specifically, for all $i \in [N]$ and $j \in [p]$, we define $Z_{ij} = (A_{ij} + \eta_{ij}) \cdot \mathds{1}(\pi_{ij} = 1) + (\star) \cdot \mathds{1}(\pi_{ij} = 0)$; here, $\eta_{ij}$ represents noise, $\pi_{ij} \sim \text{Bernoulli}(\rho)$, and $\star$ denotes an unknown value. Given observations $Y^{\Omega}$ and $\boldsymbol{Z}$, the goal is to predict $\mathbb{E}[Y] = \boldsymbol{A} \beta^*$.
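The observation model above can be simulated directly. Below is a hypothetical numerical sketch (not from the paper): the dimensions, the rank of $\boldsymbol{A}$, and all noise scales are illustrative assumptions, and the unknown value $\star$ is encoded as $0$.

```python
import numpy as np

# Hypothetical instance of the model: low-rank latent covariates A,
# responses Y = A beta* + eps, and a sparse, noisy proxy Z with
# Z_ij = (A_ij + eta_ij) * 1(pi_ij = 1). Sizes and scales are assumptions.
rng = np.random.default_rng(0)
N, p, r, rho = 200, 100, 5, 0.7

A = rng.normal(size=(N, r)) @ rng.normal(size=(r, p)) / np.sqrt(r)  # rank-r covariates
beta_star = rng.normal(size=p)                                      # model parameter
eps = 0.1 * rng.normal(size=N)                                      # response noise
Y = A @ beta_star + eps

eta = 0.5 * rng.normal(size=(N, p))          # additive covariate noise
pi = rng.binomial(1, rho, size=(N, p))       # Bernoulli(rho) observation mask
Z = (A + eta) * pi                           # masked entries (star) encoded as 0

Omega = np.arange(N // 2)                    # "sample set" of observed responses
Y_Omega = Y[Omega]
```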
\paragraph{Matrix estimation for pre-processing.} In the classical regression setup, the covariates are assumed to be fully observed and noiseless, i.e., $\boldsymbol{Z} = \boldsymbol{A}$; if $p$ is small, then ordinary least squares (OLS) suffices to solve the problem; if, however, $p$ is large and can exceed $N$, then regularization methods (e.g., Lasso) can accurately recover (sparse) $\beta^*$. However, most modern datasets of interest are both high-dimensional and corrupted by noisy, partial observations. In the last decade or so, matrix estimation has emerged as a powerful, model agnostic method for recovering a structured matrix from its noisy and sparse observations. Therefore, it stands to reason that in the error-in-variable setup, where we observe $\boldsymbol{Z}$ instead of $\boldsymbol{A}$, we can produce a good estimate $\widehat{\bA}$ of $\boldsymbol{A}$ from $\boldsymbol{Z}$ via matrix estimation. Using $\widehat{\bA}$, we can then apply OLS to recover the underlying model parameter. In this paper, our focus is to investigate the properties of such an approach. Specifically, we consider the three-step procedure (see Algorithm \ref{alg:main_algorithm}): (1) obtain $\widehat{\bA}$ by applying matrix estimation (see Section \ref{sec:alg_ME_HSVT}) to $\boldsymbol{Z}$; (2) run OLS on $\widehat{\bA}^{\Omega}$ (restricting the rows of $\widehat{\bA}$ to $\Omega$) and $Y^{\Omega}$ to produce $\widehat{\beta}$; (3) output $\widehat{Y} = \widehat{\bA} \widehat{\beta}$ as an estimate for $\mathbb{E} Y = \boldsymbol{A} \beta^*$. Our analysis proves that our $\widehat{Y}$ achieves small in- and out-of-sample prediction errors with a nearly optimal number of samples $n$, indicating that \textit{matrix estimation can be a general-purpose data preprocessing step} to obtain estimates of the latent covariates $\boldsymbol{A}$ for the purposes of prediction.
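The three-step procedure can be sketched in a few lines of numpy. This is a minimal illustration assuming hard singular value thresholding (truncation to the top-$k$ singular directions, with $1/\rho$ rescaling) as the matrix estimation subroutine; the data, the noise levels, and the choice $k = r$ are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def hsvt(Z, rho, k):
    """Rescale by 1/rho and hard-threshold to the top-k singular directions."""
    U, s, Vt = np.linalg.svd(Z / rho, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Hypothetical data following the model: low-rank A, masked noisy Z, Y = A beta* + eps.
rng = np.random.default_rng(1)
N, p, r, rho = 200, 100, 5, 0.7
A = rng.normal(size=(N, r)) @ rng.normal(size=(r, p)) / np.sqrt(r)
beta_star = rng.normal(size=p)
Z = (A + 0.5 * rng.normal(size=(N, p))) * rng.binomial(1, rho, size=(N, p))
Y = A @ beta_star + 0.1 * rng.normal(size=N)
Omega = np.arange(N // 2)

A_hat = hsvt(Z, rho, k=r)                                           # step (1): matrix estimation
beta_hat, *_ = np.linalg.lstsq(A_hat[Omega], Y[Omega], rcond=None)  # step (2): OLS on observed rows
Y_hat = A_hat @ beta_hat                                            # step (3): predict E[Y]
```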
Further, our testing (out-of-sample) error analysis (see Appendix \ref{sec:proofs_testing_error}) demonstrates that matrix estimation with OLS effectively performs implicit regularization. This highlights another benefit of applying matrix estimation as a pre-processing step. \subsection{Contributions} \label{sec:contributions} \paragraph{Model and algorithm.} We describe a model for high-dimensional error-in-variable regression (see Section \ref{sec:background}), where we simultaneously allow for missing data (Property \ref{prop:masking_noise_structure}) and sub-gaussian or sub-exponential noise in the covariate measurements (Property \ref{prop:covariate_noise_structure}). A key contribution is in utilizing a natural three-step algorithm (see Algorithm \ref{alg:main_algorithm}) for this setting. Our proposed algorithm is model agnostic and does not require knowledge (or an estimate) of the noise covariance, as is commonly assumed in the literature (cf. \cite{loh_wainwright, cocolasso, tsybakov_2}). Despite this generality, our algorithm achieves a vanishing (in- and out-of-sample) prediction error with the number of required samples scaling comparably to the state-of-the-art results in the literature. We highlight again that our analysis holds for the weaker requirement of sub-exponential noise compared to the sub-gaussianity assumption typically made in the literature. \paragraph{Finite sample analysis of prediction error.} We provide new finite sample prediction error bounds for high-dimensional linear regression with corrupted covariates (see Theorem \ref{thm:mse_train_general}). This bound holds for any matrix estimation algorithm, and is particularly useful if its max column sum error (MCSE) (Definition \ref{def:MCSE}) is small. 
For concreteness, we instantiate the matrix estimation subroutine with hard singular value thresholding (HSVT) (Section \ref{sec:alg_ME_HSVT}), and provide finite-sample analysis for both training (Corollary \ref{cor:mse_hsvt}) and testing error (Theorem \ref{thm:mse_test_hsvt}). A key contribution is that if the underlying matrix $\boldsymbol{A}$ is (approximately) low-rank, then both errors decay to 0 as long as the number of samples $n \gg \frac{C}{\rho^{4}} r \log^5(p)$ and $p \rightarrow \infty$, where $r$ is the (approximate) rank of the true covariate matrix (Propositions \ref{prop:low_rank_finite_sample} and \ref{prop:geo_decay_finite_sample}). By Theorem \ref{thm:mse_test_hsvt} (specifically, Lemma \ref{lemma:rademacher_complexity_equality}), we show that pre-processing the observed covariates with HSVT and then performing OLS is a form of implicit regularization, giving Rademacher complexity bounds similar to those of regression with $\ell_0$-regularization (a nonconvex program). As discussed in Section \ref{ssec:compare}, the sample complexity of our {\em model agnostic} algorithm is comparable to the best known sample complexities of {\em model aware} methods, cf. \cite{loh_wainwright, cocolasso, tsybakov_2}. \paragraph{Technical results of independent interest.} We highlight two technical results. First, a spectral norm bound for random matrices whose rows are (1) independent (entry-wise independence is not necessary), and (2) the Hadamard product of sub-exponential and Bernoulli random vectors (see Properties \ref{prop:covariate_noise_structure} and \ref{prop:masking_noise_structure}). Theorem \ref{thm:spectral_norm_noise_matrix_bound} indicates that this spectral norm scales as $O(\sqrt{N} + \sqrt{p} \log^{3/2}(Np))$. Second, we prove a high probability bound on the MCSE of the estimate $\widehat{\bA}$ produced from HSVT (see Theorem \ref{thm:mcse_whp}).
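For intuition, the empirical quantity underlying the MCSE metric, the worst column's squared $\ell_2$ error over the rows in $\Omega$, can be computed directly; the following is a hypothetical numpy sketch (the inputs and the helper name `mcse` are illustrative assumptions).

```python
import numpy as np

# Empirical max column sum error between an estimate A_hat and the truth A,
# i.e., max_j sum_{i in Omega} (A_hat_ij - A_ij)^2. Inputs are assumptions.
def mcse(A_hat, A, Omega):
    diff = A_hat[Omega] - A[Omega]
    return np.max(np.sum(diff ** 2, axis=0))  # worst column's squared l2 error

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 10))
A_hat = A + 0.01 * rng.normal(size=(50, 10))
Omega = np.arange(25)

err = mcse(A_hat, A, Omega)
# Squared Frobenius-norm error on the same rows, for comparison: the MCSE
# tracks the worst column rather than the total over all columns.
fro = np.sum((A_hat[Omega] - A[Omega]) ** 2)
```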
We note that MCSE is a more stringent error metric than the Frobenius norm, the standard metric of choice in the literature (see Section \ref{sec:lit_review} for details). The study of MCSE is motivated by Theorem \ref{thm:mse_train_general}, as it provides a general upper bound for the prediction error in error-in-variable regression. \paragraph{Applications.} In Section \ref{sec:discussion_applications}, we provide important applications of error-in-variable regression and describe how they fit within our framework. In the first two applications, synthetic control and time series analysis, covariate noise arises naturally (e.g., in time series forecasting, future predictions are made using past noisy observations). Both topics are of immense importance in fields such as econometrics, signal processing, machine learning, finance, and retail. The third application, regression with privacy, is rapidly becoming a crucial topic in machine learning, where practitioners strive to make predictions with highly sensitive data. Here, it is typical to \textit{purposefully inject} Laplacian noise into the covariates so that the underlying dataset $\boldsymbol{A}$ is differentially private. This, again, neatly fits into our framework. \subsection{Related works} \label{sec:lit_review} \paragraph{Matrix estimation.} Over the past decade, matrix estimation has spurred tremendous theoretical and empirical research across numerous fields, including recommendation systems (cf. \cite{KeshavanMontanariOh10a, KeshavanMontanariOh10b, NegahbanWainwright11, ChenWainwright15, Chatterjee15, LeeLiShahSong16, CandesTao10, Recht11, DavenportPlanBergWootters14}), social network analysis (cf. \cite{AbbeSandon15a, AbbeSandon15b, AbbeSandon15c, AnandkumarGeHsuKakade13, hopkins2017efficient}), and graph learning (graphon estimation) (cf. \cite{AiroldiCostaChan13, ZhangLevinaZhu15, BorgsChayesCohnGanguly15, BorgsChayesLeeShah17}).
Traditionally, the end goal is to recover the underlying mean matrix from an incomplete and noisy sampling of its entries; the quality of the estimate is often measured through the Frobenius norm. Further, entry-wise independence and sub-gaussian noise are typically assumed. A key property of many matrix estimation methods is that they are model agnostic (i.e., the de-noising procedure does not change with the noise assumptions); this makes such methods desirable for our purposes. We build upon recent developments by advocating that matrix estimation can be a vital \textit{pre-processing} subroutine in solving high-dimensional error-in-variable regression. To theoretically analyze the effectiveness of matrix estimation as a pre-processing procedure for linear regression, we study a nonstandard error metric, the MCSE, a stronger error metric than the Frobenius norm (appropriately normalized). Further, we only require independence across rows (e.g., measurements), and allow for a broader class of noise distributions (e.g., sub-exponential). This allows our model and algorithm to connect to important modern applications such as differential privacy, where adding Laplacian noise (a sub-exponential random variable) to the data is a standard tool in preserving privacy within databases. Thus, our algorithm can serve as a useful tool for interacting with highly sensitive datasets. \paragraph{Error-in-variable regression.} There exists a rich body of work regarding high-dimensional error-in-variable regression (cf. \cite{loh_wainwright}, \cite{cocolasso}, \cite{tsybakov_1}, \cite{tsybakov_2}, \cite{tsybakov_3}, \cite{tsybakov_4}, \cite{orthogonal_1}, \cite{orthogonal_2}, \cite{weighted_l1}). Two common threads of these works include: (1) a sparsity assumption on $\beta^*$; and (2) error bounds with convergence rates for estimating $\beta^*$ under different norms, i.e., $\| \widehat{\beta} - \beta^* \|_q$, where $\norm{\cdot}_q$ denotes the $\ell_q$-norm.
Some notable works closest to our setup include \cite{loh_wainwright}, \cite{cocolasso}, and \cite{tsybakov_2}; we focus our comparison on these few papers. In \cite{loh_wainwright}, a non-convex $\ell_1$-penalization algorithm is proposed based on the plug-in principle to handle covariate measurement errors. However, the authors consider additive and multiplicative (with randomly missing data as a special instance) noise models separately and design different plug-in estimators in each setting. Under both noise models, \cite{loh_wainwright} assume that the observed covariate matrix $\boldsymbol{Z}$ is sub-gaussian, and that a bound on $\norm{\beta^*}_2$ is known (recall that $\beta^*$ is the unknown vector to be estimated). Arguably the most crucial difference is that for the additive noise setting, they additionally require knowledge of the unobserved noise covariance matrix $\Sigma_{\boldsymbol{H}} = \mathbb{E} \boldsymbol{H}^T \boldsymbol{H}$, and the estimator they design \textit{changes} based on their assumption of $\Sigma_{\boldsymbol{H}}$. \cite{cocolasso} builds upon \cite{loh_wainwright}, but proposes a convex formulation of Lasso. Although the algorithm introduced does not require knowledge of $\norm{\beta^*}_2$, similar assumptions on $\boldsymbol{Z}$ and $\boldsymbol{H}$ (e.g., sub-gaussianity and access to $\Sigma_{\boldsymbol{H}}$) are made. This renders their algorithm not model agnostic. In fact, many works (e.g., \cite{tsybakov_1}, \cite{tsybakov_2}, \cite{tsybakov_3}) require either that $\Sigma_{\boldsymbol{H}}$ be known or that the structure of $\boldsymbol{H}$ admit a data-driven estimator for its covariance matrix. This is because these algorithms rely on correcting the bias of the matrix $\boldsymbol{Z}^T \boldsymbol{Z}$, which we do not need to compute. A key difference with the above works is their aim to estimate $\beta^*$ exactly, while we aim to achieve low training/testing prediction error.
Learning $\beta^*$ is important, but proving low training/testing error is vital in guaranteeing good predictions when out-of-sample measurements $\boldsymbol{Z}_{i, \cdot}, \ i \notin \Omega$, are sparse and noisy. To the best of our knowledge, the above works do not provide a formal method to de-noise $\boldsymbol{Z}_{i, \cdot}$. In summary, from a model standpoint, our work analyzes a more general setting: we allow $\boldsymbol{Z}$ to be simultaneously corrupted by noise (including sub-exponential noise) and missing data. Algorithmically, we propose an estimator that (1) is model agnostic, i.e., does not change depending on the underlying model ($\beta^*$ and $\Sigma_{\boldsymbol{H}}$); and (2) provably generalizes beyond the training set $\Omega$ in predicting the expected response values via a de-noising process of the observed covariates. \paragraph{Principal Component Regression (PCR).} Recall that our approach to handling covariates with measurement errors is a two-step procedure that first utilizes a general matrix estimation method to de-noise and impute the observed covariates, and then performs linear regression to make predictions. Our analysis focuses on the case when the matrix estimation subroutine is HSVT (Section \ref{sec:alg_ME_HSVT}). In this case, our algorithm is similar to that of Principal Component Regression (PCR) (cf. \cite{pcr_tibshirani}, \cite{pcr_jolliffe}). Thus, a contribution of our work is in motivating that PCR-like methods are an effective tool for solving error-in-variable regression in high dimensions. In particular, we provide finite sample analysis (Propositions \ref{prop:low_rank_finite_sample} and \ref{prop:geo_decay_finite_sample}) and (in- and out-of-sample) prediction error bounds (Corollary \ref{cor:mse_hsvt} and Theorem \ref{thm:mse_test_hsvt}) that demonstrate the efficacy of PCR-like methods.
As stated in Section \ref{sec:contributions}, it is worth recalling that our analysis indicates that PCR serves as a form of implicit regularization (Proposition \ref{prop:low_rank_sparsity}), i.e., taking a low-rank approximation of the observed covariates $\boldsymbol{Z}$ and then performing OLS gives Rademacher complexity bounds similar to those of regression with $\ell_0$-regularization. \section{Applications}\label{sec:discussion_applications} \subsection{Synthetic Control} \paragraph{Problem formulation.} Synthetic control is a popular method for comparative case studies and policy evaluation in econometrics, used to predict a counterfactual for a unit of interest after its exposure to a treatment. To do so, a synthetic treatment unit is constructed using a combination of so-called ``donor'' units. Proposed by \cite{abadie1}, it has been analyzed in \cite{amjad}, \cite{abadie2}, \cite{imbens}, \cite{hsiao2018}, \cite{athey}, \cite{athey1}. A canonical example is in \cite{abadie2}, where the unit of interest is California, the donor pool is all other states in the U.S., and the treatment is Proposition 99; the goal is to isolate the effect of Proposition 99 on cigarette consumption in California. Formally, $\boldsymbol{A}$ denotes the true donor matrix, where $N$ is the number of observed time periods and $p$ the number of donors. We observe sparse, noisy observations $\boldsymbol{Z}$. $\Omega = [n]$ represents pre-treatment indices (time periods) and $Y^{\Omega}$ the pre-treatment responses for the exposed unit. Its counterfactual is denoted as $Y_i$ for $n < i \le N$. To estimate $\mathbb{E}[Y]$, \cite{amjad} performs linear regression to learn $\widehat{\beta}$, which represents the ``correct'' combination of donors to form a synthetic treatment unit. Thus, $\widehat{\beta}$ helps predict the response values for the exposed unit if the treatment had never been administered, i.e., $\mathbb{E}[Y_i]$ for $i > n$.
The goal is to have low prediction error between $\widehat{Y}_i$ and $\mathbb{E} [Y_i]$ for $i \in [N]$. \paragraph{How it fits our framework.} In \cite{amjad}, the authors propose to de-noise the observed donor matrix $\boldsymbol{Z}$ to obtain $\widehat{\bA}$, and then learn $\widehat{\beta}$ from $Y^{\Omega}$ and $\widehat{\bA}^{\Omega}$, the pre-treatment responses for the exposed and donor units, respectively. Moreover, they assume that $\boldsymbol{A}$ is low-rank. It is easily seen that this setup is a special instance of our framework; more specifically, our training MSE bound (Corollary \ref{cor:mse_hsvt}) is tighter than Corollary 4 of \cite{amjad}, and we provide a testing MSE bound (Theorem \ref{thm:mse_test_hsvt}), which is missing in their work. We highlight that existing methods in the error-in-variable regression literature are able to construct a synthetic California (expressed via $\beta^*$), but are unable to predict the counterfactual observations $\mathbb{E}[Y_i]$ for $i > n$. \subsection{Time Series Analysis} \paragraph{Problem formulation.} We follow the formulation in \cite{timeseries}. Specifically, consider a discrete-time setting with $t \in \mathbb{Z}$ representing the time index and $f: \mathbb{Z} \to \mathbb{R}$ representing the latent discrete-time time series of interest. For each $t \in [T]$, a random variable $X(t)$ with $\mathbb{E}[X(t)] = f(t)$ is observed with probability $\rho \in (0, 1]$. Under this setting, the two objectives are: (1) interpolation, i.e., estimate $f(t)$ for all $t \in [T]$; and (2) extrapolation, i.e., forecast $f(t)$ for $t > T$. The underlying time series is denoted as $f = [f(t)]_{t \in [T]}$. Similarly, the imputation and forecasting estimates are denoted as $\widehat{f}_I = [\widehat{f}_I(t)]_{t \in [T]}$ and $\widehat{f}_F = [\widehat{f}_F(t)]_{t \in [T]}$, respectively\footnote{Note the forecasting estimator can only rely on past values to make a prediction.}.
The quality of the estimates is evaluated by $\| f - \widehat{f}_I \|^2_2$ and $\|f - \widehat{f}_F \|^2_2$, respectively. Please refer to \cite{timeseries} for full details and notation. \paragraph{How it fits our framework.} In \cite{timeseries}, the sparse, noisy observations $X(t)$ are first transformed into a matrix $\boldsymbol{Z} \in \mathbb{R}^{n \times p}$, where $Z_{ij} = X((i -1) + j) \cdot \pi_{ij}$ and $\pi_{ij} \sim \text{Bernoulli}(\rho)$. Analogously, we denote $\boldsymbol{A} \in \mathbb{R}^{n \times p}$ with entries $A_{ij} = f((i -1) + j)$. They then perform matrix estimation to estimate the underlying time series, i.e., $\widehat{\bA} = \text{ME}(\boldsymbol{Z})$. We see that imputation fits our framework, as good imputation performance is equivalent to a small Frobenius norm difference between $\boldsymbol{A}$ and $\widehat{\bA}$. This is achieved by Theorem \ref{thm:mcse_whp}, as MCSE is a stronger bound than the Frobenius norm error (after appropriate normalization). Forecasting also fits into our framework since it equates to a small prediction error in our setup; here, $Y_i$ refers to $X(i)$ for $i \in [n]$, and $\boldsymbol{Z}_{i, \cdot}$ refers to the preceding values of the time series $[X(i-1) \pi_{i - 1} , \dots, X(i-p) \pi_{i - p}]$\footnote{We spare inconsequential details on how $Y_i$ and $\boldsymbol{Z}_{i, \cdot}$ are constructed to ensure the setups of both papers are the same.}. Thus, good training prediction error is equivalent to the squared difference between $\widehat{Y}_i$ (i.e., $\widehat{f}_F(i)$) and $f(i)$ being small for $i \in [n]$. This is guaranteed by Corollary \ref{cor:mse_hsvt} if the underlying matrix $\boldsymbol{A}$ is low-rank or approximately low-rank. This condition is satisfied for the three time series models listed in (\cite{timeseries}, Section 5): (i) linear recurrent formulae; (ii) time series with compact support; and (iii) sums of sublinear trends (and additive mixtures thereof).
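The matrix construction $Z_{ij} = X((i-1)+j) \cdot \pi_{ij}$ is easy to reproduce numerically. Below is a hypothetical sketch: the latent series $f$, the horizon $T$, the width $p$, and the observation rate $\rho$ are all illustrative assumptions.

```python
import numpy as np

# Fold a noisy, partially observed series X(t) into Z_ij = X((i-1)+j) * pi_ij,
# alongside the latent counterpart A_ij = f((i-1)+j).
rng = np.random.default_rng(2)
T, p, rho = 120, 20, 0.8
n = T - p  # rows chosen so every index (i-1)+j lies in [T]

f = np.sin(2 * np.pi * np.arange(1, T + 1) / 10)  # latent time series f(t)
X = f + 0.3 * rng.normal(size=T)                  # noisy observations, E[X(t)] = f(t)
pi = rng.binomial(1, rho, size=(n, p))            # Bernoulli(rho) mask

# Index arithmetic in 0-based form: entry (i, j) reads position (i-1)+j of the series.
idx = np.arange(n)[:, None] + np.arange(p)[None, :]
A = f[idx]        # A_ij = f((i-1)+j)
Z = X[idx] * pi   # Z_ij = X((i-1)+j) * pi_ij
```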
By Theorem \ref{thm:mse_test_hsvt}, we generalize their results by proving that the prediction error is small for future unseen data, i.e., $f(t)$ for $t > T$. \subsection{Regression with Privacy} \paragraph{Problem formulation.} With the advent of large datasets, analysts must maximize the accuracy of their queries while simultaneously protecting sensitive information. An important notion of privacy is that of differential privacy; this requires that the outcome of a query on a database cannot greatly change due to the presence or absence of any individual data record (cf. \cite{dwork_1}). This guarantees that little can be learned about any particular record. Suppose $\boldsymbol{A}$ denotes the true, fixed database of $N$ sensitive individual records. We consider the setting where an analyst is allowed to ask two types of queries of the data: (1) querying for individual data records, i.e., $\boldsymbol{A}_{i, \cdot}$ for $i \in [N]$; and (2) querying for a linear combination of an individual's covariates, i.e., $\boldsymbol{A}_{i, \cdot} \beta^*$. A typical example would be where $\boldsymbol{A}_{i, \cdot}$ is the genomic information for patient $i$ and $\boldsymbol{A}_{i, \cdot} \beta^*$ is the outcome of a clinical study. The aim in such a setup is to be able to produce both in- and out-of-sample predictions while preserving each patient's privacy. \paragraph{How it fits our framework.} A typical way to achieve differential privacy is to add Laplacian noise to queries. This naturally fits our framework, as we allow for sub-exponential noise in $\boldsymbol{H}$ (which includes Laplacian noise) and $\epsilon$ (see Properties \ref{prop:covariate_noise_structure} and \ref{prop:observation_noise_structure}). Our setup even allows for a significant fraction of the query response to be masked (see Property \ref{prop:masking_noise_structure}).
Specifically, whenever an analyst queries for $\boldsymbol{A}_{i, \cdot}$, the answer is returned as $\boldsymbol{Z}_{i, \cdot}$ where $\boldsymbol{Z}_{i, \cdot} = \boldsymbol{A}_{i, \cdot} + \eta_{i, \cdot}$ and $\eta_{i, \cdot}$ is independent Laplacian noise. Similarly, when an analyst queries for the response variable $\boldsymbol{A}_{i, \cdot} \beta^*$, he or she observes $Y_i = \boldsymbol{A}_{i, \cdot} \beta^* + \epsilon_i$, where $\epsilon_i$ is again independent Laplacian noise. This guarantees that every individual's data remains differentially private. Nevertheless, by the results in Section \ref{sec:results}, the analyst can still accurately learn valuable global statistics (e.g., the average over $\boldsymbol{A} \beta^*$) about the data. \section{Proof of Theorem \ref{thm:mse_train_general} }\label{sec:appendix_noisy_regression_via_MCSE} \subsection{Background} Before we present Theorem \ref{thm:mse_train_general} (and its proof), we first introduce an alternate view of the max column sum error (MCSE) metric. Recall that the $(a,b)$-mixed norm of a matrix $\boldsymbol{B} \in \mathbb{R}^{N \times p}$ is defined as \begin{align} \label{eq:mixed_norm} \| \boldsymbol{B} \|_{a,b} = \left( \sum_{j=1}^p \|\boldsymbol{B}_{\cdot, j} \|_{a}^b \right)^{1/b} = \left( \sum_{j=1}^p \left( \sum_{i =1}^N \boldsymbol{B}_{ij}^a \right)^{b/a} \right)^{1/b}. \end{align} For our analysis, we are interested in the $(2, \infty)$-mixed norm, which corresponds to the maximum $\ell_2$ column norm: \begin{align} \label{eq:column_norm} \| \boldsymbol{B} \|_{2,\infty} = \max_{j \in [p]} \big\| \boldsymbol{B}_{\cdot, j} \big\|_2 = \max_{j \in [p]} \left( \sum_{i=1}^N \boldsymbol{B}_{ij}^2 \right)^{1/2}. \end{align} \begin{lemma} \label{lemma:holder_general} Let $\boldsymbol{B}$ be a real-valued $n \times p$ matrix and $x$ a real-valued $p$ dimensional vector. Let $q_1, q_2 \in [1, \infty]$ with $1/q_1 + 1/q_2 = 1$. 
Then, \begin{align} \norm{\boldsymbol{B} x}_2 &\le \norm{x}_{q_1} \, \norm{\boldsymbol{B}}_{2, q_2}. \end{align} \end{lemma} \begin{proof} Using H\"older's Inequality, we have \begin{align*} \norm{\boldsymbol{B} x}_2^2 &= \sum_{i=1}^n \langle \boldsymbol{B}_{i, \cdot}, x \rangle^2 \le\norm{x}_{q_1}^2 \sum_{i=1}^n \norm{\boldsymbol{B}_{i, \cdot}}_{q_2}^2 = \norm{x}_{q_1}^2 \cdot \norm{\boldsymbol{B}}_{2, q_2}^2. \end{align*} \end{proof} \noindent Observe that we can rewrite the MCSE (see Definition \ref{def:MCSE}) for any estimator $\widehat{\bA}$ of $\boldsymbol{A}$ and any $\Omega \subset [N]$ as \begin{equation}\label{eqn:mcse_2infty} \text{MCSE}_{\Omega}(\widehat{\bA}) = \mathbb{E} \left[ \max_{j \in [p]} \sum_{i \in \Omega} (\widehat{A}_{ij} - A_{ij})^2 \right] = \mathbb{E} \left[ \max_{j \in [p]} \norm{ \widehat{\bA}^{\Omega}_{\cdot, j} - \boldsymbol{A}^{\Omega}_{\cdot, j}}_2^2 \right] = \mathbb{E} \Big\| \widehat{\bA}^{\Omega} - \boldsymbol{A}^{\Omega} \Big\|_{2, \infty}^2. \end{equation} Recall that we let $\boldsymbol{A}^{\Omega}, \widehat{\bA}^{\Omega} \in \mathbb{R}^{|\Omega| \times p}$ denote the restrictions of $\boldsymbol{A}, \widehat{\bA} \in \mathbb{R}^{N \times p}$, respectively, to the rows indexed by $\Omega$. \subsection{Proof of Theorem \ref{thm:mse_train_general}} \begin{proof} From our model setup, $Y^{\Omega} = \boldsymbol{A}^{\Omega}\beta^* + \epsilon$, we have \begin{align} \label{eq:linear_1} \norm{\widehat{\bA}^{\Omega} \widehat{\beta} - Y^{\Omega} }_2^2 &= \norm{\widehat{\bA}^{\Omega} \widehat{\beta} - \boldsymbol{A}^{\Omega} \beta^*}_2^2 + \norm{\epsilon}_2^2 - 2 \epsilon^T (\widehat{\bA}^{\Omega} \widehat{\beta} - \boldsymbol{A}^{\Omega} \beta^*). 
\end{align} On the other hand, the optimality of $\widehat{\beta}$ (recall that $\widehat{\beta} \in \arg \min_{\beta} \| \widehat{\bA}^{\Omega} \beta - Y^{\Omega} \|_2^2$) yields \begin{align} \label{eq:linear_2} \norm{\widehat{\bA}^{\Omega} \widehat{\beta} - Y^{\Omega}}_2^2 &\le \norm{\widehat{\bA}^{\Omega} \beta^* - Y^{\Omega}}_2^2 \nonumber \\ &= \norm{(\widehat{\bA}^{\Omega} - \boldsymbol{A}^{\Omega}) \beta^*}_2^2 + \norm{\epsilon}_2^2 - 2 \epsilon^T (\widehat{\bA}^{\Omega} - \boldsymbol{A}^{\Omega}) \beta^*. \end{align} % Combining \eqref{eq:linear_1} and \eqref{eq:linear_2} and taking expectations, we have \begin{align} \label{eq:linear_3} \mathbb{E} \norm{\widehat{\bA}^{\Omega} \widehat{\beta} - \boldsymbol{A}^{\Omega} \beta^*}_2^2 &\le \mathbb{E} \norm{(\widehat{\bA}^{\Omega} - \boldsymbol{A}^{\Omega}) \beta^*}_2^2 + 2 \mathbb{E}[\epsilon^T \widehat{\bA}^{\Omega} (\widehat{\beta} - \beta^*)]. \end{align} Let us bound the final term on the right hand side of \eqref{eq:linear_3}. Under our independence assumptions ($\epsilon$ is independent of $\boldsymbol{H}$), observe that \begin{align} \mathbb{E}[\epsilon^T \widehat{\bA}^{\Omega}] \beta^* &= \mathbb{E}[\epsilon^T] \mathbb{E}[\widehat{\bA}^{\Omega}] \beta^* = 0. \end{align} % Recall that $\widehat{\beta} = \big(\widehat{\bA}^{\Omega} \big)^{\dagger} Y = \big(\widehat{\bA}^{\Omega}\big)^{\dagger}\boldsymbol{A}^{\Omega} \beta^* + \big(\widehat{\bA}^{\Omega} \big)^{\dagger} \epsilon$. 
Using the cyclic and linearity properties of the trace operator (coupled with similar independence arguments), we further have \begin{align} \label{eq:linear_trace} \mathbb{E} [\epsilon^T \widehat{\bA}^{\Omega} \widehat{\beta}] &= \mathbb{E}[\epsilon^T \widehat{\bA}^{\Omega} \big(\widehat{\bA}^{\Omega} \big)^{\dagger}] \boldsymbol{A}^{\Omega} \beta^* + \mathbb{E}[\epsilon^T \widehat{\bA}^{\Omega} \big(\widehat{\bA}^{\Omega} \big)^{\dagger} \epsilon] \nonumber \\ &= \mathbb{E}[\epsilon]^T \mathbb{E} [ \widehat{\bA}^{\Omega} \big(\widehat{\bA}^{\Omega} \big)^{\dagger}] \boldsymbol{A}^{\Omega} \beta^* + \mathbb{E} \Big[ \text{tr}\Big( \epsilon^T \widehat{\bA}^{\Omega} \big(\widehat{\bA}^{\Omega} \big)^{\dagger} \epsilon \Big) \Big] \nonumber \\ &= \mathbb{E} \Big[ \text{tr}\Big( \widehat{\bA}^{\Omega} \big(\widehat{\bA}^{\Omega}\big)^{\dagger} \epsilon \epsilon^T \Big) \Big] = \text{tr}\Big( \mathbb{E} [ \widehat{\bA}^{\Omega} \big(\widehat{\bA}^{\Omega}\big)^{\dagger} ] \cdot \mathbb{E} [ \epsilon \epsilon^T ] \Big) \nonumber \\ &\le \sigma^2 \mathbb{E} \Big[ \text{tr} \Big( \widehat{\bA}^{\Omega} \big(\widehat{\bA}^{\Omega} \big)^{\dagger} \Big) \Big] = \sigma^2 \mathbb{E} [\text{rank}(\widehat{\bA}^{\Omega})], \end{align} where the inequality follows from Property \ref{prop:observation_noise_structure}. Returning to the first term on the right-hand side of \eqref{eq:linear_3}, we invoke Lemma \ref{lemma:holder_general} with $q_1 = 1$ and $q_2 = \infty$ to obtain \begin{align*} \norm{(\widehat{\bA}^{\Omega} - \boldsymbol{A}^{\Omega}) \beta^*}_2^2 &\le \norm{\beta^*}_1^2 \cdot \max_{j \in [p]} \norm{(\boldsymbol{A}^{\Omega} - \widehat{\bA}^{\Omega})_{\cdot, j}}_2^2. \end{align*} Recalling the definition of $\text{MCSE}_{\Omega}(\widehat{\bA})$, we combine our above results (and normalize) to obtain our desired result. 
\end{proof} \section{Proof of Theorem \ref{thm:spectral_norm_noise_matrix_bound}} \label{sec:appendix_spectral_norm_noise_matrix_bound} As we sketch in Section \ref{sec:proof_sketch_spectral_norm_noise_matrix_bound}, the proof of Theorem \ref{thm:spectral_norm_noise_matrix_bound} follows by plugging the results of Lemmas \ref{lemma:masked_noise_operator_norm} and \ref{lemma:masked_noise_row_norm} into Proposition \ref{prop:spectral_upper_bound} for $\boldsymbol{W} := \boldsymbol{Z} - \rho \boldsymbol{A}$ and applying Properties \ref{prop:bounded_covariates} and \ref{prop:covariate_noise_variance}. In this section, we prove Proposition \ref{prop:spectral_upper_bound}, Lemma \ref{lemma:masked_noise_operator_norm} and Lemma \ref{lemma:masked_noise_row_norm}. \subsection{Proof of Proposition \ref{prop:spectral_upper_bound}} \begin{proof} We prove the proposition in four steps. \paragraph{Step 1: picking the threshold value.} Let $e_1, \ldots, e_p \in \mathbb{R}^p$ denote the canonical basis\footnote{Column vector representation.} of $\mathbb{R}^p$. Observe that $\norm{ \boldsymbol{W}_{i, \cdot} }_2^2 = \boldsymbol{W}_{i, \cdot} \boldsymbol{W}_{i, \cdot}^T = \sum_{j=1}^p \left( \boldsymbol{W}_{i, \cdot} e_j \right)^2$.\footnote{Recall that $\boldsymbol{W}_{i, \cdot}$ is a row vector and hence $\boldsymbol{W}_{i, \cdot} \boldsymbol{W}_{i, \cdot}^T$ is a scalar.} 
Therefore, for any $t \ge 0$, \begin{align*} \mathbb{P} \Big\{ \norm{\boldsymbol{W}_{i, \cdot}}_2^2 > t \Big\} &= \mathbb{P}\bigg \{ \sum_{j=1}^p \left( \boldsymbol{W}_{i, \cdot} e_j \right)^2 > t\bigg\} \\ &\stackrel{(a)}\le \sum_{j=1}^p \mathbb{P}\bigg \{ \left( \boldsymbol{W}_{i, \cdot} e_j \right)^2 > \frac{t}{p}\bigg\} \\ &\le \sum_{j=1}^p \mathbb{P}\bigg\{ \left| \boldsymbol{W}_{i, \cdot} e_j \right| \ > \sqrt{\frac{t}{p}}\bigg\} \\ &\stackrel{(b)}\le 2p\exp( - C({\alpha}) \left( \frac{t}{p \norm{ \boldsymbol{W}_{i, \cdot} }_{\psi_\alpha}^2 } \right)^{\frac{\alpha}{2}} ), \end{align*} where (a) uses the union bound and (b) follows from the definition of $\psi_{\alpha}$-random vector ($C({\alpha})$ is an absolute constant which depends only on $\alpha \geq 1$). Choosing $t = C^{\frac{2}{\alpha}} C({\alpha})^{-\frac{2}{\alpha}} p \norm{ \boldsymbol{W}_{i, \cdot} }_{\psi_\alpha}^2 \big(\log(2p) \big)^{\frac{2}{\alpha}}$ for some $C > 1$ gives \[ \mathbb{P}\Big \{ \norm{ \boldsymbol{W}_{i, \cdot} }_{2}^2 > C^{\frac{2}{\alpha}} C({\alpha})^{-\frac{2}{\alpha}} p \norm{ \boldsymbol{W}_{i, \cdot} }_{\psi_\alpha}^2 \big(\log(2p) \big)^{\frac{2}{\alpha}} \Big \} \le \Big( \frac{1}{2p} \Big)^{C - 1}. \] Applying the union bound, we obtain \[ \mathbb{P}\bigg\{ \max_{i \in [N]} \norm{ \boldsymbol{W}_{i, \cdot} }_{2}^2 > C^{\frac{2}{\alpha}} C({\alpha})^{-\frac{2}{\alpha}} p \max_{i \in [N]} \norm{ \boldsymbol{W}_{i, \cdot} }_{\psi_\alpha}^2 \big(\log(2p) \big)^{\frac{2}{\alpha}} \bigg\} \le N \Big( \frac{1}{2p} \Big)^{C - 1}. \] For $\delta_1 > 0$, we define $C(\delta_1) \triangleq 1 + \big(2 + \delta_1 \big) \log_{2p}(Np)$ and let $C = C(\delta_1)$. Also, we define \[ t_0(\delta_1) \triangleq C(\delta_1)^{\frac{2}{\alpha}} C({\alpha})^{-\frac{2}{\alpha}} p \max_{i \in [N]} \norm{ \boldsymbol{W}_{i, \cdot} }_{\psi_\alpha}^2 \big(\log(2p) \big)^{\frac{2}{\alpha}}. 
\] We have \begin{equation}\label{eqn:step1} \mathbb{P}\Big \{ \max_{i \in [N]} \norm{ \boldsymbol{W}_{i, \cdot} }_{2}^2 > t_0(\delta_1)\Big \} \le N \Big( \frac{1}{2p} \Big)^{\big(2 + \delta_1 \big) \log_{2p}(Np)} = \frac{1}{N^{1 + \delta_1} p^{2 + \delta_1}}. \end{equation} \medskip \paragraph{Step 2: decomposing $\boldsymbol{W}$ by truncation.} Next, given $\delta_1 > 0$, we decompose the random matrix $\boldsymbol{W}$ as follows: \[ \boldsymbol{W} = \boldsymbol{W}^{\circ}(\delta_1) + \boldsymbol{W}^{\times} (\delta_1) \] where for each $i \in [N]$, \[ \boldsymbol{W}^{\circ}(\delta_1)_{i, \cdot} = \boldsymbol{W}_{i, \cdot} \Ind{ \norm{\boldsymbol{W}_{i, \cdot} }_2^2 \leq t_0(\delta_1) } \quad\text{and}\quad \boldsymbol{W}^{\times}(\delta_1)_{i, \cdot} = \boldsymbol{W}_{i, \cdot} \Ind{ \norm{\boldsymbol{W}_{i, \cdot} }_2^2 > t_0(\delta_1) }. \] Then it follows that \begin{equation}\label{eqn:step2} \norm{\boldsymbol{W}} \leq \norm{\boldsymbol{W}^{\circ}(\delta_1) } + \norm{ \boldsymbol{W}^{\times}(\delta_1) } \leq \norm{\boldsymbol{W}^{\circ}(\delta_1) } + \norm{ \boldsymbol{W}^{\times}(\delta_1) }_F. \end{equation} \medskip \paragraph{Step 3: bounding $\norm{\boldsymbol{W}^{\circ}(\delta_1) }$ and $\norm{ \boldsymbol{W}^{\times}(\delta_1) }_F$.} We define two events for conditioning: \begin{align} E_1(\delta_1) &:= \left\{ \norm{\boldsymbol{W}^{\circ} (\delta_1) } \leq \norm{ \mathbb{E} \boldsymbol{W}^T \boldsymbol{W} }^{1/2} + \sqrt{ \frac{1 + \delta_1}{c} t_0(\delta_1) \log(Np) } \right\}, \label{eqn:step3.E1}\\ E_2(\delta_1) &:= \left\{ \norm{ \boldsymbol{W}^{\times} (\delta_1) }_F = 0 \right\}. \label{eqn:step3.E2} \end{align} First, given $\delta_1 > 0$, we let $\Sigma^{\circ}(\delta_1) = \mathbb{E} \boldsymbol{W}^{\circ}(\delta_1)^T \boldsymbol{W}^{\circ}(\delta_1)$. By definition of $\boldsymbol{W}^{\circ}(\delta_1)$, we have $\norm{\boldsymbol{W}^{\circ}(\delta_1)_{i, \cdot}}_2 \leq \sqrt{t_0(\delta_1)}$ for all $i \in [N]$. 
Then it follows that for every $s \geq 0$, \[ \norm{\boldsymbol{W}^{\circ} (\delta_1) } \leq \norm{ \Sigma^{\circ}(\delta_1) }^{1/2} + s \sqrt{t_0(\delta_1)} \] with probability at least $1 - p \exp(-c s^2 )$ (see Theorem 5.44 of \cite{vershynin2010introduction}, Eqs. (5.32) and (5.33) therein, with the common second moment $\boldsymbol{\Sigma} = \mathbb{E} \boldsymbol{W}_{i, \cdot}^T \boldsymbol{W}_{i, \cdot}$ replaced by the average second moment over all rows, $\boldsymbol{\Sigma} = \frac{1}{N} \sum_{i=1}^N \mathbb{E} \boldsymbol{W}_{i, \cdot}^T \boldsymbol{W}_{i, \cdot}$). Note that $\norm{ \Sigma^{\circ}(\delta_1) } = \norm{\mathbb{E} \boldsymbol{W}^{\circ}(\delta_1)^T \boldsymbol{W}^{\circ}(\delta_1)} \leq \norm{\mathbb{E} \boldsymbol{W}^T \boldsymbol{W}}$. Now we define $\tilde{E}_1(s; \delta_1)$ parameterized by $s > 0$ as \begin{equation} \tilde{E}_1(s; \delta_1) := \left\{ \norm{\boldsymbol{W}^{\circ} (\delta_1) } > \norm{ \mathbb{E} \boldsymbol{W}^T \boldsymbol{W} }^{1/2} + s \sqrt{t_0(\delta_1)} \right\}. \end{equation} If we pick $s = \left( \frac{1 + \delta_1}{c} \log(Np) \right)^{1/2}$, then $E_1(\delta_1) = \tilde{E}_1(s; \delta_1)$ and \[ \Prob{ E_1(\delta_1)^c } \leq p \exp(-c s^2 ) = p \exp \left( -(1 + \delta_1) \log(Np) \right) = \frac{1}{N^{1+\delta_1} p^{\delta_1}}. \] Next, we observe that $\norm{ \boldsymbol{W}^{\times} (\delta_1) }_F = 0$ if and only if $\boldsymbol{W}^{\times} (\delta_1) = 0$. If $\boldsymbol{W}^{\times} (\delta_1) \neq 0$, then $\max_{i \in [N]} \norm{ \boldsymbol{W}_{i, \cdot} }_{2}^2 > t_0(\delta_1)$. Therefore, \[ \Prob{E_2(\delta_1)^c} \leq \frac{1}{N^{1+\delta_1} p^{2 + \delta_1}} \] by the analysis in Step 1; see \eqref{eqn:step1}. 
\medskip \paragraph{Step 4: concluding the proof.} For any given $\delta_1 > 0$, \[ \Prob{ \norm{\boldsymbol{W}} > \norm{ \mathbb{E} \boldsymbol{W}^T \boldsymbol{W} }^{1/2} + \sqrt{ \frac{1 + \delta_1}{c} t_0(\delta_1) \log(Np) } ~ \bigg|~ E_1(\delta_1) \cap E_2(\delta_1)} = 0 \] by \eqref{eqn:step2}, \eqref{eqn:step3.E1}, and \eqref{eqn:step3.E2}. By the law of total probability and the union bound, \begin{align*} &\Prob{ \norm{\boldsymbol{W}} > \norm{ \mathbb{E} \boldsymbol{W}^T \boldsymbol{W} }^{1/2} + \sqrt{ \frac{1 + \delta_1}{c} t_0(\delta_1) \log(Np) } }\\ &\qquad\leq \Prob{ \norm{\boldsymbol{W}} > \norm{ \mathbb{E} \boldsymbol{W}^T \boldsymbol{W} }^{1/2} + \sqrt{ \frac{1 + \delta_1}{c} t_0(\delta_1) \log(Np) } ~ \bigg|~ E_1(\delta_1) \cap E_2(\delta_1)}\\ &\qquad\quad+ \Prob{E_1(\delta_1)^c} + \Prob{E_2(\delta_1)^c}\\ &\qquad\leq \frac{1}{N^{1+\delta_1} p^{\delta_1}} + \frac{1}{N^{1+\delta_1} p^{2 + \delta_1} }\\ &\qquad\leq \frac{2}{N^{1+\delta_1} p^{\delta_1} }. \end{align*} This completes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:masked_noise_operator_norm}} \begin{proof} When Property \ref{prop:masking_noise_structure} holds, \[ \mathbb{E} (\boldsymbol{Z} - \rho \boldsymbol{A})^T(\boldsymbol{Z} - \rho \boldsymbol{A}) = \rho(1-\rho) \text{diag}( \boldsymbol{A}^T \boldsymbol{A} ) + \rho^2 \mathbb{E} (\boldsymbol{X} - \boldsymbol{A})^T (\boldsymbol{X} - \boldsymbol{A}) \] by \cite[Lemma A.2]{shah2018learning}. 
Applying the triangle inequality, we have \begin{align*} \norm{\mathbb{E} (\boldsymbol{Z} - \rho \boldsymbol{A})^T(\boldsymbol{Z} - \rho \boldsymbol{A})} &\le \rho(1-\rho) \norm{ \text{diag}( \boldsymbol{A}^T \boldsymbol{A} ) } + \rho^2 \norm{ \mathbb{E} (\boldsymbol{X} - \boldsymbol{A})^T (\boldsymbol{X} - \boldsymbol{A}) } \\ &\le \rho(1-\rho) \max_{j \in [p] } \norm{ \boldsymbol{A}_{\cdot, j} }_2^2 + \rho^2 \norm{ \mathbb{E} \boldsymbol{H}^T \boldsymbol{H} }. \end{align*} \end{proof} \subsection{Proof of Lemma \ref{lemma:masked_noise_row_norm}} \subsubsection{Auxiliary Lemmas} \begin{lemma}\label{lem:technical_Ka} Suppose that $X \in \mathbb{R}^n$ and $P \in \{0, 1\}^n$ are random vectors. Then for any $\alpha \geq 1$, \[ \norm{ X \circ P }_{\psi_{\alpha}} \leq \norm{ X }_{\psi_{\alpha}}. \] \end{lemma} \begin{proof} Given a deterministic binary vector $P_0 \in \{0, 1\}^n$, let $I_{P_0} = \{ i \in [n]: P_{0,i} = 1 \}$. Observe that \[ X \circ P_0 = \sum_{i \in I_{P_0}} e_i e_i^T X. \] Here, $\circ$ denotes the Hadamard product (entrywise product). By definition of the $\psi_{\alpha}$-norm, \begin{align*} \norm{X}_{\psi_{\alpha}} &= \sup_{u \in \mathbb{S}^{n-1}} \norm{ u^T X } _{\psi_{\alpha}} = \sup_{u \in \mathbb{S}^{n-1}}\inf\left\{ t > 0: \mathbb{E}_X \Big[ \exp\big( | u^T X|^{\alpha} / t^{\alpha} \big) \Big] \leq 2 \right\}. \end{align*} Let $u_0 \in \mathbb{S}^{n-1}$ denote the supremum-achieving unit vector (such $u_0$ exists because $\inf\{\cdots\}$ is continuous with respect to $u$ and $\mathbb{S}^{n-1}$ is compact). 
Then, \begin{align*} \norm{ X \circ P }_{\psi_{\alpha}} &= \sup_{u \in \mathbb{S}^{n-1}} \norm{ u^T X \circ P } _{\psi_{\alpha}}\\ &= \sup_{u \in \mathbb{S}^{n-1}} \inf\left\{ t > 0: \mathbb{E}_{X,P} \Big[ \exp \left( \big| u^T X \circ P \big|^{\alpha} / t^{\alpha} \right) \Big] \leq 2 \right\}\\ &= \sup_{u \in \mathbb{S}^{n-1}} \inf\left\{ t > 0: \mathbb{E}_{P} \Big[ \mathbb{E}_{X} \Big[ \exp \left( \big| u^T X \circ P \big|^{\alpha} / t^{\alpha} \right) ~\Big|~ P \Big] \Big] \leq 2 \right\}\\ &= \sup_{u \in \mathbb{S}^{n-1}} \inf\left\{ t > 0: \mathbb{E}_{P} \bigg[ \mathbb{E}_{X} \bigg[ \exp \bigg( \Big| u^T \sum_{i \in I_P} e_i e_i^TX \Big|^{\alpha} / t^{\alpha} \bigg) ~\bigg|~ P \bigg] \bigg] \leq 2 \right\}\\ &= \sup_{u \in \mathbb{S}^{n-1}} \inf\left\{ t > 0: \mathbb{E}_{P} \bigg[ \mathbb{E}_{X} \bigg[ \exp \bigg( \bigg| \Big( \sum_{i \in I_P} e_i e_i^T u \Big)^T X \bigg|^{\alpha} / t^{\alpha} \bigg) ~\bigg|~ P \bigg] \bigg] \leq 2 \right\}. \end{align*} For any $u \in \mathbb{S}^{n-1}$ and $P_0 \in \{0, 1 \}^n$, observe that \begin{align*} \mathbb{E}_{X} \bigg[ \exp \bigg( \bigg| \Big( \sum_{i \in I_{P_0}} e_i e_i^T u \Big)^T X \bigg|^{\alpha} / t^{\alpha} \bigg) ~\bigg|~ P = P_0 \bigg] \leq \mathbb{E}_X \Big[ \exp\Big( | u_0^T X |^{\alpha}/t^{\alpha} \Big) \Big]. \end{align*} Therefore, taking the supremum over $u \in \mathbb{S}^{n-1}$, we obtain \begin{align*} \norm{ X \circ P }_{\psi_{\alpha}}&\leq \norm{X}_{\psi_{\alpha}}. \end{align*} \end{proof} \begin{lemma}\label{lem:MGF_upper} Let $X$ be a mean-zero, $\psi_{\alpha}$-random variable for some $\alpha \geq 1$. Then for $| \lambda | \leq \frac{1}{C \norm{X}_{\psi_{\alpha}}}$, \[ \mathbb{E} \exp\left( \lambda X \right) \leq \exp\left( C \lambda^2 \norm{X}_{\psi_{\alpha}}^2 \right). \] \end{lemma} \begin{proof} See \cite{vershynin2018high}, Section 2.7. \end{proof} \begin{lemma}\label{lem:ind_sum} Let $X_1, \ldots, X_n$ be independent random variables with mean zero. 
For $\alpha \geq 1$, \[ \norm{\sum_{i=1}^n X_i}_{\psi_{\alpha}} \leq C \left( \sum_{i=1}^n \norm{X_i}_{\psi_{\alpha}}^2 \right)^{1/2}. \] \end{lemma} \begin{proof} Immediate by Lemma \ref{lem:MGF_upper}. \end{proof} \subsubsection{Proof of Lemma \ref{lemma:masked_noise_row_norm}} \begin{proof} Let $\boldsymbol{P} \in \{ 0, 1\}^{N \times p}$ denote a random matrix whose entries are i.i.d. random variables that take value $1$ with probability $\rho$ and $0$ otherwise. Note that $\boldsymbol{Z}_{i, \cdot} = \boldsymbol{X}_{i, \cdot} \circ \boldsymbol{P}_{i, \cdot}$ when Property \ref{prop:masking_noise_structure} is assumed and $\star$ is identified with $0$. By the triangle inequality, \begin{align*} \norm{\boldsymbol{Z}_{i, \cdot} - \rho\boldsymbol{A}_{i, \cdot}}_{\psi_{\alpha}} &= \norm{\boldsymbol{X}_{i, \cdot} \circ \boldsymbol{P}_{i, \cdot} - \rho\boldsymbol{A}_{i, \cdot}}_{\psi_{\alpha}} \\ &= \norm{ (\boldsymbol{X}_{i, \cdot} \circ \boldsymbol{P}_{i, \cdot}) - (\boldsymbol{A}_{i, \cdot} \circ \boldsymbol{P}_{i, \cdot}) - \rho\boldsymbol{A}_{i, \cdot} + (\boldsymbol{A}_{i, \cdot} \circ \boldsymbol{P}_{i, \cdot}) }_{\psi_{\alpha}} \\ &\le \norm{(\boldsymbol{X}_{i, \cdot} - \boldsymbol{A}_{i, \cdot}) \circ \boldsymbol{P}_{i, \cdot}}_{\psi_{\alpha}} + \norm{ (\boldsymbol{A}_{i, \cdot} \circ \boldsymbol{P}_{i, \cdot}) - \rho\boldsymbol{A}_{i, \cdot} }_{\psi_{\alpha}}. \end{align*} \noindent By definition of $\boldsymbol{X}$, Property \ref{prop:covariate_noise_structure}, and Lemma \ref{lem:technical_Ka}, we have \begin{align*} \norm{(\boldsymbol{X}_{i, \cdot} - \boldsymbol{A}_{i, \cdot}) \circ \boldsymbol{P}_{i, \cdot}}_{\psi_{\alpha}} &\leq \norm{\boldsymbol{X}_{i, \cdot} - \boldsymbol{A}_{i, \cdot}}_{\psi_{\alpha}} = \norm{\eta_{i, \cdot}}_{\psi_{\alpha}} \leq C K_{\alpha}. 
\end{align*} Moreover, Property \ref{prop:bounded_covariates} and the i.i.d. property of $\boldsymbol{P}_{ij}$ across different $j$ give \begin{align*} \Big \| (\boldsymbol{A}_{i, \cdot} \circ \boldsymbol{P}_{i, \cdot}) - \rho\boldsymbol{A}_{i, \cdot} \Big \|_{\psi_{\alpha}} &= \sup_{u \in \mathbb{S}^{p-1}} \bigg \| \sum_{j=1}^p u_j \boldsymbol{A}_{i, j} \big( \boldsymbol{P}_{i, j} - \rho \big) \bigg \|_{\psi_{\alpha}}\\ &\leq \sup_{u\in \mathbb{S}^{p-1}} \bigg( \sum_{j=1}^p u_j^2 \norm{ \boldsymbol{A}_{i,j} (\boldsymbol{P}_{i,j} - \rho)}_{\psi_{\alpha}}^2 \bigg)^{1/2}\\ &\leq \bigg(\sup_{u\in \mathbb{S}^{p-1}} \sum_{j} u_j^2 \max_{j \in [p]} | \boldsymbol{A}_{i,j} |^2\bigg)^{1/2} \norm{\boldsymbol{P}_{1,1} - \rho}_{\psi_\alpha}\\ &\leq \Gamma \norm{\boldsymbol{P}_{1,1} - \rho}_{\psi_\alpha}. \end{align*} The first inequality follows from Lemma \ref{lem:ind_sum}, the second is immediate, and the last follows from Property \ref{prop:bounded_covariates}. Lastly, $\norm{\boldsymbol{P}_{1,1} - \rho}_{\psi_\alpha} \leq C$ because $\boldsymbol{P}_{1,1} - \rho$ is a bounded random variable taking values in $[-\rho, 1- \rho]$. \end{proof} \section{Problem Setup} \label{sec:background} Standard formulations of prediction problems assume the independent variables (covariates) are noiseless and fully observed. However, these assumptions often fail to accurately describe modern applications, where the covariates are commonly noisy, missing, and/or dependent. Here, we study these issues in the context of high-dimensional linear regression. \subsection{Structural Assumptions for Covariates} \label{sec:covariate_noise} Let $\boldsymbol{A} \in \mathbb{R}^{N \times p}$ denote the matrix of true covariates, where the number of predictors $p$ can possibly exceed $N$. We assume that its entries $A_{ij}$ are bounded. 
\begin{property} \label{prop:bounded_covariates} There exists an absolute constant $\Gamma \geq 0$ such that $|A_{ij}| \leq \Gamma$ for all $(i,j) \in [N] \times [p]$. \end{property} Rather than directly observing $\boldsymbol{A}$, we are given access to a corrupted version of it. Let the random matrix $\boldsymbol{X}$ denote a perturbation of the deterministic covariates $\boldsymbol{A}$, i.e., \begin{align} \label{eq:noisy_a} \boldsymbol{X} &= \boldsymbol{A} + \boldsymbol{H}, \end{align} where $\boldsymbol{H}$ is a random matrix with independent, mean-zero rows. Before we introduce the properties assumed on $\boldsymbol{H}$, we first define an important class of random variables/vectors. \begin{definition} For any $\alpha \geq 1$, we define the $\psi_{\alpha}$-norm of a random variable $X$ as \begin{align} \label{eq:alpha_norm} \norm{X}_{\psi_{\alpha}} &= \inf \Big \{ t > 0: \mathbb{E} \exp(|X|^{\alpha} /t^{\alpha}) \le 2 \Big \}. \end{align} If $\norm{X}_{\psi_{\alpha}} < \infty$, we call $X$ a $\psi_{\alpha}$-random variable. More generally, we say $X \in \mathbb{R}^n$ is a $\psi_{\alpha}$-random vector if all one-dimensional marginals $\langle X, v \rangle$ are $\psi_{\alpha}$-random variables for every fixed vector $v \in \mathbb{R}^n$. We define the $\psi_{\alpha}$-norm of the random vector $X \in \mathbb{R}^n$ as \begin{align} \label{eq:alpha_vector_norm} \norm{X}_{\psi_{\alpha}} &= \sup_{v \in \mathcal{S}^{n-1}} \norm{ \langle X, v \rangle }_{\psi_{\alpha}}, \end{align} where $\mathcal{S}^{n-1} := \{ v \in \mathbb{R}^n: \norm{v}_2 = 1\}$ denotes the unit sphere in $\mathbb{R}^n$ and $\langle \cdot, \cdot \rangle$ denotes the inner product. 
Note that $\alpha = 2$ and $\alpha = 1$ correspond to the classes of sub-gaussian and sub-exponential random variables/vectors, respectively. \end{definition} We now impose the following properties on the noise matrix $\boldsymbol{H}$. For notational convenience, we denote $\eta_{ij}$ as the $(i, j)$-th entry of $\boldsymbol{H}$, and $\eta_{i, \cdot}$ and $\eta_{\cdot, j}$ as the $i$-th row and $j$-th column of $\boldsymbol{H}$, respectively. In general, for any $d_1 \times d_2$ matrix $\boldsymbol{M}$, we define $\boldsymbol{M}_{i, \cdot} \in \mathbb{R}^{1 \times d_2}$ and $\boldsymbol{M}_{\cdot, j} \in \mathbb{R}^{d_1 \times 1}$ as its $i$-th row and $j$-th column, respectively. \begin{property} \label{prop:covariate_noise_structure} Let $\boldsymbol{H}$ be a matrix of independent, mean-zero $\psi_{\alpha}$-rows for some $\alpha \geq 1$, i.e., there exist an $\alpha \geq 1$ and $K_{\alpha} < \infty$ such that $\norm{\eta_{i, \cdot}}_{\psi_{\alpha}} \le K_{\alpha}$ for all $i \in [N]$. \end{property} \begin{property}\label{prop:covariate_noise_variance} $\big\| \mathbb{E} \eta_{i,\cdot}^T \eta_{i, \cdot} \big\| \leq \gamma^2$ for all $i \in [N]$. \end{property} \subsection{Missing Data} \label{sec:missing_data} In addition to the noise perturbations, we allow for missing data within our observed covariate matrix. In particular, we observe the matrix $\boldsymbol{Z}$, a ``masked'' version of $\boldsymbol{X}$: each entry $Z_{ij}$ of $\boldsymbol{Z}$ is observed with probability $\rho \in (0, 1]$, independently of the other entries. This is made formal by the following property. \begin{property} \label{prop:masking_noise_structure} For all $(i,j) \in [N] \times [p]$, \begin{align} \label{eq:observed_covariates} Z_{ij} &= \begin{cases} A_{ij} + \eta_{ij} & \text{w.p. } \rho \\ \star & \text{w.p. } 1 - \rho \end{cases} \end{align} is sampled independently. Here, $\star$ denotes an unknown value. 
\end{property} \subsection{Response Variables} \label{sec:response_variables} We consider a response associated with each covariate vector. Formally, for each $i \in [N]$, we let $Y_i$ denote the random response variable associated with $\boldsymbol{A}_{i, \cdot} \in \mathbb{R}^{1 \times p}$. We consider the setting where we observe a response via the following model: letting $\Omega \subset [N]$, we define \begin{equation} \label{eq:regression_model} Y_i = \begin{cases} \boldsymbol{A}_{i, \cdot} \beta^* + \epsilon_i & \text{if } i \in \Omega\\ \star & \text{otherwise}, \end{cases} \end{equation} where $\beta^* \in \mathbb{R}^{p}$ is the vector of unknown parameters and $\epsilon_i \in \mathbb{R}$ is the response noise with the following property. \begin{property} \label{prop:observation_noise_structure} The response noise $\epsilon$ is a random vector with independent, mean-zero entries and $\mathbb{V}\text{ar}(\epsilon_i) \le \sigma$ for some $\sigma > 0$; here, $\mathbb{V}\text{ar}(X)$ denotes the variance of a random variable $X$. \end{property} \subsection{Problem Statement and Model Recap} In summary, we observe all $N$ (noisy) covariates $\{\boldsymbol{Z}_{1, \cdot}, \dots, \boldsymbol{Z}_{N, \cdot} \}$, but only a subset $\Omega$, of size $n$, of the corresponding response values $\{Y_i: i \in \Omega \}$. Using our $n$ sample points $\{(Y_i, \boldsymbol{Z}_{i, \cdot}): i \in \Omega \}$, we aim to produce a regression estimator $\widehat{\beta} \in \mathbb{R}^p$ so that our prediction estimates $\widehat{Y}(\widehat{\beta}) \in \mathbb{R}^N$ are close to the unknown expected response values associated with every data point in $\boldsymbol{A}$, i.e., $\mathbb{E} Y_i = \boldsymbol{A}_{i, \cdot} \beta^*$ for all $i \in [N]$. We will evaluate our algorithm based on its prediction error. 
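To make the full observation model concrete, the following sketch generates one illustrative instance of it; every name and numeric value is ours, and `np.nan` stands in for the unknown value $\star$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative instance of the model recap (names/values ours, not the
# paper's): bounded covariates A, noisy X = A + H, Bernoulli(rho)-masked
# Z (np.nan plays the role of "star"), and responses Y observed on Omega.
N, p, rho, sigma = 60, 8, 0.8, 0.1
Gamma = 1.0
A = rng.uniform(-Gamma, Gamma, size=(N, p))     # bounded: |A_ij| <= Gamma
H = 0.05 * rng.standard_normal((N, p))          # sub-gaussian (psi_2) rows
X = A + H
mask = rng.binomial(1, rho, size=(N, p)).astype(bool)
Z = np.where(mask, X, np.nan)                   # masked observations

beta_star = rng.standard_normal(p)
Omega = np.sort(rng.choice(N, size=N // 2, replace=False))
Y = A[Omega] @ beta_star + sigma * rng.standard_normal(len(Omega))

assert np.isnan(Z).any()                        # some entries are masked
assert Y.shape == (len(Omega),)
```

The goal is then to recover $\beta^*$ (or predictions $\boldsymbol{A}\beta^*$) from `Z` and the partial responses `Y` alone.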
Specifically, we assess the quality of our estimate $\widehat{Y} := \widehat{Y}(\widehat{\beta}) \in \mathbb{R}^{N}$ in terms of its (1) mean-squared training (empirical) error over $\Omega$, \begin{align} \label{eq:mse_train} \text{MSE}_{\Omega}(\widehat{Y}) &= \frac{1}{|\Omega|} \mathbb{E} \left[ \sum_{i \in \Omega} \Big( \widehat{Y}_{i} - \boldsymbol{A}_{i, \cdot} \beta^* \Big)^2 \right], \end{align} and (2) mean-squared test error over all (observed and unobserved) entries \begin{align} \label{eq:mse_test} \text{MSE}(\widehat{Y}) &= \frac{1}{N} \mathbb{E} \left[ \sum_{i=1}^N \Big( \widehat{Y}_{i} - \boldsymbol{A}_{i, \cdot} \beta^* \Big)^2 \right]. \end{align} Note that $\Omega$ denotes the set of $n$ locations out of $[N]$ for which we observe a response. This is the set of sample indices that will be used to learn a model parameter $\widehat{\beta}$ in our algorithm (Algorithm \ref{alg:main_algorithm}). In that sense, we call $\textrm{MSE}_{\Omega}$ the ``training error''. For convenience, we summarize all of our model assumptions\footnote{With regards to Property \ref{prop:masking_noise_structure}, we specifically mean $Z_{ij} = (A_{ij} + \eta_{ij}) \cdot \mathds{1}(\pi_{ij} = 1) + \star \cdot \mathds{1}(\pi_{ij} = 0)$.} in Table \ref{table:model_assumptions}. 
\begin{table} \caption{Summary of Model Assumptions} \label{table:model_assumptions} \centering \begin{tabular}{ c c c c c } \toprule \multirow{2}{*}{Covariates $\boldsymbol{A}$} & \multicolumn{2}{c}{Covariate Noise $\boldsymbol{H}$} & \multirow{2}{*}{Covariate Masking $\boldsymbol{Z}$} & \multirow{2}{*}{Response Noise $\epsilon$} \\ \cmidrule{2-3} & $\psi_{\alpha}$-norm & Covariance && \\ \midrule $ \left| A_{ij} \right| \leq \Gamma$ & $\norm{\eta_{i, \cdot}}_{\psi_{\alpha}} \le K_{\alpha}$ & $\big\| \mathbb{E} \eta_{i,\cdot}^T \eta_{i, \cdot} \big\| \leq \gamma^2$ & $\pi_{ij} \sim \text{Bernoulli}(\rho)$ & $\mathbb{V}\text{ar}(\epsilon_i) \le \sigma$\\ Property \ref{prop:bounded_covariates} & Property \ref{prop:covariate_noise_structure} & Property \ref{prop:covariate_noise_variance} & Property \ref{prop:masking_noise_structure} & Property \ref{prop:observation_noise_structure}\\ \bottomrule \end{tabular} \end{table} \paragraph{Notation.} For any matrix $\boldsymbol{B} \in \mathbb{R}^{N \times p}$ and index set $\Omega \subset [N]$, we let $\boldsymbol{B}^{\Omega}$ denote the $|\Omega| \times p$ submatrix of $\boldsymbol{B}$ formed by stacking the rows of $\boldsymbol{B}$ indexed by $\Omega$. We are particularly interested in the case where $\Omega = \{i_1, \dots, i_n\}$ is the set of $n$ locations drawn from $[N]$ for which we observe a response. In this case, $\boldsymbol{A}^{\Omega}$ is the $n \times p$ matrix formed by stacking the rows $\{ \boldsymbol{A}_{i_1, \cdot}, \dots, \boldsymbol{A}_{i_n, \cdot} \}$. We define $\boldsymbol{X}^{\Omega}, \boldsymbol{H}^{\Omega}$, and $\boldsymbol{Z}^{\Omega}$ similarly from $\boldsymbol{X}, \boldsymbol{H}$, and $\boldsymbol{Z}$, respectively. 
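In code terms, the $\Omega$-superscript is plain row selection; a minimal numpy sketch (the matrix and index set below are illustrative):

```python
import numpy as np

# B^Omega is just B restricted to the rows indexed by Omega,
# i.e., numpy fancy indexing B[Omega] (names illustrative).
B = np.arange(20).reshape(5, 4)      # a 5 x 4 matrix
Omega = np.array([0, 2, 4])          # observed row indices
B_Omega = B[Omega]                   # the |Omega| x p submatrix

assert B_Omega.shape == (3, 4)
assert (B_Omega[1] == B[2]).all()
```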
The superscript $\Omega$ is sometimes omitted when the matrix in question is clear from context. Finally, we write $\text{poly}(\alpha_1, \dots, \alpha_n)$ for a function that scales at most polynomially in its arguments $\alpha_1, \dots, \alpha_n$. \section{Experiments}
\section{Introduction} \markboth{Y. N. Alvarez and R. Sa Earp}{Introduction} \label{secIntroducao} Let $M$ be a complete Riemannian manifold of dimension $n\geq 2$. Given a smooth bounded domain $\Omega$ in $M$, we ask if for a given smooth function $\varphi$ and a prescribed smooth function $H=H(x,z)$, non-decreasing in the variable $z$, there exists a smooth function $u$ up to the boundary satisfying \begin{equation}\tag{P}\label{ProblemaP} \left\{ \begin{split} \diver \left(\dfrac{\nabla u}{W} \right)&=n H(x,u) \ \mbox{in}\ \Omega,\\ u&=\varphi \ \mbox{in}\ \partial\Omega, \end{split}\right. \end{equation} where $W=\sqrt{1+\norm{\nabla u (x)}^2}$ and the quantities involved are calculated with respect to the metric of $M$. If $u$ satisfies the equation \begin{equation}\label{operador_minimo_1} \diver \left(\dfrac{\nabla u}{W}\right) = nH(x,u), \end{equation} then its vertical graph, $$ \Gr(u)=\{(x,u(x));x\in \Omega\}\subset M\times\R,$$ is a hypersurface in $M\times\R$ of mean curvature $H(x,u(x))$ at each point $(x,u(x))$. In a coordinate system $(x_1,\dots,x_n)$ in $M$, equation \eqref{operador_minimo_1} can be written in non-divergence form as \begin{equation}\label{operador_minimo_1_coord} \mathcal{M} u:=\sum_{i,j=1}^n \left(W^2\sigma^{ij} - {u^iu^j}\right)\nabla^2_{ij} u=nH(x,u){W^3} , \end{equation} where $(\sigma^{ij})$ is the inverse of the metric $(\sigma_{ij})$ of $M$, $u^i=\displaystyle\sum_{j=1}^n\sigma^{ij} \partial_j u$ are the coordinates of $\nabla u$ and $\nabla^2_{ij} u(x)=\Hess u(x){\left(\ds\mathsmaller{\frac{\partial}{\partial x_i}},\ds\mathsmaller{\frac{\partial}{\partial x_j}}\right)}$. We also define the operator $\mathfrak{Q}$ by $$ \mathfrak{Q} u = \mathcal{M} u -n H(x,u)W^3. $$ The matrix of the operator $\mathcal{M}$ (and $\mathfrak{Q}$) is given by $A={W^2}g$, where $g$ is the induced metric on the graph of $u$. This implies that the eigenvalues of $A$ are positive and depend on $x$ and on $\nabla u$.
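Indeed, the ellipticity can be checked directly from \eqref{operador_minimo_1_coord}: in an orthonormal frame at $x$, the coefficient matrix of $\mathcal{M}$ is $a^{ij}=W^2\delta^{ij}-u^iu^j$, so for any unit vector $\xi$,
$$ a^{ij}\xi_i\xi_j = W^2 - \escalar{\nabla u}{\xi}^2. $$
Its eigenvalues are therefore $W^2-\norm{\nabla u}^2=1$, attained in the direction of $\nabla u$, and $W^2$, attained in the orthogonal directions; both are positive, and the ellipticity ratio $W^2$ is controlled by any bound on $\norm{\nabla u}$.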
Hence, $\mathcal{M}$ is locally uniformly elliptic. Furthermore, if $\Omega$ is bounded and $u\in\mathscr{C}^{1}(\overline{\Omega})$, then $\mathcal{M}$ is uniformly elliptic in $\overline{\Omega}$ (see \cite{spruck} for more details). We recall that the Dirichlet problem \eqref{ProblemaP} is a classical problem in the intersection between Differential Geometry and Partial Differential Equations. First steps were taken by Bernstein \cite{Bernstein}, Douglas \cite{Douglas1931} and Rad\'o \cite[p. 795]{Rado1930} in domains of $\R^2$ for the minimal case. In 1966 Jenkins-Serrin {\cite[Th. 1 p. 171]{Serrin1968}} derived related results in higher dimensions. Later on, Serrin \cite{Serrin} devoted his attention to the study of Dirichlet problems for a more general class of elliptic equations, among which is the prescribed mean curvature equation. Specifically related to our work, he obtained the following result. \begin{teo}[Serrin {\cite[Th. p. 484]{Serrin}}]\label{T_Serrin_Ricci} Let $\Omega\subset\R^n$ be a bounded domain whose boundary is of class $\mathscr{C}^2$. Let $H(x)\in\mathscr{C}^1(\overline{\Omega})$ and suppose that \begin{equation}\label{cond_Ricc_Serrin} \modulo{\nabla H(x)}\leq \dfrac{n}{n-1}(H(x))^2\ \forall\ x\in\Omega. \end{equation} Then the Dirichlet problem in $\Omega$ for surfaces having prescribed mean curvature $H(x)$ is uniquely solvable for arbitrarily given $\mathscr{C}^2$ boundary values if, and only if, \begin{equation}\label{SerrinCondition} (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n\modulo{H(y)} \ \forall \ y\in\partial\Omega. \end{equation} \end{teo} We note that in the Serrin condition \eqref{SerrinCondition}, $\mathcal{H}_{\partial\Omega}(y)$ denotes the inward mean curvature of $\partial\Omega$ at $y\in\partial\Omega$. A direct consequence of theorem \ref{T_Serrin_Ricci} is the following sharp result. \begin{teo}[Serrin sharp solvability criterion {\cite[p.
416]{Serrin}}]\label{SharpSerrin} Let $\Omega\subset\R^n$ be a bounded domain whose boundary is of class $\mathscr{C}^2$. Then the Dirichlet problem for the mean curvature equation has a unique solution for every constant $H$ and arbitrary $\mathscr{C}^2$ boundary data if, and only if, $(n-1) \mathcal{H}_{\partial\Omega} \geq n \modulo{H}$. \end{teo} Joel Spruck \cite{spruck} pioneered the study of the Dirichlet problem \eqref{ProblemaP} in the $M\times\R$ setting when $H$ is a positive constant. Spruck established a priori estimates for this problem that led to several existence results. More closely related to our work is the theorem stated below. \begin{teo}[Spruck {\cite[Th. 1.4 p. 787]{spruck}}]\label{T_exist_Spruck} Let $\Omega\subset M$ be a bounded domain whose boundary is of class $\mathscr{C}^{2,\alpha}$ for some $\alpha\in(0,1)$. Let $H\in\R_+$ and suppose that \begin{equation}\label{SerrinConditionConstantH} (n-1) \mathcal{H}_{\partial\Omega}(y) \geq n H. \end{equation} Suppose also that \begin{equation}\label{cond_Ricc_Spruck} \Ricc_x \geq - \dfrac{n^2}{n-1} H^2\ \ \forall x\in \Omega. \end{equation} Then the Dirichlet problem \eqref{ProblemaP} is uniquely solvable for arbitrary continuous boundary data $\varphi$. \end{teo} Above, $\Ricc_x$ is the Ricci curvature of $M$ at $x$. The notation $\Ricc_x \geq f(x)$ means that the Ricci curvature evaluated in any unitary tangent vector at $x$ is bounded below by the function $f(x)$. The definition of the Ricci curvature we use throughout the text follows \cite{petersen1998}. We note that condition \eqref{cond_Ricc_Spruck} is trivially satisfied for any constant $H$ if $M=\R^n$. So, theorem \ref{T_exist_Spruck} of Spruck is a generalization of the sufficient part of theorem \ref{SharpSerrin} of Serrin. \medskip On the other hand, in our previous work \cite[Th. 1 p.
3]{artigonaoexist:inpress} we proved that the {\it strong Serrin condition}, \begin{equation}\label{StrongSerrinCondition} (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \sup\limits_{z\in\R}\modulo{H\left(y,z\right)} \ \forall \ y\in\partial\Omega, \end{equation} is necessary for the solvability of problem \eqref{ProblemaP} in a large class of Riemannian manifolds. Examples are the Hadamard manifolds \cite[Corollary 2 p. 3]{artigonaoexist:inpress} and the simply connected and compact manifolds whose sectional curvature satisfies $0<\frac{1}{4} K_0 < K \leq K_0$, provided $\diam(\Omega)<\frac{\pi}{2\sqrt{K_0}}$ \cite[Corollary 3 p. 4]{artigonaoexist:inpress}. \bigskip In the present paper, our goal is to study under which conditions on the function $H$ the strong Serrin condition \eqref{StrongSerrinCondition} is also sufficient. The main theorem of this paper is the following. \begin{teo}[main theorem]\label{T_exist_Ricci} Let $\Omega \subset M$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^{2,\alpha}$ for some $\alpha\in(0,1)$. Let $H\in\mathscr{C}^{1,\alpha}(\overline{\Omega}\times\R)$ satisfying $\partial_z H \geq 0$ and \begin{equation}\label{cond_H_Ricci_exist} \Ricc_x\geq n\sup\limits_{z\in\R}\norm{\nabla_x H(x,z)}-\dfrac{n^2}{n-1}\displaystyle\inf_{z\in\R}\left(H(x,z)\right)^2 \ \forall \ x\in\Omega. \end{equation} If $ (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \sup\limits_{z\in\R}\modulo{H\left(y,z\right)} \ \forall \ y\in\partial\Omega, $ then for every $\varphi\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$ there exists a unique solution $u\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$ of the Dirichlet problem \eqref{ProblemaP}. \end{teo} Notice that assumptions \eqref{cond_Ricc_Serrin} and \eqref{cond_Ricc_Spruck} are particular cases of \eqref{cond_H_Ricci_exist}. Hence, theorem \ref{T_exist_Ricci} generalizes the existence part of theorem \ref{T_Serrin_Ricci} of Serrin and theorem \ref{T_exist_Spruck} of Spruck.
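Indeed, in the Euclidean setting of theorem \ref{T_Serrin_Ricci} one has $\Ricc_x\equiv 0$ and $H=H(x)$, so \eqref{cond_H_Ricci_exist} reads
$$ 0 \geq n\modulo{\nabla H(x)}-\dfrac{n^2}{n-1}(H(x))^2, $$
which is exactly \eqref{cond_Ricc_Serrin}; and for a constant $H$ one has $\nabla_x H\equiv 0$, so \eqref{cond_H_Ricci_exist} reduces to $\Ricc_x\geq -\frac{n^2}{n-1}H^2$, which is \eqref{cond_Ricc_Spruck}.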
We also highlight that the combination of the non-existence results mentioned above with theorem \ref{T_exist_Ricci} gives Serrin type solvability criteria for the Dirichlet problem \eqref{ProblemaP} (see \cite[Thms. 8 and 9]{artigonaoexist:inpress}). \medskip On the other hand, notice that from the combination of theorem \ref{T_exist_Spruck} of Spruck and our non-existence result \cite[Corollary 2 p. 3]{artigonaoexist:inpress} for Hadamard manifolds, we can deduce that the Serrin condition \eqref{SerrinConditionConstantH} is necessary and sufficient for the solvability of problem \eqref{ProblemaP} for every constant $H$ satisfying \eqref{cond_Ricc_Spruck}. In the case where $M=\HH^n$ we see that condition \eqref{cond_Ricc_Spruck} is satisfied for every constant $H\geq \frac{n-1}{n}$. In the opposite case, $H\in\left[0,\frac{n-1}{n}\right)$, Spruck \cite[Th. 5.4 p. 797]{spruck} obtained an existence result assuming the strict inequality in the Serrin condition. \medskip In this paper we also extend this result of Spruck \cite[Th. 5.4 p. 797]{spruck} in the hyperbolic space by deriving the following theorem. \begin{teo}\label{exist_hiperbolicoHmenor1} Let $\Omega\subset \HH^n$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^{2,\alpha}$ for some $\alpha\in(0,1)$. Let $H\in\mathscr{C}^{1,\alpha}(\overline{\Omega}\times\R)$ satisfying $\partial_z H\geq 0$ and $\sup\limits_{\Omega\times\R}\modulo{H}\leq \frac{n-1}{n}$. If $ (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \sup\limits_{z\in\R}\modulo{H\left(y,z\right)} \ \forall \ y\in\partial\Omega, $ then for every $\varphi\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$ there exists a unique solution $u\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$ of the Dirichlet problem \eqref{ProblemaP}. \end{teo} Putting together theorem \ref{exist_hiperbolicoHmenor1} and our non-existence result for Hadamard manifolds \cite[Cor. 1 p.
3]{artigonaoexist:inpress} with theorem \ref{T_exist_Spruck} of Spruck, one can deduce that the Serrin sharp solvability criterion for arbitrary constant $H$, as stated in theorem \ref{SharpSerrin} above, also holds in the hyperbolic case \cite[Th. 7 p. 5]{artigonaoexist:inpress}. \medskip At last, we use the barriers constructed by Galvez-Lozano \cite[Th. 6 p. 12]{Galvez2015} to prove the following result in Hadamard manifolds. \begin{teo}\label{teo_GalvezLozano} Let $M$ be a Hadamard manifold such that $-c^2\leq K \leq -1$, for some $c>1$. Let $\Omega\subset M$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^{2,\alpha}$ for some $\alpha\in(0,1)$ and whose principal curvatures are greater than $c$. Let $\varphi\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$ and $H\in\mathscr{C}^{1,\alpha}(\overline{\Omega}\times\R)$ satisfying $\partial_z H\geq 0$ and $\sup\limits_{\Omega\times\R}\modulo{H}\leq \frac{n-1}{n}$. Then problem \eqref{ProblemaP} has a unique solution $u\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$. \end{teo} \section{The a priori estimates} Firstly, we establish a lemma that will help us to obtain a priori height and boundary gradient estimates. \begin{lema}\label{lema_Est_altura_cond_ricci} Let $\Gamma$ be an embedded and oriented $\mathscr{C}^2$ hypersurface of $M$ and let $\Gamma_t$ be parallel to $\Gamma$ for each $t\in[0,\tau)$. Assume that for some fixed $y\in\Gamma$, $\mathcal{H}_{\Gamma}(y)\geq 0$ with respect to a normal field $N$. Suppose also that there exists a function $h\in\mathscr{C}^1[0,\tau)$ satisfying \begin{equation}\label{der_H_3} \modulo{h(0)}\leq \mathcal{H}_{\Gamma}(y) \end{equation} and \begin{equation}\label{der_H_3_2} (n-1)\left(\modulo{h'(t)}-(h(t))^2\right) \leq \Ricc_{\gamma_y(t)}(\gamma_y'(t)) \ \forall \ t\in[0,\tau), \end{equation} where $\gamma_y(t)=\exp_{y}(tN_y)\in\Gamma_t$.
Then \begin{equation}\label{est_laplaciano_est_altura_1_lema} \modulo{h(t)}\leq \mathcal{H}_{\Gamma_t}(\gamma_y(t)) \ \forall \ 0\leq t < \tau, \end{equation} where $\mathcal{H}_{\Gamma_t}$ is computed with respect to $\gamma_y'(t)$. Furthermore, $\mathcal{H}_{\Gamma_t}(\gamma_y(t))$ is increasing as a function of $t$. \end{lema} \begin{proof} Let $\mathcal{H}(t):=\mathcal{H}_{\Gamma_t}(\gamma_y(t))$. It is known that (see \cite[Cor. B.4 p. 66]{minhatese}) $$\mathcal{H}'(t) \geq \dfrac{\Ricc_{\gamma_y(t)}(\gamma_y'(t))}{n-1} + \left(\mathcal{H}(t)\right)^2.$$ Since we are assuming \eqref{der_H_3_2}, it follows that \begin{equation}\label{desig_sem_ricc} \mathcal{H}'(t) \geq \modulo{h'(t)} - (h(t))^2 + \left(\mathcal{H}(t)\right)^2. \end{equation} Then, \begin{equation}\label{desig_sem_ricc_v} (\mathcal{H}(t) - h(t))'\geq \left(\mathcal{H}(t)+h(t)\right)\left(\mathcal{H}(t)-h(t)\right) \end{equation} and \begin{equation}\label{desig_sem_ricc_g} (\mathcal{H}(t) + h(t))'\geq \left(\mathcal{H}(t)-h(t)\right)\left(\mathcal{H}(t)+h(t)\right). \end{equation} Let us define $v(t)=\mathcal{H}(t)-h(t)$ and $g(t)=\mathcal{H}(t)+h(t)$. From \eqref{desig_sem_ricc_v} we have $$\left(\dfrac{v(t)}{\displaystyle e^{\int_0^t g(s)ds}}\right)'\geq 0,$$ so $v(t) \geq\displaystyle v(0)e^{\int_0^t g(s)ds}$ for each $t\in[0,\tau)$. As a consequence of \eqref{der_H_3} we obtain $$\mathcal{H}(t)\geq h(t) \ \forall t\in[0,\tau).$$ Using \eqref{desig_sem_ricc_g} we obtain in a similar way that $$\mathcal{H}(t)\geq -h(t) \ \forall t\in[0,\tau).$$ Therefore, \begin{equation}\label{H_h} \mathcal{H}(t)\geq\modulo{h(t)} \ \forall t\in[0,\tau). \end{equation} Substituting \eqref{H_h} in \eqref{desig_sem_ricc} we also obtain $\mathcal{H}'(t)\geq 0$. \end{proof} Roughly speaking, lemma \ref{lema_Est_altura_cond_ricci} says that, under condition \eqref{der_H_3_2}, the parallel hypersurfaces inherit the initial condition on $\Gamma$ along the orthogonal geodesics.
Moreover, the mean curvature of the parallel hypersurface is an increasing function of $t$. \subsection{A priori height estimate} We point out that in theorem \ref{T_Serrin_Ricci} of Serrin the combination of condition \eqref{cond_Ricc_Serrin} with the Serrin condition \eqref{SerrinCondition} provides a height estimate for the Dirichlet problem \eqref{ProblemaP} in the Euclidean case. Analogously for theorem \ref{T_exist_Spruck} of Spruck. We generalize these geometric ideas in the next theorem. \begin{teo}\label{teo_Est_altura} Let $\Omega\subset M$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^2$ and $\varphi\in\mathscr{C}^0(\partial\Omega)$. Let $H\in\mathscr{C}^{1}(\overline{\Omega}\times\R)$ satisfying $\partial_z H\geq 0$, \begin{equation}\label{cond_H_Ricci_sup} \Ricc_x\geq n\sup\limits_{z\in\R}\norm{\nabla_x H(x,z)}-\dfrac{n^2}{n-1}\inf\limits_{z\in\R}\left(H(x,z)\right)^2 \ \forall \ x\in\Omega, \end{equation} and \begin{equation}\label{cond_Serrin_hightest_teo} (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \modulo{H(y,\varphi(y))} \ \forall \ y\in\partial\Omega. \end{equation} If $u\in\mathscr{C}^2(\Omega)\cap\mathscr{C}^0(\overline{\Omega})$ is a solution of problem \eqref{ProblemaP}, then \begin{equation*} \sup\limits_{\Omega}\modulo{u}\leq \sup\limits_{\partial\Omega} \modulo{\varphi} +\dfrac{e^{\mu\delta}-1}{\mu}, \end{equation*} where $\mu>n\sup\left\{\modulo{H(x,z)}, (x,z)\in \overline{\Omega}\times\left[-\sup\limits_{\partial\Omega}\modulo{\varphi},\sup\limits_{\partial\Omega}\modulo{\varphi}\right]\right\}$ and $\delta=\diam(\Omega)$. \end{teo} \begin{proof} For $x\in\Omega$ let us define the distance function $d(x)=\dist(x,\partial\Omega)$. Let $\Omega_0$ be the biggest open subset of $\Omega$ having the unique nearest point property; that is, for every $x\in\Omega_0$ there exists a unique $y\in\partial\Omega$ such that $d(x)=\dist(x,y)$. Then $d\in\mathscr{C}^2(\Omega_0)$ (see \cite[Prop. 4.1 p. 794]{spruck}, \cite{LiNirenberg2}).
We now define $w=\phi\circ d + \sup\limits_{\partial\Omega}\modulo{\varphi}$ over $\Omega$, where $$\phi(t)=\dfrac{\displaystyle e^{\mu\delta}}{\mu}\left(1-e^{-\mu t}\right).$$ If we prove that $u\leq w$ in $\overline{\Omega}$ we obtain the desired estimate. For the sake of contradiction, we suppose that the function $v=u-w$ attains a maximum $m>0$ at $x_0\in{\Omega}$. Let $y_0\in\partial\Omega$ be such that $d(x_0)=\dist(x_0,y_0)=t_0$, and let $\gamma$ be the minimizing geodesic orthogonal to $\partial\Omega$ joining $x_0$ to $y_0$. Restricting $u$ and $w$ to $\gamma$ we see that $\parcial{v}{t}(t_0)=0$. Hence, $\parcial{u}{t}(t_0)=\parcial{w}{t}(t_0)=\phi'(t_0)>0$, which implies that $\nabla u(x_0)\neq 0$. Therefore, $\Gamma_0=\left\{x\in\Omega;u(x)=u(x_0)\right\}$ is of class $\mathscr{C}^2$ near $x_0$. Then, there exists a geodesic ball $B_{\epsilon}(z_0)$ tangent to $\Gamma_0$ at $x_0$ such that \begin{equation}\label{eq_bola_1} u > u(x_0) \mbox{ in } \overline{B_{\epsilon}(z_0)}\setminus\{x_0\}. \end{equation} We note that $$\dist(z_0,y_0)\leq \dist(z_0,x_0)+\dist(x_0,y_0)=\epsilon + d(x_0).$$ Hence, for $\tilde{z}$ lying in the intersection of $\partial B_{\epsilon}(z_0)$ with a minimizing geodesic joining $z_0$ to $y_0$, we have $$d(\tilde{z})\leq \dist(\tilde{z},y_0)=\dist(z_0,y_0)-\epsilon \leq d(x_0)+\epsilon -\epsilon = d(x_0).$$ Hence, $w(\tilde{z})\leq w(x_0)$ since $\phi$ is increasing. Consequently, $$ u(\tilde{z})-w(x_0) \leq u(\tilde{z})-w(\tilde{z}) \leq u(x_0)-w(x_0)$$ and $u(\tilde{z})\leq u(x_0)$. By \eqref{eq_bola_1}, $\tilde{z}=x_0$, so $z_0=\gamma(t_0+\epsilon)$. This ensures that $x_0\in\Omega_0$ because, if not, $z_0$ would also be on the extension of another minimizing geodesic joining some $y_1\in\partial\Omega\setminus\{y_0\}$ to $x_0$, which is a contradiction. However, let us show that this situation is also impossible.
After some computations we have \begin{equation}\label{Mw_est_altura_0} \mathcal{M} w = \phi'(1+\phi'^2) \Delta d + {\phi''} \ \mbox{ in } \Omega_0. \end{equation} For $x\in\Omega_0$, let $y=y(x)$ in $\partial\Omega$ be the nearest point to $x$ and $\gamma_y(t)$ the orthogonal geodesic to $\partial\Omega$ from $y$ to $x$. Let us define $$h(t)=\frac{n}{n-1}H\left(\gamma_y(t),\varphi(y)\right).$$ Note that $y$ is now fixed. From the Serrin condition \eqref{cond_Serrin_hightest_teo} it follows that $$\modulo{h(0)} = \dfrac{n}{n-1} \modulo{H\left(y,\varphi(y)\right)}\leq \mathcal{H}_{\partial\Omega}(y) =\mathcal{H}(0) .$$ Besides, $$ h'(t)=\dfrac{n}{n-1}\escalar{\nabla_x H(\gamma_y(t),\varphi(y))}{\gamma_y'(t)}. $$ Taking into account the additional hypothesis \eqref{cond_H_Ricci_sup} we see that $$(n-1)\left(\modulo{h'(t)}-(h(t))^2\right)\leq \Ricc_{\gamma_y(t)}(\gamma_y'(t)).$$ Then we can apply lemma \ref{lema_Est_altura_cond_ricci} to the function $h(t)$ to obtain $$ n \modulo{H(\gamma_y(t),\varphi(y))}\leq (n-1)\mathcal{H}_{\Gamma_t}(\gamma_y(t)),$$ where $\Gamma_t$ is parallel to some portion of $\partial\Omega$. Therefore $$\Delta d(x)\leq-n\modulo{H\left(x,\varphi(y(x))\right)} \ \forall \ x\in\Omega_0.$$ Using this estimate in \eqref{Mw_est_altura_0} we obtain $$\mathcal{M} w \leq -n\modulo{H\left(x,\varphi(y(x))\right)} {\phi'}{(1+\phi'^2)} + {\phi''}.$$ Also $ \phi''(t)=-\mu \displaystyle e^{\mu(\delta-t)}=-\mu\phi'(t)<-n\modulo{H(x,\varphi(y(x)))}\phi'(t) $ and $\phi'\geq 1$, so \begin{equation}\label{est_Mw_est_alt} \mathcal{M} w \leq -n\modulo{H\left(x,\varphi(y(x))\right)} {\phi'(2+\phi'^2)}< -n\modulo{H\left(x,\varphi(y(x))\right)}{\left(1+\phi'^2\right)^{3/2}}. \end{equation} On the other hand, the hypothesis $\partial_z H\geq 0$ implies that \begin{equation}\label{eq_Hxw} \mp H(x,\pm w) \leq \mp H\left(x,{\varphi(y(x))}\right)\leq \modulo{H\left(x,{\varphi(y(x))}\right)}. 
\end{equation} From this fact and \eqref{est_Mw_est_alt} we conclude that \begin{align*} \pm \mathfrak{Q}(\pm w) = & \mathcal{M} w \mp nH\left(x,\pm w\right) {\left(1+\phi'^2\right)^{3/2}}\leq 0. \end{align*} Therefore \begin{align*} \mathfrak{Q}(w+m)=&\mathcal{M} (w+m)-nH(x,w+m){\left(1+\phi'^2\right)^{3/2}} \leq \mathfrak{Q} w \leq \mathfrak{Q} u. \end{align*} Moreover $u \leq w + m$ and $u(x_0)=w(x_0)+m$. By the maximum principle $u\equiv w+m$ in $\Omega_0$, which is a contradiction since $u<w+m$ on $\partial\Omega$. This proves that $u\leq w$ in $\overline{\Omega}$. Similarly we prove that $u \geq - w$ in $\Omega$. \end{proof} \begin{obs} Instead of condition \eqref{cond_H_Ricci_sup}, the proof shows that it suffices to assume that \begin{equation*} \Ricc_{x}\geq n\norm{\nabla_x H(x,\varphi(y))}-\dfrac{n^2}{n-1}\left(H(x,\varphi(y))\right)^2 \ \forall \ x\in\Omega_0, \end{equation*} where $\Omega_0$ is the biggest open subset of $\Omega$ having the unique nearest point property, and $y\in\partial\Omega$ is the nearest point to $x$. \end{obs} \subsection{A priori boundary gradient estimates}\label{sec_est_grad_bordo} In this section we use the classical idea of constructing upper and lower barriers for $u$ on $\partial\Omega$ in order to control $\nabla u$ along $\partial\Omega$. \begin{teo}\label{teo_Est_gradiente_fronteira} Let $\Omega\subset M$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^2$ and $\varphi\in\mathscr{C}^2(\overline{\Omega})$. Let $H\in\mathscr{C}^{1}\left(\overline{\Omega}\times\R\right)$ satisfying $\partial_z H\geq 0$, \begin{equation}\label{cond_H_Ricci_sup_est_grad_fronteira} \Ricc_x\geq n\sup\limits_{z\in\R}\norm{\nabla_x H(x,z)}-\dfrac{n^2}{n-1}\inf\limits_{z\in\R}\left(H(x,z)\right)^2 \ \forall \ x\in\Omega, \end{equation} and \begin{equation}\label{cond_Serrin} (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \modulo{H(y,\varphi(y))} \ \forall \ y\in\partial\Omega.
\end{equation} If $u\in\mathscr{C}^2(\Omega)\cap\mathscr{C}^1(\overline{\Omega})$ is a solution of \eqref{ProblemaP}, then \begin{equation}\label{EstGradFront} \sup\limits_{\partial\Omega}\norm{\nabla u}\leq \norm{\varphi}_1 + \displaystyle e^{C\left(1+ \norm{H}_1+ \norm{\varphi}_2\right)\left(1+\norm{\varphi}_1\right)^3\left(\norm{u}_0+\norm{\varphi}_0\right)} \end{equation} for some $C=C(n,\Omega)$. \end{teo} \begin{proof} Again, for $x\in\Omega$, we set $d(x)=\dist(x,\partial\Omega)$. Let $\tau>0$ be such that $d$ is of class $\mathscr{C}^2$ over the set of points in $\Omega$ for which $d(x)\leq\tau$. Let $\psi\in\mathscr{C}^2([0,\tau])$ be a non-negative function satisfying \begin{multicols}{3} \begin{enumerate} \item[P1.] $\psi'(t)\geq 1$, \item[P2.] $\psi''(t) \leq 0$, \item[P3.] $t\psi'(t)\leq 1$. \end{enumerate} \end{multicols} For $a<\tau$, to be fixed later on, we consider the set $$\Omega_{a}=\left\{x\in M; d(x)<a \right\} .$$ We now define $w^{\pm}=\pm \psi\circ d + \varphi$. Firstly, let us estimate $\pm\mathcal{M} w^{\pm}$. A straightforward computation yields \begin{equation}\label{eq_barreira_superior} \begin{array}{rl} &\pm \mathcal{M} w^{\pm}=\psi'W_{\pm}^2\Delta d + \psi''W_{\pm}^2-\psi''\escalar{\nabla d }{\pm \psi' \nabla d + \nabla \varphi}^2\\ &-\psi'\Hess d ( \nabla \varphi, \nabla \varphi) \mp \Hess\varphi(\pm\psi' \nabla d + \nabla \varphi,\pm\psi' \nabla d + \nabla \varphi), \end{array} \end{equation} where $ W_{\pm}=\sqrt{1+\norm{\nabla w^{\pm}}^2}=\sqrt{1+\norm{\pm \psi'\nabla d + \nabla \varphi}^2}. $ Since $\psi''\leq 0$ and $\escalar{\nabla d}{\pm\psi' \nabla d + \nabla \varphi}^2\leq \norm{\pm\psi' \nabla d + \nabla \varphi}^2$, then \begin{equation}\label{AAA} \psi''W_{\pm}^2-\psi''\escalar{\nabla d }{\pm\psi' \nabla d + \nabla \varphi}^2\leq \psi''.
\end{equation} Since $\Hess d(x)$ is a continuous bilinear form and $\psi'\geq 1$, we have \begin{equation}\label{BBB} \psi'\modulo{\Hess d ( \nabla \varphi, \nabla \varphi)} \leq \psi'^2 \norm{d}_2\norm{\varphi}_1^2. \end{equation} Note also that \begin{equation}\label{normaa} \norm{\pm\psi' \nabla d + \nabla \varphi}^2=\left(\psi'^2+2 \psi'\escalar{\pm\nabla d}{\nabla \varphi}+\norm{\nabla \varphi}^2\right) \leq \left(1+\norm{\varphi}_1\right)^2\psi'^2, \end{equation} hence \begin{equation}\label{CCC} \modulo{\Hess \varphi(\pm\psi' \nabla d + \nabla \varphi,\pm\psi' \nabla d + \nabla \varphi)} \leq \norm{\varphi}_2\left(1+\norm{\varphi}_1\right)^2\psi'^2. \end{equation} Substituting \eqref{AAA}, \eqref{BBB}, \eqref{CCC} in \eqref{eq_barreira_superior} it follows that \begin{equation}\label{est_Mwpm} \pm\mathcal{M} w^{\pm}\leq \psi' W_{\pm}^2 \Delta d + \psi''+ c \psi'^2, \end{equation} where \begin{equation}\label{constantec0} c=\norm{d}_2\norm{\varphi}_1^2+\norm{\varphi}_2\left(1+\norm{\varphi}_1\right)^2. \end{equation} Observe now that \[\pm \mathfrak{Q}_{ } w^{\pm} = \pm \mathcal{M} w^{\pm} \mp n H(x,w^{\pm})W_{\pm}^3.\] Moreover $$ \mp H(x,w^{\pm}(x)) = \mp H(x,\pm\psi(d(x))+ \varphi(x))\leq \mp H(x, \varphi(x))$$ since we are assuming that $\partial_z H\geq 0$, so $$\pm \mathfrak{Q}_{ } w^{\pm} \leq \pm \mathcal{M} w^{\pm} \mp n H(x, \varphi(x))W_{\pm}^3\leq \pm \mathcal{M} w^{\pm} + n \modulo{H(x, \varphi(x))}W_{\pm}^3.$$ Using the estimate in \eqref{est_Mwpm} we obtain \begin{equation}\label{eq_barreira_superior_Q_2} \begin{array}{r} \pm \mathfrak{Q}_{ } w^{\pm} \leq \psi'W_{\pm}^2\Delta d + \psi''+ c \psi'^2 + n \modulo{H(x, \varphi(x))}W_{\pm}^{3}. \end{array} \end{equation} Let now $y\in\partial\Omega$ be fixed and $\gamma_y(t)=\exp_{y}(tN_y)$ for $0\leq t \leq a$, where $N$ is the inner normal field to $\partial\Omega$.
Applying again lemma \ref{lema_Est_altura_cond_ricci} to $h(t)=\frac{n}{n-1} H(\gamma_y(t),\varphi(y))$, we see that $\mathcal{H}'(t)\geq 0$ for $0\leq t \leq \tau$. Then, $\mathcal{H}_{\Gamma_t}(\gamma_y(t))\geq \mathcal{H}_{\partial\Omega}(y)$ for $0\leq t\leq a$, where $\Gamma_t$ is parallel to $\partial\Omega$. Therefore, \begin{equation}\label{est_usar_hiperb} \Delta d(x) \leq \Delta d(y) \leq -n \modulo{H(y,\varphi(y))} \ \forall \ x\in\Omega_a, \end{equation} where we denote by $y=y(x)\in\partial\Omega$ the nearest point to $x$. Substituting \eqref{est_usar_hiperb} in \eqref{eq_barreira_superior_Q_2} we obtain \begin{equation}\label{Qw_medio} \begin{split} \pm \mathfrak{Q}_{ } w^{\pm} \leq & n \psi'W_{\pm}^2( \modulo{H(x,\varphi(x))} -\modulo{H(y,\varphi(y))}) \\ &+n \modulo{H(x, \varphi(x))}W_{\pm}^2 \left(W_{\pm}-\psi'\right) + \psi''+ c \psi'^2 . \end{split} \end{equation} It follows directly from \eqref{normaa} that \begin{equation}\label{est_W2} W_{\pm}^2 \leq 1 + \left(1+\norm{\varphi}_1\right)^2\psi'^2 \leq 2\left(1+\norm{\varphi}_1\right)^2\psi'^2. \end{equation} In addition $ \modulo{H(x,\varphi(x))}-\modulo{H(y,\varphi(y))}\leq h_1(1+\norm{\varphi}_1)d(x), $ where $$h_1=\sup\limits_{\Omega\times\left[-\sup\limits_{\Omega}\modulo{\varphi},\sup\limits_{\Omega}\modulo{\varphi}\right]}\norm{\nabla_{M\times\R} H(x,z)}.$$ Then, \[n \psi'W_{\pm}^2( \modulo{H(x,\varphi(x))} -\modulo{H(y,\varphi(y))} )\leq 2nh_1\left(1+\norm{\varphi}_1\right)^3 d(x)(\psi'(d(x)))^3.\] Using assumption P3 we obtain \begin{equation}\label{Termo2} \begin{array}{c} n \psi'W_{\pm}^2( \modulo{H(x,\varphi(x))} -\modulo{H(y,\varphi(y))} )\leq 2 n h_1 \left(1+\norm{\varphi}_1\right)^3 \psi'^2. \end{array} \end{equation} On the other hand, \begin{equation}\label{Wmenospsi_0} W_{\pm}-\psi'\leq 1+\norm{\pm \psi'\nabla d +\nabla \varphi} -\psi' \leq 1+\norm{\varphi}_1.
\end{equation} From \eqref{est_W2} and \eqref{Wmenospsi_0} we obtain \begin{equation}\label{Wmenospsi} n \modulo{H(x, \varphi(x))} \left(W_{\pm}-\psi'\right)W_{\pm}^2\leq 2 n h_0\left(1+\norm{\varphi}_1\right)^3\psi'^2, \end{equation} where $$h_0=\sup\limits_{\Omega\times\left[-\sup\limits_{\Omega}\modulo{\varphi},\sup\limits_{\Omega}\modulo{\varphi}\right]}\modulo{H(x,z)}.$$ Using \eqref{Termo2} and \eqref{Wmenospsi} in \eqref{Qw_medio} we get $$ \pm \mathfrak{Q}_{ } w^{\pm} \leq \left(c+2n\norm{H}_{1}\left(1+\norm{\varphi}_1\right)^3\right)\psi'^2+\psi'',$$ where we are using the notation $\norm{H}_1=h_0+h_1$. Recalling the expression for $c$ given in \eqref{constantec0}, after some algebraic computation we infer that \begin{align*} c+2n\norm{H}_{1}\left(1+\norm{\varphi}_1\right)^3 < C \left(1+\norm{\varphi}_2 + \norm{H}_{1}\right)\left(1+\norm{\varphi}_1\right)^3, \end{align*} where \begin{equation}\label{C_kappa} C= 2n\left(1+\norm{d}_2+1/\tau\right). \end{equation} Choosing \begin{equation}\label{nu} \nu= C \left(1+ \norm{H}_1+ \norm{\varphi}_2\right)\left(1+\norm{\varphi}_1\right)^3 \end{equation} we define $\psi$ by $ \psi(t)=\dfrac{1}{\nu}\log(1+kt). $ So, \begin{equation}\label{dpsi} \psi'(t)=\dfrac{k}{\nu(1+kt)} \end{equation} and \begin{equation}\label{ddpsi} \psi''(t)=-\dfrac{k^2}{\nu(1+kt)^2}, \end{equation} hence $ \pm \mathfrak{Q} w^{\pm} < \nu\psi'^2+\psi''=0 \ \mbox{ in } \ \Omega_a. $ Besides, $$ t\psi'(t)=\dfrac{kt}{\nu(1+kt)}\leq \dfrac{1}{\nu}<1,$$ which is property P3. From \eqref{ddpsi} we see that property P2 is also satisfied. This implies that $\psi'(t)>\psi'(a)$ for all $t\in[0,a)$ as well, thus property P1 is ensured provided that \begin{equation}\label{paraP1} \psi'(a)=\dfrac{k}{\nu(1+ka)}=1.
\end{equation} Furthermore, if we choose \begin{equation}\label{esc_psia_curv_media} \psi(a) = \dfrac{1}{\nu}\log(1+ka) = \norm{u}_0+\norm{\varphi}_0, \end{equation} we would have for each $x\in \partial\Omega_a\setminus\partial\Omega$ that $$\pm w^{\pm}(x)=\psi(a)\pm \varphi(x)=\norm{u}_0+\norm{\varphi}_0 \pm \varphi(x) \geq \pm u(x).$$ By combining \eqref{paraP1} and \eqref{esc_psia_curv_media} we see that \begin{equation}\label{cte_k} k=\nu\displaystyle e^{\nu(\norm{u}_0+\norm{\varphi}_0)} \end{equation} and, therefore, $ a=\dfrac{e^{\nu(\norm{u}_0+\norm{\varphi}_0)}-1}{\nu\displaystyle e^{\nu(\norm{u}_0+\norm{\varphi}_0)}}. $ Note also that $a<\frac{1}{\nu}<\tau$ as required. Finally, if $x\in\partial\Omega$, then $w^{\pm}(x)=\pm \psi(0)+\varphi(x)=u(x)$. By the maximum principle we can conclude that $w^-\leq u \leq w^+ $ in $\Omega_a$, thus $$ -\psi\circ d \leq u - \varphi \leq \psi\circ d \mbox{ in } \Omega_a.$$ Recall that $$ -\psi\circ d = u - \varphi = \psi\circ d =0 \mbox{ in } \partial\Omega.$$ Consequently, for $y\in\partial\Omega$ and $0 \leq t \leq a$, we have that $$-\psi(t) + \psi(0) \leq (u-\varphi) (\gamma_y(t)) - (u-\varphi)(\gamma_y(0)) \leq \psi(t)-\psi(0).$$ Dividing by $t>0$ and passing to the limit as $t$ goes to zero we infer that \begin{equation}\label{est_grad_fronteira_1} \modulo{\escalar{\nabla u(y)}{N}}\leq \modulo{\escalar{\nabla \varphi(y)}{N}} + \psi'(0). \end{equation} As $u=\varphi$ on $\partial\Omega$, using \eqref{est_grad_fronteira_1} we derive \begin{align*} \norm{\nabla u(y)}\leq&\norm{\nabla \varphi(y)} + \psi'(0), \end{align*} which yields the desired estimate.
\end{proof} \begin{obs} It suffices to assume in the statement of theorem \ref{teo_Est_gradiente_fronteira} that \begin{equation*} \Ricc_{x}\geq n\norm{\nabla_x H(x,\varphi(y))}-\dfrac{n^2}{n-1}\left(H(x,\varphi(y))\right)^2 \ \forall \ x\in\Omega_0, \end{equation*} where $\Omega_0$ is the biggest open subset of $\Omega$ having the unique nearest point property, and $y\in\partial\Omega$ is the nearest point to $x$. \end{obs} \bigskip Now, we observe that the combination of assumption \eqref{cond_H_Ricci_sup_est_grad_fronteira} with the Serrin condition \eqref{cond_Serrin} ensures that the mean curvature of the parallel hypersurfaces ${\Gamma_t}$ in $\Omega$ increases along the inner normal geodesics. On the other hand, this behavior of $\mathcal{H}_{\Gamma_t}$ is indeed guaranteed by the geometric condition \begin{equation}\label{eq_est_grad_curv_bordo_Ricc_obs} \Ricc_{\gamma_y(t)}(\gamma_y'(t))\geq -(n-1)\left(\mathcal{H}_{\partial\Omega}(y)\right)^2 \ \forall \ y\in\partial\Omega. \end{equation} This can be seen by applying lemma \ref{lema_Est_altura_cond_ricci} to the constant function $h(t)=\mathcal{H}_{\partial\Omega}(y)$ (see also {\cite[Th. 1 p. 232]{Dajczer2008}}). Therefore, if \eqref{eq_est_grad_curv_bordo_Ricc_obs} holds, we do not need assumption \eqref{cond_H_Ricci_sup_est_grad_fronteira} in the statement of theorem \ref{teo_Est_gradiente_fronteira}. So, we are able to establish the following result for later reference. \begin{teo}\label{teo_Est_gradiente_fronteira_paraGalvez} Suppose that $\Ricc_x \geq -(n-1)c^2$ for each $x\in M$, where $c>0$. Let $\Omega\subset M$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^2$ such that $\mathcal{H}_{\partial\Omega} \geq c$ and $\varphi\in\mathscr{C}^2(\overline{\Omega})$.
Let $H\in\mathscr{C}^{1}\left(\overline{\Omega}\times\R\right)$ satisfying $\partial_z H\geq 0$ and \begin{equation}\label{cond_Serrin_2} (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \modulo{H(y,\varphi(y))} \ \forall \ y\in\partial\Omega. \end{equation} If $u\in\mathscr{C}^2(\Omega)\cap\mathscr{C}^1(\overline{\Omega})$ is a solution of \eqref{ProblemaP}, then \begin{equation}\label{EstGradFront} \sup\limits_{\partial\Omega}\norm{\nabla u}\leq \norm{\varphi}_1 + \displaystyle e^{C\left(1+ \norm{H}_1+ \norm{\varphi}_2\right)\left(1+\norm{\varphi}_1\right)^3\left(\norm{u}_0+\norm{\varphi}_0\right)} \end{equation} for some $C=C(n,\Omega)$. \end{teo} \begin{proof} By the previous discussion we see that $$\Delta d(x) \leq \Delta d(y) \ \forall \ x\in\Omega_a,$$ where $y\in\partial\Omega$ is the nearest point to $x$. The rest of the proof is the same as before. \end{proof} Now we consider a mean convex domain $\Omega$ in the hyperbolic space $\HH^n$ and let $y\in\partial\Omega$. If $\lambda_i(t)$ represents the $i$-th principal curvature of $\Gamma_t$ at $\gamma_y(t)$, then (see \cite[p. 17]{marcos}) \begin{equation}\label{curv_princ_paral_explicito} \lambda_i(t)=\dfrac{-\tanh t+\lambda_i(0)}{1- \lambda_i(0)\tanh t }, \end{equation} hence \begin{equation}\label{der_curv_princ_hiperb} \lambda_i'(t)=\dfrac{\sech^2(t)\left(\left(\lambda_i(0)\right)^2-1\right)}{\left(1-\lambda_i(0)\tanh t\right)^2}. \end{equation} Thus, $\mathcal{H}_{\Gamma_t}(\gamma_y(t))$ decreases if $\modulo{\lambda_i} < 1$ for all $1\leq i \leq n$. In any case ($\modulo{\lambda_i}<1$ or $\modulo{\lambda_i}\geq 1$), we can choose $\tau$ small enough such that $$ \modulo{\mathcal{H}_{\partial\Omega}(y)-\mathcal{H}_{d(x)}(x)}\leq \kappa d(x)$$ for some $\kappa>0$ depending on $\Omega$. Using this fact we are able to deduce the following result.
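The sign analysis of \eqref{der_curv_princ_hiperb} can be checked numerically; the following sketch (the values of $t$ and $\lambda_i(0)$ are arbitrary illustrations) compares \eqref{der_curv_princ_hiperb} with a finite-difference derivative of \eqref{curv_princ_paral_explicito}:

```python
import math

# Illustrative check of (curv_princ_paral_explicito) and
# (der_curv_princ_hiperb); t and lam0 = lambda_i(0) are arbitrary.
def lam(t, lam0):
    return (-math.tanh(t) + lam0) / (1 - lam0 * math.tanh(t))

def dlam(t, lam0):
    sech2 = 1.0 / math.cosh(t) ** 2
    return sech2 * (lam0 ** 2 - 1) / (1 - lam0 * math.tanh(t)) ** 2

# the closed-form derivative agrees with a central finite difference
t, lam0, h = 0.2, 0.5, 1e-6
fd = (lam(t + h, lam0) - lam(t - h, lam0)) / (2 * h)
print(abs(fd - dlam(t, lam0)) < 1e-6)  # True

# sign behaviour: lambda_i decreases when |lambda_i(0)| < 1 ...
print(dlam(t, 0.5) < 0)   # True
# ... and increases when |lambda_i(0)| > 1 (t small, so the
# denominator 1 - lambda_i(0) tanh t stays positive)
print(dlam(t, 1.5) > 0)   # True
```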
\begin{teo}\label{teo_Est_gradiente_fronteira_hiperbólico} Let $\Omega\subset \HH^n$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^2$ and $\varphi\in\mathscr{C}^2(\overline{\Omega})$. Let $H\in\mathscr{C}^{1}\left(\overline{\Omega}\times\R\right)$ satisfying $\partial_z H\geq 0$, and $$(n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \modulo{H(y,\varphi(y))} \ \forall \ y\in\partial\Omega.$$ If $u\in\mathscr{C}^2(\Omega)\cap\mathscr{C}^1(\overline{\Omega})$ is a solution of \eqref{ProblemaP}, then \begin{equation}\label{EstGradFront} \sup\limits_{\partial\Omega}\norm{\nabla u}\leq \norm{\varphi}_1 + \displaystyle e^{C\left(1+ \norm{H}_1+ \norm{\varphi}_2\right)\left(1+\norm{\varphi}_1\right)^3\left(\norm{u}_0+\norm{\varphi}_0\right)} \end{equation} for some $C=C(n,\Omega)$. \end{teo} \begin{proof} The proof follows the steps of the proof of theorem \ref{teo_Est_gradiente_fronteira} with the difference that we need to replace relation \eqref{est_usar_hiperb} by $$\Delta d(x)\leq \Delta d(y) +(n-1)\kappa d(x)\leq -n\modulo{H(y,\varphi(y))} + n\kappa d(x).$$ In this case $C=2n\left(1+\kappa + \norm{d}_2+1/\tau\right)$ instead of \eqref{C_kappa}. \end{proof} \subsection{A priori global gradient estimate} In order to obtain an a priori global gradient estimate we use techniques introduced by Caffarelli-Nirenberg-Spruck \cite[p. 51]{CNSpruck} in the Euclidean context. See also applications in the works of Nelli-Sa Earp \cite[Lemma 3.1 p. 4]{NelliRicardo} and Barbosa-Sa Earp \cite[Lemma 5.2 p. 62]{BarbosaRicardo1998} in the hyperbolic setting. We are able to prove the following theorem. \begin{teo}\label{teo_Est_global_gradiente} Let $\Omega\subset M$ be a bounded domain with $\partial\Omega$ of class $\mathscr{C}^2$.
Let $u\in\mathscr{C}^3(\Omega)\cap\mathscr{C}^1(\overline{\Omega})$ be a solution of \eqref{operador_minimo_1}, where $H\in\mathscr{C}^{1}\left(\Omega\times\left[-\sup\limits_{\overline{\Omega}}\modulo{u},\sup\limits_{\overline{\Omega}}\modulo{u}\right]\right)$ satisfies $\partial_z H\geq 0$. Then $ \sup_{\Omega}\norm{\nabla u(x)}\leq\left(\sqrt{3}+\sup\limits_{\partial\Omega}\norm{\nabla u}\right)\exp\left(2\sup\limits_{\Omega}\modulo{u}\left(1+8n\left(\norm{H}_1+R\right)\right)\right), $ where $R\geq 0$ is such that $\Ricc_x\geq -R$ for each $x\in\Omega$. \end{teo} \begin{proof} Let $w(x)=\norm{\nabla u(x)}e^{Au(x)}$, $A\geq 1$. Suppose that $w$ attains a maximum at $x_0\in\overline{\Omega}$. If $x_0\in\partial\Omega$, then $$w(x)\leq w(x_0) =\norm{\nabla u(x_0)}e^{Au(x_0)}.$$ So, \begin{equation}\label{est_global_1} \sup_{\Omega}\norm{\nabla u(x)}\leq\sup_{\partial\Omega}\norm{\nabla u}e^{2A\sup\limits_{\Omega}\modulo{u}}. \end{equation} Suppose now that $x_0\in\Omega$ and that $\nabla u(x_0)\neq 0$ (if $\nabla u(x_0)=0$, then $w$ vanishes at its maximum, so $\nabla u\equiv 0$ and the estimate is trivial). Let us define normal coordinates at $x_0$ in such a way that $\frac{\partial}{\partial x_1}\big|_{x_0}=\frac{\nabla u(x_0)}{\norm{\nabla u(x_0)}}$. Then, \begin{equation}\label{der_u_x0} \partial_k u(x_0)=\escalar{\ds\mathsmaller{\frac{\partial}{\partial x_k}}\big|_{x_0}}{\nabla u(x_0)} =\norm{\nabla u(x_0)}\delta_{k1}. \end{equation} Denoting by $\sigma$ the metric in this coordinate system we recall that \begin{align} \sigma_{ij}(x_0)=\sigma^{ij}(x_0)=\delta_{ij}\label{sigmaij}\\ \partial_k \sigma_{ij}(x_0)=\partial_k \sigma^{ij}(x_0) =0\label{deriv_sigmaij}\\ \Gamma_{ij}^k(x_0)=0.\label{simb_Chris} \end{align} Also $\nabla u(x) = \displaystyle\sum_{i}u^i \ds\mathsmaller{\frac{\partial}{\partial x_i}},$ where \begin{equation}\label{exp_local_grad_1} u^i=\sum_{j=1}^n\sigma^{ij} \partial_j u. \end{equation} So, \begin{equation}\label{exp_local_norma_grad} \norm{\nabla u(x)}^2=\sum_{i,j=1}^n\sigma^{ij}\partial_i u\partial_j u.
\end{equation} Observe now that the function $\tilde{w}(x)=\ln w(x)=Au(x)+\ln \norm{\nabla u(x)}$ also attains a maximum at $x_0$. Therefore, for each $1\leq k \leq n$, we have the relations $\partial_k \tilde{w}(x_0)=0$ and $\partial_{kk} \tilde{w}(x_0)\leq 0.$ Thus $$\partial_k \tilde{w}(x)= A\partial_k u(x) + \dfrac{\partial_k \left(\norm{\nabla u}^2\right)(x)}{2\norm{\nabla u(x)}^2}, $$ $$ \partial_{kk} \tilde{w}(x)=A\partial_{kk} u(x) +\dfrac{1}{2} \partial_k\left({\norm{\nabla u}^{-2}}\right)(x)\partial_k \left(\norm{\nabla u}^2\right)(x)+\dfrac{\partial_{kk} \left(\norm{\nabla u}^2\right)(x)}{2\norm{\nabla u(x)}^2}. $$ Since $$\partial_k\left({\norm{\nabla u}^{-2}}\right)=\partial_k\left({\norm{\nabla u}^{2}}\right)^{-1}=-\left({\norm{\nabla u}^{2}}\right)^{-2} \partial_k\left({\norm{\nabla u}^{2}}\right),$$ then $$ \partial_{kk} \tilde{w}(x) = A\partial_{kk} u(x) - \dfrac{\left(\partial_k \left(\norm{\nabla u}^2\right)(x)\right)^2}{2\norm{\nabla u(x)}^{4}} +\dfrac{\partial_{kk} \left(\norm{\nabla u}^2\right)(x)}{2\norm{\nabla u(x)}^2}.$$ Hence, \begin{equation}\label{derk} A\partial_k u(x_0) +\dfrac{\partial_k \left(\norm{\nabla u}^2\right)(x_0)}{2\norm{\nabla u(x_0)}^2} =0, \end{equation} \noindent and \begin{equation}\label{derkk} A\partial_{kk} u(x_0) - \dfrac{\left(\partial_k \left(\norm{\nabla u}^2\right)(x_0)\right)^2}{2\norm{\nabla u(x_0)}^{4}}+\dfrac{\partial_{kk} \left(\norm{\nabla u}^2\right)(x_0)}{2\norm{\nabla u(x_0)}^2}\leq 0.
\end{equation} From \eqref{exp_local_norma_grad} it follows that \begin{equation}\label{Dknorma} \partial_k \left(\norm{\nabla u}^2\right)= \displaystyle\sum_{i,j=1}^n \left( \left(\partial_k\sigma^{ij}\right)\partial_i u \partial_j u+2 \sigma^{ij}\partial_{ki} u \partial_j u\right). \end{equation} From \eqref{der_u_x0}, \eqref{sigmaij} and \eqref{deriv_sigmaij} we obtain $$\partial_k \left(\norm{\nabla u}^2\right)(x_0)= 2\displaystyle\sum_{i,j=1}^n \delta_{ij}\partial_{ki} u (\norm{\nabla u(x_0)}\delta_{j1}), $$ so \begin{equation}\label{Dknorm} \partial_k \left(\norm{\nabla u}^2\right)(x_0)=2 \norm{\nabla u(x_0)}\partial_{1k} u(x_0). \end{equation}\vspace*{.1cm} \noindent Substituting \eqref{der_u_x0} and \eqref{Dknorm} in \eqref{derk} we derive $$ A\norm{\nabla u (x_0)}\delta_{k1} + \dfrac{2 \norm{\nabla u(x_0)}\partial_{1k} u(x_0)}{2\norm{\nabla u(x_0)}^2} =0,$$ thus, \begin{equation}\label{derk_2} \partial_{1k} u(x_0)=-A\norm{\nabla u (x_0)}^2\delta_{k1}. \end{equation}\vspace*{.1cm} \noindent Substituting also \eqref{derk_2} in \eqref{Dknorm} we obtain \begin{equation}\label{Dknorm_2} \partial_k \left(\norm{\nabla u}^2\right)(x_0)=-2A\norm{\nabla u (x_0)}^3\delta_{k1}. \end{equation} On the other hand, taking into account the expression \eqref{Dknorma} it follows that \begin{align*} \partial_{kk} \left(\norm{\nabla u}^2\right)(x) =&\displaystyle\sum_{i,j=1}^n \left(\left(\partial_{kk}\sigma^{ij}\right)\partial_i u \partial_j u + \left(\partial_k\sigma^{ij}\right)\partial_k\left(\partial_i u \partial_j u\right)\right.\\ &\left.+2\left( \left(\partial_k\sigma^{ij}\right) \partial_{ki} u\partial_j u + \sigma^{ij} \partial_{kki} u \partial_j u +\sigma^{ij}\partial_{ki} u\partial_{kj} u\right)\right).
\end{align*} From \eqref{der_u_x0}, \eqref{sigmaij} and \eqref{deriv_sigmaij} we have \begin{equation}\label{Dkk_2} \begin{split} \partial_{kk} \left(\norm{\nabla u}^2\right)(x_0)=\norm{\nabla u(x_0)}^2 \left(\partial_{kk}\sigma^{11}\right) +2\norm{\nabla u(x_0)}\partial_{kk1} u(x_0) \\ +2 \displaystyle\sum_{i=1}^n (\partial_{ki} u(x_0))^2. \end{split} \end{equation} Differentiating the equation $\sigma\circ\sigma^{-1}=Id$ twice with respect to $x_k$ and evaluating at $x_0$ we see that $\partial_{kk} \sigma^{-1}(x_0)=-\partial_{kk} \sigma(x_0)$. Besides, \begin{align*} \partial_{kk} \sigma_{11}=&\ds\mathsmaller{\frac{\partial}{\partial x_k}} \ds\mathsmaller{\frac{\partial}{\partial x_k}}\escalar{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}=2\ds\mathsmaller{\frac{\partial}{\partial x_k}}\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\\ =&2\left(\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}} + \norm{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}^2 \right). \end{align*} Recalling \eqref{simb_Chris} we then have \begin{equation}\label{Dkk_sigma} \partial_{kk}\sigma^{11}(x_0)=-\partial_{kk}\sigma_{11}(x_0)=-2\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}.
\end{equation} Substituting \eqref{Dkk_sigma} in \eqref{Dkk_2} we can conclude that \begin{equation}\label{dkknorma_2} \begin{split} \partial_{kk} \left(\norm{\nabla u}^2\right)(x_0)=2\Big(&-\norm{\nabla u (x_0)}^2\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}} \ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\\ & +\norm{\nabla u(x_0)}\displaystyle \partial_{kk1} u(x_0) +\displaystyle\sum_{i=1}^n\left(\partial_{ki} u(x_0)\right)^2\Big). \end{split} \end{equation} Using expressions \eqref{Dknorm_2} and \eqref{dkknorma_2} in \eqref{derkk} we verify that \[ \begin{split} A\partial_{kk} u(x_0)-2A^2\norm{\nabla u (x_0)}^2\delta_{k1}+\displaystyle \dfrac{\partial_{kk1} u(x_0)}{\norm{\nabla u(x_0)}}-\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}} \ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}&\\ +\dfrac{\displaystyle\sum_{i=1}^n\left(\partial_{ki} u(x_0)\right)^2}{\norm{\nabla u (x_0)}^2} &\leq 0.
\end{split} \] It follows from \eqref{derk_2} that, for $k=1$, \[\begin{split} -A^2\norm{\nabla u (x_0)}^2-2A^2\norm{\nabla u (x_0)}^2+\displaystyle \dfrac{\partial_{111} u(x_0)}{\norm{\nabla u(x_0)}}-\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}} \ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}&\\ +\dfrac{\displaystyle\sum_{i=1}^n\left(-A\norm{\nabla u(x_0)}^2\right)^2\delta_{i1}}{\norm{\nabla u (x_0)}^2}& \leq 0, \end{split}\] then, \begin{equation}\label{est_Dumumum} \partial_{111} u(x_0)\leq 2A^2\norm{\nabla u (x_0)}^3+\norm{\nabla u(x_0)}\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}} \ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}. \end{equation} If $k>1$, then \begin{align*} A\partial_{kk} u(x_0)+\displaystyle \dfrac{\partial_{kk1} u(x_0)}{\norm{\nabla u(x_0)}}-\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}} \ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\leq-\dfrac{\displaystyle\sum_{i=1}^n\left(\partial_{ki} u(x_0)\right)^2}{\norm{\nabla u (x_0)}^2} \leq 0, \end{align*} so,{\small \begin{equation}\label{est_Dkkum} \partial_{kk1} u(x_0)\leq -A\partial_{kk} u(x_0)\norm{\nabla u (x_0)}+\norm{\nabla u(x_0)}\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_k}}} \ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}.
\end{equation}} We recall that \begin{equation}\label{hessianohij_coord} \nabla^2_{ij} u(x)=\Hess u(x){\left(\ds\mathsmaller{\frac{\partial}{\partial x_i}},\ds\mathsmaller{\frac{\partial}{\partial x_j}}\right)}=\partial_{ij} u-\displaystyle\sum_{k=1}^n\Gamma_{ij}^k \partial_k u, \end{equation} \begin{equation}\label{laplacianof} \Delta u(x)=\tr \left(X\longrightarrow\conex_X\nabla u\right)=\displaystyle\sum_{ij}\sigma^{ij}\nabla^2_{ij} u (x). \end{equation} \medskip \noindent From \eqref{exp_local_grad_1}, \eqref{hessianohij_coord} and \eqref{laplacianof} we have \begin{align} u^i(x_0)&=\partial_i u(x_0)=\norm{\nabla u(x_0)}\delta_{i1}\label{ui}\\[1em] \nabla^2_{ij} u(x_0)&= \partial_{ij} u(x_0)\label{hess_x0}\\[1em] \Delta u(x_0)&= \displaystyle\sum_{i=1}^n \partial_{ii} u(x_0)\label{laplac_x0}. \end{align} In the sequel we evaluate the mean curvature equation \eqref{operador_minimo_1_coord} at $x_0$. \noindent Substituting these expressions in \eqref{operador_minimo_1_coord} and using \eqref{derk_2}, we see that \begin{align*} nH_0 W_0^3=& W_0^2 \Delta u(x_0) - \displaystyle\sum_{i,j=1}^n \left(\norm{\nabla u(x_0)}\delta_{i1} \right)\left(\norm{\nabla u(x_0)}\delta_{j1} \right) \partial_{ij} u\\[1em] =&W_0^2 \Delta u(x_0)-\norm{\nabla u(x_0)}^2\partial_{11} u(x_0)\\[1em] =&W_0^2\displaystyle\sum_{i>1}\partial_{ii} u(x_0) + \partial_{11} u(x_0)\\[1em] =&W_0^2\displaystyle\sum_{i>1}\partial_{ii} u(x_0) -A \norm{\nabla u(x_0)}^2, \end{align*} where $H_0=H(x_0,u(x_0))$ and $W_0=\sqrt{1+\norm{\nabla u(x_0)}^2}$. Therefore, \begin{equation}\label{equ_curv_media_MxR_Delta_x0} \displaystyle\sum_{i>1}\partial_{ii} u(x_0) = nH_0W_0+\dfrac{A\norm{\nabla u(x_0)}^2}{W_0^2}. \end{equation} Finally, let us differentiate \eqref{operador_minimo_1_coord} with respect to $x_1$.
We have \begin{equation}\label{der_MCE} \begin{split} \partial_1 \left(W^2\right)\Delta u+W^2(\partial_1\Delta u)-2\displaystyle\sum_{i,j=1}^nu^i\left(\partial_1 u^j\right) \nabla^2_{ij} u -\sum_{i,j=1}^n u^i u^j \partial_1(\nabla^2_{ij} u) \\ =n(\partial_1 H+\partial_z H\partial_1 u)W^3 + nH\partial_1\left(W^3\right). \end{split} \end{equation} Let us calculate the derivatives involved in this equation and evaluate them at $x_0$. Since \eqref{Dknorm_2} holds we deduce \begin{equation}\label{dW2} \partial_1\left(W^2\right)(x_0)=\partial_1\left(\norm{\nabla u}^2\right)(x_0)=-2A\norm{\nabla u (x_0)}^3, \end{equation} \begin{equation}\label{dW3} \partial_1\left(W^3\right)(x_0)=\frac{3}{2}W_0\partial_1 \left(W^2\right)(x_0)=-3AW_0\norm{\nabla u (x_0)}^3. \end{equation} Using \eqref{exp_local_grad_1}, we have $$\partial_1 u^i=\partial_1 \displaystyle\sum_{j=1}^n\sigma^{ij}\partial_j u=\displaystyle\sum_{j=1}^n\left(\left(\partial_1\sigma^{ij}\right)\partial_j u + \sigma^{ij}\partial_{1j} u\right).$$ Using now \eqref{derk_2} we obtain \begin{equation}\label{dui} \partial_1 u^i(x_0)=\partial_{1i} u(x_0) =-A\norm{\nabla u(x_0)}^2\delta_{i1}. \end{equation} On the other hand, from \eqref{hessianohij_coord} we deduce \begin{align*} \partial_1(\nabla^2_{ij} u)(x) =&\partial_{1ij} u(x) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}} \nabla u}{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_j}}}-\escalar{\nabla u(x)}{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_j}}}. \end{align*} Hence, \begin{equation}\label{DumHessiano} \partial_1(\nabla^2_{ij} u)(x_0)=\partial_{1ij} u(x_0) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_j}}}{\nabla u(x_0)}.
\end{equation} Finally, it follows from \eqref{laplacianof} that $$\partial_1 \Delta u(x)= \displaystyle\sum_{ij}\left(\left(\partial_1 \sigma^{ij}\right)\nabla^2_{ij} u (x) +\sigma^{ij}\partial_1\left(\nabla^2_{ij} u (x)\right)\right).$$ From \eqref{DumHessiano} we also have \begin{equation}\label{DumLaplaciano} \partial_1(\Delta u)(x_0)=\displaystyle\sum_{i=1}^n\left(\partial_{1ii} u(x_0) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\nabla u(x_0)}\right). \end{equation} \bigskip Substituting \eqref{ui}, \eqref{hess_x0}, \eqref{dW2}, \eqref{dW3}, \eqref{dui}, \eqref{DumHessiano} and \eqref{DumLaplaciano} in \eqref{der_MCE} we obtain \begin{align*} &n\partial_1 H(x_0) W_0^3 + n \partial_z H(x_0)\norm{\nabla u(x_0)} W_0^3-3nA H_0 W_0\norm{\nabla u(x_0)}^3\\[1em] =&-2A\norm{\nabla u(x_0)}^3\Delta u(x_0)+W_0^2\displaystyle\sum_{i=1}^n\left(\partial_{1ii} u(x_0) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\nabla u(x_0)}\right)\\ &+2A\norm{\nabla u(x_0)}^3\partial_{11} u(x_0)\\ &-\norm{\nabla u(x_0)}^2\left(\partial_{111} u(x_0) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\nabla u(x_0)}\right)\\[2em] =&-2A\norm{\nabla u(x_0)}^3\left(\Delta u(x_0)-\partial_{11} u(x_0)\right)\\ &+W_0^2\displaystyle\sum_{i>1}\left(\partial_{1ii} u(x_0) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\nabla u(x_0)}\right)\\ &+W_0^2\left(\partial_{111} u(x_0) -\norm{\nabla u(x_0)} \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial
x_1}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\right)\\ &-\norm{\nabla u(x_0)}^2\left(\partial_{111} u(x_0) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\nabla u(x_0)}\right)\\[2em] =&-2A\norm{\nabla u(x_0)}^3\displaystyle\sum_{i>1}\partial_{ii} u(x_0)\\ &+W_0^2\displaystyle\sum_{i>1}\left(\partial_{1ii} u(x_0) - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\nabla u(x_0)}\right)\\ &+\partial_{111} u(x_0) -\norm{\nabla u(x_0)} \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}. \end{align*} Using \eqref{est_Dumumum}, \eqref{est_Dkkum} and \eqref{equ_curv_media_MxR_Delta_x0} we obtain \begin{align*} &n\partial_1 H(x_0) W_0^3 + n \partial_z H(x_0)\norm{\nabla u(x_0)} W_0^3-3nA H_0 W_0\norm{\nabla u(x_0)}^3\\[1em] \leq&-2A\norm{\nabla u(x_0)}^3\displaystyle\sum_{i>1}\partial_{ii} u(x_0)\\ &+W_0^2\displaystyle\sum_{i>1}\left(-A\partial_{ii} u(x_0)\norm{\nabla u(x_0)} +\norm{\nabla u(x_0)}\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\right.\\ &\pushright{\left.
- \norm{\nabla u(x_0)}\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\right) \ \ \ \ \ \ \ \ \ }\\ &+2A^2\norm{\nabla u(x_0)}^3 +\norm{\nabla u(x_0)} \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}} \\ &\pushright{{-\norm{\nabla u(x_0)} \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\ds\mathsmaller{\frac{\partial}{\partial x_1}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\\[2em] =&-2A\norm{\nabla u(x_0)}^3\displaystyle\sum_{i>1}\partial_{ii} u(x_0)\\ &-A\norm{\nabla u(x_0)}W_0^2\displaystyle\sum_{i>1}\partial_{ii} u(x_0) +2A^2\norm{\nabla u(x_0)}^3 \\ &+\norm{\nabla u(x_0)}W_0^2\displaystyle\sum_{i>1}\left(\escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}} - \escalar{\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\conex_{\ds\mathsmaller{\frac{\partial}{\partial x_i}}}\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\right)\\[2em] =&\left(-2A\norm{\nabla u(x_0)}^3-A\norm{\nabla u(x_0)}W_0^2\right)\displaystyle\sum_{i>1}\partial_{ii} u(x_0)\\ & +2A^2\norm{\nabla u(x_0)}^3 +\norm{\nabla u(x_0)}W_0^2\displaystyle\sum_{i>1}\escalar{R\left(\ds\mathsmaller{\frac{\partial}{\partial x_i}},\ds\mathsmaller{\frac{\partial}{\partial x_1}}\right)\ds\mathsmaller{\frac{\partial}{\partial x_i}}}{\ds\mathsmaller{\frac{\partial}{\partial x_1}}}\\[2em] \leq&-A\norm{\nabla u(x_0)}\left(1+3\norm{\nabla 
u(x_0)}^2\right)\left(nH_0W_0+\dfrac{A\norm{\nabla u(x_0)}^2}{W_0^2} \right)\\ & +2A^2\norm{\nabla u(x_0)}^3 -\norm{\nabla u(x_0)}W_0^2\Ricc_{x_0}\left(\ds\mathsmaller{\frac{\partial}{\partial x_1}}\right)\\[2em] =&-A\norm{\nabla u(x_0)}nH_0W_0\left(1+3\norm{\nabla u(x_0)}^2\right) -\dfrac{A^2\norm{\nabla u(x_0)}^3}{W_0^2}\left(1+3\norm{\nabla u(x_0)}^2\right) \\ & +2A^2\norm{\nabla u(x_0)}^3 -\norm{\nabla u(x_0)}W_0^2\Ricc_{x_0}\left(\ds\mathsmaller{\frac{\partial}{\partial x_1}}\right). \end{align*} Since $\partial_z H\geq 0$ we have \begin{align*} &n\partial_1 HW_0^3\\[1em] \leq &A n H_0 W_0\norm{\nabla u(x_0)}\left(3\norm{\nabla u(x_0)}^2-1-3\norm{\nabla u(x_0)}^2\right)\\ &+\dfrac{A^2\norm{\nabla u(x_0)}^3}{W_0^2}\left(2W_0^2-1-3\norm{\nabla u(x_0)}^2\right) -\norm{\nabla u(x_0)}W_0^2\Ricc_{x_0}\left(\ds\mathsmaller{\frac{\partial}{\partial x_1}}\right)\\[2em] =&-A n H_0 W_0\norm{\nabla u(x_0)}+\dfrac{A^2\norm{\nabla u(x_0)}^3}{W_0^2}\left(1-\norm{\nabla u(x_0)}^2\right)\\ & -\norm{\nabla u(x_0)}W_0^2\Ricc_{x_0}\left(\ds\mathsmaller{\frac{\partial}{\partial x_1}}\right). \end{align*} \bigskip Let \begin{align*} h_0=&\sup\limits_{\Omega\times\left[-\sup\limits_{\Omega}\modulo{u},\sup\limits_{\Omega}\modulo{u}\right]}\modulo{H}\\ h_1=&\sup\limits_{\Omega\times\left[-\sup\limits_{\Omega}\modulo{u},\sup\limits_{\Omega}\modulo{u}\right]}\left(\norm{\nabla_x H}+\partial_z H\right). \end{align*} and $R\geq 0$ such that $-\Ricc\leq R$ in $\Omega$. Then \begin{align*} \dfrac{A^2\norm{\nabla u(x_0)}^3}{W_0^2}\left(\norm{\nabla u(x_0)}^2-1\right) \leq A n h_0 W_0\norm{\nabla u(x_0)} +\norm{\nabla u(x_0)}W_0^2 R + n h_1 W_0^3. 
\end{align*} Dividing by $W_0^3$ it follows that \begin{align*} \dfrac{A^2\norm{\nabla u(x_0)}^3}{W_0^5}\left(\norm{\nabla u(x_0)}^2-1\right) &\leq A n h_0 \dfrac{\norm{\nabla u(x_0)}}{W_0^2} + n h_1 +\dfrac{\norm{\nabla u(x_0)}}{W_0} R, \\ &\leq A n h_0 + n h_1 + R, \tag{*} \\ &\leq A n \left( h_0 + h_1 + R \right) \tag{**} \end{align*} where for $(*)$ we used the fact that $W_0^2> W_0 >\norm{\nabla u(x_0)}$, and for $(**)$ that $A, n\geq 1$. Setting $H_1= h_0+h_1$ and dividing by $A^2$ we obtain $$ \dfrac{\norm{\nabla u(x_0)}^3}{W_0^5}\left(\norm{\nabla u(x_0)}^2-1\right)< \dfrac{n}{A} \left({H}_1+R\right). $$ We can suppose that $\norm{\nabla u(x_0)}>1$. Since $$W_0^3=\left(1+\norm{\nabla u(x_0)}^2\right)^{3/2}<\left(2\norm{\nabla u(x_0)}^2\right)^{3/2} <4\norm{\nabla u(x_0)}^3, $$ we see that $$ \dfrac{\norm{\nabla u(x_0)}^3}{W_0^3}>\dfrac{1}{4}.$$ Then, $$\dfrac{1}{4}\dfrac{\norm{\nabla u(x_0)}^2-1}{W_0^2}<\dfrac{\norm{\nabla u(x_0)}^3}{W_0^3}\dfrac{\norm{\nabla u(x_0)}^2-1}{W_0^2}< \dfrac{n}{A} \left({H}_1+R\right), $$ that is, $$\dfrac{\norm{\nabla u(x_0)}^2-1}{\norm{\nabla u(x_0)}^2+1}< \dfrac{4n}{A} \left({H}_1+ R\right). $$ Choosing $A>8n\left({H}_1+R\right)$ it follows that $$\dfrac{\norm{\nabla u(x_0)}^2-1}{\norm{\nabla u(x_0)}^2+1}< \dfrac{1}{2}, $$ so, $$ \norm{\nabla u(x_0)}<\sqrt{3}.$$ As a consequence, $$w(x)\leq w(x_0) =\norm{\nabla u(x_0)}e^{Au(x_0)}\leq\sqrt{3}e^{Au(x_0)},$$ thus \begin{equation}\label{est_global_grad_interior} \sup_{\Omega}\norm{\nabla u(x)}\leq\sqrt{3}e^{2A\sup\limits_{\Omega}\modulo{u}}. \end{equation} Joining \eqref{est_global_1} and \eqref{est_global_grad_interior} we obtain \begin{equation*} \sup_{\Omega}\norm{\nabla u(x)}\leq\sqrt{3}e^{2A\scriptstyle\sup\limits_{\Omega}\modulo{u}} + \sup\limits_{\partial\Omega}\norm{\nabla u}e^{2A\scriptstyle\sup\limits_{\Omega}\modulo{u}}. \end{equation*} Choosing $$A=1+8n\left(\norm{H}_{1}+R\right)$$ we obtain the desired estimate.
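The last steps rest on two elementary facts: $W_0^3<4\norm{\nabla u(x_0)}^3$ whenever $\norm{\nabla u(x_0)}>1$, and the implication $\frac{g^2-1}{g^2+1}<\frac{1}{2} \Rightarrow g<\sqrt{3}$. A quick numerical illustration (the sample values of $g$ below are arbitrary):

```python
import math

# (i) for g = ||grad u(x_0)|| > 1 one has W^3 = (1+g^2)^{3/2} < 4 g^3,
#     since (2 g^2)^{3/2} = 2*sqrt(2)*g^3 < 4 g^3;
# (ii) (g^2-1)/(g^2+1) < 1/2 forces g < sqrt(3).
# Sample values of g are illustrative only.
gs = [1.01, 1.5, 5.0, 100.0]
fact_i = all((1 + g * g) ** 1.5 < 4 * g ** 3 for g in gs)

ratio = lambda g: (g * g - 1) / (g * g + 1)
g = 1.6
fact_ii = ratio(g) < 0.5 and g < math.sqrt(3)

print(fact_i, fact_ii)                 # True True
print(round(ratio(math.sqrt(3)), 10))  # 0.5, the threshold value
```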
\end{proof} \begin{obs} A related global gradient estimate was obtained independently in \cite[Prop. 2.2 p. 5]{2018miriam}. \end{obs} \section{Proof of the theorems} \bigskip \noindent\textit{Proof of the main theorem (theorem \ref{T_exist_Ricci}).} Let $\Omega\subset M$ with $\partial\Omega$ of class $\mathscr{C}^{2,\alpha}$ for some $\alpha\in(0,1)$ and $\varphi\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$. Elliptic theory ensures that the solvability of problem \eqref{ProblemaP} strongly depends on $\mathscr{C}^{1}$ a priori estimates for the family of related problems \begin{equation}\tag{$P_{\tau}$}\label{ProblemaPsigma} \left\{\begin{array}{l} \diver\left(\dfrac{\nabla u}{W}\right) = \tau nH(x,u)\ \mbox{ in }\ \Omega, \\ \phantom{\diver\left(\dfrac{\nabla u}{W}\right)} u = \tau \varphi \ \mbox{ in }\ \partial\Omega, \end{array}\right. \end{equation} which do not depend on $\tau$ or $u$. Observe that the strong Serrin condition \eqref{StrongSerrinCondition} implies \begin{equation}\label{SerrinCondition_exist} (n-1)\mathcal{H}_{\partial\Omega}(y)\geq n \modulo{H(y,\tau\varphi(y))} \ \forall \ \tau\in[0,1],\ \forall \ y\in\partial\Omega. \end{equation} Let $u$ be a solution of problem \eqref{ProblemaPsigma} for arbitrary $\tau\in[0,1]$. Let $w=\phi\circ d + \sup\limits_{\partial\Omega}\modulo{\varphi}$ as in the proof of theorem \ref{teo_Est_altura}. Then $$ u\leq \sup\limits_{\partial\Omega} \modulo{\tau\varphi}\leq \sup\limits_{\partial\Omega} \modulo{\varphi}=w\ \mbox{ on } \ \partial\Omega.$$ As before, let $\Omega_0$ be the biggest open subset of $\Omega$ having the unique nearest point property. Let $x\in\Omega_0$ and let $y=y(x)\in\partial\Omega$ be the nearest point to $x$. Since \eqref{eq_Hxw} holds and $\tau\in[0,1]$, we have that $$ \mp n \tau {H(x,\pm w)} \leq n \tau\modulo{H(x,\varphi(y))} \leq n \modulo{H(x,\varphi(y))}. $$ From \eqref{est_Mw_est_alt} we have \begin{align*} \pm\mathfrak{Q}_{\tau}(\pm w)=\mathcal{M} w\mp n\tau H(x,\pm w)(1+\phi'^2)^{3/2} \leq 0.
\end{align*} Proceeding as in the proof of theorem \ref{teo_Est_altura}, we get that $w$ and $-w$ are a supersolution and a subsolution in $\Omega_0$, respectively, for the problem \eqref{ProblemaPsigma}. This provides an a priori height estimate for any solution of the problems \eqref{ProblemaPsigma}, independently of $\tau$. On account of assumptions \eqref{cond_H_Ricci_exist} and \eqref{SerrinCondition_exist}, we can apply theorem \ref{teo_Est_gradiente_fronteira} to obtain an a priori boundary gradient estimate for the solutions of the problems \eqref{ProblemaPsigma}. Elliptic regularity guarantees that any solution $u$ of the related problems \eqref{ProblemaPsigma} belongs to $\mathscr{C}^3(\Omega)$. By applying theorem \ref{teo_Est_global_gradiente}, we therefore obtain the desired a priori global gradient estimate, independently of $\tau$ and $u$. Classical elliptic theory (see {\cite[Th. 11.4 p. 281]{GT}}) ensures the existence of a solution $u\in\mathscr{C}^{2,\alpha}(\overline{\Omega})$ for our problem \eqref{ProblemaP}. Uniqueness follows from the maximum principle. \hfill \qedsymbol \bigskip \noindent{\it Proof of theorem \ref{exist_hiperbolicoHmenor1}.} We first recall that in $\HH^n\times\R$ there exists an entire vertical graph of constant mean curvature $\frac{n-1}{n}$. Explicit formulas were given by B\'erard-Sa Earp {\cite[Th. 2.1 p. 22]{BerardRicardo}}. The a priori height estimate for the solutions of the related problems \eqref{ProblemaPsigma} follows directly from the convex hull lemma \cite[Prop. 3.1 p. 41]{BerardRicardo}. Now, the a priori boundary gradient estimate and the a priori global gradient estimate follow from theorems \ref{teo_Est_gradiente_fronteira_hiperbólico} and \ref{teo_Est_global_gradiente}, respectively. The rest of the proof is the same as before. \hfill\qedsymbol \bigskip \noindent{\it Proof of theorem \ref{teo_GalvezLozano}.} Under the hypotheses on $M$ and $\Omega$, Galvez-Lozano \cite[Th. 6 p.
12]{Galvez2015} proved the existence of a vertical graph over $\Omega$ with constant mean curvature $\frac{n-1}{n}$ and zero boundary data. As a matter of fact, such a graph constitutes a barrier for the solutions of the related problems \eqref{ProblemaPsigma}. On the other hand, for $y\in\partial\Omega$ we have $$(n-1)\mathcal{H}_{\partial\Omega}(y)> (n-1)c > n-1 \geq n \sup\limits_{\Omega\times\R}\modulo{H(x,z)}.$$ So the strong Serrin condition trivially holds and the boundary gradient estimate follows from our theorem \ref{teo_Est_gradiente_fronteira_paraGalvez}. The rest of the proof is the same as before. \hfill \qedsymbol \bibliographystyle{plain} \nocite{artigoexist:inpress}
\section{Introduction} \IEEEPARstart{M}{illimeter} wave (mmWave) systems have been identified as a promising solution to cope with the explosive growth of mobile traffic [1], [2]. Moreover, massive multiple-input multiple-output (MIMO) can provide significant array gains to compensate for the severe propagation losses and improve the system capacity of mmWave systems. For mmWave massive MIMO systems, hybrid precoding is an efficient transceiver architecture that can achieve performance close to that of fully digital precoding with a limited number of radio frequency (RF) chains [3]-[5]. The fully-connected architecture is widely adopted in hybrid precoding systems [5]-[8]. In this architecture, each RF chain is connected to all the antennas through phase shifters (PSs) and RF adders. Therefore, both the hardware cost and the power consumption are high, since the numbers of PSs and RF adders increase linearly with the number of antennas. To address these challenges, the mapping between RF chains and antennas has drawn tremendous attention as a means to reduce the number of PSs and RF adders [9], [10]. Compared with the fully-connected architecture, the partially-connected architecture can greatly reduce the number of PSs and eliminate the need for RF adders, which results in low hardware cost and power consumption [9]-[12]. The partially-connected architecture can be divided into two categories: the fixed subarray and the dynamic subarray. In the fixed subarray architecture, each RF chain is connected to a fixed antenna subset, while each antenna is connected to a single RF chain [13], [14]. In the dynamic subarray architecture, antenna elements are adaptively partitioned into several subsets based on long-term channel information [15], [16]. The inherently directional character of mmWave propagation makes it possible to serve tens of users simultaneously.
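The hardware saving behind the partially-connected architecture can be made concrete by counting components. A minimal sketch, assuming illustrative dimensions ($N_t$ antennas, $N_{RF}$ RF chains are hypothetical values; the counts $N_{RF}N_t$ versus $N_t$ follow from the architecture descriptions above):

```python
# Component counts for hybrid precoding architectures (illustrative
# dimensions; the formulas follow the architecture descriptions above).
N_t = 64    # number of BS antennas (assumed)
N_rf = 4    # number of RF chains (assumed)

# Fully-connected: every RF chain drives every antenna through a PS,
# and each antenna needs an RF adder to combine the N_rf signals.
ps_full = N_rf * N_t
adders_full = N_t

# Partially-connected: each antenna is driven by exactly one RF chain,
# so one PS per antenna and no RF adders are needed.
ps_partial = N_t
adders_partial = 0

print(ps_full, adders_full)        # 256 64
print(ps_partial, adders_partial)  # 64 0
```

Both counts scale linearly in $N_t$, but the partially-connected design removes the factor $N_{RF}$ on the PSs and all RF adders.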
Thus, multi-user hybrid precoding is an important approach to significantly enhance the spectral efficiency and system capacity of mmWave massive MIMO systems. One typical multi-user hybrid precoding scheme designs the analog precoders to harvest the large array gain and then applies digital zero-forcing (ZF), minimum mean-squared error (MMSE), or block diagonalization (BD) processing based on the analog effective channel [17]-[19]. Another typical multi-user hybrid precoding scheme is designed as a solution of non-orthogonal angle division multiple access, based on the angle information extracted from the channel estimation [20], [21]. Unfortunately, most prior works on multi-user hybrid precoding only considered the fully-connected architecture [17]-[21]. The dynamic subarray achieves a compromise between sum rate and hardware complexity for mmWave massive MIMO systems [15]. However, multi-user hybrid precoding in the dynamic subarray architecture is intractable, since the antenna partitioning can result in user unfairness and multi-user interference (MUI). Limited work has been done on multi-user hybrid precoding in the dynamic subarray architecture. In [22], the antenna partitioning and the analog precoder were obtained by an exhaustive search to maximize the analog effective channel gain. Based on the low-dimensional analog effective channel, digital precoding with the ZF criterion was then utilized to suppress the MUI. Furthermore, a multi-user analog precoding scheme was proposed in [23]: the $N$ antenna elements with the largest amplitude were selected based on the channel of the first user in the multi-user MIMO (MU-MIMO) system, and the phase of the analog precoder was computed as the quantized phase of the corresponding column vector. These two sub-steps were carried out iteratively until all MU-MIMO users were processed. Nevertheless, the aforementioned multi-user hybrid precoding schemes have the following shortcomings.
Firstly, the exhaustive search used in [22] has extremely high complexity, since the effective channel gains of all users have to be calculated for every analog codeword. Secondly, the solution in [23] can hardly achieve optimal performance, because the same number of antennas is connected to each RF chain. Finally, the user ordering in the antenna selection procedure leads to severe unfairness, since the first user can choose among all antenna elements while the other users can only choose from the remaining ones. In this paper, a novel multi-user hybrid precoding solution is designed for the dynamic subarray architecture in mmWave massive MIMO systems. The contributions are summarized as follows: \begin{itemize} \item We propose a multi-user hybrid precoding solution for the dynamic subarray architecture. Firstly, each user selects the best beam to maximize its single-user effective channel gain and feeds back the index to the base station (BS); the BS takes it as the initial analog precoder of that user. Secondly, the multi-user set with the maximal sum rate is selected. Subsequently, the antennas are partitioned among the RF chains based on the maximal signal-to-interference-plus-noise ratio (SINR) increment criterion. Finally, the hybrid precoding scheme is optimized for the dynamic subarray architecture. \item We develop an antenna partitioning algorithm for the dynamic subarray. For the selected multi-user set, each antenna element is dynamically allocated to an RF chain according to the maximal SINR increment. The proposed antenna partitioning algorithm guarantees user fairness, since each antenna element is allocated to acquire the maximal SINR increment over all selected users. Moreover, it greatly reduces the size of the search space and the computational complexity, because the SINR is calculated on the low-dimensional analog effective channels of the selected users.
\end{itemize} Simulation results show that the sum rate of the proposed multi-user hybrid precoding solution achieves a significant gain over the fixed subarray and approaches that of the exhaustive search over the dynamic subarray. The results also confirm that the energy efficiency of the proposed solution outperforms that of the fully-connected architecture, because it greatly reduces the number of PSs and eliminates the need for RF adders. Finally, the complexity analysis shows that the computation of the proposed antenna partitioning algorithm is significantly reduced to $N_{\rm{RF}}\times N_{\rm{TX}}$ SINR evaluations, compared to that of the exhaustive search solution in [22], i.e., $\frac{1}{N_{\rm{RF}}!}\sum_{k=0}^{N_{\rm{RF}}}\left ( -1 \right )^{N_{\rm{RF}}-k}\binom{N_{\rm{RF}}}{k}k^{N_{\rm{TX}}}$. The remaining parts of this paper are structured as follows. Section II describes the system model and the channel model. Section III formulates the problem and Section IV details the proposed solution. The simulation results and the complexity analysis are discussed in Section V. Finally, concluding remarks are presented in Section VI. Notation: Bold uppercase $\mathbf{A}$ is a matrix, $\mathbf{a}$ is a vector, and $\mathit{a}$ is a scalar. Moreover, $\mathbf{A}^{H}$, $\mathbf{A}^{-1}$ and $\mathbf{A}^{T}$ are the Hermitian (conjugate transpose), the inverse, and the transpose of matrix $\mathbf{A}$, respectively. $\mathbf{I}_{N}$ is the $\mathit{N}$-dimensional identity matrix. $\left \| \mathbf{A} \right \|_{F}$ is the Frobenius norm of matrix $\mathbf{A}$. $\mathcal{CN}\left ( \mathbf{m,R} \right )$ is a complex Gaussian random vector with mean $\mathbf{m}$ and covariance $\mathbf{R}$. $\mathcal{A}$ is a set.
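To make the complexity gap concrete, the sketch below (Python, with illustrative dimensions) compares the two counts: the exhaustive-search term above is the number of ways to partition $N_{\rm{TX}}$ antennas into $N_{\rm{RF}}$ non-empty subarrays (a Stirling number of the second kind), while the proposed algorithm needs only $N_{\rm{RF}}\times N_{\rm{TX}}$ SINR evaluations.

```python
from math import comb, factorial

def exhaustive_partitions(n_tx: int, n_rf: int) -> int:
    """Size of the exhaustive search space in [22]: the number of ways to
    split n_tx antennas into n_rf non-empty subarrays, i.e. the
    inclusion-exclusion sum above (a Stirling number of the second kind)."""
    return sum((-1) ** (n_rf - k) * comb(n_rf, k) * k ** n_tx
               for k in range(n_rf + 1)) // factorial(n_rf)

def proposed_cost(n_tx: int, n_rf: int) -> int:
    """SINR evaluations of the proposed greedy partitioning: every antenna
    element is tested once against every RF chain."""
    return n_rf * n_tx

# e.g. 16 antennas, 2 RF chains: 32 greedy SINR evaluations versus
# 32767 candidate partitions for the exhaustive search.
print(proposed_cost(16, 2), exhaustive_partitions(16, 2))  # prints: 32 32767
```

Even for this very small configuration the exhaustive count is three orders of magnitude larger, and it grows exponentially in $N_{\rm{TX}}$ while the greedy cost grows linearly.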
\section{System Model And Channel Model} \subsection{System Model} Consider a downlink multi-user hybrid precoding mmWave system with the conventional fully-connected architecture as shown in Fig. 1, in which the BS simultaneously communicates with $K$ mobile users. The BS is equipped with $N_{\rm{TX}}$ antennas and $N_{\rm{RF}}$ RF chains such that $N_{\rm{RF}}\leq N_{\rm{TX}}$. Each mobile user is equipped with $N_{\rm{RX}}$ antennas. \begin{figure} \centering \includegraphics[width=3.4 in]{4}\\ \caption{Fully-connected architecture in the multi-user mmWave massive MIMO system.} \end{figure} The transmitter adopts an $N_{\rm{RF}}\times N_{s}$ digital precoding matrix ${\bf{F}}_{\textrm{BB}}^{\mathit{k}}$, followed by an $N_{\rm{TX}}\times N_{\rm{RF}}$ analog precoding matrix ${{\bf{F}}_{\textrm{RF}}} = \left[ {{\bf{F}}_{\textrm{RF}}^1,{\bf{F}}_{\textrm{RF}}^2, \cdots ,{\bf{F}}_{\textrm{RF}}^K} \right]$, where $\mathbf{F}_{\textrm{RF}}^{\mathit{k}}$ is the analog precoding vector for the $k$th RF chain. $\mathbf{H}_{k}$ denotes the $N_{\rm{RX}}\times N_{\rm{TX}}$ channel matrix from the BS to the $k$th mobile user, normalized such that $\mathbb{E}\left[ {\left\| {{{\bf{H}}_k}} \right\|_{\rm{F}}^2} \right] = {N_{{\rm{TX}}}}{N_{{\rm{RX}}}}$. The received signal of mobile user $k$ can be written as \begin{align} {{\bf{y}}_k} = \sqrt \rho {{\bf{W}}_k} {{\bf{H}}_k}{\bf{F}}_{\textrm{RF}}{\bf{F}}_{\textrm{BB}}^k{\bf{s}}_k + \sum\limits_{i \ne k}^K {{{\bf{W}}_k}{{\bf{H}}_k}{\bf{F}}_{\textrm{RF}}{\bf{F}}_{\textrm{BB}}^i{\bf{s}}_i} + {{\bf{W}}_k}{{\bf{n}}_k}, \end{align}where ${\bf{s}}_k$ is the ${N_s} \times 1$ transmitted signal for the $k$th user with $\mathbb{E}\left[ {\bf{s}}_k{\bf{s}}_k^H \right] = {{\bf{I}}_{N_s}}$, ${{\bf{n}}_k}\sim\mathcal{CN}\left( {0,\sigma _N^2{\bf{I}}} \right)$ is an additive white Gaussian noise vector with independent and identically distributed (i.i.d.) entries, and $\rho$ is the average received power.
${{\bf{W}}_k}$ is the $N_{s}\times N_{\rm{RX}}$ digital combining matrix at the receiver. Furthermore, ${{\bf{W}}_k}{{\bf{H}}_k}{\bf{F}}_{\textrm{RF}}{\bf{F}}_{\textrm{BB}}^k{\bf{s}}_k$ represents the desired signal and $\sum\limits_{i \ne k}^K {{{\bf{W}}_k}{{\bf{H}}_k}{\bf{F}}_{\textrm{RF}}{\bf{F}}_{\textrm{BB}}^i{\bf{s}}_i}$ is the multi-user interference (MUI) for the $k$th user. Since the analog precoder is implemented with PSs, the entries of ${\bf{F}}_{{\textrm{RF}}}$ possess constant modulus and are normalized to satisfy ${\left| {{{\left[ {{{\bf{F}}_{\textrm{RF}}}} \right]}_{j,k}}} \right|^2} = 1$, $\left( {j = 1, \ldots ,{N_{\rm{TX}}}} \right)$. The analog precoding codebook $\mathcal{F}$ with cardinality $\left| {\mathcal{F}} \right| = {N_Q}$ is shared by the BS and the user equipment. For beam steering codebooks, $\mathcal{F}$ consists of the vectors ${{\bf{b}}_q} = \left[ {1,{e^{j\frac{{2\pi }}{\lambda }d\sin \left( {\frac{{2\pi q}}{{{N_Q}}}} \right)}}, \cdots ,{e^{j\left( {{N_{\rm{TX}}} - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {\frac{{2\pi q}}{{{N_Q}}}} \right)}}} \right]$, where $q$ takes the values $0,1,\ldots,{N_Q} - 1$. The total transmit power is constrained to $\left\| {{\bf{F}}_{\textrm{RF}}^{\mathit{k}}{\bf{F}}_{\textrm{BB}}^{\mathit{k}}} \right\|_F^2 = {N_s}$ by normalizing ${\bf{F}}_{\textrm{BB}}^{\mathit{k}}$. The multi-user hybrid precoding in the dynamic subarray architecture is shown in Fig. 2, where each RF chain dynamically connects to a subset of the large-scale antenna elements and each antenna is connected to only one RF chain, as defined in [14], [15]. Considering the inherent directionality of mmWave, we assume the BS transmits data to every mobile user via only one stream. Therefore, the number of MU-MIMO users is equal to the number of RF chains, i.e., $K= N_{\rm{RF}}$, and the $k$th user is mapped to the $k$th RF chain with its analog precoding vector.
Correspondingly, ${\mathcal{S}_k}$ denotes the antenna subarray connected to the $k$th RF chain. The subarray ${\mathcal{S}_k}$ comprises $N_{\rm{k}}$ antenna elements such that $1 \le {N_{\rm{k}}} < {N_{\rm{TX}}}$ and ${N_{\rm{TX}}} = \sum\limits_{k = 1}^K {{N_{\rm{k}}}} $. \begin{figure} \centering \includegraphics[width=3.4 in]{5}\\ \caption{Dynamic subarray architecture in the multi-user mmWave massive MIMO system.} \end{figure} According to the dynamic subarray architecture described above, the received signal of mobile user $k$ can be rewritten as \begin{align} \begin{array}{l} {{\bf{y}}_{{\mathcal{S}_k}}} = \sqrt \rho {{{\bf{W}}_{\mathcal{S}_k}}{\bf{H}}_{{\mathcal{S}_k}}}{\bf{F}}_{\textrm{RF}}^{{\mathcal{S}_k}}{\bf{F}}_{\textrm{BB}}^{{\mathcal{S}_k}}{{\bf{s}}_k} + \sum\limits_{i \ne k}^K {{{\bf{W}}_{\mathcal{S}_k}}{{\bf{H}}_{{\mathcal{S}_k}}}{\bf{F}}_{\textrm{RF}}^{{\mathcal{S}_i}}{\bf{F}}_{\textrm{BB}}^{{\mathcal{S}_i}}{{\bf{s}}_i}} \\ + {{\bf{W}}_{\mathcal{S}_k}}{{\bf{n}}_k}, \end{array} \end{align}where ${{\bf{W}}_{\mathcal{S}_k}}$ is the $N_{s}\times N_{\rm{RX}}$ digital combining matrix at the receiver, ${\bf{H}}_{\mathcal{S}_k}$ is the ${N_{\rm{RX}}} \times {N_k}$ channel matrix from the subarray ${\mathcal{S}_k}$ at the transmitter to the $k$th mobile user, ${\bf{F}}_{\textrm{RF}}^{\mathcal{S}_k}$ is the ${N_k} \times {N_{\rm{RF}}}$ analog precoding matrix connecting the $k$th RF chain and the subarray ${\mathcal{S}_k}$, and ${\bf{F}}_{\textrm{BB}}^{\mathcal{S}_k}$ is the ${N_{\rm{RF}}} \times {N_s}$ digital precoding vector of the $k$th user. \subsection{Channel Model} The severe path loss at mmWave frequencies leads to limited scattering [2], [4]. The geometric channel model with ${L_k}$ scatterers is therefore adopted for the $k$th user. It is assumed that each scatterer contributes a single propagation path between the BS and the user [5], [24], [25].
The channel of each mobile user $k$ can be expressed as \begin{align} {{\bf{H}}_k} = \sqrt {\frac{{{N_{\rm{TX}}}{N_{\rm{RX}}}}}{{{L_k}}}} \sum\limits_{l = 1}^{{L_k}} {{\alpha _{k,l}}} {\bf{a}}_{\textrm{MS}}^H\left( {{\theta _{k,l}}} \right){{\bf{a}}_{\textrm{BS}}}\left( {{\phi _{k,l}}} \right), \end{align}where ${\alpha _{k,l}}$ is the complex gain of the $l$th path between the BS and the $k$th user, and ${L_k}$ is the number of scatterers. The variable ${\theta _{k,l}} \in \left[ {0,2\pi } \right]$ represents the azimuth angle of arrival (AoA) of the $l$th path, and ${\phi _{k,l}} \in \left[ {0,2\pi } \right]$ represents the azimuth angle of departure (AoD) of the $l$th path. Accordingly, ${{\bf{a}}_{\textrm{MS}}}\left( {{\theta _{k,l}}} \right)$ and ${{\bf{a}}_{\textrm{BS}}}\left( {{\phi _{k,l}}} \right)$ represent the array response vectors of the $k$th user and the BS, respectively. Uniform linear arrays (ULAs) are used, and the array response vector ${{\bf{a}}_{\textrm{BS}}}\left( {{\phi _{k,l}}} \right)$ of the BS is \begin{align} {{\bf{a}}_{\textrm{BS}}}\left( {{\phi _{k,l}}} \right) = {\left[ {\begin{array}{*{20}{c}} 1\\ {{e^{j\frac{{2\pi }}{\lambda }d\sin \left( {{\phi _{k,l}}} \right)}}}\\ \vdots \\ {{e^{j\left( {{N_{\rm{TX}}} - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {{\phi _{k,l}}} \right)}}} \end{array}} \right]^T}, \end{align} where $d$ is the spacing between two adjacent antenna elements, and $\lambda$ is the wavelength of the transmitted signal. ${{\bf{a}}_{\textrm{MS}}}\left( {{\theta _{k,l}}} \right)$ can be formulated in a similar fashion.
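As a hedged illustration, the geometric channel of eq. (3) and the ULA response of eq. (4) can be sketched as follows (Python/NumPy; the half-wavelength spacing and the random path statistics are assumptions of the sketch, not values fixed by the model):

```python
import numpy as np

def ula_response(n_ant, angle, d_over_lambda=0.5):
    """N-element ULA response vector for an azimuth angle, as in eq. (4)."""
    n = np.arange(n_ant)
    return np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(angle))

def geometric_channel(n_rx, n_tx, n_paths, rng=None):
    """Narrowband geometric mmWave channel of eq. (3) with L scatterers."""
    rng = np.random.default_rng(0) if rng is None else rng
    alpha = (rng.standard_normal(n_paths)
             + 1j * rng.standard_normal(n_paths)) / np.sqrt(2)  # path gains
    theta = rng.uniform(0, 2 * np.pi, n_paths)  # AoA at the user
    phi = rng.uniform(0, 2 * np.pi, n_paths)    # AoD at the BS
    H = np.zeros((n_rx, n_tx), dtype=complex)
    for a, t, p in zip(alpha, theta, phi):
        # rank-one contribution a_MS^H(theta) a_BS(phi) of a single path
        H += a * np.outer(ula_response(n_rx, t).conj(), ula_response(n_tx, p))
    return np.sqrt(n_rx * n_tx / n_paths) * H

H = geometric_channel(n_rx=2, n_tx=64, n_paths=4)
```

With the leading normalization factor, the Frobenius norm of $\mathbf{H}_k$ satisfies $\mathbb{E}[\|\mathbf{H}_k\|_F^2] = N_{\rm{TX}} N_{\rm{RX}}$ on average over the path gains, matching the assumption in Section II.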
For the dynamic subarray architecture, the channel of mobile user $k$ can be represented as \begin{align} {{\bf{H}}_{{\mathcal{S}_k}}} = \sqrt {\frac{{{N_k}{N_{\rm{RX}}}}}{{{L_k}}}} \sum\limits_{l = 1}^{{L_k}} {{\alpha _{k,l}}} {\bf{a}}_{\textrm{MS}}^H\left( {{\theta _{k,l}}} \right){\bf{a}}_{\textrm{BS}}^{{\mathcal{S}_k}}\left( {{\phi _{k,l}}} \right), \end{align}where the array response vector ${\bf{a}}_{\textrm{BS}}^{{\mathcal{S}_k}}\left( {{\phi _{k,l}}} \right)$ corresponding to the antenna set ${\mathcal{S}_k}$ is given by \begin{align} {\bf{a}}_{\rm{BS}}^{{\mathcal{S}_k}}\left( {{\phi _{k,l}}} \right) = {\left[ {\begin{array}{*{20}{c}} {{e^{j\left( {\mathcal{S}_k^1 - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {{\phi _{k,l}}} \right)}}}\\ \vdots \\ {{e^{j\left( {\mathcal{S}_k^i - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {{\phi _{k,l}}} \right)}}}\\ \vdots \\ {{e^{j\left( {\mathcal{S}_k^{{N_k}} - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {{\phi _{k,l}}} \right)}}} \end{array}} \right]^T}, \end{align} where $\mathcal{S}_k^i$ is the $i$th antenna index of ${\mathcal{S}_k} = \left\{ {\mathcal{S}_k^1, \ldots ,\mathcal{S}_k^i, \ldots ,\mathcal{S}_k^{{N_k}}} \right\}$. \section{Problem Formulation} Given the system model for the dynamic subarray architecture in eq. (2), the achievable rate of the $k$th user corresponding to the antenna set ${\mathcal{S}_k}$ is written as \begin{align} {R_k} = {\log _2}\left( {1 + \frac{{\left\| {{{\bf{W}}_{{\mathcal{S}}_k}}}{{{\bf{H}}_{{{\mathcal{S}}_k}}}{\bf{F}}_{\textrm{RF}}^{{{\mathcal{S}}_k}}{\bf{F}}_{\textrm{BB}}^{{{\mathcal{S}}_k}}} \right\|_F^2}}{{\sigma _n^2 + \sum\nolimits_{i \ne k}^K {\left\| {{{\bf{W}}_{{\mathcal{S}}_k}}}{{{\bf{H}}_{{{\mathcal{S}}_k}}}{\bf{F}}_{\textrm{RF}}^{{{\mathcal{S}}_i}}{\bf{F}}_{\textrm{BB}}^{{{\mathcal{S}}_i}}} \right\|_F^2} }}} \right).
\end{align} The main objective of this paper is to design the multi-user hybrid precoding weights for the dynamic subarray architecture $\left\{ {{\bf{F}}_{\textrm{RF}}^*,{\bf{F}}_{\textrm{BB}}^*,{\mathcal{S}}_k^*} \right\}$ at the transmitter. In this paper, ${{\bf{W}}_k} = {\bf{U}}_k^H$, where ${\bf{U}}_k^H$ is obtained from the singular value decomposition (SVD) of the channel matrix ${{\bf{H}}_{{\mathcal{S}_k}}}$, i.e., ${{\bf{H}}_{{\mathcal{S}_k}}} = {{\bf{U}}_k}{{\bf{\Lambda }}_k}{\bf{V}}_k^H$. For the sake of simplicity, let ${{\bf{\tilde H}}_{{\mathcal{S}_k}}}$ represent ${{\bf{W}}_k}{{\bf{H}}_{{\mathcal{S}_k}}}$ in the following. Then the objective function of the hybrid precoding for the dynamic subarray is described as \begin{align} \begin{array}{l} \left\{ {{\bf{F}}_{\textrm{RF}}^*,{\bf{F}}_{\textrm{BB}}^*,\mathcal{S}_k^*} \right\} = \mathop {\arg \max }\limits_{{{\bf{F}}_{\textrm{RF}}},{{\bf{F}}_{\textrm{BB}}},{\mathcal{S}_k}} \sum\limits_{k = 1}^K {{R_k}}, \\ s.t.\left\| {{{\left[ {{\bf{F}}_{\textrm{RF}}^{}} \right]}_{j,k}}} \right\|_F^2 = 1,j = 1, \ldots ,{N_{\rm{TX}}},\\ \begin{array}{*{20}{c}} {}&{\left\| {{\bf{F}}_{\textrm{RF}}^{}{\bf{F}}_{\textrm{BB}}^{}} \right\|_F^2 = {N_s}}, \end{array} \end{array} \end{align}which is a joint optimization problem over the three matrix variables $\left\{ {{\bf{F}}_{\textrm{RF}}^*,{\bf{F}}_{\textrm{BB}}^*,{\mathcal{S}}_k^*} \right\}$. Unfortunately, it is known to be intractable to obtain the global optimum of joint optimization problems with similar constraints [26], [27]. Thus the non-convex problem for the hybrid precoding $\left\{ {{\bf{F}}_{\textrm{RF}}^*,{\bf{F}}_{\textrm{BB}}^*,{\mathcal{S}}_k^*} \right\}$ cannot be solved directly.
To simplify the hybrid precoding design for the dynamic subarray, the optimization problem is temporarily decoupled and decomposed into three simpler maximal sum rate optimization problems for $\left\{ {{\bf{F}}_{\textrm{RF}}^*} \right\}$, $\left\{ {\mathcal{S}_k^*} \right\}$ and $\left\{ {{\bf{F}}_{\textrm{BB}}^*} \right\}$, respectively. The details of the proposed solution are explained in Section IV. \section{The Proposed Method} In this section, a general framework for multi-user hybrid precoding in the dynamic subarray system is designed. The joint optimization problem $\left\{ {{\bf{F}}_{\textrm{RF}}^*,{\bf{F}}_{\textrm{BB}}^*,{\mathcal{S}}_k^*} \right\}$ is decomposed into multiple sub-problems: the analog precoding initialization, the multi-user selection, the dynamic subarray partitioning, and the optimization of the hybrid precoding for the dynamic subarray architecture. The main idea of the proposed solution can be summarized as follows: \begin{enumerate} \item Each user searches the codebook for the best beam, which obtains the maximal single-user analog effective channel gain, and feeds back the index of the best beam to the BS. Accordingly, the BS takes the best beam as the initial analog precoding vector of each user. \item Then, exploiting the initial analog effective channel, the BS selects the multi-user set from $N$ candidate users to maximize the sum rate. \item For the selected multi-user set, the antenna partitioning algorithm is designed to maximize the SINR increment over all selected users. \item The analog precoding vector is solved for the dynamic subarray architecture, and ZF linear precoding is adopted as the digital precoding to eliminate the MUI. \end{enumerate} \subsection{Analog Precoding} The mmWave channel possesses inherent directionality; thus, the best beams of the candidate users depend on their own scattering paths.
For single-user hybrid precoding, the common approach to the analog precoding is to search for the strongest beam in the whole codebook [4], [6]. For multi-user hybrid precoding, space division multiplexing is an important approach to mitigate the MUI in mmWave systems [20], [21]. Therefore, the best beam of each active user is firstly selected based on its own channel and utilized as important side information for the multi-user hybrid precoding. That is, the BS adopts the best beam of each candidate user as its initial analog precoding. The procedure for obtaining the initial analog precoding is as follows. Before the downlink transmission, the BS broadcasts reference signals sequentially precoded by the codewords of the codebook. Each user then measures the power of the reference signals and selects the codeword of the strongest received reference signal as its best beam. The index of the best beam is sent back to the BS. Finally, the BS takes the corresponding codeword as the initial analog precoding vector of this user. Formally, the analog precoding vector of the $n$th user can be selected from the analog precoding codebook $\mathcal{F}$ according to the following criterion: \begin{align} {\bf{f}}_n^o = \mathop {\arg \max }\limits_{{{\bf{b}}_q} \in \mathcal{F}} \left\| {{{\bf{\tilde H}}_n}{{\bf{b}}_q}} \right\|_F^2, \end{align}where ${{\bf{b}}_q}$ is the $q$th codeword of the analog precoding codebook $\mathcal{F}$, ${{\bf{\tilde H}}_n}$ represents the effective channel ${{\bf{W}}_n}{{\bf{H}}_n}$ of the $n$th user, ${\bf{f}}_n^o \in {{\bf{C}}^{{N_{{\rm{TX}}}} \times 1}}$ denotes the initial analog precoding vector of the $n$th user ($n = 1,2, \ldots ,N$), and $N$ is the number of candidate users. \subsection{MU-MIMO User Selection} When the BS transmits signals to multiple users in the same time slot, the MUI severely degrades the system performance.
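A minimal sketch of the best-beam search in eq. (9), assuming a beam-steering codebook as defined in Section II (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def beam_codebook(n_tx, n_q, d_over_lambda=0.5):
    """Beam-steering codebook F: column q is the codeword b_q of Section II,
    with steering angle 2*pi*q/N_Q and half-wavelength spacing assumed."""
    n = np.arange(n_tx)[:, None]
    q = np.arange(n_q)[None, :]
    return np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(2 * np.pi * q / n_q))

def best_beam(H_eff, F):
    """Eq. (9): index of the codeword maximizing ||H_eff b_q||_F^2.
    Only this index is fed back to the BS."""
    gains = np.linalg.norm(H_eff @ F, axis=0) ** 2
    return int(np.argmax(gains))
```

For example, if the effective channel is perfectly aligned with one codeword, the search returns that codeword's index, and the single integer feedback keeps the uplink overhead at $\log_2 N_Q$ bits per user.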
The aim of the multi-user selection is to select a group of MU-MIMO users with minimal inter-user interference and maximal effective channel gains. Maximizing the sum rate is usually the criterion for user selection; to simplify the calculation, the scheduler estimates the SINR of each user [30]. The basic principle is that a user is added to the selected user set only if it has the maximal SINR value, as described in [28], [29]. Exploiting the initial analog precoding vectors, the SINR of the $n$th user is written as \begin{align} {\rm{SINR}_n} = \frac{{\left\| {{{\bf{\tilde H}}_n}{\bf{f}}_n^o} \right\|_F^2}}{{{\sigma _n^2} + \sum\nolimits_{i \ne n}^N {\left\| {{{\bf{\tilde H}}_n}{\bf{f}}_i^o} \right\|_F^2} }}. \end{align} The multi-user selection process is constructed as follows. Define the set of all candidate users $\mathcal{T} = \left[ {1,2, \ldots ,N} \right]$ and an empty set $\mathcal{U}$, which is updated as the selected multi-user set. The algorithm selects the first user $\mathcal{U}\left( 1 \right)$ with the maximal channel gain $\left\| {{{\bf{\tilde H}}_{\mathcal{T}\left( n \right)}}} \right\|_F^2$ from the set $\mathcal{T}$. Then the sets $\mathcal{T}$ and $\mathcal{U}$ are updated as \begin{align} {\mathcal{T} \leftarrow \mathcal{T}\backslash \mathcal{U}\left( 1 \right),\mathcal{U} \leftarrow \mathcal{U} \cup \mathcal{U}\left( 1 \right)}. \end{align} According to the maximal SINR criterion, new users are successively included in the selected multi-user set. The algorithm selects the remaining users from the set $\mathcal{T}$, and the $k$th user to be added to the set $\mathcal{U}$ is given by the following expression.
\begin{align} \begin{array}{l} \mathcal{U}\left( k \right) = \mathop {\arg \max }\limits_{\mathcal{T}\left( n \right)} {\rm{SINR}}\left[ {\mathcal{T}\left( n \right)} \right]\\ = \mathop {\arg \max }\limits_{\mathcal{T}\left( n \right)} \frac{{\left\| {{{\bf{\tilde H}}_{\mathcal{T}\left( n \right)}}{\bf{f}}_{\mathcal{T}\left( n \right)}^o} \right\|_F^2}}{{\sigma _n^2 + \sum\nolimits_{i = 1}^{k - 1} {\left\| {{{\bf{\tilde H}}_{\mathcal{T}\left( n \right)}}{\bf{f}}_{\mathcal{U}\left( i \right)}^o} \right\|_F^2} }},\\ k = 2, \ldots ,{N_{{\rm{RF}}}}, \end{array} \end{align}where ${\bf{f}}_{\mathcal{T}\left( n \right)}^o$ is the initial analog precoding vector of the $n$th user in the set $\mathcal{T}$, ${\bf{f}}_{\mathcal{U}\left( i \right)}^o$ is the initial analog precoding vector of the $i$th user in the set $\mathcal{U}$, and ${\bf{\tilde H}}_{\mathcal{T}\left( n \right)}$ is the effective channel of the $n$th user in the set $\mathcal{T}$. Then the sets $\mathcal{T}$ and $\mathcal{U}$ are updated by \begin{align} {\mathcal{T} \leftarrow \mathcal{T}\backslash \mathcal{T}\left( n \right),\mathcal{U} \leftarrow \mathcal{U} \cup \mathcal{T}\left( n \right)}. \end{align} In each loop, the user with the maximal SINR value is added to $\mathcal{U}$ and removed from $\mathcal{T}$. The process continues until the set $\mathcal{U}$ contains $K$ users. \subsection{Dynamic Subarray Partitioning Algorithm} To address the trade-off between the achievable spectral efficiency and the hardware complexity, the dynamic subarray scheme partitions antenna elements among the RF chains based on the long-term channel information [15], [16]. Different from the single-user case in [15], [16], we design an antenna partitioning algorithm for the multi-user dynamic subarray hybrid architecture defined in Subsection II-A.
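The greedy user selection of eqs. (10)-(13) can be sketched as follows (Python; `H_eff` and `f0` stand for the effective channels and the initial beams from Subsection IV-A, and the noise power is an assumed parameter of the sketch):

```python
import numpy as np

def select_users(H_eff, f0, k_max, sigma2=1.0):
    """Greedy MU-MIMO user selection, eqs. (10)-(13): the first user has the
    largest channel gain; each further user maximizes its SINR computed
    against the initial analog beams of the already selected users."""
    remaining = list(range(len(H_eff)))
    # eq. (11): first user with maximal ||H_n||_F^2
    first = max(remaining, key=lambda n: np.linalg.norm(H_eff[n], 'fro') ** 2)
    selected = [first]
    remaining.remove(first)
    while len(selected) < k_max and remaining:
        def sinr(n):  # eq. (12) with interference from selected users' beams
            sig = np.linalg.norm(H_eff[n] @ f0[n]) ** 2
            interf = sum(np.linalg.norm(H_eff[n] @ f0[i]) ** 2 for i in selected)
            return sig / (sigma2 + interf)
        nxt = max(remaining, key=sinr)  # eq. (13): move the winner from T to U
        selected.append(nxt)
        remaining.remove(nxt)
    return selected
```

With three candidate users whose channels and beams are mutually orthogonal, the routine simply picks the users in decreasing channel-gain order, since no candidate suffers interference from the already selected beams.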
For the antenna subarray $\mathcal{S}_k$, the SINR of the $k$th user is given by \begin{align} {\rm{SINR}}\left[ {{\mathcal{S}_k}} \right] = \frac{{\left\| {{{\bf{\tilde H}}_{{\mathcal{S}_k}}}{\bf{F}}_{\textrm{RF}}^{{\mathcal{S}_k}}{\bf{F}}_{\textrm{BB}}^{{\mathcal{S}_k}}} \right\|_F^2}}{{\sigma _n^2 + \sum\nolimits_{i \ne k}^K {\left\| {{{\bf{\tilde H}}_{{\mathcal{S}_k}}}{\bf{F}}_{\textrm{RF}}^{{\mathcal{S}_i}}{\bf{F}}_{\textrm{BB}}^{{\mathcal{S}_i}}} \right\|_F^2} }}. \end{align} To maximize the sum rate of the MU-MIMO users, the dynamic subarray is partitioned according to the maximal SINR of the selected MU-MIMO users. An exhaustive search over all possible cases of the three unknown matrix variables $\left\{ {{\bf{F}}_{\textrm{RF}}^*,{\bf{F}}_{\textrm{BB}}^*,\mathcal{S}_k^*} \right\}$ would be the direct solution for the dynamic subarray, but it leads to high computational complexity. To address this issue, the initial analog precoding vectors are exploited as side information to partition the antenna elements. This enables each selected user to fully take advantage of the large-scale array gains generated by the directional transmission of mmWave massive MIMO systems. The optimal antenna subarray ${\mathcal{S}_k^*}$ of the $k$th user is given by \begin{align} {\mathcal{S}_k^* = \mathop {\arg \max }\limits_{{\mathcal{S}_k}} \frac{{\left\| {{{\bf{\tilde H}}_{{\mathcal{S}_k}}}{\bf{f}}_{\mathcal{S}_k}^o} \right\|_F^2}}{{\sigma _n^2 + \sum\nolimits_{i \ne k}^K {\left\| {{{\bf{\tilde H}}_{{\mathcal{S}_k}}}{\bf{f}}_{\mathcal{S}_i}^o} \right\|_F^2} }}}, \end{align} where ${\bf{f}}_{{\mathcal{S}_k}}^o \in {C^{{N_k} \times 1}}$ is obtained by selecting the entries of the beam ${\bf{f}}_k^o$ according to the dynamic subarray effective channel ${{\bf{\tilde H}}_{{\mathcal{S}_k}}}$.
${\bf{f}}_{{\mathcal{S}_k}}^o$ can be written as \begin{align} {\bf{f}}_{{\mathcal{S}_k}}^o = {\left[ {\begin{array}{*{20}{c}} {{e^{j\left( {\mathcal{S}_k^1 - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {\frac{{2\pi {q_k}}}{{{N_Q}}}} \right)}}}\\ \vdots \\ {{e^{j\left( {\mathcal{S}_k^{{N_k}} - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {\frac{{2\pi {q_k}}}{{{N_Q}}}} \right)}}} \end{array}} \right]^T}, \end{align} where $\left\{ {\mathcal{S}_k^1, \ldots ,\mathcal{S}_k^{{N_k}}} \right\}$ is the antenna index set of the dynamic subarray ${\mathcal{S}_k}$ of the $k$th user, and ${q_k}$ is the index of the best beam in the codebook ${\mathcal{F}}$ selected for the $k$th user in Subsection IV-A. However, the maximal SINR criterion by itself results in severe user unfairness, because the SINR value is higher for the user allocated more antennas. Moreover, the first allocated antenna generates a larger SINR increment than the following ones, and so on [3]. Considering the user fairness together with the objective of maximizing the sum rate, the dynamic subarray scheme allocates each antenna element to an RF chain according to the maximal SINR increment. The SINR increment ${\nabla _k}$ of the antenna subarray ${\mathcal{S}_k}$ is defined as \begin{align} {\nabla _k} = {\rm{SINR}}[{\mathcal{S}_k} \cup j] - {\rm{SINR}}[{\mathcal{S}_k}], \end{align} where $[{\mathcal{S}_k} \cup j]$ indicates that antenna $j$ is added to the subarray ${\mathcal{S}_k}$. Thus, the optimal subarray $\mathcal{S}_k^*$ can be rewritten as \begin{align} \mathcal{S}_k^* = \mathop {\arg \max }\limits_{{\mathcal{S}_k}} {\nabla _k}. \end{align} The algorithm proceeds iteratively as follows. At the initial stage, the dynamic subarray of each user is an empty set and the candidate antenna set contains all antenna elements. Then, the algorithm updates the SINR and the SINR increment of each user when an antenna is added to its dynamic subarray.
Finally, the algorithm finds the subarray $\mathcal{S}_k^*$ with the maximal SINR increment and assigns the antenna element to this optimal subarray. Note that only one antenna is assigned at each antenna selection stage, while the other antennas remain unchanged. The above process is performed iteratively until all antennas are assigned. In this paper, the antenna number $N_k$ of the subarray $\mathcal{S}_k$ adapts to the channel state so that the users can obtain larger array gains. First, the computational complexity is greatly reduced, since the SINR is calculated on the low-dimensional analog effective channel. Further, the selection of each antenna element guarantees user fairness, because the SINR increment is maximized over all MU-MIMO users. Finally, the number of iterations is significantly decreased, as it equals the number of selected users instead of all candidate users. The proposed procedure is described in Algorithm 1. \noindent \begin{tabular}{lcl} \\ \toprule $\bf{Algorithm\ 1}$: Dynamic subarray partitioning \\ \midrule $\bf{Input}$: ${N_{{\rm{TX}}}}, {K}, {\mathcal{S}_0} = \left\{ {1, \ldots ,{N_{{\rm{TX}}}}} \right\}, {{\mathcal{S}_1}, \ldots ,{\mathcal{S}_K} = \phi }$\\ for $j = 1:{N_{{\rm{TX}}}}$\\ ~~~for $k = 1:K$\\ ~~~~~~~${\rm{SINR}}\left[ {\mathcal{S}_k} \right] = \frac{{\left\| {{{{\bf{\tilde H}}}_{{\mathcal{S}_k}}}{{\bf{f}}}_{{{\mathcal{S}}_k}}^o} \right\|_F^2}}{{\sigma _n^2 + \sum\nolimits_{i \ne k}^K {\left\| {{{\bf{\tilde H}}_{{{\mathcal{S}}_k}}}{\bf{f}}_{{\mathcal{S}_i}}^o} \right\|_F^2} }}$\\ ~~~~~~~${\rm{SINR}}\left[ {{\mathcal{S}_k} \cup j} \right] = \frac{{\left\| {{{\bf{\tilde H}}_{{\mathcal{S}_k} \cup j}}{\bf{f}}_{{\mathcal{S}_k} \cup j}^o} \right\|_F^2}}{{\sigma _n^2 + \sum\nolimits_{i \ne k}^K {\left\| {{{\bf{\tilde H}}_{{\mathcal{S}_k} \cup j}}{\bf{f}}_{{\mathcal{S}_i}}^o} \right\|_F^2} }}$\\ ~~~~~~~${\nabla _k} = {\rm{SINR}}[{\mathcal{S}_k} \cup j] - {\rm{SINR}}[{\mathcal{S}_k}]$\\
~~~end\\ ~~~$\mathcal{S}_k^* = \mathop {\arg \max }\limits_{{\mathcal{S}_k}} {\nabla _k}$\\ ~~~$\mathcal{S}_k^* \leftarrow \mathcal{S}_k^* \cup j,{\mathcal{S}_0} \leftarrow {\mathcal{S}_0}\backslash j$\\ end\\ $\bf{Output}$: ${\mathcal{S}_1}, \ldots ,{\mathcal{S}_{{N_{{\rm{RF}}}}}}$ \\\bottomrule \end{tabular} \subsection{Hybrid Precoding} After the partitioning of the dynamic subarrays, the optimization problem in eq. (8) becomes similar to the conventional hybrid precoding problem; the only difference is the dynamic subarray architecture. Both the beam shape and the beam width of each dynamic subarray change with the assigned antenna elements, which differs from the fully-connected architecture. Thus the initial analog precoding should be updated according to the dynamic subarray. For the dynamic subarray with the partitioned antenna elements, the codeword ${\bf{b}}_q^{{\mathcal{S}_k}}$ in the codebook ${\mathcal{F}^{{\mathcal{S}_k}}}$ is rewritten as \begin{align} {\bf{b}}_q^{{\mathcal{S}_k}} = {\left[ {\begin{array}{*{20}{c}} {{e^{j\left( {\mathcal{S}_k^1 - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {\frac{{2\pi q}}{{{N_Q}}}} \right)}}}\\ \vdots \\ {{e^{j\left( {\mathcal{S}_k^{{N_k}} - 1} \right)\frac{{2\pi }}{\lambda }d\sin \left( {\frac{{2\pi q}}{{{N_Q}}}} \right)}}} \end{array}} \right]^T}, \end{align} where $\left\{ {\mathcal{S}_k^1, \ldots ,\mathcal{S}_k^{{N_k}}} \right\}$ is the antenna index set of the dynamic subarray $\mathcal{S}_k$ of each user. The analog precoding vector ${\bf{F}}_{{\textrm{RF}}}^{{\mathcal{S}_k}}$ for the dynamic subarray $\mathcal{S}_k$ is then determined by \begin{align} {\bf{F}}_{{\textrm{RF}}}^{{\mathcal{S}_k}} = \mathop {{\rm{argmax}}}\limits_{{\bf{b}}_q^{{\mathcal{S}_k}} \in {\mathcal{F}^{{\mathcal{S}_k}}}} \left\| {{{\bf{\tilde H}}_{{\mathcal{S}_k}}}{\bf{b}}_q^{{\mathcal{S}_k}}} \right\|_F^2, \end{align} where ${\bf{b}}_q^{{\mathcal{S}_k}}$ is the $q$th codeword of the analog precoding codebook ${\mathcal{F}^{{\mathcal{S}_k}}}$.
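Algorithm 1 together with the per-subarray beam restriction of eq. (15) can be sketched as follows. This is a simplified single-stream sketch under assumptions, not the exact implementation: `H_eff[k]` plays the role of ${\bf{\tilde H}}_k$, `beams[k]` is the full-array codeword ${\bf{f}}_k^o$ whose entries are restricted to each subarray, and the interference term restricts each channel to the interfering subarray so that the dimensions match.

```python
import numpy as np

def partition_antennas(H_eff, beams, n_tx, sigma2=1.0):
    """Greedy antenna partitioning in the spirit of Algorithm 1: each antenna
    is assigned to the subarray whose user gains the largest SINR increment."""
    K = len(H_eff)
    subarrays = [[] for _ in range(K)]  # S_1, ..., S_K start empty

    def sinr(k, subs):
        # SINR[S_k] as in eq. (14), with channel and beam restricted per subarray
        def gain(k_rx, k_tx):
            idx = subs[k_tx]
            return np.linalg.norm(H_eff[k_rx][:, idx] @ beams[k_tx][idx]) ** 2
        if not subs[k]:
            return 0.0
        interf = sum(gain(k, i) for i in range(K) if i != k and subs[i])
        return gain(k, k) / (sigma2 + interf)

    for j in range(n_tx):  # every antenna element is assigned exactly once
        best_k, best_inc = 0, -np.inf
        for k in range(K):
            trial = [list(s) for s in subarrays]
            trial[k].append(j)
            inc = sinr(k, trial) - sinr(k, subarrays)  # eq. (16)
            if inc > best_inc:
                best_k, best_inc = k, inc
        subarrays[best_k].append(j)  # eq. (17): maximal increment wins
    return subarrays
```

With two users whose channels occupy disjoint halves of a four-element array, the greedy rule naturally assigns each half to the matching user, and the loop performs only $K \times N_{\rm{TX}}$ SINR evaluations as claimed in the complexity analysis.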
Then, the aim of the digital precoding is to eliminate the inter-user interference according to the maximal SINR criterion. The classical ZF and matched filter (MF) schemes [3], [4] are adopted for the digital precoding: \begin{align} {\bf{F}}_{{\textrm{BB}}}^{{\textrm{ZF}}}\left( k \right) = {\bf{\bar H}}_{{\mathcal{S}_k}}^H{\left( {{{{\bf{\bar H}}}_{{\mathcal{S}_k}}}{\bf{\bar H}}_{{\mathcal{S}_k}}^H} \right)^{ - 1}}, \end{align} \begin{align} {\bf{F}}_{{\rm{BB}}}^{{\rm{MF}}}\left( k \right) = \frac{{{\bf{\bar H}}_{{\mathcal{S}_k}}^H}}{{\left\| {{\bf{\bar H}}_{{\mathcal{S}_k}}^H} \right\|_F^2}}, \end{align} where ${{\bf{\bar H}}_{{\mathcal{S}_k}}} = {{\bf{\tilde H}}_{{\mathcal{S}_k}}}{\bf{F}}_{{\rm{RF}}}^{{\mathcal{S}_k}}$ is the analog effective channel of the $k$th user. \section{Simulation Results} In this section, the performance of the proposed solution is evaluated by extensive computer simulations. The exhaustive search over the dynamic subarray, the fixed subarray, and the conventional fully-connected architecture are chosen as benchmarks. To validate the superiority of the proposed solution, we compare the sum rate and the energy efficiency of the three array architectures. Finally, the computational complexity of the proposed solution is investigated. Specifically, two kinds of fixed subarrays are adopted as shown in Fig. 3, i.e., the adjacent structure and the interlaced structure, where $m = {{{N_{{\rm{TX}}}}} \mathord{\left/ {\vphantom {{{N_{{\rm{TX}}}}} {{N_{{\rm{RF}}}}}}} \right. \kern-\nulldelimiterspace} {{N_{{\rm{RF}}}}}}$. For the exhaustive search algorithm of the dynamic subarray architecture, the optimal subarrays are found by an exhaustive search over all the antenna elements and the analog precoding codewords, as described in [22]. For a fair comparison, the fully-connected architecture and the fixed subarray architecture adopt the same hybrid precoding method in the simulations, as proposed in [13].
\begin{figure} \centering \includegraphics[width=3.4 in]{6}\\ \caption{Two structures of the fixed subarray using different mapping strategies: each RF chain is connected to adjacent antennas in (a) and to interlaced antennas in (b).} \end{figure} Without loss of generality, the key simulation parameters are the same as those in [15], [17] and are listed in Table I. In the simulations, the geometric channel model with ${L_k}$ scatterers is adopted, as described in Subsection II-B. The distributions of the path delays and the azimuth angles are similar to those in the WINNER II SCM channel model [25]. \begin{table}[!htp] \begin{center} \caption{Simulation Parameters} \begin{tabular}{|c|c|} \hline Number of antennas at BS, $N_{\rm{TX}}$ & 16, 32, 64, 128, 256 \\\hline Number of antennas at user, $N_{\rm{RX}}$ & 2 \\\hline Number of users, $K$ & $2 \le K \le 8$ ($K = {N_{\rm{RF}}}$) \\\hline Number of scatterers, ${L_k}$ & $4$ \\\hline Range of azimuth angle & uniform distribution in \\ & $\left[ { - {{180}^ \circ },{{180}^ \circ }} \right]$ \\\hline Size of codebook & 32 \\\hline Antenna spacing & $0.5\lambda $ \\\hline Carrier frequency & $60\,{\rm{GHz}}$ \\\hline Power consumption of RF chain, ${P_{\rm{RF}}}$ & $250\,{\rm{mW}}$ \\\hline Power consumption of PS, ${P_{\rm{PS}}}$ & $1\,{\rm{mW}}$ \\\hline Power amplifier efficiency, $\eta $ & 0.38 \\\hline \end{tabular} \end{center} \end{table} \subsection{Performance Comparisons of Sum Rate} First, we investigate the sum rate of the proposed solution and the benchmark schemes. The BS is configured with 64 antennas (Uniform Linear Array, ULA) and 2 RF chains serving two users simultaneously. As observed from Fig. 4, the sum rate of the proposed solution significantly outperforms both kinds of fixed subarrays, because the antenna elements in the proposed solution are adaptively partitioned among the RF chains according to the long-term channel information.
Moreover, the sum rate of the proposed multi-user hybrid precoding scheme approaches that of the exhaustive search over the dynamic subarray architecture, with much lower computational complexity. Finally, the results show that the performance loss of the proposed solution compared with the full-connected architecture is negligible. For instance, at SNR = 0 dB the proposed solution obtains about 97.8\% of the sum rate achieved by the full-connected hybrid architecture, and exceeds the sum rate achieved by the fixed subarray architecture by more than 7\%. \begin{figure} \centering \includegraphics[width=3.7 in]{t1}\\ \caption{Performance comparisons of sum rate for various array architectures when the BS is equipped with 64 antennas (ULA) and 2 RF chains.} \end{figure} In Fig. 5, we investigate the performance of the three array architectures for different numbers of RF chains at the transmitter. The figures show the sum rate versus SNR in the case of 64 and 128 transmit antennas and two users with 2 receive antennas. The number of RF chains at the BS is 2, 4, or 8, respectively. The fixed subarray adopts the interlaced architecture in this simulation. As observed from Fig. 5(a), Fig. 5(b) and Fig. 5(c), the dynamic subarray clearly outperforms the fixed subarray in all settings, because the adaptive antenna partitioning algorithm obtains higher array processing gain. More importantly, the performance gaps among the three array architectures become more pronounced with 8 RF chains, as observed in Fig. 5(c). This validates our analysis that the dynamic subarray suffers only a slight performance degradation while offering lower power consumption and hardware cost than the full-connected architecture. For example, when the BS is configured with 128 antennas and 8 RF chains, the numbers of PSs and RF adders can be reduced to 128 and 0 in the dynamic subarray and the fixed subarray, compared with $8 \times 128$ and 128 in the full-connected architecture, respectively.
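The hardware counts quoted above can be captured in a small helper (illustrative only; the counts follow the comparison in the text, where the full-connected architecture needs one PS per (RF chain, antenna) pair plus one adder per antenna, while any subarray maps each antenna to a single RF chain):

```python
def hardware_counts(n_tx, n_rf, architecture):
    """Return (number of phase shifters, number of RF adders)."""
    if architecture == 'full-connected':
        return n_rf * n_tx, n_tx   # one PS per (RF chain, antenna), one adder per antenna
    if architecture in ('fixed', 'dynamic'):
        return n_tx, 0             # each antenna feeds exactly one RF chain
    raise ValueError(architecture)
```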
\textcolor{blue}{\begin{figure}[!htbp] \centering \subfigure[Sum rate vs. SNR (64, 128 antennas and 2 RF chains).]{ \label{fig:subfig:a} \includegraphics[width=3.4 in]{t2}} \subfigure[Sum rate vs. SNR (64, 128 antennas and 4 RF chains).]{ \label{fig:subfig:b} \includegraphics[width=3.4 in]{t3}} \hspace{0.5in} \subfigure[Sum rate vs. SNR (64, 128 antennas and 8 RF chains).]{ \label{fig:subfig:c} \includegraphics[width=3.4 in]{t4}} \caption{Performance comparisons of sum rate for different numbers of BS antennas, $N_{\rm{TX}}$.} \end{figure}} To further compare the proposed solution with the existing schemes, Fig. 6 shows the sum rate versus the number of transmit antennas when each user is equipped with 2 receive antennas. Here, the numbers of RF chains at the BS are set to 2, 4, 6 and 8, respectively. In Fig. 6 (a) and (b), the SNR is fixed at -10 dB and 0 dB, respectively. The results show that the performance of the proposed solution approaches that of the full-connected architecture with 2 RF chains. When more RF chains are connected to the antenna arrays, the performance loss of the dynamic subarray becomes more noticeable, as the price of significantly reduced hardware cost and power consumption. Fortunately, the proposed solution can still achieve a considerably high sum rate with $4\sim8$ RF chains and 128/256 antennas, e.g., about 91\% (SNR = -10 dB) and 96\% (SNR = 0 dB) of the sum rate of the full-connected architecture. The configurations of 128/256 antennas and $4\sim8$ RF chains at the transmitter are the most common use cases of mmWave Massive MIMO systems. This result is significant for practical implementations, since it means that the proposed multi-user hybrid precoding for the dynamic array performs almost as well as the full-connected architecture in the main use case of massive MIMO mmWave systems.
\textcolor{blue}{\begin{figure}[!htbp] \centering \subfigure[Sum rate vs. number of transmit antennas (SNR = -10 dB).]{ \label{fig:subfig:a} \includegraphics[width=3.6 in]{t5}} \subfigure[Sum rate vs. number of transmit antennas (SNR = 0 dB).]{ \label{fig:subfig:b} \includegraphics[width=3.7 in]{t6}} \caption{Comparison between 2, 4, 6 or 8 RF chains in dynamic subarray architectures with different numbers of antennas.} \label{fig:subfig} \end{figure}} \subsection{Performance Comparisons of Energy Efficiency} In this subsection, we investigate the energy efficiency of the multi-user hybrid precoding designs in the three antenna array architectures. According to [20], [21], the energy efficiency can be expressed as \begin{align} {\rm{EE}} = \frac{{\sum\limits_{k = 1}^K {{R_k}} }}{{{P_{total}}}} = \frac{{\sum\limits_{k = 1}^K {{R_k}} }}{{{P_t}/\eta + {N_{\rm{RF}}}{P_{\rm{RF}}} + {N_{\rm{PS}}}{P_{\rm{PS}}}}}, \end{align} where ${P_t}$ is the transmission power constrained by $\left\| {{\bf{F}}_{\textrm{RF}}^{\mathit{k}}{\bf{F}}_{\textrm{BB}}^{\mathit{k}}} \right\|_F^2 = {N_s}$, $\eta $ is the power amplifier efficiency, ${P_{\rm{RF}}}$ is the power consumed by an RF chain, ${P_{\rm{PS}}}$ is the power consumed by a PS, and ${N_{\rm{PS}}}$ is the number of required PSs. Here, we use ${P_{\rm{RF}}} = 250\,{\rm{mW}}$ [31] and ${P_{\rm{PS}}} = 1\,{\rm{mW}}$ [32] in the simulations. Fig. 7 shows the comparison of energy efficiency for different numbers of RF chains, $N_{\rm{RF}}$, where SNR = -10 dB, ${N_{\rm{TX}}} = 64$, ${N_{\rm{RX}}} = 2$. It is observed from Fig. 7 that, in accordance with the theoretical analysis, increasing the number of RF chains evidently degrades the energy efficiency, since the additional electronic components consume more power. Further, all the partial-connected hybrid architectures achieve higher energy efficiency than the fully-connected hybrid architecture.
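The energy efficiency formula above translates into a one-line computation (an illustrative sketch; powers in Watts, with the Table I values as defaults):

```python
def energy_efficiency(sum_rate, n_rf, n_ps, p_t=1.0, eta=0.38,
                      p_rf=0.25, p_ps=0.001):
    """Energy efficiency EE = (sum of user rates) / P_total, eq. (14).

    Defaults: P_RF = 250 mW and P_PS = 1 mW, as in Table I.
    """
    p_total = p_t / eta + n_rf * p_rf + n_ps * p_ps
    return sum_rate / p_total
```

For the same sum rate, the subarray architectures win on EE simply because $N_{\rm PS}$ drops from $N_{\rm RF} N_{\rm TX}$ to $N_{\rm TX}$.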
Moreover, the proposed hybrid precoding solution for the dynamic subarray is more energy efficient than the hybrid precoding design for the fixed subarray. \begin{figure} \centering \includegraphics[width=3.7 in]{t7}\\ \caption{Energy efficiency comparison against the numbers of RF chains $N_{\rm{RF}}$, where ${N_{\rm{TX}}} = 64$, ${N_{\rm{RX}}} = 2$, SNR = -10 dB.} \end{figure} In Fig. 8, the energy efficiency versus the number of transmit antennas is illustrated with ${N_{\rm{RF}}} = 4$, ${N_{\rm{RX}}} = 2$, and SNR = -10 dB. Here, the BS is equipped with $N_{\rm{TX}}$ antennas, where $N_{\rm{TX}}= 16, 32, 64, 128, 256$, respectively. The results in Fig. 8 show that the energy efficiency improves remarkably as the number of antennas increases. Moreover, the proposed solution achieves better energy efficiency than both the fixed subarray and the full-connected architecture. \begin{figure} \centering \includegraphics[width=3.7 in]{t8}\\ \caption{Energy efficiency comparison against the numbers of transmit antennas $N_{\rm{TX}}$, where ${N_{\rm{RF}}} = 4$, ${N_{\rm{RX}}} = 2$, SNR = -10 dB.} \end{figure} In conclusion, according to the simulation results, the proposed multi-user precoding for the dynamic subarray architecture is more energy efficient than the conventional multi-user precoding schemes. \subsection{Computational Complexity} Consider a multi-user downlink system with ${N_{{\rm{RF}}}}$ RF chains and ${N_{{\rm{TX}}}}$ transmit antennas. The problem in (15) is a combinatorial optimization problem, for which an exhaustive search over all possible cases is required to find the optimal solution. The resulting computational complexity is given by \begin{align} \frac{1}{{({N_{{\rm{RF}}}})!}}\sum\limits_{k = 0}^{{N_{{\rm{RF}}}}} {{{( - 1)}^{{N_{{\rm{RF}}}} - k}}} \left( {\begin{array}{*{20}{c}} {{N_{{\rm{RF}}}}}\\k\end{array}} \right){k^{{N_{{\rm{TX}}}}}}.
\end{align} Note that this complexity is large even for small numbers of RF chains and antennas. In contrast, by assigning the antennas one at a time, the complexity of our antenna subarray partitioning algorithm is reduced to ${N_{{\rm{RF}}}} \times {N_{{\rm{TX}}}}$. \section{Conclusion} In this paper, we developed a multi-user hybrid precoding framework for the dynamic subarray of mmWave Massive MIMO systems. In particular, the proposed antenna subarray partitioning algorithm guaranteed user fairness and reduced the computational complexity, since each antenna element was allocated based on the maximal SINR increment criterion over all selected users. Simulation results showed that the sum rate and energy efficiency achieved by the dynamic subarray architecture significantly outperform those of the fixed subarray architectures. Furthermore, the energy efficiency of the proposed solution in the dynamic subarray case clearly outperformed that of the fully-connected architecture, at the cost of only a slight loss in sum rate. The simulation results were consistent with the theoretical analysis and demonstrated that the proposed multi-user scheme achieves the right trade-off between hardware complexity and system performance for mmWave Massive MIMO systems. \appendices \section*{Acknowledgment} This paper was supported by National Natural Science Foundation of China under Grant 61871321, National Science and Technology Major Project under Grant 2016ZX03001016, and Innovation Team Project of Shaanxi Province under Grant 2017KCT-30-02. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} \label{sec:intro} Reflected light from objects and radiance from a light source carry information at various wavelengths, in both the visible and the non-visible spectrum to the human eye \cite{21,6,1,2,23}. Unlike conventional cameras, which provide images with 1 channel (monochrome) or 3 channels (e.g. RGB or YCbCr), recent hyperspectral imaging systems enable researchers to capture data from observed scenes with high spatial resolution and dense spectral resolution covering both the visible and the non-visible spectrum \cite{6,1,2,23}. These data have been used in many applications (remote sensing \cite{6,5,7}, scene analysis or object detection \cite{6,5,3,4,7,8,9}, spectral estimation \cite{9,10,11,12}, etc.). Visual attention modeling (saliency detection) \cite{13,14,24,16,17} is a promising research field for practical applications, which may benefit many of the hyperspectral data processing applications stated above. For instance, a few studies using low-level features on hyperspectral images demonstrated that salient object detection can be achieved \cite{3,4,7,8}. In contrast to these models, which rely on low-level or hand-crafted features to obtain saliency maps, higher-level features can be extracted and used in a self-supervised manner for hyperspectral data, where each spectral band's contribution to the representation is learned with an unsupervised neural network trained for a segmentation task \cite{25}. In addition, previous works on hyperspectral saliency for natural scenes were mostly tested on datasets with only a few hyperspectral images (\cite{3} used 13 images and \cite{4} used 17 images) collected and selected from various hyperspectral data. Moreover, these hyperspectral data were not collected and created for the purpose of salient object detection.
In addition, quantitative evaluations of these models were mostly limited to Precision-Recall and F-measure metrics. Therefore, we believe that a dataset created specifically for salient object detection should be used for evaluating the models with various metrics. \begin{figure*}[ht] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=15.5cm]{Frameworkver2.png}} \end{minipage} \caption{Proposed hyperspectral salient object detection model with unsupervised deep features.} \label{fig:model} \end{figure*} \textbf{Proposed work and contributions:} In this work, we propose a salient object detection model (see Figure 1) for hyperspectral images by applying manifold ranking \cite{24} to self-supervised Convolutional Neural Network (CNN) features (high-level features) learned through an unsupervised image segmentation task \cite{25}. Self-supervision of the CNN continues until the clustering loss, or the saliency map computed from the CNN features, changes by less than a defined error between iterations. When the self-supervision procedure terminates, the saliency map from the last iteration is used as the result of the proposed model. \begin{table} \centering \caption{The detailed configuration of the CNN model. Note that \emph{BN} represents batch normalization operation.} \begin{tabular}{ c | c | c | c } \hline layer & \# filters & kernel & stride \\ \hline\hline Conv. + Relu + BN & 64 & 3 $\times$ 3 & 1 $\times$ 1 \\ Maxpooling & - & 2 $\times$ 2 & 2 $\times$ 2 \\ \hline Conv. + Relu + BN & 64 & 3 $\times$ 3 & 1 $\times$ 1 \\ Maxpooling & - & 2 $\times$ 2 & 2 $\times$ 2 \\ \hline Conv. + Relu + BN & 64 & 3 $\times$ 3 & 1 $\times$ 1 \\ \hline Upsampling & - & 2 $\times$ 2 & 2 $\times$ 2 \\ Deconv. + Relu + BN & 64 & 3 $\times$ 3 & 1 $\times$ 1 \\ \hline Upsampling & - & 2 $\times$ 2 & 2 $\times$ 2 \\ Deconv.
+ Relu + BN & 64 & 1 $\times$ 1 & 1 $\times$ 1 \\ \hline \end{tabular} \label{tab:CNN_model} \end{table} To the best of our knowledge, there is no prior work on hyperspectral salient object detection for natural scenes that takes a self-supervised approach combining unsupervised CNN-based segmentation with salient object detection on the scene. The contributions and differences of the proposed model are as follows. First, the unsupervised segmentation method of \cite{25} relies on a cluster refinement process based on superpixels obtained from the input color image. In this work, however, we apply the refinement process based on superpixels obtained from the high-level features of the CNN (see Fig.1) that takes the hyperspectral image as input. Interestingly, this resulted in better saliency detection performance and appeared to yield faster convergence for the segmentation task. Second, in contrast to the saliency model with manifold ranking (MR) in \cite{24}, which uses low-level features, we utilized self-supervised deep features with higher-order semantics, which appears to improve the saliency detection performance drastically compared to \cite{24}. Third, unlike the CNN model used in \cite{25}, we included max-pooling for down-sampling and replaced the last two CNN layers with deconvolution layers, as in Table \ref{tab:CNN_model}. Finally, self-supervision of the CNN model does not need to run until a defined maximum iteration, because we check the clustering loss and the saliency map for termination; in addition, the saliency results of the proposed model seem to converge faster than the segmentation task in most cases when self-supervised deep features are used for manifold ranking based saliency detection. Experiments demonstrated that the proposed saliency detection algorithm on hyperspectral images outperforms state-of-the-art hyperspectral saliency models, including the original MR \cite{24} saliency model.
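The layer schedule in the CNN configuration table above can be sanity-checked with a short sketch (our own illustrative code, not from the paper); assuming `same` padding for the 3$\times$3 (de)convolutions, the two max-pooling stages are exactly undone by the two upsampling stages:

```python
# Layer schedule from the CNN configuration table: (type, kernel, stride).
LAYERS = [
    ('conv', 3, 1), ('pool', 2, 2),
    ('conv', 3, 1), ('pool', 2, 2),
    ('conv', 3, 1),
    ('up', 2, 2), ('deconv', 3, 1),
    ('up', 2, 2), ('deconv', 1, 1),
]

def output_size(h):
    """Track spatial size through the network; with 'same' padding only
    pooling/upsampling change it, so output resolution equals input."""
    for kind, _kernel, stride in LAYERS:
        if kind == 'pool':
            h //= stride
        elif kind == 'up':
            h *= stride
    return h
```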
\section{Self-supervised salient object detection on hyperspectral images} \label{sec:hs_saliency} \begin{table*}[!ht] \centering \caption{Performance of the compared saliency detection methods; red, green, and blue values indicate the best three results, in that order. Note that larger AUC$_{Borji}$, CC, F$_{\beta}$, maxF$_{\beta}$, aveF$_{\beta}$, Precision, Recall, and NSS values, and smaller KLdiv values, mean better performance.} \begin{tabular}{ | c | c | c | c | c | c | c | c | c | c | } \hline \small Model & \small AUC$_{Borji}$ & \small CC & \small F$_{\beta}$ & \small maxF$_{\beta}$ & \small aveF$_{\beta}$ & \small Precision & \small Recall & \small NSS & \small KLdiv \\ \hline\hline \small Itti \emph{et al.} \cite{13,3} & 0.7774 & 0.3536 & 0.3530 & 0.3754 & 0.1674 & 0.1909 & 0.2329 & 1.3636 & 2.3186 \\ \small SED \cite{3} & 0.7691 & 0.2797 & 0.3082 & 0.3420 & 0.1541 & 0.3479 & 0.1207 & 1.3498 & 2.3426 \\ \small SAD \cite{3} & 0.7707 & 0.3034 & 0.2635 & 0.2662 & 0.1397 & 0.1547 & 0.2314 & 1.1767 & 2.2719 \\ \small GS \cite{3} & 0.7781 & 0.3403 & 0.3004 & 0.3694 & 0.1861 & 0.2753 & 0.2275 & 1.5637 & 2.1944 \\ \small SED-OCM-GS \cite{3} & 0.8021 & 0.3730 & 0.3260 & 0.3634 & 0.1708 & 0.2757 & 0.2209 & \textcolor{blue}{\textbf{1.5908}} & 2.1707 \\ \small SED-OCM-SAD \cite{3} & 0.8108 & 0.3882 & 0.2635 & 0.2662 & 0.1397 & 0.1547 & 0.2314 & 1.5301 & \textcolor{blue}{\textbf{2.1601}} \\ \small SGC \cite{4}* & \textcolor{blue}{\textbf{0.8252}} & \textcolor{blue}{\textbf{0.5012}} & 0.1891 & 0.2214 & 0.1815 & 0.2344 & 0.2822 & 1.4739 & 2.2154 \\ \small HS-MR \cite{24}** & 0.7369 & 0.3492 & \textcolor{blue}{\textbf{0.3636}} & \textcolor{blue}{\textbf{0.4308}} & \textcolor{blue}{\textbf{0.3638}} & \textcolor{blue}{\textbf{0.4397}} & \textcolor{blue}{\textbf{0.3587}} & 1.2702 & 3.7637 \\ \hline \textbf{\small SUDF$_{HS-Slic}$} & \textcolor{green}{\textbf{0.8509}} & \textcolor{green}{\textbf{0.5563}} & \textcolor{green}{\textbf{0.4580}} &
\textcolor{green}{\textbf{0.5355}} & \textcolor{green}{\textbf{0.4430}} & \textcolor{green}{\textbf{0.5346}} & \textcolor{green}{\textbf{0.4449}} & \textcolor{red}{\textbf{2.1938}} & \textcolor{green}{\textbf{1.7853}} \\ \textbf{\small SUDF$_{HF-Slic}$} & \textcolor{red}{\textbf{0.8602}} & \textcolor{red}{\textbf{0.5829}} & \textcolor{red}{\textbf{0.4671}} & \textcolor{red}{\textbf{0.5654}} & \textcolor{red}{\textbf{0.4668}} & \textcolor{red}{\textbf{0.5436}} & \textcolor{red}{\textbf{0.4834}} & \textcolor{green}{\textbf{2.1200}} & \textcolor{red}{\textbf{1.7241}} \\ \hline \end{tabular} \begin{tabular}{l} \small *SGC \cite{8}: the code for \cite{8} was not available, so the implementation was done in Matlab based on the paper \cite{8}. \\ \small **HS-MR \cite{24} saliency detection was originally designed for color images; however, the code published by its authors can be\\ \small applied to hyperspectral data for the MR and superpixel steps. \end{tabular} \label{tab:Metrics_comp} \end{table*} To achieve the salient object detection goal in Fig. \ref{fig:model}, we use the unsupervised backpropagation-based semantic segmentation method of \cite{25} to learn high-level visual features, which are then used in the manifold ranking algorithm \cite{24} for saliency computation. Given a $k$-channel hyperspectral image $I = \{ H^{n} \}_{n=1}^{k}$ as input to our model, first, all the pixel values are normalized to [0,1]. Then, we adopt a CNN model that takes the hyperspectral image $I$ as input and extract $p$-dimensional feature maps $\{ x_{n} \}$ from the Batch-Normalization (BN) output of its last deconvolution layer. The detailed configuration of the CNN model is shown in Table \ref{tab:CNN_model}. Note that the spatial resolutions of the output feature map and the input hyperspectral image $I$ are identical.
After normalizing the learned response maps via batch normalization as in \cite{25}, we obtain cluster labels $\{ c_{n} \}$ by applying argmax classification to the feature maps, classifying each pixel by the dimension of $\{ y_{n} \}$ with the maximum value \cite{25}. Then, we apply the refinement process to $\{ c_{n} \}$ based on the superpixels obtained from the high-level features of the CNN (see Fig. \ref{fig:model}), contrary to \cite{25}, which uses superpixels based on the input data (e.g. the hyperspectral image). The refinement assigns all pixels in a superpixel the same cluster label, namely the most frequent label within that superpixel area \cite{25}. \begin{figure}[h] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=7.5cm]{iteratation_results_segmentation_saliencyver3.png}} \end{minipage} \caption{(top) sample image and segmentation results at different iterations during self-supervision, (bottom) ground-truth image and saliency maps from different iterations.} \label{fig:iteration_results} \end{figure} As in supervised learning, we use the softmax cross-entropy loss between the network responses $\{ y_{n} \}$ and the refined cluster labels $\{ c_{n} \}$ at iteration \emph{n} \cite{25}. Using this error with back-propagation, the parameters of the convolution and deconvolution filters are updated by gradient descent with momentum \cite{25}. As in \cite{25}, the method of Glorot and Bengio \cite{29} is employed for parameter initialization, which draws from a uniform distribution scaled according to the input and output layer sizes. While self-supervising the network for the unsupervised segmentation task, at each iteration $\{ x_{n} \}$ is used to obtain a saliency map by employing multi-channel MR \cite{24}.
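The refinement step described above can be sketched in a few lines (illustrative code, assuming integer label and superpixel maps stored as numpy arrays):

```python
import numpy as np
from collections import Counter

def refine_labels(cluster_labels, superpixels):
    """Give every pixel inside a superpixel the most frequent
    cluster label within that superpixel."""
    refined = cluster_labels.copy()
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        majority = Counter(cluster_labels[mask].tolist()).most_common(1)[0][0]
        refined[mask] = majority
    return refined
```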
For the model, we use two main termination conditions: \begin{eqnarray} {\cal L}_{i+1}-{\cal L}_{i} \leq \varepsilon_{1} \\ \left\| S_{i+1}-S_{i} \right\| \leq \varepsilon_{2} \end{eqnarray} where ${\cal L}_{i+1}$ and ${\cal L}_{i}$ denote the cross-entropy losses of steps $(i+1)$ and $(i)$, $S_{i+1}$ and $S_{i}$ denote the predicted saliency maps of steps $(i+1)$ and $(i)$, and $\varepsilon_{1}$ and $\varepsilon_{2}$ are small non-zero constants used to terminate the training process. Training also stops when the iteration count $N$ reaches the maximum value $\kappa=200$. In Fig. \ref{fig:iteration_results}, unsupervised segmentation outputs and computed saliency maps are shown for different iterations of self-supervised learning. It can be seen that the saliency maps converge even before the self-supervised clustering reaches its final segmentation. \section{Experimental Results} \label{sec:experiments} \textbf{Metrics:} We made an extensive evaluation (see Table 2) based on various performance metrics \cite{22,26,27,28}, such as Area Under Curve (AUC$_{Borji}$), Cross Correlation (CC), Normalized Scanpath Saliency (NSS), Kullback-Leibler divergence (KLdiv), Precision, Recall, F-measure (F$_{\beta}$, maxF$_{\beta}$, aveF$_{\beta}$), and Precision-Recall curves. \textbf{Dataset:} We evaluated the model on the hyperspectral salient object detection (HS-SOD) dataset \cite{23}, which consists of 60 hyperspectral images with their respective binary ground-truth maps of salient objects. The dataset details can be seen in \cite{23}, and the dataset is available at \cite{29}. For each image, the spatial resolution is 768$\times$1024 pixels, and there are 81 spectral channels covering the wavelengths between 380 and 780 nm (visible spectrum) at 5 nm intervals \cite{23}.
\begin{figure*}[!ht] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=15cm]{saliency_examples_allmodels_ver5.png}} \end{minipage} \caption{(a) Sample scenes of the hyperspectral data rendered in sRGB with their respective (b) ground-truth salient objects, and saliency map results of the compared models: (c) Itti \emph{et al.} \cite{13,3}, (d) SAD \cite{3}, (e) SED \cite{3}, (f) SED-OCM-GS \cite{3}, (g) SED-OCM-SAD \cite{3}, (h) SGC \cite{4}, (i) HS-MR \cite{24}, (j) \textbf{Proposed SUDF$_{HF-Slic}$}} \label{fig:sal_results_models} \end{figure*} \textbf{Evaluation:} We selected \cite{3} and \cite{4} for comparison, as they are hyperspectral salient object detection models for natural scenes. In \cite{3}, various approaches were tested on hyperspectral data, so we also apply them to the HS-SOD dataset \cite{23} for comparison: i) saliency computation from spectral distances between spatial regions, using the spectral Euclidean distance (SED) and the spectral angle distance (SAD) \cite{3,23}; ii) the color opponency method of \cite{13,3}, with Red-Green and Blue-Yellow differences replaced by spectral grouping, i.e., Euclidean distances between spectral group (GS) vectors obtained by dividing the spectral bands into four groups (G1, G2, G3, G4) \cite{3,23}; iii) as in \cite{23}, spectral distance based saliency combined with orientation based saliency, yielding combinations such as SED-OCM-GS and SED-OCM-SAD; iv) saliency maps from Itti et al. \cite{13}, provided as a baseline for hyperspectral saliency comparison in \cite{3,23}. As a more recent work, we also tested saliency from spectral gradient contrast (SGC) proposed in \cite{4}. In \cite{4}, local region contrast is computed from superpixels obtained via spatial and spectral gradients, which is then used to calculate the spectral gradient contrast (SGC) for saliency detection \cite{4, 23}.
In addition, saliency detection by graph-based manifold ranking (MR) is also applied to the hyperspectral dataset, referred to as HS-MR \cite{24}, to compare with the proposed model \textbf{SUDF$_{HF-Slic}$} (SUDF: Saliency from Unsupervised Deep Features), which uses higher-level features for both MR based saliency and cluster refinement, in contrast to the original approaches in \cite{24} and \cite{25}. Furthermore, to demonstrate the performance improvement in saliency detection when the cluster refinement is done on high-level features, we also implemented and compared a variant, referred to as \textbf{SUDF$_{HS-Slic}$}, in which cluster refinement is based on the input hyperspectral image. As can be seen in Table \ref{tab:Metrics_comp}, the proposed \textbf{SUDF$_{HF-Slic}$} performs better than the other approaches on all metrics, except for being second best on NSS. However, the best performing model on the NSS metric, by a very small margin, is also a variation of the proposed approach, \textbf{SUDF$_{HS-Slic}$}, which applies cluster refinement based on superpixels obtained directly from the hyperspectral image, as in the original work \cite{25}. Moreover, \textbf{SUDF$_{HF-Slic}$} demonstrates that higher-level features, even when learned through self-supervision, are more beneficial for manifold-ranking based saliency computation, since it outperformed HS-MR \cite{24}, which uses low-level features, on all evaluation metrics. In Fig. \ref{fig:sal_results_models}, sample hyperspectral scenes rendered in sRGB are shown with their respective ground-truth salient object maps, together with the saliency maps of the various approaches, to compare the proposed model \textbf{SUDF$_{HF-Slic}$} with the other models.
It can be seen from the saliency maps that our model also performs better qualitatively. \section{Conclusion} \label{sec:conclusion} In this work, we demonstrated a hyperspectral salient object detection approach based on self-supervised deep features in a multi-task model. The parameters of the CNN model are updated based on the cross-entropy loss of the clustering performance, and saliency is computed from the features learned by the unsupervised segmentation task, with saliency convergence as the termination criterion for the self-supervised learning procedure. Evaluation on the HS-SOD dataset \cite{23} demonstrates promising results for salient object detection with the proposed approach. As future work, we would like to investigate how to improve the representation of the hyperspectral image during the self-supervision process (e.g., adding a sparsity loss, an orthogonality constraint, or a decoder-based image generation loss) to improve salient object detection accuracy and to speed up the convergence of the clustering and saliency map results. Moreover, we would like to investigate alternatives to the MR model for saliency computation, since it assumes a boundary prior for background regions.
\section{Introduction} For simplicity, we choose the set ${\mathbb C}$ of complex numbers as our ground field, although most results are valid for arbitrary fields of characteristic 0. Let $V$ be a rational representation of a reductive group $G$ and denote the ring of polynomial functions on $V$ by ${\mathbb C}[V]$. The group $G$ also acts on ${\mathbb C}[V]$ and the ring of invariants is $${\mathbb C}[V]^G=\{f\in {\mathbb C}[V]\mid \mbox{$g\cdot f=f$ for all $g\in G$}\}.$$ It is well known that the ring of invariants ${\mathbb C}[V]^G=\bigoplus_{d=0}^\infty {\mathbb C}[V]^G_d$ is a finitely generated graded subring of the polynomial ring ${\mathbb C}[V]$ (see \cite{Haboush,Hilbert1,Hilbert2,Nagata}). All representations in this paper will be rational representations by default. A fundamental question in invariant theory is to describe the generators of an invariant ring and their relations. Invariant rings play a central role in the Geometric Complexity Theory (GCT) approach to the P vs NP problem. This connection to computational complexity results in new problems in invariant theory, albeit with a different flavor. As one might expect, these problems are more quantitative in nature, asking how easy or hard the invariant ring is to compute. There are well-understood notions of hardness of computation in computational complexity. We refer to \cite{GCT5} for precise details, as well as numerous conjectures and open problems in invariant theory that are inspired by computational complexity. From the perspective of GCT, a central problem of interest is that of degree bounds for generators. Strong upper bounds for the degrees of generators have been studied: an approach via understanding the null cone was proposed by Popov (see~\cite{Popov1,Popov2}) and improved by the first author (see \cite{Derksen1}).
The zero set of a set of polynomials $S\subseteq {\mathbb C}[V]$ is $$ {\mathbb V}(S)=\{v\in V\mid f(v)=0\mbox{ for all $f\in S$}\}. $$ Hilbert's null cone $\mathcal{N}\subseteq V$ is defined by $\mathcal{N}={\mathbb V}(\bigoplus_{d=1}^\infty {\mathbb C}[V]_d^G)$. \begin{definition} We define $\sigma_G(V)$ to be the smallest integer $D$ such that the non-constant homogeneous invariants of degree $\leq D$ define the null cone, so $$ \textstyle\sigma_G(V)=\min\Big\{D\,\Big|\, \mathcal{N}={\mathbb V}\big(\bigoplus_{d=1}^D {\mathbb C}[V]^G_d\big)\Big\}. $$ General upper bounds for $\sigma_G(V)$ were first given by Popov (see~\cite{Popov1,Popov2}), and improved by the first author in \cite{Derksen1}. \begin{remark} \label{hsop} The number $\sigma_G(V)$ can also be defined as the smallest integer $D$ such that ${\mathbb C}[V]^G$ is a finite extension over the subalgebra generated by $\oplus_{d = 0}^D {\mathbb C}[V]^G_d$. \end{remark} We define $\beta_G(V)$ to be the smallest integer $D$ such that invariants of degree $\leq D$ generate ${\mathbb C}[V]^G$, i.e., $$ \textstyle\beta_G(V) = \min\Big\{D\,\Big|\, \bigoplus_{d=0}^D {\mathbb C}[V]^G_d \text{ is a generating set for } {\mathbb C}[V]^G\Big\}. $$ \end{definition} The number $\beta_G(V)$ can also be seen as the largest degree of a minimal set of (homogeneous) generators for ${\mathbb C}[V]^G$. It is easy to see that $\beta_G(V)\geq \sigma_G(V)$. The first author showed in \cite{Derksen1} that $\beta_G(V)\leq \max\{2,\frac{3}{8}r\sigma_G(V)^2\}$, where $r$ is the Krull dimension of ${\mathbb C}[V]^G$, which is bounded above by $\dim V$. In this paper, we focus instead on lower bounds. The key idea is to compare two invariant rings via a surjective map between them. \begin{lemma} \label{surj.bw.inv} Suppose $U_1,U_2$ are representations of $G$ and $H$ respectively, such that we have a degree non-increasing surjective homomorphism $\phi: {\mathbb C}[U_1]^G \twoheadrightarrow {\mathbb C}[U_2]^H$. 
Then we have $$ \beta_G(U_1) \geq \beta_H(U_2) \text{ and } \sigma_G(U_1) \geq \sigma_H(U_2). $$ \end{lemma} \begin{proof} It is clear that $\beta_G(U_1) \geq \beta_H(U_2)$ since surjections preserve generating sets. For the null cone, the argument is slightly more involved, but follows from Remark~\ref{hsop} since surjections preserve finite extensions. \end{proof} The source of such surjective maps for us will be the Grosshans principle (\cite{Grosshans}). \begin{theorem} [Grosshans principle] \label{gross.princ} Let $W$ be a representation of $G$, and let $H$ be a closed subgroup of $G$. Then we have an isomorphism $$ \psi:({\mathbb C}[G]^H \otimes {\mathbb C} [W])^G \longrightarrow {\mathbb C}[W]^H. $$ \end{theorem} We will derive the following result from the Grosshans principle. \begin{theorem} \label{main} Let $V,W$ be representations of $G$. Suppose $v \in V$ is such that its orbit $G \cdot v$ is closed. Let $H = {\rm Stab}_G(v) =\{g \in G \ |\ g \cdot v = v\}$ be a closed reductive subgroup of $G$. Then we have a degree non-increasing surjection $$ \phi: {\mathbb C}[V \oplus W]^G \twoheadrightarrow {\mathbb C}[W]^H. $$ In particular, we have $$ \beta_G(V \oplus W) \geq \beta_H(W) \text{ and } \sigma_G(V \oplus W) \geq \sigma_H(W). $$ \end{theorem} In order to use this method to find invariant rings for $G$ with large degree lower bounds, there are three main steps, each of which is relatively challenging. First, we have to show that the orbit of a certain point $v$ is closed. Next, we must compute its stabilizer $H$. Finally, we need to find a $G$-representation $W$ for which $\beta_H(W)$ is large. We develop the techniques in this paper in a general setup, as we believe they are likely to be useful in many situations. To show that orbits are closed, we will develop a criterion using the moment map (see Theorem~\ref{crit.co}). We will pick our point carefully, so as to ensure that its stabilizer is a torus.
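To see why torus stabilizers are a good source of large degree bounds, consider a one-dimensional illustration (not from the paper; the weights are chosen hypothetically): ${\mathbb C}^*$ acting on ${\mathbb C}^2$ with weights $(k,-1)$ has invariant ring generated by the single monomial $xy^k$, of degree $k+1$. A small symbolic sketch:

```python
import sympy as sp

# Illustrative sketch (weights chosen hypothetically): C* acts on C^2 by
# t.(x, y) = (t^k * x, t^(-1) * y).  A monomial x^a y^b has weight k*a - b,
# so it is invariant iff k*a = b; the smallest non-constant solution is
# (a, b) = (1, k), giving the single generator x*y^k, of degree k + 1.
t, x, y = sp.symbols('t x y')
k = 5

def act(f):
    # the action of the torus element t on a polynomial function
    return sp.expand(f.subs([(x, t**k * x), (y, y / t)], simultaneous=True))

f = x * y**k
assert sp.expand(act(f) - f) == 0   # f is invariant

# exhaustive weight check: no non-constant invariant monomial of degree <= k
small = [(a, b) for a in range(k + 1) for b in range(k + 1)
         if 0 < a + b <= k and k * a - b == 0]
assert small == []                  # so beta = sigma = k + 1 for this action
```

Taking $k$ large shows that even a one-dimensional torus can force generators of arbitrarily high degree, which is the mechanism exploited via the matrices $M$ and $N$ below.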
For torus actions, it is relatively easy to construct examples with exponential lower bounds. One of the main intentions of this paper is to demonstrate that exponential lower bounds can be achieved for fairly simple representations of $\operatorname{SL}_n$. To this end, we are able to prove lower bounds for the action on $4$-tuples of cubic forms. We obtain: \begin{theorem} \label{lbsln} Let $V$ be a vector space of dimension $3n$. Then $$ \beta_{\operatorname{SL}(V)}(S^3(V)^{\oplus 4}) \geq \sigma_{\operatorname{SL}(V)}(S^3(V)^{\oplus 4}) \geq \textstyle\frac{2}{3} (4^n - 1). $$ \end{theorem} We note that $\dim(S^3(V)^{\oplus 4}) = O(n^3)$, and $\dim(\operatorname{SL}(V)) = O(n^2)$. So, the group and the representation are polynomially sized in $n$, while the lower bound for the degree of generators is exponential in $n$. Another goal of this paper is to gain a better understanding of the computational hardness of the invariant ring for tensor actions. More precisely, consider the action of $\operatorname{SL}(V_1) \times \operatorname{SL}(V_2) \times \dots \times \operatorname{SL}(V_d)$ on $(V_1 \otimes V_2 \otimes \dots \otimes V_d)^{\oplus m}$ defined on each copy of $V_1 \otimes V_2 \otimes \dots \otimes V_d $ by $$ (g_1,g_2,\dots,g_d) \cdot v_1 \otimes \dots \otimes v_d = g_1v_1 \otimes \dots \otimes g_dv_d. $$ A major open problem in complexity is the problem of polynomial identity testing (PIT). A polynomial time algorithm for PIT would be a major step towards the celebrated P vs NP problem, see \cite{KI,GCT5} for details. The null cone membership and orbit closure intersection problems for various invariant rings are closely related to subclasses of PIT problems, see \cite{GCT5}. The vital role of degree bounds is exemplified for the above tensor action in the case $d = 2$ (often called matrix semi-invariants).
The `polynomial' degree bounds proved in \cite{DM,DM-arbchar} for matrix semi-invariants were instrumental in giving an algebraic polynomial time algorithm for the null cone membership and orbit closure intersection problems in this case, see \cite{DM,DM2,IQS,IQS2}. The algorithm for the null cone problem for matrix semi-invariants gives a polynomial time algorithm for non-commutative rational identity testing. The algorithm for the orbit closure intersection problem solves another subclass of PIT problems in polynomial time. An analytic algorithm over ${\mathbb Q}$ for this subclass appears in \cite{AGOLW}. Despite the analytic nature of the algorithm, the polynomiality of degree bounds is crucial to show that the algorithm runs in polynomial time! In summary, degree bounds are an essential component in understanding the challenges and boundaries of algorithmic efficiency. The cases when $d \geq 3$ have also been the subject of recent interest, see for example \cite{BGOWW}. We prove exponential lower bounds for these tensor actions. We show: \begin{theorem} \label{tensor-lbs} Suppose $V,W,Z$ are vector spaces of dimension $3n$. Then, for the tensor action of $G = \operatorname{SL}(V) \times \operatorname{SL}(W) \times \operatorname{SL}(Z)$ on $(V \otimes W \otimes Z)^{\oplus 9}$, we have $$ \beta_G\big((V \otimes W \otimes Z)^{\oplus 9}\big) \geq \sigma_G\big((V \otimes W \otimes Z)^{\oplus 9}\big) \geq 4^n-1. $$ \end{theorem} Again, let us point out that the dimensions of the group and the representation are polynomial in $n$, but the lower bound on the degree of generation is exponential in $n$. \section{Preliminaries from linear algebra} \label{prelim} We will first set up some preliminaries from linear algebra. An $n \times m$ matrix $A$ should be interpreted as a linear map $A: {\mathbb Q}^m \rightarrow {\mathbb Q}^n$. The null space of $A$ is defined as $$ \mathcal{Z}(A) = \{v \in {\mathbb Q}^m\ |\ Av = 0\}. $$ We will be interested in non-negative integral points in the null space. So, we define $$ \mathcal{I}(A) = \mathcal{Z}(A) \cap {\mathbb Z}_{\geq 0}^m.
$$ Observe that $\mathcal{I}(A)$ is a monoid under addition. Further, we will be interested in the minimal generators of the monoid $\mathcal{I}(A)$. So, we define $$ \mathcal{GI}(A) = \{v \in \mathcal{I}(A)\ |\ v \neq w_1 + w_2 \ \forall\ w_1,w_2 \in \mathcal{I}(A)\setminus\{0\}\}. $$ It is easy to see that $\mathcal{GI}(A)$ is a minimal generating set for the monoid $\mathcal{I}(A)$. We will be interested in computing this in two specific cases. The first is the $n \times (n+1)$ matrix \begin{equation} M = \begin{pmatrix} 1 & 0 &\dots & \dots & 0 & -4 & 3 \\ -4 & 1 & \ddots & & 0 & 0 & 0\\ 0 & -4 & \ddots & \ddots & \vdots & \vdots & \vdots\\ \vdots & 0 & \ddots & \ddots & \vdots & \vdots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 1 & 0 & 0\\ 0 & \dots & 0 & 0 & -4 & 1 & 0 \end{pmatrix} \end{equation} \begin{lemma} We have $\mathcal{Z}(M) = {\mathbb Q} \cdot \left(1,4,16,\dots,4^{n-1}, \frac{4^n -1}{3}\right)^t$. \end{lemma} \begin{proof} It is clear that the matrix $M$ has full rank, i.e., $\rk(M) = n$. By the rank-nullity theorem, we know that $\mathcal{Z}(M)$ is $1$-dimensional. The lemma follows by checking that $M$ kills $\left(1,4,16,\dots,4^{n-1}, \frac{4^n -1}{3}\right)^t$. \end{proof} \begin{corollary} \label{gi.calc.m} The set $\mathcal{GI}(M)$ consists of only one vector. Further, we have $$ \mathcal{GI}(M) = \bigg\{ \left(1,4,16,\dots,4^{n-1}, \frac{4^n -1}{3}\right)^t \bigg\}. $$ \end{corollary} \begin{proof} Since $\mathcal{Z}(M)$ is $1$-dimensional, the set $\mathcal{GI}(M)$ consists of at most one element. This will be the smallest non-negative integral element in $\mathcal{Z}(M)$, and this is the one given in the statement of the corollary.
\end{proof} The second case we will be interested in is the $3n \times (3n-1)$ matrix $$ N = \begin{pmatrix} B & I_3 & &&& \\ & P & I_3 &&& \\ && P & \ddots && \\ &&& \ddots & I_3 & \\ &&&& P & A\\ \end{pmatrix}, $$ where $$ A = \begin{pmatrix} 1 \\ 1 \\1 \end{pmatrix}, P = \begin{pmatrix} -2 & -1 & -1 \\ -1 & -2 & -1 \\ -1 & -1 & -2 \end{pmatrix}, I_3 = \begin{pmatrix} 1 & & \\ & 1 & \\ &&1 \end{pmatrix}, \text{ and } B = \begin{pmatrix} -2 \\ -2 \\ -2 \end{pmatrix}. $$ \begin{lemma} We have $\mathcal{Z} (N) = {\mathbb Q} \cdot \left(1,2,2,2,8,8,8,\dots, 2^{2n-3},2^{2n-3},2^{2n-3}, 2^{2n-1}\right)^t$. \end{lemma} \begin{proof} Suppose $v = (v_1,\dots,v_{3n-1})$ is such that $N v = 0$. Let us look at this as a system of $3n$ equations in $3n-1$ variables. As is well understood, each row gives one equation. Let us assume $v_1 = \alpha$. Now, we will go through the equations corresponding to the rows from top to bottom to deduce what $v_i$ has to be for each $i \geq 2$. The first three rows imply that $v_2 = v_3 = v_4 = 2 \alpha$. The fourth row implies that $v_5 = 2v_2 + v_3 + v_4 = 8\alpha$. Similarly, the fifth and sixth rows imply $v_6 = v_7 = 8 \alpha$. The process repeats until we get $v_{3n-4} = v_{3n-3} = v_{3n-2} = 2^{2n-3} \alpha$. The last three equations all imply that $v_{3n-1} = 2^{2n-1} \alpha$. In other words, we have $v = \alpha \cdot \left(1,2,2,2,8,8,8,\dots, 2^{2n-3},2^{2n-3},2^{2n-3}, 2^{2n-1}\right)^t$. \end{proof} Using a similar argument to the case of $M$, we get: \begin{corollary} The set $\mathcal{GI}(N)$ consists of only one vector. Further, we have $$ \mathcal{GI}(N) = \bigg\{\left(1,2,2,2,8,8,8,\dots, 2^{2n-3},2^{2n-3},2^{2n-3}, 2^{2n-1}\right)^t\bigg\}. $$ \end{corollary} \section{Invariants for torus actions} We will briefly recall invariant theory for torus actions. Let $T = ({\mathbb C}^*)^n$ be an $n$-dimensional (complex) torus. A group homomorphism $T \rightarrow {\mathbb C}^*$ is called a character of $T$.
Given two characters $\lambda,\mu:T \rightarrow {\mathbb C}^*$, we define the character $\lambda + \mu :T \rightarrow {\mathbb C}^*$ by $(\lambda + \mu) (t) = \lambda(t) \mu(t)$. With this operation, the set of characters of $T$ forms a group called the character group, which we denote by $\mathcal{X}(T)$. To each $\lambda = (\lambda_1,\dots,\lambda_n) \in {\mathbb Z}^n$, we can associate a character, also denoted $\lambda$ by abuse of notation. The character $\lambda: T \rightarrow {\mathbb C}^*$ is defined by $\lambda(t) = \prod_{i=1}^n t_i^{\lambda_i}$. This gives an isomorphism of groups ${\mathbb Z}^n \xrightarrow{\sim} \mathcal{X}(T)$. Characters of the torus are often called weights, and we will use this terminology as well. Let $V$ be a rational representation of $T$. We make the identification $\mathcal{X}(T) = {\mathbb Z}^n$. For a weight $\lambda \in {\mathbb Z}^n$, the weight space is $V_{\lambda} = \{v \in V\ |\ t \cdot v = \lambda(t) v \ \forall t \in T\}$. A vector $v \in V_{\lambda}$ is called a weight vector of weight $\lambda$. Any representation $V$ is a direct sum of its weight spaces, i.e., $V = \oplus_{\lambda \in {\mathbb Z}^n} V_{\lambda}$. In other words, we have a basis consisting of weight vectors. Let $\mathcal{E} = (e_1,\dots,e_m)$ be an ordered basis of $V$ consisting of weight vectors. Further, suppose each $e_i$ is a weight vector of weight $\lambda_i$. Let $x_1,\dots,x_m$ denote the coordinate functions with respect to the basis $e_1,\dots,e_m$. The following are well known: \begin{enumerate} \item A monomial $x^v = x_1^{v_1} x_2^{v_2} \dots x_m^{v_m}$ is an invariant monomial if and only if $ \sum_i v_i \lambda_i = 0$. \item The ring of invariants ${\mathbb C}[V]^T$ is linearly spanned by such invariant monomials. \end{enumerate} We will rewrite the above results in a slightly different language. We will first need a definition.
\begin{definition} \label{wt.matrix} Let $V$ be a representation of $T$ with an (ordered) weight basis $\mathcal{E} = (e_1,\dots,e_m)$. Further, suppose each $e_i$ is a weight vector of weight $\lambda_i$. Define $M_{\mathcal{E}}(V)$ to be the $n \times m$ matrix whose $i^{th}$ column is $\lambda_i$, i.e., $$ M_{\mathcal{E}}(V) := \begin{pmatrix} | & | & \dots & | \\ \lambda_1 & \lambda_2 & \dots & \lambda_m \\ | & | & \dots & | \end{pmatrix} $$ \end{definition} \begin{remark} For a different choice of ordered weight basis $\mathcal{E}'$, the matrix $M_{\mathcal{E}'}(V)$ is obtained by a permutation of the columns of $M_{\mathcal{E}}(V)$. This is because the formal sum of the columns (i.e., $\sum_i e^{\lambda_i}$), called the character of the representation $V$, is independent of the choice of weight basis. \end{remark} \begin{proposition} Let $V$ be a representation of $T$. Let $\mathcal{E} = (e_1,\dots,e_m)$ be a weight basis, and let $x_1,\dots,x_m$ be the corresponding coordinate functions. Then \begin{enumerate} \item For $v = (v_1,\dots,v_m) \in \mathcal{I}(M_\mathcal{E}(V))$, $x^v = x_1^{v_1} \dots x_m^{v_m}$ is an invariant monomial; \item The set $\{x^v\ |\ v \in \mathcal{I}(M_\mathcal{E}(V)) \}$ is a ${\mathbb C}$-linear spanning set of invariants; \item The set $\{x^v \ |\ v \in \mathcal{GI}(M_{\mathcal{E}}(V)) \}$ is a minimal set of generators for ${\mathbb C}[V]^T$. \end{enumerate} \end{proposition} \begin{proof} The first two statements are simply a rephrasing of the discussion before Definition~\ref{wt.matrix}. The last follows from the fact that for any matrix $A$, the set $\mathcal{GI}(A)$ is a minimal generating set for the monoid $\mathcal{I}(A)$. \end{proof} \begin{proposition} \label{tor.inv.M} Let $T$ act on $V = {\mathbb C}^{n+1}$ such that for some weight basis $\mathcal{E}$, we have $M_{\mathcal{E}}(V) = M$, the matrix in Section~\ref{prelim}. Then, we have $$ \beta_T(V) \geq \sigma_T(V) \geq \textstyle\frac{2}{3} (4^n - 1).
$$ \end{proposition} \begin{proof} Let $\mathcal{E} = (e_1,\dots,e_{n+1})$. Let $x_1,\dots,x_{n+1}$ be the coordinates with respect to this basis. From the above proposition, we know that $\{x^v\ |\ v \in \mathcal{GI}(M)\}$ is a minimal set of generators for the invariant ring. Corollary~\ref{gi.calc.m} tells us that $\mathcal{GI}(M)$ consists of precisely one element. The corresponding monomial is $f := x_1x_2^4x_3^{16} \dots x_n^{4^{n-1}} x_{n+1}^{(4^n-1)/3}$. To summarize, we have ${\mathbb C}[V]^T = {\mathbb C}[f]$. It is clear that $f$ has degree $1 + 4 + \dots + 4^{n-1} + \frac{4^n-1}{3} = \frac{4^n-1}{3} + \frac{4^n-1}{3} = \frac{2}{3} (4^n - 1)$. This already gives us that $\beta_T(V) \geq \frac{2}{3} (4^n-1)$. Now, consider $v = e_1 + \dots + e_{n+1}$. Then, we have $f(v) \neq 0$, so $v$ is not in the null cone. Since there are no non-constant homogeneous invariants of smaller degree, it follows that $\sigma_T(V) \geq \frac{2}{3} (4^n-1)$. \end{proof} A similar argument gives the following: \begin{proposition} \label{tor.inv.N} Let $T = ({\mathbb C}^*)^{3n}$ act on $V = {\mathbb C}^{3n-1}$ such that for some weight basis $\mathcal{E}$, we have $M_{\mathcal{E}}(V) = N$, the matrix in Section~\ref{prelim}. Then, we have $$ \beta_T(V) \geq \sigma_T(V) \geq 4^n-1. $$ \end{proposition} \begin{proposition} \label{tor.inv.compare} Suppose $V \subseteq W$ are two representations of $T$, then $$ \beta_T(V) \leq \beta_T(W) \text{ and } \sigma_T(V) \leq \sigma_T(W). $$ \end{proposition} \begin{proof} Representations of tori are completely reducible, so we have $W = V \oplus V'$, where $V'$ is a complementary $T$-subrepresentation. The inclusion $V \hookrightarrow W$ gives a surjection $\pi: {\mathbb C}[W] \rightarrow {\mathbb C}[V]$ that is clearly degree non-increasing. It is easy to check that $\pi$ descends to a map of invariant rings ${\mathbb C}[W]^T \rightarrow {\mathbb C}[V]^T$. We claim that this is a surjection.
Indeed, for $f \in {\mathbb C}[V]^T$, define $\widetilde{f}$ by $\widetilde{f}(v,v') = f(v)$ for all $(v,v') \in V \oplus V' = W$. Clearly $\widetilde{f} \in {\mathbb C}[W]^T$ and $\pi(\widetilde{f}) = f$. The fact that the surjection $\pi: {\mathbb C}[W]^T \twoheadrightarrow {\mathbb C}[V]^T$ is degree non-increasing implies both statements by Lemma~\ref{surj.bw.inv}. \end{proof} \section{Grosshans principle} First let us note that for any vector space $U$, the coordinate ring ${\mathbb C}[U] = S(U^*)$ is a polynomial ring, and hence we have a grading ${\mathbb C}[U] = \oplus_{d=0}^{\infty} {\mathbb C}[U]_d$. We will call this the polynomial grading. For any vector space $W$, and any ring $R$, we can define a grading on $R \otimes {\mathbb C}[W]$ by setting $(R \otimes {\mathbb C}[W])_d = R \otimes {\mathbb C}[W]_d$. We will call this the $W$-grading. We will first give an outline of the proof of the Grosshans principle, i.e., Theorem~\ref{gross.princ}. \begin{proof} [Proof of Theorem~\ref{gross.princ}] Consider the action of $G \times H$ on $G \times W$ by $$ (g',h') \cdot (g,w) = (g'g(h')^{-1}, h' \cdot w). $$ Let us compute the ring of invariants ${\mathbb C}[G \times W]^{G \times H}$. First, let us observe that the action of $G$ is trivial on $W$, so we have ${\mathbb C}[G \times W]^G = ({\mathbb C}[G] \otimes {\mathbb C}[W])^G = {\mathbb C}[G]^G \otimes {\mathbb C}[W] = {\mathbb C}[W]$. Hence, we have $$ {\mathbb C}[G \times W]^{G \times H} = ({\mathbb C}[G \times W]^G)^H = {\mathbb C}[W]^H. $$ Now, let us consider another action of $G \times H$ on $G \times W$ given by $$ (g',h') \cdot (g,w) = (g'g(h')^{-1}, g' \cdot w). $$ In this case, $H$ acts trivially on $W$, so we have ${\mathbb C}[G \times W]^H = ({\mathbb C}[G] \otimes {\mathbb C}[W])^H = {\mathbb C}[G]^H \otimes {\mathbb C}[W]$. Hence, we have $$ {\mathbb C}[G \times W]^{G \times H} = ({\mathbb C}[G \times W]^H)^G = ({\mathbb C}[G]^H \otimes {\mathbb C}[W])^G.
$$ We have both sides of the Grosshans principle, and now we need to relate them. Consider the map \begin{align*} \psi : G \times W \rightarrow G \times W\\ (g,w) \mapsto (g,g\cdot w) \end{align*} This gives a map on the coordinate rings, which we will also denote by $\psi: {\mathbb C}[G \times W] \isom {\mathbb C}[G \times W]$. There is a $W$-grading on ${\mathbb C}[G \times W]$ because ${\mathbb C}[G \times W] = {\mathbb C}[G] \otimes {\mathbb C}[W]$. Since $G$ acts by linear transformations, the map $\psi$ preserves the $W$-grading. Now, observe that $\psi$ takes the first action of $G \times H$ to the second action of $G \times H$. In particular, this means that the map $\psi$ restricts to an isomorphism of the invariant rings $$ \psi: ({\mathbb C}[G]^H \otimes {\mathbb C}[W])^G \rightarrow {\mathbb C}[W]^H. $$ Observe that ${\mathbb C}[W]^H$ is a $W$-graded subring of ${\mathbb C}[G \times W]$. Since $\psi$ is an isomorphism that preserves the $W$-grading, $({\mathbb C}[G]^H \otimes {\mathbb C}[W])^G$ is also a $W$-graded subring. \end{proof} Let us recall Matsushima's criterion, see \cite{Matsushima, Bial}. \begin{theorem} [Matsushima] \label{Matsu} Let $G$ be a reductive group, and let $H$ be a closed subgroup. Then $H$ is reductive if and only if $G/H$ is an affine variety. \end{theorem} An immediate consequence is the following: \begin{corollary} Let $G$ be a reductive group, and let $H$ be a closed reductive subgroup. Then we have an isomorphism $$ {\mathbb C}[G]^H \isom {\mathbb C}[G/H]. $$ \end{corollary} We need one more lemma before we prove Theorem~\ref{main}. \begin{lemma} \label{need.char0} Let $Y$ be a variety with an action of a reductive group $G$. Suppose $X$ is a closed $G$-stable subvariety of $Y$, then we have a surjection ${\mathbb C}[Y]^G \twoheadrightarrow {\mathbb C}[X]^G$.
\end{lemma} \begin{proof} Since $G$ is reductive, we have Reynolds operators $R_Y: {\mathbb C}[Y] \twoheadrightarrow {\mathbb C}[Y]^G$ and $R_X: {\mathbb C}[X] \twoheadrightarrow {\mathbb C}[X]^G$. Let $i: X \hookrightarrow Y$ denote the inclusion map, and let the pull back map on the coordinate rings be $i^*: {\mathbb C}[Y] \twoheadrightarrow {\mathbb C}[X]$. In the following diagram, the horizontal arrows are Reynolds operators and the vertical arrows are given by $i^*$. \begin{center} \begin{tikzcd} {\mathbb C}[Y] \arrow[r] \arrow[d] & {\mathbb C}[Y]^{G} \arrow[d] \\ {\mathbb C}[X] \arrow[r] & {\mathbb C}[X]^G \end{tikzcd} \end{center} The above diagram commutes. To see this, let us observe that ${\mathbb C}[Y]$ can be decomposed as a direct sum of irreducibles. So, it suffices to see that the diagram commutes for each isotypic component. The isotypic components for non-trivial representations get killed by the Reynolds operators, so both directions send them to zero. The Reynolds operators act as the identity on trivial representations. So, in either direction the isotypic component for the trivial representation is affected only by $i^*$. Hence, the map $i^*: {\mathbb C}[Y]^G \rightarrow {\mathbb C}[X]^G$ must be a surjection since the other three maps are surjections. \end{proof} \begin{proof} [Proof of Theorem~\ref{main}] By the above discussion, the Grosshans principle in this case reads as: $$ {\mathbb C}[G/H \times W]^G \isom ({\mathbb C}[G]^H \otimes {\mathbb C}[W])^G \isom {\mathbb C}[W]^H. $$ Observe that $G/H \cong G \cdot v$ as affine varieties \footnote{This follows essentially from Zariski's main theorem, see e.g.\ \cite[Theorem~25.1.2(iv)]{TY}}, and thus we have $$ G/H \times W \isom G \cdot v \times W \hookrightarrow V \oplus W. $$ This gives a surjection of invariant rings ${\mathbb C}[V \oplus W]^G \twoheadrightarrow {\mathbb C}[G/H \times W]^G$ by the above lemma.
Combining with the above discussion, we have: $$ \phi: {\mathbb C}[V \oplus W]^G \twoheadrightarrow {\mathbb C}[G/H \times W]^G \isom {\mathbb C}[W]^H $$ Recall the $W$-grading on ${\mathbb C}[G/H \times W]$. We also have a $W$-grading on ${\mathbb C}[V \oplus W]$. The surjection ${\mathbb C}[V \oplus W] \twoheadrightarrow {\mathbb C}[G/H \times W]$ is degree non-increasing in the $W$-grading, and hence so is $\phi$. The polynomial grading and $W$-grading are different on ${\mathbb C}[V \oplus W]$. If $f \in {\mathbb C}[V \oplus W]$ is homogeneous in degree $d$ in the polynomial grading, then $f$ need not be homogeneous in the $W$-grading. However, the homogeneous components of $f$ in the $W$-grading will all be in degrees $\leq d$. On the other hand, the $W$-grading and the polynomial grading on ${\mathbb C}[W]^H$ agree. In particular this means that the surjection $\phi:{\mathbb C}[V \oplus W]^G \twoheadrightarrow {\mathbb C}[W]^H$ is degree non-increasing even when we consider the polynomial grading on ${\mathbb C}[V \oplus W]^G$. Thus we can apply Lemma~\ref{surj.bw.inv} to deduce: $$ \beta_G(V \oplus W) \geq \beta_H(W) \text{ and } \sigma_G(V \oplus W) \geq \sigma_H(W). $$ \end{proof} \section{Root systems and Invariant forms} \label{rs and inv.forms} Let $G$ be a complex reductive group, and $K$ a maximal compact subgroup, also called a compact real form. Let $T_{{\mathbb R}}$ be a (real) maximal torus of $K$. Being a real torus means that $T_{\mathbb R} \cong S_1^n$ for some $n \in {\mathbb Z}_{\geq 0}$, where $S_1 = \{z \in {\mathbb C}\ |\ |z| = 1\}$. Let $T$ denote the complexification of $T_{\mathbb R}$. Then $T$ is a (complex) maximal torus for $G$. We denote the Lie algebra of $T$ by $\mathfrak{t}$. For any representation $V$ of $G$, we can view it as a representation of $T$, and hence we get a weight space decomposition $$ V = \bigoplus_{\lambda \in \mathcal{X}(T)} V_{\lambda}. 
$$ There is a natural way to view $\mathcal{X}(T)$ as a subset of $\mathfrak{t}^*$. Indeed, let $T = ({\mathbb C}^*)^n$, and consequently its Lie algebra $\mathfrak{t} = {\mathbb C}^n$, where the Lie bracket is identically zero. Let $v_1,\dots,v_n$ be the standard basis for ${\mathbb C}^n$, and consider the dual basis $e_1,\dots,e_n \in ({\mathbb C}^n)^* = \mathfrak{t}^*$. Then the correspondence $\lambda = (\lambda_1,\dots,\lambda_n) \longleftrightarrow \sum_{i=1}^n \lambda_i e_i$ allows us to view $\mathcal{X}(T)$ as a subset of $\mathfrak{t}^*$. This is indeed natural because for the character $\lambda: T = ({\mathbb C}^*)^n \rightarrow {\mathbb C}^*$ given by $(t_1,\dots,t_n) \mapsto \prod_i t_i^{\lambda_i}$, we get a map on the Lie algebras $\lambda: \mathfrak{t} = {\mathbb C}^n \rightarrow {\mathbb C}$ given by $(x_1,\dots,x_n) \mapsto \sum_i \lambda_ix_i$. An action of $T$ on $V$ gives an action of the Lie algebra $\mathfrak{t}$ on $V$. For $\lambda \in \mathfrak{t}^*$, we can define $V_\lambda = \{v \in V\ | \ X \cdot v = \lambda(X) v \ \forall X \in \mathfrak{t}\}$. Unless $\lambda \in \mathcal{X}(T)$, we must have $V_\lambda = 0$. Further, both definitions of $V_\lambda$ agree. Hence, one might also write the above decomposition as $$ V = \bigoplus_{\lambda \in \mathfrak{t}^*} V_{\lambda}. $$ \begin{remark} As discussed above, there are two ways to view the weight space decomposition. Both are well-known and standard, and we will freely switch between the two as needed. \end{remark} We make a convenient definition. \begin{definition} For any $v \in V$, the decomposition $v = \sum_{\lambda} v_{\lambda}$ with $v_{\lambda} \in V_{\lambda}$ is called the weight decomposition of $v$. The weight decomposition is unique. Further, the set $\{\lambda\ |\ v_{\lambda} \neq 0\}$ is called the support of $v$, and we denote it by $\Supp(v)$. \end{definition} The following two examples are meant for readers unfamiliar with root systems, and can be skipped by experts.
In Remark~\ref{B-defns}, we develop notation that will be helpful at various stages of the paper. \begin{example} \label{GLnstart} Suppose $G = \operatorname{GL}_n({\mathbb C})$. Consider the unitary group $\operatorname{U}_n({\mathbb C}) = \{X \in \operatorname{GL}_n({\mathbb C})\ |\ XX^\dag = I \}$, where $X^\dag$ denotes the conjugate transpose of $X$. Then $K = \operatorname{U}_n({\mathbb C})$ is a compact real form for $\operatorname{GL}_n({\mathbb C})$. Let ${\rm diag} (a_1,\dots,a_n)$ denote the diagonal $n \times n$ matrix with diagonal entries $a_1,\dots,a_n$. The real torus $T_{\mathbb R} = \{{\rm diag}(a_1,\dots,a_n)\ |\ a_i \in {\mathbb C}, |a_i| = 1 \}$ is a (real) maximal torus of $K$, and its complexification $T = \{ {\rm diag} (a_1,\dots,a_n)\ |\ a_i \in {\mathbb C} \}$ is a (complex) maximal torus, and the Lie algebra of $T$ is $\mathfrak{t} = \{{\rm diag} (x_1,\dots,x_n)\ |\ x_i \in {\mathbb C}\}$. Let $\widetilde{e}_i \in \mathcal{X}(T)$ be defined by $\widetilde{e}_i \cdot {\rm diag}(t_1,\dots,t_n) = t_i$. Then, for the action of $\operatorname{GL}_n$ on ${\mathbb C}^n$ by left multiplication, the standard basis vector $e_i$ is a weight vector with weight $\widetilde{e}_i$. The weights $\widetilde{e}_i$ form a basis of $\mathfrak{t}^*$. \end{example} \begin{example} \label{SLnstart} Suppose $G = \operatorname{SL}_n({\mathbb C})$. Then $K = \operatorname{SU}_n({\mathbb C}) = \{X \in \operatorname{U}_n({\mathbb C})\ |\ \det(X) = 1 \}$ is a compact real form, $T_{\mathbb R} = \{ {\rm diag}(a_1,\dots,a_n)\ |\ a_i \in {\mathbb C}, |a_i| = 1, \prod_i a_i = 1\}$ is a (real) maximal torus, its complexification $T = \{ {\rm diag}(a_1,\dots,a_n)\ |\ a_i \in {\mathbb C}, \prod_i a_i = 1\}$ is a (complex) maximal torus, and its Lie algebra is $\mathfrak{t} = \{{\rm diag} (x_1,\dots,x_n)\ |\ x_i \in {\mathbb C}, \sum_i x_i = 0\}$. The condition $\sum_i x_i = 0$ is really just asking for the trace of the matrix to be $0$.
Let $\widetilde{e}_i \in \mathcal{X}(T)$ be defined again by $\widetilde{e}_i \cdot {\rm diag} (t_1,\dots,t_n) = t_i$. Then, for the action of $\operatorname{SL}_n$ on ${\mathbb C}^n$ by left multiplication, the standard basis vector $e_i$ is a weight vector with weight $\widetilde{e}_i$. The weights $\widetilde{e}_i$ do not form a basis. They satisfy one relation, i.e., $\sum_i \widetilde{e}_i = 0 \in \mathfrak{t}^*$. \end{example} Let us reformulate the above examples. \begin{remark} \label{B-defns} Suppose $G = \operatorname{GL}(V)$ or $\operatorname{SL}(V)$, with ${\mathcal B}$ a basis for $V$. Then, using the basis, we can identify $\operatorname{GL}(V)$ (resp. $\operatorname{SL}(V)$) with $\operatorname{GL}_n$ (resp. $\operatorname{SL}_n$). With this identification, we can define $K_{\mathcal B},T_{{\mathbb R},{\mathcal B}}, T_{\mathcal B}, \mathfrak{t}_{\mathcal B}$ as in the above examples. Under these choices, ${\mathcal B}$ consists of weight vectors. Let us denote the weight of $b \in {\mathcal B}$ by $\widetilde{b}$. The set $\{ \widetilde{b}\ |\ b \in {\mathcal B}\}$ forms a basis for $\mathfrak{t}_{\mathcal B}^*$ if $G = \operatorname{GL}(V)$, and satisfies one relation, i.e., $\sum_{b \in {\mathcal B}} \widetilde{b} = 0$, if $G = \operatorname{SL}(V)$. When the group is not clear, we will write $K_{G,{\mathcal B}}, T_{G,{\mathcal B}}$ etc. \end{remark} \subsection{Invariant forms} \label{Section.inv.forms} For this section, let $G$ be a complex reductive group, $K$ a compact real form, $T_{\mathbb R}$ a (real) maximal torus in $K$ and $T$ the complexification of $T_{\mathbb R}$. \begin{proposition} \label{com.inv.exist} Let $V$ be any representation of $G$. Then there is a positive definite $K$-invariant hermitian form on $V$ for which the weight spaces are pairwise orthogonal. \end{proposition} \begin{proof} Let $\rho:G \rightarrow \operatorname{GL}(V)$ define the representation.
There is a positive definite hermitian form $\left<-,-\right>$ on $V$ such that $\rho(K) \subseteq U(V)$, where $U(V)$ denotes the unitary group with respect to $\left<-,-\right>$. This is the statement of Weyl's unitary trick. Now, $\rho(T_{\mathbb R}) \subseteq \rho(K) \subseteq U(V)$ is a subtorus of $U(V)$, and hence $\rho(T_{\mathbb R})$ is contained in a (real) maximal torus $H_{\mathbb R}$ of $U(V)$. All maximal tori of $U(V)$ are conjugate to each other. In other words, there is an orthonormal basis ${\mathcal B}$ of $V$ such that $K_{\operatorname{GL}(V),{\mathcal B}} = U(V)$ and $T_{\operatorname{GL}(V),{\mathbb R},{\mathcal B}} = H_{\mathbb R}$. The basis vectors $b \in {\mathcal B}$ are weight vectors for $H_{\mathbb R}$ and hence for $T_{\mathbb R}$. Since the basis vectors are orthonormal, the weight spaces must be orthogonal. \end{proof} \begin{definition} [$(K,T)$-compatible form] For a representation $V$ of $G$, we call a positive definite $K$-invariant hermitian form $(K,T)$-compatible if the weight spaces are orthogonal. \end{definition} The above proposition can now be reformulated as: \begin{corollary} Let $V$ be a representation of $G$. Then there exists a $(K,T)$-compatible form on $V$. \end{corollary} \begin{definition} Let $V$ be a vector space with basis ${\mathcal B}$. Let $W$ be a representation of $\operatorname{GL}(V)$. Then a ${\mathcal B}$-compatible form on $W$ is defined to be a $(K_{\operatorname{GL}(V),{\mathcal B}},T_{\operatorname{GL}(V),{\mathcal B}})$-compatible form. \end{definition} \begin{remark} If $W$ is a representation of $\operatorname{GL}(V)$, then it is also a representation of $\operatorname{SL}(V)$. A ${\mathcal B}$-compatible form on $W$ is also a $(K_{\operatorname{SL}(V),{\mathcal B}}, T_{\operatorname{SL}(V),{\mathcal B}})$-compatible form. The converse is not always true. \end{remark} \begin{definition} [direct sum form] Suppose $W_i$ is a vector space with a bilinear form $\left<-,-\right>_i$ for $i = 1,2$.
Then we define the direct sum form $\left<-,-\right>$ on $W_1 \oplus W_2$ by $$ \left<(a,b),(c,d)\right> = \left<a,c\right>_1 + \left<b,d \right>_2. $$ \end{definition} The following lemma is straightforward. \begin{lemma} \label{sum.form} Suppose $W_1$ and $W_2$ are representations of $G$ with positive definite $K$-invariant hermitian forms. Then the direct sum form gives a positive definite $K$-invariant hermitian form on $W_1 \oplus W_2$. Further, under this form, $W_1$ is orthogonal to $W_2$. \end{lemma} \subsection{Root systems} For a complex reductive group $G$, let $K$ be a compact real form, $T_{\mathbb R}$ a maximal torus in $K$ and $T$ its complexification. Let $\mathfrak{g}$ denote the Lie algebra of $G$. There is an exponential map $\exp: \g \rightarrow G$. There is a natural action of the group $G$ on $\g$, called the adjoint action. The adjoint action is given by $g \cdot X = \frac{d}{dt}\big|_{t=0}\, g \exp(tX) g^{-1}$. Since $T$ is a subgroup of $G$, we get an action of $T$ on $\g$. This gives a decomposition of $\g$ into weight spaces with respect to $T$, i.e., $$ \g = \mathfrak{t} \oplus\bigoplus_{\alpha\in\Phi} \g_\alpha. $$ Let us explain the terms. For each weight $\beta \in \mathfrak{t}^*$, the weight space $\g_\beta$ consists of all the weight vectors in $\g$ of weight $\beta$. More precisely, $\g_\beta = \{X \in \g\ |\ t \cdot X = \beta(t) X\ \forall t \in T\}$. Since $\g$ is finite dimensional, only finitely many of these weight spaces are non-zero. The weight space corresponding to the $0$ weight is just $\mathfrak{t}$, i.e., $\g_0 = \mathfrak{t}$. The set of non-zero weights $\beta$ for which the weight space $\g_\beta$ is non-zero forms a finite collection of vectors in $\mathcal{X}(T) \subset \mathfrak{t}^*$ called the root system, which we denote by $\Phi$. This explains the above decomposition. The above decomposition of $\g$ is often called the root space decomposition.
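For matrix groups, the adjoint action is conjugation, so the root space decomposition can be checked numerically in a small case; the following sketch (illustrative, with an arbitrarily chosen torus element) anticipates the $\operatorname{GL}_n$ example below:

```python
import numpy as np

# Numerical sanity check for g = gl_n, n = 3: a torus element
# t = diag(t_1, ..., t_n) acts on the elementary matrix E_{ij} by conjugation,
# t E_{ij} t^{-1} = (t_i / t_j) E_{ij}, so for i != j each E_{ij} spans a
# root space for the root e_i - e_j, and the E_{ii} span the 0-weight space t.
n = 3
entries = np.array([2.0, 3.0, 5.0])   # an arbitrary torus element
t = np.diag(entries)
t_inv = np.diag(1.0 / entries)

for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        # predicted eigenvalue of the adjoint action: t_i * t_j^{-1}
        assert np.allclose(t @ E @ t_inv, (entries[i] / entries[j]) * E)
```

Every pair $(i,j)$ passes, confirming that the $E_{ij}$ form a weight basis of $\mathfrak{gl}_n$ for the adjoint torus action.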
Root systems have a very rich structure, and have been explored extensively from algebraic, geometric and combinatorial points of view. We refer to \cite{Humphreys} for an algebraic introduction. \begin{example} \label{glnrs} Let $G = \operatorname{GL}_n$. We continue with the choices for $K,T,\mathfrak{t}$ etc.\ from Example~\ref{GLnstart}. The Lie algebra of $G = \operatorname{GL}_n$ is $\g = \operatorname{Mat}_{n,n}$. Let $E_{i,j}$ denote the elementary $n \times n$ matrix with a $1$ in the $(i,j)^{th}$ entry and $0$'s everywhere else. The set $\{E_{i,j}\}_{1 \leq i,j \leq n}$ is a weight basis for $\operatorname{Mat}_{n,n}$. The torus element $t = {\rm diag}(t_1,\dots,t_n)$ acts on $E_{i,j}$ by the formula $$ t \cdot E_{i,j} = t_it_j^{-1} E_{i,j}. $$ Recall that $e_1,\dots,e_n$ is a basis for $\mathfrak{t}^*$, where the formula $e_i (t) = t_i$ defines the weight $e_i \in \mathfrak{t}^*$. Hence, the weight $e_i-e_j$ is given by the formula $(e_i - e_j)(t) = t_it_j^{-1}$. In particular, $E_{i,j}$ is a weight vector for the weight $e_i-e_j$. Hence, the obvious decomposition $$ \g = \operatorname{Mat}_{n,n} = \mathfrak{t} \oplus \bigoplus_{i \neq j} {\mathbb C} E_{i,j} $$ is indeed the weight space decomposition. This means that the root system $\Phi$ consists of weights of the form $e_i - e_j$ for $i \neq j$, i.e., $\Phi = \{e_i - e_j \ |\ 1\leq i,j \leq n, i \neq j\}$. \end{example} \begin{example} \label{slnrs} Let $G = \operatorname{SL}_n$. We continue with the choices for $K,T,\mathfrak{t}$ etc.\ from Example~\ref{SLnstart}. The Lie algebra of $G = \operatorname{SL}_n$ is $\g = \{X \in \operatorname{Mat}_{n,n}\ |\ {\rm Tr}(X) = 0 \}$. Recall that in this case $e_1,\dots,e_n$ satisfy one linear relation, namely $\sum_i e_i = 0$. Again, the root system is $\Phi = \{e_i - e_j \ |\ 1\leq i,j \leq n, i \neq j\}$.
\end{example} \begin{remark} \label{b-rootsys} If we started with $G = \operatorname{GL}(V)$ or $\operatorname{SL}(V)$, and a basis ${\mathcal B}$ of $V$ and made all the standard choices as in Remark~\ref{B-defns}, the root system would be $\Phi = \{\widetilde{b} - \widetilde{b'}\ |\ b,b' \in {\mathcal B}, b \neq b'\}$. \end{remark} We make some useful definitions to aid in formulating later statements. \begin{definition} [Root adjacent] We say two weights $\lambda,\mu \in \mathfrak{t}^*$ are root adjacent if $\lambda - \mu \in \Phi$. \end{definition} \begin{definition} [\uncramped\ sets of weights] \label{uncramped} A subset of weights $I \subseteq \mathfrak{t}^*$ is called \uncramped\ if no pair of weights in $I$ is root adjacent. \end{definition} \subsection{Products of root systems} Suppose $G_1,\dots,G_d$ are connected reductive groups. Suppose for each $i$ that $K_i,T_i$ are choices of maximal compact subgroups and maximal tori. Let $\Phi_i$ be the root system for $G_i$ corresponding to these choices. Then $K:= K_1 \times K_2 \times \dots \times K_d$ (resp. $T:= T_1 \times \dots \times T_d$) is a maximal compact subgroup (resp. maximal torus) for $G:= G_1 \times G_2 \times \dots \times G_d$. Then $\mathfrak{t} = \mathfrak{t}_1 \times \mathfrak{t}_2 \times \dots \times \mathfrak{t}_d$ is the Lie algebra of $T$, where $\mathfrak{t}_i$ denotes the Lie algebra of $T_i$. Observe that $\Phi_i \subseteq \mathfrak{t}_i^* \subset \mathfrak{t}^*$. The following is straightforward. \begin{lemma} The set $\Phi_1 \cup \Phi_2 \cup \dots \cup \Phi_d$ is the root system for $G$. \end{lemma} Let $\underline{\lambda} = (\lambda_1,\lambda_2,\dots,\lambda_d)$ and $\underline{\mu} = (\mu_1,\dots,\mu_d)$ be two weights in $\mathfrak{t}^*$. \begin{corollary} Suppose $\lambda_i \neq \mu_i$ for at least two choices of $i \in \{1,2,\dots,d\}$. Then $\underline{\lambda}$ and $\underline{\mu}$ are not root adjacent. \end{corollary} Suppose $V_i$ is a representation of $G_i$ for each $i$.
Then $V = V_1 \otimes V_2 \otimes \dots \otimes V_d$ is a representation of $G$. Suppose $v_i,w_i \in V_i$ are weight vectors of weights $\lambda_i, \mu_i$. Let $v = v_1 \otimes v_2 \otimes \dots \otimes v_d$ and $w = w_1 \otimes \dots \otimes w_d$. Clearly, the weight of $v$ is $(\lambda_1,\lambda_2,\dots,\lambda_d) := \underline{\lambda}$, and the weight of $w$ is $(\mu_1,\dots,\mu_d):= \underline{\mu}$. Specializing the discussion to tensor actions, we get: \begin{corollary} \label{uncramped.tensor} Consider the tensor action of $\operatorname{SL}(V_1) \times \operatorname{SL}(V_2) \times \dots \times \operatorname{SL}(V_d)$ on $V_1 \otimes V_2 \otimes \dots \otimes V_d$. Suppose ${\mathcal B}_i$ is a basis for $V_i$ and we make all the standard choices for compact real form, tori etc.\ with respect to the basis ${\mathcal B}_i$. Let $v = b_1 \otimes b_2 \otimes \dots \otimes b_d$ and $w = b_1' \otimes b_2' \otimes \dots \otimes b_d'$ with $b_i,b_i' \in {\mathcal B}_i$ for all $i$. Suppose for at least two choices of $i \in \{1,2,\dots,d\}$, we have $b_i \neq b_i'$. Then $v$ and $w$ are weight vectors whose weights are not root adjacent. \end{corollary} \section{Moment map and a criterion for closed orbits} In order to use Theorem~\ref{main} effectively, we need to prove that an orbit is closed. A criterion for detecting whether an orbit is closed is interesting in itself, and a good criterion could have a range of applications in both pure and applied mathematics. We approach the problem via the moment map, which suffices for our purposes. It is an interesting problem to understand whether the criterion we propose (see Theorem~\ref{crit.co}) has a suitable analogue in positive characteristic. We first define the moment map. \begin{definition} Let $V$ be a representation of a connected complex reductive group $G$. Let $K$ be a maximal compact subgroup of $G$, and let $\g$ denote the Lie algebra of $G$.
Let $\left< -, - \right>$ be a $K$-invariant positive definite hermitian form. The moment map $\mu_G : V \rightarrow \mathfrak{g}^*$ is defined by $\mu_G(v)(X) = \left<Xv, v \right>$ for $v \in V$ and $X \in \g$. \end{definition} \begin{proposition} [Kempf-Ness] If $\mu_G(v) = 0$, then the orbit $G \cdot v$ is closed. \end{proposition} An even stronger statement holds, namely that every closed orbit contains a unique $K$-orbit on which the moment map vanishes. This is precisely why the GIT quotient $X\mathbin{\mkern-4mu/\mkern-6mu/\mkern-4mu} G$ agrees with the symplectic reduction $\mu^{-1}(0) / K$, a statement known as the Kempf-Ness theorem. We refrain from getting into this beautiful subject, and refer to \cite{KN, Mumford} for details. Now, we turn towards proving a criterion for the vanishing of the moment map in the language of root systems. \begin{proposition} Suppose $V$ is a representation of a connected complex reductive group $G$. Let $K$ be a compact real form, $T$ a maximal torus, and $\left<-,-\right>$ a $(K,T)$-compatible form. Let $v \in V$, and let $v = \sum_{\lambda \in \Supp(v)} v_{\lambda}$ be its weight decomposition. Suppose \begin{enumerate} \item $\Supp(v)$ is \uncramped\ (see Definition~\ref{uncramped}). \item $\sum_{\lambda \in \Supp(v)} ||v_{\lambda}||^2 \lambda = 0$. \end{enumerate} Then, $\mu_G(v) = 0$, and hence the orbit of $v$ is closed. \end{proposition} \begin{proof} Look at the root space decomposition $\g = \mathfrak{t} \oplus\bigoplus_{\alpha \in \Phi} \g_\alpha. $ We want to show that $\mu_G(v)(X) = 0$ for all $X \in \g$. Since $\mu_G(v)$ is a linear map from $\g$ to ${\mathbb C}$, it suffices to show $\mu_G(v)(X) = 0$ separately for $X \in \mathfrak{t}$ and $X \in \g_\alpha$ for each $\alpha \in \Phi$. Suppose $X \in \g_\alpha$. Then for each $\lambda \in \Supp(v)$, we have $X \cdot v_{\lambda} \in V_{\lambda + \alpha}$.
We know that $\lambda + \alpha \notin \Supp(v)$ because $\Supp(v)$ is \uncramped. Since the form is $(K,T)$-compatible, the weight spaces are orthogonal. So, $X \cdot v_{\lambda}$ is orthogonal to $v$. Hence $X \cdot v = \sum_{\lambda \in \Supp(v)} X \cdot v_{\lambda}$ is orthogonal to $v$, i.e., $\left< X \cdot v, v \right> = 0$. In other words, $\mu_G(v)(X) = 0$. Now, suppose $X \in \mathfrak{t}$. Then for each $\lambda \in \Supp(v)$, we have $X \cdot v_{\lambda} = \lambda(X) v_\lambda$. Now, observe that $$ \left< X\cdot v_{\lambda}, v \right> = \left< \lambda(X) v_\lambda, v \right> = \left<\lambda(X) v_\lambda, v_{\lambda} \right> = \lambda(X) ||v_{\lambda}||^2. $$ Thus, we have $$ \left< X \cdot v, v \right> = \sum_{\lambda \in \Supp(v)} \left< X\cdot v_{\lambda}, v \right> = \sum_{\lambda \in \Supp(v)} \lambda(X) ||v_{\lambda}||^2 = 0. $$ The last equality follows from Condition $(2)$. Hence, we have $\mu_G(v)(X) = 0$ for $X \in \mathfrak{t}$. Thus, we conclude $\mu_G(v) = 0$, and consequently that the orbit $G \cdot v$ is closed. \end{proof} \begin{theorem} \label{crit.co} Suppose $W$ is a representation of a connected complex reductive group $G$ and $w \in W$. Let $W = \bigoplus\limits_{j \in J} W_j$ be a decomposition into subrepresentations. Take a $(K,T)$-compatible form on each $W_j$, and let $\left<-,-\right>$ denote their direct sum form on $W$. Let $w = \sum_{j \in J} {w_j}$ with $w_j \in W_j$. Further, let $w_j = \sum_{\lambda \in \Supp(w_j)} w_{j,\lambda}$ be the weight decomposition of each $w_j$. Suppose \begin{enumerate} \item $\Supp(w_j)$ is \uncramped\ for all $j$; \item $\sum_{j} \sum_{\lambda \in \Supp(w_j)} ||w_{j,\lambda}||^2 \lambda = 0$. \end{enumerate} Then, the orbit of $w$ is closed. \end{theorem} \begin{proof} We want to show that $\mu_G(w)(X) = 0$ for all $X \in \g$. Again, it suffices to show it separately for $X \in \mathfrak{t}$ and $X \in \g_\alpha$ for each $\alpha \in \Phi$.
The argument for $\mathfrak{t}$ is the same as in the previous proposition. Now suppose $X \in \g_\alpha$. For all $j$, we have $\left<Xw_j,w_j\right> = 0$ by repeating the argument from the previous proposition, as $\Supp(w_j)$ is \uncramped. Since the summands $W_j$ are orthogonal by construction of the form, this shows that $\left<Xw,w\right> = 0$ as required. \end{proof} \section{Cubic forms} Let us set up the situation for this section. Let $V$ be a vector space of dimension $3n$, and let ${\mathcal B} = \{x_i,y_i,z_i\}_{1 \leq i \leq n}$ be a basis for $V$. Consider $W = S^3(V)^{\oplus 3}$, and let $$ w = \Big(\sum_i x_i^2 z_i, \sum_i y_i^2 z_i, \sum_i \alpha_i x_iy_iz_i\Big), $$ where the $\alpha_i$ are complex numbers with $|\alpha_i| = 1$ such that $\alpha_i \neq \pm \alpha_j$ for all $i \neq j$. There is a natural action of $\operatorname{SL}(V)$ on $S^3(V)$, and hence on $W$. We will write $w = (w_1,w_2,w_3)$ where $w_1 = \sum_i x_i^2 z_i$, $w_2 = \sum_i y_i^2 z_i$ and $w_3 = \sum_i \alpha_i x_iy_iz_i.$ \begin{proposition} \label{closed w} The orbit $\operatorname{SL}(V) \cdot w$ is closed. \end{proposition} Let us define a map $\phi: ({\mathbb C}^*)^n \rightarrow \operatorname{SL}(V)$. To define the map, it suffices to specify how $\phi(t)$, for $t = (t_1,\dots,t_n)$, acts on the basis $\{x_i,y_i,z_i\}_{1 \leq i \leq n}$. Define $\phi$ by $\phi(t) \cdot x_i = t_i x_i$, $\phi(t) \cdot y_i = t_i y_i$ and $\phi(t) \cdot z_i = t_i^{-2} z_i$. Let $H := \phi(({\mathbb C}^*)^n)$. \begin{proposition} \label{stab.compute} We have ${\rm Stab}_{\operatorname{SL}(V)}(w) = H$. \end{proposition} It is easy to see that $H$ is a closed subgroup of $\operatorname{SL}(V)$, and it is reductive because it is a torus. It is indeed necessary that the stabilizer be closed and reductive in order to apply Theorem~\ref{main}, as we will do in the proof of Theorem~\ref{lbsln}.
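Both propositions above lend themselves to a quick machine sanity check for small $n$. The sketch below (an illustration, assuming Python with sympy; the unit-norm $\alpha_i$ play no role in invariance under $H$, so they are replaced by symbolic stand-ins $a_i$) verifies the easy inclusion $H \subseteq {\rm Stab}_{\operatorname{SL}(V)}(w)$, and also that the weight sum appearing in condition $(2)$ of Theorem~\ref{crit.co} is a multiple of $\sum_{b \in {\mathcal B}} \widetilde{b}$, which vanishes for $\operatorname{SL}(V)$; here $M^2$, $N^2$ stand for the squared norms of monomials of types $(2,1)$ and $(1,1,1)$.

```python
import sympy as sp

n = 3
x = sp.symbols(f"x1:{n+1}")
y = sp.symbols(f"y1:{n+1}")
z = sp.symbols(f"z1:{n+1}")
t = sp.symbols(f"t1:{n+1}", nonzero=True)
a = sp.symbols(f"a1:{n+1}")          # stand-ins for the unit-norm alpha_i

w1 = sum(xi**2 * zi for xi, zi in zip(x, z))
w2 = sum(yi**2 * zi for yi, zi in zip(y, z))
w3 = sum(ai * xi * yi * zi for ai, xi, yi, zi in zip(a, x, y, z))

# phi(t): x_i -> t_i x_i, y_i -> t_i y_i, z_i -> t_i^{-2} z_i
sub = {xi: ti * xi for xi, ti in zip(x, t)}
sub.update({yi: ti * yi for yi, ti in zip(y, t)})
sub.update({zi: zi / ti**2 for zi, ti in zip(z, t)})
for wj in (w1, w2, w3):
    assert sp.expand(wj.subs(sub, simultaneous=True) - wj) == 0

# Weight sum of condition (2): record exponent vectors on the basis
# B = {x_i, y_i, z_i} (slots 0..n-1: x, n..2n-1: y, 2n..3n-1: z).
M2, N2 = sp.symbols("M2 N2", positive=True)   # squared norms by type
S = [sp.Integer(0)] * (3 * n)
for i in range(n):
    S[i] += 2 * M2;  S[2 * n + i] += M2                 # x_i^2 z_i
    S[n + i] += 2 * M2;  S[2 * n + i] += M2             # y_i^2 z_i
    S[i] += N2;  S[n + i] += N2;  S[2 * n + i] += N2    # x_i y_i z_i
# All coordinates equal, so S is a multiple of sum_b b~, which is 0 for SL(V).
assert all(c == S[0] for c in S)
```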
We postpone the proofs of Propositions~\ref{closed w} and \ref{stab.compute} and first complete the proof of Theorem~\ref{lbsln}. Consider the $(n+1)$-dimensional subspace $U \subset S^3(V)$ spanned by $\{x_1z_2^2, x_2z_3^2,\dots,x_nz_1^2,x_1^3\}$. This is an invariant subspace under the action of the subgroup $H \subset \operatorname{SL}(V)$ defined above. \begin{lemma} We have $\beta_H(U) \geq \sigma_H(U) \geq \textstyle\frac{2}{3}(4^n - 1)$. \end{lemma} \begin{proof} The basis $\mathcal{E} = (x_1z_2^2, x_2z_3^2,\dots,x_nz_1^2,x_1^3)$ is a weight basis, and $M_{\mathcal{E}}(U) = M$, the matrix in Section~\ref{prelim}. The lemma now follows from Proposition~\ref{tor.inv.M}. \end{proof} \begin{corollary} We have $\beta_H(S^3(V)) \geq \sigma_H(S^3(V)) \geq \textstyle\frac{2}{3}(4^n - 1)$. \end{corollary} \begin{proof} This follows from Proposition~\ref{tor.inv.compare} since $U$ is a subrepresentation of $S^3(V)$ for the action of $H$. \end{proof} \begin{proof} [Proof of Theorem~\ref{lbsln}] Let $G = \operatorname{SL}(V)$. Recall $w \in S^3(V)^{\oplus 3}$ from above, with ${\rm Stab}_G(w) = H$. Thus, by Theorem~\ref{main} and the above corollary, we have $$ \beta_G(S^3(V)^{\oplus 3} \oplus S^3(V)) \geq \sigma_G(S^3(V)^{\oplus 3} \oplus S^3(V)) \geq \sigma_H(S^3(V)) \geq \textstyle\frac{2}{3}(4^n - 1). $$ \end{proof} \begin{remark} If instead of $w$, one takes $(\sum_i x_i^2 z_i, \sum_i y_i^2 z_i) \in S^3(V)^{\oplus 2}$, then this also has a closed orbit. However, its stabilizer is not the torus $H$ (defined above), but rather a finite extension of it. With some additional work, this can be used to show exponential lower bounds for $S^3(V)^{\oplus 3}$ (instead of $S^3(V)^{\oplus 4}$ as stated in Theorem~\ref{lbsln}). However, we feel that this modest improvement does not warrant the additional discussion on how to deal with finite extensions of tori, so we omit it.
\end{remark} \subsection{Closedness of orbit} The strategy is to apply Theorem~\ref{crit.co}, but before proceeding to check the hypothesis, we need a little groundwork. \begin{definition} [type of a monomial] Every monomial in the basis $\mathcal{B}$ can be written as $b_1^{a_1}b_2^{a_2} \dots b_k^{a_k}$, where the $b_i$ represent distinct elements in the basis $\mathcal{B}$, and $a_1 \geq a_2 \geq \dots \geq a_k > 0$. We define its type to be $(a_1,\dots,a_k)$. \end{definition} \begin{example} The types of $x_i^2z_i$ and $y_j^2z_j$ are $(2,1)$, whereas the type of $x_iy_iz_i$ is $(1,1,1)$. \end{example} \begin{lemma} There exists a ${\mathcal B}$-compatible form on $S^3(V)$. For any ${\mathcal B}$-compatible form on $S^3(V)$, all the monomials of a fixed type have the same norm. \end{lemma} \begin{proof} Recall that a ${\mathcal B}$-compatible form is a $(K_{\operatorname{GL}(V),{\mathcal B}},T_{\operatorname{GL}(V),{\mathcal B}})$-compatible form. Since we have an action of $\operatorname{GL}(V)$ on $S^3(V)$, there is a ${\mathcal B}$-compatible form on $S^3(V)$ by Proposition~\ref{com.inv.exist}. A permutation of the basis vectors in ${\mathcal B}$ is an element of $K_{\operatorname{GL}(V),{\mathcal B}}$, and hence such permutations preserve norm. To conclude, notice that the monomials of a fixed type are related by such permutations. \end{proof} We can now prove Proposition~\ref{closed w}. \begin{proof} [Proof of Proposition~\ref{closed w}] We want to show that $w$ satisfies the hypothesis of Theorem~\ref{crit.co}. Recall that $w = (w_1,w_2,w_3)$ where $w_1 = \sum_i x_i^2 z_i$, $w_2 = \sum_i y_i^2 z_i$ and $w_3 = \sum_i \alpha_i x_iy_iz_i.$ Note that these are the weight space decompositions $w_j = \sum_{\lambda \in \Supp (w_j)} w_{j,\lambda}$. To check condition $(1)$ of Theorem~\ref{crit.co}, we need to check that each $\Supp (w_j)$ is \uncramped.
But observe from the weight decompositions that $\Supp (w_1) = \{2\widetilde{x}_i + \widetilde{z}_i\}_{1 \leq i \leq n}$, $\Supp (w_2) = \{2\widetilde{y}_i + \widetilde{z}_i\}_{1 \leq i \leq n}$ and $\Supp (w_3) = \{\widetilde{x}_i + \widetilde{y}_i + \widetilde{z}_i\}_{1 \leq i \leq n}$. It is clear that these are \uncramped\ from the description of the root system $\Phi$ in Remark~\ref{b-rootsys}. To check condition~$(2)$, we need an explicit computation, which requires the norms of $x_i^2z_i$, $y_i^2z_i$ and $x_iy_iz_i$. From the above lemma, we know that there is a ${\mathcal B}$-compatible form on each copy of $S^3(V)$. We use the same form on each copy and then take the direct sum form on $S^3(V)^{\oplus 3}$. The above lemma tells us that all monomials of a fixed type have the same norm. Let us suppose that the monomials of type $(2,1)$ (e.g., $x_i^2z_i$ and $y_j^2z_j$) have norm $M$ and monomials of type $(1,1,1)$ (e.g., $x_iy_iz_i$) have norm $N$. We compute $\sum_{\lambda \in \Supp (w_1)} ||w_{1,\lambda}||^2 \lambda$. \begin{align*} \sum_{\lambda \in \Supp (w_1)} ||w_{1,\lambda}||^2 \lambda & = \sum_{i=1}^n ||x_i^2z_i||^2 (2\widetilde{x}_i + \widetilde{z}_i) \\ & = \sum_i M^2 (2\widetilde{x}_i + \widetilde{z}_i). \end{align*} Similarly, we have $$ \sum_{\lambda \in \Supp (w_2)} ||w_{2,\lambda}||^2 \lambda = \sum_{i} M^2 (2\widetilde{y}_i + \widetilde{z}_i), $$ and \begin{align*} \sum_{\lambda \in \Supp (w_3)} ||w_{3,\lambda}||^2 \lambda &= \sum_i ||x_iy_iz_i||^2 (\widetilde{x}_i + \widetilde{y}_i + \widetilde{z}_i) \\ & = N^2 \sum_i (\widetilde{x}_i + \widetilde{y}_i + \widetilde{z}_i). \end{align*} Hence, we have $$ \sum_{j =1}^3\sum_{\lambda \in \Supp (w_j)} ||w_{j,\lambda}||^2 \lambda = (2M^2 + N^2) \sum_i (\widetilde{x}_i + \widetilde{y}_i + \widetilde{z}_i) =(2M^2 + N^2) \sum_{b \in {\mathcal B}} \widetilde{b} = 0.
$$ The last equality follows from Remark~\ref{B-defns}, as we are working with $\operatorname{SL}(V)$. Hence, $w$ satisfies the hypothesis of Theorem~\ref{crit.co}, so the orbit of $w$ is closed. \end{proof} \subsection{Computation of stabilizer} Now, we turn towards computing the stabilizer. We will proceed in steps. \begin{lemma} Suppose $g \in \operatorname{SL}(V)$ is such that $g \cdot w_1 = w_1$. Then $g \cdot x_i = c_i x_{\sigma(i)}$ for some permutation $\sigma$ of $\{1,2,\dots,n\}$ and non-zero scalars $c_i$. \end{lemma} \begin{proof} The space of partial derivatives of $w_1$ is $\left< x_1^2,\dots,x_n^2, x_1z_1, \dots ,x_nz_n\right>$. This must be preserved by $g$. The only perfect squares of linear forms in the space of partial derivatives are those of the form $d_ix_i^2$ for non-zero scalars $d_i$. Thus the image of $x_i$ under the action of $g$ must be a scalar multiple of $x_j$ for some $j$. Since $g$ is invertible, the lemma follows. \end{proof} \begin{corollary} \label{x2z} Suppose $g \in {\rm Stab}_{\operatorname{SL}(V)} (w_1)$. Then for some permutation $\sigma$, we must have $g \cdot x_i = c_ix_{\sigma(i)}$ and $g \cdot z_i = c_i^{-2}z_{\sigma(i)}$ for some non-zero scalars $c_i$. \end{corollary} \begin{proof} From the above lemma, we already know that $g \cdot x_i = c_i x_{\sigma(i)}$ for some permutation $\sigma$ and scalars $c_i$. Hence, we have $$ \sum_i (c_ix_{\sigma(i)})^2 (g \cdot z_i) = g \cdot w_1 = w_1 = \sum_i x_i^2 z_i = \sum_i x_{\sigma(i)}^2 z_{\sigma(i)}. $$ Thus, we have $$ \sum_i x_{\sigma(i)}^2 (c_i^2 g \cdot z_i - z_{\sigma(i)}) = 0. $$ Observe that monomials of degree $3$ in $\{x_i,y_i,z_i\}_{1 \leq i \leq n}$ are a basis for $S^3(V)$. Now, for $i \neq j$ and any $p,q \in V$, $x_i^2 p$ and $x_j^2 q$ do not have any monomials in common. Hence, we must have $x_{\sigma(i)}^2 (c_i^2 g \cdot z_i - z_{\sigma(i)}) = 0$ for all $i$. Hence, for all $i$, we must have $c_i^2 g \cdot z_i - z_{\sigma(i)} = 0$, or equivalently $g \cdot z_i = c_i^{-2} z_{\sigma(i)}$ as required.
\end{proof} We can do a similar analysis for $w_2$, and we get: \begin{lemma} \label{y2z} Suppose $g \in {\rm Stab}_{\operatorname{SL}(V)} (w_2)$. Then for some permutation $\pi$ and non-zero scalars $d_i$, we have $g \cdot y_i = d_iy_{\pi(i)}$ and $g \cdot z_i = d_i^{-2}z_{\pi(i)}$. \end{lemma} \begin{corollary} Suppose $g \in {\rm Stab}_{\operatorname{SL}(V)} (w_1,w_2)$. Then for some permutation $\sigma$ and scalars $c_i$, we have $g(x_i) = c_i x_{\sigma(i)}$, $g(y_i) = \pm c_i y_{\sigma(i)}$ and $g(z_i) = c_i^{-2} z_{\sigma(i)}$. \end{corollary} \begin{proof} Suppose $g \in {\rm Stab}_{\operatorname{SL}(V)} (w_1,w_2)$. Then from Corollary~\ref{x2z}, we know that there is a permutation $\sigma$ and scalars $c_i$ such that $g(x_i) = c_i x_{\sigma(i)}$ and $g(z_i) = c_i^{-2}z_{\sigma(i)}$. By Lemma~\ref{y2z}, there is a permutation $\pi$ and scalars $d_i$ such that $g(y_i) = d_i y_{\pi(i)}$ and $g(z_i) = d_i^{-2}z_{\pi(i)}$. Thus, we have $g\cdot z_i = c_i^{-2} z_{\sigma(i)} = d_i^{-2} z_{\pi(i)}$ for all $i$. Hence, we must have $\sigma = \pi$ and $d_i = \pm c_i$. \end{proof} \begin{proof} [Proof of Proposition~\ref{stab.compute}] Suppose $g \in {\rm Stab}_{\operatorname{SL}(V)}(w_1,w_2,w_3)$. Then since $g \in {\rm Stab}_{\operatorname{SL}(V)}(w_1,w_2)$, we know that there is a permutation $\sigma$ and scalars $c_i$ such that $g(x_i) = c_i x_{\sigma(i)}$, $g(y_i) = \pm c_i y_{\sigma(i)}$ and $g(z_i) = c_i^{-2} z_{\sigma(i)}$. In particular, this means that $g \cdot x_iy_iz_i = \pm x_{\sigma(i)}y_{\sigma(i)}z_{\sigma(i)}$. But now $g$ also fixes $w_3 = \sum_i \alpha_i x_iy_iz_i$. However, we have $$ \sum_i \pm \alpha_i x_{\sigma(i)}y_{\sigma(i)}z_{\sigma(i)} = g \cdot w_3 = w_3 = \sum_i \alpha_i x_iy_iz_i. $$ This means that $\pm\alpha_i = \alpha_{\sigma(i)}$. But recall that the choice of $\alpha_i$'s was such that $\alpha_i \neq \pm \alpha_j$ for all $i \neq j$. This means that $\sigma$ is the identity permutation, and further that we must have $g \cdot x_iy_iz_i = x_iy_iz_i$. Since $g \cdot x_i = c_ix_i$ and $g \cdot z_i = c_i^{-2}z_i$, this implies $g \cdot y_i = c_iy_i$.
Thus we must have $g \cdot x_i = c_ix_i$, $g \cdot y_i = c_iy_i$ and $g\cdot z_i = c_i^{-2}z_i$. In other words, $g \in H$. Conversely, it is easy to observe that $H \subseteq {\rm Stab}_{\operatorname{SL}(V)}(w)$. \end{proof} \section{Tensor actions} Let $U,V,W$ be $3n$-dimensional vector spaces with bases ${\mathcal B}_u = \{u_1^k,u_2^k,u_3^k\}_{1 \leq k \leq n}$, ${\mathcal B}_v = \{v_1^k,v_2^k,v_3^k\}_{1 \leq k \leq n}$ and ${\mathcal B}_w = \{w_1^k,w_2^k,w_3^k\}_{1 \leq k \leq n}$ respectively. Let \begin{align*} F_1 &= \sum_{k=1}^n u_1^k v_2^k w_3^k + u_2^k v_3^k w_1^k + u_3^k v_1^k w_2^k \\ G_1 &= \sum_{k=1}^n \alpha_k u_1^k v_2^k w_3^k + \beta_k u_2^k v_3^k w_1^k + \gamma_k u_3^k v_1^k w_2^k \\ F_2 &= \sum_{k=1}^n u_2^k v_1^k w_3^k + u_1^k v_3^k w_2^k + u_3^k v_2^k w_1^k \\ G_2 &= \sum_{k=1}^n \alpha_k u_2^k v_1^k w_3^k + \beta_k u_1^k v_3^k w_2^k + \gamma_k u_3^k v_2^k w_1^k \\ F_3 &= \sum_{k=1}^n u_1^k v_1^k w_3^k + u_2^k v_3^k w_2^k + u_3^k v_1^k w_1^k \\ G_3 &= \sum_{k=1}^n \alpha_k u_1^k v_1^k w_3^k + \beta_k u_2^k v_3^k w_2^k + \gamma_k u_3^k v_1^k w_1^k \\ F_4 &= \sum_{k=1}^n u_2^k v_2^k w_3^k + u_1^k v_3^k w_1^k + u_3^k v_2^k w_2^k \\ G_4 &= \sum_{k=1}^n \alpha_k u_2^k v_2^k w_3^k + \beta_k u_1^k v_3^k w_1^k + \gamma_k u_3^k v_2^k w_2^k, \end{align*} where the $\alpha_k,\beta_k,\gamma_k$ are pairwise distinct scalars in ${\mathbb C}$ of unit norm. Consider $$ \underline{F} = (F_1,G_1,F_2,G_2,F_3,G_3,F_4,G_4) \in (U \otimes V \otimes W)^{\oplus 8}. $$ The approach will be the same as for cubic forms. First, we show: \begin{proposition} \label{orbit.closed.tensor} The orbit of $\underline{F}$ for the action of $\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$ is closed. \end{proposition} Next, we compute the stabilizer. Let us define a map $\phi_U: (({\mathbb C}^*)^3)^n \rightarrow \operatorname{GL}(U)$. To define such a map, it suffices to understand the action of $t = (p_1,q_1,r_1,p_2,q_2,r_2,\dots,p_n,q_n,r_n)$ on each basis vector $b \in {\mathcal B}_u$.
The map $\phi_U$ is defined by $$ \phi_U(t) u_1^k = p_k u_1^k, \phi_U(t) u_2^k = p_k u_2^k \text{ and } \phi_U(t) u_3^k = (q_kr_k)^{-1} u_3^k. $$ Similarly define $\phi_V: (({\mathbb C}^*)^3)^n \rightarrow \operatorname{GL}(V)$ by $$ \phi_V(t) v_1^k = q_k v_1^k, \phi_V(t) v_2^k = q_k v_2^k \text{ and } \phi_V(t) v_3^k = (p_kr_k)^{-1} v_3^k. $$ Finally, define $\phi_W: (({\mathbb C}^*)^3)^n \rightarrow \operatorname{GL}(W)$ by $$ \phi_W(t) w_1^k = r_k w_1^k, \phi_W(t) w_2^k = r_k w_2^k \text{ and } \phi_W(t) w_3^k = (p_kq_k)^{-1} w_3^k. $$ Let $\phi = (\phi_U,\phi_V,\phi_W): (({\mathbb C}^*)^3)^n \rightarrow \operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$. Let $H$ denote the image of $\phi$. Then, we have: \begin{proposition} \label{tensor-stab-compute} We have ${\rm Stab}_{\operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)} (\underline{F}) = H$. \end{proposition} Again, it is easy to check that $H$ is a closed subgroup of $\operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$. It is also reductive because it is a torus. The reader perhaps has noticed that we have computed the stabilizer in $\operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$ rather than the stabilizer in $\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$. There are several ways to fix this, and we indicate one of them. Consider the group $$ J := \{ (g_1,g_2,g_3) \in \operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W) \ |\ \det(g_1) \det(g_2) \det(g_3) = 1\}. $$ Indeed, the first thing to observe is that $H \subset J$. Now, we claim that the orbits of $J$ and the orbits of $\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$ in $U \otimes V \otimes W$ are the same. Let $h = (g_1,g_2,g_3) \in J$. 
Since $\det(g_1)\det(g_2)\det(g_3) = 1$, we can choose $c_1,c_2,c_3 \in {\mathbb C}^*$ with $c_1c_2c_3 = 1$ such that $\det(c_ig_i) = 1$. Thus, we have $h \cdot v = (c_1g_1,c_2g_2,c_3g_3) \cdot v$ for any $v \in U \otimes V \otimes W$. But $(c_1g_1,c_2g_2,c_3g_3) \in \operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$, so this means that the $J$-orbit of $v$ is contained in the $\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$-orbit of $v$. On the other hand, $J \supseteq \operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$, so the orbits must be the same. The same argument works for $(U \otimes V \otimes W)^{\oplus m}$. Further, observe that the quotient $(\operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W))/J \cong {\mathbb C}^*$, which is affine. Since $J$ is clearly a closed subgroup of $\operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$, by Matsushima's criterion (see Theorem~\ref{Matsu}) we conclude that $J$ is reductive. We summarize the above discussion as follows: \begin{proposition} The $J$-orbit of $\underline{F}$ is closed. Further, the stabilizer of $\underline{F}$ in $J$ is $H$. Moreover, $J$ is a reductive group. \end{proposition} Further, since the orbits of $J$ are the same as the orbits of $\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$, the invariant rings are equal, i.e.: \begin{corollary} \label{J.inv.equal} We have ${\mathbb C}[U \otimes V \otimes W]^{\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)} = {\mathbb C}[U \otimes V \otimes W]^J.$ \end{corollary} Consider the action of $H$ on $U \otimes V \otimes W$. Let $L$ denote the subspace spanned by $\mathcal{E} = \{u_1^1v_1^1w_1^1\} \cup \{u_1^{k+1} v_3^k w_3^k, u_3^kv_1^{k+1}w_3^k, u_1^kv_1^kw_3^{k+1}\}_{1 \leq k \leq n-1} \cup \{u_3^nv_3^nw_3^n \}$.
Now, it is clear that for the action of $H$ on $L$, the set $\mathcal{E}$ is a weight basis, and further one can check that $M_{\mathcal{E}}(L) = N$, the matrix in Section~\ref{prelim}. Hence, from Proposition~\ref{tor.inv.N}, we obtain: \begin{corollary} We have $$ \beta_H(U \otimes V \otimes W) \geq \sigma_H(U \otimes V \otimes W) \geq \sigma_H(L) \geq 4^n-1. $$ \end{corollary} \begin{proof} [Proof of Theorem~\ref{tensor-lbs}] Proceed in exactly the same fashion as the proof of Theorem~\ref{lbsln} to obtain the required lower bounds on $\sigma_J((U \otimes V \otimes W)^{\oplus 9})$ and $\beta_J((U\otimes V \otimes W)^{\oplus 9})$. Then, using Corollary~\ref{J.inv.equal}, we conclude that the same lower bounds hold for $\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$. \end{proof} \subsection{Closedness of orbit} This section is devoted to the proof of Proposition~\ref{orbit.closed.tensor}. The strategy will again be to use Theorem~\ref{crit.co}. We have the bases ${\mathcal B}_u,{\mathcal B}_v$ and ${\mathcal B}_w$ for $U,V,W$ respectively. For $\operatorname{SL}(U) \times \operatorname{SL}(V) \times \operatorname{SL}(W)$, we choose $K:= K_{{\mathcal B}_u} \times K_{{\mathcal B}_v} \times K_{{\mathcal B}_w}$ as a compact real form and $T = T_{{\mathcal B}_u} \times T_{{\mathcal B}_v} \times T_{{\mathcal B}_w}$ as a maximal torus. Observe that ${\mathcal B} = \{b_u \otimes b_v \otimes b_w \ | \ b_u \in {\mathcal B}_u, b_v \in {\mathcal B}_v, b_w \in {\mathcal B}_w\}$ is a basis for $U \otimes V \otimes W$. Consider the hermitian form on $U \otimes V \otimes W$ given by declaring ${\mathcal B}$ to be an orthonormal basis. It is easy to check that this form is $(K,T)$-compatible. \begin{proof} [Proof of Proposition~\ref{orbit.closed.tensor}] We use the form described above for each copy of $U \otimes V \otimes W$ and take the direct sum form.
In order to use Theorem~\ref{crit.co}, the first step is to check that the supports $\Supp(F_d)$ and $\Supp(G_d)$ are \uncramped. Let us only indicate the proof for $F_1$, as the other cases are similar. The defining decomposition of $F_1$ is its weight decomposition. It has three types of terms: $u_1^k v_2^k w_3^k$, $u_2^k v_3^k w_1^k$, and $u_3^k v_1^k w_2^k$. We want to show that the support is \uncramped, so for any two such terms, we need to show that their weights are not root adjacent. But this follows easily from Corollary~\ref{uncramped.tensor}. Let us now check the second condition in Theorem~\ref{crit.co}, i.e., we want: $$ \sum_{d =1}^4 \left(\sum_{\lambda \in \Supp(F_d)} ||(F_d)_\lambda||^2 \lambda + \sum_{\mu \in \Supp(G_d)} ||(G_d)_{\mu}||^2 \mu\right) = 0. $$ The defining decompositions of $F_d$ and $G_d$ are weight decompositions. All the coefficients appearing in $F_d$ and $G_d$ have absolute value $1$. Further, observe that $\Supp(F_d) = \Supp(G_d)$. Thus we have \begin{align*} \sum_d \left(\sum_{\lambda \in \Supp(F_d)} ||(F_d)_\lambda||^2 \lambda + \sum_{\mu \in \Supp(G_d)} ||(G_d)_{\mu}||^2 \mu\right) &= \sum_d \left(\sum_{\lambda \in \Supp(F_d)} \lambda + \sum_{\mu \in \Supp(G_d)} \mu\right) \\ & = 2 \sum_d \left(\sum_{\lambda \in \Supp(F_d)} \lambda\right). \end{align*} Recall that $\widetilde{u}_i^k$ denotes the weight of $u_i^k$ for $\operatorname{SL}(U)$, and that $\sum_{i,k} \widetilde{u}_i^k = 0$ by Remark~\ref{B-defns}. Observe that each $u_i^k$ appears a total of $4$ times in all the terms of $F_1,F_2,F_3,F_4$, and similarly for $v_i^k$ and $w_i^k$. This means that \begin{align*} \sum_d \left(\sum_{\lambda \in \Supp(F_d)} \lambda\right) &= 4\Big(\sum_{i,k} \widetilde{u}_i^k, \sum_{i,k} \widetilde{v}_i^k, \sum_{i,k} \widetilde{w}_i^k\Big) \\ & = 0. \end{align*} Hence, the second condition of Theorem~\ref{crit.co} is satisfied for $\underline{F}$. This concludes the proof.
\end{proof} \subsection{Computation of stabilizer} In spirit, the computation is very similar to the computation for cubic forms in the previous section; however, we will need slightly different arguments. Tensors of the form $a \otimes b \otimes c \in U \otimes V \otimes W$ are called rank $1$ tensors. \begin{lemma} \label{kruskal} Suppose $T = \sum_{i =1}^r a_i \otimes b_i \otimes c_i \in U \otimes V \otimes W$, where $\{a_i\}, \{b_i\}, \{c_i\}$ are linearly independent collections of vectors in $U,V$ and $W$ respectively. Then this is the unique decomposition of $T$ into a sum of $r$ rank $1$ tensors. \end{lemma} \begin{proof} For $r = 1$, this is clear. For $r \geq 2$, this follows from Kruskal's theorem, see \cite{Kruskal}. \end{proof} The above lemma can also be proved using elementary linear algebra, without resorting to Kruskal's theorem. \begin{lemma} Suppose $g \in \operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$ fixes $T$ as in the previous lemma. Then $g$ must permute the terms $a_i \otimes b_i \otimes c_i$. \end{lemma} \begin{proof} Applying $g$ to the decomposition into a sum of $r$ rank $1$ tensors again yields a decomposition into a sum of $r$ rank $1$ tensors. Hence, by the above lemma, $g$ must permute the terms. \end{proof} \begin{corollary} If $g \in \operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$ fixes $F_1$, then $g$ must permute the terms in $F_1$. \end{corollary} \begin{corollary} If $g \in \operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$ fixes $F_1$ and $G_1$, then $g$ must fix each of the terms in $F_1$. \end{corollary} \begin{proof} Since the coefficients $\alpha_k,\beta_k,\gamma_k$ of $G_1$ are pairwise distinct, no non-trivial permutation of the terms of $F_1$ fixes $G_1$. Hence $g$ must fix all the terms. \end{proof} Similar arguments hold for $F_2,F_3$ and $F_4$ as well.
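As with cubic forms, the combinatorics here can be sanity-checked by machine. The sketch below (an illustration, assuming Python with sympy) verifies two facts used in this section and in the closedness proof: every one of the twelve index patterns occurring in $F_1,\dots,F_4$ is fixed by $\phi(t)$, so $H$ fixes each $F_d$ and $G_d$, and each basis vector (for a fixed $k$) appears in exactly $4$ of these terms, which accounts for the factor $4$ in the weight sum.

```python
import sympy as sp
from collections import Counter

p, q, r = sp.symbols("p q r", nonzero=True)
# Scaling factors of the basis vectors (in a fixed slot k) under phi(t):
u_scale = {1: p, 2: p, 3: 1/(q*r)}
v_scale = {1: q, 2: q, 3: 1/(p*r)}
w_scale = {1: r, 2: r, 3: 1/(p*q)}

# The twelve index patterns (i, j, l) of the terms u_i^k v_j^k w_l^k
# occurring in F_1, F_2, F_3, F_4.
terms = [(1, 2, 3), (2, 3, 1), (3, 1, 2),   # F_1
         (2, 1, 3), (1, 3, 2), (3, 2, 1),   # F_2
         (1, 1, 3), (2, 3, 2), (3, 1, 1),   # F_3
         (2, 2, 3), (1, 3, 1), (3, 2, 2)]   # F_4

# (a) phi(t) scales each term by 1, so H fixes every F_d and G_d.
for (i, j, l) in terms:
    assert sp.simplify(u_scale[i] * v_scale[j] * w_scale[l] - 1) == 0

# (b) each of the indices 1, 2, 3 occurs exactly 4 times in every slot,
# i.e., each basis vector appears in exactly 4 of the 12 terms.
for slot in range(3):
    assert Counter(term[slot] for term in terms) == {1: 4, 2: 4, 3: 4}
```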
In summary, we obtain: \begin{corollary} \label{stupidere} Suppose $g \in \operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$ fixes $\underline{F}$, then $g$ must fix all the terms in $F_1,F_2,F_3$ and $F_4$. \end{corollary} Let $I_k = \{u_i^k v_j^k w_3^k, u_i^k v_3^k w_j^k , u_3^k v_i^k w_j^k \}_{1 \leq i,j \leq 2}$. Then $\cup_k I_k$ are precisely the terms occurring in $F_1,F_2,F_3$ and $F_4$. \begin{lemma} Suppose $g = (g_u,g_v,g_w) \in \operatorname{GL}(U) \times \operatorname{GL}(V) \times \operatorname{GL}(W)$ fixes every term in $I_k$. Then for some $p_k,q_k,r_k \in {\mathbb C}^*$, we have \begin{align*} & g_u(u_i^k) = p_k u_i^k \text{ for } i = 1,2 \text{ and } g_u(u_3^k) = (q_kr_k)^{-1} u_3^k, \\ & g_v(v_i^k) = q_k v_i^k \text{ for } i = 1,2 \text{ and } g_v(v_3^k) = (p_kr_k)^{-1} v_3^k, \\ & g_w(w_i^k) = r_k w_i^k \text{ for } i = 1,2 \text{ and } g_w(w_3^k) = (p_kq_k)^{-1} w_3^k. \end{align*} \end{lemma} \begin{proof} It is clear that if $g$ fixes a term $b_u \otimes b_v \otimes b_w$, then each $g_x$ must scale $b_x$ for each $x \in \{u,v,w\}$. So, we must have $g_u(u_1^k) = p_k u_1^k$, $g_v(v_1^k) = q_k v_1^k$ and $g_w(w_1^k) = r_k w_1^k$ for some $p_k,q_k,r_k \in {\mathbb C}^*$. Then, since $u_1^kv_1^kw_3^k \in I_k$ is fixed by $g$, we must have $g_w(w_3^k) = (p_kq_k)^{-1} w_3^k$. Since $u_1^kv_2^kw_3^k \in I_k$ is fixed by $g$, we must have $g_v(v_2^k) = q_k v_2^k$. Symmetric arguments complete the proof. \end{proof} \begin{proof} [Proof of Proposition~\ref{tensor-stab-compute}] From Corollary~\ref{stupidere}, we conclude that if $g$ fixes $\underline{F}$, then it must fix all the terms in $\cup_k I_k$. From the previous lemma, one concludes that $g \in H$. Conversely, it is easy to check that $H$ fixes $\underline{F}$: for instance, an element as in the lemma scales the term $u_1^k v_2^k w_3^k$ by $p_k q_k (p_kq_k)^{-1} = 1$, and similarly for the other terms. \end{proof} \section{Concluding remarks} It was pointed out to us by David Wehlau that for the adjoint representation (denoted ${\rm Ad}$) of a group $G$, a generic point has a closed orbit whose stabilizer is a maximal torus. 
This gives rise to a plethora of examples with exponential degree lower bounds. Take any representation $W$ of $G$ for which one can prove exponential degree lower bounds for invariants with respect to a maximal torus. Then the same lower bound would also hold for the ring of $G$-invariants for ${\rm Ad} \oplus W$. The proof of Theorem~\ref{main} requires characteristic zero and breaks down in positive characteristic. For example, in the proof of Lemma~\ref{need.char0}, we use Reynolds operators, which do not exist in positive characteristic. However, one can modify the arguments in a standard way to get similar statements for separating invariants. Also, we do not have a general technique to prove that an orbit is closed, as there is no analog of the moment map in positive characteristic. We can get around this by using the adjoint representation as discussed above. Hence, we can construct representations with exponential degree lower bounds even in positive characteristic (for example, the action of $\operatorname{SL}(V)$ on ${\rm Ad} \oplus S^3(V)$). Sometimes, one is interested in a specific representation that doesn't contain the adjoint representation as a direct summand. One such example is the case of tensor actions that we address in this paper. It remains a difficult open problem to prove exponential lower bounds in such cases in positive characteristic. The main issue is that we do not know of any criterion that can be used to show that an orbit is closed in positive characteristic.
\section{Introduction} A fully autonomous mobile robot, able to explore, navigate and perform actions in an unknown environment is the ultimate objective of today's mobile robotics research. To this end, we consider a single autonomous robot or an autonomous team of robots tasked with exploring the unknown environment. The autonomous exploration problem comprises collecting the data sensed from the environment, using the collected data to build a structured model of the environment, self-localizing in the environment model, high-level planning and scheduling of robot tasks (mission generation and assignment), and path planning and following. All of these need to be performed in real time. In a common approach known as frontier exploration, the robot maintains information about the border which divides the explored and unexplored space in the environment -- the \emph{frontier}. Elements of the frontier represent places in the environment which the robot may approach and thereby increase the knowledge about the structure of the environment. With the information about the exploration frontier available, mission planning can be described in its simplest version as \emph{(boldly) go where no one has gone before}. Many components of the autonomous exploration problem mentioned above are complex enough to be associated with their own field of robotics research, resulting in sophisticated methods and software modules being available for solving them. Namely, building the model of the environment (a map) and self-localization therein may be performed by a module implementing a Simultaneous Localization and Mapping (SLAM) algorithm. For this reason, we consider a control system implementing a modular exploration pipeline as depicted in \autoref{fig:high_level_block_diagram}. Map building and localization are performed by a SLAM module. The SLAM results -- a map and a robot pose -- are processed in a frontier detection module. 
The detected frontier is then used by a high-level exploration task generation and scheduling module, which creates and selects exploration tasks to be executed according to an exploration strategy, and further requests the path planning and following modules to execute the exploration tasks. \enlargethispage{-16.5\baselineskip} The main contribution of this paper is a piece of the exploration pipeline -- a new, efficient frontier detection approach, specialized for use with a 2D graph SLAM algorithm based on occupancy grid submaps \cite{hess2016}. The proposed approach is robust to loop closures and exploits the submap structure of the SLAM algorithm in order to quickly perform frontier updates. By providing high-frequency incremental frontier updates which enable more responsive planning of exploration objectives, it facilitates a real-time use case on large and complex maps, e.g. the Deutsches Museum dataset \cite{hess2016}. All the while, the proposed frontier detection algorithm delivers a result at least as good as a naive frontier edge-detection algorithm, i.e. one performing edge detection on a completely assembled global map each time after SLAM updates the map by inserting scans or optimizing the pose graph to perform loop closures. \section{Related Work} \subsection{Frontier Exploration as a Prevalent Exploration Method} Frontier exploration in the context of autonomous robotics was first introduced by Yamauchi in 1997 \cite{yamauchi1997}, paving the way for many others (\cite{burgard2000, freda2005, wettach2010}). Commonly, elements of the detected frontier are used as navigation goals during planning of exploration tasks. Building on this, there are more complex exploration strategies which attempt to coordinate entire robot teams (\cite{burgard2005,faigl2013}), or use frontiers as sinks in a potential field (\cite{shade2011}). 
Frontier detection is therefore a key elementary operation in frontier exploration, and it is important that it be performed as quickly as possible so that exploration can be more efficient \cite{quin2014}. \subsection{State of the Frontier Detection Art} A naive algorithm for frontier detection is to perform edge detection on the complete global map after each map update. However, this approach is not feasible for larger maps and real-time robot operation with such maps, as it presents a significant computational burden. \subsubsection{Keidar and Kaminka's seminal work on efficient frontier detection} Keidar and Kaminka \cite{keidar2014} proposed several approaches in 2014 which attempt to perform frontier detection efficiently. The first, \emph{Wavefront Frontier Detector} (WFD), consists of running two consecutive breadth-first searches (BFS). The first BFS starts at the robot position and continues throughout the free space, until eventually a frontier point is found which belongs to a component of connected frontier points. From there, the rest of the connected component is found by a second BFS along the connected frontier points. While WFD avoids searching the unobserved space, it still searches all observed free space in each iteration, which may degenerate into a full map search as exploration progresses. The second approach to frontier detection proposed by Keidar and Kaminka, the \emph{Fast Frontier Detector} (FFD), does not use the map built by SLAM, but rather constructs the contour of each laser scan using Bresenham's line algorithm, and uses the constructed contour to detect the frontier and store it in a specialized data structure. Quin and Alempijević \cite{quin2014} note that FFD has to be executed after each scan, which results in many wasteful calculations if frontier updates are required only occasionally, and that Bresenham's line algorithm can cut across unobserved space and miss some frontier cells. 
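The two-pass WFD search described above can be sketched as follows. This is a minimal illustration, not Keidar and Kaminka's implementation: the cell encodings (`FREE`, `OCCUPIED`, `UNKNOWN`) are our own, and for simplicity frontier cells are taken on the free side of the boundary here, whereas the definitions later in this paper place them on the unobserved side.

```python
from collections import deque

FREE, OCCUPIED, UNKNOWN = 0, 1, 2  # illustrative cell encodings

def neighbors(cell, grid):
    # 4-connected neighbourhood, clipped to the grid bounds
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < len(grid) and 0 <= c + dc < len(grid[0]):
            yield (r + dr, c + dc)

def is_frontier(cell, grid):
    # A FREE cell adjacent to an UNKNOWN cell (frontier on the free side)
    r, c = cell
    return grid[r][c] == FREE and any(
        grid[nr][nc] == UNKNOWN for nr, nc in neighbors(cell, grid))

def wfd(grid, robot):
    """WFD sketch: an outer BFS over observed free space; on hitting a
    frontier cell, an inner BFS extracts its connected frontier component."""
    frontiers, extracted = [], set()
    visited, queue = {robot}, deque([robot])
    while queue:
        cell = queue.popleft()
        if cell not in extracted and is_frontier(cell, grid):
            component, comp_queue = [], deque([cell])
            extracted.add(cell)
            while comp_queue:                    # inner BFS along the frontier
                f = comp_queue.popleft()
                component.append(f)
                for n in neighbors(f, grid):
                    if n not in extracted and is_frontier(n, grid):
                        extracted.add(n)
                        comp_queue.append(n)
            frontiers.append(component)
        for n in neighbors(cell, grid):          # outer BFS expands free space
            if n not in visited and grid[n[0]][n[1]] == FREE:
                visited.add(n)
                queue.append(n)
    return frontiers
```

The outer search visits every observed free cell reachable from the robot, which is exactly the behaviour criticized above: it may degenerate into a full map search as the explored area grows.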
The proposed approach does not require execution after each processed scan, supporting a use case where frontier updates are required only occasionally. FFD is also notable for introducing the concept of \emph{active area} -- a bounding box positioned in the map around the robot position, circumscribing the last scan the map was updated with. The frontier update step is sped up by restricting it to the active area. Keidar and Kaminka also applied this concept to the WFD detector, yielding the \emph{incremental WFD} (WFD-INC) algorithm, which requires non-trivial auxiliary data structures for frontier point maintenance. The proposed algorithm uses a similar concept of \emph{active submaps}. \subsubsection{Impact of loop closure in SLAM on frontier detection} Loop closure is an event in which the SLAM algorithm recognizes that the robot has revisited the same place, and then makes a correction using this information which reduces the error caused by drift in localization along the whole loop. The frontier detector has to be able to efficiently cope with the map changes induced by loop closure corrections. These map changes may not be confined to the active area -- in fact, loop closure may result in widespread changes all over the map. While an efficient frontier detection algorithm should avoid reassembling and iterating over the entire global map in every iteration, constraining the algorithm to only the active area makes it difficult to be robust to loop closures. WFD-INC addresses loop closure events by evicting the detected frontier and performing frontier detection from scratch using the new loop-corrected map. To efficiently address loop closures, the frontier detection algorithm needs to be integrated to a certain degree with the implementation of the SLAM algorithm. For example, Keidar and Kaminka additionally proposed an implementation of WFD-INC for GMapping (a particle filter-based SLAM \cite{grisetti2007}) called \emph{incremental parallel} WFD (WFD-IP). 
WFD-IP performs in parallel separate WFD-INC frontier detection for each particle (each particle having its own map), and outputs the appropriate frontier of the current best particle. Like WFD-INC, the proposed method uses the internals of the SLAM algorithm in order to perform frontier detection faster while also being robust to loop closure. Quin and Alempijević \cite{quin2014} introduce two frontier detection methods: \emph{naive active area} (NaiveAA), which is the naive approach confined to the active area, and a version of WFD called \emph{Expanding WFD} (EWFD) which steers the WFD breadth-first search into newly discovered free areas. EWFD assumes that the entropy for each cell can only decrease over time. This is not true in the general case for the complete global map when considering effects of loop closure -- observed areas can get moved around in the global map during loop closure and leave unexplored space in their wake. However, the entropy decrease assumption \emph{is} almost surely true for single submaps in submap-based SLAM, and we exploit this fact in the proposed approach. \subsubsection{Other approaches} Senarathne and Wang \cite{senarathne2013} use an inexact approach based on oriented bounding boxes. Umari \cite{umari2017} uses rapidly-exploring random trees (RRT) to perform sparse frontier detection by building a tree inside the free space in the SLAM-built map. When the algorithm crosses the frontier while trying to expand the random tree, a single frontier point is detected. However, using the implementation of the algorithm provided in \cite{umari2017} does require reassembling the global map in each iteration. Also, this algorithm is not robust to loop closure, since the built RRT tree does not follow the results of pose graph optimization. 
Experiments were performed with a modified version of the RRT frontier detection algorithm in which the tree nodes were bent according to the displacement of the closest submap in the optimized pose graph, in an attempt to make the algorithm robust to loop closure. However, narrow corridors in the map have proven problematic as the map size increases, because the probability of extending the tree into a narrow corridor dramatically decreases as the map canvas size increases. An inspiration for the proposed method was attempting to perform dense local frontier detection (as opposed to sparse, as with RRT), while trying to follow the global ``dance'' of the pose graph as it is optimized. \section{Prerequisites} \subsection{Simultaneous Localization and Mapping} The term SLAM was coined by Leonard and Durrant-Whyte in 1991 \cite{leonard1991}. As shown in the block diagram of the exploration pipeline in \autoref{fig:high_level_block_diagram}, a SLAM algorithm uses sensor data to build a map and perform localization, which is further used in frontier detection, exploration task planning and execution. There is a wealth of SLAM methods developed to this day, which can be roughly grouped into methods based on filtering and methods based on graph optimization. We will focus on graph SLAM, which represents poses and detected features as nodes in a graph, while the correspondences which impose constraints on the poses of the respective nodes are represented as edges. Various optimization methods may be used to minimize the residual error of all constraints, e.g. the Ceres solver \cite{ceres-solver}. Submaps are small local maps which are merged into a global map. One of the earlier approaches to SLAM using submaps is \cite{williams2002}, with further examples being \cite{strom2011} and a graph SLAM approach using histograms of submap features \cite{bosse2008}. 
\begin{figure*} \adjustbox{max width=\textwidth}{ \vcenteredhbox{\includegraphics[width=.333\textwidth]{submap1_cleanup.png}} $\cup$ \vcenteredhbox{\includegraphics[width=.333\textwidth]{submap2_cleanup.png}} $=$ \vcenteredhbox{\includegraphics[width=.333\textwidth]{composite_cleanup.png}} } \caption{Depiction of merging two adjacent submaps and their local frontiers into a global composite map. Colour legend: unobserved grid cells are grey, free cells are white, occupied cells are black, while frontier points are red. Each submap has a \emph{local frontier} which is based solely on the occupancy grid of that submap. The points in the local frontier set are candidates for becoming part of the \emph{global frontier}. The submaps are then projected into the global map coordinate system according to the current optimized solution of graph SLAM. Any local frontier point which after projection ends up in an observed (free or occupied) area of any other submap (i.e. fails the stabbing query test against that submap) is discarded from the global frontier set.} \label{fig:submap_composition} \end{figure*} \subsection{Cartographer} The proposed frontier detection method was designed for use with Cartographer, an open-source multi-robot multi-trajectory 2D and 3D graph SLAM based on occupancy grid submaps, developed by Google (Hess, Kohler, Rapp in 2016 \cite{hess2016}). Cartographer's approach of optimizing the poses of all scans and submaps follows Sparse Pose Adjustment \cite{konolige2010} and uses the Ceres solver \cite{ceres-solver} for optimizing the pose graph using the Levenberg–Marquardt algorithm (LMA). Cartographer submaps are spatially and temporally compact occupancy grid maps made from a short, continuous series of rangefinder sensor measurements (laser scans) taken during traversal of a short section of the robot trajectory. 
It is desired that the size of the submaps be small enough such that the localization drift is not perceptible when looking at one submap at a time. Building the submaps and the trajectory (without loop closures) is handled by Cartographer's \emph{local trajectory builder} component, which maintains a pair of active submaps that the laser scans are inserted into according to a local pose obtained by performing scan matching against the older (larger) submap from the active pair. When a predetermined number of scans is inserted into a submap, it is marked as finished, and a new submap is created to take its place in the active submap pair. Importantly, once a submap is finished, its occupancy grid is immutable from that point onward. Cell occupancy probabilities are clamped to the interval $[0.1, 0.9]$ and are stored linearly mapped onto the space of unsigned 16-bit integers. Scan insertion into submaps is performed as Bayesian updates of the cell occupancy probabilities (see (3) in \cite{hess2016}). The cells corresponding to laser hit points are updated with ``occupied'' observations, while the intermediate points (obtained by casting rays from the laser rangefinder origin to hit points) are updated with ``free'' observations. It is also important to note that on the level of a single submap, the cell entropy (i.e. the uncertainty of the occupancy probability) can be assumed to be monotonically decreasing. If there are moving objects in the environment (e.g. a door which was previously closed is now open), it is theoretically possible that an observed cell may afterwards land in the narrow occupancy probability interval near 0.5 where it would be again considered unobserved. However, the probability of this is negligible, and it is considered that \emph{the cells of an active submap cannot become unobserved once they are observed}. 
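The Bayesian cell update and clamping described above can be sketched as follows, assuming an odds-form update in the style of (3) in \cite{hess2016}; the function and constant names are ours, and the observation probability passed in plays the role of $p_{hit}$ or $p_{miss}$:

```python
P_MIN, P_MAX = 0.1, 0.9          # clamping interval used by Cartographer

def odds(p):
    return p / (1.0 - p)

def odds_inv(o):
    return o / (1.0 + o)

def update_cell(p_old, p_obs):
    """One Bayesian update of a cell occupancy probability in odds form;
    p_obs is the hit probability for 'occupied' observations or the miss
    probability for 'free' observations."""
    if p_old is None:            # previously unobserved cell: adopt p_obs
        return p_obs
    return min(max(odds_inv(odds(p_old) * odds(p_obs)), P_MIN), P_MAX)
```

Since odds multiply, repeated consistent observations drive the probability monotonically towards the clamping bounds, in line with the decreasing-entropy assumption above.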
When loop closures are detected by the \emph{constraint builder} component of Cartographer, pose constraints between the corresponding trajectory nodes and submaps are added as edges into the pose graph. Afterwards, optimization is periodically invoked in order to find a new solution -- a set of global submap and trajectory node poses -- which minimizes the residual costs of the constraints. As discussed, for frontier detection, this implies that when pose graph optimization is performed, the submaps can and do get displaced and rotated (i.e. undergo rigid transformations), although their occupancy grids are immutable after they are marked as finished. The proposed frontier detection approach attempts to take advantage of these properties of Cartographer. \section{Frontier detection} \subsection{Definitions} \textbf{Rigid transformation} $\mat{T}^b_a \in \mat{SE}(3)$ is the pose of the coordinate system $b$ relative to the coordinate system $a$. The $\mat{SE}(3)$ group operation of pose composition is written as multiplication, e.g. $\mat{T}^b_a$ $\mat{T}^c_b$. The transform inverse $(\mat{T}^b_a)^{-1}$ is equal to $\mat{T}^a_b$. The projection of a point with coordinates expressed in $b$, $\mat{p}^b = \begin{bmatrix} x \; y \; z \end{bmatrix}^T \in \mathbb{R}^3$, to the corresponding point $\mat{p}^a$ in coordinate system $a$ is denoted as $\mat{p}^a = \mat{T}^b_a \; \mat{p}^b$. \textbf{The global map coordinate system} is denoted with $g$. The solution of pose graph optimization is the set of poses of the graph members expressed with respect to $g$. \textbf{Submap} is an occupancy grid of resolution (i.e. cell size) $r$, which is typically $0.05$ m. The occupancy probabilities of grid cells are initially unobserved, i.e. unknown (exactly 0.5). Submaps are constructed by insertion of $n_{scans}$ sequential laser scans, where $r$ and $n_{scans}$ are predetermined fixed parameters. 
A submap is marked as \textbf{finished} when $n_{scans}$ scans have been inserted into the submap. \textbf{The set of active submaps} contains the submaps which are not yet finished. \textbf{The local submap coordinate system} has the origin fixed, for example, to the robot pose in the first scan inserted into the submap. The cells of a submap occupancy grid are indexed in a 2D matrix using a 2-integer tuple: $\mat{S}^{si}_{k, l}$ is the occupancy probability value of the cell $(k,l)$ in the submap $si$. As the submap grows in size, the 2-integer tuples corresponding to the same cell in the submap may change (which is an implementation-specific detail), but the position of the cell in the local submap coordinate system must remain the same. \textbf{Global submap pose} $\mat{T}_g^{si}$ is the global pose of the origin of the local coordinate system of a submap $si$. Since submaps are members of the pose graph, submap poses are part of the optimized pose graph solution. \textbf{Occupancy classification} -- in the frontier detection algorithm, the cell occupancy probability values are not used directly. The probability values are first classified according to the following thresholding rule: \begin{equation} \text{class}(p = \mat{S}^{si}_{k, l}) \colonequals \begin{cases} \text{free} & p < 0.5 \\ \text{occupied} & p > 0.5 \\ \text{unobserved} & p = 0.5 \end{cases} \label{eq:thresholding} \end{equation} \textbf{Observed cells} are occupancy grid cells that are not unobserved. \textbf{Local frontier point} is the center of an \textbf{unobserved} occupancy grid cell which is adjacent to a \textbf{free} cell in the same submap. \textbf{Local frontier} of a submap is the set of its local frontier points. See the red points in the first two pictures in \autoref{fig:submap_composition} for an example. \textbf{Stabbing query} refers to looking up the corresponding cell in a given submap for a given global point and the global submap pose. 
More precisely, for a submap $si$ and a global point $\mat{p}^g \in \mathbb{R}^3$ expressed in $g$, to find the occupancy grid cell $(k, l)$ in $si$ whose center coordinates in the local coordinate system of the submap $si$ are closest to $(\mat{T}_g^{si})^{-1} \mat{p}^g$. Further, performing a \textbf{stabbing query test} means checking if the corresponding cell $\mat{S}^{si}_{k, l}$ is an \textbf{unobserved} cell, in which case the test passes. \textbf{Global frontier point} is the center of an \textbf{unobserved} global occupancy grid map cell adjacent to a \textbf{free} global map cell. A valid global frontier point must pass the stabbing query test against all other submaps. \textbf{Global frontier} is the set of all valid global frontier points at a given time. See the third picture in \autoref{fig:submap_composition}. \textbf{Perimeter} of the global or local frontier is the number of frontier points in the respective set. It may be noted that the frontier points could alternatively have been defined as centers of \emph{free} cells adjacent to \emph{unobserved} cells. We have chosen to define frontier points as centers of \emph{unobserved} cells as above in order to simplify the stabbing query test. \subsection{The frontier detection algorithm} There are two kinds of events which occur during SLAM execution which are of interest for frontier detection: a submap update event, where a scan is inserted into the active submaps; and a pose graph optimization event which also occurs periodically, but less often. \autoref{alg:handle_submap_update} describes handling of submap update events, while \autoref{alg:handle_optimization_update} describes handling of pose graph optimization events. 
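The stabbing query test defined above can be sketched as follows. This is a minimal planar illustration under assumed conventions (2D homogeneous matrices instead of $\mat{SE}(3)$, and cell $(0,0)$ centred at the local submap origin -- neither is Cartographer's actual convention), with our own function names:

```python
import numpy as np

def se2(x, y, theta):
    """3x3 homogeneous matrix for a planar rigid transformation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def stabbing_query_test(p_g, T_g_si, grid, resolution):
    """Test a global point p_g against submap si with global pose T_g_si:
    project into the local frame via (T_g^si)^{-1} p^g, look up the closest
    cell, and pass iff that cell is unobserved (probability exactly 0.5).
    Points falling outside the grid have no corresponding cell, so the
    test passes trivially."""
    p_local = (np.linalg.inv(T_g_si) @ np.array([p_g[0], p_g[1], 1.0]))[:2]
    k = int(round(p_local[0] / resolution))
    l = int(round(p_local[1] / resolution))
    if not (0 <= k < grid.shape[0] and 0 <= l < grid.shape[1]):
        return True
    return bool(grid[k, l] == 0.5)
```

A global frontier point is then valid exactly when this test passes against every other submap.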
\begin{algorithm}[t] \small \capstart \KwIn{active\_submaps, global\_submap\_poses} \KwPersistent{\\ \quad local\_frontiers, global\_frontiers, \\ \quad global\_submap\_bounding\_boxes} \KwLocal{\\ \quad current\_bounding\_boxes $\Leftarrow$ empty \\ \quad submaps\_with\_frontier\_updates $\Leftarrow$ active\_submaps} \nl \ForEach{submap $si \in$ active\_submaps}{ \nl local\_frontiers[$si$].Clear()\; \nl global\_frontiers[$si$].Clear()\; \nl \nosemic current\_bounding\_boxes[$si$] $\Leftarrow$\; \pushline CalculateGlobalBoundingBox($si$,\; \pushline\dosemic global\_submap\_poses[$si$])\;\popline\popline \lnl{algline:intersecting_submaps_query} \nosemic intersecting\_submaps $\Leftarrow$\; \pushline global\_submap\_bounding\_boxes.Intersect(\; \pushline\dosemic current\_bounding\_boxes[$si$])\;\popline\popline \lnl{algline:local_frontier_detection_begin} Threshold and classify cells in $si$ according to \eqref{eq:thresholding}\; \nl \ForEach{ cell $(k, l) \in$ submap $si$}{ \lnl{algline:edge_detection} \If{ \nosemic cell $(k, l)$ is \textbf{unobserved} $\wedge$ \; cell $(k, l)$ is adjacent to a \textbf{free} cell in $si$}{ \lnl{algline:local_frontier_detection_end} local\_frontiers[$si$].Add($(k, l)$)\; \lnl{algline:local_frontier_projection} \nosemic projected\_to\_global $\Leftarrow$\; \pushline ProjectToGlobal(\; \pushline global\_submap\_poses[$si$], \dosemic cell $(k, l)$)\;\popline\popline \lnl{algline:local_stabbing_query_test_against_intersecting_submaps} \eIf{ \nosemic StabbingQueryTest(\; \pushline\pushline projected\_to\_global,\; intersecting\_submaps $\cup$\; \quad\quad \:\:\: active\_submaps $\setminus$ $si$) \popline\popline }{ \lnl{algline:adding_to_global_frontier} \nosemic global\_frontiers[$si$].Add(\; \dosemic\pushline projected\_to\_global)\;\popline }{ \lnl{algline:add_failing_submap_hint} \nosemic local\_frontiers[$si$]\; \pushline\dosemic .AddFailingSubmapHintForCell($(k, l)$)\;\popline } } } 
\lnl{algline:test_adjacent_global_frontiers_against_active_submaps_begin} \ForEach{ submap $sj \in$ intersecting\_submaps}{ \nl \ForEach{ global\_frontier\_point $\in$ global\_frontiers[$sj$] } { \nl \If{ \nosemic \textbf{not} StabbingQueryTest(\; \pushline\pushline\pushline global\_frontier\_point, $si$)\; \popline\popline\popline}{ \lnl{algline:test_adjacent_global_frontiers_against_active_submaps_end} \nosemic global\_frontiers[$sj$].Remove(\; \dosemic\pushline global\_frontier\_point)\;\popline \lnl{algline:mark_adjacent_global_frontiers_changed} submaps\_with\_frontier\_updates.Add($sj$)\; } } } } \lnl{algline:insert_finished_submap_bounding_boxes} \ForEach{submap $si \in$ active\_submaps}{ \nl \If{submap $si$ has just been finished}{ \nl \nosemic global\_submap\_bounding\_boxes.Insert(\; \pushline\dosemic current\_bounding\_boxes[$si$])\;\popline } } \lnl{algline:publishing_of_incremental_frontier_updates} \ForEach{ \nosemic submap $si \in$ submaps\_with\_frontier\_updates}{ \nl PublishFrontierUpdates($si$, global\_frontiers[$si$])\; } \caption{Handling of updates to active submaps} \label{alg:handle_submap_update} \end{algorithm} \begin{algorithm} \small \capstart \KwIn{\nosemic updated global\_submap\_poses, \; \quad finished\_submaps, active\_submaps} \KwPersistent{\\ \quad local\_frontiers, global\_frontiers, \\ \quad global\_submap\_bounding\_boxes} \lnl{algline:bounding_boxes_recomputation_begin} global\_submap\_bounding\_boxes.Clear()\; \nl \ForEach{submap $si \in$ finished\_submaps}{ \lnl{algline:bounding_boxes_recomputation_end} \nosemic global\_submap\_bounding\_boxes.Insert(\; \pushline CalculateGlobalBoundingBox($si$,\; \pushline\dosemic global\_submap\_poses[$si$]))\;\popline } \nl global\_frontiers.Clear()\; \nl \ForEach{submap $si \in$ finished\_submaps $\cup$ active\_submaps}{ \lnl{algline:alg2_intersecting_submaps} \nosemic intersecting\_submaps $\Leftarrow$\; \Indp global\_submap\_bounding\_boxes.Intersect(\; \Indp 
global\_submap\_bounding\_boxes[$si$])\; \dosemic $\cup$ active\_submaps $\setminus$ $si$\Indm\;\Indm \nl \ForEach{ cell $(k, l) \in$ local\_frontiers[$si$]}{ \lnl{algline:optimization_projecting_points} \nosemic projected\_to\_global $\Leftarrow$\; \pushline ProjectToGlobal(\; \pushline global\_submap\_poses[$si$], \dosemic cell $(k, l)$)\;\popline\popline \lnl{algline:hint_comment_1} \tcp{First try testing against the} \lnl{algline:hint_comment_2} \tcp{failing submap hint if present.} \lnl{algline:optimization_retesting} \eIf{ \nosemic StabbingQueryTest(\; \pushline\pushline projected\_to\_global,\; intersecting\_submaps $\cup$\; \quad\quad \:\:\: active\_submaps $\setminus$ $si$) \popline\popline }{ \lnl{algline:optimization_insert_into_new_global_frontier} \nosemic global\_frontiers[$si$].Add(\; \dosemic\pushline projected\_to\_global)\;\popline }{ \nl \nosemic local\_frontiers[$si$]\; \pushline\dosemic .AddFailingSubmapHintForCell($(k, l)$)\;\popline } } \nl PublishFrontierUpdates($si$, global\_frontiers[$si$])\; } \caption{Handling of pose graph optimization events} \label{alg:handle_optimization_update} \end{algorithm} \subsubsection{Handling of submap updates} For each laser scan processed in SLAM, both submaps in the active pair are updated, making submap updates the most frequently occurring type of event. A submap update can only affect the frontier in the area covered by the active submaps, therefore the frontier detection algorithm can be constrained to this area in order to maximize efficiency. The first step in handling a submap update of an active submap is performing dense local frontier detection on the new version of the submap occupancy grid (\autoref{alg:handle_submap_update}, \crefrange{algline:local_frontier_detection_begin}{algline:local_frontier_detection_end}). In other words, on the local submap level, we have opted to perform a naive edge detection approach. The reason for using a naive approach for local frontier detection is twofold. 
First, since the submaps are bounded in size (controlled by the fixed parameter $n_{scans}$), and the number of active submaps is constant, the time complexity of local frontier detection is not affected by the size of the global map or the size of the dataset. Second, probability thresholding, classification (\autoref{algline:local_frontier_detection_begin}) and edge detection (\autoref{algline:edge_detection}) can be vectorized, allowing for high performance on modern CPUs. Our implementation relies on Eigen \cite{eigenweb} for vectorization, where we used matrix block algebra and Hadamard products to implement thresholding, classification and Boolean logic for edge detection. Computing and storing the set of local frontier points for an active submap produces a set of \emph{candidates} for the global frontier. Every local frontier point is projected into the global map frame $g$ according to the current global pose of the corresponding submap (\autoref{algline:local_frontier_projection}) and the stabbing query test is performed against the intersecting submaps (\autoref{algline:local_stabbing_query_test_against_intersecting_submaps}). In case the test passes against all submaps, the projected frontier point is indeed a global frontier point and is added to the global frontier set (\autoref{algline:adding_to_global_frontier}). If the test fails, the submap against which the test failed is recorded as a hint (\autoref{algline:add_failing_submap_hint}) so that future re-testing can be performed faster -- performing a test against the failing submap hint in most cases immediately produces a negative result. In the next step of handling submap updates, we exploit several properties of graph SLAM based on occupancy grid submaps: \begin{enumerate} \item The occupancy grids of finished submaps are immutable. Therefore, it is not necessary to re-detect \emph{local} frontiers for already finished submaps. 
\item The algorithm for handling submap updates may assume that no graph optimization has occurred since the last submap update event, so all existing \emph{global} frontiers of finished submaps are valid (except for the situation described below). \item The cell occupancy probabilities of active submaps have decreasing entropy, which means that only previously \textbf{unobserved} cells can become \textbf{observed}, and not vice-versa. This implies that updates to active submaps can invalidate the global frontiers of intersecting submaps. \end{enumerate} In \crefrange{algline:test_adjacent_global_frontiers_against_active_submaps_begin}{algline:test_adjacent_global_frontiers_against_active_submaps_end}, it is tested if the new versions of active submaps cover existing valid global frontiers of the intersecting finished submaps. This is done by performing stabbing query tests of the global frontier points against the active submaps. If the stabbing query test fails, the newly covered global frontier points are removed from the set of global frontier points (\autoref{algline:test_adjacent_global_frontiers_against_active_submaps_end}), thus preserving the invariant that all global frontier points at a time are valid. The global frontiers of finished submaps whose global frontier points got removed are also marked as updated (\autoref{algline:mark_adjacent_global_frontiers_changed}) for incremental publishing of frontier updates (\autoref{algline:publishing_of_incremental_frontier_updates}). Bounding boxes of finished submaps are stored inside a tree data structure which enables fast queries of submaps which intersect with a given bounding box (e.g. looking up finished submaps intersecting with an active submap, \autoref{algline:intersecting_submaps_query}). When a submap is marked as finished, its bounding box is inserted into the tree structure (\autoref{algline:insert_finished_submap_bounding_boxes}). 
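For concreteness, the stabbing query test described above can be sketched as follows. All types and names here are illustrative assumptions for exposition, not the actual Cartographer-based implementation (which queries a Boost R-tree for the candidate submaps): a projected frontier point remains a global frontier candidate only if no tested submap has observed the corresponding cell.

```cpp
#include <cmath>
#include <optional>
#include <vector>

// Hypothetical minimal types; the real implementation works on
// Cartographer submap grids and R-tree query results.
struct AABB {
  double min_x, min_y, max_x, max_y;
  bool Contains(double x, double y) const {
    return x >= min_x && x <= max_x && y >= min_y && y <= max_y;
  }
};

struct Submap {
  AABB global_bounds;          // axis-aligned bounding box in frame g
  double origin_x, origin_y;   // submap pose in g: translation
  double yaw;                  //                   rotation
  double resolution;           // cell size
  int width, height;
  std::vector<bool> observed;  // row-major observed/unobserved grid

  // Map a global point into this submap's grid; nullopt if outside.
  std::optional<bool> IsObservedAt(double gx, double gy) const {
    const double c = std::cos(-yaw), s = std::sin(-yaw);
    const double lx = c * (gx - origin_x) - s * (gy - origin_y);
    const double ly = s * (gx - origin_x) + c * (gy - origin_y);
    const int k = static_cast<int>(std::floor(lx / resolution));
    const int l = static_cast<int>(std::floor(ly / resolution));
    if (k < 0 || l < 0 || k >= width || l >= height) return std::nullopt;
    return observed[l * width + k];
  }
};

// The test passes only if no tested submap has observed the cell
// corresponding to the projected point (gx, gy).
bool StabbingQueryTest(double gx, double gy,
                       const std::vector<const Submap*>& submaps) {
  for (const Submap* submap : submaps) {
    if (!submap->global_bounds.Contains(gx, gy)) continue;
    const auto obs = submap->IsObservedAt(gx, gy);
    if (obs && *obs) return false;  // point covered by an observed cell
  }
  return true;
}
```

A negative result pinpoints the failing submap, which is exactly the information recorded as the failing submap hint for faster re-testing.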
Our implementation uses the Boost implementation of R-trees \cite{boost_rtree} for storing global axis-aligned bounding boxes of finished submaps. It can also be noted that not all submap update events have to be handled in order to guarantee a correct result -- any non-final submap update can be skipped. A robotic exploration system which does not require real-time frontier updates after every scan, but rather occasionally, can invoke the algorithm for handling submap updates only when a submap is finished. This can significantly reduce the computational effort of keeping the frontier up to date, up to a factor of $n_{scans}$. \subsubsection{Handling of graph optimization events} When graph SLAM performs optimization, a new solution is produced for poses of all graph members. For frontier detection, this means that the submap poses have changed. This invalidates the global bounding boxes of submaps and the entire global frontier, all of which have to be recomputed in \autoref{alg:handle_optimization_update}. However, advantage is taken of the fact that the local frontiers have already been computed for all submaps, so all that needs to be done is to re-project the local frontier points to the global coordinate system $g$ and re-test them. The global bounding boxes are recomputed in \crefrange{algline:bounding_boxes_recomputation_begin}{algline:bounding_boxes_recomputation_end}, while the rest of \autoref{alg:handle_optimization_update} recomputes the global frontier similarly to \autoref{alg:handle_submap_update}: each local frontier point is re-projected according to the corresponding new global submap pose (\autoref{algline:optimization_projecting_points}) and the stabbing query test against the intersecting submaps is performed (\autoref{algline:optimization_retesting}). If the stabbing query test is passed, the re-projected and re-tested frontier point is inserted into the new global frontier set (\autoref{algline:optimization_insert_into_new_global_frontier}).
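The re-projection step amounts to a rigid-body SE(2) transform of each cached local frontier cell using the new submap pose. The pose representation and names below are illustrative assumptions, not Cartographer's actual API:

```cpp
#include <cmath>
#include <utility>

// Hypothetical 2D pose of a submap in the global frame g.
struct Pose2D { double x, y, yaw; };

// Project the center of local cell (k, l) into the global frame,
// assuming square cells of size `resolution`.
std::pair<double, double> ProjectToGlobal(const Pose2D& submap_pose,
                                          int k, int l,
                                          double resolution) {
  // Center of cell (k, l) in the submap's local frame.
  const double lx = (k + 0.5) * resolution;
  const double ly = (l + 0.5) * resolution;
  // Rigid-body (SE(2)) transform into the global frame.
  const double c = std::cos(submap_pose.yaw);
  const double s = std::sin(submap_pose.yaw);
  return {submap_pose.x + c * lx - s * ly,
          submap_pose.y + s * lx + c * ly};
}
```

Since the local frontier cells are cached per submap, handling a pose graph optimization event only re-runs this projection and the stabbing query test; no per-cell edge detection is repeated.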
Note that during re-testing, it is advisable to first try testing against the failing submap hint, if it exists (comment in \crefrange{algline:hint_comment_1}{algline:hint_comment_2}). This speeds up rejection of failing points, since the submap that caused a local frontier point to fail the test earlier will in most cases cause it to fail again. Even though recomputing the entire global frontier in \autoref{alg:handle_optimization_update} is certainly not a computationally lightweight operation, it is still more efficient than any non-submap-aware approach, which would require iterating through the entire reassembled global map and would thus have time complexity proportional to the \emph{area} of the global map. Recomputing the global frontier in \autoref{alg:handle_optimization_update} has time complexity proportional to the \emph{perimeter} of the local frontiers. To summarize, \autoref{alg:handle_submap_update} assumed that all global frontier points before a submap update event were valid and up-to-date. Graph optimization violates this assumption by displacing the submaps according to submap poses of the new pose graph solution. \autoref{alg:handle_optimization_update} restores this invariant by recomputing the global frontier. \section{Algorithm analysis} \subsection{Soundness and completeness} Let us first note that the process of merging submaps into a global map according to a pose graph solution cannot result in an unobserved cell in the global map if any of the corresponding submap cells are observed. In other words, if a global map cell is unobserved, the corresponding cells in all submaps are also unobserved. Also, let $LF$ and $GF$ denote the sets of local and valid global frontier points of all submaps. For discussing completeness, we will consider a valid global frontier point in $GF$, i.e. the center of an \emph{unobserved} global map cell adjacent to a free global map cell.
Due to the map merging process, all submap cells corresponding to the global frontier cell have to be unobserved as well. Next, the adjacent free global map cell is also free in at least one submap, whose merging caused the free global map cell to be marked as such. In that submap, the unobserved cell next to the free cell is a local frontier and is in $LF$. Therefore, each valid global frontier point corresponds to a local frontier point in at least one submap (i.e. $GF \subseteq LF$ projected to $g$), and an algorithm which computes the set of global frontier points $GF$ by taking the set of all local frontier points $LF$ as input is thus complete, providing it does not incorrectly discard any valid global frontier points in the process. The rule for discarding local frontier points when they fail the stabbing query test can only result in \emph{observed} cells being correctly discarded from the global frontier, and therefore the property of completeness is preserved. Also, the naive edge detection algorithm used for computing the local frontiers is trivially valid. For soundness, we will consider a global frontier point returned by the proposed algorithm, and suppose that it is not valid. This could be either because the returned global frontier point is not an unobserved cell in the global map, or because the returned global frontier point is not adjacent to a free cell in the global map. The first case is not possible because the frontier point would have failed the stabbing query test against the submap that contains an observed cell which resulted in marking the corresponding global cell as observed. The second case is theoretically possible -- for example, if there were a plurality of submaps with corresponding occupied cells rather than free, so the merging process results in adjacent global map cells being marked as occupied instead of free. 
However, this case is of no practical significance, because the chance that this will happen without the unobserved frontier cell also getting covered up by observed cells (which in turn will make it fail the stabbing query test) is negligible. To exclude this case, the stabbing query test could be modified to also look at the adjacent cells in submaps, instead of just checking if the single cell corresponding to the frontier is unobserved. This would not increase the theoretical time complexity, but would make the implementation unnecessarily more elaborate and slow. \subsection{Computational complexity} \subsubsection{Handling of submap updates} In \autoref{alg:handle_submap_update}, \autoref{algline:intersecting_submaps_query}, the complexity of querying the R-tree for submaps intersecting with the updated submap $si$ is $O(\log|S| + |S_{\cap si}|)$, where $S$ is the set of all submaps, and $S_{\cap si}$ is the set of submaps intersecting with the submap $si$. This also includes the complexity of inserting a finished submap bounding box into the R-tree. In \crefrange{algline:local_frontier_detection_begin}{algline:edge_detection}, naive local frontier detection runs in $O(A(si))$, where $A(si)$ is the area of submap $si$, i.e. the number of cells in the submap. The number of local frontier points in $si$, i.e. the perimeter of its local frontier $LF_{si}$ will be denoted as $P(LF_{si})$, while the global frontier of the submap $si$ and its perimeter will be denoted as $GF_{si}$ and $P(GF_{si})$, respectively. For each detected local frontier point in the updated submap, the complexity of performing the stabbing query test against the intersecting submaps (\autoref{algline:local_stabbing_query_test_against_intersecting_submaps}) is $O(|S_{\cap si}|)$, yielding the complexity of this step $O(P(LF_{si}) \cdot |S_{\cap si}|)$. 
Invalidating global frontiers of the intersecting submaps (\crefrange{algline:test_adjacent_global_frontiers_against_active_submaps_begin}{algline:test_adjacent_global_frontiers_against_active_submaps_end}) runs linearly with respect to their perimeter, i.e. in time $O(P(\bigcup_{sj \in S_{\cap si}} GF_{sj}))$. The total time complexity of handling an update of submap $si$ thus equals (simplified assuming $P(LF_{si}) \geq 1$): \begin{equation} O(\log|S| + A(si) + P(LF_{si}) \cdot |S_{\cap si}| + P(\bigcup_{sj \in S_{\cap si}} GF_{sj})) \end{equation} \subsubsection{Handling of pose graph optimization events} The first step in \autoref{alg:handle_optimization_update} is to rebuild the R-tree (\crefrange{algline:bounding_boxes_recomputation_begin}{algline:bounding_boxes_recomputation_end}), which runs in $O(|S|\log|S|)$. This complexity also covers looking up the intersecting submaps for each submap (\autoref{algline:alg2_intersecting_submaps}). Next, every local frontier point is projected to the global coordinate frame $g$ and the stabbing query test is performed against the intersecting submaps (\crefrange{algline:optimization_projecting_points}{algline:optimization_retesting}). A pessimistic bound would entail testing every local frontier point against all other submaps, i.e. a time complexity of $O(|S| \cdot P(LF))$. The pessimism of this bound can be reduced by assuming that the points which fail the stabbing query test will do so on the first test, performed against the failing submap hint. If their perimeter, equal to $P(LF) - P(GF)$, is denoted as $P(FF)$, this assumption yields a time complexity of $O(|S|\cdot P(GF) + P(FF))$.
The total time complexity of handling a pose graph optimization event thus depends on the number of submaps and the local and global frontier perimeters: \begin{equation} O(|S| \cdot (\log|S| + P(GF)) + P(FF)) \end{equation} We have managed to avoid having the two-dimensional map area in the time complexity of the global operation of handling pose graph optimization by taking advantage of submaps and their immutability. \section{Experimental results} Our implementation of frontier detection, available online on Github\footnote{\href{https://github.com/larics/cartographer_frontier_detection}{https://github.com/larics/cartographer\_frontier\_detection}}, has been tested on an Intel i7 6800K CPU running Ubuntu 18.04. A demo video of processing Google's Deutsches museum dataset \cite{hess2016} is available\footnote{\href{https://goo.gl/62zEUy}{https://goo.gl/62zEUy}}. We are using the Google Cartographer \emph{offline node}, which processes a ROS bag dataset as fast as the CPU can handle (around 4-5x realtime), making it ideal for benchmarking the impact of performing frontier detection on SLAM performance. In order to minimize the impact on SLAM performance, the frontier detection algorithm is running in a separate thread. The algorithm tries to process all submap updates, while it can adaptively skip non-final submap updates in case the processing speed of frontier detection falls behind SLAM. For this reason, the wall clock frequency of incremental frontier updates has been measured. The results are given in \autoref{table:results}. 
\begin{table}[] \centering \caption{Experimental setup and results} \label{table:results} \begin{tabular}{l} \toprule Dataset: \href{https://storage.googleapis.com/cartographer-public-data/bags/backpack_2d/cartographer_paper_deutsches_museum.bag}{cartographer\_paper\_deutsches\_museum.bag}, duration: 1912s \\ \midrule Testing setup details \\ \quad Intel i7 6800K @ 3600 MHz, Ubuntu 18.04, ROS Melodic\\ \quad POSE\_GRAPH.constraint\_builder.sampling\_ratio = 0.1\\ \quad MAP\_BUILDER.num\_background\_threads = 10 \\ \quad Cartographer offline node, exit before final optimization \\ \quad No RViz visualization \\ \midrule Wall clock SLAM processing time (not including final optimization) \\ \quad with frontier detection: 434s (4.4x realtime on average) \\ \quad without frontier detection: 360s (5.3x realtime on average) \\ \midrule Frequency of frontier updates \\ \quad Average: 78 Hz (13 ms) \quad Std. deviation: 22 ms\\ \midrule SLAM events \\ \quad Total submap update events: 37816 \\ \quad Skipped submap update events: 3900/37816\\ \quad Pose graph optimization events: 355\\ \bottomrule \end{tabular} \end{table} \subsection{Additional notes on implementation of the frontier detection algorithm} An optimization included in our implementation is incorporating additional stabbing query tests, performed against a fixed number of \emph{temporally close}, i.e.\ sequential, submaps (for example, the 4 previous submaps) using \emph{unoptimized} submap poses, into the local frontier edge detection condition in \autoref{alg:handle_submap_update}, \autoref{algline:edge_detection}. This permanently ``bakes in'' a negative test result against these submaps, since the affected points are permanently erased from the local frontiers.
A benefit of discarding points early from local frontiers is speeding up recomputation of the global frontier in \autoref{alg:handle_optimization_update}, since there will now be fewer candidates in the local frontier which have to be re-projected and re-tested. Also, it is expected that the localization drift between sequential submaps is small, and that pose graph optimization will not produce a significant relative displacement between sequential submaps. Using the unoptimized poses may actually be preferable for processing sequential submaps, because it could prevent detecting a false frontier resulting from slight misalignment of sequential submaps introduced by pose graph optimization when closing loops. We have employed a few cosmetic improvements which result in (subjectively) aesthetically better frontiers. The first improvement is treating cells with a very uncertain ``free'' probability (i.e. $\mat{S}^{si}_{k, l} \in [0.5 - \varepsilon, 0.5]$, where $\varepsilon \colonequals 0.04$ is an arbitrarily chosen small value) as ``unobserved'' cells. This prevents detecting a false frontier around single false long-distance laser readings, which cause insertion of a false ray of ``free'' cells into the submap. This also has the beneficial effect of driving the robot exploration system to get a ``better look'' at areas with uncertain free cells, since they are considered unexplored. The second change we have made is simple smoothing of local frontiers by adjusting the definition of a local frontier to be \emph{the center of an unobserved cell with $\geq$ 2 free adjacent cells and $\geq$ 2 unobserved adjacent cells}, where \emph{adjacent} cells are cells in the Moore 8-neighbourhood. \section{Conclusion and future work} We have described, implemented and tested an efficient frontier detection algorithm that is specialized for 2D graph SLAM based on occupancy grid submaps.
Our algorithm is efficiently constrained to the area of active submaps, yet robust to loop closure by recomputing the global frontier after pose graph optimization. Importantly, the time complexity of recomputing the global frontier is a function of the frontier perimeter, and not of map area. \emph{Future work:} Some of the cited state of the art (e.g.\ \cite{keidar2014}) groups contiguous frontier points into segments, which is useful for selecting navigation objectives of exploration tasks. Another kind of post-processing of the detected frontier which might be of interest is reachability analysis: detecting frontier points that are unreachable by the robot and therefore do not make sense as navigation objectives, such as frontier points behind glass, closed doors or walls (e.g. resulting from false laser readings, or slight misalignment of walls in different submaps). Nonetheless, a performant algorithm for frontier detection is the basis for any such further improvements. \bibliographystyle{IEEEtran}
\section{Introduction} Quenching a system across a continuous phase transition from a high- to a low-symmetry phase causes the system to spontaneously break symmetry. Immediately after the quench, causally disconnected regions of the system will break symmetry independently, resulting in the formation of domains with independent order parameter orientation. The subsequent growth of these domains toward the global equilibrium state is known as coarsening dynamics. Although the microscopic details of coarsening are usually extremely complicated, at a macroscopic level a much simpler scaling regime can emerge for large average domain size $L$. Spatial correlations of the order parameter at different times $t$ then collapse onto a single curve when rescaled by $L$, and the domains grow as $L\sim t^{1/\eta}$ with the scaling exponent $\eta$ determined by the dynamic universality class~\cite{bray1994}. Such universal dynamics has been explored in a vast variety of systems, ranging from the early universe~\cite{boyanovsky2000} to superfluid formation~\cite{damle1996} to opinion spreading in sociology~\cite{castellano2009}. When the quench produces topological defects, the decay of these defects has long been thought to provide a unifying framework for understanding the coarsening~\cite{bray1994}. Recently, there has been much interest in coarsening dynamics in ultracold atom systems, which are well isolated from their environment and present a pristine system for studying nonequilibrium phase transitions~\cite{sadler2006,erne2018,prufer2018,weiler2008,lamporesi2013,navon2015,chomaz2015,anquez2016,kang2017}. Of particular interest are multicomponent condensates, which support a rich variety of order parameter manifolds and associated topological defects~\cite{kawaguchi2012,stamperkurn2013}.
Theoretical studies of coarsening in a variety of cold atom systems~\cite{damle1996,hofmann2014,kudo2013,kudo2015,williamson2016a,williamson2016b,williamson2017,bourges2017,symes2017,fujimoto2018,kulczykowski2017} have culminated in the recent experimental observation of universal dynamics in a quenched quasi-1D scalar Bose gas~\cite{erne2018} and in a quenched quasi-1D spin-1 condensate~\cite{prufer2018}. Simulations of a homogeneous quasi-2D spin-1 condensate quenched from the polar phase to the easy-plane ferromagnetic phase, see Fig.~\ref{phaseDiag}(a), identified coarsening dynamics driven by the mutual annihilation of transverse spin vortices with domain size growing as $L\sim t/\log t$~\cite{williamson2016a,williamson2016b}. A log correction to scaling is familiar from two-dimensional systems supporting vortices~\cite{bray1994}. In this work we study the easy-plane ferromagnetic ordering of a homogeneous quasi-2D spin-1 condensate after all vortices have annihilated. Remarkably, we find that the annihilation of vortices does not take the system to the equilibrium state. Instead, a nonequilibrium background of spin waves remains at the Berezinskii-Kosterlitz-Thouless (BKT) temperature, an order of magnitude hotter than the eventual equilibrium temperature. The coarsening then continues via a spin wave cascade that transports energy from low to high wavevectors. As the transverse spin waves do not interact, this turbulent cascade arises from their dynamic coupling to interacting axial spin degrees of freedom, and constitutes a novel cascade relevant to the emerging area of spin turbulence (e.g.~see \cite{Fujimoto2012a,Fujimoto2014a,fujimoto2016,Simula2015a,kang2017}). Order parameter correlations show dynamic scale invariance during the spin wave coarsening, with a length scale that grows as $t^{1/3}$.
This scaling is distinct from that during the vortex driven coarsening, showing that there are two renormalisation group fixed points affecting the phase ordering of this system. Strongly coupling the system to a reservoir of energy and particles destroys the second scaling regime, allowing the system to thermalise following the annihilation of vortices. Our results give new insights into the phase ordering dynamics of isolated systems and provide a profitable connection between phase ordering and wave turbulence. \section{Background} A spin-1 condensate can be described by three interacting classical fields $\psi_m$ for condensates in the three spin components with spin projections $m=-1,0,1$. The quasi-2D Hamiltonian density within a uniform trap~\cite{chomaz2015} is~\cite{ho1998,ohmi1998,stenger1998,barnett2011}, \begin{align}\label{Htot} \mathcal{H}=\sum_{m=-1}^1\psi_m^*\frac{\hat{p}^2}{2M}\psi_m+\frac{g_n}{2}n^2+\mathcal{H}_s \end{align} where $\hat{p}=-i\hbar\nabla$ is the momentum operator, $M$ is the atom mass, $n=\sum_m|\psi_m|^2$ is the areal density, $g_n$ is the quasi-2D density interaction strength and $\mathcal{H}_s$ encompasses the spin dependent terms, \begin{align} \mathcal{H}_s=\frac{g_s}{2}n^2|{\mathbf{F}}|^2+\sum_{m=-1}^1qm^2|\psi_m|^2. \end{align} The first term in $\mathcal{H}_s$ is the spin interaction energy, with spin density ${\mathbf F}=\sum_{m m^\prime}\psi_m^* {\mathbf f}_{m m^\prime}\psi_{m^\prime}/n$ for spin-1 matrices $(f_x,f_y,f_z)\equiv{\mathbf f}$, and quasi-2D spin interaction strength $g_s$. The sign of $g_s$ determines whether the interactions are ferromagnetic ($g_s<0$), which occurs in $^{87}$Rb~\cite{vankempen2002}, or antiferromagnetic ($g_s>0$), which occurs in $^{23}$Na~\cite{stenger1998}. Here we consider the ferromagnetic case.
The second term in $\mathcal{H}_s$ is a quadratic Zeeman splitting of the spin components, which can be induced using either DC magnetic fields or AC microwave stark shifts~\cite{gerbier2006,stamperkurn2013}. A linear Zeeman term $pnF_z$ can also be included, but conservation of $nF_z$ means this term does not affect the system dynamics and can be removed via the unitary transformation $e^{-ipmt/\hbar}\psi_m\rightarrow\psi_m$. The quasi-2D regime is obtained from a 3D system by tightly confining the system in one direction and integrating over the resulting spatial profile along that direction~\cite{sadler2006,barnett2011}. The relative strength of the two terms in $\mathcal{H}_s$ produces a rich phase diagram, from which a variety of quenches can be explored. The zero temperature mean field phase diagram for ferromagnetic interactions and with $q>0$ is shown in Figure~\ref{phaseDiag}(a). A quantum critical point at $q=q_0\equiv 2|g_s|n_0$ ($n_0$ is the mean condensate density) separates the unmagnetised polar phase (all atoms in the $m=0$ condensate) from the easy-plane ferromagnetic phase with spin order parameter ${\mathbf{F}}_\perp\equiv (F_x,F_y)$ (for quantization along $F_z$). The order parameter manifold of ${\mathbf{F}}_\perp$ is $\text{SO}(2)$ with transverse spin vortices as topological defects. These vortices consist of a positive or negative phase winding of the transverse spin angle $\theta$ ($\tan\theta=F_y/F_x$), and can only decay via the mutual annihilation of two vortices of opposite sign. Vortices with negative phase winding are also termed antivortices. The energy scale $q_0$ defines a time scale $t_s\equiv \hbar/q_0$ and the spin healing length $\xi_s\equiv \hbar/\sqrt{Mq_0}$. \section{Quench dynamics: anomalous phase ordering} We simulate the condensate dynamics following an instantaneous quench of the quadratic Zeeman energy from deep in the polar phase to $q=0.3q_0$ in the easy-plane ferromagnetic phase, see Fig.~\ref{phaseDiag}(a),(b). 
Symmetry breaking and the production of transverse spin vortices following such a quench have been observed in experiments with $^{87}$Rb~\cite{sadler2006}. Conservative dynamics of our system is simulated by numerically integrating the three coupled Gross-Pitaevskii equations (GPEs) obtained from Eq.~\eqref{Htot}~\cite{kawaguchi2012}, \begin{align}\label{spinGPEs} i\hbar\frac{\partial\psi_m}{\partial t}=\left(\frac{\hat{p}^2}{2M}+qm^2+g_nn\right)\psi_m+g_sn\sum_{m^\prime}{\mathbf F}\cdot{\mathbf f}_{mm^\prime}\psi_{m^\prime}. \end{align} Further numerical details are described in the Appendix. A homogeneous system can be realised in experiments using a flat bottomed trap~\cite{gaunt2013,chomaz2015}. \begin{figure*} \centering \includegraphics[width=17.8cm]{fig1s.jpg} \caption{\label{phaseDiag}(a) A spin-1 ferromagnetic condensate is unmagnetised (``polar'') for $q>q_0$ and magnetises in the transverse plane (``easy-plane'') for $0<q<q_0$. The point $q=q_0$ is a quantum critical point (QCP). We explore the ordering of transverse spin following a quench from $q\gg q_0$ to $q=0.3q_0$. (b) Coarsening of transverse spin domains [colormap shown in (a)]. This is associated with collisions between transverse spin vortices (red triangles) and antivortices (black circles), which can then mutually annihilate, resulting in a growing intervortex spacing $L_\text{v}$ (red bar). A second length scale $L_\text{sw}$ giving the thermal wavelength of spin waves grows much more slowly (black bar), such that the transverse spin remains out of equilibrium long after all transverse spin vortices have annihilated. The central time axis quantifies the first stage of vortex driven coarsening, the time of last vortex annihilation, and the subsequent stage of spin wave thermalisation. 
(c) Spatial correlations of transverse spin at different times, showing that long after all vortices have annihilated the correlations still decay more rapidly than the equilibrium prediction (upper black dashed line).} \end{figure*} We quantify order in the system by spatial correlations of ${\mathbf{F}}_\perp$, \begin{align}\label{Gdef} G(r,t)\equiv\left<{\mathbf{F}}_\perp\left({\mathbf{r}},t\right)\cdot{\mathbf{F}}_\perp\left({\mathbf{0}},t\right)\right>, \end{align} where angular brackets denote an ensemble average (see Appendix). Figure~\ref{phaseDiag}(c) shows the evolving correlation function Eq.~\eqref{Gdef}. For times $10^2t_s\lesssim t\lesssim 10^3t_s$, the growth of order is scale invariant and driven by the mutual annihilation of transverse spin vortices of opposite sign, see Fig.~\ref{phaseDiag}(b), with correlations decaying to zero at a length scale on the order of the intervortex spacing. This vortex driven coarsening has been described in previous work~\cite{williamson2016a,williamson2016b}. We find that all vortices have annihilated by a time $t\approx 2.8\times 10^3t_s$, after which correlations extend to the boundary. The correlations can then be compared to the equilibrium (thermalised) prediction~\cite{barnett2011,chaikin1995}, \begin{align}\label{algebraicDecay} G_\text{eq}(r)\sim r^{-\nu},\hspace{0.7cm}\nu=\frac{T_\text{eq}}{4T_\text{BKT}}. \end{align} Here $T_\text{BKT}=\pi K/2k_\text{B}$ is the BKT temperature associated with the unbinding of transverse spin vortices~\cite{kosterlitz1972}, with $K=\hbar^2n_0(1-q/q_0)/2M$ the spin wave stiffness and $k_\text{B}$ Boltzmann's constant. The equilibrium temperature $T_\text{eq}$ of our microcanonical system is calculated by equipartitioning the energy liberated by the quench amongst all collective modes of the system~\cite{williamson2016b}, which gives $\nu\approx 0.0113$. This equilibrium prediction is shown in Fig.~\ref{phaseDiag}(c). 
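Combining the definitions above, the exponent $\nu$ can be written explicitly in terms of the microscopic parameters; this is a simple substitution of $T_\text{BKT}=\pi K/2k_\text{B}$ and the spin wave stiffness $K$, shown here for convenience:

```latex
\begin{align}
\nu=\frac{T_\text{eq}}{4T_\text{BKT}}
=\frac{k_\text{B}T_\text{eq}}{2\pi K}
=\frac{Mk_\text{B}T_\text{eq}}{\pi\hbar^2 n_0(1-q/q_0)}.
\end{align}
```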
Surprisingly, even after very long simulation times $t=10^5 t_s$, correlations of transverse spin only agree with the equilibrium prediction for length scales $r\lesssim 5\xi_s$. For larger length scales the correlations decay more rapidly. This absence of equilibrium following the annihilation of topological defects is not predicted by the current theory of coarsening dynamics~\cite{bray1994}. \section{A spin wave energy cascade driving phase ordering} \begin{figure} \centering \includegraphics[width=.8\linewidth]{fig2s.pdf} \caption{\label{EiEc}(a) The evolving incompressible field spectral energy $\epsilon_\text{i}(k,t)$ displays a predicted $k^{-2}$ scaling and rapidly drops after all vortices have annihilated. (b) The evolving compressible field spectral energy $\epsilon_\text{c}(k,t)$ (solid lines) shows three regions: a persistent high temperature long wavelength region with a temperature approximately equal to $T_\text{BKT}$; a steep region with an approximate $k^{-4}$ scaling; and a short wavelength thermal region. The spectral energy of $F_z$ excitations $\epsilon_{F_z}(k,t)$ (dashed lines) closely follows $\epsilon_\text{c}(k,t)$ for times $t\gtrsim 400t_s$. The interacting $F_z$ fluctuations mediate the thermalisation of the noninteracting transverse spin waves. (c) The total spectral energy of transverse and axial spin waves, $E_\text{tot}(t)$, is decomposed into a low wavevector portion $E_\text{low}(t)$, which decreases in time, and a high wavevector portion $E_\text{high}(t)$, which increases in time, consistent with a cascade of energy from low to high wavevectors. 
The total $E_\text{tot}(t)$ also decreases in time indicating energy flow away from spin waves.} \end{figure} To identify the origin of the unexpectedly slow ordering displayed in Fig.~\ref{phaseDiag}(c) we look at the distribution of energy in the gradient of the transverse spin angle $\nabla\theta$ (this vector field is proportional to currents of $F_z$ magnetization~\cite{yukawa2012}). We firstly perform a Helmholtz decomposition $\nabla\theta={\mathbf{v}}_i+{\mathbf{v}}_c$ with $\nabla\cdot{\mathbf{v}}_i=0$ and $\nabla\times{\mathbf{v}}_c=0$. The first contribution ${\mathbf{v}}_i$, known as the incompressible field, arises from vortex excitations while the second contribution ${\mathbf{v}}_c$, known as the compressible field, arises from transverse spin wave excitations. The spectral energies of the incompressible and compressible fields are given by, \begin{align}\label{epspec} \epsilon_\mu(k,t)=\frac{K}{2}\left<\left|\tilde{{\mathbf{v}}}_\mu({\mathbf{k}},t)\right|^2\right>,\hspace{1cm}\mu=\text{i, c} \end{align} where $\tilde{{\mathbf{v}}}_\mu({\mathbf{k}})=l^{-1}\int d^2{\mathbf{r}}\,{\mathbf{v}}_\mu({\mathbf{r}})e^{-i{\mathbf{k}}\cdot{\mathbf{r}}}$ is the Fourier transform of ${\mathbf{v}}_\mu({\mathbf{r}})$ and angular brackets denote an ensemble average (see Appendix). The evolving spectral energies $\epsilon_\mu(k,t)$ are shown in Fig.~\ref{EiEc}(a),(b). The incompressible spectral energy, Fig.~\ref{EiEc}(a), shows a $k^{-2}$ decay when vortices are present, in agreement with the infrared ($\xi_s k<1$) scaling of a distribution of quantum vortices~\cite{bradley2012,billam2015}. Once all vortices have annihilated the spectral energy drops abruptly. In comparison, the compressible spectral energy, Fig.~\ref{EiEc}(b), shows nonequilibrium features across the duration of the simulation. The initial condition of our simulation results in a flat high energy distribution $\epsilon_\text{c}(k,0)\approx 200k_\text{B}T_\text{eq}$. 
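In Fourier space the Helmholtz decomposition used above reduces to an algebraic projection along $\hat{{\mathbf{k}}}={\mathbf{k}}/k$ (a standard identity, stated here for reference): the compressible (curl-free) part is the longitudinal component of $\widetilde{\nabla\theta}$ and the incompressible (divergence-free) part is the remainder,

```latex
\begin{align}
\tilde{{\mathbf{v}}}_\text{c}({\mathbf{k}})=\hat{{\mathbf{k}}}\,\big(\hat{{\mathbf{k}}}\cdot\widetilde{\nabla\theta}({\mathbf{k}})\big),\qquad
\tilde{{\mathbf{v}}}_\text{i}({\mathbf{k}})=\widetilde{\nabla\theta}({\mathbf{k}})-\tilde{{\mathbf{v}}}_\text{c}({\mathbf{k}}).
\end{align}
```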
For times $t\gtrsim 10^3t_s$, the compressible spectral energy shows three approximate regimes, \begin{align}\label{specport} \epsilon_\text{c}(k,t)=\left\{\begin{array}{ll}\epsilon_\text{lw}(k,t),&k<k_\text{lw}(t),\\ \left(\epsilon_\text{lw}(k_\text{lw},t)/{k_\text{lw}}^{-\alpha}\right)k^{-\alpha},&k_\text{lw}(t)\le k<k_\text{eq}(t),\\ k_\text{B}T_\text{eq}/2,&k_\text{eq}(t)\le k.\end{array}\right. \end{align} Here $\epsilon_\text{lw}(k,t)$ is the long wavelength portion of $\epsilon_\text{c}(k,t)$, with energy per mode $\epsilon_\text{lw}(k,t)\approx 10 k_\text{B}T_\text{eq}\approx k_\text{B}T_\text{BKT}/2$ in the wavevector window considered. This nonequilibrium temperature, being approximately at the BKT temperature, corresponds to the typical energy of a single transverse spin vortex~\cite{kosterlitz1972,kobayashi2019}, and may be a remnant of interactions between spin waves and vortices during the vortex driven coarsening. For $k>k_\text{lw}$ $\epsilon_\text{c}(k,t)$ decays steeply as $k^{-\alpha}$ with an exponent $\alpha\approx 4$ until the equilibrium distribution $\epsilon_\text{c}(k,t)=k_\text{B}T_\text{eq}/2$ is reached at a wavevector $k_\text{eq}(t)$. The structure of $\epsilon_\text{c}(k,t)$ is suggestive of a turbulent cascade, with a high temperature long wavelength energy source cascading to a short wavelength thermal field. We provide confirmation of this shortly. With no vortices present, the persistent nonequilibrium features of $\epsilon_\text{c}(k,t)$ must be responsible for the anomalously slow ordering that we observe in Fig.~\ref{phaseDiag}(c). The observed dynamics of $\epsilon_\text{c}(k,t)$ necessarily involves nonlinear interactions, whereas the transverse spin waves in our system do not interact at any order in the Hamiltonian (which is independent of the phase variable $\theta$). 
However, the field conjugate to the transverse spin phase $\theta$, i.e.\ the generator of rotations of $\theta$, is $nF_z$, leading to dynamic coupling between transverse spin waves and the axial spin waves of $F_z$~\cite{barnett2011,williamson2016b}. Axial spin waves do interact, both between themselves and with other excitations, and must therefore mediate the transverse spin wave interactions. Expanding the system Hamiltonian to quadratic order in $F_z$ and $n$~\cite{barnett2011} gives the spectral energy of axial spin fluctuations, \begin{align}\label{Fzspec} \epsilon_{F_z}(k,t)=n_0\frac{\hbar^2 k^2/2M+q}{2(1-q/q_0)}\left<\left|\tilde{F}_z({\mathbf{k}},t)\right|^2\right> \end{align} where $\tilde{F}_z({\mathbf{k}})\equiv l^{-1}\int d^2{\mathbf{r}}\,F_z({\mathbf{r}})e^{i{\mathbf{k}}\cdot{\mathbf{r}}}$ and angular brackets denote an ensemble average (see Appendix). For times $t\gtrsim 400t_s$ the spectral energy $\epsilon_{F_z}(k,t)$ closely follows $\epsilon_\text{c}(k,t)$, see Fig.~\ref{EiEc}(b), indicating that the dynamics of the two spectra are coupled and in equilibrium with each other. The nonlinear interactions of axial spin waves allow the redistribution of energy in $\epsilon_{F_z}(k,t)$ and then dynamic coupling to transverse spin waves actuates the same effect in $\epsilon_\text{c}(k,t)$. To confirm the presence of an energy cascade in Fig.~\ref{EiEc}(b) we decompose the total spin wave energy \begin{align} E_\text{tot}(t)=\sum_k 2\pi k(\epsilon_\text{c}(k,t)+\epsilon_{F_z}(k,t)) \end{align} into a low wavevector portion \begin{align} E_\text{low}(t)=\sum_{k<k_\text{mid}} 2\pi k(\epsilon_\text{c}(k,t)+\epsilon_{F_z}(k,t)) \end{align} and a high wavevector portion \begin{align} E_\text{high}(t)=\sum_{k\geq k_\text{mid}} 2\pi k(\epsilon_\text{c}(k,t)+\epsilon_{F_z}(k,t)), \end{align} where we choose $k_\text{mid}=0.5\xi_s^{-1}$. 
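The decomposition of the total spin wave energy into low and high wavevector portions amounts to splitting a shell sum at $k_\text{mid}$. A minimal numpy sketch, assuming the spectra are already azimuthally averaged onto a 1D wavevector grid of spacing $dk$ (the discrete measure here is our assumption, not the paper's exact mode sum):

```python
import numpy as np

def energy_portions(k, eps_c, eps_fz, k_mid, dk):
    """Shell-summed spin wave energy E = sum_k 2*pi*k*(eps_c + eps_fz),
    split into portions below and above k_mid."""
    w = 2 * np.pi * k * (eps_c + eps_fz) * dk
    return w[k < k_mid].sum(), w[k >= k_mid].sum()
```

A cascade then shows up as the low wavevector portion decreasing in time while the high wavevector portion increases, as in Fig.~\ref{EiEc}(c).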
In Fig.~\ref{EiEc}(c) we plot the energy changes $\Delta E(t)\equiv E(t)-E(10^5 t_s)$ of these three quantities for times after all vortices have annihilated. The energy $E_\text{low}$ decreases in time while $E_\text{high}$ increases, consistent with an energy cascade from $k<k_\text{mid}$ to $k\geq k_\text{mid}$. There is also a net decrease in the total energy $E_\text{tot}$, showing that energy is also lost from the spin wave excitations, either to other quadratic excitations~\cite{barnett2011,murata2007,uchino2010,symes2014} or to excitations beyond quadratic order. In principle, one could solve for the dynamics of these additional excitations to obtain effective spin wave dynamics. The spin waves would then show a cascade of energy from low to high wavevectors. Figure~\ref{EiEc}(b) shows that the spin wave energy cascade is associated with an approximate $k^{-4}$ scaling of $\epsilon_c(k,t)$ and $\epsilon_{F_z}(k,t)$. (Note the spectral energies in most studies of turbulence include a $k$ phase space factor so that the $k^{-4}$ scaling observed here would normally be described as $k^{-3}$ scaling.) There are currently no predictions for this cascade within weak wave turbulence theory~\cite{zakharov1992,nazarenko2011}. \section{A second regime of scale invariance} \begin{figure} \centering \includegraphics[width=.8\linewidth]{fig3s.pdf} \caption{\label{Ggpe}(a) The evolving spatial correlations of transverse spin after all vortices have annihilated (inset) collapse onto a single curve (main figure) according to Eq.~\eqref{corrcol2} when rescaled by the growing length scale $L_\text{sw}(t)$. The nonequilibrium (decaying) portion of $r^{\nu} G(r)$ shows a $r^{-0.21}$ algebraic decay that indicates a nonequilibrium temperature of $T\approx 0.9T_\text{BKT}$. The flat dashed line indicates equilibrium correlations. 
(b) The length scale $L_\text{sw}(t)$ grows as $t^{1/3}$ for $t>10^3 t_s$, much slower than the $t/\ln (t/t_s)$ growth of average intervortex spacing $L_\text{v}(t)$. The largest thermalised wavelength extracted from $\epsilon_\text{c}(k,t)$ is $2\pi k_\text{eq}(t)^{-1}$, which follows the growth of $L_\text{sw}(t)$.} \end{figure} The robust shape of $\epsilon_\text{c}(k,t)$ for times $t\gtrsim 10^3t_s$ (see Fig.~\ref{EiEc}(b)) suggests a second regime of scale invariance beyond the scale invariant coarsening dynamics driven by vortex annihilation. To explore this we consider the late time dynamics of correlations of transverse spin, Eq.~\eqref{Gdef}, which in a scale invariant regime will evolve as~\cite{lee1995,bray2000}, \begin{align}\label{corrcol2} G(r,t)=r^{-{\nu}}f\left(\frac{r}{L(t)}\right) \end{align} for some universal function $f$ and growing length scale $L(t)$. The $r^{-\nu}$ correction factor ensures $G(r,\infty)\sim r^{-\nu}$, consistent with equilibrium. Since $\nu\approx 0.0113\ll 1$, the correction is only significant when $G(r)$ is close to ordered. The evolving correlation function for times after all vortices have annihilated is shown in the inset to Fig.~\ref{Ggpe}(a). The correlations exhibit a short wavelength ordered portion that grows slowly in time and a nonequilibrium long wavelength portion. The correlation functions collapse onto a single curve after rescaling according to Eq.~\eqref{corrcol2}, see Fig.~\ref{Ggpe}(a). The rescaling factor $L_\text{sw}(t)$ is defined by $G(L_\text{sw},t)=0.8G(0,t)$, which follows the boundary between the ordered portion of the correlation function and the nonequilibrium portion. The length scale $L_\text{sw}(t)$ grows as a power law $L_\text{sw}\sim t^{1/3}$ for times $t\gtrsim 10^3 t_s$, i.e.\ times after all vortices have annihilated, see Fig.~\ref{Ggpe}(b).
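The definition $G(L_\text{sw},t)=0.8\,G(0,t)$ and the rescaling of Eq.~\eqref{corrcol2} are straightforward to implement. A minimal numpy sketch (the linear interpolation between grid points is our assumption):

```python
import numpy as np

def extract_lsw(r, G, frac=0.8):
    """Smallest r at which G(r) drops to frac*G(0), found by linear
    interpolation between the two bracketing grid points."""
    target = frac * G[0]
    i = np.argmax(G < target)          # first index below the threshold
    r0, r1, g0, g1 = r[i - 1], r[i], G[i - 1], G[i]
    return r0 + (target - g0) * (r1 - r0) / (g1 - g0)

def collapse(r, G, L, nu):
    """Rescale a correlation slice according to G(r,t) = r^{-nu} f(r/L(t))."""
    return r / L, (r ** nu) * G
```

Plotting the rescaled pairs for several times on one set of axes then tests for the data collapse shown in Fig.~\ref{Ggpe}(a).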
The length scale $2\pi k_\text{eq}(t)^{-1}$, where $k_\text{eq}(t)$ is introduced in Eq.~\eqref{specport} and defined more precisely in the Appendix, follows the growth of $L_\text{sw}(t)$. For comparison, the scale invariance during vortex driven coarsening is associated with the more rapidly growing average intervortex spacing $L_\text{v}(t)$ (defined in the Appendix)~\cite{williamson2016a}. The nonequilibrium portions of the correlation functions in Fig.~\ref{Ggpe}(a) clearly exhibit an additional algebraic decay $G(r)\sim r^{-0.21-\nu}\approx r^{-0.22}$. The value of the decay exponent corresponds to a temperature of $T\approx 0.9T_\text{BKT}$, see Eq.~\eqref{algebraicDecay}, and is consistent with the nonequilibrium temperature of $\epsilon_\text{lw}(k,t)$ from Eq.~\eqref{specport}. \section{Comparison with open system dynamics} \begin{figure*} \centering \includegraphics[width=17.8cm]{fig4s.jpg} \caption{\label{Gres}Open system results. (a) Spatial correlations of transverse spin at different times during the coarsening. The correlations agree very well with the equilibrium prediction Eq.~\eqref{algebraicDecay} (dashed line) after all vortices have annihilated. (b) Coarsening of transverse spin domains [colormap shown in Fig.~\ref{phaseDiag}(a)]. Spin vortices and antivortices are marked by red triangles and black circles respectively. Comparing with Fig.~\ref{phaseDiag}(b), it is clear that the transverse spin is more ordered in the space between vortices in the open system dynamics. (c) The growth of intervortex spacing $L_\text{v}$ follows the isolated system growth $L_\text{v,ISO}$. The length scale $L_\text{sw}$ follows the growth of intervortex spacing $L_\text{v}$, indicating the absence of a second coarsening process.
Inset: Algebraic decay exponent $\nu_\text{fit}$ for different $q$ quenches (dots) obtained from single trajectory simulations by fitting to the transverse spin correlation function for $2\xi_s\le r\le 100\xi_s$ and averaging the result across times $5\times 10^3t_s\le t\le 10^4t_s$. The error bars give the standard error of this mean. For $q>0.1q_0$ the fitted exponents agree well with the equilibrium prediction from Eq.~\eqref{algebraicDecay} (solid line).} \end{figure*} Our analysis so far has considered isolated, energy conserving dynamics. It is of interest to compare our results with open system quench dynamics, where the condensate is coupled to a reservoir of energy and particles. Using a stochastic Gross-Pitaevskii theory (see Appendix), we model a spin-1 condensate strongly coupled to a reservoir with fixed temperature and chemical potential, which we choose such that the equilibrated energy and particle number match those of the conservative dynamics. We then simulate the same quench as for the isolated system dynamics. Figure~\ref{Gres}(a) shows the evolution of transverse spin correlations, Eq.~\eqref{Gdef}, for the open system dynamics. The vortex driven coarsening is comparable to the isolated system case, with correlations showing scale invariant growth. For times after $t\approx 2\times 10^3 t_s$ correlations in the open system dynamics show excellent agreement with the equilibrium prediction Eq.~\eqref{algebraicDecay}. For comparison, all vortices have annihilated by a time $t\approx 1.8\times 10^3 t_s$. The results in Fig.~\ref{Gres}(a) are in stark contrast to the results in Fig.~\ref{phaseDiag}(c) for the isolated system. Indeed, differences in the two cases are apparent from the evolving spin domains, Fig.~\ref{phaseDiag}(b) and Fig.~\ref{Gres}(b), with the open system being more ordered in the spaces between vortices.
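The fitting procedure described in the figure inset, a power-law fit on each time slice followed by averaging across slices with a standard error, can be sketched as follows (a log-log least-squares estimator is our assumption):

```python
import numpy as np

def nu_fit(r, G_slices, rmin, rmax):
    """Fit G(r) ~ r^{-nu} on each time slice over rmin <= r <= rmax,
    then return the mean exponent across slices and the standard error
    of that mean."""
    sel = (r >= rmin) & (r <= rmax)
    nus = np.array([-np.polyfit(np.log(r[sel]), np.log(G[sel]), 1)[0]
                    for G in G_slices])
    return nus.mean(), nus.std(ddof=1) / np.sqrt(len(nus))
```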
For the large reservoir coupling strength we have used here, spin waves in the open system are able to rapidly thermalise directly with the reservoir rather than via an energy cascade. However, we emphasise that microscopically derived reservoir coupling strengths are much smaller than the value we use here~\cite{bradley2014}, and therefore the isolated system dynamics are a realistic approximation to experiments. The growing length scales $L_\text{v}$ and $L_\text{sw}$ for the open system dynamics, defined as for the conservative dynamics, are shown in Fig.~\ref{Gres}(c). The growth of $L_\text{v}$ in the open system is very similar to the isolated system growth (denoted by $L_\text{v,ISO}$ in this figure). In the open system, however, there is no second growing length scale, and $L_\text{sw}$ follows the growth of $L_\text{v}$. The decay of transverse spin correlations for open system dynamics following quenches to different values of $q$ shows good agreement with Eq.~\eqref{algebraicDecay} once all vortices have annihilated, see Fig.~\ref{Gres}(c) inset. (The temperature and chemical potential for these quenches have been adjusted as a function of $q$; see Appendix.) The small deviation at the smallest $q$ value may be caused by axial spin fluctuations, which become stronger as $q\rightarrow 0$ due to a diminishing energy gap. Indeed, we expect that the physics will be modified in the limit $q\rightarrow 0$, since the ground state manifold changes from $\text{SO}(2)\times \text{U}(1)$ to $\text{SO}(3)$, resulting in changes in collective mode excitations~\cite{uchino2010,symes2014} and vortex topology~\cite{williamson2017}. \section{Summary and Discussion} We have shown that vortex driven coarsening of an isolated easy-plane ferromagnetic spin-1 condensate does not take the system to equilibrium. Instead, a second regime of scale invariant coarsening, associated with a cascade of spin wave energy, proceeds more slowly as $t^{1/3}$.
Strongly coupling the system to a reservoir of energy and particles destroys this second coarsening process and equilibrium is reached after the vortex driven coarsening. The presence of two dynamic scaling regimes in the isolated dynamics shows that there are two renormalisation group fixed points affecting the phase ordering. The first, associated with vortices, has been ascribed to the model E dynamic universality class~\cite{williamson2016a}. The second slower scaling, $L_\text{sw}\sim t^{1/3}$, matches that of the model B dynamic universality class~\cite{hohenberg1977,bray1994}; however, the order parameter does not: the model B universality class corresponds to a conserved order parameter whereas the transverse spin is not conserved. A possible reconciliation is that the nonlinear dynamics of the conserved $nF_z$ field belongs to the model B dynamic universality class, even though this is not the order parameter of the system, and that the dynamic coupling between $nF_z$ and $\theta$ leads to model B scaling emerging in the correlations of transverse spin. Alternatively, our results might point to a currently unidentified dynamic universality class. We have shown that this second regime of scale invariance is associated with cascading spin wave energy, thus identifying a connection between the fields of wave turbulence and phase ordering dynamics. Furthermore, we have shown that strongly coupling the system to a reservoir preserves the first regime of scale invariance but destroys the second. Considering renormalisation group flows to the corresponding fixed points, the reservoir coupling is irrelevant during the first scale invariant regime, whereas it is relevant (or marginal) in the second regime. The nonequilibrium background of spin waves that remain after the vortices have annihilated is at a temperature very close to $T_\text{BKT}$.
These spin waves may have thermalised with the vortex field during vortex driven coarsening, either via scattering off of vortices or via spin wave production after vortex annihilation (see~\cite{williamson2016c}). The absence of interactions once all vortices have annihilated would then leave behind high temperature spin waves, reminiscent of photons decoupling from matter in the early universe to produce the cosmic microwave background. This intriguing process may be ubiquitous in phase ordering systems involving topological defects interacting with collective mode excitations. \section*{Acknowledgements} We thank Dan Stamper-Kurn, Kazuya Fujimoto, Matthew Reeves and Ashton Bradley for valuable discussions. We acknowledge support from the Marsden Fund of the Royal Society of New Zealand. LW acknowledges support from the University of Otago Postgraduate Publishing Bursary. \section*{Appendix: numerical details} \subsection*{GPE simulations} The GPE simulations are conducted on a 2D square grid with dimensions $l\times l=400\xi_s\times 400\xi_s$ covered by an $N\times N=512\times 512$ grid of equally spaced points. In experiments in $^{87}$Rb, $g_n/|g_s|\sim 100$~\cite{vankempen2002}. We use a more modest ratio $g_n/|g_s|=10$, which is sufficient to suppress density fluctuations at the energy scale we are interested in. The mean condensate density is taken to be $n_0=10^4\xi_s^{-2}$. We evolve our system using a recently developed fourth order symplectic integrator~\cite{symes2016} to ensure that energy, atom number and $nF_z$ magnetization are conserved effectively. We find that total energy and atom number are conserved to within a relative error of $10^{-9}$ across the full simulation time. The total axial magnetization $\int d^2{\mathbf{r}}\,n({\mathbf{r}})F_z({\mathbf{r}})$ remains below $10^{-6}n_0l^2$. We use a time step of $0.02t_s$ for each integration step.
The kinetic energy time step is evaluated spectrally using fast Fourier transforms, and we employ periodic boundary conditions. Our initial state is the polar state $(\psi_1,\psi_0,\psi_{-1})=\sqrt{n_0}(0,1,0)+\delta$, where $\delta$ is noise added to Bogoliubov modes on top of the ground state at $q=\infty$, as in~\cite{williamson2016b}, which seeds the symmetry breaking evolution. Noise added this way corresponds to adding on average half a particle per mode according to the truncated Wigner prescription~\cite{blakie2008}. We then evolve our system using Eq.~\eqref{spinGPEs} at a quadratic Zeeman energy $q=0.3q_0$, so that the quench is effectively instantaneous at $t=0$. \subsection*{Ensemble averaging} Correlations Eq.~\eqref{Gdef} and spectral energies Eq.~\eqref{epspec} and Eq.~\eqref{Fzspec} are computed using an ensemble average of the form, \begin{align} \bar{g}({\mathbf{u}})=\left<g({\mathbf{u}})\right> \end{align} where ${\mathbf{u}}={\mathbf{r}},\,{\mathbf{k}}$ and $g({\mathbf{u}})$ denotes the result of a single simulation trajectory. In the GPE simulations, the ensemble average is over 30 simulation trajectories conducted with independent initial noise. In the open system simulations, the ensemble average is over 10 simulation trajectories. We also average $\bar{g}({\mathbf{u}})$ over azimuthal angles of the coordinate ${\mathbf{u}}$, such that $\bar{g}({\mathbf{u}})\rightarrow \bar{g}(u)$ for $u\equiv |{\mathbf{u}}|$. Correlation functions are additionally averaged over space, i.e.\ we replace ${\mathbf{F}}_\perp({\mathbf{0}})\cdot{\mathbf{F}}_\perp({\mathbf{r}})$ by ${\mathbf{F}}_\perp({\mathbf{r}}^\prime)\cdot{\mathbf{F}}_\perp({\mathbf{r}}^\prime+{\mathbf{r}})$ in Eq.~\eqref{Gdef} and average over the spatial coordinate ${\mathbf{r}}^\prime$. \subsection*{Vortex detection and averaging} We detect vortices by evaluating the phase winding of the transverse spin angle around plaquettes of our simulation grid. 
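The plaquette phase winding can be evaluated with branch-cut-safe angle differences; a minimal numpy sketch for a periodic grid (the sign convention and array layout are our assumptions, not the paper's code):

```python
import numpy as np

def detect_vortices(theta):
    """Integer winding number of the angle field theta around each
    plaquette of a periodic grid; +1/-1 entries mark (anti)vortices."""
    def wrap(d):                      # map angle differences into [-pi, pi)
        return (d + np.pi) % (2 * np.pi) - np.pi
    dx = wrap(np.roll(theta, -1, axis=0) - theta)   # step in +x
    dy = wrap(np.roll(theta, -1, axis=1) - theta)   # step in +y
    # counterclockwise circulation around the plaquette with corner (i, j)
    circ = dx + np.roll(dy, -1, axis=0) - np.roll(dx, -1, axis=1) - dy
    return np.rint(circ / (2 * np.pi)).astype(int)
```

Summing the absolute winding numbers over the grid gives the vortex count $N_v(t)$ used below.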
The average intervortex spacings in Fig.~\ref{Ggpe}(b) and Fig.~\ref{Gres}(c) are defined as $L_\text{v}(t)\equiv \left<\sqrt{l^2/N_v(t)}\right>$ where $N_v(t)$ is the vortex number for a single simulation trajectory at time $t$ and angular brackets denote an ensemble average over the 30 simulation trajectories for the GPE results and the 10 simulation trajectories for the open system results. The results for $L_v$ in Fig.~\ref{phaseDiag}(b) are for the single trajectory displayed. \subsection*{Extraction of ${\mathbf{k}}_\text{\textbf{eq}}$} The wavevector $k_\text{eq}$ introduced in Eq.~\eqref{specport} is obtained as follows. We firstly skew the spectrum $\epsilon_\text{c}(k)$ by multiplying by $k$. We then define $k_\text{eq}$ as the position of the local minimum that appears in $k\epsilon_\text{c}(k)$ at the start of the equilibrium portion of the spectral energy. To improve resolution, we firstly interpolate the numerical values for $k\epsilon_\text{c}$ around its minimum and then find the minimum point of the more highly resolved interpolated data. \subsection*{Open system simulations} To model open system evolution we couple our condensate to a reservoir of energy and particles with fixed temperature $T$ and chemical potential $\mu$. The dynamics is simulated using the simple growth stochastic Gross-Pitaevskii equations (SGPEs)~\cite{gardiner2002,gardiner2003,blakie2008,bradley2014}, \begin{align}\label{sGPE} i\hbar d\psi_m=\left(1-i\gamma\right)\left(\mathcal{L}_m[\psi_m]-\mu\psi_m\right) dt+dW({\mathbf{r}},t). \end{align} Here \begin{align} \mathcal{L}_m[\psi_m]=\left(\frac{\hat{p}^2}{2M}+qm^2+g_nn\right)\psi_m+g_sn\sum_{m^\prime}{\mathbf F}\cdot{\mathbf f}_{mm^\prime}\psi_{m^\prime} \end{align} is the conservative evolution operator from Eq.~\eqref{spinGPEs}, $\mu$ is the chemical potential and $\gamma$ is a dimensionless damping. 
The precise value chosen for $\gamma$ will not affect equilibrium properties, but will affect the rate at which equilibrium is approached. The term $dW({\mathbf{r}},t)$ is Gaussian distributed complex noise with delta correlations, \begin{align}\label{noisecha} \left<dW^*({\mathbf{r}},t)dW({\mathbf{r}}^\prime,t)\right>=\frac{2\gamma k_\text{B} T}{\hbar}\delta({\mathbf{r}}-{\mathbf{r}}^\prime)dt. \end{align} The SGPEs Eq.~\eqref{sGPE} take the form of Langevin equations. The temperature (as a function of $q$) is chosen to be that obtained by equipartitioning the energy liberated by the quench amongst the $3N^2$ numerical modes, as was done in the calculation of $T_\text{eq}$ for the microcanonical system. The energy liberated is the energy of the polar state evaluated at the final quadratic Zeeman energy $q$~\cite{williamson2016b}. The temperature is then, \begin{align} k_\text{B}T=\frac{q_0}{12N^2}\left(1-\frac{q}{q_0}\right)^2n_0l^2. \end{align} The chemical potential (as a function of $q$) is chosen to be that of a zero temperature spin-1 condensate in the easy-plane phase, which is obtained by solving $\mathcal{L}_m\psi_m=\mu\psi_m$. This gives~\cite{kawaguchi2012}, \begin{align} \mu=g_nn_0+g_sn_0+\frac{q}{2}. \end{align} These choices of $T$ and $\mu$ give steady state energy and atom number within 1\% of the conservative GPE results. We use $\gamma=10^{-2}$, which gives $|dW|\lesssim q_0|\psi_m|dt$, and therefore reservoir scattering events occur within the time scale of spin interactions. The microscopically derived value for $\gamma$ will be considerably smaller than this~\cite{bradley2014}, resulting in the GPE dynamics overwhelming the reservoir interactions. We evolve Eq.~\eqref{sGPE} using an interaction picture fourth order Runge-Kutta integrator with periodic boundary conditions and kinetic energy evaluated to spectral accuracy.
The noise is added in a single step following the Runge-Kutta integration of the $(1-i\gamma)(\mathcal{L}_m[\psi_m]-\mu\psi_m)$ term~\cite{milstein2004}. Numerical parameters and initial condition sampling are the same as for the GPE simulations.
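On a grid, the delta-correlated noise of Eq.~\eqref{noisecha} becomes an independent complex Gaussian at each point with variance $2\gamma k_\text{B}T\,dt/(\hbar\,\Delta A)$, where $\Delta A$ is the grid cell area replacing $\delta({\mathbf{r}}-{\mathbf{r}}^\prime)$. A minimal numpy sketch of the sampling step (units and naming are our assumptions):

```python
import numpy as np

def sgpe_noise(shape, gamma, kBT_over_hbar, dt, dA, rng):
    """Sample dW on a grid: complex Gaussian noise with
    <dW* dW> = (2*gamma*kBT/hbar) * dt / dA per grid point,
    split equally between real and imaginary parts."""
    sigma2 = 2.0 * gamma * kBT_over_hbar * dt / dA
    return np.sqrt(sigma2 / 2.0) * (rng.standard_normal(shape)
                                    + 1j * rng.standard_normal(shape))
```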
\section{Introduction} Precision cosmological observations have firmly established the evolution history of our universe, including the existence of a current period of exponential expansion. However, the theoretical seeds for this late-time acceleration remain a source of considerable debate. Occam's razor dictates that the simplest explanation lies in assuming a positive cosmological constant leading to the standard $\Lambda$CDM paradigm. Already at this level, there are perplexing questions regarding the `naturalness' of the small value of $\Lambda$ as observed in our universe, and its compatibility with our current understanding of vacuum energy density. However, even allowing for this observed value of the cosmological constant as a free parameter in standard cosmology, it is interesting to ask if there exist other theoretical paths towards constraining the $\Lambda$CDM model. On the other hand, the ultraviolet (UV) completion of general relativity (GR), or the problem of quantum gravity, remains one of the most interesting challenges in theoretical physics today. In the most exciting case, peculiarities of the UV theory, whatever it might be, can perhaps leave their signatures on GR -- the empirically verified low energy effective field theory (EFT) of gravity.
Recently, it has been proposed that there is a swampland of inconsistent EFTs arising from string theory, which are incompatible with quantum gravity, unless they satisfy some constraints \cite{Ooguri:2006in, Obied:2018sgi, Agrawal:2018own, Ooguri:2018wrx} \begin{itemize} \item[(S1):] The range traversed by any scalar field, $\pi$, has the upper bound, $|\Delta \pi|/\Mpl < \Delta \sim \mathcal{O}(1)$, AND \item[(S2):] The gradient of the scalar field potential, $V(\pi)$, is bounded from below, $\Mpl\, |V'|/V > c \sim \mathcal{O}(1)$,\, OR \\ The potential has unstable directions with large curvature, \textit{i.e.} $-\Mpl^2\, \text{min}(V'')/V > \tilde{c} \sim \mathcal{O}(1)$, \end{itemize} where $\Delta$, $c$ and $\tilde{c}$ are positive and $\Mpl$ is the reduced Planck mass. Specifically, the second condition above implies that there are no metastable de-Sitter (dS) solutions in EFTs emerging from string theory. This conjecture, referred to from now on as the `dS constraint', has led to a new guideline for constraining cosmological models \cite{Agrawal:2018own,Blumenhagen:2017cxt,Achucarro:2018vey,Garg:2018reu,Dias:2018ngv,Kehagias:2018uem,Lehners:2018vgi,Denef:2018etk,Colgain:2018wgk,Heisenberg:2018yae,Kinney:2018nny,Brahma:2018hrd,Das:2018hqy,Wang:2018duq,Danielsson:2018qpa,Han:2018yrk,Visinelli:2018utg,Matsui:2018xwa,Heisenberg:2018rdu,Hamaguchi:2018vtv,Das:2018rpg,Lin:2018kjm,Kawasaki:2018daf,Dimopoulos:2018upl,Motaharfar:2018zyb,Ashoorioon:2018sqb,Wang:2018kly,Fukuda:2018haz,Garg:2018zdg,Park:2018fuj,Lin:2018rnx,Schimmrigk:2018gch,Agrawal:2018rcg,Yi:2018dhl,Heckman:2018mxl,Elizalde:2018dvw,Cheong:2018udx,Holman:2018inr,Acharya:2018deu,Herdeiro:2018hfp,Kinney:2018kew,Montero:2018fns,Lin:2018edm,Cai:2018ebs,Heckman:2019dsj,Kamali:2019hgv,Haque:2019prw,Andriot:2018wzk,Raveri:2018ddi,Heisenberg:2019qxz} and shall be the main focus of our work.
The dS constraint clearly rules out standard $\Lambda$CDM cosmology, our simplest proposal for late-time acceleration, due to the unavailability of any local dS extrema in the effective potential. As an alternative, it has been proposed that the current era of accelerated expansion be explained by models of quintessence, i.e. by assuming a scalar field beyond the Standard Model. (In a similar manner, the same swampland conjectures have also put appreciable pressure on the simplest models of vanilla single-field inflation -- an era of accelerated expansion in the early-universe -- thereby necessitating the introduction of more complicated models, \textit{e.g.} \cite{Brahma:2018hrd,Kinney:2018nny,Achucarro:2018vey,Garg:2018reu,Kehagias:2018uem,Blumenhagen:2017cxt,Das:2018rpg,Motaharfar:2018zyb,Lin:2018kjm,Dias:2018ngv}.) On the bright side, such quintessence models can easily be embedded in string theory through slowly rolling moduli fields (see, for instance \cite{Cicoli:2012tz,Gupta:2011yj, Panda:2010uq, Hellerman:2001yi, Choi:1999xn}), thereafter suitably imposing the swampland conjectures on them. However, it was also noted that current cosmological observations put a strict bound on $c$, which for the least constrained exponential potential is $c<0.6$ at the $1\sigma$ level \cite{Heisenberg:2018yae,Heisenberg:2018rdu,Fukuda:2018haz,Wang:2018duq}. Therefore, one is forced to consider modifications to GR, which typically contain an additional scalar field, as a necessary step for realizing models of evolving dark-energy in order to satisfy the dS constraint coming out of the swampland conjectures. Our main motivation for this work is to show that this bound can be significantly relaxed when considering models which go beyond quintessence; in particular, by considering higher derivative terms in the action as is common for Galileon theories \cite{Nicolis:2008in,Chow:2009fm,Silva:2009km,DeFelice:2010pv,DeFelice:2010nf,Deffayet:2009wt}.
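For an exponential potential $V(\pi)=V_0\,e^{-\lambda\pi/\Mpl}$, the dS-constraint combination $\Mpl |V'|/V$ is exactly the constant slope $\lambda$, which is why the observational bound on $c$ translates directly into a bound on $\lambda$. A quick symbolic check (sympy, purely illustrative):

```python
import sympy as sp

pi_, lam, Mpl = sp.symbols("pi lambda M_pl", positive=True)
V = sp.exp(-lam * pi_ / Mpl)          # exponential quintessence potential
# dS-constraint combination Mpl*|V'|/V; here V' < 0, so |V'| = -V'
ratio = sp.simplify(-Mpl * sp.diff(V, pi_) / V)
```

The field dependence cancels, leaving the constant $\lambda$, so the swampland requirement $\Mpl|V'|/V > c$ and the data-driven bound both constrain the single parameter $\lambda$.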
The most general class of such theories with a scalar field, in the presence of derivative interactions, is represented by the Horndeski Lagrangian \cite{Horndeski:1974wa,Deffayet:2009mn,Deffayet:2013lga,Kase:2018aps}, the viability of which after imposing the swampland constraints has also been studied recently \cite{Heisenberg:2019qxz}. Although we do not study the most general class of full Horndeski theories, we point out, through our simpler example, that there are parts of the parameter space which naturally fit the late-time data much better than quintessence models, even after taking the swampland conjectures into account, once derivative interactions are turned on. Therefore, to reemphasize our main result, there are parts of the parameter space of Horndeski Lagrangian which are not in the swampland and can be efficiently used for model-building in explaining our cosmic history. We devote the rest of this paper to flesh out the details of our simple model which shall act as a proof of principle in support of this claim. \section{The model: cubic Galileon terms} In our model, the late-time acceleration of the universe shall be explained through the dynamics of an effective scalar field -- the Galileon $\pi$. We consider the following cubic Galileon action with a potential term $V(\pi)$ \cite{Ali:2012cv,Hossain:2012qm,Hossain:2017ica,Dinda:2017lpz}, a sub-class of the most general scalar-tensor (Horndeski) Lagrangian \begin{equation} \S=\int \d^4x\sqrt{-\g}\Bigl [\frac{\Mpl^2}{2} R-\frac{1}{2}(\nabla \pi)^2\Bigl(1+\frac{\al}{M^3}\Box \pi\Bigr) - V(\pi) \Bigr]+ \S_\m+\S_\r \, , \label{eq:action} \end{equation} where $\Mpl=1/\sqrt{8\pi G}$ is the reduced Planck mass, $M$ is an energy scale and $\al$ is a dimensionless constant. $\S_\m$ and $\S_\r$ are the matter and radiation actions. In the above, one can rescale the parameter $\al$ to replace $M$ by $\Mpl$.
It is also straightforward to see that setting $\alpha=0$ gets us back to the standard quintessence action. When the potential $V(\pi)$ is linear, the scalar field action preserves the Galilean shift symmetry $\pi\to\pi+b_\mu x^\mu+c$, where $b_\mu$ and $c$ are constants. However, an exponential potential, which is what we shall consider in this work, results in the breaking of this shift symmetry. Note that our primary aim is to show how the swampland conjectures can easily be satisfied when including non-canonical higher-derivative terms in the action, going beyond quintessence models. As it is, quantum corrections can lead to the appearance of higher derivative terms when the non-renormalization theorem is violated \cite{Pirtskhalava:2015nla}, i.e.\ when the Galileon symmetry is not \textit{weakly broken}. Our choice of the exponential potential is purely phenomenological in spirit and takes this specific form to facilitate comparison with quintessence \cite{Heisenberg:2018yae} and some classes of Horndeski theories \cite{Heisenberg:2019qxz}, when combined with the swampland conjectures\footnote{However, independently from these considerations, one needs to add a potential term so as to get ghost-free late-time acceleration in models of cubic Galileon \cite{Ali:2010gr,Gannouji:2010au}.}. Apart from the potential term, one also needs to make a choice for the higher derivative terms to keep in the action. While the quartic/quintic terms in the covariant Galileon formalism have been made nonviable since the advent of multi-messenger gravitational-wave astronomy \cite{Ezquiaga:2017ekz}, the cubic term is highly disfavored by Integrated Sachs-Wolfe effect (ISW) measurements \cite{Renk:2017rzu}. However, generalizations of the cubic Galileon models have been shown to be compatible with the ISW data \cite{Kimura:2011td}. Keeping these in mind, we shall only consider the cubic Galileon term, $(\nabla\pi)^2\, \Box\pi$, in our action.
We remind the reader that not only has this term not been ruled out by gravitational wave detection, nor has it been shown to be in conflict with the ISW data in the presence of a potential term \cite{Kase:2018aps}, as is the case here. Finally, and most importantly, we emphasize that this specific model has been chosen only to provide a concrete example to illustrate that parts of the parameter space of general Horndeski theories remain viable even after the imposition of the swampland conjectures. Varying the action (\ref{eq:action}) with respect to (w.r.t.) the metric $\g_{\mu\nu}$ gives the Einstein equation \begin{equation} \Mpl^2 G_{\mu\nu}= T_{(\m)\mu\nu}+T_{(\r)\mu\nu}+T_{(\pi)\mu\nu} \, , \label{eq:ee} \end{equation} where subscripts $m$, $r$ and $\pi$ represent matter, radiation and the scalar field respectively and \begin{align} T_{(\pi)\mu\nu}=& \pi_{;\mu}\pi_{;\nu}-\frac{1}{2}\g_{\mu\nu}(\nabla\pi)^2 -\g_{\mu\nu}V(\pi)+\frac{\alpha}{M^3} \Bigl[\pi_{;\mu}\pi_{;\nu}\Box\pi+\g_{\mu\nu}\pi_{;\lambda}\pi^{;\lambda\rho}\pi_{;\rho} \nn \\ & - \pi^{;\rho}\(\pi_{;\mu}\pi_{;\nu \rho}+\pi_{;\nu}\pi_{;\mu \rho}\)\Bigr] \, , \label{eq:emt_phi} \end{align} and varying w.r.t. the scalar field $\pi$ gives the equation of motion of the scalar field \begin{equation} \Box \pi+\frac{\alpha}{M^3}\Bigl[(\Box\pi)^2-\pi_{;\mu\nu}\pi^{;\mu\nu}-R^{\mu\nu}\pi_{;\mu}\pi_{;\nu}\Bigr]-V'(\pi)=0 \, , \label{eq:eom_phi} \end{equation} where $'$ denotes the derivative w.r.t. $\pi$.
\section{Cosmological dynamics constrained by the Swampland} In a spatially flat Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) background, the Friedmann equations take the form \begin{eqnarray} 3M_{\rm{pl}}^2H^2 &=&\rho_\m+\rho_\r + \rho_\pi\,,\label{Friedmann1}\\ M_{\rm{pl}}^2(2\dot H + 3H^2)&=&-\frac{\rho_\r}{3} - P_\pi\,,\label{Friedmann2} \end{eqnarray} where \begin{eqnarray} \rho_\pi &=& \frac{\dot{\pi}^2}{2}\Bigl(1-\frac{6\alpha}{M^3} H\dot{\pi}\Bigr) + V{(\pi)} \, , \label{eq:rhopi}\\ P_\pi &=& \frac{\dot{\pi}^2}{2} \Bigl(1+\frac{2\alpha}{M^3}\ddot{\pi}\Bigr) - V(\pi) \, . \label{eq:ppi} \end{eqnarray} and the equation of motion for the scalar field reads \begin{eqnarray} \ddot{\pi}+ 3H\dot{\pi}- \frac{3\alpha}{M^3} \dot{\pi}\Bigl(3H^2\dot{\pi} + \dot{H}\dot{\pi} + 2H\ddot{\pi}\Bigr)+ V'(\pi)=0 \, , \end{eqnarray} where $\rho_\m$ and $\rho_\r$ are the energy densities of non-relativistic matter ($P_\m = 0$) and radiation ($P_\r = \rho_\r/3$) respectively. To examine the background cosmological dynamics, let us define the following dimensionless variables \cite{Ali:2012cv,Hossain:2012qm} \begin{eqnarray} x &=& \frac{\dot{\pi}}{\sqrt{6}H M_{\rm{pl}}}\,, \label{eq:x1}\\ y &=& \frac{\sqrt{V}}{\sqrt{3} H M_{\rm{pl}}}\,, \label{eq:y1}\\ \epsilon &=& -6\frac{\alpha}{M^3}H\dot \pi\,, \label{eq:ep1}\\ \Omega_\r &=& \frac{\rho_\r}{3 \Mpl^2 H^2}\,, \label{eq:omr1}\\ \lambda &=& -M_{\rm{pl}}\frac{V'}{V}, \label{eq:lam1} \end{eqnarray} and the equation-of-state (EoS) parameters, \begin{eqnarray} w_{\rm eff} &=& -1-\frac{2}{3}\frac{\dot H}{H^2} = \frac{3 x^2 \(4+8\ep+\ep^2\)-2\sqrt{6}xy^2\ep\lam-4(1+\ep)\(3y^2-\Om_\r\)}{3\(4+4\ep+x^2\ep^2\)} \, , \\ w_\pi &=& \frac{P_\pi}{\rho_\pi} = -\frac{12 y^2 (1+\epsilon )+2 \sqrt{6} x y^2 \ep\lam-x^2 \(12+24\ep+\ep^2(3-\Om_\r)\)}{3\left(4+4 \epsilon +x^2 \epsilon ^2\right) \left(y^2+x^2 (1+\epsilon )\right)} \, , \end{eqnarray} where $w_{\rm eff}$ and $w_\pi$ are the effective and scalar field EoS. 
The evolution equations of the dimensionless variables form the following autonomous system \begin{align} \frac{{\rm d}x}{{\rm d}N}&=x\Bigl(\frac{\ddot{\pi}}{H\dot{\pi}}-\frac{\dot H}{H^2}\Bigr) \label{eq:x}\\ \frac{{\rm d}y}{{\rm d}N}&=-y \Bigl(\sqrt{\frac{3}{2}}\lambda x+\frac{\dot H}{H^2}\Bigr) \label{eq:y}\\ \frac{{\rm d}\epsilon}{{\rm d}N}&=\epsilon \Bigl(\frac{\ddot{\pi}}{H\dot{\pi}}+\frac{\dot H}{H^2}\Bigr) \label{eq:ep}\\ \frac{{\rm d}\Omega_r}{{\rm d}N}&=-2\Omega_r\Bigl(2+\frac{\dot H}{H^2}\Bigr) \label{eq:omr}\\ \frac{{\rm d}\lambda}{{\rm d}N}&=\sqrt{6}x\lambda^2(1-\Gamma) \end{align} where $N=\ln a$ is the number of $e$-folds. Here $\Gamma=VV_{,\pi\pi}/V_{,\pi}^2$ whereas, using Eqs.~\eqref{Friedmann1}-\eqref{Friedmann2}, \begin{align} \frac{\dot H}{H^2}&=\frac{2(1+\epsilon)(-3+3y^2-\Omega_r)-3x^2(2+4\epsilon+\epsilon^2)+\sqrt{6}x\epsilon y^2\lambda}{4+4\epsilon+x^2\epsilon^2} \, ,\\ \frac{\ddot{\pi}}{H\dot{\pi}}&=\frac{3x^3\epsilon-x\Bigl(12+\epsilon (3+3y^2-\Omega_r)\Bigr)+2\sqrt{6}y^2\lambda}{x(4+4\epsilon+x^2\epsilon^2)} \, . \end{align} Our choice of an exponential potential with a constant slope $\lambda$, $V(\pi) \propto e^{-\lambda \pi/\Mpl}$, implies $\Gamma=1$. In this case, the autonomous system reduces to Eqs.~(\ref{eq:x})-(\ref{eq:omr}). Defining the matter density $\Omega_\m := \rho_\m/(3\Mpl^2 H^2)$, we can rewrite the constraint equation as $\Omega_\m+\Omega_\r+\Omega_\pi=1$, where \begin{equation} \Omega_\pi=x^2(1+\epsilon)+y^2 \, . \label{eq:ompi} \end{equation} \subsection{Fixed point analysis} It is straightforward to see from Eqs.~(\ref{eq:rhopi}), (\ref{eq:ppi}) and (\ref{eq:ep1}) that $\epsilon$ parameterizes corrections to the dynamics due to the cubic Galileon term, and setting $\epsilon=0$ gives the autonomous system for the standard quintessence scenario. Let us first briefly discuss the fixed point analysis for the cubic Galileon model with an exponential potential.
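The autonomous system above can be integrated directly as a numerical cross-check. The following sketch (our own Python illustration, not code from the paper; the initial conditions are arbitrary demonstration values, not tuned to observations) evolves $(x,y,\epsilon,\Omega_r)$ for $\lambda=1$ with a fixed-step RK4 integrator and verifies that the trajectory settles onto the scalar-field-dominated solution with $x\to\lambda/\sqrt{6}$, $y\to\sqrt{1-\lambda^2/6}$, $\epsilon\to 0$ and $w_{\rm eff}\to -1+\lambda^2/3$.

```python
import numpy as np

lam = 1.0  # constant slope of the exponential potential (Gamma = 1)

def rhs(s):
    # s = (x, y, epsilon, Omega_r); derivatives are w.r.t. N = ln a
    x, y, ep, Omr = s
    D = 4 + 4*ep + x**2*ep**2
    dH = (2*(1 + ep)*(-3 + 3*y**2 - Omr)
          - 3*x**2*(2 + 4*ep + ep**2)
          + np.sqrt(6)*x*ep*y**2*lam) / D                   # \dot H / H^2
    ddpi = (3*x**3*ep - x*(12 + ep*(3 + 3*y**2 - Omr))
            + 2*np.sqrt(6)*y**2*lam) / (x*D)                # \ddot\pi / (H \dot\pi)
    return np.array([x*(ddpi - dH),
                     -y*(np.sqrt(1.5)*lam*x + dH),
                     ep*(ddpi + dH),
                     -2*Omr*(2 + dH)])

# fixed-step RK4; illustrative initial conditions (radiation-dominated start)
s = np.array([0.01, 0.01, 1.0, 0.99])
h = 1e-3
for _ in range(30000):                  # integrate up to N = 30
    k1 = rhs(s)
    k2 = rhs(s + h*k1/2)
    k3 = rhs(s + h*k2/2)
    k4 = rhs(s + h*k3)
    s = s + (h/6)*(k1 + 2*k2 + 2*k3 + k4)

x, y, ep, Omr = s
w_eff = (3*x**2*(4 + 8*ep + ep**2) - 2*np.sqrt(6)*x*y**2*ep*lam
         - 4*(1 + ep)*(3*y**2 - Omr)) / (3*(4 + 4*ep + x**2*ep**2))
# late times: x -> lam/sqrt(6), y -> sqrt(1 - lam^2/6), ep -> 0, w_eff -> -1 + lam^2/3
```

Note that $\epsilon$ and $\Omega_r$ decay exponentially in $N$ along the attractor, so the late-time state is insensitive to the demonstration initial values chosen here.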
We will also review the late-time cosmological dynamics of the scalar field for an exponential potential. As shall be emphasized later on, there exist several differences in the structure of critical points on our phase space as compared to the Horndeski model studied in \cite{Heisenberg:2019qxz,Kase:2015zva}. The physical fixed points are listed in Table~\ref{tab:fp}. The fixed points and their stability properties are similar to those of the quintessence field with an exponential potential. However, it is crucial to note that the inclusion of the variable $\ep$ reduces the number of fixed points relative to the quintessence field alone ($\ep=0$) with an exponential potential. For instance, the fixed point characterized by $\left(x,y,\Omega_\r\right) = \left(0,0,1\right)$ is absent in our case. Moreover, it also affects the character of some of the points (see $\rm C_\pm$) and makes them behave strictly as saddle points whereas, for the quintessence case, they can act as unstable nodes as well \cite{Copeland:1997et}. Interestingly, as can be seen from the table below, $\ep$ flows to zero at all the critical points of our model on the four-dimensional phase space. \begin{table*}[ht] \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}\hline \hline Pts.
& $x$ & $y$ & $\ep$ & $\Om_\r$ & Existence & Stability & $\Omega_\m$ & $\Omega_\pi$ & $w_\pi$ & $w_{\rm eff}$ \\ \hline\hline $\rm A_\pm$ & $2\sqrt{2}/(\sqrt{3}\lam)$ & $\pm 2/(\sqrt{3}\lam)$ & 0 & $1-4/\lam^2$ & $\lam^2>4$ & Saddle point & 0 & $4/\lam^2$ & 1/3 & 1/3 \\ \hline $\rm B_\pm$ & $\sqrt{3}/(\sqrt{2}\lam)$ & $\pm \sqrt{3}/(\sqrt{2}\lam)$ & 0 & 0 & $\lam^2>3$ & Saddle point for & $1-3/\lam^2$ & $3/\lam^2$ & 0 & 0 \\ & & && & & $3<\lam^2<24/7$ & & & & \\ & & && & & Stable spiral for & & & & \\ & & && & & $\lam^2>24/7$ & & & & \\ \hline $\rm C_\pm$ & $\pm 1$ & 0 & 0 & 0 & For all $\lambda$ & Saddle point & 0 & 1 & 1 & 1 \\ \hline $\rm D_\pm$ & $\lambda/\sqrt{6}$ & $\pm\sqrt{1-\lam^2/6}$ & 0 & 0 & $\lambda^2 < 6$ & Stable node for $\lambda^2 < 3$ & 0 & 1 & $-1+\lambda^2/3$ & $-1+\lambda^2/3$ \\ & && & & & Saddle point for $3< \lambda^2 < 6$ & & & &\\ \hline \hline \end{tabular}} \end{center} \caption[crit]{\label{crit} Fixed points of the autonomous system (\ref{eq:x})-(\ref{eq:omr}), with their existence conditions and stability properties.} \label{tab:fp} \end{table*} The points $\rm A_\pm$ and $\rm B_\pm$ can represent the radiation and matter dominated eras respectively, provided the value of $\lambda$ is sufficiently larger than what is required for the existence of these points. The critical points $\rm C_\pm$ represent a kinetic regime where the kinetic term of the scalar field dominates, with $w_{\rm eff}=w_\pi=1$. The points $\rm D_\pm$ can lead to an attractor solution close to de Sitter for small $\lam$. Restricting our attention to a possible future attractor solution, we are then left with four points ($\rm B_\pm$ and $\rm D_\pm$). Among these, for the points $\rm B_\pm$, neither the scalar field nor the ordinary matter fields dominate entirely and there is a scaling solution \cite{Copeland:1997et} in which the energy density of the scalar field remains proportional to that of the background fluid, which in this case is baryonic matter.
The condition $\lam^2>3$ has to be satisfied for this solution to exist. However, this solution cannot give us late-time acceleration, since $w_{\rm eff}=w_\pi=0$ at the points $\rm B_\pm$. Therefore, observations rule out these points. In other words, we can conclude that a (steep) exponential potential with $\lam^2>3$ cannot give late-time acceleration; rather, it leads to a scaling solution for which $w_{\rm eff}=w_\pi=0$, similar to the quintessence field with an exponential potential \cite{Copeland:1997et}. Note that the same scaling behaviour is also present during the radiation era, but it is not an attractor solution. So we are left with the option $\lam^2<3$, {\it i.e.}, the condition required for the critical points $\rm D_\pm$ to be stable. From Table~\ref{tab:fp}, it is easy to see that as we lower the value of $\lam$ we get closer to the de Sitter solution ($w_\pi=-1$). But for these points we do not have such scaling behaviour. \subsection{Imposing the Swampland constraints} As described in the introduction, for the EFTs responsible for such late-time acceleration to be compatible with quantum gravity, we need to impose the swampland criteria on the cosmological dynamics of our model. For this, we shall primarily focus on the dS constraint (S2), since the condition (S1) is automatically satisfied in our case. Moreover, for a potential of the exponential type with our defined parameters, (S2) translates into a condition on the first derivative of the potential, and we do not have to worry about the refined version of (S2) involving the Hessian of the potential, since it is not relevant in this case. It is worth pointing out that this is the same condition which has been applied previously to constrain quintessence models as well as some Horndeski theories. Therefore, from now on, we shall further impose $\lam\approx 1$, so that the model does not fall into the swampland.
This implies that $w_{\rm eff}=w_\pi=-2/3$ at the fixed points $\rm D_\pm$ in the future. (In hindsight, we could have ruled out some of the critical points simply on the basis of the swampland conjecture (S2).) However, this value of $w_{\rm eff}$ is obviously not compatible with current observational results. So we have to fix the initial conditions very precisely, so that the current value of the equation of state ($w_\pi$) matches observational data but deviates from that value in the future, reaching the fixed-point value $w_\pi=-2/3$ for $\lam=1$. To achieve this we have to consider the thawing dynamics \cite{Caldwell:2005tm} of the scalar field, in which the scalar field behaves like a cosmological constant for most of the history of the universe but starts evolving in the recent past and deviates from the value $w_\pi=-1$. It is worth emphasizing at this point that the thawing dynamics of the scalar field is extremely sensitive to the initial conditions, just as is the case for the cosmological constant. This is, in fact, also the case in \cite{Heisenberg:2019qxz}, where a model with Horndeski higher-derivative self-interactions was considered and the cosmological dynamics were shown to depend sensitively on the tuned initial conditions. For the scenario under consideration, we have one more variable than standard quintessence, which gives us some freedom in choosing our initial conditions. In \cite{Hossain:2012qm} it has already been shown that the scalar field EoS can attain lower values than in the quintessence case as the positive initial value of $\ep$ is increased at fixed $\lam$. This is precisely the freedom we shall employ so as to have values of $w_\pi$ consistent with both current observations and the swampland criteria.
\begin{figure}[h] \centering \subfigure[Cosmological evolution of $\ep$ for three different initial conditions with $\lam=1$.]{\includegraphics[scale=.7]{ep.eps}\label{fig:ep}}~~~~~ \subfigure[Green (dotted), red (dashed) and blue (solid) curves correspond to the cosmological evolution of $w_\pi$ for $\ep_i=0,\; 10,\; 20$ respectively with $\lam=1$.]{\includegraphics[scale=.7]{eos.eps}\label{fig:eos}}\\ \subfigure[Cosmological evolution of the fractional energy densities for $\lam=1$.]{\includegraphics[scale=.7]{den.eps}\label{fig:den}}~~~~~ \subfigure[Cosmological evolution of the dimensionless parameters $x,\; y$ and $\ep$ for $\lam=1$.]{\includegraphics[scale=.7]{xyep.eps}\label{fig:den1}} \caption{Evolution of cosmological parameters in our model.} \end{figure} As briefly mentioned earlier, from Table~\ref{tab:fp} we can see that all the fixed points lie at $\ep=0$. But we are mainly interested in the late-time attractor solution and an initial-condition-dependent cosmological evolution. In other words, we shall fix arbitrary initial conditions so as to reproduce a viable cosmological evolution with the correct amount of radiation-, matter- and scalar-dominated phases, and shall then study the late-time dynamics. Of course, even if we set arbitrary initial conditions, we should reach one of the points $\rm D_\pm$ in the future, where $\ep=0$, as shown in Fig.~\ref{fig:ep}. This means that in the future the dynamics of the model reduces to that of quintessence. This has been illustrated in Fig.~\ref{fig:eos} with the evolution of the scalar field EoS, where the red (dashed) and blue (solid) curves approach the green (dotted) curve, which is for $\ep_i=0$, {\it i.e.}, the quintessence case. The cosmological evolution of the fractional energy densities is shown in Fig.~\ref{fig:den}, which demonstrates the viability of the cosmological dynamics with the chosen initial conditions.
The evolution of the dimensionless quantities $(x, y, \ep)$ is shown in Fig.~\ref{fig:den1}. At the onset, the scalar field is kept nearly frozen by the large Hubble damping, which allows us to choose a very small initial value of $x$ ($x_i$) and keeps $w_\pi \approx -1$. On setting the initial value of $\ep$, \textit{i.e.}, $\epsilon_i$, to zero, it remains zero throughout the cosmic history and we flow along the trajectories of standard quintessence fields to the fixed points at late times. Specifically, for $\ep_i=0$ the situation reduces to that of quintessence models with thawing dynamics, for which we have tuned the initial value of $y$ ($y_i$) so as to get a viable cosmology. On fixing a suitable $y_i$, the quintessence solution for $\lam\approx 1$ is largely independent of the initial condition $x_i$ \textit{at late times}. On the other hand, in the presence of the higher derivative cubic Galileon term, \textit{i.e.}, when there is a non-zero $\ep_i$, different values of $x_i$ can lead to different cosmological histories near $z=0$. The reason is that when $\ep_i$ is set to a non-zero positive value, an additional frictional term is turned on which is coupled to the $x$ parameter. Finally, it is worth pointing out that only the relative magnitude of $\ep_i$ and $x_i$ matters for the dynamics. \section{Current observational bounds} Imposing experimental bounds from SNe Ia, CMB, BAO and $H_0$ data, one can obtain $1\sig,\; 2\sig$ and $3\sigma$ contours for an upper bound on the dark energy EoS as a function of redshift \cite{Scolnic:2017caz}. This was carried out in \cite{Heisenberg:2018yae} for standard quintessence models using the observational constraints on $w_0$ and $w_a$, the parameters of the Chevallier-Polarski-Linder (CPL) parameterization \cite{Chevallier:2000qy,Linder:2002et} of the dark energy EoS \begin{equation} w(z)=w_0+w_a\frac{z}{1+z} \, .
\end{equation} We shall closely follow the analysis of \cite{Heisenberg:2018yae} to put constraints on our model involving higher derivative terms. On setting $\ep=0$, one goes back to the quintessence scenario; as expected analytically, the numerical evolution with $\ep_i=0$ reproduces the numerical solution for quintessence. Changing the value of $\ep_i$ changes the evolution history \cite{Hossain:2012qm}, and similarly for different $x_i$ (when $\ep_i\neq 0$). Here, we are interested in the effect of different values of $\ep_i$ and $x_i$ on the evolution history, and in comparing the latter with observational bounds while respecting the swampland criteria, in particular the dS constraint. Thus, we shall treat $\ep_i$ and $x_i$ as parameters while fixing $y_i=5\times 10^{-12}$ and the initial value of $\Om_\r$ to $\Om_{\r i}=0.999$. \begin{figure}[h] \centering \subfigure[Green (dotted), red (dashed) and blue (dot-dashed) curves correspond to $\ep_i=0,\; 10,\; 100$ respectively. The solid lines represent the $1\sig,\; 2\sig$ and $3\sigma$ contours, from bottom to top, for the dark energy EoS in the CPL parameterization.]{\includegraphics[scale=.7]{wpi_1.eps}\label{fig:wpi}}~~~~~ \subfigure[Green (dotted), red (dashed) and blue (dot-dashed) curves correspond to $x_i=10^{-22},\; 10^{-23},\; 10^{-24}$ respectively. The solid lines represent the $1\sig,\; 2\sig$ and $3\sigma$ contours, from bottom to top, for the dark energy EoS in the CPL parameterization.]{\includegraphics[scale=.7]{wpi_2.eps}\label{fig:wpi2}} \caption{Observational bounds on the reconstructed Galileon EoS as a function of redshift.} \end{figure} Fig.~\ref{fig:wpi} shows the constraints on the model from observations for different initial values of $\ep$ at fixed $x_i$. The solid black lines represent the $1\sig,\; 2\sig$ and $3\sigma$ upper bounds, from bottom to top, on the dark energy EoS.
Throughout this analysis, we have fixed the slope $\lam$ to 1 to respect the dS constraint (S2). The green (dotted) line is for $\ep_i=0$, {\it i.e.}, the quintessence case. As can be seen clearly from Fig.~\ref{fig:wpi}, this case is in conflict with current observations at the $2\sig$ level. As we increase the value of $\ep_i$, the EoS gets closer to $-1$, making it more compatible with observations. For a given value of $\ep_i$, one obtains the same behaviour by lowering the value of $x_i$, as shown in Fig.~\ref{fig:wpi2}. In other words, Figs.~\ref{fig:wpi} and \ref{fig:wpi2} show that for $\lam\sim \mathcal{O}(1)$, even though the quintessence model (with an exponential potential, which is the least constrained case) can be ruled out at the 2$\sig$ level, the inclusion of the cubic Galileon term can make the cosmology compatible with both observations and the swampland criteria. A natural question which arises is how our results compare with those of \cite{Heisenberg:2019qxz}, which seem to suggest that Horndeski theories are typically in the Swampland. Firstly, we consider only a cubic higher derivative self-interaction term and, more importantly, we take the limit where there is no non-minimal coupling in the theory. It is not straightforward to take this limit in the model studied in \cite{Heisenberg:2019qxz}, due to the fact that the non-minimal coupling parameter has to be $\geq 10^{-3}$ to comply with solar system constraints \cite{Kase:2015zva}. Therefore, our model is able to scan parts of the phase space of Horndeski theory that are unavailable in the system studied in \cite{Heisenberg:2019qxz}, thereby showing that there exist models of dark energy which are compatible with current observations even after strongly ($\lam = 1$) imposing the Swampland constraints.
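The CPL form used for the contours above is trivial to evaluate; a minimal helper (a Python illustration of ours, with placeholder parameter values, not fitted numbers):

```python
def w_cpl(z, w0, wa):
    # Chevallier-Polarski-Linder dark energy EoS: w(z) = w0 + wa * z / (1 + z)
    return w0 + wa * z / (1.0 + z)

# w(0) = w0, and w -> w0 + wa as z -> infinity
print(w_cpl(0.0, -1.0, 0.5))   # -1.0
print(w_cpl(1.0, -1.0, 0.5))   # -0.75
```

Fitting $(w_0, w_a)$ against a reconstructed $w_\pi(z)$ is then a standard least-squares problem.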
\section{Looking Ahead: More general models} The swampland conjectures have paved a way towards constraining cosmological models of both the early and late universe. If true, these conjectures tell us how complicated UV-completions of GR can leave their imprint on the low-energy effective theory. Specifically, in the context of the dS constraint, these conjectures suggest that there is a naturalness condition such that the slope of the potential of any effective scalar should be related to the potential itself through an order one number. In other words, (S2) gives us a way to explain not only why the current value of the potential is small, but also why its slope is equally small. However, current and near-future experimental data lead to bounds on $c$ for quintessence models which put significant tension on this conjecture. In this work, we showed, through an explicit example, how some of these tensions can be relieved by going to a model beyond quintessence, namely by adding the higher derivative cubic Galileon term to the action. The specific Galileon model we have considered here remains viable after the multi-messenger gravitational wave discovery as well as under other current observational bounds, even after imposing the swampland conjectures. Moreover, as noted in \cite{Heisenberg:2019qxz}, large portions of the parameter space of Horndeski theories can be ruled out by current and near-future data if the Swampland constraints are taken into account, thereby singling out models of this kind as special. However, it would be interesting to see what the swampland conjectures imply for models of modified gravity aimed at explaining late-time acceleration in a more systematic manner \cite{Heisenberg:2018vsk}. We plan to do this in the future by going to more general beyond-Horndeski models and applying the swampland conjectures to them.
In this way, we shall be able to constrain these models through a theoretical consistency requirement in addition to observational data. \vskip15pt \noindent\textit{Acknowledgments:} This research was supported in part by the Ministry of Science, ICT \& Future Planning, Gyeongsangbuk-do and Pohang City and the National Research Foundation of Korea (Grant No.: 2018R1D1A1B07049126).
\section{Introduction} Time-reversal-invariant (TRI) topological superconductors (TSC) attract sustained interest due to the interesting Majorana zero modes in vortices and itinerant Majorana fermions on the boundary.\cite{Schnyder,Qi1,RRoy} It has been established that the key requirement for TSC in inversion symmetric systems is odd-parity pairing.\cite{Fu1,Sato} Although considerable efforts have been made both theoretically and experimentally,\cite{Qi2,Nakosai,Scheurer,Wang,Hosur} definitive evidence of a TRI-TSC is yet to be found. The best candidate to date is $\mathrm{Cu_xBi_2Se_3}$,\cite{Hor} which is made by intercalation of Cu into $\mathrm{Bi_2Se_3}$. The parent compound is a topological insulator (TI). While the topology of the normal state is not necessary for TSC,\cite{Fu1} the strong spin-orbit coupling (SOC) in such a material makes TSC more likely. The maximum transition temperature $T_c$ observed is 3.8 K. An early specific heat measurement \cite{Kriener} seems to indicate a full pairing gap. The upper critical field exceeds the Pauli limit,\cite{Bay} suggesting triplet pairing. Assuming odd-parity triplet pairing, a candidate pairing function is $\phi(\v k)=\tau_2 \sigma_3 i\sigma_2$,\cite{Fu1} dubbed $\Delta_2$ pairing. Henceforth, $\tau_i$ ($\sigma_i$) is the Pauli matrix in the orbital (spin) basis. The two orbitals are derived from the $p_z$ orbitals of Bi atoms in a quintuple layer of Bi$_2$Se$_3$. The $\Delta_2$ pairing is a nondegenerate representation of $D_{3d}$. The $d$-vector is along $z$ and the SC state is fully gapped. The zero-bias conductance peak observed in point-contact spectroscopy indicates the existence of unusual in-gap surface states.\cite{Sasaki} However, scanning-tunneling-microscopy (STM) measurements reveal a full gap with no sign of in-gap surface states.\cite{Levy} With this spectroscopic uncertainty in mind, a recent nuclear magnetic resonance (NMR) experiment \cite{Matono} makes a breakthrough in this field.
The observed Knight shift $K$ develops a two-fold oscillation as a function of the angle of the in-plane applied field $H$, with the strongest (or no) suppression of $K$ below $T_c$ for $H$ along (or orthogonal to) one of the Se-Se bonds, the principal axes henceforth. This could be understood if the $d$-vector of the triplet aligns along a principal axis. Fu realized that the in-plane NMR nematicity implies the pairing function must belong to a doublet representation.\cite{Fu3} For local pairing, the desired pairing function is obvious, $\phi(\v k)=\tau_2 \sigma_{1,2} i\sigma_2$, dubbed $(\Delta_{4x},\Delta_{4y})$ pairing, with the understanding that the $d$-vector is along and orthogonal to the principal axis, respectively. Note that $\Delta_{4x}$ leads to a nodal SC gap, protected by the $D_{3d}$ group, while $\Delta_{4y}$ could become fully gapped in the presence of warping effects in the normal state band structure.\cite{Fu3} While the nematicity is observed by various probes in Cu$_x$Bi$_2$Se$_3$, the identified direction of the $d$-vector varies.\cite{Matono,Yonezawa,Pan,Smylie,Asaba,haihu,donglai} Theoretically,\cite{baowc} direct visualization of the $d$-vector is possible by quasi-particle interference (QPI) \cite{qpi} and STM: the leading peak momentum in QPI at sub-gap energies should be along the $d$-vector, and the STM profile of the vortex at low energies should be elongated also along the $d$-vector. The agreement between the results in momentum space (from QPI) and real space (from the vortex profile) is a stringent constraint for the nematic triplet. In real samples, there may be lattice distortions,\cite{distortion} which could pin the direction of the $d$-vector. On the other hand, the extent of warping, the filling level, the thickness of the sample, as well as the strength of inter-layer hybridization, may vary from sample to sample, and their roles for the $d$-vector are yet to be unravelled.
Here we study how the nematic pairing, and the direction of the $d$-vector in particular, depends on the above typical material parameters, in order to understand the variety of probed $d$-vector directions in existing experiments. We use a mean field theory (MFT) based on a tight-binding model, assuming local triplet pairing in the $E_u$ representation. Our main results are as follows. In the two-dimensional (2D) model, the system favors the fully gapped chiral state for weaker warping or lower filling level, while a nodal and nematic $\Delta_{4x}$ state is favorable for stronger warping or higher filling. In the presence of lattice distortion, relative elongation along one of the principal axes tends to rotate the nematic $d$-vector in favor of the nematic $\Delta_{4y}$ state. In the 3D model, increasing inter-layer hybridization suppresses the chiral state in favor of the nematic ones. The rest of the paper is organized as follows. The model is described in Sec.\ref{sec:model}, the effect of warping in the 2D model in Sec.\ref{sec:2d}, the effect of lattice distortion in Sec.\ref{sec:distortion}, and the effect of inter-layer hybridization in Sec.\ref{sec:3d}. Finally, Sec.\ref{sec:summary} is a summary of this work. \section{Model and methods}\label{sec:model} \begin{figure} \includegraphics[width=0.9\columnwidth]{band.png} \caption{The band structure of the 2D model for $\mu=3.4$, $m=-0.5$ and $t_w=0.1$. (a) The band dispersion along high-symmetry cuts and near the Fermi level (the horizontal line). The inset shows the Fermi surface (green line) in the Brillouin zone (outer hexagon), with high symmetry momenta $G$, $K$ and $M$ indicated.
(b) The density of states.}\label{fig:band} \end{figure} According to first-principles calculations,\cite{Zhang} the conduction and valence bands of $\mathrm{Bi_2Se_3}$ are superpositions of Se $p_z$ orbitals on the top and bottom layers of the unit cell, each of which is mixed with its neighboring Bi $p_z$ orbital. The angle-resolved photoemission spectroscopy (ARPES) experiment\cite{Wray} shows that the band structure of $\mathrm{Cu_xBi_2Se_3}$ is quite similar to that of $\mathrm{Bi_2Se_3}$. A two-orbital continuum model \cite{Fu3} has been proposed to describe the low energy physics of $\mathrm{Cu_xBi_2Se_3}$. Here we use a similar minimal model defined on the layered triangular lattice, which is equivalent to the continuum model at low energy. The free part of the Hamiltonian can be written, in momentum space, as $H_0=\sum_{\v k}\psi_{\v{k}}^\dag h_{\v{k}} \psi_{\v{k}}$, where $\psi_{\v k}$ is a four-component spinor describing two orbitals and two spins, with $\psi_{\v{k}}^\dag=(c_{\v k1\uparrow}^\dag,c_{\v k2\uparrow}^\dag,c_{\v k1\downarrow}^\dag,c_{\v k2\downarrow}^\dag)$, and \begin{eqnarray} h_{\v k}=&&\sum_i \alpha_i (\v d_i\times \pmb{\sigma})_z \tau_3 \sin k_i + t_w\sum_i \alpha_i \sigma_3\tau_3 f_i \sin k_i \nonumber\\ &&+[m+\sum_i \alpha_i (1-\cos k_i) + t_z(1 - \cos k_z)]\tau_1 \nonumber\\ &&+ t_z \sin k_z \tau_2-\mu. \label{eq:hk}\end{eqnarray} Here the summation is over the three in-plane translation vectors $\v d_1=(1,0)$, $\v d_2=(1/2,\sqrt{3}/2)$ and $\v d_3=(-1/2,\sqrt{3}/2)$, and $k_i=\v k\cdot \v d_i$; $t_w$ is the warping parameter allowed in a $D_{3d}$ system, induced here by the form factor $f_{1,2,3}=(1,-1,1)$; $\mu$ is the chemical potential; and $m$ controls the topology of the normal state band structure. The 3D dispersion is controlled by the interlayer hybridization $t_z$, and in the 2D limit we set $t_z=0$. Finally, the coefficient $\alpha_i=1$ unless specified explicitly otherwise (for the distorted lattice).
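The Bloch Hamiltonian of Eq.~(\ref{eq:hk}) is simple to construct numerically. The sketch below is our own Python illustration (function names and default parameters are ours, not the paper's code), building $h_{\v k}$ as a $4\times 4$ matrix in the basis $(c_{\v k1\uparrow},c_{\v k2\uparrow},c_{\v k1\downarrow},c_{\v k2\downarrow})$, with $\sigma$ acting on spin and $\tau$ on orbital:

```python
import numpy as np

# Pauli matrices
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1])

def pauli(sig, tau):
    # sigma acts on spin (outer factor), tau on orbital (inner factor)
    return np.kron(sig, tau)

# in-plane translation vectors and warping form factor f_{1,2,3} = (1,-1,1)
d = [np.array([1.0, 0.0]),
     np.array([0.5, np.sqrt(3)/2]),
     np.array([-0.5, np.sqrt(3)/2])]
f = [1, -1, 1]

def h_k(kx, ky, kz=0.0, mu=3.4, m=-0.5, tw=0.1, tz=0.0, alpha=(1.0, 1.0, 1.0)):
    k = np.array([kx, ky])
    h = np.zeros((4, 4), dtype=complex)
    cos_sum = 0.0
    for di, fi, ai in zip(d, f, alpha):
        ki = k @ di
        # (d_i x sigma)_z = d_x sigma_y - d_y sigma_x, times tau_3
        h += ai * np.sin(ki) * pauli(di[0]*sy - di[1]*sx, sz)
        h += tw * ai * fi * np.sin(ki) * pauli(sz, sz)   # warping term
        cos_sum += ai * (1 - np.cos(ki))
    h += (m + cos_sum + tz*(1 - np.cos(kz))) * pauli(s0, sx)  # tau_1 mass term
    h += tz * np.sin(kz) * pauli(s0, sy)                      # tau_2 interlayer term
    h -= mu * np.eye(4)
    return h

bands = np.linalg.eigvalsh(h_k(0.3, 0.2))   # band energies at one k-point
```

At the zone center all $\sin k_i$ terms vanish, so $h_{\v 0}=m\tau_1-\mu$ and the four eigenvalues are $-\mu\pm|m|$, doubly degenerate, which is a quick sanity check on the construction.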
Throughout this work we use arbitrary units for qualitative purposes, and we fix $m=-0.5$ for concreteness. As an example, we show the band structure in the 2D limit in Fig.\ref{fig:band}(a) for $\mu=3.4$ and $t_w=0.1$, which is consistent with the ARPES measurement.\cite{Wray} The corresponding density of states (DOS) is shown in Fig.\ref{fig:band}(b). The DOS has a Van Hove singularity near the Fermi level, and this makes the system sensitive to perturbations that we will be addressing. The NMR nematicity in the SC state strongly implies triplet pairing in a doublet $E_u$ channel. While the underlying pairing mechanism is not yet clear, for our purpose it is sufficient and reasonable to start with an effective MF Hamiltonian with attractive pairing interaction in the $E_u$ channel, \begin{eqnarray} H_{\rm MF}=H_0+\frac{1}{2}\sum_{\v k}\left(\Psi_\v k^\dag \tau_2\v V\cdot\vec\sigma i\sigma_2\Psi_{-\v k}^{\dag,t}+{\rm h.c.}\right),\end{eqnarray} where $\Psi_\v k^\dag = (\psi_\v k^\dag, \psi_{-\v k}^T)$ is the Nambu spinor, $t$ means transpose, and $\v V=(\Delta_{4x},\Delta_{4y})$ is the $d$-vector order parameter determined self-consistently by \begin{eqnarray} \Delta_{4x,4y}=\frac{V_s}N \sum_{\v k} \<\Psi_{-\v k}^t (-i\sigma_2) \sigma_{1,2} \tau_2 \Psi_\v k\>, \end{eqnarray} where $V_s <0$ is the pairing interaction and $N$ is the number of lattice sites. The local triplet is made possible by the antisymmetric pairing between the two orbitals (or orbital singlet). The chiral pairing is defined by $\v V\times \v V^*\neq 0$, while the nematic pairing is signaled by $\v V\times \v V^*= 0$. The Bogoliubov-de Gennes (BdG) quasiparticles, with energy dispersion $E_\v k$, can be obtained by diagonalizing the single-particle part of $H_{\rm MF}$. In the following we define the BdG gap at a polar angle as $|E_\v k|_{\rm min}$ for $\v k$ along the corresponding radial direction in the inplane Brillouin zone. 
Usually this is just $E_\v k$ on the normal state Fermi surface (FS), but in the present model the unusual pairing may cause $|E_\v k|_{\rm min}$ to deviate from the FS. \section{Two dimensional limit}\label{sec:2d} \begin{figure} \includegraphics[width=0.9\columnwidth]{phase.png} \caption{(a) The evolution of the real part ($r'$, solid lines) and imaginary part ($r''$, dashed lines) of the ratio $r=\Delta_{4y}/\Delta_{4x}$ versus MF iteration steps for $t_w=0.1$ (blue) and $t_w=0.4$ (red). Here $\mu=3.4$, $V_s=-1.5$, and $T=10^{-5}$. (b) The phase diagram obtained by MF calculations in the $(\mu, t_w)$ space with $V_s=-1.5$. The insets illustrate the characteristic BdG gap versus the polar angle in the respective phases.}\label{fig:phase} \end{figure} First, we consider the 2D limit, $t_z=0$. We fix $(\mu, V_s)=(3.4, -1.5)$ and perform the MF calculation at the temperature $T=10^{-5}$, the zero temperature limit. The solid (dashed) lines in Fig.\ref{fig:phase}(a) show the MF evolution of the real (imaginary) part of the ratio $r=\Delta_{4y}/\Delta_{4x}$, for $t_w=0.1$ (blue) and $t_w=0.4$ (red). Clearly, a chiral state with $\v V\propto (1, i)/\sqrt{2}$ is obtained for $t_w=0.1$. An equivalent state is $\v V\propto (1,-i)/\sqrt{2}$. On the other hand, a nematic state with $\v V\propto (1,\sqrt{3})/2$ is obtained for $t_w=0.4$. There is a discrete six-fold degeneracy in the nematic states, with \begin{eqnarray} \Delta_{4y}/\Delta_{4x}=\tan(n\pi/3),\ \ n=1,2,...,6. \label{eq:6fold}\end{eqnarray} The value of $n$ in the converged MF solution depends on the initial condition.
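The MF iteration shown in Fig.~\ref{fig:phase}(a) follows the standard self-consistency loop: diagonalize, evaluate the gap equation, and update until the order parameter stops changing. As a schematic illustration of that loop (a single-band $s$-wave toy model of our own, not the two-orbital $E_u$ problem of the text; all numbers are illustrative):

```python
import numpy as np

# toy 1D band, measured from the chemical potential
Nk = 2000
k = np.linspace(-np.pi, np.pi, Nk, endpoint=False)
xi = -2.0 * np.cos(k)

V, T = 1.0, 1e-4      # attractive coupling strength and temperature (arbitrary units)

delta = 0.1           # initial guess for the gap
for _ in range(2000):
    E = np.sqrt(xi**2 + delta**2)                          # BdG quasiparticle energy
    new = (V/Nk) * np.sum(delta/(2*E) * np.tanh(E/(2*T)))  # gap-equation update
    if abs(new - delta) < 1e-12:                           # converged
        break
    delta = new
```

The converged `delta` satisfies the gap equation to the stated tolerance; in the two-component problem of the text the same loop runs over the complex pair $(\Delta_{4x},\Delta_{4y})$, which is why the ratio $r$ in Fig.~\ref{fig:phase}(a) is tracked versus iteration steps.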
Both the chiral state and the six-fold degenerate $\Delta_{4x}$ state can be captured by a phenomenological Landau free energy in the form \begin{eqnarray} f=&&\alpha~ \v V^*\cdot \v V+\beta (\v V^*\cdot\v V)^2 +\beta'|\v V\times \v V^*|^2 \nonumber\\ +&& \gamma [(\Delta_{4x}^*+i\Delta_{4y}^*)^3(\Delta_{4x}+i\Delta_{4y})^3+{\rm c.c.}]+\cdots.\end{eqnarray} With $\alpha<0$ and $\beta>0$ as usual, the chiral phase is stable for $\beta'<0$ and $\gamma= 0$,\cite{chiral} while the nematic phase is stable for $\beta'>0$ and $\gamma <0$. Note that the sixth-order $\gamma$-term is required to reproduce the discrete six-fold degeneracy in the nematic state. Interestingly, a similar six-fold degeneracy of helical triplet pairing was found theoretically in doped BiH, but a 12th-order term is needed instead.\cite{yanglin} By systematic MF calculations with $V_s=-1.5$ and in the zero temperature limit, we obtain a phase diagram, Fig.\ref{fig:phase}(b), for the pairing state in the $(\mu, t_w)$ parameter space. Note that a higher value of $\mu$ corresponds to a higher electron filling. We see that the chiral state is favored for lower warping or filling, and the nematic state is favored otherwise. We stress that the nematic state is of $\Delta_{4x}$-type. The $\Delta_{4y}$-type pairing is not obtained in the MF calculations here. The characteristic BdG gap versus the polar angle is shown as insets in the respective phases; it is nodeless in the chiral phase and nodal in the nematic phase. \section{Lattice distortion}\label{sec:distortion} \begin{figure} \includegraphics[width=0.9\columnwidth]{distortion.png} \caption{The amplitude of $\Delta_{4x}$ and $\Delta_{4y}$ versus the distortion parameter $\gamma$, for (a) $(\mu,t_w)=(3.4,0.4)$, and (b) $(\mu,t_w)=(3.4,0.1)$.}\label{fig:distortion} \end{figure} In the previous section, we found that the $\Delta_{4y}$ pairing is not stabilized in the ideal 2D lattice model. However, this type of pairing state was reported in experiments.
We then ask how it could be realized by perturbations away from the ideal limit. One possible perturbation is the lattice distortion, which is indeed observed in Sr$_x$Bi$_2$Se$_3$ by X-ray diffraction.\cite{distortion} By symmetry, the effect of the lattice distortion on the hopping coefficients $\alpha_{1,2,3}$ defined in Eq.\ref{eq:hk} can be decomposed into two symmetry channels, $\Delta\alpha_i\propto x_i^2 - y_i^2$ and $\Delta\alpha_i\propto 2 x_i y_i$. Here $(x_i,y_i)=\v d_i$ are the two components of the first-neighbor bond $\v d_i$. We concentrate on the $d_{x^2-y^2}$ channel. (The other channel can be discussed similarly.) Although all of $\alpha_{1,2,3}$ would be changed, we can perform rescaling so that $\alpha_{2}=\alpha_3=1$, and $\alpha_1=1-\gamma$, to represent the qualitative effect of the strain. A positive $\gamma$ corresponds to relative elongation of $\v d_1$ versus $\v d_{2,3}$, and vice versa. (The rescaling may also change $m$ and $\mu$, but for a small distortion such changes could be ignored in the leading order approximation.) Fig.\ref{fig:distortion}(a) shows the MF solution versus $\gamma$ in the parent nematic case with $(\mu, t_w)=(3.4,0.4)$. The $\Delta_{4y}$ component increases rapidly for positive $\gamma$, and the $\Delta_{4x}$ component vanishes already for $\gamma\geq 0.008$. We checked that $\Delta_{4y}/\Delta_{4x}$ remains real, so the pairing is still nematic. We see that the relative elongation along $x$ can rotate the $d$-vector to the orthogonal direction, in favor of the $\Delta_{4y}$-pairing. For small positive $\gamma$, the $d$-vector interpolates between $x$- and $y$-directions. Such an intermediate state may have been observed in Ref.\onlinecite{donglai}. Note that for a negative $\gamma$, or relative compression of the lattice along $x$, the $\Delta_{4x}$ state is the only stable one. This is reasonable since the system favors $\Delta_{4x}$ already in the parent undistorted lattice for $(\mu, t_w)=(3.4,0.4)$.
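The $d_{x^2-y^2}$ decomposition can be made concrete on the three first-neighbor bonds. A minimal numerical check, assuming the standard triangular-lattice bond vectors (they are not restated in this section):

```python
import numpy as np

# Assumed first-neighbor bonds of the triangular lattice (unit length).
bonds = np.array([[1.0, 0.0],
                  [-0.5, np.sqrt(3) / 2],
                  [-0.5, -np.sqrt(3) / 2]])

# d_{x^2-y^2} channel evaluated on each bond: Delta alpha_i ~ x_i^2 - y_i^2.
form = bonds[:, 0] ** 2 - bonds[:, 1] ** 2   # -> (1, -1/2, -1/2)
```

The channel shifts $\alpha_1$ with weight $1$ and $\alpha_{2,3}$ with weight $-1/2$, so after rescaling to $\alpha_2=\alpha_3=1$ the strain is indeed captured by the single parameter $\alpha_1=1-\gamma$.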
In the parent chiral case with $(\mu, t_w)=(3.4,0.1)$, the effect of $\gamma$ on the $\Delta_{4x,4y}$ components of $\v V$ is shown in Fig.\ref{fig:distortion}(b). The two components coexist within $|\gamma|\leq 0.05$, and we checked that $\Delta_{4y}/\Delta_{4x}$ remains purely imaginary. So in this regime the pairing is still chiral. However, only $\Delta_{4y}$ ($\Delta_{4x}$) is left for $\gamma>0.05$ ($\gamma<-0.05$), hence the pairing becomes nematic. Although the critical value of $\gamma$ for the transition between chiral and nematic states is relatively large, we again see that the relative elongation of the $x$-bonds in the 2D lattice tends to favor $\Delta_{4y}$ pairing, and vice versa. By symmetry, the conclusion can be generalized to the relative elongation of any one of the principal axes of the triangular lattice. \section{Dimension crossover}\label{sec:3d} \begin{figure} \includegraphics[width=0.9\columnwidth]{phase_3d.png} \caption{MF results for the 3D model with $\mu=3.4$ and $V_s=-2.5$. (a) The evolution of the real part ($r'$, solid lines) and imaginary part ($r''$, dashed lines) of the ratio $r=\Delta_{4y}/\Delta_{4x}$ versus MF iteration steps for $t_w=0$ (blue) and $t_w=0.1$ (red). Here $t_z=1.0$. The inset shows the FS. (b) The phase diagram in the $(t_z, t_w)$ space.}\label{fig:phase3d} \end{figure} It is known that at low doping, Cu$_x$Bi$_2$Se$_3$ has an ellipsoidal Fermi pocket centered at $\Gamma$, while at high doping the FS is open and cylinder-like along $k_z$, as observed in the photoemission\cite{fermi1} and de Haas-van Alphen measurements.\cite{fermi2, fermi3} In our tight-binding model, the shape of the 3D FS is controlled by the inter-layer hybridization $t_z$. For small $t_z$, the FS is open and cylinder-like. For $t_z>2$, the FS becomes closed around the $\Gamma$ point.
We should remark that in the case of an open cylinder-like FS, both types of nematic pairing are topologically trivial, irrespective of whether they are nodal or nodeless, since the FS encloses two time-reversal invariant momenta. With this in mind, we continue to investigate the pairing state in the 3D limit. We model the 3D lattice by $N_z=10$ layers of triangular lattices and periodic boundary conditions along all translation directions. As an example of the 3D model, we consider $\mu = 3.4$, $t_z = 1.0$ and $V_s=-2.5$. (In the 3D model, the DOS is lower. To get a sizable $T_c$, a stronger pairing strength is used.) Fig.\ref{fig:phase3d}(a) shows the FS in the inset, and the evolution of the MF solutions in the main panel, for the real (solid line) and imaginary (dashed line) parts of the ratio $r=\Delta_{4y}/\Delta_{4x}$. We find that, in both cases of $t_w=0$ (blue lines) and $t_w=0.1$ (red lines), the solution converges to $\v V=(\Delta_{4x},\Delta_{4y})\propto (1,\sqrt{3})/2$, namely the $\Delta_{4x}$ state up to a $C_6$ rotation. By systematic calculations, we obtain the phase diagram in the $(t_z, t_w)$ parameter space; see Fig.\ref{fig:phase3d}(b). The chiral state is limited to smaller $t_z$ or $t_w$, while the nematic $\Delta_{4x}$ phase prevails for larger $t_z$ or $t_w$. In reality, the hopping along $z$ is between quintuple layers, and should be weak. In this case, the result at large $t_w$, {\em e.g.}, $t_w=0.4$, is qualitatively similar to the 2D case discussed in Sec.\ref{sec:2d}. We note that the phenomenological Landau theory analysis in Ref.\onlinecite{chiral2} reaches a similar conclusion that reduced 3D dispersion would favor the chiral state; however, it does not include the effect of warping and hence is unable to capture the nematic phase in the 2D limit. On the other hand, we have further checked the effect of lattice distortion in the 3D model.
The qualitative effect is the same as in Fig.\ref{fig:distortion}(a), namely, a relative elongation along $x$ would favor the $\Delta_{4y}$ pairing. \section{Summary}\label{sec:summary} To conclude, we studied the pairing states in a model for the doped Bi$_2$Se$_3$ superconductor with triplet pairing in the $E_u$ representation. In the 2D model, the fully gapped chiral state is favored if the warping parameter is small, while the nodal nematic triplet with the $d$-vector $\v V=(\Delta_{4x},0)$ is favored otherwise. Under lattice distortion, a relative elongation along $x$ would favor a $d$-vector $\v V=(0,\Delta_{4y})$. In the 3D model, the chiral state disappears for large interlayer hopping, in favor of the nematic $\Delta_{4x}$ state, and the effect of lattice distortion is similar to the case of the 2D model. Taking the above material parameters into account, our results may explain the variety of $d$-vector directions probed in existing experiments. \acknowledgements{The project was supported by the National Key Research and Development Program of China (under grant No. 2016YFA0300401) and the National Natural Science Foundation of China (under Grant No.11574134).}
\section{Game analysis} \label{sec:analysis} In this section, we analyze Nash equilibria and dynamics in game $\mathcal{G}(\bm{c}, c_{\tt stick}).$ \subsection{Equilibrium in game $\mathcal{G}(\bm{c}, c_{\tt stick})$} \label{sec:finite} \noindent \textbf{Characterization of equilibria.} Before finding Nash equilibria of $\mathcal{G}(\bm{c}, c_{\tt stick}),$ we define a pure Nash equilibrium. \smallskip \begin{definition}[Pure Nash equilibrium] A strategy vector $\mathbf{s}=(s_1, s_2, \cdots, s_n)$ is a Nash equilibrium if $$U_i(\mathbf{s})= \max_{s_i^\prime\in \{\mathcal{F}, \mathcal{A}, \mathcal{B}\}} U_i(s_i^\prime, \mathbf{s_{-i}}), \qquad\mbox{for all $i$}.$$ \end{definition} \smallskip \noindent At an equilibrium, no rational player would change its strategy; that is, $r_{\mathcal F}$ and $r_{\mathcal B}$ are not updated. We map a strategy vector $\mathbf{s}=(s_1, s_2, \cdots, s_n)$ to state $(r_{\mathcal F},r_{\mathcal B})$ and denote by $\mathcal E(\bm{c}, c_{\tt stick})$ the set of all Nash equilibria in $\mathcal{G}(\bm{c}, c_{\tt stick}).$ We first determine the dynamics of player $i$ with small $c_i$ through Lemma~\ref{lem:dynamics} to establish the characterization of $\mathcal E(\bm{c}, c_{\tt stick}).$ \blue{\smallskip \begin{lemma} There is $\varepsilon>0$ such that any player $i$ possessing $c_i<\varepsilon$ does not change its strategy at state $(r_{\mathcal F},r_{\mathcal B})$ if and only if \begin{equation*} \resizebox{\hsize}{!}{$ (r_{\mathcal F},r_{\mathcal B})=\begin{cases} (f_{\varepsilon}(c_{\tt stick}), c_{\tt stick}) \quad\text{if } c_{\tt stick}>0,\\ (\frac{k}{2}+\frac{\sqrt{{N_{\tt de}}^2k^2+4N_{\tt de}N_{\tt in}(k\cdot c_i-c_i^2)}}{2N_{\tt de}}\leq r_{\mathcal F}\leq 1, 0) \,\,\text{otherwise,} \end{cases}$} \end{equation*} where $f_{\varepsilon}$ is a decreasing function whose input is $c_{\tt stick}$ and whose output ranges between 0 and $1-c_{\tt stick}.$ Parameters $k, N_{\tt de},$ and $N_{\tt in}$ are defined in
Assumptions~\ref{ass:k} and~\ref{ass:c}. \label{lem:dynamics} \end{lemma}} \blue{ \smallskip \noindent Note that $f_{\varepsilon}(c_{\tt stick})$ is $1-c_{\tt stick}$ for a small value of $c_{\tt stick}$ while $f_{\varepsilon}(c_{\tt stick})$ is 0 for a large value of $c_{\tt stick}.$ The above lemma implies that, considering miners with small computational power, if a Nash equilibrium exists, only $\Omega_{\tt stick}$ would remain as loyal miners to $coin_{\tt B}$ in the equilibrium. This is because $(r_{\mathcal F}, r_{\mathcal B})$ would continually change when $r_{\mathcal B}$ is greater than $c_{\tt stick}.$ From Lemma~\ref{lem:dynamics}, we can characterize the set $\mathcal E(\bm{c}, c_{\tt stick})$ as stated in Theorem~\ref{thm:eq}. We present the proof of Lemma~\ref{lem:dynamics} and Theorem~\ref{thm:eq} in Appendix~\ref{sec:pf_eq}.} \smallskip \blue{\begin{theorem} There is $\varepsilon>0$ such that, when $c_{\tt max}<\varepsilon,$ the set $\mathcal E(\bm{c}, c_{\tt stick})$ is as follows. \begin{equation*} \resizebox{\hsize}{!}{$ \mathcal E(\bm{c}, c_{\tt stick})= \begin{cases} \{(r_{\mathcal F}, r_{\mathcal B}): X \leq r_{\mathcal F}\leq 1, r_{\mathcal B}=0\} & \text{if } c_{\tt stick}=0, \\ \{(1-c_{\tt stick}, c_{\tt stick})\} &\text{else if } c_{\tt stick}<x,\\ \{(0, c_{\tt stick})\} &\text{else if } c_{\tt stick}>y, \\ \end{cases}$} \end{equation*} where $$X=\max_{i\in \Omega\backslash \Omega_{\tt stick}}\left\{\frac{k}{2}+\frac{\sqrt{{N_{\tt de}}^2k^2+4N_{\tt de}N_{\tt in}(k\cdot c_i-c_i^2)}}{2N_{\tt de}}\right\},$$ and $x$ and $y$ $(>x)$ range between 0 and 1. \label{thm:eq} \end{theorem}} \smallskip \noindent \blue{ As described above, Theorem~\ref{thm:eq} shows that, in a game where players except for $\Omega_{\tt stick}$ possess small computational power, there exist only Nash equilibria where the $coin_{\tt B}$-factions sticking to $coin_{\tt B}$-mining are loyal miners for $coin_{\tt B}$.
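For intuition about the threshold $X$ in Theorem~\ref{thm:eq}, the following sketch evaluates it for hypothetical values of $N_{\tt in}$, $N_{\tt de}$, $k$, and a small power vector; none of these numbers come from the paper.

```python
import numpy as np

n_in, n_de, k = 10.0, 5.0, 0.3     # hypothetical parameter values
powers = [0.001, 0.002, 0.005]     # small c_i of players outside Omega_stick

def r_threshold(c_i):
    """Per-player lower end k/2 + sqrt(...)/(2 N_de) of the r_F range."""
    disc = np.sqrt(n_de ** 2 * k ** 2 + 4 * n_de * n_in * (k * c_i - c_i ** 2))
    return k / 2 + disc / (2 * n_de)

X = max(r_threshold(c_i) for c_i in powers)
# For small c_i the threshold sits slightly above k, so the equilibrium set
# {X <= r_F <= 1, r_B = 0} for c_stick = 0 is a nonempty interval.
```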
In the case where $c_{\tt stick}$ is small, we can certainly see that the overall health of the $coin_{\tt B}$ system would be weakened in terms of scalability, decentralization, and security, which will be discussed in more detail in Section~\ref{subsec:btc}. Indeed, even if $c_{\tt stick}$ is large, the case where $r_{\mathcal B}$ is equal to $c_{\tt stick}$ would make the $coin_{\tt B}$ system significantly centralized because only a few players possessing large power are loyal miners to $coin_{\tt B}$ (this example is presented in Section~\ref{subsec:stick}). In particular, if $\Omega_{\tt stick}$ is empty, no miner exists in the $coin_{\tt B}$ system in all Nash equilibria. Remark that this case indicates the complete downfall of $coin_B.$ As a result, Theorem~\ref{thm:eq} implies that fickle mining can be dangerous.} \begin{comment} \smallskip \noindent\textbf{Fast convergence to equilibria.} Next, we study whether the downfall of $coin_{\tt B}$ can actually be reached when players update their strategies under the following popular game dynamics. \begin{definition}[Best Response Dynamics] Strategy $s_i^*$ is player $i$'s \textit{best response strategy} if \begin{equation*} s_i^*\in\argmax_{s_i\in\{\mathcal{F}, \mathcal{A}, \mathcal{B}\}}{U_i(s_i, \mathbf{s_{-i}})}. \end{equation*} In the \textit{best response dynamics}, player $i$ is selected from among $n$ players uniformly at random, and she updates her strategy to $s_i^*$ at each time instance. Meanwhile, other players maintain their current strategies. This process (or step) is repeated until strategy vector $\textbf{s}$ becomes stable, i.e., (pure) Nash equilibrium. \end{definition} In other words, in the best response mechanism, each player maximizes its payoff, given other players' strategies, whenever that player is allowed to update its strategy. Then we obtain the below theorem on the convergence rate of the best response dynamics, and the rate is the fastest possible one can hope for. 
\begin{theorem} Assume that all players possess the same computational power as $\frac{1}{n}.$ Then there exists a universal constant $N^\prime$ such that, for all $\varepsilon\in(0,1)$ and $n>N^\prime,$ it takes ${O}\left(n\ln{\frac{1}{\varepsilon}}\right)$ steps for any initial state to reach an equilibrium stated in Theorem~\ref{thm:eq}, with probability $1-\varepsilon$. \label{thm:dynamics} \end{theorem} \noindent We present the proof of Theorem~\ref{thm:dynamics} in the full version of this paper~\cite{kwon2019bitcoin}. The above theorem describes that all states are likely to reach the downfall of $coin_{\tt B}$ within only a linear number of updates with respect to the number of players when players possess similar computational power. Therefore, this theorem implies that fickle mining can lead to downfall of $coin_{\tt B}$ as quickly as possible, indicating a considerable risk of fickle mining. \end{comment} \blue{ \smallskip \noindent\textbf{When players possess infinitesimal mining power.} Under the game $\mathcal{G}(\bm{c}, c_{\tt stick})$, it is not easy to analyze the movement of the state $(r_{\mathcal F}, r_{\mathcal B})$ (this movement will be used for data analysis in Section~\ref{sec:data}) due to the large number of degrees of freedom in $\bm{c}$. Thus, we further assume that players except for $\Omega_{\tt stick}$ (i.e., $\Omega\backslash \Omega_{\tt stick}$) possess infinitesimal computational power (i.e., $\norm{\bm{c}}_2\approx 0$). We show that this assumption is reasonable by analyzing the real-world dataset in the Bitcoin system (see Section~\ref{sec:application}). We again study the equilibria of $\mathcal{G}(\bm{c}, c_{\tt stick})$ in this case. \smallskip \begin{theorem}\label{thm:inf_nash} When players except for $\Omega_{\tt stick}$ possess infinitesimal mining power, the set $\mathcal{E}(\bm{c}, c_{\tt stick})$ is as follows.
\begin{align}\label{eq:nash_infty} &\mathcal{E}(\bm{c}, c_{\tt stick})=\notag \\ &\begin{cases} \left\{\left(0,\frac{k}{k+1}\right)\right\} \, \cup\, \{(r_{\mathcal F}, r_{\mathcal B})\,:\, k\leq r_{\mathcal F} \leq 1, r_{\mathcal B}=0\} \\ \hspace{2cm}\text{if $c_{\tt stick}=0$ (Case 1),}\\ \left\{\left(0,\frac{k}{k+1}\right)\right\} \,\cup\, \{(1-c_{\tt stick}, c_{\tt stick})\} \\ \hspace{2cm}\text{else if $c_{\tt stick}\leq \alpha$ (Case 2),}\\ \left\{\left(0,\frac{k}{k+1}\right)\right\} \,\cup\, \{(\beta, c_{\tt stick})\} \\ \hspace{2cm}\text{else if $\alpha < c_{\tt stick}\leq \frac{k}{k+1}$ (Case 3),}\\ \{(0, c_{\tt stick})\} \hspace{2.3mm} \text{otherwise (Case 4)} \end{cases} \end{align} \noindent Here, $\alpha$ and $\beta$ are defined in Section~\ref{sec:infinite}. \end{theorem} \smallskip \noindent We present the proof of Theorem~\ref{thm:inf_nash} in Appendix~\ref{sec:pf_in}. Comparing with Theorem~\ref{thm:eq}, the state $(0,\frac{k}{k+1})$ also becomes another Nash equilibrium when the computational power possessed by players (except for $\Omega_{\tt stick}$) is infinitesimal. Note that this state indicates the stable coexistence of $coin_{\tt A}$ and $coin_{\tt B}.$ Indeed, when $\norm{\bm{c}}_2$ is closer to 0, the difference among payoffs of players in $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace would also be closer to 0 at the state $(0,\frac{k}{k+1})$. Therefore, under the assumption that players possess infinitesimal power, payoffs of players in $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace are the same at the state $(0,\frac{k}{k+1})$ while the mining difficulties of $coin_{\tt A}$ and $coin_{\tt B}$ are maintained as $\frac{1}{k+1}$ and $\frac{k}{k+1}$, respectively. Meanwhile, at the remaining equilibria except for the state $(0,\frac{k}{k+1})$, only the $coin_{\tt B}$-factions $\Omega_{\tt stick}$ conduct $coin_{\tt B}$-mining after the $coin_{\tt B}$-mining difficulty increases. 
In particular, if no $coin_{\tt B}$-faction sticking to $coin_{\tt B}$-mining exists, loyal mining power to $coin_{\tt B}$ is 0 in the Nash equilibria.} Note that, in this case, $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace would continuously conduct $coin_{\tt A}$-mining, because the mining difficulty of $coin_{\tt B}$ has not decreased after the previous increase in difficulty. These players also would not change their strategy because the mining difficulty of $coin_{\tt B}$ increases to a significantly high value due to the heavy occurrence of fickle mining. \smallskip \noindent \textbf{Example.} Considering the case $c_{\tt stick}=0,$ we give an example where $(r_{\mathcal F}=0.2,r_{\mathcal B}=0), k=0.3,$ and the initial mining difficulty of $coin_{\tt B}$ is 0.4. The state $(0.2,0)$ is not a Nash equilibrium according to Theorem~\ref{thm:inf_nash}. Because fickle miners continuously conduct $coin_{\tt A}$-mining, the mining difficulty of $coin_{\tt A}$ is maintained as 1, and players in $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace earn the payoff of 1. If a player moves into $\mathcal{M_B}$\xspace, the player would earn $\frac{0.3}{0.4}$ for a while at the beginning. However, because the mining difficulty of $coin_{\tt B}$ decreases after $\mathcal{M_B}$\xspace finds several blocks, the player who moves to $\mathcal{M_B}$\xspace would eventually earn $\frac{0.3}{0.2}$ consistently. Note that the time duration in which the mining difficulty of $coin_{\tt B}$ is close to 0 is negligible compared to the time duration in which the mining difficulty of $coin_{\tt B}$ is 0.2. Therefore, the payoff of $\mathcal{M_B}$\xspace is $\frac{0.3}{0.2},$ and rational players tend to move to $\mathcal{M_B}$\xspace due to the higher payoff. This means that the state $(0.2,0)$ is not a Nash equilibrium.
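Both equilibrium types can be checked with the example's arithmetic, taking the per-unit-hash payoff as coin price divided by mining difficulty (the price of $coin_{\tt A}$ is normalized to 1 and the price of $coin_{\tt B}$ is $k$; $k=0.3$ and the settled difficulty $0.2$ are the example's numbers):

```python
k = 0.3

# (i) Coexistence state (r_F, r_B) = (0, k/(k+1)): the difficulties pin to
# the hash shares 1/(k+1) and k/(k+1), so both coins pay k + 1 per unit hash.
r_b = k / (k + 1)
payoff_a = 1.0 / (1.0 - r_b)
payoff_b = k / r_b

# (ii) The example's state (0.2, 0): staying on coin_A pays 1, while a
# deviator to M_B eventually mines at the settled difficulty 0.2.
payoff_stay = 1.0
payoff_deviate = k / 0.2   # = 1.5 > 1, so the deviation is profitable
```

Since the deviation in (ii) is profitable, $(0.2,0)$ is not an equilibrium, in agreement with Theorem~\ref{thm:inf_nash}.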
\subsection{Dynamics in game $\mathcal{G}(\bm{c}, c_{\tt stick})$} \label{sec:infinite} In this section, we analyze dynamics in the game $\mathcal{G}(\bm{c}, c_{\tt stick})$ and study how a state can reach an equilibrium. \iffalse Indeed, because there exist significantly many miners in practice, we apply game $\mathcal{G}(\bm{c}, c_{\tt stick})$ to the practical system in Section~\ref{sec:application}. According to Theorem~\ref{thm:inf_nash}, in the practical system, not only the downfall of $coin_{\tt B}$ but also the coexistence of two coins can be a stable state even if fickle mining is possible, giving more positive result than that for game $\mathcal G_n.$ We present the proof of Theorem~\ref{thm:inf_nash} in Appendix~\ref{sec:pf_in}. \fi \begin{figure}[t] \centering \includegraphics[width=0.65\columnwidth]{figure/zones.png} \caption{Horizontal and vertical axes give the values of $r_{\mathcal F}$ and $r_{\mathcal B}$, respectively, and $(r_{\mathcal F}, r_{\mathcal B})$-coordinates of vertices in zones are marked. At the vertex of $Zone_1$\xspace and $Zone_3$\xspace, $\alpha$ is a solution of equation $N_{\tt in}r_{\mathcal B}^3+N_{\tt de}r_{\mathcal B}(1+k)-kN_{\tt de}=0$ for $r_{\mathcal B}$. 
All points in $Zone_1$\xspace, $Zone_2$\xspace, and $Zone_3$\xspace move in directions $(-,-)$, $(-,+)$, and $(+,-)$, respectively.} \label{fig:zones} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.75\columnwidth]{figure/eqs.png} \caption{Yellow points and line represent equilibria for each case.} \label{fig:eqs} \end{figure} \blue{\smallskip\noindent \textbf{Best response dynamics.} In game $\mathcal{G}(\bm{c}, c_{\tt stick})$, point $(r_{\mathcal F},r_{\mathcal B})$ reaches either of the two types of Nash equilibria: the stable coexistence of two coins and the lack of loyal miners to $coin_B.$} Figure~\ref{fig:zones} represents dynamics in game $\mathcal{G}(\bm{c}, c_{\tt stick})$, where horizontal and vertical axes are $r_{\mathcal F}$ and $r_{\mathcal B}$ values, respectively. A line, $boundary_{1,3}$, represents \begin{small} \begin{equation} \begin{aligned} &\frac{r_{\mathcal B}}{(1-r_{\mathcal F}-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}\\ &=\frac{k}{N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}.\label{eq:line1} \end{aligned} \end{equation} \end{small} \noindent On the line, the payoffs of $\mathcal{M_F}$\xspace (i.e., $U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B})$) and $\mathcal{M_A}$\xspace (i.e., $U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B})$) are the same. In addition, the line does not intersect with the line $(0\le r_{\mathcal F} \le 1, r_{\mathcal B}=0)$ and has an intersection $(1-\alpha, \alpha)$ with the line $r_{\mathcal F}+r_{\mathcal B}=1$ for $0\le r_{\mathcal F} \le 1$, where $\alpha$ is a solution of equation $N_{\tt in}r_{\mathcal B}^3+N_{\tt de}r_{\mathcal B}(1+k)-kN_{\tt de}=0$ for $r_{\mathcal B}$. The equation $N_{\tt in}r_{\mathcal B}^3+N_{\tt de}r_{\mathcal B}(1+k)-kN_{\tt de}=0$ has only one solution $\alpha$, and it is between 0 and $\frac{k}{1+k}$. 
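The stated properties of $boundary_{1,3}$ can be verified numerically. In the sketch below, $N_{\tt in}$, $N_{\tt de}$, and $k$ are illustrative values only:

```python
import numpy as np
from scipy.optimize import brentq

n_in, n_de, k = 10.0, 5.0, 0.3     # illustrative parameter values

def b13(r_f, r_b):
    """LHS minus RHS of the boundary_{1,3} condition (U_F = U_A on it)."""
    lhs = r_b / ((1 - r_f - r_b) * n_in * r_b ** 2
                 + (1 - r_b) * n_de * (r_f + r_b) ** 2)
    rhs = k / (n_in * r_b ** 2 + n_de * (r_f + r_b) ** 2)
    return lhs - rhs

# alpha: the unique root of N_in r^3 + N_de (1+k) r - k N_de = 0.  The
# polynomial is strictly increasing and negative at r = 0, hence one root.
g = lambda r: n_in * r ** 3 + n_de * (1 + k) * r - k * n_de
alpha = brentq(g, 0.0, 1.0)

on_upper_edge = b13(1.0 - alpha, alpha)   # boundary meets r_F + r_B = 1 here
# boundary_{1,3} never reaches the r_B = 0 axis: its LHS vanishes there
# while its RHS stays positive.
sign_near_axis = max(b13(r_f, 1e-9) for r_f in np.linspace(0.05, 0.95, 10))
```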
Another line, $boundary_{2,3}$, represents \begin{small} \begin{equation} \begin{aligned} &\frac{(r_{\mathcal F}+r_{\mathcal B})}{(1-r_{\mathcal F}-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}\\ &=\frac{k}{N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}, \label{eq:line2} \end{aligned} \end{equation} \end{small} \noindent and the payoffs of $\mathcal{M_F}$\xspace (i.e., $U_{\mathcal{F}}$) and $\mathcal{M_B}$\xspace (i.e., $U_{\mathcal{B}}$) are the same on the line. The line does not intersect with the line $r_{\mathcal F}+r_{\mathcal B}=1$ for $0\le r_{\mathcal F} \le 1$ and has an intersection $(k,0)$ with the line $(0\le r_{\mathcal F} \le 1, r_{\mathcal B}=0)$. Moreover, it is most profitable among the three strategies to continually conduct $coin_{\tt A}$-mining ($\mathcal{A}$\xspace) in a zone above $boundary_{1,3}$. We let this zone be $Zone_1$\xspace. In the zone below $boundary_{2,3}$, it is most profitable to continually conduct $coin_{\tt B}$-mining ($\mathcal{B}$\xspace), and the zone is denoted as $Zone_2$\xspace. In the zone between $boundary_{1,3}$ and $boundary_{2,3}$, fickle mining ($\mathcal{F}$\xspace) is the most profitable, and this zone is denoted as $Zone_3$\xspace. \textit{Note that the range of zones changes if the coin price changes because boundaries are functions of $k.$} The moving direction of point $(r_{\mathcal F},r_{\mathcal B})$ is expressed as a red arrow in Figure~\ref{fig:zones}. For ease of reading, we express directions in which values $r_{\mathcal F}$ and $r_{\mathcal B}$ increase ($+$) or decrease ($-$) as $(\pm,\pm)$. For example, $(+,+)$ indicates the direction in which both values, $r_{\mathcal F}$ and $r_{\mathcal B}$, increase. In $Zone_1$\xspace, $\mathcal{A}$\xspace is the most profitable strategy, and thus every point in $Zone_1$\xspace moves in the direction $(-,-)$. 
In $Zone_2$\xspace, because $\mathcal{B}$\xspace is the most profitable strategy, every point moves in the direction $(-,+)$. Finally, in $Zone_3$\xspace, as $\mathcal{F}$\xspace is the most profitable strategy, every point in $Zone_3$\xspace moves in the direction $(+,-)$. Figure~\ref{fig:zones} shows the directions in the three zones ($Zone_1$\xspace, $Zone_2$\xspace, and $Zone_3$\xspace). \blue{\smallskip \noindent\textbf{2D-Illustration of movement towards equilibria.} To determine which equilibrium can be reached within each zone, we represent all Nash equilibria in game $\mathcal{G}(\bm{c}, c_{\tt stick})$ depending on the value of $c_{\tt stick}$ as yellow points and line in Figure~\ref{fig:eqs}. In the figure, the red dashed lines represent $r_{\mathcal B}=c_{\tt stick}$ for each case. As described in Section~\ref{sec:finite}, there are two types of equilibrium points: 1) a lack of loyal miners and 2) stable coexistence of two coins. The equilibrium point representing a lack of loyal miners would be located on a red dashed line $r_{\mathcal B}=c_{\tt stick}$, and we can see that all cases have this equilibrium. For Cases~1, 2, and 3, the second type of equilibrium (i.e., $(0, \frac{k}{k+1})$) representing stable coexistence of two coins is also found. A point $(r_{\mathcal F}, r_{\mathcal B})$ moves in a direction depending on its zone. In the meantime, if the point meets the line $r_{\mathcal B}=c_{\tt stick},$ then the point moves toward an equilibrium located on the line $r_{\mathcal B}=c_{\tt stick}$ as shown in Figure~\ref{fig:eqs}. In particular, the value of $r_{\mathcal F}$ in the equilibrium on the red dashed line representing Case 3 is denoted by $\beta$, where the equilibrium is the intersection point between $boundary_{1,3}$ and the red dashed line. Note that a point in $Zone_2$\xspace would not meet a red dashed line because the point in $Zone_2$\xspace moves in the direction $(-,+)$ and can always be above the red dashed line.
Therefore, such points in $Zone_2$\xspace are likely to reach the stable coexistence of $coin_{\tt A}$ and $coin_{\tt B}.$ However, some points (near $boundary_{2,3}$) in $Zone_2$\xspace can also move into $Zone_3$\xspace when more miners in $\mathcal{M_A}$\xspace than in $\mathcal{M_F}$\xspace revise their strategies, and then it is possible to reach the equilibrium representing a lack of loyal miners to $coin_{\tt B}$.} \section{Proof of Lemma~\ref{lem:dynamics} and Theorem~\ref{thm:eq}} \label{sec:pf_eq} \blue{In order for player $i$ to not change its strategy at $(r_{\mathcal F},r_{\mathcal B})$, the following inequalities must be satisfied. \begin{align} &\begin{cases} &U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{A}}(r_{\mathcal F}-c_i,r_{\mathcal B}),\\ &U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{B}}(r_{\mathcal F}-c_i,r_{\mathcal {B}}+c_i)\label{eq:fm} \end{cases}\\ &\begin{cases} &U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}), \\ &U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal {B}}+c_i)\label{eq:am} \end{cases}\\ &\begin{cases} &U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i),\\ &U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B}-c_i)\label{eq:bm} \end{cases} \end{align} \eqref{eq:fm} represents that a fickle miner's payoff decreases when the fickle miner moves to $\mathcal{M_A}$\xspace (i.e., $U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{A}}(r_{\mathcal F}-c_i,r_{\mathcal B})$) or when it moves to $\mathcal{M_B}$\xspace (i.e., $U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal{B}}(r_{\mathcal F}-c_i,r_{\mathcal {B}}+c_i)$).
} Similarly, \eqref{eq:am} and \eqref{eq:bm} represent that players in $\mathcal{M_A}$\xspace and $\mathcal{M_B}$\xspace cannot increase their payoff by changing their strategy, respectively. \blue{To prove Lemma~\ref{lem:dynamics}, we first consider the case $c_{\tt stick}=0$ and proceed in the following steps. \begin{enumerate} \item First, we find all states characterized as $(r_{\mathcal F},0)$ in which player $i$ does not change its strategy. \item Second, we show that there is no state characterized as $(0,r_{\mathcal B})$ in which player $i$ does not change its strategy. \item Finally, we show that there exists $\varepsilon>0$ such that any player $i$ with $c_i<\varepsilon$ can change its strategy at any state $(r_{\mathcal F},r_{\mathcal B})$ where $r_{\mathcal B}$ is positive. \end{enumerate}} \noindent\textbf{First step: } We find all states characterized as $(r_{\mathcal F},0)$ in which player $i$ does not change its strategy, where we denote such a state by $S$. In order for $(0,0)$ to be $S$, it is sufficient that \eqref{eq:am} is satisfied. Meanwhile, when $r_{\mathcal F}$ is greater than 0 and less than 1, in order for $(r_{\mathcal F},0)$ to be $S$, not only \eqref{eq:am} but also \eqref{eq:fm} should be satisfied. If $r_{\mathcal F}$ is 1, only \eqref{eq:fm} should be satisfied. First, we consider the condition for $(0,0)$ to be $S$. The payoff $U_{\mathcal{A}}(0,0)$ of players in $\mathcal{M_A}$\xspace is 1, and the payoff $U_{\mathcal{F}}(c_i,0)$ of a player who changes its strategy from $\mathcal{A}$\xspace to $\mathcal{F}$\xspace is also 1. Because $U_{\mathcal{A}}(0,0)\geq U_{\mathcal{F}}(c_i,0),$ it is sufficient to show $U_{\mathcal{A}}(0,0)\geq U_{\mathcal{B}}(0,c_i)$ in order for $(0,0)$ to be $S$. The payoff $U_{\mathcal{B}}(0,c_i)$ is $\frac{k}{c_i}$, and thus the state $(0,0)$ cannot be $S$ for $c_i$ less than $k.$ Next, we consider $(r_{\mathcal F},0)$ where $r_{\mathcal F}$ is greater than 0.
In \eqref{eq:fm}, $U_{\mathcal{F}}(r_{\mathcal F},0)$ and $U_{\mathcal{A}}(r_{\mathcal F}-c_i,0)$ are 1. Moreover, in \eqref{eq:fm}, $U_{\mathcal{B}}(r_{\mathcal F}-c_i,c_i)\leq U_{\mathcal{F}}(r_{\mathcal F},0)$ can be arranged as follows. \begin{align} &\frac{{kN_{\tt in}c_i}+kN_{\tt de}r_{\mathcal F}}{N_{\tt in}c_i^2+N_{\tt de}r_{\mathcal F}^2}\leq 1\\ \Leftrightarrow &\,\, 0 \leq N_{\tt de}r_{\mathcal F}^2-kN_{\tt de}r_{\mathcal F}-N_{\tt in}c_i(k-c_i) \notag\\ \Leftrightarrow &\,\, r_{\mathcal F} \leq \frac{k}{2}-\frac{\sqrt{N_{\tt de}^2k^2+4N_{\tt de}N_{\tt in}(kc_i-c_i^2)}}{2N_{\tt de}} \text{ or }\label{eq:con1} \\ &\,\,\frac{k}{2}+\frac{\sqrt{N_{\tt de}^2k^2+4N_{\tt de}N_{\tt in}(c_ik-c_i^2)}}{2N_{\tt de}} \leq r_{\mathcal F}\label{eq:con2} \end{align} If $c_i$ is less than $k$, \eqref{eq:con1} cannot be satisfied because the right-hand side is negative. Also, if \begin{align*} &c_i \leq \frac{kN_{\tt in}-\sqrt{k^2 N_{\tt in}^2-4N_{\tt de}N_{\tt in}(1-k)}}{2N_{\tt in}} \text{ or } \\ &k^2 N_{\tt in}^2-4N_{\tt de}N_{\tt in}(1-k) \leq 0, \end{align*} the left-hand side of \eqref{eq:con2} is less than or equal to 1, and $(1,0)$ is $S$. By \eqref{eq:am}, $U_{\mathcal{B}}(r_{\mathcal F},c_i)$ should be less than or equal to 1 in order that $(r_{\mathcal F},0)$ where $r_{\mathcal F}$ is greater than 0 and less than 1 is $S$. Referring to \eqref{eq:con2}, the following is satisfied: \begin{equation*} \resizebox{\hsize}{!}{$ \frac{k}{2}+\frac{\sqrt{N_{\tt de}^2k^2+4N_{\tt de}N_{\tt in}(c_ik-c_i^2)}}{2N_{\tt de}} \leq r_{\mathcal F}+c_i \Rightarrow U_{\mathcal{B}}(r_{\mathcal {F}},c_i) \leq 1.$} \end{equation*} Therefore, when $$\frac{k}{2}+\frac{\sqrt{N_{\tt de}^2k^2+4N_{\tt de}N_{\tt in}(c_ik-c_i^2)}}{2N_{\tt de}} \leq r_{\mathcal F},$$ both \eqref{eq:fm} and \eqref{eq:am} are satisfied.
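A numerical spot-check of the rearrangement into \eqref{eq:con1} and \eqref{eq:con2}; the values of $N_{\tt in}$, $N_{\tt de}$, $k$, and $c_i$ below are illustrative only:

```python
import numpy as np

n_in, n_de, k, c_i = 10.0, 5.0, 0.3, 0.01
# Roots of N_de r^2 - k N_de r - N_in c_i (k - c_i) = 0, as in (con1)/(con2).
disc = np.sqrt(n_de ** 2 * k ** 2 + 4 * n_de * n_in * (k * c_i - c_i ** 2))
r_lo = k / 2 - disc / (2 * n_de)   # negative for c_i < k, so (con1) is void
r_hi = k / 2 + disc / (2 * n_de)

# The inequality (k N_in c_i + k N_de r_F) <= (N_in c_i^2 + N_de r_F^2)
# should hold exactly when r_F lies outside the interval (r_lo, r_hi).
mismatches = 0
for r_f in np.linspace(0.01, 1.0, 199):
    direct = k * n_in * c_i + k * n_de * r_f <= n_in * c_i ** 2 + n_de * r_f ** 2
    via_roots = r_f <= r_lo or r_f >= r_hi
    mismatches += direct != via_roots
```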
As a result, when $$c_i\leq\frac{kN_{\tt in}-\sqrt{k^2 N_{\tt in}^2-4N_{\tt de}N_{\tt in}(1-k)}}{2N_{\tt in}},$$ all points $(r_{\mathcal F},0)$ with $\frac{k}{2}+\frac{\sqrt{N_{\tt de}^2k^2+4N_{\tt de}N_{\tt in}(c_ik-c_i^2)}}{2N_{\tt de}}\leq r_{\mathcal F}\leq 1$ are $S$. \smallskip \noindent\textbf{Second step: } As the second step, we show that game $\mathcal{G} (\bm{c},c_{\tt stick})$ does not have a state $(0,r_{\mathcal B})$, where $r_{\mathcal B}$ is positive, in which player $i$ does not change its strategy. In order for $(0,r_{\mathcal B})$ where $r_{\mathcal B}$ is greater than 0 and less than 1 to be $S$, both \eqref{eq:am} and \eqref{eq:bm} should be satisfied for player $i$. First, we consider the inequality $U_{\mathcal{F}}(c_i,r_{\mathcal B}) \leq U_{\mathcal{A}}(0,r_{\mathcal B})$. This inequality is expressed as follows: \begin{align} &U_{\mathcal{F}}(c_i,r_{\mathcal B}) \leq U_{\mathcal{A}}(0,r_{\mathcal B})\notag\\ &\Leftrightarrow \frac{N_{\tt de}(c_i+r_{\mathcal{B}})^2}{(1-c_i-r_{\mathcal{B}})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal{B}}+c_i)^2}+\notag\\ &\hspace{5mm}\frac{kN_{\tt in}r_{\mathcal{B}}}{N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal{B}}+c_i)^2}\leq\frac{1}{1-r_{\mathcal B}}\notag\\ &\Leftrightarrow k(1-r_{\mathcal B})\left((1-c_i-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal {B}}+c_i)^2\right)\notag\\ &\hspace{5mm}\leq r_{\mathcal{B}}\left(1-c_i-r_{\mathcal{B}}\right)\left(N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal {B}}+c_i)^2\right)\notag\\ &\Leftrightarrow kN_{\tt de}c_i(1-r_{\mathcal B})(r_{\mathcal{B}}+c_i)^2 \leq ((1+k)r_{\mathcal B}-k)\times\notag\\ &\hspace{5mm}\left(1-c_i-r_{\mathcal B}\right)\left(N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal{B}}+c_i)^2\right) \label{eq:pf1} \end{align} The other inequality $U_{\mathcal{F}}(c_i,r_{\mathcal B}-c_i) \leq U_{\mathcal{B}}(0,r_{\mathcal B})$ can be expressed as follows: \begin{align} &U_{\mathcal{F}}(c_i,r_{\mathcal B}-c_i) \leq U_{\mathcal{B}}(0,r_{\mathcal B})\notag\\ &\Leftrightarrow \frac{N_{\tt de}r_{\mathcal B}^2}{(1-r_{\mathcal B})N_{\tt in}(r_{\mathcal B}-c_i)^2+(1-r_{\mathcal B}+c_i)N_{\tt de}r_{\mathcal B}^2}+\notag\\ &\hspace{5mm}\frac{kN_{\tt in}(r_{\mathcal B}-c_i)}{N_{\tt in}(r_{\mathcal B}-c_i)^2+N_{\tt de}r_{\mathcal B}^2}\leq\frac{k}{r_{\mathcal {B}}}\notag\\ &\Leftrightarrow \left(N_{\tt in}(r_{\mathcal B}-c_i)^2+N_{\tt de}r_{\mathcal B}^2\right)\label{eq:pf2}\\ &\hspace{5mm}\times \left(N_{\tt de}r_{\mathcal B}^3-k(1-r_{\mathcal B})(N_{\tt de}r_{\mathcal B}^2-N_{\tt in}c_i(r_{\mathcal B}-c_i))\right)\notag\\ &\hspace{5mm}\leq k\left(N_{\tt de}r_{\mathcal B}^2-N_{\tt in}c_i(r_{\mathcal B}-c_i)\right)N_{\tt de}r_{\mathcal B}^2c_i\notag \end{align} The first factor on the left-hand side of \eqref{eq:pf2} is greater than or equal to $N_{\tt de}r_{\mathcal B}^2$. Therefore, the following inequality \begin{align} &N_{\tt de}r_{\mathcal B}^3-k(1-r_{\mathcal B})(N_{\tt de}r_{\mathcal B}^2-N_{\tt in}c_i(r_{\mathcal B}-c_i))\leq \notag\\ &kc_i \left(N_{\tt de}r_{\mathcal B}^2-N_{\tt in}c_i(r_{\mathcal B}-c_i)\right)\notag\\ &\Leftrightarrow N_{\tt de}r_{\mathcal B}^3-k(1+c_i-r_{\mathcal B})(N_{\tt de}r_{\mathcal B}^2-N_{\tt in}c_i(r_{\mathcal B}-c_i))\leq 0 \label{eq:pf3} \end{align} should be satisfied. We denote the left-hand side of \eqref{eq:pf3} by a function $f(c_i)$ of $c_i$.
Moreover, if \eqref{eq:pf4} is satisfied, \eqref{eq:pf1} certainly cannot be satisfied, as follows: \begin{small} \begin{align} &((1+k)r_{\mathcal B}-k)N_{\tt in}<N_{\tt de}(k(1+c_i)-(1+k)r_{\mathcal B})\label{eq:pf4}\\ \Rightarrow&((1+k)r_{\mathcal B}-k)N_{\tt in}r_{\mathcal B}^2<N_{\tt de}(k(1+c_i)-(1+k)r_{\mathcal B})(r_{\mathcal {B}}+c_i)^2\notag\\ \Leftrightarrow & ((1+k)r_{\mathcal B}-k)(N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal {B}}+c_i)^2)<kN_{\tt de}c_i(r_{\mathcal {B}}+c_i)^2\notag\\ \Rightarrow &((1+k)r_{\mathcal B}-k)\left(1-c_i-r_{\mathcal B}\right)\left(N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal {B}}+c_i)^2\right)\notag\\ &<kN_{\tt de}c_i(1-r_{\mathcal B})(r_{\mathcal {B}}+c_i)^2\notag \end{align} \end{small} Thus, if \eqref{eq:pf4} is satisfied for all $r_{\mathcal B}$ that satisfy \eqref{eq:pf3}, there would not exist $S=(0,r_{\mathcal B})$ where $r_{\mathcal B}$ is greater than 0 and less than 1, because no such state $(0,r_{\mathcal B})$ satisfies both \eqref{eq:am} and \eqref{eq:bm}. We find a condition on $c_i$ such that there is no $S=(0,r_{\mathcal B})$ where $r_{\mathcal B}$ is greater than 0 and less than 1. In other words, we find a range of $c_i$ such that \eqref{eq:pf4} is satisfied for all $r_{\mathcal B}$ that satisfy \eqref{eq:pf3}. Eq.~\eqref{eq:pf4} is equivalent to the following inequality: $$r_{\mathcal B}<\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right).$$ When $r_{\mathcal B}$ is $\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right)$, $f(c_i)$ is a quadratic function of $c_i$ with a negative coefficient of $c_i^2$. Therefore, we can easily find a number $l$ such that, for all $c_i<l$, $f(c_i)$ is positive when $r_{\mathcal B}$ is $\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right)$.
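The stated equivalence between \eqref{eq:pf4} and the threshold on $r_{\mathcal B}$ can be verified numerically; the parameter values below are hypothetical:

```python
# Check (hypothetical parameters): inequality (pf4),
# ((1+k) r_B - k) N_in < N_de (k (1+c_i) - (1+k) r_B),
# holds exactly when r_B < (k/(1+k)) * (1 + N_de c_i / (N_de + N_in)).
def pf4(r_B, k, N_in, N_de, c_i):
    return ((1 + k) * r_B - k) * N_in < N_de * (k * (1 + c_i) - (1 + k) * r_B)

def threshold(k, N_in, N_de, c_i):
    return (k / (1 + k)) * (1 + N_de * c_i / (N_de + N_in))

k, N_in, N_de, c_i = 0.3, 2.0, 4.0, 0.05
t = threshold(k, N_in, N_de, c_i)
for i in range(1000):
    r_B = i / 1000
    assert pf4(r_B, k, N_in, N_de, c_i) == (r_B < t)
```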
Then, we find the derivative $\frac{\partial f(c_i)}{\partial r_{\mathcal {B}}}$, and it is expressed as \begin{equation} 3N_{\tt de}(1+k)r_{\mathcal B}^2-2k\left(N_{\tt de}(1+c_i)+N_{\tt in}c_i\right)r_{\mathcal {B}}+kN_{\tt in}c_i+2kN_{\tt in}c_i^2.\label{eq:pf5} \end{equation} In order for the derivative to be non-negative when $r_{\mathcal B}$ is not less than $\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right),$ \eqref{eq:pf5} should have no root for $r_{\mathcal B}$, or all its roots for $r_{\mathcal B}$ should be less than $\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right).$ If roots exist, they are positive. Therefore, when the sum of the roots, $\frac{2k\left(N_{\tt de}(1+c_i)+N_{\tt in}c_i \right)}{3N_{\tt de}(1+k)},$ is less than $\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right)$, the roots are less than $\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right).$ In other words, when $\frac{1}{c_i}$ is greater than $2+\frac{2N_{\tt in}}{N_{\tt de}}-\frac{3N_{\tt de}}{N_{\tt de}+N_{\tt in}}$, the roots are less than $\frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right).$ As a result, if $\frac{1}{c_i}>\max\{\frac{1}{l},2+\frac{2N_{\tt in}}{N_{\tt de}}-\frac{3N_{\tt de}}{N_{\tt de}+N_{\tt in}}\},$ $f(c_i)$ is positive for $r_{\mathcal B} \geq \frac{k}{1+k}\left(1+\frac{N_{\tt de}c_i}{N_{\tt de}+N_{\tt in}}\right).$ This means that, for small $c_i,$ \eqref{eq:pf1} cannot be satisfied for any $r_{\mathcal B}$ that satisfies \eqref{eq:pf3}, and there is no $S=(0,r_{\mathcal B})$ where $r_{\mathcal B}$ is greater than 0 and less than 1. For a state $(0,1),$ \eqref{eq:bm} should be satisfied to be $S$. However, the state $(0,1)$ satisfies \eqref{eq:bm} only when $c_i\geq\frac{1}{k}.$ Note that $k$ is not greater than 1, and thus $c_i<1\leq\frac{1}{k}.$ Therefore, $(0,1)$ cannot be $S$.
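The closed form in \eqref{eq:pf5} can be checked against a finite-difference approximation of $\partial f/\partial r_{\mathcal B}$; the parameter values below are hypothetical:

```python
# Check (hypothetical parameters, f taken from (pf3)): the derivative of
# f(r_B) = N_de r_B^3 - k (1 + c_i - r_B)(N_de r_B^2 - N_in c_i (r_B - c_i))
# with respect to r_B matches the quadratic in (pf5).
def f(r_B, k, N_in, N_de, c_i):
    return N_de * r_B**3 - k * (1 + c_i - r_B) * (N_de * r_B**2 - N_in * c_i * (r_B - c_i))

def df(r_B, k, N_in, N_de, c_i):  # expression (pf5)
    return (3 * N_de * (1 + k) * r_B**2
            - 2 * k * (N_de * (1 + c_i) + N_in * c_i) * r_B
            + k * N_in * c_i + 2 * k * N_in * c_i**2)

k, N_in, N_de, c_i, h = 0.25, 2.0, 3.0, 0.02, 1e-6
for r_B in (0.1, 0.3, 0.5, 0.8):
    num = (f(r_B + h, k, N_in, N_de, c_i) - f(r_B - h, k, N_in, N_de, c_i)) / (2 * h)
    assert abs(num - df(r_B, k, N_in, N_de, c_i)) < 1e-6
```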
\smallskip\noindent\textbf{Third step: } For the third step, we consider the game when a player possesses sufficiently small power. When $r_{\mathcal B}$ is positive, the inequality $\lim_{c_i\rightarrow 0}U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}) \leq U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B})$ is as follows. \begin{align} &\lim_{c_i\rightarrow 0}U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}) \leq U_{\mathcal{A}} (r_{\mathcal F},r_{\mathcal B})\notag\\ \Leftrightarrow &\,\, U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B})\leq U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B})\notag\\ \Leftrightarrow &\,\,\frac{k}{N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}\notag\\ &\leq \frac{r_{\mathcal {B}}}{(1-r_{\mathcal F}-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}\notag\\ \Leftrightarrow &\,\,k((1-r_{\mathcal F}-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2)\notag\\ &\leq r_{\mathcal B}(N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2) \label{eq:pf1_c} \end{align} Also, the inequality $\lim_{c_i\rightarrow 0} U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i) \leq U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B})$ is as follows. \begin{align} \lim_{c_i\rightarrow 0}& U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i) \leq U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B})\notag\\ \Leftrightarrow &\,\, U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B})\leq U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B})\notag\\ \Leftrightarrow & \,\,(r_{\mathcal F}+r_{\mathcal B})(N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2)\notag\\ &\hspace{-8mm}\leq k((1-r_{\mathcal F}-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2) \label{eq:pf2_c} \end{align} The only solution satisfying both \eqref{eq:pf1_c} and \eqref{eq:pf2_c} is $(0,\frac{k}{1+k})$.
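That $(0,\frac{k}{1+k})$ makes \eqref{eq:pf1_c} and \eqref{eq:pf2_c} hold with equality can be checked numerically; the parameter values below are hypothetical:

```python
# Check (hypothetical parameters): at (r_F, r_B) = (0, k/(1+k)) both limit
# inequalities (pf1_c) and (pf2_c) hold with equality.
def lhs_rhs(r_F, r_B, k, N_in, N_de):
    common = N_in * r_B**2 + N_de * (r_F + r_B)**2
    weighted = ((1 - r_F - r_B) * N_in * r_B**2
                + (1 - r_B) * N_de * (r_F + r_B)**2)
    pf1 = (k * weighted, r_B * common)           # (pf1_c): lhs <= rhs
    pf2 = ((r_F + r_B) * common, k * weighted)   # (pf2_c): lhs <= rhs
    return pf1, pf2

k, N_in, N_de = 0.4, 2.0, 5.0
r_B = k / (1 + k)
(p1l, p1r), (p2l, p2r) = lhs_rhs(0.0, r_B, k, N_in, N_de)
assert abs(p1l - p1r) < 1e-12 and abs(p2l - p2r) < 1e-12
```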
When $r_{\mathcal B}$ is greater than $\frac{k}{1+k}$, only \eqref{eq:pf1_c} is satisfied for $(0,r_{\mathcal B})$. Meanwhile, if $r_{\mathcal B}$ is less than $\frac{k}{1+k}$, only \eqref{eq:pf2_c} is satisfied for $(0,r_{\mathcal B})$. Therefore, the range of $(r_{\mathcal F},r_{\mathcal B})$, which satisfies \eqref{eq:pf1_c}, is always above that for \eqref{eq:pf2_c} except for $(0,\frac{k}{k+1})$ and $r_{\mathcal B}=0$ (see Figure~\ref{fig:zones}). It means that there exists a value $\varepsilon$ such that, for all $c_i<\varepsilon$ and given a positive real number $\delta$, the line where $U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}) = U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B})$ is always above the line where $U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i) = U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B})$ when $r_{\mathcal F}$, $r_{\mathcal B}$, and $1-r_{\mathcal F}-r_{\mathcal B}$ are in $[\delta, 1]$, $[c_i,1]$, and $[0,1]$, respectively. For ease of reading, we denote by $boundary_{\mathcal A}$ the line where $U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}) = U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B})$ when $r_{\mathcal F}$, $r_{\mathcal B}$, and $1-r_{\mathcal F}-r_{\mathcal B}$ are in $[0,1]$, $[c_i,1]$, and $[0,1]$, respectively. Also, we denote by $boundary_{\mathcal B}$ the line where $U_{\mathcal{F}}(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i) = U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B})$ when $r_{\mathcal F}$, $r_{\mathcal B}$, and $1-r_{\mathcal F}-r_{\mathcal B}$ are in $[0,1]$, $[c_i,1]$, and $[0,1]$, respectively. 
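The ordering of the two boundaries can be illustrated numerically with the limit ($c_i\rightarrow 0$) equality conditions from \eqref{eq:pf1_c} and \eqref{eq:pf2_c}; the parameter values ($k=0.4$, $N_{\tt in}=N_{\tt de}=1$, $r_{\mathcal F}=0.1$) are hypothetical:

```python
# Illustration (hypothetical parameters): locate each boundary's r_B at a
# fixed r_F by bisection on the limit equality conditions, and confirm that
# boundary_A lies above boundary_B.
def C(rF, rB, N_in=1.0, N_de=1.0):
    # common term N_in r_B^2 + N_de (r_F + r_B)^2
    return N_in * rB**2 + N_de * (rF + rB)**2

def W(rF, rB, N_in=1.0, N_de=1.0):
    # weighted term (1 - r_F - r_B) N_in r_B^2 + (1 - r_B) N_de (r_F + r_B)^2
    return (1 - rF - rB) * N_in * rB**2 + (1 - rB) * N_de * (rF + rB)**2

def bisect(g, lo, hi, it=100):
    # simple bisection; assumes g changes sign on [lo, hi]
    for _ in range(it):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

k, rF = 0.4, 0.1
gA = lambda rB: k * W(rF, rB) - rB * C(rF, rB)          # boundary_A: gA = 0
gB = lambda rB: k * W(rF, rB) - (rF + rB) * C(rF, rB)   # boundary_B: gB = 0
rB_A = bisect(gA, 1e-9, 1 - rF)
rB_B = bisect(gB, 1e-9, 1 - rF)
assert rB_B < rB_A  # boundary_A lies above boundary_B at this r_F
```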
Moreover, the derivative $\frac{\partial r_{\mathcal {B}}}{\partial r_{\mathcal {F}}}|_{r_{\mathcal F}=0}$ on $boundary_{\mathcal A}$ is greater than the derivative $\frac{\partial r_{\mathcal {B}}}{\partial r_{\mathcal {F}}}|_{r_{\mathcal F}=0}$ on $boundary_{\mathcal B}.$ Because $\frac{\partial r_{\mathcal {B}}}{\partial r_{\mathcal {F}}}$ is a continuous function, there exists a positive real number $\delta'$ such that, for all $x\in [0,\delta']$, the derivative $\frac{\partial r_{\mathcal {B}}}{\partial r_{\mathcal {F}}}|_{r_{\mathcal F}=x}$ on $boundary_{\mathcal A}$ is greater than the derivative $\frac{\partial r_{\mathcal {B}}}{\partial r_{\mathcal {F}}}|_{r_{\mathcal F}=x}$ on $boundary_{\mathcal B}.$ Then, there exists a number $\varepsilon^\prime$ such that, for all $c_i<\varepsilon^\prime$ and $x\in [0,\delta']$, the derivative $\frac{\partial r_{\mathcal {B}}}{\partial r_{\mathcal {F}}}|_{r_{\mathcal F}=x}$ on $boundary_{\mathcal A}$ is greater than the derivative $\frac{\partial r_{\mathcal {B}}}{\partial r_{\mathcal {F}}}|_{r_{\mathcal F}=x}$ on $boundary_{\mathcal B}.$ Also, as described above, there exists a number $\varepsilon$ such that, for all $c_i<\varepsilon$, $boundary_{\mathcal A}$ is above $boundary_{\mathcal B}$ when $r_{\mathcal F}$, $r_{\mathcal B}$, and $1-r_{\mathcal F}-r_{\mathcal B}$ are in $[\delta', 1]$, $[c_i,1]$, and $[0,1]$, respectively. In the second step, we showed that $(0,r_{\mathcal B})$ cannot be a state $S$ in which player $i$ does not change its strategy, when $\frac{1}{c_i}>\max\{\frac{1}{l}, 2+\frac{2N_{\tt in}}{N_{\tt de}}-\frac{3N_{\tt de}}{N_{\tt de}+N_{\tt in}}\}.$ Therefore, $$\forall \frac{1}{c_i}>\max\{\frac{1}{l}, 2+\frac{2N_{\tt in}}{N_{\tt de}}-\frac{3N_{\tt de}}{N_{\tt de}+N_{\tt in}}, \frac{1}{\varepsilon^\prime}\} \text{ and } \forall r_{\mathcal{F}}\in[0,\delta'],$$ the range for \eqref{eq:am} is always above that for \eqref{eq:bm} without any intersection.
As a result, there exists $\varepsilon^{\prime\prime}$ with $$\frac{1}{\varepsilon^{\prime\prime}}=\max\{\frac{1}{l}, 2+\frac{2N_{\tt in}}{N_{\tt de}}-\frac{3N_{\tt de}}{N_{\tt de}+N_{\tt in}}, \frac{1}{\varepsilon}, \frac{1}{\varepsilon^\prime}\}$$ such that, for all $c_i<\varepsilon^{\prime\prime},$ $(r_{\mathcal F},r_{\mathcal B})$ where $r_{\mathcal B}$ is positive is not $S$ in the game $\mathcal{G}(\bm{c},c_{\tt stick}).$ By the above three steps, if $c_{\tt stick}=0$, there exists $\varepsilon^{\prime\prime}$ such that, for all $c_i<\varepsilon^{\prime\prime},$ $S$ is characterized as presented in Lemma~\ref{lem:dynamics}. If $c_{\tt stick}>0,$ from this result, we can easily see that the value of $r_{\mathcal B}$ of $S$ is equal to $c_{\tt stick}.$ To characterize $S$ in this case, it is sufficient to carry out the second and third steps described above. Moreover, by Lemma~\ref{lem:dynamics}, the Nash equilibria in game $\mathcal{G}(\bm{c},c_{\tt stick})$ are characterized as presented in Theorem~\ref{thm:eq}. This completes the proof. \begin{comment} \subsection{Proof of Theorem~\ref{thm:dynamics}} \label{sec:pf_dy} According to best response dynamics, a state $(r_{\mathcal F},r_{\mathcal B})$ moves, and a path starting from an initial state is extended over steps. For ease of presentation, we denote the ranges of $(r_{\mathcal F},r_{\mathcal B})$ that satisfy the inequalities $U_{\mathcal A} (r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal F}(r_{\mathcal F}+c_i,r_{\mathcal B})$ and $U_{\mathcal B} (r_{\mathcal F},r_{\mathcal B})\geq U_{\mathcal F}(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i)$ by $Zone_{\mathcal A}$ and $Zone_{\mathcal B}$, respectively.
The rest of the range of $(r_{\mathcal F},r_{\mathcal B})$ is denoted by $Zone_{\mathcal F}.$ As described in Section~\ref{sec:pf_eq}, for large $n,$ $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal A}$ also satisfies the inequality $U_{\mathcal F}(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i)> U_{\mathcal B}(r_{\mathcal F},r_{\mathcal B})$, and $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal B}$ also satisfies the inequality $U_{\mathcal F}(r_{\mathcal F}+c_i,r_{\mathcal B})> U_{\mathcal A}(r_{\mathcal F},r_{\mathcal B})$. In addition, for large $n,$ $(r_{\mathcal F}+c_i,r_{\mathcal B})$ and $(r_{\mathcal F}-c_i,r_{\mathcal {B}}+c_i)$ are not in $Zone_{\mathcal B}$ for $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal A}$. Hereafter, we consider the game $\mathcal G_n$ for large $n.$ Moreover, we express a directed line in which the values $r_{\mathcal F}$ and $r_{\mathcal B}$ increase by $c_i$ ($+1$) or decrease by $c_i$ ($-1$) as $(\pm1,\pm1)$. For example, $(+1,+1)$ indicates a directed line in which both values, $r_{\mathcal F}$ and $r_{\mathcal B}$, increase by $c_i$. If only $r_{\mathcal F}$ or $r_{\mathcal B}$ increases by $c_i$, the directed line is denoted by $(+1,0)$ or $(0,+1)$. Similarly, if only $r_{\mathcal F}$ or $r_{\mathcal B}$ decreases by $c_i$, the directed line is denoted by $(-1,0)$ or $(0,-1)$. We consider that a player of $\mathcal{M_B}$\xspace revises its strategy at a state $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal A}$. Then a state $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal A}$ moves to $(r_{\mathcal F},r_{\mathcal B}-c_i)$ when $(r_{\mathcal F},r_{\mathcal B}-c_i)$ is in $Zone_{\mathcal A}$. If $(r_{\mathcal F},r_{\mathcal B}-c_i)$ is not in $Zone_{\mathcal A}$, the state $(r_{\mathcal F},r_{\mathcal B})$ moves to $(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i)$.
In the case that a player of $\mathcal{M_F}$\xspace revises its strategy at a state $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal A}$, if $(r_{\mathcal F}-c_i,r_{\mathcal B})$ is in $Zone_{\mathcal A}$, $(r_{\mathcal F},r_{\mathcal B})$ may move to $(r_{\mathcal F}-c_i,r_{\mathcal B})$. Also, because it is impossible that $(r_{\mathcal F}-c_i,r_{\mathcal {B}}+c_i)$ is in $Zone_{\mathcal B}$, $(r_{\mathcal F},r_{\mathcal B})$ cannot move to $(r_{\mathcal F}-c_i,r_{\mathcal {B}}+c_i).$ At the state $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal A},$ any player of $\mathcal{M_A}$\xspace does not change its strategy. Next, we consider that a player of $\mathcal{M_A}$\xspace revises its strategy at a state $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal B}$. If $(r_{\mathcal F},r_{\mathcal {B}}+c_i)$ is in $Zone_{\mathcal B}$, $(r_{\mathcal F},r_{\mathcal B})$ moves to $(r_{\mathcal F},r_{\mathcal {B}}+c_i).$ Otherwise, $(r_{\mathcal F},r_{\mathcal B})$ moves to $(r_{\mathcal F}+c_i,r_{\mathcal B})$. When a player of $\mathcal{M_F}$\xspace revises its strategy at $(r_{\mathcal F},r_{\mathcal B})$ and $(r_{\mathcal F}-c_i, r_{\mathcal {B}}+c_i)$ is in $Zone_{\mathcal B}$, $(r_{\mathcal F},r_{\mathcal B})$ may move to $(r_{\mathcal F}-c_i, r_{\mathcal {B}}+c_i)$. The state $(r_{\mathcal F},r_{\mathcal B})$ cannot move to $(r_{\mathcal F}-c_i,r_{\mathcal B})$ because $(r_{\mathcal F}-c_i,r_{\mathcal B})$ cannot be in $Zone_{\mathcal A}$. In $Zone_{\mathcal B}$, players in $\mathcal{M_B}$\xspace do not change their strategy. If a player of $\mathcal{M_F}$\xspace revises its strategy at $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal F}$, $(r_{\mathcal F},r_{\mathcal B})$ may move to $(r_{\mathcal F}-c_i,r_{\mathcal B})$ when $(r_{\mathcal F}-c_i,r_{\mathcal B})$ is in $Zone_{\mathcal A}$. 
When $(r_{\mathcal F}-c_i, r_{\mathcal {B}}+c_i)$ is in $Zone_{\mathcal B}$, $(r_{\mathcal F},r_{\mathcal B})$ may move to $(r_{\mathcal F}-c_i, r_{\mathcal {B}}+c_i)$. Then we assume that a player of $\mathcal{M_A}$\xspace changes its strategy at a state $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal F}$. If $(r_{\mathcal F},r_{\mathcal {B}}+c_i)$ is in $Zone_{\mathcal B}$, $(r_{\mathcal F},r_{\mathcal B})$ moves to $(r_{\mathcal F},r_{\mathcal {B}}+c_i)$. Otherwise, $(r_{\mathcal F},r_{\mathcal B})$ moves to $(r_{\mathcal F}-c_i,r_{\mathcal B})$. Lastly, we consider that a player of $\mathcal{M_B}$\xspace revises its strategy in $Zone_{\mathcal F}$. If $(r_{\mathcal F},r_{\mathcal B}-c_i)$ is in $Zone_{\mathcal A}$, $(r_{\mathcal F},r_{\mathcal B})$ moves to $(r_{\mathcal F},r_{\mathcal B}-c_i)$. Otherwise, $(r_{\mathcal F},r_{\mathcal B})$ moves to $(r_{\mathcal F}+c_i,r_{\mathcal B}-c_i)$. Figure~\ref{fig:arrows} summarizes the movement of each state. \begin{figure}[ht] \centering \includegraphics[width=0.75\columnwidth]{figure/arrows.png} \caption{Possible movements \vspace{-8mm}} \label{fig:arrows} \end{figure} We first show that no path includes a cycle. Paths that comprise only $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal A}$ and $Zone_{\mathcal F}$ cannot form a cycle because the states can move only in directions where $r_{\mathcal B}$ decreases, not in directions where $r_{\mathcal B}$ increases. In addition, paths that comprise only $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal B}$ and $Zone_{\mathcal F}$ cannot form a cycle. When a path comprises $l_1$ (-1,+1)-directed lines, $l_2$ (+1,-1)-directed lines, $l_3$ (0,+1)-directed lines, and $l_4$ (+1,0)-directed lines, $l_3$ and $l_4$ should be equal to $l_2-l_1$ and $l_1-l_2$, respectively, in order for the path to be a cycle. In other words, the sum of $l_3$ and $l_4$ is 0.
However, neither $l_3$ nor $l_4$ can be negative; thus a path that comprises only $(r_{\mathcal F},r_{\mathcal B})$ in $Zone_{\mathcal B}$ and $Zone_{\mathcal F}$ cannot be a cycle. Next, we consider a path that comprises states in $Zone_{\mathcal A}$ and $Zone_{\mathcal B}$ or all three zones. Then the path can be separated into connected components as follows. Without loss of generality, we first assume that the path starts at a state $(r_{\mathcal{F},1},r_{\mathcal{B},1})$ in $Zone_{\mathcal A}$. When the state of $Zone_{\mathcal B}$ by which the path first passes is denoted by $(r_{\mathcal{F},2},r_{\mathcal{B},2})$ and the previous state $(r_{\mathcal{F},2}^p,r_{\mathcal{B},2}^p)$ of $(r_{\mathcal{F},2},r_{\mathcal{B},2})$ is in $Zone_{\mathcal A}$, a subpath from the initial state $(r_{\mathcal{F},1},r_{\mathcal{B},1})$ to $(r_{\mathcal{F},2}^p,r_{\mathcal{B},2}^p)$ is regarded as the first component. If $(r_{\mathcal{F},2}^p,r_{\mathcal{B},2}^p)$ is in $Zone_{\mathcal F}$, not $Zone_{\mathcal A}$, a subpath from $(r_{\mathcal{F},1},r_{\mathcal{B},1})$ to $(r_{\mathcal{F},2}^{pp},r_{\mathcal{B},2}^{pp})$, which is the previous state of $(r_{\mathcal{F},2}^{p},r_{\mathcal{B},2}^{p})$, is regarded as the first component. Then when the previous state $(r_{\mathcal{F},3}^{p},r_{\mathcal{B},3}^{p})$ of $(r_{\mathcal{F},3},r_{\mathcal{B},3})$, which is the first state of $Zone_{\mathcal A}$ after $(r_{\mathcal{F},2},r_{\mathcal{B},2})$ in the path, is in $Zone_{\mathcal F}$, the subpath from $(r_{\mathcal{F},2},r_{\mathcal{B},2})$ or $(r_{\mathcal{F},2}^p,r_{\mathcal{B},2}^p)$ to the previous state $(r_{\mathcal{F},3}^{pp},r_{\mathcal{B},3}^{pp})$ of $(r_{\mathcal{F},3}^{p},r_{\mathcal{B},3}^{p})$ is regarded as the second component.
Meanwhile, if $(r_{\mathcal{F},3}^{p},r_{\mathcal{B},3}^{p})$ is in $Zone_{\mathcal B}$, the subpath from $(r_{\mathcal{F},2},r_{\mathcal{B},2})$ or $(r_{\mathcal{F},2}^{p},r_{\mathcal{B},2}^{p})$ to $(r_{\mathcal{F},3}^{p},r_{\mathcal{B},3}^{p})$ is regarded as the second component. The first and second types of components are denoted by $Com_1$ and $Com_2$, respectively. By repeating the above process, the path alternates between $Com_1$ and $Com_2$. These components $Com_1$ and $Com_2$ are connected with a directed line $(+1,-1)$ or $(+1,0)$. Moreover, $Com_1$ comprises directed lines $(-1,0)$, $(0,-1)$, $(+1,-1)$, and $(+1,0)$, and $Com_2$ comprises directed lines $(-1,+1)$, $(0,+1)$, $(+1,0)$, and $(+1,-1)$. Also, for ease of presentation, we express the $i$-th $Com_1$ and $Com_2$ in a path as $Com_1(i)$ and $Com_2(i)$, respectively. The initial state $(r_{\mathcal{F},1},r_{\mathcal{B},1})$ belongs to $Com_1(1)$, and $Com_1(1)$ consists of some points $(r_{\mathcal F},r_{\mathcal B})$ where $r_{\mathcal B}\leq r_{\mathcal{B},1}$, when considering possible directed lines in $Com_1$. The next component $Com_2(1)$ consists of some points $(r_{\mathcal F},r_{\mathcal B})$ where $r_{\mathcal B}< r_{\mathcal B,1}$, $r_{\mathcal {F}}>r_{\mathcal {F},1}$, or $r_{\mathcal F}+r_{\mathcal B}>r_{\mathcal {F},1}+r_{\mathcal {B},1}$, when considering possible directed lines in $Com_2$. In the next component $Com_1(2)$, there may exist points $(r_{\mathcal F},r_{\mathcal B})$ where $r_{\mathcal B}< r_{\mathcal B,1}-1$, $r_{\mathcal {F}}> r_{\mathcal F,1}+1$, or $r_{\mathcal F}+r_{\mathcal B}>r_{\mathcal F,1}+r_{\mathcal B,1}+1$. In general, for a state $(r_{\mathcal F},r_{\mathcal B})$ in a component $Com_1(i)$, either $r_{\mathcal F}\geq r_{\mathcal F,1}+2(i-1)$, $r_{\mathcal B} \geq r_{\mathcal B,1}+2(i-1)$, or $r_{\mathcal F}+r_{\mathcal B}\geq r_{\mathcal F,1}+r_{\mathcal B,1}+2(i-1)$ is true.
For a state $(r_{\mathcal F},r_{\mathcal B})$ in a component $Com_2(i)$, either $r_{\mathcal F}\geq r_{\mathcal F,1}+2i-1$, $r_{\mathcal B} \geq r_{\mathcal B,1}+2i-1$, or $r_{\mathcal F}+r_{\mathcal B}\geq r_{\mathcal F,1}+r_{\mathcal B,1}+2i-1$ is true. In order for the path to be a cycle, a component $Com_1(i>1)$ would have to include the state $(r_{\mathcal{F},1},r_{\mathcal{B},1})$. However, this is impossible, and therefore, the path cannot be a cycle. Because no path can be a cycle, an initial state would reach an equilibrium, the downfall of $coin_{\tt B}$, and the length of the path from the initial state to the equilibrium is at most $\frac{n^2}{2}$. The curves $boundary_{\mathcal A}$ and $boundary_{\mathcal B}$ are seventh-order polynomials. Therefore, for a given $r_{\mathcal F}$, a path can have at most eight directed lines that start from $(r_{\mathcal F},r_{\mathcal B})$ and are not $(+1,-1)$ and $(-1,+1)$. This implies that every path can include at most $8n$ directed lines except for $(+1,-1)$ and $(-1,+1).$ In the same manner, for a given $r_{\mathcal B}$, a path can have at most eight directed lines that start from $(r_{\mathcal F},r_{\mathcal B})$ and are not $(+1,0)$ and $(-1,0)$. Thus, every path can include at most $8n$ directed lines except for $(+1,0)$ and $(-1,0).$ As a result, the length of the path is roughly less than $16n,$ that is, linear in $n.$ Next, we find out how many steps it takes for an initial state to reach a Nash equilibrium in best response dynamics. If a player who already has the best strategy is selected in the process of best response dynamics, the path is not extended at that step. In other words, the path is not necessarily extended at every step. We denote by $T_s$ the total number of steps in which a player who already has the best strategy is chosen.
For a given path $\mathcal{P}$ and $q>0$, the following inequality is satisfied: \vspace{-3mm} \begin{small} \begin{equation*} \Pr[T_s=q|\mathcal{P}]\leq \Pr[T_s=0|\mathcal{P}]\cdot \left(1-c_i\right)^q. \vspace{-3mm} \end{equation*} \end{small} \noindent This is because at least one player would revise its strategy at a state that is not an equilibrium, and the probability to choose a player who can revise its strategy is at least $c_i$. Therefore, for any $0<\varepsilon<1$, the probability that $T_s$ is greater than $n\ln{\frac{1}{\varepsilon}}$ is as follows. \vspace{-4mm} \begin{equation} \resizebox{\hsize}{!}{$ \begin{aligned} \Pr\left[T_s\geq n\ln{\frac{1}{\varepsilon}}\right]&=\sum_{\mathcal{P}}{\Pr\left[T_s\geq n\ln{\frac{1}{\varepsilon}}\,\,\bigg\rvert\,\,\mathcal{P}\right]\cdot \Pr[\mathcal{P}]}\cr &=\sum_{\mathcal{P}}{\sum_{q\geq n\ln{\frac{1}{\varepsilon}}}{\Pr[T_s=q|\mathcal{P}]\cdot \Pr[\mathcal{P}]}}\cr &\leq\sum_{\mathcal{P}}{\sum_{q\geq 0}{\Pr[T_s=q|\mathcal{P}]\cdot\left(1-c_i\right)^{n\ln{\frac{1}{\varepsilon}}} \cdot \Pr[\mathcal{P}]}}\cr &=\left(1-c_i\right)^{n\ln{\frac{1}{\varepsilon}}}\sum_{\mathcal{P}}{\Pr[T_s\geq 0|\mathcal{P}]\cdot \Pr[\mathcal{P}]}\cr &=\left(1-c_i\right)^{n\ln{\frac{1}{\varepsilon}}} \thickapprox \varepsilon \end{aligned} $} \end{equation} As a result, it would take $\mathcal{O}(n\ln{\frac{1}{\varepsilon}})$ steps for any initial state to reach a Nash equilibrium with at least $1-\varepsilon$ probability. \end{comment} \section{Proof of Theorem~\ref{thm:inf_nash}} \label{sec:pf_in} In this section, we characterize all Nash equilibria in the game $\mathcal{G}(\bm{c}, c_{\tt stick})$ when players possess sufficiently small mining power. We first consider when $c_{\tt stick}$ is 0.
In order for a state $(r_{\mathcal F},r_{\mathcal B})$ to be a Nash equilibrium in the game $\mathcal{G}(\bm{c}, c_{\tt stick})$, the following condition should be satisfied: \begin{equation*} \sum_{s\in \mathcal{S}_{\tt max}}r_s=1 \, \text{ when }\, \mathcal{S}_{\tt max}=\argmax_{s\in \{\mathcal{F}, \mathcal{A}, \mathcal{B}\}} U_s(r_{\mathcal F},r_{\mathcal B}) \end{equation*} The above condition means that all players belong to the most profitable group among $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace. In other words, in order for a point $(r_{\mathcal F},r_{\mathcal B})$ to be an equilibrium, either 1) $U_\mathcal{F}, U_\mathcal{A},$ and $U_\mathcal{B}$ have the same value at the point, or 2) all miners should be in the most profitable group at the point. If neither holds, some players would change their strategy to the most profitable one. First, we consider the case in which the three payoffs are the same. The condition that the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace are the same is equivalent to \begin{equation} \begin{aligned} r_{\mathcal B}=0 \,\,&\text{or}\,\, \frac{k}{N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}=\\ &\frac{r_{\mathcal {B}}}{(1-r_{\mathcal F}-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}. \end{aligned} \label{eq:simeq1} \end{equation} The condition that the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_B}$\xspace are the same is equivalent to \begin{equation} \begin{aligned} r_{\mathcal F}+r_{\mathcal B}=0 \,\,&\text{or}\,\,\frac{k}{N_{\tt in}r_{\mathcal B}^2+N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}=\\ &\frac{r_{\mathcal F}+r_{\mathcal B}}{(1-r_{\mathcal F}-r_{\mathcal B})N_{\tt in}r_{\mathcal B}^2+(1-r_{\mathcal B})N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})^2}.
\end{aligned} \label{eq:simeq2} \end{equation} By finding a solution satisfying both \eqref{eq:simeq1} and \eqref{eq:simeq2}, we can derive that the three payoffs have the same value at the points $(r_{\mathcal F}=0, r_{\mathcal B}=\frac{k}{k+1})$ and $(r_{\mathcal F}=k, r_{\mathcal B}=0)$. Therefore, these two points are equilibria. Second, we consider three cases in which all miners belong to only two groups: 1) $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace have the same mining profit density when $r_{\mathcal B}$ is 0, 2) $\mathcal{M_A}$\xspace and $\mathcal{M_B}$\xspace have the same mining profit density when $r_{\mathcal F}$ is 0, and 3) $\mathcal{M_F}$\xspace and $\mathcal{M_B}$\xspace have the same mining profit density when $r_{\mathcal F}+r_{\mathcal B}$ is 1. In the first case, in order for such states to be equilibria, $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace must be at least as profitable as $\mathcal{M_B}$\xspace. Therefore, when $r_{\mathcal F}$ is not less than $k$, the case can be an equilibrium. In other words, $(k\leq r_{\mathcal F} \leq 1, r_{\mathcal B}=0)$ is a Nash equilibrium. In the second case, given that $r_{\mathcal F}$ is 0, $r_{\mathcal B}$ should be $\frac{k}{k+1}$ in order that $\mathcal{M_A}$\xspace and $\mathcal{M_B}$\xspace have the same payoff. We already showed that the point $(0,\frac{k}{k+1})$ is an equilibrium. The final case is impossible except for when $k$ is 1. If $k$ is 1, only the point $(1,0)$ belongs to the final case. Also, we already showed above that the point $(1,0)$ is an equilibrium. Finally, we consider three cases in which all players belong to just one group: 1) all players are in $\mathcal{M_F}$\xspace, 2) $\mathcal{M_A}$\xspace, and 3) $\mathcal{M_B}$\xspace. As we demonstrated above, the first case $(r_{\mathcal F}=1, r_{\mathcal B}=0)$ is an equilibrium. The second case represents $(r_{\mathcal F}=0, r_{\mathcal B}=0)$. In the second case, $\mathcal{M_A}$\xspace has the mining profit density 1.
However, players in $\mathcal{M_A}$\xspace would shift to other groups because the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_B}$\xspace diverge to infinity. Therefore, this case cannot be an equilibrium. The third case represents $(r_{\mathcal F}=0, r_{\mathcal B}=1)$. In this case, $\mathcal{M_B}$\xspace has the payoff $k$, and the payoffs for the other strategies diverge to infinity. Therefore, players in $\mathcal{M_B}$\xspace shift to other groups, and this is not an equilibrium. As a result, all equilibria in the game $\mathcal{G}(\bm{c}, c_{\tt stick}=0)$ are $(r_{\mathcal F}=0, r_{\mathcal B}=\frac{k}{k+1})$ and $(k\leq r_{\mathcal F} \leq 1, r_{\mathcal B}=0)$. In the same manner, we can determine all Nash equilibria in the game $\mathcal{G}(\bm{c}, c_{\tt stick})$ when $c_{\tt stick}>0.$ The equilibria are then given by \eqref{eq:nash_infty}. \section{Proof of Theorem~\ref{thm:bch2}}\label{sec:diff} \blue{In this section, we first consider $c_{\tt stick}=0.$ } At the state $(0,\frac{k}{1+k})$, the two payoffs of $coin_{\tt A}$ mining and $coin_{\tt B}$ mining are both equal to $1+k$. The payoff of $\mathcal{M_F}$\xspace also has the value $1+k,$ because the mining difficulties of both $coin_{\tt A}$ and $coin_{\tt B}$ would not change and thus $\mathcal{M_F}$\xspace does not change the coin to mine. Therefore, rational miners do not revise their strategies at $(0,\frac{k}{1+k})$, and the state $(0,\frac{k}{1+k})$ is a Nash equilibrium. Indeed, in order for the payoffs of $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace to be the same, $\mathcal{M_F}$\xspace should not change the coin to be mined, as in the state $(0,\frac{k}{1+k})$. Otherwise, the $coin_{\tt B}$ mining difficulty would periodically change because of fickle miners. In this case, we first assume that the payoffs of $\mathcal{M_A}$\xspace and $\mathcal{M_B}$\xspace are the same. 
Then $coin_{\tt B}$ mining is more profitable than $coin_{\tt A}$ mining when the $coin_{\tt B}$ mining difficulty is low. Conversely, when the $coin_{\tt B}$ mining difficulty is high, $coin_{\tt A}$ mining is more profitable than $coin_{\tt B}$ mining. Therefore, $\mathcal{M_F}$\xspace would earn more profit than $\mathcal{M_A}$\xspace and $\mathcal{M_B}$\xspace because fickle miners mine $coin_{\tt B}$ only when its difficulty is low. This fact implies that, in a state where $\mathcal{M_F}$\xspace changes its preferred coin, the payoffs of $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace cannot be the same. By using this property, one can easily find that the three payoffs are the same only at the states $(0,\frac{k}{1+k})$ and $(k,0)$. In the states $(k <r_{\mathcal F} \leq 1, r_{\mathcal B}=0)$, the mining difficulty of $coin_{\tt A}$ is eventually maintained at 1 while the mining difficulty of $coin_{\tt B}$ is maintained at more than $k.$ Thus, the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace are 1 at these states. To find the payoff of $\mathcal{M_B}$\xspace in these states, we consider the states $(k< r_{\mathcal F} \leq 1, r_{\mathcal B}=\delta)$ for sufficiently small $\delta$. Note that $U_\mathcal{B}(r_{\mathcal F},0)$ is defined as $\lim_{\delta\rightarrow 0}U_\mathcal{B}(r_{\mathcal F},\delta).$ In $(k< r_{\mathcal F} \leq 1, r_{\mathcal B}=\delta),$ the mining difficulty of $coin_{\tt B}$ would have a value $d \in (k,r_{\mathcal F}]$ most of the time, because heavy fickle mining increases the mining difficulty of $coin_{\tt B}$ to a high value, and it takes a significantly long time for $\mathcal{M_B}$\xspace with mining power $\delta$ to find blocks at the high mining difficulty $d.$ Therefore, the payoff $U_\mathcal{B}(k<r_{\mathcal F}\leq 1,0)=\frac{k}{d}$ of $\mathcal{M_B}$\xspace is less than $1$. 
This means that rational miners do not change their strategies to $\mathcal{B}$\xspace at these states, and the states $(k\leq r_{\mathcal F} \leq 1, r_{\mathcal B}=0)$ representing the downfall of $coin_{\tt B}$ are Nash equilibria. Meanwhile, in the states $(r_{\mathcal F}<k , r_{\mathcal B}=0)$, rational miners would move to $\mathcal{M_B}$\xspace because $\mathcal{M_B}$\xspace's payoff is greater than 1 while the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace are $1.$ In addition, as in Figure~\ref{fig:zones}, $boundary_{1,3}$ is always above $boundary_{2,3}$ except at the point $(0,\frac{k}{k+1})$ in the triangle area. Note that $boundary_{1,3}$ refers to a line on which the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace are the same while $r_{\mathcal B}>0$, and $boundary_{2,3}$ refers to a line on which the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_B}$\xspace are the same. When $r_{\mathcal B}=0$, the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace are always the same because $\mathcal{M_F}$\xspace would eventually mine only $coin_{\tt A}$. If we assume that $boundary_{1,3}$ and $boundary_{2,3}$ have another intersection point other than $(0,\frac{k}{k+1})$ in the triangle area, the payoffs of $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace would be the same at no fewer than three points. This is a contradiction because the three payoffs are the same at only two points, $(0,\frac{k}{1+k})$ and $(k,0)$. Moreover, $boundary_{2,3}$ intersects with the line $r_{\mathcal B}=0$ at the point $(k,0)$ because the payoffs of $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace are the same at that point. 
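As a quick sanity check (not part of the proof), one can verify numerically that the equal-payoff conditions \eqref{eq:simeq1} and \eqref{eq:simeq2} indeed hold simultaneously at both $(0,\frac{k}{1+k})$ and $(k,0)$. The sketch below uses illustrative parameter values for $k$, $N_{\tt in}$, and $N_{\tt de}$; the check succeeds for any choice of these parameters.

```python
# Numerical check of the equal-payoff conditions (simeq1)/(simeq2) at the
# two candidate equilibrium points.  Parameter values are illustrative only.

def eq1_holds(rF, rB, k, N_in, N_de, tol=1e-9):
    """Condition (simeq1): payoffs of the fickle and coin_A-only groups coincide."""
    if rB == 0.0:
        return True
    lhs = k / (N_in * rB**2 + N_de * (rF + rB)**2)
    rhs = rB / ((1 - rF - rB) * N_in * rB**2 + (1 - rB) * N_de * (rF + rB)**2)
    return abs(lhs - rhs) < tol

def eq2_holds(rF, rB, k, N_in, N_de, tol=1e-9):
    """Condition (simeq2): payoffs of the fickle and coin_B-only groups coincide."""
    if rF + rB == 0.0:
        return True
    lhs = k / (N_in * rB**2 + N_de * (rF + rB)**2)
    rhs = (rF + rB) / ((1 - rF - rB) * N_in * rB**2 + (1 - rB) * N_de * (rF + rB)**2)
    return abs(lhs - rhs) < tol

k, N_in, N_de = 0.05, 2016, 6  # illustrative values
# Coexistence point (0, k/(k+1)):
assert eq1_holds(0.0, k / (k + 1), k, N_in, N_de)
assert eq2_holds(0.0, k / (k + 1), k, N_in, N_de)
# Downfall point (k, 0):
assert eq1_holds(k, 0.0, k, N_in, N_de)
assert eq2_holds(k, 0.0, k, N_in, N_de)
```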
Meanwhile, $boundary_{1,3}$ does not intersect with the line $r_{\mathcal B}=0.$ This is because $\mathcal{M_F}$\xspace is trivially the most profitable group at $(k\leq r_{\mathcal F} \leq 1, r_{\mathcal B}=\delta)$ for sufficiently small $\delta$, and the difference between the payoffs of $\mathcal{M_F}$\xspace and $\mathcal{M_A}$\xspace (i.e., $U_{\mathcal F}-U_{\mathcal A}$) is a decreasing function of $r_{\mathcal B}$ when $r_{\mathcal F}$ is given. Indeed, when $r_{\mathcal F}$ is given, the greater $r_{\mathcal B}$ is, the smaller $r_{\mathcal A}$ is and the lower the $coin_{\tt A}$ mining difficulty is. Therefore, when $r_{\mathcal B}$ increases, the profit that fickle miners earn by mining $coin_{\tt A}$ increases, which means that, for given $r_{\mathcal F}$, $U_{\mathcal F}-U_{\mathcal A}$ is a decreasing function of $r_{\mathcal B}$. Similarly, for given $r_{\mathcal F}$, $U_{\mathcal F}-U_{\mathcal B}$ is an increasing function of $r_{\mathcal B}.$ Because of these facts, $boundary_{2,3}$ does not intersect with the line $r_{\mathcal F}+r_{\mathcal B}=1$ while $boundary_{1,3}$ intersects the line $r_{\mathcal F}+r_{\mathcal B}=1$ at one point. As a result, even when the mining difficulty of $coin_{\tt B}$ is adjusted in a short time period, the game $\mathcal{G} (\bm{c}, c_{\tt stick}=0)$ has Nash equilibria and dynamics as shown in Figure~\ref{fig:zones}. This fact also makes the game $\mathcal{G}(\bm{c},c_{\tt stick}>0)$ have the dynamics presented in Figure~\ref{fig:zones}. \section{Application to Bitcoin System} \label{sec:application} In this section, we apply our game model to Bitcoin as a case study. \blue{Specifically, we consider the game $\mathcal{G} (\bm{c}, c_{\tt stick})$ when players possess sufficiently small mining power. To see whether this assumption is reasonable, we investigate the mining power distribution in the Bitcoin system, referring to the power distribution provided by Slush~\cite{slush}. 
The distribution is depicted in Figure~\ref{fig:slush}, where the $x$-axis represents the range of the relative computational power $c_i$ and the $y$-axis represents the number of miners possessing computational power in the corresponding range. The figure shows that 1) most miners possess sufficiently small mining power, and 2) even the maximum computational power is less than $10^{-2}.$ Note that BITMAIN's $c_i$ is about $3\cdot 10^{-2}$ as of Dec. 2018. Moreover, even though mining pools currently possess large computational power, the miners in pools can individually decide which coin to mine. We also recognize that the distribution of computational power is significantly biased toward a few miners, as shown in Figure~\ref{fig:slush}. However, this fact does not imply that $\norm{\bm{c}}_2$ is large. Referring to the data provided by Slush, $\norm{\bm{c}}_2$ is only about 0.05, a value equivalent to the case in which all miners possess $2.5\times 10^{-3}$ computational power.\footnote{We calculated this assuming that other pools have a computational power distribution similar to Slush.} Therefore, most miners (and most mining power) would follow the dynamics of the game $\mathcal{G} (\bm{c}, c_{\tt stick})$. As a result, we can apply the game $\mathcal{G} (\bm{c}, c_{\tt stick})$ to practical systems.} \begin{figure}[ht] \centering \includegraphics[width=.8\columnwidth]{figure/slush3.png} \caption{The computational power distribution in Slush.} \label{fig:slush} \end{figure} Now, we describe how the game $\mathcal{G} (\bm{c}, c_{\tt stick})$ is applied to the Bitcoin system. As described in Section~\ref{sec:preliminary}, Bitcoin was split into BTC and BCH in Aug. 2017. Thus, we can map BTC and BCH to $coin_{\tt A}$ and $coin_{\tt B}$, respectively. For the mining difficulty adjustment algorithm of BCH, we should consider two types of BCH mining difficulty adjustment algorithms: those that BCH had before and after Nov. 13, 2017. 
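Returning to the $\norm{\bm{c}}_2$ figure above: the stated equivalence between $\norm{\bm{c}}_2\approx 0.05$ and every miner holding $2.5\times 10^{-3}$ follows from the identity $\norm{\bm{c}}_2=\sqrt{Nc^2}=\sqrt{c}$ when all $N=1/c$ miners hold equal power $c$. A one-line numeric check:

```python
import math

# If all miners held the same relative power c, there would be N = 1/c of
# them and ||c||_2 = sqrt(N * c^2) = sqrt(c).  Hence ||c||_2 = 0.05
# corresponds to c = 0.05^2 = 2.5e-3 per miner, as stated in the text.
def uniform_l2_norm(c):
    n = round(1 / c)            # number of identical miners
    return math.sqrt(n * c * c)

assert abs(uniform_l2_norm(2.5e-3) - 0.05) < 1e-9
```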
This is because the mining difficulty adjustment algorithm of BCH changed through a hard fork of BCH (on Nov. 13, 2017). \smallskip\noindent\textbf{Before Nov. 13, 2017. } First, we consider the mining difficulty adjustment algorithm of BCH before Nov. 13, 2017. In this algorithm, the mining difficulty is not only adjusted every 2016 blocks, but EDA can also occur as described in Section~\ref{sec:preliminary}. Note that EDA occurs if the mining is significantly difficult in comparison with the current mining power, i.e., EDA is used only for \textit{decreasing} the BCH mining difficulty. Therefore, the value of $N_{\tt in}$ is 2016 because the BCH mining difficulty can increase after 2016 blocks are found. Meanwhile, when the BCH mining difficulty decreases, the value of $N_{\tt de}$ varies depending on $r_{\mathcal F}$ and $r_{\mathcal B}$, ranging between 6 and 2016. Thus, we can consider the expected number of blocks found until the mining difficulty decreases (i.e., the mean of $N_{\tt de}$, denoted by $E[N_{\tt de}]$) instead of $N_{\tt de}$, and $E[N_{\tt de}]$ as a function of $r_{\mathcal F}$ and $r_{\mathcal B}$ would continuously vary from 6 to 2016. If $r_{\mathcal F}$ is 0, $E[N_{\tt de}]$ is 2016 because EDA does not occur, and if $r_{\mathcal B}$ is 0, $E[N_{\tt de}]$ is 6. As a result, the Bitcoin system before Nov. 13, 2017 can be modeled as $\mathcal{G} (\bm{c}, c_{\tt stick})$ where $E[N_{\tt de}]$ substitutes for $N_{\tt de}.$ This game $\mathcal{G} (\bm{c}, c_{\tt stick})$ also has Nash equilibria and dynamics as shown in Figure~\ref{fig:zones} because $E[N_{\tt de}]$ is a continuous function of $r_{\mathcal F}$ and $r_{\mathcal B}.$ \smallskip\noindent\textbf{After Nov. 13, 2017. } Next, we consider the Bitcoin system after Nov. 13, 2017. 
In this case, the BCH mining difficulty adjustment algorithm is different from that assumed in our game because the mining difficulty is adjusted for every block by considering the generation time of the past 144 blocks as a moving time window. Despite that, the game $\mathcal{G} (\bm{c}, c_{\tt stick})$ can be applied to this system. Indeed, in general, our results for the game $\mathcal{G} (\bm{c}, c_{\tt stick})$ would appear in the Bitcoin system regardless of the BCH mining difficulty adjustment algorithm, as shown below. \smallskip \begin{theorem}\label{thm:bch2} Consider the game $\mathcal{G} (\bm{c}, c_{\tt stick})$ when $\norm{\bm{c}}_2\approx 0.$ Then when the mining difficulty of $coin_{\tt B}$ is adjusted every block or in a short time period, the set $\mathcal{E} (\bm{c}, c_{\tt stick})$ is \eqref{eq:nash_infty} presented in Theorem~\ref{thm:inf_nash}. In addition, $\mathcal{G} (\bm{c}, c_{\tt stick})$ under this mining difficulty adjustment algorithm of $coin_{\tt B}$ has dynamics such as in Figure~\ref{fig:zones}. \end{theorem} \smallskip Because the current BCH mining difficulty is adjusted every block, Theorem~\ref{thm:bch2} implies that the results for the game $\mathcal{G} (\bm{c}, c_{\tt stick})$ also apply to the current Bitcoin system even though the BCH mining difficulty adjustment algorithm changed. The proof of Theorem~\ref{thm:bch2} is presented in Appendix~\ref{sec:diff}. \begin{comment} \subsection{Applying to the Ethereum system} In the case of Ethereum, $coin_{\tt A}$ and $coin_{\tt B}$ correspond to ETH and ETC, respectively, because the price of ETC is lower than that of ETH. ETH and ETC have more complicated mining difficulty adjustment algorithms than the Bitcoin system~\cite{ether_diff1, ether_diff2}. In the above section, we described that $\mathcal{G} (\bm{c}, c_{\tt stick})$ can be applied to a system where there exist two coins with the same proof-of-work mechanism regardless of the mining difficulty adjustment algorithm. 
Therefore, similar to the game of the Bitcoin system, the game of the Ethereum system has two kinds of Nash equilibria: coexistence between ETH and ETC, and a downfall of ETC. Moreover, Figure~\ref{fig:zones} represents the dynamics for the game between ETH and ETC. \end{comment} \section{Broader Implications} \label{sec:attack} In this section, we describe broader implications of our game model. More precisely, we first describe the risk of automatic mining, and then explain how one coin can exploit this risk to {\emph{intentionally} steal the loyal miners from other less valued coins} with negligible effort and resources. \subsection{A potential risk of automatic mining} \label{sec:auto} As described above, the current state of Bitcoin is close to coexistence between BTC and BCH because the faster BCH mining difficulty adjustment makes \textit{manual} fickle mining inconvenient. We introduce another possible mining scheme called \emph{automatic mining}, which can be less affected by faster mining difficulty adjustment. Unlike fickle mining, automatic mining is designed for miners to automatically switch to the likely most profitable one of the compatible coins by analyzing their mining difficulty and coin prices in real time. Here, note that all automatic miners almost \textit{simultaneously} change their coin when not only the mining difficulty but also the coin prices change. Indeed, automatic mining can be considered as automatically choosing the most profitable one among the three strategies, $\mathcal{F}$\xspace, $\mathcal{A}$\xspace, and $\mathcal{B}$, in real time. Automatic mining has been executed in the Bitcoin system~\cite{automatic} and has already become popular in the altcoin system~\cite{multipool}. Indeed, mining power increases and decreases by more than a factor of four in most altcoins several times a day~\cite{altcoin_auto}. We describe a simple implementation of automatic mining below. 
\begin{figure}[ht] \centering \includegraphics[width=.9\columnwidth]{figure/one-button.png} \caption{One-button switching mining in Antpool.} \label{fig:one-button} \end{figure} Currently, many mining pools, including BTC.com, Antpool, and ViaBTC, support interactive user interfaces for switching the coin to mine by just clicking one button. Figure~\ref{fig:one-button} represents the one-button switching mining feature provided by Antpool. This feature makes automatic mining easy to implement. For example, a miner can conduct automatic mining in Antpool as follows. \begin{enumerate} \item First, the miner saves an HTTP header with its cookies to maintain the login session. \item To determine which coin is more profitable, the miner calculates the mining profitability of BTC and BCH. In real-world settings, this can be simply implemented by using real-time coin prices~\cite{BTCprice, BCHprice} and the coin mining difficulty. \item If BTC mining is more profitable than BCH mining, the miner sends an HTTP request, which includes the saved HTTP header and data for switching to BTC mining. Otherwise, the miner sends an HTTP request to conduct BCH mining. \item The above steps are repeated. \end{enumerate} As shown in the code~\cite{automatic_mining}, this automatic mining can be implemented in about 50 lines of Python. Large-scale automatic mining makes the state of the coin system enter $Zone_3$\xspace. As a simple example, we can consider an extreme case wherein the entire computational power is involved in automatic mining. In this case, {any initial state except for $(0,\frac{k}{k+1})$ immediately reaches the equilibrium $r_{\mathcal B}=c_{\tt stick}$} as soon as all miners start automatic mining. This is because all automatic miners should simultaneously choose the same coin and would eventually mine $coin_{\tt A}$ when the mining difficulty of $coin_{\tt B}$ increases. 
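The switching steps listed above can be sketched as follows. This is only an illustrative sketch: the pool URL, request path, and payload are hypothetical placeholders (not Antpool's actual API), and only the profitability rule follows the model in the text, under which BCH mining pays more exactly when $k > D_{\tt B}/D_{\tt A}$.

```python
# Sketch of the decision step of automatic mining.  The endpoint and payload
# below are hypothetical placeholders; the profitability comparison is the
# only part taken from the model in the text.

def more_profitable_coin(k, d_btc, d_bch):
    """Return the coin with the higher expected per-hash reward.

    k      : BCH price relative to BTC
    d_btc  : BTC mining difficulty
    d_bch  : BCH mining difficulty
    """
    # Reward per hash is proportional to price / difficulty,
    # so BCH wins exactly when k / d_bch > 1 / d_btc, i.e., d_bch/d_btc < k.
    return "BCH" if k / d_bch > 1.0 / d_btc else "BTC"

def switch_request(session_cookie, coin):
    """Build the (hypothetical) HTTP request that flips the one-button switch."""
    return {
        "url": "https://example-pool.invalid/switch",  # placeholder endpoint
        "headers": {"Cookie": session_cookie},          # saved login session (step 1)
        "data": {"coin": coin},                         # switching payload (step 3)
    }

# With k = 0.05: D_B/D_A = 0.04 < k means BCH mining pays more.
assert more_profitable_coin(0.05, d_btc=1.0, d_bch=0.04) == "BCH"
assert more_profitable_coin(0.05, d_btc=1.0, d_bch=0.06) == "BTC"
```

In a real loop (step 4), the miner would periodically fetch prices and difficulties, call `more_profitable_coin`, and send the resulting request whenever the answer changes.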
Then, we have the following question: What ratio of automatic mining power is needed to {reach the lack of $coin_{\tt B}$-loyal miners?} As shown in Figure~\ref{fig:zones}, the state $(r_{\mathcal F},r_{\mathcal B})$ cannot be in $Zone_2$\xspace when $r_{\mathcal F}$ is not less than $k.$ Therefore, $(r_{\mathcal F},r_{\mathcal B})$ where $r_{\mathcal F}\geq k$ would move in the decreasing direction of $r_{\mathcal B}$. Further, even manual miners who do not conduct automatic mining would prefer $coin_{\tt A}$ rather than $coin_{\tt B}$ at states in $Zone_3$\xspace where $r_{\mathcal F}\geq k$ because $coin_{\tt A}$-only mining is more profitable than $coin_{\tt B}$-only mining at these states; loyal miners of $coin_{\tt B}$ have to generate blocks with high difficulty. Therefore, when a fraction $k$ of the total mining power is involved in automatic fickle mining, the state moves towards a lack of $coin_{\tt B}$-loyal miners. As of Dec. 2018, because $k$ in the Bitcoin system is about 0.05, if 5\% of the total mining power in the Bitcoin system is involved in automatic mining, the automatic miners would conduct (automatic) fickle mining and the state would enter $Zone_3$\xspace. Note that if automatic miners whose total mining power is 5\% conducted $coin_{\tt A}$-only (or $coin_{\tt B}$-only) mining, the state would enter $Zone_2$\xspace (or $Zone_1$\xspace). This is a contradiction because the automatic miners should choose the most profitable strategy. {As a result, when only 5\% of the total mining power is involved in automatic mining, the number of BCH loyal miners decreases and the BCH system eventually becomes more centralized.} \subsection{Injuring rivalry coins} \label{subsec:rivalry} In Section~\ref{sec:application}, we explained how our game $\mathcal G (\bm{c},c_{\tt stick})$ can be applied to the Bitcoin system regardless of the BCH mining difficulty adjustment algorithm. 
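A small numeric illustration of the argument above, using the payoff values stated in the proof of Theorem~\ref{thm:bch2}: in a state $(r_{\mathcal F}, r_{\mathcal B}=0)$ with $r_{\mathcal F}>k$, $coin_{\tt A}$-only mining earns exactly 1, while a $coin_{\tt B}$-loyal miner earns $k/d<1$ for a difficulty $d\in(k, r_{\mathcal F}]$, so loyal miners keep leaving.

```python
# Illustration (payoff values taken from the text): for any fickle fraction
# r_F above the threshold k, a coin_B-loyal miner's payoff k/d is strictly
# below the coin_A-only payoff of 1, since the coin_B difficulty settles at
# some d in (k, r_F].
k = 0.05                      # BCH/BTC relative price, as of Dec. 2018
for r_F in [0.06, 0.2, 1.0]:  # fickle fractions above the threshold k
    d = r_F                   # difficulty pushed up by fickle mining (worst case)
    U_A, U_B = 1.0, k / d
    assert U_B < U_A          # coin_B-loyal mining always pays less
```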
To generalize our game model, we here consider two types of possible mining difficulty adjustment algorithms: the first type adjusts the mining difficulty over a long time period (e.g., two weeks), while the second type adjusts the mining difficulty every block or in a short time period in order to promptly respond to changes in the mining power. In the real world, both types of mining difficulty adjustment algorithms are widely used. For example, BTC and Litecoin are cryptocurrency systems using the first type, while many altcoins including BCH, Ethereum (ETH), and Ethereum Classic (ETC) currently use the second type. We can generalize our game model to any coin system satisfying the following conditions. \begin{enumerate} \item Two existing coins share the same mining hardware. \item The more valued coin $coin_{\tt A}$ between those coins has the first type of mining difficulty adjustment algorithm. \end{enumerate} We note that there is no restriction on the mining difficulty adjustment algorithm for the less valued $coin_{\tt B}$ in our game model $\mathcal G_\infty$. When $coin_{\tt B}$ has the first type of mining difficulty adjustment algorithm, our model can be applied according to Section~\ref{sec:model}, where we modeled our game assuming that $coin_{\tt B}$ has the first type of mining difficulty adjustment algorithm. In addition, in Section~\ref{sec:application}, we described why our game can be applied when $coin_{\tt B}$ has the second type of mining difficulty adjustment algorithm. Therefore, regardless of the $coin_{\tt B}$ mining difficulty adjustment algorithm, in a coin system satisfying the above two conditions, the $coin_{\tt B}$-loyal miners would leave if at least a fraction $k$ of the total mining power is involved in automatic mining. Next, we explain {how the more valued coin can steal loyal miners from the other less valued rivalry coin. 
If $coin_{\tt A}$ utilizes the first type of mining difficulty adjustment algorithm, the number of $coin_{\tt B}$-loyal miners would naturally decrease due to automatic mining.} Again, note that this situation periodically weakens the health of the $coin_{\tt B}$ system in terms of security and decentralization. On the other hand, if $coin_{\tt A}$ has a mining difficulty adjustment algorithm different from the first type (i.e., different from that in Assumption~\ref{ass:c}), our game model may not apply. For example, when considering the Ethereum system consisting of ETH and ETC, ETH, corresponding to $coin_{\tt A}$, has a difficulty adjustment algorithm different from that which we assumed in our game. In this case, even if $r_{\mathcal B}=0,$ the complete downfall of $coin_{\tt B}$ (e.g., ETC) may not occur, and the mining power of $coin_{\tt A}$ and $coin_{\tt B}$ would fluctuate heavily. Therefore, for our game to apply, and thus to steal the loyal miners from $coin_{\tt B}$, $coin_{\tt A}$ should change its mining difficulty adjustment algorithm through a hard fork. We can see that some cryptocurrency systems (e.g., BCH, ETH, and ETC) have often performed hard forks to change their mining difficulty adjustment algorithms~\cite{bch_hardfork, eth_hardfork, etc_hardfork}. This indicates that cryptocurrency systems can practically update their mining difficulty adjustment algorithms if needed. In conclusion, if the mining difficulty adjustment algorithm of $coin_{\tt A}$ is changed to the first type, {a lack of loyal miners for $coin_{\tt B}$ might be reached due to automatic mining.} \section{Conclusion} \label{sec:conclude} \blue{In this study, we modeled and analyzed the game between two coins for fickle mining, and our results imply that fickle mining can lead to a lack of loyal miners in the less valued coin system. 
We confirm that this lack of loyal miners can weaken the overall health of coin systems by analyzing real-world history.} In addition, we extended our analysis to automatic mining, which poses a potentially severe risk. As of Dec. 2018, BCH's loyal miners would leave if more than about 5\% of the total mining power in BTC and BCH is involved in automatic mining. \blue{Moreover, we explained how one coin can steal the loyal miners from other less valued rivalry coins in the highly competitive coin market by generalizing our game model. We believe that this is one of the serious threats to cryptocurrency systems using a PoW mechanism.} \section{Data Analysis} \label{sec:data} \subsection{BTC vs. BCH} \label{subsec:btc} We analyze the mining power data in the Bitcoin system to identify toward which equilibrium the state has been moving. Moreover, through this data analysis, we can empirically determine how well our theoretical model agrees with practical results. For the data analysis of the Bitcoin system, we collected the mining power data of BTC and BCH from the release date of BCH (Aug.\ 1, 2017) until the time of writing (Dec. 10, 2018) from CoinWarz~\cite{coinwarz}. Figure~7a represents the mining power history of BCH, where the mining power is expressed as a fraction of the total power in BTC and BCH, i.e., $$\frac{\text{BCH mining power}}{\text{BTC mining power}+\text{BCH mining power}}.$$ \blue{In addition, we represent the history of the ratio between the difficulties of BCH and BTC (i.e., $\frac{D_{\tt B}}{D_{\tt A}}$) and the relative price of BCH to BTC (i.e., $k$) in Figure~7b and 7c, respectively. The price of BCH is depicted as a yellow line in Figure 7c (see the left $y$-axis). 
Moreover, Figure 7c represents the relative BCH mining profitability ($\frac{kD_{\tt A}}{D_{\tt B}}-1$) to the BTC mining profitability as a purple line, and the black dashed line represents $\frac{kD_{\tt A}}{D_{\tt B}}-1=0$ (see the right $y$-axis for the two lines). For this profitability, to increase the reliability of the data, we collected the daily BCH profitability from CoinDance~\cite{coindance}; thus, each purple point is a data point captured every day. Note that $\frac{D_{\tt B}}{D_{\tt A}}$ is less than $k$ in the case where the purple line is above the black dashed line. Figure~7d simultaneously shows all data histories (except for the BCH mining profitability) presented in Figure~7a$\sim$7c.} In Figure~\ref{fig:bch_data}, the data from Dec. 2017 to Nov. 2018 are omitted because they are similar to the data for Dec. 2018. Figure~\ref{fig:btc1}$\sim$\ref{fig:post3} correspond to parts (1)$\sim$(9) of Figure~\ref{fig:bch_data}, respectively, where the areas of the three zones have changed because the relative price $k$ of BCH to BTC has fluctuated quite frequently. As another case study, we examine the mining power data of Bitcoin ABC and Bitcoin SV from Nov. 1, 2018 to Dec. 20, 2018 to analyze a special situation in which $c_{\tt stick}$ suddenly increases due to the ``hash war" caused by a hard fork in the BCH system. We describe this in Section~\ref{subsec:stick}. \smallskip \blue{\noindent\textbf{Methodology.} We first describe how to determine $r_{\mathcal F}$ and $r_{\mathcal B}$ of each state. According to the definition of fickle mining (Definition~\ref{def:fickle}), fickle miners would conduct BCH mining from when $\frac{D_{\tt B}}{D_{\tt A}}$ \textit{changes} to a value less than $k$ to when $\frac{D_{\tt B}}{D_{\tt A}}$ \textit{changes} to a value greater than $k.$ This is because $D_{\tt B}$ is always less than $r_{\mathcal F}+r_{\mathcal B}$ and greater than $r_{\mathcal B}$ (see Figure 7d). 
Therefore, Figure~7a represents the value of $r_{\mathcal F}+r_{\mathcal B}$ during these periods. We indicate the fickle mining periods before the hard fork of BCH (Nov. 13, 2017) in gray in Figure~\ref{fig:bch_data}. Figure~7d shows that $\frac{D_{\tt B}}{D_{\tt A}}$ changes to a value less than $k$ at the start of these periods and to a value greater than $k$ at their end. As a result, in Figure~7a, we can find the value of $r_{\mathcal F}+r_{\mathcal B}$ for the gray-colored periods and the value of $r_{\mathcal B}$ for the non-colored periods. Here, we can see that the mining power of BCH has fluctuated considerably when the ratio of the BCH mining difficulty to the BTC mining difficulty ($\frac{D_{\tt B}}{D_{\tt A}}$) \textit{changes} to a value less than $k$. Moreover, when the coin mining difficulties do not change while BCH mining is more profitable than BTC mining, large \textit{peaks} (i.e., sudden increases) do not appear. This is confirmed by the purple line in the non-colored zones (e.g., part (3) in Figure 7c). As a result, we can consider that those fluctuations occur due to fickle miners between BTC and BCH. If a miner switches the coin to mine without changes in the coin mining difficulty, this implies that the miner's strategy changes (e.g., from $\mathcal{A}$\xspace to $\mathcal{B}$\xspace). From the method described above, we can determine the mining power $r_{\mathcal F}$ used for fickle mining and the mining power $r_{\mathcal B}$ used for BCH-only mining. The points and directions are marked roughly in Figure~\ref{fig:btc}. The red arrow represents movement in agreement with our analysis, whereas the black arrow represents movement deviating from our analysis. Next, we explain Figure~\ref{fig:btc} by matching it with each part of Figure~\ref{fig:bch_data}.} \smallskip\noindent\textbf{The beginning of the game. 
} In Figure~\ref{fig:bch_data}-(1), the status point is initially in $Zone_1$\xspace, and it then moves to $Zone_2$\xspace, as shown in Figure~\ref{fig:btc1}, as the BCH mining power decreases. \smallskip\noindent\textbf{Towards the lack of BCH loyal miners. } In Figure~7a-(2), two peaks occur when the BCH mining difficulty decreases to values less than $k,$ \blue{and these peaks appear in the gray-colored periods.} Therefore, we can conclude that these peaks occur due to fickle miners. The first peak indicates that more and more miners started fickle mining (i.e., an increase in $r_{\mathcal F}$). This is because the upflow of the first peak is less steep than that of the other peaks, and the downflow of the first peak is steeper than its upflow, indicating that $r_{\mathcal F}$ increases from near 0 up to near 0.4. Furthermore, one can see that $r_{\mathcal B}$ increased at the beginning of Figure~7a-(2). \blue{Recall that Figure~7a shows the value of $r_{\mathcal B}$ in non-colored zones.} In addition, the BCH mining power in the valley between the two peaks of Figure~7a-(2) is greater than the mining power at the end of Figure~7a-(1). This fact shows again that $r_{\mathcal B}$ increased at the beginning of Figure~7a-(2). After that, because the mining power at the end of Figure~7a-(2) is less than that in the valley between the two peaks of Figure~7a-(2), we can conclude that $r_{\mathcal B}$ decreased while $r_{\mathcal F}$ increased in Figure~7a-(2). Figure~\ref{fig:btc2} represents the movements described above. At the beginning of Figure~7a-(3), $r_{\mathcal B}$ slightly increases, which does not correspond with our model; we regard this as a momentary phenomenon caused by a decrease in the BCH mining difficulty. \blue{Figure~7b shows that the BCH mining difficulty decreased at the beginning of part (3). 
However, even though the BCH mining difficulty decreased, peaks due to fickle mining do not appear because the relative BCH mining difficulty did not decrease to a value less than $k$, as shown in Figure~7d.} As a result, as can be seen in Figure~\ref{fig:btc3}, the point moves alternately between $Zone_1$\xspace and $Zone_3$\xspace. One can see that $r_{\mathcal F}$ decreased when comparing the mining power in the peaks of Figure~7a-(4) with the peaks in Figure~7a-(2); this might be because the moving direction in $Zone_1$\xspace is $(-,-)$. Next, the peaks in the period $P$ presented in Figure~7a-(4) appeared due to fickle miners because the BTC mining difficulty increased. We can check in Figure~7d that $\frac{D_{\tt B}}{D_{\tt A}}$ in the period $P$ decreased to a value less than $k$. Note that the increase in the BTC mining difficulty decreases the value of $\frac{D_{\tt B}}{D_{\tt A}}$. Indeed, the two peaks of the period $P$ show that $r_{\mathcal F}$ decreases and then increases because $r_{\mathcal F}+r_{\mathcal B}$ is represented in the period $P$ of Figure~7a. This may be explained according to our model as follows: the state was near the boundary between $Zone_1$\xspace and $Zone_3$\xspace at the beginning of Figure~\ref{fig:bch_data}-(4), and the state then entered $Zone_3$\xspace while moving in the direction $(-,-)$ (the moving direction in $Zone_1$\xspace), as in Figure~\ref{fig:btc4}. Then, the state in $Zone_3$\xspace moved in the direction $(+,-)$ in agreement with our game, and one can see that the third peak (i.e., the beginning of the second gray-colored zone in Figure~7a-(4)) is higher than the second peak. After that, $r_{\mathcal F}$ decreases (see the second gray-colored zone in Figure~7a-(4)), showing a deviation from our model, which is indicated by the black arrow in Figure~\ref{fig:btc4}. 
\blue{Indeed, considering this case as well as Figure~\ref{fig:bch_data}-(3), we observe such noise in cases where $\frac{D_{\tt B}}{D_{\tt A}}$ changes to a value close to $k.$ } Next, as shown in Figure~\ref{fig:btc5}, the point in $Zone_3$\xspace moves in the direction $(+,-)$ again because the peaks in Figure~7a-(5) are higher than those in Figure~7a-(4). Moreover, in Figure~7c-(4)$\sim$(6), $k$ is roughly decreasing and even drops to about 0.055 in a few cases. In the meantime, the point passes $boundary_{1,3}.$ Because the state entered $Zone_1$\xspace, $r_{\mathcal F}$ starts to decrease, moving in the direction $(-,-)$ (as shown in Figure~\ref{fig:btc6}). Therefore, the first peak in Figure~7a-(6) is smaller than the last peak in Figure~7a-(5). Then, because the second peak is higher than the first peak in Figure~7a-(6), one can see that the point moved in the direction $(+,-)$ in $Zone_3$\xspace in agreement with our model, which is also depicted in Figure~\ref{fig:btc6}. As can be seen in Figure~\ref{fig:post1}, $r_{\mathcal B}$ first increases in Figure~7a-(7), and the point enters $Zone_1$\xspace; this is a deviation from our analysis, which may be explained by BCH mining being momentarily more profitable than BTC mining at the time. \blue{Here, we again see the noise in cases where the value of $\frac{D_{\tt B}}{D_{\tt A}}$ is close to $k.$} However, $r_{\mathcal B}$ decreases again in agreement with our model. In addition, one can see that $r_{\mathcal F}$ decreases in the meantime because the starting height of the peak in Figure~7a-(8), which is marked by a red point, is less than that of the final peak in Figure~7a-(6). Therefore, the point in $Zone_1$\xspace moved in the direction $(-,-)$ and entered $Zone_3$\xspace, conforming with our analysis. Then, in the second week of Nov. 2017, the price of BCH was suddenly pumped ($k \approx 0.4$ in some cases). Therefore, $Zone_2$\xspace widens in Figure~\ref{fig:post2}.
Also, the point in $Zone_3$\xspace continuously moves in the direction $(+,-)$, and $r_{\mathcal F}$ even increases to over 0.5. It can be seen that the peak in Figure~\ref{fig:bch_data}-(8) has the shape of a right trapezoid with a positive slope, which indicates that $r_{\mathcal F}$ continuously increased even though it was \textit{already} high. \blue{From this history, we observe that the Bitcoin system often reached the lack of BCH loyal miners.} However, a breakthrough exists even in this bad situation. If $k$ continuously increases, $Zone_2$\xspace widens, which makes the state enter $Zone_2$\xspace and approach the coexistence equilibrium. As a result, considering the state of Bitcoin as of Nov. 13, 2017, $k$ had to increase to a minimum of 0.5 in order for the mining power engaging in fickle mining to decrease. \smallskip\noindent\textbf{Close to coexistence. } However, at the end of Figure~\ref{fig:bch_data}-(8), another hard fork occurred in BCH to update the difficulty adjustment algorithm, and this influenced the state as an external factor. Consequently, the point jumped into $Zone_2$\xspace due to this hard fork as shown in Figure~\ref{fig:post2}. After the hard fork, the point moves in the direction $(-,+)$, reaching close to coexistence. This is shown by the fact that fluctuations became increasingly stable at the beginning of Figure~7a-(9). Note that peaks occur within a short time after the hard fork because the BCH mining difficulty is quickly adjusted. Even though the state has been close to coexistence, fickle mining is still possible and observed as described in Section~\ref{sec:preliminary}. In addition, as the price continuously changes, the point sometimes enters $Zone_3$\xspace where fickle mining increases, alternating up and down in the red semicircle in Figure~\ref{fig:post3}. In other words, fickle mining will not completely cease.
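The role of $k$ here can be made concrete with a small worked computation. At the coexistence equilibrium the state is $(r_{\mathcal F}, r_{\mathcal B}) = (0, \frac{k}{k+1})$, so a rising relative price $k$ both widens $Zone_2$\xspace and lifts the equilibrium share of loyal miners; the values below are hypothetical:

```python
# The coexistence equilibrium is (r_F, r_B) = (0, k/(k+1)), where k is the
# relative price of the less valued coin.  A larger k moves the equilibrium
# loyal-mining share r_B upward.  The k values below are hypothetical.

def coexistence_r_b(k: float) -> float:
    """Loyal mining power share of coin B at the coexistence equilibrium."""
    return k / (k + 1)

for k in (0.1, 0.2, 0.4, 0.5):
    print(f"k = {k:.1f} -> equilibrium r_B = {coexistence_r_b(k):.3f}")
```

For example, with $k = 0.4$ the equilibrium loyal share is about 0.286, and with $k = 0.5$ it is one third.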
Therefore, if the Bitcoin state largely deviates from the equilibrium of coexistence due to external factors such as a \textit{sudden} change in prices, then it is still possible to reach the lack of BCH loyal miners. \smallskip \blue{\noindent\textbf{Influence of the lack of BCH loyal miners. } We observe that the Bitcoin system suffered from the lack of BCH loyal miners before Nov. 13, 2017. Consequently, the BCH transaction processing speed periodically became low, and it even took about four hours to generate one block in some cases. Moreover, we can see that BCH was significantly centralized during the periods in which the BCH mining difficulty was high. For example, when considering blocks generated from Oct. 2 to Oct. 4, only two accounts generated about 70\% of the blocks, and only five miners conducted BCH mining. We note that, in blockchain systems using a PoW mechanism, high mining power is essential for high security. In practice, BCH before Nov. 13, 2017 was susceptible to double-spending attacks with only 1$\sim$2\% of the total computational power in the Bitcoin system. There is also selfish mining~\cite{eyal2014majority}, which allows the attacker to unfairly earn extra rewards while others suffer a loss. Because of a decrease in $r_{\mathcal B},$ these attacks can be executed with relatively small mining power. As a result, fickle mining, which occurred heavily before Nov. 13, 2017, weakened the performance, decentralization level, and security of the BCH system.} \smallskip \noindent\textbf{Influence of the hard fork of BCH. } Next, we discuss why Bitcoin moved toward different equilibria before and after Nov. 13, 2017. First, in the Bitcoin system before Nov. 13, 2017, $r_{\mathcal F}$ considerably increased as can be seen in Figure~7a-(2). Meanwhile, after Nov. 13, 2017, $r_{\mathcal F}$ did not considerably increase even though the point passed $Zone_3$\xspace.
This can be attributed to the different difficulty adjustment algorithms before and after Nov. 13, 2017; the mining difficulty of BCH is currently adjusted faster than it was before Nov. 13, 2017. Therefore, to conduct fickle mining at present, miners must switch between BTC and BCH relatively fast; this makes fickle mining in the current Bitcoin system cumbersome. Then, can we regard the current state of BCH as safe if the system avoids external factors such as a \textit{sudden} change in prices? We defer the answer to Section~\ref{sec:attack}. \blue{ \subsection{The ``hash war'' between Bitcoin ABC and Bitcoin SV} \label{subsec:stick} According to our model, we also describe the ``hash war'' that recently occurred between Bitcoin ABC (ABC) and Bitcoin SV (BSV), which were derived from the original BCH on Nov. 15, 2018. In this paper, we call `Bitcoin ABC' ABC rather than BCH to avoid confusion with the original BCH, even though Bitcoin ABC is currently regarded as BCH~\cite{hashwar}. This war was caused by a conflict over a BCH update adding a new \textit{opcode}, which split the BCH factions into a reformist group and an opposing group. As a result, this conflict caused the two factions to create their own chains, where the reformist group is the ABC faction led by Roger Ver (the owner of \textit{Bitcoin.com}~\cite{bitcoin.com}) and Jihan Wu (the cofounder of Bitmain and also the owner of BTC.com~\cite{btc.com} and Antpool~\cite{antpool}) and the opposing group is the BSV faction led by Craig Wright and Calvin Ayre (the CEO of Coingeek~\cite{coingeek}). This split of the original BCH was achieved by a hard fork on Nov. 15, 2018, and each faction wanted its own chain to be the longest chain in order to unify the divided BCH. This led both factions to desperately mine their coins with vast computational power; thus, the hash war lasted from Nov. 15, 2018 to Nov. 24, 2018.
Such behavior of the ABC and BSV factions would influence a general miner who chooses a coin among BTC, ABC, and BSV, and we analyze this situation by dividing it into two games: 1) a game between BTC and ABC and 2) another game between BTC and BSV. In both games, $c_{\tt stick}$ became significantly high during the hash war period, and we can consider this situation as Case 4 ($c_{\tt stick}>\frac{k}{k+1}$). \begin{figure}[ht] \centering \includegraphics[width=.9\columnwidth]{figure/bitcoincash.png} \caption{The data for ABC from \textbf{Nov. 1, 2018} to Dec. 20, 2018 is represented. The mining power of ABC is expressed as a value relative to the total power in BTC and ABC, and $k$ indicates the price of ABC relative to that of BTC. } \label{fig:bitcoincash} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.9\columnwidth]{figure/bitcoinsv.png} \caption{The data for BSV from \textbf{Nov. 15, 2018} to Dec. 20, 2018 is represented. In this figure, the mining power of BSV is expressed as a value relative to the total power in BTC and BSV, and $k$ indicates the price of BSV relative to that of BTC. } \label{fig:bitcoinsv} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.9\columnwidth]{figure/bitcoincash_dist.png} \caption{The $x$- and $y$-axes represent time from \textbf{Nov. 1, 2018} to Dec. 20, 2018 and the number of ABC blocks generated by each miner in the previous 100 blocks, respectively. The name of the miner corresponding to each color is presented at the bottom of the figure. } \label{fig:bitcoincash_dist} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=.9\columnwidth]{figure/bitcoinsv_dist.png} \caption{The $x$- and $y$-axes represent time from \textbf{Nov. 15, 2018} to Dec. 20, 2018 and the number of BSV blocks generated by each miner in the previous 100 blocks, respectively. The name of the miner corresponding to each color is presented at the bottom of the figure.
} \label{fig:bitcoinsv_dist} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.65\columnwidth]{figure/hashwar.png} \caption{This figure describes the movement of the state during the hash war period as well as before and after the war. } \label{fig:movement_war} \end{figure} To analyze the phenomena that appeared due to the hash war, we collected data for ABC and BSV. Figures~\ref{fig:bitcoincash} and \ref{fig:bitcoinsv} show the ABC data history from Nov. 1, 2018 to Dec. 20, 2018 and the BSV data history from Nov. 15, 2018 to Dec. 20, 2018, respectively. Note that BSV was released on Nov. 15, 2018. In Figure~\ref{fig:bitcoincash}, the mining power of ABC is presented as a value relative to the total mining power of ABC and BTC, and $\frac{k}{k+1}$ is also presented, where $k$ indicates the price of ABC relative to that of BTC. Figure~\ref{fig:bitcoinsv} depicts the data history of BSV in the same manner as Figure~\ref{fig:bitcoincash}. These figures show that the state $(r_{\mathcal{F}}, r_{\mathcal{B}})$ in the two games was above the state $(0, \frac{k}{1+k})$ during the hash war period. Moreover, to determine the movement of the state during the hash war period, we investigate the history of the ABC computational power distribution among miners from Nov. 1, 2018 to Dec. 20, 2018 and that for BSV from Nov. 15, 2018 to Dec. 20, 2018. This is because it would be hard to determine the movement of the state from the mining power history alone (i.e., Figures~\ref{fig:bitcoincash} and \ref{fig:bitcoinsv}), as $c_{\tt stick}$ significantly changed during this period. Figures~\ref{fig:bitcoincash_dist} and \ref{fig:bitcoinsv_dist} represent the changes in the mining power distribution of ABC and BSV over time, respectively. To do this, we crawled coinbase transactions and analyzed the number of blocks mined by each miner among the previous 100 blocks.
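The per-miner block-count statistic just described can be computed with a simple sliding window. The sketch below, with made-up miner tags, assumes only a list of coinbase miner identifiers in block-height order:

```python
from collections import Counter

def rolling_power_distribution(block_miners, window=100):
    """For each height, count blocks per miner over the previous `window`
    blocks -- the statistic plotted in the distribution figures.
    `block_miners`: the miner identifier of each block, in height order."""
    return [Counter(block_miners[i - window:i])
            for i in range(window, len(block_miners) + 1)]

# Toy data: 60 blocks by "A", 30 by "B", 10 by "C" (hypothetical miners).
dist = rolling_power_distribution(["A"] * 60 + ["B"] * 30 + ["C"] * 10)
print(dist[0])        # Counter({'A': 60, 'B': 30, 'C': 10})
print(len(dist[0]))   # 3 -> number of active miners in the window
```

The number of distinct keys in each window plays the role of the "number of colors" in the stacked bars.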
In these figures, each miner corresponds to one color, and the length of one colored bar represents the number of blocks generated by the corresponding miner among the 100 blocks. Therefore, the number of colors in the entire bar indicates the number of active miners at the corresponding time. Note that only the names of ten miners are presented in Figure~\ref{fig:bitcoincash_dist}. First, we consider the game between BTC and ABC. One can see in Figure~\ref{fig:bitcoincash} that the state $(r_{\mathcal F}, r_{\mathcal B})$ jumps to a point above $(0,\frac{k}{k+1})$ during the hash war preparation period (from Nov. 13, 2018 to Nov. 15, 2018). Such an increase in the ABC mining power may be explained by the fact that the mining power of BSV factions such as CoinGeek, svpool, BMG pool, and Mempool increased during the hash war preparation~\cite{mempool}, as shown in Figure~\ref{fig:bitcoincash_dist}. In other words, the increase in the ABC mining power during the hash war preparation occurred because $c_{\tt stick}$ increased. On the other hand, Figure~\ref{fig:bitcoincash_dist} shows that some miners left the ABC system during the war preparation (the colors that appeared at the top of the figure before the war preparation period disappeared from the war preparation period). This fact indicates that the state moves toward the line $r_{\mathcal B}=c_{\tt stick}$ when $c_{\tt stick}$ is large. Note that the reason why the ABC mining power decreases at the end of the hash war preparation period (i.e., the start of the hash war) is that the BSV factions moved to the BSV system. Next, during the hash war period, the ABC mining power increased because ABC factions such as Bitcoin.com increased their mining power (i.e., $c_{\tt stick}$ increased)~\cite{hashwar}. However, there were only a few loyal ABC miners during this period. For example, at the start of the hash war, only five miners existed: Bitcoin.com, BTC.com, AntPool, ViaBTC, and BTC.TOP.
Note that all of them are ABC factions (ViaBTC and BTC.TOP announced that they support ABC~\cite{viabtc_support, btc.top_support}). As a result, we can see that this state is close to the state $r_{\mathcal B}=c_{\tt stick},$ which represents a lack of loyal miners. This state makes the ABC system severely centralized. In particular, one miner (Bitcoin.com) possessed about 60\% of the total computational power in some cases, which indicates a breakdown of censorship resistance. Meanwhile, after the hash war (i.e., when $c_{\tt stick}$ is less than $\frac{k}{k+1}$), one can see that more miners gradually entered the ABC system (see the increase in the number of colors after the hash war in Figure~\ref{fig:bitcoincash_dist}). In addition, Figure~\ref{fig:bitcoincash} shows that the state is close to $\frac{k}{k+1}$ after the hash war. As a result, the state moves as shown in Figure~\ref{fig:movement_war}. Second, we describe the game between BTC and BSV through Figures~\ref{fig:bitcoinsv} and \ref{fig:bitcoinsv_dist}. As shown in Figure~\ref{fig:bitcoinsv}, the state is above $(0,\frac{k}{k+1})$ during the hash war period because $c_{\tt stick}$ is significantly high. This fact is also presented in Figure~\ref{fig:bitcoinsv_dist}. Note that CoinGeek, svpool, BMG, and Mempool are BSV factions. Therefore, the state was close to $r_{\mathcal B}=c_{\tt stick}$ at the time. Similar to ABC, BSV also suffered from severe centralization due to a lack of loyal miners. However, other miners entered the BSV system after the hash war, and the state became close to $(0,\frac{k}{k+1})$. Therefore, Figure~\ref{fig:movement_war} represents the state movement, and this result empirically confirms our theoretical analysis. Here, note that when the state is located above $\frac{k}{k+1},$ $\Omega_{\tt stick}$ suffers a loss. This fact implies that the situation $c_{\tt stick}>\frac{k}{k+1}$ would not last for a long time.
Therefore, the hash war was also not able to continue for a long time, and the hash war ended with BSV's surrender~\cite{surrender}.} \begin{comment} \begin{figure*}[!htb] \centering \begin{minipage}[t]{.56\textwidth} \centering \includegraphics[width=.8\textwidth]{figure/etc_data.png} \caption{Mining power history of ETC and $\frac{k}{k+1}$ over one month (July 24, 2016 $\sim$ August 24, 2016) from the time that Ethereum was split into ETH and ETC. The mining power is expressed as a fraction of the total mining power in ETH and ETC.} \label{fig:etc_data} \end{minipage}\hspace{5mm} \begin{minipage}[t]{.25\textwidth} \centering \includegraphics[width=\textwidth]{figure/etc.png} \caption{mining power history in Figure~\ref{fig:etc_data} as points in zones.} \label{fig:etc} \end{minipage} \end{figure*} \subsection{Data analysis on the Ethereum system} Next, we analyze the mining power data for Ethereum system split into ETH and ETC. Through our analysis, we can explain that currently ETH and ETC almost stably coexist even though their mining power distribution is far from the equilibrium $(0, \frac{k}{k+1})$ in the initial time period when Ethereum was split. In a manner similar to our analysis of the Bitcoin system, we collected the mining power data of ETH and ETC from Jul. 24, 2016 to the time of writing (Jul. 2018) from BitInfoCharts~\cite{bitinfocharts}. For ease of presentation, Figure~\ref{fig:etc_data} represents the mining power of ETC over only one month (Jul 24, 2016 $\sim$ Aug.\ 24, 2016). Also, Figure~\ref{fig:etc_data} shows not only the mining power of ETC but also $\frac{k}{k+1}$; the mining power is expressed as a fraction of total mining power in ETH and ETC; Figure~\ref{fig:etc} represents this mining power history as points in zones. We can know that the status point started from $Zone_2$\xspace and moved toward the equilibrium $(0, \frac{k}{k+1})$ because of an increase in $r_{\mathcal B}$, which conforms with our analysis. 
Then the state alternates up and down in the red semicircle of Figure~\ref{fig:etc}, because $k$ continuously changes. Indeed, one can see the fact that $r_{\mathcal F}$ is low in Figure~\ref{fig:etc_data} because mining power of ETC does not considerably exceed $\frac{k}{k+1}$. \end{comment} \section{Discussion} \label{sec:discuss} In this section, we first discuss \blue{how $coin_{\tt B}$ can maintain its loyal miners} and consider environmental factors that may affect our game analysis results. \blue{\subsection{Maintenance of $coin_{\tt B}$-loyal miners}} \blue{As described in Section~\ref{subsec:rivalry}, $coin_{\tt B}$ cannot prevent the rivalry coin from stealing loyal miners by changing its difficulty adjustment algorithm alone.} Certainly, the most straightforward way to avoid the risk is not to use mining hardware compatible with $coin_{\tt A}$. That is, a proprietary mining algorithm, requiring customized mining hardware that is not compatible with $coin_{\tt A}$, should be introduced for $coin_{\tt B}$. However, this solution is not applicable in practice for small and medium-sized mining operators because it is expensive to develop customized mining hardware (e.g., ASICs). In fact, because many altcoins use a mining algorithm that can be implemented on CPUs or GPUs, automatic mining endangers their mining power, weakening their security. The second way is to use auxiliary proof-of-work (or merged mining), which allows a miner to mine two or more coins at the same time~\cite{merged_mining}. Therefore, our first assumption in Section~\ref{sec:model} is not satisfied under merged mining, and our game results would not apply. Merged mining is also regarded as a potential solution to 51\% attacks because it significantly increases the mining power of altcoins~\cite{merged_mining2}.
However, despite such definite advantages, most projects do not adopt merged mining for the following reasons: it is complex to implement, and miners must do additional work~\cite{merged_mining2}. Another way is to increase the price of $coin_{\tt B}$ through price manipulation. However, as far as we know, the problem of maintaining an increased coin price through price manipulation is not well studied. Moreover, we can consider increasing the relative incentive of $coin_{\tt B}$ mining compared to $coin_{\tt A}$ mining, which can be achieved by increasing the block reward or decreasing the average block generation time. \blue{Even though this method may help prevent the rivalry coin from stealing loyal miners, it would cause other side effects such as inflation or an increase in the fork rate~\cite{carlsten2016instability,gervais2016security}.} Lastly, $coin_{\tt B}$ can change its consensus protocol, the PoW mechanism, to another protocol. However, \blue{this process would not be supported by the existing miners in $coin_{\tt B}$.} For example, Ethereum has been planning to switch from a proof-of-work mechanism to a proof-of-stake mechanism for several years. However, note that if the consensus protocol is simply changed through a hard fork, \blue{the existing miners may leave because they can lose their own merits (e.g., powerful hardware capability) for mining $coin_{\tt B}$.} \subsection{Environmental factors} \label{sec:practical} In practice, miners' behavior can deviate from our model because of the following environmental factors. \smallskip\noindent\textbf{Not all miners are rational.} First, miners are not always rational or wise. Even if fickle mining or $coin_{\tt A}$ mining is more profitable than $coin_{\tt B}$ mining, some miners may be reluctant to engage in fickle mining or $coin_{\tt A}$ mining because they may not recognize the profitability of doing so. \blue{However, our data analysis confirms that most miners are rational.
In addition, if miners use the automatic mining function, they would always follow the most profitable strategy.} \smallskip \blue{\noindent\textbf{Some miners consider the long-term price of coins.} Because price prediction is significantly difficult~\cite{longterm}, we believe that most miners behave depending on the short-term price of a coin rather than the long-term price. For example, who could have predicted the hash war between ABC and BSV in advance? Therefore, as can be seen from the history of the Bitcoin system, most miners behave depending on short-term profits. To model more realistic and general situations, our model considered both rational miners who are interested in short-term profits and $coin_B$ factions ($\Omega_{\tt stick}$) which are interested in long-term profits.} \blue{\smallskip \noindent\textbf{Some miners prefer the stable coexistence of coins.} Some miners may want the stable coexistence of coins for coin market stability, and they may try to reach the equilibrium representing the coexistence of coins regardless of their profits. If the fraction of such miners is large, a state would move to the equilibrium $(0,\frac{k}{k+1})$ regardless of its zones.} Based on historical observations of the Bitcoin system, however, the fraction of these miners seems unlikely to be high in the real-world. \begin{comment} \smallskip \noindent\textbf{Colluding mining power.} One party can control large mining power when individual miners entrust their mining power to that party. For example, in Multipool~\cite{multipool}, individual miners can entrust their mining power to a pool manager by connecting multiports. Then, the pool manager can freely decide whether to engage in fickle mining, only $coin_{\tt A}$ mining, or only $coin_{\tt B}$ mining. Consequently, the current state can jump into another zone, depending on the manager's decision. 
Indeed, a rational power mandator would optimally split the mining power into three groups, $\mathcal{M_F}$\xspace, $\mathcal{M_T}$\xspace, and $\mathcal{M_C}$\xspace for maximum profit rather than choose only one group. However, in this case, rational individual miners would decide themselves how their power is used to maximize their profit rather than entrusting their power to a manager. \end{comment} \smallskip \noindent\textbf{Other selfish mining.} In this study, we considered only fickle mining, which is a type of rational mining. However, miners engaging in various forms of selfish mining~\cite{eyal2014majority,eyal2015miner,luu2015power,kwon2017selfish} might cause a deviation from our analysis. \begin{comment} \smallskip \noindent Due to several existing environmental factors, $coin_{\tt B}$ may not die. Nevertheless, large-scale automatic mining or elimination attacks would significantly weaken the security of $coin_{\tt B}.$ \subsection{Limitation} Even though our model may explain mining power history for most parts, there exist some limitations with it. First, mining power history provided by different sites present different value (i.e., similar but not the same) because they are estimated by probabilistic methods. Therefore, public mining power data may not be accurate. In addition, our game model does not mirror miners' behaviors appeared due to a sudden change of price. Indeed, black arrows in Figure~\ref{fig:btc} are appeared by these miners' behaviors. Lastly, we know only $r_{\mathcal F}+r_{\mathcal B}$ values through Fig~\ref{fig:bch_data} and can only \textit{speculate} the mining power used to fickle mining by observing changes of mining power. These points make data analysis harder. \end{comment} \section{Introduction} \label{sec:intro} Bitcoin~\cite{nakamoto2008bitcoin} is the most popular cryptocurrency based on a distributed and public digital ledger called \emph{blockchain}.
Nodes in the Bitcoin network store the blockchain, where transactions are recorded in the unit of a \textit{block}, and the blockchain is extended by generating new blocks. The process of generating new blocks is referred to as \textit{mining}, and nodes conducting mining activities are referred to as \textit{miners}. To mine successfully, miners should find a solution called the \textit{proof-of-work} (PoW)~\cite{pow}. In Bitcoin, miners are required to solve a cryptographic puzzle by finding a hash value that satisfies specific conditions, such as having a certain number of leading zeros. To solve a puzzle, miners spend their computational power, and the miner who finds the solution obtains 12.5 coins and the transaction fees in the new block as a reward. In addition, Bitcoin maintains an average block interval of 10 minutes by adjusting the mining difficulty (i.e., the difficulty of the puzzles). As Bitcoin has gained popularity, the transaction scalability issue has arisen, and several solutions have been proposed to address it. However, there were also several conflicts over these solutions. As a result, in Aug. 2017, the Bitcoin system was split into the original Bitcoin (BTC) and Bitcoin Cash (BCH)~\cite{bch, split}. The key idea of BCH is to increase the maximum block size to process more transactions than BTC. However, even with different block size limits, they have compatible proof-of-work mechanisms with each other. Therefore, miners can freely alternate between BTC and BCH mining to boost their profits~\cite{stability}. The mining profitability changes when the mining difficulty and coin price change, but some miners may be concerned only with the change in the former because it is relatively easier to predict the former than the latter.
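The proof-of-work puzzle described earlier in this section can be illustrated with a toy example. This is a simplified stand-in for Bitcoin's actual target check (the header bytes and parameters are made up):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 2_000_000):
    """Toy proof-of-work: find a nonce such that SHA-256(header || nonce),
    read as a 256-bit integer, starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None  # no solution within the nonce budget

# Raising difficulty_bits makes solutions exponentially rarer; this is the
# knob the difficulty adjustment algorithm turns to keep the block interval fixed.
nonce, digest = mine(b"toy block header", difficulty_bits=16)
print(nonce, digest[:8])  # the digest begins with at least four hex zeros
```

Because expected mining time grows with the difficulty while the reward per block is fixed, per-block revenue per unit of hash power is inversely proportional to the difficulty, which is what makes difficulty changes matter to miners below.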
More precisely, rational miners can decide which cryptocurrency is better to mine depending on the coin mining difficulty --- a miner would conduct BCH mining only if the BCH mining difficulty is low compared to the BTC mining difficulty; otherwise, the miner conducts BTC mining rather than BCH mining. We call this mining behavior ``\textbf{fickle mining}'' in this paper. Note that a fickle miner may switch the coin it mines whenever the coin mining difficulty changes. Thus, fickle mining leads to instability of mining power, which may eventually cause unstable coin prices~\cite{stability}. \smallskip \noindent \textbf{Game model and analysis. } In this study, we aim to rigorously analyze the economics of fickle mining, which can later be extended to show how one coin can lead to a lack of loyal miners for other less valued coins. Here, a loyal miner represents one who continues mining the less valued coin even after its mining difficulty increases. To study the economics of fickle mining, we propose a game-theoretic framework of players who can conduct fickle mining between two coins (e.g., BTC and BCH). Moreover, our game model reflects \textit{coin factions} that stick to mining their own coins, as they are interested only in the maintenance of their systems rather than in the payoffs. Then we analyze Nash equilibria and dynamics in the game; two types of equilibria exist: the stable coexistence of two coins and the lack of loyal miners for the less valued coin. More specifically, in the latter case, only some factions (e.g., BITMAIN for BCH mining) remain as loyal miners for the less valued coin, and this fact can eventually make the coin system severely centralized, weakening its security. We describe the game model in Section~\ref{sec:model} and analyze the game in Section~\ref{sec:analysis}. \smallskip \noindent\textbf{Data analysis for BTC vs.
BCH.} Next, as a case study, we analyzed the mining power changes in BTC and BCH to see if our theoretical analysis matches actual mining power changes. In this paper, we refer to the \textit{Bitcoin system} as a coin system consisting of BTC and BCH. We examine the mining power history in the Bitcoin system from the release date of BCH until Dec. 2018 to 1) analyze which equilibrium its state has been moving to and 2) evaluate our theoretical analysis empirically. Our analysis results show that until the BCH mining difficulty adjustment algorithm changed (on Nov. 13, 2017), the Bitcoin state reached a lack of loyal miners for BCH. Therefore, BCH periodically became severely centralized before the update of the BCH protocol. For example, we observe a period when only five miners existed, of which two possessed about 70\% of the power. However, since Nov. 13, 2017, the Bitcoin state has been close to coexistence because the change in the BCH mining difficulty adjustment algorithm, with a shorter difficulty adjustment time interval (i.e., every block), has affected the game as an external factor. Nevertheless, we explain that the state would still get closer to a lack of BCH loyal miners if \emph{automatic mining}, in which miners automatically choose the most profitable coin to mine, is widely used. Note that the main difference between fickle mining and automatic mining is that fickle miners \textit{immediately} change their coin only when the mining difficulty changes, while automatic miners can \textit{immediately} change their coin when either the mining difficulty or the coin price changes. As a result, at the time of writing (Dec. 2018), if 5\% of the total mining power of the Bitcoin system were to engage in automatic mining, the current loyal miners for BCH would leave, weakening its security. \smallskip \noindent\textbf{Data analysis for Bitcoin ABC vs.
SV.} As another case study of our game model, we also analyze the changes in the hash rate distributions of Bitcoin ABC and Bitcoin SV before and after the recent ``hash war'' between those two coins. The analysis results of these case studies are presented in Sections~\ref{sec:application} and \ref{sec:data}. \smallskip \noindent\textbf{Generalization.} Moreover, we remark that our analysis can be generalized to any circumstance wherein two coins have compatible PoW mechanisms with each other. \blue{We believe that the generalized results bring important new perspectives on competitive coin markets; a coin can attempt to steal loyal miners from other rivalry coins that have compatible PoW mechanisms.} In Section~\ref{sec:attack}, a risk of automatic mining and a way to intentionally reduce the number of loyal miners for other coins are described. Then, in Section~\ref{sec:discuss}, we discuss countermeasures and environmental factors that may make actual coin states deviate from our game analysis. In summary, our main contributions are as follows: \begin{enumerate} \item To analyze the economics of fickle mining, we first model a game between two coins, considering coin factions that stick to mining their own coin. \item We analyze Nash equilibria and dynamics in the game and find two types of equilibria: 1) the stable coexistence of two coins and 2) a lack of loyal miners for the less valued coin. Then, we apply this game to the Bitcoin system. \item To determine if real-world miners' behaviors follow our model, we investigate the mining power history in the Bitcoin system. We show that the state reached the lack of BCH loyal miners until Nov.\ 13, 2017, and we confirm that this fact periodically led the BCH system to be centralized and insecure. Moreover, for generalization, we also analyze the recent ``hash war'' situation between Bitcoin ABC and Bitcoin SV according to our game model.
\item We introduce a risk of automatic mining and predict that the current BCH loyal miners would leave when 5\% of the total mining power in BTC and BCH involves automatic mining. \item Finally, our game is generalized to any mining-compatible coins (e.g., Ethereum vs. Ethereum Classic). Therefore, our study reveals a threat that one coin can intentionally steal loyal miners from another, less valued coin. \end{enumerate} \section*{Acknowledgment} \addcontentsline{toc}{section}{Acknowledgment} We are very grateful to the anonymous reviewers and Andrew Miller, the contact point for the major revision of this paper. \bibliographystyle{IEEEtran} \section{Model} \label{sec:model} In this section, we formally model a game to represent fickle mining between two coins. \subsection{Notation and assumptions} \label{subsec:game model} We consider two coins, $coin_{\tt A}$ and $coin_{\tt B}$, which have compatible PoW mechanisms with each other. In this case, a miner with a hardware device can alternately conduct mining of $coin_{\tt A}$ and $coin_{\tt B}$; that is, he can conduct fickle mining between them. Meanwhile, a $coin_{\tt B}$-faction can stick to $coin_{\tt B}$-mining rather than fickle mining or $coin_{\tt A}$-mining to maintain its own coin, and the set of $coin_{\tt B}$-factions sticking to $coin_{\tt B}$-mining is denoted by $\Omega_{\tt stick}$. For example, in the case where BCH is $coin_{\tt B}$, BITMAIN~\cite{bitmain}, one of the main supporters of BCH, may belong to $\Omega_{\tt stick}$. We aim to formalize a game considering fickle mining and $\Omega_{\tt stick}$. The proposed game consists of many players (i.e., miners), where the set of all players is denoted by $\Omega.$ Player $i\in \Omega$ chooses one of three strategies, $s_i \in \{\mathcal{F},\mathcal{A},\mathcal{B}\}$: fickle mining ($\mathcal{F}$\xspace), $coin_{\tt A}$-only mining ($\mathcal{A}$\xspace), and $coin_{\tt B}$-only mining ($\mathcal{B}$\xspace).
The payoff function of player $i$ is denoted by $U_i : \{\mathcal{F},\mathcal{A},\mathcal{B}\}^n \rightarrow \mathbb{R},$ which we will formally define later, along with fickle mining. We also define three sets $\mathcal{M_F}$\xspace$=\{i\in\Omega\,|\,s_i=\mathcal{F}\}$, $\mathcal{M_A}$\xspace$=\{i\in\Omega\,|\,s_i=\mathcal{A}\}$, and $\mathcal{M_B}$\xspace$=\{i\in\Omega\,|\,s_i=\mathcal{B}\}$, indicating the sets of players who conduct fickle mining, $coin_{\tt A}$-only mining, and $coin_{\tt B}$-only mining, respectively. Note that $\Omega_{\tt stick}$ is a subset of $\mathcal{M_B}$\xspace because players in $\Omega_{\tt stick}$ always choose strategy $\mathcal{B}$\xspace. The sum of mining powers in $coin_{\tt A}$ and $coin_{\tt B}$ is regarded as 1; the mining power of a coin is expressed as a fraction of the total mining power. The mining power possessed by player $i$ is denoted by $c_i,$ and the total computational power possessed by $\Omega_{\tt stick}$ is denoted by $c_{\tt stick}.$ We also define $c_{\tt max}$ as the maximum of $\{c_i\,|\,i\in \Omega\backslash \Omega_{\tt stick}\}.$ Moreover, because our game analysis result would depend on the computational power possessed by players, we use the notation $\mathcal{G}(\bm{c}, c_{\tt stick})$ to refer to the game, where $\bm{c}$ indicates the vector of computational power possessed by players except for $\Omega_{\tt stick}$ (i.e., $\bm{c}=(c_i)_{i\in \Omega\backslash \Omega_{\tt stick}}$). Lastly, we denote the total mining power of $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, and $\mathcal{M_B}$\xspace as $r_{\mathcal F}$ (i.e., $\sum_{i\in\mathcal{M}_{\mathcal F}}{c_i}$), $r_{\mathcal A}$ (i.e., $\sum_{i\in\mathcal{M}_{\mathcal A}}{c_i}$), and $r_{\mathcal B}$ (i.e., $\sum_{i\in\mathcal{M}_{\mathcal B}}{c_i}$), respectively.
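The power shares $r_{\mathcal F}$, $r_{\mathcal A}$, and $r_{\mathcal B}$ can be illustrated with a short sketch (the player powers and strategies below are hypothetical, chosen only so that the powers sum to 1):

```python
# Sketch of the game notation: each player i has power c_i and a strategy
# s_i in {"F", "A", "B"}. All values below are hypothetical.

def power_shares(players):
    """Return (r_F, r_A, r_B), the total power of fickle miners,
    coin_A-only miners, and coin_B-only miners."""
    r = {"F": 0.0, "A": 0.0, "B": 0.0}
    for c_i, s_i in players:
        r[s_i] += c_i
    return r["F"], r["A"], r["B"]

# Factions sticking to coin_B (Omega_stick) hold c_stick = 0.05 here.
players = [(0.30, "A"), (0.40, "A"), (0.25, "F"), (0.05, "B")]
r_F, r_A, r_B = power_shares(players)
assert abs((r_F + r_A + r_B) - 1.0) < 1e-12   # r_A = 1 - r_F - r_B
assert r_B >= 0.05                            # c_stick <= r_B
```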
Observe that $r_{\mathcal A}=1-r_{\mathcal F}-r_{\mathcal B}$ and $c_{\tt stick} \leq r_{\mathcal B}.$ Namely, $(r_{\mathcal F},r_{\mathcal B})$ represents the full status of mining powers, where $r_{\mathcal{B}}$ is not less than $c_{\tt stick}$. For the analysis of the game, we assume the following: \begin{assumption} A miner conducts either $coin_{\tt A}$-mining or $coin_{\tt B}$-mining (not both) at each time instance; for example, an ASIC miner cannot execute both BTC and BCH mining simultaneously. However, their choices can be time-varying; that is, miners can change their coin to mine. \end{assumption} \smallskip \begin{assumption}\label{ass:k} The price of 1 $coin_{\tt B}$ is equal to that of $k$ $coin_{\tt A}$. We assume that $0<k\leq 1$ without loss of generality. In addition, the rewards for mining a block in the two coins are 1 $coin_{\tt A}$ and 1 $coin_{\tt B}$, respectively. \end{assumption} \smallskip \begin{assumption}\label{ass:c} In both the $coin_{\tt A}$ and $coin_{\tt B}$ systems, mining difficulties are adjusted to maintain the average period of generating a block as the same specific time period, which we denote by 1 $P_{\tt{ag}}$ time and regard as \textit{a time unit}; for example, 1 $P_{\tt{ag}}$ = 10 minutes in the Bitcoin system. Furthermore, we consider a generalized model in which the mining difficulties of $coin_{\tt A}$ and $coin_{\tt B}$ are adjusted in proportion to the mining power for the previous time window, and we consider a normalized difficulty. Thus, if $x$ mining power has been engaged in coin mining, the mining difficulty would be $x.$ More precisely, in our model, the coin mining difficulty decreases and increases again, considering the generation time of a specific number of blocks since the last update of the coin mining difficulty.
In particular, for the mining difficulty of $coin_{\tt B},$ we denote the number of considered blocks when the $coin_{\tt B}$-mining difficulty decreases and increases as $N_{\tt de}$ and $N_{\tt in}$, respectively.\footnote{In Section~\ref{sec:application}, we will show that our results can be applied to the coin system regardless of the mining difficulty adjustment algorithm of $coin_{\tt B}$.} Note that $N_{\tt de}$ and $N_{\tt in}$ cannot be zero. In the case of BTC and Litecoin, $N_{\tt de}$ and $N_{\tt in}$ are 2016. \end{assumption} \smallskip As described previously, a fickle miner may change the preferred coin when the coin mining difficulty changes. Here we define fickle mining formally. \smallskip \begin{definition}[Fickle mining] Let $D_{\tt A}$ and $D_{\tt B}$ denote the $coin_{\tt A}$ and $coin_{\tt B}$-mining difficulties, respectively. If $D_{\tt B}< \min\{r_{\mathcal F}+r_{\mathcal B}, k\cdot D_{\tt A}\}$ or $D_{\tt B}\leq r_{\mathcal B}$ when $D_{\tt A}$ or $D_{\tt B}$ is updated, fickle miners ($\mathcal{M_F}$\xspace) decide to conduct $coin_{\tt B}$-mining until $D_{\tt A}$ or $D_{\tt B}$ is adjusted again. Otherwise, they conduct $coin_{\tt A}$-mining. \label{def:fickle} \end{definition} \smallskip\noindent We also emphasize that if $r_{\mathcal F}$ is 0, no miner engages in fickle mining, and the mining powers of $coin_{\tt A}$ and $coin_{\tt B}$ are stably maintained. \blue{ On the other hand, if $r_{\mathcal B}$ is $c_{\tt stick}$, only the $coin_{\tt B}$-factions $\Omega_{\tt stick}$ would conduct $coin_{\tt B}$-mining after an increase in the mining difficulty of $coin_{\tt B}.$ In other words, in this case, only the factions remain as \textit{loyal miners} for $coin_{\tt B}.$ Therefore, if the number of such factions ($|\Omega_{\tt stick}|$) is small, the state would be a lack of loyal miners. Note that loyal miners refer to players who continue to conduct $coin_{\tt B}$-mining even after an increase in the $coin_{\tt B}$-mining difficulty.
In particular, if all $coin_{\tt B}$-factions stop $coin_{\tt B}$-mining for higher payoff (i.e., $|\Omega_{\tt stick}|=0$), $r_{\mathcal B}$ is 0, and no player conducts $coin_{\tt B}$-mining after an increase in the mining difficulty of $coin_{\tt B}.$} Note that the $coin_{\tt B}$-mining difficulty cannot decrease in this case because $N_{\tt de}$ cannot be zero. Therefore, the case $r_{\mathcal B}=0$ indicates the complete downfall of $coin_{\tt B}$ while only $coin_{\tt A}$ survives. Parameters used in this paper are summarized in Table~\ref{tab:par}. The last parameter in the table will be introduced later. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figure/analysis.png} \caption{Changes in the mining power of $coin_{\tt A}$ and $coin_{\tt B}$, and mining difficulty of $coin_{\tt B}$.} \label{fig:analysis} \end{figure} \begin{table}[t] \renewcommand{\tabcolsep}{1.5pt} \centering \caption{List of parameters.} \label{tab:par} \begin{tabular}{|l||l|l|l|l|} \hline \parbox[c][0.35cm][c]{2.1cm}{\centering \blue{$\Omega_{\tt stick}$}} & \multicolumn{4}{l|}{\makecell{The set of $coin_{\tt B}$-factions sticking to $coin_B$ \\ mining to maintain their own coin}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering \blue{$\Omega$}} & \multicolumn{4}{l|}{\makecell{The set of all players}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $s_i$} & \multicolumn{4}{l|}{\makecell{Player $i$'s strategy}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $U_i$} & \multicolumn{4}{l|}{\makecell{Player $i$'s payoff}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $\mathcal{F}$\xspace, $\mathcal{A}$\xspace, $\mathcal{B}$\xspace} & \multicolumn{4}{l|}{\makecell{Fickle, $coin_{\tt A}$-only, $coin_{\tt B}$-only mining}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, $\mathcal{M_B}$\xspace} & \multicolumn{4}{l|}{\makecell{The set of players with $\mathcal{F}$\xspace, $\mathcal{A}$\xspace, $\mathcal{B}$\xspace}} 
\\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering \blue{$c_i$}} & \multicolumn{4}{l|}{\makecell{Computational power of player $i$}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering \blue{$c_{\tt stick}$}} & \multicolumn{4}{l|}{\makecell{Computational power possessed by $\Omega_{\tt stick}$}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering \blue{$c_{\tt max}$}} & \multicolumn{4}{l|}{\makecell{The maximum of $\{c_i\,|\,i\in \Omega\backslash \Omega_{\tt stick}\}$}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $\bm{c}$} & \multicolumn{4}{l|}{\makecell{The vector of computational power \\ possessed by players in $\Omega\backslash \Omega_{\tt stick}$}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering \blue{$\mathcal{G}(\bm{c}, c_{\tt stick})$}} & \multicolumn{4}{l|}{\makecell{The game of players and $\Omega_{\tt stick}$ with \\ computational power $\bm{c}$ and $c_{\tt stick}$}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $r_{\mathcal{F}}, r_{\mathcal{A}}, r_{\mathcal{B}}$} & \multicolumn{4}{l|}{\makecell{The total computational power \\ fraction of $\mathcal{M_F}$\xspace, $\mathcal{M_A}$\xspace, $\mathcal{M_B}$\xspace}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $k$} & \multicolumn{4}{l|}{\makecell{The relative price of $coin_{\tt B}$ to $coin_{\tt A}$}} \\ \hline \parbox[c][0.7cm][c]{2.1cm}{\centering $P_{\tt ag}$} & \multicolumn{4}{l|}{\makecell{The time unit representing the average \\period of generating one block}} \\ \hline \parbox[c][0.7cm][c]{2.1cm}{\centering $N_{\tt de}, N_{\tt in}$} & \multicolumn{4}{l|}{\makecell{The number of considered past blocks when the \\mining difficulty of $coin_{\tt B}$ decreases or increases}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering $D_{\tt A}, D_{\tt B}$} & \multicolumn{4}{l|}{\makecell{The mining difficulty of $coin_{\tt A}$, $coin_{\tt B}$}} \\ \hline \parbox[c][0.35cm][c]{2.1cm}{\centering \blue{$\mathcal{E}(\bm{c}, c_{\tt stick})$}} & \multicolumn{4}{l|}{\makecell{The set of all Nash equilibria
in $\mathcal{G}(\bm{c}, c_{\tt stick})$}} \\ \hline \end{tabular} \end{table} \smallskip \noindent \textbf{Illustration of fickle mining.} Figure~\ref{fig:analysis} illustrates a stream of mining power in $coin_{\tt A}$ and $coin_{\tt B}$, as well as the mining difficulty of $coin_{\tt B}$ over time, caused by the strategies of players. \\ \noindent - Time $t_0~$: At the beginning, $1-r_{\mathcal{B}}$ and $r_{\mathcal B}$ mining powers are used for $coin_{\tt A}$ and $coin_{\tt B}$-mining, respectively. \\ \noindent - Time $t_1~$: The mining difficulty of $coin_{\tt B}$ decreases because it is relatively difficult to find PoWs with $r_{\mathcal B}$ mining power. At that moment, $\mathcal{M_F}$\xspace shifts from $coin_{\tt A}$ to $coin_{\tt B}$, and each of $1-r_{\mathcal F}-r_{\mathcal B}$ and $r_{\mathcal F}+r_{\mathcal B}$ mining powers is used for $coin_{\tt A}$ and $coin_{\tt B}$-mining, respectively. \\ \noindent - Time $t_2~$: Because the mining difficulty of $coin_{\tt B}$ is again adjusted (increases) after $N_{\tt in}$ blocks are found in the $coin_{\tt B}$ system since the last adjustment of the mining difficulty of $coin_{\tt B}$, the mining difficulty of $coin_{\tt B}$ would increase after $\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}$ $P_{\tt{ag}}$ time since it takes $\frac{r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}$ $P_{\tt{ag}}$ to find one valid block on average. Then, $\mathcal{M_F}$\xspace shifts again from $coin_{\tt B}$ to $coin_{\tt A}$ and conducts $coin_{\tt A}$-mining until the mining difficulty of $coin_{\tt B}$ decreases. \\ \noindent - Time $t_3~$: Until the mining difficulty of $coin_{\tt B}$ decreases after $N_{\tt de}$ blocks are found in the $coin_{\tt B}$ system, $\mathcal{M_F}$\xspace would conduct $coin_{\tt A}$-mining (for $\frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}$ $P_{\tt{ag}}$ time). \\ \noindent - This process is continually repeated.
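The phase durations in this illustration follow directly from the normalized difficulties and can be checked numerically (a minimal sketch with illustrative values for $r_{\mathcal F}$ and $r_{\mathcal B}$; not the paper's analysis code):

```python
# One fickle-mining cycle, in P_ag time units. While the coin_B difficulty is
# low (r_B), mining power r_F + r_B finds N_in blocks; while it is high
# (r_F + r_B), only power r_B remains to find N_de blocks.

def cycle_durations(r_F, r_B, N_in, N_de):
    t_low = N_in * r_B / (r_F + r_B)    # fickle miners on coin_B
    t_high = N_de * (r_F + r_B) / r_B   # fickle miners back on coin_A
    return t_low, t_high

t_low, t_high = cycle_durations(r_F=0.2, r_B=0.1, N_in=2016, N_de=2016)
# Blocks arrive faster than one per P_ag in the low phase, slower in the high one,
# so the low-difficulty phase is much shorter than the high-difficulty phase.
assert t_low < 2016 < t_high
```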
\subsection{Payoff function} Next, we describe the payoff functions for our game model. All payoffs are expressed as a unit of $coin_{\tt A}$ and are calculated as a profit density, which is defined as the average reward earned per $1~ P_{\tt{ag}}$ time divided by the player's mining power. In other words, if player $i$ earns a reward $R$ per 1~$P_{\tt{ag}}$ time on average, the payoff would be $\frac{R}{c_i}.$ Player $i$'s payoff function $U_i(s_i, \mathbf{s_{-i}})$ is expressed as follows: \begin{equation} U_i(s_i, \mathbf{s_{-i}})= \begin{cases} & U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B}) \text{ if $s_i=\mathcal{F}$}\\ &U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B}) \text{ if $s_i=\mathcal{A}$}\\ &U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B}) \text{ if $s_i=\mathcal{B}$} \end{cases} \label{eq:payoff} \end{equation} where $\mathbf{s_{-i}}$ indicates the other players' strategies. Here, it suffices to define $U_\mathcal{F}, U_\mathcal{A}, U_\mathcal{B}$ in the ranges $0< r_{\mathcal F}\leq 1,$ $0< r_{\mathcal A} \leq 1$, and $0< r_{\mathcal B} \leq 1,$ respectively; for example, $U_\mathcal{F}$ would be defined when $s_i=\mathcal F$ (i.e., a fickle miner exists, and $0< r_{\mathcal F}$). First, we define the payoff $U_{\mathcal F}$ for a player in $\mathcal{M_F}$\xspace. As shown in Figure~\ref{fig:analysis}, $\mathcal{M_F}$\xspace conducts $coin_{\tt B}$-mining for $\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}$~$P_{\tt{ag}}$ time. \blue{Therefore, a player in $\mathcal{M_F}$\xspace earns the profit $\frac{k\cdot c_i}{r_{\mathcal B}}$ per 1~$P_{\tt{ag}}$ time on average for $\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}$~$P_{\tt{ag}}$ time.
After that, $\mathcal{M_F}$\xspace conducts $coin_{\tt A}$-mining for $\frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}$ $P_{\tt{ag}}$ time, during which a player in $\mathcal{M_F}$\xspace earns the following profit per 1 $P_{\tt{ag}}$ time on average: \begin{equation} \resizebox{.9\hsize}{!}{$ \mbox{AP}_{\mathcal F} := c_i \frac{\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}+\frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}}{(1-r_{\mathcal F}-r_{\mathcal B}) \frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}+(1-r_{\mathcal B}) \frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}}. $} \label{eq:pd1} \end{equation}} \noindent The above formulation is due to the fact that mining powers $1-r_{\mathcal F}-r_{\mathcal B}$ and $1-r_{\mathcal B}$ engage in $coin_{\tt A}$-mining for $\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}$~$P_{\tt{ag}}$ and $\frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}$ $P_{\tt{ag}}$ times, respectively, and thus, the second factor on the right-hand side of \eqref{eq:pd1} represents the inverse of the time-averaged mining difficulty of $coin_{\tt A}$. \blue{Consequently, the payoff of a player in $\mathcal{M_F}$\xspace can be expressed as \begin{equation*} \resizebox{\hsize}{!}{$ U_{\mathcal{F}}(r_{\mathcal F},r_{\mathcal B}) =\left (\frac{k\cdot c_i}{r_{\mathcal B}}\cdot\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}+\mbox{AP}_{\mathcal F} \times\frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}\right)\times Z,$} \end{equation*} where \begin{equation*} Z=\frac{1}{c_i\left(\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}+\frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}\right )}.
\end{equation*} Next, we provide payoffs $U_{\mathcal A}$ and $U_{\mathcal B}$ as follows: \begin{align*} U_{\mathcal{A}}(r_{\mathcal F},r_{\mathcal B}) =& \frac{\mbox{AP}_{\mathcal F}}{c_i},\\ U_{\mathcal{B}}(r_{\mathcal F},r_{\mathcal B}) =& \left(\frac{kN_{\tt in}}{r_{\mathcal F}+r_{\mathcal B}}+\frac{kN_{\tt de}}{r_{\mathcal B}}\right)\times c_i\cdot Z, \end{align*} where we observe that a player in $\mathcal{M_B}$\xspace earns the profit $\frac{k\cdot c_i}{r_{\mathcal B}}$ per 1 $P_{\tt{ag}}$ for $\frac{N_{\tt in}r_{\mathcal B}}{r_{\mathcal F}+r_{\mathcal B}}$ $P_{\tt{ag}}$ time and profit $\frac{k\cdot c_i}{r_{\mathcal F}+r_{\mathcal B}}$ per 1 $P_{\tt{ag}}$ for $\frac{N_{\tt de}(r_{\mathcal F}+r_{\mathcal B})}{r_{\mathcal B}}$ $P_{\tt{ag}}$ time, on average.} \section{Preliminary} \label{sec:preliminary} \subsection{Cryptocurrency} Many cryptocurrencies such as Bitcoin, Ethereum, and Litecoin adopt the PoW mechanism as a consensus algorithm. In the PoW mechanism, when a node solves a cryptographic puzzle, the node can generate and propagate a valid block. Then other nodes append the generated block to the existing blockchain. The puzzle is to find an inverse image of a hash function satisfying a certain condition, and thus the node should spend computational power to solve the cryptographic puzzle. The process of generating a block is called \textit{mining}, and nodes participating in mining are called \textit{miners}. In these systems, the mining difficulty is adjusted to maintain the average time of generating one block. In particular, Bitcoin mining difficulty is adjusted to keep the average period of generating one block at 10 minutes. In addition, to incentivize mining, whenever a miner finds a valid block, the miner earns the reward for one block in compensation for the computational power spent. For example, currently, miners earn the block reward of 12.5 coins in the Bitcoin system when they find one block.
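As a concrete illustration of this difficulty adjustment, a Bitcoin-style retarget rule can be sketched as follows (a simplification for exposition; the real protocol operates on compact difficulty targets and block timestamps):

```python
# Simplified Bitcoin-style retargeting: every 2016 blocks, scale the
# difficulty by (expected time / actual time), clamped to a factor of 4
# as in the Bitcoin protocol.
TARGET_BLOCK_MINUTES = 10
RETARGET_BLOCKS = 2016

def retarget(old_difficulty, actual_minutes):
    expected = RETARGET_BLOCKS * TARGET_BLOCK_MINUTES   # two weeks
    ratio = max(0.25, min(4.0, expected / actual_minutes))
    return old_difficulty * ratio

# If half the mining power leaves, the last 2016 blocks take four weeks,
# so the difficulty halves and the block time returns to 10 minutes.
assert retarget(1.0, 2 * RETARGET_BLOCKS * TARGET_BLOCK_MINUTES) == 0.5
```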
Many people have become involved in mining because of the incentive for mining, and specialized hardware for efficient mining, such as application-specific integrated circuits (ASICs), has appeared. For these reasons, vast computational power is used for mining, and the mining difficulty has increased significantly. Therefore, a \textit{solo miner}, who mines alone, would take a significantly long time to find a valid block and thus would have to wait a long time to earn block rewards. To reduce not only costs but also the variance of their rewards, \textit{mining pools}, where miners gather together for mining, have been organized. Most pools are composed of workers and a manager. The manager gives puzzles to workers, and they solve the puzzles. If a worker solves a given puzzle, the block reward is distributed to the workers in the pool. In past years, there have been many attacks on and problems with cryptocurrency systems, and these attacks or problems have even caused cryptocurrency systems to split. For example, because Bitcoin has become a popular cryptocurrency, the system needs to provide high transaction throughput. To address the scalability issue, several solutions such as Segregated Witness~\cite{segwit} and unlimited block size have been proposed. Because of the debate on the proposed solutions, Bitcoin was eventually split into BTC and BCH in early Aug. 2017. Even though BCH chose to increase the block size limit in order to allow more transactions per block, the mining protocol of BCH was designed to be compatible with that of BTC. Therefore, miners can conduct both BTC and BCH mining with one hardware device. \subsection{Fickle mining} Before Nov. 13, 2017, BCH adjusted the mining difficulty every 2016 blocks to ensure that the average time period for generating a block is 10 minutes, as in the case of BTC.
In doing so, if the time required for generating the past 2016 blocks is longer than two weeks, the mining difficulty decreases, and miners can generate subsequent blocks more easily. In addition, BCH added a new difficulty adjustment algorithm called emergency difficulty adjustment (EDA)~\cite{eda} to decrease the mining difficulty without waiting for 2016 blocks to be generated when it is significantly difficult to find a valid block. Because BTC and BCH have PoW mechanisms compatible with each other, miners can freely switch between them depending on the mining difficulty and the coin price. However, because the change in coin price is hard to predict, some miners immediately change their coin only when the mining difficulty changes; we call this behavior \emph{fickle mining}. Concretely, the fickle miners first conduct BTC mining, observing the changes in the mining difficulties of BTC and BCH. Then, if the BCH mining difficulty is low, they immediately shift to BCH mining. When the BCH mining difficulty increases again due to its difficulty adjustment algorithm, fickle miners immediately shift back to BTC mining. Fickle mining can boost the profits of miners; however, this behavior might cause instability of both BTC and BCH. This mining behavior was easily observed in the Bitcoin system when we monitored the mining power in pools. We collected mining power history data over the course of a week from two popular pools: ViaBTC~\cite{viabtc} and BTC.com~\cite{btc.com}. These two pools support both BTC and BCH mining; miners in the pools can choose either BTC or BCH mining by just clicking one button. Figure~\ref{fig:mining pattern} represents the mining power data of ViaBTC and BTC.com for a week. In the figure, the grey regions show movements of mining power from BTC to BCH mining. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figure/mining_pattern.png} \caption{ Mining power history of ViaBTC and BTC.com (Sep. 29, 2017 $\sim$ Oct. 6, 2017).
The grey regions represent movements of mining power from BTC to BCH.} \label{fig:mining pattern} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figure/mining_pattern2.png} \caption{ Mining power history of ViaBTC (Dec. 5, 2017 $\sim$ Dec. 8, 2017). Grey regions represent movements of mining power from BTC to BCH. Note that we display only the mining power history of ViaBTC because BTC.com did not evidently engage in fickle mining during this period.} \label{fig:mining pattern2} \end{figure} As fickle mining causes a sudden increase in mining power, as shown in the grey zones of Figure~\ref{fig:mining pattern}, many blocks were generated quite quickly in the BCH system. For example, in the BCH system, 2016 blocks were generated within only three days in each grey zone. This caused the blockchain of BCH to be thousands of blocks ahead of BTC, and the halving time of the block reward in BCH was brought forward. To address this issue, BCH performed another hard fork on Nov. 13, 2017~\cite{hardfork}. Currently, BCH adjusts the difficulty for \textit{each} block based on the previous 144 blocks as a moving window~\cite{newdaa}. To determine if it is possible for miners to conduct fickle mining even after the hard fork of Nov. 13, 2017, we investigated the BCH mining power data of ViaBTC for four days (Dec. 5, 2017 $\sim$ Dec. 8, 2017). Figure~\ref{fig:mining pattern2} represents the BCH mining power data of ViaBTC during this time period; as is evident from the figure, some miners still conduct fickle mining. Because the BCH mining difficulty is adjusted more quickly than before the hard fork, fickle miners must switch their mining power more quickly than before. Indeed, fickle mining can occur under any mining difficulty adjustment algorithm. \section{Related work} \label{sec:related} In this section, we review previous studies related to mining in PoW systems. Kroll et al.
considered the Bitcoin mining process as a game among multiple players~\cite{kroll2013economics} and showed that a miner possessing 51\% mining power can be motivated to disrupt the Bitcoin system. Several works~\cite{johnson2014game, laszka2015bitcoin} modeled and analyzed a game between two pools that can launch denial of service attacks against each other. Eyal and Sirer introduced the selfish mining strategy, where a malicious miner successfully mines blocks but does not immediately broadcast them; instead, the attacker temporarily withholds the blocks~\cite{eyal2014majority}. Many researchers have intensively studied ways to optimize and extend selfish mining~\cite{sapirshtein2016optimal,nayak2016stubborn, gervais2016security, zhang2017necessity}. Bonneau introduced bribery attacks as a way for an attacker to increase her mining power~\cite{B16a}. Lewenberg et al. considered a mechanism of sharing rewards among pool miners as a cooperative game~\cite{lewenberg2015bitcoin}. In 2015, Eyal modeled a game between two pools that execute block withholding (BWH) attacks~\cite{eyal2015miner}. As concurrent work, Luu et al.~\cite{luu2015power} modeled a power splitting game to find an optimized strategy for a BWH attacker. Kwon et al.~\cite{kwon2017selfish} proposed a new attack called a fork after withholding (FAW) attack against pools. Also, several works~\cite{carlsten2016instability, tsabary2018gap} analyzed a transaction-fee regime in PoW systems, where miners receive incentives for mining as transaction fees. Moreover, because many cryptocurrencies are competing with each other, there can be another incentive to execute 51\% attacks. Considering this fact, Bonneau revisited the 51\% attack with some basic analysis~\cite{bonneau2018hostile}. Recently, Ma et al.~\cite{ma2018market} considered a mining game of multiple miners and concluded that the openness of the Bitcoin system causes the need for vast mining power.
Another study~\cite{prat2018equilibrium} examined the relation between the Bitcoin/USD exchange rate and Bitcoin mining power. They first proposed an industry equilibrium model to forecast the mining power depending on the Bitcoin/USD exchange rate. Then, they showed that the real mining power data and the simulated mining power according to their model are similar. Our study focuses on the relation between two coins that have compatible PoW mechanisms with each other, and on the miners' behavior between the two coins. Furthermore, our model can be used to forecast the ratio of mining power between two coins. To the best of our knowledge, this is the first study of the effects of fickle mining.
\section{Introduction} \label{sec1} \subsection{Background and Motivation} Analog-to-digital converters (ADCs) are known to consume most of the power dissipated at a base station \cite{Bai15}. It is shown that the power consumed by ADCs grows exponentially with their resolution level and linearly with their sampling rate \cite{paper_40,paper_46}. Thus, using high-resolution quantization with high sampling rates can significantly degrade the energy efficiency of a communication system. With the introduction of massive multiple-input-multiple-output (MIMO) and millimeter wave (mmWave) technology, this is even more prominent in next generation wireless systems, because massive MIMO systems use hundreds of antennas, each connected to a dedicated radio frequency (RF) chain equipped with high-resolution ADCs. MmWave systems, on the other hand, use much larger bandwidths that require higher sampling rates. In fact, the typical power consumption of a high speed ($\geq 20$ GSamples/s) and high-resolution ($8$-$12$ bits) ADC is around $500$ [mWatts]. Therefore, a future mmWave massive MIMO system with $256$ RF chains and $512$ ADCs will require around $256$ [Watts] of power \cite{paper_55}, which is potentially unaffordable. Consequently, the idea of replacing power-hungry high-resolution ADCs with low-resolution ADCs could provide a viable solution to the power consumption concerns in future wireless systems. Indeed, low-resolution ADCs have long been known to provide significant energy savings in digital transceiver implementations \cite{paper_22,paper_23, Madow09}. Their other benefits include simplification in design (especially with $1$-bit ADCs) and reduction in transceiver form-factor \cite{Dabeer10,paper_20,paper_55}.
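The exponential-in-bits, linear-in-rate scaling can be sketched with a figure-of-merit model (the FOM value below is hypothetical, chosen only to roughly reproduce the $500$ [mWatts] figure quoted above; it is not taken from the cited references):

```python
# ADC power model: P ~ FOM * 2**bits * f_s, i.e., exponential in resolution
# and linear in sampling rate. The figure of merit (FOM) is an assumed value.
FOM_JOULES_PER_STEP = 25e-15   # hypothetical: 25 fJ per conversion step

def adc_power_watts(bits, sample_rate_hz, fom=FOM_JOULES_PER_STEP):
    return fom * (2 ** bits) * sample_rate_hz

p_single = adc_power_watts(bits=10, sample_rate_hz=20e9)   # ~0.5 W per ADC
p_system = 512 * p_single                                  # ~260 W for 512 ADCs
assert 0.4 < p_single < 0.6
assert 200 < p_system < 300
```

Under this model, dropping from 10 bits to 1 bit reduces the per-ADC power by a factor of $2^9$, which is the motivation for low-resolution designs.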
Furthermore, future long-term evolution (LTE) networks are also expected to support a wide range of Internet-of-Things (IoT) applications through protocols such as LTE-M, NB-IoT and EC-GSM, where devices are usually battery-power limited \cite{IoT5g}. In these future application scenarios, low-resolution ADC based digital transceivers have the potential to prolong the battery lifetime of remote IoT devices as well, thereby lessening the operating costs and the need for frequent human intervention. This paper investigates the performance of a wireless communication system with low-resolution ADCs from a symbol error probability $\paren{\mathsf{SEP}}$ perspective. In our analysis, we consider the optimum maximum likelihood (ML) detector, and to provide a thorough discussion, we focus on single-input single-output (SISO) channels. Most of the previous work on low-resolution ADCs has focused on the extreme case of $1$-bit quantization \cite{paper_3, paper_8, paper_20, paper_25, paper_22, Lee17, paper_32, paper_34, paper_33, paper_26, paper_29, paper_27, paper_28, paper_30}, where a simple comparator forwards the sign of the signal to the digital domain and discards all the information about the analog signal amplitude. Such comparators consume negligible power and do not require an automatic gain control circuit. Thus, they lead to cost- and power-effective implementation of RF chains \cite{Lee17}. In this paper, we take a different approach in which we allow the number of bits in the quantizer to vary until the transceiver architecture becomes asymptotically optimum in terms of communication reliability. Focusing on a special phase quantizer, we derive analytical expressions for the average $\mathsf{SEP}$ when the channels are subject to Nakagami-$m$ fading and the transmitted bits are modulated using $M$-PSK modulation.
More importantly, our asymptotic results reveal a fundamental ternary behaviour in the error probability performance of a wireless communication system with low-resolution ADCs, providing an important insight to system designers when choosing the required number of quantization levels. The use of low-resolution ADCs in wireless communication systems has been investigated in various aspects. The performance of communication systems with low-resolution ADCs is lower than that of idealized systems without quantization or traditional systems with high-resolution ADCs. Therefore, performance analysis of low-resolution quantization is a key research area. It was shown in \cite{paper_26} that the capacity of the point-to-point MIMO channel with $1$-bit ADCs is lower bounded by the rank of the channel in the high signal-to-noise ratio ($\mathsf{SNR}$) regime. Results in \cite{paper_27} and \cite{paper_28} show that the channel capacity reduces by a factor of $2/\pi$ (1.96 dB) in the low-$\mathsf{SNR}$ regime for a MIMO system with $1$-bit ADCs, when compared to a conventional high-resolution system. Further, the results in \cite{paper_30} establish that the performance loss due to employing $1$-bit ADCs can be overcome by having approximately 2.5 times more antennas at the base station. The work in \cite{paper_31} focuses on the information rate of a quantized block non-coherent channel with $1$-bit ADCs. The results in that paper show that around $80$-$85\%$ of the mutual information attained with unquantized observations can also be attained with $3$-bit quantization for QPSK modulation and $\mathsf{SNR}$ greater than $2$-$3$ dB. In \cite{paper_36}, Liang {\em et al.} presented a mixed-ADC architecture for MIMO systems in which some of the high-resolution ADCs were replaced with $1$-bit ADCs. Their results show that the proposed architecture can achieve nearly the same performance as the conventional architecture while reducing the energy consumption considerably.
Signal detection rules developed for receivers with high-resolution ADCs often become sub-optimal for receivers with low-resolution ADCs \cite{paper_55}. In \cite{Mezghani_12}, the authors propose a linear minimum mean square error (LMMSE) receiver when the in-phase and quadrature components of the received signal are independently quantized by using a low-resolution ADC. They provide an approximation for the mean squared error between the transmitted symbol and the received one, and derive an optimized linear receiver which performs better than the conventional Wiener filter. Results in \cite{Mezghani_12} were further extended to an iterative decision feedback receiver with quantized observations in \cite{paper_13}. For the same quantizer structure of independent quantization of in-phase and quadrature signal components, an ML detector was obtained in \cite{paper_3} by using {\em only} $1$-bit ADCs. The complexity of the ML detector proposed in \cite{paper_13} grows exponentially with the constellation size, the number of transmit antennas and the network size, which makes it impractical for real-world deployments. To overcome this difficulty, a near-optimum ML detector was proposed in \cite{paper_8} by using convex optimization techniques. Although the $\mathsf{SEP}$ performance of the proposed near-optimum ML detector is better than that of linear detectors, it has been numerically observed that it still suffers from an error floor as $\mathsf{SNR}$ increases \cite{paper_55, paper_8}. Complementing this critical observation, in our work, we show the existence of a universal error floor below which the average $\mathsf{SEP}$ cannot be pushed down for any $M$-ary modulation scheme and quantizer structure if the number of quantization bits is less than $\log_2M$. \subsection{Main Contributions} In this paper, we consider a point-to-point wireless communication system, where data transmission is corrupted by fading and noise. 
Motivated by the capacity achieving property of circularly symmetric input distributions for low-resolution ADCs \cite{Krone10}, we assume that the transmitted symbols are modulated using $M$-ary phase shift keying ($M$-PSK). For such a system, we design a low-resolution ADC that quantizes the phase of the received signal in such a way that only the information about the quantization region in which the received signal lands is sent to the detector. The use of phase quantization in our model is further motivated by the following two factors. First, considering channel impairments as phase rotations of the transmitted signals, quantization and decision regions for $M$-PSK modulation are conveniently modelled as convex cones in the complex plane \cite{book_5}, without requiring the use of automatic gain control. Second, phase quantizers can be implemented using $1$-bit ADCs that consist of simple comparators, and they consume negligible power (on the order of milliwatts). Our main contributions are summarized as follows. \begin{itemize} \item For any $M$-ary modulation scheme and quantizer structure, we show the existence of an error floor below which the average symbol error probability ($\mathsf{SEP}$) cannot be pushed if the number of quantization bits $n$ is less than $\log_2 M$. \item For $M$-PSK modulation with $M \geq 2$, we derive the optimum ML detection rule for signal detection with low-resolution ADCs. \item We obtain analytical expressions for the average $\mathsf{SEP}$ attained by the derived ML rule with $n$-bit quantization when the wireless channel is subject to Nakagami-$m$ fading. \item We establish a fundamental ternary average $\mathsf{SEP}$ behaviour with low-resolution ADCs and $M$-PSK modulation under the Nakagami-$m$ fading model. In particular, we show that the decay exponent of the average $\mathsf{SEP}$ is the same as that of infinite-bit quantization, which is equal to $m$, when $n$ is larger than or equal to $\log_2M + 1$. 
We also show that it is equal to $\frac12$ and $0$ for $n=\log_2M$ and $n<\log_2M$, respectively. \item We perform a detailed numerical analysis in the high-$\mathsf{SNR}$ regime to corroborate the derived analytical results and to illustrate the energy gains obtained by low-resolution ADCs. \end{itemize} From a system design point of view, our results show that using one additional bit on top of $\log_2 M$ of them can achieve optimum communication robustness in the high-$\mathsf{SNR}$ regime. In particular, for fading environments with a large value of $m$, using an extra quantization bit improves communication reliability significantly. On the other hand, for small values of $m$, it may be more beneficial to use only $\log_2 M$ bits, sacrificing little communication robustness while doubling the energy efficiency of the system. \subsection{Notation} We use uppercase letters to represent random variables and calligraphic letters to represent sets. We use $\R$, $\R^2$ and $\N$ to denote the real line, $2$-dimensional Euclidean space and natural numbers, respectively. For a pair of integers $i \leq j$, we use $\sqparen{i:j}$ to denote the discrete interval $\brparen{i, i+1, \ldots, j}$. For two functions $f$ and $g$, we will say $f(x) = \BO{g(x)}$ as $x \ra x_0$ if $\abs{f(x)} \leq c \abs{g(x)}$ for some $c > 0$ when $x$ is sufficiently close to $x_0$. Similarly, we will say $f(x) = \OO{g(x)}$ as $x \ra x_0$ if $\abs{f(x)} \geq c \abs{g(x)}$ for some $c > 0$ when $x$ is sufficiently close to $x_0$. We write $f(x) = \TO{g(x)}$ as $x \ra x_0$ if $f(x) = \BO{g(x)}$ and $f(x) = \OO{g(x)}$ as $x \ra x_0$. Finally, we will say $f(x) = \LO{g(x)}$ as $x \ra x_0$ if $\lim_{x \ra x_0} \abs{\frac{f(x)}{g(x)}} = 0$. The set of complex numbers $\C$ is $\R^2$ equipped with the usual complex addition and complex multiplication. 
We write $z = \re{z} + \jmath \im{z}$ to represent a complex number $z \in \C$, where $\jmath = \sqrt{-1}$ is the {imaginary unit} of $\C$, and $\re{z}$ and $\im{z}$ are called, respectively, the {\em real} and {\em imaginary} parts of $z$ \cite{Remmert91}. Every $z \in \C$ also has a {\em polar} representation $z = \abs{z}\e{\jmath \theta} = \abs{z}\paren{\cos\paren{\theta} + \jmath \sin\paren{\theta}}$, where $\abs{z} \defeq \sqrt{\re{z}^2 + \im{z}^2}$ is the magnitude of $z$ and $\theta = \Arg{z} \in [-\pi, \pi)$ is called the (principal) argument of $z$.\footnote{The range of $\Arg{z}$ can be taken to be any interval of length $2\pi$. For our purposes, taking its range to be $[-\pi, \pi)$ will help to simplify the notation for some integral expressions.} As is common in the communications and signal processing literature, $\Arg{z}$ will also be called the phase of $z$ (modulo $2\pi$). For a complex random variable $Z = \re{Z} + \jmath \im{Z}$, we define its mean and variance as $\ES{Z} \defeq \ES{\re{Z}} + \jmath \ES{\im{Z}}$ and $\V{Z} \defeq \ES{\abs{Z - \ES{Z}}^2}$, respectively. We say that $Z$ is {\em circularly-symmetric} if $Z$ and $\e{\jmath \theta}Z$ induce the same probability distribution over $\C$ for all $\theta \in \R$ \cite{Picinbono94,Koivunen12}. For $x > 0$, $\log x$ and $\log_2 x$ will denote the natural logarithm of $x$ and the logarithm of $x$ in base $2$, respectively. \section{System Setup} \label{sec2} \subsection{Channel Model and Signal Modulation} \label{Subsection: Channel Model} We consider the classical point-to-point wireless channel model with flat-fading. 
For this channel, the received discrete-time baseband equivalent signal $Y$ can be expressed by \begin{equation}\label{eq1} Y = \sqrt{\mathsf{SNR}}H X + W, \end{equation} where $X \in \mathcal{C} \subset \C$ is the transmitted signal, $\mathcal{C}$ is the constellation set of information signals in $\C$, $\mathsf{SNR}$ is the ratio of the transmitted signal energy to the additive white Gaussian noise (AWGN) spectral density, $H \in \C$ is the unit power channel gain between the transmitter and the receiver, and $W$ is the circularly-symmetric zero-mean unit-variance AWGN, i.e., $W \sim \mathcal{CN}(0,1)$. In order to formalize the receiver architecture and the optimum signal detection problem below, we will assume that $\mathcal{C}=\brparen{\e{\jmath \pi \paren{\frac{2k +1}{M}-1}}}_{k=0}^{M-1}$ in the remainder of the paper, which is the classical $M$-ary phase shift keying ($M$-PSK) signal constellation\footnote{This choice of $\mathcal{C}$ ensures that the phase of $X$ always lies in $[-\pi, \pi)$.}, and for ease of exposition, we only consider the case in which $M$ is an integer power of $2$\footnote{Extensions of our results to the more general case of $M$ being any positive integer are straightforward, albeit with more complicated notation and separate analyses in some special cases.}. \subsection{Receiver Architecture} \label{Subsection: Receiver} The receiver architecture is based on a low-resolution ADC. As illustrated in Fig. \ref{sys_model}, the received signal $Y$ is first sent through a low-resolution quantizer, and then the resulting quantized signal information is used to determine the transmitted symbol $X$. More specifically, if $n$ bits are used to quantize $Y$, the quantizer $Q$ divides the complex domain $\C$ into $2^n$ quantization regions and outputs the index of the region in which $Y$ lies as an input to the detector. 
As such, we declare $Q(Y) = k$ if $Y \in \mathcal{R}_k$ for $k \in \sqparen{0:2^n -1}$, where $\mathcal{R}_k \subseteq \C$ is the $k$th quantization region. Since information is encoded in the phase of $X$ with the above choice of constellation points, we choose $\mathcal{R}_k$ as the convex cone given by \begin{eqnarray*} \mathcal{R}_k = \brparen{z \in \C: \frac{2\pi}{2^n}k \leq \Arg{z} + \pi < \frac{2\pi}{2^n}\paren{k+1}}. \end{eqnarray*} \begin{figure}[!t] \center \includegraphics[width=0.8\textwidth]{figures/SISO_sys_model2.eps} \hspace{10mm} \caption{The receiver architecture with low-resolution quantization. The signal detector observes only the $n$-bit quantized versions of $Y$ to estimate the transmitted signal.} \label{sys_model} \end{figure} We also assume that full channel state information is available at the receiver, an assumption justified by previous work on channel estimation with low-resolution ADCs \cite{Dabeer10, Mezghani2010, Mo2018, Wen16}. In particular, it was shown in \cite{Dabeer10} that it is possible to attain near full-precision channel estimation performance with the use of low-resolution ADCs by increasing the number of training symbols in the closed-loop estimation process. Further, mixed-ADC architectures can also be employed to achieve high channel estimation accuracy \cite{paper_36}. \section{Optimum Signal Detection} \label{Subsection: Signal Detection} The aim of the detector is to minimize the $\mathsf{SEP}$ by using the knowledge of $Q(Y)$ and the channel state information, which can be represented as selecting a signal point $\hat{x}\paren{k, h}$ satisfying \begin{eqnarray} \hat{x}\paren{k, h} \in \underset{x \in \mathcal{C}}{\arg\max }\ \PR{X = x \big| Q(Y) = k, H = h}, \label{Eqn: opt detector} \end{eqnarray} for $h \in \C$ and $k \in \sqparen {0: 2^n-1}$. 
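For concreteness, the mapping $Y \mapsto Q(Y)$ above can be sketched in a few lines of code. The following is our own illustrative sketch, not part of the proposed hardware architecture; it assumes numpy and the convention $\Arg{z} \in [-\pi, \pi)$ used throughout the paper:

```python
import numpy as np

def phase_quantize(y: complex, n: int) -> int:
    """n-bit phase quantizer: return k such that y lies in R_k,
    where R_k = {z : 2*pi*k/2^n <= Arg(z) + pi < 2*pi*(k+1)/2^n}."""
    width = 2 * np.pi / 2**n            # angular width of each convex cone R_k
    phase = np.angle(y) + np.pi         # shift Arg(y) into [0, 2*pi)
    return int(phase // width) % 2**n   # modulo guards the boundary Arg(y) = pi

# A received signal with phase pi/4 lands in region k = 2 of a 2-bit quantizer:
print(phase_quantize(np.exp(1j * np.pi / 4), 2))  # -> 2
```

Only the integer index $k$, and not the analog amplitude, is passed on to the detector, which is precisely what makes the architecture power-efficient.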
The main performance figure of merit for the optimum detector is the average $\mathsf{SEP}$ given by \begin{eqnarray} p\paren{\mathsf{SNR}} = \PR{X \neq \hat{x}\paren{Q\paren{Y}, H}}. \label{Eqn: SEP} \end{eqnarray} It is important to note that $p\paren{\mathsf{SNR}}$ depends on $\mathsf{SNR}$ as well as the number of quantization bits. Our first result indicates that, for $n < \log_2 M$, there is an $\mathsf{SNR}$-independent error floor below which the average $\mathsf{SEP}$ cannot be pushed. The following theorem establishes this result formally. \begin{theorem} \label{Theorem: lower bound n<log2M} Let $p_{\min}$ be the probability of the least probable transmitted symbol. If $n <\log_2 M$, then for any choice of modulation scheme and quantizer structure \begin{eqnarray} p\paren{\mathsf{SNR}} \geq \frac{M-2^n}{2^n} p_{\min} \label{Eqn: Universal SEP Lower Bound} \end{eqnarray} for all $\mathsf{SNR} \geq 0$. \end{theorem} \begin{IEEEproof} See Appendix \ref{Appendix:Theorem: lower bound n<log2M}. \end{IEEEproof} Firstly, we note that the error floor in \eqref{Eqn: Universal SEP Lower Bound} is always a valid lower bound since $p_{\min} \leq \frac{1}{M}$. Secondly, it does not depend on the fading model. Average $\mathsf{SEP}$ values below $\frac{M-2^n}{2^n} p_{\min}$ cannot be achieved due to the inherent inability of low-resolution ADC receivers to resolve different signal points when $n < \log_2M$. We also note that Fano's inequality can be used to obtain similar, perhaps tighter, lower bounds on $p\paren{\mathsf{SNR}}$ \cite{Cover91}. However, this would require the calculation of the equivocation between $X$ and $Q(Y)$ for each choice of modulation scheme and quantizer structure, and hence it is not clear how the minimization over modulation and quantizer selections would be carried out in this approach. 
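As a quick numerical illustration of Theorem \ref{Theorem: lower bound n<log2M}, the floor $\frac{M-2^n}{2^n} p_{\min}$ is easy to tabulate. The sketch below is our own illustration and assumes equiprobable symbols, so that $p_{\min} = 1/M$:

```python
def sep_floor(M: int, n: int) -> float:
    """Universal error floor (M - 2^n)/2^n * p_min from Theorem 1, for
    equiprobable symbols (p_min = 1/M); vacuous once 2^n >= M."""
    p_min = 1.0 / M
    return max(M - 2**n, 0) / 2**n * p_min

# 8-PSK with a 2-bit quantizer (n < log2 M = 3): no detector beats SEP = 1/8.
print(sep_floor(8, 2))  # -> 0.125
# With n = 3 bits the bound collapses to zero, consistent with Theorem 1.
print(sep_floor(8, 3))  # -> 0.0
```

Note that the floor is independent of $\mathsf{SNR}$ and of the fading model, as stated above.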
Next, we will assume that all signal points in $\mathcal{C}$ are equiprobable, with probability $\frac{1}{M}$, and hence the optimum detector in \eqref{Eqn: opt detector} is equivalent to the ML detector given by \begin{align} \hat{x}\paren{k, h} \in \underset{x \in \mathcal{C}}{\arg\max }\ \PR{Q(Y) = k \big| X = x, H = h} \label{Eqn: ML detector Q} \end{align} for $h \in \C$ and $k \in \sqparen {0: 2^n-1}$. Since $Y$ is a proper complex Gaussian random variable with mean $\ES{Y} = \sqrt{\mathsf{SNR}}hx$ and variance $\V{Y} = 1$, we can write the probability in \eqref{Eqn: ML detector Q} as \begin{eqnarray} \label{Eqn: Prob of landing} \PR{Q(Y) = k \big| X = x, H = h} = \int_{\mathcal{R}_k} \frac{1}{\pi} \exp\paren{-\abs{y - \sqrt{\mathsf{SNR}}hx}^2} dy, \label{Eqn: ML Probability} \end{eqnarray} where the integral in \eqref{Eqn: ML Probability} is with respect to the standard Borel measure in $\C$ \cite{Massey93}. The next theorem describes the operation of the ML detector for the above signal detection problem. \begin{theorem} \label{Theorem: ML Detector} Assume $H$ has a continuous probability density function (pdf). Then, $\hat{x}\paren{k, h}$ is unique with probability one, i.e., the set of $h$ values for which $\underset{x \in \mathcal{C}}{\arg\max }\ \PR{Q(Y) = k \big| X = x, H = h}$ is a singleton has probability one, and the ML detection rule for the low-resolution ADC based receiver architecture can be given as \begin{eqnarray} \hat{x}\paren{k, h} = \underset{x \in \mathcal{C}}{\arg\min} \ \dist{\sqrt{\mathsf{SNR}}hx, \mathcal{H}_k}, \end{eqnarray} where $h \in \C$, $k \in \sqparen{0: 2^n-1}$, $\dist{z, \mathcal{A}}$ is the distance between a point $z \in \C$ and a set $\mathcal{A} \subseteq \C$, which is defined as $\dist{z, \mathcal{A}} \defeq \inf_{s \in \mathcal{A}} \abs{z - s}$, and $\mathcal{H}_k = \brparen{z \in \C: \Arg{z} + \pi = \frac{\pi}{2^n}\paren{2k+1}}$. \end{theorem} \begin{IEEEproof} See Appendix \ref{Appendix: Optimum Detector Proof}. 
\end{IEEEproof} We note that the half-hyperplane $\mathcal{H}_k$ in Theorem \ref{Theorem: ML Detector} bisects the $k$th quantization region $\mathcal{R}_k$ into two symmetric regions. Hence, Theorem \ref{Theorem: ML Detector} indicates that the probability mass accumulated in the region $\mathcal{R}_k$ is largest when the unit-variance proper complex Gaussian distribution whose mean is closest to $\mathcal{H}_k$ is integrated over $\mathcal{R}_k$. Next, we use the structure of the ML detection rule to derive integral expressions for $p\paren{\mathsf{SNR}}$ for $M\geq 2$ in Section \ref{Section: SEP}. Further, in order to characterize the communication robustness with low-resolution ADCs in the high-$\mathsf{SNR}$ regime, we also provide a detailed analysis of the asymptotic decay exponent of $p\paren{\mathsf{SNR}}$ in Section \ref{Section: Diversity Order}. \section{Average Symbol Error Probability} \label{Section: SEP} \subsection{Symbol Error Probability for $n \geq \log_2 M$} Let us first obtain a key lemma that simplifies the calculations for deriving $p\paren{\mathsf{SNR}}$ when the number of quantization bits is at least $\log_2 M$. Note that this lemma holds for general circularly-symmetric fading processes without assuming any specific functional form. \begin{lemma} \label{Lemma: p(snr) = 2^n P_00} Let $H = R \e{\jmath \Lambda}$ be a circularly-symmetric fading coefficient with $R$ and $\Lambda$ denoting the magnitude and the phase of $H$, respectively. Let the joint pdf of $R$ and $\Lambda$ be given by $f_{R, \Lambda}\paren{r, \lambda} = \frac{1}{2\pi} f_R\paren{r}$ for $\lambda \in \parenro{-\pi, \pi}$ and $r \geq 0$. 
Then, $p\paren{\mathsf{SNR}}$ is equal to \begin{align}\label{Eqn: p(SNR) Eqn} p\paren{\mathsf{SNR}} &= \frac{2^{n-1}}{\pi} \int_{\frac{\pi}{M}-\frac{\pi}{2^n}}^{\frac{\pi}{M}+\frac{\pi}{2^n}}\int_{0}^{\infty}\PR{\sqrt{\mathsf{SNR}}r \e{\jmath\theta} + W \notin \mathcal{E}}f_{R}\paren{r} \,dr \, d\theta, \end{align} where $\mathcal{E} = \brparen{z \in \C: 0 \leq \Arg{z} < \frac{2\pi}{M}}$. \end{lemma} \begin{IEEEproof} See Appendix \ref{Appendix:p(snr)=2^nP00 Proof}. \end{IEEEproof} Using Lemma \ref{Lemma: p(snr) = 2^n P_00}, we next obtain integral expressions for $p\paren{\mathsf{SNR}}$ when $H$ is circularly-symmetric with the generalized Nakagami-$m$ fading magnitude. We note that the Nakagami-$m$ fading model characterizes a broad range of fading conditions, from severe to moderate fading and, in the limit, no fading, as $m$ varies over $\parenro{0.5, \infty}$ \cite{Nakagami60, Stuber:2001}, and it reduces to Rayleigh fading for $m=1$. Considering these advantages, we will focus on the Nakagami-$m$ fading model for $H$ to derive integral expressions for $p\paren{\mathsf{SNR}}$ in the remainder of the paper. This will be done for all parameter combinations of $M \geq 2$ (as an integer power of $2$), $n \geq \log_2 M$ and $m \geq 0.5$. It will be seen that the derived integral expressions are easy to calculate numerically and they reduce to simple closed-form expressions in some special cases. Further, we will also show that using $\log_2 M + 1$ bits is enough to achieve the maximum communication robustness achieved by using an infinite number of quantization bits. \begin{theorem} \label{Theorem: General SEP} Assume $H$ is a unit-power fading coefficient distributed according to a circularly-symmetric distribution with Nakagami-$m$ fading magnitude. Let $\mathcal{Q}\paren{\cdot}$ be the complementary distribution function of the standard normal random variable and $\Gamma\paren{\cdot}$ be the gamma function \cite{book_1}. 
Then, for $n \geq \log_2 M$ and $M \geq 2$, $p\paren{\mathsf{SNR}}$ is given by \small \begin{align}\label{Eqn:General SEP} p\paren{\mathsf{SNR}} &= \left\{ \begin{array}{ll} p_1\paren{\mathsf{SNR}} + p_2\paren{\mathsf{SNR}} - p_3\paren{\mathsf{SNR}} + p_4\paren{\mathsf{SNR}} & M\geq 4 \\ p_2\paren{\mathsf{SNR}} & M=2 \end{array} \right. ,\mbox{\normalsize where} \\ p_1\paren{\mathsf{SNR}} &= \frac{2^{n-1}m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \left( \frac{\mathsf{SNR}}{\sin^2\beta}\cos^2\theta + m \right)^{-m} d\theta d\beta, \label{Eqn: p_1(SNR)}\\ p_2\paren{\mathsf{SNR}} &= \frac{2^{n-1}m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \left( \frac{\mathsf{SNR}}{\sin^2\beta}\sin^2\theta + m \right)^{-m} d\theta d\beta, \label{Eqn: p_2(SNR)} \\ p_3\paren{\mathsf{SNR}} &=\frac{2^{n-1}m^m}{\pi^{3}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \paren{ \frac{\mathsf{SNR} \cos^2\theta}{\sin^2\beta} + \frac{\mathsf{SNR}\sin^2\theta}{\sin^2\gamma} + m }^{-m} d\theta d\beta d\gamma, \label{Eqn: p_3(SNR)} \\ p_4\paren{\mathsf{SNR}} &=\frac{2^{n}m^m}{\pi\sqrt{\pi}\Gamma\paren{m}}\int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}}\int_{0}^{\infty}\int_{-\sqrt{\mathsf{SNR}}r \cos \theta}^{\infty}\mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sec\paren{\frac{2\pi}{M}}\sin\paren{\frac{2\pi}{M}-\theta} + \sqrt{2}w\tan\paren{\frac{2\pi}{M}}} \label{Eqn: p_4(SNR)} \nonumber \\ & \hspace{9.5cm}\cdot r^{2m-1}\exp\paren{-\paren{w^2 + mr^2}}dwdrd\theta. \end{align} \normalsize \end{theorem} \begin{IEEEproof} In the following, we provide the proof for $M\geq 4$; the proof for $M=2$ is similar and simpler. 
With a slight abuse of notation, we define \begin{align} \label{Eqn: p(SNR, h) Expression 1} p\paren{\mathsf{SNR}, h} = \PR{\sqrt{\mathsf{SNR}}r \,\e{\jmath\theta} + W \notin \mathcal{E}}, \end{align} where the set $\mathcal{E}$ is defined as in Lemma \ref{Lemma: p(snr) = 2^n P_00}. The probability in \eqref{Eqn: p(SNR, h) Expression 1} can be calculated by conditioning on the real part of $W$, which is denoted by $\re{W}$. By using Fig. \ref{SEP calculation} as a visual guide, we can write $p\paren{\mathsf{SNR},h}$ after conditioning on $\re{W}$ as \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/SNR_Logic.eps} \caption{An illustration of average $\mathsf{SEP}$ calculations. If the noise does not drag the original $M$-PSK constellation point rotated by the channel $h$ beyond the region $\mathcal{E}$ (shaded area), there will not be any errors in decoding.} \label{SEP calculation} \end{figure} \begin{align} \PR{\sqrt{\mathsf{SNR}}r\e{\jmath\theta} + W \notin \mathcal{E} \,\big|\, \re{W} = w} \nonumber \\ & \hspace{-6cm}= \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}} r \sin\theta} +\mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sec\paren{\frac{2\pi}{M}}\sin\paren{\frac{2\pi}{M}-\theta} + \sqrt{2}w\tan\paren{\frac{2\pi}{M}}} \label{Eqn: p(SNR, h) Expression 3} \end{align} for $w \geq -\sqrt{\mathsf{SNR}} r \cos \theta$. Similarly, for $w < - \sqrt{\mathsf{SNR}} r \cos\theta$, we get \begin{eqnarray} \PR{\sqrt{\mathsf{SNR}}r\e{\jmath\theta} + W \notin \mathcal{E} \,\big|\, \re{W} = w} = 1. 
\label{Eqn: p(SNR, h) Expression 2} \end{eqnarray} Integrating \eqref{Eqn: p(SNR, h) Expression 3} and \eqref{Eqn: p(SNR, h) Expression 2} with respect to the pdf of $\re{W}$, which is given by $f_{\re{W}}\paren{w} = \frac{1}{\sqrt{\pi}}\e{-w^2}$, we obtain $p\paren{\mathsf{SNR},h}$ as \begin{align}\label{Eqn: Conditional SEP 2} p\paren{\mathsf{SNR},h} \nonumber \\ &\hspace{-1.5cm}= \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\cos \theta} + \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sin \theta}-\mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\cos \theta}\mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sin \theta} \nonumber \\ &\hspace{-1.5cm}+ \frac{1}{\sqrt{\pi}}\int_{-\sqrt{\mathsf{SNR}}r\cos \theta}^{\infty}\mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sec\paren{\frac{2\pi}{M}}\sin\paren{\frac{2\pi}{M}-\theta} + \sqrt{2}w\tan\paren{\frac{2\pi}{M}}} \,\e{-w^2}\,dw. \end{align} For Nakagami-$m$ fading distribution with shape parameter $m \geq 0.5$ and spread parameter $\Omega > 0$ \cite{paper_49}, we can write the pdf of the fading magnitude as $f_{R}\paren{r} = \frac{2m^m}{\Gamma(m)\,\Omega^m}r^{2m-1}\e{-\frac{m}{\Omega}r^2}$ for $r \geq 0$. We set $\Omega = 1$ in our calculations to make sure that $H$ has unit-power. We average $p\paren{\mathsf{SNR},h}$ over the fading distribution and solve the resulting integral based on Lemma \ref{Lemma: p(snr) = 2^n P_00}, and the fact that $\theta$ lies between $0$ and $\frac{2\pi}{M}$, to obtain $p\paren{\mathsf{SNR}}$ in Theorem \ref{Theorem: General SEP}. \end{IEEEproof} \subsection{Centering Property: Impact of Quantization Bits on the Average $\mathsf{SEP}$ } In this subsection, we will present an intuitive explanation as to why $p\paren{\mathsf{SNR}}$ improves with increasing number of quantization bits. In particular, we will observe that one extra bit, on top of $\log_2 M$ of them, provides a desirable centering property that steers the received signal away from the error-prone decision boundaries. 
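The benefit of the extra bit can also be observed by direct simulation. The following Monte Carlo sketch is our own illustration, not part of the paper's numerical study; it assumes numpy, Rayleigh fading (i.e., $m=1$) and a fixed seed, and it implements the phase quantizer of Section \ref{Subsection: Receiver} together with the ML rule of Theorem \ref{Theorem: ML Detector}:

```python
import numpy as np

def simulate_sep(M: int, n: int, snr: float, num: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo average SEP for M-PSK with an n-bit phase quantizer and the ML
    rule x_hat = argmin_x dist(sqrt(SNR)*h*x, H_k), under Rayleigh fading (m = 1)."""
    rng = np.random.default_rng(seed)
    const = np.exp(1j * np.pi * ((2 * np.arange(M) + 1) / M - 1))  # M-PSK constellation
    x_idx = rng.integers(0, M, num)
    h = (rng.standard_normal(num) + 1j * rng.standard_normal(num)) / np.sqrt(2)
    w = (rng.standard_normal(num) + 1j * rng.standard_normal(num)) / np.sqrt(2)
    y = np.sqrt(snr) * h * const[x_idx] + w
    k = ((np.angle(y) + np.pi) // (2 * np.pi / 2**n)).astype(int) % 2**n
    u = np.exp(1j * (np.pi * (2 * k + 1) / 2**n - np.pi))          # direction of bisector H_k
    z = np.sqrt(snr) * h[:, None] * const[None, :]                 # rotated candidate points
    zc = z * np.conj(u)[:, None]                                   # rotate so H_k is the positive real axis
    dist = np.where(zc.real >= 0, np.abs(zc.imag), np.abs(zc))     # distance from each candidate to the ray H_k
    return float(np.mean(np.argmin(dist, axis=1) != x_idx))

# One extra bit beyond log2(M) = 2 noticeably lowers the QPSK error rate at SNR = 10:
print(simulate_sep(4, 3, 10.0) < simulate_sep(4, 2, 10.0))  # -> True
```

With these settings, the estimated average $\mathsf{SEP}$ drops noticeably when moving from $n=2$ to $n=3$ bits, in line with the centering property discussed in this subsection.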
This intuition will help to understand the underlying dynamics leading to the ternary behaviour for the decay exponent of $p\paren{\mathsf{SNR}}$ that we establish in the high-$\mathsf{SNR}$ regime in Section \ref{Section: Diversity Order}. For $i\in \sqparen{0:M-1}$, let $x_i = \e{\jmath \pi\paren{\frac{2i+1}{M}-1}}$ be the $i$th signal point in the constellation set $\mathcal{C}$ and $\mathcal{E}_i = \brparen{z \in \C: \Arg{x_i} - \frac{\pi}{M} \leq \Arg{z} < \Arg{x_i} + \frac{\pi}{M}}$. It can be shown (see Appendix \ref{Appendix:p(snr)=2^nP00 Proof}) that the regions defined by $\mathcal{E}_{i,k} \defeq \exp\paren{\jmath\paren{k-2^{n-1}}\frac{2\pi}{2^n}} \mathcal{E}_i$ for $i \in \sqparen{0:M-1}$ and $k \in \sqparen{0:2^n-1}$ contain all $\mathcal{H}_k$'s to which $\sqrt{\mathsf{SNR}}hx_i$ is the closest for $h \in \mathcal{D}_k$, where \begin{eqnarray*} \mathcal{D}_0 = \brparen{z \in \C: \pi - \frac{\pi}{2^n} \leq \Arg{z} < \pi} \bigcup \brparen{z \in \C: -\pi \leq \Arg{z} < \frac{\pi}{2^n} - \pi} \end{eqnarray*} and \begin{eqnarray*} \mathcal{D}_k = \brparen{z \in \C: \paren{2k -1}\frac{\pi}{2^n} \leq \Arg{z} +\pi < \paren{2k+1}\frac{\pi}{2^n}}. \end{eqnarray*} This means that all the received signal points in $\mathcal{E}_{i,k}$ will be detected as $x_i$, and hence $\mathcal{E}_{i,k}$ can be considered as the {\em region of attraction} of $x_i$. This also means that if the received signal lands in $\mathcal{E}_{i,k}$ when $x_i$ is transmitted, then there will not be any detection errors. Let us consider an example for QPSK modulation with $2$-bit and $3$-bit quantization. Without loss of generality, we will assume that $x_3 = \e{\jmath \frac{\pi}{M}}$ is the transmitted signal. Our analysis will be for the two cases $\lambda = \frac{\pi}{18}$ and $\lambda = \frac{4\pi}{18}$, where $\lambda = \Arg{h}$. Table \ref{Table: Properties} summarizes these two cases, and Fig. \ref{SEP calculation QPSK} illustrates them. 
In this figure, we show both the original signal points (indicated by `$\diamond$') and the rotated ones (indicated by `$\bullet$') after multiplying with $\sqrt{\mathsf{SNR}}$ and $h$. \begin{table} \begin{center} \begin{tabular}{ |c|c|c|c| } \hline & $\lambda = \frac{\pi}{18}$ & $\lambda = \frac{4\pi}{18}$ \\ \hline \multirow{3}{4em}{$n=2$} & $h \in \mathcal{D}_2$ & $h \in \mathcal{D}_4$ \\ & $i=3$, $k=2$ & $i=3$, $k=4$ \\ & $\mathcal{E}_{i,k} = \mathcal{E}_3$ & $\mathcal{E}_{i,k} = \mathcal{E}_3$ \\ \hline \multirow{3}{4em}{$n=3$} & $h \in \mathcal{D}_2$ & $h \in \mathcal{D}_4$ \\ & $i=3$, $k=2$ & $i=3$, $k=5$ \\ & $\mathcal{E}_{i,k} = \mathcal{E}_3$ & $\mathcal{E}_{i,k} = \e{\jmath\frac{\pi}{4}}\mathcal{E}_3$ \\ \hline \end{tabular} \end{center} \caption{Centering property for QPSK modulation with $2$-bit and $3$-bit quantization. $\mathcal{E}_{i,k}$ is the region of attraction of the symbol $x_i$ when the quantizer output $Q\paren{Y}=k$.} \label{Table: Properties} \end{table} \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/Example_Case_New.eps} \caption{An illustration of the centering property for QPSK modulation with $2$-bit and $3$-bit quantization. Original signal points are indicated by `$\diamond$', whereas the rotated ones after multiplication with $\sqrt{\mathsf{SNR}}$ and $h$ are indicated by `$\bullet$'. Quantization region boundaries and the corresponding bisectors are indicated in solid black lines and green dash lines, respectively. The shaded area represents the region of attraction of the transmitted symbol $x_3$.} \label{SEP calculation QPSK} \end{figure} For both $2$-bit and $3$-bit quantization, we observe that $h \in \mathcal{D}_2$ and $h \in \mathcal{D}_4$ for $\lambda = \frac{\pi}{18}$ and $\lambda = \frac{4\pi}{18}$, respectively. Therefore, for $2$-bit quantization, the region of attraction for $x_3$ will be $\mathcal{E}_3$ for both cases. 
Here, we can see that the rotated constellation point $\sqrt{\mathsf{SNR}}hx_3$ is very close to the decision boundary when $\lambda = \frac{4\pi}{18}$. Hence, there is a high probability that the received signal $\sqrt{\mathsf{SNR}}hx_3 + w$ lands in the adjacent quantization region for $\lambda = \frac{4\pi}{18}$. In this instance, we will have a detection error. However, with the addition of one bit to the quantizer (i.e., with $3$-bit quantization), the region of attraction of $x_3$ will be $\e{\jmath\frac{\pi}{4}}\mathcal{E}_3$, and hence the ML detector can correctly decode the transmitted signal even if the received one lands in the adjacent quantization region. This is illustrated in Fig. \ref{SEP calculation QPSK}(d). Therefore, the addition of one extra bit to the quantizer steers the received signal away from the error-prone decision boundaries to improve $p\paren{\mathsf{SNR}}$. Similarly, when the number of bits in the quantizer continues to increase, the quantization regions will become thinner, and hence the regions of attraction will be better centered around the received signal points. This is the fundamental phenomenon that explains why the average $\mathsf{SEP}$ improves with a larger number of quantization bits. \section{The Decay Exponent for the Average Symbol Error Probability} \label{Section: Diversity Order} In this section, we will analyze the communication robustness that can be achieved with low-resolution ADCs by focusing on the decay exponent for $p\paren{\mathsf{SNR}}$, which is given by\footnote{We will show that the limit in \eqref{Eqn: DVO Definition} exists, and hence there is no ambiguity in the definition of $\mathsf{DVO}$.} \begin{eqnarray}\label{Eqn: DVO Definition} \mathsf{DVO} = - \lim_{\mathsf{SNR} \ra \infty} \frac{\log{p\paren{\mathsf{SNR}}}}{\log{\mathsf{SNR}}}. 
\end{eqnarray} Following the convention in the field, we will call $\mathsf{DVO}$ the {\em diversity order}, although there is only a single diversity branch in our system. It should be noted that the Nakagami-$m$ amplitude distribution can be obtained as the envelope distribution of $m$ independent Rayleigh faded signals for integer values of $m$ \cite{Nakagami60}. Hence, visualizing a Nakagami-$m$ wireless channel as a pre-detection analog square-law diversity combiner will put the results of this section into context. We devote the rest of this section to the formal derivation of the decay exponent. We first start with a definition that will simplify the notation below. \begin{definition} \label{Def: Exponential Equality} We say a function $f$ is {\em exponentially equal} to $\mathsf{SNR}^d$ if $\lim_{\mathsf{SNR} \ra \infty} \frac{\log f\paren{\mathsf{SNR}}}{\log \mathsf{SNR}} = d$ for some $d \in \R$. We write $f\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^d$ to indicate exponential equality whenever this limit exists. Similarly, we also write $f\paren{\mathsf{SNR}} \stackrel{\rm e}{\leq} \mathsf{SNR}^d$ and $f\paren{\mathsf{SNR}} \stackrel{\rm e}{\geq} \mathsf{SNR}^d$ if $\lim_{\mathsf{SNR} \ra \infty} \frac{\log f\paren{\mathsf{SNR}}}{\log \mathsf{SNR}} \leq d$ and $\lim_{\mathsf{SNR} \ra \infty} \frac{\log f\paren{\mathsf{SNR}}}{\log \mathsf{SNR}} \geq d$, respectively. \end{definition} The following lemma establishes two important properties for exponential equality. \begin{lemma} \label{Lemma: sum of pi(SNR) limit} Let $f\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^d$ and $f_i\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{d_i}$ for $i \in \sqparen{1:N}$. Then, \begin{itemize} \item[(i)] For any $\alpha > 0$, $\alpha f\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{d}$ (i.e., invariance with scaling property). 
\item[(ii)] $\sum_{i=1}^N f_i\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{d_{\max}}$, where $d_{\max} = \max_{i \in \sqparen{1:N}} d_i$ (i.e., summation property). \end{itemize} \end{lemma} \begin{IEEEproof} See Appendix \ref{Appendix: Proof of Lemma: sum of pi(SNR) limit}. \end{IEEEproof} The next two lemmas establish the decay rates for $p_1\paren{\mathsf{SNR}}$ and $p_2\paren{\mathsf{SNR}}$ in Theorem \ref{Theorem: General SEP} in terms of exponential equalities. \begin{lemma} \label{Lemma: p1(SNR) limit} For $M \geq 4$, $p_1\paren{\mathsf{SNR}}$ is exponentially equal to \begin{equation} p_1\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \left \{ \begin{array}{ll} \mathsf{SNR}^{-\frac{1}{2}} & \mbox{ if } M=4 \mbox{ and } n=2, \\ \mathsf{SNR}^{-m} & \mbox{ if } M=4 \mbox{ and } n >2, \\ \mathsf{SNR}^{-m} & \mbox{ if } M > 4 \mbox{ and } n \geq \log_2{M}. \end{array} \right . \end{equation} \end{lemma} \begin{IEEEproof} See Appendix \ref{Appendix:Proof of Lemma: p1(SNR) limit}. \end{IEEEproof} \begin{lemma} \label{Lemma: p2(SNR) limit} For $M \geq 4$, $p_2\paren{\mathsf{SNR}}$ is exponentially equal to \begin{equation} p_2\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \left \{ \begin{array}{ll} \mathsf{SNR}^{-\frac{1}{2}} & \mbox{ if } n = \log_2{M}, \\ \mathsf{SNR}^{-m} & \mbox{ if } n > \log_2{M}. \end{array} \right . \end{equation} \end{lemma} \begin{IEEEproof} See Appendix \ref{Appendix:Proof of Lemma: p2(SNR) limit}. \end{IEEEproof} The following lemma establishes lower and upper bounds on $\mathsf{SEP}$ in \eqref{Eqn:General SEP}. We note that the bounds in Lemma \ref{Lemma: SEP bounds} hold for all circularly-symmetric fading processes, including Nakagami-$m$ magnitude pdf as a special case. \begin{lemma} \label{Lemma: SEP bounds} For $M \geq 4$ and $n\geq \log_2{M}$, let $L\paren{\mathsf{SNR}} = p_1\paren{\mathsf{SNR}}+ \frac{1}{2} p_2\paren{\mathsf{SNR}}$ and $U\paren{\mathsf{SNR}} = p_1\paren{\mathsf{SNR}}+2p_2\paren{\mathsf{SNR}}$. 
Then, \begin{align} L\paren{\mathsf{SNR}} \leq p\paren{\mathsf{SNR}} \leq U\paren{\mathsf{SNR}}. \label{Eqn: Both bounds} \end{align} \end{lemma} \begin{IEEEproof} See Appendix \ref{Appendix: Proof of Lemma: SEP bounds}. \end{IEEEproof} \begin{theorem}\label{Theorem 2} The $\mathsf{DVO}$ of a low-resolution ADC based receiver architecture with $M$-PSK modulation and Nakagami-$m$ fading is given by \begin{align}\label{diversity_order} \mathsf{DVO} = \left\{ \begin{array}{ll} \frac{1}{2} & n = \log_2 M, \\ m & n \geq \log_2 M + 1. \end{array} \right. \end{align} \end{theorem} \begin{IEEEproof} The proof for $M \geq 4$ directly follows from Lemmas \ref{Lemma: sum of pi(SNR) limit}, \ref{Lemma: p1(SNR) limit}, \ref{Lemma: p2(SNR) limit} and \ref{Lemma: SEP bounds}. For BPSK modulation (i.e., $M=2$) and $n=1$, we have \begin{align} p\paren{\mathsf{SNR}} &= \frac{2^{n-1}m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\pi} \paren{\frac{\mathsf{SNR}}{\sin^2\beta}\sin^2\theta + m }^{-m} d\theta d\beta \nonumber \\ &= \frac{2^{n-1}m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}} \paren{\frac{\mathsf{SNR}}{\sin^2\beta}\sin^2\theta + m }^{-m} d\theta d\beta \nonumber \\ &\hspace{4cm}+ \frac{2^{n-1}m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{2}}^{\pi} \paren{\frac{\mathsf{SNR}}{\sin^2\beta}\sin^2\theta + m }^{-m} d\theta d\beta \label{Eqn:p(SNR)_int2} \end{align} By using the change of variables $\hat{\theta}=\theta-\frac{\pi}{2}$ in the second integral term of \eqref{Eqn:p(SNR)_int2}, we have \begin{align} p\paren{\mathsf{SNR}} &= \frac{2^{n-1}m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}} \paren{\frac{\mathsf{SNR}}{\sin^2\beta}\sin^2\theta + m }^{-m} d\theta d\beta \nonumber \\ &\hspace{4cm}+ \frac{2^{n-1}m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}} \paren{\frac{\mathsf{SNR}}{\sin^2\beta}\cos^2\hat{\theta} + m }^{-m} d\hat{\theta} d\beta \label{Eqn: BPSK 1 bit} \end{align} This expression is equivalent to 
$p_1\paren{\mathsf{SNR}} + p_2\paren{\mathsf{SNR}}$ for $M=4$ and $n=2$. Hence, by using Lemma \ref{Lemma: sum of pi(SNR) limit}, we can conclude that \begin{align} \lim\limits_{\mathsf{SNR} \ra \infty} - \frac{\log\paren{p\paren{\mathsf{SNR}}}}{\log\paren{\mathsf{SNR}}}=\frac{1}{2} \end{align} for BPSK modulation with $1$-bit quantization and $m\geq \frac{1}{2}$. For BPSK modulation with $n>\log_2\paren{M}$, we have \begin{align} p\paren{\mathsf{SNR}} &= \frac{2^{n-1}m^m}{\pi^{2}}\paren{\mathsf{SNR}}^{-m}\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \paren{\frac{\sin^2\theta}{\sin^2\beta} + \frac{m}{\mathsf{SNR}} }^{-m} d\theta d\beta \label{Eqn: BPSK n>=2} \end{align} Therefore \begin{align} \log\paren{p\paren{\mathsf{SNR}}} &= c - m\log\paren{\mathsf{SNR}} + \log\paren{\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{M}-\frac{\pi}{2^n}}^{\frac{\pi}{M}+\frac{\pi}{2^n}} \paren{\frac{\sin^2\theta}{\sin^2\beta} + \frac{m}{\mathsf{SNR}}}^{-m} d\theta d\beta}, \nonumber \end{align} where $c = \log\paren{\frac{2^{n-1} m^m}{\pi^2}}$. Define the function $g_{\mathsf{SNR}}\paren{\theta, \beta} \triangleq \paren{\frac{\sin^2\theta}{\sin^2\beta} + \frac{m}{\mathsf{SNR}}}^{-m}$, indexed by $\mathsf{SNR}$. Since it is positive and increases to the limiting function $g_{\infty}\paren{\theta, \beta}= \paren{\frac{\sin^2\theta}{\sin^2\beta}}^{-m}$ as $\mathsf{SNR}$ increases, we can use the monotone convergence theorem \cite{book_7} to write \begin{align} \lim_{\mathsf{SNR} \ra \infty}\log\paren{\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{M}-\frac{\pi}{2^n}}^{\frac{\pi}{M}+\frac{\pi}{2^n}} \paren{\frac{\sin^2\theta}{\sin^2\beta} + \frac{m}{\mathsf{SNR}}}^{-m} d\theta d\beta} &= \log\paren{\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{M}-\frac{\pi}{2^n}}^{\frac{\pi}{M}+\frac{\pi}{2^n}} \paren{\frac{\sin^2\theta}{\sin^2\beta}}^{-m} d\theta d\beta}. 
\nonumber \end{align} We note that the last integral is finite since $g_{\infty}\paren{\theta, \beta}$ is continuous and finite over the range of integration. Therefore, for BPSK modulation with $n>\log_2\paren{M}$, we get \begin{align} \lim\limits_{\mathsf{SNR} \ra \infty} - \frac{\log\paren{p\paren{\mathsf{SNR}}}}{\log\paren{\mathsf{SNR}}} = m. \end{align} \end{IEEEproof} The $\mathsf{DVO}$ analysis above reveals the first-order effects of low-resolution ADC based receivers on the $\mathsf{SEP}$ performance of the system. In particular, we observe that it is enough to use $\log_2 M + 1$ bits for quantizing the received signal to extract full diversity, which is equal to $m$ for Nakagami-$m$ faded wireless channels. Considering the fact that energy consumption increases exponentially with the number of quantization bits \cite{paper_12}, this finding indicates that a significant energy saving is possible by means of low-resolution ADC based receivers without any (first order) loss in communication robustness. We also observe that the $\mathsf{DVO}$ is only equal to $\frac12$ when $n = \log_2 M$. Together with the universal bound obtained in Theorem \ref{Theorem: lower bound n<log2M}, the discovered {\em ternary} behaviour has significant implications for how to choose the number of quantization bits in low-resolution ADC based receivers. In particular, for fading environments with $m$ close to $\frac12$, a system designer may decide to trade reliability for energy consumption by using only $\log_2 M$ bits, without suffering too much degradation in the average $\mathsf{SEP}$. On the other hand, for fading environments with large $m$, it is more beneficial to use one extra bit to obtain a major improvement in the average $\mathsf{SEP}$. \section{Performance Analysis for QPSK modulation} \label{Sec: Effect} In this section, we conduct a performance analysis for QPSK modulation with low-resolution ADCs by using our results from the previous sections.
We first present a simplified version of the average $\mathsf{SEP}$ expression in \eqref{Eqn:General SEP} for QPSK modulation, and then we analyze the effect of low-resolution quantization under Rayleigh fading. \subsection{Symbol Error Probability for QPSK modulation} {\em Nakagami-$m$ Fading:} In the special case of QPSK modulation (i.e., $M=4$), the average $\mathsf{SEP}$ expression in \eqref{Eqn:General SEP} can be further simplified to produce \begin{align}\label{Eqn: SEP for QPSK} p\paren{\mathsf{SNR}} = p_1\paren{\mathsf{SNR}} + p_2\paren{\mathsf{SNR}} - p_3\paren{\mathsf{SNR}}, \end{align} because $\tan\paren{\frac{2\pi}{M}} = \infty$ for $M=4$. By using the hypergeometric function ${}_2F_1[\cdot]$ \cite{book_1}, we can simplify \eqref{Eqn: SEP for QPSK} for $2$-bit quantization (i.e., $M=4$ and $n=2$) as \begin{align} p\paren{\mathsf{SNR}} & = \frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}{}_2F_1 \sqparen{\frac{1}{2},m,1,\frac{-\mathsf{SNR}}{m \sin^2\beta}}\,d\beta \nonumber \\ & \hspace{1cm}- \frac{m^m}{\pi}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}}\paren{\frac{\mathsf{SNR}}{\sin^2\gamma}+m}^{-m}{}_2F_1 \sqparen{\frac{1}{2},m,1,\frac{\mathsf{SNR}\paren{\sin^2 \beta - \sin ^2 \gamma}}{\mathsf{SNR} + m \sin^2 \gamma}}\,d\beta\, d\gamma.
\nonumber \end{align} {\em Rayleigh Fading:} For the special case of Rayleigh fading, which is obtained by setting $m=1$, the expression in \eqref{Eqn: SEP for QPSK} can be re-expressed as \begin{align}\label{Eqn: SEP QPSK Rayleigh} p\paren{\mathsf{SNR}} &= \frac{2^{n}}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\frac{\sin\beta}{\sqrt{\mathsf{SNR} + \sin^{2}\beta}}\arctan\paren{\frac{2\sin\beta\sqrt{\mathsf{SNR} + \sin^{2}\beta}}{\mathsf{SNR} + 2\sin^{2}\beta}\tan\paren{\frac{\pi}{2^{n-1}}}}d\beta \quad \nonumber \\ & \hspace{1cm}-\frac{2^{n-1}}{\pi^{3}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}} \sqrt{\frac{\sin^{2}\beta \sin^{2}\gamma}{\paren{\mathsf{SNR} + \sin^{2}\beta}\paren{\mathsf{SNR} + \sin^{2}\gamma}}} \,\cdot \arctan\paren{\vartheta}\,d\beta \,d\gamma, \end{align} where $\vartheta = \frac{2\sin\beta \sin\gamma \sqrt{(\mathsf{SNR} + \sin^{2}\beta)(\mathsf{SNR} + \sin^{2}\gamma)}}{\mathsf{SNR}(\sin^{2}\beta + \sin^{2}\gamma ) + 2\sin^{2}\beta\sin^{2}\gamma}\tan(\frac{\pi}{2^{n-1}}).\notag$ Furthermore, for $2$-bit quantization with Rayleigh fading (i.e., $M=4$, $n=2$ and $m=1$), we can obtain $p\paren{\mathsf{SNR}}$ in closed form as \begin{align} p\paren{\mathsf{SNR}} & = \frac{2}{\pi}\arctan\paren{\frac{1}{\sqrt{\mathsf{SNR}}}} - \paren{\frac{1}{\pi}\arctan\paren{\frac{1}{\sqrt{\mathsf{SNR}}}}}^{2}. \nonumber \end{align} This closed-form analytical expression is very easy to compute without resorting to any numerical integration.
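For illustration, the closed-form expression above can be evaluated with a few lines of code; the following is a minimal sketch (the function name and the Python setting are ours, not part of the paper):

```python
import math

def sep_qpsk_2bit_rayleigh(snr):
    """Closed-form average SEP for QPSK (M=4) with 2-bit phase quantization
    (n=2) under Rayleigh fading (m=1); snr is in linear scale."""
    a = math.atan(1.0 / math.sqrt(snr)) / math.pi
    return 2.0 * a - a * a
```

As sanity checks, the expression tends to $\frac34$ as $\mathsf{SNR} \to 0$ (random guessing among four symbols) and behaves like $\frac{2}{\pi}\mathsf{SNR}^{-\frac12}$ at high $\mathsf{SNR}$, consistent with the $\mathsf{DVO}$ of $\frac12$ established in Theorem \ref{Theorem 2}.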
\subsection{Analysis of Quantization Penalty for QPSK Modulation} By using a Taylor series expansion for high $\mathsf{SNR}$ values, we can re-express the average $\mathsf{SEP}$ expressions for QPSK modulation under Rayleigh fading given in \eqref{Eqn: SEP QPSK Rayleigh} as \begin{equation}\label{Eqn: Asymptotic SEP QPSK Rayleigh} p_A\paren{\mathsf{SNR},n} = \left\{ \begin{array}{ll} \frac{2}{\pi}\mathsf{SNR}^{-\frac{1}{2}} + \LO{\mathsf{SNR}^{-\frac{1}{2}}} & n = 2 \\ \frac{2^{n-1}\paren{4\pi - 1}}{\pi^{3}}\tan\paren{\frac{\pi}{2^{n-1}}}\mathsf{SNR}^{-1} + \LO{\mathsf{SNR}^{-1}} & n \geq 3. \end{array} \right. \end{equation} While phase quantization with fewer quantization bits is desirable due to the lower processing complexity at the receiver, it degrades the average $\mathsf{SEP}$ performance of the system. In the following, we quantify the increase in the average $\mathsf{SEP}$ as a quantization penalty defined as \begin{align}\label{Eqn: Quantization Penalty 1} \Psi\paren{\mathsf{SNR},n} & = 10\log\paren{\frac{p_A\paren{\mathsf{SNR},n}}{p_A\paren{\mathsf{SNR},\infty}}}, \end{align} where $p_A\paren{\mathsf{SNR},\infty}$ is the average $\mathsf{SEP}$ with an infinite number of quantization bits. Based on \eqref{Eqn: Asymptotic SEP QPSK Rayleigh}, we can derive $p_A\paren{\mathsf{SNR},\infty}$ as \begin{align}\label{Eqn: Asymptotic SEP QPSK Rayleigh infinite bits} p_A\paren{\mathsf{SNR},\infty} & = \paren{\frac{4\pi -1}{\pi^{2}}}\mathsf{SNR}^{-1} + \LO{\mathsf{SNR}^{-1}}, \end{align} where we have used the small-angle approximation $\tan(x) \approx x$ as $x \rightarrow 0$.
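The leading-order terms above are straightforward to check numerically. The sketch below (our own illustrative code, with hypothetical function names) also verifies that the $n$-bit coefficient converges to the infinite-bit one, since $2^{n-1}\tan\paren{\pi/2^{n-1}} \to \pi$ as $n$ grows:

```python
import math

def sep_leading_term(snr, n):
    """Leading-order high-SNR term of the average SEP for QPSK under
    Rayleigh fading with n-bit phase quantization (n >= 2)."""
    if n == 2:
        return (2.0 / math.pi) / math.sqrt(snr)   # decays as SNR^(-1/2)
    coeff = (2 ** (n - 1)) * (4 * math.pi - 1) / math.pi ** 3 \
        * math.tan(math.pi / 2 ** (n - 1))
    return coeff / snr                            # decays as SNR^(-1)

def sep_leading_term_inf(snr):
    """Infinite-bit leading term, using tan(x) ~ x for small x."""
    return (4 * math.pi - 1) / math.pi ** 2 / snr
```

For example, at $\mathsf{SNR} = 18$ dB the ratio of the $2$-bit and infinite-bit terms reproduces a penalty of roughly $6.35$ dB, matching the numerical example discussed later.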
Substituting \eqref{Eqn: Asymptotic SEP QPSK Rayleigh} and \eqref{Eqn: Asymptotic SEP QPSK Rayleigh infinite bits} into \eqref{Eqn: Quantization Penalty 1} and performing some algebraic manipulations, we can derive the quantization penalty in terms of the average $\mathsf{SEP}$ with $n$-bit quantization as \begin{equation}\label{Eqn: Quantization Penalty 1(a)} \Psi\paren{\mathsf{SNR},n} = \left\{ \begin{array}{ll} 10\log \paren{\paren{\frac{2\pi}{4\pi - 1}}\mathsf{SNR}^{\frac{1}{2}}} + \LO{1} & n = 2 \\ 10\log \paren{\frac{2^{n-1}}{\pi}\tan\paren{\frac{\pi}{2^{n-1}}}} + \LO{1} & n \geq 3. \end{array} \right. \end{equation} In Section \ref{Sec: Numerical Examples}, we use $\Psi\paren{\mathsf{SNR},n}$ to quantify the increase in average $\mathsf{SEP}$ as we change from infinite-bit to $n$-bit quantization. We further notice that, in order to achieve the same average $\mathsf{SEP}$ as with $n$-bit quantization, we need to transmit the signal using a higher power if we use only $(n-1)$-bit quantization. In the following, we quantify the increase in the transmit power as another quantization penalty defined by \begin{align}\label{Eqn: Quantization Penalty 2} \Phi\paren{\mathsf{SEP},n} = 10\log \paren{\frac{\mathsf{SNR}_{n-1}}{\mathsf{SNR}_{n}}}, \end{align} where $\mathsf{SNR}_{n}$ and $\mathsf{SNR}_{n-1}$ are the $\mathsf{SNR}$ values required to achieve a certain average $\mathsf{SEP}$ with $n$ and $n-1$ quantization bits, respectively. Substituting \eqref{Eqn: Asymptotic SEP QPSK Rayleigh} into \eqref{Eqn: Quantization Penalty 2} and performing some algebraic manipulations, we can derive the quantization penalty with $n$-bit quantization as \begin{equation}\label{Eqn: Quantization Penalty 2(a)} \Phi\paren{\mathsf{SEP},n} = \left\{\begin{array}{ll} 10\log \paren{\frac{\pi^{2}}{2(4\pi - 1)}\paren{\mathsf{SNR}_{2}}^{\frac{1}{2}}} & n=3\\ 10\log\paren{\frac{1}{2}\tan\paren{\frac{\pi}{2^{n-2}}}\cot\paren{\frac{\pi}{2^{n-1}}}} & n\geq4. \end{array}\right.
\end{equation} In Section \ref{Sec: Numerical Examples}, we use $\Phi\paren{\mathsf{SEP},n}$ to quantify the required transmit power increase as we change from $n$-bit to $(n-1)$-bit quantization. \section{Numerical Results}\label{Sec: Numerical Examples} In this section, we present analytical and simulated $\mathsf{SEP}$ results for $M$-PSK modulation with $n$-bit quantization. Channel fading is unit-power and circularly-symmetric with Nakagami-$m$ distributed magnitude, and the additive noise is complex Gaussian with zero mean and unit variance. \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/nr_figure_1.eps} \caption{Average $\mathsf{SEP}$ curves as a function of $\mathsf{SNR}$ for QPSK modulation. $n=2,3,4$ and $m=1,2$.} \label{example1} \end{figure} Fig. \ref{example1} plots the average $\mathsf{SEP}$ as a function of $\mathsf{SNR}$ for QPSK modulation with $n=2,3,4$-bit quantization under Nakagami-$m$ fading with shape parameter $m=1$ and $2$. The simulated results are generated using Monte Carlo simulation, while the analytical results are generated using our expression in \eqref{Eqn:General SEP}. As the plot illustrates, the analytical results accurately follow the simulated results for all cases. We observe a noteworthy improvement in the average $\mathsf{SEP}$ when $n$ changes from $2$-bit to $3$-bit quantization for QPSK modulation for both $m=1$ and $m=2$. This jump in the average $\mathsf{SEP}$ performance is expected in light of Theorem \ref{Theorem 2}, which states that using one extra bit, on top of $\log_2 M$ bits, improves the $\mathsf{DVO}$ from $\frac12$ to $m$. We also observe that the average $\mathsf{SEP}$ decreases as we increase $n$, but the amount of improvement also shrinks with each additional bit. This can be clearly observed from the zoomed-in section in Fig. \ref{example1}. As expected, $\mathsf{DVO}=m$ for all $n \geq 3$. Furthermore, $\mathsf{DVO} = \frac{1}{2}$ for any $m$ when $n=2$.
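The simulated curves above can be reproduced with a short Monte Carlo routine. The sketch below is our own minimal re-implementation (function names and conventions are ours): it draws unit-power Nakagami-$m$ fading, applies an $n$-bit uniform phase quantizer, and detects with the nearest-bisector rule of Theorem \ref{Theorem: ML Detector}. Since the fading phase is uniform, the particular rotational offsets chosen for the constellation and the quantizer do not affect the average $\mathsf{SEP}$.

```python
import cmath, math, random

def simulate_sep(snr, M=4, n=3, m=1.0, trials=100_000, seed=1):
    """Monte Carlo estimate of the average SEP for M-PSK with an n-bit
    phase quantizer under unit-power Nakagami-m fading and CN(0,1) noise."""
    rng = random.Random(seed)
    sectors = 2 ** n
    errors = 0
    for _ in range(trials):
        i = rng.randrange(M)                              # transmitted symbol
        x = cmath.exp(2j * math.pi * i / M)
        # Unit-power Nakagami-m fading: |H|^2 ~ Gamma(m, 1/m), uniform phase.
        h = math.sqrt(rng.gammavariate(m, 1.0 / m)) * \
            cmath.exp(1j * rng.uniform(-math.pi, math.pi))
        w = complex(rng.gauss(0.0, math.sqrt(0.5)), rng.gauss(0.0, math.sqrt(0.5)))
        y = math.sqrt(snr) * h * x + w
        # n-bit phase quantizer: index of the sector containing Arg(y).
        k = int((cmath.phase(y) % (2 * math.pi)) / (2 * math.pi / sectors)) % sectors
        bisector = (k + 0.5) * 2 * math.pi / sectors
        # ML detection: symbol whose faded phase is closest to the bisector.
        def angular_gap(j):
            d = (cmath.phase(h) + 2 * math.pi * j / M - bisector) % (2 * math.pi)
            return min(d, 2 * math.pi - d)
        errors += min(range(M), key=angular_gap) != i
    return errors / trials
```

For QPSK with $n=2$ and $m=1$, the estimate agrees with the closed-form expression given in the previous section.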
\begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/nr_figure_2.eps} \caption{Average $\mathsf{SEP}$ curves as a function of $\mathsf{SNR}$ for different modulation schemes. $n=\log_2M, \log_2M + 1, \log_2M + 2$ and $m=1$.} \label{example2} \end{figure} Fig. \ref{example2} plots the average $\mathsf{SEP}$ as a function of $\mathsf{SNR}$ for QPSK, $8$-PSK and $16$-PSK modulation schemes while keeping the Nakagami-$m$ shape parameter fixed at $m=1$, which corresponds to the classical Rayleigh fading scenario. We plot the average $\mathsf{SEP}$ for each modulation scheme by using $n=\log_2M$, $\log_2M+1$ and $\log_2M+2$ bits. From the plots, we can clearly observe that QPSK with $2$-bit, $8$-PSK with $3$-bit and $16$-PSK with $4$-bit quantization have a $\mathsf{DVO}$ of $\frac{1}{2}$. Further, we can observe that QPSK with $3$ or more bits, $8$-PSK with $4$ or more bits, and $16$-PSK with $5$ or more bits of quantization have a $\mathsf{DVO}$ of $1$, which is equal to $m$ in this case. To further emphasize this point, the zoomed-in section in Fig. \ref{example2} illustrates the asymptotic average $\mathsf{SEP}$ versus $\mathsf{SNR}$ for QPSK modulation. As stated in Theorem \ref{Theorem 2}, these numerical observations clearly indicate the ternary behaviour in the decay exponent for $p\paren{\mathsf{SNR}}$ depending on whether $n \geq \log_2M+1$, $n = \log_2M$, or $n < \log_2M$. \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/nr_figure_3.eps} \caption{Upper and lower bounds on $p\paren{\mathsf{SNR}}$ as a function of $\mathsf{SNR}$ for QPSK and $8$-PSK modulations. $n=2, 3, 4$ and $m=1$.} \label{SEP_Bounds} \end{figure} In order to illustrate the accuracy of upper and lower bounds on $p\paren{\mathsf{SNR}}$, derived in Lemma \ref{Lemma: SEP bounds}, in Fig.
\ref{SEP_Bounds} we plot the expressions in \eqref{Eqn: Both bounds}, alongside the exact $p\paren{\mathsf{SNR}}$ curve, as a function of $\mathsf{SNR}$ for QPSK and $8$-PSK modulations under Rayleigh fading (i.e., $m=1$). This figure clearly shows that $U\paren{\mathsf{SNR}}$ becomes a very tight upper bound for $8$-PSK in the high $\mathsf{SNR}$ regime. The figure also confirms that the decay exponents of both $L\paren{\mathsf{SNR}}$ and $U\paren{\mathsf{SNR}}$ are the same as that of $p\paren{\mathsf{SNR}}$. Next, in Fig. \ref{SEP_less_bits}, we plot the simulated average $\mathsf{SEP}$ curves as a function of $\mathsf{SNR}$ for $8$-PSK modulation with $2$-bit quantization and $16$-PSK modulation with $2$ and $3$-bit quantization. We consider equi-probable transmitted symbols and a Nakagami-$m$ fading channel model with $m=0.5,1$ and $2$. The simulated results are again generated by using Monte Carlo simulations. We can clearly observe an error floor for high $\mathsf{SNR}$ values when $n < \log_2M$, as established by Theorem \ref{Theorem: lower bound n<log2M}. In particular, the average $\mathsf{SEP}$ for $8$-PSK has a lower bound of $0.5$ with $2$-bit quantization. Similarly, the average $\mathsf{SEP}$ for $16$-PSK has a lower bound of $0.75$ with $2$-bit quantization and a lower bound of $0.5$ with $3$-bit quantization. It should be noted that the error floor given in Theorem \ref{Theorem: lower bound n<log2M} is more conservative than those observed in Fig. \ref{SEP_less_bits}. This is because it is a universal lower bound that holds for all modulation schemes, quantizer types and fading environments, not only for the specific ones used to plot the average $\mathsf{SEP}$ curves in Fig. \ref{SEP_less_bits}. \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/nr_figure_4.eps} \caption{Average $\mathsf{SEP}$ as a function of $\mathsf{SNR}$ for $8$-PSK and $16$-PSK modulations.
$n=2<\log_2M$ and $m=0.5,1$ and $2$.} \label{SEP_less_bits} \end{figure} Finally, Fig. \ref{quantization_penalty} illustrates the quantization penalty and plots the asymptotic average $\mathsf{SEP}$ curves as a function of $\mathsf{SNR}$ for QPSK modulation with $n = $ 2, 3, 4 and $\infty$ under Rayleigh fading. The asymptotic plots are generated by using the expressions in \eqref{Eqn: Asymptotic SEP QPSK Rayleigh}. We observe that a $\mathsf{DVO}$ of $\frac12$ is achieved with $2$-bit quantization, and the full $\mathsf{DVO}$ of one is achieved with $n>2$. When the $\mathsf{SNR}$ is fixed at $18$ dB, we observe a quantization penalty of $\Psi\paren{18\mbox{ dB},2} \approx 6.35$ dB as we change from infinite-bit to $2$-bit quantization, i.e., we get more than a $4$-fold increase in the average $\mathsf{SEP}$ by using $2$-bit instead of infinite-bit quantization. When the average $\mathsf{SEP}$ is fixed at $0.015$, we observe a quantization penalty of $\Phi\paren{0.015,4} \approx 0.8$ dB as we change from $4$-bit to $3$-bit quantization, i.e., to achieve an average $\mathsf{SEP}$ of $0.015$ with $3$-bit quantization, we need $0.8$ dB more transmit power than what is required with $4$-bit quantization. \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/nr_figure_5.eps} \caption{Quantization penalty for QPSK modulation and the asymptotic average $\mathsf{SEP}$ curves. $n=2,3,4,\infty$ and $m=1$.} \label{quantization_penalty} \end{figure} \section{Conclusions and Future Generalizations} \label{Section: Conclusions} In this paper, we performed a theoretical analysis of a low-resolution ADC based communication system and obtained fundamental performance limits, the optimum ML detector and a general analytical expression for the average $\mathsf{SEP}$ for $M$-PSK modulation with $n$-bit quantization. These results were further investigated for the Nakagami-$m$ fading model in detail.
We conducted an asymptotic analysis to show that the decay exponent for the average $\mathsf{SEP}$ is the same, and equal to $m$, for both infinite-bit and $n$-bit quantizers when $n \geq \log_2M + 1$. We also performed an extensive numerical study to illustrate the accuracy of the derived analytical expressions. In most parts of the paper, we have focused on phase modulated communications. Phase modulation has an important and practical layering feature that enables the separation of quantizer and detector design in low-resolution ADC communications. For a given number of bits, the quantizer needs to be designed only once, and can be kept constant for all channel realizations. The detector can be implemented digitally as a table look-up procedure using channel knowledge and the quantizer output. On the other hand, this feature is lost in joint phase and amplitude modulation schemes such as QAM. The quantizer needs to be dynamically updated for each channel realization in low-resolution ADC based QAM systems. This is because the fading channel amplitude may vary over a wide range, but the phase always varies over $\parenro{-\pi, \pi}$. However, phase modulation is historically known to be optimum only up to modulation order $16$ under peak power limitations \cite{Lucky62}. Hence, it is a notable future research direction to extend the results of this paper to higher order phase and amplitude modulations by taking practical design considerations into account. A major result of this paper is the discovery of a ternary $\mathsf{SEP}$ behaviour, indicating the sufficiency of $\log_2M+1$ bits for achieving asymptotically optimum $M$-ary communication reliability. Hence, without modifying the conventional RF chain, we can use one extra bit and still achieve the asymptotically optimum communication performance.
Another important future research direction is to compare and contrast the backward-compatible receiver design approach of using one extra bit proposed in this paper with other approaches that can potentially modify the conventional RF chain and manipulate the received signals in the waveform domain by introducing extra analog components. This study needs to be done in detail by considering the accuracy and agility of analog domain operations, the energy consumption of analog and digital circuit components, different modulation schemes and the average $\mathsf{SEP}$ performance curves resulting from different low-resolution ADC based receiver architectures. Similarly, utilizing the results of this paper, a further detailed study on the receiver architecture design is needed to determine the type of diversity combiner and where to place it (before or after the quantizer or detector) when multiple diversity branches are available for data reception. \appendices \section{Proof of Theorem \ref{Theorem: lower bound n<log2M}} \label{Appendix:Theorem: lower bound n<log2M} Let us consider a class of hypothetical genie-aided detectors $g:\C^2 \times \sqparen{0:2^n-1} \to \sqparen{0:M-1}$ that have knowledge of the channel noise $W \in \C$, the fading coefficient $H \in \C$ and the quantizer output $Q\paren{Y} \in \sqparen{0:2^n-1}$. We also let $\mathcal{S}_{w, h, k} = \brparen{x \in \mathcal{C}: \sqrt{\mathsf{SNR}}hx + w \in \mathcal{R}_k}$ be the set of constellation points resulting in $Q(Y) = k$ for particular realizations of $H=h$ and $W=w$. We first observe that since $n<\log_2M$, there exists at least one quantization region $\mathcal{R}_{\tilde{k}}$ (depending on $w$ and $h$) such that $\mathcal{S}_{w, h, \tilde{k}}$ contains at least $\frac{M}{2^n}$ signal points. We note that $\frac{M}{2^n}$ is always an integer greater than or equal to $2$ since $M$ is assumed to be an integer power of 2.
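The counting step above is a pigeonhole argument over the quantizer sectors and can be illustrated with a brute-force check. The following is our own illustrative sketch (names and the arbitrary rotation are ours); the rotation stands in for the phase offset induced by a particular channel realization and does not change the conclusion:

```python
import math
from collections import Counter

def fullest_sector_count(M, n, rotation=0.37):
    """Distribute the M PSK signal points (with an arbitrary phase
    rotation) over the 2^n uniform phase sectors and return the size of
    the fullest sector; by pigeonhole it is always >= M / 2^n."""
    sectors = 2 ** n
    counts = Counter()
    for i in range(M):
        theta = (2 * math.pi * i / M + rotation) % (2 * math.pi)
        counts[int(theta / (2 * math.pi / sectors)) % sectors] += 1
    return max(counts.values())
```

For instance, with $M=16$ and $n=2$ some sector always receives at least four signal points, which the detector can no longer tell apart.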
Then, the conditional $\mathsf{SEP}$ of any detector $g$ given $W=w$ and $H=h$, which we will denote by $p_g\paren{\mathsf{SNR},h,w}$, can be lower-bounded as \begin{align} p_{g}\paren{\mathsf{SNR},h,w} &\geq p_{\min} \sum_{x_i \in \mathcal{S}_{w,h,\tilde{k}}}\PR{g\paren{h,w,\tilde{k}}\neq x_i \, \big| \, W=w, H=h, X=x_i} \nonumber \\ &\geq p_{\min}\paren{\frac{M}{2^n} - 1} \nonumber \\ &= \frac{M-2^n}{2^n}p_{\min}. \label{Eqn: Universal SEP Lower Bound 2} \end{align} By averaging with respect to $w$ and $h$, we have $p_g\paren{\mathsf{SNR}} \geq \frac{M-2^n}{2^n}p_{\min}$, where $p_g\paren{\mathsf{SNR}}$ is the average $\mathsf{SEP}$ corresponding to detector $g$. This concludes the proof since the obtained lower bound does not depend on the choice of modulation scheme, quantizer structure or detector rule; hence, it also holds for detectors that do not utilize the knowledge of $W$, for any choice of modulation scheme and quantizer structure. \section{Proof of Theorem \ref{Theorem: ML Detector}} \label{Appendix: Optimum Detector Proof} To prove Theorem \ref{Theorem: ML Detector}, we will first obtain the following result. \begin{lemma} \label{Lemma: monotonically dec} Let $\mathcal{R}$ be a convex cone given by $\mathcal{R} = \brparen{z \in \C: \alpha_1 \leq \Arg{z} \leq \alpha_2}$ for $\alpha_1,\alpha_2 \in \parenro{-\pi,\pi}$, and $W_1 \sim \mathcal{CN}\paren{\mu_1,1}$ and $W_2 \sim \mathcal{CN}\paren{\mu_2,1}$ be proper complex Gaussian random variables with means satisfying $\abs{\mu_1} = \abs{\mu_2}=r$ for some $r > 0$. Then, $\PR{W_1 \in \mathcal{R}} \geq \PR{W_2 \in \mathcal{R}}$ if $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$, where $z_{\rm mid} = r \e{\jmath \frac{\alpha_1 + \alpha_2}{2}}$. \end{lemma} \begin{IEEEproof} It is enough to show this result only for $\alpha_2 = -\alpha_1 = \alpha$. Otherwise, we can first rotate $W_1$, $W_2$ and $\mathcal{R}$ with $\e{-\jmath \frac{\alpha_1 + \alpha_2}{2}}$ and repeat the same calculations below.
Let $g\paren{\mu_i} = \PR{W_i \in \mathcal{R}}$ for $i=1,2$, and assume $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$. There are multiple cases in which the inequality $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$ holds, which we will analyze one by one below. \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/Lemma_8_Fig_1.eps} \caption{An illustration for the proof of Lemma \ref{Lemma: monotonically dec} when $\mu_1$ and $\mu_2$ lie outside $\mathcal{R}^\circ$. $\abs{\mu_1}=\abs{\mu_2} = r$, $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$ and $\alpha_2 = -\alpha_1 = \alpha$.} \label{Lemma1_1} \end{figure} First, we will consider the case in which both $\mu_1$ and $\mu_2$ lie outside $\mathcal{R}^\circ$, where $\mathcal{R}^\circ$ is the set of interior points of $\mathcal{R}$. This is the case shown in Fig. \ref{Lemma1_1}. To start with, we will assume $0 \leq \Arg{\mu_1} \leq \Arg{\mu_2} < \pi$. Then, for any $y \in \mathcal{R}$, the angle between the line segments $\mathcal{L}_{Oy}$ and $\mathcal{L}_{O\mu_1}$ is smaller than the one between the line segments $\mathcal{L}_{Oy}$ and $\mathcal{L}_{O\mu_2}$.\footnote{The line segment $\mathcal{L}_{z_1z_2}$ between the points $z_1 \in \C$ and $z_2 \in \C$ is defined as $\mathcal{L}_{z_1z_2} = \brparen{(1-t) z_1 + t z_2: t \in \sqparen{0,1}}$.} Hence, applying the cosine rule for the triangle formed by $O, y$ and $\mu_1$, and for the triangle formed by $O, y$ and $\mu_2$, it can be seen that $\abs{y - \mu_1} \leq \abs{y - \mu_2}$ for all $y \in \mathcal{R}$.\footnote{This statement is correct even when both $y$ and $\mu_1$ lie on the boundary of $\mathcal{R}$ and the triangle formed by $O, y$ and $\mu_1$ reduces to a line segment.} Therefore, $g\paren{\mu_1} = \frac{1}{\pi}\int_{\mathcal{R}} \exp \paren{-\abs{y-\mu_1}^2 } dy \geq \frac{1}{\pi}\int_{\mathcal{R}} \exp \paren{-\abs{y-\mu_2}^2 } dy = g\paren{\mu_2}$.
Next, we assume $\Arg{\mu_2} \in \parenro{-\pi, 0}$ and $0 \leq \Arg{\mu_1} \leq \abs{\Arg{\mu_2}} \leq \pi$. Let $\widetilde{W}$ be the auxiliary random variable distributed according to $\mathcal{CN}\paren{\tilde{\mu} ,1}$ with $\tilde{\mu} = r\e{\jmath \abs{\Arg{\mu_2}}}$, i.e., $\tilde{\mu}$ is the reflection of $\mu_2$ around the real line. Symmetry around the real line implies that $g\paren{\mu_2}$ is equal to $ g\paren{\tilde{\mu}}= \PR{\widetilde{W} \in \mathcal{R}}$, which is less than $g\paren{\mu_1}$ due to our arguments above. For $\Arg{\mu_1} \in \parenro{-\pi, 0}$, the same analysis still holds after reflecting $\mu_1$ around the real line, leading to $g\paren{\mu_1} \geq g\paren{\mu_2}$ for all $\mu_1, \mu_2 \notin \mathcal{R}^\circ$ satisfying $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$. \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/Lemma_8_Fig_2.eps} \caption{An illustration for the proof of Lemma \ref{Lemma: monotonically dec} when $\mu_1 \in \mathcal{R}^\circ$ and $\mu_2 \notin \mathcal{R}^\circ$. $\abs{\mu_1}=\abs{\mu_2} = r$, $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$ and $\alpha_2 = -\alpha_1 = \alpha$.} \label{Lemma1_2} \end{figure} Second, we consider the case where $\mu_1 \in \mathcal{R}^\circ$ but $\mu_2 \notin \mathcal{R}^\circ$. This is the case shown in Fig. \ref{Lemma1_2}. It is enough to establish the desired result only for $0 \leq \Arg{\mu_1} \leq \Arg{\mu_2} < \pi$. When $\mu_1$ or $\mu_2$ has a negative phase angle, the same analysis below still holds after reflecting the mean with negative phase around the real line. Let $\widetilde{W}$ be the auxiliary random variable distributed according to $\mathcal{CN}\paren{\tilde{\mu},1}$ with $\tilde{\mu} = r\e{\jmath \alpha}$, i.e., $\tilde{\mu}$ is located at the upper boundary of $\mathcal{R}$. 
Our analysis in the first case shows that $g\paren{\tilde{\mu}} = \PR{\widetilde{W} \in \mathcal{R}} \geq g\paren{\mu_2}$ since both $\tilde{\mu}$ and $\mu_2$ are outside $\mathcal{R}^\circ$ and $ 0 \leq \Arg{\tilde{\mu}} \leq \Arg{\mu_2} < \pi$. We next divide $\mathcal{R}$ into two disjoint regions: $\mathcal{R}_1 = \brparen{z \in \C: \Arg{\mu_1} \leq \Arg{z} \leq \alpha}$ and $\mathcal{R}_2 = \brparen{z \in \C: -\alpha \leq \Arg{z} < \Arg{\mu_1}}$. Then, we have $\PR{\widetilde{W} \in \mathcal{R}_1} = \PR{W_1 \in \mathcal{R}_1}$ due to symmetry around the line bisecting $\mathcal{R}_1$ and $\PR{\widetilde{W} \in \mathcal{R}_2} \leq \PR{W_1 \in \mathcal{R}_2}$ since $\abs{y-\mu_1} \leq \abs{y-\tilde{\mu}}$ for all $y \in \mathcal{R}_2$. Hence, $g\paren{\mu_1} \geq g\paren{\tilde{\mu}} \geq g\paren{\mu_2}$. This establishes the desired results for all $\mu_1 \in \mathcal{R}^\circ, \mu_2 \notin \mathcal{R}^\circ$ satisfying $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$. \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/Lemma_8_Fig_3.eps} \caption{An illustration for the proof of Lemma \ref{Lemma: monotonically dec} when $\mu_1$ and $\mu_2$ lie inside $\mathcal{R}^\circ$. $\abs{\mu_1}=\abs{\mu_2} = r$, $\abs{\mu_1 - z_{\rm mid}} \leq \abs{\mu_2 - z_{\rm mid}}$ and $\alpha_2 = -\alpha_1 = \alpha$.} \label{Lemma1_3} \end{figure} Finally, we will consider the third case where both $\mu_1$ and $\mu_2$ lie inside $\mathcal{R}^\circ$. This is the case shown in Fig. \ref{Lemma1_3}. Similar to the first two cases, it is enough to focus only on $0\leq \Arg{\mu_1} \leq \Arg{\mu_2} \leq \alpha$. 
We divide $\mathcal{R}$ into four disjoint regions given by \begin{eqnarray*} \mathcal{R}_1 &=& \brparen{z \in \C: \Arg{\mu_2} \leq \Arg{z} \leq \alpha} \\ \mathcal{R}_2 &=& \brparen{z \in \C: \Arg{\mu_1} \leq \Arg{z} < \Arg{\mu_2}} \\ \mathcal{R}_3 &=& \brparen{z \in \C: \Arg{\mu_1} + \Arg{\mu_2} - \alpha \leq \Arg{z} < \Arg{\mu_1}} \\ \mathcal{R}_4 &=& \brparen{z \in \C: -\alpha \leq \Arg{z} < \Arg{\mu_1} + \Arg{\mu_2} - \alpha} \end{eqnarray*} Using the symmetry in the problem, we have $\PR{W_1 \in \mathcal{R}_1} = \PR{W_2 \in \mathcal{R}_3}$, $\PR{W_1 \in \mathcal{R}_2} = \PR{W_2 \in \mathcal{R}_2}$ and $\PR{W_1 \in \mathcal{R}_3} = \PR{W_2 \in \mathcal{R}_1}$. On the other hand, $\PR{W_1 \in \mathcal{R}_4} \geq \PR{W_2 \in \mathcal{R}_4}$ since $\abs{y-\mu_1} \leq \abs{y-\mu_2}$ for all $y \in \mathcal{R}_4$. Hence, $g\paren{\mu_1} \geq g\paren{\mu_2}$ when both $\mu_1$ and $\mu_2$ lie inside $\mathcal{R}^\circ$, which completes the proof. \end{IEEEproof} \begin{figure}[!t] \center \includegraphics[width=0.7\textwidth]{figures/Lemma1_Fig2.eps} \caption{An illustration for the proof of Theorem \ref{Theorem: ML Detector} where $\abs{\sqrt{\mathsf{SNR}}hx^\star - z_k} \leq \abs{\sqrt{\mathsf{SNR}}hx - z_k}$. } \label{Lemma1_Fig2} \end{figure} Now, we will utilize Lemma \ref{Lemma: monotonically dec} to prove Theorem \ref{Theorem: ML Detector}. For $Q\paren{Y} = k$, the ML detector given in \eqref{Eqn: ML detector Q} reduces to finding a signal point in $\mathcal{C}$ maximizing the probability $\PR{\sqrt{\mathsf{SNR}}hx + W \in \mathcal{R}_k}$, i.e., \begin{align} \hat{x}\paren{k, h} \in \underset{x \in \mathcal{C}}{\arg\max }\ \PR{\sqrt{\mathsf{SNR}}hx + W \in \mathcal{R}_k}. \nonumber \end{align} By Lemma \ref{Lemma: monotonically dec}, $\hat{x}\paren{k,h}$ is the signal point in $\mathcal{C}$ such that $\sqrt{\mathsf{SNR}}h \hat{x}\paren{k,h}$ is closest to $z_k = \sqrt{\mathsf{SNR}}r\e{\jmath \paren{\frac{2\pi}{2^n}k +\frac{\pi}{2^n}}}$, where $r = \abs{h}$. 
Further, $\hat{x}\paren{k,h}$ is unique with probability one due to the continuity assumption on the fading distribution. Consider now the semi-circle $$\mathcal{S} = \brparen{z \in \C : \abs{z} = \sqrt{\mathsf{SNR}}r, \paren{\frac{2\pi}{2^n}k + \frac{\pi}{2^n}-\frac{\pi}{2}} \leq \Arg{z} \leq \paren{\frac{2\pi}{2^n}k + \frac{\pi}{2^n}+\frac{\pi}{2}}}$$ centered around $z_k$ and having $\mathcal{H}_k$ as its bisector. The semi-circle $\mathcal{S}$ is illustrated in Fig. \ref{Lemma1_Fig2}. Let $x^\star \in \underset{x \in \mathcal{C}}{\arg\min} \ \dist{\sqrt{\mathsf{SNR}}hx, \mathcal{H}_k}$. For the $M$-PSK modulation scheme ($M\geq 2$) with regularly spaced signal points on the unit circle, we always have $\sqrt{\mathsf{SNR}} h x^\star \in \mathcal{S}$ and $\sqrt{\mathsf{SNR}} h \hat{x}(k,h) \in \mathcal{S}$. Now take another signal point $x \in \mathcal{C}$ different from $x^\star$ and satisfying $\sqrt{\mathsf{SNR}} h x \in \mathcal{S}$. Consider the triangle formed by $0, z_k$ and $\sqrt{\mathsf{SNR}} h x^\star$, and the one formed by $0, z_k$ and $\sqrt{\mathsf{SNR}} h x$. We first observe that the area of the first triangle is smaller than that of the second one: both triangles share the line segment $\mathcal{L}_{Oz_k}$ as their common base, but the height $\dist{\sqrt{\mathsf{SNR}}hx^\star, \mathcal{H}_k}$ of the first triangle with respect to this base is smaller than the height $\dist{\sqrt{\mathsf{SNR}}hx, \mathcal{H}_k}$ of the second. This is also illustrated in Fig. \ref{Lemma1_Fig2}. This observation, in turn, implies $\abs{\sqrt{\mathsf{SNR}}hx^\star - z_k} \leq \abs{\sqrt{\mathsf{SNR}}hx - z_k}$ because the remaining side lengths of both triangles are equal to $\sqrt{\mathsf{SNR}} r$. Since this holds for any $x \in \mathcal{C}$ satisfying $\sqrt{\mathsf{SNR}} h x \in \mathcal{S}$, we conclude that $x^\star$ is unique and equal to $x^\star = \hat{x}(k, h)$. 
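The geometric argument above can be illustrated numerically. The sketch below brute-forces both selection criteria for an 8-PSK example and checks that they pick the same signal point; all parameter values ($M$, $n$, $k$, $h$, $\mathsf{SNR}$) are hypothetical and chosen only for illustration.

```python
import cmath
import math

# Illustrative parameters (not from the paper): 8-PSK, 3-bit phase quantizer.
M, n, k = 8, 3, 2
snr = 4.0
h = 0.9 * cmath.exp(0.2j)
r = abs(h)

# Constellation x_i = exp(j*pi*((2i+1)/M - 1)), as defined in the appendix.
C = [cmath.exp(1j * math.pi * ((2 * i + 1) / M - 1)) for i in range(M)]
phi_k = (2 * math.pi / 2**n) * k + math.pi / 2**n   # angle of the bisector H_k
z_k = math.sqrt(snr) * r * cmath.exp(1j * phi_k)

pts = [math.sqrt(snr) * h * x for x in C]
# Criterion 1: signal point whose faded image is closest to z_k.
x_hat = min(range(M), key=lambda i: abs(pts[i] - z_k))
# Criterion 2: among the faded points inside the semi-circle S (positive
# projection onto the bisector direction), the one closest to the line H_k.
x_star = min(
    (i for i in range(M) if (pts[i] * cmath.exp(-1j * phi_k)).real >= 0),
    key=lambda i: abs((pts[i] * cmath.exp(-1j * phi_k)).imag),
)
assert x_hat == x_star   # the two criteria agree, as the proof argues
```

The restriction to the semi-circle $\mathcal{S}$ in the second criterion mirrors the proof, which only compares signal points whose faded images lie in $\mathcal{S}$.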
\section{Proof of Lemma \ref{Lemma: p(snr) = 2^n P_00}} \label{Appendix:p(snr)=2^nP00 Proof} The proof of Lemma \ref{Lemma: p(snr) = 2^n P_00} is based on an application of the law of total probability \cite{Tsitsiklis08}. To this end, we consider a partition $\brparen{\mathcal{D}_k}_{k=0}^{2^n-1}$ of $\C$, where each element of this partition is given by $\mathcal{D}_k = \brparen{z \in \C: \paren{2k -1}\frac{\pi}{2^n} \leq \Arg{z} +\pi < \paren{2k+1}\frac{\pi}{2^n}}$ for $k \in \sqparen{1:2^n-1}$, and $\mathcal{D}_0 = \brparen{z \in \C: \pi - \frac{\pi}{2^n} \leq \Arg{z} < \pi} \bigcup \brparen{z \in \C: -\pi \leq \Arg{z} < \frac{\pi}{2^n} - \pi}$. Let $x_i = \e{\jmath \pi\paren{\frac{2i+1}{M} - 1}}$ be the $i$th signal point in the constellation set $\mathcal{C}$ for $i \in \sqparen{0:M-1}$. Then, we can express $p\paren{\mathsf{SNR}}$ according to \begin{align} p\paren{\mathsf{SNR}} &= \frac{1}{M} \sum_{i=0}^{M-1}\sum_{k=0}^{2^n-1} \int\limits_{\mathcal{D}_k}\PR{x_i \neq \hat{x}\paren{Q\paren{Y},h}|H=h, X=x_i} f_H\paren{h} dh. \label{Eqn: p(SNR) 2} \end{align} We will show that all the terms in \eqref{Eqn: p(SNR) 2} are equal to each other. Next, we define $\mathcal{E}_i = \brparen{z \in \C: \Arg{x_i} - \frac{\pi}{M} \leq \Arg{z} < \Arg{x_i} + \frac{\pi}{M}}$ for $i \in \sqparen{0: M-1}$. Note that $\mathcal{E}_i$ contains all $\mathcal{H}_k$'s (i.e., bisectors of quantization regions) to which $x_i$ is the closest signal point since $x_i$'s are uniformly spaced on the unit circle in $\C$. Furthermore, this statement continues to be true for $\sqrt{\mathsf{SNR}}hx_i$ as long as $\Arg{h} \in \parenro{-\frac{\pi}{2^n}, \frac{\pi}{2^n}}$ since the angular spacing between $\mathcal{H}_k$'s is uniform and equal to $\frac{2\pi}{2^{n}}$. Notice that $\Arg{h} \in \parenro{-\frac{\pi}{2^n}, \frac{\pi}{2^n}}$ if and only if $h \in \mathcal{D}_{2^{n-1}}$. 
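As a sanity check on the partition just defined, the following sketch (with an illustrative $n = 3$ and a hypothetical helper `region`, not from the paper) verifies numerically that the sets $\mathcal{D}_k$ tile the phase interval $[-\pi, \pi)$ and that $\Arg{h} \in \parenro{-\frac{\pi}{2^n}, \frac{\pi}{2^n}}$ corresponds to $\mathcal{D}_{2^{n-1}}$:

```python
import math

n = 3                              # illustrative quantizer resolution
step = math.pi / 2**n

def region(arg):
    """Return the index k such that a point with Arg = arg lies in D_k."""
    for k in range(1, 2**n):
        if (2 * k - 1) * step <= arg + math.pi < (2 * k + 1) * step:
            return k
    return 0                       # D_0 takes the wrap-around slice near +/- pi

counts = [0] * 2**n
samples = 1000
for j in range(samples):
    counts[region(-math.pi + j * 2 * math.pi / samples)] += 1

# Every sampled phase lands in exactly one region, and the regions have
# equal angular measure 2*pi/2^n.
assert sum(counts) == samples
assert all(c == samples // 2**n for c in counts)
assert region(0.0) == 2**(n - 1)   # Arg h near 0  <=>  h in D_{2^{n-1}}
```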
On the other hand, if $\Arg{h} \in \parenro{\frac{\pi}{2^n}, \frac{3 \pi}{2^n}}$, the region $\e{\jmath \frac{2\pi}{2^{n}}}\mathcal{E}_i = \brparen{\e{\jmath \frac{2\pi}{2^{n}}} z \in \C: \Arg{x_i} - \frac{\pi}{M} \leq \Arg{z} < \Arg{x_i} + \frac{\pi}{M}}$ contains all $\mathcal{H}_k$'s to which $\sqrt{\mathsf{SNR}}hx_i$ is the closest. Notice also that $\Arg{h} \in \parenro{\frac{\pi}{2^n}, \frac{3 \pi}{2^n}}$ if and only if $h \in \mathcal{D}_{2^{n-1}+1}$. Similarly, $\e{-\jmath \frac{2\pi}{2^{n}}}\mathcal{E}_i$ contains all $\mathcal{H}_k$'s to which $\sqrt{\mathsf{SNR}}hx_i$ is closest if $\Arg{h} \in \parenro{-\frac{3\pi}{2^n}, -\frac{\pi}{2^n}}$, and $\Arg{h} \in \parenro{-\frac{3\pi}{2^n}, -\frac{\pi}{2^n}}$ if and only if $h \in \mathcal{D}_{2^{n-1}-1}$. The same idea extends to any $\mathcal{D}_k$, and we define \begin{eqnarray} \mathcal{E}_{i,k} \defeq \exp\paren{\jmath\paren{k-2^{n-1}}\frac{2\pi}{2^n}} \mathcal{E}_i, \hspace{0.15cm} \mbox{ for } i \in \sqparen{0:M-1} \mbox{ and } k \in \sqparen{0:2^n-1}. \label{Eqn: Equal Probability Sets} \end{eqnarray} We will use the sets defined in \eqref{Eqn: Equal Probability Sets} to show that all the terms in \eqref{Eqn: p(SNR) 2} are equal. To complete the proof, we let $p_{i,k} = \int_{\mathcal{D}_k}\PR{x_i \neq \hat{x}\paren{Q\paren{Y},h}|H=h, X=x_i} f_H\paren{h} dh$ for $i \in \sqparen{0:M-1}$ and $k \in \sqparen{0:2^n-1}$. We also define $\theta^{\prime}_i = -\pi\paren{\frac{2i}{M}-1}$, $\theta^{\prime\prime}_k = -\paren{k - 2^{n-1}}\frac{2\pi}{2^n}$, and $\theta_{i,k} = \theta^{\prime}_i + \theta^{\prime\prime}_k$ for $i \in \sqparen{0:M-1}$ and $k \in \sqparen{0:2^n-1}$. We first observe that $\e{\jmath \theta_{i, k}} \mathcal{E}_{i, k} = \mathcal{E}_{\frac{M}{2}}$ since multiplication with $\e{\jmath \theta^{\prime}_i}$ rotates the $i$th signal point to $x_{\frac{M}{2}}$ and multiplication with $\e{\jmath \theta^{\prime\prime}_k}$ removes the effect of partition selection for $h$. 
Secondly, we observe that when $h \in \mathcal{D}_k$, the event $\brparen{x_i \neq \hat{x}\paren{Q\paren{Y},h}}$ is equivalent to $\brparen{Y \notin \mathcal{E}_{i, k}}$ since $\mathcal{E}_{i, k}$ contains all bisectors to which $\sqrt{\mathsf{SNR}}h x_i$ is closest for this range of $h$ values. Hence, the following chain of equalities holds: \begin{eqnarray} p_{i,k} &\stackrel{\rm (a)}{=}& \int_{\mathcal{D}_k}\PR{\sqrt{\mathsf{SNR}}h x_i + W \notin \mathcal{E}_{i,k}}f_H\paren{h} dh \nonumber \\ &=& \int_{\mathcal{D}_k}\PR{ W \notin \mathcal{E}_{i,k} - \sqrt{\mathsf{SNR}}h x_i}f_H\paren{h} dh \nonumber \\ &=& \int_{\mathcal{D}_k} \PR{ \e{\jmath \theta_{i, k}}W \notin \e{\jmath \theta_{i, k}} \mathcal{E}_{i,k} - \e{\jmath \theta_{i, k}}\sqrt{\mathsf{SNR}}h x_i}f_H\paren{h} dh \nonumber \\ &\stackrel{\rm (b)}{=}& \int_{\mathcal{D}_k} \PR{ W \notin \mathcal{E}_{\frac{M}{2}} - \sqrt{\mathsf{SNR}}\e{\jmath \theta^{\prime\prime}_{k}}h x_\frac{M}{2}}f_H\paren{h} dh, \label{Eqn: pik Derivation 1} \end{eqnarray} where (a) follows from the independence of $W$, $H$ and $X$, and (b) follows from the above observations and the circular symmetry property of $W$. Let us now define $z = \e{\jmath \theta^{\prime\prime}_k} h$ in \eqref{Eqn: pik Derivation 1}. 
Since multiplication by a unit-magnitude complex number is a unitary transformation (i.e., a rotation) of the complex plane, we have \begin{eqnarray} p_{i,k} &=& \int_{\mathcal{D}_k} \PR{ W \notin \mathcal{E}_{\frac{M}{2}} - \sqrt{\mathsf{SNR}}\e{\jmath \theta^{\prime\prime}_{k}}h x_\frac{M}{2}}f_H\paren{h} dh \nonumber \\ &=& \int_{\e{\jmath \theta^{\prime\prime}_{k}} \mathcal{D}_k} \PR{ W \notin \mathcal{E}_{\frac{M}{2}} - \sqrt{\mathsf{SNR}}z x_\frac{M}{2}}f_H\paren{\e{-\jmath \theta^{\prime\prime}_{k}} z} dz \nonumber \\ &\stackrel{\rm (a)}{=}& \int_{\mathcal{D}_{2^{n-1}}} \PR{ W \notin \mathcal{E}_{\frac{M}{2}} - \sqrt{\mathsf{SNR}}z x_\frac{M}{2}}f_H\paren{z} dz \nonumber \\ &\stackrel{\rm (b)}{=}& \int_{\mathcal{D}_{2^{n-1}}} \PR{ \sqrt{\mathsf{SNR}}z x_\frac{M}{2} + W \notin \mathcal{E}_{\frac{M}{2}, 2^{n-1}}}f_H\paren{z} dz \nonumber \\ &\stackrel{\rm (c)}{=}& p_{\frac{M}{2}, 2^{n-1}}, \end{eqnarray} where (a), (b) and (c) follow from the circular symmetry of $H$ \cite{Picinbono94, Koivunen12} and the corresponding definitions of $\mathcal{D}_k$, $\mathcal{E}_{i,k}$ and $p_{i,k}$ for $i \in \sqparen{0:M-1}$ and $k \in \sqparen{0:2^n-1}$. This shows $p\paren{\mathsf{SNR}} = 2^n p_{\frac{M}{2}, 2^{n-1}}$. For a circularly-symmetric pdf $f_H\paren{h}$, it is well known that $rf_H\paren{r\cos \lambda, r\sin \lambda} = \frac{1}{2\pi}f_R\paren{r}$ \cite{book_8}. Switching to polar coordinates and using this identity together with $x_\frac{M}{2} = \e{\jmath\frac{\pi}{M}}$ and $\mathcal{E}_{\frac{M}{2}, 2^{n-1}}=\mathcal{E}$, we have \begin{align} p(\mathsf{SNR}) &= \frac{2^{n-1}}{\pi} \int_{-\frac{\pi}{2^n}}^{\frac{\pi}{2^n}}\int_{0}^{\infty}\PR{\sqrt{\mathsf{SNR}}r \e{\jmath\paren{\frac{\pi}{M} +\lambda}} + W \notin \mathcal{E}}f_{R}\paren{r}\,dr \, d\lambda. \nonumber \end{align} By using the change of variables $\theta = \frac{\pi}{M} + \lambda$, we conclude the proof. 
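The polar-coordinate identity used in the last step can be checked numerically for a concrete circularly-symmetric law. The sketch below assumes, purely for illustration, that $H \sim \mathcal{CN}(0,1)$, so that $f_H(h) = \e{-\abs{h}^2}/\pi$ and $R = \abs{H}$ is Rayleigh-distributed:

```python
import math

# Assumed example law: H ~ CN(0,1), hence f_H(h) = exp(-|h|^2)/pi and
# the magnitude pdf is Rayleigh, f_R(r) = 2*r*exp(-r^2).
def f_H(x, y):
    return math.exp(-(x * x + y * y)) / math.pi

def f_R(r):
    return 2.0 * r * math.exp(-r * r)

# Identity: r * f_H(r*cos(lam), r*sin(lam)) = f_R(r) / (2*pi) for all r, lam.
for r in (0.3, 1.0, 2.5):
    for lam in (0.0, 1.1, -2.0):
        lhs = r * f_H(r * math.cos(lam), r * math.sin(lam))
        assert abs(lhs - f_R(r) / (2 * math.pi)) < 1e-12
```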
\section{Proof of Lemma \ref{Lemma: sum of pi(SNR) limit}} \label{Appendix: Proof of Lemma: sum of pi(SNR) limit} The proof of part (i) follows immediately from Definition \ref{Def: Exponential Equality}: \begin{eqnarray} \lim_{\mathsf{SNR} \ra \infty} \frac{\log\paren{\alpha f\paren{\mathsf{SNR}}}}{\log \mathsf{SNR}} = \lim_{\mathsf{SNR} \ra \infty} \frac{\log\alpha}{\log\mathsf{SNR}} + \lim_{\mathsf{SNR} \ra \infty} \frac{\log f\paren{\mathsf{SNR}}}{\log \mathsf{SNR}} = d_i. \nonumber \end{eqnarray} For the proof of part (ii), given any $\epsilon > 0$, let $c>0$ be such that \begin{eqnarray} \mathsf{SNR}^{d_i-\epsilon} \leq f_i\paren{\mathsf{SNR}} \leq \mathsf{SNR}^{d_i + \epsilon} \end{eqnarray} for all $\mathsf{SNR} \geq c$ and $i \in \sqparen{1:N}$. Let $i^\prime = \arg\max_{i \in \sqparen{1:N}} d_i$ and $d_{\max} = d_{i^\prime}$. Then, as $\mathsf{SNR} \ra \infty$, we can write \begin{eqnarray} \log\paren{\sum_{i=1}^N f_i\paren{\mathsf{SNR}}} &\leq& \log\paren{\sum_{i=1}^N \mathsf{SNR}^{d_i + \epsilon}} \nonumber \\ &=& \log \mathsf{SNR}^{d_{\max} +\epsilon} + \log\paren{1 + \sum_{\genfrac{}{}{0pt}{}{i=1}{i \neq i^\prime}}^N\mathsf{SNR}^{d_i - d_{\max}}} \nonumber \\ &=& \paren{d_{\max} + \epsilon} \log\mathsf{SNR} + \LO{1}, \nonumber \end{eqnarray} which implies $\sum_{i=1}^N f_i\paren{\mathsf{SNR}} \stackrel{\rm e}{\leq} \mathsf{SNR}^{d_{\max} + \epsilon}$. Since $\epsilon > 0$ is arbitrary, we conclude that $\sum_{i=1}^N f_i\paren{\mathsf{SNR}} \stackrel{\rm e}{\leq} \mathsf{SNR}^{d_{\max}}$. The other direction $\sum_{i=1}^N f_i\paren{\mathsf{SNR}} \stackrel{\rm e}{\geq} \mathsf{SNR}^{d_{\max}}$ follows from the same arguments, which completes the proof. \section{Proof of Lemma \ref{Lemma: p1(SNR) limit}} \label{Appendix:Proof of Lemma: p1(SNR) limit} We start with the case $M=4$ and $n=2$, and obtain upper and lower bounds on $p_1\paren{\mathsf{SNR}}$ that will lead to the same exponential equality. 
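As an aside, the exponential-order arithmetic of part (ii) of Lemma \ref{Lemma: sum of pi(SNR) limit} can be illustrated numerically; the coefficients and exponents below are arbitrary illustrative values, not quantities from the paper.

```python
import math

# f_i(SNR) = c_i * SNR^{d_i}; the lemma says the sum has exponent max_i d_i.
terms = [(3.0, -0.5), (0.2, -1.0), (7.0, -2.0)]   # illustrative (c_i, d_i)
d_max = max(d for _, d in terms)

def exponent(snr):
    """log of the sum of the f_i, normalized by log SNR."""
    return math.log(sum(c * snr**d for c, d in terms)) / math.log(snr)

# The normalized exponent approaches d_max = -0.5 as SNR grows; the constant
# coefficients only contribute an o(1) term, exactly as in the proof.
assert abs(exponent(1e6) - d_max) < 0.1
assert abs(exponent(1e12) - d_max) < abs(exponent(1e6) - d_max)
```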
For the upper bound, we write $p_1\paren{\mathsf{SNR}}$ as \begin{align} p_1\paren{\mathsf{SNR}} &= \frac{2m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}} \left( \frac{\mathsf{SNR}}{\sin^2\beta}\cos^2\theta + m \right)^{-m} d\theta d\beta \nonumber \\ &\leq \frac{m^m}{\pi}\int_{0}^{\frac{\pi}{2}} \left( \mathsf{SNR}\cos^2\theta + m \right)^{-m} d\theta. \nonumber \end{align} Let $\theta^*\paren{\mathsf{SNR}}$ be such that $\cos\paren{\theta^*\paren{\mathsf{SNR}}}= \mathsf{SNR}^{-\frac12}$. Then, $\theta^*\paren{\mathsf{SNR}} = \arccos\paren{\mathsf{SNR}^{-\frac12}} = \frac{\pi}{2}-\mathsf{SNR}^{-\frac12} - \LO{\mathsf{SNR}^{-\frac12}}$ as $\mathsf{SNR} \to \infty$. Using the fact that the line $1-\frac{2}{\pi}\theta$ is a lower bound for $\cos\theta$ for $\theta \in \sqparen{0, \frac{\pi}{2}}$, we have \begin{align} p_1\paren{\mathsf{SNR}} &\leq \frac{m^m}{\pi}\int_{0}^{\theta^*\paren{\mathsf{SNR}}} \paren{ \mathsf{SNR}\cos^2\theta + m }^{-m} d\theta +\frac{m^m}{\pi}\int_{\theta^*\paren{\mathsf{SNR}}}^{\frac{\pi}{2}} \paren{ \mathsf{SNR}\cos^2\theta + m }^{-m} d\theta \nonumber \\ &\leq \frac{m^m}{\pi}\mathsf{SNR}^{-m}\int_{0}^{\theta^*\paren{\mathsf{SNR}}} \paren{ 1-\frac{2}{\pi}\theta}^{-2m} d\theta +\frac{1}{\pi} \mathsf{SNR}^{-\frac12}\paren{1 + \LO{1}} \nonumber \\ &= - \frac{m^m}{2}\mathsf{SNR}^{-m} \int_1^{1-\frac{2}{\pi}\theta^*\paren{\mathsf{SNR}}} u^{-2m} du +\frac{1}{\pi} \mathsf{SNR}^{-\frac12}\paren{1 + \LO{1}} \nonumber \\ &= \left \{ \begin{array}{ll} \mathsf{SNR}^{-\frac{1}{2}} \BO{\log{\mathsf{SNR}}} & \mbox{ if } m = \frac12 \\ \mathsf{SNR}^{-\frac{1}{2}}\BO{1} & \mbox{ if } m > \frac12 \end{array} \right. \label{Eqn: upper bound for p1} \end{align} for large values of $\mathsf{SNR}$. Equation \eqref{Eqn: upper bound for p1} shows that $p_1\paren{\mathsf{SNR}} \stackrel{\rm e}{\leq} \mathsf{SNR}^{-\frac12}$ for all $m\geq \frac12$ when $M=4$ and $n=2$. 
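For the special case $m = 1$, the inner integral in the bound above admits the closed form $\int_0^{\pi/2}\paren{\mathsf{SNR}\cos^2\theta + 1}^{-1}d\theta = \frac{\pi}{2\sqrt{\mathsf{SNR}+1}}$ (by a standard tangent substitution), so its $\mathsf{SNR}^{-\frac12}$ decay can be confirmed with a crude quadrature sketch:

```python
import math

def bound_integral(snr, n_steps=20000):
    """Midpoint quadrature of int_0^{pi/2} (SNR*cos^2(t) + 1)^(-1) dt (m = 1)."""
    h = (math.pi / 2) / n_steps
    return sum(h / (snr * math.cos((j + 0.5) * h) ** 2 + 1.0)
               for j in range(n_steps))

snr = 1e4
# Closed form pi / (2*sqrt(SNR + 1)) agrees with the quadrature...
assert abs(bound_integral(snr) - math.pi / (2 * math.sqrt(snr + 1))) < 1e-6
# ...and sqrt(SNR) times the integral tends to pi/2, i.e. SNR^{-1/2} decay.
val = math.sqrt(snr) * bound_integral(snr)
assert abs(val - math.pi / 2) < 1e-2
```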
For the other direction, we obtain a lower bound on $p_1\paren{\mathsf{SNR}}$ as below. \begin{align} p_1\paren{\mathsf{SNR}} &= \frac{2m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{\pi}{2}} \paren{\sin\beta}^{2m} \paren{\mathsf{SNR}\cos^2\theta + m \sin^2\beta }^{-m} d\theta d\beta \nonumber \\ &\geq \frac{2m^m}{\pi^{2}}\int_{0}^{\frac{\pi}{2}} \paren{\sin\beta}^{2m} d\beta \int_{0}^{\frac{\pi}{2}} \paren{\mathsf{SNR}\cos^2\theta + m }^{-m} d\theta \nonumber \end{align} We observe that $\int_{0}^{\frac{\pi}{2}} \paren{\sin\beta}^{2m} d\beta = \frac{\sqrt{\pi}\Gamma\paren{m + \frac{1}{2}}}{2\Gamma\paren{m+1}}$ for $m>-\frac{1}{2}$ and let $c = \frac{m^m}{\pi^{1.5}} \frac{\Gamma\paren{m + \frac{1}{2}}}{\Gamma\paren{m+1}}$. We first consider the case $m=\frac12$. Then, as $\mathsf{SNR} \ra \infty$, we have \begin{eqnarray} p_1\paren{\mathsf{SNR}} &\geq& c \mathsf{SNR}^{-\frac12} \int_0^{\frac{\pi}{2}} \paren{\cos^2\theta +1}^{-\frac12} d\theta \nonumber \\ &=& \mathsf{SNR}^{-\frac12} \OO{1}. \label{Eqn: p1 Lower Bound 1} \end{eqnarray} For $m>\frac12$, we define $\theta^*\paren{\mathsf{SNR}}$ as above and lower bound $p_1\paren{\mathsf{SNR}}$ for large values of $\mathsf{SNR}$ as \begin{eqnarray} p_1\paren{\mathsf{SNR}} &\geq& c \mathsf{SNR}^{-m} \int_{\theta^*\paren{\mathsf{SNR}}}^{\frac{\pi}{2}} \paren{\cos^2\theta + \frac{m}{\mathsf{SNR}}}^{-m} d\theta \nonumber \\ &\geq& c \mathsf{SNR}^{-m} \int_{\theta^*\paren{\mathsf{SNR}}}^{\frac{\pi}{2}} \paren{\frac{1}{\mathsf{SNR}} + \frac{m}{\mathsf{SNR}}}^{-m} d\theta \nonumber \\ &=& \mathsf{SNR}^{-\frac12} c (1+m)^{-m} \paren{1 + \LO{1}} \nonumber \\ &=& \mathsf{SNR}^{-\frac12} \OO{1}. \label{Eqn: p1 Lower Bound 2} \end{eqnarray} Using \eqref{Eqn: p1 Lower Bound 1} and \eqref{Eqn: p1 Lower Bound 2}, we conclude that $p_1\paren{\mathsf{SNR}} \stackrel{\rm e}{\geq} \mathsf{SNR}^{-\frac12}$ for all $m\geq \frac12$ when $M=4$ and $n=2$. 
Since $p_1\paren{\mathsf{SNR}}$ also satisfies $p_1\paren{\mathsf{SNR}} \stackrel{\rm e}{\leq} \mathsf{SNR}^{-\frac12}$ in this case, we have $p_1\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{-\frac12}$ for $m\geq \frac12$, $M=4$ and $n=2$. Next, we consider $M=4$ and $n>2$. In this case, we write $p_1\paren{\mathsf{SNR}}$ as \begin{eqnarray*} p_1\paren{\mathsf{SNR}} = \frac{2^{n-1}m^m}{\pi^2}\mathsf{SNR}^{-m} \int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{4} - \frac{\pi}{2^n}}^{\frac{\pi}{4} + \frac{\pi}{2^n}} \paren{ \frac{\cos^2\theta}{\sin^2\beta} + \frac{m}{\mathsf{SNR}} }^{-m} d\theta d\beta. \end{eqnarray*} Let $g_{\mathsf{SNR}}\paren{\theta, \beta} = \paren{\frac{\cos^2\theta}{\sin^2\beta} + \frac{m}{\mathsf{SNR}}}^{-m}$ be a collection of functions indexed by $\mathsf{SNR}$. These functions increase to $g_\infty\paren{\theta, \beta} = \paren{\frac{\cos\theta}{\sin\beta}}^{-2m}$ as $\mathsf{SNR} \ra \infty$. Further, $\int_{0}^{\frac{\pi}{2}}\int_{\frac{\pi}{4} - \frac{\pi}{2^n}}^{\frac{\pi}{4} + \frac{\pi}{2^n}} g_\infty\paren{\theta, \beta} d\theta d\beta < \infty$. Hence, as $\mathsf{SNR} \ra \infty$, we conclude that \begin{eqnarray*} p_1\paren{\mathsf{SNR}} = \mathsf{SNR}^{-m} \TO{1} \end{eqnarray*} by using the monotone convergence theorem \cite{book_7}, which implies $p_1\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{-m}$. The proof for $M>4$ and $n\geq \log_2M$ is similar, and we omit it to avoid repetition. \section{Proof of Lemma \ref{Lemma: p2(SNR) limit}} \label{Appendix:Proof of Lemma: p2(SNR) limit} For $M=4$ and $n=2$, $p_2\paren{\mathsf{SNR}}=p_1\paren{\mathsf{SNR}}$, and hence the proof of $p_2\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{-\frac12}$ in this case directly follows from Lemma \ref{Lemma: p1(SNR) limit}. 
For $M>4$ and $n > \log_2M$, a similar argument using the monotone convergence theorem as in the proof of Lemma \ref{Lemma: p1(SNR) limit} readily shows that $p_2\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{-m}$ for all $m \geq \frac12$. Therefore, we will only focus on the case $M>4$ and $n=\log_2M$ to complete the proof of Lemma \ref{Lemma: p2(SNR) limit}. For $M>4$ and $n=\log_2M$, we will obtain upper and lower bounds on $p_2\paren{\mathsf{SNR}}$ leading to the same exponential equality. For the upper bound, we have \begin{eqnarray} p_2\paren{\mathsf{SNR}} &=& \frac{2^{n-1}m^m}{\pi^2}\int_0^{\frac{\pi}{2}}\int_0^{\frac{2\pi}{M}} \paren{\frac{\mathsf{SNR}}{\sin^2\beta} \sin^2\theta + m}^{-m}d\theta d\beta \nonumber \\ &\leq& \frac{2^{n-1}m^m}{\pi^2}\int_0^{\frac{\pi}{2}}\int_0^{\frac{\pi}{2}} \paren{\frac{\mathsf{SNR}}{\sin^2\beta} \sin^2\theta + m}^{-m}d\theta d\beta \nonumber \\ &\stackrel{\rm e}{=}& \mathsf{SNR}^{-\frac12}, \label{Eqn: p2(SNR) exp upper bound} \end{eqnarray} where the last equality follows from the fact that $p_2\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{-\frac12}$ for $M=4$ and $n=2$ and Lemma \ref{Lemma: sum of pi(SNR) limit}. For the lower bound, we have \begin{eqnarray} p_2\paren{\mathsf{SNR}} &=& \frac{2^{n-1}m^m}{\pi^2}\int_0^{\frac{\pi}{2}}\int_0^{\frac{2\pi}{M}} \paren{\sin\beta}^{2m}\paren{\mathsf{SNR} \sin^2\theta + m\sin^2\beta}^{-m}d\theta d\beta \nonumber \\ &\geq& c \mathsf{SNR}^{-m}\int_0^{\frac{2\pi}{M}} \paren{\sin^2\theta + \frac{m}{\mathsf{SNR}}}^{-m}d\theta, \nonumber \end{eqnarray} where $c=\frac{2^{n-2}m^m}{\pi^{1.5}}\frac{\Gamma\paren{m+\frac12}}{\Gamma\paren{m+1}}$. Let $\theta^*\paren{\mathsf{SNR}}$ be such that $\sin\paren{\theta^*\paren{\mathsf{SNR}}} = \mathsf{SNR}^{-\frac12}$. Then, $\theta^*\paren{\mathsf{SNR}} = \arcsin\paren{\mathsf{SNR}^{- \frac12}} = \mathsf{SNR}^{-\frac12} + \LO{\mathsf{SNR}^{-\frac12}}$ as $\mathsf{SNR} \ra \infty$. 
Hence, as $\mathsf{SNR} \ra \infty$, we have \begin{eqnarray} p_2\paren{\mathsf{SNR}} &\geq& c \mathsf{SNR}^{-m}\int_0^{\theta^*\paren{\mathsf{SNR}}} \paren{\sin^2\theta + \frac{m}{\mathsf{SNR}}}^{-m}d\theta \nonumber \\ &\geq& c \mathsf{SNR}^{-m} \int_0^{\theta^*\paren{\mathsf{SNR}}} \paren{\frac{1}{\mathsf{SNR}} + \frac{m}{\mathsf{SNR}}}^{-m} d\theta \nonumber \\ &=& \mathsf{SNR}^{-\frac12} \OO{1}, \label{Eqn: p2(SNR) exp lower bound} \end{eqnarray} which implies $p_2\paren{\mathsf{SNR}} \stackrel{\rm e}{\geq} \mathsf{SNR}^{-\frac12}$. Since $p_2\paren{\mathsf{SNR}}$ also satisfies $p_2\paren{\mathsf{SNR}} \stackrel{\rm e}{\leq} \mathsf{SNR}^{-\frac{1}{2}}$ in this case, we have $p_2\paren{\mathsf{SNR}} \stackrel{\rm e}{=} \mathsf{SNR}^{-\frac{1}{2}}$ for $m \geq \frac{1}{2}$, $M >4$ and $n=\log_2M$. \section{Proof of Lemma \ref{Lemma: SEP bounds}} \label{Appendix: Proof of Lemma: SEP bounds} We will prove this lemma for general circularly-symmetric fading processes. To this end, let $H = R \e{\jmath \Theta}$ be the circularly-symmetric fading coefficient with the joint phase and magnitude pdf $f_{R, \Theta}\paren{r, \theta} = \frac{1}{2\pi} f_R\paren{r}$ for $\theta \in \parenro{-\pi, \pi}$ and $r \geq 0$. 
In the proof of Theorem \ref{Theorem: General SEP}, we obtained the expression $p\paren{\mathsf{SNR}, h}$ with $h = r\e{\jmath\theta}$ given by \begin{align} p\paren{\mathsf{SNR}, h} \nonumber \\ &\hspace{-1.5cm}= \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\cos\theta} + \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sin\theta} - \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\cos\theta}\mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sin\theta} \hspace{3cm} \nonumber \\ &\hspace{-1.5cm}+ \frac{1}{\sqrt{\pi}} \int_{-\sqrt{\mathsf{SNR}}r\cos\theta}^\infty \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sec\paren{\frac{2\pi}{M}}\sin\paren{\frac{2\pi}{M} - \theta} + \sqrt{2}\tan\paren{\frac{2\pi}{M}}w}\e{-w^2} dw .\label{Eqn: p(SNR, h) Expansion} \end{align} Below, we will always use $r\e{\jmath \theta}$ as the polar coordinate representation of $h$, i.e., $r = \abs{h}$ and $\theta = \Arg{h}$. Integrating $2^n p\paren{\mathsf{SNR}, h}$ with respect to $f_{R, \Theta}\paren{r, \theta}$ for $\theta \in \parenro{\frac{\pi}{M} - \frac{\pi}{2^n}, \frac{\pi}{M} + \frac{\pi}{2^n}}$ and $r \geq 0$, and using the Nakagami-$m$ pdf for $f_{R}(r)$ together with Craig's formula, we obtained the resulting $p\paren{\mathsf{SNR}}$ expression in Theorem \ref{Theorem: General SEP}. Here, we will not assume any specific functional form for $f_{R}(r)$. We start with obtaining the lower bound $L\paren{\mathsf{SNR}}$ on $p\paren{\mathsf{SNR}}$. Let \begin{eqnarray} p_1\paren{\mathsf{SNR}, h} &=& \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\cos\theta} \nonumber \\ p_2\paren{\mathsf{SNR}, h} &=& \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sin\theta} \nonumber \\ p_3\paren{\mathsf{SNR}, h} &=& \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\cos\theta}\mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sin\theta} \nonumber \end{eqnarray} and $p_4\paren{\mathsf{SNR}, h}$ be the last integral term in \eqref{Eqn: p(SNR, h) Expansion}. 
For $i \in \sqparen{1:4}$, $p_i\paren{\mathsf{SNR}}$ is defined to be the integral of $2^n p_i\paren{\mathsf{SNR}, h}$ with respect to $f_{R, \Theta}\paren{r, \theta}$ for $\theta \in \parenro{\frac{\pi}{M} - \frac{\pi}{2^n}, \frac{\pi}{M} + \frac{\pi}{2^n}}$ and $r \geq 0$. For the given integration range, $p_3\paren{\mathsf{SNR}, h} \leq \frac12 p_2\paren{\mathsf{SNR}, h}$ since the argument of the $\mathcal{Q}$-function is always positive. Hence, we have \begin{eqnarray} p\paren{\mathsf{SNR}, h} &\geq& p_1\paren{\mathsf{SNR}, h} + p_2\paren{\mathsf{SNR}, h} - p_3\paren{\mathsf{SNR}, h} \nonumber \\ &\geq& p_1\paren{\mathsf{SNR}, h} + \frac12 p_2\paren{\mathsf{SNR}, h}. \label{Eqn: p(SNR, h) Lower Bound} \end{eqnarray} After scaling with $2^n$ and integrating \eqref{Eqn: p(SNR, h) Lower Bound} with respect to $f_{R, \Theta}\paren{r, \theta}$ over the above integration range, we have \begin{eqnarray} p\paren{\mathsf{SNR}} &\geq& p_1\paren{\mathsf{SNR}} + \frac12 p_2\paren{\mathsf{SNR}} \nonumber \\ &=& L\paren{\mathsf{SNR}}. \end{eqnarray} Next, we establish that $U\paren{\mathsf{SNR}} = p_1\paren{\mathsf{SNR}} + 2p_2\paren{\mathsf{SNR}}$ is an upper bound on $p\paren{\mathsf{SNR}}$. To this end, we will show that $p_4\paren{\mathsf{SNR}} \leq p_2\paren{\mathsf{SNR}}$ for all $M \geq 4$. For $M=4$, this is trivial since $p_4\paren{\mathsf{SNR}, h} = 0 \leq p_2\paren{\mathsf{SNR}, h}$. For $M>4$, we define \begin{eqnarray} p_5\paren{\mathsf{SNR}, h} = \frac{1}{\sqrt{\pi}}\int_{-\infty}^\infty \mathcal{Q}\paren{\sqrt{2\mathsf{SNR}}r\sec\paren{\frac{2\pi}{M}}\sin\paren{\frac{2\pi}{M}-\theta} + \sqrt{2}w\tan\paren{\frac{2\pi}{M}}} \e{-w^2} dw. \nonumber \end{eqnarray} We also define $p_5\paren{\mathsf{SNR}}$ to be the integral of $2^n p_5\paren{\mathsf{SNR}, h}$ with respect to $f_{R, \Theta}\paren{r, \theta}$ for $\theta \in \parenro{\frac{\pi}{M} - \frac{\pi}{2^n}, \frac{\pi}{M} + \frac{\pi}{2^n}}$ and $r \geq 0$. 
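The inequality $p_3\paren{\mathsf{SNR}, h} \leq \frac12 p_2\paren{\mathsf{SNR}, h}$ used above rests on the elementary bound $\mathcal{Q}(x) \leq \frac12$ for $x \geq 0$. A minimal numerical check, using the standard representation $\mathcal{Q}(x) = \frac12\operatorname{erfc}\paren{x/\sqrt{2}}$:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

for x in (0.0, 0.1, 1.0, 3.0):
    assert Q(x) <= 0.5                         # Q(x) <= 1/2 for x >= 0
    for y in (0.0, 0.5, 2.0):
        # hence the product of two tails is at most half of either factor
        assert Q(x) * Q(y) <= 0.5 * Q(y) + 1e-15
```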
We observe that $p_4\paren{\mathsf{SNR}} \leq p_5\paren{\mathsf{SNR}}$ since the integrands are always positive and the integral with respect to $w$ is over the whole real line for $p_5\paren{\mathsf{SNR}, h}$. Thus, it will be enough to show $p_2\paren{\mathsf{SNR}} = p_5\paren{\mathsf{SNR}}$ to conclude the proof. For $\mathsf{SNR} = 0$, this can be verified by using the identity $\mathcal{Q}(x) = 1-\mathcal{Q}(-x)$. To prove the equality for all $\mathsf{SNR} \geq 0$, we define the function $f\paren{\mathsf{SNR}} = p_2\paren{\mathsf{SNR}} - p_5\paren{\mathsf{SNR}}$. It is enough to show that the derivative of $f\paren{\mathsf{SNR}}$, which we represent by $f^\prime\paren{\mathsf{SNR}}$, is equal to zero everywhere in order to show $p_2\paren{\mathsf{SNR}} = p_5\paren{\mathsf{SNR}}$. This is because if $f^\prime\paren{\mathsf{SNR}}$ is equal to zero for all $\mathsf{SNR} \geq 0$, then $f\paren{\mathsf{SNR}}$ must be a constant function. Since $f\paren{0} = 0$, we have $f\paren{\mathsf{SNR}} = p_2\paren{\mathsf{SNR}} - p_5\paren{\mathsf{SNR}} = 0$ for all $\mathsf{SNR} \geq 0$. We devote the rest of the proof to showing that $f^\prime\paren{\mathsf{SNR}} = 0$ for all $\mathsf{SNR} \geq 0$. Using the definition of the $\mathcal{Q}$-function, the derivative of $p_2\paren{\mathsf{SNR}}$ with respect to $\mathsf{SNR}$, which we represent by $p_2^\prime\paren{\mathsf{SNR}}$, is given by \begin{eqnarray} p_2^\prime\paren{\mathsf{SNR}} &=& \frac{2^{n-1}}{\pi} \int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \int_{0}^{\infty} \frac{d p_2\paren{\mathsf{SNR}, r\e{\jmath \theta}}}{d\mathsf{SNR}} f_R(r) dr d\theta \nonumber \\ &=& \frac{2^{n-1}}{\pi} \int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \int_{0}^{\infty} \frac{-r\sin \theta}{2 \sqrt{\pi \mathsf{SNR}}} \e{-\mathsf{SNR} r^2 \sin^2\theta} f_R(r) dr d\theta. 
\nonumber \end{eqnarray} Similarly, $p_5^\prime\paren{\mathsf{SNR}}$ can be written as \begin{eqnarray} p_5^\prime\paren{\mathsf{SNR}} &=& \frac{2^{n-1}}{\pi} \int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \int_{0}^{\infty} \frac{d p_5\paren{\mathsf{SNR}, r\e{\jmath \theta}}}{d\mathsf{SNR}} f_R(r) dr d\theta \nonumber \\ &=& \frac{2^{n-1}}{\pi \sqrt{\pi}} \int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \int_{0}^{\infty} \frac{-A\paren{r, \theta}}{2\sqrt{\pi \mathsf{SNR}}} I\paren{r, \theta} f_R(r)dr d\theta, \label{Eqn: p5(SNR) Derivative 1} \end{eqnarray} where $A\paren{r, \theta} = r \sec\paren{\frac{2\pi}{M}}\sin\paren{\frac{2\pi}{M} - \theta}$, $I\paren{r, \theta} = \int_{-\infty}^\infty \e{-\paren{w^2 + \paren{A\paren{r, \theta}\sqrt{\mathsf{SNR}}+Bw}^2}}dw$ and $B= \tan\paren{\frac{2\pi}{M}}$. After completing the square in the exponent of $I\paren{r, \theta}$ and recognizing the resulting expression as a scaled Gaussian pdf, $I\paren{r, \theta}$ can be shown to be equal to \begin{eqnarray} I\paren{r, \theta} = \sqrt{\pi}\cos\paren{\frac{2\pi}{M}} \e{-\mathsf{SNR} r^2 \sin^2\paren{\frac{2\pi}{M}-\theta}}. \label{Eqn: I(r, theta)} \end{eqnarray} Using \eqref{Eqn: I(r, theta)} in \eqref{Eqn: p5(SNR) Derivative 1}, we have \begin{eqnarray} p_5^\prime\paren{\mathsf{SNR}} = \frac{2^{n-1}}{\pi} \int_{\frac{\pi}{M} - \frac{\pi}{2^{n}}}^{\frac{\pi}{M} + \frac{\pi}{2^{n}}} \int_{0}^{\infty} \frac{-r\sin\paren{\frac{2\pi}{M}-\theta}}{2 \sqrt{\pi \mathsf{SNR}}} \e{-\mathsf{SNR} r^2 \sin^2\paren{\frac{2\pi}{M}-\theta}} f_R(r) dr d\theta. \label{Eqn: p5(SNR) Derivative 2} \end{eqnarray} The change of variables $u = \frac{2\pi}{M} - \theta$ in \eqref{Eqn: p5(SNR) Derivative 2} shows that $p_2^\prime\paren{\mathsf{SNR}} = p_5^\prime\paren{\mathsf{SNR}}$, and hence $f^\prime\paren{\mathsf{SNR}} = 0$ as desired. \bibliographystyle{IEEEtran}
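The Gaussian-integral evaluation of $I\paren{r, \theta}$ in \eqref{Eqn: I(r, theta)} can also be verified by direct quadrature; the values of $M$, $\mathsf{SNR}$, $r$ and $\theta$ below are illustrative only.

```python
import math

# Check: int_{-inf}^{inf} exp(-(w^2 + (A*sqrt(SNR) + B*w)^2)) dw
#        = sqrt(pi) * cos(2*pi/M) * exp(-SNR * r^2 * sin^2(2*pi/M - theta)),
# with A = r*sec(2*pi/M)*sin(2*pi/M - theta) and B = tan(2*pi/M).
M, snr, r, theta = 8, 2.0, 1.2, 0.3            # illustrative values
A = r / math.cos(2 * math.pi / M) * math.sin(2 * math.pi / M - theta)
B = math.tan(2 * math.pi / M)

# Midpoint quadrature; the integrand is a Gaussian, so [-10, 10] suffices.
n, lo, hi = 200_000, -10.0, 10.0
step = (hi - lo) / n
quad = sum(step * math.exp(-(w * w + (A * math.sqrt(snr) + B * w) ** 2))
           for w in (lo + (j + 0.5) * step for j in range(n)))

closed = (math.sqrt(math.pi) * math.cos(2 * math.pi / M)
          * math.exp(-snr * r * r * math.sin(2 * math.pi / M - theta) ** 2))
assert abs(quad - closed) < 1e-6
```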
\section{Introduction} \IEEEPARstart{R}{elated} research topics of multi-vehicle coordination control include vehicle dynamics (e.g., integrator-based dynamics, linear or nonlinear systems, Euler-Lagrange (EL) systems) \cite{rencao}, communication modes (e.g., undirected or directed graphs, fixed or switching communication topologies, communication delays) \cite{dongxiwang2016}, and various coordination tasks (e.g., consensus, flocking, formation, rigid-shape swarming) \cite{oh2015}. In practice, vehicles often suffer from under-actuation constraints, such as the well-known non-holonomic constraints in the planar motion of unmanned vehicles \cite{mag2018}, which should be taken into consideration in coordination control design. In this paper, taking these motion constraints into account, we focus on mobile formation coordination for multiple non-holonomic vehicles. First, we study coordination control with guaranteed forward motion to achieve vehicle trajectory tracking. In practice, the kinematics of non-holonomic vehicles, such as ground vehicles or unmanned aerial vehicles (UAVs), are often described by a unicycle model. Therefore, besides the non-holonomic motion constraint, a speed constraint should be taken into account since such vehicles can only move forward (e.g., fixed-wing UAVs must maintain positive airspeeds {\cite{weitz2008,Stipanovic2004,duan2009}}). One such scenario arises, for example, when backward motion is not allowed as a group of unicycle-type vehicles passes through a narrow path. This raises an interesting control problem in which a positive forward speed must be maintained at all times. To solve this problem, some papers, e.g., {\cite{sun,qinjiahu,ma}}, assume that the linear speed is a positive constant and mainly focus on the design of the angular speed input. However, there are few results that take both linear and angular speed constraints into consideration in the design. 
In {\cite{weitz2008,Stipanovic2004}}, the forward-motion requirement is handled by taking acceleration as an auxiliary design variable (i.e., a dynamic controller), and a model predictive control (MPC) approach, which demands a high online computational cost, is presented in {\cite{duan2009}}. From a driver's perspective, the heading of the vehicle needs to be adjusted simultaneously according to the position error with respect to a target vehicle so that the tracking task can be achieved; this control framework has been adopted in the tracking control of under-actuated quadrotors {\cite{ding2016,zouyao2}}. In this paper, we extend this idea to design a forward-motion controller for non-holonomic vehicles. The novelty of the proposed control law is that it guarantees forward motion at all times, so it can be applied to coordinate UAV-type systems with positive forward speeds. Beyond single-agent control, formation control has been widely investigated as a typical multi-agent application, aiming to control multiple vehicles to form and maintain a prescribed geometric shape \cite{oh2015}. Many approaches, such as the behavior-based approach {\cite{lawton2003}}, virtual structure \cite{do2007}, leader-follower structure {\cite{das2002}}, potential field method {\cite{cheah}}, consensus-based approach {\cite{dongxiwang2}}, and MPC approach {\cite{sunzhongqi,sunzhongqi2}}, have been proposed to tackle the formation control problem for non-holonomic vehicles. Besides formation stabilization, mobile formation maneuvering is also an important issue, especially formation coordination with rigid-body motion constraints. Based on position errors in the global frame, formation tasks for multiple non-holonomic vehicles have been studied in \cite{yuxiao,liutengfei1,liutengfei2,miaozhiqiang}. 
In those papers, the formation control schemes only consider formation coordination with translational motions (i.e., the formation shape may translate but rotations are not permitted). Based on the leader-follower structure, the desired formation shapes in {\cite{das2002,liangxinwu}} are defined in the leader's coordinate frame. For example, the formation shape stabilization problem in which the follower maintains a desired distance and orientation with respect to its leader's coordinate frame is studied in {\cite{das2002}}, and the formation control problem in which the follower maintains a desired relative position with respect to its leader's coordinate frame is studied in {\cite{liangxinwu}}. As an extension of the leader-follower scheme, the formation shapes in {\cite{Consolini2008,Morbiditac,liangxinwu2}} are specified in the follower's coordinate frame. For example, a desired distance and orientation of the leader's barycenter with respect to the follower's coordinate frame is used in {\cite{Consolini2008,Morbiditac}}, and the formation shape in {\cite{liangxinwu2}} is specified by a desired relative position of the leader's barycenter with respect to the follower's coordinate frame. Based on graph rigidity theory, the paper {\cite{milad}} proposes rigid-graph-based formation control laws to achieve and maintain a rigid formation shape. In addition, the moving-target circular formation task has been studied in {\cite{Arranz}}. However, those formation maneuver controllers cannot maintain a mobile formation shape with weak/strict rigid-body motion (definitions will be made clear in context). In the formation control field, certain important applications, such as rigid formation patterns and military aircraft maneuvers, often require maintaining fixed relative positions for all vehicles with respect to one common vehicle (namely, a mobile formation with weak rigid-body motion). 
Some particular application scenarios, including aircraft refueling or satellite docking, require a fixed relative position between any two vehicles (namely, a mobile formation with strict rigid-body motion). Taking the non-holonomic constraint into account, formation control and motion coordination with weak/strict rigid-body motion for multiple non-holonomic vehicles become even more challenging. To the best of our knowledge, the conditions for a mobile formation with weak/strict rigid-body motion for non-holonomic vehicles have not been discussed before, and they form one of the main focuses of this paper. Formation control with fixed or rigid shapes for multiple single- or double-integrator agents (i.e., point-mass type models) has been surveyed in \cite{Anderson2008}, wherein most control laws are constructed with position errors and desired distances among agents. Compared with integrator-based vehicle models, however, the study of mobile formations with weak/strict rigid-body motion is more meaningful for vehicles whose configuration spaces are nonlinear manifolds. Fixed distances and relative configurations can be straightforwardly regulated for fully-actuated planar vehicles \cite{dong2013,liuyongfang}. However, the relationship among fixed distances, fixed relative positions, headings, and speed constraints of multiple non-holonomic vehicles maintaining a mobile formation with weak/strict rigid-body motion remains unexplored. In light of the body-fixed frame and the inertial frame, several interesting properties of mobile formation behavior for multiple non-holonomic vehicles are first explored in this paper. Different from the rigid formation control with the graph rigidity condition in \cite{Anderson2008}, the uniqueness of a mobile formation with weak/strict rigid-body motion is achieved in this paper by certain specified formation tasks involving relative positions and headings.
Thereby, a distributed mobile formation control law for coordinating multiple vehicles with strict rigid-body motion is designed under a directed tree graph. To summarize, the main contributions of this paper are listed as follows: \begin{enumerate} \item[1.] A novel forward motion controller is proposed to realize tracking control of SE(2) non-holonomic vehicles. By coupling the position error and the desired attitude information, an intermediate attitude is presented for non-holonomic vehicles to achieve forward motion control for trajectory tracking of a leader vehicle. The proposed control inputs ensure a positive forward motion at all times. In addition, the saturation of the inputs is also guaranteed. The proposed results can be applied not only to unicycle-type ground vehicles, but also to fixed-wing UAVs flying in a horizontal plane. \item[2.] The motion properties of relative positions and headings are explored thoroughly for a group of mobile non-holonomic vehicles maintaining a target formation shape. The proposed adjoint orbit and its properties are presented in the first two propositions of this paper. To ensure that any two vehicles keep a fixed relative position in a mobile formation, we prove that the ratio of linear speed to angular speed for each individual vehicle has to be constant, except for the cases of parallel formation and translational straight line formation. Motion properties for a mobile formation with weak rigid-body motion are also demonstrated. To the best of our knowledge, this is the first time that such necessary and sufficient conditions for a mobile formation with weak/strict rigid-body motion for non-holonomic vehicles have been studied. \item[3.] The control inputs for each vehicle to maintain a mobile formation with weak/strict rigid-body motion are provided in this paper.
Based on the proposed tracking control law and the mobile formation coordination theory, a fully distributed mobile formation control law is designed to form and maintain a mobile formation with strict rigid-body motion. \end{enumerate} The remainder of this paper is structured as follows. The notations and problem formulation are introduced in Section~\ref{sec:prob}. The forward motion controller and its stability analysis are presented in Section \ref{sec:twoStage}. Section \ref{sec:formation} introduces mobile formation coordination of multiple non-holonomic vehicles. Simulation and experiment results are shown in Section \ref{sec:sml}. Finally, conclusions are drawn in Section \ref{sec:cls}. \section{Background and Problem formulation} \label{sec:prob} \subsection{Notations} Notations and concepts in this paper are fairly standard. $\|x\|$ denotes the Euclidean norm of a vector $x$. The symbol $x\in \mathbb{S}^{1}$ represents a vector $x\in \mathbb{R}^{2}$ with unit Euclidean norm. Let $I_{p}$ denote the identity matrix of dimension $p\times p$. Let $\{e_{1},e_{2}\}$ denote the natural basis of $\mathbb{R}^{2}$. Let the principal axes of the rigid body define a body-fixed reference frame attached to the vehicle's center of mass, denoted by $\mathcal{F}_{B}$. Let $\mathcal{F}_{I}$ be the inertial frame. Let $\sigma(\cdot)$ denote a saturation function which satisfies $\|\sigma(x)\|<1$, $\sigma(0)=0$ and $x^{T}\sigma(x)>0$ for all $x\neq0$, where $x\in \mathbb{R}^{n}$. For scalar arguments, examples of the function $\sigma(x)$ include $\tanh(x)$ and $\frac{x}{\sqrt{1+x^{2}}}$. In this paper, we use $SE(2)$ to describe the configuration space of a planar vehicle. The special orthogonal group is denoted by $SO(2):=\{R\in \mathbb{R}^{2\times2}:R^{T}R=I_{2}, \mathrm{det}(R)=1\}$, and $p\in \mathbb{R}^{2}$ represents the position.
The group element $g\in SE(2)$ is denoted by $$g=\left[ \begin{matrix} R & p \\ 0 & 1 \end{matrix} \right]=\left[ \begin{matrix} \cos\theta & -\sin\theta & x \\ \sin\theta & \cos\theta & y \\ 0 & 0 & 1 \end{matrix} \right]$$ where a rotation matrix $R \in SO(2)$ describes the orientation from $\mathcal{F}_{B}$ to $\mathcal{F}_{I}$. An element $\hat{\xi}$ of the Lie algebra $\mathfrak{se}(2)$ represents the velocity of a vehicle expressed in $\mathcal{F}_{B}$, and is denoted as $$\hat{\xi}=\left[ \begin{matrix} \hat{\omega} & \nu \\ 0 & 0 \end{matrix} \right]=\left[ \begin{matrix} 0 & -\omega & \nu_{x} \\ \omega & 0 & \nu_{y} \\ 0 & 0 & 0 \end{matrix} \right]$$ where $\xi=[\omega, \nu_{x}, \nu_{y}]^{T} \in \mathbb{R}^{3}$, $\nu \in \mathbb{R}^{2}$ represents the translational speed, $\omega \in \mathbb{R}$ is the angular speed and $\hat{\omega}\in \mathfrak{so}(2)$. The hat operator $(\cdot)^{\wedge}: \mathbb{R} \rightarrow \mathfrak{so}(2)$ is a linear map, where $\mathfrak{so}(2):=\{\hat{x}\in \mathbb{R}^{2\times 2} |\hat{x}^{T}=-\hat{x}\}$. The inverse of the hat operator $(\cdot)^{\wedge}$ is the vee operator $(\cdot)^{\vee}: \mathfrak{so}(2)\rightarrow \mathbb{R}$. For all $g\in SE(2)$, $A,B \in \mathfrak{se}(2)$, the adjoint map $\mathrm{Ad}_{g}$ is denoted as $\mathrm{Ad}_{g}A=gAg^{-1}$. Without side slipping, the velocity $\xi$ of a vehicle should satisfy $\nu_{y}=0$, represented by the non-holonomic constraint $\dot{x}\sin\theta-\dot{y}\cos\theta=0$. In this paper, we let $v \triangleq \nu_{x}$ denote the linear speed of the non-holonomic vehicles. \subsection{Problem Description} Consider $n+1$ non-holonomic vehicles labeled with $i=0,1,\cdots,n$, in which the equation of motion for vehicle $i$ is modeled as \begin{subequations} \label{model} \begin{align} \label{modelp} \dot{p}_{i} =& v_{i}R_{i}e_{1} \\ \label{modelR} \dot{R}_{i}=& R_{i}\hat{\omega}_{i} \end{align} \end{subequations} where the subscript $i$ represents vehicle $i$.
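The $SE(2)$ operators and the unicycle kinematics above can be sketched numerically as follows. This is a minimal illustration assuming \texttt{numpy}; the function names are ours and not part of any established library, and the Euler step with re-projection onto $SO(2)$ is our simplifying integration choice.

```python
import numpy as np

def hat(w):
    """Hat operator (.)^: R -> so(2), a 2x2 skew-symmetric matrix."""
    return np.array([[0.0, -w], [w, 0.0]])

def vee(W):
    """Vee operator (.)^v: so(2) -> R, inverse of the hat operator."""
    return W[1, 0]

def rot(theta):
    """Rotation matrix R in SO(2) for heading angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def se2(theta, x, y):
    """Homogeneous group element g in SE(2)."""
    g = np.eye(3)
    g[:2, :2] = rot(theta)
    g[:2, 2] = [x, y]
    return g

def unicycle_step(p, R, v, w, dt):
    """One Euler step of the kinematics: dp = v R e1 dt, dR = R hat(w) dt."""
    e1 = np.array([1.0, 0.0])
    p_next = p + dt * v * (R @ e1)
    R_next = R @ (np.eye(2) + dt * hat(w))
    # re-project onto SO(2) so the first-order update stays orthonormal
    U, _, Vt = np.linalg.svd(R_next)
    return p_next, U @ Vt
```

Note that the lateral component of $\dot{p}$ produced by `unicycle_step` is always zero in the body frame, which is exactly the non-holonomic constraint.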
Node $0$ represents the leader and the other nodes are followers. We give the following assumption on the desired speeds $v_{0}$ and $\omega_{0}$ of the leader vehicle. \begin{assumption} \label{desiredw} The desired speeds $v_{0}$, $\omega_{0}$ are assumed to be bounded for all time, and $v_{0}$ is assumed to be positive, i.e., $v_{0}>0$ for all time. \end{assumption} For a non-holonomic unicycle-type ground vehicle, the linear speed $v_{i}$ can be positive, negative or zero. However, for platforms such as fixed-wing UAVs, which have zero or negligible speed in the direction perpendicular to their headings, a persistent forward motion with a positive linear speed $v_{i}$ must be guaranteed. We consider this motion condition as a constraint in the tracking and formation control design. The main objective of this paper is to solve two coordination problems for unicycle-type vehicles subject to the non-holonomic constraint. The first is to design an appropriate control law that realizes forward motion control and achieves trajectory tracking of non-holonomic vehicles with $v_{i}>0$ for all time. The proposed forward motion control approach can be applied to trajectory tracking, formation guidance, and other applications involving UAVs with a positive forward speed. The second is to thoroughly explore the motion properties of relative positions and headings for a group of non-holonomic vehicles maintaining a mobile formation with certain motion constraints, and then to design a distributed coordination control law to form and maintain a mobile formation with strict rigid-body motion for multiple non-holonomic vehicles. \section{Two-vehicles case: forward motion tracking control design} \label{sec:twoStage} In this section, we focus on designing forward motion control laws that ensure that vehicle $1$ tracks the leader, vehicle $0$. First, control inputs are proposed to solve the forward motion control problem for non-holonomic vehicles.
Then, stability analysis of the closed-loop system is provided. \subsection{Control input design} In the following, the control inputs $v_{1}$ and $\omega_{1}$ are designed in three steps. \emph{Firstly,} the translational control input $v_{1}$ is designed. Let $p_{01}=p_{1}-p_{0}$ denote the position error between vehicle $1$ and leader $0$. Then, the virtual control input and linear speed input are given by \begin{equation} \label{trackui} u_{1}=-k_{1}\sigma(p_{01})+v_{0}R_{0}e_{1} \end{equation} \begin{equation} \label{trackvi} v_{1}=\|u_{1}\| \end{equation} where $u_{1}\in \mathbb{R}^{2}$ is a virtual control input vector and $k_{1}>0$ is a constant control gain. \emph{Secondly,} an intermediate attitude $\mathcal{R}_{0} \in SO(2)$ is constructed, which is given by \begin{equation} \label{R01} \mathcal{R}_{0}=[r_{0}^{1},r_{0}^{2}]\in SO(2) \end{equation} with the vectors defined by \begin{equation*} \label{b1i} r_{0}^{1}=\frac{u_{1}}{\|u_{1}\|} \in \mathbb{S}^{1}, \quad r_{0}^{2}=\left[ \begin{matrix} -r_{0}^{1}(2,1)\\ r_{0}^{1}(1,1) \end{matrix} \right] \in \mathbb{S}^{1} \end{equation*} where $r_{0}^{1}(k,1)$ denotes the $k$-th entry of $r_{0}^{1}$. The kinematics of the intermediate attitude $\mathcal{R}_{0}$ satisfy $\dot{\mathcal{R}}_{0}=\mathcal{R}_{0}\hat{\varpi}_{0}$, and the angular speed $\varpi_{0}$ is derived as \begin{equation} \varpi_{0}=(\mathcal{R}_{0}^{T}\dot{\mathcal{R}}_{0})^{\vee} \end{equation} \emph{Finally,} the rotational control input $\omega_{1}$ is designed. In our proposed approach, the attitude of vehicle $1$ is required to track the intermediate attitude $\mathcal{R}_{0}$. Let $R_{01}=\mathcal{R}_{0}^{T}R_{1}\in SO(2)$ be the rotation error between the attitude of vehicle $1$ and the intermediate attitude $\mathcal{R}_{0}$. Thus, the angular speed input $\omega_{1}$ is designed as \begin{equation} \label{trackwi} \omega_{1}=-k_{2}\sigma((R_{01}-R_{01}^{T})^{\vee})+\varpi_{0} \end{equation} where $k_{2}>0$ is a constant control gain.
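The three design steps can be sketched numerically as follows. This is a minimal illustration assuming \texttt{numpy}; the angular speed $\varpi_{0}$ of the intermediate attitude is passed in as a known quantity (computing it requires differentiating $u_{1}$), and the componentwise $\tanh$ saturation is our simplifying choice of $\sigma$.

```python
import numpy as np

def sigma(x):
    """Saturation function: here tanh, applied componentwise for simplicity."""
    return np.tanh(x)

def tracking_inputs(p0, p1, R0, v0, k1, k2, R1, varpi0):
    """Forward-motion tracking inputs for follower vehicle 1.

    p0, p1: leader/follower positions; R0, R1: leader/follower attitudes;
    v0: leader linear speed; k1, k2 > 0: gains; varpi0: angular speed of
    the intermediate attitude (assumed available)."""
    e1 = np.array([1.0, 0.0])
    p01 = p1 - p0
    u1 = -k1 * sigma(p01) + v0 * (R0 @ e1)        # virtual control input
    v1 = np.linalg.norm(u1)                        # linear speed input
    r1 = u1 / v1                                   # first column of R_0
    Rint = np.column_stack((r1, [-r1[1], r1[0]]))  # intermediate attitude
    R01 = Rint.T @ R1                              # rotation error
    err = R01[1, 0] - R01[0, 1]                    # (R01 - R01^T)^vee
    w1 = -k2 * sigma(err) + varpi0                 # angular speed input
    return v1, w1, Rint
```

When the position error is zero and the follower's attitude equals the leader's, the sketch returns $v_{1}=v_{0}$ and $\omega_{1}=\varpi_{0}$, consistent with the intended steady state.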
\begin{rmk} From the kinematics equation $(\ref{model})$, the attitude $R_{i}$ evolves under the angular speed input $\omega_{i}$ alone. Nevertheless, the evolution of the position $p_{i}$ depends on both the heading $R_{i}$ and the linear speed input $v_{i}$. The intuition behind the design of the control inputs $v_1$ and $\omega_1$ comes from the scenario of a manned vehicle tracking another vehicle, wherein a driver constantly adjusts the heading of the vehicle according to the visible position error, as illustrated in Fig.~$\ref{direction}$. It can be seen that the position error vector $p_{0}-p_{1}$ points from the mass center of vehicle $1$ to the mass center of leader $0$. A natural strategy is to steer the heading $R_{1}$ of vehicle $1$ to track the direction of the vector $p_{0}-p_{1}$. Thus, by enhancing the couplings between the position and attitude variables, we propose a framework that designs the translational controller and the attitude controller in two stages, as shown in Fig.~$\ref{frame}$. In this framework, an intermediate attitude $\mathcal{R}_{0}$ in $(\ref{R01})$ is constructed from the position error vector ($p_{0}-p_{1}$) and the desired attitude $R_{0}$, and the actual attitude $R_{1}$ tracks the intermediate attitude $\mathcal{R}_{0}$. Once the position error converges to zero, the intermediate rotation matrix $\mathcal{R}_{0}$ coincides with the desired rotation matrix $R_{0}$ since the rotation matrix $R \in SO(2)$ is a one-parameter subgroup \cite{Bullo2005}. \end{rmk} \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{figure}[!htb] \centering { \subfigure[] {\label{direction}\includegraphics[width=0.4\linewidth]{direction.eps}} \subfigure[] {\label{frame}\includegraphics[width=0.45\linewidth]{frame.eps}} } \caption{The tracking control strategy: (a). forward motion control in a tracking control scenario; (b).
the proposed two-stage framework.} \label{twostage} \end{figure} \subsection{Stability analysis} In this subsection, we present a detailed analysis of the stability of vehicle $1$ under the control inputs $v_{1}$ and $\omega_{1}$. From \eqref{trackvi} and \eqref{R01}, the control vector $u_1$ can be equivalently written as $u_{1}\triangleq v_{1}\mathcal{R}_{0}e_{1}$. Then, the kinematics $(\ref{modelp})$ can be rewritten as \begin{equation} \dot{p}_{1}=u_{1}+\Delta_{1} \end{equation} where $\Delta_{1}=v_{1}R_{1}e_{1}-v_{1}\mathcal{R}_{0}e_{1} \in \mathbb{R}^{2}$ can be seen as a perturbation term. Hence, the dynamics of the position error $p_{01}$ and the attitude error $R_{01}$ are given by \begin{equation} \label{modelde} \begin{cases} \dot{p}_{01}= -k_{1}\sigma(p_{01})+\Delta_{1} \\ \dot{R}_{01} = -k_{2}R_{01}\big(\sigma((R_{01}-R_{01}^{T})^{\vee})\big)^{\wedge} \end{cases} \end{equation} Without the perturbation term $\Delta_{1}$, the closed-loop dynamics $(\ref{modelde})$ can be written as \begin{equation} \label{modele} \begin{cases} \dot{p}_{01}= -k_{1}\sigma(p_{01}) \\ \dot{R}_{01} = -k_{2}R_{01}\big(\sigma((R_{01}-R_{01}^{T})^{\vee})\big)^{\wedge} \end{cases} \end{equation} Let $\theta_{01}=\theta_{1}-\phi_{0}$ denote the Euler-angle error, where $\theta_{1}\in (-\pi,\pi]$ and $\phi_{0}\in (-\pi,\pi]$ are the Euler angles corresponding to the rotation matrices $R_{1}$ and $\mathcal{R}_{0}$, respectively. Before giving the main result on forward motion control for leader trajectory tracking, the following lemmas are presented first. \begin{lemma} \label{condition} (see \cite{ding2016,zouyao}) Suppose the control inputs $v_{1}$ and $\omega_{1}$ stabilize the states of the (unperturbed) closed-loop error system $(\ref{modele})$ asymptotically and there exists a bounded positive constant $\varphi$ such that $\|\Delta_{1}\|<\varphi \|\theta_{01}\|$. Then the closed-loop error system $(\ref{modelde})$ is asymptotically stable.
\end{lemma} \begin{lemma} \label{condition2} Under Assumption $\ref{desiredw}$ and the saturated input $(\ref{trackvi})$, the perturbation term $\Delta_{1}$ is bounded. \end{lemma} \begin{proof} From Assumption $\ref{desiredw}$ and the properties of the saturation function, it holds that $v_{1} \leq\eta$, where $\eta$ is a bounded positive constant. On the other hand, one has $\|(\mathcal{R}_{0}-R_{1})e_{1}\| = \|(R_{01}-I_{2})e_{1}\| =2|\sin(\theta_{01}/2)|$. Since $2|\sin(\theta_{01}/2)|\leq |\theta_{01}|$, we can obtain that $ \|\Delta_{1}\| \leq v_{1} \|(\mathcal{R}_{0}-R_{1})e_{1}\| \leq \eta\|\theta_{01}\| <\varphi \|\theta_{01}\| $ for any constant $\varphi>\eta$. \end{proof} The main result on forward motion control and trajectory tracking is given as follows. \begin{thm} \label{thm1} Suppose that the virtual control vector $u_{1}$ and the initial rotation error $R_{01}(0)$ satisfy $\|u_{1}\|\neq0$ and $\mathrm{tr}(R_{01}(0))\neq -2$, and that Assumption $\ref{desiredw}$ holds. Under the saturated inputs $(\ref{trackvi})$ and $(\ref{trackwi})$, the non-holonomic vehicle 1 is able to track the leader 0 asymptotically. \end{thm} \begin{proof} Under the assumption $\|u_{1}\|\neq 0$, the intermediate rotation matrix $\mathcal{R}_{0}$ is always smooth. Define the positive definite Lyapunov function $ V=\frac{1}{2}p_{01}^{T}p_{01}+\mathrm{tr}(I_{2}-R_{01}) $. Then, its time derivative along the trajectories of $(\ref{modele})$ is given by \begin{equation} \label{dV} \dot{V}=-k_{1}p_{01}^{T}\sigma(p_{01})-k_{2}((R_{01}-R_{01}^{T})^{\vee})\sigma((R_{01}-R_{01}^{T})^{\vee}) \leq 0 \end{equation} where the fact that $\mathrm{tr}(R\hat{\omega})=-\omega(R-R^{T})^{\vee}$ is used. From $\dot{V}\leq 0$, the closed-loop error system $(\ref{modele})$ is Lyapunov stable. Moreover, $V(t)\leq V(0)$, which implies that the undesired equilibrium $\mathrm{tr}(R_{01})=-2$ is excluded.
Based on the property of the saturation function that $x\sigma(x)=0$ if and only if $x=0$, one has that $(p_{01},R_{01})$ converges to the largest invariant set on which $\dot{V}=0$, namely $S=\{(p_{01},R_{01}): p_{01}=0,\ R_{01}=I_{2}\}$. Thus, the closed-loop error system $(\ref{modele})$ is asymptotically stable, i.e., $p_{01}\rightarrow 0$, $R_{01}\rightarrow I_{2}$ as $t\rightarrow \infty$. In light of Lemmas $\ref{condition}$ and $\ref{condition2}$, the asymptotic convergence of the closed-loop error system (\ref{modelde}) is proved. As the position error $p_{01}\rightarrow 0$, the vector $r_{0}^{1} \rightarrow R_{0}e_{1}$ as $t\rightarrow \infty$. Since the rotation matrix $R \in SO(2)$ is a one-parameter subgroup, one has that $\mathcal{R}_{0}^{T}R_{0}\rightarrow I_{2}$ as $t\rightarrow \infty$. \end{proof} \begin{rmk} In Theorem {\ref{thm1}}, we assume $\|u_{1}\|\neq0$ for all time. We note that the smoothness of the intermediate rotation matrix $\mathcal{R}_{0}$ is not guaranteed if $\|u_{1}\|=0$. To avoid approaching the point $\|u_{1}\|=0$, we can adopt the switching-based approach proposed in \cite{ding2016}. Once the vehicle encounters $\|u_{1}\|=0$, we can hold $v_{1}$ at a positive value and freeze $\mathcal{R}_{0}$. As long as the leader keeps moving, the state $\|u_{1}\|=0$ will not persist. After this moment, the controller will make the position error and attitude error converge to zero. Alternatively, $\|u_{1}\|=0$ can be avoided by choosing an appropriate gain $k_{1}$. For example, from $(\ref{trackvi})$, one has $v_{1} \geq \|v_{0}R_{0}e_{1}\|-\|k_{1}\sigma(p_{01})\| \geq v_{0}-k_{1} > v_{min_{0}}-k_{1}$, where $v_{min_{0}}$ is a constant satisfying $v_{min_{0}}< v_{0}$ for all time. By choosing $k_{1}<v_{min_{0}}$, $\|u_{1}\|=0$ can never occur. \end{rmk} \section{Mobile formation coordination of multiple non-holonomic vehicles} \label{sec:formation} In this section, we study mobile formation coordination for multiple non-holonomic vehicles.
To be specific, we shall explore some properties of mobile formations with motion constraints and then design a control law to form and maintain a desired mobile formation with strict rigid-body motion. In the following, rigid-body motion (composed of translational motion and rotational motion) refers to the motion of a formation shape maintained by multiple mobile vehicles. \subsection{Motion behaviors of mobile formation with motion constraints} In the context of non-holonomic vehicle formations, we give the following definitions of mobile formations with weak/strict rigid-body motion. \begin{definition} \label{def:weak_fixed_formation} A mobile formation with \textit{weak} rigid-body motion is a formation in which the relative positions of each mobile vehicle with respect to one common mobile vehicle (in its body-fixed frame) are kept constant. \end{definition} \begin{definition} \label{def:fixed_formation} A mobile formation with \textit{strict} rigid-body motion is a formation in which the relative positions between any two mobile vehicles (in their body-fixed frames) are kept constant. \end{definition} \begin{rmk} For a more intuitive understanding of the above definitions, three types of mobile formations involving three vehicles are shown in Fig.~$\ref{formation}$. Note that all the mobile formations shown in Fig.~$\ref{formation}$ involve rigid and fixed formation shapes \cite{oh2015,Anderson2008}, but the motions of the formation shapes are different. The mobile formation in Fig.~$\ref{formation}$(a) only has translational motion (i.e., the fixed formation shape only admits a translational motion while all vehicles have a synchronized heading), which has been studied in \cite{yuxiao,liutengfei1,liutengfei2,miaozhiqiang}. Translational motion of the whole formation shape with synchronized vehicle headings has limited motion freedom, and is not the focus of this paper.
In contrast, the mobile formation with weak rigid-body motion in Fig.~$\ref{formation}$(b) has both translational and rotational motion, and the mobile formation with strict rigid-body motion in Fig.~$\ref{formation}$(c) can be seen as the motion of a single rotating rigid body. According to Definition~\ref{def:weak_fixed_formation}, a mobile formation under a weak rigid-body motion only preserves fixed relative positions of each vehicle with respect to one common vehicle, and the relative positions and attitudes between any two vehicles (except for the common vehicle) do not have to be constant. In contrast, according to Definition~\ref{def:fixed_formation}, under a strict rigid-body motion, the mobile formation shape not only preserves fixed inter-vehicle distances but also keeps constant relative attitudes between any two vehicles. \end{rmk} The parallel formation and the translational straight line formation, which are two special cases of mobile formation with strict rigid-body motion as shown in Fig.~$\ref{formation}$(d) and Fig.~$\ref{formation}$(e), respectively, are defined as follows. \begin{definition} A parallel formation is a formation in which the headings of all vehicles are synchronized (or anti-synchronized), while the vehicle group keeps non-zero constant transverse offsets and zero longitudinal offsets between any two vehicles. \end{definition} \begin{definition} A translational straight line formation is a formation in which the angular speeds of all vehicles are zero while the vehicle group maintains constant transverse offsets and longitudinal offsets between any two vehicles.
\end{definition} \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{figure}[!htb] \centering { \subfigure[] {\label{nonformation}\includegraphics[width=0.35\linewidth]{nonformation.eps}} \subfigure[] {\label{weakrigidformation}\includegraphics[width=0.45\linewidth]{weakrigidformation.eps}} \subfigure[] {\label{rigidformation}\includegraphics[width=0.35\linewidth]{rigidformation.eps}} \subfigure[] {\label{parallelformation}\includegraphics[width=0.45\linewidth]{parallel.eps}} \subfigure[] {\label{lineformation}\includegraphics[width=0.45\linewidth]{line2.eps}} } \caption{Illustrations of mobile formations for non-holonomic vehicles: (a). a mobile formation with only translational motion; (b). a mobile formation with weak rigid-body motion; (c). a mobile formation with strict rigid-body motion; (d). a parallel formation; (e). a translational straight line formation. (We refer the readers to the attached video for more motion demonstrations of these mobile formations.) } \label{formation} \end{figure} Since unicycle-type vehicles cannot move sideways, the weak/strict rigid-body motion of a mobile formation imposes further constraints which limit the possible motion freedom of multi-vehicle groups. In particular, we consider a leader-follower framework to achieve a mobile formation. However, we remark that vehicle $0$ provides the control inputs that guide the mobile formation, and it is not necessarily a physical leader vehicle. In other words, the motion analysis for the defined mobile formations is applicable to both leader-follower and leaderless structures. In the following, we focus on studying motion behaviors for a group of vehicles moving in a mobile formation with a strict/weak rigid-body motion. Due to the non-holonomic dynamics, a mobile formation is required to satisfy certain conditions that respect all vehicles' motion constraints. We first introduce the following definition.
\begin{definition} \label{adjoint} (see \cite{liuyongfang}) For a mobile formation with a desired shape, the desired trajectory of vehicle $i$ is called vehicle $i$'s adjoint orbit. \end{definition} Let $g_{ji}=g_{j}^{-1}g_{i}$ denote the relative configuration of vehicle $i$ with respect to vehicle $j$, which is given by \begin{align*} g_{ji}=\left[ \begin{matrix} R_{ji} & R_{j}^{T}(p_{i}-p_{j}) \\ 0 & 1 \end{matrix} \right] =\left[ \begin{matrix} \cos\theta_{ji} & -\sin\theta_{ji} & x_{ji}^{j} \\ \sin\theta_{ji} & \cos\theta_{ji} & y_{ji}^{j} \\ 0 & 0 & 1 \end{matrix} \right] \end{align*} where $\theta_{ji}=\theta_{i}-\theta_{j}$. Then, the relative position of vehicle $i$ with respect to vehicle $j$ is defined as ${p}_{ji}^{j}=R_{j}^{T}(p_{i}-p_{j})$, and the distance between vehicle $j$ and vehicle $i$ is defined as $d_{ji}=\| R_{j}^{T}(p_{i}-p_{j}) \|=\| p_{i}-p_{j} \|$. The relative configuration of vehicle $i$ with respect to vehicle $0$ is defined as $g_{0i}=g_{0}^{-1}g_{i}$. Motion properties in a mobile formation for a group of non-holonomic vehicles are summarized in Propositions $\ref{pop1}$, $\ref{popp}$, $\ref{pop2}$ and $\ref{popweak}$. \begin{proposition} \label{pop1} Consider a desired relative configuration of vehicle $i$ with respect to vehicle $0$ denoted by $\bar{g}_{0i}\in SE(2)$ in $(\ref{g0i})$, where $\bar{x}_{0i}^{0}$ and $\bar{y}_{0i}^{0}$ are constants which depend on the formation task, and $\bar{\theta}_{0i}$ satisfies the equation $(\ref{theta0i})$. \begin{equation} \label{g0i} \bar{g}_{0i}=\left[ \begin{matrix} \cos\bar{\theta}_{0i} & -\sin\bar{\theta}_{0i} & \bar{x}_{0i}^{0} \\ \sin\bar{\theta}_{0i} & \cos\bar{\theta}_{0i} & \bar{y}_{0i}^{0} \\ 0 & 0 & 1 \end{matrix} \right] \end{equation} \begin{equation} \label{theta0i} \bar{\theta}_{0i}=\mathrm{atan2}(\omega_{0}\bar{x}_{0i}^{0}, v_{0}-\omega_{0}\bar{y}_{0i}^{0}) \end{equation} Then the following properties hold. (i). 
The trajectory $\tilde{g}_{i}=g_{0}\bar{g}_{0i}$ describes the vehicle $i$'s adjoint orbit. (ii). The adjoint orbit for vehicle $i$ satisfies the following kinematic equation \begin{equation} \label{g0i2} \dot{\tilde{g}}_{i}=\tilde{g}_{i}(\mathrm{Ad}_{\bar{g}_{0i}^{-1}}\hat{\xi}_{0}+\hat{\bar{\xi}}_{0i}) \end{equation} where $(\mathrm{Ad}_{\bar{g}_{0i}^{-1}}\hat{\xi}_{0}+\hat{\bar{\xi}}_{0i})$ is vehicle $i$'s adjoint velocity, and $\bar{\xi}_{0i}=[\bar{\omega}_{0i},0,0]^{T}$ satisfies the following kinematic equation \begin{equation} \label{g0iequation} \dot{\bar{g}}_{0i}=\bar{g}_{0i}\hat{\bar{\xi}}_{0i} \end{equation} where the angular speed is $\bar{\omega}_{0i}=\dot{\bar{\theta}}_{0i}$, and the heading error between vehicle $i$ and vehicle $0$ satisfies $(\ref{theta0i})$. (iii). For the case $\bar{x}_{0i}^{0}\neq 0$ and $\omega_{0}\neq 0$, the heading error $\bar{\theta}_{0i}$ cannot be zero. (iv). For the case $\bar{x}_{0i}^{0}= 0$ or $\omega_{0}= 0$, the heading error $\bar{\theta}_{0i}\in\{0,\pi\}$, which corresponds to a parallel formation or a translational straight line formation. \end{proposition} \begin{proof} Once a mobile formation shape is achieved as shown in Fig.~$\ref{formationfig}$, the matrix $\bar{g}_{0i}$ represents the desired configuration of vehicle $i$ expressed in the body frame of vehicle $0$. Based on Definition $\ref{adjoint}$, the trajectory $\tilde{g}_{i}=g_{0}\bar{g}_{0i}$ represents vehicle $i$'s adjoint orbit. This proves (i). Denote $\xi_{0}=[\omega_{0}, v_{0},0]^{T}$ and $\xi_{i}=[\omega_{i},v_{i},0]^{T}$, then the kinematics of the adjoint orbit satisfy $(\ref{g0i2})$. 
The adjoint velocity $(\mathrm{Ad}_{\bar{g}_{0i}^{-1}}\hat{\xi}_{0}+\hat{\bar{\xi}}_{0i})$ is rewritten as \begin{equation} \label{adspeed} (\mathrm{Ad}_{\bar{g}_{0i}^{-1}}\hat{\xi}_{0}+\hat{\bar{\xi}}_{0i})^{\vee}=\left[ \begin{matrix} \omega_{0}+\bar{\omega}_{0i} \\ (v_{0}-\omega_{0}\bar{y}_{0i}^{0})\cos\bar{\theta}_{0i}+ \omega_{0}\bar{x}_{0i}^{0}\sin\bar{\theta}_{0i} \\ -(v_{0}-\omega_{0}\bar{y}_{0i}^{0})\sin\bar{\theta}_{0i}+ \omega_{0}\bar{x}_{0i}^{0}\cos\bar{\theta}_{0i} \end{matrix} \right] \end{equation} To ensure that the adjoint orbit satisfies the non-holonomic constraint, the third component of the velocity is required to be zero, i.e., \begin{equation} \label{conditionlai} -(v_{0}-\omega_{0}\bar{y}_{0i}^{0})\sin\bar{\theta}_{0i}+ \omega_{0}\bar{x}_{0i}^{0}\cos\bar{\theta}_{0i}=0 \end{equation} Therefore, the condition $(\ref{theta0i})$ is obtained, which proves (ii). The condition $(\ref{theta0i})$ shows that the two vehicles cannot have the same headings (i.e., $\bar{\theta}_{0i}\neq 0$) when $\bar{x}_{0i}^{0}\neq 0$ and $\omega_{0}\neq 0$. To keep the relative position fixed, the heading cannot be arbitrarily specified as in the holonomic case {\cite{dong2013}}; instead, the relative heading $\bar{\theta}_{0i}$ is determined by the relative position and the coordinated speeds of the non-holonomic vehicles. This proves (iii). In the cases of $\omega_{0}=0$ or $\bar{x}_{0i}^{0} = 0$, one has $\bar{\theta}_{0i}\in\{0,\pi\}$ from $(\ref{theta0i})$. This implies that the two vehicles have synchronized or anti-synchronized headings, which corresponds to a parallel formation or a translational straight line formation. This proves (iv). \end{proof} \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{figure}[!htb] \centering \includegraphics[width=2.2in]{relative.eps} \caption{Illustration of relative configuration and adjoint orbit for two non-holonomic vehicles in a mobile formation.
} \label{formationfig} \end{figure} The relative configuration $\bar{g}_{0i}$ in $(\ref{g0i})$ and $(\ref{theta0i})$ will be used to determine the adjoint orbit of vehicle $i$. Regarding the condition $(\ref{theta0i})$, the following property holds. \begin{proposition} \label{popp} Suppose vehicle $i$ moves along its adjoint orbit $\tilde{g}_{i}$ determined by $g_{0}\bar{g}_{0i}$. Then the linear speed $v_{i}$ is positive if $v_{0}>0$. \end{proposition} \begin{proof} Since vehicle $i$ moves along its adjoint orbit, from the expression of the adjoint velocity $(\ref{adspeed})$, one obtains \begin{equation} \label{advi} v_{i}=(v_{0}-\omega_{0}\bar{y}_{0i})\cos\bar{\theta}_{0i}+ \omega_{0}\bar{x}_{0i}\sin\bar{\theta}_{0i} \end{equation} From the condition \eqref{conditionlai}, one has $\frac{\sin\bar{\theta}_{0i}}{\cos\bar{\theta}_{0i}}=\frac{\omega_{0}\bar{x}_{0i}}{v_{0}-\omega_{0}\bar{y}_{0i}}$. To proceed, from $(\ref{advi})$, one obtains \begin{align*} \frac{v_{i}}{\cos\bar{\theta}_{0i}} &=(v_{0}-\omega_{0}\bar{y}_{0i})+ \omega_{0}\bar{x}_{0i}\frac{\sin\bar{\theta}_{0i}}{\cos\bar{\theta}_{0i}} \\ &=(v_{0}-\omega_{0}\bar{y}_{0i})+ \omega_{0}\bar{x}_{0i}\frac{\omega_{0}\bar{x}_{0i}}{(v_{0}-\omega_{0}\bar{y}_{0i})} \\ &= \frac{(v_{0}-\omega_{0}\bar{y}_{0i})^2+(\omega_{0}\bar{x}_{0i})^2}{(v_{0}-\omega_{0}\bar{y}_{0i})} \end{align*} Thus, the linear speed $(\ref{advi})$ can be rewritten as \begin{align} \label{advizf} v_{i} &=\frac{(v_{0}-\omega_{0}\bar{y}_{0i})^2+(\omega_{0}\bar{x}_{0i})^2}{(v_{0}-\omega_{0}\bar{y}_{0i})}\cos\bar{\theta}_{0i} \end{align} Based on the definition of the function $\mathrm{atan2}$, we consider four cases to discuss the sign of $v_{i}$. (i) In the first quadrant: $\omega_{0}\bar{x}_{0i}>0$ and $v_{0}-\omega_{0}\bar{y}_{0i}>0$; (ii) In the second or third quadrant: $\omega_{0}\bar{x}_{0i}\neq0$ and $v_{0}-\omega_{0}\bar{y}_{0i}<0$; (iii) In the fourth quadrant: $\omega_{0}\bar{x}_{0i}<0$ and $v_{0}-\omega_{0}\bar{y}_{0i}>0$.
(iv) On the coordinate axes, excluding the origin: $\omega_{0}\bar{x}_{0i}=0$ and $v_{0}-\omega_{0}\bar{y}_{0i}\neq0$ (or $\omega_{0}\bar{x}_{0i}\neq0$ and $v_{0}-\omega_{0}\bar{y}_{0i}=0$). In case (i), one obtains $\cos\bar{\theta}_{0i}>0$, which implies that $v_{i}>0$ from equation $(\ref{advizf})$. The same analysis applies to cases (ii) and (iii), as well as to the remaining third-quadrant situation $\omega_{0}\bar{x}_{0i}<0$ and $v_{0}-\omega_{0}\bar{y}_{0i}<0$, since the sign of $\cos\bar{\theta}_{0i}$ always matches that of $v_{0}-\omega_{0}\bar{y}_{0i}$. In case (iv), from $(\ref{advi})$, one has $v_{i}=v_{0}-\omega_{0}\bar{y}_{0i}>0$ if $\omega_{0}\bar{x}_{0i}=0$ and $v_{0}-\omega_{0}\bar{y}_{0i}>0$ (since $\bar{\theta}_{0i}=0$), and the other situations (i.e. $\bar{\theta}_{0i}\in\{ \pi/2, \pi, 3\pi/2 \}$) follow from the same analysis. This completes the proof. \end{proof} \begin{rmk} Owing to the singularity of $\mathrm{atan2}(0,0)$, the cases $\bar{x}_{0i}=0,\bar{y}_{0i}=v_{0}/\omega_{0}$ (which can be avoided by the defined formation task) and $\omega_{0}=0,v_{0}=0$ (which represents a static leader) are not considered in this paper. The relative configuration $\bar{g}_{0i}$ in $(\ref{g0i})$ and $(\ref{theta0i})$ not only determines the adjoint orbit of vehicle $i$, but also guarantees forward motion (i.e. positive linear speed). Since the linear speed of the adjoint velocity is positive, the mobile formation can be achieved by coordinating multiple fixed-wing UAVs so that they move forward along their adjoint orbits with the corresponding adjoint velocities. \end{rmk} \begin{rmk} If $v_{0}<0$, then the linear speed $v_{i}$ of the adjoint orbit determined by $\bar{g}_{0i}$ in $(\ref{g0i})$ and $(\ref{theta0i})$ is positive, based on the same analysis as in Proposition $\ref{popp}$. In practice, some applications may also demand that $v_{i}<0$ for the case $v_{0}<0$. 
To tackle this problem, based on the same analysis as in Proposition $\ref{popp}$, the relative attitude $\bar{\theta}_{0i}$ can be redefined as $\bar{\theta}_{0i}=\arctan(\frac{\omega_{0}\bar{x}_{0i}^{0}}{v_{0}-\omega_{0}\bar{y}_{0i}^{0}})$ for the case of $v_{0}-\omega_{0}\bar{y}_{0i}<0$, and as $\bar{\theta}_{0i}=\arctan(\frac{\omega_{0}\bar{x}_{0i}^{0}}{v_{0}-\omega_{0}\bar{y}_{0i}^{0}})+\pi$ for the case of $v_{0}-\omega_{0}\bar{y}_{0i}>0$. In addition, there exists a singularity at $v_{0}-\omega_{0}\bar{y}_{0i}=0$ if we use $\bar{\theta}_{0i}=\arctan(\frac{\omega_{0}\bar{x}_{0i}^{0}}{v_{0}-\omega_{0}\bar{y}_{0i}^{0}})$, and this can be avoided by the defined formation task. \end{rmk} Propositions $\ref{pop1}$ and $\ref{popp}$ present the details and calculations of non-holonomic vehicle $i$'s adjoint orbit. Based on these adjoint orbit results, mobile formations with weak/strict rigid-body motion will be discussed next. For a mobile formation system involving multiple vehicles, the properties of a mobile formation with strict rigid-body motion are presented first. \begin{proposition} \label{pop2} For a networked mobile formation control system with multiple non-holonomic vehicles, suppose each vehicle $i$ moves along its adjoint orbit $\tilde{g}_{i}$ determined by $g_{0}\bar{g}_{0i}$. Then the following properties hold. (i). Except for the parallel formation and translational straight line formation, a mobile formation with strict rigid-body motion can be achieved if and only if the speed ratios $v_{i}/\omega_{i}$ ($i=0,1,\cdots,n$) of the individual vehicles are constant. (ii). For the case of parallel formation (i.e., $\bar{x}_{0i}^{0}= 0$), vehicle 0, which guides the mobile formation, can move with any bounded $v_{0},\omega_{0}$. (iii). For the case of translational straight line formation (i.e., $\omega_{0}= 0$), vehicle 0, which guides the mobile formation, can move with any bounded $v_{0}$. (iv). 
The adjoint orbit $g_{0}\bar{g}_{0i}$ can also be expressed as $g_{j}\bar{g}_{ji}$, which implies that vehicle $i$'s adjoint orbit is unique in the inertial frame $\mathcal{F}_{I}$. (v). To maintain strict rigid-body motion in a mobile formation, the linear speeds $v_{i}$ of the individual vehicles are not identical, except for the translational straight line formation. \end{proposition} \begin{proof} When each vehicle $i$ moves along its adjoint orbit $\tilde{g}_{i}$, the configuration of vehicle $i$ is given by $g_{i}=\tilde{g}_{i}=g_{0}\bar{g}_{0i}$. Then, the relative configuration of vehicle $j$ with respect to vehicle $i$ is given by \begin{equation} \label{ggij} \bar{g}_{ij}=g_{i}^{-1}g_{j}=(g_{0}\bar{g}_{0i})^{-1}g_{0}\bar{g}_{0j}=\bar{g}_{0i}^{-1}\bar{g}_{0j} \end{equation} where $\bar{R}_{ij}=\bar{R}_{0i}^{T}\bar{R}_{0j}$, $\bar{p}_{ij}^{i}=\bar{R}_{0i}^{T}(\bar{p}_{0j}^{0}-\bar{p}_{0i}^{0})$. Thus, the relative position $\bar{p}_{ij}^{i}$ for any two vehicles is kept constant if and only if the rotation matrix $\bar{R}_{0i}$ is constant, which implies that the angle $\bar{\theta}_{0i}$ is constant. From equation $(\ref{theta0i})$, one can show that the angle $\bar{\theta}_{0i}$ is constant if and only if the speed ratio $v_{0}/\omega_{0}$ is constant, except for the cases of parallel formation and translational straight line formation. From the adjoint velocity $(\mathrm{Ad}_{\bar{g}_{0i}^{-1}}\hat{\xi}_{0}+\hat{\bar{\xi}}_{0i})$ in $(\ref{adspeed})$, one then has that the speed ratios $v_{i}/\omega_{i}$ are constant. Furthermore, the distance between vehicle $i$ and vehicle $j$ is $\bar{d}_{ij}=\|\bar{p}_{ij}^{i}\|=\|\bar{p}_{0j}^{0}-\bar{p}_{0i}^{0}\|$, which implies that $\bar{d}_{ij}$ is constant. Based on Definition $\ref{def:fixed_formation}$, this proves (i). For the case of $\bar{x}_{0i}^{0}= 0$, the heading error satisfies $\bar{\theta}_{0i}\in\{0,\pi\}$ no matter whether the speeds $v_{0}, \omega_{0}$ are time-varying or constant. 
From $(\ref{adspeed})$, one concludes that $\omega_{i}=\omega_{0}$ and $v_{i}=v_{0}-\omega_{0}\bar{y}_{0i}^{0}$ for each vehicle $i$, $i=1,\cdots,n$, and the speed ratios $v_{i}/\omega_{i}$ for each vehicle ($i=0,1,\cdots,n$) do not have to be constant. This proves (ii). For the case of $\omega_{0}= 0$, the heading error satisfies $\bar{\theta}_{0i}\in\{0,\pi\}$ no matter whether the speed $v_{0}$ is time-varying or constant. From $(\ref{adspeed})$, one concludes that $\omega_{i}=0$ and $v_{i}=v_{0}$ for each vehicle $i$, $i=1,\cdots,n$. This proves (iii). If the mobile rigid formation is achieved, one has $g_{j}=g_{0}\bar{g}_{0j}$. Therefore, there holds $g_{j}\bar{g}_{ji}=g_{0}\bar{g}_{0j}\bar{g}_{ij}^{-1}=g_{0}\bar{g}_{0i}$. This shows (iv). To maintain a mobile formation with strict rigid-body motion, the linear speed of vehicle $i$ is $v_{i}= (v_{0}-\omega_{0}\bar{y}_{0i}^{0})\cos\bar{\theta}_{0i}+ \omega_{0}\bar{x}_{0i}^{0}\sin\bar{\theta}_{0i}$, according to vehicle $i$'s adjoint velocity. For the translational straight line formation, the speeds of all vehicles are $v_{0}$ and $\omega_{0}=0$, which shows that the speeds of all individual vehicles are the same. However, for all other mobile formations with strict rigid-body motion, the linear speeds are not the same for each individual vehicle. This proves (v). \end{proof} \begin{rmk} In Proposition \ref{pop2}, except for the cases of parallel formation and translational straight line formation, a mobile formation with strict rigid-body motion is achieved if and only if all vehicles undergo circular motion. \end{rmk} \begin{rmk} The seminal paper {\cite{Morbiditac}} first proposed coordination controllers to regulate multiple non-holonomic vehicles in a formation moving as a rigid body. In this paper, we have extended the results of {\cite{Morbiditac}} in the following aspects. 
Firstly, compared with the sufficient condition proposed in {\cite{Morbiditac}}, the condition for strict rigid-body motion proposed in this paper via the defined adjoint orbit is necessary and sufficient. Specifically, the angular speed in {\cite{Morbiditac}} is assumed to be constant for achieving a target formation, whereas we establish the weaker condition that the ratio of linear speed to angular speed is constant, except for the cases of parallel formation and translational straight line formation. Secondly, the relative positions and attitudes between all vehicle pairs are analyzed thoroughly, instead of only the relative positions and attitudes in one common leader's coordinate frame as in {\cite{Morbiditac}}. In addition, the condition for a mobile formation with weak rigid-body motion is also considered, as discussed in the sequel. \end{rmk} Regarding mobile formation coordination under weak rigid-body motion in Definition~\ref{def:weak_fixed_formation}, we give the following proposition. Without loss of generality, we denote vehicle 0 as the common vehicle. \begin{proposition} \label{popweak} For a networked formation system with multiple non-holonomic vehicles, if each vehicle $i$ moves along its adjoint orbit $\tilde{g}_{i}$ determined by $g_{0}\bar{g}_{0i}$, then the following properties hold. (i). The mobile formation with weak rigid-body motion can be achieved for any bounded speeds $v_{0},\omega_{0}$, where vehicle $0$ guides the mobile formation. (ii). The relative positions and attitudes between any two vehicles (except for the common vehicle) do not have to be constant. (iii). The inter-vehicle distances are constant (i.e., the formation shape is rigid). (iv). To maintain a mobile formation with weak rigid-body motion, the speeds $v_{i},\omega_{i}$ of the individual vehicles are not identical, except for the translational straight line formation. 
\end{proposition} \begin{proof} When each vehicle $i$ moves along its adjoint orbit $\tilde{g}_{i}$, the relative configuration of vehicle $i$ with respect to vehicle $0$ is given in $(\ref{g0i})$. Then, the relative positions of each vehicle $i$ with respect to vehicle $0$ are constant vectors (i.e. $\bar{p}_{0i}^{0}=[\bar{x}_{0i}^{0},\bar{y}_{0i}^{0}]^{T}$). Based on Definition~\ref{def:weak_fixed_formation}, this proves statement (i). For a time-varying $(\omega_{0}\bar{x}_{0i}^{0})/(v_{0}-\omega_{0}\bar{y}_{0i}^{0})$, the relative attitude of each vehicle $i$ with respect to vehicle $0$ in $(\ref{theta0i})$ is time-varying. Then, based on $(\ref{ggij})$, the relative position and attitude of vehicle $j$ with respect to vehicle $i$ are given as $\bar{R}_{ij}=\bar{R}_{0i}^{T}\bar{R}_{0j}$, $\bar{p}_{ij}^{i}=\bar{R}_{0i}^{T}(\bar{p}_{0j}^{0}-\bar{p}_{0i}^{0})$. Thus, for time-varying $\bar{\theta}_{0i}$, $\bar{R}_{ij}$ and $\bar{p}_{ij}^{i}$ are time-varying. Therefore, statement (ii) holds. The distance between any two vehicles is given as $\bar{d}_{ij}=\|\bar{p}_{ij}^{i}\|=\|\bar{R}_{0i}^{T}(\bar{p}_{0j}^{0}-\bar{p}_{0i}^{0})\|=\|\bar{p}_{0j}^{0}-\bar{p}_{0i}^{0}\|$, which implies that (iii) holds. Based on the analysis of Proposition $\ref{pop2}$, statement (iv) follows directly. \end{proof} \begin{rmk} Unlike a mobile formation with strict rigid-body motion, which requires circular motion, a parallel formation, or a translational straight line formation, a mobile formation with weak rigid-body motion allows vehicle $0$, which guides the mobile formation, to move with any bounded $v_{0},\omega_{0}$, while all other vehicles have more degrees of motion freedom and can perform other formation motions. 
\end{rmk} \begin{rmk} From the analysis of Propositions~\ref{pop1}, \ref{popp}, \ref{pop2} and \ref{popweak}, one can conclude that the headings for each individual vehicle are not identical except for the cases of parallel formation and translational straight line formation, and the linear speeds for each individual vehicle are not identical except for the case of translational straight line formation. \end{rmk} \begin{rmk} In \cite{das2002,liangxinwu,Consolini2008,liangxinwu2} and references therein, based on the leader-follower approach, the formation tasks for non-holonomic vehicles are often determined by the body-fixed frame of the leaders or followers. However, since the desired relative position of each vehicle is determined by each preceding vehicle, the rigid formation shape cannot be guaranteed by the proposed controls in those papers, and therefore the mobile formation with weak/strict rigid-body motion cannot be obtained. For example, suppose three vehicles are connected by a directed tree graph (i.e. $0\rightarrow 1 \rightarrow2$), and the formation task is defined as $\lim_{t\rightarrow \infty }g_{0}^{-1}g_{1}=\bar{g}_{01}$, and $\lim_{t\rightarrow \infty }g_{1}^{-1}g_{2}=\bar{g}_{12}$. Then, the relative configuration of vehicle 2 with respect to vehicle 0 is given as $\lim_{t\rightarrow \infty }g_{0}^{-1}g_{2}=\bar{g}_{01}\bar{g}_{12}$. Thereby, the distance between vehicle 0 and 2 is time-varying in the limit if $\bar{\theta}_{01}$ is time-varying. Thus, the rigid formation shape cannot be achieved. In this paper, Propositions \ref{pop1} and \ref{popp} demonstrate the adjoint orbit and its corresponding properties, from which the necessary and sufficient condition of mobile formation with strict rigid-body motion is derived in Proposition \ref{pop2}. Proposition \ref{popweak} provides a way to determine a mobile formation with weak rigid-body motion. As far as we know, this is the first time that such conditions and properties are proposed. 
\end{rmk} \begin{rmk} In this subsection, we not only analyze the motion properties of some mobile formation maneuvers, but also provide velocity inputs in $(\ref{adspeed})$ to maintain the mobile formation with weak/strict rigid-body motion. Moreover, to maintain a mobile formation with weak/strict rigid-body motion (except for the translational straight line formation), the velocities of all vehicles are non-identical, which differs from the mobile formation with only translational motion in Fig.~$\ref{formation}$ (a) that is studied in many previous papers \cite{yuxiao,liutengfei1,liutengfei2,miaozhiqiang}. \end{rmk} \begin{rmk} \label{graphtree} Different from formation shape control based on graph rigidity theory \cite{Anderson2008}, which uses inter-agent distances to specify a desired formation shape, the relative configuration $g_{j}^{-1}g_{i}$ is used here to describe a desired formation shape of non-holonomic vehicle $i$ with respect to non-holonomic vehicle $j$ in a mobile formation. Once a formation task $\bar{g}_{0i}$ is determined, the shape in a mobile formation is rigid. Based on the group operation on $SE(2)$ for describing a formation task, a mobile formation with weak/strict rigid-body motion can be achieved under different underlying graph topologies, e.g., directed graphs, undirected graphs, leader-follower structure, or leaderless structure. In the following subsection, we present a formation control scheme based on a directed tree graph. \end{rmk} \subsection{An example of distributed mobile formation control} \label{treegraph} In Propositions $\ref{pop2}$ and $\ref{popweak}$, the weak/strict rigid-body motion for a mobile formation is determined by the task $\bar{g}_{0i}$. However, each vehicle then requires real-time information of vehicle $0$'s speeds. 
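As an illustrative numerical check (not part of the paper's development), the following Python sketch evaluates the adjoint velocity of Eq.~$(\ref{adspeed})$ with the relative heading chosen by the $\mathrm{atan2}$ condition $(\ref{theta0i})$, and verifies that the constraint-violating lateral component $(\ref{conditionlai})$ vanishes and the linear speed is positive, as stated in Proposition~$\ref{popp}$; the numerical values are the ones assumed later in the simulation section.

```python
import math

def adjoint_velocity(v0, w0, xbar, ybar):
    """Components of the adjoint velocity in Eq. (adspeed), with the relative
    heading chosen by the atan2 condition (theta0i).  Returns (v_i, w_i, lateral),
    where `lateral` is the third (constraint-violating) component."""
    theta = math.atan2(w0 * xbar, v0 - w0 * ybar)
    w_i = w0  # the angular speed is shared along the adjoint orbit
    v_i = (v0 - w0 * ybar) * math.cos(theta) + w0 * xbar * math.sin(theta)
    lateral = -(v0 - w0 * ybar) * math.sin(theta) + w0 * xbar * math.cos(theta)
    return v_i, w_i, lateral

# values taken from the numerical-simulation section (used here for illustration)
v_i, w_i, lat = adjoint_velocity(v0=0.06, w0=0.05, xbar=-0.1, ybar=-0.1)
print(v_i > 0, abs(lat) < 1e-12)
```

With the heading condition satisfied, the lateral component cancels algebraically and $v_i$ reduces to $\sqrt{(v_0-\omega_0\bar{y}_{0i}^{0})^2+(\omega_0\bar{x}_{0i}^{0})^2}$ in this (first-quadrant) case.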
For distributed control of a networked multi-vehicle formation system, it is desirable that vehicle 0's information is available only to a few vehicles, rather than to all follower vehicles. In this subsection, based on the leader-follower structure and the results in Section~\ref{sec:twoStage}, and assuming a directed tree graph, a fully distributed control scheme for achieving a mobile formation with strict rigid-body motion is proposed. In a directed tree graph, each follower has only one parent node. First, we present the following proposition. \begin{proposition} \label{pop3} Consider $n+1$ non-holonomic vehicles labeled with $i=0,1,\cdots,n$, interacting over a directed tree graph. Let vehicle $j$ be the only parent node of vehicle $i$, and let the desired formation shape be given by the fixed relative position vectors $\bar{p}_{ji}^{j}=[\bar{x}_{ji}^{j},\bar{y}_{ji}^{j}]^{T}$. Then, the following properties hold. (i). Except for the cases of parallel formation or translational straight line formation, by designing appropriate controllers, a mobile formation with strict rigid-body motion in the fully distributed networked formation control system can be achieved if and only if the speed ratio $v_{0}/\omega_{0}$ is constant. (ii). The desired relative position of vehicle $i$ with respect to any vehicle is fixed. \end{proposition} \begin{proof} Since vehicle $i$ only obtains the information of its parent node (i.e. vehicle $j$), the networked formation control system is fully distributed. For any two vehicles $i$ and $k$, the desired relative configuration of vehicle $i$ with respect to vehicle $k$ can be calculated as $\bar{g}_{ik}=\bar{g}_{ji}\cdots \bar{g}_{hk}$, where $j$ is the parent node of $i$ and $h$ is the parent node of $k$, while nodes $k$ and $i$ do not interact directly. 
Thus, the relative position $\bar{p}_{ik}^{i}$ for any two vehicles $i,k$ is constant if and only if the matrices $\bar{g}_{ji},\cdots, \bar{g}_{hk}$ are constant, i.e., $\bar{\theta}_{ji}, \cdots, \bar{\theta}_{hk}$ are constant. From $\bar{\theta}_{ji}=\mathrm{atan2}( \omega_{j}\bar{x}_{ji}^{j}, v_{j}-\omega_{j}\bar{y}_{ji}^{j})$, it follows that the speed ratio $v_{0}/\omega_{0}$ should be constant. This proves (i). Since $\bar{p}_{ik}^{i}$ is a constant vector, statement (ii) is proved. \end{proof} By assuming that the speed ratio $v_{0}/\omega_{0}$ is constant (i.e., the leader moves along a circular motion), we design formation controllers for achieving a mobile formation with strict rigid-body motion as follows. Let $\tilde{g}_{i}=g_{j}\bar{g}_{ji}$ be vehicle $i$'s adjoint orbit. Then the kinematic equation of the adjoint orbit for vehicle $i$ is given by \begin{equation} \label{gij} \dot{\tilde{g}}_{i}=\tilde{g}_{i}(\mathrm{Ad}_{\bar{g}_{ji}^{-1}}\hat{\xi}_{j}) \end{equation} where $\mathrm{Ad}_{\bar{g}_{ji}^{-1}}\hat{\xi}_{j}$ is vehicle $i$'s adjoint velocity, which satisfies the non-holonomic constraint by the result of Proposition $\ref{pop1}$. Since the adjoint orbit is the desired trajectory when the mobile formation with strict rigid-body motion is achieved, the adjoint orbits can be viewed as virtual leaders to be tracked by the individual follower vehicles in the networked formation control system. To this end, equation $(\ref{gij})$ can be rewritten as follows \begin{subequations} \label{modelpij} \begin{align} \label{modelppij} \dot{\tilde{p}}_{i} =& \tilde{v}_{i}\tilde{R}_{i}e_{1} \\ \label{modelRij} \dot{\tilde{R}}_{i}=& \tilde{R}_{i}\hat{\tilde{\omega}}_{i} \end{align} \end{subequations} where $\tilde{v}_{i}=(v_{j}-\omega_{j}\bar{y}_{ji}^{j})\cos\bar{\theta}_{ji}+ \omega_{j}\bar{x}_{ji}^{j}\sin\bar{\theta}_{ji}$ and $\tilde{\omega}_{i}=\omega_{j}$. 
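The virtual-leader kinematics $(\ref{modelpij})$ can be checked numerically: integrating the leader and the adjoint orbit as two unicycles with the speeds $(\tilde{v}_{i},\tilde{\omega}_{i})$ above, the inter-vehicle distance should remain constant (a rigid shape). The sketch below is an illustration using simple Euler integration, with leader speeds and offset borrowed from the simulation section; it is not part of the paper's derivation.

```python
import math

def step(x, y, th, v, w, dt):
    """One Euler step of the unicycle kinematics used in (modelpij)."""
    return x + dt * v * math.cos(th), y + dt * v * math.sin(th), th + dt * w

# assumed leader speeds and desired offset, taken from the simulation section
v0, w0 = 0.06, 0.05                       # constant ratio v0/w0 => circular motion
xb, yb = -0.1, -0.1                       # desired relative position pbar_{01}^{0}
thb = math.atan2(w0 * xb, v0 - w0 * yb)   # relative heading from (theta0i)
vt = (v0 - w0 * yb) * math.cos(thb) + w0 * xb * math.sin(thb)  # tilde v_i
wt = w0                                                        # tilde omega_i

# start the virtual leader exactly on the adjoint orbit g_0 * gbar_{01};
# g_0(0) is the identity, so the orbit point is simply the offset itself
x0, y0, t0 = 0.0, 0.0, 0.0
xi, yi, ti = xb, yb, thb
dt, d0 = 1e-4, math.hypot(xb, yb)
for _ in range(200000):                   # 20 s of motion
    x0, y0, t0 = step(x0, y0, t0, v0, w0, dt)
    xi, yi, ti = step(xi, yi, ti, vt, wt, dt)
d = math.hypot(xi - x0, yi - y0)
print(abs(d - d0))
```

Both unicycles trace concentric circles at the common angular rate $\omega_0$, so up to integration error the distance stays at $\|\bar{p}_{01}^{0}\|$.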
Based on the trajectory tracking result of Section~\ref{sec:prob}, the formation control laws are designed as follows \begin{equation} \label{vi} v_{i} =\|u_{i}\| \end{equation} \begin{equation} \label{wi} \omega_{i} =-k_{2}\sigma((R_{ji}-R_{ji}^{T})^{\vee})+{\varpi}_{j} \end{equation} where the virtual control input vector is given by \begin{equation} u_{i}=-k_{1}\sigma(\tilde{p}_{ij})+v_{j}{R}_{j}e_{1} \end{equation} in which $\tilde{p}_{ij}=\tilde{p}_{i}-p_{i}$, and $R_{ji}=\mathcal{R}_{j}^{T}R_{i}$ with the intermediate rotation matrix $\mathcal{R}_{j}$ constructed as \begin{equation} \mathcal{R}_{j}=[r_{j}^{1},r_{j}^{2}]\in SO(2) \end{equation} with the vectors defined by \begin{equation*} r_{j}^{1}=\frac{u_{i}}{\|u_{i}\|} \in \mathbb{S}^{1}, \quad r_{j}^{2}=\left[ \begin{matrix} -r_{j}^{1}(2,1)\\ r_{j}^{1}(1,1) \end{matrix} \right] \in \mathbb{S}^{1} \end{equation*} The main result of this subsection is given in Theorem~\ref{thm2}. \begin{thm} \label{thm2} Consider $n+1$ non-holonomic vehicles interacting over a directed tree graph. Assume $\|u_{i}\|\neq0$, $\mathrm{tr}(R_{ji}(0))\neq -2$, and Assumption $\ref{desiredw}$ is satisfied. If the speed ratio $v_{0}/\omega_{0}$ is constant, then the mobile formation with strict rigid-body motion in Proposition $\ref{pop3}$ can be achieved under the saturated inputs $(\ref{vi})$ and $(\ref{wi})$. \end{thm} \begin{proof} According to the trajectory tracking control approach presented in Theorem $\ref{thm1}$, one can conclude that the saturated inputs $(\ref{vi})$ and $(\ref{wi})$ drive vehicle $i$ to converge to the trajectory $(\ref{modelpij})$ (i.e., its adjoint orbit). Then, we obtain that $g_{i}^{-1}\tilde{g}_{i}=g_{i}^{-1}g_{j}\bar{g}_{ji} \rightarrow I_{3}$ as $t\rightarrow \infty$, i.e. $\lim_{t\rightarrow \infty}g_{j}^{-1}g_{i}=\bar{g}_{ji}$. Based on the analysis in Proposition $\ref{pop3}$, the networked formation control system achieves a mobile formation with the desired strict rigid-body motion. 
\end{proof} \begin{rmk} If the formation task is the parallel formation or the translational straight line formation, the saturated inputs $(\ref{vi})$ and $(\ref{wi})$ are also applicable. In addition, all vehicles can move along an arbitrary desired trajectory in the case of a parallel formation. \end{rmk} \section{Numerical simulation and experiment}\label{sec:sml} \subsection{Numerical simulation}\label{sec:sml1} In the simulation, we consider a group of autonomous ground vehicles consisting of two followers and one leader, and the underlying interaction graph for the three vehicles is a directed tree graph (i.e. $0\rightarrow 1 \rightarrow2$). The leader (vehicle 0)'s inputs are given by the constants $v_{0}=0.06\mathrm{m}/\mathrm{s}$ and $\omega_{0}=0.05\mathrm{rad}/\mathrm{s}$. The initial configurations of the three vehicles are $g_{0}=\{0,0,0\}$, $g_{1}=\{-\pi/4, 0,-0.2\}$ and $g_{2}=\{\pi/4, 0, 0.2\}$. The control gains are chosen as $k_{1}=k_{2}=0.3$. The desired formation shape is given by the relative position vectors $\bar{p}_{01}^{0}=[-0.1,-0.1]^{T}$ and $\bar{p}_{12}^{1}=[0,0.2]^{T}$. The evolutions of all vehicles' states are depicted in Fig.~\ref{constant}, which shows that the linear speeds of the individual non-holonomic vehicles differ when they are tasked to maintain a mobile formation with strict rigid-body motion. Fig.~\ref{constant} also demonstrates the evolution of the vehicles' headings and adjoint velocities as discussed in Propositions~\ref{pop1}, \ref{popp}, and \ref{pop2}. The positivity of the linear speeds verifies the guaranteed forward motion of the forward motion control proposed in Section~\ref{sec:twoStage}. The formation trajectories of the three non-holonomic vehicles in the 2D plane are plotted in Fig.~\ref{2d}. All figures show that the desired mobile formation is achieved and maintained by the proposed controller. 
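This simulation setup can be reproduced in miniature. The following Python sketch tracks the adjoint orbit of the first follower using a standard off-axis-point (look-ahead) feedback-linearizing controller as an illustrative stand-in for the paper's saturated laws $(\ref{vi})$ and $(\ref{wi})$; the leader speeds, initial pose $g_1$, gain $k_1$, and desired offset $\bar{p}_{01}^{0}$ match the values above, while the look-ahead distance `eps` is an assumed parameter of the stand-in controller. The handle point $p+\varepsilon R(\theta)e_{1}$ converges to the orbit (the actual vehicle sits $\varepsilon$ behind it, an artifact of this substitute law, not of the paper's scheme).

```python
import math

# Stand-in tracking demo: off-axis point feedback linearization, NOT the
# paper's saturated controller (vi)-(wi).
v0, w0, k, eps, dt = 0.06, 0.05, 0.3, 0.05, 1e-3
xb, yb = -0.1, -0.1                                  # desired offset pbar_{01}^{0}
thb = math.atan2(w0 * xb, v0 - w0 * yb)              # relative heading (theta0i)
vt = (v0 - w0 * yb) * math.cos(thb) + w0 * xb * math.sin(thb)  # orbit speed

x0, y0, t0 = 0.0, 0.0, 0.0                           # leader g_0(0)
x, y, th = 0.0, -0.2, -math.pi / 4                   # follower g_1(0)
for _ in range(100000):                              # 100 s of motion
    # reference point on the adjoint orbit g_0 * gbar_{01} and its velocity
    rx = x0 + math.cos(t0) * xb - math.sin(t0) * yb
    ry = y0 + math.sin(t0) * xb + math.cos(t0) * yb
    rvx, rvy = vt * math.cos(t0 + thb), vt * math.sin(t0 + thb)
    # feedback-linearizing control of the handle point p + eps*R(th)*e1
    cf, sf = math.cos(th), math.sin(th)
    ux = rvx + k * (rx - (x + eps * cf))
    uy = rvy + k * (ry - (y + eps * sf))
    v, w = cf * ux + sf * uy, (-sf * ux + cf * uy) / eps
    # Euler steps for leader and follower unicycles
    x0, y0 = x0 + dt * v0 * math.cos(t0), y0 + dt * v0 * math.sin(t0)
    t0 += dt * w0
    x, y = x + dt * v * math.cos(th), y + dt * v * math.sin(th)
    th += dt * w
# final tracking error of the handle point with respect to the orbit
rx = x0 + math.cos(t0) * xb - math.sin(t0) * yb
ry = y0 + math.sin(t0) * xb + math.cos(t0) * yb
err = math.hypot(rx - (x + eps * math.cos(th)), ry - (y + eps * math.sin(th)))
print(err)
```

With the exponential error dynamics of the linearized handle point, the tracking error after 100\,s is negligible compared with the formation scale.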
We refer the readers to the attached video for more simulations and movies on controlling and maintaining mobile formations with weak/strict rigid-body motion. \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{figure}[!htb] \centering { \subfigure[Relative positions] {\label{ed}\includegraphics[width=0.48\linewidth]{constant_d2.eps}} \subfigure[Vehicles' headings] {\label{etheta}\includegraphics[width=0.48\linewidth]{constant_theta.eps}} \subfigure[Linear speeds] {\label{ev}\includegraphics[width=0.48\linewidth]{constant_v.eps}} \subfigure[Angular speeds] {\label{ew}\includegraphics[width=0.48\linewidth]{constant_w.eps}} } \caption{The evolutions of relative positions between any two vehicles, and each vehicle's headings, linear speeds and angular speeds in the numerical simulation} \label{constant} \hspace{5cm} \end{figure} \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{figure}[!htb] \centering \includegraphics[width=3.0in]{constant_2D.eps} \caption{The mobile formation with strict rigid-body motion in the numerical simulation} \label{2d} \end{figure} \subsection{Experiment}\label{sec:sml2} To further demonstrate the applicability of the proposed scheme, real experiments on a physical multi-robot system are performed. The experimental platform consists of three non-holonomic wheeled E-puck robots {\cite{epuck,sunzhongqi}}. In the experiment, we model the interaction among the three E-puck robots with a directed tree graph (i.e. $0\rightarrow 1 \rightarrow2$), the same as in the numerical simulation. The robots move on a smooth and flat floor as shown in Fig.~\ref{e2d}. The initial conditions, formation shape and gains are chosen to be the same as in the numerical simulation. The evolutions of relative positions, vehicles' headings, linear speeds and angular speeds are plotted in Fig.~\ref{estates} (plotted by Matlab with sampled data from the experiments). 
Fig.~\ref{e2d} shows the real-time trajectories of the three robots in the mobile formation group, which are captured by a video camera. The experiments further validate both the forward motion control proposed in Section~\ref{sec:twoStage} and the mobile formation coordination under strict rigid-body motion proposed in Section~\ref{sec:formation}. \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{figure}[!htb] \centering { \subfigure[Relative positions] {\label{ed}\includegraphics[width=0.48\linewidth]{ed2.eps}} \subfigure[Vehicles' headings] {\label{etheta}\includegraphics[width=0.48\linewidth]{etheta.eps}} \subfigure[Linear speeds] {\label{ev}\includegraphics[width=0.48\linewidth]{ev.eps}} \subfigure[Angular speeds] {\label{ew}\includegraphics[width=0.48\linewidth]{ew.eps}} } \caption{The evolutions of relative positions between any two vehicles, and each vehicle's headings, linear speeds and angular speeds in the real experiment} \label{estates} \hspace{5cm} \end{figure} \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \begin{figure}[!htb] \centering { \subfigure[t=0s] {\label{t0}\includegraphics[width=0.48\linewidth]{t_0.PNG}} \subfigure[t=5s] {\label{t5}\includegraphics[width=0.49\linewidth]{t_5.PNG}} \subfigure[t=13s] {\label{t13}\includegraphics[width=0.49\linewidth]{t_13.PNG}} \subfigure[t=26s] {\label{t26}\includegraphics[width=0.49\linewidth]{t_26.PNG}} } \caption{Trajectories of three robots in a mobile formation experiment, at $t=0$\,s, $t=5$\,s, $t=13$\,s and $t=26$\,s} \label{e2d} \hspace{5cm} \end{figure} \section{Conclusion}\label{sec:cls} In this paper, the forward motion control for leader tracking and mobile formations with weak/strict rigid-body motion for multiple non-holonomic vehicles are studied. An intermediate attitude, which encodes the relative position and the attitude error with respect to a leader vehicle, is proposed, and the translational and rotational controllers are designed in a two-stage framework. 
The formation behavior of mobile formations with different rigid-body motion constraints for multiple non-holonomic vehicles is explored, and we demonstrate that the headings and linear speeds for individual vehicles are not identical when a group of non-holonomic vehicles maintain a mobile formation under rigid-body motion constraints. The behavior of the mobile formation is analyzed and a formation control strategy is proposed to achieve a mobile formation for multiple non-holonomic vehicles with strict rigid-body motion. Both numerical simulations and experiments are provided to validate the performance of the proposed control approach. \bibliographystyle{elsarticle-num}
\section{Introduction} \label{intro} In recent years there has been an increased appreciation of the role that higher-curvature theories of gravity play in our understanding of several areas of physics, including supergravity and string theory, holography, cosmology, and black holes. These studies are motivated both by a desire to understand the ultraviolet behaviour of gravity and by a realization that terms non-linear in the curvature generically appear in perturbative calculations, particularly in string theory \cite{Gross:1986mw}. Furthermore, an analysis of this class of theories provides us with new insights into general relativity (or Einstein gravity) and may even provide new empirical tests of gravitational physics \cite{Amendola:2008vd,Navarro:2006mw,sotiriou2010f, AliHaimoud:2011fw,clifton2012modified,Yagi:2012gp,Delsate:2014hba,Hennigar:2018hza,Poshteh:2018wqy,mir:2017m}. Although such theories were originally proposed nearly a century ago \cite{weyl1923allgemeine, carmichael1925eddington}, interest in them revived in the theoretical physics community once significant effort began to be expended on constructing a quantized theory of gravity. Adding terms quadratic in the curvature to the Einstein-Hilbert action was found to yield a power-counting renormalizable theory~\cite{Stelle:1977ry}, and later a Gauss-Bonnet term was found to appear in the low energy effective action of string theory \cite{Zwiebach:1985uq}. More recently it has been shown from a variety of perspectives \cite{Myers:2010ru,Hofman:2009ug,Sinamuli:2017rhp} that a proper investigation of dual theories beyond large $N$ in the context of the AdS/CFT correspondence conjecture \cite{Maldacena:1998,mir:1307} entails inclusion of higher curvature terms. 
A key challenge presented by a generic higher-curvature theory of gravity is that the equations of motion are greater than second order in the derivatives, leading to a number of pathological properties such as the appearance of ghost degrees of freedom and other instabilities. However, there exist a few classes of theories in which such pathologies are ameliorated and in certain cases absent. The best-known example is the Lovelock class of gravitational theories~\cite{Lovelock:1971yv}. This class yields second order equations of motion in arbitrary dimensions, with the Einstein-Hilbert term being one of several terms that constitute Lovelock theory in a given dimension. In this sense Einstein gravity can be regarded as a special case of Lovelock gravity. Since this class of theories is ghost-free~\cite{Zwiebach:1985uq}, these theories are viable candidates for generalizations of Einstein gravity in higher dimensions $d \geq 2k+1$ for a Lovelock theory that is $k$th order in the curvature. For dimensions $d < 2k+1$ such terms play no role in the equations of motion. Hence only $k=1$ Einstein gravity has non-trivial field equations, and so one must look beyond Lovelock gravity to obtain interesting higher curvature theories with implications in $(3+1)$ dimensions. In the past several years considerable progress has been made along these lines. A broader class of \textit{quasi-topological gravity} theories~\cite{Myers:2010ru, Oliva:2010eb} has been constructed that retains many of the nice properties of Lovelock gravity under certain symmetry restrictions. These theories are non-trivial in any dimension $d \ge 5$ regardless of the order in the curvature. Cubic-curvature quasi-topological gravity, for example, exists in $d \ge 5$, whereas cubic Lovelock gravity exists in $d \ge 7$. Furthermore, their field equations, while generally of higher than second order, become second order under the imposition of spherical symmetry. 
Moreover, the linearized equations of motion of quasi-topological gravity coincide with those of Einstein gravity on maximally symmetric spacetime backgrounds up to an overall prefactor \cite{Myers:2010tj}, ensuring that negative energy excitations do not propagate to asymptotic regions of constant curvature. Even more recently, a more general class of higher-curvature gravity theories has been discovered that is of considerable interest both holographically and phenomenologically. This is because these theories are free of ghosts on constant curvature backgrounds, solutions of their field equations yield a metric that depends on a single metric function in the spherically symmetric case, and they are dynamically non-trivial even in four dimensions. First obtained in $(3+1)$ dimensions for cubic curvature~\cite{Bueno:2016xff} (a theory known as \textit{Einsteinian cubic gravity} or ECG), they were found to have generalizations to any dimension \cite{Hennigar:2017ego} and to quartic powers in the curvature \cite{Ahmed:2017jod}. Generalizations to any power in the curvature exist \cite{Bueno:2017qce} but have not been explicitly constructed. This class of theories -- referred to as Generalized Quasitopological Gravity (GQG) -- has several remarkable features. The constraint that spherically symmetric solutions depend on only a single metric function is found to also eliminate ghosts upon linearization of the theory on a constant curvature background \cite{Hennigar:2017ego}. Very recently it has been shown that they have a well-posed initial value problem for cosmological solutions and the potential to provide a late-time cosmology arbitrarily close to the $\Lambda$-Cold Dark Matter scenario whilst having a purely geometrical inflationary period in the early universe with a graceful exit \cite{Arciniega:2018fxj,Cisterna:2018tgx,Arciniega:2018tnn}. 
While the field equations can be solved exactly in certain special cases~\cite{Feng:2017tev}, it is possible to analytically investigate the thermodynamics of black holes even in the generic case where analytic solutions are not available~\cite{Bueno:2016lrh,Hennigar:2016gkm}. Charged black branes have an interesting phase structure that is absent for both their Lovelock and quasi-topological black brane counterparts~\cite{Hennigar:2017umz}. The Kovtun-Son-Starinets bound on the ratio of entropy density to shear viscosity always holds~\cite{Bueno:2018xqc}, and small asymptotically flat black hole solutions were found to be stable~\cite{Bueno:2017qce}, which may have implications for the information loss problem. Recent work has shown that the shadows of GQG black holes have potentially interesting phenomenological signatures~\cite{Hennigar:2018hza,Poshteh:2018wqy}. Thermodynamics of GQG black holes has yet to be fully explored. Previous studies for asymptotically flat solutions and AdS black holes have appeared in restricted contexts~\cite{Bueno:2017sui, Bueno:2017qce, Hennigar:2017ego, Hennigar:2017umz}, but a full study combining both cubic \cite{Hennigar:2017ego} and quartic GQG \cite{Ahmed:2017jod} has yet to be carried out. The purpose of this paper is to conduct such a study for both spherical and hyperbolic charged black holes. Our investigation will be carried out in the context of black hole chemistry, in which the cosmological constant is taken to be a thermodynamic variable \cite{Henneaux:1985tv, Creighton:1995au} that is interpreted as pressure in the first law of black hole mechanics \cite{Kastor:2010gq, KastorEtal:2010}. An extensive amount of work over the past six years has been carried out in this subject \cite{Kubiznak:2016qmn} and has indicated that black holes can exhibit a broad range of phase behaviour that has been observed in other areas of physics. 
Examples include triple points \cite{Altamirano:2013uqa}, re-entrant phase transitions \cite{Altamirano:2013ane}, polymer-like behaviour \cite{Dolan:2014vba}, and even superfluid-like phase transitions \cite{Hennigar:2016xwd,Hennigar:2016ekz,Dykaar:2017mba}, as well as a deep analogy between charged anti-de Sitter black holes and Van der Waals fluids \cite{KubiznakMann:2012}. Higher-curvature gravity theories have, using this approach, likewise been seen to have a very rich thermodynamic structure~\cite{Wei:2012ui, Cai:2013qga, Xu:2013zea, Mo:2014qsa, Wei:2014hba, Mo:2014mba, Zou:2013owa, Belhaj:2014tga, Xu:2014kwa, Frassino:2014pha, Dolan:2014vba, Sherkatghanad:2014hda, Hendi:2015cka, Hendi:2015oqa, Hennigar:2015esa, Hendi:2015psa, Nie:2015zia, Hendi:2015pda, Hendi:2015soe, Zeng:2016aly, Hennigar:2016gkm, EricksonRobie, Hennigar:2016xwd, Cvetic:2010jb, Hennigar:2014cfa, Johnson:2014yja, Karch:2015rpa, Caceres:2015vsa, Dolan:2016jjc}, with even more results surveyed in a recent review \cite{Kubiznak:2016qmn}. Our paper is organized as follows. In section \ref{sec:bhsolution} we present charged static, spherically symmetric AdS black holes in the {cubic-quartic GQG theory}. This includes an asymptotic solution, a near horizon solution, and a numerical solution that interpolates between them. In section \ref{sec:thermo} we investigate the thermodynamic properties of charged black holes in {cubic-quartic} generalized quasi-topological gravity by applying the black hole chemistry formalism. In section \ref{sec: thermoce} we classify the phase structure and critical points for these black holes from the perspective of black hole chemistry, working in the fixed charge ensemble. In section \ref{sec: thermofpe} we consider the thermodynamics in the fixed potential ensemble. We analyze the four-dimensional case in detail, and then present relevant results in higher dimensions.
In section \ref{sec: holog} we present some results on holographic hydrodynamics in order to understand these theories in the context of the AdS/CFT correspondence. We summarize our work in section \ref{discuss} and present some directions for further research. \section{Charged black hole solutions in cubic-quartic GQG} \label{sec:bhsolution} We begin by setting up charged static, spherically symmetric AdS black holes obtained from the equations of motion that follow from a combination of cubic and quartic terms of generalized quasi-topological gravity (GQG). \subsection{Construction of equations of motion} Consider the class of static, radially symmetric metrics with radial coordinate $r$ and time coordinate $t$, with coordinates chosen so that the volume of the transverse $(d-2)$-dimensional space scales as $r^{d-2}$. Higher-curvature gravity theories up to quartic order that admit solutions obeying the condition $g_{tt}g_{rr} = -1$, so that a single metric function suffices, comprise both Lovelock and quasi-topological curvature terms as well as GQG curvature terms. Concentrating only on the properties of GQG theory, we put aside the Lovelock and quasi-topological terms (for a discussion see e.g.~\cite{Frassino:2014pha, Hennigar:2015esa}) and consider Einstein gravity accompanied by cubic and quartic generalized quasi-topological terms, with minimal coupling to an Abelian gauge field.
The action\footnote{Here our convention for the cubic coupling differs by a minus sign from that used in \cite{Hennigar:2017ego}.} in $d$-dimensional spacetime reads \cite{Ahmed:2017jod} \begin{eqnarray} \cI&=&\frac{1}{16\pi G}\int d^dx \sqrt{-g}\left[\frac{(d-1)(d-2)}{\ell^2}+R+\hat{\mu}\cS_{3,d} +\hat{\lambda}\cS_{4,d} -\frac{1}{4}F_{a b}F^{a b} \right],\label{action0} \end{eqnarray} where the cosmological constant $\Lambda = -\frac{(d-1)(d-2)}{2\ell^2}$, \begin{eqnarray}\label{Sd} \mathcal{S}_{3, d} &=& 14 R_{a}{}^{e}{}_{c}{}^{f} R^{abcd} R_{bedf}+ 2 R^{ab} R_{a}{}^{cde} R_{bcde}- \frac{4 (66 - 35 d + 2 d^2) }{3 (d-2) (2 d-1)} R_{a}{}^{c} R^{ab} R_{bc} \nonumber\\ &&- \frac{2 (-30 + 9 d + 4 d^2) }{(d-2) (2 d-1)} R^{ab} R^{cd} R_{acbd} - \frac{(38 - 29 d + 4 d^2)}{4 (d -2) (2 d - 1)} R R_{abcd} R^{abcd} \nonumber\\ &&+ \frac{(34 - 21 d + 4 d^2) }{(d-2) ( 2 d - 1)} R_{ab} R^{ab} R - \frac{(30 - 13 d + 4 d^2)}{12 (d-2) (2 d - 1)} R^3, \end{eqnarray} and the quartic generalized quasi-topological term \cite{Ahmed:2017jod} is given in appendix \reef{app:lag_dens}. The rescaled cubic coupling $\hat{\mu}$ and quartic coupling $\hat{\lambda}$ are given by \begin{align} \hat{\mu} &= \frac{12(2d-1)(d-2)\; \mu}{(d-3)} \, , \nonumber\\ \hat{\lambda} &= -{\frac { d\left( 3{d}^{3}-27{d}^{2}+73\,d-57 \right) \lambda }{16 \left( {d}^{5}-14{d}^{4}+79{d}^{3}-224{d}^{2}+316\,d-170 \right) }}, \label{rescall} \end{align} where $\mu$ and $\lambda$ are arbitrary coupling constants; the rescaling is chosen to simplify the field equations.
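As a quick worked example (our own, included for orientation), evaluating the rescalings \reef{rescall} in $d=4$ gives \begin{equation} \hat{\mu}\Big|_{d=4} = \frac{12\cdot 7\cdot 2}{1}\,\mu = 168\,\mu \, , \qquad \hat{\lambda}\Big|_{d=4} = -\frac{4\left(192-432+292-57\right)}{16\left(1024-3584+5056-3584+1264-170\right)}\,\lambda = \frac{5}{24}\,\lambda \, , \end{equation} so both rescaled couplings remain finite and non-vanishing in four dimensions, consistent with the theory being dynamically non-trivial there.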
As per our requirements for radially symmetric metrics, we employ the following ansatz \begin{eqnarray} \label{eqn:metricAnsatz} ds^2&=& -N(r)^2f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\Sigma^2_{(d-2),k}, \end{eqnarray} and we find that the field equations of GQG yield $N(r)={\rm const}$ \cite{Hennigar:2017ego}; we shall set $N(r)=1$ for simplicity\footnote{We note that the choice $N=1/\sqrt{f_{\infty}}$ has been used \cite{Myers:2010ru} to normalize the speed of light on the boundary, i.e. to get $c=1$ in the dual CFT. Here we shall set $N=1$, and note that by a time reparametrization of the metric we can obtain $c=1$ on the boundary if desired.}. Here $d\Sigma^2_{(d-2),k}$ describes the $(d-2)$-dimensional line element of the transverse space, where $k=+1,0,-1$ stands for spherical, flat and hyperbolic geometries of a surface of constant scalar curvature. As an investigation of the $k=0$ case has previously been carried out \cite{Hennigar:2017umz}, we shall in the sequel consider only non-planar black holes. For a maximally symmetric space, the metric function in~\eqref{eqn:metricAnsatz} becomes \begin{eqnarray} f_{\rm AdS}(r) = k+f_{\infty} \frac{r^2}{ \ell^2} \, , \end{eqnarray} where $\ell$ is the AdS radius, related to the cosmological constant. The quantity $f_\infty = \lim_{r\to\infty} f(r) \ell^2/r^2$ is obtained by solving the following polynomial equation \begin{eqnarray}\label{asympf} 1 - f_\infty +\frac{\mu}{\ell^4} (d-6)(4d^4 - 49 d^3 + 291 d^2 - 514 d + 184) f_\infty^3 + \frac{\lambda}{\ell^6} \frac{(d-8)}{3} f_\infty^4 = 0, \end{eqnarray} which is independent of the choice of $k$ in the transverse section. When at least one coupling is non-zero, the higher curvature terms drive $f_\infty$ away from unity. Since we require the same asymptotics as AdS space, we pick only positive real solutions of the above polynomial. The effective radius of the AdS space is given by $\ell_{\rm eff} = \ell/\sqrt{f_\infty}$.
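The effect of the couplings on $f_\infty$ can be made explicit perturbatively (a simple expansion of our own): writing $f_\infty = 1+\delta$ in \reef{asympf} and working to linear order in $\mu$ and $\lambda$ gives \begin{equation} f_\infty \simeq 1 + \frac{\mu}{\ell^4}\,(d-6)\left(4d^4 - 49 d^3 + 291 d^2 - 514 d + 184\right) + \frac{\lambda}{3\ell^6}\,(d-8) \, , \end{equation} so that $\ell_{\rm eff}$ is shifted away from $\ell$ already at first order in the couplings.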
In fact it turns out the negative of the derivative of eq.~\eqref{asympf} with respect to $f_\infty$ yields the prefactor of the linearized equations of motion~\cite{Hennigar:2017ego}, \begin{eqnarray}\label{PF} P(f_\infty) = 1 - 3\frac{\mu}{\ell^4} (d-6)(4d^4 - 49 d^3 + 291 d^2 - 514 d + 184) f_\infty^2 - \frac{\lambda}{\ell^6} \frac{4(d-8)}{3} f_\infty^3 \end{eqnarray} and to prevent the appearance of ghosts in the particle spectrum we require $P(f_\infty) > 0$. For charged black holes, we include a Maxwell field $F_{a b}=\partial_a A_b-\partial_b A_a$, with electromagnetic one form defined as \begin{eqnarray} A &=& q E(r) dt, \end{eqnarray} where, by inserting the above expression into the Maxwell equation, we get \begin{eqnarray} E(r) &=& \sqrt{\frac{2(d-2)}{(d-3)}}\frac{1}{r^{d-3}}, \end{eqnarray} for the electric field. The specific choice of prefactor makes for greater simplification later on in the field equations; we choose the constant term in the potential to be zero. The field equation for the action \eqref{action0} yields the relation \begin{eqnarray} F&=&r^{d-3}\left(k-f(r)+\frac{r^2}{\ell^2}\right)+\mu F_{\cS_{3,d}}+\lambda F_{\cS_{4,d}}+r^{3-d}q^2 = m, \label{Feq0} \end{eqnarray} where $m$ is a constant of integration and where \cite{Hennigar:2017ego} \begin{eqnarray} \label{eqn:fullEFE} F_{\cS_{3,d}}&=&12 \Bigl[ (d^2+5d-15)\Bigl( \frac{4}{3} r^{d-4} f'^3- 8 r^{d-5} f f'' \bigl(\frac{r f'}{2} + k - f \bigr) \nonumber\\ && - 2 r^{d-5} ((d-4)f -2k) f'^2 + 8(d-5) r^{d-6} ff'( f - k) \Bigr) -\frac{1}{3} (d-4) r^{d-7}(k-f)^2 \nonumber\\ &&\times \Bigl( \bigl(-d^4 + \frac{57}{4} d^3 - \frac{261}{4} d^2 + 312 d - 489 \bigr)f + k\bigl( 129 - 192 d + \frac{357}{4} d^2 - \frac{57}{4} d^3 + d^4 \bigr) \Bigr) \Bigr] \nonumber\\ \end{eqnarray} and~\cite{Ahmed:2017jod} \begin{eqnarray} \label{eqn:GQT_eom} F_{\cS_{4,d}} &=& \left( k-f \right) \left[ \left( d-4 \right) f \left( k-f \right) f''+{f'}^{2} \left( \left({d}^{2}-\frac{23}{2} d + 32 \right) f- 
\frac{1}{2}\,k \left( d-4 \right) \right) \right] {r}^{d-7} \nonumber\\ &&\left.+ 2\,f f' f'' \left( \left( k-f \right) \left( d-5 \right) {r}^{d-6}+ \frac{f'}{8} \left( 3d- 16 \right) {r}^{d-5} \right) \right.\nonumber\\ &&\left. +f f' \left( k-f \right) ^{2} \left( d-4 \right) \left( d-7 \right) {r}^{d-8}+\frac{f'^3}{12} \bigg[ \left( \left( 3d- 16 \right) f-8 k \right) \left( d-5 \right) {r}^{d-6} \right.\nonumber\\ &&\left. - 3 \frac{f'}{4}\left( 3d-16 \right) {r}^{d-5} \bigg] \, ,\right. \end{eqnarray} are respectively generated by the cubic and quartic generalized quasi-topological terms. The parameter $m$ has scaling dimension $[{\rm length}]^{d-3}$, and we will see that it appears in the formula for the mass of the black hole. While exact solutions to the field equation seem hard to find (except in special cases~\cite{Feng:2017tev}), studying the far and near horizon behaviour of the metric perturbatively is still feasible, and permits us to analytically obtain the thermodynamic quantities associated with black hole solutions. Specifically, we shall utilize information from the near horizon expansion to describe the thermodynamics of the black holes. \subsection{Far region solution} In the asymptotic limit, the form of the metric function is \begin{eqnarray} f(r)_{asymp}=k+f_{\infty}\frac{r^2}{\ell^2} +\sum_{n=1}^{\infty}\frac{b_n}{r^n}.\label{fexp} \end{eqnarray} Inserting the above expansion into eq. \reef{Feq0} and requiring that it be satisfied at each order in a $1/r$ expansion yields \begin{eqnarray} f(r)_{asymp}&=&f_{\infty}\frac{r^2}{\ell^2}+k -\frac{m}{P(f_{\infty})r^{d-3}} +\frac{q^2}{P(f_{\infty})r^{2d-6}}\nonumber\\ &&\left.
+\frac{f_{\infty} m^2 }{\ell^4 [P(f_{\infty})]^3 r^{2 d-4}} \Big[\left(36 d^5-147 d^4+1179 d^3-5940 d^2+9444 d-3312\right) \right.\nonumber\\ &&\left.\times\ell^2 \mu+\left(-\frac{d^4}{2}+4 d^3-\frac{13 d^2}{2}+5 d-16\right) f_{\infty} \lambda\Big]+\frac{k m^2 }{\ell^2 [P(f_{\infty})]^3 r^{ 2 d-2}}\right.\nonumber\\ &&\left.\times \Big[24 (d-2) (d-1)^2 \left(d^2+5 d-15\right) \ell^2 \mu+\big(-\frac{d^4}{2}+5 d^3-\frac{29 d^2}{2}+16 d-6\big)\right.\nonumber\\ &&\left.\times f_{\infty} \lambda\Big]+\frac{f_{\infty} m q^2 }{\ell^4 [P(f_{\infty})]^3 r^{ 3 d-7}}\Big[\big(-216 d^5+342 d^4+2442 d^3-5064 d^2+1992 d\right.\nonumber\\ &&\left.-2016\big) \ell^2 \mu+\big(4 d^4-42 d^3+134 d^2-172 d+104\big) f_{\infty} \lambda\Big]+\frac{k m q^2 }{\ell^2 [P(f_{\infty})]^3 r^{ 3 d-5}} \right.\nonumber\\ &&\left.\times\Big[-96 (d-2) (d-1) (2 d-5) \left(d^2+5 d-15\right) \ell^2 \mu\right.\nonumber\\ &&\left.+\left(4 d^4-46 d^3+170 d^2-248 d+120\right) f_{\infty} \lambda\Big]\right.\nonumber\\ &&\left.+\cO\left(\frac{g_1(\mu,d )m^3}{[P(f_\infty)]^4 r^{3 d-5}},\frac{g_2(\lambda,d )f_{\infty}m^3}{\ell^2 [P(f_\infty)]^4 r^{3 d-5}},\frac{g_3(\mu, d) f_{\infty}q^4}{\ell^2 [P(f_\infty)]^3 r^{3 d-5}},\frac{g_4(\lambda, d) f_{\infty}^2q^4}{\ell^4 [P(f_\infty)]^3 r^{3 d-5}}\right), \right.\nonumber\\ \label{asympsol} \end{eqnarray} where $P(f_{\infty})$ was introduced in \reef{PF}. We have presented the six leading terms, and displayed the schematic structure of the next order corrections to $f(r)_{asymp}$. It is obvious that for $\mu\to 0$ and $\lambda\to 0$, $P(f_\infty) \to 1$, and so \begin{eqnarray} f^{Ein}(r)=k+f_{\infty}\frac{r^2}{\ell^2}-\frac{m}{r^{d-3}} +\frac{q^2}{r^{2d-6}}, \end{eqnarray} as expected from the solution in Einstein gravity. The homogeneous solution in the far region is found by inserting $f(r)=f(r)_{asymp}+\epsilon f_h(r)$ in eq. \reef{Feq0}, where $\epsilon$ parameterizes the strength of these corrections.
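Schematically (in our own shorthand; the background-dependent coefficient functions $c_i(r)$ are not written out), the expansion organizes itself as \begin{equation} F\left[f(r)_{asymp}+\epsilon f_h\right] = F\left[f(r)_{asymp}\right] + \epsilon\left(c_2(r)\, f_h'' + c_1(r)\, f_h' + c_0(r)\, f_h\right) + \cO(\epsilon^2) \, , \end{equation} so that at first order in $\epsilon$ the fluctuation $f_h$ obeys a linear second-order equation.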
Substituting this expression in the field equation, we get an inhomogeneous second order differential equation for the function $f_h(r)$. At leading order in $\epsilon$, assuming that $\mu\neq0$ and $\lambda\neq0$, the homogeneous part of the equation at large $r$ becomes\footnote{Since the coefficients of $f_h''$ and $f_h'$ appearing in the differential equation are zero for vanishing $\mu$ and $\lambda$, we recover in this limit the AdS black hole solution of Einstein gravity.} \begin{eqnarray} f_h''-\frac{4}{r}f_h'-\gamma^2 r^{d-3}f_h=0 \label{homog} \end{eqnarray} where \begin{eqnarray} \gamma^2&=& -\frac{\ell^4 [P(f_{\infty})]^2}{(d-1) f_{\infty} m \left(48 \left(d^2+5 d-15\right) \ell^2 \mu-(d-6) f_{\infty} \lambda\right)} \label{gamma2} \end{eqnarray} and we note that it is independent of the value of $k$; for vanishing $\mu$ and $\lambda$ we recover the well known AdS Reissner-Nordstrom ($RN$) solution. For $d=6$ there is an ambiguity in the above expression when $\mu=0, \lambda\neq0$. For this particular case the applicable equation becomes \begin{equation} f_h''-\frac{9}{r}f_h'-\frac{2 \ell^2 [P(f_{\infty})]^3 }{25 \lambda f_{\infty} m^2} r^{8}f_h=0, \label{deq6d} \end{equation} with $P(f_{\infty})=1+\frac{8 f_{\infty}^3 \lambda}{3 \ell^6}$, and the explicit form of $\gamma^2$ can be read from \eqref{deq6d}. The solution of \reef{homog} in the case of $\gamma^2>0$ is \begin{eqnarray} f_{h+} = A r^{5/2} I_{\frac{5}{d-1}}\left(\frac{2 \gamma r^{\frac{d-1}{2}}}{d-1}\right)+B r^{5/2} K_{\frac{5}{d-1}}\left(\frac{2 \gamma r^{\frac{d-1}{2}}}{d-1}\right), \label{bessel} \end{eqnarray} where $I$ and $K$ are the modified Bessel functions of the first and second kinds, and $A$ and $B$ are constants.
In the limit of large $r$ \begin{equation} f_{h+} \sim A r^{5/2} \exp\left(\frac{2 \gamma r^{\frac{d-1}{2}}}{d-1}\right)+B r^{5/2}\exp \left(-\frac{2 \gamma r^{\frac{d-1}{2}}}{d-1}\right),\label{exp} \end{equation} and so we must set $A = 0$ to ensure the AdS boundary conditions are satisfied. As a result no ghost excitations can propagate to infinity. We shall see shortly that the contribution of the second term can be dismissed. The solution for the particular case \eqref{deq6d} can be obtained in a similar fashion using its corresponding value of $\gamma^2$. Notice that $k$ does not appear in the asymptotic solution, and the numerator of $\gamma^2$ in conjunction with the positivity condition \reef{PF} ensures freedom from ghosts \cite{Hennigar:2017ego,Ahmed:2017jod}. Indeed, the positivity of the numerator of relation \reef{gamma2} gives the same no-ghost condition as in \reef{PF}, and one only needs to check whether the denominator is positive as well. To have the correct asymptotics we must choose ($\mu,~ \lambda$) and the mass parameter in such a way that $\gamma^2 > 0$. To see this, note that if $\gamma^2<0$ the homogeneous solution asymptotically takes the following form \begin{eqnarray} f_{h-} = C_1 r^{5/2} J_{\frac{5}{d-1}}\left(\frac{2 |\gamma| r^{\frac{d}{2}-\frac{1}{2}}}{d-1}\right)+C_2 r^{5/2} Y_{\frac{5}{d-1}}\left(\frac{2 |\gamma| r^{\frac{d}{2}-\frac{1}{2}}}{d-1}\right), \end{eqnarray} where $J$ and $Y$ are respectively Bessel functions of the first and second kind. In this situation, in any dimension the solution oscillates rapidly and its amplitude becomes larger than $\frac{r^2}{\ell^2}$ at large $r$. It therefore does not approach AdS asymptotically, and so we set $C_1 = C_2 = 0$ to remove this homogeneous part of the solution. For the rest of our considerations, to avoid any oscillating behaviour near infinity we restrict the solutions to the constraint $\gamma^2>0$.
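The asymptotic form \reef{exp} follows from the standard large-argument behaviour of the modified Bessel functions (a textbook step we spell out for completeness), \begin{equation} I_\nu(z) \sim \frac{e^{z}}{\sqrt{2\pi z}} \, , \qquad K_\nu(z) \sim \sqrt{\frac{\pi}{2 z}}\, e^{-z} \, , \qquad z\to\infty \, , \end{equation} applied with $z = \frac{2\gamma r^{(d-1)/2}}{d-1}$; the residual power-law prefactors have been absorbed into the constants $A$ and $B$ in \reef{exp}.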
Finally we note that the particular solution \reef{asympsol} decreases polynomially in $1/r$, and is the dominant part of the total solution $f(r) = f_{h+} + f(r)_{asymp}$ for sufficiently large $r$; we therefore neglect the term $f_{h+}$ in eq. \reef{bessel} in the sequel. \subsection{Near horizon solution}\label{nearsol} To construct the solution near the horizon we consider the following expansion \begin{eqnarray}\label{eqn:nh_ansatz} f(r)=4\pi T (r-r_+)+\sum_{n=2}^{\infty}a_n (r-r_+)^n, \end{eqnarray} for the metric function, where $T$ is the Hawking temperature of the black hole. It is fixed by demanding regularity of the Euclidean section (obtained under $t\to i\tau$) and reads \begin{eqnarray} T=\frac{f'(r_+)}{4 \pi} \, . \end{eqnarray} Substituting the near horizon expansion of the metric function into the field equation and imposing that it hold at each order in $(r-r_+)$, we obtain for the zeroth and first order terms \begin{eqnarray} m &=&k r_+^{d-3}+\frac{r_+^{d-1}}{\ell^2}+\frac{q^2 }{ r_+^{d-3}} -\frac{12(2 d-1)}{(d-3)}\mu\Bigl[ - \frac{ (d-3)}{2d-1}\nonumber\\ &&\left.\times \Bigl(-\frac{{k} ( d-4 ) ( 129-192d+{\frac {357}{4}}\,{d}^{2}-{\frac {57}{4}}\,{d}^{3}+{d}^{4} ) r_+^{d-7}}{3} \right.\nonumber\\ && \qquad\qquad \left.+ (d^2 + 5d - 15) \bigl(64 k \pi^2 r_+^{d-5} + \frac{256}{3} \pi^3 T r_+^{d-4}\bigr) T^2 \Bigr)\Bigr]\right.\label{MT01}\\ &&\left. +\lambda\Big[8 \pi ^2 (4-d) k^2 T^2 r_+^{d-7}+\frac{128}{3} \pi ^3 (5-d) k T^3 r_+^{d-6}+16 \pi ^4 (16-3 d) T^4 r_+^{d-5}\Big],\right.
\nonumber \end{eqnarray} \begin{eqnarray} 0&=&(d-3) k r_+^{d-4}+(d-1)\frac{ r_+^{d-2}}{\ell^2}-(d-3) \frac{q^2}{ r_+^{d-2}}-4 \pi T r_+^{d-3} +\frac{12\mu}{(d-3)} \nonumber\\ &&\left.\times\Bigl[ -\frac{{k}}{12} \left( d-3 \right) \left( d-4 \right) \left( d-7 \right) \left( 516-768d+357{d}^{2}-57{d}^{3}+4{d}^{4} \right) r_+^{d-8}\right.\nonumber\\ &&\qquad\left.\quad -{\frac {128}{3}}{\pi }^{3} \left( d-4 \right) \left( d-3 \right) \left( {d}^{2}+5\,d-15 \right) r_+^{d-5} T^3\right.\nonumber\\ &&\qquad\qquad\left. -64{\pi }^{2} (d-3)(d-5) \left( {d}^{2}+5\,d-15 \right) k r_+^{d-6} T^2\right.\nonumber\\ &&\qquad\qquad\qquad \left. + \left( d-3 \right) \left( d-4 \right)\left( d-6 \right) \pi \left( 4{d}^{3}- 33 {d}^{2}+127 d- 166 \right) {k}^{2} r_+^{d-7} T \Bigr]\right.\nonumber\\ &&+\lambda\Big[8 \pi ^2 (d-4)(d-7) k^2 T^2 r_+^{d-8}+\frac{64}{3} \pi ^3 (d-5)(d-6) k T^3 r_+^{d-7} \nonumber\\ &&\qquad\qquad\left. +\frac{16}{3} \pi ^4 (d-5) (3 d-16) T^4 r_+^{d-6}\Big],\right. \label{MT02} \end{eqnarray} which specify the mass parameter $m$ and the temperature $T$ in terms of the horizon radius and coupling constants. We shall use these equations in our thermodynamic investigation later on. Continuing to higher order terms, one is able to find all other series coefficients in terms of $a_2$, a free parameter whose value is determined by the boundary condition at infinity. Now that we have constructed the asymptotic and near horizon solutions, we next find a numerical solution that interpolates between them. For this purpose we define the rescaled metric function \begin{eqnarray} g(r)=\frac{\ell^2}{f_{\infty}r^2}f(r),\label{gf} \end{eqnarray} where $g(r)\rightarrow 1$ as $r\rightarrow \infty$. Here $f_{\infty}$ is a positive real root of eq. \reef{asympf}.
Choosing specific values for the coupling constants, electric charge and horizon radius, with $\ell=1$, we find the associated values of the mass parameter and temperature from \eqref{MT01} and \eqref{MT02}. To numerically solve the second order differential equation \eqref{Feq0} we need to identify initial values for the metric function $f$ and its first derivative. We use the value of $f$ close to the horizon to set up the seed solution for $g$. We then fix the value of $a_2$ to the desired order, using the shooting method, such that the numerical solution for $g$ approaches unity asymptotically. As there are several branches of solutions, we select the one that tends to the Reissner-Nordstrom ($RN$) solution in the $\mu\to 0, ~\lambda\to 0$ limit; the other solutions are not physically interesting. Furthermore, because the differential equation is stiff, the solution can be obtained only to a certain precision. For our choice of $a_2$ the asymptotic solution up to $\cO(r^{-12})$ is precise to one part in 1,000 or better. In order to exhibit the behaviour of the solution while varying the cubic and quartic couplings individually, we performed the computation for the cubic and quartic parts of the equation separately. Figure \ref{numeric0} illustrates the numerical solution in four dimensions and shows that at fixed quartic coupling, increasing charge drives the horizon inward, whereas at fixed electric charge, larger values of $|\lambda|$ displace the event horizon outward\footnote{To better highlight the distinctions between the various cases we have plotted $f$ as a function of $r/m$ instead of $r/\ell$.}. The right panel demonstrates the difference between the numerical result for the metric function and its corresponding asymptotic behaviour from \eqref{asympsol}; we see that these converge at sufficiently large $r$.
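The shooting procedure just described can be illustrated schematically. The sketch below is our own toy model, not the GQG field equation \eqref{Feq0}: a free parameter in the initial data (the slope $s$, standing in for the coefficient $a_2$) is tuned by bisection so that the exponentially growing mode is absent and the far boundary condition is met. For $y''=y$ with $y(0)=1$ on $[0,T]$, the bounded solution requires $y'(0)=-\coth(T)\approx-1$.

```python
# Toy illustration of the shooting method: tune the initial slope s (the
# analogue of a_2) so the exponentially growing mode of y'' = y is absent
# and the solution satisfies the boundary condition y -> 0 at large r.

def shoot(s, T=10.0, n=4000):
    """Integrate y'' = y with y(0)=1, y'(0)=s by RK4; return y(T)."""
    y, v = 1.0, s
    dt = T / n
    for _ in range(n):
        # RK4 stages for the first-order system y' = v, v' = y
        k1y, k1v = v, y
        k2y, k2v = v + 0.5 * dt * k1v, y + 0.5 * dt * k1y
        k3y, k3v = v + 0.5 * dt * k2v, y + 0.5 * dt * k2y
        k4y, k4v = v + dt * k3v, y + dt * k3y
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

def tune_parameter(lo=-2.0, hi=0.0, iters=60):
    """Bisect on the sign of y(T): the growing mode dominates any mistuning."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid) > 0.0:
            hi = mid   # solution blows up positive: decrease the slope
        else:
            lo = mid   # solution blows up negative: increase the slope
    return 0.5 * (lo + hi)

s_star = tune_parameter()
# The bounded solution has slope -coth(T), which tends to -1 for large T,
# mirroring how a_2 is fixed by demanding g -> 1 asymptotically.
```

The stiffness mentioned in the text shows up here too: any mistuning of the free parameter is amplified exponentially over the integration range, which is why the matching can be achieved only to finite precision.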
We also plotted the corresponding graphs for cubic gravity and find that similar behaviour takes place as charge and/or the cubic coupling are varied. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{numerical-4d-1cq.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{numerical-4d-2cq.pdf} \\ \includegraphics[scale=.3]{numeric4d-6-1.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{numeric4d-6-3.pdf} \\ \end{tabular} \caption{\textbf{Numerical solutions for cubic and quartic gravity} (color online). \textit{Top left}: The plot presents the rescaled metric function \eqref{gf} in four dimensions for quartic gravity (with $\mu=0$). The red line shows the solution for Einstein gravity (for which the quartic coupling $\lambda=0$) with zero electric charge. For the other curves we choose different values for these parameters. \textit{Top right}: This panel depicts the difference between the numerical solution and its corresponding large-$r$ analytic solution in quartic gravity -- we see that convergence holds asymptotically. \textit{Bottom left}: The plot presents the rescaled metric function \eqref{gf} in four dimensions for cubic gravity (with $\lambda=0$); the red line is again the solution for Einstein gravity with zero charge. \textit{Bottom right}: The difference between the numerical solution and its corresponding large-$r$ analytic solution in cubic gravity; we see again the asymptotic convergence. In all cases, we set $\ell=1$ and $r_+=10$. The mass parameter $m$ is defined in~\reef{MT01}. } \label{numeric0} \end{figure*} To see the behaviour of the metric function in four dimensions as $r\rightarrow 0$, we expand the field equations in powers of $r$. In general we have the following expansion \begin{eqnarray} f(r)=r^s(a_0+r a_1+r^2 a_2+\cdots). \end{eqnarray} Considering only the quartic curvature part of \eqref{Feq0}, for small $r$ the first term in the above expansion is the dominant contribution to the metric function. 
To find $s$, we use the same numerical procedure described before, and depict the behaviour of $r f'(r)/f(r)$ near $r=0$. We find that in four dimensions $s$ vanishes. However, in higher dimensions, depending on the choice of parameters, $s$ acquires a non-integer negative value, except in six dimensions where it becomes positive for the choice of physical parameters prescribed in the next section. A similar argument was given in \cite{mir:2018mmm} for the cubic part. Therefore in four dimensions the metric is regular at the origin. However, the Kretschmann scalar $R_{a b c d}R^{a b c d}\sim r^{-4}$; the spacetime is still singular, but its singularity is softer than its counterpart in Einstein gravity, for which $R_{a b c d}R^{a b c d}\sim r^{-6}$. We also find that the associated plots for the five-dimensional metric function \eqref{gf} are similar, provided the parameters are chosen to satisfy a physical constraint that we will discuss in the next section. \section{Thermodynamic properties \label{sec:thermo}} \label{prop} Our aim is to study the effects of including cubic and quartic generalized quasi-topological terms on the known behaviour of Einstein black hole thermodynamics in four and higher dimensions. We begin by investigating the first law and Smarr relation, applying the black hole chemistry formalism \cite{Kubiznak:2016qmn} in which the cosmological constant $\Lambda$ and the couplings $\mu, ~\lambda$ are treated as thermodynamic variables. We then determine the domains of the couplings and the charge for which the physical constraints are satisfied. We also elucidate the critical behaviour of these black holes. \subsection{First law and Smarr relation} As discussed in section \ref{nearsol}, equations \eqref{MT01} and \eqref{MT02} provide the relations for obtaining the mass and temperature of the black holes without requiring knowledge of an exact solution.
Since the explicit form for the temperature is complicated, we apply the second equation implicitly to verify that the first law of thermodynamics is satisfied. We utilize the Iyer-Wald formalism~\cite{Wald:1993nt, Iyer:1994ys} to compute the entropy \begin{eqnarray} S = -2\pi \oint d^{d-2}x \sqrt{\gamma}E^{a b c d}\hat{\varepsilon}_{a b}\hat{\varepsilon}_{c d}, \end{eqnarray} where \begin{eqnarray} E^{a b c d}=\frac{\partial \cL}{\partial R_{a b c d}}, \end{eqnarray} and $\hat{\varepsilon}_{a b}$ is the binormal to the horizon, normalized as $\hat{\varepsilon}_{a b}\hat{\varepsilon}^{a b}=-2$. The induced metric on the horizon is $\gamma_{a b}$ and $\gamma=\textrm{det}\, \gamma_{a b}$. From the action \reef{action0} we find \begin{eqnarray} S&=&\frac{\Sigma_{(d-2),k}}{4} r_+^{d-2} \Big[1+\frac{48\mu}{r_+^4} (d-2) \Big(8 \pi \left(d^2+5 d-15\right) r_+ T (k+\pi r_+ T)-\frac{1}{16} (d-4) \nonumber\\ &&\left.\times\left(4 d^3-33 d^2+127 d-166\right) k^2\Big) -\frac{4 \pi \lambda T}{r_+^5}(d-2) \Big((d-4) k^2+4 \pi (d-5) k r_+ T\right.\nonumber\\ &&\left. \qquad +\frac{4}{3} \pi ^2 (3 d-16) r_+^2 T^2\Big)\Big],\right. \label{ENT} \end{eqnarray} for the entropy, where $\Sigma_{(d-2),k}$ is the volume of the submanifold with line element $d\Sigma_{(d-2),k}$. For $k=1$ this is the (finite) volume of the $(d-2)$-dimensional sphere, while for $k=0$ and $k=-1$ a suitable identification must be performed to render the volume finite.
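As a consistency check (a standard computation we include for completeness), applying the Wald formula to the Einstein-Hilbert term of \reef{action0} alone reproduces the area law, i.e. the leading term of \reef{ENT}: with $\cL_{\rm Ein}=\frac{R}{16\pi G}$, \begin{equation} E^{a b c d}_{\rm Ein} = \frac{1}{32\pi G}\left(g^{a c}g^{b d}-g^{a d}g^{b c}\right) \, , \qquad E^{a b c d}_{\rm Ein}\hat{\varepsilon}_{a b}\hat{\varepsilon}_{c d} = \frac{-2-2}{32\pi G} = -\frac{1}{8\pi G} \, , \end{equation} so that $S_{\rm Ein} = -2\pi\oint d^{d-2}x\sqrt{\gamma}\left(-\frac{1}{8\pi G}\right) = \frac{\Sigma_{(d-2),k}\, r_+^{d-2}}{4}$ in the units $G=1$ used in \reef{ENT}.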
Identifying the pressure as \cite{Kastor:2009wy,Kubiznak:2016qmn} \begin{eqnarray}\label{press} P =-\frac{\Lambda}{8\pi} = \frac{(d-1)(d-2)}{16 \pi \ell^2}, \end{eqnarray} the other thermodynamic quantities are \begin{align} V &=\frac{\Sigma_{(d-2),k} r_+^{d-1}}{(d-1)} \, , \quad Q=\Sigma_{(d-2),k} \frac{\sqrt{2(d-2)(d-3)}}{16\pi}q \, ,\quad \Phi =\sqrt{\frac{2(d-2)}{d-3}}\frac{q}{r_+^{d-3}} , \nonumber\\ \Psi_{\mu} =& -32(d-2)(d^2 + 5d - 15) \Sigma_{(d-2),k} \left(\pi^2 r_+^{d-4} T^3 + \frac{3}{2} \pi k T^2 r_+^{d-5} \right) \nonumber\\ &+ \frac{(d-2)(d-4)\Sigma_{(d-2),k}}{4} \bigg[3\left(4d^3 - 33 d^2 + 127 d - 166 \right) k^2 T r_+^{d-6} \nonumber\\ &\qquad\qquad - \left( 129 - 192 d + \frac{357}{4} d^2 - \frac{57}{4} d^3 + d^4 \right) \frac{k^3 r_+^{d-7}}{\pi} \bigg], \nonumber\\ \Psi_{\lambda} &=\frac{\pi (d-2)r_+^{d-7}\Sigma_{(d-2),k}}{6} \left[3 (d-4) k^2 T^2+8 \pi (d-5) k r_+ T^3+2 \pi ^2 (3 d-16) r_+^2 T^4\right] \end{align} and the mass is~\cite{Deser:2002jk} \begin{eqnarray} \label{eqn:adm_mass} M =\frac{(d-2)\Sigma_{(d-2),k} m }{ 16 \pi }. \end{eqnarray} It is straightforward to show these quantities satisfy the extended first law of black hole thermodynamics, \begin{eqnarray} d M =T dS+V dP+\Phi dQ+\Psi_{\mu}d\mu+\Psi_{\lambda}d\lambda, \end{eqnarray} where $V$ is the thermodynamic volume conjugate to the pressure, and $\Psi_\mu,~\Psi_\lambda$ are the respective thermodynamic conjugates to the couplings $\mu,~\lambda$. Furthermore, the Smarr formula \begin{eqnarray} (d-3)M = (d-2)T S-2 P V+(d-3)\Phi Q+4 \mu \Psi_{\mu}+6 \lambda \Psi_{\lambda}, \end{eqnarray} holds for these quantities. To investigate the critical behaviour of these black holes, an equation of state is required. This is obtained by expressing $\ell^2$ in \eqref{MT02} in terms of the pressure via \eqref{press}. Hence \begin{eqnarray}\label{eos0} P &=&\frac{T}{v}-\frac{(d-3)}{\pi (d-2)} \frac{k}{v^2}+\frac{e^2}{v^{2 d-4}} +(d-7) (d-4)\frac{\beta_0}{v^6} -(d-6) (d-4)\beta_1\frac{ T}{v^5} \\ &&\left.
+\left((d-5)\frac{\beta_2}{v^4} -(d-4) \frac{\alpha_2}{v^6}\right)T^2 +\left((d-4)\frac{\beta_3}{v^3} -(d-5)\frac{\alpha_3}{v^5}\right)T^3 - (d-5) \alpha_4 \frac{T^4}{v^4},\right.\nonumber \end{eqnarray} where the different parameters are \begin{eqnarray} v&=&\frac{4 r_+}{(d-2)}, \quad \quad e^2=\frac{16^{d-3}}{\pi } (d-3) (d-2)^{5-2 d} q^2\nonumber\\ \alpha_2&=&\frac{2^{11} \pi (d-7) k^2 }{(d-2)^5}\lambda, \quad \alpha_3=\frac{2^{12} \pi ^2 (d-6) k }{3 (d-2)^4}\lambda, \quad \alpha_4=\frac{2^8 \pi ^3 (3 d-16) }{3 (d-2)^3}\lambda, \nonumber\\ \beta_0&=&\frac{2^8 \left(4 d^4-57 d^3+357 d^2-768 d+516\right) k }{\pi (d-2)^5}\mu, \quad \beta_2=\frac{3\times 2^{12} \pi \left(d^2+5 d-15\right) k }{(d-2)^3}\mu,\nonumber\\ \beta_1&=&\frac{3\times 2^8 \left(4 d^3-33 d^2+127 d-166\right) k^2 }{(d-2)^4}\mu, \quad \beta_3=\frac{2^{11} \pi ^2 \left(d^2+5 d-15\right) }{(d-2)^2}\mu, \label{rescaled} \end{eqnarray} where $v$ is the specific volume \cite{GunasekaranEtal:2012}. As in previous studies \cite{Hennigar:2016gkm, Hennigar:2017umz}, we see from \eqref{eos0} that the equation of state depends non-linearly on the temperature. For future reference we choose the free parameters to be $e$, $\beta_3$ and $\alpha_4$; these are independent of the choice of $k$. The explicit form of the Gibbs free energy $G=M-T S$ is \begin{eqnarray} \cG &=&\left[\frac{4}{d-2}\right]^{d-1}\frac{G}{\Sigma_{(d-2),k} } =\frac{ v^{d-1}P}{d-1} +\frac{ v^{d-3}k}{\pi (d-2)}+\frac{e^2 }{(d-3)v^{d-3}} -\beta_0 (d-4) v^{d-7}\nonumber\\ &&\left.+ \left(-\frac{v^{d-2}}{d-2}+\beta_1 (d-4) v^{d-6}\right)T+ \left(\alpha_3\frac{ (d-5) v^{d-6}}{d-6}-\beta_3 v^{d-4}\right)T^3+\alpha_4 v^{d-5}T^4\right.\nonumber\\ &&\left.+ \left(\alpha_2\frac{ (d-4) v^{d-7}}{d-7}-\beta_0\frac{48 \pi ^2 (d-2)^2 \left(d^2+5 d-15\right) v^{d-5}}{4 d^4-57 d^3+357 d^2-768 d+516}\right) T^2,\right.
\end{eqnarray} where we pulled out an overall positive factor to simplify the expression; the explicit form of the other parameters is given in eq. \reef{rescaled}. The equilibrium state is the one that minimizes the Gibbs free energy $\cG$ for fixed temperature and pressure. \subsection{Physical constraints}\label{constraints} We now explicate the constraints on the cubic and quartic couplings required for physical solutions. Generalized quasi-topological theories have the property that only the massless graviton propagates on constant curvature backgrounds provided the parameters are appropriately constrained. To ensure this, the effective Newton constant of gravity must have the same sign as that in Einstein gravity. This implies that the pre-factor in the linearized equations of motion about the AdS solution is positive \cite{Hennigar:2017ego}, ${\it i.e.,}\ $ $P(f_{\infty})>0$, with $P(f_{\infty})$ defined in eq. \eqref{PF}; the value of $f_{\infty}$ is given by the solution of eq. \eqref{asympf}, which must be positive in order to obtain an asymptotically AdS solution. The same relation occurs if we require $\gamma^2>0$ (see the discussion after \eqref{exp}). In terms of the rescaled parameters in eq. \reef{rescaled} and the pressure given in \eqref{press}, the no-ghost constraint \eqref{PF} becomes \begin{eqnarray} 1-\frac{3 (d-6) \left(4 d^4-49 d^3+291 d^2-514 d+184\right) f_{\infty}^2 P^2 \beta_3}{8 (d-1)^2 \left(d^2+5 d-15\right)} -\frac{64 (d-8) f_{\infty}^3 P^3 \alpha_4}{(d-1)^3 (3 d-16)} >0 \qquad \label{nghost} \end{eqnarray} and we note that in the limit $\beta_3\rightarrow 0,~\alpha_4\rightarrow 0$ (or $\mu\rightarrow 0,~\lambda\rightarrow 0$) we reach the Einstein branch of the theory.
Disregarding solutions with $\gamma^2<0$ (since they are not asymptotically AdS) we have from \eqref{gamma2} \begin{eqnarray} \frac{8 \pi (d-1) (3 d-16) [P(f_{\infty})]^2}{3 (d-2) f_{\infty} m P \left(8 \alpha_4 (d-6) f_{\infty} P-\beta_3 \left(3 d^2-19 d+16\right)\right)} >0,\label{gamma21} \end{eqnarray} upon using eq. \reef{rescaled}, where $P(f_{\infty})$ is given in \reef{nghost}. It is well-known that in higher curvature gravity black hole entropy can be negative in some regions of parameter space, perhaps indicating an instability~\cite{Cvetic:2001bk, Nojiri:2001pm}. Imposing the requirement of positive black hole entropy yields \begin{eqnarray} S>0 \Rightarrow && 1+(d-2)\Big[-\frac{ (d-4) {\beta_1}}{v^4} +\left(\frac{2 {\beta_2}}{v^3}-\frac{2 (d-4) {\alpha_2}}{(d-7) v^5}\right)T \nonumber\\ &&\qquad\qquad\qquad \left.+ \left(\frac{3 {\beta_3}}{v^2}-\frac{3 (d-5) {\alpha_3}}{(d-6) v^4}\right)T^2-\frac{4 {\alpha_4}T^3}{v^3}\Big] >0\right. \, .\label{sratio} \end{eqnarray} For positive temperature and specific volume, the values of the couplings in each dimension must be chosen to satisfy the above inequality. In the next section we search for the domains in parameter space where these conditions are valid for various values of the charge and coupling constants. \section{Thermodynamics in the canonical ensemble} \label{sec: thermoce} Equipped with the field equations and relevant thermodynamic relations, we consider first the fixed charge ensemble. We aim to investigate the phase structure and critical points for these black holes. The equation of state in terms of the rescaled parameters $e,~\beta_3$ and $\alpha_4$ introduced in eq.
\reef{rescaled} is \begin{eqnarray} P &=&\frac{T}{v}-\frac{(d-3) k}{\pi (d-2) v^2}+\frac{e^2}{ v^{2 d-4}}+\frac{(d-7) (d-4) \left(4 d^4-57 d^3+357 d^2-768 d+516\right) k {\beta_3} }{8 \pi ^3 (d-2)^3 \left(d^2+5 d-15\right) v^6}\nonumber\\ &&\qquad\left.+ \left(-\frac{3 (d-6) (d-4) \left(4 d^3-33 d^2+127 d-166\right) k^2 {\beta_3}}{8 \pi ^2 (d-2)^2 \left(d^2+5 d-15\right) v^5}\right)T\right.\nonumber\\ &&\qquad\qquad\left.+ \Big(\frac{6 (d-5) k {\beta_3}}{\pi (d-2) v^4}-\frac{24 (d-7) (d-4) k^2 {\alpha_4}}{\pi ^2 (d-2)^2 (3 d-16) v^6}\Big)T^2\right.\nonumber\\ &&\qquad\qquad\qquad\left.+ \left(\frac{ (d-4){\beta_3}}{v^3}-\frac{16 (d-6) (d-5) k {\alpha_4}}{\pi (d-2) (3 d-16) v^5}\right)T^3-\frac{ (d-5){\alpha_4} }{v^4}T^4,\right. \end{eqnarray} for any $d \ge 4$; we note that the charge appears only as $e^2$, so our results are valid for both positive and negative charge. Phase transitions occur if the equation of state demonstrates oscillatory behaviour, with $P(v)$ having at least one minimum and one maximum. This in turn depends on the signs of the coefficients of the different powers of $v$, as these determine how many roots exist in the equations for the critical volume and temperature. To get a critical point, the following equations must hold \begin{eqnarray}\label{criteqs} \frac{\partial P}{\partial v}=\frac{\partial^2 P}{\partial v^2}=0\label{dPd2p} \, . \end{eqnarray} We also find that when $\mu=0$ in four and five dimensions $\lambda$ must be negative, whereas in higher dimensions $\lambda$ must be positive in order to get physical points that satisfy all physical constraints mentioned in the previous section. To get the critical volume and temperature in terms of the charge and couplings, we solve equations \eqref{criteqs} in various dimensions. As the explicit forms are lengthy, we do not present the results explicitly; in practice it is easier to solve the equations parametrically for $T_c$ and $v_c$ in terms of the other parameters in certain dimensions.
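As a quick sanity check on \eqref{criteqs}, in the Einstein limit $\beta_3=\alpha_4=0$ with $k=1$ the $d=4$ critical point can be found in closed form; the short \texttt{sympy} sketch below (our own cross-check, not part of the analysis above) recovers the universal charged-AdS ratio $P_c v_c/T_c=3/8$ \cite{Kubiznak:2012wp}:

```python
import sympy as sp

T, v, e = sp.symbols('T v e', positive=True)

# d = 4 equation of state in the Einstein limit (beta_3 = alpha_4 = 0, k = 1)
P = T/v - 1/(2*sp.pi*v**2) + e**2/v**4

# critical point conditions dP/dv = d^2P/dv^2 = 0
sol = sp.solve([sp.diff(P, v), sp.diff(P, v, 2)], [T, v], dict=True)[0]
Pc = P.subs(sol)

# universal van der Waals ratio of the charged AdS black hole
ratio = sp.simplify(Pc*sol[v]/sol[T])
```

Solving the same pair of conditions with the cubic and quartic couplings switched on is what produces the lengthy parametric expressions referred to above.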
\subsection{ $AdS_4$ vacua and maximum pressure}\label{vacua} We consider here the structure of the $AdS_4$ vacua of \reef{action0} in four dimensional spacetime, with curvature scale $1/\ell_{\textrm{eff}}^2=f_{\infty}/\ell^2$. Setting the action length scale to $\ell=1$ (implying a fixed pressure of $3/(8\pi)$), we analyze solutions to \reef{asympf}, considering the cubic and quartic couplings separately for simplicity. Starting with only a non-zero cubic coupling $\mu$, we have \begin{eqnarray} 1-f_{\infty}-\frac{1344 \mu}{\ell^4}f_{\infty}^3 =0,\label{eqfinf4d} \end{eqnarray} and illustrate in the left graph of figure \ref{finfplot} three possible branches of real solutions to this equation, depicted in different colours. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.35]{finfd4.pdf}&\quad\quad \includegraphics[scale=.35]{finfd4q.pdf} \\ \end{tabular} \caption{\textbf{Real possible values for $f_{\infty}$ in terms of coupling in four dimensions} (color online). \textit{Left}: The plot presents the real solutions to the equation \reef{eqfinf4d} for different values of $\mu$. The lower dashed black line corresponds to $f_{\infty}<0$, while the upper dashed black line shows solutions with $\gamma^2<0$, which are therefore unstable. The red line corresponds to solutions containing a ghost. Only the blue line corresponds to the stable branch of solutions. The orange dot corresponds to the critical case $\mu_c=-\ell^4/9072$. \textit{Right}: The plot presents the real solutions to the equation \reef{eqfinf4dq} versus $\lambda$. The lower dashed black curve corresponds to $f_{\infty}<0$, while the upper dashed blue line exhibits unstable solutions with $\gamma^2<0$. The red line shows the appearance of a ghost solution. The solid blue line exhibits the stable branch of solutions. The orange dot denotes the critical case $\lambda_c=-81\ell^6/1024$. In both plots we have set $\ell=1$.
} \label{finfplot} \end{figure*} An analysis of the discriminant of \reef{eqfinf4d} indicates that there is a critical value $\mu_c=-\ell^4/9072$ of the coupling where the discriminant $\Delta$ vanishes. For $\mu_c<\mu<0$, $\Delta>0$ and there are two positive real solutions for $f_{\infty}$; however only the smaller branch is free of ghosts, ${\it i.e.,}\ $ $P(f_{\infty})>0$ in \reef{PF}. Conversely $\Delta<0$ for both $\mu<\mu_c$ and $\mu>0$; the first of these yields a negative real-valued solution, and the second implies $\gamma^2<0$ in \reef{gamma2}. Both regions are unphysical and we exclude them from further analysis. Note that from the linearized equations of motion, the following relation \begin{eqnarray} G_{\textrm{eff}}=\frac{G}{1+\frac{4032}{\ell^4}\mu f_{\infty}^2}, \end{eqnarray} holds between the effective Newton constant $G_{\textrm{eff}}$ and $G$. We see that for $f_{\infty}^2=-\frac{\ell^4}{4032\mu}$, $G_{\textrm{eff}}\rightarrow \infty$, and inserting this value into eq.~\reef{eqfinf4d} yields the critical coupling $\mu_c$ (noted previously \cite{Feng:2017tev}). At this point the discriminant of the cubic changes sign and there are distinct branches of solutions for values of $\mu < \mu_c$ and $\mu > \mu_c$. Repeating the same approach for higher dimensions, the overall behaviour of $f_{\infty}$ given in figure \ref{finfplot} is similar to that in four dimensions. The critical limit is given by \begin{eqnarray} \mu_c&=&\frac{4 \ell^4}{27 \left(4 d^5-73 d^4+585 d^3-2260 d^2+3268 d-1104\right)}\quad \quad \textrm{if}\ \lambda=0. \end{eqnarray} More generally we can reconsider the above discussion for arbitrary values of the pressure $P$. In four and five dimensions there is a maximum value for the pressure that results from the condition that the discriminant $\Delta>0$, which is a constraint on parameter space for physically acceptable solutions.
In general we have \begin{eqnarray} P_{\textrm{max}}&=&\frac{4}{3} \sqrt{\frac{2}{3}} \sqrt{\frac{(d-1)^2 (d^2+5 d-15)}{ ( 4 d^4-49 d^3+291 d^2-514 d+184)(6-d)\beta_3}} \quad \quad \textrm{if}\ \alpha_4=0, \end{eqnarray} where in four and five dimensions $\beta_3>0$ and is given in \reef{rescaled}. For $d=6$ the pressure is unbounded and for $d\geq 7$ a similar procedure does not yield an upper bound for the pressure. Turning now to the quartic case, equation \reef{asympf} becomes \begin{eqnarray} 1-f_{\infty}-\frac{4\lambda}{3\ell^6} f_{\infty}^4=0, \label{eqfinf4dq} \end{eqnarray} in $d=4$. The right graph in figure \ref{finfplot} indicates three possible real solutions to the above equation. The discriminant of \reef{eqfinf4dq} vanishes for $\lambda = \lambda_c=-81\ell^6/1024$. Again, we note that $G_{\textrm{eff}}$ becomes infinite, or equivalently $P(f_{\infty})$ vanishes, for $f_{\infty}^3=-3\ell^6/(16 \lambda)$, and that \eqref{eqfinf4dq} in turn implies $\lambda=\lambda_c$. Similar to the previous case, for $\lambda_c<\lambda<0$, we have $\Delta<0$ and there are two positive real solutions for $f_{\infty}$. Only the smaller of these has positive $P(f_{\infty})$ and $\gamma^2$. For $\lambda<\lambda_c$, $\Delta>0$ and there are no real solutions for $f_{\infty}$; for $\lambda>0$, although there is one real positive solution to \reef{eqfinf4dq}, it implies $\gamma^2<0$ and is therefore physically inadmissible.
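Both critical couplings can be verified directly from the polynomial structure of \reef{eqfinf4d} and \reef{eqfinf4dq}. The numerical sketch below (our own cross-check with $\ell=1$; the double-root location $f_{\infty}=4/3$ is a by-product of solving \reef{eqfinf4dq} together with its $f_{\infty}$-derivative and is not quoted above) confirms that the cubic discriminant vanishes at $\mu_c=-1/9072$, with two positive real branches for $\mu_c<\mu<0$, and that at $\lambda=\lambda_c$ the quartic degenerates to a double root:

```python
import numpy as np

# Cubic branch, eq. (eqfinf4d) with l = 1:  -1344*mu*f^3 - f + 1 = 0
mu_c = -1.0 / 9072

def cubic_disc(mu):
    # discriminant of a*f^3 + b*f^2 + c*f + d
    a, b, c, d = -1344.0 * mu, 0.0, -1.0, 1.0
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

def positive_real_roots(coeffs, tol=1e-4):
    return sorted(r.real for r in np.roots(coeffs)
                  if abs(r.imag) < tol and r.real > 0)

# two positive real branches for a sample mu in (mu_c, 0)
branches = positive_real_roots([-1344.0 * (-1e-4), 0.0, -1.0, 1.0])

# Quartic branch, eq. (eqfinf4dq) with l = 1:  -(4*lam/3)*f^4 - f + 1 = 0;
# at lam = lam_c the quartic degenerates to a double root f_infinity = 4/3
lam_c = -81.0 / 1024
double_root = positive_real_roots([-4.0 * lam_c / 3.0, 0.0, 0.0, -1.0, 1.0])
```

For $\mu>0$ the same routine returns a single positive real root, consistent with the $\gamma^2<0$ branch discussed above.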
Requiring $\lambda_c<\lambda<0$ we conclude that there is a maximum value for the pressure given by \begin{eqnarray} P_{\textrm{max}}&=&\frac{3}{16} \sqrt[3]{\frac{(d-1)^3 (3 d-16)}{ (d-8)\alpha_4}}\quad \quad \textrm{if}\ \beta_3=0,\label{PMAX} \end{eqnarray} where $\alpha_4$ is given in \reef{rescaled} and \begin{eqnarray} \lambda_c&=&\frac{81}{256}\frac{\ell^6}{d-8}\quad \quad \textrm{if}\ \mu=0.\label{muclac} \end{eqnarray} Only for $d=4,5$ is $\alpha_4>0$ and $P_{\textrm{max}} > 0$, yielding an upper bound on the pressure; in higher dimensions there is no bound on the pressure. \subsection{Critical behaviour in four dimensions} Our next task is to determine how many of these possible critical points are actually physical, and to study their critical behaviour. We proceed by examining each value of $d$ in succession. Preceding studies have shown that critical points exist for four dimensional charged black holes in Einstein gravity ($\beta_3=0,~\alpha_4=0$) \cite{Kubiznak:2012wp} and in ECG \cite{Hennigar:2016gkm}. A recent study was carried out for cubic GQG (for which $\alpha_4=0$) in $d$ dimensions \cite{mir:2018mmm}. Here we consider the effects of both cubic and quartic GQG in $d=4$. The equation of state~\eqref{eos0} becomes \begin{eqnarray} P &=&\frac{T}{v}-\frac{k}{2 \pi v^2}+\frac{e^2}{v^4}-\frac{3 \beta_3 k T^2}{\pi v^4}+\frac{\alpha_4 T^4}{v^4}+\frac{4 \alpha_4 k T^3}{\pi v^5}. \end{eqnarray} It is obvious that for small $v$ ({\it i.e.,}\ for small black holes) the term cubic in $T$ (coming from quartic GQG) dominates.
By taking different linear combinations of \eqref{criteqs} it is possible to obtain an equation linear in $T$; the resultant critical temperature and volume are then easily seen to satisfy the equations \begin{eqnarray} T_c&=&\frac{1}{18}\Big(8000 \pi ^2 \alpha_4 e^4 k^2-12000 \pi \alpha_4 e^2 k v_c^2+ \left(4500 \alpha_4 k^2-3600 \pi^2 \beta_3 e^2 k^2\right)v_c^4\nonumber\\ &&\left.+540 \pi \beta_3 k v_c^6+180 \pi^4 e^2 v_c^8-27 \pi^3 k v_c^{10}\Big) \Big/\Big( v_c^3 \big(-400 \pi^2 \alpha_4 e^2 k^2+300 \pi \alpha_4 k v_c^2\right.\nonumber\\ &&\left.-400 \beta_3^2 k^2+40 \pi^2 \beta_3 k^2 v_c^4-\pi^4 v_c^8\big) \Big), \right. \label{Tc4} \end{eqnarray} and \begin{eqnarray} \left(3 \pi ^2 v_c^4-60 \beta_3 k^2\right)T_c^2+18 \pi k v_c^3T_c+20 \pi e^2 k-15 k^2 v_c^2 = 0, \label{Vc4} \end{eqnarray} which can be solved numerically for any choice of parameters to obtain $T_c,~v_c$. For simplicity, consider the behaviour of the critical temperature and volume with only the quartic coupling active. Equations \eqref{Tc4} and \eqref{Vc4} become \begin{eqnarray} t_c &=& \frac{8000 \pi^2 \alpha_4 e^4 k^2-12000 \pi \alpha_4 e^2 k v_c^2+4500 \alpha_4 k^2 v_c^4+180 \pi^4 e^2 v_c^8-27 \pi^3 k v_c^{10}}{18 \pi v_c^3 \left(-400 \pi \alpha_4 e^2 k^2+300 \alpha_4 k v_c^2-\pi^3 v_c^8\right) } \quad \label{Tc4Q} \end{eqnarray} with $t_c \equiv {T_c}|_{\beta_3\rightarrow 0}$, and the critical volume $ v_c$ satisfies \begin{eqnarray} 20 \pi e^2 k-15 k^2 v_c^2+18 \pi k t_c v_c^3+3 \pi^2 t_c^2 v_c^4 &=&0. \label{Vc4Q} \end{eqnarray} We see that the critical temperature is singular if the critical volume is such that the polynomial quartic in $v_c^2$ in the denominator vanishes. An exception to this is if $\alpha_4= \frac{6400}{729} \pi^6 e^6 $: the numerator in \eqref{Tc4Q} also vanishes and $t_c$ remains finite. Note that this only occurs if $k\neq 0$, in accord with earlier work on black branes in GQG \cite{Hennigar:2017umz}.
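This cancellation is simple to confirm numerically. In the sketch below (our own cross-check; the common zero $v_c^2=40\pi e^2/3$ is a by-product of solving the two vanishing conditions simultaneously and is not quoted above), both the numerator and the denominator bracket of \eqref{Tc4Q} vanish for $k=1$, $e=1$ and $\alpha_4=6400\pi^6 e^6/729$:

```python
import math

pi = math.pi
e, k = 1.0, 1.0
a4 = 6400.0 * pi**6 / 729.0      # the special quartic coupling
v = math.sqrt(40.0 * pi / 3.0)   # common zero, v_c^2 = 40*pi*e^2/3

# numerator of t_c in eq. (Tc4Q); individual terms are of order 1e11 here,
# so a residual of order 1e-4 reflects pure floating-point cancellation
num = (8000*pi**2*a4*e**4*k**2 - 12000*pi*a4*e**2*k*v**2
       + 4500*a4*k**2*v**4 + 180*pi**4*e**2*v**8 - 27*pi**3*k*v**10)

# denominator bracket of t_c in eq. (Tc4Q); terms are of order 1e8
den = -400*pi*a4*e**2*k**2 + 300*a4*k*v**2 - pi**3*v**8
```

Both residuals are zero up to rounding error, some eleven orders of magnitude below the individual terms, confirming the finite limit of $t_c$ at this coupling.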
However the corresponding $T_c$ and $v_c$ become imaginary in the case of $k=-1$, so for hyperbolic black holes this singularity is absent. For $k=1$, if $\beta_3=0$ and $\alpha_4= \frac{6400}{729} \pi^6 e^6 $, the resulting values of $T_c,~v_c$ correspond to a critical point with standard critical exponents (see \reef{exponents} and the discussion following), and the phase transition in the vicinity of this point is a standard first order VdW transition similar to what is depicted in figure \ref{PT4d}. In studying the behaviour at this point, we note that the critical point equations \eqref{criteqs} need to be solved for these specific parameter values, instead of using \reef{Tc4Q} and \reef{Vc4Q}, since the latter become invalid when the denominator of $t_c$ vanishes. If both couplings are non-zero, again the denominator of $T_c$ in \eqref{Tc4} is quartic in $v^2$, and a similar procedure can be employed to write a formula for $\alpha_4$ in terms of $\beta_3$ and $e$. Doing so, we find that for any values of the parameters the only solution is $\alpha_4=0$; however, in cubic gravity the critical temperature has no singularity in four dimensions \cite{mir:2018mmm}. In other words, this apparent thermodynamic singularity occurs only in Einstein-quartic GQG (for which the cubic coupling vanishes).
\begin{table}[ht] \centering \begin{tabular}{c c c c c} \hline Color & Number of Critical Points & $\gamma^2$ & Entropy & $f_{\infty}$ \\ [0.5ex] \hline \textcolor{green}{Green} & 1 & + & + & + \\ \textcolor{ForestGreen}{Dark Green} & 2 & + & + & + \\ \textcolor{orange}{Orange} & 3 & + & + & + \\ \textcolor{cyan}{Blue} & 1 & $-$ & + & + \\ \textcolor{blue}{Dark Blue} & 2 & $-$ & + & + \\ \textcolor{purple}{Purple} & 3 & $-$ & + & +\\ \textcolor{lightbrown}{Brown} & 1 & + & $-$ & +\\ \textcolor{red}{Red} & 2 & + & $-$ & +\\ Black & 1 & $-$ & $-$ & +\\ \textcolor{yellow}{Yellow} & 2 & $-$ & $-$ & +\\ \textcolor{brown}{Light Brown} & 1 & $\times$ & $+/-$ & $\times$\\ \textcolor{grey}{Grey} & 0 & $\times$ & $\times$ & $\times$ \\ [1ex] \hline \end{tabular} \caption{ \textbf{Color Coding for Phase Space of Constraints}: This table gives the colour code for figures \ref{domain}, \ref{domain5d} and \ref{domain6d}, which illustrate how many critical points are present at each point in the parameter space ($\beta_3$,$\alpha_4$). For completeness we consider both the existence and signs of $\gamma^2$, the entropy, and $f_{\infty}$. It is only when the signs of all three are positive that we get physical critical points. The `$\times$' for $f_{\infty}$ means that Eq. \reef{asympf} does not have any positive real solution; in other cases it means the corresponding critical quantities are either negative or not real-valued. } \label{table:nonlin} \end{table} Due to the complexity of the equation of state, it is not possible to find an explicit bound on the couplings and the electric charge by applying the positivity constraints \reef{nghost} and \reef{gamma21}. However we can numerically investigate whether these physical constraints are satisfied whilst varying the cubic and quartic couplings for a given fixed charge. The corresponding pattern is given in figure~\ref{domain}, where we also check for positivity of the entropy \reef{sratio}.
\begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{domain-cq-4d-3.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{domain-cq-4d-k=-1-4.pdf} \\ \includegraphics[scale=.3]{domain-cq-4d-k=1q=0.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{domain-cq-4d-k=-1q=0.pdf} \\ \end{tabular} \caption{\textbf{Number of Critical Points as a function of the GQG couplings in $d=4$} (colour online). \textit{Top Left}: The number of critical points for fixed electric charge ($e=1)$ as a function of $\alpha_4$ and $\beta_3$ for $k=1$. The physically admissible region, shown in green, has only a single physical critical point for each value of the coupling. For $\gamma^2<0$ there are either one (blue), two (dark blue) or three (purple) critical points for a given value of $\beta_3$ and $\alpha_4$. \textit{Top right}: The analogous plot for $d=4$ and $k=-1$ but with $e^2=0.1$. Grey regions have no critical points. \textit{Bottom Left}: The plot for $d=4$ and $k=1$ but with zero charge. The red and brown regions exhibit two and one critical points with negative entropy, respectively. \textit{Bottom Right}: The critical regions for $d=4$ and $k=-1$ for the chargeless case. } \label{domain} \end{figure*} The physical critical domain is the part of the parameter space for which the physical constraints discussed in section \ref{constraints} are satisfied, with the property that a phase transition occurs. Looking at the top left part of figure~\ref{domain}, for $k=1$, this region has $\beta_3 < 0$ (or $\mu<0$) for all values of $\alpha_4$, with the exception of the axis $\beta_3 = 0$, where only for $\alpha_4\ge 0$ are the associated phase transitions physical (the positive axis is green). The point $\beta_3 = \alpha_4 = 0$ is also green, recovering the result that in the limit of vanishing cubic and quartic couplings, a charged AdS black hole still has physical critical points \cite{Kubiznak:2012wp}. On the vertical axis with $\alpha_4<0$, we have $\gamma^2<0$.
The entire physical region has only one critical point, whereas in the unphysical region it is possible to have either one, two, or three critical points depending on the given values of $(\beta_3,\alpha_4)$. As we make the fixed value of $e$ smaller (see for example the top right diagram in figure~\ref{domain}), we find that there are at most two possible critical points but only one of them is physical. Summarizing, in the presence of charge, critical points exist for all $(\beta_3<0,\alpha_4)$ (see figure~\ref{domain}). The center of the parameter space is the $k=1$ Reissner-Nordstrom-AdS solution for which all constraints are satisfied in any dimension. Even for $e=0$, there are regions in the $(\beta_3,\alpha_4)$ plane containing critical points (bottom left in figure~\ref{domain}), quite unlike the situation in Einstein gravity, where there are no critical points for uncharged black holes. However there are also large regions of parameter space with no critical points. The center of the parameter space is the $k=1$ Schwarzschild-AdS solution for which the constraints are satisfied but which has no critical points. For $k=-1$, we see from the upper right diagram in figure~\ref{domain} that single physical critical points exist provided both the $\beta_3$ cubic and $\alpha_4$ quartic couplings are nonzero and negative. To our knowledge, this phenomenon has not been previously observed for hyperbolic black holes in four dimensions. No critical points exist if $\alpha_4>0$. The point at the origin of the parameter space is also not a critical point, even in the presence of charge. Whenever critical points exist we find that the critical exponents\footnote{ The critical exponents quantify how physical quantities behave in the vicinity of a critical point \cite{Kubiznak:2012wp}. For $t=T/T_c-1$, the exponent $\alpha$ characterizes the behaviour of the specific heat, while keeping volume constant \begin{eqnarray} C_V=T \frac{\partial S}{\partial T}\Big|_V\propto |t|^{-\alpha}.
\nonumber \end{eqnarray} The exponent $\beta$ characterizes the difference between the volume of a large black hole $V_l$ and the volume of a small black hole $V_s$ along an isotherm \begin{eqnarray} V_l-V_s\propto |t|^{\beta}. \nonumber \end{eqnarray} The behaviour of the isothermal compressibility $\kappa_T$ is given by the exponent $\gamma$ \begin{eqnarray} \kappa_T=-\frac{1}{V} \frac{\partial V}{\partial P}\Big|_T\propto |t|^{-\gamma}. \nonumber \end{eqnarray} The exponent $\delta$ characterizes the following difference on the critical isotherm $T=T_c$ \begin{eqnarray} |P-P_c|\propto |V-V_c|^{\delta}. \nonumber \end{eqnarray} } are \begin{eqnarray} \alpha=0, \quad \beta=\frac{1}{2}, \quad \gamma=1 ,\quad \delta=3, \label{exponents} \end{eqnarray} which are the standard values from mean field theory, even when both the numerator and denominator vanish in \eqref{Tc4Q}. These are typically obtained by considering the equation of state near the critical point \cite{Gunasekaran:2012dq}, writing \begin{eqnarray} v = v_c(\phi + 1) \, , \quad T = T_c (\tau + 1), \end{eqnarray} and expanding in powers of $(\phi,\tau)$. Since here we do not have a closed form for the critical quantities, we insert numerical values for the parameters into the equation of state to obtain the critical values $T_c$ and $v_c$, and find \begin{eqnarray} \frac{P}{P_{c}}&=&1+A \tau-B \tau \phi-C \phi^3+\cO(\tau \phi^2,\phi^4), \label{expcoeff} \end{eqnarray} yielding \eqref{exponents}. We explicitly illustrate the occurrence of the phase transition by drawing a $P-v$ graph in figure~\ref{PT4d} for parameters for which there is a physical critical point, setting $e=1$, $\beta_3=-4 e^4$ and $\alpha_4=5 e^6$. We see clear Van der Waals behaviour, with two distinct phases for $T<T_c$ that coalesce at $T=T_c$ and become indistinguishable for $T>T_c$. For low enough temperatures, the curve extends to negative pressures; however only positive values of the pressure are physical.
The coexistence line is plotted in the right half of figure~\ref{PT4d}, illustrating the critical point at the end of a line of first-order phase transitions between large and small black holes. The Gibbs free energy as a function of temperature is shown in figure~\ref{GTd4}, exhibiting the typical swallowtail characteristic of Van der Waals behaviour. It is notable that for quite small values of the pressure we still observe a swallowtail shape, whose size grows rapidly. Computing the specific heat \begin{eqnarray} C_P = - T \frac{\partial ^2 G}{\partial T^2} \, , \end{eqnarray} we find that two stable branches of black holes exist, with the physical one at the global minimum of $G$. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.25]{P-v-cq-4d.pdf}& \quad \quad\quad\quad \includegraphics[scale=.25]{P-T-cq-4d.pdf} \\ \end{tabular} \caption{{\bf Van der Waals phase transition in four dimensions} (colour online): \textit{Left}: The graph of pressure versus volume at various fixed temperatures for $d=4$ and $k=1$ shows the occurrence of a first order phase transition with VdW behaviour. The dashed line has $T=T_c$, the solid red lines with $T<T_{c}$ are $T=0.8 T_c$ and $0.65 T_c$, and the solid red lines with $T>T_{c}$ are $T=1.3 T_c, 1.7 T_c$. \textit{Right}: The coexistence line in the pressure/temperature plane. In both graphs $e=1$, $\beta_3=-4 e^4$ and $\alpha_4=5 e^6$ with $T_c e\approx 0.03448$; similar behaviour occurs for any other values chosen from the physical domain given in figure~\ref{domain}. Appropriate factors of the electric charge parameter $e$ are employed to make the relevant quantities dimensionless. } \label{PT4d} \end{figure*} \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{G-T-4d-1cq.pdf} &\quad\quad\quad\quad \includegraphics[scale=.3]{G-T-4d-d01cq.pdf} \\ \end{tabular} \caption{\textbf{Free energy} (color online).
We select $e=1$, $\beta_3=-4 e^4$ and $\alpha_4=5 e^6$; physical conditions are fulfilled with $P_c e^2\approx 0.00210$. {\it Left:} A plot of the Gibbs free energy versus temperature for $d=4$ and $k=1$, for various values of the pressure: $P =1.2 P_c$ (dotted, blue curve), for $P =P_c$ (dotted, black curve), for $P =0.6 P_c$ and $P =0.2 P_c$ (solid black and red lines). {\it Right:} Plot for $P =0.01 P_c$. In each plot, the red lines indicate the parts of the curves for which the specific heat is negative; quantities are rescaled by appropriate values of the electric charge $e$ to obtain dimensionless quantities. } \label{GTd4} \end{figure*} For the allowed regions of parameter space in figure~\ref{domain} for $k=-1$, the phase diagrams are qualitatively the same as in figures~\ref{PT4d} and~\ref{GTd4}. \subsection{Critical behaviour in five dimensions} In this section we consider five dimensional solutions. The equation of state becomes \begin{eqnarray} P =\frac{T}{v}-\frac{2 k}{3 \pi v^2}+\frac{\beta_3 T^3}{v^3}+\frac{6 \beta_3 k^2 T}{35 \pi ^2 v^5}+\frac{e^2}{v^6}-\frac{244 \beta_3 k}{945 \pi ^3 v^6}-\frac{16 \alpha_4 k^2 T^2}{3 \pi ^2 v^6 }, \end{eqnarray} and the critical temperature is \begin{eqnarray} T_c&=&\frac{1}{3}\Big(490 \pi ^5 \beta_3 k v_c^{11}+\pi \left(6615 \pi^5 \beta_3 e^2-2128 \pi^2 \beta_3^2 k\right)v_c^7+\pi \big(1003520 \alpha_4^2 k+1464 \beta_3^3 k\nonumber\\&&\left.-5670 \pi^3 \beta_3^2 e^2 k^2\big)v_c^3\Big)\Big/\Big(245 \pi^6 \beta_3 v_c^{12}-420 \pi^4 \beta_3^2 k^2 v_c^8+7840 \pi^3 \alpha_4 \beta_3 k v_c^6\right.\nonumber\\ &&\left.+ \left(313600 \pi^2 \alpha_4^2 k^2+180 \pi^2 \beta_3^3 k^2\right)v_c^4+ \left(105840 \pi ^4 \alpha_4 \beta_3 e^2 k^2-27328 \pi \alpha_4 \beta_3^2 k\right)v_c^2\right. 
\nonumber\\ &&\left.+53760 \alpha_4^2 \beta_3 k^2\Big).\right.\label{5dTceq} \end{eqnarray} The critical volume satisfies the following equation \begin{eqnarray} -30240 \pi \alpha_4 k^2 T_c^2+ \left(540 \pi \beta_3 k^2 v_c-630 \pi^3 v_c^5\right)T_c-1464 \beta_3 k+5670 \pi^3 e^2+420 \pi^2 k v_c^4=0 \nonumber\\ \label{5dvceq} \end{eqnarray} and, as before, finding an explicit closed form for both the critical temperature and volume is not feasible. However for vanishing cubic coupling there is a considerable simplification; solving \reef{dPd2p} with $\beta_3=0$ for the corresponding critical temperature \reef{5dTceq}, we find \begin{eqnarray} t_c=\frac{16 k}{15 \pi v_c}, \label{tc0} \end{eqnarray} where $v_c$ satisfies \begin{eqnarray} -30k\pi^3v_c^6+675e^2\pi^4v_c^2-4096\alpha_4 =0,\label{nuc5d} \end{eqnarray} and we must have $k=1$ so that $t_c > 0$. This equation is a cubic polynomial in $v_c^2$ and can be solved exactly. There are at most two physically acceptable real solutions for any given choice of parameters \footnote{ In general, we found that in the five dimensional quartic theory (with $\beta_3=0$), there are at most two physical critical points.}. Figure \ref{domain5d} plots the number of critical points as a function of $(\beta_3,\alpha_4)$ with fixed charge. For $k=1$, unlike in 4 dimensions, we see that only if both couplings are non-zero do we get two physical critical points in a certain region of parameter space, shown in dark green. The occurrence of two physical critical points for spherical black holes in five dimensions has to our knowledge not been seen previously. On the axes $\beta_3<0,~\alpha_4=0$ and $\beta_3=0$ ({\it i.e.,}\ on the vertical axis) only for positive values of $\alpha_4$ (or $\lambda < 0$) greater than some specific lower bound for the quartic coupling are the critical points physical, whereas for $\beta_3>0,~\alpha_4=0$ and $\beta_3=0,~\alpha_4<0$ they are unphysical, having $\gamma^2<0$.
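Returning to the vanishing-cubic case, since \reef{nuc5d} is a cubic in $v_c^2$ it is also easy to solve numerically; the sketch below (sample values $e=1$, $k=1$, $\alpha_4=1$ of our own choosing, not taken from the text) extracts the positive critical volumes and the corresponding $t_c$ from \reef{tc0}:

```python
import math
import numpy as np

pi = math.pi
e, k, a4 = 1.0, 1.0, 1.0   # sample values, chosen for illustration

# eq. (nuc5d) as a cubic in x = v_c^2:
#   -30*k*pi^3 * x^3 + 675*e^2*pi^4 * x - 4096*a4 = 0
roots = np.roots([-30.0*k*pi**3, 0.0, 675.0*e**2*pi**4, -4096.0*a4])
vc = [math.sqrt(r.real) for r in roots if abs(r.imag) < 1e-10 and r.real > 0]
tc = [16.0*k/(15.0*pi*v) for v in vc]   # eq. (tc0)
```

For these sample values two positive critical volumes are found (the third root of the cubic is negative and hence discarded), consistent with the statement above that there are at most two physically acceptable solutions.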
Physical critical points exist for most of the region $\beta_3<0$, except for small values of $|\beta_3|$ and large enough values of $|\alpha_4|$, where $\gamma^2<0$. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{domain-cq-5d-4.pdf}&\quad \quad\quad\quad \includegraphics[scale=.3]{domain-cq-5d-k=-1-v2.pdf} \\ \includegraphics[scale=.3]{domain-cq-5d-q=0.pdf}&\quad \quad\quad\quad \includegraphics[scale=.3]{domain-cq-5d-q=0-k=-1.pdf} \\ \end{tabular} \caption{\textbf{Number of Critical Points as a function of couplings in $d=5$} (colour online). \textit{Top Left}: For $k=1$ and $e=1$ we see a broad (green) region in $(\beta_3,\alpha_4)$ parameter space having only one physical critical point, with a band (dark green) where two physical critical points exist. There are two and one critical points with $\gamma^2<0$ in the dark blue and blue regions, respectively. \textit{Top right}: For $k=-1$ and $e^2=0.1$ the only region having (single) physical critical points is the green band in the lower-left quadrant; the light brown region has a single critical point with an unphysical imaginary asymptotic value for $f_{\infty}$; the grey regions do not have critical points, and in the white region the mass is negative. \textit{Bottom left}: For $k=1,~e=0$, the brown and red points respectively demonstrate the existence of one and two critical points with $S<0$; \textit{Bottom right}: For $k=-1,~e=0$, there is still an allowed region (green) of physical critical points; in the white region the mass is negative. } \label{domain5d} \end{figure*} The $\alpha_4 = 0$ case is discussed in \cite{mir:2018mmm}. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{P-v-cq-5d.pdf}&\quad \quad\quad\quad \includegraphics[scale=.3]{P-T-5d-cq-1v2.pdf} \\ \end{tabular} \caption{\textbf{Van der Waals and reverse Van der Waals phase transition in $d=5$ and $k=1$.} (color online). We choose $e=1$, $\beta_3=-4/5 e^2$ and $\alpha_4=-4/5 e^3$.
Dimensionless critical quantities are $T_c \sqrt{e}\approx 0.11730$, $P_c e\approx 0.01686$, $T_{\overline{c}} \sqrt{e}\approx 1.28065$, and $P_{\overline{c}} e\approx 0.40771$. \textit{Left:} The behaviour of pressure versus volume for temperatures in the neighbourhood of the critical point at smaller $T=T_c$. We depict the critical curve (dashed red line), $T\approx 0.65639 T_c$ (solid red line), and $T=1.8 T_c$ (dotted red line). We also plot the behaviour in the neighbourhood of the second critical point at larger $T=T_{\overline{c}}$. We depict the critical curve (dashed black line), $T=0.8 T_{\overline{c}}$ (dotted black line), and $T=1.1 T_{\overline{c}}, 1.24282 T_{\overline{c}}, 1.27 T_{\overline{c}}$ (solid black lines). For $T > 1.24282 T_{\overline{c}}$ the pressure becomes negative and the spacetime is no longer asymptotically AdS. \textit{Right:} Coexistence curves for five dimensional $k=1$ charged black holes, with standard VdW behaviour at the lower left and reverse VdW behaviour in the upper right. The critical points are the green points. } \label{PT5dk1} \end{figure*} \begin{figure*}[h] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{G-T-cq-5d-1.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{G-T-qc-5d-d5.pdf}\\ \includegraphics[scale=.3]{G-T-qc-5d-2.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{G-T-qc-5d-d99.pdf} \\ \end{tabular} \caption{\textbf{Free energy as a function of temperature for $d=5,~k=1$} (color online). We set $e=1$, $\beta_3=-4/5 e^2$ and $\alpha_4=-4/5 e^3$. Dimensionless critical quantities are $T_c \sqrt{e}\approx 0.11730$ and $P_c e\approx 0.01686$, $T_{\overline{c}} \sqrt{e}\approx 1.28065$ and $P_{\overline{c}} e\approx 0.40771$, and appropriate powers of the electric charge parameter $e$ are used to render the relevant quantities dimensionless. 
In each plot, red lines depict the parts of the curves for which the specific heat is negative, blue lines indicate negative entropy, and purple lines indicate that both the specific heat and entropy are negative. \textit{Top left:} The low temperature region (for which there is a standard VdW phase transition), with $P =1.2 P_c$ (dotted, blue curve), $P =P_c$ (dotted, black curve) and $P =0.6 P_c, 0.2 P_c$ (solid, black and red curves) each plotted. The remaining graphs pertain to the high temperature region (for which there is a reverse VdW phase transition). \textit{Top right:} $P =0.5 P_{\overline{c}}$, \textit{Bottom left:} $P = P_{\overline{c}}$, \textit{Bottom right:} $P = 0.99 P_{\overline{c}}$. } \label{GTd5} \end{figure*} \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{P-v-cq-5d-km1.pdf}&\quad \quad\quad\quad \includegraphics[scale=.3]{p-T-cq-5d-km1.pdf} \\ \end{tabular} \caption{ \textbf{Reverse Van der Waals phase transition in $d=5,~k=-1$} (color online). We choose $e^2=1/10$, $\beta_3=-4 e^2$ and $\alpha_4=-6 e^3$. Dimensionless critical quantities are $T_c \sqrt{e}\approx 0.42157$ and $P_c e\approx 0.07772$. \textit{Left:} The behaviour of pressure versus volume for $d=5$ and $k=-1$ for temperatures at the critical point $T=T_c$ (dashed black line), $T\approx 0.8 T_c$ (solid black line), $T=0.2 T_c$ (dotted black line), $T=1.1 T_c$ (long dashed black line), and $T=1.2 T_c$ (dash-dotted blue line). \textit{Right:} Coexistence curves for five dimensional $k=-1$ charged black holes are depicted, with reverse VdW behaviour. The critical points are denoted by the green points.
} \label{PT5dkm1} \end{figure*} \begin{figure*}[h] \centering \begin{tabular}{ccc} \includegraphics[scale=.25]{G-T-cq-5d-km1.pdf}& \includegraphics[scale=.25]{G-T-cq-5d-km1-d95.pdf}& \includegraphics[scale=.25]{G-T-cq-5d-km1-d9.pdf} \\ \includegraphics[scale=.25]{G-T-cq-5d-km1-d8.pdf}& \includegraphics[scale=.25]{G-T-cq-5d-km1-d6.pdf}& \includegraphics[scale=.25]{G-T-cq-5d-km1-d2.pdf} \\ \end{tabular} \caption{ \textbf{Free energy as a function of temperature for $d=5$ and $k=-1$} (color online). We set $e^2=1/10$, $\beta_3=-4 e^2$ and $\alpha_4=-6 e^3$, with dimensionless critical quantities $T_c \sqrt{e}\approx 0.42157$ and $P_c e\approx 0.07772$. Red lines correspond to negative specific heat and green lines denote negative mass. \textit{Top left:} A plot of the Gibbs free energy for $P =1.2 P_c$ (dotted, black curve) and $P =P_c$ (solid black, red, and green curve). \textit{Top center}: For $P =0.95 P_c$ a reverse VdW phase transition occurs. \textit{Top right:} $P =0.9 P_c$, \textit{Bottom left:} $P =0.8 P_c$, \textit{Bottom center:} $P =0.6 P_c$, \textit{Bottom right:} $P =0.2 P_c$. The latter sequence shows that the swallowtail develops unphysical branches and then vanishes entirely. } \label{GTd5km1} \end{figure*} Summarizing, we have observed for the first time the occurrence of two physical critical points for spherical ($k=1$) black holes and one critical point for hyperbolic ($k=-1$) black holes. We must have both couplings nonzero for this to take place. Again we see that even if $e=0$ there are physical critical points. For $k=1$ we get regions with either one or two physical critical points, and only one for $k=-1$. In the latter case, there are some regions having negative mass (white region) and both couplings must be non-zero in order to get physical critical points.
For regions of parameter space having only a single critical point, five dimensional spherical black holes ($k=+1$) have a first order VdW transition similar to that in the four dimensional case. The critical exponents are the mean field theory values. However in regions of parameter space having two physical critical points, there is new behaviour. We illustrate this in figure~\ref{PT5dk1}, which shows that there are two first order phase transitions. The transition at $T= T_c$ is standard VdW behaviour. But the second phase transition at $T = T_{\overline{c}}$ is that of `reverse VdW' behaviour: it is a transition from one phase for $T < T_{\overline{c}}$ to two distinct small/large black hole phases for $T > T_{\overline{c}}$. Note that for a sufficiently large temperature the pressure becomes negative and the asymptotic structure of the spacetime is no longer AdS. Consequently there is an upper bound on the temperature of AdS black holes, as illustrated in figure~\ref{PT5dk1}. Note that curves having $P<0$ over a finite range of $T$ can be given physical meaning via the equal-area law \cite{Smailagic.2013}. Referring to figure~\ref{PT5dk1}, we see that although $P<0$ corresponds to a different asymptotic structure from that of AdS, the equal area law implies that $P$ never actually attains these negative values, but rather remains constant and positive as the phase transition takes place. The coexistence line of the two distinct small/large phases has a critical point at a minimal value of $T$, in contrast to that of a standard VdW phase transition. This phenomenon has been previously observed for black branes~\cite{Hennigar:2017umz} and in cubic gravity \cite{mir:2018mmm}. The existence of a standard VdW phase transition followed by a reverse VdW transition at higher temperature is shown in figure~\ref{PT5dk1}.
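Maxwell's equal-area law invoked here is straightforward to realize numerically. The following sketch (Python with SciPy; the function names and the sample reduced temperature $t=0.9$ are our own illustrative choices) carries out the construction for the textbook Van der Waals fluid in reduced variables, $p = 8t/(3v-1)-3/v^2$, rather than for the black hole equation of state itself.

```python
from scipy.optimize import brentq
from scipy.integrate import quad

T_RED = 0.9  # sample reduced temperature below the critical point (our choice)

def p(v, t=T_RED):
    """Van der Waals isotherm in reduced variables."""
    return 8*t/(3*v - 1) - 3/v**2

def dpdv(v, t=T_RED):
    return -24*t/(3*v - 1)**2 + 6/v**3

# Spinodal points: local minimum and local maximum of the oscillating isotherm
v_min = brentq(dpdv, 0.4, 1.0)
v_max = brentq(dpdv, 1.0, 3.0)
p_lo, p_hi = p(v_min), p(v_max)

def area_mismatch(P):
    # Signed area between the isotherm and the horizontal line p = P,
    # taken between the smallest and largest volumes where they intersect.
    v_l = brentq(lambda v: p(v) - P, 1/3 + 1e-6, v_min)
    v_g = brentq(lambda v: p(v) - P, v_max, 50.0)
    return quad(lambda v: p(v) - P, v_l, v_g)[0]

# Maxwell's construction: the coexistence pressure makes the two lobes
# of the oscillation enclose equal areas.
P_coex = brentq(area_mismatch, p_lo + 1e-6, p_hi - 1e-6)
print(P_coex)
```

The same isotherm-by-isotherm construction applies to the black hole equations of state considered in this paper; only the function $p(v)$ changes.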
One might anticipate that with an appropriate choice of parameters these two critical points (illustrated in the top right diagram of figure~\ref{PT5dk1}) could merge, yielding an isolated critical point (see the discussion of the six dimensional case in the next section). We find that this only occurs for either negative entropy and/or negative mass; these unphysical conditions persist in a neighbourhood of the merged critical point. In figure~\ref{GTd5}, we illustrate the behaviour of the Gibbs free energy in both the low-temperature region containing a standard VdW transition and in the high-temperature region containing a reverse VdW transition. In the former case, we obtain the standard swallowtail behaviour for $P < P_c$. In the latter case, for $P$ sufficiently smaller than $P_{\overline{c}}$ there are no phase transitions, as the upper right part of figure~\ref{GTd5} indicates. As $P \to P_{\overline{c}}$ we obtain swallowtail behaviour, shown in the lower right part of figure~\ref{GTd5}. For $P =P_{\overline{c}}$, in addition to negative specific heat we get regions with negative entropy (shown by the blue curve) and with both negative specific heat and negative entropy (shown by the purple curve); these are all unstable. For larger temperature the black hole solutions become stable (black line). Figure~\ref{GTd5} shows that in addition to a reverse VdW transition, at high temperatures there are three black hole solutions with positive specific heat, with one having a minimal Gibbs free energy. For the $k=-1$ hyperbolic black hole in $d=5$ we get only a reverse VdW transition, in which higher temperatures have two distinct phases, as depicted in figure \ref{PT5dkm1}. The Gibbs free energy in figure \ref{GTd5km1} exhibits swallowtail behaviour for pressures a bit less than the critical pressure.
However for larger values of pressure one of the branches corresponds to solutions with negative mass (depicted by the green curve) and the phase transition is no longer physical. For even lower pressure (bottom left diagram in figure~\ref{GTd5km1}) we see that unstable black holes with negative specific heat have lower free energy than those with positive specific heat. It is reasonable to expect that there is now a zeroth order phase transition between the two black curves in this figure. For even smaller $P$, the unphysical parts of the branches shrink and (apart from a small red region) become stable. As in the $k=1$ case, there are again three black hole solutions with positive specific heat, with one having a minimal Gibbs free energy. \subsection{Critical behaviour in six dimensions} Turning now to six dimensions, we note from \reef{asympf} that for the linearized field equations the contribution of the cubic term drops out\footnote{In eight dimensions the quartic term drops out.}~\cite{Bueno:2016xff}. This yields some simplification, but the analysis is still somewhat complicated. The equation of state takes the following form \begin{eqnarray} P=\frac{T}{v}-\frac{3 k}{4 \pi v^2}+\frac{2 \beta_3 T^3}{v^3}+ \left(\frac{3 \beta_3 k}{2 \pi v^4}+\frac{3 \alpha_4 k^2}{2 \pi ^2 v^6}\right)T^2-\frac{\alpha_4 T^4}{v^4}-\frac{\beta_3 k}{8 \pi ^3 v^6}+\frac{e^2}{v^8}, \end{eqnarray} with the explicit expression \begin{eqnarray} T_c&=&\Big(3\big((36\pi^7\beta_3^3k+24\alpha_4^2\pi^7k){v_c}^{18} +180\alpha_4\pi^6{v_c}^{16}\beta_3^2k^2+(162\pi^5{\beta_3}^4k +276\alpha_4^2\pi^5{\beta_3}k){v_c}^{14}\nonumber\\ &&\left.+(256\alpha_4^2\pi^8e^2 +1476{\alpha_4}\pi^4{\beta_3}^3k^2+960\pi^8{\beta_3}^3e^2-144 {\alpha_4}^3k^2\pi^4){v_c}^{12}+(2880{\alpha_4}\pi^7{\beta_3}^2k e^2\right.\nonumber\\ &&\left.
+2988\pi^3{\beta_3}^2{\alpha_4}^2k-108\pi^3{\beta_3}^5k){v_c}^{10} +(-1152{\alpha_4}^3k^2\pi^2{\beta_3}-459{\alpha_4}k^2\pi^2{\beta_3}^4 \right.\nonumber\\ &&\left.+2304\pi^6{\beta_3}^4k^2e^2-384\pi^6{\beta_3}{\alpha_4}^2e^2k^2){v_c}^8 +(-3888{\alpha_4}^4k\pi+11040{\alpha_4}k\pi^5{\beta_3}^3e^2\right.\nonumber\\ &&\left.-3072 {\alpha_4}^3k\pi^5e^2){v_c}^6+(612{\alpha_4}^3k^2{\beta_3}^2+7776\pi^4 {\beta_3}^2{\alpha_4}^2k^2e^2+10240{\alpha_4}\pi^8{\beta_3}^2e^4){v_c}^4 \right.\nonumber\\ &&\left.-12288{\alpha_4}^3k\pi^3{v_c}^2{\beta_3}e^2-10368{\alpha_4}^4k^2\pi^2e^2 -16384{\alpha_4}^3k^2\pi^6e^4\big)\Big)\right.\nonumber\\ &&\left. \times 1\Big/\Big(\pi {v_c}^3\big((72{\alpha_4}^2\pi^7+144 \pi^7{\beta_3}^3){v_c}^{16}+612{v_c}^{14}\pi^6{\beta_3}^2k{\alpha_4}\right.\nonumber\\ &&\left.+(324\pi^5 {\beta_3}^4k^2+792{\alpha_4}^2\pi^5{\beta_3}k^2){v_c}^{12}+(3096\pi^4{\beta_3}^3 k{\alpha_4}-864\pi^4{\alpha_4}^3k){v_c}^{10}\right.\nonumber \end{eqnarray} \begin{eqnarray} &&\left. +(7146\pi^3{\alpha_4}^2k^2 {\beta_3}^2+192\pi^7{\beta_3}^2e^2{\alpha_4}+486\pi^3{\beta_3}^5k^2){v_c}^8 +(3072\pi^6{\alpha_4}^2k{\beta_3}e^2\right.\nonumber\\ &&\left.-1512\pi^2{\alpha_4}^3k{\beta_3} -8640\pi^6{\beta_3}^4e^2k+2187\pi^2{\beta_3}^4k{\alpha_4}){v_c}^6+ (-9720\pi{\alpha_4}^4k^2\right.\nonumber\\ &&\left.-38880\pi^5{\beta_3}^3e^2{\alpha_4}k^2+36 \pi{\alpha_4}^2k^2{\beta_3}^3-9216\pi^5{\alpha_4}^3k^2e^2){v_c}^4\right. \\ &&\left.+ (-1536\pi^4{\alpha_4}^2k{\beta_3}^2e^2-2916{\beta_3}^2k{\alpha_4}^3) {v_c}^2+51840\pi^3{\beta_3}e^2{\alpha_4}^3k^2+16384{\alpha_4}^2\pi^7 {\beta_3}e^4\big)\Big) \right.\nonumber \end{eqnarray} for the critical temperature and \begin{eqnarray} &&(-24{v_c}^{10}\pi^5{\beta_3}{\alpha_4}-72{v_c}^8\pi^4{\beta_3}^3k-324{v_c}^6\pi^3 {\beta_3}^2{\alpha_4}k^2+432{\alpha_4}^3k^4\pi {v_c}^2){T_c}^2\nonumber\\ &&\left.
+(-12{v_c}^5\pi^2{\beta_3}^2{\alpha_4}k+24{v_c}^9\pi^4{\beta_3}{\alpha_4}k +256{v_c}^3\pi^5{\beta_3}{\alpha_4}e^2+24{v_c}^{11}\pi^5{\beta_3}^2 \right.\nonumber\\ &&\left.-72{\alpha_4}^2k^2\pi^3{v_c}^7)T_c+768{\alpha_4}^2k^2e^2\pi^3 -36{\alpha_4}^2k^3{\beta_3}{v_c}^2-18{v_c}^{10}\pi^4{\beta_3}^2k \right.\nonumber\\ &&\left.+27{v_c}^6\pi^2{\beta_3}^3k-480{v_c}^4\pi^5{\beta_3}^2e^2 +72{\alpha_4}^2k^3\pi^2{v_c}^6 =0 \right. \end{eqnarray} being the equation yielding the critical volume. Setting the cubic coupling to zero, the associated critical temperature is \begin{eqnarray} t_c&=&\frac{1}{3}\Big(-1296 \pi \alpha_4^2 e^2 k^2-2048 \pi^5 \alpha_4 e^4 k^2+ \left(-486 \alpha_4^2 k-384 \pi^4 \alpha_4 e^2 k\right)v_c^6\nonumber\\ &&\left.+ \big(32 \pi^7 e^2 -18\pi^3 \alpha_4 k^2\big)v_c^{12}+3 \pi^6 k v_c^{18}\Big)\right.\nonumber\\ &&\left.\times 1\Big/ \Big(\pi v_c^7 \big(-135 \alpha_4^2 k^2-128 \pi^4 \alpha_4 e^2 k^2-12 \pi^3 \alpha_4 k v_c^6+\pi^6 v_c^{12}\big)\Big), \right.\label{tcd6} \end{eqnarray} and the critical volume obeys the relation \begin{eqnarray} 36 \alpha_4 k^2 t_c^2 v_c^2+64 \pi ^2 e^2+6 \pi k v_c^6-6 \pi ^2 t_c v_c^7 = 0 \; . \end{eqnarray} There is a singularity in the critical temperature at \begin{eqnarray} v_c^6=\frac{6 \alpha_4 k}{\pi^3}+\frac{k \sqrt{ 171 \alpha_4^2+128 \pi^4 \alpha_4 e^2 }}{\pi^3}; \end{eqnarray} however, it is possible to remove this singularity via a suitable choice of $\alpha_4$ in \reef{tcd6}. For $k=\pm1$ the solutions to these equations indicate that for $\alpha_4=-130.39277\ e^2,\ -225.86056\ e^2$ this takes place for any value of the electric charge. However, none of these values yield physical critical points; if $\alpha_4<0$ we get $\gamma^2<0$ in the first case, and the critical temperature or critical volume becomes negative in the second case. We plot in figure \ref{domain6d} the possible critical points in the $(\beta_3,\alpha_4)$ plane.
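As a sanity check on the quoted singularity, note that the denominator of $t_c$ in \reef{tcd6} is a quadratic in $v_c^6$, so its roots can be recovered symbolically. A minimal sketch (Python with SymPy; variable names and the sample values are ours):

```python
import sympy as sp

a4, e, u = sp.symbols('alpha_4 e u', real=True)
k = 1  # sample horizon curvature; k = -1 works the same way

# Denominator of t_c, viewed as a quadratic in u = v_c**6
# (the overall prefactor pi*v_c**7 never vanishes and is dropped):
den = -135*a4**2*k**2 - 128*sp.pi**4*a4*e**2*k**2 - 12*sp.pi**3*a4*k*u + sp.pi**6*u**2

# Singularity location quoted in the text:
target = 6*a4*k/sp.pi**3 + k*sp.sqrt(171*a4**2 + 128*sp.pi**4*a4*e**2)/sp.pi**3

roots = sp.solve(sp.Eq(den, 0), u)
sample = {a4: 2, e: 1}  # sample point keeping the square root real
vals = [complex(r.subs(sample).evalf()) for r in roots]
tgt = complex(target.subs(sample).evalf())
print(min(abs(v - tgt) for v in vals))
```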
Physical critical points appear only for $\beta_3 < 0$, and there can be as many as three for certain ranges of $(\beta_3,\alpha_4)$ if $k=1$, but only one for the hyperbolic ($k=-1$) case. Other possible critical points have one or both of $\gamma^2<0$ and $S<0$. For vanishing charge we also have a region with at most two critical points for $k=1$ and just one critical point for $k=-1$; for vanishing $\alpha_4$ there are still two physical critical points if $k=+1$, studied in detail in \cite{mir:2018mmm}. Clearly the maximal number of critical points for a given value of $(\beta_3,\alpha_4)$ depends on both the horizon geometry and the dimension. We do not need to have both couplings non-zero to obtain physical critical points. However in the absence of charge, or when $k=-1$ (with or without charge), both couplings must be nonzero. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{domain-cq-6d-v4-2.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{domain-cq-6d-k=-1-2.pdf} \\ \includegraphics[scale=.3]{domain-cq-6d-q=0.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{domain-cq-6d-k=-1-q=0.pdf} \\ \end{tabular} \caption{{\bf Number of Critical Points as a function of couplings in $d=6$} (color online). At top left is the spherical $k=1$ case, at top right the hyperbolic $k=-1$ case; at bottom left is the spherical $k=1$, $e=0$ case, at bottom right the hyperbolic $k=-1$, $e=0$ case, with colour coding given in table 2. Green, dark green, and orange regions respectively indicate one, two, and three physical critical points. Blue regions indicate single critical points with $\gamma^2<0$, and black indicates single critical points with both $S<0$ and $\gamma^2<0$. Dark blue, red, and yellow regions respectively indicate $\gamma^2<0$, $S<0$, and both $S<0$ and $\gamma^2<0$, but with two critical points. Light brown regions have solutions with no positive real asymptotic value for $f_{\infty}$.
Grey regions have no critical points, and white regions indicate negative mass.} \label{domain6d} \end{figure*} The appearance of three physical critical points in the $d=6,~k=+1$ case stands in contrast to previous studies: this remarkable feature has not been observed before, and its occurrence requires the electric charge and the cubic and quartic couplings all to be nonzero. Figure~\ref{PT6d} shows that there is a reverse VdW transition in between two standard ones, one at low temperatures $T<T_{c_1}$ and the other at high temperatures $T >T_{c_3}$, with the critical temperature $T_{c_2}$ of the reverse transition satisfying $T_{c_1}< T_{c_2} < T_{c_3}$. The curves in figure~\ref{PT6d} correspond to phase transitions that obey Maxwell's equal area law \cite{Smailagic.2013}, with the actual pressure remaining positive during the phase transition (despite the curve indicating that $P$ becomes negative over a finite range of $T$) as noted earlier. At sufficiently low temperatures the $P-v$ curves cross the horizontal axis, yielding unphysical behaviour since the asymptotic structure is no longer AdS. \begin{figure*}[h] \centering \begin{tabular}{ccc} \includegraphics[scale=.25]{P-v-cq-6d.pdf}& \includegraphics[scale=.25]{P-T-6d-cq-1.pdf}& \includegraphics[scale=.25]{P-T-6d-cq-3.pdf} \\ \includegraphics[scale=.25]{P-T-6d-cq-2.pdf}& \includegraphics[scale=.25]{P-T-6d-cq-ICP.pdf}& \includegraphics[scale=.25]{P-T-6d-cq-ICP2.pdf} \\ \end{tabular} \caption{\textbf{Three first order phase transitions in $d=6$ for $k=1$} (color online). \textit{Top left}: The reverse VdW transition occurs at intermediate temperatures between two standard VdW transitions, with curves below (solid), at (dashed), and above (dotted) the critical temperature displayed for each.
For the cold VdW transition, $T=T_{c_1}$ (dashed red line), $T=0.60613 T_{c_1}$ (solid red line), $T=1.7 T_{c_1}$ (dotted red line); for the reverse VdW one, $T=0.8 T_{c_2}$ (dotted black line), $T=T_{c_2}$ (dashed black line), $T=1.1 T_{c_2}$ (solid black line); for the hot VdW transition, $T=0.8 T_{c_3}$ (solid blue line), $T=T_{c_3}$ (dashed blue line) and $T=1.1 T_{c_3}$ (dotted blue line). \textit{Top center}: The phase diagram, with green dots denoting the critical points and black lines indicating three first order phase transitions for temperatures smaller and larger than the three illustrated critical points; we have chosen $e=1$, $\beta_3=-3/5 e^{4/3}$ and $\alpha_4=-4/5 e^2$, with $T_{c_1}e^{1/3}\approx0.19440$, $P_{c_1}e^{2/3}\approx0.03960$ and $T_{c_2}e^{1/3}\approx1.03337$, $P_{c_2}e^{2/3}\approx0.33752$ and $T_{c_3}e^{1/3}\approx1.76108$, $P_{c_3}e^{2/3}\approx0.42503$. \textit{Top right and bottom left}: Close-ups of the curves of the top center plot. \textit{Bottom center}: For $e=1$, $\beta_3\approx -2.04428 e^{4/3}$ and $\alpha_4=-4/5 e^{2}$ we obtain an isolated critical point (in red); numerically we find that the critical temperature and pressure are $T_c e^{1/3} \approx 0.27826$ and $P_c e^{2/3} \approx 0.06790$. Blue and green lines show negative entropy and negative mass respectively. \textit{Bottom right}: A magnification of the bottom center plot close to the isolated critical point. } \label{PT6d} \end{figure*} Critical points and coexistence lines are also displayed in figure~\ref{PT6d}. Numerical analysis confirms that for typical values of the parameters, each of the three critical points is characterized by mean field theory critical exponents, a hallmark of the end point of a first order phase transition. Note that the two critical points at high temperature are joined by a coexistence line.
For certain values of the couplings and charge, these two disjoint lines merge into each other, and an \textit{isolated critical point} appears at the merger. This new critical point is characterized by critical exponents that differ from the mean field theory ones. This phenomenon was first observed in Lovelock and quasi-topological gravity \cite{Frassino:2014pha, Dolan:2014vba,Hennigar:2015esa}, where isolated critical points were found to occur for hyperbolic horizons and massless AdS black holes. A thermodynamic singularity, at which the pressure remains constant for any temperature and isotherms cross and reverse, was also found to occur at this point. However for Lovelock and quasi-topological black holes accompanied by conformal scalar hair, isolated critical points have been observed in five and higher dimensions for massive black holes without coinciding with a thermodynamic singularity \cite{Hennigar:2016ekz,Dykaar:2017mba}. More recently, in cubic GQG, isolated critical points have been found in six dimensions with no scalar hair for spherical black holes \cite{mir:2018mmm}. In all of these cases there are initially only two critical points that converge to one isolated critical point. Here we see for the first time three critical points, two of which merge to form an isolated critical point. Furthermore these isolated critical points do not correspond to any thermodynamic singularity, ${\it i.e.,}\ $ the $P-T$ curve does not have a zero slope. From the relation \reef{expcoeff} we find that the coefficient $B$ vanishes and the associated critical exponents are \begin{equation} \tilde{\alpha} = 0 \, , \quad \tilde{\beta} = 1 \, , \quad \tilde{\gamma} = 2 \, , \quad \tilde{\delta} = 3 \, , \label{nonstanexp} \end{equation} which differ from the standard critical exponents appearing in \reef{exponents} but are in accord with previous studies \cite{Frassino:2014pha, Dolan:2014vba,Hennigar:2015esa}.
On either side of the isolated critical point the black holes satisfy all physical constraints, as the bottom right diagram in figure~\ref{PT6d} indicates. The behaviour of the free energy with respect to temperature for spherical black holes is illustrated in figure~\ref{GT6d}. At low temperatures, for $P =P_c$ the solution is stable and for $P < P_c$ a standard VdW transition occurs. For the two other critical points $P_{c_2},~P_{c_3}$ there is a region of the curve that has negative specific heat at low temperatures (red lines). For pressures $P_{c_2} < P < P_{c_3}$ there are reverse and standard transitions that are shown by the rather sharp swallowtails depicted in the bottom graphs of figure~\ref{GT6d} from left to right respectively. In contrast to the lower temperature ($P<P_c$) swallowtails depicted in the upper diagrams in figure~\ref{GT6d}, these swallowtails have positive specific heat everywhere apart from a few short segments. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{G-T-6d-1cq.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{g-T-6d-cq-n2.pdf}\\ \includegraphics[scale=.3]{g-T-6d-cq-d38n3.pdf}&\quad\quad\quad\quad \includegraphics[scale=.3]{g-T-6d-cq-d415n4.pdf} \\ \end{tabular} \caption{\textbf{Free energy of six dimensional spherical black holes} (colour online). Plots of the Gibbs free energy for $P =P_{c}$, $0.6 P_{c}$ and $0.2 P_{c}$ (top left), for $P = P_{c_2},\ P_{c_3}$ (top right), for $P = 1.12585 P_{c_2}$ (bottom left) and for $P =0.97639 P_{c_3}$ (bottom right). The latter two cases exhibit very sharp swallowtails. In all cases red lines indicate negative specific heat. For the choice of $e=1$, $\beta_3=-3/5 e^{4/3}$ and $\alpha_4=-4/5 e^{2}$, we have $P_{c_1} e^{2/3} \approx 0.03960$, $P_{c_2}e^{2/3}\approx 0.33752$ and $P_{c_3}e^{2/3}\approx 0.42503$.
} \label{GT6d} \end{figure*} \begin{figure*}[h] \centering \begin{tabular}{ccc} \includegraphics[scale=.25]{P-v-cq-6d-2pt.pdf}& \includegraphics[scale=.25]{P-T-6d-cq-2p.pdf}& \includegraphics[scale=.25]{P-T-6d-cq-2p-ICP.pdf} \\ \end{tabular} \caption{\textbf{Two first order phase transitions in $d=6$ for $k=1$} (color online). \textit{Left}: A low temperature VdW transition with critical temperature $T=T_{c}$ (dashed red line), $T=0.8 T_{c},~0.6 T_{c}$ (solid, dotted red lines), and a higher temperature reverse VdW transition with $T=T_{\bar{c}}$ (dashed black line), $T=1.6 T_{\bar{c}},~1.8 T_{\bar{c}}$ (solid and dotted black lines). \textit{Center}: The phase diagram for six dimensional spacetimes. The green dots denote the critical points and the black lines show that there are two first order phase transitions for temperatures smaller and larger than the two critical points. Blue lines correspond to negative entropy. We have chosen $e=1$, $\beta_3=-2 e^{4/3}$ and $\alpha_4=-9/5 e^{2}$ with $T_{c}e^{1/3}\approx0.31392$, $P_{c}e^{2/3}\approx0.08065$ and $T_{\bar{c}}e^{1/3}\approx0.25527$, $P_{\bar{c}}e^{2/3}\approx0.05998$. \textit{Right}: For $e=1$, $\beta_3\approx -2.05912 e^{4/3}$ and $\alpha_4= -9/5 e^{2}$ we obtain an isolated critical point (red point); the approximate values at the conjoined critical temperature and pressure are $T_c e^{1/3} \approx 0.28047$ and $P_c e^{2/3} \approx 0.06876$. } \label{PT6d2pt} \end{figure*} For regions of parameter space having two physical critical points, as depicted in figure~\ref{PT6d2pt}, there is a first order standard VdW phase transition between small and large black holes at low temperatures and then a reverse VdW transition at higher temperatures.
The right graph shows that, for fixed charge and $\alpha_4$, varying $\beta_3$ yields an appropriate value of the cubic coupling such that the two coexistence lines meet, yielding an isolated critical point with the non-standard critical exponents given in \eqref{nonstanexp}. Again we see that there exists a range of temperatures on either side of the isolated critical point for which the black holes satisfy all physical requirements. Finally, in regions of parameter space with one physical critical point we get a standard VdW phase transition for $k=1$. However if $k=-1$ then we find a reverse VdW transition similar to what we described for the $d=5$ hyperbolic black hole. \subsection{Critical behaviour in more than six dimensions} Increasing the value of $d$ further, we find for seven dimensions that the qualitative features remain similar to the $d=6$ case. Quantitatively, however, the parameter regions with only a single critical point get larger, whereas regions with two or three physical critical points get smaller. No further features emerge and so we shall not consider this case further. However for $d=8$ we again find up to three critical points. The structure of the associated phase transitions is similar to what we presented for $d=6$ in the previous subsection, so again we shall not consider this case further. Finally, we compute the ratio of critical quantities for any value of $d$. In general this must be done numerically, but we can obtain an analytic expression for small values of the couplings for which the physical constraints hold. Here we shall set the cubic coupling to zero\footnote{See the corresponding results for the cubic case in \cite{mir:2018mmm}.} and compare results with the critical behaviour in Einstein gravity for spherical black holes.
To leading order the critical quantities read \begin{eqnarray} T_c&=&\frac{4 (d-3)^2 }{\pi (d-2) (2 d-5) v^{(0)}_c} +\Big(128 \pi (d-3)^4 (d-2)^2 (2 d-5) (2 d-3) \big(24 d^7-540 d^6\nonumber\\ &&\left.+4980 d^5-24292 d^4+67202 d^3-103983 d^2+80703 d-22140\big) e^2 {v^{(0)}_c}^6\right.\nonumber\\ &&\left.-128 (d-3)^5 \big(24 d^7-644 d^6+7012 d^5-40044 d^4+129678 d^3-238653 d^2\right.\nonumber\\ &&\left.+231021 d-90180\big) {v^{(0)}_c}^{2 d}\Big) \Big/\Big(\pi ^5 (d-2)^7 (2 d-5)^5 (2 d-3) (3 d-16) e^2 {v^{(0)}_c}^{13}\right.\nonumber\\ &&\left.-\pi ^4 (d-3) (d-2)^4 (2 d-5)^4 (3 d-16) (4 d-9) {v^{(0)}_c}^{2 d+7}\Big)\alpha_4+\cO(\alpha_4^2),\right.\nonumber \end{eqnarray} \begin{eqnarray} v_c&=&v^{(0)}_c-64 (d-3)^4 \Big(36 d^6-644 d^5+4428 d^4-14776 d^3+24357 d^2-16587 d+1260\Big)\nonumber\\ &&\left.\times 1\Big/\bigg(\pi ^3 (d-2)^3 (2 d-5)^3 (3 d-16) {v^{(0)}_c}^5 \Big(\pi (d-2)^3 (2 d-5) (2 d-3) e^2 {v^{(0)}_c}^{6-2 d}\right.\nonumber\\ &&\left.+(d-3) (9-4 d)\Big)\bigg)\alpha_4+\cO(\alpha_4^2),\right. \nonumber \end{eqnarray} \begin{eqnarray} P_c&=&\frac{(d-3)^2 }{\pi (d-2)^2 {v^{(0)}_c}^2}+ \Big(-128 (d-3)^6 (4 d-9) \big(82 d^6-1594 d^5+12272 d^4-48254 d^3\nonumber\\ &&\left.+102927 d^2-113451 d+50580\big) {v^{(0)}_c}^{4 d}+128 \pi ^2 (d-3)^4 (d-2)^6 (2 d-5)^2 (2 d-3)\right.\nonumber\\ &&\left. 
\left(36 d^7-802 d^6+7294 d^5-34880 d^4+93686 d^3-138231 d^2+98067 d-21060\right)\right.\nonumber\\ &&\left.\times {e^4} {v^{(0)}_c}^{12}-768 \pi (d-3)^5 (d-2)^3 (2 d-5) \big(24 d^8-616 d^7+6638 d^6-39082 d^5\right.\nonumber\\ &&\left.+136998 d^4-291119 d^3+362005 d^2-234726 d+56880\big) {e^2} {v^{(0)}_c}^{2 d+6}\Big)\right.\nonumber\\ &&\left.\times 1\Big/\bigg({v^{(0)}_c}^8 \Big(\pi ^6 (d-2)^{10} (2 d-5)^6 (2 d-3)^2 (3 d-16) {e^4} {v^{(0)}_c}^{12}-2 \pi ^5 (d-3) (d-2)^7 \right.\nonumber \end{eqnarray} \begin{eqnarray} &&\left.\times(2 d-5)^5 (2 d-3) (3 d-16) (4 d-9) {e^2} {v^{(0)}_c}^{2 d+6}+\pi ^4 (d-3)^2 (d-2)^4 (2 d-5)^4 \right.\nonumber\\ &&\left.\times(3 d-16) (4 d-9)^2 {v^{(0)}_c}^{4 d}\Big)\bigg) \alpha_4+\cO(\alpha_4^2),\right. \nonumber \end{eqnarray} where \begin{eqnarray} v^{(0)}_c= \left(\frac{(d-2)^2 (2 d-5) \pi e^2}{(d-3) }\right)^{\frac{1}{2 (d-3)}}, \end{eqnarray} is the critical volume in Einstein gravity. We see a dimension-dependent deviation from the critical values in Einstein gravity. The critical temperature and pressure increase and the critical volume decreases relative to Einstein gravity, except in four and six dimensions, where critical pressure and temperature are smaller and the critical volume is larger. In four dimensions these corrections are negligible.
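These leading-order expressions can be checked directly in the Einstein limit. The sketch below (Python with SymPy; the variable names and the sample values $d=6$, $e=k=1$ are our own choices) sets $\beta_3=\alpha_4=0$ in the $d=6$ equation of state, solves $\partial_v P=\partial_v^2 P=0$, and compares with $v^{(0)}_c$ and the zeroth-order critical temperature quoted above.

```python
import sympy as sp

v, T = sp.symbols('v T', positive=True)
d, e, k = 6, 1, 1  # six dimensions, unit charge, spherical horizon (sample values)

# d = 6 equation of state with the cubic and quartic couplings switched off:
P = T/v - 3*k/(4*sp.pi*v**2) + e**2/v**8

# A critical point is an inflection point of the isotherm:
T_of_v = sp.solve(sp.diff(P, v), T)[0]                  # impose dP/dv = 0
vc = sp.solve(sp.diff(P, v, 2).subs(T, T_of_v), v)[0]   # impose d2P/dv2 = 0
Tc = T_of_v.subs(v, vc)

# Zeroth-order expressions quoted in the text:
vc0 = ((d - 2)**2*(2*d - 5)*sp.pi*e**2/(d - 3))**sp.Rational(1, 2*(d - 3))
Tc0 = 4*(d - 3)**2/(sp.pi*(d - 2)*(2*d - 5)*vc0)

print(float(vc - vc0), float(Tc - Tc0))
```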
We finally obtain \begin{eqnarray} \frac{P_c v_c }{T_c }&=&\frac{2 d-5}{4 (d-2)} +\Big(-8 \pi (d-3)^7 (d-2)^2 (2 d-5) (4 d-9)^2 \big(96 d^8-2288 d^7+22748 d^6\nonumber\\ &&\left.-122532 d^5+386956 d^4-717526 d^3+722577 d^2-299793 d-8460\big) {v^{(0)}_c}^{4 d+1}\right.\nonumber\\ &&\left.+8 \pi ^4 (d-3)^4 (d-2)^{10} (2 d-5)^4 (2 d-3)^2 \big(48 d^9-1240 d^8+13664 d^7-84108 d^6\right.\nonumber\\ &&\left.+317348 d^5-754436 d^4+1108370 d^3-925455 d^2+341247 d-4860\big) {e^6} {v^{(0)}_c}^{19-2 d}\right.\nonumber\\ &&\left.-8 \pi ^3 (d-3)^5 (d-2)^7 (2 d-5)^3 (2 d-3) \big(576 d^{10}-16032 d^9+193720 d^8\right.\nonumber\\ &&\left.-1336844 d^7+5820852 d^6-16626688 d^5+31236380 d^4-37317081 d^3\right.\nonumber\\ &&\left.+25805259 d^2-7920864 d+36720\big) {e^4} {v^{(0)}_c}^{13}+8 \pi ^2 (d-3)^6 (d-2)^4 (2 d-5)^2 \right.\nonumber\\ &&\left. \times(4 d-9) \big(576 d^{10}-15888 d^9+189992 d^8-1295464 d^7+5562612 d^6\right.\nonumber\\ &&\left.-15631748 d^5+28802548 d^4-33602712 d^3+22528269 d^2-6569739 d-57780\big)\right.\nonumber\\ &&\left.\times {e^2} {v^{(0)}_c}^{2 d+7}\Big)\Big/\bigg( \pi ^4 (d-3)^2 (d-2)^6 (2 d-5)^4 (3 d-16) {v^{(0)}_c}^7 \Big(-4 d^2+\pi \big(4 d^5\right.\nonumber\\ &&\left.-40 d^4+159 d^3-314 d^2+308 d-120\big) {e^2} {v^{(0)}_c}^{6-2 d}+21 d-27\Big) \Big(\pi (d-2)^3 \right.\nonumber\\ &&\left.\times(2 d-5) (2 d-3) {e^2} {v^{(0)}_c}^6-(d-3) (4 d-9) {v^{(0)}_c}^{2 d}\Big)^2 \bigg)\alpha_4+\cO(\alpha_4^2),\right. \label{pvtq} \end{eqnarray} and we see that in the limit of vanishing quartic coupling, these results reduce to those of charged $k=1$ black holes in Einstein gravity \cite{GunasekaranEtal:2012}. The effect of the quartic curvature term on the Van der Waals ratio is to increase it above the Einsteinian value in any dimension. In four and five dimensions these corrections are negligible for small values of the coupling and large enough charge.
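The Einsteinian value of the ratio can likewise be verified explicitly. A minimal sketch (Python with SymPy; the sample values $d=6$, $e=k=1$ are our own choices) computes $P_c v_c/T_c$ in the $\beta_3=\alpha_4=0$ limit of the $d=6$ equation of state and recovers $(2d-5)/(4(d-2))=7/16$:

```python
import sympy as sp

v, T = sp.symbols('v T', positive=True)
d, e, k = 6, 1, 1  # six dimensions, unit charge, spherical horizon (sample values)

# Einstein-Maxwell limit (beta_3 = alpha_4 = 0) of the d = 6 equation of state
P = T/v - 3*k/(4*sp.pi*v**2) + e**2/v**8

T_of_v = sp.solve(sp.diff(P, v), T)[0]                  # dP/dv = 0
vc = sp.solve(sp.diff(P, v, 2).subs(T, T_of_v), v)[0]   # d2P/dv2 = 0
Tc = T_of_v.subs(v, vc)
Pc = P.subs({T: Tc, v: vc})

ratio = sp.simplify(Pc*vc/Tc)
print(float(ratio), sp.Rational(2*d - 5, 4*(d - 2)))
```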
However in higher dimensions we see considerable deviation, and the van der Waals ratio is no longer a ``universal'' quantity as it is in Einstein gravity: it depends on the parameters of the theory as well as the dimension under consideration. \section{Thermodynamics in the grand canonical ensemble} \label{sec: thermofpe} We now consider the grand canonical ensemble, in which we have fixed potential instead of fixed charge. From the viewpoint of AdS/CFT holography, fixed potential on the gravity side is related to fixed chemical potential on the CFT side. We first consider $d=4$ and then discuss properties for generic dimensions, employing the approach in ref.~\cite{ChamblinEtal:1999a}. We shall consider only the quartic term in the action; the cubic case was studied in \cite{mir:2018mmm}. We expect that when both couplings are nonzero a pattern similar to that of the fixed charge case for the number of critical points will emerge, though we shall not perform that analysis here. \subsection{Four dimensions} For fixed potential one needs to find expressions for the mass and the temperature by again solving the equations of motion for the metric function near the horizon, since this choice of ensemble alters how the equations depend on the horizon radius. The first two leading order terms in the expansion result in the following formulas, parameterized by the quartic coupling and $r_+$: \begin{eqnarray} 8\pi M &=&k r_+ +\frac{\Phi^2r_+}{4}+\frac{8\pi P r_+^3}{3}+ \frac{64\pi^4\lambda T^4}{r_+} +\frac{128\pi^3k\lambda T^3 }{3 r_+^2},\nonumber\\ 0&=&k-\frac{\Phi^2}{4}+8\pi P r_+^2-4\pi T r_+ +\frac{64\pi^4\lambda T^4}{3r_+^2}+\frac{128\pi^3k \lambda T^3}{3 r_+^3},\label{PTphi} \end{eqnarray} with the second equation yielding the equation of state \begin{equation} \label{eqn:gce_eos_4d} P =\frac{T}{v}-\frac{k}{2\pi v^2}+\frac{\Phi^2}{8 \pi v^2}-\frac{128\pi^3\lambda T^4}{3 v^4}-\frac{512\pi^2k\lambda T^3}{3 v^5}, \end{equation} where we defined $v = 2 r_+$.
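The step from the second equation of \reef{PTphi} to \eqref{eqn:gce_eos_4d} is straightforward algebra and can be checked symbolically. A minimal sketch (Python with SymPy; the variable names are ours) solves the horizon equation for $P$, substitutes $r_+=v/2$, and compares with the quoted equation of state:

```python
import sympy as sp

r, T, P, v, k, Phi, lam = sp.symbols('r T P v k Phi lam', real=True)

# Second equation of (PTphi): the near-horizon equation of motion
horizon_eq = (k - Phi**2/4 + 8*sp.pi*P*r**2 - 4*sp.pi*T*r
              + 64*sp.pi**4*lam*T**4/(3*r**2) + 128*sp.pi**3*k*lam*T**3/(3*r**3))

P_of_r = sp.solve(horizon_eq, P)[0]  # the equation is linear in P

# Quoted equation of state, written in terms of v = 2 r_+
eos = (T/v - k/(2*sp.pi*v**2) + Phi**2/(8*sp.pi*v**2)
       - 128*sp.pi**3*lam*T**4/(3*v**4) - 512*sp.pi**2*k*lam*T**3/(3*v**5))

diff = sp.simplify(P_of_r.subs(r, v/2) - eos)
print(diff)
```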
The explicit forms of the critical quantities from the equation of state~\eqref{eqn:gce_eos_4d} are \begin{eqnarray} T_c &=& -\frac{6^{\frac{2}{3}}}{24\pi\lambda}\Big(-\left(588384k^3\lambda -19008k^2\lambda \Phi^2-18k\lambda\Phi^4-\lambda\Phi^6+Z\right)\lambda^4\Big)^{\frac{1}{6}},\nonumber\\ v_c &=& \frac{20\times 6^{\frac{1}{3}}k\lambda}{3}\left(2358720k^3\lambda -78480k^2\lambda \Phi^2+180k\lambda\Phi^4+5\lambda\Phi^6+4 Z\right)\nonumber\\ &&\left.\times1/\Big(\left(-(588384k^3\lambda -19008k^2\lambda \Phi^2-18k\lambda\Phi^4-\lambda\Phi^6+Z)\lambda^4\right)^{\frac{1}{6}}\right.\nonumber\\ &&\left.\times(576576k^3\lambda -16272 k^2\lambda \Phi^2+28k\lambda\Phi^4+\lambda\Phi^6+Z)\Big),\right.\label{vcphi} \end{eqnarray} where \begin{eqnarray} Z=\sqrt{\lambda^2(\Phi^2-24k)(\Phi^2+156k)^2(\Phi^2-84k)^3} \label{Zeq} \end{eqnarray} and the corresponding critical pressure is obtained by inserting the above relations for $T_c$ and $v_c$ into equation \reef{eqn:gce_eos_4d}. Note that we have the constraint \begin{eqnarray}\label{Phiconstraint} (\Phi^2-24k)(\Phi^2-84k)^3>0, \end{eqnarray} so that the critical quantities remain real. \begin{figure*}[h] \centering \begin{tabular}{cc} \includegraphics[scale=.25]{pv-phi-cq-4d.pdf}&\quad\quad\quad\quad\quad\quad \includegraphics[scale=.25]{pt-phi-cq-4d.pdf}\\ \includegraphics[scale=.25]{gt-phi-pc-cq-4d.pdf}&\quad\quad\quad\quad\quad\quad \includegraphics[scale=.25]{gt-phi-d9pc-cq-4d.pdf} \\ \end{tabular} \caption{{\bf Van der Waals behaviour in grand canonical ensemble for $d=4$ and $k=1$}. {\it Top left}: Isotherms in the $P-v$ plane, showing Van der Waals oscillations. The dashed red line corresponds to $T = T_c$; the solid red curves correspond to $T \neq T_c$. {\it Top right}: Coexistence line in the $P-T$ plane for $\Phi = 1$. The green dot marks the critical point, and the black line is the line of coexisting phases for the first order phase transition.
The solid line has positive entropy, whereas the blue line indicates the occurrence of negative entropy. {\it Bottom left}: The free energy vs temperature for various pressures. Solid black curves correspond to $P = P_c$, and the dotted black curve marks pressures $P > P_c$. The solid blue line has positive specific heat but negative entropy. {\it Bottom right}: For $P =0.9 P_c$ and $\Phi = 1$, the phase transition is illustrated by the swallowtail behaviour, which grows with decreasing pressure. The red curves represent negative specific heat with positive entropy; the blue curves represent positive specific heat but negative entropy. } \label{fig:gce_4d} \end{figure*} Solving equations \reef{PTphi} to determine the explicit form for $M$ and $T$, and choosing solutions that in the limit $\lambda \to 0$ approach the Einstein branch, we find that only for $k=+1$ (spherical) black holes are there physical critical points. Admitting a positive mass, while imposing the condition \reef{asympf} with zero cubic coupling, we use the formula \reef{PMAX} to obtain an upper bound \begin{equation} 0 < P \le\frac{9 \sqrt[3]{\frac{3}{2}} }{64 \pi |\lambda|^\frac{1}{3}}, \end{equation} on the pressure, where the absolute value comes about because $\lambda<0$ in four dimensions. Expanding the equation of state about the critical point, we again obtain the standard critical exponents \reef{exponents} of mean field theory. The black hole entropy at the critical point becomes \begin{equation} S=\frac{{{r_+^2}_c}}{4}+\frac{6 ^{\frac{1}{3}}k X^{\frac{1}{3}}}{12\lambda {{r_+^2}_c}}-\frac{\sqrt X}{36\lambda^2 {r_+}_c}, \end{equation} where \begin{eqnarray} X=\left(-588384 k^3\lambda+19008k^2\lambda \Phi^2 +18k\lambda\Phi^4+\lambda\Phi^6-Z\right)\lambda^4, \end{eqnarray} and ${r_+}_c=\frac{v_c}{2}$ with $v_c$ introduced in \reef{vcphi} and $Z$ given in \reef{Zeq}.
Explicit numerical computation for $k=+1$ and $\lambda < 0$ indicates that the critical entropy is always positive; this will not hold for other values of these parameters. In order to examine the phase structure of the black hole solutions, we obtain the formula for the free energy in the grand canonical ensemble \begin{align} G &= M - TS - \Phi Q \, , \nonumber\\ &= \frac{P v^3}{24}-\frac{ T v^2}{16}-\frac{(-4k+\Phi^2)v}{64\pi }-\frac{16\pi^3\lambda T^4}{3 v}-\frac{32\pi^2\lambda k T^3}{3 v^2}, \end{align} which we plot in figure~\ref{fig:gce_4d}. We again observe standard Van der Waals behaviour. However we also see that the free energy is a decreasing function of the temperature for small $T$, vanishing as $T\to 0$, in contrast to the fixed charge case; however a large branch of the curve has negative entropy. The $P-v$ diagram in figure~\ref{fig:gce_4d} illustrates that a phase transition happens for temperatures slightly below the critical temperature. However for low enough temperatures, the pressure becomes negative, and we do not consider this unphysical case. Another way to observe the phase transition is via the coexistence line in the $P-T$ plane, shown in the top right of figure~\ref{fig:gce_4d}. For low enough temperatures the entropy becomes negative, denoted by the blue solid line. \begin{figure*}[htp] \centering \begin{tabular}{cc} \includegraphics[scale=.3]{T-phi-4d.pdf} \\ \end{tabular} \caption{{\bf Phase diagram in $T$--$\Phi$ plane for $d=4$ spherical black holes}. We illustrate the phase diagram in the $ T - \Phi$ plane for fixed pressure $P \ell^2 = 3/(8\pi)$. The green dot shows the critical point, while the black line marks the coexistence curve for the first order phase transition. In this figure, the coupling was set to $\lambda/\ell^6 = 0.32$. } \label{fig:phiT_phase_4d} \end{figure*} Instead of analyzing the equation of state at fixed potential, we can also perform the analysis at fixed pressure, as is commonly done in holography.
The phase diagram of temperature versus potential in figure~\ref{fig:phiT_phase_4d} again describes a first order phase transition between small and large black holes, with the coexistence line terminating at the critical point. \subsection{Higher dimensions} Here, we look into the thermodynamic properties of black hole solutions in the fixed potential ensemble in generic dimensions. The equation of state in $d$ dimensions is \begin{eqnarray} P&=&\frac{(d-2)T}{4 r_+}+\frac{(d-3)[-2(d-2)k+(d-3)\Phi^2]}{32 \pi r_+^2} -\frac{\pi^3 (d-2)(d-5)(3d-16)\lambda T^4}{3 r_+^4}\nonumber\\ &&\left.-\frac{4\pi^2 (d-5)(d-6)(d-2)\lambda k T^3}{3 r_+^5}-\frac{\pi (d-2)(d-4)(d-7)\lambda k^2 T^2}{2 r_+^6}.\right. \end{eqnarray} To compare results between the two ensembles with the cubic coupling set to zero, we note that in the fixed potential ensemble for spherical black holes in quartic GQG we get physical critical points for $\alpha_4>0$ in four and five dimensions. However in six dimensions no physical critical points exist. This is in contrast with the fixed charge ensemble, where (for $\alpha_4>0$) we get a single physical critical point in $d=4,6$ and two physical critical points in $d=5$ (in addition to regions with a single critical point). In addition, for fixed potential and $k=-1$ hyperbolic quartic black holes, while there are potential critical points for $d=4,6$, these all have $\gamma^2<0$ (since $\alpha_4<0$) and so there are no physical critical points for these dimensions (and likewise none for $d=5$). This situation is the same as for the fixed charge ensemble, confirming that both the cubic and quartic couplings must be nonzero to obtain physical critical points. \section{Holographic hydrodynamics} \label{sec: holog} \label{sec:holo_hydro} One of the applications of the AdS/CFT correspondence is the computation of the ratio of shear viscosity to entropy density $\eta/s$. In this section we investigate this ratio for the quartic theory.
It is well known that for field theories dual to Einstein gravity, the shear viscosity to entropy density ratio is $\eta/s = 1/(4\pi)$, and it has been suggested that this value is a universal lower bound, holding for any matter~\cite{Kovtun:2004de}. This conjecture, $\eta/s \ge 1/ (4\pi)$, is the so-called KSS bound. However higher derivative contributions can cause violations of this bound~\cite{Brigante:2007nu}. We therefore evaluate $\eta/s$ for field theories dual to the quartic generalized quasi-topological theory for general $d$ to see whether the KSS bound is satisfied. Here, we employ the planar black hole solutions described by the following metric \begin{equation} ds^2 = \frac{r^2}{\ell^2} \left(-g(r) dt^2 + \sum_i dx_i^2 \right) + \frac{\ell^2 dr^2}{r^2 g(r)} \, . \end{equation} We define a new coordinate $z = 1 - r_+^2/r^2$ to compactify the region outside the horizon. This gives \begin{equation} \label{eqn:zMetric} ds^2 = \frac{r_+^2}{\ell^2 (1-z)} \left(-g(z) dt^2 + \sum_i dx_i^2 \right) + \frac{\ell^2}{4 g(z)} \frac{dz^2}{(1-z)^2}, \end{equation} where $g(z)$ vanishes at $z=0$, and $g(1) = f_\infty$. Expanding $g(z)$ near the horizon \begin{equation} g(z) = g_0^{(1)} z + g_0^{(2)} z^2 + g_0^{(3)} z^3 + \cdots, \end{equation} we solve the field equations to determine $g_0^{(i)}$ for $i \neq 2$. As we discussed previously, the second derivative of the metric function near the horizon, $g_0^{(2)}$, is not determined by the field equations. However, its value can be chosen such that the numerical solution approaches its associated asymptotic solution. Under the coordinate transformation, the parameters $g_0^{(i)}$ can be written in terms of the parameters $a_i$ appearing in the near horizon expansion \eqref{eqn:nh_ansatz}.
We find \begin{align} g_0^{(1)} &= \frac{2 \pi T \ell^2}{r_+} \, , \quad g_0^{(2)} = - \frac{\ell^2}{4 r_+} \left(2 \pi T - r_+ a_2 \right), \nonumber\\ g_0^{(3)} &= -\frac{\ell^2}{8 r_+} \left(2 \pi T - r_+ a_2 - r_+^2 a_3 \right) \end{align} where the explicit form of $a_3$ from section~\ref{sec:bhsolution} for the planar black holes is \begin{eqnarray} a_3&=&-\frac{1}{288\pi^3(3d-16) \ell^2\lambda T^3r_+^2}\bigg[3(d-1)(d-2)r_+^4-24\pi(d-3) \ell^2T r_+^3\nonumber\\ &&\left.+16 \pi^4(15d^3-341d^2+2434d-5472)\ell^2\lambda\ T^4\right.\nonumber\\ &&\left.+2( 16\pi^3(d-5)(21d-160)\lambda T^3 -3r_+^3)\ell^2 r_+ a_2+ 96\pi^2(3d-16) \lambda \ell^2 T^2r_+^2 a_2^2\bigg].\right.\nonumber\\ \end{eqnarray} Using methods described in~\cite{Paulos:2009yk}, we perform a shift on the metric~\eqref{eqn:zMetric} \begin{equation} dx_i \to dx_i + \epsilon e^{-i\omega t} dx_j, \end{equation} with perturbation parameter $\epsilon$. Computing the Lagrangian for the perturbed metric, and performing a series expansion for small $\epsilon$, we obtain \begin{eqnarray} \sqrt{-g}{\cal L} &=&\frac{1}{16 \pi} \bigg[ \cdots-\frac{\epsilon^2\omega^2 r_+^ {d-3}}{4\ell^{d-4}z g_0^{(1)}}\bigg(1+\frac{16\hat{\lambda}}{5\ell^6}\Big((-1050+911d-261d^2+24d^3)(g_0^{(1)})^3\nonumber\\ &&\left.+4(842-414d+53d^2)(g_0^{(1)})^2 g_0^{(2)}+48(-22+5d)(g_0^{(1)})^2g_0^{(3)}\right.\nonumber\\ &&\left.+16 (15d-68)g_0^{(1)}(g_0^{(2)})^2\Big) \bigg)+ {\rm Regular} \bigg],\right. \end{eqnarray} where $\hat{\lambda}$ is the coupling in the action and is related to the coupling appearing in the equation of motion via \reef{rescall}.
The residue formula of ref.~\cite{Paulos:2009yk} gives the shear viscosity \begin{equation} \eta = - 8 \pi T \lim_{\omega,\epsilon \to 0} \frac{{\rm Res}_{z=0} \sqrt{-g}{\cal L}}{\omega^2 \epsilon^2}, \end{equation} whose explicit form for the case at hand is \begin{eqnarray} \eta &=&\frac{T r_+^ {d-3}}{8\ell^{d-4} g_0^{(1)}}\bigg(1+\frac{16\hat{\lambda}}{5\ell^6}\Big((-1050+911d-261d^2+24d^3)(g_0^{(1)})^3\nonumber\\ &&\left.+4(842-414d+53d^2)(g_0^{(1)})^2 g_0^{(2)}+48(-22+5d)(g_0^{(1)})^2g_0^{(3)}\right.\nonumber\\ &&\left.+16 (15d-68)g_0^{(1)}(g_0^{(2)})^2\Big) \bigg), \right. \end{eqnarray} where $\hat{\lambda}$ appears in the action \reef{action0} and is related to $\lambda$ via \reef{rescall}. The entropy density for planar black holes is \begin{equation} s = \frac{S}{\ell^{d-2} {\rm Vol}\left(\mathbb{R}^{d-2}\right)} = \frac{r_+^{d-2}}{4\ell^{d-2}}\left[1-16\pi^3(d-2)(3d-16)\frac{\lambda T^3}{3 r_+^3} \right], \end{equation} which follows from eq.~\reef{ENT} with $k=0$. Taylor expanding $\eta/s$ about $\lambda=0$ we obtain \begin{align} \frac{\eta}{s} &= \frac{1}{4\pi}\bigg[1-\frac{\lambda}{240 \ell^6(d-1)(3d-16)(d^5-14d^4+79d^3-224d^2+316d-170)} \nonumber\\ &\times \Big(-(891d^8-19104d^7+172820d^6-868818d^5+2696601d^4 \nonumber\\ & -5403214d^3+6944072d^2-5233280d+1740800)(d-1)^4 \nonumber\\ &+96 \ell^8d(5d-22)(d-3)(3d^2-18d+19)\dot{a}_2(0)\Big)\bigg]+\cO(\lambda^2), \label{etaos-taylor} \end{align} where $\dot{a}_2(0)$ denotes the derivative of $a_2$ with respect to $\lambda$ evaluated at $\lambda=0$. To compute the value of $\eta/s$ numerically, one needs to determine the parameter $a_2$ for a given choice of the other parameters, with the temperature known from the near-horizon expansion. In the above computation we replaced the parameter $a_3$ in terms of $a_2$ using the second order expansion near $r=r_+$ of eq.~\reef{Feq0}. The Taylor expansion yields an $\dot{a}_2(0)$ term that is left undetermined at this level.
Only when the first order term in $\lambda$ in the above series expansion is positive -- which in higher dimensions requires some positive coupling (but in $d=4$ certain negative coupling) -- can we conclude that the KSS bound $\eta/s \ge 1/(4\pi)$ holds in all dimensions in the quartic generalized quasi-topological theories at small coupling. The generalized quasi-topological term can cause the entropy density of black branes to change sign~\cite{Hennigar:2017umz}. Hence there is a certain temperature $T_p$ for which the ratio $\eta/s$ exhibits a pole. Using the first order near-horizon expansion from \reef{Feq0}, the corresponding temperature and quartic coupling are \begin{equation} T_p = \frac{(d-2) r_+}{3 \pi \ell^2} \, , \quad \lambda_p = \frac{81\ell^6}{16 (d-2)^4(3d - 16)} \, , \end{equation} where the subscript ``$p$'' stands for ``pole''. It is interesting to note that in four dimensions $\lambda_p$ is equal to the value $\lambda_c = -81 \ell^6/1024$ that we encountered in the critical limit of the theory in section \ref{vacua}; however, this coincidence does not hold in higher dimensions. In any dimension, however, $\lambda_p$ lies on the Einstein branch of the theory.
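The quoted pole values can be verified directly from the planar entropy density given above: demanding that its bracketed factor vanish at $T = T_p$ reproduces $\lambda_p$, and in $d=4$ this indeed reduces to $-81\ell^6/1024$. A quick symbolic check:

```python
import sympy as sp

d, lam, ell, rp = sp.symbols('d lambda ell r_+', positive=True)

T_p = (d - 2)*rp/(3*sp.pi*ell**2)
lam_p = 81*ell**6/(16*(d - 2)**4*(3*d - 16))

# Bracketed factor of the planar entropy density; it vanishes at (T_p, lam_p).
bracket = 1 - 16*sp.pi**3*(d - 2)*(3*d - 16)*lam*T_p**3/(3*rp**3)
print(sp.simplify(bracket.subs(lam, lam_p)))  # 0
print(lam_p.subs(d, 4))                       # -81*ell**6/1024
```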
The leading order term in $(\lambda - \lambda_{p})$ that describes the behaviour of the entropy near zero is \begin{equation} s_{\lambda \to \lambda_{p}} = -\frac{r_+^{d-2}}{81 \ell^{d+4}} (d-2) (3d^5-37d^4+166d^3-348d^2+344d-128) \left(\lambda - \lambda_{p} \right)+\cO\left((\lambda - \lambda_{p})^2\right). \end{equation} The corresponding expansion for the shear viscosity reads \begin{align} \eta &= \frac{r_+^{d-2}}{640\pi \ell^{d-2} \Big((3d-16)^2(d-2)^3(d^5-14d^4+79d^3-224d^2+316d-170)\Big) } \nonumber\\ &\times \bigg[-8\Big(279d^8-7053d^7+72868d^6-394627d^5+1186685d^4-1893868d^3 +1248176d^2 \nonumber\\& +193856d-435200\Big) (d-2)^2 -18\ell^2d(d-2)(d-3)(3d^2-18d+19)(99d^3-1016d^2 \nonumber\\& +3038d-2272)a_2(\lambda_p) +135d\ell^4(d-3)(d-4)(3d-16)(3d^2-18d+19)a^2_2(\lambda_p) \bigg] \nonumber\\&+ \frac{d(3d^3-27d^2+73d-57)r_+^{d-2}}{25920 \pi\ell^{d+4}(3d-16)^2(d-2)^3 (d^5-14d^4+79d^3-224d^2+316d-170)} \nonumber\\ &\times \bigg[ -48\ell^2(3d-16)(42d^4-363d^3+601d^2+1057d-1488)(d-2)^5a_2(\lambda_p) \nonumber\\& +90\ell^4(d+7) (d-4)(3d-16)^2(d-2)^4a^2_2(\lambda_p) \nonumber\\ & -729\ell^8(d-2)(99d^3-1016d^2+3038d-2272)\dot{a}_2(\lambda_p) \nonumber\\& -16(d-1)(3d-16) (93d^4-1543d^3+8894d^2-20420d+14656)(d-2)^6 \nonumber\\& +10935\ell^{10}(d-4)(3d-16)a_2(\lambda_p)\dot{a}_2(\lambda_p)\bigg]\left(\lambda-\lambda_p\right) + \cdots\label{etaexp} \end{align} It is interesting to note that the entropy density vanishes linearly as $\lambda \to \lambda_p$. By contrast, the shear viscosity does not vanish in this limit, as it contains a term consisting of powers of $a_2$ evaluated at the pole. Explicit numerical evaluation shows that this term does not vanish as $\lambda \to \lambda_p$. Consequently the ratio of shear viscosity to entropy density has a pole at $\lambda = \lambda_p$. Figure \ref{fig:etaOvers} shows that there is a smooth curve connecting $\eta/s = 1/(4\pi)$ (for $\lambda = 0$) and $\eta/s = \infty$ (for $\lambda = \lambda_p$).
Since including quartic quasi-topological or Lovelock terms in the action does not alter the black hole entropy from its Einstein gravity value~\cite{Hennigar:2017umz}, only quartic GQG contributions are responsible for the occurrence of this pole. A previous study of cubic GQG found similar behaviour for $\eta/s$ \cite{mir:2018mmm}. \begin{figure*}[h] \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{eta-oS-2.pdf}&\quad \includegraphics[width=0.45\textwidth]{eta-oS45d.pdf} \\ \end{tabular} \caption{{\bf Ratio of shear viscosity to entropy density}: \textit{Left}: Plot of the ratio $\eta/s$ in four (blue), five (red), and six (black) dimensions. \textit{Right}: A close-up for small values of the coupling. The thin grey line shows the universal Einstein gravity value $\eta/s = 1/(4\pi)$. We used a $[5|5]$ order Pad\'e approximant for $a_2$ (see the appendix) and have set $\ell=1$ and $r_+=1$. } \label{fig:etaOvers} \end{figure*} To determine how the ratio $\eta/s$ depends on $\lambda$, we use the Pad\'e approximant method to evaluate the parameter $a_2$. For our considerations involving the quartic term, the computations become cumbersome very rapidly at higher orders, so we present only a few corresponding curves in four, five, and six dimensions in figure~\ref{fig:etaOvers}. From this figure it is obvious that $\eta/s$ starts from $1/(4\pi)$ at $\lambda=0$ and then grows until it diverges, as expected, at $\lambda = \lambda_p$. For four dimensions this figure illustrates that the KSS bound for $\eta/s$ holds. Turning to five dimensions, from the relations \eqref{MT01} and \eqref{MT02} the explicit formulae for the temperature and mass are \begin{eqnarray} M_{d=5}= \frac{3 \left(\ell^6+16 \lambda \right)r_+^4}{16 \pi \ell^8}, \quad\quad T_{d=5}= \frac{r_+}{\pi \ell^2}, \label{mass-d5} \end{eqnarray} where we see that $T$ is independent of $\lambda$ in five dimensions.
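The Padé construction itself is standard: from the first $m+n+1$ Taylor coefficients of a function one builds the rational $[m|n]$ approximant. As a generic illustration of the technique (the actual series coefficients for $a_2$ come from the near-horizon/asymptotic matching and are not reproduced here), using $e^x$ as a stand-in:

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) through order 10 -> [5|5] Pade approximant.
an = [1.0 / math.factorial(k) for k in range(11)]
p, q = pade(an, 5)      # 5 = degree of the denominator polynomial

x = 1.0
print(p(x) / q(x), math.e)   # the two values agree to high accuracy
```

The rational form typically extends the useful range of a truncated series, which is why it is the tool of choice when only a finite number of coefficients of $a_2$ are available.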
{In figure \ref{fig:etaOvers} we plot $\eta/s$ for the values of $\lambda$ such that the physical constraints are satisfied.} To determine these, one needs $f_{\infty}>0$ for an asymptotically AdS spacetime and $P(f_{\infty})>0$ to satisfy the no-ghost criterion. The behaviour of $f_{\infty}$ in four dimensions is given in the right diagram of figure \ref{finfplot}; similar behaviour is observed in $d=5$, but with slightly different contributions from the different solution branches. In four and five dimensions we find that $\lambda$ must be negative so that $\gamma^2>0$. The expression for the mass in \eqref{mass-d5} sets a lower bound on the coupling. In conjunction with the condition $\gamma^2>0$ in \reef{gamma2} required for a well-defined asymptotic region, we must have $0 > \lambda >\lambda_p =-\ell^6/16$ for any choice of AdS length and black hole horizon, where the lower bound $\lambda_p$ corresponds to zero mass and vanishing entropy; note that $|\lambda_p|$ is smaller than $|\lambda_c|$ given in \reef{muclac}. We note from figure \reef{fig:etaOvers} that the KSS bound is violated for $\lambda/\lambda_p>0$, or in other words for all physically acceptable values of $\lambda$, since $\lambda_p < 0$. In six dimensions solving equations \eqref{MT01} and \eqref{MT02} yields four different solutions for the mass and temperature.
However the only branch that satisfies the criteria for a physical solution is \begin{eqnarray} M_{d=6}&=&\frac{8 r_+^5-3 \pi \ell^2 r_+^4 \sqrt{Y}+3 \pi \ell^2 r_+^4 \sqrt{\frac{3 r_+^3}{4 \pi ^3 \lambda \sqrt{Y}}-Y}}{2\pi \ell^2},\nonumber\\ T_{d=6}&=&\frac{\sqrt{Y}}{2}-\frac{1}{2} \sqrt{\frac{3 r_+^3}{4 \pi ^3 \lambda \sqrt{Y}}-Y}, \end{eqnarray} where \begin{eqnarray} Y= \frac{W}{4 \sqrt[3]{2} \pi ^2 \lambda \ell^2}+\frac{5 r_+^4}{2^{2/3} \pi ^2 W}, \qquad W = \sqrt[3]{9 \lambda \ell^6 r_+^6+\sqrt{81 \lambda^2 \ell^{12} r_+^{12}-4000 \lambda^3 \ell^6 r_+^{12}}}. \quad \end{eqnarray} Here, the mass is positive for $0<\lambda<\lambda_p$ and vanishes (along with the entropy) at $\lambda=\lambda_p$; from eq.~\reef{deq6d} we find \begin{eqnarray} \gamma^2&=&\frac{2 \left(8 f_{\infty}^3 \lambda+3 \ell^6\right)^3}{675 f_{\infty} \ell^{16} \lambda m^2}, \end{eqnarray} which is positive both for $\lambda > 0$ and for sufficiently negative $\lambda$. On the other hand, even though the overall structure of the vacuum solutions looks like what is exhibited in the right graph of figure \ref{finfplot}, in this case the upper branch with negative $\lambda$ is associated with the existence of ghosts -- it is therefore excluded from further consideration. The lower negative $\lambda$ branch yields negative $\gamma^2$. Hence only positive values of $\lambda$ yield physical solutions with $\gamma^2>0$. However there is an upper bound on the positive coupling that is enforced by positivity of the mass. Figure~\ref{fig:etaOvers} shows that, starting from the same value of the ratio at zero coupling, for small $|\lambda|$ the ratio in four dimensions is initially larger than in $d=6$, but then declines to smaller values. Both curves blow up as the pole is approached. In these cases the KSS bound holds, and the mass remains positive for $0<|\lambda|<|\lambda_p|$.
We anticipate similar behaviour in seven and higher dimensions, namely that positive coupling is required for correct asymptotic behaviour and physical mass, and also satisfies the KSS bound. It is known that there is an upper bound on the coupling restricting the existence of acceptable CFT duals, so the whole range of $\lambda \in (0, |\lambda_p|)$ cannot possess a holographic interpretation~\cite{Hofman:2008ar, Myers:2010jv}. We postpone addressing this issue to future investigations. If both cubic and quartic couplings are nonzero, there is still a point in parameter space for which the entropy vanishes. This is a singular point. It cannot be removed since $a_2$, which appears at zeroth order in the $\eta$ expansion (similar to eq.~\reef{etaexp}), is computed for the values of the couplings at the pole. Corresponding results for cubic gravity are discussed in \cite{mir:2018mmm}. \section{Discussion} \label{discuss} We have investigated charged static spherically symmetric AdS black holes for both spherical ($k=1$) and hyperbolic ($k=-1$) geometries in generalized quasi-topological gravity (GQG). These theories are of notable interest since this class of solutions has a single metric function, analogous to Lovelock and quasi-topological gravity at the same order. We have considered both cubic and quartic GQG to see how these additional terms modify the results of Einstein gravity in four, five, and six dimensions. Although the metric function cannot be obtained analytically, it is feasible to find both the asymptotic and near horizon behaviour of the metric perturbatively. We then apply the shooting method to verify that these solutions match in the intermediate region. The near horizon expansion characterizes the mass and temperature of the black holes, and therefore the thermodynamics of the black hole can be completely understood, despite the lack of an exact solution.
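The matching procedure just described mirrors a textbook shooting method: the undetermined near-horizon coefficient (here $a_2$) is tuned until the integrated solution meets the asymptotic data. A minimal sketch of the idea on a toy two-point problem, with a free initial slope $s$ playing the role of $a_2$ (the toy equation $u''=u$ with $u(0)=0$, $u(1)=1$ stands in for the actual field equations):

```python
import math
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(s):
    """Integrate outward with trial slope s and return the boundary mismatch."""
    def rhs(x, y):            # y = (u, u'); toy equation u'' = u
        return [y[1], y[0]]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0  # mismatch with the target u(1) = 1

# Root-find on the mismatch; the exact answer is u'(0) = 1/sinh(1).
s_star = brentq(shoot, 0.0, 2.0)
print(s_star, 1.0 / math.sinh(1.0))
```

In the black hole problem the "far" boundary condition is the perturbative asymptotic expansion rather than a fixed value, but the tune-integrate-compare loop is the same.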
Furthermore our numerical considerations demonstrate that for either fixed cubic or fixed quartic coupling, increasing the electric charge correspondingly decreases the horizon radius. On the other hand, at fixed charge, enlarging the coupling has the effect of increasing the horizon radius. Investigating the solutions near the origin $r = 0$ in four dimensions, we find that the curvature scalar singularity is softened. Taking the cosmological constant and the cubic and quartic couplings as thermodynamic variables, we examined the thermodynamic properties of the given spherically symmetric configuration in detail, including verification of the extended first law and Smarr relation. However not all solutions obey standard physical requirements (positive mass, positive entropy, AdS asymptotics, and the no-ghost condition), so we constructed the physical constraints between the couplings and the charge that give the domain of parameters yielding physical critical points. Although in some regions of parameter space the black hole entropy can be negative, in even dimensions $d = 2n$ adding the Euler density of order $n$ to the action does not alter the equations of motion, but gives a positive contribution to the entropy that can render it positive. Working in both the fixed charge ensemble and the fixed chemical potential ensemble, we classified the phase structure and critical points for these black holes. In the fixed charge ensemble in four dimensions, we found that even for zero charge there are still physical critical points for $k=1$ when at least one of the couplings is non-zero (in contrast to Einstein gravity), and also for $k=-1$ when both couplings are nonzero. A first order VdW phase transition between small and large black holes was seen. For the first time we have observed critical points for a neutral hyperbolic black hole in any dimension, provided both the cubic and quartic couplings are nonvanishing.
This emphasizes the importance of the non-linearity of GQG in inducing new phase behaviour. In five dimensions we observed the occurrence of two physical critical points when both cubic and quartic couplings are nonzero, even for vanishing charge. These respectively correspond to the end point of a standard VdW transition and the starting point of a reverse VdW transition. However for the reverse VdW transition the pressures are smaller than the second critical pressure, and so one cannot choose parameters such that these two end points merge to form an isolated critical point -- the pressure along the second line of first-order phase transitions does not increase with temperature, unlike the first coexistence line. For hyperbolic black holes in five and six dimensions there are regions in parameter space that yield negative mass. In the $d=6,~k=+1$ case, we noted the existence of three critical points when all three parameters (the charge and the two couplings) are non-zero, and two critical points when only two of them are. Since the coexistence lines are both increasing functions of pressure with respect to temperature, it is possible to find parameter choices for which the critical points merge into an isolated critical point. We obtained the critical temperature and volume in terms of the electric charge and couplings in various dimensions up to first order in the quartic coupling $\lambda$. For spherical black holes in quartic gravity the universal relationship of Einstein gravity, given by the first term in \reef{pvtq}, receives dimension-dependent corrections -- it is no longer universal. These corrections are negligibly small in four and five dimensions; however in higher dimensions they cause the critical ratio \eqref{pvtq} to increase. We also analyzed the existence of physical critical points in the grand canonical ensemble in the quartic theory.
Obtaining the relevant thermodynamic quantities, we confirmed the presence of a first order phase transition in four dimensions that is absent in the corresponding situation for Schwarzschild black holes in both cases, for which either the chemical potential or the pressure is fixed. We also investigated phase transitions in higher dimensions. Finally, in the context of the AdS/CFT correspondence, we computed the ratio of shear viscosity to entropy density $\eta/s$ for field theories dual to the quartic generalized quasi-topological theory in all dimensions and concluded that in four dimensions the KSS bound held for choices of the quartic coupling yielding positive mass and temperature. In striking contrast, we found that this behaviour did not hold in five dimensions -- the range of coupling required for a physical solution entailed a violation of the KSS bound. However, the bound remains valid in four and six dimensions, and we anticipate it is satisfied in higher dimensions. It would be interesting to see if including yet higher order terms in the curvature can yield black holes satisfying both the KSS bound and all physically reasonable requirements in five dimensions as well as other dimensions. Further constraints could be imposed by considering other conditions in the corresponding CFT, such as causality constraints and positivity of energy flux; the implications of these require further investigation. \section*{Acknowledgements} We thank Robie Hennigar for discussions in the early stages of this work. M. M. appreciates the hospitality of the University of Waterloo where this work was initiated. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} \label{sec:GenIntro} {For decades, the kinetics of alkali vapors have garnered interest given the role they play in atomic physics experiments, atomic line filters, thermionic generators, etc. More recently, mid-infrared diodes have become spectrally narrow enough to excite individual fine-structure components of the first resonance level in alkalis like potassium, rubidium, and cesium. This allowed for a fine-structure-specific three-level lasing scheme in buffered alkali vapors, and introduced a new application: the diode-pumped alkali laser (DPAL). Past research into optically-pumped alkali vapors outside the DPAL context has shown it is relatively easy for the alkali atoms to reach higher excitation levels, ionize, or undergo {chemical} reactions, {causing concerns for DPALs as well.}} {In earlier work,} Wu\cite{2009_Wu_Thesis} presented {a} model of DPAL ionization including laser rate equations, {but assumed that the electrons were in a Maxwell-Boltzmann distribution with the same temperature as the buffer gas.} Oliker et al.\cite{2014_OlikerEt_LossProcesses} studied {the effects of} similar kinetics in a model {that included computational fluid dynamics, and ray-tracing. Wallerstein et al.\cite{2018_WallersteinPerramRice} considered an even broader set of processes involving the bound electrons, and emphasized building intuition for the hierarchy of processes based on their timescales, and likely availability of reactants.} {However,} Zatsarinny et al.\cite{2014_ZatsarinnyEtc_CsSigmas}, and Markosyan et al.\cite{2016_MarkosyanKushner_16CsDpalPlasma,2018_Markosyan_MoreDpalPlasma}, {did} consider a non-Maxwell-Boltzmann (non-MB) electron energy distribution through the use of finely-grained lookup tables\cite{2004_StaffordKushner}. We also relax the Maxwell-Boltzmann assumption, but model, and evolve the electron distribution a bit differently, and focus on experimental comparisons.
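The sensitivity that motivates relaxing the Maxwell-Boltzmann assumption is easy to illustrate: a process whose threshold lies well above the thermal energy is driven almost entirely by the distribution's high-energy tail, so even a small non-MB tail can boost the rate coefficient by orders of magnitude. The cross section, tail fraction, and temperature below are hypothetical, chosen only to show the effect:

```python
import numpy as np

# Hypothetical threshold cross section (cm^2) with a 1.6 eV threshold.
def sigma(E):
    return np.where(E > 1.6, 1e-16 * (1.0 - 1.6 / E), 0.0)

def rate_coeff(f, E):
    """<sigma*v> in cm^3/s for an (unnormalized) energy distribution f(E).

    On a uniform grid the spacing cancels in the ratio of sums.
    """
    v = 5.93e7 * np.sqrt(E)          # electron speed in cm/s, E in eV
    return float(np.sum(sigma(E) * v * f) / np.sum(f))

E = np.arange(1e-3, 10.0, 5e-4)
kT = 0.04                                          # ~460 K, in eV
mb = np.sqrt(E) * np.exp(-E / kT)                  # Maxwell-Boltzmann shape
hot = mb + 1e-3 * np.sqrt(E) * np.exp(-E / 1.0)    # small hot (1 eV) tail

print(rate_coeff(mb, E), rate_coeff(hot, E))       # tail dominates the rate
```

Because the thermal electrons almost never exceed the threshold, a Maxwellian closure at the gas temperature drastically underestimates such rates, which is the distortion the models discussed above aim to avoid.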
{Because processes like electron impact transitions (EIT), electron impact ionization (EII), and recombination are sensitive to electron energy, {relaxing assumptions about the energy distribution} avoids underestimating ionization, and its effects. This {generalization} also poses numerical issues like stiffness, and maintaining positivity, which have been tackled for decades as well.} Some relevant techniques see little use in this context though, so we discuss the implementation of one approach that proved instrumental{. For the readers' benefit, we also mention issues with simple implementations of other approaches.} {The paper is organized as follows.} We give an overview of major processes, including those affecting the electron energy distribution in Section \ref{sec:PhysIntro}, while relegating details on inputs to Sections \ref{sec:A_FIRST}--\ref{sec:A_LAST}. We describe the numerical implementation in Section \ref{sec:NumIntro}. In Section \ref{sec:Applying}, we compare the model to experiments by Zhdanov et al.\cite{2017_ZhdanovEt_KDPAL_BufferStudy}. {Here, we also demonstrate, and explain the tendency for non-MB electrons to increase heat loading.} We present our conclusions in Section \ref{sec:Conc}. \section{Developing the physical picture} \label{sec:PhysIntro} {A typical DPAL medium includes around 0.9 atmospheres of helium, 0.1 atmospheres of some hydrocarbon buffer, and 1-10 parts per million of alkali vapor (e.g. 10$^{13}$-10$^{14}$/cm$^3$), at temperatures around 400-500K.} Diodes pump alkali atoms to the upper, more degenerate, resonance sub-level, ``D2'', (4\,$^2$P$_{3/2}$ for K). Fine-structure mixing collisions with the buffer drive transitions to the lower resonance sub-level, ``D1'' (4\,$^2$P$_{1/2}$), which is the state used for lasing (fig.~\ref{fig:CartoonK}). Higher pressure designs may achieve enough mixing with less or no molecular buffer, thus avoiding thermal-lensing, and buffer chemistry issues. {For example,} Zhdanov et al.
\cite{2007_ZhdanovKnize_KHeDemo} showed that just two atmospheres of helium sufficed to mix {D1 and D2} in potassium. \begin{figure}[!htbp] \includegraphics[width=0.6\textwidth]{f1_rev.eps} \caption{\label{fig:CartoonK} Schematic potassium Grotrian diagram showing fine-structure multiplets modeled, and ideal lasing scheme, energy pooling among resonance-level atoms, and dominant ionization channels (left), along with their characterstic rates. Radiative decays with characteristic timescales, are also shown } \end{figure} Deviation from ideal operation begins when energy-pooling collisions {(EP)} between resonant-level alkali populate ``pioneer'' Rydberg levels with rates\cite{1997_NamiotkaHA_EP_K,1983_BarbierCheret_EP_Rb,1996_VadlaEtc_CsPooling,1996_JabbourEtal_CsEPdata} spanning $10^{-11}$ to $10^{-9}\text{cm$^3$/s}$. Collisional ionization (CI), (single) photo-ionization (PhI), {are the fastest deleterious pathways out of these levels. Decays (unhindered by radiation trapping) to the 5\,$^2$S$_J$ and 3\,$^2$D$_J$ multiplets, and energy-quenching collisions (EQ)} will counter these processes{}. Buffer collisions dominate EQ, and Earl and Herm \cite{1974J_EarlHerm_EQ_NaKxMs} find that methane quenches the 5\,$^2$P$_J$ level with a roughly 60\AA$^{2}$ cross section. For our default model, we assume this 5\,$^2$P$_J$ quenching populates 3\,$^2$D$_J$ and 5\,$^2$S$_J$, and that methane quenches 4\,$^2$D$_J$ and 6\,$^2$S$_J$ to 5\,$^2$P$_J$ with a similar cross section given the similarity of all energy gaps involved (see Section \ref{sec:A_Heavies}). Helium quenching becomes more significant at much higher levels, which we do not resolve in this work. 
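As a quick magnitude check, a cross section of this kind converts to a thermal rate constant via $k = \sigma \langle v_{\mathrm{rel}} \rangle$, with $\langle v_{\mathrm{rel}}\rangle = \sqrt{8\kboltz{}T/\pi\mu}$ for reduced mass $\mu$. A minimal sketch (Python); the 450\,K temperature and the K-CH$_4$ mass pairing here are illustrative assumptions, not values taken from the references:

```python
import math

def quench_rate_constant(sigma_cm2, T_K, m1_amu, m2_amu):
    """Thermal rate constant k = sigma * <v_rel> in cm^3/s, where
    <v_rel> = sqrt(8 kB T / (pi * mu)) for reduced mass mu (CGS units)."""
    kB = 1.380649e-16           # Boltzmann constant, erg/K
    amu = 1.66053907e-24        # atomic mass unit, g
    mu = m1_amu * m2_amu / (m1_amu + m2_amu) * amu
    v_rel = math.sqrt(8.0 * kB * T_K / (math.pi * mu))  # cm/s
    return sigma_cm2 * v_rel

# ~60 Angstrom^2 methane quenching cross section (Earl and Herm),
# K + CH4 at an assumed 450 K:
k = quench_rate_constant(60e-16, 450.0, 39.0, 16.0)
print(f"k ~ {k:.1e} cm^3/s")    # a few times 1e-10 cm^3/s
```

At typical buffer densities this corresponds to quenching frequencies well above the radiative decay rates, which is why these channels matter for the level populations.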
The following subsections discuss some processes, or aspects of them, that tend to receive less attention, while the appendices give more specific implementation details. \subsection{Collisional and photon ionization} Explicit data for potassium is scarce, but data on other alkali, and the intermolecular potential curves\cite{1988_BrencherEtal_CI_K}, indicate that Penning ionization involving a resonance-level and a pioneer-level atom -- producing just an atomic ion -- forms the primary collisional ionization (CI) channel. Looking at rubidium, Barbier et al.\cite{1987_BarbierPC_CI_RbAnalogExp} found that associative ionization between one resonance-level and one higher energy alkali atom becomes orders of magnitude weaker than Penning ionization, and they were able to explain this theoretically by invoking electronic exchange\cite{1987_BarbierPC_CI_RbAnalogy}. This leaves collisions between one D1/D2 and one 5\,$^2$S$_J$/3\,$^2$D$_J$ atom as the most plausible route for direct ${\rm K}_2^+$ production, but processes described below can influence the dimer cation fraction far more effectively. Predictions by Aymar et al.\cite{1976_AymarEt_yIs_Kspd}, and Zatsarinny et al.\cite{2010_ZatsarinnyTayal_yIs_Kspd}, for photo-ionization (PhI) cross-sections near D1 and D2 wavelengths imply that photo-ionization exceeds any CI channel above about 10kW/cm$^2$ for $10^{14}$/cm$^3$ alkali densities. Note that both the dominant CI and PhI channels give free electrons an initial kinetic energy of 0.3-0.6eV. \begin{figure}[!htbp] \includegraphics[width=0.5\textwidth]{InvTauEe_180828a.eps} \caption{\label{fig:eTimes} For an electron with energy $E_e$, the main curves show the average gain (red) or loss (blue) rates -- as defined by $(dE/dt)/E$ -- due to each component in a ``typical'' active DPAL. Note that the net effect of pumped potassium is purely energy gain. The gray curve shows the rate for electrons to break down methane.} \end{figure} \subsection{Elastic and inelastic energy transfer (free electrons)} A new host of processes enters the picture once ions appear. Fig.~\ref{fig:eTimes} sketches gain and loss timescales for electrons in a typical potassium DPAL. For the usual extent of population inversion, electrons experience net energy gain from collisions with potassium. Inversion between D1 and D2 even gives a noticeable contribution, a fact previously recognized by Markosyan and Kushner\cite{2016_MarkosyanKushner_16CsDpalPlasma}, and Zatsarinny et al.\cite{2014_ZatsarinnyEtc_CsSigmas}. Even if all the alkali were ionized, energy transfer via elastic collisions with helium would still dominate electron-electron energy transfer. Energy transfer through inelastic collisions with a hydrocarbon buffer can easily surpass the energy transfer to helium, as fig.~\ref{fig:eTimes} illustrates (cf. Section \ref{sec:A_EIT}). \subsection{Hydrocarbon breakdown mechanisms} Hydrocarbons may also present a target for electron impact breakdown, or excited-alkali reactions. For methane specifically, ${\rm CH}_4 + {\rm e}^- \rightarrow {\rm CH}_3 + {\rm H}^-$ dominates electron-based breakdown\cite{2015_SongEtal_eIX_Methane}. Data on subsequent pathways is sparse for heavy alkali, but Penning detachment predictions for other excited alkali-${\rm H}^-$ collisions\cite{1997_MartinBerry_eDetach_HnegxLiOrHe,1997_MartinBerry_eDetach_HnegxLiOrCa}, and measurements for oxygen anions striking excited oxygen molecules\cite{2008_MideyDV_OnegExcO2_BigPenDet}, suggest that ${\rm KH}+{\rm e}^-$ formation using the abundant ${\rm K}^*$ will have cross sections of order $10^{-14}$cm$^2$. Penning detachment using the ground state is much weaker, and has a high threshold\cite{2016_WuLYWJanev_HnegLi_LowPenDet}.
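For scale, such a cross section can be converted into a characteristic ${\rm H}^-$ survival time against Penning detachment by the abundant excited alkali. A minimal sketch (Python); the ${\rm K}^*$ density, temperature, and reduced mass below are illustrative assumptions, not measured inputs:

```python
import math

def collision_frequency(n_cm3, sigma_cm2, T_K, mu_amu):
    """Mean collision frequency n * sigma * <v_rel> in 1/s, using the
    thermal mean relative speed for reduced mass mu (CGS units)."""
    kB, amu = 1.380649e-16, 1.66053907e-24
    v_rel = math.sqrt(8.0 * kB * T_K / (math.pi * mu_amu * amu))  # cm/s
    return n_cm3 * sigma_cm2 * v_rel

# Assumed illustrative numbers: K* density ~1e13/cm^3, Penning-detachment
# cross section ~1e-14 cm^2, H^- + K reduced mass ~0.98 amu, T = 450 K:
nu = collision_frequency(1e13, 1e-14, 450.0, 39.0 / 40.0)
print(f"H^- survival time ~ {1.0/nu*1e6:.0f} us")
```

Under these assumed numbers the ${\rm H}^-$ lifetime comes out in the tens of microseconds, short enough to matter on the pulse timescales considered later.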
Regarding excited-state chemical reactions, Azyazov et al.\cite{2017_AzyazovEt_ExcRbXM_H2orCH4orC2H6} excited rubidium to its second resonance level, and attempted to measure both reactive and non-reactive quenching with various molecules. For methane in particular, they obtained a reactive branching ratio of $0.04\pm0.03$, and detected no RbH -- the most likely product expected based on their own theoretical calculations -- thus concluding the reaction rate was negligible. Given this limited, negative data for reactive pathways for hydrocarbon breakdown, we do not include such processes for now. \subsection{Recombination, and related dimer association rates} Regarding recombination, the introduction references have already noted a hierarchy where dissociative recombination (DR) dominates neutral- and electron-mediated three-body recombination (NMR, EMR), and radiative recombination trails far behind. The more energetic, non-MB nature of the electrons renders recombination -- especially three-body channels -- more difficult than previously expected. Unfortunately, the dominant DR channel is still riddled with uncertainties in the energy-dependent cross section, product states, and auxiliary processes affecting the dimer ion population, like association. As far as product states for DR, the rules of thumb relating potential crossings to favored product states\cite{1990_Mitchell_DRetcReview}, and analogies to experiments that do resolve products (e.g. Le Padellec et al.\cite{1999_LePadellecEt_DR1} for CN$^+$), indicate that the products of DR will likely be: one ground state atom, and one excited to the pooling levels, or levels just above. Arimondo et al.\cite{1985_ArimondoEtc_DatapointCsDR} examined DR for argon-buffered cesium vapor at relevant temperatures and densities, and also reported a best fit rate for association, $\mathrm{Cs}^+ + \mathrm{Cs} + \mathrm{Ar} \rightarrow \mathrm{Cs}_2^+ + \mathrm{Ar}$, of $2.3\times10^{-23}$cm$^6$/s, but noted the error might be an order of magnitude. To our knowledge, the nearest similar measurements are for $\mathrm{Cs}^+ + \mathrm{Cs} + \mathrm{Cs} \rightarrow \mathrm{Cs}_2^+ + \mathrm{Cs}$ at similar temperature, but lower buffer pressure, by Bergman and Chanin\cite{1971_BergmanChanin_CsXRates}, and Morgulis and Korchevoi\cite{1968_MorgulisKorchevoi}, who reported rates on the order of $10^{-30}$cm$^6$/s, and $10^{-26}$cm$^6$/s, respectively. The critical, and uncertain, nature of the association rate warrants some brief technical discussion, especially to motivate the lower values that we later find necessary. In a two-step termolecular reaction model\cite{2015_AttachmentRxnsBook}, the {\rm K} and {\rm K}$^{+}$ form a metastable $({\rm K}_2^{+})^*$ with some rate constant, $k_f$. This metastable state dissociates in a characteristic lifetime, $\tau$, in the absence of stabilizing third-body collisions, which have characteristic rate $k_s = n_B\lrangSUscr{\sigma v}{\mboxss{stab}}{}$, where $n_B$ is the buffer density. For an intermediate population in equilibrium, the effective association rate constant is: \begin{equation} k_{\mboxss{eff}} = \dfrac{ k_f k_s }{ 1/\tau + k_s }. \end{equation} The association rate reduces to $k_{\mboxss{eff}} \approx k_f$ when stabilization proceeds much faster than metastable disintegration, or $k_{\mboxss{eff}} \approx k_f k_s \tau $ in the opposite limit.
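These limits are easy to sketch numerically. In the snippet below (Python), the capture value used for $k_f$ is the standard Langevin rate constant, $2\pi\sqrt{\alpha/\mu}$ in atomic units; the polarizability, buffer density, and metastable lifetime are illustrative assumptions rather than fitted values:

```python
import math

# Atomic-unit conversions
A0_CM = 5.29177e-9             # Bohr radius in cm
T_AU = 2.41888e-17             # atomic unit of time in s
K_AU = A0_CM**3 / T_AU         # atomic unit of rate constant, cm^3/s

def langevin_kf(alpha_au, mu_amu):
    """Langevin capture rate constant, 2*pi*sqrt(alpha/mu) in a.u.,
    converted to cm^3/s (equivalent to sigma_L * v for an ion-induced
    dipole interaction)."""
    mu_au = mu_amu * 1822.889  # amu -> electron masses
    return 2.0 * math.pi * math.sqrt(alpha_au / mu_au) * K_AU

def k_eff(kf, ks, tau):
    """Effective association rate constant for the two-step model:
    kf*ks/(1/tau + ks), with ks the stabilization frequency in 1/s."""
    return kf * ks / (1.0 / tau + ks)

# Illustrative (assumed) numbers: alpha(K) ~ 290 a.u., mu = m_K/2,
# ~1 atm of buffer at ~450 K, slow-stabilization cross section:
kf = langevin_kf(290.0, 19.55)      # a few 1e-9 cm^3/s
ks = 1.6e19 * 1e-12                 # n_B * <sigma v>_stab, 1/s
print(k_eff(kf, ks, 1e-9))          # buffer-limited: ~ kf * ks * tau
print(k_eff(kf, ks, 10.0))          # saturated: ~ kf
```

Dividing the buffer-limited $k_{\mboxss{eff}}$ by $n_B$ recovers an equivalent three-body rate constant, which is the quantity quoted in the association measurements discussed below.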
For an ion-induced dipole interaction\footnote{High ionization conceivably screens this interaction, but our estimates for the Debye length, even when every alkali is ionized, are on the order of 1000\AA.}, the appropriate Langevin cross section just depends on (dipole) polarizability, $\alpha$, and collision energy, $E$, as $\sigma_L = \pi(2\alpha/E)^{1/2}$ (in atomic units). Taking polarizabilities from Mitroy et al.\cite{2010_MitroyEt_Alphas} leads to $k_f \approx 6\times 10^{-9}\mbox{cm$^3$/s}$. Depending on whether the stabilizing atom ``sees'' the ion, and has a Langevin cross section to stabilize, or just a typical momentum transfer cross section, the stabilization rate constant will be $\lrangSUscr{\sigma v}{\mboxss{stab}}{} \approx 10^{-12}\mbox{cm$^3$/s}$ or $\approx 10^{-8}\mbox{cm$^3$/s}$. In order for the three-body rate constant in Arimondo et al. to be as large as $10^{-23}\mbox{cm$^6$/s}$, yet have the process scale with buffer density, the characteristic $\tau$ must be $10^{-3}$s or $10^{-7}$s depending on which limit of $k_s$ applies, but each value is roughly a thousand times greater than the respective $1/k_s$, contradicting the condition $\tau \ll 1/k_s$ required for buffer density dependence. For an atmosphere or so of buffer, the $k_f \approx 6\times 10^{-9}\mbox{cm$^3$/s}$ estimated above coincides instead with a ``three-body'' rate around $10^{-26}\mbox{cm$^6$/s}$. \subsection{Miscellaneous processes} Dissociative collisions by excited alkali, and electron impacts -- called dissociative excitation (DE) -- can counteract association. We model the former process based on measurements for sodium by Tapalian and Smith\cite{1994_TapalianSmith_Na2p_dissoc} (Section \ref{sec:A_FIRST}). Peak cross sections and energy dependence for DE are fairly uniform among a variety of cation dimers\cite{2015_MotaponEt_DR1, 2011_LecointreEt_DR1, 2004_ElGhazalyEt_DR1, 1999_LePadellecEt_DR1}, so we also included a toy model of DE (Section \ref{sec:A_DRDE}), but found that it played a weak role.
Neutral dimer association can divert population from the lasing cycle, counteracted by dissociation due to excited alkali\cite{2005_BanAP_Rb2_destr} (Section \ref{sec:A_FIRST}). The neutral and ion dimer populations can interact directly through charge exchange, but at the moment we do not model this. \section{Implementing the model} \label{sec:NumIntro} Now we must simulate all the physics discussed above. Processes involving just the thermalized, massive particles follow standard rate equations. For the basic three-level cycle, laser intensity, and pump term, we adopt the model of Hager and Perram\cite{2010_HagerPerram_3Lvl1}, which we summarize below. To represent the non-MB electrons, we bin the spectrum to obtain ordinary differential equations, but this introduces issues with energy conservation, which we also address here. We model the core laser kinetics by source contributions ($\Delta s_x$) to densities of the first three levels, $n_i$ (starting at $i=1$ for the ground state), and the two-way laser photon density, $\psi_L$: \begin{align} \Delta s_1 &= \sigma_{31}(n_3-2n_1)\omega + \sigma_{21}(n_2-n_1)\psi_L + n_2 \Gamma_{21} +n_3 \Gamma_{31}, \\ \Delta s_2 &= -\sigma_{21}(n_2-n_1)\psi_L - n_2\Gamma_{21} +\gamma_{\mboxss{mix}}(n_3-2\exp[-\theta]n_2), \\ \Delta s_3 &= -\sigma_{31}(n_3-2n_1)\omega - n_3 \Gamma_{31} -\gamma_{\mboxss{mix}}(n_3-2\exp[-\theta]n_2), \\ \Delta s_L &= (rt^4\exp[2\sigma_{21}(n_2-n_1)l_g]-1)\psi_L/\tau_{\mboxss{RT}} + \lbrace \text{ $f\times n_1 A_{10}/\tau_{1}$ }\rbrace \end{align} where \begin{equation*} \omega = \dfrac{I_{\mboxss{P,in}}/h\nu_P}{\sigma_{31}(n_3-2n_1)l_g} \left(\exp[\sigma_{31}(n_3-2n_1)l_g]-1\right) t_P \left(1+t_P^2 r_P \exp[\sigma_{31}(n_3-2n_1)l_g]\right) \end{equation*} is the absorption rate based on input pump intensity $I_{\mboxss{P,in}}$.
The symbols are: $g_i$, the level degeneracies; $A_{ij}$, radiative transition rates; $\Gamma_{ij}$, total decay rates; $\gamma_{\mboxss{mix}}$, the fine-structure mixing rate; $\tau_{\mboxss{RT}}$, the cavity round-trip time; $l_g$, the gain length; $r_x$, the output coupler reflection coefficients; $t_x$, the intracavity transmission coefficients (set to 1 throughout); and $\theta=(E_3-E_2)/(\kboltz{}T)$, where $T$ and $\kboltz{}$ are temperature and the Boltzmann constant. The last term in the laser equation is a rough estimate of the spontaneous emission seed for lasing, which we retain since our method is still based on evolving the equations in time, whether we apply it to finding steady-state solutions or not. The laser term contains a potentially very large exponential factor times the inverse cavity timescale. We can address this extreme stiffness by keeping the overall effective timescale above $0.01\tau_{\mboxss{RT}}$ via \begin{equation} \dfrac{d\psi_L}{dt} \rightarrow a^{-1} \tanh\left[a\left(r_L \exp[2 \sigma \Delta{n} l_g] - 1 \right)\right]\psi_L/\tau_{\mboxss{RT}} \end{equation} with $a=0.01$, for example. Testing with different $a$, and with the raw equations, showed that this still preserved behavior on timescales of interest while avoiding situations where the code had to switch to excessively small timesteps. We also track contributions to the general buffer specific energy density (the thermal bath) due to mixing of the D1 and D2 energy levels ($E_2$, $E_3$): \begin{equation} \Delta s_{\mboxss{eb}} = \gamma_{\mboxss{mix}} (n_3-2 n_2\,\exp[-\theta]) (E_3-E_2). \end{equation} Note that energy exchange with the thermal bath also gets tallied for the other processes (like pooling and quenching). Ensuring global energy conservation with a binned electron spectrum and an irregular grid is not trivial. Solutions include: modifying the target-bin branching ratios to conserve energy at the cost of straying from the discretized differential cross section, and adopting a better, higher-order ``sub-grid'' model than a flat-top distribution (see e.g. Le and Cambier\cite{2017_HaiJl_SubGrid}). The sub-grid approach avoids unsavory tweaks to the cross sections, and better handles transitions with energy less than a bin width. For now though, re-weighting is simpler to implement and test, but the higher-order discretization warrants further consideration. This accounts for ionization, recombination, and inelastic collisions for free electrons, which just leaves energy transfer with the buffer. We base our formulation on the Lorentz model with a correction for the finite temperature of the buffer particles (see e.g. Loureiro \& Amorim\cite{2016_LoureiroAmorimBook}, where it is given in terms of velocity magnitude): \begin{equation} \label{eqn:drag1} \partial_t n_E = \partial_{E} \left[ \dfrac{E-\kboltz{}T/2}{\tau_{\mboxss{exch}}(E)} n_E + \dfrac{E\,\kboltz{}T}{\tau_{\mboxss{exch}}(E)} \partial_{E} n_E \right], \end{equation} where $E$ is electron energy, $n_E$ the electron differential energy density, and $\tau_{\mboxss{exch}}$ the characteristic energy exchange timescale. We implement the discretized version as a series of upwinded advection fluxes between neighboring bins, plus a diffusion term, so the resulting source terms are more automatically conservative. See Section \ref{sec:A_drag} for details. \subsection{Conservation-enforcing projection \`a la Sandu (2001)} \label{sec:SanduProj} Resolving trace populations while avoiding negative densities and conservation error is a familiar challenge, motivating various approaches over the years.
For example, Preussner and Brand\cite{1981_PreussnerBrand}, and Bertolazzi\cite{1996_Bertolazzi_PosConsKins}, focus on preserving non-negativity with semi-implicit and implicit methods, respectively. Instead of focusing on the integration method, Sandu\cite{2001_Sandu_PosProj} remarks that finding a point obeying conservation rules, closest to some arbitrary method's guess, just defines a linearly-constrained quadratic optimization problem. This approach is highly general, avoids long-term drift in error, and forms the backbone of our code. The projection relies on expressing conservation laws as a linear constraint: \begin{equation} A^{\intercal} {\bf n}' - A^{\intercal} {\bf n}_0 -{\bf x} = 0, \end{equation} where ${\bf n}_0$ is some previous, trusted population vector, and ${\bf x}=0$ absent any external sources/sinks. Sandu reviews how to find $A$ given any stoichiometry, but alkali number and charge conservation rows can be written by inspection for our problem: \begin{alignat}{9} A_N^{\intercal} &= (&1,\hdots,1,&\quad && 2,\quad && 1,\quad && 2,\quad &&~1,~1,\hdots\quad &0,\hdots&), &&\mbox{ and }A_N^{\intercal} {\bf n} = n_{\mboxss{alk}}(t_0) \\ A_Q^{\intercal} &= (&0,\hdots,0,&\quad && 0,\quad && 1,\quad && 1,\quad &&-1,-1,\hdots\quad &0,\hdots&), &&\mbox{ and }A_Q^{\intercal} {\bf n} = 0 \\ & &\mbox{\small ${\rm K}({\rm nLJ})$}& ~ &&\mbox{\small ${\rm K}_2$} ~ &&\mbox{\small ${\rm K}^{+}$} ~ &&\mbox{\small ${\rm K}_2^{+}$} ~ &&\mbox{~e$^-$ {\small bins}} ~ &\mbox{\small misc.}& & \nonumber \end{alignat} In principle, neutral buffers will contribute their own rows, and internal energy conservation can be included in a similar fashion. In practice, buffer conservation holds well either way, while adding internal energy means adding a large external source term (the pump). For large timesteps, the energy conservation correction tends to interfere with the others, so some numerical error from the (time) integration method can still affect energy for now. This is not an issue for steady-state-seeking simulations, as any equilibrium is conservative by construction of the source terms. This $A$ then features in the optimization problem \begin{equation} {\bf n}^{n+1} = \mathrm{argmin} \left(\dfrac{1}{2} ({\bf n}^{n+1})^{\intercal} G\, {\bf n}^{n+1} - (G \tilde{\bf n}^{n+1})^{\intercal} {\bf n}^{n+1} \right) : A^{\intercal}{\bf n}^{n+1}=y_0,\; {\bf n}^{n+1}\geq \eps. \label{eq:SanduOptProblem_WithG} \end{equation} where $\tilde{\bf n}^{n+1}$ is the solver's uncorrected guess, and $G$ is the error metric for the numerical solver. In Sandu's example, $G$ is a diagonal matrix \begin{equation*} G_{ii} = 1/\left(N_{\mboxss{species}}(\mathrm{tol}_{\mboxss{abs}}+\mathrm{tol}_{\mboxss{rel}}|n_i|)^2\right). \end{equation*} The weighting assumes extra importance in a problem like ours with many levels of the same atom (i.e. a long row of zeros in $A^{\intercal}$). Without any weighting, for example ($G_{ij}\equiv\delta_{ij}$), the sea of trace populations will ebb and flow when the projection corrects changes in large, dynamic populations. This tool frees us to take steps $\Delta{\bf n}$ using a simple integration method like linearized implicit Euler: \begin{equation} (I-\Delta{t} J)\Delta{\bf n} = {\bf s} \Delta{t}, \end{equation} where ${\bf s}$ is the total source vector, and $J$ the Jacobian. \subsection{Alternative approaches} \label{sec:AltMethods} We consider it worth mentioning here some alternative approaches, and the immediate issues we faced applying them.
Computational singular perturbation (CSP)\cite{1994_LamGoussis_CSP, 2005_ValoraniEt_CSP} takes the opposite approach: instead of addressing accuracy issues for implicit methods, it addresses stiffness issues for explicit ones. It achieves this by breaking the source term into modes with characteristic timescales, typically found by straightforward eigendecomposition, or block diagonalization, of the Jacobian, with a possible higher-order correction for the evolution of the bases. `Fast' modes can reach a quasi-equilibrium with respect to the slow ones (or `exhaust'). For example, the laser levels in our problem often reach a quasi-equilibrium that tracks the slow changes caused by pooling, etc. CSP works well on the `conventional' and three-level kinetics of our problem, but transfer between the electron bins alters modes on a fast (Courant-Friedrichs-Lewy) timescale, generating fast modes which do not exhaust for a long time without the (expensive) higher-order corrections. Simple attempts at splitting off processes that only shift electrons in energy helped little, as the CSP modes still changed drastically between steps. Efficient application of CSP would likely require representing the electron spectrum differently, e.g. via moments, or some other coarser expansion. Applying the semi-implicit algorithm in Preussner and Brand is straightforward, and should complement the projection step nicely: it avoids negative densities that increase the cost of the projection, while the projection should reduce the need for very small steps to ensure conservation over long times. However, this approach still required small timesteps to avoid oscillations associated with the radiative component, and to a lesser extent, electron drag and excitation/de-excitation.
Given the lower cost per step, some variation that splits off the radiative component may still offer a practical path forward for larger, more expensive engineering simulations. \section{Applying the model} \label{sec:Applying} Having laid out the physical ingredients, main uncertainties, numerical techniques, and their issues, we can now compare predictions to experiments in a time-dependent, ionization-prone regime, as well as draw some general lessons from simulating more general, steady-state situations. \subsection{Testing against time-dependent experiments} \label{sec:Applying_ZRSK17} To avoid complications from thermal build-up and other slow processes, Zhdanov et al.\cite{2017_ZhdanovEt_KDPAL_BufferStudy} ran a set of experiments with time-varying, sub-millisecond pump pulses for a potassium vapor at 190$^{\circ}$C with varying buffer composition. Their main experiments used 500 Torr of buffer with a varying percentage of methane, and a peak pump power of 160W, translating to an intensity around 30kW/cm$^2$ for their reported beam profile. They reported laser output, as well as 5\,$^2$P$_J$-4\,$^2$S$_J$ fluorescence, but not in absolute units. They also measured 5\,$^2$P$_J$ fluorescence for trials with 200 Torr of pure helium, and pure argon, and saw that neither composition lased. We first ran the model with just the basic three-level model turned on to make sure it reproduced the lasing threshold, using the reported pump linewidth, laser beam profile, gain and cavity lengths, and pressure-broadening of the D1 and D2 transitions based on Pitz et al.\cite{2014_PitzEt_K4P_Broadening} (cf. Section \ref{sec:A_FIRST}).
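The kind of threshold check involved follows directly from the photon equation of Section \ref{sec:NumIntro}: at steady state the round-trip factor clamps at unity, $r\,t^4\exp[2\sigma_{21}(n_2-n_1)l_g]=1$, fixing the inversion. A minimal sketch (Python); the reflectivity, cross section, and gain length below are illustrative assumptions, not the experiment's actual parameters:

```python
import math

def threshold_inversion(r, t, sigma21, lg):
    """Inversion n2 - n1 (cm^-3) at which round-trip gain balances
    output coupling: r * t**4 * exp(2*sigma21*(n2-n1)*lg) = 1."""
    return math.log(1.0 / (r * t**4)) / (2.0 * sigma21 * lg)

# Illustrative assumed numbers: 20% output coupler reflectivity, unity
# intracavity transmission, pressure-broadened sigma ~ 1e-13 cm^2,
# 2 cm gain length:
dn_th = threshold_inversion(0.2, 1.0, 1e-13, 2.0)
print(f"threshold inversion ~ {dn_th:.1e} cm^-3")
```

For these assumed numbers the clamped inversion is a sizable fraction of a 10$^{13}$/cm$^3$ alkali density, which is why threshold behavior is sensitive to any process that depletes the lasing levels.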
Initial runs with the default kinetics, and slight variations, established some basic tenets of an `alkali-depletion' paradigm under these conditions: \begin{itemize} \item[ 1 ] \label{ten:EIIvsDR} past $\sim$1-10\% (alkali) ionization, impact ionization and DR dominate free electron population gain and loss \item[ 2 ] \label{ten:Sigmoid} this leads to sigmoidal growth of the ion fraction, where drag and pump intensity can also skew its shape, saturation timescale, and final ion fraction \item[ 3 ] \label{ten:SatMech} at late times, the barrier to more ionization is re-energizing low energy, post-impact electrons back above ionization thresholds before they recombine \item[ 4 ] \label{ten:5Ptrend} increased buffer drag delays and diminishes the peak in 5\,$^2$P$_J$ population, matching the trend of their 200 Torr experiments, but the early growth is never as linear in time as seen in the experiments \end{itemize} \begin{figure}[!htbp] \includegraphics[width=0.99\textwidth]{f3_rev.eps} \caption{\label{fig:RefRuns1} Solid curves show laser power output for pure helium and various amounts of methane, where {\it total} pressure remains 500 Torr for all cases. Dashed curves show ionization fraction with scale on the right axis, and the gray curve shows the pump profile in arbitrary units.} \end{figure} The simulations in fig.~\ref{fig:RefRuns1} illustrate the basic depletion mechanism, where the association rate for K$_2^+$ production was set to the equivalent of a $10^{-26}$cm$^6$/s three-body rate, and atomic-level quenching was reduced to 20\% of its default value. Since we assumed quenching most significantly affects the 5\,$^2$P$_J$, 6\,$^2$S$_J$, and 4\,$^2$D$_J$ levels, further reduction of the DR cross section, or faster impact ionization out of these levels, would generate similar behavior.
\begin{figure}[!htbp] \includegraphics[width=0.99\textwidth]{f4_rev.eps} \caption{\label{fig:espec_example} Dark to light solid curves show 10$\mu$s steps in electron spectrum growth for the pure helium simulation of fig.~\ref{fig:RefRuns1}. The dashed curve shows the spectrum for the 10:490 methane:helium mixture at late times. For contrast, the dotted curve shows a Maxwell-Boltzmann distribution with the gas temperature, and same total density as the last pure-helium curve.} \end{figure} Fig.~\ref{fig:espec_example} shows the evolution of the electron energy distribution in the pure helium run, a spectrum for one of the methane-added runs, and a buffer-temperature Maxwell-Boltzmann distribution with the same final electron count as the pure helium case. The non-MB curve is roughly one-tenth the MB curve below 0.1eV, so already the total recombination rate is significantly changed. The curve for the methane case highlights methane's ability to suppress not only electron number, but also mean electron energy. Figures \ref{fig:4SProcMags}-\ref{fig:D2ProcMags} break down various contributions to the source terms for the ground state, D1, and D2, which demonstrates the role of impact ionization here, and explains a strange phenomenon described below. \begin{figure}[!htbp] \begin{subfigure}[t]{0.449\textwidth} \includegraphics[width=1.00\textwidth]{srcs_4S1h_a.eps} \end{subfigure} \begin{subfigure}[t]{0.449\textwidth} \includegraphics[width=1.00\textwidth]{srcs_4S1h_b.eps} \end{subfigure} \caption{\label{fig:4SProcMags} Contributions/subtractions (solid/dashed) to the ground state from the sum of the basic three-level source terms (3L), EIT, radiative transfer (RT), collisional dissociation (CD), and association with K$^+$ forming K$_2^+$. The right panel adds EIT to three-level losses to highlight the rate of net loss via association.} \end{figure} \begin{figure}[!htbp] \begin{subfigure}[t]{0.449\textwidth} \includegraphics[width=1.00\textwidth]{srcs_4P1h_a.eps} \end{subfigure} \begin{subfigure}[t]{0.449\textwidth} \includegraphics[width=1.00\textwidth]{srcs_4P1h_b.eps} \end{subfigure} \caption{\label{fig:D1ProcMags} Same as fig.~\ref{fig:4SProcMags}, but for D1, so energy pooling (EP), collisional ionization (CI), impact ionization (EII), and impact mixing (EIM) are relevant. This time, radiative decays from levels above are included in the total for major processes. For the simulation parameters, impact fine-structure mixing has less effect on the long-term change in gain than the association rate in fig.~\ref{fig:4SProcMags}.} \end{figure} \begin{figure}[!htbp] \begin{subfigure}[t]{0.449\textwidth} \includegraphics[width=1.00\textwidth]{srcs_4P3h_a.eps} \end{subfigure} \begin{subfigure}[t]{0.449\textwidth} \includegraphics[width=1.00\textwidth]{srcs_4P3h_b.eps} \end{subfigure} \caption{\label{fig:D2ProcMags} Same as fig.~\ref{fig:D1ProcMags}, but for D2.} \end{figure} For pure helium, the rise in laser output slightly overshoots the other mixtures before cresting in fig.~\ref{fig:RefRuns1}, and the source term figures reveal how moderate ionization briefly boosts laser gain to cause this. From lasing onset -- seen as a spike around 20-30$\mu$s -- to 50$\mu$s or so into the simulation, the sum of three-level and impact transfer processes, and eventually association into ${\rm K}_{2}^+$, all drive significant ground state depletion. Meanwhile, D1 experiences net growth, thus raising laser gain until severe ionization depletes both populations. Net electron impact mixing of the D1 and D2 levels also switches sign around this time, as D2 atoms sufficiently outnumber D1 atoms. Artificially raising absorption cross sections by a factor of 5 dramatizes this effect in fig.~\ref{fig:ErrRuns1}.
This also leads to delayed, saturated curves for the methane mixtures resembling their output curves in the experiment, but if this mechanism does play a role in explaining the responses with methane, it cannot be the sole cause. For now, this overshoot serves as a hint for where we should examine the physics and methods further, e.g. perhaps the modeled electrons are too hot at early times, leaving excitation too strong versus de-excitation, or perhaps the ${\rm K}_{2}^+$ association rate is still too high. \begin{figure}[!htbp] \includegraphics[width=0.99\textwidth]{f8_rev.eps} \caption{\label{fig:ErrRuns1} Same as fig.~\ref{fig:RefRuns1}, but with artificially higher absorption cross sections. Overshoot during methane runs leads to the type of delayed \& saturated curves seen in experiments, except the trend of final laser output falling with methane density starts too soon, and it overshoots too much in the pure helium trial.} \end{figure} \subsection{Ionization can drive significant heat loading} \label{sec:Applying_Heat} Predictions of vigorous heat generation by free electrons in a pumped alkali vapor date at least as far back as a paper by Measures\cite{1970_Measures_EarlyKplas}. Additional, unexpected heating can hurt beam quality, among other effects, but this aspect of ionization receives relatively less attention, so we review some simple arguments behind strong heating. From fig.~\ref{fig:eTimes}, one sees that a typical electron born around 0.3-0.5 eV will frequently gain 1.6eV by de-exciting a K(4P) atom on a sub-\mbox{$\mu$s} timescale. Often, after one such jump, energy loss to the buffer is suddenly much faster. As ionization and recombination are orders of magnitude slower, this means a typical electron can convert a pump photon worth of energy to the buffer hundreds of times while free.
Meanwhile, bound electrons converting the $\approx 0.01$eV D1-D2 gap to heat at a 1-10 GHz rate via fine-structure mixing present the greatest heat source in ionization-free models, and the logical point of comparison: \begin{equation} f_{\mboxss{ioni}} n_A \mbox{(1 eV)} \mbox{(1 MHz)} \mbox{ vs. } n_A \mbox{(0.01 eV)} \mbox{(1 GHz)}. \end{equation} This shows the drag heat load approaches the inherent mixing one around 10\% ionization. Electron energy loss to methane will largely go into local heat as well: Menard-Bourcin et al. \cite{2005_MenardBourcinEt_EQVT_CH4xVar} give cross sections for vibrational-translational energy transfer for methane in helium, and the resulting rates exceed radiative decays based on Yurchenko et al. \cite{2013_YurchenkoEt_VibrFoscCH4} by several orders of magnitude. The code accounts for this. Fig. \ref{fig:Starfish} shows heat load, and fraction of it from mixing, helium energy transfer, methane energy transfer, atomic level quenching, and other sources, for a small grid of alkali density, and methane buffer fraction at a fixed value of angle-averaged and frequency-integrated pump intensity, $\bar{J}$ of 100kW/cm$^2$. At moderate alkali density, adding a small amount of methane drops ionization and associated heat load substantially. For the same total buffer density at higher alkali densities, methane does little besides replace helium as the pathway for heat generation. Total buffer density must be raised for such conditions. \begin{figure}[!htbp] \includegraphics[width=0.80\textwidth]{Qdot_Starfish3b.eps \caption{\label{fig:Starfish} ``Starfish'' arms show relative contributions from (4\,$^2$P$_J$) fine-structure mixing, potassium level quenching, energy transfer to helium, energy transfer to methane (when present), and miscellaneous processes according to the legend, while their hue indicates total heat load in Watts per cubic centimeter. 
The buffer pressure was one atmosphere in all these simulations.} \end{figure} \section{Conclusions} \label{sec:Conc} Active DPAL conditions fundamentally lead to a non-Maxwell-Boltzmann distribution for free electrons, and ignoring this effect underestimates the propensity of the alkali to ionize, as well as the extra heat load associated with ionization. By comparing the model to experiments performed by Zhdanov et al., whose variations in time and composition probed the transition from good to poor performance, we confirmed this basic point, but also found important discrepancies. The model predicted a slight, temporary boost to laser gain at early times for the pure helium case, in contrast to the experiment. The model also over-predicted the efficacy of methane. The latter is less surprising in the face of quenching uncertainties, while the former constrains more central aspects of the model, like impact ionization, drag, and alkali dimer ion association. To reduce modeling uncertainties, we have started implementing the higher-order discretization of the distribution, obtaining more accurate impact ionization estimates, and tying the higher-fidelity kinetics to a more detailed spatial model of the laser. We also intend to investigate three-body recombination with non-MB electrons more rigorously. The comparison illustrates the utility of time-dependent pumping on kinetics-relevant timescales for setting stronger constraints on models. Further variations on this idea, as well as more quantitative diagnostics for the basic three levels and excited states, are warranted. Direct measurements, or higher-quality estimates, of the most uncertain processes would obviously be most helpful too. \begin{acknowledgments} We thank Drs.\ Habib Najm, John Shadid, and their colleagues for helpful discussions, especially regarding CSP. HJC thanks Ben Oliker for fielding basic questions at the start of the project.
We also thank an anonymous referee for detailed feedback, which significantly improved the paper. \end{acknowledgments}
\section{Introduction} In this short note, we prove the following: \begin{theorem}\label{thm} For any $1\leq p \leq \infty$, any $m \in \mathbb{N}$ and any positive constants $C_1,C_2$, there is an open Riemannian manifold $(\mathcal{M},g)$ of dimension $m$ such that the $L^p$-Calder\'{o}n--Zygmund estimate is invalid. More precisely, there is a smooth function $f:\mathcal{M} \rightarrow \mathbb{R}$ such that \begin{equation}\label{E} \|\nabla \nabla f\|^p_{L^p} > C_1 \|\Delta f\|_{L^p}^p + C_2 \| f\|_{L^p}^p. \end{equation} \end{theorem} Throughout, a Riemannian manifold (without boundary) $(\mathcal{M},g)$ is said to be open if it is non-compact and geodesically complete. The $2$-tensor field $\nabla_g \nabla_g f$ denotes the Hessian of $f$. Its trace with respect to the metric is the Laplace--Beltrami operator, denoted by $\Delta_g f$. In this note, all the Hessians and Laplace--Beltrami operators shall be taken with respect to a fixed metric $g$ (the one in \eqref{metric} below); let us abbreviate $\nabla \nabla f :=\nabla_g\nabla_g f$ and $\Delta f :=\Delta_g f$. In the Euclidean space $\mathbb{R}^m$, the classical estimate \begin{equation}\label{cz} \|\nabla \nabla f\|^p_{L^p} \leq C_1 \|\Delta f\|_{L^p}^p + C_2 \| f\|_{L^p}^p\qquad \text{ for all } f \in C^\infty_c(\mathbb{R}^m) \text{ and every } p \in ]1,\infty[ \end{equation} was established by Calder\'{o}n--Zygmund in the seminal paper \cite{cz}. Here the constants $C_1$, $C_2$ depend only on $p$ and $m$. A natural question is the validity of \eqref{cz} on a Riemannian manifold $(\mathcal{M},g)$. Many works are devoted to proving \eqref{cz} on manifolds $(\mathcal{M},g)$ satisfying certain geometric assumptions, {\it e.g.}, the boundedness of Ricci or sectional curvatures, the boundedness of the injectivity radius away from zero, and the doubling property for the Riemannian volume measure.
We refer to Cheeger--Gromov--Taylor \cite{cgt}, Strichartz \cite{s}, Taylor \cite{t}, Wang \cite{w} and G\"{u}neysu--Pigola \cite{gp} for details; also see the many references cited therein. On the other hand, in \cite{gp} G\"{u}neysu--Pigola constructed a 2-dimensional complete manifold $(\mathcal{M},g)$ on which \eqref{cz} is invalid for $p=2$. To the author's knowledge, this is among the first negative results for the Calder\'{o}n--Zygmund estimates. Our goal here is to generalise the arguments in \cite{gp} to prove Theorem \ref{thm} for the whole range of indices $p \in [1,\infty]$ and $m \in \mathbb{Z}_{\geq 2}$. Before starting the proof, let us make three remarks: \begin{enumerate} \item Throughout this paper, $\|\bullet\|_{L^p}$ denotes the $L^p$-norm of tensor fields on $\mathcal{M}$, taken with respect to the metric $g$ in \eqref{metric} below. For the definition of, and discussions on, Sobolev spaces over manifolds, see {\it e.g.} Hebey \cite{h}. \item The Calder\'{o}n--Zygmund estimate is known to be false for $p=1$ and $p=\infty$ even on Euclidean spaces; see Ornstein \cite{o} and McMullen \cite{m}. It thus remains to prove Theorem \ref{thm} for $1<p<\infty$. \item Our proof is crucially based on the construction in Theorem B of \cite{gp} by G\"{u}neysu--Pigola. \end{enumerate} \section{Construction of the Manifold $(\mathcal{M},g)$} In this section, we construct the manifold $(\mathcal{M},g)$ which eventually leads to the proof of Theorem \ref{thm}. The presentation in this section works for all $m \in \mathbb{Z}_{\geq 2}$. It involves the choice of several parameters, which will be specified in subsequent sections. \smallskip \noindent {\bf Warped Product.} The manifold $\mathcal{M}$ we choose is the Euclidean space $\mathbb{R}^m$ equipped with the warped product metric: \begin{equation}\label{metric} g= dr\otimes dr + \sigma^2(r)\, {\mathbf{can}}^{m-1}, \end{equation} where ${\mathbf{can}}^{m-1}$ is the canonical round metric on the $(m-1)$-dimensional unit sphere.
Such an $(\mathcal{M},g)$ is known as a {\em warped product} manifold. Note that the space forms are special examples of warped products: $\mathcal{M}=\mathbb{S}^m$ if $\sigma=\sin$, $\mathcal{M}=\mathbb{R}^m$ if $\sigma={\bf Id}$, and $\mathcal{M} = \mathbb{H}^m$ if $\sigma = \sinh$. We shall choose the {\em warping function} $\sigma$ to be non-negative, smooth and growing to infinity as the radial coordinate $r \nearrow +\infty$. Thus $(\mathcal{M}, g)$ is an open manifold. Warped products are central objects of many recent works on geometric analysis; {\it cf. e.g.} \cite{gl} by Guan--Lu. \smallskip \noindent {\bf Green's Function.} Let $\widetilde{G}(x)$ be the Green's function of the Laplace--Beltrami operator on $\mathcal{M}$ as above. Since $g$ in \eqref{metric} is rotationally symmetric, there is a function $G:[0,\infty[\rightarrow \mathbb{R}$ such that $\widetilde{G}(x) = G(r)$ for $r:=|x|$. Writing $\Delta$ in polar coordinates, we find that \begin{equation}\label{green fn} \Delta \widetilde{G} = 0 \quad \Longleftrightarrow \quad G'' + (m-1) \frac{\sigma'}{\sigma}G' = 0\qquad \text{ on } \mathcal{M} \sim \{0\}. \end{equation} \smallskip \noindent {\bf Hessian and Laplacian.} The key idea of the construction, as in \cite{gp}, is to take $f$ to be a localised version of the Green's function $G$. For $k \in \mathbb{N}$ and $[\alpha_k, \beta_k] \subset \mathbb{R}$, let $\phi_k \in C^\infty_c ([\alpha_k, \beta_k])$ be a cut-off function. Here $\phi_k, \alpha_k, \beta_k$ will be specified later. Then, define \begin{equation}\label{uk} u_k(r) := \phi_k \circ G(r). \end{equation} In the end, one shall set $f:=u_k$ for some sufficiently large $k$.
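The identity $G'(r) = \sigma^{1-m}(r)$, invoked repeatedly below, follows from \eqref{green fn} by a single integration (we normalise the constant of integration to one):
\begin{equation*}
\frac{G''}{G'} = -(m-1)\frac{\sigma'}{\sigma} \quad\Longrightarrow\quad \log G' = (1-m)\log \sigma + \text{const} \quad\Longrightarrow\quad G'(r) = \sigma^{1-m}(r).
\end{equation*}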
Direct computations in \cite{gp} lead to the following formulae for the Hessian and the Laplacian of $u_k$, as well as the volume density of $g$: \begin{eqnarray} && \nabla \nabla u_k = u_k''\,dr\otimes dr + \sigma {\sigma'} u_k'\, {\mathbf{can}}^{m-1}, \label{hessian}\\ && \Delta u_k = u_k'' + (m-1)\frac{\sigma' u_k'}{\sigma},\label{laplacian}\\ && \sqrt{\det \, g} = \sigma^{m-1}. \end{eqnarray} Throughout, $\sigma, G, u_k$ are functions of $r$ only; $\sigma'$, $u_k''$ etc.\ denote the derivatives in $r$. In the rest of this section, fixing a $p\in ]1,\infty[$, we collect some preliminary estimates for the $L^p$-norms of $u_k$, $\Delta u_k$ and $\nabla \nabla u_k$. First of all, neglecting the radial component in \eqref{hessian}, we have \begin{equation*} |\nabla \nabla u_k|^p \geq \Big|u_k' \frac{\sigma'}{\sigma}\Big|^p. \end{equation*} Hence, denoting by $\gamma_m := {\rm Vol}_{{\mathbf{can}}^{m-1}}(\mathbb{S}^{m-1})$ the area of the unit sphere, we deduce \begin{align*} \|\nabla \nabla u_k\|_{L^p} &= \gamma_m \bigg\{ \int_0^\infty \big|\nabla \nabla u_k\big|^p \sigma^{m-1}(r)\,{\rm d} r \bigg\}^{\frac{1}{p}}\nonumber\\ &\geq \gamma_m \bigg\{ \int_0^\infty \Big|\phi_k'\big(G(r)\big) \, G'(r) \,\Big(\frac{\sigma'}{\sigma}\Big)(r)\Big|^p \sigma^{m-1}(r)\,{\rm d} r \bigg\}^{\frac{1}{p}}\nonumber\\ &= \gamma_m \bigg\{ \int_0^\infty \Big|\phi_k'\big(G(r)\big)\Big|^p \Big|\frac{\sigma'}{\sigma}(r)\Big|^p \sigma^{(1-m)(p-2)}(r) G'(r)\,{\rm d} r \bigg\}^{\frac{1}{p}}. \end{align*} In the last line we used the identity $$G'(r)=\sigma^{1-m}(r).$$ A change of variable $r \mapsto s=G(r)$ yields that \begin{equation}\label{hessian, Lp} \|\nabla \nabla u_k\|_{L^p} \geq \gamma_m \bigg\{ \int_{\alpha_k}^{\beta_k} |\phi_k'(s)|^p \, \Big| \frac{\sigma'}{\sigma}\circ G^{-1}(s)\Big|^p \big[\sigma \circ G^{-1}(s)\big]^{(1-m)(p-2)} \,{\rm d} s\bigg\}^{\frac{1}{p}}.
\end{equation} For the Laplace--Beltrami operator, it is crucial to observe that \begin{equation}\label{laplacian, 2} \Delta u_k(r) = \phi_k'' \circ G(r) \, \sigma^{2-2m}(r), \end{equation} thanks to the defining property \eqref{green fn} of the Green's function. Thus, we have \begin{align}\label{laplacian, Lp} \|\Delta u_k\|_{L^p} = \gamma_m \bigg\{ \int_{\alpha_k}^{\beta_k} |\phi_k''(s)|^p \big[\sigma\circ G^{-1}(s)\big]^{2(p-1)(1-m)} \,{\rm d} s\bigg\}^{\frac{1}{p}}. \end{align} Finally, note that \begin{equation}\label{uk, Lp} \|u_k\|_{L^p} = \gamma_m \bigg\{ \int_{\alpha_k}^{\beta_k} |\phi_k(s)|^p \big[\sigma \circ G^{-1}(s)\big]^{2(m-1)}\,{\rm d} s \bigg\}^{\frac{1}{p}}. \end{equation} The key observation is the following: only $\sigma$ itself enters the upper bounds for $\|\Delta u_k\|_{L^p}$ and $\|u_k\|_{L^p}$, while $\sigma'$ is present in the lower bound for $\|\nabla \nabla u_k\|_{L^p}$; see \eqref{hessian, Lp}, \eqref{laplacian, Lp} and \eqref{uk, Lp}. As a consequence, by carefully choosing a highly oscillatory profile for $\sigma$, we may force $\|\nabla \nabla u_k\|_{L^p}$ to be much larger than $\|\Delta u_k\|_{L^p}$ and $\|u_k\|_{L^p}$, thus contradicting the Calder\'{o}n--Zygmund inequality. \section{Proof for $m=2$}\label{sec: m=2} In this section we prove Theorem \ref{thm} for $m=2$ by specifying the warping function $\sigma$. The proof is essentially an adaptation of the arguments for Theorem B in \cite{gp} by G\"{u}neysu--Pigola, which corresponds to the case $m=2$, $p=2$. For the sake of completeness, we shall explain in detail why the constructions in \cite{gp} work for all $p \in ]1,\infty[$. First, we set $\alpha_k = k$ and $\beta_k = k+1$ for each $k \in \mathbb{N}$.
Next, let us require the warping function $\sigma$ to satisfy the following: \begin{equation}\label{sigma, first properties} \begin{cases} \sigma^{(2k)}(0)=0 \text{ for each } k \in \mathbb{N};\\ \sigma'(0)=1;\\ \sigma(t)>0 \text{ for any } t>0;\\ t\leq \sigma(t) \leq t+1 \text{ for any }t \geq 1. \end{cases} \end{equation} When $m=2$, one has the simple comparison results (see p.377 in \cite{gp}): \begin{equation}\label{comparison 1} \log\Big(\frac{t+1}{2}\Big) \leq G(t) \leq \log t\qquad \text{ for all } t>1 \end{equation} and \begin{equation}\label{comparison 2} e^s \leq \sigma \circ G^{-1} (s) \leq 2e^s\qquad \text{ for all } s>0. \end{equation} Moreover, there exists a {\em universal} constant $\delta>0$ such that for all sufficiently large $k$, we can find $h=h(k)> k$ such that \begin{equation}\label{h def} [h,h+1] \subset \big[G^{-1}(k+\delta),\, G^{-1}(k+1-\delta)\big]. \end{equation} In addition, we choose the cut-off function $\phi_k$ in \eqref{uk} as follows: Fix some $\phi \in C^\infty_c (]0,1[)$ such that $\phi \equiv {\bf Id}$ on $[\delta, 1-\delta]$ and $\phi \leq 1$, and then set $$\phi_k(t) := \phi(t-k)$$ for each $k\in\mathbb{N}$. Here $\delta>0$ is the same constant as in \eqref{h def}. We shall fix $\phi$ once and for all; in particular, $\|\phi\|_{C^2([0,1])}$ is bounded by a universal constant. We can deduce from \eqref{laplacian, Lp}, \eqref{uk, Lp} and \eqref{comparison 2} the following bounds: \begin{align}\label{uk Lp upper bd} \|u_k\|_{L^p}^p \leq 2 (\gamma_2)^p e^{2k+2} \end{align} and \begin{align}\label{laplacian Lp upper bd} \|\Delta u_k\|_{L^p}^p \leq \frac{(\gamma_2)^p}{2(p-1)4^{p-1}}\|\phi''\|_{L^\infty([0,1])}^p e^{-2(p-1)k}. \end{align} So it remains to bound $\|\nabla \nabla u_k\|_{L^p}^p$ from below. For this purpose, we shall further specify $\sigma$.
Consider the cube \begin{equation*} Q_k := [k,k+1] \times [k,k+1]\qquad \text{ for each } k \in \mathbb{N}; \end{equation*} from the previous constructions, the graph of $\sigma$ is contained in $\bigcup_{k=0}^\infty Q_k$ (in fact, in the union of the upper-left corners of $Q_k$). For a certain sequence $\{n_k\} \subset \mathbb{N}$ increasing to $+\infty$ as $k$ grows, we take \begin{equation*} \epsilon_k := \frac{1}{2n_k}. \end{equation*} Define $\mathfrak{S}_k$ on $[k,k+1]$ by the ``sawtooth'' function on p.378 of \cite{gp}: \begin{equation*} \mathfrak{S}_k(t):= \begin{cases} &k+2j\epsilon_k + \frac{\epsilon_k+1}{\epsilon_k}(t- k -2j\epsilon_k)\\ &\qquad\qquad\qquad \text{ on } \big[k+2j\epsilon_k, k+(2j+1)\epsilon_k\big]\,\,\text{ for each } j \in \{0,1,\ldots, n_k-1\},\\ &k+(2j+1)\epsilon_k +1 + \frac{1-\epsilon_k}{\epsilon_k} \big(k+(2j+1)\epsilon_k -t\big)\\ &\qquad\qquad\qquad \text{ on } \big[k+(2j+1)\epsilon_k, k+2(j+1)\epsilon_k\big]\,\,\text{ for each } j \in \{0,1,\ldots, n_k-1\}. \end{cases} \end{equation*} Then, one defines $\sigma|[k,k+1]$ by smoothing out the corners of $\mathfrak{S}_k$. More precisely, for each $k \in \mathbb{N}$ we can take $\sigma \in C^\infty([k,k+1])$ such that $$\sigma = \mathfrak{S}_k \qquad \text{ on } [k,k+1] \sim \bigsqcup_{j=0}^{n_k} \Big[ k+2j\epsilon_k - \epsilon_k^{10},\, k+2j\epsilon_k + \epsilon_k^{10} \Big]$$ and that $\|\sigma\|_{C^3} \leq 2$ in each of the small intervals removed. The idea behind the construction of $\mathfrak{S}_k$ is clear: its graph (lying in the upper-left corner of the cube $Q_k$) is obtained by continuously concatenating $n_k$ copies of the following ``sawtooth unit'' of step-length $(2\epsilon_k)$ --- in the first $\epsilon_k$ it grows with constant slope $\nicefrac{(\epsilon_k+1)}{\epsilon_k}$, and in the second $\epsilon_k$ it decreases with constant slope $-\nicefrac{(1-\epsilon_k)}{\epsilon_k}$. In particular, on each half of every sawtooth unit, the magnitude of the gradient is at least $\nicefrac{(1-\epsilon_k)}{\epsilon_k} = 2n_k-1$, which is large.
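A direct reimplementation of $\mathfrak{S}_k$ is a convenient sanity check of the formula above (an illustrative sketch only; the smoothing of corners is omitted, and $n_k=5$ is an arbitrary choice):

```python
def sawtooth(t, k, n):
    """Piecewise-linear profile S_k on [k, k+1]; j runs over 0, ..., n-1."""
    eps = 1.0 / (2 * n)
    j = min(int((t - k) / (2 * eps)), n - 1)   # index of the sawtooth unit
    left = k + 2 * j * eps
    if t <= left + eps:                        # rising half, slope (eps+1)/eps
        return left + (eps + 1) / eps * (t - left)
    # falling half, slope -(1-eps)/eps
    return left + eps + 1 + (1 - eps) / eps * (left + eps - t)

k, n = 3, 5
eps = 1.0 / (2 * n)
endpoint_l = sawtooth(k, k, n)        # continuity at the left endpoint: = k
endpoint_r = sawtooth(k + 1, k, n)    # continuity at the right endpoint: = k + 1
first_peak = sawtooth(k + eps, k, n)  # top of the first unit: = k + eps + 1
```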
With the above choice of $\sigma$, we can continue the lower bound \eqref{hessian, Lp} for the Hessian of $u_k$ as follows. First, by the definition of $\phi_k$ and \eqref{comparison 2}, we have \begin{align*} \|\nabla\nabla u_k\|_{L^p}^p \geq (\gamma_2)^p 2^{-p}e^{-p(k+1)} \int_{k+\delta}^{k+1-\delta} \big|\sigma'\circ G^{-1}(s)\big|^p \,\big|\sigma\circ G^{-1}(s)\big|^{2-p}\,{\rm d} s. \end{align*} Considering separately $p\geq 2$ and $p<2$ and using again \eqref{comparison 2}, one deduces \begin{align*} \|\nabla\nabla u_k\|_{L^p}^p \geq \min\big\{1,2^{2-p}\big\} (\gamma_2)^p 2^{-p}e^{-p(k+1)} \int_{k+\delta}^{k+1-\delta} \big|\sigma'\circ G^{-1}(s)\big|^p\,{\rm d} s. \end{align*} For $m=2$ we have $G'(r)=\sigma^{-1}(r)$, hence \begin{equation*} (G^{-1})'(s) = \frac{1}{G'[G^{-1}(s)]} = {\sigma[G^{-1}(s)]}. \end{equation*} It follows that \begin{align*} \|\nabla\nabla u_k\|_{L^p}^p &\geq \min\big\{1,2^{2-p}\big\} (\gamma_2)^p 2^{-p}e^{-p(k+1)} \int_{k+\delta}^{k+1-\delta} \big|\sigma'\circ G^{-1}(s)\big|^p \frac{1}{\sigma \circ G^{-1}(s)} (G^{-1})'(s) \,{\rm d} s\\ &\geq \min\big\{1,2^{2-p}\big\} (\gamma_2)^p 2^{-p-1}e^{-p(k+1)}e^{-k-1+\delta} \int_{G^{-1}(k+\delta)}^{G^{-1}(k+1-\delta)} |\sigma'(r)|^p \,{\rm d} r. \end{align*} Here we have used \eqref{comparison 1} once more. Recall that the universal constant $\delta$ is chosen right beneath \eqref{comparison 2}. For $k$ sufficiently large, we have selected $h=h(k)> k$ in \eqref{h def} so that \begin{equation*} \|\nabla\nabla u_k\|_{L^p}^p \geq \min\big\{1,2^{2-p}\big\} (\gamma_2)^p 2^{-p-1}e^{-(p+1)(k+1)+\delta} \int_{h}^{h+1} |\sigma'(r)|^p \,{\rm d} r. \end{equation*} Thanks to the choice of $\sigma$, on $[h,h+1]$ the magnitude of the gradient $|\sigma'|$ is at least $(2n_k-1)$ on more than $n_k$ intervals, each longer than $(\epsilon_k - \epsilon_k^{10})$, where $2n_k \epsilon_k =1$.
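The last step can also be checked numerically on the unsmoothed profile (an illustrative sketch: $\mathfrak{S}_k$ stands in for $\sigma$, the $\epsilon_k^{10}$ smoothing correction is dropped, and $p=2$, $n_k=50$ are arbitrary choices):

```python
# Integrate |S_k'|^p over one period [h, h+1] of the unsmoothed sawtooth:
# slope magnitudes are (1+eps)/eps on rising halves and (1-eps)/eps on
# falling halves, each half having length eps, with n_k units in total.
n_k, p = 50, 2.0
eps = 1.0 / (2 * n_k)
integral = n_k * eps * (((1 + eps) / eps) ** p + ((1 - eps) / eps) ** p)
# the cruder bound used in the text: n_k intervals of length ~eps on which
# |sigma'| >= 2 n_k - 1
claimed = n_k * eps * (2 * n_k - 1) ** p
```

The exact integral indeed dominates the cruder lower bound, both being of order $\epsilon_k^{-p}$.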
Thus, \begin{align}\label{lower bound for hessian} \|\nabla\nabla u_k\|_{L^p}^p &\geq \min\big\{1,2^{2-p}\big\} (\gamma_2)^p 4^{-p}e^{-(p+1)(k+1)+\delta} (2n_k-1)^p n_k (\epsilon_k-\epsilon_k^{10}) \nonumber\\ &\geq \min\big\{1,2^{2-p}\big\} (\gamma_2)^p 2^{-1-3p} e^{-(p+1)(k+1)+\delta} (1-\epsilon_k^{9}) \epsilon_k^{-p}. \end{align} We may now derive the contradiction by comparing \eqref{lower bound for hessian} with \eqref{uk Lp upper bd} and \eqref{laplacian Lp upper bd}. Note that \begin{equation*} \|u_k\|_{L^p}^p \lesssim e^{2k+2}\qquad \text{ and } \qquad \|\Delta u_k\|^p_{L^p} \lesssim e^{-2(p-1)k} \lesssim 1, \end{equation*} where the constants in $\lesssim$ depend on $p$, $C_1$, $C_2$ and $\|\phi''\|^p_{L^\infty([0,1])}$. On the other hand, \begin{equation*} \|\nabla\nabla u_k\|_{L^p}^p \gtrsim e^{-(p+1)k} (1-\epsilon_k^{9})\epsilon_k^{-p}. \end{equation*} By further requiring for large $k\in\mathbb{N}$ that $\epsilon_k \leq 100^{-1}$, we get $$\|\nabla\nabla u_k\|_{L^p}^p \gtrsim e^{-(p+1)k}\epsilon_k^{-p},$$ where the constant in $\gtrsim$ depends only on $p$. Therefore, we can achieve \eqref{E} by choosing, {\it e.g.}, $$\epsilon_k := Ce^{-e^{k}}$$ for a suitable constant $C=C(p,C_1,C_2,\|\phi\|_{C^2([0,1])})$. Thus, choosing $k$ sufficiently large, we complete the proof of Theorem \ref{thm} for $m=2$. \section{Proof for $m \geq 3$}\label{sec: mD} In this section, we prove Theorem \ref{thm} for arbitrary $m \geq 3$. The new feature is that the cubes $Q_k$ in $\S \ref{sec: m=2}$ are not available, since we cannot choose the warping function to satisfy $t\leq \sigma(t) \leq t+1$ for all $t \geq 1$. Instead, we shall choose an infinite sparse family of cubes $\{Q'_k\}$ sandwiched between the graphs of $t \mapsto t^{\frac{1}{m-1}}$ and $t \mapsto (t+1)^{\frac{1}{m-1}}$.
Necessarily the size of the $Q'_k$ will shrink to zero as $k \nearrow \infty$; nevertheless, we can prescribe the rate of oscillation of $\sigma$ to be much larger than the shrinking rate of $Q'_k$. This is enough to conclude Theorem \ref{thm} for $m \geq 3$. Now we start the proof. First of all, let us observe that the estimates \eqref{hessian, Lp}, \eqref{laplacian, Lp} and \eqref{uk, Lp} are valid for all $m \in \mathbb{Z}_{\geq 2}$, and that the radial Green's function again verifies \begin{equation*} G(r) = \int_1^r \sigma^{1-m}(t)\,{\rm d} t. \end{equation*} We shall pick a $\sigma$ satisfying $G(+\infty)=+\infty$, which ensures the parabolicity of $(\mathcal{M},g)$. For brevity we write \begin{equation*} \alpha \equiv \alpha_m := \frac{1}{m-1}. \end{equation*} Then, we choose $\sigma$ to satisfy a set of properties similar to those in \eqref{sigma, first properties}:\begin{equation}\label{sigma, md} \begin{cases} \sigma^{(2k)}(0)=0 \text{ for each } k \in \mathbb{N};\\ \sigma'(0)=1;\\ \sigma(t)>0 \text{ for any } t>0;\\ t^{\alpha}\leq \sigma(t) \leq (t+1)^\alpha \text{ for any }t \geq 1. \end{cases} \end{equation} The motivation is to require the growth of $\sigma$ to be comparable to $t^\alpha$ without introducing a singularity at the origin. This can be achieved, {\it e.g.}, by gluing $\sigma|[1,\infty[$ to $\sinh$ or $\sin$ near $r=0$. Notice that \eqref{comparison 1} and \eqref{comparison 2} in the $m=2$ case admit the following analogues: \begin{equation}\label{comparison 1'} \log\Big(\frac{t+1}{2}\Big) \leq G(t) \leq \log t\qquad \text{ for all } t>1 \end{equation} and \begin{equation}\label{comparison 2'} e^s \leq G^{-1}(s) \leq 2e^s-1 \qquad \text{ for all } s>0. \end{equation} Applying to \eqref{comparison 2'} the last property in \eqref{sigma, md}, we may infer: \begin{equation}\label{comparison 3'} e^{\alpha s} \leq \sigma \circ G^{-1}(s) \leq 2^\alpha e^{\alpha s}\qquad \text{ for all } s>0.
\end{equation} In addition, note that \eqref{h def} still holds true. In fact, there exists a universal constant $\delta>0$ such that for all $k \geq 1$, we can find $h=h(k)> k$ satisfying \begin{equation}\label{h'} [h,h+1] \subset \big[G^{-1}(k+\delta),\, G^{-1}(k+1-\delta)\big]. \end{equation} For example, $\delta := 4^{-1}(1-\log 2)$ ensures that the length of the interval on the right-hand side of \eqref{h'} is greater than $2$. Let the choices for $\phi_k, \alpha_k, \beta_k$ and $u_k$ be the same as in the $m=2$ case. It follows from \eqref{laplacian, Lp} and \eqref{uk, Lp} that \begin{eqnarray} && \|u_k\|_{L^p}^p \leq 4(\gamma_m)^p e^{2(k+1)},\label{uk, Lp upper bd, mD}\\ &&\|\Delta u_k\|_{L^p}^p \leq (\gamma_m)^p \|\phi''\|^p_{L^\infty([0,1])} e^{-2(p-1)(k+1)},\label{laplacian uk, Lp upper bd, mD} \end{eqnarray} which are similar to \eqref{uk Lp upper bd} and \eqref{laplacian Lp upper bd} for $m=2$. Now we shall specify the warping function. Again, the idea is to introduce high-frequency oscillations to $\sigma$. In view of the final line in \eqref{sigma, md}, the graph of $\sigma|[1,\infty[$ lies in the strip \begin{equation*} S:= \Big\{ (t,y) \in \mathbb{R}^2: t \geq 1, \, t^\alpha \leq y \leq (t+1)^\alpha \Big\}. \end{equation*} Let us denote by \begin{equation*} S_k := S \cap \big\{ k\leq t \leq k+1\big\}\qquad \text{ for each } k \in\mathbb{N}. \end{equation*} Note that the height of the window $S_k$ shrinks to $0$ as $k \nearrow \infty$. We introduce the parameter: \begin{equation} \eta_k := \min_{t \in [k,k+1]} \frac{(t+1)^\alpha - t^\alpha}{10}. \end{equation} As discussed above, $\eta_k \searrow 0$ as $k \nearrow \infty$. Moreover, it is easy to see that one can place a cube $Q'_k$, whose sides are parallel to the Cartesian axes and have length $\eta_k$, inside the window $S_k$. For $k \in \mathbb{N}$ fixed momentarily, let us define $\sigma$ on part of $[k,k+1]$. 
More precisely, we shall require that the graph of $\sigma$ over the horizontal projection of the cube $Q_k'$ is contained in $Q_k'$. For this purpose, we can carry out a construction slightly simpler than that in \cite{gp} for the $m=2$ case. Indeed, writing $[z_k, z_k+\eta_k]$ for the horizontal projection of $Q_k'$, let $\sigma|[z_k, z_k+\eta_k]$ be the juxtaposition of $\ell_k$ sawtooth functions of step-length $$\delta_k := \frac{\eta_k}{2\ell_k}.$$ Each sawtooth function (modulo an obvious translation) increases from $0$ to $\eta_k$ in step-length $\delta_k$, and then decreases from $\eta_k$ to $0$ in another step-length $\delta_k$. Finally, we smooth out the corners by modifying $\sigma$ on $(2\ell_k)$ intervals of length $\delta_k^{10}$. In this way we complete the definition of $\sigma$ inside $Q_k'$; it is smooth and has gradient $|\sigma'| = \delta_k^{-1} = \nicefrac{2\ell_k}{\eta_k}$ for a large portion of the domain, {\it i.e.,} the horizontal projection of $Q'_k$. We shall specify the small parameter $\delta_k$ and the large parameter $\ell_k$ later in the proof. In passing let us note that, roughly speaking, the parameters $(\ell_k, \delta_k, \eta_k)$ play the role of $(n_k, \epsilon_k, 1)$ in $\S \ref{sec: m=2}$. In the above paragraph we defined $\sigma$ in $Q_k'$. Now let us extend it globally. For this purpose, consider a sequence $\{k_j\}_{j=1}^\infty$ which tends to $\infty$ as $j \nearrow \infty$. Let $h_j=h(k_j)$ be defined as in \eqref{h'}. As the Green's function $G$ is monotonically increasing, in view of \eqref{h'} one can choose $\{k_j\}$ so that the cubes $Q_{h_j}'$ are disjoint. Let $\sigma$ be defined in each $Q_{h_j}'$ as above. Outside these cubes we take $\sigma$ to be any smooth function satisfying the properties in \eqref{sigma, md}, and by a simple glueing argument we can obtain $\sigma \in C^\infty([0,\infty[)$. For notational convenience, in the sequel let us relabel $k=k_j$ and $Q'_k = Q'_{h_j} \equiv Q'_{h(k_j)}$. It remains to bound $\|\nabla\nabla u_k\|_{L^p}$ from below.
First of all, by \eqref{hessian, Lp}, the choice of $\phi_k$ and the upper bound in \eqref{comparison 3'}, we have \begin{align*} \|\nabla \nabla u_k\|_{L^p}^p \geq (\gamma_m)^p 2^{-\alpha p} e^{-\alpha p (k+1)} \int_{k+\delta}^{k+1-\delta} \big|\sigma' \circ G^{-1}(s)\big|^p \,\big|\sigma \circ G^{-1}(s)\big|^{(1-m)(p-2)}\,{\rm d} s. \end{align*} Utilising once more \eqref{comparison 3'}, we get \begin{align*} \|\nabla \nabla u_k\|_{L^p}^p \geq (\gamma_m)^p 2^{-\alpha (p-1)} e^{-(k+1)(\alpha p + p -2)} \int_{k+\delta}^{k+1-\delta} \big|\sigma' \circ G^{-1}(s)\big|^p\,{\rm d} s. \end{align*} For $\dim\,\mathcal{M}=m$ there holds $G'(r) = \sigma^{1-m}(r)$, so \begin{equation*} (G^{-1})'(s) = \frac{1}{G'[G^{-1}(s)]} = \sigma^{m-1}[G^{-1}(s)]. \end{equation*} It yields that \begin{align*} \|\nabla \nabla u_k\|_{L^p}^p \geq (\gamma_m)^p 2^{-\alpha (p-1)} e^{-(k+1)(\alpha p + p -2)} \int_{k+\delta}^{k+1-\delta} \big|\sigma' \circ G^{-1}(s)\big|^p \big|\sigma^{1-m}\circ G^{-1}(s)\big| (G^{-1})'(s)\,{\rm d} s. \end{align*} Thus, changing the variables $s \mapsto r=G^{-1}(s)$ and invoking \eqref{h'}, we arrive at \begin{align*} \|\nabla \nabla u_k\|_{L^p}^p \geq (\gamma_m)^p 2^{-\alpha (p-1)} e^{-(k+1)(\alpha p + p -2)} \int_h^{h+1} |\sigma'(r)|^p \sigma^{1-m}(r) \,{\rm d} r. \end{align*} By \eqref{comparison 3'}, one further gets \begin{align*} \|\nabla \nabla u_k\|_{L^p}^p \geq (\gamma_m)^p 2^{-\alpha (p-1)} e^{-(k+1)(\alpha p + p -2)-k-\delta}\int_h^{h+1} |\sigma'(r)|^p \,{\rm d} r, \end{align*} where $h=h(k)>k$ is chosen as in \eqref{h'}. To continue, it is crucial to note that in some subinterval of $[h,h+1]$ of length $\eta_k$, $\sigma$ is highly oscillatory. This is due to our choice of $Q'_k$ and the definition of $\sigma$ thereon. 
More precisely, we can deduce the bound \begin{align}\label{penultimate} \|\nabla \nabla u_k\|_{L^p}^p &\geq (\gamma_m)^p 2^{-\alpha (p-1)} e^{-(k+1)(\alpha p + p -2)-k-\delta} (\eta_k - \delta_k^9) (\delta_k)^{-p}, \end{align} where $\delta>0$ is universal as before. Here, recall the relation $\eta_k = 2\delta_k \ell_k$, where $\eta_k \searrow 0$ as $k \nearrow \infty$ and $\ell_k \nearrow \infty$ is yet to be determined. We shall select some $\delta_k$ that shrinks to $0$ much more rapidly than $\eta_k \sim (k+1)^\alpha - k^\alpha = (k+1)^{\frac{1}{m-1}} - k^{\frac{1}{m-1}}$ does. Indeed, let us require that \begin{equation*} \begin{cases} \delta_k^9 \leq \frac{\eta_k}{2},\\ \delta_k \leq \Big( \frac{\eta_k}{2} e^{-e^k}\Big)^{\frac{1}{p}}. \end{cases} \end{equation*} The above two conditions give us \begin{equation}\label{double exponential} (\eta_k - \delta_k^9) (\delta_k)^{-p} \geq e^{e^{k}}; \end{equation} while the other term on the right-hand side of \eqref{penultimate} is \begin{equation*} (\gamma_m)^p 2^{-\alpha (p-1)} e^{-(k+1)(\alpha p + p -2)-k-\delta} = C_3 e^{- C_4k}, \end{equation*} with $C_3$, $C_4$ being positive constants depending only on $m$ and $p$, and with $\delta$ being a fixed universal constant as before. To conclude the proof, we can deduce from \eqref{double exponential} and \eqref{penultimate} that for any sufficiently large $k \in \mathbb{N}$, there holds \begin{equation*} \|\nabla \nabla u_k\|^p_{L^p} \gtrsim e^{k^{1000}} \end{equation*} with the constants involved in $\gtrsim$ depending on $m$ and $p$. On the other hand, in \eqref{uk, Lp upper bd, mD} and \eqref{laplacian uk, Lp upper bd, mD} we have already proved that \begin{equation*} \|u_k\|^p_{L^p} \lesssim e^{2k},\qquad \|\Delta u_k\|^p_{L^p} \lesssim e^{-2(p-1)k}; \end{equation*} here the constants in $\lesssim$ depend on $m$, $p$ and the $C^2$-norm of $\phi$. Finally, the choice of a large $k$ gives the contradiction to \eqref{cz}, hence the proof of Theorem \ref{thm} for $m \geq 3$ is complete.
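The two smallness conditions imposed on $\delta_k$ above can be verified numerically (an illustrative sketch with $m=3$, $p=2$, $k=3$ as arbitrary choices; since $t \mapsto (t+1)^\alpha - t^\alpha$ is decreasing, the minimum defining $\eta_k$ is attained at the right endpoint):

```python
import math

m, p, k = 3, 2.0, 3
alpha = 1.0 / (m - 1)
# eta_k: a tenth of the minimal height of the strip window over [k, k+1]
eta = ((k + 2) ** alpha - (k + 1) ** alpha) / 10
# the largest delta_k meeting both smallness conditions from the text
delta = min((eta / 2) ** (1 / 9), (eta / 2 * math.exp(-math.exp(k))) ** (1 / p))
lower_bound = (eta - delta ** 9) * delta ** (-p)   # should dominate e^{e^k}
```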
\section{Concluding Remark} The open manifold $(\mathcal{M},g)$ constructed in this note has no bound on the norm of the Riemann curvature, and its injectivity radius degenerates. On the other hand, if the Ricci curvature of $(\mathcal{M},g)$ is bounded from above and below, and if the injectivity radius is strictly positive, then the $L^p$-Calder\'{o}n--Zygmund estimate is valid on $(\mathcal{M},g)$ for any $1<p<\infty$ ({\it cf.} Theorem C of \cite{gp}). It is interesting to seek the minimal geometric boundedness assumptions on $(\mathcal{M},g)$ that ensure the validity of the $L^p$-Calder\'{o}n--Zygmund estimate. \bigskip \noindent {\bf Acknowledgement}. This work was done during Siran Li's stay as a CRM--ISM postdoctoral fellow at the Centre de Recherches Math\'{e}matiques, Universit\'{e} de Montr\'{e}al and the Institut des Sciences Math\'{e}matiques. The author would like to thank these institutions for their hospitality. Siran Li also thanks Jianchun Chu for insightful discussions on problems in global analysis.
\section{Introduction} \IEEEPARstart{T}{erahertz} (THz) band communications promise to exploit the large bandwidths available at THz frequencies \cite{Akyildiz6882305}, so as to fulfill the high data rate demands of future generations of wireless communications. Millimeter wave (mmWave) systems \cite{Rangan6732923} have been extensively explored in recent years over the frequency range of 30 to 300 gigahertz (GHz). The highest attainable bandwidth at mmWave is, however, 10 GHz, and so a physical layer efficiency of at least 100 bits/sec/Hz is required to achieve a terabit (Tb)/sec data rate. By contrast, since the available bandwidth between 0.3 THz and 10 THz (i.e., at the THz range) can reach hundreds of GHz, a target Tb/sec data rate can be achieved with minimal physical layer efficiency-enhancement techniques. Over the past few years, affordable technologies have enabled widespread use of mmWave systems. For example, mmWave-enabled IEEE 802.11ad (WiFi) networks (WiGig), high-definition (HD) video applications, and single-chip radar integrated circuits have emerged. Furthermore, the fine resolutions of mmWave radars are favorable for detecting small movements and objects, which is useful for several vehicular, control, and safety applications, such as automatic braking sensors, lane intrusion, and blind spot detection. In particular, one of the missions of the IEEE 802.15 wireless personal area networks (WPAN) study group is to explore high-frequency ranges, so as to address a variety of next-generation wireless communication needs by supporting multi-gigabit (Gb)/sec and Tb/sec links. Recently, major advancements in electronic, photonic, and plasmonic technologies have paved the way for several applications at the THz band. Such advances include indoor wireless communications, vehicular communications, drone-to-drone communications, nano-communications, and bio-medical applications.
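The bandwidth arithmetic above is worth making explicit (a back-of-the-envelope sketch; the 100 GHz THz channel is an assumed round figure within the 0.3--10 THz range):

```python
TARGET_RATE_BPS = 1e12               # 1 Tb/sec target

mmwave_bw_hz = 10e9                  # ~10 GHz: best case at mmWave
mmwave_eff = TARGET_RATE_BPS / mmwave_bw_hz   # required bits/sec/Hz

thz_bw_hz = 100e9                    # 100 GHz: an assumed round figure at THz
thz_eff = TARGET_RATE_BPS / thz_bw_hz
```

The required physical layer efficiency thus drops from 100 bits/sec/Hz at mmWave to 10 bits/sec/Hz at THz, an order-of-magnitude relaxation.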
Furthermore, device-to-device (D2D) communications at the THz band are expected to play a significant role in future cellular communications that promise ultra-low latency. THz photonics further have the potential to be used in many non-communication applications, such as spectroscopy of small bio-molecules and quality control of pharmaceutical products. Figure \ref{f:spectrum} illustrates the spectrum decomposition at several bands, and lists several mmWave and THz communications use cases. \begin{figure}[t] \centering \includegraphics[width=3.5in]{THzSpectrum2.png} \caption{Spectrum decomposition and high-frequency applications.} \label{f:spectrum} \end{figure} Despite the promising features of THz communications systems, their high-frequency operation imposes several implementation hurdles, both at the signal generation and at the signal detection levels. Towards addressing such implementation challenges, a popular photonic radiation method generates signals by employing quantum cascade lasers (QCLs) \cite[and references therein]{akyildiz2016realizing}, which are semiconductor-based pumped lasers consisting of unipolar intersubband transitions. Alternatively, signal generation and detection can be enabled by deploying compact electronics which utilize a single high electron mobility transistor (HEMT), based on semiconductors such as gallium nitride (GaN) and gallium arsenide (GaAs) \cite[and references therein]{akyildiz2016realizing}. Although plasma waves can be excited in the channel of a HEMT, the corresponding generators and detectors perform poorly at room temperature, as plasma waves tend to be overdamped. Recently, graphene has been celebrated as a strong candidate for enabling THz communications \cite{ju2011graphene,hafez2018extremely}. The unique electrical properties of graphene, such as high electron mobility, electrical tunability, and configurability, allow supporting high-frequency signals.
Graphene-based antennas enable the propagation of surface plasmonic polariton (SPP) waves, which are confined electromagnetic (EM) waves that are coupled to electric charges at the interface between a metal and a dielectric. In fact, SPP waves propagate at a much lower speed than regular EM waves, and hence possess a characteristic wavelength $\lambda_{\mathrm{SPP}}$ that is much smaller than the EM wavelength $\lambda$. More compact array designs are therefore enabled, which allows integrating a massive number of antennas in a very small footprint. From a coverage perspective, additional crucial challenges still need to be addressed, particularly those related to high propagation losses and power limitations faced at THz frequencies, which result in short communication ranges. To overcome such limitations, either reflect arrays or ultra-massive multiple input multiple output (UM-MIMO) antenna systems can be deployed as a means to extend the coverage range \cite{akyildiz2016realizing}. UM-MIMO is, however, a more convenient and generic solution than reflect arrays, since the latter is rather tailored for non line-of-sight (NLoS) environments. UM-MIMO, which is the focus of this paper, offers the valuable advantages of increasing the communication range through beamforming, and improving the achievable data-rate through spatial multiplexing (SMX). The main goal of this paper, alongside reviewing the state-of-the-art in high-frequency UM-MIMO solutions, is to establish a clear link between transceiver design, channel characteristics, and system performance, as dictated by the entailed signal processing constraints at THz. To the best of our knowledge, the literature lacks a holistic work of this kind. Towards this end, we start by introducing the array of sub-arrays (AoSA) configuration in Sec. \ref{sec:circuit}, and we highlight the particular advantages for adopting graphene utilization. We then detail various channel modeling approaches in Sec.
\ref{sec:chmodel}, so as to best describe the relation between channel characteristics and system performance. We further define open challenges and potential research directions for THz UM-MIMO in Sec. \ref{sec:prop_sol}, which include efficient signal modulation, waveform design, and distance-aware resource allocation. Finally, based on previously discussed constraints, we recommend specific THz UM-MIMO use cases and conclude in sections \ref{sec:prop_sol} and \ref{sec:conclusion}, respectively. \section{Array of Sub-Arrays Design} \label{sec:circuit} Massive antenna configurations are constructed as large arrays of antenna elements (AEs). Since inter-AE separations are typically in the order of $\lambda$, operating at high frequencies results in dense packaging. This is further emphasized with plasmonic antennas, where separations are in the order of $\lambda_{\mathrm{SPP}}$ ($\lambda_{\mathrm{SPP}}\!=\!\lambda/15$ for graphene). However, such compactness in design comes at the cost of limiting the beamforming and multiplexing gains of UM-MIMO, due to inadequate spatial sampling, and increasing the complexity of antenna array control \cite{zakrajsek2017design}. As a solution, large antenna arrays can be divided into multiple sub-arrays (SAs) of smaller size, in an AoSA architecture. Multiple AEs in a SA improve the beamforming gain and decrease the required transmission power for each element. Hence, each SA offers the array gain, and the collaboration between SAs provides the SMX gain. This configuration results in a large number of directed independent paths, each of which can be used to carry independent information, which results in high capacity. AoSA architectures have been previously proposed for mmWave systems in an indoor environment \cite{Torkildson6042312}. The effect of AoSA spacing, the alignment between the transmitting and receiving arrays, and the position of line of sight (LOS) blockage were studied.
It was concluded that SMX gains are more important than beamforming gains for indoor 60 GHz links with typical consumer electronics and computing devices. This is further justified by the fact that beamforming comes at the cost of increased system complexity, where the transmitter requires having channel state information to align the beam to the receiver. The energy and spectral efficiencies in an AoSA architecture are dictated by the beamforming strategy. Hence, hybrid beamforming is typically sought to reduce hardware costs and power consumption, in which operations get divided between the analog and digital domains. Hybrid AoSA architectures at THz are detailed in \cite{7786122}, where a two-step analog beamforming and user grouping mechanism is illustrated, that divides users according to their angle of departure. In particular, users having the same angle sector are first allocated to the same group. Then, each SA carries out beamforming by searching for each user group in the pre-scanned sector. The beamforming angle is selected such that the overall received signal power for one user group over all subcarriers is maximized. After that, digital beamforming is performed on each subcarrier at the baseband. Many more factors have to be taken into consideration in AoSA designs. For example, larger arrays result in larger feeding losses. Furthermore, the mutual coupling effects between adjacent AEs degrade performance. Mutual coupling depends on the array configuration and the operating frequency. It can be accounted for by multiplicative factors that affect the SA steering vectors, and it can be neglected by setting the inter-AE separation to be larger than $\lambda_{\mathrm{SPP}}$. While a mmWave system requires a footprint of a few square centimeters for a few tens of antennas, a massive number of antennas can be embedded at THz in a few square millimeters.
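The ideal figures behind such compact graphene designs can be sketched in a few lines (an illustrative sanity check; it assumes the textbook $10\log_{10}N$ array-gain rule and the $\lambda_{\mathrm{SPP}}=\lambda/15$ factor quoted earlier for graphene):

```python
import math

# Ideal figures for one graphene sub-array at 1 THz (sketch; assumes the
# textbook 10*log10(N) array-gain rule and lambda_SPP = lambda/15).
c = 3e8                                   # speed of light, m/s
f = 1e12                                  # operating frequency: 1 THz
lam = c / f                               # free-space wavelength: 0.3 mm
lam_spp = lam / 15                        # plasmonic wavelength: 20 micrometers

ae_per_sa = 8 * 8                         # one 8x8 sub-array
sa_gain_db = 10 * math.log10(ae_per_sa)   # ideal array gain of one SA

print(round(lam * 1e3, 2), round(lam_spp * 1e6, 1), round(sa_gain_db, 1))
# 0.3 (mm), 20.0 (micrometers), 18.1 (dB)
```

The ideal $8\times8$ gain of roughly 18 dB is consistent with the reported graphene AoSA results cited in this section.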
In particular, an 18 dB array gain is achieved at $\unit[1]{THz}$ \cite{Han8417893} with 16 graphene SAs, each of which comprises $8\times8$ AEs over a footprint of $\unit[0.4]{mm^2}$. \section{Channel Characteristics and Modeling} \label{sec:chmodel} The performance of THz UM-MIMO systems is primarily affected by the channel conditions and the corresponding accuracy in channel state information. Capturing important channel parameters, such as path gain, delay spread, and angular spread, allows for efficient exploitation of important channel characteristics, such as spatial degrees of freedom and capacity. The spatial degrees of freedom represent the maximum SMX gain that a UM-MIMO system can support, which is directly linked to channel capacity. Accurate channel models are thus a prerequisite for efficient utilization of the THz band. Such models should take into account the impact of both spreading and molecular absorption losses. Furthermore, line-of-sight (LoS), NLoS, reflected, scattered, and diffracted paths should be considered, and static and time-variant environments should be treated separately. In what follows, we review several channel modeling approaches and detail the peculiar characteristics of the THz channel. THz channel modeling approaches are deterministic, statistical, or hybrid \cite{Han8387210}. Deterministic channel modeling depends on site geometry, and is often achieved via ray tracing (RT) techniques that are capable of handling site-specific structures. Applying RT to every channel path, however, increases system complexity. As a solution, point-to-point RT can be first used to capture losses between virtual points at the transmitter and the receiver, and the resultant model can then be mapped to other antenna pairs, which reduces the computational complexity. As for statistical modeling, it is either matrix-based or reference-antenna-based.
In a matrix-based model, each independent sub-channel is represented by a complex Gaussian variable. On the other hand, reference-antenna-based models assume single-input single-output statistical propagation, for two reference antennas at the transmitter and the receiver, with array steering vectors. Finally, hybrid channel modeling combines the advantages of both deterministic and statistical approaches. The dominant paths can be individually captured by the deterministic method, while other paths can be statistically generated. This captures the spatial-temporal properties while allowing smooth time evolution and avoiding channel discontinuity. \begin{figure}[t] \centering \includegraphics[width=3.58in]{PathLoss_font12.eps} \caption{Path loss as a function of frequency and communication ranges.} \label{f:channel_1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.45in]{cond_nb} \caption{Channel condition number as a function of $\Delta$ and $D$ for $f = \unit[1]{THz}$ and $d = \unit[1]{m}$.} \label{f:channel_2} \end{figure} For all these approaches, inter-stream channel correlation remains a significant challenge that needs to be addressed. While the Kronecker model is typically used to account for the correlation between sub-channels, this model leads to inconsistent measurements when the size of antenna arrays is large. This is because it assumes correlation to be separable, with a resultant correlation matrix that is a product of two matrices at the transmitter and the receiver. Virtual channel representations can be developed instead, that account for the joint correlation between the transmitter and the receiver. Figure \ref{f:channel_1} illustrates a heatmap plot of LoS path loss due to water vapor molecules, between $\unit[0.1]{THz}$ and $\unit[10]{THz}$, over a distance range of $\unit[30]{m}$. It can be noted that the graph is dominated by spikes that represent molecular absorption losses.
These losses originate due to excited molecule vibrations at specific resonant frequencies within the THz band. As a result, the spectrum gets divided into smaller distance-dependent windows, where certain spikes are only significant at specific distances. This means that variations in the communication range will not only affect path loss, but also the available transmission bandwidths. Furthermore, the total available bandwidth reduces as frequency increases. The absorption coefficient for a volumetric density depends on system temperature, system pressure, and absorption cross section \cite{Jornet5995306}. All parameters can be obtained from the high-resolution transmission molecular absorption database (HITRAN), as summarized in table \ref{table:para}. Moreover, molecular absorptions are followed by coherent reradiations that can be lumped in a high-power absorption noise factor. The resultant noise is thus dominated by the channel induced component (graphene-based electronic devices are low-noise), and it is colored over frequency. A three-dimensional end-to-end RT-based channel model for THz UM-MIMO AoSA architectures was proposed in \cite{Han8417893}, for graphene-based plasmonic nano-antenna arrays, where the corresponding path gains, array factors, and achievable capacities were studied. Due to large reflection losses at THz, the channel is dominated by LoS and NLoS paths, while scattered and refracted rays can be neglected. Furthermore, the channel tends to be sparse with beamforming and ill-conditioned with SMX. Nevertheless, achieving good multiplexing gains in high-frequency point-to-point LoS environments is feasible when antenna spacings are much larger than the operating wavelength. The largest number of spatial degrees of freedom in a LoS environment at high frequencies is achieved via sparse antenna arrays, that generate spatially uncorrelated channel matrices, resulting in sparse multipath environments. 
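The qualitative behavior summarized in Figs. \ref{f:channel_1} and \ref{f:channel_2} can be reproduced in miniature (an illustrative sketch, not the RT model of \cite{Han8417893}: the absorption coefficient is a placeholder, and the channel is a toy two-SA LoS link with unit-amplitude paths):

```python
import cmath, math

C = 3e8  # speed of light, m/s

# Spreading loss times molecular absorption: PL = (4*pi*f*d/c)^2 * exp(k*d).
# k_abs is a placeholder; real coefficients come from the HITRAN database.
def path_loss_db(f, d, k_abs):
    spread = (4 * math.pi * f * d / C) ** 2
    return 10 * math.log10(spread * math.exp(k_abs * d))

# Condition number of a toy 2x2 LoS channel: two TX and two RX sub-arrays,
# separated by distance d, with same-side SA spacing delta.
def cond_number(delta, d=1.0, f=1e12):
    k = 2 * math.pi * f / C
    d_cross = math.sqrt(d**2 + delta**2)          # diagonal TX-RX path
    a, b = cmath.exp(-1j * k * d), cmath.exp(-1j * k * d_cross)
    s = sorted([abs(a + b), abs(a - b)])          # singular values of H
    return s[1] / max(s[0], 1e-12)

print(round(path_loss_db(1e12, 1.0, 0.0), 1))     # ~92.4 dB at 1 THz, 1 m
delta_opt = math.sqrt(3e-4 * 1.0 / 2)             # spacing with orthogonal paths
print(cond_number(delta_opt) < 1.1, cond_number(delta_opt / 10) > 10)
```

Shrinking $\Delta$ by an order of magnitude pushes the toy channel from near-orthogonal paths into the ill-conditioned regime, mirroring the spacing dependence in Fig. \ref{f:channel_2}.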
Figure \ref{f:channel_2} shows a plot of the THz channel condition number, which is the ratio of the largest to smallest singular value of a channel matrix. Smaller condition numbers indicate better conditioned channels. Two regions of operation can be noted: region 1 with relatively large $\Delta$ values (smooth part of Fig. \ref{f:channel_2}), and region 2 with relatively small $\Delta$ values that results in ill-conditioned channels. Curve dips represent orthogonality of channel paths, or equality of singular values, which guarantees optimal AoSA designs. Building on these observations of channel characteristics, we next highlight several signal processing open research directions for THz UM-MIMO. \begin{table*}[t] \caption{Simulation Parameters}\label{table:para} \centering \begin{tabular}{|| c | c ||} \hline Parameter & Value \\ \hline\hline System temperature & $\unit[396]{K}$ \\ \hline Reference temperature & $\unit[296]{K}$ \\ \hline Temperature at standard pressure & $\unit[273.15]{K}$ \\ \hline Reference pressure & $\unit[1]{atm}$ \\ \hline System pressure & $\unit[0.1]{atm}$ \\ \hline Mixing ratio of $(i,g)$ & $\unit[0.05]{\%}$ \\ \hline Resonant frequency of $(i,g)$ at reference pressure & $\unit[(0 \sim 276.45)]{Hz}$ \\ \hline Temperature broadening coefficient & $(-0.16 \sim 0.83)$ \\ \hline Linear pressure shift of $(i,g)$ & $\unit[(-0.0409 \sim 0.0251)]{Hz}$ \\ \hline Line intensity & $\unit[(9.98\times 10^{-36} \sim 2.66\times 10^{-18})]{Hz\ m^2/molecule}$ \\ \hline Broadening coefficient of air & $\unit[(0.0023 \sim 0.1117)]{Hz}$ \\ \hline Broadening coefficient of $(i,g)$ & $\unit[(0.052 \sim 0.916)]{Hz}$ \\ \hline Planck constant & $\unit[6.6262 \times 10^{-34}]{J s}$ \\ \hline Boltzmann constant & $\unit[1.3806 \times 10^{-23}]{J/K}$ \\ \hline Gas constant & $\unit[8.2051 \times 10^{-5}]{m^3atm/K/mol}$ \\ \hline Avogadro constant & $\unit[6.0221 \times 10^{23}]{molecule/mol}$ \\ \hline \end{tabular} \end{table*} \section{Research Advances}
\label{sec:advantages} Many classical signal processing problems have to be readdressed at THz, including accurate beamforming and beamsteering criteria, optimal precoding and combining methods, low-cost channel estimation paradigms (fast channel tracking), and near-optimal data detection. Moreover, the effect of blockage is also critical. Blockage can occur over the medium, or at the source due to suspended particles that block small AEs. Compressed sensing techniques can be applied to solve these problems by taking advantage of the inherent sparsity at THz. In what follows, we highlight some potential future research directions. \subsection{Modulation} \label{sec:pulse-based-modulation} Only very short pulses can be generated at room temperature at THz frequencies, with a power in the order of a few milliwatts, which is not sufficient for long communication distances. This constraint motivated the use of THz communications in nano-networks. Furthermore, the limitations of nano-scale transceivers bound the ability to generate carrier-based modulations. Hence, pulse-based modulations were adopted. In \cite{Jornet6804405}, pulse-based asymmetric on-off keying modulation spread in time (TS-OOK) was proposed. In TS-OOK, nano-devices exchange one-hundred-femtosecond-long pulses to represent a logic one, and remain silent to represent a logic zero. It supports a very large number of nano-devices, that can transmit at very high bit-rates, ranging from a few Gb/sec to a few Tb/sec. Most of the algorithms that are tailored for regular MIMO systems should be modified to account for pulse-based modulations at THz. \subsection{Resource Allocation and Waveform Design} \label{sec:distance_aware} Efficient use of resources requires the development of optimization frameworks that can control transmission power, sub-window allocation, and modulation formats. For example, a distance-aware bandwidth-adaptive resource allocation scheme was developed for both single-user and multi-user scenarios in \cite{Han7490372}.
It targets maximizing distance, rather than energy consumption or data rates as in traditional formulations. In the single-user case, the communication range and data rate are maximized by adapting the transmit power on each sub-window. In the multi-user case, the resource allocation model supports multiple links at the same time. The objective is to maximize the total distance based on the number of transmission links, with the data rate being a constraint that needs to exceed a threshold value. This scheme supports a communication distance of $\unit[21]{m}$ and a data rate of $\unit[100]{Gb/sec}$. Furthermore, THz-specific optimized multi-carrier waveform designs are required, other than orthogonal frequency-division multiplexing (OFDM), that can take advantage of the available spectral windows. A multi-wideband waveform design for distance-adaptive THz communications was developed in \cite{Han7321055}, to enable communications over long-distance networks. The optimization framework was formulated to solve for the number of frames and transmission power, with the objective of maximizing distance. It takes into consideration the peculiarities of distance-varying spectral windows, as well as temporal broadening effects and delay spreads. This scheme aggressively exploits the transmit power, achieving a communication distance of $\unit[22.5]{m}$, and supporting a $\unit[30]{Gb/sec}$ data rate. \subsection{Multi-Carrier Antenna Configurations} \label{sec:multicarrier} In an AoSA architecture, each AE is individually powered. Hence, the gain of the antenna array is higher. Placing AEs close to each other, however, limits the beamforming gain by reducing the corresponding spatial sampling capabilities. Nevertheless, the maximum separation $\delta$ between two AEs should be in the order of half the operating wavelength, $\lambda/2$, to avoid grating-lobe effects in beamforming.
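The half-wavelength rule follows from standard array theory (a sketch of the textbook grating-lobe condition, not taken from the cited works): grating lobes are avoided when $\delta \le \lambda/(1+|\sin\theta_{\max}|)$, which reduces to $\lambda/2$ for full $\pm 90^\circ$ steering.

```python
import math

# Textbook grating-lobe condition for a uniform linear array (illustrative):
# the maximum AE spacing shrinks as the required steering range widens.
def max_spacing(lam, theta_max_deg):
    return lam / (1 + abs(math.sin(math.radians(theta_max_deg))))

lam = 3e8 / 1e12                           # free-space wavelength at 1 THz: 0.3 mm
print(max_spacing(lam, 90) == lam / 2)     # the lambda/2 rule for +/-90 degrees
print(max_spacing(lam, 30) > lam / 2)      # milder steering allows wider spacing
```

A design that only needs a narrow steering sector can thus relax the spacing constraint, at the cost of coverage flexibility.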
Furthermore, graphene-based nano-antenna spacings can be significantly reduced, to the order of $\lambda_{\mathrm{SPP}}$, while still avoiding mutual coupling effects. Towards that end, an interleaved antenna map was suggested \cite{zakrajsek2017design}, in which neighboring AEs operate at different absorption transmission windows. Similarly, much larger same-frequency SA separations $\Delta$ are required to achieve good multiplexing gains. Therefore, a sparse interleaved antenna map is required. The key enabler for such frequency-interleaved maps is the ability to dynamically tune each AE to a specific resonant frequency, without modifying its physical dimensions. This can be achieved at high frequencies by simple material doping or electrostatic bias. For frequencies below $\unit[1]{THz}$, software-defined plasmonic metamaterials also exist. Figure \ref{f:AoSAs} illustrates the interleaving schemes, at the level of SAs or AEs (bottom-right corner), where same colors represent same frequencies. The separation between two AEs tuned to the same frequency is $\lambda/2$, and that between two AEs tuned to different frequencies is $\lambda_{\mathrm{SPP}}$. \begin{figure}[t] \centering \includegraphics[width=3.3in]{AoSA} \caption{Interleaved AoSA structures at the level of AEs and SAs.} \label{f:AoSAs} \end{figure} \subsection{Beamforming and Precoding} \label{sec:beamforming} Efficient beamforming schemes are required, that can overcome the high path loss and capture the distance and frequency-dependent characteristics of the THz channel. A hybrid beamforming scheme with multi-carrier transmission was developed in \cite{Lin7116524}, using analog beamforming for user grouping and digital beamforming with dynamic SAs at the baseband. An adaptive power allocation scheme was proposed that allows targeting different distances, alongside a SA selection algorithm that reduces the cost of radio frequency circuits.
In the analog domain, users in different groups share the same frequency without interference, while users in the same group are assigned orthogonal frequencies based on the distance-aware multi-carrier scheme. In the digital domain, the data streams of a user group are assigned to specific SAs. \subsection{Spatial Modulation} \label{sec:SM} Instead of antenna frequency maps, such as those in Fig. \ref{f:AoSAs} for multi-carrier designs, we can design antenna maps that turn AEs on and off in the context of a spatial modulation (SM) setup. SM can be thought of as a spectrum and power-efficient paradigm for THz UM-MIMO. Due to inherent large array dimensions, a significant number of information bits can be assigned to these maps. To the best of our knowledge, SM at THz has never been addressed in the literature. In fact, SM at very high frequencies is challenging because of LoS-dominance. Based on the previous analysis of THz channel conditions in Sec. \ref{sec:chmodel}, it can be noted that frequency, communication range, and separations between AEs can be tuned for favorable propagation settings that result in sufficient channel diversity. Adaptive and hierarchical SM solutions can be achieved by mapping information bits to antenna locations, at the level of SAs or AEs. We can perceive the antenna arrays as large fully-configurable graphene sheets, the dimensions of which can be adapted in real time by activating a specific set of AEs, to achieve a target bit rate at a specific communication range. Figure \ref{f:BER} shows sample bit error rate (BER) results for several SM and SMX schemes. ``Region 2 optimized'' corresponds to operations in region 2 of Fig. \ref{f:channel_2}, with compact designs, but with parameter tuning to guarantee favorable propagation conditions. It is observed that operations in region 2 can be made efficient, and that SMX is more sensitive to channel conditions than SM.
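A minimal illustration of the index-mapping idea (our toy encoding, not a scheme from the literature): with $N$ candidate AEs, activating a single AE per channel use conveys $\log_2 N$ bits before any conventional symbol bits are added.

```python
import math

# Toy spatial-modulation bit mapping (illustrative): the index of the single
# activated AE carries log2(N) information bits per channel use.
def sm_map(bits, n_antennas):
    assert len(bits) == int(math.log2(n_antennas))
    return int(bits, 2)               # index of the AE to switch on

# An 8x8 sub-array (64 AEs) yields 6 index bits per channel use
print(int(math.log2(64)), sm_map("000101", 64))   # 6 5
```

Hierarchical variants would apply the same mapping first at the SA level and then at the AE level within the selected SA.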
Note that SM can be combined with frequency-interleaved antenna map designs, to come up with generic index modulation solutions. Such solutions can take full advantage of available resources by assigning information bits to frequency allocations as well. \begin{figure}[t] \centering \includegraphics[width=3.7in]{BER} \caption{BER performance of SM (solid lines) and SMX (dotted lines).} \label{f:BER} \end{figure} \section{Prospect Use-Cases} \label{sec:prop_sol} Having detailed the severe channel conditions at THz, as well as recent research proposals that aim at facilitating the use of THz communications at relatively larger communication distances, we hereby discuss a select few prospect use cases that are more likely to get realized at THz in the near future. Candidate use cases should have good LoS conditions and should support sufficient design adaptability and flexibility. In what follows, we promote the use of THz communications at the intersection of communications and sensing, as an alternative to wired backbone connectivity in data centers, as part of large intelligent surface deployments, as well as in the context of mobile wireless mid-range communications. The latter use case is the holy grail of THz communications. \subsection{Communications and Sensing} \label{sec:use_nano} THz nano-sensors can be used to monitor air pollution by detecting the molecular compositions of different gases in the atmosphere. They can also be used to monitor physical parameters, such as temperature and displacement, as well as for medical diagnostic purposes. Furthermore, graphene-based nano-materials have been efficiently used to develop nano-batteries, nano-processors, and nano-sensors. Large MIMO nano-antenna array configurations can be used to enhance the accuracy of such sensors, by exploiting the spatial degrees of freedom to increase sensing resolution. In fact, the field of nano-technology continues to expand with the advent of novel nano-materials.
EM communications among nano-devices, however, suffer from several limitations, which are mainly due to their small size and low energy. Low-power and low-cost UM-MIMO nano-antenna arrays can be used to communicate sensing information over a distributed wireless sensing network. \subsection{Data Centers} \label{sec:use_datacenters} Data centers are composed of networked computers and storage devices, that can be used for storing, processing, and accessing large amounts of data. There is a huge need for novel communication technologies and networking solutions to improve the achievable data rates of such systems. Wiring a massive number of servers increases the size of data centers and reduces system efficiency. This is particularly true for under-water data center deployments. Wireless links can reduce system costs and yield more energy efficient data centers by eliminating the need for power-hungry switches. The flexibility and reconfigurability of THz antenna arrays can be leveraged to support multiple inter-rack and intra-rack communication links at the same time. THz UM-MIMO transceivers with high power, low noise figures, and good sensitivity can thus be optimized in such static environments. \subsection{Large Intelligent Surfaces} \label{sec:LINs} A large intelligent surface (LIS) is a recently proposed concept in wireless communications, in which future man-made structures, such as buildings, roads, and walls, are expected to be electronically active. With LISs, the entire environment is intelligent and active for communication purposes. LISs achieve extremely high data rates, support efficient wireless charging capabilities, and enable high-precision sensing applications. This is of interest for beyond fifth generation (5G) communication paradigms that provide connections for a massive number of devices. LISs can be particularly utilized at THz because of two main favorable conditions. 
First, they yield perfect LoS indoor and outdoor propagation environments. Second, they allow for spreading of AEs over large distances to avoid mutual coupling and antenna correlations. Hence, the favorable propagation settings of region 1 in Fig. \ref{f:channel_2} are always met. Note that LISs support simple channel estimation techniques and simple feedback mechanisms, which are important for low-latency applications. \subsection{Mid-Range Mobile Communications} \label{sec:LANs} Mid-range communication applications, that require several meters of distance coverage, are the ultimate target of THz communications, and are the main motive behind developing UM-MIMO techniques that operate efficiently at THz. Towards that end, the IEEE 802.15 wireless personal network group formed a THz interest group, and several experiments on THz wave propagation in room environments have been conducted. It was demonstrated that transmission windows up to 500 GHz can support personal wireless networks, with $\unit[20]{Gb/sec}$ peak rates. Moreover, communications at the THz band bring many exciting opportunities for vehicular networks, as well as challenges. In fact, transmitting at high data rates causes the system to be quasi-static, even when users are mobile. Furthermore, moving to high carrier frequencies minimizes the Doppler effect. Intelligent and adaptive UM-MIMO designs can serve the need for high-efficiency antennas, that allow for sharing transceiver resources, choosing carrier frequencies, and directing antenna beams to multiple users, all in a compact integration. Similarly, fast-moving unmanned aerial vehicles or drones are highly dependent on the throughput, reliability, and latency of wireless systems, which makes THz UM-MIMO a candidate solution. \section{Conclusion} \label{sec:conclusion} In this paper, the characteristics of the THz channel have been demonstrated to advocate the potential of UM-MIMO systems at high frequencies.
THz UM-MIMO systems overcome the distance and power limitation problems, and graphene-based nano-antenna arrays, in particular, can efficiently realize such systems. We illustrated that while better conditioned channels require large AE separations, interleaved multi-carrier antenna configurations can maintain design compactness. We further argued that potential THz UM-MIMO research directions include modulation, resource allocation, and beamforming. Finally, the paper concluded by motivating prospect use cases of THz UM-MIMO. \ifCLASSOPTIONcaptionsoff \newpage \fi \end{document} \section{Introduction} \label{sec:Intro} \IEEEPARstart{T}{erahertz} (THz) band communications promise to exploit the large bandwidths at THz frequencies \cite{Akyildiz6882305}, to fulfill the high data rate demands of future generations of wireless communications. While millimeter wave (mmWave) systems \cite{Rangan6732923} have been extensively explored in recent years over the frequency range of 30 to 300 gigahertz (GHz), the highest attainable bandwidth at such frequencies is 10 GHz, and so a physical layer efficiency of at least 100 bits/sec/Hz is required to achieve a terabit (Tb)/sec data rate. On the contrary, since the available bandwidth between 0.3 THz and 10 THz (i.e., at the THz range) can reach hundreds of GHz, a target Tb/sec data rate can be achieved with minimal physical layer efficiency-enhancement techniques \cite{akyildiz2016realizing}. Over the past few years, affordable technologies have enabled a widespread usage of mmWave systems. For example, mmWave-enabled IEEE 802.11ad (WiFi) networks (WiGig), high-definition (HD) video applications, and single-chip fine-resolution radar integrated circuits have emerged. One of the IEEE 802.15 wireless personal area networks (WPAN) study group's missions is to explore high-frequency ranges, so as to solve a variety of next-generation wireless communication needs, by supporting multi-gigabit (Gb)/sec and Tb/sec links. 
Recently, major advancements in transceiver design are closing the so-called THz gap, which paves the way for several applications at the THz band, ranging from indoor wireless communications, to vehicular and drone-to-drone communications, device-to-device (D2D) communications, and nano-communications. THz signals, further, have the potential to be used in many non-communication-based applications, such as spectroscopy of small bio-molecules and quality control of pharmaceutical products. The spectrum decomposition and the corresponding applications are illustrated in Fig. \ref{f:spectrum}. \begin{figure}[t] \centering \includegraphics[width=3.5in]{THzSpectrum3.png} \caption{Spectrum decomposition and high-frequency applications.} \label{f:spectrum} \end{figure} Despite the promising utilization features of THz communications systems, their high-frequency operation properties impose several implementation hurdles, both at the signal generation and at the signal detection levels. Towards addressing such implementation challenges, a variety of integrated electronic and photonic solutions are proposed, that do not necessarily result in perfect THz devices, but rather efficient and programmable devices that satisfy emerging system-level properties; see \cite{sengupta2018terahertz} and references therein. Electronic III-V-based semiconductor technologies include Indium Phosphide (InP) heterojunction bipolar transistors (HBTs), high electron mobility transistors (HEMT), and gallium arsenide (GaAs) based Schottky diodes. At high frequency ranges, however, the corresponding generators and detectors perform poorly at room temperature (plasma waves excited by a HEMT tend to be unstable). Photonic solutions, on the other hand, are based on photomixers, difference frequency generation, or parametric generation with nonlinear materials.
Quantum cascade lasers (QCLs), which are semiconductor-based pumped lasers consisting of unipolar intersubband transitions, are also used for THz signal generation. Nevertheless, integrated hybrid electronic-photonic systems \cite{sengupta2018terahertz} have been proposed as a deviation from classical approaches. Recently, plasmonic solutions have emerged as strong candidates that enable communications at the THz band, in particular, graphene-based solutions \cite{ju2011graphene}. The unique electrical properties of graphene, such as high electron mobility, electrical tunability, and configurability, allow supporting high-frequency signals. Plasmonic-based antennas enable the propagation of surface plasmonic polariton (SPP) waves, which are confined electromagnetic (EM) waves that travel through the metal-dielectric interface due to oscillations of electric charges. In fact, SPP waves propagate at speeds that are much lower than those of regular EM waves and, hence, possess a characteristic wavelength (denoted by $\lambda_{\mathrm{SPP}}$) that is much smaller than the EM wavelength (denoted by $\lambda$). Therefore, compact array designs can be deployed, which integrate a massive number of antennas in a tiny footprint \cite{akyildiz2016realizing}. From a coverage perspective, additional crucial challenges still need to be addressed, particularly those related to high propagation losses and power limitations faced at THz frequencies, which result in short communication ranges. To overcome such limitations, we distinguish between either reflect arrays or ultra-massive multiple input multiple output (UM-MIMO) antenna systems, as alternative means to extend the coverage range \cite{akyildiz2016realizing}. UM-MIMO is a more convenient and generic solution than reflect arrays, since the latter is tailored for non line-of-sight (NLoS) environments.
UM-MIMO, which is the focus of this paper, offers the valuable advantages of increasing the communication range through beamforming, and improving the achievable data-rate through spatial multiplexing (SMX). In fact, the ability to build compact phase-coherent UM-MIMO arrays (using plasmonic materials such as graphene) is a key added value of THz communications compared to free-space optical (FSO) communications. The main aim of this paper is to establish a clear link between transceiver design, channel characteristics, and prospective use cases of THz UM-MIMO systems. To the best of our knowledge, the literature lacks a holistic work of this kind. Towards this end, we start by introducing the array of sub-arrays (AoSA) configuration and highlighting the particular advantages of adopting graphene in Sec. \ref{sec:circuit}. We then detail various channel modeling approaches in Sec. \ref{sec:chmodel}, so as to best describe the relation between channel characteristics and system performance. We further define open challenges and potential signal-processing research advances in Sec. \ref{sec:advantages}, which include efficient signal modulation, waveform design, and distance-aware resource allocation. Finally, based on previously discussed constraints, we recommend specific THz UM-MIMO use cases and conclude in sections \ref{sec:prop_sol} and \ref{sec:conclusion}, respectively. \section{Array of Sub-Arrays Design} \label{sec:circuit} Massive plasmonic antenna configurations are constructed as large arrays of antenna elements (AEs). Since inter-AE separations are typically in the order of $\lambda$, operating at high frequencies naturally results in dense packaging. For instance, while mmWave AoSAs require footprints of a few square centimeters for a small number of antennas, a massive number of antennas can be embedded at THz in a few square millimeters.
This densification is further emphasized with plasmonic antennas, where separations are in the order of $\lambda_{\mathrm{SPP}}$ (e.g., $\lambda_{\mathrm{SPP}}\!=\!\lambda/15$ for graphene). Such compactness in design, however, comes at the cost of limiting the beamforming and multiplexing gains of UM-MIMO, due to inadequate spatial sampling, and increasing the complexity of antenna array control \cite{zakrajsek2017design}. As a solution, large antenna arrays can be divided into multiple sub-arrays (SAs) of smaller size in an AoSA architecture. Deploying multiple AEs in a SA improves the beamforming gain and decreases the required transmission power for each element. Hence, each SA offers the array gain, and the collaboration between SAs provides the SMX gain. This configuration results in a large number of directed independent paths, each of which can be used to carry independent information, which in turn results in high data capacity. For example, an $\unit[18]{dB}$ array gain is achieved with a $\unit[0.4]{mm^2}$ footprint at $\unit[1]{THz}$, using 16 graphene-based SAs, each of which comprises $8\times8$ AEs \cite{Han8417893}. AoSA architectures were originally proposed for mmWave systems in an indoor environment \cite{Torkildson6042312}, by investigating the effect of AoSA spacing, the alignment between the transmitting and receiving arrays, and the position of line-of-sight (LoS) blockage. Reference \cite{Torkildson6042312} concludes that SMX gains are more important than beamforming gains for indoor $\unit[60]{GHz}$ links with typical consumer electronics and computing devices. This is further justified by the fact that beamforming comes at the cost of increased system complexity, where the transmitter requires perfect knowledge of the channel state information for aligning the beam towards the receivers of interest.
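As a quick illustration of this densification, the following back-of-the-envelope sketch compares how many AEs fit in the same square footprint at classical $\lambda/2$ spacing versus plasmonic $\lambda_{\mathrm{SPP}}$ spacing. The $\unit[2]{mm}$ footprint is an assumed example; only the operating frequency and the $\lambda_{\mathrm{SPP}}=\lambda/15$ ratio come from the discussion above:

```python
# Back-of-the-envelope element counts for a square array footprint.
c = 3e8                 # speed of light (m/s)
f = 1e12                # operating frequency: 1 THz
lam = c / f             # free-space wavelength (~0.3 mm)
lam_spp = lam / 15      # SPP wavelength for graphene (ratio quoted in the text)

side = 2e-3             # assumed 2 mm x 2 mm footprint (illustrative)
n_classic = int(round(side / (lam / 2))) ** 2   # elements at lambda/2 spacing
n_plasmonic = int(round(side / lam_spp)) ** 2   # elements at lambda_SPP spacing
```

Under these assumptions, on the order of a hundred conventionally spaced elements become tens of thousands of plasmonic elements in the same footprint, which is precisely the densification argued above.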
Since the energy and spectral efficiencies in an AoSA architecture are dictated by the beamforming strategy, hybrid beamforming is typically sought to reduce hardware costs and power consumption, in which operations get divided between the analog and digital domains. Hybrid AoSA architectures at THz are detailed in \cite{7786122}, where a two-step analog beamforming and user grouping mechanism is illustrated by dividing users according to their angle of departure. Users within the same angular sector are first allocated to the same group. Then, each SA carries out beamforming by searching for each user group in the pre-scanned sector. The beamforming angle is selected such that the overall received signal power for one user group over all subcarriers is maximized. Afterwards, digital beamforming is performed in baseband on each subcarrier. Several other factors have to be taken into consideration in AoSA designs, e.g., the feeding losses of larger array configurations. Furthermore, the mutual coupling effects between adjacent AEs, which depend on the array configuration and the operating frequency, often degrade the system performance (these can be neglected for inter-AE separations larger than $\lambda_{\mathrm{SPP}}$). \section{Channel Modeling and Characteristics} \label{sec:chmodel} The previous section discussed how UM-MIMO AoSAs can combat propagation losses and maximize the achievable gains at the THz band. The exact performance of THz UM-MIMO systems, however, is dictated by the exact channel conditions and the corresponding accuracy in channel state information. In particular, capturing important channel parameters, such as path gain, delay spread, and angular spread, allows for efficient exploitation of important channel characteristics, such as spatial degrees of freedom and capacity. The spatial degrees of freedom dictate the maximum SMX gain that can be supported, which is directly linked to channel capacity.
Accurate channel models are thus a prerequisite for efficient utilization of the THz band. Such models should take into account the impact of both spreading and molecular absorption losses. Furthermore, line-of-sight (LoS), NLoS, reflected, and scattered paths should be considered, and static and time-variant environments should be treated separately. Note that realistic channel measurements have been recently reported, such as in \cite{xing2018propagation}, where a $\unit[140]{GHz}$ channel sounder was tailored for both long-distance propagation measurements with delay spread and short-range dynamic channel measurements. In what follows, we review several channel modeling approaches and we detail the peculiar characteristics of the THz channel. \subsection{THz Channel Modeling} THz channel modeling approaches are deterministic, statistical, or hybrid \cite{Han8387210}. Deterministic channel modeling depends on site geometry, and is often achieved via ray tracing (RT) techniques that are capable of handling site-specific structures. Applying RT to every channel path, however, increases system complexity. As a solution, point-to-point RT can first be used to capture the losses between virtual points at the transmitter and the receiver, and the resultant model can then be mapped to other AEs, which reduces the computational complexity. As for statistical modeling, it is either matrix-based or reference-antenna-based. In a matrix-based model, each independent sub-channel is represented by a complex Gaussian variable. On the other hand, reference-antenna-based models assume single-input single-output statistical propagation, for two reference antennas at the transmitter and the receiver, with array steering vectors. Finally, hybrid channel modeling combines the advantages of both deterministic and statistical approaches, where dominant paths can be individually captured by the deterministic method, while other paths can be statistically generated.
This captures the spatial-temporal properties while allowing smooth time evolution and avoiding channel discontinuity. \begin{figure}[t] \centering \includegraphics[width=3.4in]{PathLoss_font12} \caption{Path loss as a function of frequency and communication ranges.} \label{f:channel_1} \end{figure} \begin{table}[t] \small \caption{Simulation Parameters and their Typical Values ($T$: system temperature, $p$: system pressure, $q^{i,g}$: mixing ratio of $(i,g)$, $f_{c0}^{i,g}$: resonant frequency of $(i,g)$ at reference pressure, $\gamma$: temperature broadening coefficient, $\delta^{i,g}$: linear pressure shift of $(i,g)$, $S^{i,g}$: line intensity, $\alpha^{air}_0$: broadening coefficient of air, $\alpha^{i,g}_0$: broadening coefficient of $(i,g)$)}\label{table:para} \centering \begin{tabular}{| c || c |} \hline Parameter & Value \\ \hline\hline $T$ & $\unit[396]{K}$ \\ \hline $p$ & $\unit[0.1]{atm}$ \\ \hline $q^{i,g}$ & $\unit[0.05]{\%}$ \\ \hline $f_{c0}^{i,g}$ & $\unit[(0 \sim 276.45)]{Hz}$ \\ \hline $\gamma$ & $(-0.16 \sim 0.83)$ \\ \hline $\delta^{i,g}$ & $\unit[(-0.0409 \sim 0.0251)]{Hz}$ \\ \hline $S^{i,g}$ & $\unit[(9.98\times 10^{-36} \sim 2.66\times 10^{-18})]{Hz\ m^2/molecule}$ \\ \hline $\alpha^{air}_0$ & $\unit[(0.0023 \sim 0.1117)]{Hz}$ \\ \hline $\alpha^{i,g}_0$ & $\unit[(0.052 \sim 0.916)]{Hz}$ \\ \hline \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=3.45in]{cond_nb} \caption{Channel condition number as a function of $\Delta$ and $D$ for $f = \unit[3]{THz}$. } \label{f:channel_2} \end{figure} For all these approaches, inter-stream channel correlation remains a significant challenge that needs to be addressed. While the Kronecker model is typically used to account for the correlation between sub-channels, this model leads to inconsistent measurements when the size of antenna arrays is large.
This is because it assumes correlation to be separable, with a resultant matrix that is a product of two correlation matrices at the transmitter and the receiver. Virtual channel representations can be developed instead, by accounting for the mutual correlation between the transmitter and the receiver. \subsection{THz Channel Conditions} At THz frequencies, the channel response is dominated by molecular absorption losses. The LoS path loss due to water vapor molecules is illustrated in Fig. \ref{f:channel_1} (as a heatmap), between $\unit[0.1]{THz}$ and $\unit[10]{THz}$, over a distance range of $\unit[30]{m}$. It can be noted that the plot is dominated by spikes (in yellow) that originate due to excited molecule vibrations at specific resonant frequencies within the THz band. With certain spikes only appearing at specific distances (as reflected by the blue shade in the bottom-left corner), the available spectrum is divided into smaller distance-dependent windows. This means that increasing the communication range does not only increase the path loss, but also reduces the available transmission bandwidths. Furthermore, the total available bandwidth reduces as frequency increases (higher occurrence of absorption spikes and higher propagation losses). The absorption coefficient for a given volumetric density depends on system temperature, system pressure, and absorption cross section \cite{Jornet5995306}. All parameters that are required for absorption loss computations can be obtained from the high-resolution transmission molecular absorption database (HITRAN), some of which are summarized in Table \ref{table:para}. Note that molecular absorptions are followed by coherent reradiations that can be lumped in a high-power absorption noise factor. The resultant noise is thus dominated by the channel-induced component (graphene-based electronic devices are low-noise), and it is colored over frequency.
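The combined effect of spreading and absorption described above can be sketched numerically. The snippet below evaluates the standard loss model $L(f,d)=(4\pi f d/c)^2 e^{k(f)d}$ in dB; the absorption coefficient values used are illustrative assumptions, since real coefficients must be computed from HITRAN line parameters (Table \ref{table:para}) and vary sharply with frequency:

```python
import math

def thz_path_loss_db(f_hz, d_m, k_abs):
    """Spreading loss (4*pi*f*d/c)^2 plus molecular absorption exp(k_abs*d),
    both in dB. k_abs (1/m) is an assumed value here; in practice it is
    derived from HITRAN parameters and is strongly frequency-dependent."""
    c = 3e8
    spreading_db = 20 * math.log10(4 * math.pi * f_hz * d_m / c)
    absorption_db = 10 * math.log10(math.e) * k_abs * d_m
    return spreading_db + absorption_db

# longer range: higher spreading loss AND exponentially higher absorption loss
loss_1m = thz_path_loss_db(1e12, 1.0, 0.1)
loss_10m = thz_path_loss_db(1e12, 10.0, 0.1)
```

Because the absorption term grows linearly in dB with distance, increasing the range does not only raise the total loss but also pushes more sub-windows above any fixed link budget, consistent with the distance-dependent windows of Fig. \ref{f:channel_1}.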
Building on these THz channel characteristics, a three-dimensional end-to-end RT-based channel model for THz UM-MIMO AoSA architectures is proposed in \cite{Han8417893} for graphene-based nano-antenna arrays, where the corresponding path gains, array factors, and achievable capacities are studied. Due to large reflection losses, the THz channel is dominated by LoS and NLoS paths, while scattered and refracted rays can be neglected. Furthermore, the channel tends to be sparse with beamforming and ill-conditioned with SMX. Nevertheless, achieving good multiplexing gains in high-frequency point-to-point LoS environments is feasible when antenna spacings are much larger than the operating wavelength. The largest number of spatial degrees of freedom in a LoS environment at high frequencies is achieved via sparse antenna arrays that generate spatially uncorrelated channel matrices, resulting in sparse multipath environments. The channel condition number, which is the ratio of the largest to smallest singular value of a channel matrix, is plotted in Fig. \ref{f:channel_2}, as a function of the communication range ($D$) and the distance between AEs ($\Delta$). Smaller condition numbers indicate better-conditioned channels. Two regions of operation can be noted: region 1 with relatively large $\Delta$ values (upper-left side of Fig. \ref{f:channel_2}), and region 2 with relatively small $\Delta$ values (where yellow curves indicate ill-conditioned channels). The dark-blue curves, in between the yellow curves, represent orthogonality of channel paths or, equivalently, equality of singular values, which guarantees optimal AoSA designs. Building on these observations on channel characteristics, we next highlight several signal processing open research directions for THz UM-MIMO. \section{Signal Processing Research Advances} \label{sec:advantages} Due to fundamental differences in signal and channel characteristics, classical signal processing problems have to be readdressed at the THz band.
Such problems include, but are not limited to, accurate beamforming and beamsteering criteria, optimal precoding and combining methods, low-cost channel estimation paradigms, and near-optimal data detection. Compressed sensing techniques can be employed to solve most of these problems by taking advantage of the inherent sparsity at THz. We hereby highlight some relevant research advances. \subsection{Modulation} \label{sec:pulse-based-modulation} The limitations of nano-scale transceivers bound the ability to generate continuous carrier-based modulations. In fact, only very short pulses can be generated in the higher THz range with graphene at room temperature, with a corresponding power in the order of a few milliwatts, which is not sufficient for long-distance communications. Pulse-based asymmetric on-off keying modulation spread in time (TS-OOK) is thus proposed in \cite{Jornet6804405}, and consists of exchanging very short pulses (one hundred femtoseconds long) among nano-devices, where a pulse encodes a logic one and silence encodes a logic zero. It supports a very large number of nano-devices that can transmit at very high rates, ranging from a few Gb/s to a few Tb/s. Most of the algorithms that are tailored for regular MIMO systems should be modified to account for pulse-based modulations, at least for the time being. But judging by the pace of growth in THz technologies, it is expected that regular carrier-based modulations will be the norm in the not-so-far future. Note that cognitive systems at THz can then adapt modulation types depending on system and channel conditions. An interesting signal processing exercise in this context would be to blindly estimate such modulations at the receiver side. The latter would be an extension to the well-investigated classical modulation classification problem.
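A minimal waveform-level sketch of this pulse-based scheme follows. The pulse-for-one/silence-for-zero structure and the femtosecond pulse duration come from the TS-OOK description above, while the Gaussian pulse shape, sampling rate, and spreading factor are illustrative assumptions:

```python
import numpy as np

def ts_ook_waveform(bits, t_pulse=100e-15, beta=100, fs=2e15):
    """TS-OOK sketch: a femtosecond-long pulse encodes a logic one, silence a
    logic zero, and consecutive symbols are spread in time by a factor beta
    (symbol slot = beta * pulse duration), letting many nano-devices interleave."""
    n_pulse = int(t_pulse * fs)          # samples per pulse
    n_slot = n_pulse * beta              # samples per spread symbol slot
    t = np.arange(n_pulse) / fs
    # Gaussian-like pulse centered in its slot (illustrative shape)
    pulse = np.exp(-((t - t_pulse / 2) ** 2) / (2 * (t_pulse / 8) ** 2))
    sig = np.zeros(len(bits) * n_slot)
    for k, b in enumerate(bits):
        if b:
            sig[k * n_slot : k * n_slot + n_pulse] = pulse
    return sig

sig = ts_ook_waveform([1, 0, 1])
```

The long silence within each symbol slot (the factor beta) is what allows many nano-devices to interleave their pulses on the same channel at very high aggregate rates.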
\subsection{Waveform Design and Beamforming} \label{sec:distance_aware} In order to make the best use of spectral windows, THz-specific optimized multi-carrier waveform designs are required, other than orthogonal frequency-division multiplexing (OFDM). Since the channel is assumed flat at THz, such designs would typically be single-carrier, with the possibility of incorporating carrier aggregation schemes. Furthermore, maintaining an efficient use of resources can be achieved through optimization frameworks that jointly control transmission power, sub-window allocation, and modulation formats. For example, a multi-wideband waveform design for distance-adaptive THz communications is developed in \cite{Han7321055} to enable communications over long-distance networks. The optimization framework is designed to solve for the number of frames and transmission power, with the objective of maximizing the communication range. The optimization framework in \cite{Han7321055} takes into consideration the characteristics of distance-varying spectral windows, as well as temporal broadening effects and delay spreads. This scheme fully exploits the available transmit power, achieving $\unit[30]{Gb/sec}$ at a $\unit[22.5]{m}$ communication range. Similarly, efficient beamforming schemes are required, so as to overcome the high path loss and account for the distance-dependent and frequency-dependent characteristics of the THz channel. A hybrid beamforming scheme is developed in \cite{Lin7116524}, with multi-carrier transmission, using analog beamforming for user grouping and digital beamforming with dynamic SAs at the baseband. An adaptive power allocation scheme is proposed for targeting different distances, alongside a SA selection algorithm that reduces the cost of radio frequency circuits. In the analog domain, different user groups can share the same frequency without correlation, and same user groups are allocated at orthogonal frequencies.
In the digital domain, the data streams of a user group are assigned to specific SAs. \subsection{Multi-Carrier Antenna Configurations} \label{sec:multicarrier} In a plasmonic AoSA architecture, nano-antenna spacings can be significantly reduced, to the order of $\lambda_{\mathrm{SPP}}$, while still avoiding mutual coupling effects. Placing AEs very close to each other, however, limits the beamforming gain by reducing the corresponding spatial sampling capabilities. In fact, the maximum distance separation $\delta$ between two AEs should be in the order of half the operating wavelength, $\lambda/2$, to avoid grating-lobe effects in beamforming. Towards that end, an interleaved antenna map is suggested in \cite{zakrajsek2017design}, in which neighboring AEs operate at different absorption transmission windows. Similarly, much larger same-frequency SA separations $\Delta$ are required to achieve good multiplexing gains. Therefore, a sparse interleaved antenna map is required. The key enabler for such frequency-interleaved maps is the ability to dynamically tune each AE to a specific resonant frequency, without modifying its physical dimensions. This can be achieved at high frequencies by simple material doping or electrostatic bias. For frequencies below $\unit[1]{THz}$, software-defined plasmonic metamaterials also exist. Figure \ref{f:AoSAs} illustrates the interleaving schemes, at the level of SAs or AEs (bottom-right corner), where same colors represent same frequencies. The separation between two AEs tuned to the same frequency is $\delta\!=\!\lambda/2$, and that between two AEs tuned to different frequencies is $\delta\!=\!\lambda_{\mathrm{SPP}}$. Note that each AE is individually powered, which results in larger antenna array gains. In fact, reconfigurability, adaptability, and scalability are key target features for all types of future THz transceivers, not just the plasmonic-based ones.
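The interleaving idea can be emulated with a simple frequency-assignment rule over the AE grid. The cyclic-diagonal rule below is an illustrative stand-in for the maps of \cite{zakrajsek2017design}; it only guarantees that physically adjacent AEs (at $\lambda_{\mathrm{SPP}}$ spacing) never share a transmission window:

```python
import numpy as np

def interleaved_map(rows, cols, n_freqs):
    """Assign each AE one of n_freqs transmission windows so that horizontal
    and vertical neighbors never share a frequency (for n_freqs >= 2).
    Same-frequency elements then sit farther apart (toward lambda/2) while
    physical neighbors can be packed at lambda_SPP. Illustrative rule only."""
    r = np.arange(rows)[:, None]
    c = np.arange(cols)[None, :]
    return (r + c) % n_freqs

m = interleaved_map(4, 4, 2)   # two windows: a checkerboard pattern
```

For two windows this yields a checkerboard; with more windows, same-frequency elements are pushed progressively farther apart, toward the $\lambda/2$ separation needed to avoid grating lobes.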
\begin{figure}[t] \centering \includegraphics[width=3.28in]{AoSA} \caption{Interleaved AoSA structures at the level of AEs and SAs (not to scale).} \label{f:AoSAs} \end{figure} \subsection{Spatial Modulation} \label{sec:SM} Spatial modulation (SM) can be thought of as a spectrum- and power-efficient paradigm for THz UM-MIMO. Instead of antenna frequency maps, as those shown in Fig. \ref{f:AoSAs} for multi-carrier designs, we can design antenna maps that turn AEs on and off in the context of a SM setup. Due to inherent large array dimensions, a significant number of information bits can be assigned to antenna locations in these maps. To the best of our knowledge, SM at THz has never been addressed in the literature. In fact, SM at very high frequencies is challenging because of LoS-dominance. Based on the previous analysis of THz channel conditions in Sec. \ref{sec:chmodel}, it can be noted that frequency, communication range, and separations between AEs can be tuned for favorable propagation settings. Adaptive and hierarchical SM solutions can be achieved by mapping information bits to antenna locations, at the level of SAs or AEs. We perceive the antenna arrays as large fully-configurable graphene sheets of AEs that can get partially activated. Such arrays can be adapted in real time by activating a specific set of AEs, to achieve a target bit rate at a specific communication range (as per our arguments in previous sections). Sample bit error rate (BER) results for several SM and SMX schemes are shown in Fig.~\ref{f:BER}. Note that \emph{Region 2 optimized} corresponds to operations in region 2 of Fig.~\ref{f:channel_2}, with corresponding compact designs, but with dimension tuning to guarantee favorable propagation conditions (sufficient channel diversity). We observe that operations in region 2 can be made efficient, and that SMX is more sensitive to channel conditions than SM.
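The index-to-bit mapping behind SM can be sketched as follows. Only the standard SM principle is assumed (index bits select the active AE or SA, symbol bits select the constellation point); the array size and QPSK constellation are illustrative choices:

```python
import numpy as np

def sm_modulate(bits, n_antennas, constellation):
    """Spatial modulation mapping: log2(n_antennas) bits select which AE (or
    SA) is activated, and log2(M) bits select the constellation symbol, so
    each channel use carries log2(n_antennas) + log2(M) bits."""
    na_bits = int(np.log2(n_antennas))
    m_bits = int(np.log2(len(constellation)))
    assert len(bits) == na_bits + m_bits
    ant = int("".join(map(str, bits[:na_bits])), 2)                 # antenna index
    sym = constellation[int("".join(map(str, bits[na_bits:])), 2)]  # data symbol
    x = np.zeros(n_antennas, dtype=complex)
    x[ant] = sym            # only one element radiates per channel use
    return x

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # unit-energy QPSK
x = sm_modulate([1, 0, 0, 1], 4, qpsk)
```

Each channel use thus carries $\log_2 N_a + \log_2 M$ bits while only one element radiates, which is the power-efficiency argument for SM over full SMX.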
Note that SM can be combined with frequency-interleaved antenna map designs, so as to come up with generic index modulation solutions. Such solutions can take full advantage of available resources by assigning information bits to frequency allocations as well. \begin{figure}[t] \centering \includegraphics[width=3.3in]{BER} \caption{BER performance of SM (solid lines) and SMX (dotted lines) under different array configurations and neglecting antenna gains.} \label{f:BER} \end{figure} \section{Prospective Research Directions} \label{sec:prop_sol} Having detailed the channel conditions and research trends, we hereby discuss a select few prospective THz UM-MIMO use cases that are more likely to get realized in the near future. Candidate use cases should provide good LoS conditions and should support sufficient design adaptability and flexibility. In what follows, we promote the use of THz communications at the intersection of communications and sensing, as an alternative to wired backbone connectivity in data centers, as part of large intelligent surface deployments, as well as in the context of mobile wireless mid-range communications. \subsection{Communications and Sensing} \label{sec:use_nano} Many applications can be piggybacked onto THz wireless communications, particularly in the areas of imaging, localization, and sensing. Perhaps the most interesting of these applications that could make use of UM-MIMO is gas sensing. The specific absorption spectral characteristics of molecules serve as fingerprints for specific gaseous compositions. Hence, by shooting THz signals into a medium, and then estimating the channel response at the receiver side, using time-domain spectroscopy, for example, the molecular composition of this medium can be detected. UM-MIMO systems can enable sensing over extended distances, where the distance-dependent behavior of molecular absorptions can be compensated for to correct the measurements.
This could be exploited to monitor air pollution from a distance. Furthermore, THz signals are used to monitor other physical parameters, such as temperature and displacement. They are also used for medical diagnostic purposes. UM-MIMO nano-antenna array configurations can enhance the accuracy of such sensors by exploiting the spatial degrees of freedom to increase sensing resolution. Furthermore, the field of nano-technology continues to expand with the advent of novel nano-materials. For instance, graphene has been recently used to develop efficient nano-batteries, nano-processors, and nano-sensors. EM communications among nano-devices, however, suffer from several limitations, which are mainly due to the small sizes and low energy levels. Towards this end, low-power and low-cost UM-MIMO nano-antenna arrays can be used to communicate sensing information over a distributed wireless sensing network. \subsection{Data Centers} \label{sec:use_datacenters} Due to the large number of networked computers and storage devices in data centers, novel communication technologies are required to facilitate accessing and processing of data. In data centers, servers are typically arranged in multiple racks, and wired connections are often sought for convenience. Wiring a massive number of servers, however, increases the size of data centers and reduces system efficiency. By contrast, wireless links can reduce system costs and yield more energy-efficient data centers by eliminating the need for power-hungry switches. Such links should be complemented by efficient networking solutions and scheduling mechanisms that allocate channels to servers based on the traffic demand. The high data rates make THz communications a strong candidate for wireless data centers. Furthermore, the reconfigurability of THz antenna arrays can be leveraged to support multiple inter-rack and intra-rack communication links.
THz UM-MIMO transceivers with high power, low noise figures, and good sensitivity can thus be optimized in such static environments. Note that THz for data centers is already attracting attention. For example, TERAPOD (a Horizon 2020 project supported by the European Union) is using data centers as a proof-of-concept deployment of end-to-end THz wireless links. In particular, recently-proposed naturally-cooled underwater data centers can make the best use of the THz band. These enclosed data centers can control the gaseous composition (using nitrogen) to reduce absorption losses. \subsection{Large Intelligent Surfaces} \label{sec:LINs} Making the entire environment intelligent and active for communication purposes is one of the visions for beyond-fifth-generation (5G) communication paradigms. Future man-made structures, such as buildings, roads, and walls, are thus expected to be electronically active. In this context, the concept of large intelligent surfaces (LISs) has been recently proposed, in which surfaces scale up beyond conventional antenna arrays, and act as transmitting and receiving structures in an environment. These surfaces should achieve extremely high data rates, support efficient wireless charging capabilities, and enable high-precision sensing applications. LISs can be particularly realized via THz UM-MIMO because of two main favorable conditions. First, they are more likely to yield perfect LoS indoor and outdoor propagation environments. Second, they impose few restrictions on how AEs can be spread. Hence, mutual coupling effects and antenna correlations can easily be avoided, and the favorable propagation settings of region 1 in Fig. \ref{f:channel_2} can easily be met. LISs further support simple channel estimation techniques and simple feedback mechanisms, which are important for low-latency applications.
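The dependence of LoS conditioning on element spread, which underlies both region 1 of Fig. \ref{f:channel_2} and the LIS argument above, can be reproduced with a minimal spherical-wave model. The array size, range, and spacings below are illustrative assumptions:

```python
import numpy as np

def los_channel(n, spacing, D, f):
    """LoS channel between two parallel n-element linear arrays facing each
    other at broadside range D. Exact per-pair (spherical-wave) distances are
    kept, so conditioning depends on the element spacing."""
    lam = 3e8 / f
    pos = np.arange(n) * spacing
    d = np.sqrt(D**2 + (pos[:, None] - pos[None, :]) ** 2)
    return np.exp(-2j * np.pi * d / lam) / d

f, D, n = 3e12, 5.0, 4
rayleigh = np.sqrt((3e8 / f) * D / n)   # Rayleigh-type spacing for near-orthogonal columns
cond_spread = np.linalg.cond(los_channel(n, rayleigh, D, f))
cond_packed = np.linalg.cond(los_channel(n, rayleigh / 10, D, f))
```

With well-spread elements the condition number stays close to one (favorable propagation), while packing the same elements ten times closer drives it up by orders of magnitude, matching the ill-conditioned behavior of region 2.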
\subsection{Mid-Range Mobile Communications} \label{sec:LANs} Mid-range wireless mobile communication applications, which require several meters of distance coverage, are the holy grail of THz communications, and are the main motive behind developing UM-MIMO techniques that operate efficiently at the THz band. THz communications promise to support ultra-broadband and ultra-high-speed applications, such as terabit wireless personal/local area networks. Recently, the IEEE 802.15 wireless personal network group formed a THz interest group, and several experiments on THz wave propagation in room environments have been conducted. It is demonstrated that transmission windows up to $\unit[500]{GHz}$ can support personal wireless networks with $\unit[20]{Gb/sec}$ peak rates. Adaptive and compact THz UM-MIMO array designs allow for sharing transceiver resources, tuning carrier frequencies, and directing antenna beams to multiple users. THz communications bring many exciting opportunities for vehicular networks as well. In fact, transmitting at high data rates causes the system to be quasi-static, even when users are mobile. Furthermore, moving to high carrier frequencies minimizes the Doppler effect. Similarly, fast-moving unmanned aerial vehicles or drones are highly dependent on the throughput, reliability, and latency of wireless systems, which makes THz UM-MIMO a candidate solution. One of the main challenges in these mobile setups is to mitigate the effect of blockage. Blockage can easily occur over the medium due to the small wavelengths. It can also easily occur at the source due to tiny suspended particles that are big enough to block AEs. Nevertheless, while it is not easy to overcome all the challenges that govern THz wireless communications, a plausible solution, and an important research direction, is to allow for the co-existence of THz, mmWave, and microwave systems.
In the meantime, high mobility can still be treated by the more mature mmWave solutions, while backhaul transmissions can be conducted at the THz band. \section{Conclusion} \label{sec:conclusion} In this paper, we examined the characteristics of the THz channel to advocate the potential of UM-MIMO systems at high frequencies. With proper configurations, UM-MIMO antenna arrays can overcome the distance and power limitations. We argued that graphene-based nano-antenna arrays, in particular, can efficiently realize THz UM-MIMO systems. We defined multiple research advances, from a signal processing perspective, that are critical for increasing the efficiency of THz communications, including signal modulation, waveform design, and resource allocation. Finally, building on all preceding arguments, we envisioned a select few use cases that are likely to realize THz UM-MIMO in the near future. \section*{Biographies} \footnotesize \textbf{Alice Faisal} (S'18) is a senior electrical and computer engineering student at Effat University, Jeddah, Saudi Arabia. She is currently the chair of the IEEE Women in Engineering affinity group at the Effat University student branch. Her research interests are in the areas of wireless communications and signal processing. \textbf{Hadi Sarieddeen} (S'13-M'18) received the B.E. degree (summa cum laude) in computer and communications engineering from Notre Dame University-Louaize (NDU), Zouk Mosbeh, Lebanon, in 2013, and the Ph.D. degree in electrical and computer engineering from the American University of Beirut (AUB), Beirut, Lebanon, in 2018. He is currently a postdoctoral research fellow in the Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division at King Abdullah University of Science and Technology (KAUST), Thuwal, Makkah Province, Saudi Arabia.
His research interests are in the areas of communication theory and signal processing for wireless communications, with emphasis on large, massive, and ultra-massive MIMO systems and terahertz communications. He was a recipient of the General Khalil Kanaan Award at NDU in 2013 for ranking first in the graduating class, and the National Council for Scientific Research Doctoral Scholarship Award at AUB in 2016. \textbf{Hayssam Dahrouj} (S'02, M'11, SM'15) received his B.E. degree (with high distinction) in computer and communications engineering from the American University of Beirut (AUB), Lebanon, in 2005, and his Ph.D. degree in electrical and computer engineering from the University of Toronto (UofT), Canada, in 2010. In May 2015, he joined the Department of Electrical and Computer Engineering at Effat University as an assistant professor, and also became a visiting scholar at KAUST. Between April 2014 and May 2015, he was with the Computer, Electrical and Mathematical Sciences and Engineering group at KAUST as a research associate. Prior to joining KAUST, he was an industrial postdoctoral fellow at UofT, in collaboration with BLiNQ Networks Inc., Kanata, Canada. His main research interests include multi-base signal processing in cloud-radio access networks, multi-sensor networks, free-space optics, machine learning, convex optimization, and distributed algorithms. \textbf{Tareq Y. Al-Naffouri} (M'10-SM'18) received the B.S. degrees in mathematics and electrical engineering (with first honors) from King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, the M.S. degree in electrical engineering from the Georgia Institute of Technology, Atlanta, in 1998, and the Ph.D. degree in electrical engineering from Stanford University, Stanford, CA, in 2004. He was a visiting scholar at Caltech, Pasadena, CA, in 2005 and summer 2006. He was a Fulbright scholar at USC in 2008.
He has held internship positions at NEC Research Labs, Tokyo, Japan, in 1998, the Adaptive Systems Lab, UCLA, in 1999, National Semiconductor, Santa Clara, CA, in 2001 and 2002, and Beceem Communications, Santa Clara, CA, in 2004. He is currently an Associate Professor in the Electrical Engineering Department at KAUST. His research interests lie in the areas of sparse, adaptive, and statistical signal processing and their applications, localization, machine learning, and network information theory. \textbf{Mohamed-Slim Alouini} (S'94-M'98-SM'03-F'09) was born in Tunis, Tunisia. He received the Ph.D. degree in Electrical Engineering from the California Institute of Technology (Caltech), Pasadena, CA, USA, in 1998. He served as a faculty member at the University of Minnesota, Minneapolis, MN, USA, and then at Texas A\&M University at Qatar, Education City, Doha, Qatar, before joining King Abdullah University of Science and Technology (KAUST), Thuwal, Makkah Province, Saudi Arabia, as a Professor of Electrical Engineering in 2009. His current research interests include the modeling, design, and performance analysis of wireless communication systems. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Efficient grasp planning for multi-fingered hands remains a challenging task. First, searching for multiple contacts and the corresponding hand configurations is a high-dimensional problem. Second, collisions should be considered in both the grasp planning and execution stages. Third, the system should be robust to the uncertainties caused by imperfect sensing, calibration and actuation. Due to the high-dimensional state space, the majority of planning research uses sampling-based methods~\cite{miller2004graspit,vahrenkamp2018planning,ciocarlie2007dexterous,saut2012efficient,song2018grasp} that rely on a precise 3D mesh model of the object. To overcome the curse of dimensionality, the hand poses in~\cite{vahrenkamp2018planning} were sampled around the object skeleton. To accelerate the inverse kinematics (IK) and collision detection during sampling, the object and fingertip workspaces in~\cite{saut2012efficient} were approximated as tree structures. Without gradient information, sampling-based methods essentially rely on dense sampling and heavy manually designed heuristics. Learning-based methods have been introduced to simplify the heuristics design~\cite{levine2016learning,mahler2017dex,quillen2018deep}. To reduce the learning dimension, these methods restricted the learned policies to vertical grasps with parallel grippers. Due to the high dimensionality, it is usually intractable to learn the grasp policy for a multi-fingered hand directly in configuration space. In~\cite{varley2015generating}, heatmaps with high success probability were learned with a deep network, after which simulated annealing was applied to obtain collision-free hand configurations. The planning time was over 16 sec/grasp. It is observed that humans tend to increase the contact surface to generate powerful, robust grasps.
With a larger contact surface, the hand provides force/torque from various positions, and the resultant wrench space can resist larger disturbances. The idea of matching the hand to the object has been explored by several researchers. In~\cite{ciocarlie2007dexterous}, the grasp quality was quantified by the distances between key points on the hand surface and the object mesh. The optimization was solved by simulated annealing in the absence of an analytical gradient, and the computation was excessively heavy for real-time applications. A data-driven method was proposed in~\cite{li2007data} to find the hand pose that matches features with the object. However, it is time-consuming to traverse the whole database for feature matching. In~\cite{song2018grasp}, the hand-object geometric fitting was achieved by sampling contacts on the object. This assignment implicitly maps the kinematics of the hand to the object by manually designed heuristics. The resultant grasps could be a narrow subset of all well-matched grasps. To avoid excessive hand-engineering, we proposed a grasp planning method called iterative surface fitting (ISF) in~\cite{fan2018grasp} and increased its efficiency by combining it with learning in~\cite{fan2018learning}. ISF optimized both the palm pose and the finger displacements by maximizing the surface fitting score. However, the previous ISF has several limitations. First, it can only handle grippers with one degree of freedom (DOF). Second, grasps with collisions were detected and pruned after the optimization; this optimize-then-prune operation produced sub-optimal grasps. Besides planning the desired grasp configurations, the robot should generate proper robot-finger trajectories to execute the grasps. The trajectory planning can be extremely expensive in this high-dimensional space.
The majority of current grasp planning methods ignore possible collisions during the execution and simply close the fingers to execute the grasps~\cite{vahrenkamp2018planning,saut2012efficient,song2018grasp,shi2017real}. A method based on rapidly-exploring random trees (RRT) was introduced in~\cite{vahrenkamp2012simultaneous} to plan motion and grasp simultaneously. With manually designed heuristics, the RRT dimension was reduced to 3; these heuristics, however, defined a potentially narrow and suboptimal subspace for the RRT search. A general trajectory optimization (TrajOpt) algorithm was presented in~\cite{schulman2013finding} using sequential quadratic programming (SQP)~\cite{boggs1995sequential}. Both RRT and TrajOpt require an object mesh model during the optimization, which is generally absent in the online grasp planning scenario. In this paper, we propose a framework to plan and execute grasps with multi-fingered hands. The framework contains both grasp planning and grasp imagination. The grasp planning searches for optimal grasps by deforming the hand surfaces along their feasible kinematic directions and matching them towards the surface of the object, under the assumption that a larger matching area produces a more stable and powerful grasp. With the planned grasp configurations, the grasp imagination optimizes the robot-finger trajectories to reach the target grasps given the point cloud representation of the objects under uncertainties. The contributions of this paper are as follows. First, the grasp planning is able to efficiently find grasps with good surface fitting performance. The average optimization time is 0.40 sec/grasp using the raw point cloud captured by stereo cameras. By optimizing the palm pose and finger joints iteratively, the planning algorithm can be applied to hands with multiple DOFs.
Second, collisions are penalized directly by gradient-based methods, instead of being pruned after the optimization as in~\cite{fan2018grasp, hang2016hierarchical}. Furthermore, the proposed method can generate both power grasps and precision grasps by adjusting the fitting weights of the fingertips. Finally, the proposed grasp imagination is able to plan collision-free finger trajectories in 0.61 sec/grasp with an imperfect point cloud and underlying uncertainties. The remainder of the paper is organized as follows. Section~\ref{sec:statement} describes the problem formulation, followed by the proposed grasping framework in Section~\ref{sec:proposed}. The grasp planning and grasp imagination are introduced in Section~\ref{sec:mdisf} and Section~\ref{sec:gi}, respectively. The experimental results on a multi-fingered hand are presented in Section~\ref{sec:results}. Section~\ref{sec:conclusion} concludes the paper and describes future work. The experimental videos are available at~\cite{website}. \section{Problem Statement} \label{sec:statement} With surface contact, the grasp planning problem for a multi-fingered hand can be formulated as: \begin{subequations} \label{eq:general_form} \begin{align} \max_{R, \boldsymbol{t}, \delta\boldsymbol{q}, \boldsymbol{\mathcal{S}^f}, \boldsymbol{\mathcal{S}^o}} &\ Q(\boldsymbol{\mathcal{S}^f},\boldsymbol{\mathcal{S}^o}) \label{eq1:cost}\\ s.t.
\quad & \boldsymbol{\mathcal{S}^f} \subset \mathcal{T}(\boldsymbol{\partial \mathcal{F}};R,\boldsymbol{t},\delta\boldsymbol{q}), \label{eq1:surface_finger}\\ & \boldsymbol{\mathcal{S}^o} = NN_{\partial \mathcal{O}} (\boldsymbol{\mathcal{S}^f}), \label{eq1:surface_object}\\ & dist(\mathcal{T}(\boldsymbol{\partial \mathcal{F}};R,\boldsymbol{t},\delta\boldsymbol{q}), \partial \mathcal{O}|\mathcal{G})\geq 0, \label{eq1:collision}\\ & \boldsymbol{q}_0 + \delta\boldsymbol{q} \in [\boldsymbol{q}_{\text{min}}, \boldsymbol{q}_{\text{max}}], \label{eq1:constraint2} \end{align} \end{subequations} where $R\in SO(3), \boldsymbol{t}\in \mathbb{R}^3$ denote the rotation and translation of the hand palm, $\boldsymbol{q}\in \mathbb{R}^{N_{jnt}}$ denotes the joint angles, with ${N_{jnt}}$ representing the number of joints, and $\boldsymbol{q}_0$ and $\delta \boldsymbol{q}$ represent the initial value and the displacement of $\boldsymbol{q}$. $\boldsymbol{\mathcal{S}^f}=[\mathcal{S}^f_1,...,\mathcal{S}^f_{N_{cnt}}],\boldsymbol{\mathcal{S}^o}=[\mathcal{S}^o_1,...,\mathcal{S}^o_{N_{cnt}}]$ are the contact surfaces on all fingers/palms and on the object, with $\mathcal{S}^f_i$ and $\mathcal{S}^o_i$ representing the $i$-th contact surface on the finger/palm and the object, and $N_{cnt}$ denoting the number of contact surfaces. $Q\in \mathbb{R}$ represents the grasp quality computed from $\boldsymbol{\mathcal{S}^f},\boldsymbol{\mathcal{S}^o}$. Constraint~(\ref{eq1:surface_finger}) shows that $\boldsymbol{\mathcal{S}^f}$ is a subset of the surface transformed from the hand surface $\boldsymbol{\partial\mathcal{F}}$ by $(R,\boldsymbol{t},\delta \boldsymbol{q})$. Constraint~(\ref{eq1:surface_object}) denotes that $\boldsymbol{\mathcal{S}^o}$ is computed as the nearest neighbor ($NN$) of $\boldsymbol{\mathcal{S}^f}$ on the object surface $\partial \mathcal{O}$.
Constraint~(\ref{eq1:collision}) denotes that the transformed hand surface $\boldsymbol{\partial \mathcal{F}}$ should not collide with the object $\partial \mathcal{O}$ and the ground $\mathcal{G}$, and (\ref{eq1:constraint2}) indicates that $\boldsymbol{q}$ stays in $[\boldsymbol{q}_{\text{min}}, \boldsymbol{q}_{\text{max}}]$. Problem~(\ref{eq:general_form}) would be a standard grasp planning problem if all contact surfaces were degenerated into contact points. In the general case, however, the problem is challenging to solve by either sampling-based or gradient-based methods, considering the inverse kinematics (IK) and collision detection with objects of complex shapes. We observe that humans tend to match the contact surfaces during grasping in order to increase the force exerted on the object and improve the robustness to uncertainties. Therefore, the grasp quality $Q$ is chosen as the negative surface fitting error between the hand contact surface $\boldsymbol{\mathcal{S}^f}$ and the object contact surface $\boldsymbol{\mathcal{S}^o}$. More concretely, \begin{equation} \label{eq:Q_form} Q(\boldsymbol{\mathcal{S}^f},\boldsymbol{\mathcal{S}^o}) = -dist(\boldsymbol{\mathcal{S}^f},\boldsymbol{\mathcal{S}^o}). \end{equation} Based on this quality formulation, this paper introduces a framework to plan and execute the grasps. The technical details are explained in the following sections. \begin{figure}[t] \begin{center} \includegraphics[width=3.3in]{framework.png} \caption{Illustration of the grasp planning and execution framework.} \label{fig:framework} \end{center} \end{figure} \section{The General Planning Framework} \label{sec:proposed} This paper introduces an optimization-based framework to plan the desired grasps and to optimize the robot-finger trajectories that execute them. The framework is illustrated in Fig.~\ref{fig:framework}.
It consists of three main blocks: grasp planning by multi-dimensional iterative surface fitting (MDISF), grasp imagination by grasp trajectory optimization (GTO), and guided sampling. Starting with an initial configuration, MDISF fits the hand surface onto the object by optimizing the palm transformation $(R,\boldsymbol{t})$ and the joint displacements $\delta\boldsymbol{q}$. The registration considers the deformation from the feasible hand motion and the collision between the hand and the environment. Compared with non-rigid registration~\cite{myronenko2010point,bookstein1989principal}, MDISF only deforms in the feasible directions of the joint motion. Compared with the ISF algorithm~\cite{fan2018grasp}, MDISF is able to plan grasps for hands with multiple DOFs, and the collision between the hand and the object/ground is penalized directly in the optimization. The grasp imagination evaluates the planned grasps and generates robot-finger trajectories to execute the highly ranked ones. The GTO algorithm is proposed to avoid collision with the environment and plan optimal finger trajectories using the incomplete point cloud under various types of uncertainties. The guided sampling is introduced to avoid becoming trapped in poorly performing local optima by prioritizing different initial palm placements, so that regions with smaller fitting errors and better collision avoidance performance are sampled more often. Guided sampling is introduced in~\cite{fan2018grasp}. The MDISF algorithm and the grasp imagination will be introduced below. \section{Multi-Dimensional Iterative Surface Fitting} \label{sec:mdisf} \begin{figure}[t] \begin{center} \includegraphics[width=3in]{ISF_Illustration.png} \caption{Illustration of the multi-dimensional iterative surface fitting (MDISF) algorithm. MDISF iterates between (a) correspondence matching and (b) surface fitting.
The correspondence matching searches for points $q_i$ (shown in blue) on the object that pair with finger surface points $p_i$ (shown in red). The surface fitting minimizes the distance between the paired points on the hand and the object while avoiding collision. Only partial correspondences are displayed for clarity.} \label{fig:ISF_Illustration} \end{center} \end{figure} With the surface fitting score~(\ref{eq:Q_form}) as the quality, Problem~(\ref{eq:general_form}) can be solved with the proposed multi-dimensional iterative surface fitting (MDISF) algorithm. Similar to iterative closest point (ICP)~\cite{besl1992method}, MDISF iterates between correspondence matching and surface fitting, as shown in Fig.~\ref{fig:ISF_Illustration}. To exploit gradient information for search efficiency, the hand surface $\boldsymbol{\partial \mathcal{F}}$ is discretized into points $\{p_i, n_i^p\}_{i=1}^{N_p}$ and bounding boxes $\{\mathcal{B}_k\}_{k=1}^{N_b}$, where $p_i\in \mathbb{R}^3, n_i^p\in \mathbb{S}^2$ represent the point position and the outward-pointing normal vector, and $N_p, N_b$ are the total numbers of hand surface points and of boxes covering the surface, as shown in Fig.~\ref{fig:ISF_Illustration}(a). Only the front surface is sampled for simplicity. Similarly, the object surface $\partial \mathcal{O}$ is discretized into points $\{q_i, n_i^q\}_{i=1}^{N_q}$, where $q_i\in \mathbb{R}^3, n_i^q\in \mathbb{S}^2$ represent the point position and the outward-pointing normal vector, and $N_q$ is the total number of points in the object point cloud. The correspondence matching finds the paired points $\{q_i\in\mathbb{R}^3, n_i^q\in\mathbb{S}^2\}_{i\in\mathcal{I}}$ on the object point cloud by nearest neighbor search with duplicate/outlier removal~\cite{zinsser2003refined}. The surface fitting minimizes the distance between the point pairs $\{p_i, n_i^p\}_{i\in\mathcal{I}}$ and $\{q_i, n_i^q\}_{i\in\mathcal{I}}$, as shown in Fig.~\ref{fig:ISF_Illustration}(b).
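As a concrete illustration of the correspondence matching step, the sketch below pairs each sampled hand surface point with its nearest neighbor on the object point cloud and rejects distant (outlier) pairs. The function name and the `max_dist` threshold are illustrative assumptions; in practice a k-d tree would replace the brute-force search.

```python
import numpy as np

def match_correspondences(hand_pts, obj_pts, max_dist=0.02):
    """Nearest-neighbor pairing with outlier removal (illustrative sketch).

    hand_pts: (Np, 3) sampled hand surface points p_i
    obj_pts:  (Nq, 3) object point cloud q_i
    Returns the indices I of retained hand points and their paired
    object-point indices, dropping pairs farther apart than max_dist.
    """
    # Brute-force squared distances from every hand point to every object point.
    d2 = ((hand_pts[:, None, :] - obj_pts[None, :, :]) ** 2).sum(axis=-1)
    nn = d2.argmin(axis=1)                          # nearest object point per hand point
    dist = np.sqrt(d2[np.arange(len(hand_pts)), nn])
    keep = dist < max_dist                          # outlier rejection
    return np.flatnonzero(keep), nn[keep]
```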
With the point representation of the surfaces, the surface fitting error $E_{fit}$ is re-formulated as \begin{equation} \label{eq:fitting_error} \begin{aligned} E_{fit}(R,\boldsymbol{t},\delta \boldsymbol{q}) = \sum_{i\in\mathcal{I}}&\left((\bar{p}_{i}-q_{i})^Tn_{i}^q\right)^2 \\& + \alpha^2((Rn_i^p)^Tn_i^q +1)^2 \end{aligned} \end{equation} where $\bar{p}_{i} = Rp_{i} + \boldsymbol{t} + {R\mathcal{J}_i(\boldsymbol{q})\delta \boldsymbol{q}}$ describes the hand surface point after the palm transformation and finger displacement, and $\mathcal{J}_i(\boldsymbol{q})$ is the Jacobian matrix at point $p_i$ for the joint configuration $\boldsymbol{q}$. The first term describes the point distance projected onto the object surface normal direction. This point-to-plane distance is broadly used in ICP~\cite{rusinkiewicz2001efficient} to allow sliding on flat surfaces, making the algorithm less sensitive to incomplete point clouds. The second term describes the alignment of the normal vectors, where $\alpha$ balances the scale of the normal alignment term. The joint displacement $\delta \boldsymbol{q}$ is neglected in the normal alignment term due to its limited performance improvement. With the current correspondence matching and the fitting error representation in~(\ref{eq:fitting_error}), Problem~(\ref{eq:general_form}) becomes: \begin{subequations} \label{eq3:overall} \begin{align} \min_{R, \boldsymbol{t}, \delta \boldsymbol{q}} &\ E_{fit}(R,\boldsymbol{t},\delta \boldsymbol{q}) \label{eq3:cost}\\ s.t.
\quad & dist\left(\mathcal{T}(\{\mathcal{B}_k\}_{k=1}^{N_b};R,\boldsymbol{t},\delta\boldsymbol{q}), \partial \mathcal{O}\right)\geq 0, \label{eq3:collision}\\ & dist\left(\mathcal{T}(\boldsymbol{\partial \mathcal{F}};R,\boldsymbol{t},\delta\boldsymbol{q}), \mathcal{G}\right)\geq 0, \label{eq3:collision_plane}\\ & \delta \boldsymbol{q} + \boldsymbol{q}_0 \in [\boldsymbol{q}_\text{min}, \boldsymbol{q}_\text{max}], \label{eq3:surface_finger} \end{align} \end{subequations} where~(\ref{eq3:collision}) represents the collision between the object $\partial \mathcal{O}$ and the bounding boxes of the hand $\{\mathcal{B}_k\}_{k=1}^{N_b}$, and~(\ref{eq3:collision_plane}) denotes the collision between the hand surface and the ground. Equation~(\ref{eq3:overall}) is a non-convex program due to the coupling term $R\mathcal{J}_i(\boldsymbol{q})\delta \boldsymbol{q}$ in~(\ref{eq3:cost}) and the collision constraints~(\ref{eq3:collision}, \ref{eq3:collision_plane}). \subsection{Collision Handling} To address the collision term, we employ the point representation of the object $\{q_i, n_i^q\}_{i = 1}^{N_q}$ and check the inclusion of the points in $\{\mathcal{B}_k\}_{k=1}^{N_b}$. As for the hand-ground collision, we represent the ground by a point $q_g$ on the ground and a normal vector $n_g$, and check the ground collision by the sign of $(p_i - q_g)^Tn_g$. The penalty method~\cite{luenberger1984linear} is introduced to avoid collision while allowing the hand to move smoothly through the space occupied by the object. More concretely, the collision error is formulated as: \begin{equation} \label{eq4:collision_error} E_{col}(R,\boldsymbol{t},\delta\boldsymbol{q}) = \sum_{l\in\mathcal{L}_o}\|\bar{p}_{l} - q_{l}\|_2^2 + \sum_{l\in\mathcal{L}_g}\left((\bar{p}_{l} - q_{g})^Tn_g\right)^2 \end{equation} where $\{q_{l}\}_{l\in\mathcal{L}_o}$ denotes the object points that are in collision with the bounding boxes.
$\{p_{l}\}_{l\in\mathcal{L}_o}$ denotes the corresponding points on the box front or back surfaces, and $\bar{p}_{l} = Rp_{l} + \boldsymbol{t} + R\mathcal{J}_{l}(\boldsymbol{q})\delta \boldsymbol{q}$. To ensure that~(\ref{eq4:collision_error}) reduces all types of collision, we choose the front or back surface that $q_{l}$ is paired with, as shown in Fig.~\ref{fig:collision_type}. \begin{figure}[t] \begin{center} \includegraphics[width=3.3in]{collision_type.png} \caption{Illustration of different collision types. (a-b) denote hand-object collisions and (c-d) denote hand-ground collisions. (a,c) show inner-side collisions while (b,d) show outer-side collisions. The algorithm first searches for the correspondence pairs $\{p_l,q_l\}_{l\in\mathcal{L}}$, as shown by the red and blue points. The sign of $\sum_{l\in\mathcal{L}}n_l^p\cdot n_l^q$ is used to detect the collision type, and $\sum_{l\in\mathcal{L}}n_l^p\cdot n_l^q\leq0$ indicates inner-side contact. For outer-side contact, $p_l$ is replaced by the points on the back, as shown by the purple dots in (b,d). } \label{fig:collision_type} \end{center} \end{figure} With the penalty method, the surface fitting~(\ref{eq3:overall}) becomes \begin{subequations} \label{eq6:overall} \begin{align} \min_{R, \boldsymbol{t}, \delta \boldsymbol{q}} &\ E(R,\boldsymbol{t},\delta \boldsymbol{q})\label{eq6:cost}\\ s.t. \quad & \delta \boldsymbol{q} + \boldsymbol{q}_0 \in [\boldsymbol{q}_\text{min}, \boldsymbol{q}_\text{max}], \label{eq6:surface_finger} \end{align} \end{subequations} where $E(R,\boldsymbol{t},\delta \boldsymbol{q})= E_{fit}(R,\boldsymbol{t},\delta \boldsymbol{q}) + w^2 E_{col}(R,\boldsymbol{t},\delta\boldsymbol{q})$ represents the overall error during the surface fitting of the current correspondence, and $w$ denotes the penalty weight of the collision.
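As a concrete illustration, the penalty term~(\ref{eq4:collision_error}) could be evaluated as follows; the correspondence search that produces the paired points is assumed to have been done already, and the argument names are illustrative.

```python
import numpy as np

def collision_error(p_obj, q_obj, p_gnd, q_g, n_g):
    """Illustrative sketch of the collision penalty E_col.

    p_obj, q_obj: (n, 3) paired hand-box points and penetrated object points
    p_gnd:        (m, 3) hand points that penetrate the ground plane
    q_g, n_g:     a point on the ground and its (unit) normal vector
    """
    e_obj = np.sum((p_obj - q_obj) ** 2)        # hand-object penetration term
    e_gnd = np.sum(((p_gnd - q_g) @ n_g) ** 2)  # hand-ground penetration term
    return e_obj + e_gnd
```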
\subsection{Iterative Palm Finger Optimization (IPFO)} Problem~(\ref{eq6:overall}) is a discretization of~(\ref{eq:general_form}) under the current correspondence and collision penalty, and is solved by the iterative palm finger optimization (IPFO) revised from~\cite{fan2018grasp}. The IPFO algorithm iteratively optimizes the palm transformation $(R,\boldsymbol{t})$ and the finger displacements $\delta\boldsymbol{q}$. \subsubsection{Palm Optimization} The palm optimization searches for the optimal $(R, \boldsymbol{t})$ by fixing the finger joint configuration: \begin{equation} \label{eq:palm_optimization} \min_{R, \boldsymbol{t}} E(R,\boldsymbol{t},0) = \min_{x}\|Ax - b\|^2 \end{equation} where $x = [r^T,\boldsymbol{t}^T]^T\in \mathbb{R}^6$ is a local parameterization of the palm transformation, and $r\in \mathbb{R}^3$ is the axis-angle vector approximating $R$ under the small-rotation-angle assumption, i.e., $R\approx I + \hat{r}$, where $\hat{\bullet}$ denotes the skew-symmetric representation of the cross product. The matrix $A = [a_{p,i}^T... a_{n,i}^T... a_{col,l}^T]^T\in\mathbb{R}^{(2|\mathcal{I}|+3|\mathcal{L}_o|+|\mathcal{L}_g|)\times6}$, with $a_{p,i} = [(p_i\times n_i^q)^T, (n_i^q)^T]$ as the point-to-plane fitting error, and $a_{n,i} = \alpha [(n_i^p \times n_i^q)^T, 0_3^T]$ as the normal alignment error. $a_{col,l}$ includes the hand-object collision $a_{obj,l} = w[-\hat{p}_l, I_3]$ and the hand-ground collision $a_{gnd,l} = w[(p_l\times n_g)^T, (n_g)^T]$. Similarly, $b = -[b_{p,i}^T... b_{n,i}^T... b_{col,l}^T]^T\in\mathbb{R}^{2|\mathcal{I}|+3|\mathcal{L}_o|+|\mathcal{L}_g|}$, with $b_{p,i} = (p_i - q_i)^Tn_i^q$ and $b_{n,i} = \alpha ((n_i^p)^Tn_i^q + 1)$. $b_{col,l}$ includes $b_{obj,l} = w(p_l - q_l)$ and $b_{gnd,l}=w(p_l - q_g)^Tn_g$.
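Given the stacked $A$ and $b$ above, the palm update of~(\ref{eq:palm_optimization}) could be sketched as follows: solve the linearized least-squares problem for $x = [r^T, \boldsymbol{t}^T]^T$, then map the small-rotation increment $r$ back to $SO(3)$. The function names are illustrative; the re-projection via Rodrigues' formula is an implementation choice, since the small-angle model $R\approx I + \hat{r}$ is only used to linearize.

```python
import numpy as np

def hat(r):
    """Skew-symmetric matrix of r, so that hat(r) @ v == np.cross(r, v)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def solve_palm(A, b):
    """Solve min ||A x - b||^2 for x = [r; t] and recover (R, t).

    The axis-angle increment r is re-projected onto SO(3) with
    Rodrigues' formula so that iterating the update stays on the manifold.
    """
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    r, t = x[:3], x[3:]
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3), t
    K = hat(r / theta)
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return R, t
```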
Equation~(\ref{eq:palm_optimization}) is a least squares problem and is solved analytically by: \begin{equation} \label{eq:palm_optimization_solution} x^*= (A^TA)^{-1}A^Tb \end{equation} \subsubsection{Finger Optimization} The finger optimization fixes the palm transformation $(R^*,\boldsymbol{t}^*)$ and searches for optimal finger displacements $\delta \boldsymbol{q}$: \begin{subequations} \label{eq:finger_optimization} \begin{align} \min_{\delta \boldsymbol{q}} &\ E(R^*,\boldsymbol{t}^*,\delta\boldsymbol{q}) = \min_{\delta\boldsymbol{q}}\|C\delta\boldsymbol{q} - d\|^2\\ s.t. \quad & \delta \boldsymbol{q} + \boldsymbol{q} \in [\boldsymbol{q}_\text{min}, \boldsymbol{q}_\text{max}], \end{align} \end{subequations} where $C = [c_{p,i}^T...c_{col,l}^T]^T \in \mathbb{R}^{(|\mathcal{I}| + 3|\mathcal{L}_o|+|\mathcal{L}_g|)\times N_{jnt}}$, with $c_{p,i}=(n_i^q)^T\mathcal{J}_i(\boldsymbol{q})$ as the point-to-plane fitting error, $c_{col,l}$ includes hand-object collision $c_{obj,l}=wR^*\mathcal{J}_l(\boldsymbol{q})$ and hand-ground collision $c_{gnd,l}=(n_g)^T\mathcal{J}_l(\boldsymbol{q})$. Similarly, $d=-[d_{p,i}^T...d_{col,l}^T]^T\in\mathbb{R}^{|\mathcal{I}|+3|\mathcal{L}_o|+|\mathcal{L}_g|}$, with $d_{p,i} = R^*p_i + \boldsymbol{t}^* - q_i$. $d_{col,l}$ includes $d_{obj,l}=w(R^*p_l + \boldsymbol{t}^* - q_l)$ and $d_{gnd,l} = R^*p_l + \boldsymbol{t}^* - q_g$. 
Equation~(\ref{eq:finger_optimization}) is a least-squares problem with box constraints, and is solved by initializing $\delta\boldsymbol{q}_0 = (C^TC)^{-1}C^Td$ and iterating between \begin{subequations} \label{eq:finger_optimization_solution} \begin{align} &\delta\boldsymbol{q}_{\bar{m}} = \delta\boldsymbol{q}_m - \gamma C^T(C\delta\boldsymbol{q}_m - d)\label{eq:fo_gradient_decent}\\ &\delta\boldsymbol{q}_{m+1} = \max(\min(\delta\boldsymbol{q}_{\bar{m}}, \boldsymbol{q}_\text{max} - \boldsymbol{q}), \boldsymbol{q}_\text{min}- \boldsymbol{q})\label{eq:fo_box_constraint} \end{align} \end{subequations} until convergence, where $\gamma$ is the step size for gradient descent and is set as $0.1N_{jnt}/trace(C^TC)$. Equation~(\ref{eq:finger_optimization_solution}) typically converges within $10\sim50$ iterations\footnote{Due to the simple form of the constraints, the iteration is more efficient than calling a general constrained least squares solver.}. IPFO is summarized in Alg.~(\ref{alg:IPFO}). \begin{algorithm} [t] \caption{Iterative Palm-Finger Optimization (IPFO)}\label{alg:IPFO} \begin{algorithmic}[1] \State \textbf{Input:} $\boldsymbol{\partial\mathcal{F}}, \partial \mathcal{O}, \mathcal{I}, \mathcal{L}$\label{IPFO:input} \State \textbf{Init:} $(R,\boldsymbol{t})\leftarrow(I,0),\delta \boldsymbol{q} = 0,e_{prev} = \infty$ \label{dual:init} \For {$t = 0:T_{max}$} \label{IPFO:forloop} \State$\{p_i,n_i^p, \mathcal{J}_i\}_{i\in\mathcal{I}}, \{q_i,n_i^q\}_{i\in\mathcal{I}}\leftarrow sample(\boldsymbol{\partial\mathcal{F}},\partial \mathcal{O},\mathcal{I})$\label{IPFO:sample} \State $\{p_l, q_l\}_{l\in\mathcal{L}}\leftarrow sample\_collision(\boldsymbol{\partial\mathcal{F}}, \partial\mathcal{O}, \mathcal{L})$ \label{IPFO:collision} \State $\{R^*, \boldsymbol{t}^*\} \leftarrow \min_{R,\boldsymbol{t}}E(R,\boldsymbol{t},0)$ by (\ref{eq:palm_optimization_solution}) \label{IPFO:palm} \State $(\delta \boldsymbol{q}^*, e) \leftarrow \min_{\delta \boldsymbol{q}}
E(R^*,\boldsymbol{t}^*,\delta \boldsymbol{q})$ by (\ref{eq:finger_optimization_solution})\label{IPFO:finger} \State $\boldsymbol{\partial\mathcal{F}}\leftarrow \mathcal{T}(\boldsymbol{\partial\mathcal{F}}; R^*, \boldsymbol{t}^*,\delta \boldsymbol{q}^*)$ \label{IPFO:transform} \State $\delta \boldsymbol{q}\leftarrow\delta\boldsymbol{q} + \delta\boldsymbol{q}^*,\quad (R,\boldsymbol{t})\leftarrow(R^*, \boldsymbol{t}^*)*(R,\boldsymbol{t})$\label{IPFO:update} \If{$e_{prev} - e < \Delta$} \label{IPFO:terminate1} \State $(R,\boldsymbol{t},\delta\boldsymbol{q},e)\leftarrow (R_{prev}, \boldsymbol{t}_{prev}, \delta\boldsymbol{q}_{prev}, e_{prev})$ \State$\textbf{break}$ \EndIf \label{IPFO:terminate2} \State $(R_{prev}, \boldsymbol{t}_{prev}, \delta\boldsymbol{q}_{prev}, e_{prev})\leftarrow (R,\boldsymbol{t},\delta\boldsymbol{q}, e)$ \EndFor \State \Return $\{R, \boldsymbol{t}, \delta \boldsymbol{q}, \boldsymbol{\partial \mathcal{F}}, e\}$ \end{algorithmic} \end{algorithm} Algorithm~(\ref{alg:IPFO}) takes as inputs $\boldsymbol{\partial\mathcal{F}}$ represented by $\{p_i, n_i^p\}_{i=1}^{N_p}$, $\partial \mathcal{O}$ represented by $\{q_i, n_i^q\}_{i=1}^{N_q}$, the surface fitting indices $\mathcal{I}$ and the collision avoidance indices $\mathcal{L}$. The corresponding points for fitting and collision are then sampled in Line~(\ref{IPFO:sample}-\ref{IPFO:collision}). The palm optimization and finger optimization are shown in Line~(\ref{IPFO:palm}-\ref{IPFO:finger}). The hand surface and hand configuration are updated in Line~(\ref{IPFO:transform}-\ref{IPFO:update}). IPFO terminates once the error reduction is less than the threshold $\Delta$, as shown in Line~(\ref{IPFO:terminate1}-\ref{IPFO:terminate2}). IPFO returns the optimal transformation $R,\boldsymbol{t}$, the finger displacement $\delta\boldsymbol{q}$ and the updated hand surface $\boldsymbol{\partial \mathcal{F}}$.
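The projected-gradient iteration~(\ref{eq:finger_optimization_solution}) can be sketched as follows. Variable names are illustrative, and a small regularizer is added to the initialization in case $C^TC$ is singular:

```python
import numpy as np

def finger_opt(C, d, q, q_min, q_max, n_iter=50):
    """Box-constrained least squares min ||C dq - d||^2 by projected gradient.

    Initializes at the (regularized) unconstrained solution, then alternates
    a gradient step with gamma = 0.1 * n_jnt / trace(C^T C) and a clipping
    projection onto the joint-limit box [q_min - q, q_max - q].
    """
    CtC, Ctd = C.T @ C, C.T @ d
    n_jnt = C.shape[1]
    dq = np.linalg.solve(CtC + 1e-9 * np.eye(n_jnt), Ctd)  # unconstrained init
    gamma = 0.1 * n_jnt / np.trace(CtC)
    for _ in range(n_iter):
        dq = dq - gamma * (CtC @ dq - Ctd)                 # gradient step
        dq = np.clip(dq, q_min - q, q_max - q)             # box projection
    return dq
```

With a simple well-conditioned system, the iteration settles on the unconstrained optimum clipped to the joint limits.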
\begin{theorem} \textit{The IPFO algorithm in Alg.~(\ref{alg:IPFO}) converges to a local optimum of Problem~(\ref{eq6:overall}). } \textbf{\proof} The convergence of IPFO is proved based on the global convergence theorem~\cite{luenberger1984linear}. First, the domain $D=SE(3)\times \mathbb{R}^{N_{jnt}}$ of $(R,\boldsymbol{t}, \delta\boldsymbol{q})$ is a compact set. Second, the function $E(R,\boldsymbol{t},\delta\boldsymbol{q})$ in~(\ref{eq6:overall}) is a continuous function. With the construction of IPFO in Line~(\ref{IPFO:palm}-\ref{IPFO:finger}) of Alg.~(\ref{alg:IPFO}), we claim that the function $E(R,\boldsymbol{t},\delta\boldsymbol{q})$ is a descent function since $E(R^*,\boldsymbol{t}^*,\delta\boldsymbol{q}^*)<E(R,\boldsymbol{t},\delta\boldsymbol{q})$ outside of the solution set. Lastly, the IPFO algorithm composed of the palm optimization $\mathcal{PO}$ (Line~\ref{IPFO:palm}) and the finger optimization $\mathcal{FO}$ (Line~\ref{IPFO:finger}) is a closed mapping, since $\mathcal{PO}$ is continuous and point-to-point, and $\mathcal{FO}$ is closed in $\mathcal{PO}(R,\boldsymbol{t},\delta\boldsymbol{q})$. Therefore, IPFO described by Alg.~(\ref{alg:IPFO}) converges to a local optimum under the current correspondence. $\left[\textbf{End of Proof}\right]$ \end{theorem} \subsection{Fitting Weights Reshaping} The current MDISF algorithm assumes that all points have equivalent importance. With this assumption, MDISF may produce unsatisfactory power grasps, which either prevent the hand from closing its fingers if it matches a region close to the hinge, or easily collide with the ground if the object is flat. To generate natural power grasps, we shape the weights of points on different regions of the hand surface with Gaussian functions, as shown in Fig.~\ref{fig:gaussian_weights}(a). With this shaping, the central regions of the palms and links are emphasized, since these regions have better robustness to uncertainties and allow large-scale joint motion.
To produce precision grasps for flat objects, we emphasize the fitting of points on the fingertips, as shown in Fig.~\ref{fig:gaussian_weights}(b). \begin{figure}[t] \begin{center} \includegraphics[width=2.7in]{gaussian_weights.png} \caption{Illustration of weights shaping for (a) power grasp and (b) precision grasp generation.} \label{fig:gaussian_weights} \end{center} \end{figure} The surface fitting with weight shaping is similar to the original one and can be solved by IPFO; the details are omitted for brevity. With the IPFO algorithm in Alg.~(\ref{alg:IPFO}), MDISF searches for the optimal hand configuration hierarchically using a multi-resolution pyramid, as shown in Alg.~(\ref{alg:isf}). MDISF iterates between matching the correspondence (Line~\ref{isf:nn}-\ref{isf:collision}) and searching for the optimal transformation and finger displacements $R,\boldsymbol{t},\delta \boldsymbol{q}$ with IPFO (Line~\ref{isf:IPFO}). \begin{algorithm}[t] \caption{Multi-Dimensional Iterative Surface Fitting}\label{alg:isf} \begin{algorithmic}[1] \State \textbf{Input:} Initial $R_s,\boldsymbol{t}_s, \delta\boldsymbol{q}_s$, $\partial \mathcal{O}$, $\boldsymbol{\partial\mathcal{F}}$, $L, I_0, \epsilon_0$ \label{isf:input} \State \textbf{Init:} $\boldsymbol{\partial\mathcal{F}} = \mathcal{T}(\boldsymbol{\partial\mathcal{F}}; R_s, \boldsymbol{t}_s, \delta\boldsymbol{q}_s)$ \label{isf:init} \For {$l = L-1, \cdots, 0$} \label{isf:paraymid} \State $I_l = I_0/2^l$, $\epsilon_l= 2^l\epsilon_0$, $e_{prev} \leftarrow \infty$, $\eta \leftarrow 0$, $it \leftarrow 0$ \While {$\eta \notin [1 - \epsilon_l, 1 + \epsilon_l] $ and $it\texttt{++} < I_l$} \State $\mathcal{I} \leftarrow \texttt{filter}(NN_{\partial O}(\texttt{downsample}(\boldsymbol{\partial\mathcal{F}}, 2^l)))$\label{isf:nn} \State $\mathcal{L} \leftarrow \texttt{collisioncheck}(\boldsymbol{\partial\mathcal{F}}, \partial \mathcal{O})$ \label{isf:collision} \State $\{R, \boldsymbol{t}, \delta \boldsymbol{q}, \boldsymbol{\partial\mathcal{F}},
e\} \leftarrow \textbf{\texttt{IPFO}}(\boldsymbol{\partial\mathcal{F}}, \partial \mathcal{O}, \mathcal{I}, \mathcal{L})$ \label{isf:IPFO} \State $\eta \leftarrow e/e_{prev}, e_{prev}\leftarrow e$, $\textbf{Confs}\leftarrow \{R,\boldsymbol{t},\delta\boldsymbol{q}\}$ \EndWhile \EndFor \label{isf:paraymid2} \State \Return $\{ e, \boldsymbol{\partial \mathcal{F}},\textbf{Confs}\}$ \end{algorithmic} \end{algorithm} \section{Grasping Imagination} \label{sec:gi} In this section, the found grasps are first ranked based on the proposed quality metric, after which the grasp trajectories are planned to reach the highly ranked grasps. \subsubsection{Grasp Quality Evaluation} The grasp quality is evaluated based on the grasp wrench space (GWS)~\cite{roa2015grasp}. In this paper, GWS $\mathcal{P}$ is constructed by 1) finding contact points by the nearest neighbor of the final hand surface on the object, 2) removing the contacts with large normal alignment error, 3) extracting the center points and the average normals by K-means, and 4) building $\mathcal{P}$ based on the extracted grasp points and normals using the soft finger model~\cite{murray2017mathematical}. With the GWS $\mathcal{P}$, three quality features are calculated. The first feature $Q_{in}$ is a bool type variable indicates the ability to resist arbitrary small disturbance by checking the inclusion of origin in $\mathcal{P}$. The second feature $Q_{vol}$ indicates the magnitude of disturbance resistance by computing the volume of $\mathcal{P}$. The third feature $Q_{cond}$ indicates the isotropy of the disturbance resistance by the condition number of the $WW^T$, where $W \in \mathbb{R}^{6\times N_p}$ is the vertex matrix of convex hull $\mathcal{P}$. 
The final grasp quality metric $Q_{gsp}$ is represented as: \begin{equation} \label{eq:quality_form} Q_{gsp}= Q_{vol} + \frac{3}{Q_{cond}} + 11 Q_{in} \end{equation} The parameters of~(\ref{eq:quality_form}) are obtained by regression against the standard Ferrari-Canny metric~\cite{ferrari1992planning} on 200 grasps from 10 objects. Equation~(\ref{eq:quality_form}) is able to rank the found collision-free grasps more efficiently than the Ferrari-Canny metric with comparable accuracy. Compared with the Ferrari-Canny metric, the computation time of~(\ref{eq:quality_form}) is reduced by 98.77\%, from 2.43 sec/grasp to 0.034 sec/grasp. The top-1 score is 70\% and the top-3 score is 90\%, out of 20 classes to be ranked. Highly ranked grasps are then fed to trajectory generation. \subsubsection{Grasp Trajectory Optimization} This paper presents a two-step procedure to plan the robot-finger trajectories. First, the hand remains half-closed and approaches the pre-grasp pose using~\cite{lin2017real}. The object is represented by its bounding box in this step. The pre-grasp position is defined by lifting the hand $0.3$ m from the final grasp, and the rotation is defined as the closest canonical orientation to the final grasp pose. Second, we optimize the finger trajectories while predefining the palm trajectory by interpolation. The grasp trajectory optimization (GTO) becomes: \begin{subequations} \label{eq:GTO} \begin{align} \min_{\boldsymbol{q}_1,...,\boldsymbol{q}_S} &\ \sum_{s=1}^{S-1}\|\boldsymbol{q}_{s+1} - \boldsymbol{q}_s\|^2 \label{eq:gto_cost}\\ s.t.
\quad & dist\left(\mathcal{T}(\boldsymbol{\partial \mathcal{F}_s};\boldsymbol{q}_s-\boldsymbol{q}_{s}^0),\partial\mathcal{O}| \mathcal{G}\right)\geq 0,\label{eq:gto_collision}\\ & |\boldsymbol{q}_{s+1} - \boldsymbol{q}_s| \leq \Delta_q, \quad s=1,...,S-1\\ & \boldsymbol{q}_{s} \in [\boldsymbol{q}_\text{min}, \boldsymbol{q}_\text{max}], \quad s=1,...,S\\ & \boldsymbol{q}_{1} = \boldsymbol{q}_\text{pregrasp}, \boldsymbol{q}_{S} = \boldsymbol{q}_\text{final}. \label{eq:gto_prefinal} \end{align} \end{subequations} where $s$ is the sample index, $S$ is the number of samples on the trajectory, and $\mathcal{T}(\boldsymbol{\partial \mathcal{F}_s};\boldsymbol{q}_s-\boldsymbol{q}_{s}^0)$ denotes the transformed hand surfaces after the joint displacement at the $s$-th sample. Optimization~(\ref{eq:GTO}) minimizes the total length of the trajectory~(\ref{eq:gto_cost}) from the pre-grasp to the final grasp~(\ref{eq:gto_prefinal}) while avoiding collision with both the object and the ground~(\ref{eq:gto_collision}). \begin{figure}[t] \begin{center} \includegraphics[width=2.2in]{gto_collision.png} \caption{(a) Point-box distance calculation. The closest point $p_{l_k}^s$ is obtained by projection and filtering. (b) The pair with the smallest signed distance ($p_{l_k^*}^s,q_{l_k^*}^s$) is chosen as the critical points for collision avoidance.} \label{fig:gto_collision} \end{center} \end{figure} Similar to MDISF, the collision constraints are penalized in the cost. We adopt the formulation in TrajOpt~\cite{schulman2013finding}: \begin{equation} \label{eq:collision_formulation} col\_term = | d_{safe} - sd(\partial \mathcal{O}, \mathcal{T}(\boldsymbol{\partial \mathcal{F}_s};\boldsymbol{q}_s-\boldsymbol{q}_{s}^0))|^+ \end{equation} where $d_{safe}$ denotes the safety distance, $sd(A,B)$ denotes the signed distance between A and B, and $|x|^+ = \max(x,0)$.
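For intuition, the hinge term in (\ref{eq:collision_formulation}) can be sketched in a few lines; this is an illustrative sketch with the paper's $d_{safe}=0.01$ m as a default, not the authors' implementation:

```python
def collision_penalty(sd, d_safe=0.01):
    """Hinge penalty |d_safe - sd|^+ of the TrajOpt-style formulation:
    zero when the signed distance sd clears the safety margin, and
    growing linearly as sd drops below d_safe (or goes negative)."""
    return max(d_safe - sd, 0.0)
```

In the cost, this term is squared and summed over the critical hand-object and hand-ground point pairs.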
We propose an approach to compute the signed distance $sd(\partial \mathcal{O}, \mathcal{T}(\boldsymbol{\partial \mathcal{F}_s};\boldsymbol{q}_s-\boldsymbol{q}_{s}^0))$ in the absence of a 3D mesh and convex decomposition of the object, as shown in Fig.~\ref{fig:gto_collision}. We first inflate the bounding boxes $\{\mathcal{B}_k^s\}_{k=1}^{N_b}$ at sample $s$ by $d_{check}$ and check the inclusion of object points. For each interior point $q_{l_k}$, we calculate the signed distance by projecting $q_{l_k}$ to the surfaces of $\mathcal{B}_k^s$ and filtering out the points with $(q_{l_k} - p_j)^Tn_l^{q}<0$ for those $q_{l_k}\in \mathcal{B}_k^s$. The closest point to $q_{l_k}$ is denoted as $p_{l_k}^s$, as shown in Fig.~\ref{fig:gto_collision}(a). The point-box signed distance is $sd(\mathcal{B}_k^s, q_{l_k})=(p_{l_k}^s - q_{l_k})^Tn_{l_k}^s$, where $n_{l_k}^s$ is a normal vector with direction $q_{l_k} - p_{l_k}^s$ if $q_{l_k} \in \mathcal{B}_k^s$ and the reverse otherwise. Therefore, $sd(\mathcal{B}_k^s, \partial \mathcal{O}) = \min_{l_k}sd(\mathcal{B}_k^s, q_{l_k})$ with the critical index $l_k^*=\argmin_{l_k}sd(\mathcal{B}_k^s, q_{l_k})$, as shown in Fig.~\ref{fig:gto_collision}(b). The hand-object collision index set is $\mathcal{L}_{s,o} = \{l_k^*\}_{k=1}^{N_b}$. Similarly, the hand-ground collision index set $\mathcal{L}_{s,g}$ includes all the points that potentially collide with the ground. With $\mathcal{L}_{s,o}, \mathcal{L}_{s,g}$, the collision for the $s$-th sample is penalized as: \begin{equation} \begin{aligned} E_{col,s} = \sum_{l_k^*\in \mathcal{L}_{s,o}}\left(|d_{safe} - (\bar{p}_{l_k^*}^s - q_{l_k^*})^Tn_{l_k^*}^s|^+\right)^2 +& \\ \sum_{l_k^*\in \mathcal{L}_{s,g}}\left(|d_{safe} - (\bar{p}_{l_k^*}^s - q_{g})^Tn_g|^+\right)^2 \end{aligned} \end{equation} where $\bar{p}_{l_k^*}^s = {p}_{l_k^*}^s + \mathcal{J}_{l_k^*}(\boldsymbol{q}_s^0)\delta \boldsymbol{q}_s$.
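The projection-and-filtering procedure above computes point-box signed distances for arbitrarily oriented boxes; for intuition, a simplified axis-aligned version (a standard box-SDF sketch, not the authors' exact procedure) looks like:

```python
import numpy as np

def box_signed_distance(p, center, half_extents):
    """Signed distance from a point p to an axis-aligned box:
    positive outside the box, negative inside (collision when negative)."""
    q = np.abs(np.asarray(p, dtype=float) - center) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0))  # distance when outside
    inside = min(np.max(q), 0.0)                  # penetration depth when inside
    return outside + inside
```

The oriented case reduces to this one by expressing $q_{l_k}$ in the box frame first.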
Therefore, GTO~(\ref{eq:GTO}) can be reformulated as: \begin{subequations} \label{eq:GTO2} \begin{align} \min_{\delta \boldsymbol{q}_1,...,\delta \boldsymbol{q}_S} &\ \sum_{s=1}^{S-1}\|\boldsymbol{q}_{s+1}^0+\delta\boldsymbol{q}_{s+1}-\boldsymbol{q}_s^0- \delta\boldsymbol{q}_s\|^2 + cE_{col,s} \\ s.t. \quad & |\boldsymbol{q}_{s+1}^0+\delta \boldsymbol{q}_{s+1} - \boldsymbol{q}_s^0 - \delta \boldsymbol{q}_{s}|_\infty \leq \Delta_q,\\ & \boldsymbol{q}_{s}^0 + \delta \boldsymbol{q}_{s} \in [\boldsymbol{q}_\text{min}, \boldsymbol{q}_\text{max}], \quad s=1,...,S\\ & \delta \boldsymbol{q}_{1} = 0, \delta \boldsymbol{q}_{S} = 0, \\ & |\delta \boldsymbol{q}_s| < \Delta_{\delta q}, \quad s=1,...,S. \end{align} \end{subequations} Optimization~(\ref{eq:GTO2}) solves for the optimal joint displacements $\{\delta\boldsymbol{q}_s^*\}_{s=1}^S$ using the current joint samples and collided points. The $\{\delta\boldsymbol{q}_s^*\}_{s=1}^S$ then update the joint samples $\boldsymbol{q}_s^0\leftarrow\boldsymbol{q}_s^0+\delta\boldsymbol{q}_s^*$, the hand surfaces $\boldsymbol{\partial \mathcal{F}_s}\leftarrow\mathcal{T}(\boldsymbol{\partial \mathcal{F}_s};\delta\boldsymbol{q}_s^*)$, the collision penalty $c\leftarrow\mu c$, and the index sets $\mathcal{L}_{s,o}, \mathcal{L}_{s,g}$. Optimization~(\ref{eq:GTO2}) iterates until no collision remains or the maximum number of iterations is reached. \section{Simulation and Experiment} \label{sec:results} This section presents the simulation and experiment. The experimental videos are available at~\cite{website}. \subsection{Parameter Lists} For MDISF, $\alpha=0.03$, $N_p=450$, $N_b = 7$, $L = 4$, $I_0 = 200$, $\epsilon_0 = 0.02$. IPFO used $\Delta = 10^{-5}, T_{max}= 20$. The power grasp used Gaussian $\Sigma = diag([l/2,w/0.1])$ with mean at the link center, where $l,w$ are the link length and width, and the base weights for palm, proximal and distal link were $0.1, 0.1, 1$.
The precision grasp used Gaussian $\Sigma = diag([l/5,w/0.1])$ with mean at the fingertip and base weights $0.01,0.01,1$. As for GTO, $d_{check}=0.03$ m and $d_{safe} = 0.01$ m, $S=30$, $\Delta_{\delta q} = [0.2, 0.2, 0.2,0.4]$, $\Delta_{q} = [0.4, 0.4, 0.4, 0.4]$. The maximum number of iterations for (\ref{eq:GTO2}) was 20. The starting collision penalty $c_0 = 1$ and $\mu = 2$. \subsection{Simulation and Experiment Results} The simulation was conducted on a desktop with 32 GB RAM and a 4.0 GHz CPU. The grasps were computed in Matlab and visualized in V-REP. The BarrettHand BH8-282 was used to test the effectiveness of the algorithm. \begin{figure}[t] \begin{center} \includegraphics[width=3.3in]{ISF_Animation.png} \caption{Visualization of MDISF iterations on a dragon object. (a) Initial hand configuration. The initial pose has collision and a large fitting error. (b-h) Error reduction in different MDISF iterations.} \label{fig:ISF_Animation} \end{center} \end{figure} The visualization of the MDISF iterations is shown in Fig.~\ref{fig:ISF_Animation}. MDISF considers both the collision avoidance and the surface fitting in each IPFO iteration. MDISF started from a random pose around the object (Fig.~\ref{fig:ISF_Animation}(a)) and optimized the palm pose and joint displacements to reduce the fitting error and penalize the collision (Fig.~\ref{fig:ISF_Animation}(b-h)). Figure~\ref{fig:error_reduction} shows the error reduction profile to validate that both the average surface fitting error $E_{fit}/|\mathcal{I}|$ and the collision cost $E_{col}$ are reduced during MDISF. The red and blue plots show the mean errors and the deviations for all samples, while the purple and yellow plots show those for collision-free grasps. On average, it took $25.2 \pm 6.3$ IPFOs and $100.2 \pm 34.4$ PFOs to converge.
The average fitting error $E_{fit}/|\mathcal{I}|$ was reduced from $0.0072 \pm 0.0031$ m to $0.0027 \pm 0.0012$ m, and the absolute $E_{col}$ was reduced from $0.5952\pm 0.4342$ m to $0.0287 \pm 0.0410$ m. All statistics were computed based on 50 samples on the bunny object shown in Fig.~\ref{fig:ISF_Illustration}. \begin{figure}[t] \begin{center} \includegraphics[width=3.4in]{error_reduction.png} \caption{Profile of the error reduction during MDISF. (Left) Overall error. (Middle) Average surface fitting error after outlier/duplicate removal. (Right) Collision error without multiplying the penalty weight. } \label{fig:error_reduction} \end{center} \end{figure} Figure~\ref{fig:sim_vis} and Table~\ref{tab:opt_details} show the visualization and quantitative results of the proposed method on ten different objects. The MDISF algorithm drew 10 samples for each object and returned on average 6.2 collision-free grasps in 2.45 secs. The top 5 grasps (or all collision-free grasps, if MDISF found fewer than 5) were selected and fed to GTO for trajectory optimization. GTO returned 3.3 collision-free trajectories out of 4.0 grasps in 2.02 secs. The surface fitting error provided a reliable metric for grasp searching, since the majority (6.1/6.2) of the grasps found by MDISF were force closure (FC). The $Q_{gsp}$ reflects the graspability (ease of being grasped) of the objects with the Barrett hand. Goblet and screwdriver were the top-2 objects with the highest graspability due to their proper size and simple structure. Hand model and gun were the top-2 objects with the lowest graspability since they are flat and close to the ground, and the Barrett hand can easily collide with the ground/object and get trapped in infeasible local optima.
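For concreteness, the ranking metric (\ref{eq:quality_form}) used to produce $Q_{gsp}$ in Table~\ref{tab:opt_details} can be sketched as follows; $Q_{in}$ and $Q_{vol}$ are assumed to be precomputed from the convex hull $\mathcal{P}$ (a minimal sketch, not the authors' code):

```python
import numpy as np

def grasp_quality(W, q_in, q_vol):
    """Q_gsp = Q_vol + 3/Q_cond + 11*Q_in, where Q_cond is the
    condition number of W W^T and W (6 x N_p) stacks the GWS hull vertices."""
    q_cond = np.linalg.cond(W @ W.T)
    return q_vol + 3.0 / q_cond + 11.0 * float(q_in)
```

A grasp that fails the origin-inclusion test loses the constant $11 Q_{in}$ term, which is what drives the negative scores in the table.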
\begin{figure}[t] \begin{center} \includegraphics[width=3.4in]{sim_vis.png} \caption{Visualization of the MDISF on ten objects.} \label{fig:sim_vis} \end{center} \end{figure} \begin{table}[t] \centering \caption{Numerical Results of the Grasping Framework} \label{tab:opt_details} \begin{tabular}{l|ll|lll|ll} \hline \multicolumn{1}{l|}{Object} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}c@{}} collision-free\#\\ \hline total samples \end{tabular} } & \multicolumn{3}{l|}{Time (secs)} & \multicolumn{2}{l}{Qualities} \\ \hline & ISF& GTO& ISF & GTO& Sum& FC & $Q_{gsp}$ \\\hline\hline Bunny & 8/10& 4/5& 2.8& 3.2& 6.0& 7/8& 6.86 \\ Screwdriver& 9/10& 1/5& 2.2& 3.0& 5.2& 9/9& 9.56 \\ Gun& 2/10& 1/2& 2.5& 1.8& 4.3& 2/2& -9.41 \\ Kettle& 8/10& 5/5& 2.1& 2.2& 4.3& 8/8& 3.39 \\ Goblet& 10/10& 5/5& 2.4& 1.7& 4.1& 10/10& 13.66 \\ Doraemon& 9/10& 5/5& 2.1& 1.6& 3.7& 9/9& 9.27 \\ Hand& 1/10& 1/1& 2.3& 0.6& 2.9& 1/1& -12.32 \\ Banana& 4/10& 3/4& 2.2& 2.5& 4.7& 4/4& -1.71 \\ Mug& 8/10& 5/5& 3.0& 2.2& 5.2& 8/8& 8.18\\ Oscar & 3/10& 3/3& 2.9& 1.4& 4.3& 3/3& -6.80 \\\hline \hline Average& $\frac{6.2}{10}$& $\frac{3.3}{4.0}$& 2.45& 2.02 & 4.47 &$\frac{6.1}{6.2}$& 2.068 \end{tabular} \end{table} Figure~\ref{fig:precision_and_power} compares the precision grasps and power grasps generated by MDISF. The precision mode and power mode generated 8 and 5 collision-free grasps out of 10 samples on the Bunny object, respectively. The hand tended to collide with the object in power grasp mode since the fitting of the palm/proximal links was emphasized and the hand stayed closer to the object, as shown in Fig.~\ref{fig:precision_and_power}(b). \begin{figure}[t] \begin{center} \includegraphics[width=3.3in]{precision_and_power.png} \caption{Comparison of (Top) precision grasp mode and (Bottom) power grasp mode of MDISF on Bunny object.} \label{fig:precision_and_power} \end{center} \end{figure} Figure~\ref{fig:gto_vis} shows the result of GTO on the kettle object.
The trajectory started from the top and reached the desired grasp with one finger in a narrow space. The fingers collided with the object when grasping with the predefined finger motion, as shown in Fig.~\ref{fig:gto_vis}(Top). GTO planned finger trajectories that avoid collision and reach the target grasp in the narrow space, as shown in Fig.~\ref{fig:gto_vis}(Bottom). \begin{figure}[t] \begin{center} \includegraphics[width=3.3in]{gto_vis.pdf} \caption{Snapshots of the trajectories to execute the grasp. (Top) Predefined finger motion to close the hand. (Bottom) Optimized trajectory by GTO. The green spots indicate the collided regions. } \label{fig:gto_vis} \end{center} \end{figure} Figure~\ref{fig:grasp_exp} shows the experimental results using the FANUC LRMate 200iD/7L manipulator and BarrettHand BH8-282 on ten objects. The scene was captured by two IDS Ensenso N35 stereo cameras. The observed point cloud and the optimized grasp are shown on the left, and the executed grasp after GTO is shown in the middle. An extra amount of finger motion was then executed in order to provide the necessary force to clamp the object, as shown on the right. The observed point cloud was incomplete and noisy, and the system also contained uncertainties in calibration ($\sim$3 mm robot-camera frame alignment), positioning ($\sim$$1^\circ$ TCP-palm alignment, $\sim$$2.0^\circ$ finger joint tracking error) and communication ($\sim$0.1 sec robot-hand command synchronization error). The system exhibited a certain robustness and was able to plan and execute grasps despite the unsatisfactory point cloud and various types of uncertainties, as shown in Fig.~\ref{fig:grasp_exp}(1-9). We also include three failed grasps to illustrate three failure modes, as shown in Fig.~\ref{fig:grasp_exp}(10-12). The first failure mode arose from slippage. Without force optimization, the exerted force might fall outside the friction cone, as shown in Fig.~\ref{fig:grasp_exp}(10).
The second failure mode arose from the asymmetric distribution of contact forces during clamping, as shown in Fig.~\ref{fig:grasp_exp}(11). The third failure mode arose from internal disturbance. The proximal link accidentally collided with the object during the clamping stage and introduced a large disturbance, as shown in Fig.~\ref{fig:grasp_exp}(12). \begin{figure}[t] \begin{center} \includegraphics[width=3.4in]{grasp_exp.pdf} \caption{Illustration of the grasp experiments on 10 objects. (1-9) Successful grasp trials. (10-12) Failed grasp trials. For each grasp, the left shows the point cloud and planned grasp, the middle shows the actual grasp after GTO, and the right shows the grasp after clamping and lifting.} \label{fig:grasp_exp} \end{center} \end{figure} \section{Conclusion} \label{sec:conclusion} This paper has proposed an efficient framework for grasp generation and execution. The framework includes a multi-dimensional iterative surface fitting (MDISF) and a grasp trajectory optimization (GTO). The MDISF algorithm searches for optimal grasps by minimizing the hand-object fitting error and penalizing the collision, and the GTO algorithm plans finger trajectories for grasp execution with the point cloud representation of the object. The MDISF-GTO exhibits a certain robustness to incomplete/noisy point clouds and various underlying uncertainties. On average, it took 0.40 sec for MDISF to find a collision-free grasp, and 0.61 sec for GTO to optimize the trajectory to reach the grasp. The current implementation clamps objects by simply closing the fingers. This may cause slippage and uneven force distribution, and introduce internal disturbances into the system. Future work will optimize the grasping force~\cite{fan2017real} for better slippage resistance and disturbance robustness. \addtolength{\textheight}{-1cm} \bibliographystyle{IEEEtran}
\section{Introduction} A number of remarkable techniques arising from particular gauge theories in physics have long found incarnations in different branches of mathematics. They have notably been employed to study low-dimensional topology and geometry in a rather sophisticated way, such as Donaldson theory on four-manifolds \cite{Don}, the work of Floer on the topology of 3-manifolds and Yang-Mills instantons that serves as a Morse-theoretic interpretation of Chern-Simons gauge theory (and hence an infinite-dimensional counterpart of classical smooth Morse theory \cite{Floer}, \cite{DFK}, \cite{Ruberman}), and Witten's knot invariants \cite{Wit3} arising from a certain \textit{three-dimensional} Chern-Simons theory. The main motivations of the current discussion are as follows: \textit{(i)} to provide a brief introduction to the notion of quantization, \textit{(ii)} to introduce the \textit{geometric quantization formalism} (GQ) and try to understand how the notion of \textit{quantization} boils down to the study of the representation theory of \textit{classical observables}, in the sense that one can construct \textit{the quantum Hilbert space} $\mathcal{H}$ and a certain Lie algebra homomorphism, and \textit{(iii)} to elaborate in a rather intuitive manner on the quantization of Chern-Simons theory together with a brief discussion of TQFT in the sense of Atiyah \cite{Atiyah} and the language of category theory (cf. \cite{Stacks}, \cite{Vakil}) that manifestly captures the essence of TQFT. With this formalism in hand, we shall investigate Witten's construction of quantum invariants \cite{Wit3} in three dimensions, and see where the geometric quantization formalism comes into play. \vspace{5pt} \noindent \textbf{Acknowledgments. }This is \textit{an extended version} of the talk given by the author at \textit{the Workshop on Mathematical Topics in Quantization}, Galatasaray University, Istanbul, Turkey in 2018.
The shorter version, on the other hand, will appear in the proceedings of this workshop. This note consists of introductory material on the notion of geometric quantization based on a series of lectures, namely \textit{Geometric Quantization and Its Applications}, delivered by the author as a weekly seminar/lecture at the theoretical physics group meetings organized by Bayram Tekin at the Department of Physics, METU, Spring 2016-2017. Throughout the note, we do not intend to provide original or new results that are not already known to the experts. The references, on the other hand, are not meant to be complete either. But we hope that the material we present herein provides a brief introduction and a na\"{\i}ve guideline to the existing literature for non-experts who may wish to learn the subject. For a quick and accessible treatment of the geometric quantization formalism, including a short introduction to symplectic geometry, see \cite{Blau}, \cite{BDV} or \cite{Honda}. \cite{daS}, on the other hand, provides a pedagogically oriented, complete treatment of symplectic geometry. The full story with a more systematic formulation is available in \cite{Woodhouse} and \cite{Brian}. Finally, I would like to thank Özgür Kişisel and Bayram Tekin for their comments and corrections on this note, and I am also very grateful to them for their enlightening, fruitful and enjoyable conversations during our regular research meetings. I would also like to thank the organizers and all the people who made the event possible and gave me the opportunity to be a part of it. \section{Quantization in What Sense and GQ Formalism} We would like to elaborate the notion of geometric quantization in the case of the quantization of classical mechanics.
Recall that observables in classical mechanics with \textit{a phase space} $(X,\omega)$, a finite dimensional symplectic manifold, form a \textit{Poisson algebra} with respect to \textit{the Poisson bracket} $\{ \cdot , \cdot \}$ on $C^{\infty}(X)$ given by \begin{equation} \label{defn of poisson bracket} \{f,g \} := -\omega(X_f,X_g)=X_f (g) \ \ for \ all \ f,g \in C^{\infty}(X), \end{equation} where $X_f$ is \textit{the Hamiltonian vector field associated to} $f$, defined implicitly as \begin{equation} \label{defn of Hamiltonian vector field} \imath_{ X_f} \omega=df. \end{equation} Here, $ \imath_{ X_f} \omega $ denotes \textit{the contraction} of the 2-form $\omega$ with the vector field $X_f$ in the sense that \begin{equation} \imath_{ X_f} \omega \ (\cdot):= \omega(X_f, \cdot). \end{equation}Employing the canonical/geometric quantization formalism (cf. \cite{Honda}, \cite{Blau}, \cite{Woodhouse}, \cite{Brian}), the notion of quantization boils down to the study of the representation theory of classical observables in the sense that one can construct the quantum Hilbert space $\mathcal{H}$ and a Lie algebra homomorphism \footnote{\textit{A Lie algebra homomorphism} $\beta:\mathfrak{g} \rightarrow \mathfrak{h}$ is a linear map of vector spaces such that $\beta([X,Y]_{\mathfrak{g}})=[\beta(X),\beta(Y)]_{\mathfrak{h}}$.
Keep in mind that one can easily absorb the constant $-i\hbar$ in (\ref{quantum cond.}) into the definition of $\mathcal{Q}$ so that the quantum condition (\ref{quantum cond.}) becomes the usual compatibility condition that a Lie algebra homomorphism satisfies.} \begin{equation} \mathcal{Q}:\big(C^{\infty}(X),\{ \cdot, \cdot \}\big)\longrightarrow \big(End(\mathcal{H}),[\cdot , \cdot ]\big) \end{equation} together with \textit{Dirac's quantum condition}: $ \forall $ $f,g\in C^{\infty}(X) $ we have \begin{equation} \label{quantum cond.} [\mathcal{Q}(f), \mathcal{Q}(g)]=-i\hbar \mathcal{Q}\big(\{f ,g \}\big) \end{equation} where $[\cdot, \cdot]$ denotes the usual commutator on $End(\mathcal{H})$. \vspace{5pt} \noindent A primary motivation of this part is to understand how to manifestly associate a suitable Hilbert space $\mathcal{H}$ to a given symplectic manifold $ (M, \omega )$ of dimension $2n$, together with its \textit{Poisson algebra} $ \big(C^{\infty}(M),\{ \cdot , \cdot \}\big) $, in accordance with a certain set of \textit{quantization} axioms given as follows: \vspace{-18pt} \begin{quote} \begin{definition} \label{defn of quantum system} (cf. \cite{Blau}, \cite{BDV}) Let $ (M, \omega )$ be the classical phase space and $\mathcal{A}$ a subalgebra of $ C^{\infty}(M) $. \textit{The quantum system} $ \big(\mathcal{H},\mathcal{Q}\big) $ \textit{associated to} $ \big(M,C^{\infty}(M)\big) $ consists of the following data: \begin{enumerate} \item A complex separable Hilbert space $ \mathcal{H} $ whose elements $\psi$ are called the \textit{quantum wave functions} and whose rays $ \{\lambda\psi : \lambda \in \mathbb{C}\} $ are \textit{the quantum states}.
\item For each $f \in \mathcal{A}$, $\mathcal{Q}(f) $ is a self-adjoint $\mathbb{C}$-linear map on $\mathcal{H}$ such that $\mathcal{Q}$ sends the function $f=1$ to the identity operator $id_{\mathcal{H}} \in End(\mathcal{H}).$ \item \textit{The quantum condition} (\ref{quantum cond.}) holds for $f,g \in \mathcal{A}$. \item \textit{The irreducibility condition}: If $\{f_1,...,f_n \}$ is a \textit{complete set} of observables in $\mathcal{A}$, i.e. a function $g\in \mathcal{A}$ commuting with all the $f_i$'s must be constant: \begin{equation} \{g,f_i\}=0 \ for \ all \ i \Leftrightarrow g=c \ for \ some \ c \in \mathbb{C}, \end{equation} then so is the set $\{\mathcal{Q}(f_1),...,\mathcal{Q}(f_n)\}$ of corresponding operators. \end{enumerate} \end{definition} \end{quote} \noindent\textbf{Geometric quantization} (GQ) is a formalism that encodes the construction of the assignment $ \big(\mathcal{H},\mathcal{Q}\big) $ in a well-established manner (cf. \cite{Blau}, \cite{BDV}, \cite{Woodhouse}, \cite{Brian}). In that respect, it enjoys the following properties: \begin{enumerate} \item GQ is available for any \textit{finite} dimensional symplectic manifold $ (M, \omega )$. \item If $ (M, \omega, \mathcal{G}, \mu )$ is a Hamiltonian $\mathcal{G}$-space with the gauge group $\mathcal{G}$ and the \textit{moment map} $\mu$ (cf. \cite{daS} ch.22), then GQ \textit{remembers} the symmetries of the classical system in the sense that the corresponding quantum states form an \textit{irreducible} representation of $\mathcal{G}$ (this is in fact the representation-theoretic interpretation \cite{Woit} of the so-called irreducibility condition stated above).
\end{enumerate} GQ is a two-step process: \textit{(i)} \textit{Pre-quantization}, and \textit{(ii)} \textit{the polarization.} \textit{The first step} involves the construction of so-called \textit{a prequantum line bundle} $\mathcal{L}$ on $ (M, \omega )$, the description of a \textit{pre-quantum Hilbert space} $\mathcal{H}_{pre}$ as the space $\Gamma(M,\mathcal{L})$ of smooth square-integrable sections of $\mathcal{L}$, and a \textit{(pre-)assignment} $\mathcal{Q}_{pre}$ as a certain differential operator acting on such sections of $\mathcal{L}$ (cf. Theorem \ref{existence of prequantum line bundle} and Definition \ref{defn of GQ assingment}). Note that even if the first step captures almost all necessary constructions related to the axioms in Definition \ref{defn of quantum system}, it satisfies all but one: \textit{the irreducibility condition}. This is where \textit{the second step} comes into play: in order to circumvent such a pathological assignment, which fails to satisfy the irreducibility condition, we need to restrict the space of smooth functions to be quantized to a certain subalgebra $\mathcal{A}$ for which \textit{the irreducibility condition} holds as well. This corresponds to a particular choice of a certain Lagrangian $n$-subbundle $\mathcal{P}$ of $TM$, called \textit{the polarization}, and hence leads to defining the quantum Hilbert space $\mathcal{H}$ as the space $\Gamma_{\mathcal{P}}(M,\mathcal{L})$ of sections of $\mathcal{L}$ which are \textit{covariantly constant} along $\mathcal{P} \subset TM$ (aka the space of $\mathcal{P}$-polarized sections of $\mathcal{L}$). That is, \begin{equation} \Gamma_{\mathcal{P}}(M,\mathcal{L})=\{s\in \Gamma(M,\mathcal{L}) : \nabla_{X} s=0, \ X\in \Gamma(M,\mathcal{P})\subset \Gamma(M,TM)\}.
\end{equation} \noindent\textit{\textbf{A motivational example.}} This example motivates the notion of polarization in a particular case without providing the formal definition of a polarization (for more details see \cite{Brian}, \cite{Blau}): Every K\"{a}hler manifold $ (M, \omega, J) $, where for all $ p\in M$, $J: \ p \mapsto J_p \in End(T_p M)$ is \textit{an integrable almost complex structure compatible with the symplectic structure} $\omega$, gives rise to \textit{a holomorphic K\"{a}hler polarization} associated to $(M, \omega)$ by setting $\mathcal{P}:= T^{(0,1)}(M)$, the $(-i)$-eigenspace subbundle of the complexified tangent bundle $TM \otimes \mathbb{C}$. Indeed, since the complex structure $J$ is diagonalizable, it defines the \textit{splitting} of the complexified tangent bundle $TM \otimes \mathbb{C}$ as follows: For each $p \in M$, \begin{equation} T_pM \otimes \mathbb{C}=T^{(0,1)}_p(M) \oplus T^{(1,0)}_p(M) \end{equation} where $ T^{(1,0)}_p(M) = \{v\in T_pM \otimes \mathbb{C} : Jv=iv\} $ and $ T^{(0,1)}_p(M) = \{v\in T_pM \otimes \mathbb{C} : Jv=-iv\} $, which are called \textit{$J$-holomorphic (anti-holomorphic resp.) tangent spaces of $M,$} are both \textit{Lagrangian} subspaces of $ T_pM \otimes \mathbb{C} $ such that $ T^{(0,1)}_p(M) \cap T^{(1,0)}_p(M) =\{0\} $. In local coordinates $(U,z_1, z_2, ... , z_{dim_{\mathbb{C}}M})$ with $z_k=x_k+iy_k$ for $k=1,...,dim_{\mathbb{C}}M$, on the other hand, one has \begin{equation} T^{(0,1)}_p(M)=span_{\mathbb{C}} \big\{\partial/\partial \bar{z}_k |_p\big\}_{k=1}^{dim_{\mathbb{C}}M} \ \ \ and \ \ \ \ T^{(1,0)}_p(M)=span_{\mathbb{C}} \big \{\partial/\partial z_k |_p \big\}_{k=1}^{dim_{\mathbb{C}}M}, \end{equation} where $ \partial/\partial \bar{z}_k=\frac{1}{2}(\partial/\partial x_k+i\partial/\partial y_k) $ and $ \partial/\partial z_k=\frac{1}{2}(\partial/\partial x_k-i\partial/\partial y_k) $.
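For instance, when $dim_{\mathbb{C}}M=1$, one can verify directly from the compatible complex structure $J(\partial/\partial x)=\partial/\partial y$, $J(\partial/\partial y)=-\partial/\partial x$ that $\partial/\partial \bar{z}$ is indeed a $(-i)$-eigenvector of $J$:

```latex
J\Big(\frac{\partial}{\partial \bar{z}}\Big)
 = \frac{1}{2}\Big(J\frac{\partial}{\partial x}+iJ\frac{\partial}{\partial y}\Big)
 = \frac{1}{2}\Big(\frac{\partial}{\partial y}-i\frac{\partial}{\partial x}\Big)
 = -\frac{i}{2}\Big(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\Big)
 = -i\,\frac{\partial}{\partial \bar{z}},
```

and similarly $J(\partial/\partial z)=i\,\partial/\partial z$, so $\partial/\partial \bar{z}_k$ and $\partial/\partial z_k$ span the $(-i)$- and $(+i)$-eigenspaces respectively.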
In accordance with the above language, therefore, the space $\Gamma_{\mathcal{P}}(M,\mathcal{L})$ of $\mathcal{P}$-polarized sections of $\mathcal{L}$ is defined as \begin{equation} \Gamma_{\mathcal{P}}(M,\mathcal{L})=\{s\in \Gamma(M,\mathcal{L}) : \nabla_{\partial/\partial \bar{z}_k} s=0\}. \end{equation} Adopting the usual \textit{summation convention}, we consider, for instance, the case where $M:=\mathbb{C}^n$ with the usual coordinates $\{z_k=x_k+iy_k\}_{k=1}^{n}$ and $\mathcal{L}$ the trivial complex bundle on $M$ together with the standard K\"{a}hler structure on $M$, described by \textit{the K\"{a}hler potential} $\phi$, \begin{equation} \omega=\frac{i}{2} \delta^{jk} dz_j\wedge d\bar{z}_k = \frac{i}{2}\partial\bar{\partial} \phi \ \ \ where \ \ \ \phi = \sum_k |z_k|^2, \end{equation} and the usual compatible complex structure: $ J(\partial/\partial x_k)=\partial/\partial y_k $ and $ J(\partial/\partial y_k)=-\partial/\partial x_k $. Since $\mathbb{C}^n$ is a \textit{flat} K\"{a}hler manifold, we also have $ \nabla_{\partial/\partial \bar{z}_k} =\partial/\partial \bar{z}_k $ and hence the space $ \Gamma_{\mathcal{P}}(M,\mathcal{L}) $ becomes \begin{equation} \Gamma_{\mathcal{P}}(M,\mathcal{L})=\{s\in C^{\infty}(\mathbb{C}^n) : \dfrac{\partial s}{\partial \bar{z}_k}=0\} \end{equation} which is exactly the space of \textit{holomorphic functions} on $M$. Describing a suitable subalgebra $\mathcal{A}$, on the other hand, is a different story \textit{per se}, and this task is beyond the scope of the current discussion. \vspace{5pt} The following section serves as an introductory material and consists of the underlying mathematical treatment for step-\textit{(i)}. Step-\textit{(ii)}, on the other hand, is beyond the scope of this note and will be discussed in detail elsewhere (cf. \cite{Blau}, \cite{Brian} or \cite{Woodhouse}).
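The identification of polarized sections with holomorphic functions can be spot-checked numerically: a central-difference approximation of $\partial/\partial\bar{z}$ annihilates a holomorphic function such as $z^2$ but not $\bar{z}$. This is purely an illustrative sketch, not part of the formalism:

```python
def dbar(f, z, h=1e-6):
    """Approximate (df/dz-bar)(z) = (df/dx + i df/dy)/2 by central differences."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # d/dx
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # d/dy
    return 0.5 * (fx + 1j * fy)
```

For $f(z)=z^2$ the result vanishes up to discretization error, while for $f(z)=\bar{z}$ it returns $1$, so only the former is a polarized section of the trivial bundle.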
\section{The Construction of Prequantization} We first investigate the quantization of observables in classical mechanics with the phase space $(\mathbb{R}^{2n},q_1,...,q_n,p_1,...,p_n)$ and the standard symplectic structure $\omega= \delta^{jk} dq_j\wedge dp_k$ as \textit{a prototype example encoding \textit{the wish list} for the quantum system indicated in Definition \ref{defn of quantum system}.} Recall that given a Hamiltonian function $H\in C^{\infty}(\mathbb{R}^{2n})$, its corresponding Hamiltonian vector field $X_H$ (defined implicitly via \ref{defn of Hamiltonian vector field}) is given locally by \begin{equation} X_H= \delta^{jk} \big(\dfrac{\partial H}{\partial p_j} \dfrac{\partial }{\partial q_k} - \dfrac{\partial H}{\partial q_j} \dfrac{\partial }{\partial p_k} \big). \end{equation} \noindent Therefore, for the coordinate functions $ q_j $ and $ p_i $ one has \begin{equation} X_{q_j}= -\dfrac{\partial }{\partial p_j} \ \ and \ \ X_{p_i}= \dfrac{\partial }{\partial q_i}, \end{equation} and hence the set $\mathcal{S}:= \{ q_1,...,q_n,p_1,...,p_n\} $ forms a \textit{complete set} (cf. Definition \ref{defn of quantum system}) due to the following relations: \begin{equation} \{q_i,p_j\}=-\delta_{ij} \ \ and \ \ \{q_i,q_j\}=0=\{p_i,p_j\}. \end{equation} \noindent Quantization, on the other hand, gives rise to similar relations, given as \begin{equation} \ [ \mathcal{Q}(p_i),\mathcal{Q}(q_j) ]= -i\hbar\delta_{ij} \ \ and \ \ [ \mathcal{Q}(p_i),\mathcal{Q}(p_j)]=0=[ \mathcal{Q}(q_i),\mathcal{Q}(q_j)] \end{equation} which define the so-called \textit{Heisenberg Lie algebra}. By Schur's lemma, for the complete set $ \mathcal{S}$, the irreducibility condition in Definition \ref{defn of quantum system} boils down to finding the irreducible representations of the Heisenberg algebra, and such representations (thanks to the Stone–von Neumann theorem, cf. \cite{Brian} ch. 14, \cite{Blau}) are given by the space $L^2(\mathbb{R}^{n})$ of square-integrable functions on $ \mathbb{R}^{n}$ with the action $ \mathcal{Q} $ of $ C^{\infty}(\mathbb{R}^{2n}) $ on $ L^2(\mathbb{R}^{n}) $ defined as follows: For each $\psi \in L^2(\mathbb{R}^{n}) $ and $\textbf{x}=(x_1,...,x_n) $, we define \begin{equation} \mathcal{Q}(q_k)(\psi(\textbf{x})):= x_k\psi(\textbf{x}) \ \ and \ \ \mathcal{Q}(p_k)(\psi(\textbf{x})):= -i\hbar\dfrac{\partial \psi}{\partial x_k}(\textbf{x}) \end{equation} which recover exactly the so-called \textit{Schrödinger picture} of quantum mechanics. \vspace{10pt} \begin{remark} Note that the underlying mathematical structures of the above example are manifestly discussed in the language of representation theory. The geometric approach, on the other hand, is rather na\"{\i}ve in the sense that the corresponding (pre)quantum line bundle $\mathcal{L}$, which will be elaborated below, is just \textit{the trivial complex bundle} with sections $\psi$ being complex-valued smooth functions and the pre-quantum Hilbert space $\mathcal{H}_{pre} $ being the space of smooth square-integrable sections of $\mathcal{L}$. Furthermore, $ \mathbb{R}^{2n}\backsimeq \mathbb{C}^n $ admits the natural K\"{a}hler structure and the polarization mentioned above. \end{remark} Now, we would like to introduce \textit{an appropriate construction generalizing the above prototype example as follows:} Let $(M,\omega)$ be a symplectic manifold of dimension $2n$. Since $\omega$ is a closed 2-form, it defines the de Rham class $ [\omega] \in H^2_{dR}(M) $; moreover, it follows from the Poincar\'{e} lemma that $ \omega $ is locally exact, that is, there is an open cover $\mathcal{U}= \{U_i\} $ of $M$ such that \begin{equation} \omega=dA_i \ on \ U_i \ \ where \ \ A_i \in \Omega^1(U_i).
\end{equation} \newpage \noindent If $ [\frac{\omega}{2\pi\hbar}] \in H^2_{dR}(M;\mathbb{Z})$, then one can construct \textit{a particular complex line bundle} $\mathcal{L}$ with a certain connection $\nabla$ as follows (cf. \cite{Brian} ch. 22-23 or \cite{Blau}): \begin{enumerate} \item Take the cover $\mathcal{U}$ as a local trivializing cover for $\mathcal{L}$ so that $\mathcal{L}|_{U_i}$ is trivial and $ \omega $ is locally exact on each $U_i$; say $\omega=dA_i \ on \ U_i \ \ where \ A_i \in \Omega^1(U_i).$ We define a connection $\nabla$ on each $U_k$ as \begin{equation} \nabla:= d-\frac{i}{\hbar}A_k. \end{equation} \item \textit{The gauge transformations} for such a cover $\mathcal{U}$ (and hence the transition maps) are defined by making use of the Poincar\'{e} lemma on the overlap $U_j \cap U_k$ as follows: Consider two local trivializing sections $s_k: U_k \rightarrow \mathcal{L}$ and $s_j: U_j \rightarrow \mathcal{L}$ of $\mathcal{L}$. Then, on the overlap $U_j \cap U_k$, we have $ dA_j=\omega=dA_k $; i.e., \begin{equation} d(A_k-A_j)=0 \ on \ U_j \cap U_k, \end{equation} so that $(A_k-A_j) $ is a closed 1-form on $ U_j \cap U_k $, and hence, by the Poincar\'{e} lemma, $ A_k-A_j $ is also locally exact. That is, \begin{equation} A_k-A_j= df_{kj} \ for \ some \ f_{kj} \in C^{\infty}(U_j \cap U_k), \end{equation} which induces the desired gauge transformations (with the symmetry group $S^1$) \begin{equation} g: U_j \cap U_k \longrightarrow S^1 \end{equation} where $ g(x):= e^{-\frac{i}{\hbar}f_{kj}(x)} $ for all $x\in U_j \cap U_k$ and for all $k,j$ (by which one can define a \textit{gluing algorithm} for any two given patches along the overlap). \item \textit{The corresponding curvature 2-form} $F_A=dA + A\wedge A$ with this \textit{abelian} gauge group can be expressed locally as follows: On each $U_k$, one has \begin{equation} F_{A_k}=-\frac{i}{\hbar}dA_k=-\frac{i}{\hbar}\omega \end{equation} which leads to the following theorem.
\end{enumerate} \begin{theorem} \label{existence of prequantum line bundle}If $\omega$ is a closed 2-form on $M$ such that $ [\frac{\omega}{2\pi\hbar}] \in H^2_{dR}(M;\mathbb{Z})$, then there exists a complex line bundle $\mathcal{L}$, called \textit{a prequantum line bundle}, with a connection $\nabla$ as constructed above. \end{theorem} \noindent Theorem \ref{existence of prequantum line bundle} is at the heart of the geometric quantization formalism and it gives rise to the following definition, formalized in a rather succinct and na\"{\i}ve way (for a complete treatment see \cite{Brian} ch. 23 or \cite{Blau}): \begin{definition} \label{defn of GQ assingment} Let $(M,\omega)$ be a symplectic manifold of dimension $2n$ such that $ [\frac{\omega}{2\pi\hbar}] \in H^2_{dR}(M;\mathbb{Z})$, and $\mathcal{L}$ an associated prequantum line bundle with $\nabla$ as in Theorem \ref{existence of prequantum line bundle}. \begin{enumerate} \item We set $\mathcal{H}_{pre}:=\Gamma(M,\mathcal{L})$, the space of (equivalence classes of) smooth square-integrable sections (with respect to the Liouville measure on $M$) of $\mathcal{L}$, with a suitable (hermitian) inner product. \item The GQ assignment $ \mathcal{Q}_{pre}: \big(C^{\infty}(M),\{ \cdot, \cdot \}\big)\longrightarrow \big(End(\mathcal{H}_{pre}),[\cdot, \cdot ]\big)$ is defined by \begin{equation} \label{Qpre} \mathcal{Q}_{pre}(f):= -i\hbar \nabla_{X_f} - f, \end{equation} which provides the required operator \footnote{If we set $ \mathcal{Q}_{pre}(f):= \nabla_{X_f}-\frac{i}{\hbar} f $, one would have $ \mathcal{Q}_{pre}\big(\{f ,g \}\big)=[\mathcal{Q}_{pre}(f),\mathcal{Q}_{pre}(g)]$. However, we shall always consider the compatibility condition in the form of \ref{quantum cond.} in order to capture the physical relevance of the subject and make the interpretation more transparent.} satisfying all axioms except the irreducibility condition in Definition \ref{defn of quantum system}.
\end{enumerate} \end{definition} \begin{remark} One can easily verify that the GQ assignment $ \mathcal{Q}_{pre}$ satisfies \textit{the quantum condition} \ref{quantum cond.} by direct computation together with the definition of $F_A$ as follows: Recall that for all vector fields $X,Y \in \Gamma(M,TM)$, we have \begin{equation} \label{defn of curvature} F_A (X,Y)=[\nabla_X ,\nabla_Y]-\nabla_{[X,Y]}, \end{equation} \noindent and it follows from the construction (cf. Theorem \ref{existence of prequantum line bundle}) that we also have \begin{equation} \label{defn of curvature as symplectic form} F_{A} (X,Y) =-\frac{i}{\hbar}\omega (X,Y). \end{equation} Let $f,g \in C^{\infty}(M)$ and $s\in \mathcal{H}_{pre}$, then one has \begin{align} &(1) \ \ [f,i\hbar\nabla_{X_g}]s = -i\hbar(X_gf)s,\label{observation 1} \\ &(2) \ \ X_fg=\{f,g\}=-\{g,f\}=-X_gf, \label{observation 2} \\ &(3) \ \ X_{\{f,g \}} = [X_f,X_g] \ \ \ (from \ Cartan's \ formula \ and \ \ref{defn of Hamiltonian vector field}). \label{Cartans formula} \end{align} Therefore, from the definition \ref{Qpre} of $ \mathcal{Q}_{pre}(f) $ and $ \mathcal{Q}_{pre}(g) $, we obtain \begin{align} [\mathcal{Q}_{pre}(f),\mathcal{Q}_{pre}(g)]s &= [-i\hbar\nabla_{X_f}-f, -i\hbar\nabla_{X_g}-g]s \nonumber \\ &= [-i\hbar\nabla_{X_f},-i\hbar\nabla_{X_g}]s + [-f,-i\hbar\nabla_{X_g}]s + [-i\hbar\nabla_{X_f}, -g]s+[f,g]s \nonumber \\ &= -\hbar^2[\nabla_{X_f},\nabla_{X_g}]s + i\hbar \big(X_fg-X_gf\big)s \ \ \ \ \ \ \ \ \ \ \ \ \ (by \ \ref{observation 1}) \nonumber\\ &= -\hbar^2 \big(F_A (X_f,X_g)+\nabla_{[X_f,X_g]}\big)s + 2i\hbar \{f,g\}s \ \ \ \ ( by \ \ref{defn of curvature} \ and \ \ref{observation 2}) \nonumber \\ &= i\hbar \omega(X_f,X_g)s-\hbar^2\nabla_{[X_f,X_g]}s+ 2i\hbar \{f,g\}s \ \ \ \ \ \ \ (by \ \ref{defn of curvature as symplectic form} ) \nonumber\\ &= -\hbar^2\nabla_{[X_f,X_g]}s + i\hbar \{f,g\}s \ \ \ \ \ \ \ \ (by \ \ref{defn of poisson bracket}) \nonumber \\ & = -\hbar^2\nabla_{X_{\{f,g \}}}s + i\hbar \{f,g\}s \ \ \ \ \ \ \ \ \ (by \ \ref{Cartans formula}) \nonumber \\ &= -i\hbar \big(-i\hbar \nabla_{X_{\{f,g \}}} - \{f,g \}\big)s \ \ \ \ (by \ \ref{Qpre}) \nonumber \\ &= -i\hbar \mathcal{Q}_{pre}\big(\{f ,g \}\big)s \end{align} which yields the desired quantum condition \ref{quantum cond.}. \end{remark} \section{A Review on Chern-Simons Theory} To motivate how the above formalism naturally emerges in the context of a particular quantum field theory and to enjoy the richness of this language, we shall study the \textit{quantization} of the $SU(2)$ Chern-Simons gauge theory (\cite{Wit3}) on a closed, orientable 3-manifold $ X $ (we may consider, in particular, an integral homology 3-sphere for some technical reasons \cite{Ruberman}) as a non-trivial prototype example for a \textit{3-TQFT} formalism in the sense of Atiyah \cite{Atiyah} (for a complete mathematical treatment of the subject, see \cite{Mnev}, \cite{Honda}). \vspace{5pt} \noindent The main ingredients of this structure are encoded by the theory of principal $ G $-bundles in the following sense: Let $P\rightarrow X$ be a principal $SU(2)$-bundle on $X$, $\sigma \in \Gamma(U,P)$ a local trivializing section given schematically as \begin{equation} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em] { P & P \\ \ & X \\}; \path[-stealth] (m-1-1) edge node [above] {$ \bullet SU(2) $} (m-1-2) (m-1-2) edge node [right] {$\pi$} (m-2-2) (m-2-2) edge [bend left=40] node [left] {$\sigma$} (m-1-2); \end{tikzpicture} \end{equation} \noindent Note that when $ G=SU(2) $, $P$ is a trivial principal bundle over $X$ (since $SU(2)\cong S^3$ is 2-connected), i.e. $P\cong X \times SU(2)$ compatibly with the bundle structure, and hence there exists a globally defined section $ \sigma \in \Gamma(X,P)$. Assume $\omega$ is a Lie algebra-valued connection one-form on $P$. Let $ A:= \sigma^* \omega $ be its representative, i.e. the Lie algebra-valued connection 1-form on $X$, called \textit{the Yang-Mills field}.
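Since the fields take values in $\mathfrak{g}=su(2)$, it may help to have a concrete matrix model of this Lie algebra at hand; a minimal numerical sketch (the Pauli-matrix basis and its normalization are conventional choices, not fixed by the text):

```python
import numpy as np

# A standard basis of su(2): T_a = -(i/2) * sigma_a, with sigma_a the Pauli
# matrices. (The normalization is a common convention, not fixed by the text.)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [-0.5j * s for s in (s1, s2, s3)]

comm = lambda a, b: a @ b - b @ a

for t in T:
    # su(2) elements are anti-hermitian and traceless ...
    assert np.allclose(t.conj().T, -t)
    assert abs(np.trace(t)) < 1e-12

# ... and the basis is closed under the bracket: [T_a, T_b] = eps_{abc} T_c.
assert np.allclose(comm(T[0], T[1]), T[2])
assert np.allclose(comm(T[1], T[2]), T[0])
assert np.allclose(comm(T[2], T[0]), T[1])
```

A $\mathfrak{g}$-valued 1-form $A$ on $X$ then has components that are pointwise linear combinations of these matrices.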
Then the theory consists of the space of \textit{fields}, which is defined to be the infinite-dimensional space $\mathcal{A}$ of all $ SU(2) $-connections on a principal $ SU(2)$-bundle over $X$, i.e. $ \mathcal{A}:=\Omega^1 (X) \otimes \mathfrak{g} $, and the Chern-Simons action functional $ CS: \mathcal{A} \longrightarrow S^1$ given by \begin{equation} CS(A):=\frac{k}{4\pi} \displaystyle \int \limits_{X} \mathrm{Tr}(A\wedge \mathrm{d} A +\frac{2}{3} A \wedge A \wedge A), ~~~~ k\in \mathbb{Z}, \end{equation} together with the gauge group $\mathcal{G}=Map(X,SU(2))$ acting on the space $\mathcal{A}$ as follows: For all $g\in \mathcal{G}$ and $A \in \mathcal{A}$, we set \begin{equation} g\bullet A := g^{-1}\cdot A \cdot g + g^{-1}\cdot \mathrm{d} g. \end{equation} The corresponding Euler-Lagrange equation in this case turns out to be \begin{equation} F_{A}=0, \end{equation}where $F_{A}=\mathrm{d} A+A \wedge A$ is the $ \mathfrak{g}$-valued curvature two-form on $X$ associated to $A\in\Omega^1 (X) \otimes \mathfrak{g}.$ Furthermore, under the gauge transformation, the curvature 2-form $F_A$ behaves as follows: \begin{equation} F_A \longmapsto g \bullet F_A := g^{-1} \cdot F_A \cdot g \ \ for \ all \ g \in \mathcal{G}. \end{equation} Now, in order to study the quantization of Chern-Simons theory, we need to adopt the language of \textit{path integral formalism}, which will be discussed below (cf. Section \ref{path integral formalism}).
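The covariant transformation law $F_A \longmapsto g^{-1}\cdot F_A \cdot g$ can be verified symbolically on a two-dimensional coordinate patch; a sketch with an illustrative (assumed) gauge transformation $g$ and $su(2)$-valued connection components, where the component form of $F_A=\mathrm{d}A + A\wedge A$ is $F_{xy}=\partial_x A_y - \partial_y A_x + [A_x, A_y]$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Illustrative gauge transformation g(x, y) (a rotation by angle x*y, which
# lies in SU(2)) and illustrative su(2)-valued connection components.
th = x * y
g = sp.Matrix([[sp.cos(th), sp.sin(th)], [-sp.sin(th), sp.cos(th)]])
gi = g.T  # for a real rotation matrix, the inverse is the transpose
Ax = sp.Matrix([[sp.I * x, y], [-y, -sp.I * x]])
Ay = sp.Matrix([[0, x + sp.I * y], [-(x - sp.I * y), 0]])

def curvature(Ax, Ay):
    # Local component of F_A = dA + A ^ A:
    #   F_xy = d_x A_y - d_y A_x + [A_x, A_y]
    return sp.diff(Ay, x) - sp.diff(Ax, y) + Ax * Ay - Ay * Ax

def gauge(g, gi, Ax, Ay):
    # A |-> g^{-1} A g + g^{-1} dg, componentwise in x and y
    return (gi * Ax * g + gi * sp.diff(g, x),
            gi * Ay * g + gi * sp.diff(g, y))

Axg, Ayg = gauge(g, gi, Ax, Ay)
F, Fg = curvature(Ax, Ay), curvature(Axg, Ayg)

# Check F_{g.A} = g^{-1} F_A g numerically at a sample point.
delta = (Fg - gi * F * g).subs({x: sp.Rational(7, 10),
                                y: sp.Rational(-3, 10)}).evalf()
assert all(abs(v) < 1e-9 for v in delta)
```

The identity holds for any invertible $g$; the SU(2)-valued choice above simply stays within the gauge group of the text.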
The essence of this approach is as follows: In accordance with the axioms of \textit{sigma model} (or those of \textit{TQFT} as in \cite{Atiyah}, \cite{Honda}, \cite{Mnev}), which will be elaborated succinctly below, we shall consider a decomposition of a closed, orientable 3-manifold $X$ along a Riemann surface $\Sigma$ (see Figure \ref{fig:decomposititon}) \begin{equation} X= (X_{+}\amalg X_{-})/ \Sigma, \end{equation} where $X_{\pm}$ are compact oriented smooth 3-manifolds with boundaries $ \partial X_{+} = \Sigma = -\partial X_{-}$, such that $X$ can be obtained by gluing $ X_{+} $ and $ X_{-} $ along their boundaries. \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{Untitled-2} \caption{Decomposition of $X$ along a Riemann surface $\Sigma$.} \label{fig:decomposititon} \end{figure} \noindent Then, we would like to study the so-called \textit{partition function $ \mathcal{Z}_X $ assigned to $X$}, which essentially captures the probabilistic nature of the quantum Chern-Simons theory and can be expressed implicitly as a certain pairing (which, roughly speaking, encodes \textit{the gluing axiom} of QFTs \cite{Honda}) \begin{equation} \label{pairing} \mathcal{Z}_X= \langle \mathcal{Z}_{X_{+}}, \mathcal{Z}_{X_{-}}\rangle \in \mathbb{C}, \end{equation} where $\mathcal{Z}_{\Sigma} $ is \textit{the associated vector space} together with the natural pairing $\langle-,-\rangle $ on $\mathcal{Z}_{\Sigma} $ such that $ \mathcal{Z}_{X_{+}} \in \mathcal{Z}_{\Sigma}$ and $ \mathcal{Z}_{X_{-}}\in \mathcal{Z}_{\Sigma}^* $. Here $ \mathcal{Z}_{X_{+}} $ and $ \mathcal{Z}_{X_{-}} $ can be considered as ``reduced'' partition functions associated to each piece $ X_{+} $ and $ X_{-} $ respectively. Informally speaking, $ \mathcal{Z}_X $ is in fact determined by data on the boundary via the pairing above with the objects $ \mathcal{Z}_{X_{+}} $ and $ \mathcal{Z}_{X_{-}}. $ \vspace{10pt} The following sections will be devoted to \textit{unpacking} the construction of the pairing (\ref{pairing}) and to investigating its relation with low dimensional topology. In order to better understand the underlying mathematical structure encoding the objects like $ \mathcal{Z}_{\Sigma} $ and $\mathcal{Z}_{X_{\pm}}$, we shall briefly discuss the notion of \textit{topological field theory}. \newpage \section{TQFT and Category Theory} Before discussing the notion of \textit{topological field theory} in the language of \textit{category theory}, we first recall how to define a na\"{\i}ve version of TQFT (\cite{Mnev}) in the sense of Atiyah \cite{Atiyah}: \vspace{-15pt} \begin{quote} \begin{definition}\label{defn of n-tqft} \textit{An n-TQFT} $\mathcal{Z}$ consists of the following data: \begin{itemize} \item For each closed orientable ($n-1$)-manifold $\Sigma$, a vector space $\mathcal{Z}_{\Sigma}$ over $\mathbb{C}$ which is called \textit{the space of states}. Furthermore, if $-\Sigma$ denotes the orientation-reversed version of $\Sigma$, then one has \begin{equation} \mathcal{Z}_{-\Sigma} \cong \mathcal{Z}_{\Sigma}^* \end{equation} where $ \mathcal{Z}_{\Sigma}^* $ denotes the linear \textit{dual} of the vector space $ \mathcal{Z}_{\Sigma} .$ \item For each compact orientable $n$-manifold $M$ with boundary \begin{equation} \partial M= -\Sigma_{in} \amalg \Sigma_{out} \end{equation} where $M$ is in fact called \textit{an $n$-cobordism from $ \Sigma_{in} $ to $ \Sigma_{out} $}, $\mathcal{Z}$ associates a $\mathbb{C}$-linear map of vector spaces \begin{equation} \mathcal{Z}_{M}: \mathcal{Z}_{\Sigma_{in}}\longrightarrow \mathcal{Z}_{\Sigma_{out}}, \end{equation} which is called \textit{the partition function}. \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{Untitled-3} \caption{An $n$-cobordism from $ \Sigma_{in} $ to $ \Sigma_{out} $.} \label{fig:n-cobordism} \end{figure} \item If $\Sigma \xrightarrow{f} \Sigma'$ is a diffeomorphism of two closed orientable ($n-1$)-manifolds, then the associated vector spaces are isomorphic: $\mathcal{Z}_{\Sigma} \cong \mathcal{Z}_{\Sigma'}$. If $f$ is orientation-preserving (resp. reversing), then the associated map is $\mathbb{C}$-linear (resp. anti-linear). \end{itemize} together with certain \textit{multiplicativity, gluing/composition, normalization} ($\mathcal{Z}_{\emptyset} := \mathbb{C}$) and \textit{compatibility conditions (under diffeomorphisms)} (for a complete definition, see \cite{Mnev}). \end{definition} \end{quote} \begin{remark} The axioms of $ n $-TQFT above in fact encode those of the sigma model in a quantum field theory (cf. \cite{Honda}). With these axioms in hand, note that any closed oriented $n$-manifold $X$ can be realized as a cobordism between $(n-1)$-dimensional empty sets (i.e. a theory from \textit{vacuum} to \textit{vacuum}) \begin{equation} \mathcal{Z}_{X}: \mathbb{C}\longrightarrow \mathbb{C} \end{equation} which is defined as multiplication by some complex number (recall $ \mathcal{Z}_{\emptyset} := \mathbb{C}$). \end{remark} \vspace{5pt} \noindent\textbf{A digression on main ingredients of category theory.} In this section we would like to introduce a number of notions, such as \textit{category, functor between categories} etc., in a rather intuitive manner, in the sense that all definitions appearing below are given in a relatively na\"{\i}ve form (for instance, to better articulate the essence of an item without indicating further technical details, we cross our fingers and use repeatedly the phrase \textit{``with certain compatibility conditions''} encompassing certain natural commutative diagrams encoding, for instance, associativity or the behavior under compositions etc.). For a complete mathematical treatment of the subject, see \cite{Vakil} and \cite{Stacks}.
\newpage \begin{quote} \begin{definition} \textit{A category} $\mathcal{C}$ consists of the following data: \begin{itemize} \item A \textit{collection} of objects $ Obj (\mathcal{C})$. \item For each pair of objects $ A,B \in \mathcal{C} $, there is a \textit{set} $ Mor_{\mathcal{C}}(A,B) $ of morphisms between $ A $ and $ B $. \textit{A morphism} $ f\in Mor_{\mathcal{C}}(A,B)$ is denoted by $A\xrightarrow{f}B.$ In particular, for each object $ A \in \mathcal{C} $ there is an identity morphism $ id_{A}: A\rightarrow A $ in $ Mor_{\mathcal{C}}(A,A). $ \item For each triple of objects $ A,B,C $ , there is a composition map \begin{equation} Mor_{\mathcal{C}}(A,B) \times Mor_{\mathcal{C}}(B,C) \rightarrow Mor_{\mathcal{C}}(A,C), \end{equation} \end{itemize} together with certain \textit{compatibility conditions}. \end{definition} \end{quote} \noindent Some na\"{\i}ve examples of categories are in order: $ \textit{Top} $ denotes the category of topological spaces with \textit{objects} being topological spaces and \textit{morphisms} being continuous maps between topological spaces. $ \textit{Vect}_{\mathbb{C}} $ denotes the category of vector spaces over $\mathbb{C}$ where objects are vector spaces over $\mathbb{C}$ and morphisms are $\mathbb{C}$-linear maps between such vector spaces. \vspace{-15pt} \begin{quote} \begin{definition} \textit{A (covariant) functor} $\mathcal{F}$: $\mathcal{C} \longrightarrow \mathcal{D}$ between two categories $\mathcal{C} ,\mathcal{D}$ consists of the following data: \begin{itemize} \item For objects we have a map $\mathcal{F}: Obj (\mathcal{C})\longrightarrow Obj (\mathcal{D})$ sending an object $A$ of $\mathcal{C}$ to the object $\mathcal{F}(A)$ of $\mathcal{D}$. \item On morphisms, we have a map $ Mor_{\mathcal{C}}(A,B) \longrightarrow Mor_{\mathcal{D}}(\mathcal{F}(A),\mathcal{F}(B))$. 
\end{itemize} together with certain \textit{compatibility conditions} for compositions and identity morphisms, and the existence of the identity functor $ \textit{id}:\mathcal{C} \rightarrow \mathcal{C} $ for each category $\mathcal{C}.$ \end{definition} \end{quote} With the above category-theoretic language in hand, we can restate Definition \ref{defn of n-tqft} as follows (cf. \cite{Mnev}): \vspace{-15pt} \begin{quote} \begin{definition} An $ n $-TQFT is a functor $\mathcal{Z}: (Cob_{n},\amalg) \longrightarrow (\textit{Vect}_{\mathbb{C}},\otimes) $ of symmetric, monoidal categories where $ (Cob_{n},\amalg) $ denotes the category of \textit{n-cobordisms} with objects being closed orientable ($n-1$)-manifolds and morphisms being $n$-cobordisms. \end{definition} \end{quote} \noindent \textit{The end of the digression.} \vspace{15pt} Now, we shall briefly explain where \textit{geometric quantization} comes into play so as to construct the vector space $\mathcal{Z}_{\Sigma}$. To find a suitable symplectic manifold to be \textit{quantized}, we need to analyze the critical locus of the Chern-Simons action $CS$ (when restricted to the boundary $\Sigma$). Indeed, we can locally decompose $X$ as $\Sigma \times \mathbb{R}$ where $\Sigma$ is a closed orientable Riemann surface and $\mathbb{R}$ the time direction. Fixing the gauge condition $A_0=0$, we have the action functional $ CS_{\Sigma}: \mathcal{A}_{\Sigma} \longrightarrow S^1$ of the form \begin{equation} CS_{\Sigma}(A):=\frac{k}{8\pi} \displaystyle \int \mathrm{d}t\int \limits_{\Sigma} \epsilon^{i j} \mathrm{Tr}(A_i \frac{\mathrm{d}}{\mathrm{d}t} A_j), ~~~~ k\in \mathbb{Z}, \end{equation} such that the corresponding field equation is also given by \begin{equation} \epsilon^{i j}F^A_{ij}=0, \end{equation} which implies that the connections $A$ on $\Sigma$ that solve the Euler-Lagrange equation are flat. As stressed in \cite{daS} (ch. 25), it follows from highly non-trivial theorems of Atiyah and Bott in \cite{AB} that \begin{enumerate} \item The space $ \big(\mathcal{A}_{\Sigma}, \omega_{\Sigma}, \mathcal{G}\big) $ is an \textit{infinite-dimensional symplectic manifold} together with a certain choice of $SU(2)$-invariant bilinear form $\langle \cdot, \cdot \rangle $ on its Lie algebra, by which one can define $\omega_{\Sigma}$ manifestly (see \cite{daS} ch. 25 for the concrete definition of $ \omega_{\Sigma} $), \item Furthermore, the space $ \big(\mathcal{A}_{\Sigma}, \omega_{\Sigma}, \mathcal{G}, \mu\big) $ is a Hamiltonian $\mathcal{G}$-space with the gauge group $\mathcal{G}$ and the \textit{moment map} $\mu$ (cf. \cite{daS} ch. 25) defined as \textit{the curvature map}, namely \begin{equation} \mu: \mathcal{A}_{\Sigma} \longrightarrow LieAlg(\mathcal{G})^*, \ \ A \longmapsto F_A. \end{equation} \item By using \textit{the symplectic reduction theorem} (a.k.a. the Marsden-Weinstein-Meyer theorem; see \cite{daS} ch. 23 for the statement and proof) and results in \cite{AB}, the reduced space \begin{equation} \mathcal{M}_{\Sigma} :=\mu^{-1}(0)/\mathcal{G}, \end{equation} which is \textit{the moduli space of flat connections over $\Sigma$ modulo gauge transformations}, turns out to be a compact, finite-dimensional symplectic manifold. Note that the space $ \mathcal{M}_{\Sigma} $ is generically a finite-dimensional symplectic \textit{orbifold} due to the non-freeness of the action of $\mathcal{G}$ on $\mathcal{A}_{\Sigma}$, but in the case where $X$ is a homology 3-sphere and $G=SU(2)$ one can circumvent the pathological quotient by restricting $\mathcal{A}_{\Sigma}$ to a certain dense open subset $\mathcal{A}^* \subset \mathcal{A}_{\Sigma}$ consisting of connections on which $\mathcal{G}$ acts freely (for details see \cite{Ruberman}).
\end{enumerate} With the above observations in hand, $ \mathcal{M}_{\Sigma} $ serves as the required symplectic manifold to be assigned to $\Sigma$ so that one can construct $\mathcal{Z}_{\Sigma}$ by means of the geometric quantization formalism. At the end of the day, therefore, $\mathcal{Z}_{\Sigma}$ becomes the space of holomorphic sections of a certain complex line bundle (for a detailed discussion see \cite{Wit3}). By using the dimensionality of $\mathcal{Z}_{\Sigma}$, on the other hand, one can derive certain relations in terms of \textit{the partition function} $\mathcal{Z}$ (cf. Equation \ref{skeinlike}); if we introduce an oriented knot in $X$, the derived relations turn out to be \textit{the skein relations for the Jones polynomial} in some parameter (see Equation \ref{eq:Jonesskeinlike}). In order to elaborate on the last argument, we need to introduce a number of notions that naturally emerge in the so-called \textit{path integral formalism of a quantum field theory} (thought of as a quantum counterpart of the Lagrangian formalism encoding a classical field theory). For an elementary and readable introduction to knot theory, see \cite{Prasolov}. \section{The Path Integral Formalism} \label{path integral formalism} We first recall how to define a na\"{\i}ve and algebro-geometric version of a quantum field theory (\cite{Gw}, \cite{Mnev}) in the path integral formalism: \vspace{-15pt} \begin{quote} \begin{definition} \textit{A quantum field theory} on a manifold $X$ consists of the following data: \begin{itemize} \item [\textit{(i)}] the space $ \mathbb{F}_{X}$ of \textit{fields} of the theory, defined to be the space $\Gamma(X,\mathcal{F})$ of sections of a particular \textit{sheaf} $ \mathcal{F} $ on $X$, \item [\textit{(ii)}] the action functional $\mathcal{S}: \mathbb{F}_{X}\longrightarrow \mathbb{C}$ \ that captures the behavior of the system under consideration, \item [\textit{(iii)}] an observable $ \varTheta $ defined as a function on $ \mathbb{F}_{X}$: \begin{equation} \varTheta:\mathbb{F}_{X} \longrightarrow \mathbb{C}, \end{equation} \item [\textit{(iv)}] together with its \textit{expectation value} $\langle \varTheta \rangle $ defined by \begin{equation} \langle \varTheta \rangle := \frac{1}{\mathcal{Z}_X} \displaystyle \int\limits_{\phi \in \mathbb{F}_{X}}\varTheta (\phi)e^{iS(\phi)/\hbar} \mathrm{d} \phi, \end{equation} where $ e^{iS(\phi)/\hbar} d \phi $ is a \textit{putative} measure on $ \mathbb{F}_{X} $ and \textit{the partition function} \begin{equation} \mathcal{Z}_X :=\displaystyle \int\limits_{\phi \in \mathbb{F}_{X}}e^{iS(\phi)/\hbar} \mathrm{d} \phi. \end{equation} \end{itemize} \end{definition} \end{quote} \vspace{5pt} Now we employ the above formalism for the Chern-Simons theory described at the beginning. We shall study the \textit{quantization} of the $SU(2)$ Chern-Simons gauge theory (\cite{Wit3}) on a closed, orientable 3-manifold $ X $ (in particular, we will take $ X= S^3 $ in a moment to make the connection to knot theory more transparent). As before, let $P\rightarrow X$ be a principal $SU(2)$-bundle on $X$, and $A\in \mathcal{A}:=\Omega^1 (X) \otimes \mathfrak{g} $ the Lie algebra-valued connection 1-form on $X$; then we have \begin{itemize} \item The partition function \begin{equation} \mathcal{Z}_X :=\displaystyle \int\limits_{A \in \mathcal{A}_{X}}e^{iCS(A)/\hbar} \mathrm{d} A \end{equation} is a 3-manifold invariant, where the integration is a Feynman path integral over all $SU(2)$-connections modulo gauge transformations. Such an invariant can be made tractable via a surgery presentation of the given $X$ (see \cite{Wit3}).
\item More generally, by introducing a functional $ \varTheta_C (A) $ associated to a connection $A$ on $X$, one can construct an invariant for the \textit{data} $ C $ defining $ \varTheta_C (A) $ as follows: \\ \begin{equation}\label{correlation function} \mathcal{Z}_{X, \varTheta_C}:= \displaystyle \int\limits_{A \in \mathcal{A}}\varTheta_C (A)e^{iCS(A)/\hbar} \mathrm{d} A \end{equation} \end{itemize} To derive a knot invariant, we take $X= S^3$ and $C$ a knot in $X$, together with the structure group $G=SU(2)$, and set \begin{equation} \varTheta_C (A) := \mathrm{Tr_R} Hol_{A} (C)=\mathrm{Tr}_{R} \mathcal{P}e^{i \oint \limits_{C} A} \end{equation} where $\mathcal{P}$ denotes \textit{the path ordering} and $Hol_{A} (C) = \{ P_{C} \in GL(P_{x}): P_C $ is the parallel transport along $C$ defined by $A$\}, the \textit{holonomy group} of $A$ along $C$, and $R$ is a certain irreducible representation of $G$ attached to $C$, which is called a \textit{labeling} of the given knot. When we have a \textit{link} $L=\bigcup C_i$, each component $C_i$ is decorated by an irreducible representation $R_i$ of $G$ accordingly, and we set \begin{equation} \varTheta_L(A) :=\prod_i \varTheta_{C_i} (A), \ \ where \ \ \varTheta_{C_i}(A) :=\mathrm{Tr_{R_i}} Hol_{A} (C_i). \end{equation} Here $ \varTheta_C (A) $ is called \textit{the Wilson line operator} in the physics literature. In that case, $ \mathcal{Z}_{X, \varTheta_C} $ leads to an invariant of $C.$ When we consider a decomposition (Figure \ref{fig:decomposititon}) of $X$ along a Riemann surface $\Sigma$ (see \cite{Mnev}, \cite{Honda} or \cite{Gw} for details) \begin{equation} X= (X_{+}\amalg X_{-})/ \Sigma, \end{equation} where $X_{\pm}$ are compact oriented smooth 3-manifolds with boundaries $ \partial X_{+} = \Sigma = -\partial X_{-}$, such that $X$ can be obtained by gluing $ X_{+} $ and $ X_{-} $ along their boundaries.
Then, in accordance with the axioms of TQFT, we have \vspace{5pt} \begin{align} &\mathcal{Z}_{X_+} :=\displaystyle \int\limits_{A \in \mathcal{A}_{X_+}}e^{iCS(A)/\hbar} \mathrm{d} A \ \in \ \mathcal{Z}_{\Sigma}, \\ &\mathcal{Z}_{X_-} :=\displaystyle \int\limits_{A \in \mathcal{A}_{X_-}}e^{iCS(A)/\hbar} \mathrm{d} A \ \in \ \mathcal{Z}_{-\Sigma}\cong \mathcal{Z}_{\Sigma}^* \end{align} \vspace{3pt} \noindent such that \begin{equation} \mathcal{Z}_X= \langle \mathcal{Z}_{X_{+}}, \mathcal{Z}_{X_{-}}\rangle \in \mathbb{C}, \end{equation} where $\mathcal{Z}_{\Sigma} $ is the vector space associated to $\Sigma$ via \textit{geometric quantization} together with the natural pairing $\langle-,-\rangle $ on $\mathcal{Z}_{\Sigma} $ such that $ \mathcal{Z}_{X_{+}} \in \mathcal{Z}_{\Sigma}$ and $ \mathcal{Z}_{X_{-}}\in \mathcal{Z}_{\Sigma}^* $. \vspace{10pt} Note that the pairing above can be studied more explicitly when we consider \textit{the sigma model}, i.e. a quantum field theory on $X$ with the space of fields being the space $ C_X := Maps(X,N)$ of smooth maps from $X$ to $N$ for some fixed target manifold $ N $, and re-interpret \textit{the gluing axiom} of the sigma model with the help of the usual Fubini theorem and properties of Feynman path integrals as follows (see \cite{Honda} for the complete treatment): Let $X, X_{\pm}$ and $\Sigma$ be as above. Then we have \begin{align} \mathcal{Z}_X &= \displaystyle \int\limits_{\phi \in C_X}e^{iS_X(\phi)/\hbar} \mathrm{d} \phi \nonumber \\ &= \displaystyle \int\limits_{\alpha \in C_{\Sigma}} \bigg(\displaystyle \int_{\phi_+ \in C_{X_+}(\alpha)}e^{iS_{X_+}(\phi_+)/\hbar} \mathrm{d} \phi_+ \cdot \displaystyle \int_{\phi_- \in C_{X_-}(\alpha)}e^{iS_{X_-}(\phi_-)/\hbar} \mathrm{d} \phi_-\bigg) \mathrm{d} \alpha \nonumber \\ &= \displaystyle \int\limits_{\alpha \in C_{\Sigma}} \mathcal{Z}_{X_{+}} \cdot \mathcal{Z}_{X_{-}} \mathrm{d} \alpha \nonumber\\ &= \langle \mathcal{Z}_{X_{+}}, \mathcal{Z}_{X_{-}} \rangle, \end{align} \noindent where $C_X (\alpha)$ is the subset of $C_X$ consisting of maps $\phi: X \longrightarrow N$ such that $\phi | _{\partial X} = \alpha$, and $\phi_{\pm}$ denote the restrictions of $\phi$ to $X_{\pm}$ respectively. \newpage \section{The Construction of Witten's Quantum Invariants} A similar kind of analysis applies when $X$ contains a knot $C$ in such a way that, after cutting along a Riemann surface $\Sigma$, both pieces $X_{\pm}$ involve some part of the knot $C$. Assume $ X $ contains a knot as depicted in Figure \ref{fig:decomposition with a knot}. Consider a ball $D^3$ containing a crossing. Let $\partial D^3=S^2$ play the role of $\Sigma$ as in Figure \ref{fig:decomposititon}; then, depending on how the pieces of $ X $ (each consisting of a different part of the original knot) glue back, one can obtain a \textit{non-isotopic} knot, say $L$ (arising from a different \textit{braiding}; see \cite{Prasolov} for a systematic treatment of knot invariants, including the computation of certain knot polynomials), and hence a different \textit{correlation function}, denoted by $ \mathcal{Z}_{X, \varTheta_L}$ (cf. Equation \ref{correlation function}).
\vspace{5pt} \begin{figure}[h] \centering \includegraphics[width=0.65\linewidth]{Untitled-111} \caption{Decomposition of $S^3$ along $\partial D^3=S^2$ containing four marked points $a, b, c, d$.} \label{fig:decomposition with a knot} \end{figure} \begin{remark} Note that $\partial D^3$ in Figure \ref{fig:decomposition with a knot} is a 2-sphere with a finite number of \textit{marked points} (that is, points labeled in the sense of the above discussion, i.e. decorated with certain representations), and hence we have a vector space $\mathcal{Z}_{\Sigma}$ different from the one assigned to a 2-sphere without marked points. Furthermore, such vector spaces, the ones associated to $\Sigma$ being $ S^2\cong \mathbb{C}P^1 $ with a finite number of \textit{marked points} $p_1,...,p_k$, sometimes denoted by $S^2_k$, naturally emerge in other branches of physics and encode some relation between theories in different dimensions, such as the one between \textit{1+1 dimensional conformal field theories} (CFTs) and 2+1 dimensional TQFTs. As stressed in \cite{Honda} and \cite{Wit3}, \textit{the space $\mathcal{H}_{S^2_k}$ of conformal blocks} for $S^2_k$ in the context of $ 1+1 $ dimensional conformal field theory is the quantum Hilbert space $\mathcal{Z}_{S^2_k}$ obtained by quantizing the 2+1 dimensional SU(2) Chern-Simons theory discussed above. \cite{Schttenloher} provides an elementary introduction to conformal field theory from a perspective more suited to mathematicians. For an accessible treatment of conformal blocks and the formulation of Witten's knot invariant in the language of conformal field theory, see \cite{Kohno}.
Furthermore, \cite{Kohno} and \cite{Schttenloher} also include a systematic treatment of the construction of the space of conformal blocks $\mathcal{H}_{S^2_k}$ for $S^2_k $ and its properties in a way which is essentially based on a representation-theoretic approach, including the so-called \textit{quantum Clebsch-Gordan condition} and counting the dimension of the space of conformal blocks with the aid of certain combinatorial objects, such as \textit{fusion rules} for surfaces with marked points and the \textit{Verlinde formula}. \end{remark} \noindent With the observations related to the existence of a certain correspondence between 1+1 dimensional CFTs and 2+1 dimensional TQFTs in hand, we shall analyze the decomposition depicted in Figure \ref{fig:decomposition with a knot} in detail by adopting the more combinatorial approach appearing in the CFT formulation of Witten's knot invariant (see \cite{Kohno}). The sketch of the idea is as follows: \vspace{10pt} \begin{itemize} \item Without changing anything, i.e. using a diffeomorphism on $\Sigma$ that does not change the braiding, such as \textit{the identity map}, if we glue back the pieces $X_{\pm}$ along $\Sigma:=\partial D^3=S^2$ which contains a particular crossing (and hence we in fact have $\Sigma=S^2_4$), then we recover $ X= (X_{+}\amalg X_{-})/ \partial D^3 $ together with the original knot $C$. Otherwise, each configuration differs from the others by a certain diffeomorphism of $S^2$ which can be presented in a well-established manner by studying the representation theory of its mapping class group $Map(\Sigma)$. See \cite{Prasolov}, \cite{Kohno} and \cite{Honda} for details.
\vspace{5pt} \item Notice that while the piece $X_-$ includes some complicated part of the original knot $C$ (that part is depicted as a \textit{white box} in Figure \ref{fig:decomposition with a knot}), the other one, $X_+$, consists of a part with some \textit{"braiding"} in the sense that each choice of possible braiding corresponds to one of the "independent" line configurations depicted in Figure \ref{fig:skeinhomfly} that naturally appear in knot theory (see \cite{Prasolov} for a more concrete discussion). \begin{figure}[h] \centering \includegraphics[width=0.27\linewidth]{Skein_HOMFLY} \caption{"Independent" line configurations where $ L_{0} $, $ L_{-} $, $ L_{+} $ are the usual notations for zero-crossing, undercrossing and overcrossing resp. in knot theory.} \label{fig:skeinhomfly} \end{figure} \item In accordance with the type of line configuration, if we glue $ X_+ $ and $ X_- $ along their boundaries, we can recover either the original knot (including the positive crossing, aka \textit{overcrossing} $ L_+ $, as in Figure \ref{fig:decomposition with a knot}) or the one with \textit{undercrossing} $ L_- $ or the one with \textit{zero-crossing} $ L_0 $. As stressed above, each such line configuration is encoded by a certain diffeomorphism of $ \Sigma $. As a remark, we abuse the notation from now on in the sense that $ L_{0} $, $ L_{-} $ and $ L_{+} $ denote the knots in $X$ obtained from gluing back $ X_{+} $ and $ X_{-} $ with respect to the choice of line configurations (and hence diffeomorphisms) $ L_{0} $, $ L_{-} $ and $ L_{+} $ respectively. \item As outlined in \cite{Kohno}, the choice of braiding of four marked points determines different vectors in the vector space $\mathcal{Z}_{S^2_4}$ associated to the Riemann surface $ S^2_4 $, the 2-sphere with four marked points, in accordance with the axioms of TQFT (or those of the sigma model) and the construction provided by the GQ formalism.
That is, one has the associated vectors \begin{equation} \mathcal{Z}_{X, \varTheta_{L_-}}, \mathcal{Z}_{X, \varTheta_{L_+}}, \mathcal{Z}_{X, \varTheta_{L_0}} \in \mathcal{Z}_{S^2_4}. \end{equation} \item Employing representation-theoretic approaches endowed with certain combinatorial techniques such as \textit{fusion rules} and the \textit{Verlinde formula} as in \cite{Kohno}, one has the following fact: \begin{equation} \dim_{\mathbb{C}} \mathcal{Z}_{S^2_4} \leq 2. \end{equation} \end{itemize} \vspace{5pt} \noindent Due to the finite dimensionality of $\mathcal{Z}_{S^2_4}$, we end up with a certain dependence relation for the vectors corresponding to the possible "independent" configurations, namely \begin{equation} \label{skeinlike} \alpha \mathcal{Z}_{X, \varTheta_{L_+}}+ \beta \mathcal{Z}_{X, \varTheta_{L_-}}+ \gamma\mathcal{Z}_{X, \varTheta_{L_0}}=0 \end{equation} with some weighted coefficients $\alpha, \beta, \gamma$ which arise from \textit{rational conformal field theory} and are given explicitly in \cite{Wit3}. Having computed those coefficients and manipulated the above dependence relation, we are able to recover \textit{the skein-like relation} defining the Jones polynomial $V(q)$ as follows (\cite{Wit3}): \begin{equation}\label{eq:Jonesskeinlike} q^{-1}V(L_+)-qV(L_-)-\big(q^{1/2}-q^{-1/2}\big)V(L_0)=0 \end{equation} where $ q:=e^{2\pi i/(k+2)} $, $k\in \mathbb{Z} $ is the level appearing in the definition of the Chern-Simons functional, $V(L_i)$ with $i \in \{0,+,-\}$ denotes the Jones polynomial associated to the knot with configuration $ L_{i} $, and we set \begin{equation} V(q) = \mathcal{Z}_{X, \varTheta_{C}}. \end{equation} \begin{remark} In the physics jargon, evaluating the quantity $ \mathcal{Z}_{X, \varTheta_{C}} $ in fact corresponds to computing \textit{the expectation value of the Wilson line observable associated to the knot $C$ in $X$}.
That essentially gives the 3-dimensional description of knot invariants in terms of 2+1 dimensional $SU(2)$ Chern-Simons theory. Furthermore, if we have a generic closed oriented smooth manifold $ M $, by using the effect of surgery operations (the formal recipe for obtaining $M$ from $S^3$) on the partition function, one can effectively evaluate the generalized Jones polynomial for any given knot in $M$. This direction is beyond the scope of this section, and for a complete treatment, we again refer to \cite{Wit3}. \end{remark} \vspace{15pt} \newpage
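As a quick consistency check (a standard computation, not carried out in the text; we assume the usual normalization $V(\mathrm{unknot})=1$), the relation \eqref{eq:Jonesskeinlike} already determines the Jones polynomial of the two-component unlink. Applying it to a crossing of a single twisted strand, both resolutions $L_+$ and $L_-$ are the unknot, while $L_0$ is the two-component unlink:

```latex
% Skein relation applied with L_+ = L_- = unknot and L_0 = two-component unlink:
\begin{align*}
  q^{-1}V(L_+) - qV(L_-) &= \big(q^{1/2}-q^{-1/2}\big)V(L_0),\\
  q^{-1}\cdot 1 - q\cdot 1 &= \big(q^{1/2}-q^{-1/2}\big)V(\mathrm{unlink}),\\
  V(\mathrm{unlink}) &= \frac{q^{-1}-q}{q^{1/2}-q^{-1/2}}
    = -\big(q^{1/2}+q^{-1/2}\big),
\end{align*}
```

which recovers the familiar value of the Jones polynomial of the unlink.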
\section{Introduction} Diffeological spaces were introduced by Souriau in the early 1980s \cite{So}. The notion generalizes that of a manifold. More precisely, the category $\mathsf{Mfd}$ of finite dimensional manifolds embeds into $\mathsf{Diff}$, the category of diffeological spaces, which is complete, cocomplete and cartesian closed. As an advantage, we can {\it very naturally} define a function space of manifolds in $\mathsf{Diff}$ so that the evaluation map is smooth, without arguments on infinite dimensional manifolds; see \cite{IZ} and also \cite[Section 4]{C-S-W}. It is worth mentioning the existence of adjoint functors between $\mathsf{Diff}$ and $\mathsf{Top}$, the category of topological spaces; see Appendix B for a brief summary of the functors. Thanks to reflective properties of the adjoint functors, the full subcategory of $\Delta$-generated (numerically generated or arc-generated) topological spaces, which contains all CW-complexes, also embeds into $\mathsf{Diff}$; see Remark \ref{rem:Delta-generated_Top}. Thus in the category $\mathsf{Diff}$, it is possible to deal simultaneously with such topological spaces and manifolds without forgetting the smooth structure. The category $\mathsf{Diff}$ is indeed equivalent to the category of concrete sheaves on a concrete site; see \cite{B-H}. Moreover, Watts and Wolbert \cite{W-W} have shown that $\mathsf{Diff}$ is closely related to the category of stacks over manifolds with adjoint functors between them. As Baez and Hoffnung have mentioned in \cite[Introduction]{B-H}, we can use the larger category $\mathsf{Diff}$ for abstract constructions and the smaller one $\mathsf{Mfd}$ for theorems that rely on good control over local structure. In this article, we intend to focus on and develop cohomological methods for considering the local and global nature of each diffeological space.
The de Rham complex for a diffeological space is introduced in \cite{So} and its applications, for example to moment maps, are extensively studied in \cite{IZ_MM, IZ_deRham}. Moreover, differential forms on diffeological spaces are effectively used in the study of bundles in $\mathsf{Diff}$; see \cite{M-W, Waldorf} and also \cite[Future work]{C-W_bundles}. Let $(X, {\mathcal D}^X)$ be a diffeological space and $\Omega^*(X)$ the de Rham complex due to Souriau \cite{So}. It is in fact regarded as a counterpart of the de Rham complex for developing Chen's iterated integrals \cite{C} in diffeology. In the de Rham calculus for diffeological spaces, Iglesias-Zemmour \cite{IZ_deRham} has introduced an integration, which is a cochain map, \[ \int^{\text{IZ}} : \Omega^*(X) \longrightarrow C^*_{\text{cube}}(X) \] and investigated its properties, where $C^*_{\text{cube}}(X)$ denotes the normalized cubic cochain complex whose $p$-cubes are smooth maps from ${\mathbb R}^p$ to $X$. However, the de Rham theorem in diffeology, which asserts that such an appropriate integration map induces an isomorphism of cohomology algebras, has not yet been established. Indeed, the de Rham theorem described with $\int^{\text{IZ}}$ does not hold for the irrational torus; see \cite[Section 8]{IZ_Cech} and Remark \ref{rem:An_example}. In this article, the de Rham theory in $\mathsf{Diff}$ is formulated in the context of simplicial objects. In particular, a new de Rham complex is introduced and a morphism from the original de Rham complex mentioned above to the new one is discussed together with morphisms between other related cochain algebras; see Theorem \ref{thm:main}, which is the main result in this article. As a consequence, the theorem allows one to deduce that the de Rham theorem holds for {\it every} diffeological space in our setting; see Corollary \ref{cor:main}.
We deduce that the de Rham complex introduced in this article is quasi-isomorphic to the original one if the given diffeological space stems from a CW-complex, a manifold or a parametrized stratifold in the sense of Kreck \cite{Kreck}. As a corollary, we also see that the integration map $\int^{\text{IZ}}$ induces a morphism of algebras on the cohomology; see Corollary \ref{cor:main2}. The Chen iterated integral map \cite{C} is deeply related to our de Rham complex. Let $M$ be a simply-connected manifold and $LM$ the free loop space consisting of smooth loops endowed with the Chen space structure; see \cite{C,C2}. Let $\text{So}$ denote the functor from the category of Chen spaces to $\mathsf{Diff}$ introduced by Stacey \cite{Stacey}. Then it follows that the Chen complex, which is a cochain subalgebra of the de Rham complex of $LM$ in the sense of Chen, is quasi-isomorphic to our de Rham complex of $\text{So}(LM)$; see Proposition \ref{prop:Chen}. Furthermore, the behavior of the Chen iterated integral map in diffeology is described in Theorems \ref{thm:the_second_main} and \ref{thm:general_main}. It seems that the de Rham complex we introduce is a correct target of Chen's iterated integrals; see Remark \ref{rem:target}. These results are also obtained by an adaptation of Theorem \ref{thm:main}. The proof of Theorem \ref{thm:main} relies on the {\it extendability} of simplicial cochain algebras in the real and rational de Rham theory in \cite{F-H-T, H, S, W}. Moreover, an argument due to Kihara in \cite{Kihara} with the method of acyclic models \cite{E-M, B-G} is applied to our setting. The latter half of the theorem follows from the usual argument with the Mayer-Vietoris sequence; see \cite{I-I, Haraguchi} for applications of the sequence in diffeology. Properties of the $D$-topology for diffeological spaces, which are studied in \cite{C-S-W}, are also used throughout this article.
Thus classical results, well-known methods in algebraic topology and recent results in diffeology serve mainly in the proofs of our assertions. It may thus seem that no new idea for the study of diffeology is given in this article. However, we would like to emphasize that an advantage of this work is to give plenty of simplicial objects for the homology of diffeological spaces; see Table \ref{table1} in Section \ref{sect7} and the comment that follows. Indeed, there is a suitable choice of a simplicial set and a simplicial cochain algebra with which one deduces the de Rham theorem for diffeological spaces as mentioned above. It is worth mentioning that the de Rham complex which we choose definitely concerns the simplicial argument and cohomology developed in \cite{Hector, C-W, Kihara1, Kihara, G}. Other choices of simplicial sets for given diffeological spaces enable us to construct the Leray-Serre spectral sequence and the Eilenberg-Moore spectral sequence for an appropriate {\it fibration} in $\mathsf{Diff}$; see Theorems \ref{thm:LSSS} and \ref{thm:EMSS}. By elaborate replacement of pullbacks with homotopy pullbacks for considering smooth lifts of fibrations, we obtain the spectral sequences and then Theorem \ref{thm:general_main}, which explains a diffeological version of Chen's isomorphism induced by iterated integrals. This is another highlight of this manuscript; see also Remark \ref{rem:highlight}. We mention here that in \cite{IZ_Cech}, the \v{C}ech-de Rham spectral sequence converging to the \v{C}ech cohomology of a diffeological space is introduced. Observe that the original de Rham cohomology appears in the vertical edge of the $E_2$-term of the spectral sequence.
In future work, it is expected that the local systems in the sense of Halperin \cite{H}, which we use in the proof of Theorem \ref{thm:general_main}, develop rational homotopy theory for non-simply connected diffeological spaces and Sullivan diffeological spaces; see \cite{G-H-T, F-H-T_II}. Moreover, the new de Rham complex may produce the argument on $1$-minimal models as in \cite{C_extensions, C-P} in diffeology. \section{The main results} \label{sect2} We begin by recalling the definitions of a diffeological space and the de Rham complex due to Souriau. A good reference for the subjects is the book \cite{IZ}. We refer the reader to \cite[\S 1.2]{C} and \cite[\S 2]{B-H} for Chen spaces and \cite{Stacey} for the comparison between diffeological spaces and Chen spaces. \begin{defn} For a set $X$, a set ${\mathcal D}^X$ of functions $U \to X$ for each open set $U$ in ${\mathbb R}^n$ and for each $n \in {\mathbb N}$ is a {\it diffeology} of $X$ if the following three conditions hold: \begin{enumerate} \item (Covering) Every constant map $U \to X$ for every open set $U \subset {\mathbb R}^n$ is in ${\mathcal D}^X$; \item (Compatibility) If $U \to X$ is in ${\mathcal D}^X$, then for any smooth map $V \to U$ from an open set $V \subset {\mathbb R}^m$, the composite $V \to U \to X$ is also in ${\mathcal D}^X$; \item (Locality) If $U = \cup_i U_i$ is an open cover and $U \to X$ is a map such that each restriction $U_i \to X$ is in ${\mathcal D}^X$, then the map $U \to X$ is in ${\mathcal D}^X$. \end{enumerate} \end{defn} We call a pair $(X, {\mathcal D}^X)$ consisting of a set and a diffeology a {\it diffeological space}. An element of a diffeology ${\mathcal D}^X$ is called a {\it plot}. Let $(X, {\mathcal D}^X)$ be a diffeological space and $A$ a subset of $X$. The {\it sub-diffeology} ${\mathcal D}^A$ on $A$ is defined by the initial diffeology for the inclusion $i : A \to X$; that is, $p\in {\mathcal D}^A$ if and only if $i\circ p \in {\mathcal D}^X$.
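A standard first example, implicit throughout the article: every smooth manifold carries a diffeology whose plots are the usual smooth maps.

```latex
% The manifold diffeology on a smooth manifold M:
\mathcal{D}^M := \{\, p : U \to M \mid U \subset \mathbb{R}^n \ \text{open},
  \ p \ \text{smooth in the usual sense} \,\}.
% (Covering) holds since constant maps are smooth; (Compatibility) since
% composites of smooth maps are smooth; (Locality) since smoothness is a
% local condition.
```

With this diffeology, the diffeologically smooth maps between manifolds coincide with the usual smooth maps, which yields the embedding $\mathsf{Mfd} \to \mathsf{Diff}$ mentioned in the introduction.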
\begin{defn} Let $(X, {\mathcal D}^X)$ and $(Y, {\mathcal D}^Y)$ be diffeological spaces. A map $f : X \to Y$ is {\it smooth} if for any plot $p \in {\mathcal D}^X$, the composite $f\circ p$ is in ${\mathcal D}^Y$. \end{defn} All diffeological spaces and smooth maps form a category $\mathsf{Diff}$. Let $(X, {\mathcal D}^X)$ be a diffeological space. We say that a subset $A$ of $X$ is $D$-{\it open} (open for short) if $p^{-1}(A)$ is open for each plot $p \in {\mathcal D}^X$, where the domain of the plot is equipped with the standard topology. This topology is called the {\it $D$-topology} on $X$. Observe that for a subset $A$ of $X$, the $D$-topology of the sub-diffeology on $A$ coincides with the subspace topology of the $D$-topology on $X$ if $A$ is $D$-open; see \cite[Lemma 3.17]{C-S-W}. We here recall the de Rham complex $\Omega^*(X)$ of a diffeological space $(X, {\mathcal D}^X)$ in the sense of Souriau \cite{So}. For an open set $U$ of ${\mathbb R}^n$, let ${\mathcal D}^X(U)$ be the set of plots with $U$ as the domain and $\wedge^*(U) = \{h : U \longrightarrow \wedge^*(\oplus_{i=1}^{n} {\mathbb R}dx_i ) \mid h \ \text{is smooth}\}$ the usual de Rham complex of $U$. Let $\mathsf{Open}$ denote the category consisting of open sets of Euclidean spaces and smooth maps between them. We can regard ${\mathcal D}^X( \ )$ and $\wedge^*( \ )$ as functors from $\mathsf{Open}^{\text{op}}$ to $\mathsf{Sets}$, the category of sets. A $p$-{\it form} is a natural transformation from ${\mathcal D}^X( \ )$ to $\wedge^p( \ )$. Then the de Rham complex $\Omega^*(X)$ is the cochain algebra consisting of $p$-forms for $p\geq 0$; that is, $\Omega^*(X)$ is the direct sum of \[ \Omega^p(X) := \Set{ \xymatrix@C35pt@R10pt{ \mathsf{Open}^{\text{op}} \rtwocell^{{\mathcal D}^X}_{\wedge^p}{\hspace*{0.2cm}\omega} & \mathsf{Sets} } | \omega \ \text{is a natural transformation} } \] with the cochain algebra structure defined pointwise by that of $\wedge^*(U)$.
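To illustrate the definition (an elementary example, anticipating the tautological map $\theta$ recalled in Section \ref{sect3}), a usual de Rham $p$-form on a manifold $M$ determines a $p$-form on $M$ in the above sense:

```latex
% For a usual p-form \eta on a manifold M, assign to each plot p : U -> M
\omega_U(p) := p^*\eta \ \in \ \wedge^p(U).
% Naturality: for a smooth map g : V -> U between open sets,
% \omega_V(p \circ g) = (p \circ g)^*\eta = g^*(p^*\eta) = g^*(\omega_U(p)),
% so the collection \{\omega_U\} is a natural transformation D^X => \wedge^p.
```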
We mention that the above interpretation of the de Rham complex appears in \cite{P} and \cite{I-I}. In what follows, we may use the terminology {\it differential graded algebra} (DGA for short) for a cochain algebra. \begin{rem} The category $\mathsf{Open}$ is regarded as a concrete site endowed with coverages consisting of open covers; see \cite[Lemma 4.14]{B-H}. Then the result \cite[Proposition 4.15]{B-H} yields that the functor ${\mathcal D}^X$ is a concrete sheaf on $\mathsf{Open}$. On the other hand, the functor $\wedge^p$ is a sheaf for each $p \geq 0$ but not concrete in general. It is readily seen that $\wedge^p$ is a concrete sheaf if and only if $p=0$. \end{rem} In order to describe our main theorem, we recall appropriate simplicial sets and simplicial cochain algebras. Let ${\mathbb A}^{n}:=\{(x_0, ..., x_n) \in {\mathbb R}^{n+1} \mid \sum_{i=0}^n x_i = 1 \}$ be the affine space equipped with the sub-diffeology of ${\mathbb R}^{n+1}$. Let $\Delta^n_{\text{sub}}$ denote the diffeological space, whose underlying set is the standard $n$-simplex $\Delta^n$, equipped with the sub-diffeology of the affine space ${\mathbb A}^{n}$. Let $(A_{DR}^*)_\bullet$ be the simplicial cochain algebra defined by $(A^*_{DR})_n := \Omega^*({\mathbb A}^{n})$ for each $n\geq 0$. We denote by $(\widetilde{A^*_{DR}})_\bullet$ the sub-simplicial cochain algebra of $\Omega^*(\Delta^\bullet_{\text{sub}})$ consisting of elements in the image of the map $j^*: \Omega^*({\mathbb A}^n) \to \Omega^*(\Delta^n_{\text{sub}})$ induced by the inclusion $j : \Delta^n_{\text{sub}} \to {\mathbb A}^{n}$. For a diffeological space $(X, {\mathcal D}^X)$, let $S^D_\bullet(X)$ be the simplicial set defined by \[ S^D_\bullet(X):= \{ \{ \sigma : {\mathbb A}^n \to X \mid \sigma \ \text{is a $C^\infty$-map} \} \}_{n\geq 0}. \] We mention that $S^D_\bullet( \text{-})$ gives the {\it smooth singular functor} defined in \cite{C-W}.
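In low dimensions these objects can be made completely explicit (an elementary unpacking, not taken from the text):

```latex
% n = 1: the affine line and the sub-diffeology interval.
\mathbb{A}^1 = \{(x_0, x_1) \in \mathbb{R}^2 \mid x_0 + x_1 = 1\}
  \ \cong\ \mathbb{R}, \qquad (x_0, x_1) \mapsto x_1,
% a diffeomorphism with smooth inverse t \mapsto (1-t, t). Under this
% identification, \Delta^1_{sub} corresponds to [0,1] with the
% sub-diffeology of \mathbb{R}, and a 1-simplex of S^D_\bullet(X) is a
% C^\infty-map \mathbb{R} \to X.
```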
Moreover, let ${S^\infty_\bullet(X)}$ denote the sub-simplicial set of \[ {S^D_\bullet(X)}_{\text{sub}}:= \{ \{ \sigma : \Delta^n_{\text{sub}} \to X \mid \sigma \ \text{is a $C^\infty$-map} \} \}_{n\geq 0} \] consisting of the elements which are restrictions of $C^\infty$-maps from ${\mathbb A}^{n}$ to $X$; see \cite{Hector} for the study of the simplicial set ${S^D_\bullet(X)}_{\text{sub}}$ in diffeology. In what follows, let $\Delta$ be the category which has posets $[n]:=\{0, 1, ..., n\}$ with $k < k+1$ for $n\geq 0$ as objects and non-decreasing maps $[n] \to [m]$ for $n, m\geq 0$ as morphisms. Let $K$ be a simplicial set, namely a contravariant functor from $\Delta$ to the category $\mathsf{Sets}$ of sets. We denote by $C^*(K)$ the cochain complex of maps from $K_p$ to ${\mathbb R}$ in degree $p$. The simplicial structure gives $C^*(K)$ the cochain algebra structure in the standard manner; that is, the {\it cup product} is defined on $C^*(K)$; see \cite[10 (d)]{F-H-T} for example. We also recall the simplicial cochain algebra $(C^*_{PL})_\bullet:=C^*(\Delta[\bullet])$, where $\Delta[n] =\text{hom}_\Delta( \text{-}, [n])$ denotes the standard simplicial set. For a simplicial cochain algebra $A_\bullet$, we denote by $A(K)$ the cochain algebra $\mathsf{Sets^{\Delta^{op}}}(K, A_\bullet)$. Observe that, for a simplicial set $K$, the map $\nu : C_{PL}^p(K) \to C^p(K)$ defined by $\nu(\gamma)(\sigma) = \gamma(\sigma)(id_{[p]})$ for $\sigma \in K_p$ gives rise to a natural isomorphism $C_{PL}^*(K) \stackrel{\cong}{\to} C^*(K)$ of cochain algebras; see \cite[Lemma 10.11]{F-H-T}. Moreover, we define a map $\alpha : \Omega^*(X) \to A_{DR}^*(S^D_\bullet(X))$ of cochain algebras by $\alpha(\omega)(\sigma) = \sigma^*(\omega)$ and define $\alpha' : \Omega^*(X) \to \widetilde{A^*_{DR}}(S^\infty_\bullet(X))$ as well. The well-definedness and properties of $\alpha$ and $\alpha'$ will be discussed in Section 4.
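For concreteness, the cup product on $C^*(K)$ may be written as follows (the standard formula, in our notation; cf. \cite[10 (d)]{F-H-T}):

```latex
% For u in C^p(K), v in C^q(K) and a (p+q)-simplex sigma in K_{p+q}:
(u \smile v)(\sigma) := u\big(K(\lambda_p)(\sigma)\big)
  \cdot v\big(K(\rho_q)(\sigma)\big),
% where \lambda_p : [p] -> [p+q], i -> i, is the front inclusion and
% \rho_q : [q] -> [p+q], i -> p + i, is the back inclusion in \Delta.
```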
One of the aims of this article is to relate cochain algebras induced by the simplicial objects mentioned above to one another. The following is the main theorem which describes such a relationship. \begin{thm}\label{thm:main} \text{\em (}cf. \cite{E}, \cite[Theorem 9.7]{I-I}, \cite[Th\'eor\`emes 2.2.11, 2.2.14, 2.2.18]{G}\text{\em)} For a diffeological space $(X, {\mathcal D}^X)$, one has a homotopy commutative diagram \[ \xymatrix@C28pt@R20pt{ C^*({S^D_\bullet(X)}_{\text{\em sub}}) \ar[rd]_{=} \ar[r]_-{\simeq}^-\varphi & (C^*_{PL} \otimes A^*_{DR})(S^D_\bullet(X)) \ar[d]_(0.4){\text{\em mult } \! \circ (1\otimes \int)} & A^*_{DR}(S^D_\bullet(X)) \ar[l]^-{\simeq}_-\psi \ar[ld]^(0.4){\text{an ``integration"} \int}& \Omega^*(X) \ar[l]_-{\alpha} \ar[dl]^-{\int^{\text{\em IZ}}} \\ &C^*({S^D_\bullet(X)}_{\text{\em sub}}) & C^*_{\text{\em cube}}(X) \ar[l]^-{\simeq}_-{l}& } \] in which $\varphi$ and $\psi$ are quasi-isomorphisms of cochain algebras and the integration map $\int$ is a morphism of cochain complexes. Here $\text{\em mult}$ denotes the multiplication on the cochain algebra $C^*({S^D_\bullet(X)}_{\text{\em sub}})$. Moreover, if $(X, {\mathcal D}^X)$ stems from a CW-complex or a parametrized stratifold (see Appendix B), then $\alpha$ is a quasi-isomorphism. \end{thm} The homology $H(C^*({S^D_\bullet(X)}_{\text{sub}}))$ has been introduced in \cite{Hector}, in which tangent spaces of diffeological spaces are also discussed. We observe that the latter half of Theorem \ref{thm:main} gives a partial answer to \cite[Probl\`{e}me D]{Hector}. As it turns out, the de Rham theorem holds for diffeological spaces. \begin{cor} \label{cor:main} For every diffeological space $(X, {\mathcal D}^X)$, the integration map \[ \int : A^*_{DR}(S^D_\bullet(X)) \to C^*({S^D_\bullet(X)}_{\text{\em sub}}) \] in Theorem \ref{thm:main} induces an isomorphism of algebras on the cohomology. \end{cor} In Theorem \ref{thm:main}, the map $l$ induces an isomorphism of algebras on the cohomology.
This follows from the result \cite[Theorem 8.2]{SNPA} for example. Then we have an obstruction for $\int^{\text{IZ}}$ to give an isomorphism on the cohomology. \begin{cor}\label{cor:main2} {\em (i)} The integration map $\int^{\text{\em IZ}} : \Omega^*(X) \longrightarrow C^*_{\text{\em cube}}(X)$ induces a morphism of algebras on the cohomology. \\ {\em (ii)} The integration map $\int^{\text{\em IZ}}$ induces an isomorphism of algebras on the cohomology if and only if so does the morphism $\alpha$ in Theorem \ref{thm:main}. \end{cor} We observe that in general, the morphism $\alpha$ does not induce an isomorphism on the cohomology; see Remark \ref{rem:An_example}. In \cite{I-I}, Iwase and Izumida have proved the de Rham theorem for a CW-complex by using cubic de Rham cohomology, which admits the Mayer-Vietoris sequence for {\it every} diffeological space. Let $X$ be a diffeological space. Then the excision axiom for the homology of $C^*({S^D_\bullet(X)}_{\text{sub}})$ holds with respect to the $D$-topology for $X$, and hence so does that for the homology $H(A_{DR}^*(S^D_\bullet(X)))$; see Section \ref{sect5}. Thus we have the Mayer-Vietoris exact sequence for $H(A_{DR}^*(S^D_\bullet(X)))$ with respect to a $D$-open cover. We describe two applications of the integration map in Theorem \ref{thm:main} which are related to Chen's iterated integrals \cite{C, C2}. Let $M$ be a manifold and $M^I$ the Chen space of smooth free paths. Then we have the pullback \[ \xymatrix@C35pt@R18pt{ LM \ar[r]^{\widetilde{\Delta}} \ar[d]_{ev} & M^I \ar[d]^{(\varepsilon_0, \varepsilon_1)} \\ M \ar[r]_{\Delta} & M\times M } \eqnlabel{add-0} \] of the free path fibration $(\varepsilon_0, \varepsilon_1) : M^I \to M\times M$ along the diagonal map $\Delta : M \to M\times M$ in the category $\mathsf{ChenSp}$ of Chen spaces, where $\varepsilon_i$ is the evaluation map at $i$; see \cite[Section 1.2]{C} and \cite[Section 2]{B-H}.
We also recall Chen's iterated integral $\mathsf{It} : \Omega^*(M)\otimes B(\Omega^*(M)) \to \Omega^* (LM)_{\text{Chen}}$ which is a morphism of DG-modules; see \cite[(2.1)]{C2} and \cite[THEOREM 4.2.1]{C}, where $\Omega^* (X)_{\text{Chen}}$ denotes the de Rham complex for a Chen space $X$ and the source complex is the bar construction; see \cite[Section 2]{C_bar} and \cite[Section 1.1]{P}. More precisely, let $\omega_i$ be a differential $p_i$-form in $\Omega^*(M)$ for each $1\leq i \leq k$ and $\rho : U\to M^I$ a plot of the Chen space $M^I$. We define $\widetilde{\omega_{i\rho}}$ by $\widetilde{\omega_{i\rho}}:=(id_U \times t_i)^*\rho_\sharp^*\omega_i$, where $\rho_\sharp : U\times I \to M$ is the adjoint to $\rho$ and $t_i : {\bf \Delta}^k :=\{(t_1, ..., t_k) \in {\mathbb R}^k \mid 0\leq t_1\leq \cdots \leq t_k \leq 1\} \to I$ denotes the projection onto the $i$th factor. By using the integration along the fibre of the trivial fibration $U \times {\bf \Delta}^k \to U$, the iterated integral $(\int \omega_1\cdots \omega_k)_\rho$ is defined by \[ (\int \omega_1\cdots \omega_k)_\rho := \int_{{\bf \Delta}^k} \widetilde{\omega_{1\rho}}\wedge \cdots \wedge \widetilde{\omega_{k\rho}}. \] Then by definition, Chen's iterated integral $\mathsf{It}$ has the form \[ \mathsf{It}(\omega_0[\omega_1 | \cdots | \omega_k])= ev^*(\omega_0)\wedge \widetilde{\Delta^*} (\int \omega_1\cdots \omega_k). \eqnlabel{add-1} \] Let $\text{So} : \mathsf{ChenSp} \to \mathsf{Diff}$ be the functor introduced by Stacey \cite{Stacey} for which the underlying set is the same as that of the Chen space and $p : U \to X$ is a plot in $\text{So} X$ if and only if $p : U \to X$ is smooth in $\mathsf{ChenSp}$. Then we shall define a morphism $\beta : \Omega^* (X)_{\text{Chen}} \to \Omega^*(\text{So} X)$ of DGA's for each Chen space $X$ in Section \ref{sect6}.
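In the simplest case $k=1$, the definition unwinds as follows (a direct unpacking, included for orientation):

```latex
% k = 1: for a p-form \omega on M and a plot \rho : U -> M^I, the map
% id_U x t_1 : U x \Delta^1 -> U x I is the identity, so
% \widetilde{\omega_{1\rho}} = \rho_\sharp^*\omega on U x [0,1], and
\Big(\int \omega\Big)_{\!\rho} = \int_{{\bf \Delta}^1} \rho_\sharp^*\omega ,
% a (p-1)-form on U. Evaluated on the plot determined by a single smooth
% path \gamma : I -> M (U a point) with \omega a 1-form, this is the
% classical line integral \int_0^1 \gamma^*\omega.
```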
We choose a DG subalgebra $A$ of $\Omega^*(M)$ which satisfies the condition that $A^p = \Omega^p(M)$ for $p>1$, $A^0={\mathbb R}$ and $A^1\cap d\Omega^0(M) =0$. Let ${\mathcal Chen} (M)$ be the image of the restriction $\Omega^*(M)\otimes \overline{B}(A) \to \Omega^*(LM)_{\text{Chen}}$ of $\mathsf{It}$ mentioned above. Here $\Omega^*(M)\otimes \overline{B}(A)$ denotes the reduced bar complex. Observe that ${\mathcal Chen} (M)$ is a DG subalgebra of $\Omega^*(LM)_{\text{Chen}}$; see \cite[2.1]{C}. By applying Theorem \ref{thm:main}, we have \begin{prop}\label{prop:Chen} Let $M$ be a simply-connected manifold. Then the composite $\alpha \circ \beta \circ \mathsf{It}: \Omega^*(M)\otimes \overline{B}(A) \to A^*_{DR}(S^D_\bullet(\text{\em So}(LM)))$ is a quasi-isomorphism of DG-modules over $\Omega^*(M)$. Moreover, the restriction $(\alpha\circ \beta)|_{{\mathcal Chen} (M)} : {\mathcal Chen} (M) \to A^*_{DR}(S^D_\bullet(\text{\em So}(LM)))$ is a quasi-isomorphism of DGA's. \end{prop} While the proof we give in the article heavily relies on the results in \cite{C,C2}, it may be possible to prove the latter half of Proposition \ref{prop:Chen} with relative Sullivan models for fibrations \cite{H, F-H-T}. In fact, it is realized in a diffeological framework; see Theorem \ref{thm:the_second_main} below. We can consider the same diagram as (2.1) in $\mathsf{Diff}$ in which $M$ is a general diffeological space and $M^I$ is the diffeological space endowed with the functional diffeology. The pullback is denoted by $ev : L_{\text{free}}M \to M$. Modifying the definition of Chen's iterated integral in $\mathsf{Diff}$, we have a morphism $\mathsf{It} : \Omega^*(M)\otimes \overline{B}(A) \to \Omega^*(L_{\text{free}}M)$ of differential graded $\Omega^*(M)$-modules; see Section \ref{sect7} for more details.
The integration map in Theorem \ref{thm:main} and a careful treatment of a local system in the sense of Halperin \cite{H} with respect to the evaluation map $M^I \to M\times M$ in (2.1) enable us to deduce the following pivotal theorem. \begin{thm}\label{thm:the_second_main} Let $M$ be a simply-connected diffeological space. Then the composite $\alpha \circ \mathsf{It} : \Omega^*(M)\otimes \overline{B}(A) \to \Omega^*(L_{\text{\em free}}M) \to A^*_{DR}(S^D_\bullet(L_{\text{\em free}}M))$ is a quasi-isomorphism of $\Omega^*(M)$-modules. \end{thm} \begin{rem}\label{rem:target} Proposition \ref{prop:Chen}, Theorem \ref{thm:the_second_main} and its generalization Theorem \ref{thm:general_main} below reveal that the new de Rham complex functor $A_{DR}^*(S^D_\bullet( \ ))$ gives an appropriate target of Chen's iterated integrals. \end{rem} The rest of this article is organized as follows. In order to prove Theorem \ref{thm:main}, the extendability of the simplicial cochain algebra $A^*_{DR}$ and its variants is verified in Section \ref{sect3}. Section \ref{sect4} explains the integration map and the map $\alpha$ in the main theorem. Theorem \ref{thm:main}, Corollaries \ref{cor:main}, \ref{cor:main2} and Proposition \ref{prop:Chen} are proved in Section \ref{sect5}. In Section \ref{sect7}, after recalling Chen's iterated integrals from a diffeological point of view, we prove Theorem \ref{thm:the_second_main} as a corollary of a more general result (Theorem \ref{thm:general_main}). In Appendix A, the acyclic model theorem for cochain complexes is recalled. Appendix B briefly summarizes the notion of a stratifold due to Kreck \cite{Kreck} and functors between categories concerning our subjects in this article. \section{Preliminaries} \subsection{Extendability of the simplicial cochain algebra $\widetilde{A^*_{DR}}$}\label{sect3} We begin with the definition of the extendability of a simplicial object. The notion plays an important role in the proof of the main theorem.
\begin{defn} \label{defn:extendableOb} A simplicial object $A$ in a category ${\mathcal C}$ is {\it extendable} if for any $n$, every subset ${\mathcal I} \subset \{0, 1, ..., n\}$ and any elements $\Phi_i \in A_{n-1}$ for $i \in {\mathcal I}$ which satisfy the condition that $\partial_i\Phi_j = \partial_{j-1} \Phi_i$ for $i <j$, there exists an element $\Phi \in A_n$ such that $\Phi_i = \partial_i \Phi$ for $i \in {\mathcal I}$. \end{defn} Let $M$ be a manifold and $\Omega_{\text{deRham}}^*(M)$ the usual de Rham complex of $M$. We recall the tautological map $\theta : \Omega_{\text{deRham}}^*(M) \to \Omega^*(M)$ defined by $\theta(\omega) = \{ p^*\omega\}_{p \in {{\mathcal D}^{M}}}$. Observe that $\theta$ is an isomorphism; see \cite[Section 2]{H-V-C}. With this in mind, we prove the following lemma due to Emoto \cite{Emoto}. Though the proof indeed uses the same strategy as in \cite[13.8 Proposition]{H} and \cite[Lemma 10.7 (iii)]{F-H-T}, we include it for the reader. \begin{lem} \label{lem:extendability} The simplicial differential graded algebra $(\widetilde{A^*_{DR}})_\bullet$ is extendable. \end{lem} \begin{proof} Let ${\mathcal I}$ be a subset of $\{0, 1, ..., n\}$ and $\Phi_i $ an element in $(\widetilde{A^*_{DR}})_{n-1}$ for $i \in {\mathcal I}$. We assume that $\partial_i\Phi_j = \partial_{j-1} \Phi_i$ for $i <j$. We inductively define elements $\Psi_r \in (\widetilde{A^*_{DR}})_{n}$ for $-1 \leq r \leq n$ which satisfy the condition (*): $\partial_i \Psi_r= \Phi_i$ if $i \in {\mathcal I}$ and $i\leq r$. Put $\Psi_{-1} =0$ and suppose that $\Psi_{r-1}$ is given with (*). Define a smooth map $\varphi : {\mathbb A}^n-\{v_r\} \to {\mathbb A}^{n-1}$ by $ \varphi(t_0, t_1, ..., t_n) = \big(\frac{t_0}{1-t_{r}}, .., \frac{t_{r-1}}{1-t_r}, \frac{t_{r+1}}{1-t_r}, ..., \frac{t_n}{1-t_r}\big), $ where $v_r$ denotes the $r$th vertex.
The map $\varphi$ induces a morphism $\varphi^* : \Omega^*({\mathbb A}^{n-1}) \to \Omega^*({\mathbb A}^n - \{v_r\})$ of cochain algebras. For an element $u$ in $(\widetilde{A^*_{DR}})_{n-1}$, we write $u'$ for an element in $\Omega^*({\mathbb A}^{n-1})$ with $j^*(u')=u$. If $r$ is not in ${\mathcal I}$, we define $\Psi_r$ by $\Psi_{r-1}$. In the case where $r \in {\mathcal I}$, we consider the element $\Phi_r' -\partial_r\Psi_{r-1}'$ in $\Omega^*({\mathbb A}^{n-1})$. Define $\Psi \in (\widetilde{A^*_{DR}})_n$ by \[ \Psi := j^*\big((\rho\circ k_r)\star \varphi^*(\Phi_r' -\partial_r\Psi_{r-1}') \big), \] where $k_r : {\mathbb A}^n \to {\mathbb A}$ is the projection onto the $r$th factor and $\rho$ is a cut-off function with $\rho(0) =1$ and $\rho(1) = 0$. We observe that $(\rho\circ k_r)$ is in $\Omega^0_{DR}({\mathbb A}^n)$ and that the action of $(\rho\circ k_r)$ on $\Omega^*_{DR}({\mathbb A}^n - \{v_r\})$ defined by the pointwise multiplication gives rise to a linear map $ (\rho\circ k_r)\star \text{-} : \Omega^*({\mathbb A}^n - \{v_r\}) \to \Omega^*({\mathbb A}^n). $ Moreover, we see that the map $(\rho\circ k_r)\star \text{-}$ fits in the commutative diagram \[ \xymatrix@C35pt@R16pt{ \Omega^*({\mathbb A}^{n-1}) \ar[r]^-{\varphi^*} \ar[d]_{\partial_i}& \Omega^*({\mathbb A}^n - \{v_r\}) \ar[r]^-{(\rho\circ k_r)\star } \ar[d]_{\partial_i} & \Omega^*({\mathbb A}^n) \ar[d]^{\partial_i} \\ \Omega^*({\mathbb A}^{n-2}) \ar[r]_-{\varphi^*} & \Omega^*({\mathbb A}^{n-1} - \{v_{r-1}\}) \ar[r]_-{(\rho\circ k_{r-1})\star } & \Omega^*({\mathbb A}^{n-1}) } \] for $i < r$.
Since $\partial_i(\Phi_r-\partial_r\Psi_{r-1}) = \partial_{r-1}(\Phi_i -\partial_i\Psi_{r-1})=0$ by assumption for $i \in {\mathcal I}$ with $i <r$, it follows from the commutative diagram above that for such $i$, \begin{align*} \partial_i\Psi &= \partial_ij^*\big((\rho\circ k_r)\star \varphi^*(\Phi_r' -\partial_r\Psi_{r-1}')\big) \\ &= j^*\big((\rho\circ k_{r-1})\star \partial_i\varphi^*(\Phi_r' -\partial_r\Psi_{r-1}')\big) = (\rho\circ k_{r-1})\star j^*\big(\partial_i\varphi^*(\Phi_r' -\partial_r\Psi_{r-1}')\big) \\ &= (\rho\circ k_{r-1})\star j^*\varphi^*\partial_i(\Phi_r' -\partial_r\Psi_{r-1}') = (\rho\circ k_{r-1})\star \varphi^*\partial_i(\Phi_r -\partial_r\Psi_{r-1})=0. \end{align*} The third and fifth equalities follow from the commutativity of the diagram \[ \xymatrix@C35pt@R15pt{ \Omega^*({\mathbb A}^{n-2}) \ar[r]^-{\varphi^*} \ar[d]_{j^*}& \Omega^*({\mathbb A}^{n-1} - \{v_{r-1}\}) \ar[r]^-{(\rho\circ k_{r-1})\star } \ar[d]_{j^*} & \Omega^*({\mathbb A}^{n-1}) \ar[d]^{j^*} \\ \text{Im} \ j^* \ar[r]_-{\varphi^*} \ar@<-0.3ex>@{^{(}->}[d] & \text{Im} \ j^* \ar[r]_-{(\rho\circ k_{r-1})\star } \ar@<-0.3ex>@{^{(}->}[d] & (\widetilde{A^*_{DR}})_{n-1} \\ \Omega^*(\Delta_\text{sub}^{n-2}) \ar[r]_-{\varphi^*}& \Omega^*(\Delta_\text{sub}^{n-1}- \{v_{r-1}\}) . } \] Since $\partial_r (\rho \circ k_r) =1$ and $\varphi \circ \partial^r =id_{{\mathbb A}^{n-1}}$, it follows that the diagram \[ \xymatrix@C35pt@R16pt{ \Omega^*({\mathbb A}^{n-1}) \ar[r]^-{\varphi^*} \ar[rd]_{id}& \Omega^*({\mathbb A}^{n} - \{v_{r}\}) \ar[r]^-{(\rho\circ k_{r})\star } \ar[d]_{\partial_{r}} & \Omega^*({\mathbb A}^{n}) \ar[d]^{\partial_r} \\ & \Omega^*({\mathbb A}^{n-1}) \ar[r]_-{id} & \Omega^*({\mathbb A}^{n-1}) } \] is commutative. Thus we have $\partial_r\Psi = \Phi_r -\partial_r\Psi_{r-1}$. Putting $\Psi_r := \Psi + \Psi_{r-1}$, we have $\partial_j\Psi_r=\Phi_j$ for $j \in {\mathcal I}$ and $j\leq r$. This completes the induction and hence the proof. \end{proof} We verify that the Poincar\'e lemma holds for $(\widetilde{A_{DR}})_n$.
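Before doing so, we illustrate the extension produced by Lemma \ref{lem:extendability} in the lowest nontrivial case; the following is only a sketch for orientation and is not used in the sequel. For $n=1$, ${\mathcal I}=\{0,1\}$ and degree $0$ data $\Phi_0, \Phi_1 \in (\widetilde{A^0_{DR}})_0$, the compatibility condition is vacuous, and a cut-off function $\rho$ with $\rho(0)=0$ and $\rho(1)=1$ yields a smooth function on ${\mathbb A}^1$ whose restriction
\[
\Psi(t_0, t_1) \;=\; \rho(t_1)\,\Phi_0 + \rho(t_0)\,\Phi_1 \;\in\; (\widetilde{A^0_{DR}})_1
\]
satisfies $\partial_0\Psi = \Psi(v_1) = \Phi_0$ and $\partial_1\Psi = \Psi(v_0) = \Phi_1$, since $t_0 + t_1 = 1$ on ${\mathbb A}^1$. The general inductive construction in the proof specializes to a formula of this shape.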
\begin{lem}\label{lem:PoincareLemma} One has $H^*((\widetilde{A_{DR}})_n)= {\mathbb R}$ for any $n\geq 0$. \end{lem} \begin{proof} We first remark that the chain homotopy operator defined in \cite[6.83]{IZ} is natural with respect to smooth maps. A smooth contraction map $\Delta_\text{sub}^n \to \Delta_\text{sub}^n$ extends to one on the affine space $\mathbb{A}^n$. Moreover, we have a smooth homotopy between the contraction and the identity map on $\mathbb{A}^n$ whose restriction is such a homotopy on $\Delta_\text{sub}^n$. The homotopy gives rise to chain homotopy maps $K$ and $K'$ which fit into a commutative diagram \[ \xymatrix@C25pt@R15pt{ \Omega^*(\mathbb{A}^n) \ar[r]^{j^*} \ar[d]_{K} & \Omega^*(\Delta^n_{\text{sub}}) \ar[d]^{K'} \\ \Omega^{*-1}(\mathbb{A}^n) \ar[r]^{j^*} & \Omega^{*-1}(\Delta^n_{\text{sub}}). } \] Observe that in the construction of the chain homotopies, we use the chain homotopy operator mentioned above. Thus $K'$ restricts to $(\widetilde{A_{DR}})_n$. In consequence, we have the result. \end{proof} Thanks to the extendability and the Poincar\'e lemma for the simplicial cochain algebra $(\widetilde{A_{DR}})_\bullet$, the same argument as in the proof of \cite[Theorem 10.9]{F-H-T}, which gives quasi-isomorphisms between $C^*(X ; {\mathbb Q})$ and the rational de Rham complex $A_{PL}(X)$ for a space $X$, works well in our setting. In fact, replacing in the proof the simplicial cochain algebra $(A_{PL})_\bullet$ of polynomial differential forms with $(\widetilde{A_{DR}})_\bullet$, we have \begin{prop} \label{prop:3.4} Let $K$ be a simplicial set. Then one has a sequence of quasi-isomorphisms \[ \xymatrix@C25pt@R25pt{ C^*(K) & C_{PL}^*(K) \ar[l]^{\nu}_{\cong} \ar[r]^-{\simeq}_-\varphi & (C_{PL} \otimes \widetilde{A_{DR}})^*(K) & \widetilde{A_{DR}^*}(K), \ar[l]_-{\simeq}^-\psi } \] where $\varphi$ and $\psi$ are defined by $\varphi(\gamma)=\gamma \otimes 1$ and $\psi(\omega)=1\otimes \omega$, respectively.
\end{prop} Let $(\Omega_{\text{deRham}}^*)_\bullet$ be the simplicial cochain algebra defined by $(\Omega_{\text{deRham}}^*)({\mathbb A}^n)$ in degree $n$. The affine space ${\mathbb A}^n$ is a manifold diffeomorphic to ${\mathbb R}^n$ with the projection $\pi : {\mathbb A}^n \to {\mathbb R}^n$ defined by $\pi (x_0, x_1, ..., x_n) =(x_1, ..., x_n)$. Observe that the sub-diffeology of ${\mathbb A}^n$ described in Section \ref{sect2} coincides with the diffeology induced by the manifold structure of ${\mathbb A}^n$ mentioned above. We see that the extendability is satisfied and that the Poincar\'e lemma holds for $(\Omega_{\text{deRham}}^*)_\bullet$, and hence the same holds for $(A^*_{DR})_\bullet$. In fact, these results follow from the same arguments as in the proofs of Lemmas \ref{lem:extendability} and \ref{lem:PoincareLemma}. Therefore, Proposition \ref{prop:3.4} is also valid after replacing $(\widetilde{A_{DR}^*})_\bullet$ with the simplicial cochain algebra $(A_{DR}^*)_\bullet$. Thus the result \cite[Proposition 10.5]{F-H-T} enables us to deduce the following corollary. \begin{cor}\label{cor:RHT} For a simplicial set $K$, one has a sequence of quasi-isomorphisms \[ \xymatrix@C25pt@R20pt{ A_{PL}^*(K)\otimes_{\mathbb Q} {\mathbb R} \ar[r]_-{\simeq}^-l & \Omega_{\text{\em deRham}}^*(K) \ar[r]^-{\theta}_-{\cong} &A_{DR}^*(K) \ar[r]^-{(j^*)_*}_-{\simeq} & (\widetilde{A^*_{DR}})(K), } \] where $l$ denotes the map induced by the inclusion $(A^*_{PL})_\bullet \to (\Omega_{deRham}^*)_\bullet$ and $\theta$ is the isomorphism mentioned after Definition \ref{defn:extendableOb}. \end{cor} \subsection{The map $\alpha$ and an integration map}\label{sect4} In this subsection, for a map $\tau : [n] \to [m]$ in $\Delta$, we write $\tau$ also for the affine maps ${\mathbb A}^n \to {\mathbb A}^m$ and $\Delta^n \to \Delta^m$ induced by the non-decreasing map $\tau$.
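For the reader's convenience, we record the standard explicit form of these induced affine maps. For $\tau : [n] \to [m]$, the induced affine map sends the vertex $v_i$ to $v_{\tau(i)}$ and is given in barycentric coordinates by
\[
\tau(t_0, \ldots, t_n) = (s_0, \ldots, s_m), \qquad s_j = \sum_{i \in \tau^{-1}(j)} t_i .
\]
For instance, the $r$th coface $\partial^r : [n-1] \to [n]$ induces the map $(t_0, \ldots, t_{n-1}) \mapsto (t_0, \ldots, t_{r-1}, 0, t_r, \ldots, t_{n-1})$.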
We recall the map $\alpha' : \Omega^*(X) \to \mathsf{Sets^{\Delta^{op}}}(S^\infty_\bullet(X), \widetilde{A_{DR}})=\widetilde{A_{DR}}(S^\infty_\bullet(X))$ defined by $\alpha'(\omega)(\sigma) = \sigma^*(\omega)$ for $\sigma \in S^\infty_l(X)$. Let $j : \Delta_\text{sub}^l \to \mathbb{A}^l$ be the inclusion. By definition, we see that $\sigma = \widetilde{\sigma}\circ j$ for some smooth map $\widetilde{\sigma} : \mathbb{A}^l \to X$. Then we see that $\sigma^*(\omega)=j^*(\widetilde{\sigma}^*\omega)$ and hence $\alpha'$ is well defined. Moreover, the standard calculation allows us to conclude that $\alpha'$ is a morphism of cochain algebras. Observe that $\alpha'$ is natural with respect to diffeological spaces. In fact, for a morphism $f : Y \to X$ in $\mathsf{Diff}$, we have $((f_*)^*\alpha' (\omega))(\sigma_Y)= \alpha'(\omega)(f_*\sigma_Y) =\alpha'(\omega)(f\circ \sigma_Y) = (f\circ \sigma_Y)^*\omega =\sigma_Y^*(f^*\omega) = \alpha'(f^*\omega)(\sigma_Y)$, where $\omega \in \Omega^*(X)$ and $\sigma_Y \in S^\infty_\bullet(Y)$. Thus, the map $\alpha'$ gives rise to a natural transformation $\alpha' : \Omega^*(\text{-}) \to \widetilde{A_{DR}}(S^\infty_\bullet(\text{-}))$. We also define a natural transformation $\alpha : \Omega^*(\text{-}) \to A_{DR}^*(S^D_\bullet(\text{-}))$ in the same way as for $\alpha'$. The natural transformation gives the map $\alpha$ described in Section 2. We here define an integration map $\int_{\Delta^p} : (\widetilde{A_{DR}^p})_p \to \mathbb{R}$ by $\int_{\Delta^p}\omega = \int_{\Delta^p}\eta$, choosing $\eta \in \Omega^p_\text{deRham}(\mathbb{A}^p)$ with $j^*\theta(\eta) = \omega$. The definition of the integration is independent of the choice of the element $\eta$.
In fact, for $\eta$ and $\eta'$ with $j^*\theta (\eta) = \omega = j^*\theta(\eta')$, we see that $j^*\theta (\eta) (\tau)= j^*\theta(\eta') (\tau)$ for the inclusion $\tau : (\Delta^p)^\circ \to \Delta_\text{sub}^p$ from the interior of $\Delta^p$, which is a plot in $\mathcal{D}^{{\mathbb A}^p}$. This implies that $(j\circ \tau)^*\eta = (j\circ \tau)^*\eta'$. Since $\eta$ and $\eta'$ are smooth forms on $\mathbb{A}^p$, it follows that $\eta = \eta'$ on $\Delta^p$. Then $\int_{\Delta^p}\eta = \int_{\Delta^p}\eta'$. We define a map $\int : (\widetilde{A_{DR}^*})_\bullet \to (C_{PL}^*)_\bullet = C^*(\Delta[\bullet])$ by \[ (\int \gamma)(\sigma)= \int_{\Delta^p}\sigma^*\gamma \eqnlabel{add-3} \] for $\gamma \in (\widetilde{A_{DR}^p})_n$, where $\sigma : \Delta^p \to \Delta^n$ is the affine map induced by $\sigma : [p] \to [n]$. Since the affine map $\sigma$ extends to an affine map from $\mathbb{A}^p$ to $\mathbb{A}^n$, it follows that $\sigma^*\gamma$ is in $(\widetilde{A_{DR}^p})_p$. Stokes' theorem enables us to conclude that the map $\int$ is a cochain map. In fact, let $\sigma$ be an element in $\Delta[n]_p$ and $\gamma'$ a form in $(\widetilde{A^{p-1}_{DR}})_n$ with $\sigma^*(\gamma')=j^*\theta(\eta')$ for some $\eta'\in \Omega_{DR}^{p-1}(\mathbb{A}^p)$. Then we have \begin{align*} (\int d\gamma')(\sigma) &= \int_{\Delta^p}\sigma^*(d\gamma') = \int_{\Delta^p}d(\sigma^*\gamma') = \int_{\Delta^p}d(\eta') \\ &= \int_{\partial\Delta^p}\eta' = \sum_i(-1)^i\int_{\Delta^{p-1}} d_i^*\eta' = \sum_i(-1)^i\int_{\Delta^{p-1}} d_i^* \sigma^*\gamma' \\ &= \sum_i(-1)^i\int_{\Delta^{p-1}} (\sigma \circ d_i)^*\gamma' = (d(\int \gamma'))(\sigma). \end{align*} The fourth and fifth equalities follow from Stokes' theorem for a manifold; see \cite[V. Sections 4 and 5]{B} for example. We show that the integration map is compatible with the simplicial structure. Let $\sigma : [p] \to [m]$ and $\tau : [m] \to [n]$ be maps in $\Delta$.
For $\gamma \in (\widetilde{A^{p}_{DR}})_n$, we take a differential form $\eta$ in $\Omega_{DR}^{p}(\mathbb{A}^p)$ with $(\tau\circ \sigma)^*\gamma = j^*\theta (\eta)$. Then it follows that \begin{align*} \tau^*(\int \gamma) (\sigma) &= (\int \gamma) (\tau\circ \sigma) = \int_{\Delta^p}\eta = \int_{\Delta^p}\sigma^*(\tau^*\gamma) =(\int \tau^*\gamma)(\sigma). \end{align*} The fact that $\sigma^*(\tau^*\gamma) = j^*\theta (\eta)$ yields the third equality. In consequence, we see that $\int$ is a morphism of simplicial differential graded {\it modules}. Let $1$ be the unit of $(\widetilde{A_{DR}^*})_\bullet$, which is in $\mathsf{Diff}({\mathbb A}^n, {\mathbb R}) =\Omega_{DR}^0({\mathbb A}^n)=(\widetilde{A_{DR}^0})_n$. Then it follows that $\int 1 =1$ in $(C_{PL}^0)_n$ for $n\geq 0$. This yields the commutative diagram \[ \xymatrix@C45pt@R18pt{ (C_{PL}^*)_\bullet \ar[rd]_{=} \ar[r]^-\varphi & (C_{PL} \otimes \widetilde{A_{DR}})_\bullet^* \ar[d]_(0.4){\text{mult } \! \circ (1\otimes \int)} & (\widetilde{A_{DR}^*})_\bullet \ar[l]_-\psi \ar[ld]^{\int}\\ &(C_{PL}^*)_\bullet. & } \eqnlabel{add-4} \] The argument above in this subsection also works for the simplicial cochain algebra $(A_{DR}^*)_\bullet$. In consequence, in the diagram above, the commutativity remains valid even if $(\widetilde{A_{DR}^*})_\bullet$ is replaced with the simplicial cochain algebra $(A_{DR}^*)_\bullet$; see \cite[Remark, page 130]{F-H-T} for the same triangles as above for the polynomial de Rham complex $A_{PL}^*$. \section{Proofs}\label{sect5} \subsection{Proofs of the main theorem and corollaries}\label{sub4.1} We may write $H_*^D(X)$ for the homology of ${\mathbb Z}{S^D_\bullet(X)}_{\text{sub}}$, the chain complex with coefficients in ${\mathbb Z}$ induced by the simplicial set ${S^D_\bullet(X)}_{\text{sub}}$. The homotopy axiom for the homology $H_*^D(X)$ is now discussed.
Let $f$ and $g$ be smooth maps from $X$ to $Y$ which are smoothly homotopic in the sense of Iglesias-Zemmour \cite{IZ}. Then the homomorphisms $f_*$ and $g_*$ induced on the homology coincide with each other: $f_*=g_* : H_*(X) \to H_*(Y)$. The construction of the chain homotopy is an almost verbatim repetition of the usual one on the singular chain. Observe that the proof uses the fact that $\Delta_\text{sub}^n \times {\mathbb R} \cong (\Delta^n \times {\mathbb R})_\text{sub}$ as a diffeological space and the following lemma; see \cite[1.10 Theorem]{V} for example and the sequence (4.1) below. \begin{lem} \label{lem:acyclic} If $X$ is a convex set of ${\mathbb R}^k$ with sub-diffeology, then the $n$th homology $H_n(S^D_\bullet(X))$ is trivial for $n >0$. The same assertion is valid for the functors ${\mathbb Z}{S^D_\bullet( \text{-})}_{\text{\em sub}}$ and ${\mathbb Z}{S^\infty_\bullet(\text{-})}$. \end{lem} \begin{proof} For a smooth simplex $\sigma$ in ${S^D_n(X)}$ and a point $v \in X$, we define a {\it cone} $K_v(\sigma)$ by \[ K_v(\sigma)(t_0 ,...., t_{n+1}) = \begin{cases} \rho(1- t_{0})\sigma(\frac{t_1}{1-t_{0}}, ..., \frac{t_{n+1}}{1-t_{0}}) + \tau(1-t_{0})v & \text{for} \ t_{0} \neq 1 \\ v & \text{for} \ t_{0} =1, \end{cases} \] where $\rho$ is a cut-off function with $\rho(0)= 0$, $\rho(1)=1$ and $\tau$ is the smooth function defined by $\tau= 1- \rho$. We see that $K_v(\sigma)$ is in $S^D_{n+1}(X)$. By extending $K_v$ linearly, we have a homomorphism $K_v : {\mathbb Z}{S^D_n(X)} \to {\mathbb Z}{S^D_{n+1}(X)}$. This gives a homotopy between the identity and the zero map; see the proof of \cite[1.8 Lemma]{V}. The same argument as above works in ${S^D_n(X)}_{\text{sub}}$. As for the functor ${S^\infty_\bullet(\text{-})}$, we can define a cone $K_v : {\mathbb Z}{S^\infty_n(X)} \to {\mathbb Z}{S^\infty_{n+1}(X)}$ with an extension $\widetilde{\sigma} : {\mathbb A}^n \to X$ for $\sigma : \Delta_{\text{sub}}^n \to X$.
This gives a homotopy between the identity and the zero map. \end{proof} In order to apply the method of acyclic models for proving the main theorem, we need the following result. \begin{lem} \label{lem:representable} Let ${\mathcal M}$ be the set of convex sets of ${\mathbb R}^k$ for $k \geq 0$. Then the three functors ${\mathbb Z}{S^D_n( \text{-})}$, ${\mathbb Z}{S^D_n( \text{-})}_{\text{\em sub}}$ and ${\mathbb Z}{S^\infty_n(\text{-})}$ are representable for ${\mathcal M}$ in the sense of Eilenberg-Mac Lane for each $n$. \end{lem} \begin{proof} Let $\widetilde{{\mathbb Z}{S^\infty_n}}(X)$ be the free abelian group generated by $\amalg_{M \in {\mathcal M}}({\mathbb Z}S^\infty_n(M) \times \text{Hom}_{\mathsf{Diff}}(M, X))$. We define a map $ \Phi : \widetilde{{\mathbb Z}{S^\infty_n}}(X) \to {\mathbb Z}S^\infty_n(X) $ by $\Phi (m, \phi) = \phi_*m = \phi\circ m$. For a simplex $m \in S^\infty_n(X)$, one has an extension $\widetilde{m} : {\mathbb A}^n \to X$ by definition. Define a map $\Psi : {\mathbb Z}S^\infty_n(X) \to \widetilde{{\mathbb Z}{S^\infty_n}}(X)$ by $\Psi(m)= (\iota, \widetilde{m})$ on generators, where $\iota : \Delta_{\text{sub}}^n \to {\mathbb A}^n$ is the inclusion. It is readily seen that $\Phi\circ \Psi = id$. Therefore the functor ${\mathbb Z}{S^\infty_n(\text{-})}$ is representable for ${\mathcal M}$. Observe that the inclusion $\iota$ is in $S^\infty_n({\mathbb A}^n)$. Since the identity maps $id_{{\mathbb A}^n}$ and $id_{\Delta_{\text{sub}}^n}$ belong to ${S^D_n({\mathbb A}^n)}$ and ${S^D_n(\Delta_{\text{sub}}^n)}_{\text{sub}}$, respectively, it follows from the same argument as above that the functors ${\mathbb Z}{S^D_n( \text{-})}$ and ${\mathbb Z}{S^D_n( \text{-})}_{\text{sub}}$ are representable for ${\mathcal M}$. This completes the proof. \end{proof} We consider the excision axiom for the homology of ${S^D_\bullet(X)}_{\text{sub}}$.
Kihara's consideration in the proof of \cite[Proposition 3.1]{Kihara} enables us to regard the chain complex ${\mathbb Z}{S^D_\bullet(X)}_{\text{sub}}$ as a subcomplex of the singular chain complex $C_*(DX)$, where $D: \mathsf{Diff} \to \mathsf{Top}$ denotes the functor mentioned in Appendix B. In fact, \cite[Lemma 3.16]{C-S-W} implies that $D(\Delta_\text{sub}^n)$ is the simplex $\Delta^n$ with the standard topology. We observe that for ${\mathbb R}^n$ with the diffeology of smooth plots, $D({\mathbb R}^n)$ is the Euclidean space. Thus for a diffeological space $X$, the unit $X \to CDX$ of the adjunction yields the sequence of inclusions \[ \mathsf{Diff}(\Delta_\text{sub}^n, X) \to \mathsf{Diff}(\Delta_\text{sub}^n, CDX) \cong \mathsf{Top}(D(\Delta_\text{sub}^n), DX) = \mathsf{Top}(\Delta^n, DX). \eqnlabel{add-5} \] Observe that ${S^D_n(X)}_\text{sub} = \mathsf{Diff}(\Delta_\text{sub}^n, X)$. Then we can prove the excision axiom by applying the barycentric subdivision argument. Indeed, the subdivision map $Sd : {S^D_n(X)}_\text{sub} \to {S^D_n(X)}_\text{sub}$ is defined by restricting the usual one for the singular chain complex, which is chain homotopic to the identity. It turns out that the relative homology $H_*^D(X, A)$ satisfies the excision axiom for the $D$-topology; that is, the inclusion $i : (X-U, A-U) \to (X, A)$ induces an isomorphism on the relative homology if the closure of $U$ is contained in the interior of $A$ with respect to the $D$-topology of $X$; see \cite[IV, Section 17]{B} for example. Thus we also see that the (co)homology of $S^D_\bullet(X)_{\text{sub}}$ has the Mayer-Vietoris exact sequence. More observations concerning cochain complexes in the diagram in Theorem \ref{thm:main} are given. \smallskip \noindent I) The method of acyclic models \cite[Section 8]{E-M} implies that there exists a chain homotopy equivalence $l : {\mathbb Z}S^D_\bullet(X)_{\text{sub}} \stackrel{\simeq}{\longrightarrow} {C_\text{cube}}_*(X)$.
This yields a cochain homotopy equivalence $C^*(S^D_\bullet(X)_{\text{sub}}) \stackrel{\simeq}{\longleftarrow} C_\text{cube}^*(X) $, which induces a morphism of algebras on the cohomology; see also \cite[Theorem 8.2]{SNPA} for example. \smallskip \noindent II) The restriction map $j^* : {\mathbb Z}S^D_\bullet(X) \to {\mathbb Z}S^D_\bullet(X)_{\text{sub}}$ has a homotopy inverse $k$ in the category of chain complexes. This follows from the method of acyclic models \cite[Theorems Ia and Ib]{E-M} with Lemmas \ref{lem:acyclic} and \ref{lem:representable}. Then the map $k^* : C^*(S^D_\bullet(X)) \to C^*(S^D_\bullet(X)_{\text{sub}})$ induces an isomorphism of {\it algebras} on the cohomology. In fact, its inverse, which is induced by $(j^*)^* : C^*(S^D_\bullet(X)_{\text{sub}}) \to C^*(S^D_\bullet(X))$, is a morphism of algebras. \smallskip \noindent III) The homotopy commutativity of the square in Theorem \ref{thm:main} also follows from the method of acyclic models for cochain complexes; see Appendix A for more details. \begin{proof}[Proof of the first assertion in Theorem \ref{thm:main}] The considerations in I), II), III) and the commutative diagram (3.2) allow us to deduce the first part. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:main}] By Theorem \ref{thm:main}, we see that $\text{mult}\circ (1\otimes \int)$ is a quasi-isomorphism. The commutativity of the right triangle implies that the integration map is also a quasi-isomorphism. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:main2}] The assertion (i) follows from II) above and the fact that the integration map $\int$ induces a morphism of algebras. The first assertion in Theorem \ref{thm:main} yields (ii). \end{proof} \begin{proof}[Proof of the latter half of Theorem \ref{thm:main}] We first observe that the Poincar\'e lemma for the cohomology of the de Rham complex in the sense of Souriau and the homotopy axiom hold for diffeological spaces; see \cite{IZ}.
By Corollary \ref{cor:main}, it suffices to show that the composite $v:=\int \circ \ \alpha$ induces an isomorphism on the cohomology. In the case of a CW-complex $K$, we can use the Mayer-Vietoris exact sequence argument to prove the result; see \cite{I-I, Haraguchi}. In fact, we have a partition of unity of $CK$ with respect to the $D$-topology; see Appendix B for the functor $C : \mathsf{Top} \to \mathsf{Diff}$. Suppose that $(X, {\mathcal D}^X)$ is a manifold. Then the usual argument as in \cite[V. \S9]{B} enables us to deduce that the map $H(v)$ induced by $v$ on the cohomology is an isomorphism. By definition, a $p$-stratifold $(S, {\mathcal C})$ is constructed from manifolds with boundaries via an attaching procedure; see Appendix B. In general, a stratifold admits a partition of unity; see \cite{Kreck}. Moreover, we see that an open set of the underlying topological space $S$ is a $D$-open set of the diffeology $k(S, {\mathcal C})$; see Lemma \ref{lem:D-top}. Thus the Mayer-Vietoris sequence argument also applies to show that $H(v)$ is an isomorphism in the case of a $p$-stratifold. \end{proof} \begin{rem}\label{rem:An_example} Let $(X, {\mathcal D}^X)$ be a {\it fibrant} diffeological space in the sense of Christensen and Wu \cite[Definition 4.7]{C-W}. Let $\pi_*^D(X, x)$ and $\pi_*(S^D_\bullet(X), x)$ denote the smooth homotopy group of $X$ and the simplicial homotopy group of $S^D_\bullet(X)$, respectively. Then we have a sequence of groups $\xymatrix@C20pt@R20pt{ \pi_*^D(X, x) \ar[r]^-\eta_-{\cong} & \pi_*(S^D_\bullet(X), x) \ar[r]^-h & H_*(S^D_\bullet(X)) } $ for $* \geq 1$, where $\eta$ and $h$ are the isomorphism in \cite[Theorem 4.45]{C-W} and the simplicial Hurewicz homomorphism, respectively. Let $T^2_\theta$ be the irrational torus of slope $\theta$.
With the result \cite[8.38]{IZ} which asserts that $\pi_1(T^2_\theta, x)\cong {\mathbb Z}\oplus {\mathbb Z}$, Corollary \ref{cor:main} in particular yields a sequence of isomorphisms \begin{align*} H^1(A_{DR}(S^D_\bullet(T^2_\theta))) &\cong H^1(S^D_\bullet(T^2_\theta)) \cong \text{Hom}_{\mathbb Z}(H_1(S^D_\bullet(T^2_\theta)), {\mathbb R}) \\ & \cong \text{Hom}_{\mathbb Z}(\pi_1(T^2_\theta, x)/[\pi_1, \pi_1], {\mathbb R}) \cong {\mathbb R}\oplus {\mathbb R}, \end{align*} where $\pi_1(T^2_\theta, x)/[\pi_1, \pi_1]$ denotes the abelianization; see \cite[(3.11)]{Cu}. Observe that the first singular cohomology of $D(T^2_\theta)$ is trivial because $\pi_1(D(T^2_\theta), x)=0$; see \cite[8.21]{IZ} and \cite[Example 3.18]{C-W}. Moreover, it follows from the computation in \cite[Exercise 116]{IZ} that $H^1(\Omega^*(T^2_\theta)) \cong {\mathbb R}$. This implies that in general, the map $\alpha$ in Theorem \ref{thm:main} does not induce an isomorphism on the cohomology. \end{rem} \begin{rem} Let $(S, {\mathcal C})$ be a $p$-stratifold. By virtue of Lemma \ref{lem:D-top}, we see that the map $i : D(S) \to S$ is in $\mathsf{Top}$. Consider the composite $\iota : H_*(S^D(k(S, {\mathcal C}))_\bullet) \to H_*(D(S)) \to H_*(S)$ to the singular homology, which is induced by the map $i$ and the inclusion mentioned in (4.1); see Appendix B for the functor $k$. If $S$ is a manifold, then $\iota$ is an isomorphism. This follows from the same argument as in \cite[9.5 Lemma]{B}. Then the usual argument with the Mayer-Vietoris sequence enables us to conclude that $\iota$ is an isomorphism for every parametrized stratifold; see Remark \ref{rem:CDC} below for the cases of CW-complexes and more general ones. \end{rem} \subsection{Applications of the integration map in the main theorem}\label{sect6} In this section, we describe applications of the integration map on $A^*_{DR}(S^D_\bullet(X))$ mentioned in Theorem \ref{thm:main}.
Let $j^* : S_n^D(X) \to S_n^\infty(X)$ and $j^* : (A^*_{DR})_n \to (\widetilde{A^*_{DR}})_n$ be the restriction maps induced by the inclusion $j : \Delta_{\text{sub}}^n \to {\mathbb A}^{n}$. The naturality of the integration map $\int$ in the theorem implies that the map $\alpha'$ described in Section \ref{sect2} is an extension of $\alpha$ on the cohomology. \begin{prop}\label{prop:alpha_beta} One has the diagram \[ \xymatrix@C40pt@R16pt{ & \mathsf{Sets^{\Delta^{op}}}(S_\bullet^\infty(X), (\widetilde{A^*_{DR}})_\bullet) & \hspace{-1.5cm}= \widetilde{A^*_{DR}}(S^\infty_\bullet(X)) \\ \Omega^* (X) \ar[ur]^{\alpha'} \ar[dr]_{\alpha}& \mathsf{Sets^{\Delta^{op}}}(S_\bullet^\infty(X), (A^*_{DR})_\bullet) \ar[u]_{(j^*)_*}^{} \ar[d]^{(j^*)^*} & \hspace{-1.5cm}= A^*_{DR}(S^\infty_\bullet(X))\\ & \mathsf{Sets^{\Delta^{op}}}(S_\bullet^D(X), (A^*_{DR})_\bullet) & \hspace{-1.5cm}= A^*_{DR}(S^D_\bullet(X)) } \] in which $(j^*)_*$ and $(j^*)^*$ are quasi-isomorphisms. Moreover, the diagram is commutative on the cohomology. \end{prop} Observe that a natural map from $\Omega^*(X)$ to $A^*_{DR}(S^\infty_\bullet(X))$ cannot be defined in such a way as to give the map $\alpha$. However, Proposition \ref{prop:3.4} and the commutative diagram (3.2) imply that the integration $\int : (A_{DR})_\bullet \to (C_{PL})_\bullet$ in Section \ref{sect4} gives rise to a quasi-isomorphism $\int_* : A_{DR}^*(K) \to C_{PL}^*(K) \cong C^*(K)$ of differential graded modules for {\it each} simplicial set $K$, which induces an isomorphism of algebras on the cohomology. This is a key to proving Proposition \ref{prop:alpha_beta}.
\begin{proof}[Proof of Proposition \ref{prop:alpha_beta}] We consider the diagram \[ \xymatrix@C40pt@R16pt{ & \mathsf{Sets^{\Delta^{op}}}(S_\bullet^\infty(X), (\widetilde{A^*_{DR}})_\bullet) \ar[rd]^{\int_*}_(0.4){\simeq} & \\ \Omega^* (X) \ar[ur]^{\alpha'} \ar[dr]_{\alpha}& \mathsf{Sets^{\Delta^{op}}}(S_\bullet^\infty(X), (A^*_{DR})_\bullet) \ar[u]_{(j^*)_*}^{} \ar[d]^{(j^*)^*} \ar[r]^{\int_*}_{\simeq} & \mathsf{Sets^{\Delta^{op}}}(S_\bullet^\infty(X), C_{PL}) \ar[d]^{(j^*)^*}\\ & \mathsf{Sets^{\Delta^{op}}}(S_\bullet^D(X), (A^*_{DR})_\bullet) \ar[r]_{\int_*}^{\simeq} &\mathsf{Sets^{\Delta^{op}}}(S_\bullet^D(X), C_{PL}). } \eqnlabel{add-20} \] The method of acyclic models implies that the restriction $(j^*) : {\mathbb Z}S_\bullet^D(X) \to {\mathbb Z}S_\bullet^\infty(X)$ is a quasi-isomorphism and then so is the map $(j^*)^*$ on the right-hand side. It follows that the center triangle and square are commutative by the definition of the integration map; see (3.1). The commutative diagram (3.2) implies that the integration maps are quasi-isomorphisms. Then we see that the maps $(j^*)_*$ and $(j^*)^*$ on the left-hand side are quasi-isomorphisms. Moreover, a direct calculation shows that $(j^*)^*\circ \int_* \circ \alpha' = \int_*\circ \alpha$. Therefore, the left triangle is commutative on the cohomology. This completes the proof. \end{proof} Let $X$ be a Chen space in the sense of \cite[Definition 1.2.1]{C}; see also \cite[Definition 2.5]{Stacey}.
We here define the map $\beta : \Omega^* (X)_{\text{Chen}} \to \Omega^*(\text{So} X)$ in Proposition \ref{prop:Chen} by \[ \beta(\omega) =\big\{ \{(\rho^* \omega)_\psi \}_{\text{$\psi \in${Charts}($\rho$)}} \big\}_{\rho \in \mathcal{D}^{So X}}, \] where $\text{Charts}(\rho)$ denotes the set of appropriate charts of the domain of $\rho$ and $\rho^* : \Omega^* (X)_{\text{Chen}} \to \Omega^* (U)_{\text{Chen}} \cong \wedge^*(U) = \Omega^*_{\text{deRham}}(U)$ is the map defined by $(\rho^*\omega)_\psi = \omega_{\rho\psi}$ for any chart $\psi : C \to U$ of the domain $U$ of $\rho$. Observe that $\{(\rho^* \omega)_\psi \}_{\text{$\psi \in${Charts}($\rho$)}}$ is an equivalence class in $\Omega^*_{\text{deRham}}(U)$; see \cite{B-T} for example. \begin{lem} The map $\beta$ is a well-defined morphism of DGA's. \end{lem} \begin{proof} The differential graded algebra structure of $\Omega^* (C)_{\text{Chen}}$ is defined by that of the usual de Rham complex $\Omega_{\text{deRham}}^*(C)$, where $C$ is a convex set of ${\mathbb R}^n$ for some $n\geq 0$. Then we see that $\beta$ is a morphism of DGA's if the map is well defined. In order to show the well-definedness, it suffices to prove that for any $\omega \in \Omega^p (X)_{\text{Chen}}$ and each smooth map $u : V \to U$, the diagram \[ \xymatrix@C25pt@R15pt{ {\mathcal D}^{\text{So}X}(U) \ar[r]^-{\beta(\omega)} \ar[d]_-{u^*} & \wedge^p(U) \ar[d]^-{u^*} \\ {\mathcal D}^{\text{So}X}(V) \ar[r]_-{\beta(\omega)} & \wedge^p(V) } \] is commutative. This follows from a direct calculation. In fact, it follows that $u^*(\beta(\omega)_\rho) = u^*\{\omega_{\rho\psi}\}_\psi =\{(\psi^{-1}u\varphi)^*\omega_{\rho\psi}\}_\varphi$, where the maps $\varphi : C' \to V$ are appropriate charts of $V$. On the other hand, we see that $\beta(\omega)_{u^*\rho} = \beta(\omega)_{\rho u} = \{((\rho u)^*\omega)_\varphi\}_\varphi = \{\omega_{\rho u \varphi}\}_{\varphi}$.
By definition, the $p$-form $\omega \in \Omega^p (X)_{\text{Chen}}$ satisfies the condition that $(\psi^{-1}u\varphi)^*\omega_{\rho\psi} = \omega_{\rho\psi\psi^{-1}u\varphi} = \omega_{\rho u\varphi}$. We have $u^*(\beta(\omega)_{\rho})=\beta(\omega)_{u^*\rho}$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:Chen}] Let $C^*_{\text{cube}, I}(LM)$ be the cubical cochain complex whose $p$-cubes are smooth maps to $LM$ from $I^p$, which is regarded as a Chen subspace of ${\mathbb R}^p$. We recall the morphism $\Gamma : \Omega^*(M)\otimes B(\Omega^*(M)) \to C^*_{\text{cube}, I}(LM)$ of differential graded modules constructed via the appropriate pairing in \cite[(2.2)]{C2}. Here we may use the normalized bar complex as $\Omega^*(M)\otimes B(\Omega^*(M))$; see \cite[(2.1)]{C_bar}. Then we see that the composite $i\circ \Gamma : \Omega^*(M)\otimes \overline{B}(A) \to \Omega^*(M)\otimes B(\Omega^*(M)) \to C^*_{\text{cube}, I}(LM)$ is a quasi-isomorphism; see the proof of \cite[Theorem 0.1]{C2}. Consider the diagram \[ \xymatrix@C16pt@R10pt{ A^*_{DR}(S^D_\bullet(\text{So}(LM))) \ar[dd]_-{\int_*}^-{\simeq} & \Omega^*(\text{So}(LM)) \ar[l]_-{\alpha} & {\mathcal Chen} (M) \ar[l]_-{\beta} & \Omega^*(M)\otimes \overline{B}(A) \ar[l]_-{\mathsf{It}}^-{\cong} \ar[dd]^-i_{\simeq} \ar[ldd]_(0.5){\simeq}\\ & & \Omega^*(LM)_{\text{Chen}} \ar[lu]^{\beta} & \\ C^*(S^D_\bullet(\text{So}(LM))) & & C^*_{\text{cube}, I}(LM) \ar[ll]^-{l'}_-{\simeq} & \Omega^*(M)\otimes B(\Omega^*(M)) \ar[l]^-{\Gamma} \ar[ul]_(0.4){\mathsf{It}} } \] in which the right triangles are commutative. Here $l'$ is a quasi-isomorphism obtained by the method of acyclic models whose models consist of the Chen spaces $\mathbb{A}^n$ and $I^n$ for $n\geq 0$. Observe that $\text{So}(\mathbb{A}^n)= \mathbb{A}^n$ by the standard argument and hence the identity map $\mathbb{A}^n \to \text{So}(\mathbb{A}^n)$ is in $\mathsf{Diff}$.
Thus the method of acyclic models is applicable to $C^*_{\text{cube}, I}( \text{-} )$ and also $C^*(S^D_\bullet(\text{So}( \text{-} )))$. We have a quasi-isomorphism $l' : C^*_{\text{cube}, I}(X) \to C^*(S^D_\bullet(\text{So}(X)))$ for each Chen space $X$. The functor $\Omega^*(\text{-})\otimes B(\Omega^*( \text{-}))$ is acyclic for the models ${\mathbb A}^n$ with $n\geq 0$. Moreover, the functor $C^*(S^D_\bullet(L(\text{-})))$ is corepresentable; see Appendix A below. In fact, the result follows from the same argument as that after Theorem \ref{thm:AcyclicModels}. Then Theorem \ref{thm:AcyclicModels} implies that the left square is commutative up to homotopy. A spectral sequence argument implies that the inclusion $i$ is a quasi-isomorphism. Therefore the map $\Gamma$ is a quasi-isomorphism. We see that the composite $\alpha\circ \beta\circ \mathsf{It}$ in the first row is a quasi-isomorphism. We observe that the iterated integral map $\mathsf{It}$ to the Chen complex is an isomorphism; see \cite[Theorem 4.2.1]{C}. This yields the latter half of the assertions. \end{proof} \section{Chen's iterated integral map in diffeology}\label{sect7} We recall the iterated integrals due to Chen \cite{C}, modifying them to fit the diffeological setting. Let $N$ be a diffeological space and $\rho : {\mathbb R} \to I$ the cut-off function. Then a $p$-form $u$ on the diffeological space $I\times N$ is called an $\Omega^p(N)$-{\it valued function on} $I$ if for any plot $\psi : U \to N$ of $N$, the $p$-form $u_{\rho \times \psi}$ on ${\mathbb R} \times U$ is of the type $ \sum a_{i_1\cdots i_p}(t, \xi)d\xi_{i_1}\wedge \cdots \wedge d\xi_{i_p}, $ where $(\xi_1, ..., \xi_n)$ denotes the fixed coordinates of $U$. For such an $\Omega^p(N)$-valued function $u$ on $I$, we define the integration $\int_0^1u \ dt \in \Omega^p(N)$ by \[ (\int_0^1u \ dt)_\psi = \sum (\int_0^1a_{i_1\cdots i_p}(t, \xi) \ dt) d\xi_{i_1}\wedge \cdots \wedge d\xi_{i_p}.
\] Each $p$-form $u$ has the form $u = dt\wedge ((\partial /\partial t) \rfloor u) + u''$, where $(\partial /\partial t) \rfloor u$ and $u''$ are an $\Omega^{p-1}(N)$-valued function and an $\Omega^{p}(N)$-valued function on $I$, respectively. Let $F : I \times N^I \to N^I$ be the homotopy defined by $F(t, \gamma)(s) = \gamma(ts)$. The Poincar\'e operator $\int_F : \Omega(N^I) \to \Omega(N^I)$ associated with the homotopy $F$ is defined by $\int_F v = \int_0^1((\partial /\partial t) \rfloor F^*v)dt$. Moreover, for forms $\omega_1$, ..., $\omega_r$ on $N$, the {\it iterated integral} $\int \omega_1\cdots \omega_r$ is defined by $\int \omega_1 = \int_F\varepsilon_1^*\omega_1$ and \[ \int \omega_1\cdots \omega_r = \int_F\{J(\int \omega_1\cdots \omega_{r-1}) \wedge \varepsilon_1^*\omega_r\}, \] where $\varepsilon_i$ denotes the evaluation map at $i$, $Ju =(-1)^{\deg u}u$ and $\int \omega_1\cdots \omega_r =1$ if $r=0$; see \cite[Definition 1.5.1]{C}. Observe that the operator $\int_F$ is of degree $-1$ and hence $\int \omega_1\cdots \omega_r$ is of degree $\sum_{1\leq i \leq r}(\deg \omega_i -1)$. Choosing a decomposition $\Omega^1(N) = A^1\oplus d\Omega^0(N)$, we obtain the DG subalgebra $A$ of $\Omega(N)$ which satisfies the condition that $A^p = \Omega^p(N)$ for $p> 1$ and $A^0={\mathbb R}$. The DGA $A$ gives rise to the normalized bar complex $B(\Omega(N), A, \Omega(N))$; see \cite[\S 4.1]{C}. Consider the pullback diagram \[ \xymatrix@C25pt@R20pt{ E_f \ar[r]^-{\widetilde{f}} \ar[d]_{p_f} & N^I \ar[d]^{(\varepsilon_0, \varepsilon_1)}\\ M \ar[r]_-{f} & N\times N } \eqnlabel{add-6} \] of $(\varepsilon_0, \varepsilon_1) : N^I \to N\times N$ along a smooth map $f : M \to N\times N$. In what follows, we assume that the cohomology $H^*(A_{DR}(S^D_\bullet(N)))$ is of finite type. We write $\overline{B}(A)$ for $B({\mathbb R}, A, {\mathbb R})$.
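Before introducing the iterated integral map on the bar complex, we record a sanity check which uses only the definitions above and no additional assumptions: for a single $1$-form, the iterated integral recovers the classical line integral. Since $\varepsilon_1(F(t, \gamma)) = \gamma(t)$, unwinding the Poincar\'e operator plot-wise gives

```latex
% Iterated integral of length one: for a 1-form \omega on N and a
% smooth path \gamma \in N^I, the degree-zero form
% \int\omega = \int_F \varepsilon_1^*\omega evaluates at \gamma as
\Big(\int \omega\Big)(\gamma)
  \;=\; \int_0^1 \big((\partial/\partial t)\rfloor F^*\varepsilon_1^*\omega\big)\, dt
  \;=\; \int_0^1 \omega_{\gamma(t)}\big(\dot{\gamma}(t)\big)\, dt ,
```

so that $\int \omega$ is the usual integration-along-paths functional on $N^I$.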
Then we have a map \[ \mathsf{It} : \Omega(M)\otimes_{\Omega(N)\otimes\Omega(N)}B(\Omega(N), A, \Omega(N))\cong \Omega(M) \otimes_f \overline{B}(A) \to \Omega(E_f) \] defined by $\mathsf{It} (v\otimes [\omega_1| \cdots | \omega_r])= p_f^*v\wedge \widetilde{f}^*\int \omega_1\cdots \omega_r$. Observe that the differential on $\Omega(M) \otimes_f \overline{B}(A)$ is induced by that of the source under this isomorphism. Since $\rho(0)=0$ and $\rho(1)=1$ for the cut-off function $\rho$, it follows that the result \cite[Lemma 1.4.1]{C} remains valid. Then the formula of iterated integrals with respect to the differential in \cite[Proposition 1.5.2]{C} implies that $\mathsf{It}$ is a well-defined morphism of differential graded $\Omega(N)$-modules. \begin{rem} The cut-off function $\rho$ does not satisfy the formula $\rho(s)\rho(t) = \rho(st)$ for $s, t \in {\mathbb R}$ in general. Then we do not have the same assertion as that of \cite[Lemma 1.5.1]{C} which allows us to deduce a constructive definition of the iterated integral as in (2.1); see \cite[page 840]{C}. \end{rem} \begin{thm}\label{thm:general_main} Suppose that in the pullback diagram \text{\em (5.1)}, the diffeological space $N$ is simply connected and $f$ is an induction; that is, the map $f : M \stackrel{\cong}{\to} f(M)$ is a diffeomorphism, where $f(M)$ is a diffeological space endowed with the subdiffeology. Assume further that the cohomology $H^*(S^D_\bullet(M))$ is of finite type. Then the composite $\alpha \circ \mathsf{It} : \Omega^*(M)\otimes_f \overline{B}(A) \to \Omega(E_f) \to A^*_{DR}(S^D_\bullet(E_f))$ is a quasi-isomorphism of $\Omega^*(M)$-modules. \end{thm} \begin{proof}[Proof of Theorem \ref{thm:the_second_main}] The diagonal map $\Delta : M \to M\times M$ is an induction. Then the result follows from Theorem \ref{thm:general_main}.
\end{proof} \begin{rem}\label{rem:highlight} Theorem \ref{thm:general_main} is regarded as a generalization of the result \cite[Theorem 0.1]{C2}, in which $M$ and $N$ are assumed to be manifolds while $f$ is allowed to be a more general smooth map. The theorem due to Chen asserts that the homology of the bar complex $\Omega^*(M)\otimes_f \overline{B}(A)$ is isomorphic to the cubical cohomology of the Chen space $E_f$ via the pairing with the iterated integrals and smooth cubical chains; see \cite[(2.2)]{C2}. \end{rem} The rest of this section is devoted to proving Theorem \ref{thm:general_main}. We here recall a local system over a simplicial set and its global sections. Let $K$ be a simplicial set. We regard $K$ as a category whose objects are simplicial maps $\sigma : \Delta[p] \to K$ for $p \geq 0$ and whose morphisms $\alpha : \tau \to \sigma$ are simplicial maps $\alpha : \Delta[q] \to \Delta[p]$ with $\tau = \sigma \circ \alpha$, where $\tau : \Delta[q] \to K$ and $\sigma : \Delta[p] \to K$. Then a {\it local system $F$ over $K$ of differential coefficients} is defined to be a contravariant functor from $K$ to the category $\mathsf{DGAs}$ of unital differential graded algebras with non-negative grading which satisfies the condition that the map $F(\alpha) : F_{\sigma}\to F_{\alpha^*\sigma}$ is a quasi-isomorphism for each $\alpha$ in the category $K$; see \cite[Definition 12.15]{H}. Observe that such a local system $F$ is an object of the functor category $\mathcal{E} := \mathsf{DGAs}^{K^{\text{op}}}$. We define the space $\Gamma(F)$ of global sections of $F$ by $\Gamma(F) := \text{Hom}_{\mathcal{E}}({\mathbb R}, F)$, where ${\mathbb R}$ denotes the DGA concentrated in degree zero with trivial differential. There are at least two kinds of {\it fibrations} in the category $\mathsf{Diff}$.
One of them is the fibration $f : X \to Y$ in the sense of Christensen and Wu \cite{C-W}, namely a smooth map which induces a fibration $S^D(f) : S^D_\bullet(X) \to S^D_\bullet(Y)$ of simplicial sets in $\mathsf{Sets}^{\Delta^{op}}$. An important example of such a fibration is a diffeological bundle in the sense of Iglesias-Zemmour \cite[Chapter 8]{IZ} whose fibre is fibrant; see \cite[Proposition 4.24]{C-W}. Another type concerns mapping spaces with evaluation maps. For example, with the interval $I=[0,1]$, the map $(\varepsilon_0, \varepsilon_1) : N^I \to N\times N$ defined by the evaluation maps $\varepsilon_i$ at $i$ is a fibration in $\mathsf{Top}$ if $N^I$ is endowed with the compact-open topology; that is, the map $(\varepsilon_0, \varepsilon_1)$ enjoys the right lifting property with respect to the inclusion $\Delta^n = \Delta^n\times \{0\} \to \Delta^n\times I$ for $n \geq 0$. However, the corresponding smooth lifting problem does not seem to be solvable in general for such an evaluation map in $\mathsf{Diff}$. Thus, in order to prove Theorem \ref{thm:the_second_main}, we need to reconstruct the Leray-Serre spectral sequences and algebraic models for path spaces in the diffeological framework. In what follows, we may write $A^*_{DR}(X)$ and $A^*(X)$ for $A^*_{DR}(S^D_\bullet(X)_{\text{sub}})$ and $A^*_{DR}(S^D_\bullet(X))$, respectively. Observe that the natural map $(j^*) : S^D_\bullet(X) \to S^D_\bullet(X)_{\text{sub}}$ induced by the inclusion $j : \Delta^n_{\text{sub}} \to {\mathbb A}^{n}$ gives rise to a natural quasi-isomorphism \[ (j^*)^* : A^*_{DR}(X) \to A^*(X). \eqnlabel{add-7} \] This follows from II) in Section \ref{sub4.1}, Proposition \ref{prop:3.4} and the naturality of the integration map; see the proof of Proposition \ref{prop:alpha_beta}. The argument in \cite[Sections 5, 6 and 7]{Grivel} due to Grivel enables us to obtain the Leray-Serre spectral sequence with a local system for a fibration and the Eilenberg-Moore spectral sequence for a fibre square.
\begin{thm} \label{thm:LSSS} Let $\pi : E \to M$ be a smooth map between path-connected diffeological spaces with path-connected fibre $F$ which is \text{\em i)} a fibration in the sense of Christensen and Wu or \text{\em ii)} the pullback of the evaluation map $(\varepsilon_0, \varepsilon_1) : N^I \to N\times N$ for a connected diffeological space $N$ along an induction $f : M \to N\times N$. Suppose that the cohomology $H(A^*(M))$ is of finite type. Then one has the Leray-Serre spectral sequence $\{_{LS}E_r^{*,*}, d_r\}$ converging to $H(A^*(E))$ as an algebra with an isomorphism \[ _{LS}E_2^{*,*}\cong H^*(M, \mathcal{H}^*(F)) \] of bigraded algebras, where $H^*(M, \mathcal{H}^*(F))$ is the cohomology with the local coefficients $\mathcal{H}^*(F)=\{H(A^*(F_c))\}_{c\in S^D_0(M)}$; see Lemma \ref{lem:DiffCoeff} below. \end{thm} \begin{thm} \label{thm:EMSS} Let $\pi : E \to M$ be the smooth map as in Theorem \ref{thm:LSSS} with the same assumption, $\varphi : X \to M$ a smooth map from a connected diffeological space $X$ for which the cohomology $H(A^*(X))$ is of finite type and $E_\varphi$ the pullback of $\pi$ along $\varphi$. Suppose further that $M$ is simply connected in case \text{\em i)} and $N$ is simply connected in case \text{\em ii)}. Then one has the Eilenberg-Moore spectral sequence $\{_{EM}E_r^{*,*}, d_r\} $ converging to $H(A^*(E_\varphi))$ as an algebra with an isomorphism \[ _{EM}E_2^{*,*} \cong \text{\em Tor}_{H(A^*(M))}^{*,*}(H(A^*(X)), H(A^*(E))) \] as a bigraded algebra. \end{thm} \begin{proof}[Proofs of Theorems \ref{thm:LSSS} and \ref{thm:EMSS}] For the case i), the Leray-Serre spectral sequence and the Eilenberg-Moore spectral sequence are obtained by applying the same argument as in the proofs of \cite[5.1 Theorem and 7.3 Theorem]{Grivel} to the functor $A^*( \ ):=A^*_{DR}(S^D_\bullet( \ ))$. We consider the case ii).
By replacing $A^*(\ )$ with $A^*_{DR}( \ )$, Theorem \ref{thm:LSSS} follows from the argument of the proof of Proposition \ref{prop:KSextension} below. By virtue of the result \cite[20.6]{H} and Proposition \ref{prop:KSextension}, we have $H^*(A_{DR}(E_f)) \cong \text{Tor}_{A_{DR}^*(M)}(A_{DR}^*(X), A_{DR}^*(E))$ as an algebra and then Theorem \ref{thm:EMSS} follows; see \cite[Th\'eor\`eme 4.1.1]{VP}. In consequence, the natural quasi-isomorphism $(j^*)^*$ in (5.2) yields the results. \end{proof} \begin{rem} In Theorems \ref{thm:LSSS} and \ref{thm:EMSS}, we have dealt with {\it fibrations} of the type i) and of the type ii). We do not know whether a fibration of the second type is indeed one of the first type. Therefore, we have considered the two cases separately. One might expect that an appropriate notion of a smooth relative CW-complex $i : A \to X$ enables us to obtain a map $i^* : \text{map}(X, Y) \to \text{map}(A, Y)$ with the homotopy extension property in $\mathsf{Diff}$ for a diffeological space $Y$, where $\text{map}(X, Y)$ and $\text{map}(A, Y)$ are endowed with the functional diffeology. As a consequence, we expect the spectral sequences as in Theorems \ref{thm:LSSS} and \ref{thm:EMSS} for the map $i^*$. We do not pursue this topic in this manuscript. \end{rem} The argument in \cite[Chapter 19]{H} is now adapted to our setting. Recall the standard face and degeneracy maps $\eta_i : \Delta_{\text{sub}}^{p-1} \to \Delta_{\text{sub}}^{p}$ and $\zeta_j : \Delta_{\text{sub}}^{p+1} \to \Delta_{\text{sub}}^{p}$. For $0 \leq m \leq p$, let $\alpha_m : \Delta_{\text{sub}}^{p+1} \to \Delta_{\text{sub}}^{p} \times I$ be a smooth map defined by $\alpha_m (x) =(\zeta_m(x), \sum_{i=m+1}^{p+1}x_i)$, where $x =(x_0, x_1, ..., x_{p+1})$. Observe that the maps $\alpha_m$ give the standard triangulation of $\Delta^{p}\times I$ in the category of topological spaces.
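As a worked example, which is not needed in the sequel, consider the lowest case $p=1$: unwinding $\alpha_m(x) = (\zeta_m(x), \sum_{i=m+1}^{p+1}x_i)$ gives

```latex
% The two maps \alpha_0, \alpha_1 : \Delta^2_{sub} -> \Delta^1_{sub} x I
% triangulating the square \Delta^1 x I:
\alpha_0(x_0, x_1, x_2) = \big( (x_0 + x_1,\ x_2),\ x_1 + x_2 \big), \qquad
\alpha_1(x_0, x_1, x_2) = \big( (x_0,\ x_1 + x_2),\ x_2 \big),
```

whose images are the two triangles obtained by cutting the square $\Delta^1 \times I$ along a diagonal.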
For any nondecreasing map $u : [p] \to [n]$, we denote by the same notation the affine map $\Delta_{\text{sub}}^{p} \to \Delta_{\text{sub}}^{n}$ defined by $u$. Such an affine map $u$ gives a set map $\overline{u} : \Delta^{p}\times I \to \Delta^{n}$ defined by $(\overline{u} \alpha_m)(\sum_{i=0}^{p+1} \lambda_i v_i) = \sum_{i=0}^{m} \lambda_i v_{u(i)} + (\sum_{i=m+1}^{p+1}\lambda_i)v_n$; see \cite[(19.4)]{H}. It is readily seen that the composite $\overline{u} \alpha_m$ is a smooth map for each $m$. Let $N$ be a diffeological space and $f : M \to N\times N$ an induction. Then the pullback $\nu' : E_f \to M$ of the map $(\varepsilon_0, \varepsilon_1) : N^I \to N\times N$ along $f$ is identified with the map \[ \nu := f^{-1}\circ (\varepsilon_0, \varepsilon_1) : P_MN:=\{\gamma \in N^I \mid (\varepsilon_0, \varepsilon_1)(\gamma) \in f(M)\} \to M. \] We consider the homotopy pullback $\pi : P_\sigma^h \to \Delta_{\text{sub}}^{n}$ of $\nu$ along the $n$-simplex $\sigma : \Delta_{\text{sub}}^{n} \to M$. By definition, it is given by \[ P_\sigma^h := \{(a, \zeta, \gamma) \in \Delta_{\text{sub}}^{n}\times M^I\times P_MN \mid \sigma(a) = \zeta(0), \ \nu(\gamma) = \zeta(1)\} \] with $\pi$ the projection. Let $\underline{P_\sigma^h}$ be the sub-simplicial set of $S^D_\bullet(P_\sigma^h)_{\text{sub}}$ consisting of $p$-simplexes $\tau$ each of which satisfies the condition that $\pi\circ \tau : \Delta^{p} \to \Delta^{n}$ is the affine map defined by a nondecreasing map $[p] \to [n]$. In the same way, we have a sub-simplicial set $\underline{P_\sigma}$ of $S^D_\bullet(P_\sigma)_{\text{sub}}$, where $P_\sigma$ denotes the pullback of $\nu$ along the $n$-simplex $\sigma : \Delta_{\text{sub}}^{n} \to M$. Restricting $\sigma$ to the last vertex $\sigma(n)$ gives rise to the pullback $P_{\sigma(n)}$ of $\nu : P_MN \to M$.
Then we have the natural inclusion $j : P_{\sigma(n)} \to P_\sigma^h$ defined by $j(\gamma) = (\sigma(n), C_{\sigma(n)}, \gamma)$, where $C_{\sigma(n)}$ is the constant map at $\sigma(n)$. The following lemma is proved by modifying the argument of the proof of \cite[Lemma 19.9]{H} in the diffeological framework. \begin{lem}\label{lem:KEY} The homomorphism $ \xymatrix@C25pt@R15pt{ A^*_{DR}(P_{\sigma(n)}) & \ar[l]_-{A(j)} A^*_{DR}(\underline{P_\sigma^h}) } $ induced by the natural map $j : S^D_\bullet(P_{\sigma(n)})_{\text{\em sub}} \to \underline{P_\sigma^h}$ of simplicial sets is a quasi-isomorphism. \end{lem} \begin{proof} By Theorem \ref{thm:main}, it suffices to show that the map $j_* : C_*( S^D_\bullet(P_{\sigma(n)})) \to C_*(\underline{P_\sigma^h})$ of chain complexes induced by $j$ is a chain homotopy equivalence. We identify $M$ with the subspace $f(M)$ of $N\times N$. For each $\tau \in (\underline{P_\sigma^h})_p$, we have the map $\overline{\pi\circ \tau} : \Delta^{p} \times I \to \Delta^{n}$ mentioned above. Let $J$ denote the space $(I \times \{0, 1\}) \cup (\{0\} \times I)$ endowed with the subdiffeology of $I\times I$. For each point $z \in I\times I$, join $(2, \frac{1}{2})$ to $z$ by a straight line and extend the line beyond $z$ until it meets $J$ at a point $z'$. Then one defines a retraction $r : I\times I \to J$ by $r(z)=z'$ as a set map. Moreover, by using the projections $\text{pr}_i$ from $P_\sigma^h$ to the $i$th factor and the adjoint $ad$ of a map to $M^I$, we define a map $\widetilde{\tau} : \Delta^{p}\times I \times I \to M$ by \[ \widetilde{\tau} := ((\sigma\circ \overline{\pi\circ \tau})_0\circ(1\times \rho)) \cup ((\sigma\circ \overline{\pi\circ \tau})_1\circ (1\times \rho)) \cup ( \omega^{-1} \ast \ell \ast \omega) \circ (1\times r), \] where $\omega(z, 0, s)=ad(\text{pr}_2\tau)(z, \rho(s))$, $\omega^{-1}(z, 0, s)=ad(\text{pr}_2\tau)(z, 1- \rho(s))$, $\ell(z, 0, s)=ad(\text{pr}_3\tau)(z, \rho(s))$ and $\rho$ is the cut-off function.
Moreover, $(\sigma\circ \overline{\pi\circ \tau})_i$ denotes the map defined by $(\sigma\circ \overline{\pi\circ \tau})$ on $\Delta^{p}\times I \times \{ i \}$ for $i = 0, 1$. We observe that $\widetilde{\tau}$ is smooth on $(\text{Im}\ \alpha_m) \times I$. In fact, the map is constant in appropriate neighborhoods of the rays from $(2, \frac{1}{2})$ to the points $(0,0 )$, $(0,1)$, $(0, \frac{1}{3})$ and $(0, \frac{2}{3})$ in $I\times I$. For any plot $p : U \to \Delta^{p}\times I \times I$, we take a point $r \in U$ and write $p(r) = (x, (a,b))$. Then there exists an open neighborhood $W$ of $(a, b)$ such that $\widetilde{\tau}\circ p |_A$ is constant or a composite of $(1\times r)$ and $(\sigma\circ \overline{\pi\circ \tau})_i$ or $(\omega^{-1} \ast \ell \ast \omega)$, where $A = p^{-1}(\Delta^{p}\times W)$. This implies that $\widetilde{\tau}\circ p$ is locally smooth. It follows that $\widetilde{\tau}\circ p$ is in ${\mathcal D}^M$, the diffeology of $M$, and hence $\widetilde{\tau}$ is smooth. We define $\overline{\overline{\tau}}' : \Delta^{p}\times I \to \text{Map}(I, M)$ as the adjoint of $\widetilde{\tau}$. The homotopy $H_\tau : (\Delta^{p}\times I) \times I \to M$ from $\sigma\circ (\overline{\pi\circ \tau})$ to $\nu \circ \overline{\overline{\tau}}'$ defined by $H_\tau((z, t), s) = (\sigma\circ \overline{\pi\circ \tau})(z,(1-s)t + s\rho(t))$ gives a map $\overline{\overline{\tau}} : \Delta^{p}\times I \to P_\sigma^h$ with \[ \overline{\overline{\tau}}(z,t) =( (\overline{\pi\circ \tau})(z,t), ad(H_\tau)(z,t), \overline{\overline{\tau}}'(z, t)). \] Observe that the domain of the map will be restricted to the space $\text{Im}\ \alpha_m$ when constructing a simplicial homotopy below. We call $\overline{\overline{\tau}}$ the {\it canonical lift} with respect to $\tau$.
\[ \xymatrix@C35pt@R1pt{ \bullet \ar@{-}[rr]^-{(\sigma\circ \overline{\pi\circ \tau})_1\circ (1\times \rho)} \ar@{-}[dd]_{\omega^{-1}} & & \bullet \ar@{.}[dddddd] &&\\ & & & &\\ \bullet \ar@{-}[dd]_-{\ell} && && \\ & && & \bullet (2, \frac{1}{2}) \ar[lllluuu] \ar[lllld]\\ \bullet \ar@{-}[dd]_-{\omega} && && \\ & & & & \\ \bullet \ar@{-}[rr]_-{(\sigma\circ \overline{\pi\circ \tau})_0\circ (1\times \rho)} & & \bullet && } \eqnlabel{add-8} \] Since $\overline{\pi\circ \tau}(z, 1)$ is the constant map at $v_n$, it follows that $\overline{\overline{\tau}}( \text{-}, 1)$ factors through $P_{\sigma(n)}$. Then we define a simplicial map $\lambda : \underline{P_\sigma^h} \to S^D_\bullet(P_{\sigma(n)})_{\text{sub}}$ by $\lambda(\tau) = \overline{\overline{\tau}}(\text{-}, 1)$. In order to show that $j \circ \lambda$ is homotopic to the identity, we define $h_m : (\underline{P_\sigma^h})_p \to (\underline{P_\sigma^h})_{p+1}$ by $h_m(\tau)= \overline{\overline{\tau}} \circ \alpha_m$ for any $0\leq m \leq p$ by using the canonical lift. Since $\pi h_m(\tau)(z) = \overline{\pi\circ \tau} \alpha_m(z)$, it follows that $h_m$ is well defined. Then we see that $\{h_m\}_{0\leq m\leq p}$ gives rise to a simplicial homotopy; that is, the maps $h_m$ satisfy the following equalities \[ d_jh_m = \begin{cases} f & \text{if $j=m=0$,} \\ h_{m-1}d_j & \text{if $j < m$,} \\ d_jh_{m+1} & \text{if $0\leq j-1 =m < p$,} \\ h_md_{j-1} & \text{if $0\leq m < j-1 \leq p$,} \\ g & \text{if $j-1=m=p$,} \end{cases} \eqnlabel{add-9} \] where $f(\tau)=\overline{\overline{\tau}}(\text{-}, 1)$ and $g(\tau)=\overline{\overline{\tau}}(\text{-}, 0)$; see \cite{Barr} for example. The construction of the canonical lift yields that $d_jh_m(\tau) = \overline{\overline{\tau}}\circ \alpha_m\circ \eta_j$ and $h_md_j(\tau) = \overline{\overline{\tau}}\circ (\eta_j\times 1)\circ \alpha_m$. Thus each equality mentioned above follows from the relations between $\alpha_m$ and $\eta_j$.
It remains to verify that $\overline{\overline{\tau}}(\text{-}, 0)$ is homotopic to the identity. Let $\tau$ be an element in $(\underline{P_\sigma^h})_p$. We write $\tau (z) = (\pi\circ \tau(z), \zeta(z), \ell(z))$ and $\overline{\overline{\tau}}(z, 0) = (\overline{\pi\circ \tau}(z, 0), C_y, \ell'(z))$, where $z \in \Delta^p$ and $C_y$ denotes the constant path at $y = \overline{\pi\circ \tau}(z, 0)$. Observe that $\overline{\pi\circ \tau}(z, 0) = (\pi\circ \tau)(z)$ and $\ell'$ is nothing but the path $\omega^{-1} \ast \ell \ast \omega$ in (5.3). Thus by using homotopies from $C_y$ to $\omega(z)$, from $\omega^{-1} \ast \ell \ast \omega (z, \text{-})$ to $\ell(z)$ and from the cut-off function $\rho$ to the identity, we can construct a homotopy $H_\tau : \Delta_{\text{sub}}^p \times I \to P_\sigma^h$ from $\overline{\overline{\tau}}(\text{-}, 0)$ to $\tau$ with $H_{d_j\tau} = H_\tau \circ (\eta_j \times 1)$. Define an element $h_m(\tau)$ in $(\underline{P_\sigma^h})_{p+1}$ by $h_m(\tau) = H_\tau\circ \alpha_m$. By direct calculation, we see that $h_m$ satisfies the equalities in (5.4), where $f(\tau) = \tau$ and $g(\tau) = \overline{\overline{\tau}}(\text{-}, 0)$. As for the composite $\lambda \circ j : S^D_\bullet (P_{\sigma(n)}) \to \underline{P_\sigma^h} \to S^D_\bullet(P_{\sigma(n)})$, it is also homotopic to the identity. The same homotopy as $H_\tau$ mentioned above gives such a chain homotopy. This completes the proof. \end{proof} \begin{rem} We define a map $\varphi : M^I \to \mathsf{stPath}_\varepsilon(M)$ by composing with the cut-off function $\rho$, where the target denotes the stationary path space. Then the map $\varphi$ is smooth. This follows from the smoothness of the evaluation map. Moreover, it follows from the locality of plots that the concatenation $\mathsf{stPath}_\varepsilon(M) \times \mathsf{stPath}_\varepsilon(M) \to \mathsf{stPath}_\varepsilon(M)$ is also smooth. By using these facts, we have proved Lemma \ref{lem:KEY}.
\end{rem} \begin{lem} \label{lem:KEY2} The map $S^D(\iota) : \underline{P_\sigma} \to \underline{P_\sigma^h}$ defined by the inclusion $\iota : P_\sigma \to P_\sigma^h$ induces a quasi-isomorphism $\iota^* : A^*_{DR}(\underline{P_\sigma^h}) \to A^*_{DR}(\underline{P_\sigma})$. \end{lem} \begin{proof} The inclusion $\iota$ is given by $\iota(a, \gamma) = (a, C_{\sigma(a), \gamma})$. We define a map $\mu : P_\sigma^h \to P_\sigma$ by $\mu(a, \omega, \gamma) = (a, \widetilde{\omega^{-1}\ast \gamma \ast \omega})$, where $\widetilde{\omega^{-1}\ast \gamma \ast \omega} = (\omega^{-1}\circ \rho)\ast (\gamma\circ \rho)\ast (\omega\circ \rho)$. By adjusting the parameters of paths $\omega$ and $\gamma$, we can construct smooth homotopies $H : P_\sigma^h \times I \to P_\sigma^h$ from $1$ to $\iota\circ \mu$ and $G : P_\sigma \times I \to P_\sigma$ from $1$ to $\mu \circ \iota$ which preserve the first factor. For an $n$-simplex $\tau : \Delta^n \to P_\sigma^h$, define $h_m(\tau)$ by the composite $H\circ(\tau\times 1)\circ \alpha_m : \Delta^{n+1} \to \Delta^n \times I \to P_\sigma^h \times I \to P_\sigma^h$. Then the same argument as in the proof of Lemma \ref{lem:KEY} yields that the family $\{ \{h_m\}_{0\leq m\leq n} \}_{n\geq 0}$ gives a simplicial homotopy on $\{S^D_\bullet(\underline{P_\sigma^h)}\}$. By using the homotopy $G$, we have a simplicial homotopy on $\{S^D_\bullet(\underline{P_\sigma)}\}$. This completes the proof. \end{proof} Lemmas \ref{lem:KEY} and \ref{lem:KEY2} imply that the map $\text{top}_n : [0] \to [n]$ defined by $\text{top}_n(0) =(n)$ induces a quasi-isomorphism $\text{top}_n^* : A^*_{DR}(\underline{P_\sigma}) \to A^*_{DR}(\underline{P_{\sigma(n)}})$. 
A nondecreasing map $\eta : [m] \to [n]$ defines an affine map $\alpha(\eta) : \Delta_{\text{sub}}^{m} \to \Delta_{\text{sub}}^{n}$, which gives rise to the pullback diagram \[ \xymatrix@C30pt@R25pt{ P_{\alpha(\eta)^*\sigma} \ar[r]^-{\xi_{\alpha(\eta)}} \ar[d]_{\pi_{{\alpha(\eta)}^*\sigma}} & P_\sigma \ar[d]^{\pi_\sigma} \ar[r]^{\xi_\sigma}& E_f \ar[d]^{\nu} \\ \Delta_{\text{sub}}^m \ar[r]_-{\alpha(\eta)} & \Delta_{\text{sub}}^n \ar[r]_\sigma & M. } \] Moreover, the smooth map $\xi_{\alpha(\eta)}$ gives rise to a simplicial map $\underline{\xi_{\alpha(\eta)}} : \underline{P_{{\alpha(\eta)}^*\sigma}} \to \underline{P_\sigma}$. By using the quasi-isomorphisms $\text{top}_n^*$ mentioned above, we have \begin{lem}\label{lem:DiffCoeff} The family $F:=\{ A^*_{DR}(\underline{P_\sigma}) \}_{\sigma \in K}$ of DGA's gives an extendable local system over $K$ of differential coefficients. \end{lem} \begin{proof} For a nondecreasing map $\eta : [m] \to [n]$, we define a map $\tau : [n] \to [n+1]$ which sends $\eta(m), \eta(m)+1, ..., n$ to $n+1$. Since $\tau \circ \text{top}_n = \text{top}_{n+1}$ and $\tau\circ \eta \circ \text{top}_m = \text{top}_{n+1}$, it follows that $\underline{\xi_{\alpha(\eta)}}$ induces a quasi-isomorphism $(\underline{\xi_{\alpha(\eta)}})^* : A_{DR}(\underline{P_\sigma}) \to A_{DR}(\underline{P_{{\alpha(\eta)}^*\sigma}})$. The extendability of the local system follows from Lemma \ref{lem:extendability}; see the proof of \cite[19.17 Lemma]{H}. \end{proof} \noindent Let $j : R:=A_{DR}(M)\otimes \wedge V \to A_{DR}(E_f)$ be a KS extension for the map $\nu^* : A_{DR}(M) \to A_{DR}(E_f)$ induced by the projection $\nu : E_f \to M$. Let $P_m$ denote the fibre over a point $m \in M$. Since the composite of $\nu$ and the inclusion $l : P_m \to E_f$ is the constant map at $m$, it follows that the map $l^*\circ \nu^*$ factors through the augmentation $\varepsilon : A_{DR}(M) \to A_{DR}(\{m\})={\mathbb R}$ and then $j$ induces a map $k : \wedge V = A_{DR}(\{m\})\otimes_{A_{DR}(M)}R \to A_{DR}(P_m)$ of DGA's.
\begin{prop}\label{prop:KSextension} Suppose that $N$ is simply connected. Then the morphism $k : \wedge V \to A_{DR}(P_m)$ of DGA's is a quasi-isomorphism. \end{prop} This result follows from \cite[20.3 Theorem]{H}. We here prove Proposition \ref{prop:KSextension} by constructing the Leray-Serre spectral sequence and by applying the comparison theorem of spectral sequences. To this end, we first recall an isomorphism $a : A_{DR}(E_f) \to \Gamma(F)$ of DGA's in \cite[19.21 Lemma]{H} defined by $(a\psi)_\sigma = a_\sigma \psi$, where $a_\sigma$ denotes the composite \[ A_{DR}(E_f) \stackrel{\xi_\sigma^*}{\to} A_{DR}(P_\sigma) \to A_{DR}(\underline{P_\sigma}). \] For the map $P_m \to \{m\}$, Lemma \ref{lem:DiffCoeff} enables us to obtain a local system over $L:=S^D_\bullet(\{m\})_{\text{sub}}$ of the form $F':= \{A_{DR}(\underline{(P_m)_\tau}) \}_{\tau \in L}$. Observe that the inclusion $i : P_m \to E_f$ induces a morphism $i^* : F\to F'$ of local systems. Moreover, we have an isomorphism $a : A_{DR}(P_m) \to \Gamma(F')$ by applying \cite[19.21 Lemma]{H}. Recall the quasi-isomorphism $i_F : \Gamma(F) \to \Gamma((A_{DR}^*)_\bullet\otimes F)$ which is defined by the inclusion $F_\sigma \to 1\otimes F_\sigma \subset A_{DR}({\mathbb A}^n)\otimes F_\sigma$ for $\sigma \in K_n$; see \cite[13.12 Theorem]{H}. Moreover, the map $\xi_F : \Gamma((A_{DR}^*)_\bullet\otimes F) \to \Gamma(F)$ is defined by \[ (\xi_F(a\otimes \Phi))_\sigma = a_\sigma\cdot\Phi_\sigma, \] where $\cdot$ denotes the multiplication on $(A_{DR}^*)_\bullet$. It is readily seen that $\xi_F$ is a left inverse of $i_F$ and hence it is a surjective quasi-isomorphism. We observe that $\xi_F$ is a morphism of $A_{DR}(M)$-algebras.
These maps give a commutative diagram \[ \xymatrix@C35pt@R15pt{ A_{DR}(\{m\}) ={\mathbb R} \ar[d] \ar[dr] \ar@/^0.9pc/[drr] \ar@/^1.4pc/[drrr] & & & \\ \wedge V \ar[r]^-k & A_{DR}(P_m) \ar[r]^-{a}_-{\cong} & \Gamma(F') & \Gamma((A_{DR}^*)_\bullet\otimes F')\ar@{->>}@<0ex>[l]_-{\xi_{F'}}^-{\simeq}\\ A_{DR}(M)\otimes \wedge V \ar[r]_-{\simeq}^-j \ar[u]^q & A_{DR}(E_f) \ar[u]^-{l^*} \ar[r]_-{a}^-{\cong} & \Gamma(F) \ar[u]_{\Gamma(i^*)} & \Gamma((A_{DR}^*)_\bullet\otimes F). \ar@{->>}@<0ex>[l]^-{\xi_F}_-{\simeq} \ar[u]_{\Gamma(1\otimes i^*)}\\ A_{DR}(M) \ar@/^3.5pc/[uuu] \ar[u] \ar[ur]^-{\nu^*} \ar@/_0.9pc/[urr] \ar@/_1.4pc/[urrr]& & & } \] containing the KS extension, where the maps $i_{F'}$ and $\xi_{F'}$ are defined in the same way as $i_F$ and $\xi_F$, respectively. The lifting lemma gives rise to a morphism $\Psi : A_{DR}(M)\otimes \wedge V \to \Gamma((A_{DR}^*)_\bullet\otimes F)$ of $A_{DR}(M)$-algebras with $\xi_F\circ \Psi = a \circ j$. More precisely, we define $\Psi$ by $\Psi(v) = i_F \circ a \circ j (v)$ for $v \in V$. The commutativity of the three squares enables us to deduce that \[ a \circ k\circ q|_{1\otimes \wedge V} = \xi_{F'}\circ \Gamma(1\otimes i^*) \circ \Psi|_{1\otimes \wedge V}. \eqnlabel{add-10} \] Define filtrations $G=\{G^p\}_{p\geq 0}$ of $A_{DR}(M)\otimes \wedge V$ and $'G=\{'G^p\}_{p\geq 0}$ of $\Gamma((A_{DR}^*)_\bullet\otimes F)$ by $G^p = \sum_{i\geq p}A^i_{DR}(M) \otimes \wedge V$ and $'G^p = \Gamma (\sum_{i\geq p}(A^i_{DR})_\bullet \otimes F)$, respectively. Since the morphism $\Psi$ of DGA's over $A_{DR}(M)$ preserves the filtrations, it follows that the map induces a morphism $\{f_r\}_{r\geq 2} : \{E_r^{*,*}, d_r\} \to \{'E_r^{*,*}, 'd_r\}$ of spectral sequences constructed from the filtrations mentioned above; see \cite[(12.43)]{H}. We recall the integration map defined in (3.1). The integration induces a quasi-isomorphism \[ \int : \ \! \!
'E_1= \Gamma((A^*_{DR})_\bullet \otimes H(F)) \to C^*(M; \mathcal{H}(F)), \] where $C^*(M; \mathcal{H}(F))$ denotes the cochain complex of $S^D_\bullet(M)_{\text{sub}}$ with the local coefficients induced by the local system $F$. This follows from the same argument as in \cite[14.13]{H}. \begin{rem} The above argument yields that in the Leray-Serre spectral sequence in Theorem \ref{thm:LSSS}, one has an isomorphism $_{LS}E_2 \cong H^*(S^D_\bullet(M)_{\text{sub}})\otimes H^*(A_{DR}(P_m))$ as an algebra if $M$ is simply connected. \end{rem} \begin{proof}[Proof of Proposition \ref{prop:KSextension}] Since $N$ is simply connected, it follows that the local system $F$ on $S^D_\bullet(M)_{\text{sub}}$ is simple. We observe that the action of $\pi_1(M)$ on $F$ is induced by that of $\pi_1(N)$. Then we see that $f_2$ is a morphism of free $H^*(M)$-modules. Therefore if $f_2^{0,q}$ is an isomorphism, then so is $f_2^{p,q}$. It follows from the comparison theorem (\cite[17.17 Theorem]{H}) that $f_2^{0,q}$ is an isomorphism for any $q \geq 0$. The formula (5.5) implies that the isomorphism $f_2^{0, q}$ is nothing but the map \[ H(\int \circ \xi_{F'}\circ a\circ k) : H(\wedge V) \to H(F')=H(P_m). \] This completes the proof. \end{proof} We are ready to prove the main theorem in this section. \begin{proof}[Proof of Theorem \ref{thm:general_main}] For a diffeological space $X$, we recall the quasi-isomorphism $(j^*)^* : A^*_{DR}(X) \to A^*_{DR}(S^D_\bullet(X))=:A(X)$ in (5.2). Let $\Omega M \to PM\to M$ be the pullback of the evaluation map $(\varepsilon_0, \varepsilon_1) : M^I \to M\times M$ along the induction $s : M \to M\times M$ defined by $s(x)= (*, x)$, where $*$ denotes the base point of $M$.
We have a commutative diagram of solid arrows \[ \xymatrix@C20pt@R10pt{ & A^*_{DR}(\Omega M) \ar[rr]^{(j^*)^*}_-\simeq & & A(\Omega M) & \overline{B}(A) \ar[l]_{\alpha \circ \mathsf{It}} \ar@{..>}@/^8pt/[lld]_{\overline{\alpha}} \\ T \ar[ur]^-k \ar@{.>}[rr]^(0.6){\overline{\beta}}& & T' \ar[ru]^-{k'}& \\ & A^*_{DR}(PM) \ar[uu]|{\hole} \ar[rr]|{\hole}^(0.4)\simeq & & A(PM) \ar[uu]_(0.3){A(i)} & \Omega^*(M)\otimes \overline{B}(A) \ar[l]_-{\alpha \circ \mathsf{It}} \ar[uu] \ar@{..>}@/^8pt/[lld]_{\widetilde{\alpha}} \\ R \ar[uu] \ar[ur]^-\simeq \ar@{.>}[rr]^(0.6)\beta & & R'\ar[uu] \ar@{->>}[ru]_p^-\simeq & \\ & A^*_{DR}(M) \ar[uu]|{\hole}_(0.4){\pi^*} \ar[rr]|{\hole}^(0.4)\simeq & & A(M) \ar[uu]_(0.3){A(\pi)} & \Omega^*(M) \ar[l]_{\alpha} \ar[uu] \ar@/^8pt/[lld]_\alpha\\ A^*_{DR}(M) \ar[uu]_(0.6)j \ar@{=}[ur] \ar[rr] & & A(M) \ar@{=}[ur] \ar[uu]_(0.7){j'} & } \] in which $j$ and $j'$ are KS extensions of $\pi^*$ and $A(\pi)$, respectively. Here $A$ denotes the DG subalgebra of $\Omega^*(M)$ described in the paragraph before (5.1). We may assume that the quasi-isomorphism $p$ is a surjection by the surjective trick; see \cite[Section 12 (b)]{F-H-T}. By applying the lifting lemma, we have a morphism $\beta : R \to R'$ which makes the two squares commutative. Then we have a morphism $\overline{\beta} : T:={\mathbb R}\otimes_{A_{DR}(M)}R \to T':={\mathbb R}\otimes_{A(M)}R'$ of DGA's. Moreover, the map $\beta$ is a quasi-isomorphism and hence so is $\overline{\beta}$ by \cite[Theorem 6.10]{F-H-T}. Proposition \ref{prop:KSextension} implies that $k$ is a quasi-isomorphism and then so is $k'$. Since the bar complex $\Omega^*(M)\otimes \overline{B}(A)$ is a semifree $\Omega(M)$-module, it follows from the lifting lemma that there exist a morphism $\widetilde{\alpha} : \Omega^*(M)\otimes \overline{B}(A) \to R'$ of $\Omega(M)$-modules and a morphism $\overline{\alpha} : \overline{B}(A) \to T'$ of differential graded modules which fit in the commutative diagram.
Observe that $\overline{B}(A)\cong {\mathbb R}\otimes_{\Omega^*(M)}(\Omega^*(M)\otimes \overline{B}(A))$. The complex $\Omega^*(M)\otimes \overline{B}(A)$ is indeed a resolution of ${\mathbb R}$ and the diffeological space $PM$ is smoothly contractible. Then the map $\widetilde{\alpha}$ is a quasi-isomorphism and hence so is $\overline{\alpha}$. We see that $\alpha \circ \mathsf{It} : \overline{B}(A) \to A(\Omega M)$ is a quasi-isomorphism. We apply the same argument to the pullback $\Omega N \to E_f \to M$ of the evaluation map $(\varepsilon_0, e_1) : N^I \to N\times N$ along an induction $f: M \to N\times N$. Then in the diagram above, the bar complex $\Omega^*(M)\otimes \overline{B}(A)$ is also replaced with the complex $\Omega^*(M)\otimes_f \overline{B}(A)$. The proof of \cite[7.1 Theorem]{H} enables us to conclude that $\alpha \circ \mathsf{It} : \Omega^*(M)\otimes_f B(A) \to A(E_f)$ is a quasi-isomorphism. We have the result. \end{proof} We conclude this section with a table which summarizes simplicial objects used in this manuscript. \begin{table}[h] {\small \begin{tabular}{|l|c|c|c|} \hline & $S^D_\bullet(X)$ & ${S^D_\bullet(X)_{\text{sub}}}$ & $S^\infty_\bullet(X)$ \\ \hline & This pair is used in & We use this pair when & This is used in \\ $(A_{DR}^*)_\bullet$ & proving the & constructing the SSes & describing \\ & de Rham theorem & in Theorems \ref{thm:LSSS} and \ref{thm:EMSS} & Proposition \ref{prop:alpha_beta} \\ \hline \vspace*{-0.12cm} & This case (1) is & This case (2) is & We use this pair \\ $\widetilde{(A_{DR}^*)}_\bullet$ & not used & not used & when constructing $\alpha'$ \\ & in our framework & in our framework & in Proposition \ref{prop:alpha_beta} \\ \hline \end{tabular} } \vspace*{0.15cm} \caption{} \label{table1} \end{table} \vspace*{-0.6cm} \noindent The pair of a simplicial set in the first row and a simplicial cochain algebra in the first column gives a DGA. These DGA's are quasi-isomorphic to one another. 
In fact, the quasi-isomorphisms are induced by the inclusion $S^\infty_\bullet(X) \to S^D_\bullet(X)$, the restrictions $j^* : S^D_\bullet(X) \to S^\infty_\bullet(X)$ and $j^* : (A_{DR}^*)_\bullet \to \widetilde{(A_{DR}^*)}_\bullet$. More precisely, the results for the pairs in the second row follow from Lemmas \ref{lem:acyclic}, \ref{lem:representable} and the commutativity of the same right square as in the diagram (4.2) in the proof of Proposition \ref{prop:alpha_beta}. We have the result for the pair in each column by considering the commutativity of the same right triangle as in (4.2). In \cite{Kihara1}, Kihara has introduced {\it standard simplexes} $\Delta^p_{\text{Ki}}$ for $p\geq 0$ in $\mathsf{Diff}$ whose underlying topological spaces are the standard ones in the category of topological spaces. With these simplexes, it is proved that $\mathsf{Diff}$ admits a Quillen model category structure; see \cite[Theorem 1.3]{Kihara1}. For a diffeological space $X$, we can consider the complex associated with the singular simplex $S^D_p(X)_{\text{Ki}}$ consisting of smooth maps $\Delta^p_{\text{Ki}} \to X$, which is quasi-isomorphic to the complex for $S^D_\bullet(X)_{\text{sub}}$; see \cite[Remark 3.8]{Kihara}. Then the pairs (3)$:=(S^D_\bullet(X)_{\text{Ki}}, (A_{DR}^*)_\bullet)$ and (4)$:=(S^D_\bullet(X)_{\text{Ki}}, \widetilde{(A_{DR}^*)}_\bullet)$ give rise to the DGA's which are quasi-isomorphic to the DGA for the pair (2). While it is possible to choose the pairs (1), (2), (3) and (4) when considering the cohomology algebras, we do not use them explicitly in this manuscript. We anticipate that such a pair is relevant in the study of diffeological spaces. \medskip \noindent {\it Acknowledgements.} The author thanks Akinori Emoto for many valuable discussions on the extendability of the simplicial cochain algebras concerning the de Rham theory for diffeological spaces. The author is also grateful to Hiroshi Kihara for his comments on the main theorem. 
These comments lead the author to consider the problem of when the morphism $\alpha$ in Theorem \ref{thm:main} induces an isomorphism on cohomology with respect to a model structure on the category $\mathsf{Diff}$. A part of this article was written during the author's stay at the Nesin Mathematics Village, where the Diffeology, Categories and Toposes and Non-commutative Geometry Summer School was held in the summer of 2018. The author thanks Serap G\"urer, who was the organizer of the school, for her hospitality. \section{Appendix} \subsection{Appendix A: The acyclic model theorem for cochain complexes}\label{app0} We recall the acyclic model theorem due to Bousfield and Gugenheim \cite{B-G}. \begin{defn} Let ${\mathcal C}$ be a category and $\text{Ch}^*({\mathbb K})$ the category of cochain complexes over a field ${\mathbb K}$. A contravariant functor $F : {\mathcal C} \to \text{Ch}^*({\mathbb K})$ admits a {\it unit} if for each object $X$ in ${\mathcal C}$, there exists a morphism $\eta_X : {\mathbb K} \to F(X)$ in $\text{Ch}^*({\mathbb K})$. Let ${\mathcal M}$ be a set of objects in ${\mathcal C}$, whose elements are called {\it models}. A functor $F$ with unit is {\it acyclic on models} ${\mathcal M}$ if for any $M$ in ${\mathcal M}$, there exists a morphism $\varepsilon_M : F(M) \to {\mathbb K}$ such that $\varepsilon_M \circ \eta_M \simeq id$ and $\eta_M\circ \varepsilon_M \simeq id$. \end{defn} Let $F : {\mathcal C} \to {\mathbb K}\text{-Mod}$ be a contravariant functor from a category with models ${\mathcal M}$ to the category of vector spaces over ${\mathbb K}$. 
Then we define a contravariant functor $\widehat{F} : {\mathcal C} \to {\mathbb K}\text{-Mod}$ by \[ \widehat{F}(X) := \prod_{M \in {\mathcal M}}(F(M)\times {\mathcal C}(M, X))=\prod_{M \in {\mathcal M}, \sigma \in {\mathcal C}(M, X)}(F(M)\times \{\sigma \}), \] where for a morphism $f : X \to Y$ in ${\mathcal C}$, the morphism $\widehat{F}(f) : \widehat{F}(Y) \to \widehat{F}(X)$ is defined by $\widehat{F}(f)\{m_\sigma, \sigma\} =\{m_{f\tau}, \tau\}$. Moreover, we define a natural transformation $\Phi : F \to \widehat{F}$ by $\Phi_X(u) =\{F(\sigma)u, \sigma\}$. We say that $F$ is {\it corepresentative} on the models ${\mathcal M}$ if there exists a natural transformation $\Psi : \widehat{F} \to F$ such that $\Psi\circ \Phi = id_F$. \begin{thm}\label{thm:AcyclicModels} \cite[2.4 Proposition]{B-G} Let ${\mathcal C}$ be a category with models ${\mathcal M}$. Let $K_1$ and $K_2$ be contravariant functors from ${\mathcal C}$ to $\text{\em Ch}^*({\mathbb K})$ with units $\eta : {\mathbb K} \to K_1^0, K_2^0$. Here ${\mathbb K}$ denotes the constant functor defined by ${\mathbb K}(X)= {\mathbb K}$. Suppose that $K_1$ is acyclic on models ${\mathcal M}$ and $U_k\circ K_2$ is corepresentative on the models, where $U_k$ denotes the forgetful functor to ${\mathbb K}\text{\em -Mod}$ in degree $k$. Then there exists a natural transformation $T : K_1 \to K_2$ which preserves the unit. Moreover, any two such natural transformations are naturally homotopic. \end{thm} We here consider III) in Section \ref{sect6} more precisely. In the theorem above, we take the category $\mathsf{Diff}$ as ${\mathcal C}$ and then put $K_1=\Omega^*(\text{-})$ and $K_2= C^*(S_\bullet^D( \text{-} ))$. Let ${\mathcal M}$ be the subset of objects in ${\mathcal C}$ consisting of the affine spaces ${\mathbb A}^n$ for $n\geq 0$. Then the category $\mathsf{Diff}$ is regarded as a category with models ${\mathcal M}$. 
The Poincar\'e lemma for diffeology implies that the functor $\Omega^*(\text{-})$ is acyclic on ${\mathcal M}$; see \cite[6.83]{IZ}. For an integer $k\geq 0$, we define a map \[ \Psi_X : \widehat{C^k(S_\bullet^D(X))} := \prod_{{\mathbb A}^n \in {\mathcal M}} (C^k(S_\bullet^D({\mathbb A}^n))\times C^{\infty}({\mathbb A}^n, X)) \to C^k(S_\bullet^D(X)) \] by $\Psi_X(\{m_\sigma, \sigma\})(\tau) = m_\tau(id_{{\mathbb A}^k})$, where $\tau \in S_k^D(X)$. Then $\Psi_{\text{--}}$ is a natural transformation. In fact, we see that for a smooth map $f : X \to Y$ and $u \in S_k^D(X)$, \[ \Psi_X(\widehat{C^k(S_\bullet^D(\text{-}))}(f) \{m_\sigma, \sigma\})(u) = \Psi_X\{m_{f\tau}, \tau\}(u) = m_{fu}(id_{{\mathbb A}^k}) \ \ \text{and} \] \[ ((C^kS_\bullet^D)(f))(\Psi_Y\{m_\sigma, \sigma\})(u) = \Psi_Y\{m_\sigma, \sigma\}(fu) = m_{fu}(id_{{\mathbb A}^k}). \] Since $\Phi_X(u)= \{C^k(S_\bullet^D(\sigma))u, \sigma\} $ for $u \in C^k(S_\bullet^D(X))$ by definition, it follows that \begin{align*} (\Psi_X\Phi_X(u))(\tau) &= \Psi_X(\{C^k(S_\bullet^D(\sigma))u, \sigma\})(\tau) = C^k(S_\bullet^D(\tau))u(id_{{\mathbb A}^k}) \\ &= u(\tau \circ id_{{\mathbb A}^k})=u(\tau) \end{align*} for $\tau \in S_k^D(X)$. Then we have $\Psi\Phi = id$ and hence $C^k(S_\bullet^D( \text{-} ))$ is corepresentative. Theorem \ref{thm:AcyclicModels} enables us to deduce the homotopy commutativity of the right square in Theorem \ref{thm:main}. \subsection{Appendix B}\label{App} We begin with the definition of a differential space in the sense of Sikorski \cite{Sik} in order to define a stratifold. \begin{defn} \label{defn:differential_space} A {\it differential space} is a pair $(S, {\mathcal C})$ consisting of a topological space $S$ and an $\mathbb{R}$-subalgebra ${\mathcal C}$ of the $\mathbb{R}$-algebra $C^0(S)$ of continuous real-valued functions on $S$, which is required to be {\it locally detectable} and $C^\infty$-{\it closed}. 
\medskip Local detectability means that $f \in {\mathcal C}$ if and only if for any $x \in S$, there exist an open neighborhood $U$ of $x$ and an element $g \in {\mathcal C}$ such that $f|_U = g|_U$. \medskip $C^\infty$-closedness means that for each $n\geq 1$, each $n$-tuple $(f_1, \ldots, f_n)$ of maps in ${\mathcal C}$ and each smooth map $g : \mathbb{R}^n \to \mathbb{R}$, the composite $h : S \to \mathbb{R}$ defined by $h(x) = g(f_1(x), \ldots, f_n(x))$ belongs to ${\mathcal C}$. \end{defn} Let $(S, {\mathcal C})$ be a differential space and $x \in S$. The vector space consisting of derivations on the $\mathbb{R}$-algebra ${\mathcal C}_x$ of the germs at $x$ is denoted by $T_xS$, which is called the {\it tangent space} of the differential space at $x$; see \cite[Chapter 1, section 3]{Kreck}. \begin{defn} \label{defn:stratifold} A {\it stratifold} is a differential space $(S, {\mathcal C})$ such that the following four conditions hold: \begin{enumerate} \item $S$ is a locally compact Hausdorff space with countable basis; \item the {\it skeleta} $sk_k(S):= \{x \in S \mid \text{dim } \!T_xS\leq k\}$ are closed in $S$; \item for each $x \in S$ and open neighborhood $U$ of $x$ in $S$, there exists a {\it bump function} at $x$ subordinate to $U$; that is, a non-negative function $\rho \in {\mathcal C}$ such that $\rho(x)\neq 0$ and such that the support $\text{supp }\!\rho :=\overline{\{p \in S \mid \rho(p) \neq 0\}}$ is contained in $U$; \item the {\it strata} $S^k := sk_k(S) - sk_{k-1}(S)$ are $k$-dimensional smooth manifolds such that restriction along $i : S^k \hookrightarrow S$ induces an isomorphism of stalks $ i^* : {\mathcal C}_x \stackrel{\cong}{\to} C^\infty(S^k)_x $ for each $x \in S^k$. \end{enumerate} \end{defn} A {\it parametrized} stratifold ($p$-stratifold for short) is constructed from a manifold by attaching another manifold with boundary. 
More precisely, let $(S, {\mathcal C})$ be a stratifold of dimension $n$ and $W$ a $k$-dimensional manifold with boundary $\partial W$ endowed with a collar $c : \partial W \times [0, \varepsilon) \to W$. Suppose that $k > n$. Let $f : \partial W \to S$ be a morphism of stratifolds. We define a pair $ (S', {\mathcal C}') $ of the identification space $S'=S\cup_f W$ and the subalgebra $ {\mathcal C}'=\Set{ g : S' \to {\mathbb R} | g_{| S} \in {\mathcal C}, \text{$gc(w, t)=gf(w)$ for $w \in \partial W$} } $ of $C^0(S')$. For more details, see \cite[Example 9]{Kreck}. A stratifold constructed inductively by attaching manifolds in this way is called a parametrized stratifold. Let $\mathsf{Diff}$ be the category of diffeological spaces. We recall a functor $k : \mathsf{Stfd} \to \mathsf{Diff}$ defined by $k(S, {\mathcal C}) = (S, {\mathcal D}_{\mathcal C})$ and $k(f) = f$ for a morphism $f : S \to S'$ of stratifolds, where \[ {\mathcal D}_{\mathcal C}:=\Set{u : U \to S | \begin{array}{l} \text{$U :$ open in ${{\mathbb R}^q}, q \geq 0$,} \\ \text{$\phi\circ u \in C^\infty(U)$ for any $\phi \in {\mathcal C}$} \end{array} }. \] We observe that a plot in ${\mathcal D}_{\mathcal C}$ is a set map. The functor $k$ is faithful, but not full, meaning that for a continuous map $f : S \to S'$, it is more restrictive to be a morphism of stratifolds $(S, {\mathcal C}) \to (S', {\mathcal C}')$ than to be a morphism of diffeological spaces $(S, {\mathcal D}_{\mathcal C}) \to (S', {\mathcal D}_{{\mathcal C}'})$; see \cite{A-K} for the details. \begin{lem}\label{lem:D-top} Let $(S, {\mathcal C})$ be a stratifold. An open set of the underlying topological space $S$ is a $D$-open set of the diffeological space $k(S, {\mathcal C})$. \end{lem} \begin{proof} Let $u$ be an element in ${\mathcal D}_{\mathcal C}$ with domain $U$. Then $u : U \to k(S, {\mathcal C})$ is a smooth map in the sense of diffeology. 
In fact, for any plot $p : V \to U$ of $U$ and for any $\phi \in {\mathcal C}$, we see that $\phi \circ u_*(p) = (\phi \circ u) \circ p$ is in $C^\infty(V)$ and hence $u_*(p)$ is in ${\mathcal D}_{\mathcal C}$. Since $U$ is a manifold, it follows from \cite[Proposition 5.1]{A-K} that $u$ is a morphism in $\mathsf{Stfd}$. In particular, the plot $u$ is continuous. It turns out that, by definition, each open set of $S$ is $D$-open. \end{proof} We here summarize categories and functors concerning our work. \[ \xymatrix@C50pt@R18pt{ & & \mathsf{Sets}^{\Delta^{op}} \ar@<1ex>[d]^-{| \ |_D}& \\ \mathsf{Mfd} \ar[r]_{\text{\tiny fully faithful}}^j \ar@/^2.0pc/[rr]^{\ell : \text{fully faithful}} & \mathsf{Stfd} \ar[r]^-k &\mathsf{Diff} \ar@<1ex>[r]^-{D}_-{\bot} \ar@<1.5ex>[u]^-{S^D}_{\vdash} & \mathsf{Top}, \ar@<1.2ex>[l]^-{C} \\ && C\text{-}\mathsf{Diff} \ar@<1ex>[r]^-{D}_-{\simeq} \ar[u] & \Delta\text{-}\mathsf{Top}\ar@<1ex>[l]^-{C}\ar[u] } \] The $D$-topology of diffeological spaces gives rise to the functor $D: \mathsf{Diff} \to \mathsf{Top}$. For a topological space $X$, all continuous maps from open subsets of Euclidean spaces to $X$ form the diffeology ${\mathcal D}^X$ on the underlying set $X$. Thus we have the functor $C$ mentioned in the diagram above; see Remark \ref{rem:Delta-generated_Top} for an important property of the adjoint pair. The realization of a simplicial set in $\mathsf{Diff}$ with affine spaces ${\mathbb A}^n$ for $n\geq 0$ gives the realization functor $|\ |_D$. The results \cite[Propositions 4.14 and 4.15]{C-W} assert that the functors $S^D \circ C$ and $D \circ | \ |_D$ coincide with the usual singular simplex functor and the realization functor up to weak equivalence, respectively. 
We refer the reader to \cite{S-Y-H} and \cite{C-W} for more properties of the adjoint pairs $(D, C)$ and $(| \ |_D, S^D)$, respectively. \begin{rem}\label{rem:Delta-generated_Top} The functors $C$ and $D$ give rise to an equivalence between appropriate full subcategories of $\mathsf{Diff}$ and $\mathsf{Top}$. In fact, we see that the unit and counit induce isomorphisms $\eta_{CX} : CX \stackrel{\cong}{\to} CDCX$ and $\varepsilon_{DY} : DCDY \stackrel{\cong}{\to} DY$; see \cite[Proposition 3.3]{C-S-W} and also \cite{S-Y-H}. Let $C\text{-}\mathsf{Diff}$ be the full subcategory of $\mathsf{Diff}$ consisting of objects isomorphic to diffeological spaces in the image of $C$ and $\Delta\text{-}\mathsf{Top}$ the full subcategory of $\mathsf{Top}$ consisting of objects isomorphic to topological spaces in the image of $D$. We observe that the objects in the image of $D$ are exactly the $\Delta$-generated topological spaces; see \cite[Proposition 3.10]{C-S-W}. The result \cite[Lemma II. 6.4]{M-M} implies that the functors restrict to equivalences between $C\text{-}\mathsf{Diff}$ and $\Delta\text{-}\mathsf{Top}$. It is worth mentioning that all CW-complexes are included in $\Delta\text{-}\mathsf{Top}$; see \cite[Corollary 3.4]{S-Y-H}. \end{rem} \begin{rem}\label{rem:CDC} Let $X$ be in the category $C\text{-}\mathsf{Diff}$. We can assume that the unit $\eta_X : X \to CDX$ is an isomorphism. Then it follows that for an object $X$ in $\mathsf{Diff}_{\text{top}}$, the map $\mathsf{Diff}(\Delta_\text{sub}^n, X) \to \mathsf{Diff}(\Delta_\text{sub}^n, CDX)$ induced by the unit in Section 6 is bijective and hence so is the composite $\mathsf{Diff}(\Delta_\text{sub}^n, X) \to \mathsf{Top}(\Delta^n, DX)$. This yields that the functor $C_n D$ is representable for each $n$, where $C_*$ denotes the singular chain functor. 
Thus, by the method of acyclic models, we see that $H(C^*({S^D_\bullet(X)}_{\text{sub}})) \cong H^*(DX, {\mathbb R})$ for each object $X$ in $\mathsf{Diff}_{\text{top}}$, where $H^*(\text{-}, {\mathbb R})$ denotes the singular cohomology with coefficients in ${\mathbb R}$. \end{rem} \begin{rem} Let $(S, {\mathcal C})$ be a stratifold. Then it follows from \cite[Corollary 5.2]{A-K} that $S^D_\bullet(k(S, {\mathcal C})) \cong \mathsf{Stfd}({\mathbb A}^\bullet, (S, {\mathcal C}))$ as a simplicial set. Corollary \ref{cor:main} implies that the de Rham cohomology of $k(S, {\mathcal C})$ is isomorphic, as an algebra, to the cohomology of the cochain complex induced by $\mathsf{Stfd}({\mathbb A}^\bullet, (S, {\mathcal C}))$. \end{rem}
\textit{Model.} We begin our discussion using the susceptible-infected-susceptible (SIS) epidemic model. The SIS model describes the dissemination of a single communicable disease in a susceptible population of size $N$. Transmission occurs when infected hosts pass the pathogen to healthy susceptible individuals. The infectious period extends throughout the whole course of the disease until recovery of the patient, warranting a two-stage model: either infected or susceptible. The essence of the model is summarized by the inset in Fig.~\ref{fig:fig1}. \begin{figure} \includegraphics[width=0.95\columnwidth]{fig1.eps} \caption{\label{fig:fig1} Numerical simulations of the SIS model. (inset) Infected hosts (I) recover to the susceptible state (S) with rate $\gamma$ (left). Adequate interaction between an infected host and a susceptible one may trigger a new infection with rate $\alpha$ (right). Stochastic effects are far more prominent for small population sizes ($N=50$, $\gamma/\alpha = 1/2$), reducing the accuracy of compartmental equations. The time derivative of the density of infected $\rho$ extracted directly from data (crosses) using a forward derivative agrees with the RHS of Eq.~(\ref{eq:imp1}), as a function of the density and variance. The dashed line shows the expected RHS of the compartmental equation Eq.~(\ref{eq:compartmental}). The equation of motion (line) for $d\sigma^2/d\tau$ in Eq.~(\ref{eq:imp2}) also agrees with simulated data (circles). } \end{figure} The traditional formulation of the problem assumes that the random-mixing hypothesis (see Introduction) holds for a large population of size $N \gg 1$, composed of statistically equivalent individuals. Under these circumstances, the only relevant variable is the instantaneous density of infected elements $\rho(t)$, which means that fluctuations can be safely neglected. Furthermore, $\rho(t)$ decreases with rate $\gamma\,\rho$, where $\gamma$ is the recovery rate. 
New infections per unit of time (disease incidence) are proportional to $\alpha \rho(1-\rho)$, i.e., they depend on the chance that infected elements interact with susceptible ones, with intensity given by the transmission rate $\alpha$. This picture provides an interpretation in which $\rho(t)$ is continuously exchanged between two compartments, leading to a simple description called the compartmental equation: $ d\rho(t)/dt = \alpha \rho (1 -\rho) - \gamma \rho $. For the sake of convenience, redefine the timescale as $\tau \equiv \alpha t$ and $\rho_0 \equiv 1 - \gamma/\alpha$, so that \begin{equation} \label{eq:compartmental} \frac{d}{d \tau}\rho(\tau) = \rho (\rho_0 -\rho). \end{equation} Clearly, the equilibrium density can either be $\rho_{\textrm{eq}} =0$ or $\rho_{\textrm{eq}} = \rho_0$. Also, $\rho_0$ is related to the basic reproduction number $R_0 = N (\alpha/\gamma) $, which provides an estimate of the number of new infections per generation \cite{caliriJBioPhys2003}. Over their long history, compartmental equations have met considerable success in predicting the time evolution of disease outbreaks, providing valuable insights for intervention strategies and funding allocation \cite{murray}. However, outbreaks that fail to meet the underlying hypotheses (random mixing and a large population of statistically equivalent elements) can contradict compartmental equations. These inconsistencies are largely attributed to stochastic effects and their inherent fluctuations \cite{heesterbeekScience2015}. \textit{Improved compartmental equations.} Stochastic variables are known to cause the emergence of critical phenomena in computer simulations of epidemic models, under certain parameter ranges \cite{rhodesProcRSocB1997,rhodesTheorPopulBio1997}. One key ingredient common to almost all critical phenomena is the scale invariance of fluctuations \cite{chialvoPhysRevLett2017,stanleyRevModPhys1999}. 
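As a numerical aside on Eq.~(\ref{eq:compartmental}): a forward-Euler sketch (the initial densities, $\rho_0$, integration window, and step size below are arbitrary choices, not values from the text) confirms that any positive initial density is driven to the nontrivial equilibrium $\rho_{\textrm{eq}} = \rho_0$:

```python
def integrate_compartmental(rho_init, rho0, tau_max=50.0, dtau=1e-3):
    """Forward-Euler integration of d(rho)/d(tau) = rho * (rho0 - rho)."""
    rho = rho_init
    for _ in range(int(tau_max / dtau)):
        rho += dtau * rho * (rho0 - rho)
    return rho

# Starting below or above rho0, the density settles at rho_eq = rho0.
print(round(integrate_compartmental(0.01, 0.5), 6))  # 0.5
print(round(integrate_compartmental(0.80, 0.5), 6))  # 0.5
```

Starting exactly at $\rho = 0$, the same scheme stays at the trivial equilibrium, mirroring the two fixed points noted above.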
This special symmetry lies at the foundation of cooperative phenomena and critical phase transitions, whose contributions span a broad set of research fields such as condensed matter, quantum field theory, and neuroscience, to name a few \cite{nakamuraJPhysA2010,wilsonPhysRevD1974,kogutRevModPhys1979,chialvoPhysRevLett2005,bialekNature2006}. In these special systems, fluctuations of descriptive variables occur on all scales and, ultimately, dictate the general behavior of the problem. It thus begs the question: if critical behavior has been observed previously in disease outbreaks \cite{rhodesProcRSocB1997}, why have fluctuations been neglected in the mathematical modeling of epidemics? So far, the effects of stochastic fluctuations on general epidemics remain poorly understood. New experiments on this subject provide evidence that temporal fluctuations can drastically alter the prevalence of pathogens \cite{duncanProcRSocB2013}. Spatial heterogeneity also introduces an extra layer of complexity, as it may trap or delay pathogen transmission \cite{biekJRSoc2007}. As a result, the requirement of statistical equivalence may not hold at all scales. To deal with this issue, stochastic formulations and numerical simulations have been the default tools to investigate fluctuations in disease outbreaks. Our discussion assumes that the disease spreading follows a Markov chain in discrete time $\delta t$. Moreover, $\delta t$ is such that at most a single recovery or transmission event is likely to occur during its duration. Under these requirements, the master equation of the SIS model in discrete time reads \begin{equation} \label{eq:master} \frac{d P_{\mu}(t)}{dt} = - \sum_{\nu=0}^{2^N-1}H_{\mu\nu}P_{\nu}(t). \end{equation} Here, $P_{\mu}(t)$ refers to the instantaneous probability of observing the system in the $\mu$-th configuration. 
Configuration labels follow the binary ruling $\mu = n_0 2^0+ n_1 2^1+ \cdots + n_{N-1} 2^{N-1}$, where $n_k = 1$ if the $k$-th agent is infected, or $n_k=0$ otherwise, with $k=0,1,\ldots,N-1$. For instance, for $N=3$, the configuration $\lvert \mu = 3 \rangle = \lvert 110 \rangle$ states that only the agent with label $k=2$ is susceptible. The matrix elements $H_{\mu\nu}$ express the transition rates from configuration $\nu$ to configuration $\mu$. By virtue of probability conservation, in each time step the transition rules satisfy $\sum_{\mu}H_{\mu\nu} =0$. The matrix elements $H_{\mu\nu} = \langle \mu | \hat{H} | \nu\rangle$ are computed from projections on the time step operator \begin{equation} \hat{H} = \frac{\alpha}{N}\sum_{k,\ell=0}^{N-1}A_{k\ell}(1 -\hat{n}_k-\hat{\sigma}^+_k)\hat{n}_{\ell}+ \gamma\sum_{k=0}^{N-1}(\hat{n}_k-\hat{\sigma}^-_k), \end{equation} where $A_{k\ell}$ is the adjacency matrix, $\hat{n}_k$ represents the $k$-th occupation operator (with eigenvalues $n_k=1$ if infected, $0$ otherwise), and $\hat{\sigma}_k^{+}$ are the localized spin-$1/2$ ladder operators that produce the transition $S\rightarrow I$. Likewise, $\hat{\sigma}_k^{-}$ produces the opposite transition, $I\rightarrow S$, for the $k$-th agent. As a notational convention, the hat symbol always accompanies operators, to quickly distinguish them from numbers. The master equation Eq.~(\ref{eq:master}) provides the means to evaluate the time evolution of the relevant statistical moments of $\rho(t)$. Notice that the average density of infected agents in the system reads $ \langle \rho(t)\rangle = (1/N)\sum_{\mu=0}^{2^N-1} \sum_{k=0}^{N-1} \langle \mu \rvert \hat{n}_k \lvert \mu \rangle P_{\mu}(t)$. Taking the time derivative and using Eq.~(\ref{eq:master}), one arrives at the equation of motion for $\langle \rho(t) \rangle$. Useful expressions are known only for a few types of adjacency matrix $A$. 
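To make the configuration labeling and the generator concrete, the following sketch builds the full $2^N \times 2^N$ matrix $H_{\mu\nu}$ for a complete-graph adjacency ($A_{k\ell} = 1$ for $k \neq \ell$); the function name and parameter values are illustrative, not from the text, and the column-sum check mirrors the probability-conservation rule $\sum_{\mu} H_{\mu\nu} = 0$:

```python
def sis_generator(N, alpha, gamma):
    """Generator H of dP/dt = -H P for the SIS model on a complete graph.

    Configurations are labeled by the binary ruling mu = sum_k n_k * 2**k,
    e.g. |110> (agents k = 0, 1 infected) corresponds to mu = 3 for N = 3.
    """
    dim = 2 ** N
    H = [[0.0] * dim for _ in range(dim)]
    for nu in range(dim):
        bits = [(nu >> k) & 1 for k in range(N)]
        for k in range(N):
            if bits[k]:                       # recovery I -> S at agent k
                rate, mu = gamma, nu & ~(1 << k)
            else:                             # infection S -> I at agent k
                rate = alpha / N * sum(bits[l] for l in range(N) if l != k)
                mu = nu | (1 << k)
            H[mu][nu] -= rate                 # inflow into mu ...
            H[nu][nu] += rate                 # ... balanced by outflow from nu
    return H

H = sis_generator(3, 1.0, 0.5)
# Probability conservation: every column of H sums to zero.
print(all(abs(sum(row[nu] for row in H)) < 1e-12 for nu in range(8)))  # True
```

A general adjacency matrix would simply replace the neighbor sum in the infection rate.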
The simplest ones are proportional to $A_{k\ell} = 1-\delta_{k\ell}$, which recovers the random-mixing hypothesis. In those particular instances, the complete time evolution of the system comprises a hierarchical set of equations involving the statistical moments of $\rho(t)$, as shown in Ref.~\cite{nakamuraSciRep2017}. More explicitly \cite{nakamuraArxiv2018}, the first two equations for the instantaneous mean $\langle \rho \rangle$ and variance $\sigma^2 = \langle \rho^2\rangle - \langle \rho \rangle^2$ are \begin{subequations} \label{eq:imp} \begin{align} \label{eq:imp1} \frac{d\langle \rho \rangle}{d \tau} &= \langle \rho \rangle \left[ \rho_0 -\langle \rho \rangle \right] - \sigma^2(\tau),\\ \label{eq:imp2} \frac{d \sigma^2}{d \tau} & = 2\sigma^2\left[ \rho_0 +\langle \rho \rangle \right] - 2\Delta_3(\tau) +\frac{1}{N}\langle \rho (1-\rho)\rangle + \frac{\gamma}{N\alpha}\langle\rho\rangle, \end{align} \end{subequations} where $\Delta_3(\tau) = \langle \rho^3(\tau)\rangle - \langle \rho(\tau)\rangle^3$. These results are in excellent agreement with simulated data using an ensemble of $10^6$ replicas starting from the same initial condition (see Fig.~\ref{fig:fig1}). Comparing Eqs.~(\ref{eq:compartmental}) and (\ref{eq:imp1}), the equation that accounts for temporal fluctuations decays faster than the compartmental one by $\sigma^2(\tau)$, even in the regime $N \gg 1$. Both equations are equivalent whenever $\sigma^2(\tau)$ becomes negligible compared to $\langle \rho \rangle$. Therefore, a generalization of the compartmental equations for the SIS model is readily available by retaining both mean and variance, while neglecting higher statistical moments. Thus, the dynamical system describes a Gaussian variable evolving in time. 
The skewness coefficient vanishes as a direct consequence of this assumption, so that $\Delta_3(\tau) \approx 3 \langle \rho(\tau)\rangle \sigma^2(\tau).$ For $N\gg 1$, the resulting equations are \begin{subequations} \label{eq:system} \begin{align} \label{eq:improved1} \frac{d}{d\tau}\ln \langle \rho\rangle &= \rho_{0}-\langle \rho\rangle - \frac{\sigma^2}{ \langle \rho\rangle},\\ \label{eq:improved2} \frac{1}{2}\frac{d}{d\tau}\ln\sigma^2 &=\rho_0 - 2\langle \rho\rangle. \end{align} \end{subequations} We emphasize that the variance in Eq.~(\ref{eq:improved1}) slows down the growth rate of $\langle \rho (\tau)\rangle$, recalling the Allee effect \cite{murray,ribeiroJTheoBio2015}. Equations~(\ref{eq:system}) can be further combined into a single second-order differential equation \cite{nakamuraArxiv2018}, with solution \begin{subequations} \label{eq:sol} \begin{align} \label{eq:sol1} \langle \rho(\tau)\rangle &= \frac{{\rho_0}\left(1+c_1 \mathrm{e}^{- \rho_0 \tau}\right)}{1+2c_1 \mathrm{e}^{- \rho_0 \tau} + c_2 \mathrm{e}^{-2 \rho_0 \tau}},\\ \label{eq:sol2} \sigma^2(\tau) &= \frac{\langle\rho (\tau)\rangle^2(c_1^2-c_2) \mathrm{e}^{-2\rho_0 \tau}}{\left( 1 + c_1 \mathrm{e}^{- \rho_0 \tau}\right)^2}. \end{align} \end{subequations} The constants $c_1$ and $c_2$ depend solely on the initial conditions. The special case $c_2= c_1^2$ recovers the usual solution of Eq.~(\ref{eq:compartmental}). We have assumed that the fluctuations are Gaussian. While reasonable in various situations, the assumption does not hold for $\gamma/\alpha$ around unity or for small population sizes, according to numerical simulations \cite{nakamuraArxiv2018}; in those cases, Eq.~(\ref{eq:imp2}) should be used instead of Eq.~(\ref{eq:improved2}). 
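The closed-form pair can be checked against the dynamical system directly. In the sketch below, the constants $c_1 = 4$ and $c_2 = 2$ are arbitrary illustrative choices (subject only to $c_1^2 > c_2$), and central differences of Eqs.~(\ref{eq:sol}) reproduce the right-hand sides of Eqs.~(\ref{eq:system}):

```python
import math

RHO0, C1, C2 = 0.5, 4.0, 2.0  # illustrative constants with C1**2 > C2

def mean_var(tau):
    """Closed-form <rho>(tau) and sigma^2(tau)."""
    u = math.exp(-RHO0 * tau)
    mean = RHO0 * (1.0 + C1 * u) / (1.0 + 2.0 * C1 * u + C2 * u * u)
    var = mean**2 * (C1**2 - C2) * u * u / (1.0 + C1 * u) ** 2
    return mean, var

def residuals(tau, h=1e-6):
    """Central-difference residuals of the mean and variance equations."""
    m, v = mean_var(tau)
    dm = (mean_var(tau + h)[0] - mean_var(tau - h)[0]) / (2.0 * h)
    dv = (mean_var(tau + h)[1] - mean_var(tau - h)[1]) / (2.0 * h)
    return dm - (m * (RHO0 - m) - v), dv - 2.0 * v * (RHO0 - 2.0 * m)
```

Both residuals vanish up to the finite-difference error, for any $\tau$ in the relaxation window.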
\textit{Hamilton's equations.} The fact that the dynamical system Eq.~(\ref{eq:system}) can be combined into a single second-order differential equation suggests an interpretation of the epidemic model in terms of Hamilton's equations \cite{goldsteinClassicalMechanics}. Hamiltonian systems are ubiquitous in Physics, serving as the basis to describe and explain countless physical phenomena. The hallmark of such systems is the set of Hamilton equations: \begin{subequations} \label{eq:hami} \begin{align} \frac{d q}{d \tau} & = \;\,\,\frac{\partial \mathcal{H}}{\partial p},\\ \frac{d p}{d \tau} & = -\frac{\partial \mathcal{H}}{\partial q}, \end{align} \end{subequations} where $q(t)$ and $p(t)$ are conjugate variables, and the Hamiltonian function $\mathcal{H}$ encodes some information about the problem -- usually associated with energy for conservative systems, but not restricted to them. Besides classical mechanics and related areas, quantum field theories and statistical mechanics are deeply intertwined with Hamilton's principle and the Liouville theorem. Despite its usefulness in Physics, the Hamiltonian formulation and its surrounding principles are rarely used in population dynamics, ecological problems, or epidemic models, where first-order differential equations are dominant. The lack of second-order differential equations in these areas, although not prohibitive, raises questions about the description of the dynamics, as discussed extensively in Ref.~\cite{chester2011}; in part, because it means that some interactions and forces acting on the system remain unaccounted for. In a true Hamiltonian formulation, stochastic events may produce counterintuitive effects, such as noise-induced metastable states \cite{parkerPhysRevLett2011}. In view of the inherent stochasticity behind disease spreading and Eqs.~(\ref{eq:system}), it seems necessary to determine whether the SIS model is a Hamiltonian system or not. 
A brief inspection shows that the pair $(\langle \rho \rangle, \sigma^2)$ does not satisfy the usual Hamilton equations. The solution to this issue is obtained by assuming, instead, that the correct conjugate pair is $(\langle \rho \rangle, h(\sigma^2))$, where $h(x)$ is some analytical function. Inspiration from common pairs of conjugate variables can be used to refine the choice of $h(x)$. For instance, the product $\langle \rho \rangle\times h(\sigma^2)$ should be dimensionless, in close analogy to the scalar product between position and wave vectors. One possible candidate is $h(x) = x^{-1/2}$, which entails $1/\sigma$ as the variable conjugate to $\langle \rho \rangle$. Define the dynamical variables $q(\tau) = \langle \rho (\tau)\rangle$ and $p(\tau) = 1/\sigma(\tau)$ to describe the SIS model. In addition, consider the following Hamiltonian \begin{equation} \label{eq:hamiltonian} \mathcal{H} = q(\tau)p(\tau)\left[ \rho_0 - q(\tau) \right] + \frac{1}{p(\tau)}. \end{equation} Plugging these expressions into Eqs.~(\ref{eq:hami}), one obtains the equations of motion: \begin{subequations} \begin{align} \frac{d q}{d \tau} &= q(\rho_0 - q) - \frac{1}{p^2} \equiv \langle \rho \rangle[\rho_0 - \langle \rho \rangle] - \sigma^2,\\ \frac{d p}{d \tau} &= -p(\rho_0 - 2 q) \equiv -\frac{1}{\sigma}[\rho_0 - 2\langle \rho \rangle]. \end{align} \end{subequations} Thus, at first glance $\mathcal{H}$ appears to be a valid candidate to describe the SIS model. Moreover, replacing $(q,p)$ by Eq.~(\ref{eq:sol}) in Eq.~(\ref{eq:hamiltonian}) shows that the Hamiltonian is a constant of motion, $\mathcal{H}^{\infty} = \rho_0 c_1(c_1^2-c_2)^{-1/2}$. The upper index in $\mathcal{H}^{\infty}$ is a reminder that the calculation takes place in the absence of finite-size corrections. \begin{figure} \includegraphics[width=0.95\columnwidth]{fig2.eps} \caption{\label{fig:fig2} Finite-size effects on the Hamiltonian. Simulated data with $N=50$ and $10^6$ Monte Carlo runs for various ratios $\gamma/\alpha$. 
(inset) Initial decay of $\mathcal{H}$ compatible with a power law, $\mathcal{H} \sim \tau^{-\lambda}$. The exponent $\lambda = 1/2$ remains constant for different ratios $\gamma/\alpha$, suggesting a universal behavior.} \end{figure} However, taking finite-size corrections into account drastically changes the notion of $\mathcal{H}$ as a constant of motion. In fact, as Fig.~\ref{fig:fig2} depicts, numerical simulations for finite populations reveal that $\mathcal{H}$ changes continuously over time until equilibrium sets in, akin to a non-conservative system. A precise meaning of $\mathcal{H}$ in the epidemiological context is still murky, at best. A detailed analysis of correlations between changes in $\mathcal{H}$ and the spreading pattern of real outbreaks is mandatory to understand the action-reaction analogy. In the meantime, it is instructive to study $\mathcal{H}$ for $\tau \ll 1$ and $\tau \gg 1$ (see Fig.~\ref{fig:fig2}). For $\tau \ll 1$, where incidentally fluctuations vary the most (see Fig.~\ref{fig:fig1}), a remarkable feature appears via the relation $\mathcal{H}\sim \tau^{-\lambda}$ with $\lambda = 1/2$. In particular, the exponent $\lambda$ seems insensitive to changes in the epidemiological parameter $\gamma$. This parameter-free behavior is not observed for the remaining statistics, $\langle \rho(\tau)\rangle$ and $\sigma(\tau)$. Power laws are crucial to identify scaling relations and the emergence of universal features, and they are usually related to the symmetry of the problem rather than to microscopic details. Here, evidence of universal behavior is captured by the data collapse of $\mathcal{H}/\rho_0^2$ (not shown). From these observations, we infer that fluctuations play a larger role in the early stages of disease spreading, being largely independent of the exact values of the epidemiological parameters. For $\tau \gg 1$, an effective decay $\textrm{e}^{-\tau/\tau_{\textrm{eff}}}$ describes the general behavior of $\mathcal{H}$.
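In the idealized $N\gg 1$ limit, by contrast, the conservation of $\mathcal{H}$ under the equations of motion can be confirmed by direct numerical integration. The sketch below (ours, not from the paper) integrates Hamilton's equations for Eq.~(\ref{eq:hamiltonian}) with a standard RK4 scheme; $\rho_0$, the initial conditions, and the step size are illustrative choices.

```python
# Verify numerically that H = q p (rho0 - q) + 1/p is conserved
# under dq/dtau = dH/dp and dp/dtau = -dH/dq (illustrative parameters).

def hamiltonian(q, p, rho0=0.5):
    return q * p * (rho0 - q) + 1.0 / p

def flow(q, p, rho0=0.5):
    dq = q * (rho0 - q) - 1.0 / p**2   # dH/dp
    dp = -p * (rho0 - 2.0 * q)         # -dH/dq
    return dq, dp

def rk4_step(q, p, dt, rho0=0.5):
    k1q, k1p = flow(q, p, rho0)
    k2q, k2p = flow(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p, rho0)
    k3q, k3p = flow(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p, rho0)
    k4q, k4p = flow(q + dt * k3q, p + dt * k3p, rho0)
    q += dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6.0
    p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    return q, p

q, p = 0.1, 10.0                # <rho> = 0.1, sigma = 0.1 (illustrative)
h0 = hamiltonian(q, p)
for _ in range(500):            # integrate up to tau = 0.5
    q, p = rk4_step(q, p, 1e-3)
drift = abs(hamiltonian(q, p) - h0)
```

The drift in $\mathcal{H}$ stays at the level of the integrator's truncation error, consistent with a genuine constant of motion in the absence of finite-size corrections.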
The relaxation time $\tau_{\textrm{eff}}$ depends on $N$ and on the ratio $\gamma/\alpha$, and it can be estimated from data by fitting $\mathcal{H}$ to an exponential function plus a constant. Alternatively, it can be evaluated as \begin{equation} \tau_{\textrm{eff}} = \frac{1}{\mathcal{H}(0)}\int_{0}^{\infty} d\tau\, [\mathcal{H}(\tau)-\mathcal{H}(\infty)]. \end{equation} From a formal point of view, the evaluation of $\tau_{\textrm{eff}}$ requires plugging the solutions of Eqs.~(\ref{eq:imp1}) and (\ref{eq:imp2}) into Eq.~(\ref{eq:hamiltonian}), followed by an integration. Admittedly, this procedure is more demanding than estimating $R_0$. However, as others have reasoned before, $R_0$ provides a naive estimate of the number of secondary infections because the growth rate of the outbreak changes continuously over time \cite{feffernanJRSoc2005}. In contrast, $\tau_{\textrm{eff}}$ mimics a constant of motion. \textit{Lagrangian.} Another insight from $\tau_{\textrm{eff}}$ links the temporal integral of $\mathcal{H}$ with the mechanical action $S$. A formal connection with $S$ is desirable because it brings in a large machinery revolving around variational principles and conservation laws. However, the action $S = \int d\tau \mathcal{L}(q,\dot{q};\tau)$ is a functional of the Lagrangian $\mathcal{L}$. It turns out that $\mathcal{L}$ can be obtained from $\mathcal{H}$ by inspection. From Eqs.~(\ref{eq:hamiltonian}) and (\ref{eq:improved1}), $\mathcal{H}$ takes the following form: $ \mathcal{H} = p\left[ q(\rho_0 - q) + p^{-2} \right]= p (d q/d\tau) + 2/p $. Recalling the formal expression $\mathcal{H} = p\dot{q} - \mathcal{L}$, it becomes clear that \begin{equation} \label{eq:lagrangian} \mathcal{L} = -\frac{2}{p} =-2 \sigma(\tau)= -2 \sqrt{q(\rho_0 - q) -\frac{d q}{d\tau}}, \end{equation} where we have used Eq.~(\ref{eq:improved1}) and considered only the positive root.
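The Legendre-transform identity $\mathcal{L} = p\dot{q} - \mathcal{H} = -2/p$ can be spot-checked numerically from the definitions above. The values below are illustrative choices of ours, not taken from the paper:

```python
# Spot-check L = p*qdot - H = -2/p for the SIS Hamiltonian.
rho0 = 0.5
q, p = 0.1, 10.0               # illustrative state: <rho> = 0.1, sigma = 1/p = 0.1

# dq/dtau from Hamilton's equations
qdot = q * (rho0 - q) - 1.0 / p**2

# H = q p (rho0 - q) + 1/p, then the Legendre transform
H = q * p * (rho0 - q) + 1.0 / p
L = p * qdot - H               # should equal -2/p = -2*sigma
```

For this state $\sigma = 1/p = 0.1$, so $\mathcal{L} = -2\sigma = -0.2$, in agreement with all three expressions in the displayed equation.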
Thus, $\mathcal{L}$ is proportional to the standard deviation, while the action entails the accumulated deviation over the course of the outbreak. As a consistency check, for large populations $N\gg 1$ the minimal action recovers Eq.~(\ref{eq:compartmental}), as expected for a noise-free system. In general, the equation of motion reads \begin{equation} \frac{d^2q}{d \tau^2} = 3 (\rho_0-2 q)\left[ \frac{d q}{d\tau} - \frac{2}{3}q(\rho_0-q) \right]. \end{equation} The fact that $\mathcal{L}$ contains solely the standard deviation allows us to understand how to add uncorrelated fluctuations to the model. By virtue of $\textrm{Var}[x+y] = \textrm{Var}[x]+\textrm{Var}[y]$ for uncorrelated random variables $x$ and $y$, the perturbed Lagrangian can be obtained by adding an external variance $\sigma_{\textrm{ext}}^2(\tau)$ to the variance of the system $\sigma^2(\tau)$: \begin{equation} \mathcal{L}'=-2 \sqrt{q(\rho_0 - q) -\frac{d q}{d\tau} + \sigma_{\textrm{ext}}^2(\tau)}. \end{equation} This picture is consistent with the addition of a noise function $\sigma_{\textrm{ext}}^2(\tau)$ to Eq.~(\ref{eq:improved1}). The perturbed Lagrangian $\mathcal{L}'$ describes, ultimately, the time evolution of the disease prevalence in noisy environments. Note that this description differs from the usual derivation of Langevin equations, in which the noise function (force) $r(\tau)$ couples linearly with $q$, i.e., $\mathcal{L}'=\mathcal{L}-r(\tau) q(\tau)$. By the same token, the addition of correlated signals $\eta(\tau)$ to the Lagrangian entails corrections from the covariance matrix: since $\textrm{Var}[X+Y]=\textrm{Var}[X]+\textrm{Var}[Y]+2\textrm{Cov}[X,Y]$, then $\mathcal{L}'=-2 \sqrt{\sigma_{\rho}^2 +\sigma_{\eta}^2 + 2 \textrm{Cov}[\rho,\eta]}$. The covariance matrix can be estimated or modeled directly from data, promoting further understanding of the spreading of co-existing diseases, where facilitation or competition processes are in place.
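The second-order equation of motion quoted above can be cross-checked against Hamilton's equations: differentiating $dq/d\tau$ once along the flow and substituting $dp/d\tau$ must reproduce the right-hand side. A numerical spot check (illustrative state values of ours):

```python
# Cross-check d^2q/dtau^2 = 3 (rho0 - 2q) [dq/dtau - (2/3) q (rho0 - q)]
# against the first-order Hamilton equations.
rho0 = 0.5
q, p = 0.1, 10.0

# Hamilton's equations
qdot = q * (rho0 - q) - 1.0 / p**2
pdot = -p * (rho0 - 2.0 * q)

# d^2q/dtau^2 obtained by differentiating dq/dtau along the flow:
# d/dtau [q(rho0 - q) - 1/p^2] = (rho0 - 2q) qdot + (2/p^3) pdot
qddot = (rho0 - 2.0 * q) * qdot + (2.0 / p**3) * pdot

# Right-hand side of the quoted equation of motion
rhs = 3.0 * (rho0 - 2.0 * q) * (qdot - (2.0 / 3.0) * q * (rho0 - q))
```

Both expressions agree to machine precision, confirming the quoted equation is consistent with the first-order system.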
With both Hamiltonian and Lagrangian formalisms secured, canonical transformations become available. These transformations are particularly useful to highlight properties of dynamical systems and to solve them. They change the old variables $(q,p)$ into new variables $(Q,P)$, while preserving Hamilton's equations. Since a large number of transformations is available, it would be impossible to cover all of them here. Instead, we show that at least one canonical transformation exists, and that it promotes the interpretation of the stochastic spreading process as an effective mechanical system. Consider: $P_1(t) = 2 p^{1/2} q $ and $Q_1(t) = - p^{1/2}$. The Poisson bracket $\{Q_1,P_1 \}_{q,p} = (\partial Q_1/\partial q)(\partial P_1/\partial p) - (\partial Q_1/\partial p)(\partial P_1/\partial q) = 1$ shows the transformation is canonical. Setting $m=2$, the Hamiltonian in terms of the canonical variables $(Q_1,P_1)$ becomes \begin{equation} -\mathcal{H}_1 = \frac{1}{2m} \left(P_1+\rho_0 Q_1 \right)^2 - \frac{\rho_0^2Q_1^2}{2m} - \frac{1}{Q_1^2}. \end{equation} One may interpret $-\mathcal{H}_1$ as the Hamiltonian of an effective one-dimensional mechanical problem, in which the particle has mechanical momentum $P_1(\tau)$ and generalized coordinate $Q_1(\tau)$, subjected to a velocity-dependent potential. \textit{Conclusion.} The description of several real-world problems often involves stochastic fluctuations. The SIS epidemic model includes them due to uncertainties associated with pathogen transmission. For small fluctuation amplitudes, $\langle \rho(\tau)\rangle$ and $\sigma^2(\tau)$ are adequate descriptors. Our findings demonstrate that $\langle \rho(\tau)\rangle$ and $1/\sigma(\tau)$ are conjugated variables, and that they satisfy Hamilton's equations. These results link the stochastic SIS epidemic model with a purely dynamical system, which can be solved and manipulated using standard analytical tools. We find the Hamiltonian is a constant of motion for $N\gg 1$.
However, finite-size effects break the temporal symmetry of the system: $\mathcal{H}\sim \tau^{-1/2}$ follows a power law around the outbreak onset. A clear explanation for this scaling is still lacking. The relaxation time $\tau_{\textrm{eff}}$ portrays the decay of $\mathcal{H}$ until equilibrium sets in, meaning that it can also be used to characterize the SIS epidemic. Unlike popular estimates of the epidemic growth rate, such as $R_0$, $\tau_{\textrm{eff}}$ remains constant over time and can be extracted from data via $\mathcal{H}$. Finally, our results also suggest a way to incorporate interactions into the SIS model via the Lagrangian function. This finding has intriguing implications for our understanding of facilitation-competition mechanisms between co-occurring diseases, since it does not replicate the canonical procedure to obtain Langevin equations. \begin{acknowledgments} The authors acknowledge funding from CNPq 307948/2014-5 and Capes 88887.136416/2017-00. \end{acknowledgments}
\section*{Acknowledgements} \small{This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) under contract number 2017-17020200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. This work was supported by the German Federal Ministry of Education and Research (BMBF) as well as by the Hessen State Ministry for Higher Education, Research and the Arts (HMWK) within the Center for Research in Security and Privacy (CRISP, \url{www.crisp-da.de}), and by the projects Cognimetrics (TEC2015-70627-R MINECO/FEDER) and Bio-Guard (Ayudas Fundacion BBVA a Equipos de Investigacion Científica 2017). This work was carried out during an internship of R. Tolosana at da/sec. R. Tolosana is supported by a FPU Fellowship from Spanish MECD. } \bibliographystyle{IEEEtran} \section{Presentation Attack Detection Methodology: Hardware} \label{sec:swir} The finger SWIR capture device used for the present work was developed within the BATL project \cite{BATL} in cooperation with our project partners. A general diagram of its inner components is included in Fig.~\ref{fig:sensor} (a). As it may be observed, the camera and lens are placed inside a closed box, which includes an open slot on the top. When the finger is placed there, all ambient light is blocked and therefore only the desired wavelengths are used for the acquisition. In particular, we have used a Hamamatsu InGaAs SWIR sensor array, which captures $64 \times 64$ px images, with a 25 mm fixed focal length lens optimised for wavelengths within 900 -- 1700 nm. 
More specifically, the following SWIR wavelengths were selected for PAD purposes: $\lambda_1 = 1200$ nm, $\lambda_2 = 1300$ nm, $\lambda_3 = 1450$ nm, and $\lambda_4 = 1550$ nm. These are similar to the wavelengths considered in \cite{Steiner-facePADswir-Sensors-2016} for the skin vs.\ non-skin facial classification. An example of the acquired images for a bona fide sample is shown in Fig.~\ref{fig:sensor} (b) for the 1200 nm wavelength. As it may be observed, before applying any PAD algorithm, the region of interest (ROI) (i.e., the central finger-slot region corresponding to the open slot where the finger is placed) needs to be extracted from the background. Given that the finger is always placed over the fixed open slot, and the camera does not move, a simple fixed size cropping can be applied. The ROI corresponding to Fig.~\ref{fig:sensor} (b) with a size of $18 \times 58$ px is depicted in Fig.~\ref{fig:sensor} (c). Finally, the four samples acquired from two bona fides (a, b) and three PAIs (c to e) fabricated with different materials are included in Fig.~\ref{fig:input_DNN}: (c) a full yellow playdoh finger, (d) a monster latex overlay, and (e) a glue overlay. As it may be observed, the playdoh finger shows some similarities with respect to the bona fide presentations (i.e., a similar change of intensity across wavelengths), which will make the PAD task harder. However, the change trend is completely different for the other two PAIs, thereby making it easier to discriminate them from bona fide presentations. In addition to the SWIR images captured by the device, fingerprint verification can be carried out with contactless finger photos acquired in the visible spectrum with a 1.3 MP camera and a 35 mm VIS-NIR lens, which are placed next to the SWIR sensor within the closed box (see Fig.~\ref{fig:sensor} (a)). 
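Since both the camera and the open slot are fixed, extracting the ROI reduces to a constant-index crop of each $64 \times 64$ px frame. A minimal sketch (only the $18 \times 58$ px ROI size is from the text; the row/column offsets are hypothetical values of ours chosen for illustration):

```python
import numpy as np

# Fixed-size ROI crop: sensor frames are 64 x 64 px, the ROI is 18 x 58 px.
# ROW0 / COL0 are hypothetical offsets, NOT the device's actual calibration.
ROW0, COL0 = 23, 3
ROI_H, ROI_W = 18, 58

def crop_roi(frame):
    """Extract the fixed finger-slot region from a 64 x 64 SWIR frame."""
    return frame[ROW0:ROW0 + ROI_H, COL0:COL0 + ROI_W]

frame = np.zeros((64, 64))
roi = crop_roi(frame)
```

Because no segmentation is involved, the same slice applies identically to all four wavelength images of a sample.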
Note that the project sponsor IARPA has indicated that they will make the SWIR finger database available in the near future, so that the research results presented in this paper can be reproduced. \section{Presentation Attack Detection Methodology: Software} \label{sec:pad} This section describes the state-of-the-art software methods proposed in order to detect fingerprint PAs, as summarised in Fig.~\ref{fig:diagram}. Two different approaches are studied: $i)$ handcrafted features, and $ii)$ deep learning features. For both approaches, the information provided by the sensor described in Sect.~\ref{sec:swir} is used as input. In general, it should be noted that each individual score $s_i$ generated by the individual PAD algorithms needs to be transformed into a single range to allow the final fusion and a fair benchmark. In compliance with the ISO/IEC 30107-2 standard on biometric presentation attack detection -- Part 2: data formats \cite{ISO-IEC-30107-2-PAD-data-format-170328}, we define $s_i \in [0, 100]$, where values close to 0 represent bona fide presentations and values close to 100 denote presentation attacks. \subsection{Handcrafted Features} \label{sec:pad:svm} As first proposed in \cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018}, this method builds upon the spectral signatures of the pixels across all four acquired wavelengths, in order to capture the different properties attributed to skin (i.e., bona fide presentation) and non-skin (i.e., PAI) materials. In particular, let us define the spectral signature $\mathbf{ss}$ of a pixel with coordinates $\left( x, y\right)$ as follows: \begin{equation} \mathbf{ss}\left(x, y\right) = \left(i_1, \dots, i_N\right) \end{equation} where $i_n$ represents the intensity value of the pixel for the $n$th wavelength $\lambda_n$. In our particular case study, $N = 4$. However, such a representation is vulnerable to illumination changes.
Even though illumination changes are minimised by the sensor design, since only the finger slot is open to the outer world, thinner fingers can, for instance, let tiny amounts of light through. As a consequence, in order to achieve a signature independent of the absolute brightness of the image at hand, a normalised signature is computed. In addition, since our final goal is to capture the distinct trends across different wavelengths shown in Fig.~\ref{fig:input_DNN} for the bona fides and the PAIs, only differences between wavelengths will be used as final handcrafted features. Therefore, the final normalised difference vector $\mathbf{d}\left( x, y\right) $ of one pixel is computed as follows: \begin{eqnarray} d\left[i_a, i_b\right] &=& \frac{i_a - i_b}{i_a + i_b} \\ \mathbf{d}\left( x, y\right) &=& \left\lbrace d\left[ i_a, i_b\right] \right\rbrace_{a, b \le N, a \ne b} \end{eqnarray} with $ -1 \le d\left[i_a, i_b\right] \le 1$. In other words, the normalised differences between all possible wavelength combinations are computed. For our case study with $N = 4$, a total of six differences are calculated. These normalised difference vectors $\mathbf{d}\left(x, y\right)$ will be used to classify skin vs.\ non-skin pixels using an SVM. The procedure so far performs a pixel-wise classification. Hence, the final score $s_\text{ss}$ returned by the PAD method will be the proportion of non-skin pixels of the sample ROI, in a range of 0 to 100. \subsection{Deep Learning Features} \label{sec:pad:DL_features} \begin{figure}[t] \centering \centerline{\includegraphics[width=0.80\linewidth]{fig/SWIR_RGB.pdf}} \caption{Examples of bona fides and PAs acquired by the SWIR sensor and the final RGB image created for the input of the deep neural network systems (see Eq.~\ref{eq:RGB}).} \label{fig:input_DNN}\vspace{-0.4cm} \end{figure} CNNs have been one of the most successful deep neural network architectures in recent years.
Some of their key design principles were drawn from the findings of the Nobel laureates David Hubel and Torsten Wiesel in the neurophysiology of human vision \cite{Goodfellow-et-al-2016}. Traditional (a.k.a. plain) CNN-based systems are mainly composed of convolutional and pooling layers. The former extract patterns from the images through the application of several convolutions in parallel to local regions of the images. These convolutional operations are carried out by means of different kernels, adapted by the learning algorithm, which assign a weight to each pixel of the local region of the image depending on the type of patterns to be extracted. Therefore, each kernel of one convolutional layer is focused on extracting different patterns, such as horizontal or vertical edges, over image patches whose size is determined by the dimension of the layer. The output of these operations produces a set of linear activations (a.k.a. a feature map), which serve as input to nonlinear activations, such as the rectified linear unit (ReLU). Finally, it is common to use pooling layers to make the representation invariant to small translations of the input. The pooling function replaces the output of the network at a certain region with a statistical summary of the nearby outputs, and facilitates the learning convergence. For instance, the max-pooling function selects the maximum value of the region. As summarised in Fig.~\ref{fig:diagram}, in this study we explore the potential of deep learning features in comparison to handcrafted features by means of two different strategies: $i)$ using CNNs as an end-to-end approach (i.e., for both feature extraction and classification), and $ii)$ using CNNs as feature extractors in combination with SVMs for classification. In addition, two different training scenarios have been analysed, namely: $i)$ training CNN models from scratch, and $ii)$ adapting CNN pre-trained models.
For the input of the networks, and in order to consider the information provided by the four wavelengths captured by the sensor, we need to build a single RGB image. To that end, each dimension or channel of the RGB space will comprise information stemming from different SWIR wavelengths or combinations thereof. To maximise the discriminative power of the input images, we analysed which wavelengths provided a higher inter-class (i.e., between bona fide and PA presentations) and a lower intra-class (i.e., within the bona fide presentation samples) variation in terms of the heatmaps of the differences between samples. That is, to estimate the inter-class variability we computed the pixel-wise difference of bona fide and PA samples, and for the intra-class variability, the differences between bona fide samples. The former should have high intensity values and the latter low values. After an exhaustive analysis of the different possible combinations, we defined the three dimensions as follows: \begin{equation} \mathbf{image}\left(R, G, B\right) = (\lambda_4 - \lambda_1, \lambda_4 - \lambda_2, \lambda_4 - \lambda_3)\label{eq:RGB} \end{equation} Fig.~\ref{fig:input_DNN} shows examples of bona fides and PAIs acquired by the SWIR sensor and the final RGB image created following Eq.~\ref{eq:RGB}. This RGB image will serve as input for the deep neural network systems. All strategies have been implemented under the Keras framework using TensorFlow as the back-end, with an NVIDIA GeForce GTX 1080 GPU. The Adam optimizer is used with a learning rate of 0.0001 and a loss function based on binary cross-entropy. We now describe the details of each of the deep learning strategies analysed in this work. \vspace*{0.2cm} \subsubsection{\textbf{Training CNN Models from Scratch}} \label{sec:pad:dl} The first approach is focused on training \textbf{residual CNNs} \cite{DBLP:journals/corr/HeZRS15} from scratch. These networks have outperformed traditional (a.k.a.
plain) networks in many different datasets such as ImageNet 2012 \cite{DBLP:journals/corr/RussakovskyDSKSMHKKBBF14}, CIFAR-10 \cite{CIFAR10}, PASCAL VOC 2007/2012 \cite{everingham2010pascal} and COCO \cite{DBLP:journals/corr/LinMBHPRDZ14}, for both image classification and object detection tasks. The peculiarity of this network is the insertion of shortcut connections every few stacked layers, converting the plain network into its residual version. This allows the use of deeper neural network architectures and significantly accelerates the training of the networks \cite{DBLP:journals/corr/HeZRS15, DBLP:journals/corr/SzegedyIV16}. Our proposed residual CNN is depicted in Fig.~\ref{fig:network_architectures} (left). Batch normalization (BN) is applied right after each convolution and before the activation, following \cite{DBLP:journals/corr/IoffeS15}. All activation functions are based on ReLU, apart from the sigmoid activation used in the last fully-connected layer, which provides output scores between 0 and 100. \begin{figure}[t] \centering \centerline{\includegraphics[width=\linewidth]{fig/architecture_networks.pdf}} \caption{Proposed network architectures. \textbf{Left:} the residual CNN trained from scratch using only the SWIR fingerprint database (319,937 parameters). \textbf{Middle:} the pre-trained MobileNet-based model (815,809 parameters). \textbf{Right:} the pre-trained VGG-19-based model (20,155,969 parameters). Both middle and right networks are adapted using transfer learning techniques over the last white-background layers.} \label{fig:network_architectures}\vspace{-0.4cm} \end{figure} \vspace*{0.2cm} \subsubsection{\textbf{Adapting Pre-Trained CNN Models}} \label{sec:pad:tl} The second approach evaluates the potential of state-of-the-art pre-trained models for fingerprint PAD.
In order to adapt the pre-trained models to our task, we replace and retrain the classifier (i.e., the fully-connected layers), and adapt the weights of the last convolutional layers to the fingerprint PAD task. The reason for adapting only the last convolutional layers lies in the fact that the first layers of a CNN extract more general features related to directional edges and colours, whereas the last layers of the network are in charge of extracting more abstract features related to the specific task. We propose to use both MobileNet and VGG-19 network architectures pre-trained on the ImageNet database \cite{DBLP:journals/corr/HowardZCKWWAA17, VGG19_2015}. This database contains more than one million images from 1000 different classes, thereby allowing the extraction of very robust features in the first layers \cite{DBLP:journals/corr/RussakovskyDSKSMHKKBBF14}. Fig.~\ref{fig:network_architectures} (middle) shows the architecture of our adapted \textbf{MobileNet} network. This architecture has been modified with respect to the original version by removing some of the last convolutional layers in order to reduce the complexity of the features extracted. Furthermore, the fully-connected layers designed for the ImageNet classification task have also been removed. This network is based on depthwise separable convolutions, which factorize a standard convolution into: $i)$ a depthwise convolution, and $ii)$ a $1\times1$ convolution called pointwise convolution. Therefore, the depthwise convolution applies a single filter to each input channel, and the pointwise convolution subsequently applies a $1\times1$ convolution to combine the outputs of the depthwise convolution \cite{DBLP:journals/corr/HowardZCKWWAA17}. Downsampling is directly applied by the convolutional layers that have a stride of 2 (represented by /2 in the convolutional layers of Fig.~\ref{fig:network_architectures}).
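The depthwise/pointwise factorization described above can be illustrated with a deliberately naive NumPy sketch (ours, not MobileNet's implementation; no stride or padding, 'valid' convolution only):

```python
import numpy as np

def depthwise_separable_conv(x, depth_k, point_w):
    """Naive 'valid' depthwise separable convolution.
    x:       (H, W, C) input
    depth_k: (kh, kw, C) -- one spatial filter per input channel (depthwise)
    point_w: (C, C_out)  -- 1x1 pointwise combination of the channels
    """
    H, W, C = x.shape
    kh, kw, _ = depth_k.shape
    oh, ow = H - kh + 1, W - kw + 1
    depthwise = np.zeros((oh, ow, C))
    for c in range(C):                      # single filter per channel
        for i in range(oh):
            for j in range(ow):
                depthwise[i, j, c] = np.sum(
                    x[i:i + kh, j:j + kw, c] * depth_k[:, :, c])
    return depthwise @ point_w              # 1x1 pointwise convolution

# Toy example: 3x3 two-channel input, 3x3 depthwise filters, 2 -> 1 channels
x = np.ones((3, 3, 2))
depth_k = np.ones((3, 3, 2))
point_w = np.array([[1.0], [1.0]])
y = depthwise_separable_conv(x, depth_k, point_w)
```

Splitting spatial filtering (per channel) from channel mixing (per pixel) is what makes these convolutions much cheaper than standard ones, both in parameters and in multiply-adds.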
This network architecture reduces both model size and training/testing times, thus being a good solution for mobile and embedded vision applications. It has been tested on different datasets such as ImageNet \cite{DBLP:journals/corr/RussakovskyDSKSMHKKBBF14}, PlaNet \cite{DBLP:journals/corr/WeyandKP16} and COCO \cite{DBLP:journals/corr/LinMBHPRDZ14} with state-of-the-art results. Finally, Fig.~\ref{fig:network_architectures} (right) shows the architecture of the adapted \textbf{VGG-19} network \cite{VGG19_2015}. This architecture has also been modified, replacing the last 3 fully-connected layers with 2 fully-connected layers (with a final sigmoid activation). This network architecture belongs to the family of traditional or plain networks and appeared before the residual and MobileNet configurations. Despite that, and due to its simplicity, it is still one of the most widely used network architectures, providing very good results in many different competitions. \vspace*{0.2cm} \subsubsection{\textbf{Using CNNs as Feature Extractors}} \label{sec:pad:dlsvm} In addition to the end-to-end approaches described in Sects.~\ref{sec:pad:dl} and~\ref{sec:pad:tl}, we also analyse the potential of adapting and using all the aforementioned CNNs (i.e., the residual CNN trained from scratch, the adapted MobileNet CNN and the adapted VGG-19 CNN) as feature extractors. For this strategy, we consider the same network architectures described in Fig.~\ref{fig:network_architectures}, but remove the last fully-connected layers in order to use only the features provided by the last convolutional layer (after the Average or Max pool layers, respectively). Then, these features are transformed to the range $[0, 1]$ and subsequently used to train an SVM for final classification purposes. \subsection{Fused Approach} \label{sec:pad:fusion} Finally, we analyse to which extent the proposed algorithms complement each other to enhance the final fingerprint PAD decisions.
To that end, the algorithms are fused with a weighted sum of the individual PAD scores as follows: \begin{equation} s = \left(1 - \alpha\right) \cdot s_1 + \alpha \cdot s_2\label{eq:fusion} \end{equation} where $s_i$ with $i \in \left\lbrace \text{ss}, \text{res}, \text{mob}, \text{VGG} \right\rbrace$ represents the individual scores output by the approaches described above, $\alpha$ the fusion weight, and $s$ the final fusion score. \section{Experimental Results} \label{sec:res} \subsection{Exp 1 - Handcrafted Features}\label{lab:handcrafted_features} Fig.~\ref{fig:DET_results} shows the DET curves of each of the individual methods proposed in this study. As it may be observed, the spectral signature pixel-wise approach has achieved a 12.61\% D-EER. Compared to the results first reported in \cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018} (APCER = 5.6\% and BPCER = 0\%), there is a clear decrease in the detection performance. This is due to the preliminary character of that first study, carried out over a small database comprising only 60 samples and 12 different PAI species. In this work, the more thorough evaluation unveils the main drawbacks of the approach: it is not possible to get an APCER $\le$ 2\%, and for APCER $\approx$ 5\%, the BPCER is over 20\% (i.e., the system is no longer convenient). \subsection{Exp 2 - Deep Learning Features}\label{lab:deepLearning_features} Deep learning strategies have considerably improved the results achieved using handcrafted features (see Fig.~\ref{fig:DET_results} for a comparison). In general, the features extracted by the neural network models provide a higher discriminative power and better generalization to new samples (note that during the development of the systems, all strategies were able to achieve loss values very close to zero for both training and validation datasets). For the case of training end-to-end residual CNN models from scratch, the best result obtained is a 2.25\% D-EER.
This result outperforms the handcrafted feature approach by a relative improvement of 82\%. Furthermore, low APCERs below 1\% can be achieved for BPCERs below 8\%, thereby overcoming the main drawback of the handcrafted features. Similarly, for highly convenient systems with BPCERs under 1\%, the APCER ranges between 4\% and 15\%. These facts highlight the potential of incorporating residual connections into plain CNNs, making it possible to train neural network models without the necessity of having thousands of labelled images for each class, but only 130 (see Table~\ref{tab:partition_datasets}). Very good results have also been obtained for the pre-trained CNN models. In particular, the proposed MobileNet- and VGG19-based models have obtained state-of-the-art results with final values of 1.80\% and 1.35\% D-EER, respectively. These results further improve upon those obtained using handcrafted features, achieving average relative improvements of 86\% and 89\%, respectively. \begin{figure*}[tb] \centering \centerline{\includegraphics[width=\linewidth]{fig/features_VGG19.pdf}} \caption{Examples of the features extracted in the first convolutional layer (64 filters) of the VGG19-based model from the samples depicted in Fig.~\ref{fig:input_DNN}.} \label{fig:features_VGG19}\vspace{-0.4cm} \end{figure*} In addition, it is important to note that, even though an improvement at the D-EER operating point can be achieved using these end-to-end pre-trained models in combination with transfer learning techniques, with respect to training a network from scratch, this does not hold for all operating points. If we take a closer look at Fig.~\ref{fig:DET_results}, we can observe that for low BPCERs (i.e., high convenience), the best performing approach is the residual CNN trained from scratch. On the contrary, the lowest BPCERs for APCER $\le$ 2\% (i.e., high security) are achieved by the VGG19 pre-trained model.
However, it should be noted that the VGG19-based system cannot reach BPCERs under 1\%, which can be done using the pre-trained MobileNet model. Therefore, even if, depending on the final application, some CNN approaches might be more suitable than others, the ResNet-inspired approach achieves the best overall performance. For completeness, we also analyse the potential of using CNNs as feature extractors in combination with SVM classifiers. This way, we can also analyse the improvement achieved using deep learning features compared to the handcrafted features, which were also classified using SVMs. The performance in terms of APCER and BPCER is summarised in Table~\ref{tab:cnn_svm} (note that the SVMs output a single binary decision for the CNN features instead of a score). As it may be observed, the operating points are always contained within the DET curves reported in Fig.~\ref{fig:DET_results}, which means that no further improvement has been achieved using the SVM classification with respect to the last fully-connected sigmoid activation layer of the end-to-end CNNs. Therefore, in the remaining experiments, only the end-to-end CNNs will be considered. On the other hand, the advantages of the learned features with respect to the handcrafted approach are further confirmed. All these results show the potential of using CNNs in combination with SWIR images for fingerprint PAD purposes, and the robustness of the features extracted. Fig.~\ref{fig:features_VGG19} shows some examples of the features extracted in the first convolutional layer (64 filters) of the VGG19-based model for bona fide and PA samples. In general, very different features are extracted for bona fide and PA samples. This fact can be easily observed when considering overlays based on monster latex and glue, Fig.~\ref{fig:features_VGG19} (d) and (e), respectively.
However, the features extracted by the network when considering other materials such as yellow playdoh (Fig.~\ref{fig:features_VGG19} (c)) seem more similar to those of the bona fide samples (Fig.~\ref{fig:features_VGG19} (a) and (b)), indicating the difficulty of the task. \begin{table}[t] \small \centering \caption{\label{tab:cnn_svm}Performance evaluation of the deep learning feature extractors in combination with the SVM classifiers.}\vspace*{-0.2cm} \begin{tabular}{lcc} \toprule & BPCER (\%) & APCER (\%)\\ \midrule Residual CNN & 3.37 & 1.35 \\ MobileNet-Based Model & 5.33 & 0.45 \\ VGG19-Based Model & 1.89 & 0.90 \\ \bottomrule \end{tabular}\vspace*{-0.4cm} \end{table} \subsection{Exp 2 - Deep Learning: Robustness to Unknown Attacks} Finally, we have also studied the robustness and generalisation capacity of the deep learning methods to new PAIs (a.k.a. unknown attacks). In order to do that, 30 samples acquired from five out of the 35 total PAIs available in the database (see Table~\ref{table:types_PAIs}) were considered only for testing the systems (i.e., none of those PAI samples were included in the training and validation datasets). The reason behind this particular PAI selection is twofold. On the one hand, we chose one PAI species from each row (type) of Table~\ref{table:types_PAIs}, in order to increase the variability also in the unknown attacks. On the other hand, we selected the PAI species with the smallest number of samples available, in order to maximise the number of training samples and hence the detection performance. In general, very good results have been achieved for all methods. At the D-EER operating point, for the residual CNN and MobileNet-based models only one sample from a yellow playdoh finger has been misclassified, whereas for the case of using the VGG19-based model, all 30 samples stemming from the unknown attacks have been correctly classified.
On the other hand, none of the three samples acquired from the yellow playdoh finger were detected by the handcrafted features, which were able to detect the remaining four PAIs. This proves the robustness of the proposed methods even to unknown attacks, which may appear in the future. \subsection{Exp 3 - Fused Systems}\label{lab:fusion_systems} In order to further enhance the results achieved by the individual methods, and to analyse to which degree the systems complement each other, we study in this last set of experiments the fusion of multiple systems at score level. In all cases, the performance has been optimised in terms of the D-EER for values of $\alpha \in [0, 1]$ (see Eq.~\ref{eq:fusion}), where the $\alpha$ weight corresponds to the second system referred to in the legend. First, the fusion of handcrafted and deep learning features is evaluated in Fig.~\ref{fig:DET_SSfusion}. Only the fusions with the systems based on MobileNet and VGG-19 are depicted, since no improvement was achieved for the fusion of the residual net and the spectral signatures with respect to the individual CNN. As could be expected given the large performance gap between the spectral-signature-based PAD and its deep learning counterparts, the score-level fusion yields a minimal improvement with respect to the CNNs, and only in two cases: $i)$ for either low BPCER $\le$ 0.5\% or low APCER $\le$ 0.5\% for the MobileNet approach (dashed yellow vs solid purple curves), and $ii)$ for BPCER $\le$ 1\% for the VGG19 network (dashed orange vs solid green curves). Afterwards, the three CNN based approaches have been fused on a two-by-two basis (the fusion of all three systems showed no further improvement), and the best performing fusions are depicted in Fig.~\ref{fig:DET_CNNfusion}. As may be observed, no further improvements have been achieved for the operating point around the D-EER.
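In a hypothetical NumPy sketch, the weighted score-level fusion of Eq.~\ref{eq:fusion} and the D-EER-driven grid search over $\alpha$ can be illustrated as follows (synthetic scores; all function and variable names are illustrative rather than the actual implementation):

```python
import numpy as np

def apcer_bpcer(scores_pa, scores_bf, threshold):
    """Error rates at a decision threshold (higher score = more bona fide)."""
    apcer = np.mean(scores_pa >= threshold)  # attacks accepted as bona fide
    bpcer = np.mean(scores_bf < threshold)   # bona fide rejected as attacks
    return apcer, bpcer

def d_eer(scores_pa, scores_bf):
    """Approximate D-EER: error rate where APCER and BPCER cross."""
    thresholds = np.unique(np.concatenate([scores_pa, scores_bf]))
    rates = [apcer_bpcer(scores_pa, scores_bf, t) for t in thresholds]
    return min((abs(a - b), (a + b) / 2) for a, b in rates)[1]

def fuse_and_tune(s1_pa, s1_bf, s2_pa, s2_bf, steps=101):
    """Fused score s = (1 - alpha) * s1 + alpha * s2, with alpha chosen on a
    grid to minimise the D-EER (alpha weights the second system)."""
    best = (float("inf"), 0.0)
    for alpha in np.linspace(0.0, 1.0, steps):
        err = d_eer((1 - alpha) * s1_pa + alpha * s2_pa,
                    (1 - alpha) * s1_bf + alpha * s2_bf)
        best = min(best, (err, alpha))
    return best[1], best[0]  # (alpha, fused D-EER)

# Synthetic PAD scores for two systems (bona fide high, attacks low)
rng = np.random.default_rng(0)
s1_bf, s1_pa = rng.normal(0.8, 0.10, 200), rng.normal(0.35, 0.15, 200)
s2_bf, s2_pa = rng.normal(0.7, 0.12, 200), rng.normal(0.30, 0.10, 200)
alpha, fused_eer = fuse_and_tune(s1_pa, s1_bf, s2_pa, s2_bf)
```

Since the grid includes $\alpha = 0$ and $\alpha = 1$, the fused D-EER can never be worse, on the tuning data, than the better of the two individual systems.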
However, for APCER $\le 0.5\%$, the corresponding BPCER values for the fused systems (solid lines) are significantly lower than those of the individual networks (dashed lines): close to 2\% for the fusions with VGG-19 instead of between 5\% and 15\% (i.e., close to a 90\% relative improvement). That yields convenient systems (i.e., low BPCER) even for highly secure (i.e., very low APCER) scenarios. On the other hand, for low BPCER $\le 1\%$, the best APCER ($\le 10\%$) is achieved for either the residual CNN alone (dashed dark blue) or its fusion with the VGG-19-based model (solid green). In this last case, taking a closer look at the individual PAD scores, we can see that both networks complement each other. Lastly, if we compare Figs.~\ref{fig:DET_SSfusion} and \ref{fig:DET_CNNfusion}, we observe a superior performance in the latter case, thereby further supporting the fact that CNNs can perform better than the baseline handcrafted fusion in this task. All in all, we can conclude that a remarkable performance can be achieved for fingerprint PAD using SWIR images and the fusion of two CNN models: a residual CNN trained from scratch and a pre-trained VGG-19 CNN. A D-EER as low as 1.36\% can be reached, which is lower than that of the most similar study in the literature (ACER = 2\% in \cite{chugh-fingerprintPADcnn-TIFS-2018}). Furthermore, other operating points yield a BPCER of 2\% for APCER $\le 0.5\%$, and an APCER $\approx$ 7\% for BPCER = 0.1\%. In addition, the fused system was able to correctly detect all unknown attacks. \section{Introduction} \label{sec:intro} There is an increasing demand in current society for the automatic and reliable authentication of individuals in a wide number of scenarios. To address this need, biometric recognition systems based on individuals' biological (e.g., iris or fingerprint) or behavioural (e.g., signature or voice) characteristics have been consolidated as a reliable paradigm over the last decades.
Their advantages over traditional authentication methods (e.g., no need to carry tokens or remember passwords, they are harder to circumvent, and they provide at the same time a stronger link between the subject and the action or event) have allowed a wide deployment of biometric systems, including large-scale national and international initiatives such as the Unique ID program of the Indian government \cite{indianUID} or the Smart Border project of the European Commission \cite{SmartBorders}. In spite of their numerous advantages, biometric systems are vulnerable to external attacks like any other security-related technology. Among all possible attack points defined in \cite{Zwiesele-BioIS-Study-IEEE-CCST2000,Ratha-EnhancingSecurityAndPrivacy-IBM2001,ISO-IEC-30107-1-PAD-Framework-160115}, the biometric capture device is probably the most exposed one: an eventual attacker requires no knowledge about the inner functioning of the system in order to launch an attack and break the system. Instead, he/she can simply present the capture device with a \textit{presentation attack instrument} (PAI), such as a gummy finger or a fingerprint overlay, in order to interfere with its intended behaviour. The main goal might be to impersonate someone else (i.e., active impostor) or to avoid being recognised (i.e., identity concealer). These attacks are known in the ISO/IEC 30107 standard \cite{ISO-IEC-30107-1-PAD-Framework-160115} as \textit{presentation attacks} (PAs). Given the severe security threat posed by such PAs, the development of automatic techniques able to distinguish between bona fide (i.e., real or live) presentations and access attempts carried out by means of PAIs has become of the utmost importance \cite{Marcel-handbookAntispoofing-2014,hadid15SPMspoofing}.
Research in this area, referred to as \emph{presentation attack detection} (PAD), has recently been funded by several international projects like the European Tabula Rasa \cite{TabulaRasa} and BEAT \cite{BEAT}, or the more recent US ODIN research program \cite{odinThorProgram}. Together with the organisation of the LivDet -- liveness detection competition series on iris and fingerprint \cite{Ghiani-ReviewFpLivDet-IVC-2017,Mura-LivDet2017-ICB-2018}, where the number of participants has been increasing year after year (up to 17 algorithms submitted in 2017), these initiatives have fostered a considerable number of publications on PAD for different biometric characteristics, including iris \cite{galbally-PADIrisChapter-2017}, fingerprint \cite{Marasco-PAD-SurveyFingerprint-CSUR-2015,Sousedik-PAD-Survey-IET-BMT-2014}, face \cite{galbally-PADfaceSurvey-Access-2014}, or handwritten signature \cite{Tolosana-SignPAD-HoBAS2-2018}. The initial approaches to PAD were based on so-called handcrafted features, such as texture descriptors or motion analysis \cite{Marcel-handbookAntispoofing-2014,Raghu-PAD-VeinMotionMagnification-BTAS-2015}. However, in the last years deep learning (DL) has become a thriving topic \cite{Goodfellow-et-al-2016,Deep_NLP,Zhou_2016_CVPR}, and biometric recognition in general, and PAD in particular, are not an exception. DL allows expert systems to learn from experience and understand the world in terms of a hierarchy of simpler units, thereby enabling significant advances in complex domains. The main reasons behind its wide deployment lie in the increasing amount of available data and the evolution of graphical processing units (GPUs), which in turn allow the successful training of deep architectures. However, the belief that DL schemes can only be used for tasks with massive amounts of available data is changing thanks to the development of pre-trained models.
This transfer learning concept refers to network models that are trained for a given task with large available databases, including any kind of images and not only those expected for the problem at hand. Those models are subsequently retrained (a.k.a. fine-tuned, adapted) for a different task for which data are usually scarce. All the aforementioned advances have allowed the deployment of DL architectures in many different fields, including biometric recognition \cite{Rattani-CNNocular-IJCB-2017,2018_IEEEAccess_RNN_Tolosana}. More specifically, convolutional neural networks (CNNs) and deep belief networks (DBNs) have been used for fingerprint PAD purposes, based either on the complete fingerprint samples \cite{Nogueira-PADfingerprintCNN-TIFS-2016,Jang-fingerprintPADcontrastCNN-ICISA-2017,Kim-fingerprintPAD-DNN-PRL-2016} or in a patch-wise manner \cite{Toosi-fingerprintPADpatchCNN-ICCI-2017,Souza-FingerprintPAD-DBM-IJCNN-2017,chugh-fingerprintPADcnn-TIFS-2018}. \begin{figure*}[t] \centering \centerline{\includegraphics[width=.99\linewidth]{fig/generalDiagram.pdf}} \caption{General diagram of the proposed PAD method. On the left hand, the capture device acquires the samples at four different wavelengths within the SWIR spectrum. On the right hand, several software approaches have been proposed, namely: $i)$ three different state-of-the-art CNN architectures have been tested as an end-to-end solution, $ii)$ the features output by the CNNs have been used to feed an SVM, $iii)$ handcrafted features (i.e., spectral signatures) have been extracted, and $iv)$ a final fusion of the aforementioned algorithms has been evaluated for completeness.} \label{fig:diagram} \end{figure*} As will be described in more detail in Sect.~\ref{sec:related}, DL based PAD approaches have boosted the performance over common PAD benchmarks from the LivDet competitions, achieving detection rates over 90\%. Such high accuracy rates indicate the valuable contributions of the existing approaches.
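The transfer learning idea can be caricatured in a deliberately framework-free sketch: a frozen mapping stands in for the pre-trained convolutional stack (in practice, e.g., MobileNet or VGG19 trained on a large generic image corpus), and only a final classification layer is retrained on the small task-specific dataset. All sizes, names and data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen stand-in for a pre-trained convolutional stack: its weights were
# learned elsewhere and are NOT updated during the task-specific training.
W_frozen = rng.normal(size=(32 * 32, 64)) / 32.0

def extract_features(images):
    """Map (n, 32, 32) images to feature vectors through the frozen layers."""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W_frozen, 0.0)  # ReLU activations

def train_head(feats, labels, lr=0.5, epochs=300):
    """Retrain only the final sigmoid layer (a logistic regression) on the
    small task-specific dataset -- the essence of fine-tuning the top."""
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        grad = (p - labels) / len(labels)     # cross-entropy gradient
        w -= lr * feats.T @ grad
        b -= lr * grad.sum()
    return w, b

# Tiny synthetic task: bright "bona fide" vs dark "attack" images
bf = rng.normal(0.8, 0.05, size=(60, 32, 32))
pa = rng.normal(0.2, 0.05, size=(60, 32, 32))
x = np.concatenate([bf, pa])
y = np.concatenate([np.ones(60), np.zeros(60)])
feats = extract_features(x)
w, b = train_head(feats, y)
acc = np.mean(((feats @ w + b) > 0) == (y == 1))
```

The key property is that the scarce task data only has to estimate the small head, not the millions of parameters of the frozen feature extractor.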
However, the LivDet databases altogether comprise up to 11 different materials for the fabrication of PAIs, even though the choice for an attacker is much wider, based on commercial products readily available even online. As a consequence, other databases, comprising a larger number of materials for the fabrication of PAIs, should be explored. Very few works have considered this issue, including a database comprising over twelve different PAI species in \cite{chugh-fingerprintPADcnn-TIFS-2018}, and 21 materials in \cite{Kanich-fingerprintPAIs-IWBF-2018}. We address this issue with the acquisition of a database including 35 different PAI species, within the US ODIN research program \cite{odinThorProgram}. In addition, one question remains mostly unanswered: once an artificial neural network is trained on a large number of PAI species, will unknown attacks also be detected? The evaluation carried out in \cite{Kanich-fingerprintPAIs-IWBF-2018} has shown that the error rates were multiplied by a factor of six when unknown PAI species were tested, with respect to the detection accuracy reached on known attacks. Therefore, we can conclude that additional research efforts are needed in this area. To further tackle these issues, and in order to reach robustness to unknown attacks, some researchers have considered sources of information different from traditional capture devices \cite{galbally-PADIrisChapter-2017,Sousedik-PAD-Survey-IET-BMT-2014}. More specifically, the use of multi-spectral near infrared (NIR) technologies has been studied for face \cite{Wang-multispecFacePAD-ACPR-2013,Steiner-facePADswir-Sensors-2016} and fingerprint \cite{Rowe-multispec-fingerprint-book-2008,Chang-FpPANIR-InTech-2011}. In this new context, a recent trend for both biometric PAD and face recognition enhancement is based on skin detection. On the one hand, non-skin materials (e.g., a mask or a scarf) can be masked out for recognition purposes.
On the other hand, the presence of such materials can be considered an indication of a PA. This will be the fundamental idea followed in this article: PAD is regarded as the problem of discriminating skin vs. non-skin materials. In order to overcome one of the main challenges of skin detection, namely the plurality of different skin colours \cite{Lumini-SkinDetectionComparison-arxiv-2018}, we choose the short wave infrared (SWIR) band as a promising information source. It has been shown that human skin shows characteristic remission properties for multi-spectral SWIR wavelengths, which are independent of a capture subject's age, gender or skin type \cite{Jacquez-spectralSkin-JAP-1955}. In fact, several approaches have been proposed for face recognition in the infrared domain \cite{Ghias-IRfaceRecognition-Survey-PR-2014,Bourlai-FaceRecognitionImageSpectrum-Book-2016}. In particular, for surveillance purposes, the SWIR range has been analysed by several research groups, either as the sole source of information or in combination with visible light images \cite{Bourlai-faceRecSWIR-ICPR-2010,Nicolo-SWIR-VIS-FaceRec-TIFS-2012,Narang-SWIR-faceRecognition-IVC-2015}. The advantages of SWIR are mostly its robustness in challenging environmental conditions (e.g., with fog or at night time). In addition, the benefits of multi-spectral hand based recognition within the SWIR bands were studied in \cite{Ferrer-SWIRhand-IS-2014}, where the authors outperformed state-of-the-art recognition approaches. For the particular task of PAD, the characteristic remission properties of the human skin observed in the multi-spectral SWIR band were exploited in \cite{Steiner-facePADswir-Sensors-2016} for facial PAD, achieving a 99\% detection accuracy. A similar approach was analysed in \cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018} over a small fingerprint database comprising 60 samples. It was shown that the method was able to detect all 12 PAIs except for one.
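As a toy illustration of such remission-based skin detection: per-pixel intensities across a few SWIR bands are normalised into a spectral signature and compared against a reference skin signature. The band choice and reference values below are invented for the example, not measured data:

```python
import numpy as np

# Hypothetical SWIR bands (nm) and a made-up reference skin remission
# signature -- real values must be estimated from bona fide skin samples.
BANDS = (1200, 1300, 1450, 1550)
SKIN_REF = np.array([0.55, 0.45, 0.15, 0.25])

def spectral_signature(intensities):
    """Normalise per-band intensities so the signature captures the *ratios*
    between SWIR bands rather than the absolute brightness."""
    s = np.asarray(intensities, dtype=float)
    return s / (s.sum() + 1e-9)

def looks_like_skin(intensities, ref=SKIN_REF, tol=0.15):
    """Skin vs. non-skin decision by distance to the reference signature."""
    return np.linalg.norm(spectral_signature(intensities)
                          - spectral_signature(ref)) < tol
```

For instance, a darker but proportionally skin-like measurement such as `[0.44, 0.36, 0.12, 0.20]` matches the reference ratios, while a spectrally flat response such as `[0.5, 0.5, 0.5, 0.5]` does not.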
In addition, a preliminary DL approach based on a pre-trained CNN was tested on the same database in \cite{Tolosana-FingerPAD-CNN-SWIR-BIOSIG-2018}, achieving perfect detection rates over the small preliminary database. Keeping these thoughts in mind, we propose in this work a biometric presentation attack detection method based on SWIR images and state-of-the-art CNN architectures, as depicted in Fig.~\ref{fig:diagram}. Both networks trained from scratch (i.e., a residual network \cite{DBLP:journals/corr/HeZRS15}) and pre-trained models (i.e., MobileNet \cite{DBLP:journals/corr/HowardZCKWWAA17} and VGG19 \cite{VGG19_2015}) have been analysed. In addition, two different approaches have been studied: $i)$ using the CNNs as an end-to-end solution, and $ii)$ utilising the CNNs as feature extractors and carrying out the classification with support vector machines (SVMs). The results obtained are compared to the handcrafted feature extraction approach proposed in \cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018}. Then, a final fusion of the different single algorithms is also explored for completeness. The experimental evaluation is carried out on a database captured within the BATL project of the ODIN Program, which includes more than 4700 samples and 35 different PAI species. Over this database, under the unknown attack scenario, a Detection Equal Error Rate (D-EER) of 1.36\% has been achieved, thereby proving the soundness of the proposed approach. It should finally be noted that, being a skin detection based method, the proposed PAD technique can be applied not only to fingerprints but also to other biometric characteristics, such as the face, the hand, or the periocular region. The main contributions of this article can be summarised as follows: \begin{itemize} \item Review of the state-of-the-art on fingerprint PAD based on either $i)$ non-conventional capture devices, or $ii)$ traditional sensors and deep learning approaches.
\item Evaluation of multiple state-of-the-art CNN architectures, using both pre-trained models and networks trained from scratch. The CNNs are evaluated either as end-to-end solutions or as feature extractors in combination with SVMs. \item Benchmark of deep learning approaches against high-performing handcrafted features \cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018}. \item Fusion of handcrafted and deep learning features on SWIR images. \item Detection performance evaluation on a large database comprising 35 different PAIs and over 4700 samples. \item Detection performance evaluation including unknown attacks, achieving a state-of-the-art detection performance. \end{itemize} The rest of the article is organised as follows. Sect.~\ref{sec:defs} presents the main terms which will be used in the remainder of the article. Related works on fingerprint PAD are summarised in Sect.~\ref{sec:related}. Sects.~\ref{sec:swir}~and~\ref{sec:pad} describe the proposed approach. The evaluation framework is presented in Sect.~\ref{sec:setup}, and the results are discussed in Sect.~\ref{sec:res}. Final conclusions are drawn in Sect.~\ref{sec:conc}. \section{Conclusions} \label{sec:conc} In this article, we have presented a fingerprint PAD scheme based on $i)$ a new capture device for the acquisition of finger samples in the SWIR spectrum, and $ii)$ state-of-the-art deep learning techniques. An in-depth analysis of several networks, either trained from scratch or using transfer learning over pre-trained models, and either as end-to-end solutions or as feature extractors in combination with SVMs for classification, has revealed the soundness of the proposed approach. Three different CNN architectures have been tested: a residual CNN trained from scratch \cite{DBLP:journals/corr/HeZRS15,DBLP:journals/corr/SzegedyIV16}, and the adaptation of the final layers of the VGG-19 \cite{VGG19_2015} and MobileNet \cite{DBLP:journals/corr/HowardZCKWWAA17} pre-trained models.
In addition, the performance of the proposed DL approaches has been benchmarked against the only handcrafted approach for fingerprint PAD based on SWIR images available in the literature \cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018}. The performance of all the individual algorithms has been tested over a database comprising more than 4700 samples, stemming from 562 different subjects and 35 different PAI species. Furthermore, several score level fusion schemes have been evaluated. The experimental protocol was designed to simulate a real life scenario: only 260 samples were used for training, and 30 samples acquired from 5 PAI species were excluded from the development stages and utilised only for testing (i.e., unknown attack scenario). In the aforementioned conditions, the best performance was reached for the fusion of two end-to-end CNNs: the residual CNN trained from scratch and the adapted VGG19 pre-trained model. A D-EER of 1.35\% was obtained. Moreover, this system can be used for different applications. On the one hand, if high user convenience is preferred, an APCER around 7\% can be achieved for a BPCER of 0.1\% (i.e., only 1 in 1000 bona fide samples will be rejected). On the other hand, for highly secure scenarios, a BPCER of 2\% can be achieved for any APCER under 0.5\%. These results clearly outperform those achieved with the handcrafted features, which yielded a D-EER over 12\% and had trouble reaching APCERs under 2\%. We may thus conclude that the use of SWIR images in combination with state-of-the-art CNNs offers a reliable and efficient solution to the threat posed by presentation attacks. However, the development of new countermeasures usually brings the corresponding development of new attacks, in this case, new PAI species.
To tackle them, we plan to fuse the techniques developed in this work, which analyse the surface of the finger within the SWIR spectrum, with other approaches analysing bona fide properties below the skin \cite{Keilbach-PADlsciTexture-BIOSIG-2018,Kolberg-fingerveinPAD-PLBP-IETBook-2019}. \section{Definitions} \label{sec:defs} In the following, we include the main definitions stated within the ISO/IEC 30107-3 standard on biometric presentation attack detection - part 3: testing and reporting \cite{ISO-IEC-30107-3-PAD-metrics-170227}, which will be used throughout the article: \textbf{Bona fide presentation}: \lq\lq \emph{interaction of the biometric capture subject and the biometric data capture subsystem in the fashion intended by the policy of the biometric system}''. That is, a normal or genuine presentation. \textbf{Presentation attack (PA)}: \lq\lq \emph{presentation to the biometric data capture subsystem with the goal of interfering with the operation of the biometric system}''. That is, an attack carried out on the capture device to either conceal one's identity or impersonate someone else. \textbf{Presentation attack instrument (PAI)}: \lq\lq \emph{biometric characteristic or object used in a presentation attack}''. For instance, a silicone 3D mask or an ecoflex fingerprint overlay. \textbf{PAI species}: \lq\lq \emph{class of presentation attack instruments created using a common production method and based on different biometric characteristics}''. \vspace*{0.2cm} In order to evaluate the vulnerabilities of biometric systems to PAs, the following metrics should be used: \textbf{Attack Presentation Classification Error Rate (APCER)}: \lq\lq \emph{proportion of attack presentations using the same PAI species incorrectly classified as bona fide presentations in a specific scenario}''.
\textbf{Bona fide Presentation Classification Error Rate (BPCER)}: \lq\lq \emph{proportion of bona fide presentations incorrectly classified as presentation attacks in a specific scenario}''. Derived from the aforementioned metrics, the detection equal error rate (D-EER) is defined as the error rate at the operating point where APCER = BPCER. \section{Related Works} \label{sec:related} In this section, we summarise the key works on fingerprint PAD for both non-conventional optical or capacitive sensors (see Sect.~\ref{sec:related:nc} and Table~\ref{tab:sotaOther}) and using DL approaches on conventional sensors (see Sect.~\ref{sec:related:dl} and Table~\ref{tab:sotaDL}). For further details on fingerprint PAD, the reader is referred to \cite{Sousedik-PAD-Survey-IET-BMT-2014,Marasco-PAD-SurveyFingerprint-CSUR-2015}. It should be noted that, in addition to the metrics defined in Sect.~\ref{sec:defs}, two further metrics are used in the LivDet competitions \cite{Ghiani-ReviewFpLivDet-IVC-2017,Mura-LivDet2017-ICB-2018}. The Average Classification Error Rate (ACER) is defined as the average of the APCER and the BPCER for a pre-defined decision threshold $\delta$: \begin{equation} \mathrm{ACER}\left(\delta\right) = \frac{\mathrm{APCER}\left(\delta\right) + \mathrm{BPCER}\left(\delta\right)}{2} \end{equation} It should be noted that averaging the APCER and the BPCER has been deprecated in ISO/IEC 30107-3. The ACER is reported here for the sole purpose of relating our results to the LivDet competitions, where the ACER has been used. The detection accuracy (Acc.)
refers to the rate of correctly classified bona fide presentations and PAs at $\delta = 0.5$: \begin{equation} \begin{split} \mathrm{Acc}\left(\delta\right) &= \frac{1}{\text{\# samples}} \cdot \Bigg\lbrace \left(1 - \mathrm{APCER}\left(\delta\right)\right) \cdot \left\lbrace \text{\# PA samples} \right\rbrace \\ &+ \left(1 - \mathrm{BPCER}\left(\delta\right)\right) \cdot \left\lbrace \text{\# BF samples} \right\rbrace \Bigg\rbrace \end{split} \end{equation} These metrics will be used in Table~\ref{tab:sotaDL} where needed. \begin{table*}[t] \begin{small} \begin{center} \caption{Summary of the most relevant methodologies for fingerprint PAD based on non-conventional sensors.}\label{tab:sotaOther} \centering \begin{tabular}{YcRccc} \toprule \textbf{Year} & \textbf{Spectrum} & \textbf{Ref.} & \textbf{Description} & \textbf{Performance} & \textbf{Database (\# PAIs)} \\ \midrule \multirow{2}{*}{2008} &\multirow{2}{*}{430 -- 630 nm} & \multirow{2}{*}{\cite{Rowe-Lumidigm-WP-2008}} & \multirow{2}{*}{Wavelet transform} & APCER = 0.9\% & Unavailable DB \\ & & & & BPCER = 0.5\% & (49) \\ \cline{1-6} \multirow{4}{*}{2011} & \multirow{2}{*}{400 -- 1650 nm} & \multirow{2}{*}{\cite{Hengfoss-spectroscopyPADfinger-FSI-2011}} & \multirow{2}{*}{Spectroscopic properties} & \multirow{2}{*}{-} & Unavailable DB \\ & & & & & (0) \\ \cline{2-6} & OCT & \multirow{2}{*}{\cite{Chang-FpPANIR-InTech-2011}} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & Unavailable DB \\ &400 -- 850 nm & & & & (-) \\ \cline{1-6} \multirow{6}{*}{2018} & \multirow{4}{*}{1200 -- 1550 nm} & \multirow{2}{*}{\cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018}} & \multirow{2}{*}{Multi-spectral signatures} & APCER = 5.7\% & Unavailable DB \\ & & & & BPCER = 0.0\% & (12) \\ \cline{3-6} & & \multirow{2}{*}{\cite{Tolosana-FingerPAD-CNN-SWIR-BIOSIG-2018}} & \multirow{2}{*}{Pre-trained VGG19 model} & APCER = 0.0\% & Unavailable DB \\ & & & & BPCER = 0.0\% & (12) \\ \cline{2-6} & \multirow{2}{*}{1310 nm (LSCI)} &
\multirow{2}{*}{\cite{Keilbach-PADlsciTexture-BIOSIG-2018}} & \multirow{2}{*}{Texture descriptors} & APCER = 10.97\% & Unavailable DB \\ & & & & BPCER = 0.84\% & (32) \\ \bottomrule \end{tabular}\vspace{-0.5cm} \end{center} \end{small} \end{table*} \subsection{Non-Conventional Fingerprint Sensors} \label{sec:related:nc} To the best of our knowledge, the pioneering work on fingerprint multi-spectral PAD with non-conventional capacitive or optical sensors was carried out by Rowe \textit{et al.} in \cite{Rowe-Lumidigm-WP-2008}. The presented, and now widely used, Lumidigm sensor captures multi-spectral images at four different wavelengths (i.e., 430, 530 and 630 nm, as well as white light). In their work, the authors studied the PAD capabilities of the combined images using the absolute magnitudes of the responses of each image to dual-tree complex wavelets. On a self-acquired database including 49 PAI species, they obtained an APCER of 0.9\% for a BPCER of 0.5\%. Even if these results are remarkable, the PAD methods used are not described and not many details about the acquired database or the experimental protocol are presented. Therefore, it is difficult to establish a fair benchmark. Three years later, Hengfoss \textit{et al.} extensively analysed the spectroscopic properties of living versus cadaver fingers using four wavelengths between 400 nm and 1630 nm \cite{Hengfoss-spectroscopyPADfinger-FSI-2011}. However, no PAIs were analysed in their work. Later that year, Chang \textit{et al.} studied in \cite{Chang-FpPANIR-InTech-2011} the complex properties of the skin, which differentiate it from PAIs, using optical coherence tomography (OCT) and nine different wavelengths between 400 nm and 850 nm. A single volunteer provided the bona fide and PA samples, and not many details about the algorithms used were reported.
More recently, in 2018, some preliminary PAD studies were carried out in \cite{Gomez-Barrero-SWIR-SS-PAD-NISK-2018,Tolosana-FingerPAD-CNN-SWIR-BIOSIG-2018} on a small database, comprising a total of 60 samples and 12 different PAI species, which was acquired at the University of Southern California within the BATL project \cite{BATL}. Gomez-Barrero \textit{et al.} extracted multi-spectral signatures from four different wavelengths in the SWIR spectrum, achieving an APCER = 5.7\% and a BPCER = 0\%. In this case, all classification errors stemmed from a single PAI made with orange playdoh. In a subsequent work on the same database, Tolosana \textit{et al.} used a pre-trained VGG19 CNN model \cite{VGG19_2015} for PAD purposes. In this case, all 60 samples were correctly classified (i.e., APCER = BPCER = 0\%). Finally, Keilbach \textit{et al.} analysed in \cite{Keilbach-PADlsciTexture-BIOSIG-2018} the PAD capabilities of laser speckle contrast images (LSCI) over a larger database, also acquired within the BATL project and comprising 32 PAIs and more than 750 samples. In this case, several descriptors were extracted from the LSCI sequences, including the well-known local binary patterns (LBP) or the histogram of oriented gradients (HOG). The final cascaded score level fusion yielded an APCER = 10.97\% for a BPCER = 0.84\%. \begin{table*}[t] \begin{small} \begin{center} \caption{Summary of the most relevant methodologies for fingerprint PAD based on DL approaches.}\label{tab:sotaDL} \centering \begin{tabular}{lYRlcc} \toprule \textbf{Category} & \textbf{Year} & \textbf{Ref.} & \textbf{Description} & \textbf{Performance} & \textbf{Database (\# PAIs)} \\ \midrule \multirow{8}{*}{Full Sample} & \multirow{2}{*}{2015} & \multirow{2}{*}{\cite{Menotti-PADdeepRep-TIFS-2015}} & \multirow{2}{*}{CNN optimization} &\multirow{2}{*}{Acc.
= 98.97\%} & LivDet 2013 \\ & & & & & (7) \\ \cline{2-6} & \multirow{6}{*}{2016} & \multirow{2}{*}{\cite{Nogueira-PADfingerprintCNN-TIFS-2016}} & Pre-trained CNNs &\multirow{2}{*}{ACER = 2.90\%} & LivDet 2009-13 \\ & & & (Best: VGG) & & (8) \\ \cline{3-6} & & \multirow{2}{*}{\cite{Kim-fingerprintPAD-DNN-PRL-2016}} & \multirow{2}{*}{DBN with RBMs} &\multirow{2}{*}{Acc. = 97.10\%} & LivDet 2013 \\ & & & & & (7) \\ \cline{3-6} & & \multirow{2}{*}{\cite{Marasco-PADfingerprintCNN-2016-HST}} & Pre-trained CNNs and Siamese networks &\multirow{2}{*}{Acc. = 96.60\%} & LivDet 2011-13 \\ & & & (Best: GoogLeNet) & & (8) \\ \cline{1-6} \multirow{2}{*}{ROI} & \multirow{2}{*}{2017} & \multirow{2}{*}{\cite{Yuan-FingerprintPAD-CNN-PCA-CMC-2017}} & CNNs + ROI and PCA optimization &ACER = 4.57\% (2011) & LivDet 2011-13 \\ & & & and SVM classification & ACER = 7.25\% (2013) & (8) \\ \cline{1-6} \multirow{16}{*}{Patch-wise} & \multirow{2}{*}{2015} & \multirow{2}{*}{\cite{Wang-FpPAD-DCNN-CCBR-2015}} & \multirow{2}{*}{DCNN (CiFar10-Net + FingerNet)} & ACER = 0.88\% (2011) & LivDet 2011-13 \\ & & & & ACER = 0.90\% (2013) & (8) \\ \cline{2-6} & \multirow{2}{*}{2016} & \multirow{2}{*}{\cite{Park-PADfpCNN-BIOSIG-2016}} & \multirow{2}{*}{CNN trained from scratch} & \multirow{2}{*}{ACER = 3.42\%} & LivDet 2009 \\ & & & & & (Identix, 3) \\ \cline{2-6} & \multirow{8}{*}{2017} & \multirow{2}{*}{\cite{Jang-fingerprintPADcontrastCNN-ICISA-2017}} & Contrast enhancement &\multirow{2}{*}{ACER = 0.20\%} & ATVS FP \\ & & & + Ad hoc CNN & & (2) \\ \cline{3-6} & & \multirow{2}{*}{\cite{Souza-FingerprintPAD-DBM-IJCNN-2017}} & \multirow{2}{*}{Deep Boltzmann Machine} &\multirow{2}{*}{Acc. 
= 85.96\%} & LivDet 2013 \\ & & & & & (7) \\ \cline{3-6} & & \multirow{2}{*}{\cite{Toosi-fingerprintPADpatchCNN-ICCI-2017}} & Pre-trained AlexNet & ACER = 4.63\% (2011) & LivDet 2011-13 \\ & & &+ Data augmentation and log-likelihood& ACER = 1.90\% (2013) & (8) \\ \cline{3-6} & & \multirow{2}{*}{\cite{Pala-FingerprintPAD-DeepTriplet-ICIP-2017}} & \multirow{2}{*}{Deep triplet embedding} &\multirow{2}{*}{ACER = 1.74\%} & LivDet 2009-13 \\ & & & & & (8) \\ \cline{2-6} & \multirow{4}{*}{2018} & \multirow{2}{*}{\cite{chugh-fingerprintPADcnn-TIFS-2018}} & Pre-trained MobileNet & ACER = 0.96\% & LivDet 2011-15 (11) \\ & & & + Minutiae patches & ACER = 2.00\% & Own DB (12) \\ \cline{3-6} & & \multirow{2}{*}{\cite{Park-FingerprintPADCNN-arxiv-2018}} & Fully CNN (SqueezeNet)& \multirow{2}{*}{ACER = 1.43\%} & LivDet 2011-15 \\ & & & + Data augmentation & & (11) \\ \cline{1-6} \multirow{2}{*}{Deep Fusion} & \multirow{2}{*}{2017} & \multirow{2}{*}{\cite{Toosi-2017-FingerprintPAD-FeatureFusion-Access-2017}} & Texture based features & \multirow{2}{*}{ACER $\approx$1.70\%} & LivDet 2009-13 \\ & & & and DNN fusion & & (8) \\ \bottomrule \end{tabular}\vspace{-0.5cm} \end{center} \end{small} \end{table*} \subsection{Deep Learning for Conventional Sensors} \label{sec:related:dl} The DL based fingerprint PAD approaches proposed in the literature can be broadly classified, depending on the network input, into: $i)$ using the full samples as input to the network, $ii)$ cropping the region of interest (ROI) and feeding it to the network, and $iii)$ extracting patches from the ROI as input. Moreover, some articles $iv)$ use the network for the feature level fusion of handcrafted descriptors. In the following, the main studies in all categories are summarised. \textbf{Full samples}. To the best of our knowledge, the first work on fingerprint PAD based on deep learning algorithms was presented in 2015 by Menotti \textit{et al.} \cite{Menotti-PADdeepRep-TIFS-2015}.
The authors proposed two different CNN optimization approaches for the particular purpose of PAD. On the one hand, the architecture was optimized with feedforward convolutional operations and hyperparameter optimization. On the other hand, the inner weights of the network were optimized via back-propagation. Both techniques were tested on iris, face and fingerprint benchmarks, thus proving the generalisation capabilities of the proposal. Their best fingerprint results achieved an average detection accuracy (Acc.) of 98.97\% across the four fingerprint sensors of LivDet 2013. A year later, three different approaches were proposed. Nogueira \textit{et al.} \cite{Nogueira-PADfingerprintCNN-TIFS-2016} tested three different CNNs, namely: $i)$ the pre-trained VGG \cite{VGG19_2015}, $ii)$ the pre-trained AlexNet \cite{Alexnet-ImageClassificationWithDeepCNN-ANIPS-2012}, and $iii)$ a CNN with randomly initialised weights, trained from scratch. They compared the ACER obtained with the networks over the LivDet 2009, 2011 and 2013 databases to a classical state-of-the-art algorithm based on LBP. In the evaluation, the best detection performance was achieved using a pre-trained VGG model and data augmentation (average ACER = 2.9\%), a clear improvement with respect to LBP (average ACER = 9.6\%). It should also be noted that the ACER decreased between 25\% and 50\% (relative decrease) for all three networks tested when data augmentation was used. Then, Kim \textit{et al.} analysed the use of deep belief networks based on stacked restricted Boltzmann machines (RBMs) \cite{Kim-fingerprintPAD-DNN-PRL-2016}. The global network is trained in a two-stage manner with layer-wise greedy training and fine-tuning with labelled inputs. On LivDet 2013, they achieved a detection accuracy (Acc.) of 97.10\%, noting again the considerable enhancement achieved with data augmentation.
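Most of the figures quoted in this subsection are ACER values, i.e. the mean of the attack and bona fide classification error rates (APCER and BPCER). A minimal sketch in Python, assuming a hypothetical score convention in which higher scores indicate a presentation attack:

```python
def acer(attack_scores, bona_fide_scores, threshold):
    """Average Classification Error Rate = (APCER + BPCER) / 2.

    Assumes scores above the threshold are classified as attacks
    (an illustrative convention, not that of any particular paper).
    """
    # APCER: fraction of attack samples wrongly accepted as bona fide
    apcer = sum(s <= threshold for s in attack_scores) / len(attack_scores)
    # BPCER: fraction of bona fide samples wrongly rejected as attacks
    bpcer = sum(s > threshold for s in bona_fide_scores) / len(bona_fide_scores)
    return (apcer + bpcer) / 2
```

Sweeping the threshold trades APCER against BPCER, which is precisely what a DET curve visualises.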
Also Marasco \textit{et al.} explored in \cite{Marasco-PADfingerprintCNN-2016-HST} two different pre-trained CNNs: $i)$ CaffeNet \cite{Alexnet-ImageClassificationWithDeepCNN-ANIPS-2012}, and $ii)$ GoogLeNet \cite{Szegedy-Googlenet-CNNs-CVPR-2015}. Furthermore, the performance of such networks was compared to a Siamese network, which optimised a metric distance to yield high bona fide - PA distances and low bona fide - bona fide distances. In a thorough evaluation on LivDet 2011 and 2013, a detection accuracy over 96\% was achieved for GoogLeNet, closely followed by the other networks. The authors showed an accuracy decrease when dealing with either unknown attacks or cross-sensor scenarios. \textbf{ROI}. In 2017, Yuan \textit{et al.} followed a different approach to optimise the performance of CNN models \cite{Yuan-FingerprintPAD-CNN-PCA-CMC-2017}. First, only the ROI was fed to the network. Then, principal component analysis (PCA) was introduced for each convolutional and pooling operation in order to discard non-relevant information. Finally, the output was classified with SVMs. This way, no data augmentation was required to achieve a 4.57\% ACER over LivDet 2011, thereby outperforming other existing approaches. \textbf{Patch-wise}. In 2015, a different two-step approach was proposed by Wang \textit{et al.} \cite{Wang-FpPAD-DCNN-CCBR-2015}. First, the ROI of the fingerprint was segmented. Then, two deep CNNs (DCNNs) were used in a patch-wise manner: $i)$ the CiFar10-Net \cite{Wan-ciFar10Net-ICML-2013}, and $ii)$ the self-developed Finger-Net, yielding an ACER under 1\% over LivDet 2011 and 2013. In 2016, Park \textit{et al.} extracted random patches from the fingerprint samples and trained a CNN from scratch in \cite{Park-PADfpCNN-BIOSIG-2016}, achieving an ACER = 3.4\% over the Identix subset of LivDet 2009.
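The patch-wise methods share two generic steps: dense (or random) patch extraction and a fusion of the per-patch decisions, often by majority vote. A minimal pure-Python sketch of those two steps (illustrative only; patch sizes and voting rules differ from paper to paper):

```python
def extract_patches(img, size, stride):
    """Slide a size x size window over a 2-D image given as a list of rows."""
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def majority_vote(patch_decisions):
    """Fuse per-patch binary decisions (1 = attack); ties go to bona fide."""
    return int(2 * sum(patch_decisions) > len(patch_decisions))
```

In a real pipeline each patch would be scored by the CNN; here the per-patch decisions are simply given.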
In 2017, Jang \textit{et al.} proposed contrast enhancement and block-wise processing of the fingerprint to improve the state-of-the-art results achieved with DL \cite{Jang-fingerprintPADcontrastCNN-ICISA-2017}. The blocks were then combined with a majority voting rule. They also designed a CNN from scratch, inspired by the VGG19 model, and evaluated the proposed approach over the ATVS fake fingerprint DB \cite{Galbally-FingerprintPAEval-TS-2011}. An ACER of 0.2\% was reported. Souza \textit{et al.} analysed again in \cite{Souza-FingerprintPAD-DBM-IJCNN-2017} the use of Boltzmann machines, this time in a patch-wise manner and using a majority vote rule. In particular, they used deep Boltzmann machines (DBMs), which can learn more complex and internal representations from a low number of labelled samples. The accuracy obtained over LivDet 2013 was 85.96\%. Following this patch-wise trend, Toosi \textit{et al.} tested in \cite{Toosi-fingerprintPADpatchCNN-ICCI-2017} the accuracy of AlexNet with data augmentation. For classification, the scores are calibrated using log-likelihood ratios. The average ACER on LivDet 2011 and 2013 is 3.26\%. Similarly, Pala \textit{et al.} tested the feasibility of using deep triplet embedding for PAD purposes \cite{Pala-FingerprintPAD-DeepTriplet-ICIP-2017}. In contrast to Siamese networks, this method requires no enrolment database, since the triplets are selected from patches within the input sample. Over LivDet 2009 to 2013, an ACER of 1.74\% was reported. The robustness to unknown attacks was also evaluated on LivDet 2013, achieving ACERs much lower than other approaches (e.g., 0.7\% vs 1.4\% for Siamese networks for the Modasil PAIs). \begin{figure*}[t] \centering \centerline{\includegraphics[width=.65\linewidth]{fig/sensorDiagram.pdf}} \caption{Finger sensor diagram. \textbf{Left}: a diagram of the inner components: two different sensors for the SWIR images and the visible (VIS) light images, together with the corresponding LEDs.
\textbf{Right}: a sample and the corresponding ROI for a bona fide at 1200 nm.} \label{fig:sensor}\vspace{-0.4cm} \end{figure*} In 2018, Chugh \textit{et al.} presented in \cite{chugh-fingerprintPADcnn-TIFS-2018} a different way to extract fingerprint patches: around the minutiae. The idea behind this patch computation is the fact that PAIs can present spurious minutiae, which can be surrounded by a distinct texture. Therefore, these patches were fed to the pre-trained MobileNet network \cite{DBLP:journals/corr/HowardZCKWWAA17}. The detection performance was evaluated on LivDet 2011 to 2015, achieving a remarkable ACER of 0.96\% on average. However, the ACER increased to 2.0\% for a self-acquired database comprising a larger number of PAIs (12). In the same year, Park \textit{et al.} developed in \cite{Park-FingerprintPADCNN-arxiv-2018} a fully convolutional network based on the Fire module of SqueezeNet \cite{Iandola-SqueezeNet-arxiv-2016}. They analysed different patch sizes and compared the common voting method to an optimal thresholding approach, which yielded a better performance: an ACER of 1.43\% over LivDet 2011 to 2015. \textbf{Deep fusion}. Toosi \textit{et al.} proposed in \cite{Toosi-2017-FingerprintPAD-FeatureFusion-Access-2017} a completely different approach to use DL for fingerprint PAD. Instead of using deep networks for feature extraction, ten different hand-crafted descriptors, including the well-known local phase quantization (LPQ), binarised statistical image features (BSIF) or scale invariant feature transform (SIFT), were fed to a self-developed deep network (Spidernet) for final fusion and classification. The performance was compared to classical fusion approaches, such as SVMs and AdaBoost, and ACERs of around 1.6-1.8\% were reported for LivDet 2009 to 2013. \section{Experimental Framework} \label{sec:setup} \begin{table}[t] \caption{PAI species included in the experimental work of this study.
PAI species used only for testing and not for training (i.e., unknown attacks) have been underlined.}\label{table:types_PAIs} \begin{adjustbox}{width=0.49\textwidth} \begin{tabular}{ll} \hline \multicolumn{1}{l}{\textbf{Type}} & \textbf{Description} \\ \hline \multicolumn{1}{l}{Dragon Skin} & Finger, conductive, conductive nanotips white, \underline{graphite} \\ \multicolumn{1}{l}{Latex} & Finger \\ \multicolumn{1}{l}{Overlay} & Conductive silicone, monster latex, glue, silicone, \underline{urethane}, wax, dragon skin \\ \multicolumn{1}{l}{Playdoh} & Black, blue, green, orange, pink, purple, red, teal, \underline{yellow} \\ \multicolumn{1}{l}{Printed} & 2D photograph/matte paper, 3D normal/Ag paint, \\ \multicolumn{1}{l}{Silicone} & Barepaint coating, finger flesh/yellow, graphite, normal, \underline{coating} \\ \multicolumn{1}{l}{Silly Putty} & Glow in the dark, normal, \underline{metallic} \\ \multicolumn{1}{l}{Wax} & Finger \\ \hline \end{tabular} \end{adjustbox}\vspace*{-0.3cm} \end{table} \subsection{Database} \label{sec:setup:db} The database considered in the experimental evaluation was acquired within the BATL research project \cite{BATL} in collaboration with our partners at the University of Southern California (USC). The project is financed by the IARPA ODIN program \cite{odinThorProgram}. Data were collected in two different stages and comprise both bona fide and PA samples. For the bona fide samples, a total of 163 subjects participated during the first stage. For each of them, all 5 fingers of the right hand were captured. For the second stage, there were a total of 399 subjects. Index, middle and ring fingers of both hands were captured from each subject. It is important to highlight that people of different genders, ethnicities and ages were considered during the acquisition, in order to evaluate the systems and algorithms in realistic conditions.
For the PA samples, the selection of the PAI fabrication materials was based on the requirements of the IARPA ODIN program evaluation, covering the most challenging PAIs \cite{Sousedik-PAD-Survey-IET-BMT-2014, Marasco-PAD-SurveyFingerprint-CSUR-2015}. There are a total of 35 different PAI species, which can be further categorized into eight main groups, namely: dragon skin, latex, overlay, playdoh, printed fingers, silicone, silly putty and wax. All details are included in Table~\ref{table:types_PAIs}. Finally, all captured samples were manually reviewed in order to remove all samples with operational errors (e.g., finger movement) or hardware failures, ending up with a total of 4,290 and 443 bona fide and PA samples, respectively. \subsection{Experimental Protocol} \label{sec:setup:prot} The main goal behind the experimental protocol design was to analyse and prove the soundness of our proposed fingerprint PAD approach in a realistic scenario. Therefore, the database described in the previous section is split into non-overlapping training, validation and test datasets, following the same procedure considered in previous works \cite{VGG19_2015, DBLP:journals/corr/HeZRS15}. All details are shown in Table~\ref{tab:partition_datasets}. In order to allow a fair benchmark among the approaches described in Sect.~\ref{sec:pad}, the same partitions will be used for all the experiments.
\begin{table}[t] \small \centering \caption{\label{tab:partition_datasets}Partition of training, validation and test datasets.}\vspace*{-0.2cm} \begin{tabular}{lccc} \toprule & \# Samples & \# PA Samples & \# BF Samples\\ \midrule Training set & 260 & 130 & 130 \\ Validation set & 180 & 90 & 90 \\ Test set & 4293 & 222 & 4071 \\ \bottomrule \end{tabular}\vspace*{-0.3cm} \end{table} \begin{figure*}[tb] \centering \begin{subfigure}[tb]{0.32\textwidth} \centering \centerline{\includegraphics[width=\linewidth]{fig/DETs_all.pdf}} \caption{} \label{fig:DET_results} \end{subfigure} \begin{subfigure}[tb]{0.32\textwidth} \centerline{\includegraphics[width=\linewidth]{fig/fusion_SS_VGG_MobileNet.pdf}} \caption{} \label{fig:DET_SSfusion} \end{subfigure} \begin{subfigure}[tb]{0.32\textwidth} \centerline{\includegraphics[width=\linewidth]{fig/fusion_ResNet_VGG_MobileNet.pdf}} \caption{}\label{fig:DET_CNNfusion} \end{subfigure} \caption{Performance evaluation of: (a) all the individual systems, (b) the fusion of handcrafted features (SS, Sect.~\ref{sec:pad:svm}) and end-to-end deep learning approaches (MobileNet and VGG19, Sect.~\ref{sec:pad:DL_features}), and (c) the fusion of end-to-end deep learning approaches (ResNet, MobileNet and VGG19, Sect.~\ref{sec:pad:DL_features}).} \label{fig:DET_fusion}\vspace{-0.4cm} \end{figure*} For the development of our proposed fingerprint PAD methods, both training and validation datasets are used in order to train the weights of the systems and select the optimal network architectures. For the training dataset, we consider a total of 130 samples for each class (i.e., bona fide and PA), whereas for the validation dataset the number of samples is reduced to 90 per class. It is important to highlight that the same number of samples per class is considered during the development of the systems in order to avoid bias towards one class.
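Drawing the class-balanced training and validation partitions described above amounts to a simple per-class sampler; a hedged sketch (function and variable names are ours, not from the paper):

```python
import random

def balanced_subset(samples, labels, per_class, seed=0):
    """Pick the same number of samples from each class (bona fide / PA)."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    chosen = []
    for cls in sorted(set(labels)):
        idx = [i for i, lab in enumerate(labels) if lab == cls]
        chosen += rng.sample(idx, per_class)
    return [samples[i] for i in chosen]
```

Called with `per_class=130` on the full database, this would yield the balanced training set of Table~\ref{tab:partition_datasets}.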
For the final evaluation, the test dataset comprises the remaining bona fide and PA samples not used during the development of the systems, thereby allowing a fair performance analysis. A total of 4070 and 223 bona fide and PA samples are considered, respectively. Moreover, it is important to remark that the test dataset includes 5 unknown PAIs, which were not considered during the development stages (i.e., they are present in neither the training nor the validation set). This way, the robustness of the proposed methods to unknown attacks can be evaluated, thereby modelling realistic scenarios. These unknown attacks are underlined in Table~\ref{table:types_PAIs}. Based on these partitions, three different sets of experiments are carried out: \begin{enumerate} \item \textit{Exp 1 - Handcrafted features}: first, the performance of the handcrafted features described in Sect.~\ref{sec:pad:svm} is evaluated. \item \textit{Exp 2 - Deep learning features}: then, we evaluate the performance of each deep learning based approach described in Sect.~\ref{sec:pad:DL_features} (i.e., end-to-end and feature-extraction + SVM classification, CNNs trained from scratch and transfer learning), and establish a fair benchmark by following the same experimental protocol. \item \textit{Exp 3 - Fused system}: in the last set of experiments, the score level fusion (see Sect.~\ref{sec:pad:fusion}) of the aforementioned systems will be evaluated, in order to determine the best performing configuration and assess the complementarity of the individual algorithms. \end{enumerate}
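The score-level fusion evaluated in Exp 3 can be sketched as a weighted sum of the subsystems' scores after a common normalisation (the concrete normalisation and weights of Sect.~\ref{sec:pad:fusion} may differ from this illustration):

```python
def minmax_normalise(scores):
    """Map a list of scores onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:  # degenerate case: all scores identical
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(score_lists, weights=None):
    """Weighted-sum score-level fusion of several PAD subsystems."""
    if weights is None:  # default to equal weights
        weights = [1 / len(score_lists)] * len(score_lists)
    normed = [minmax_normalise(s) for s in score_lists]
    return [sum(w * sys[i] for w, sys in zip(weights, normed))
            for i in range(len(normed[0]))]
```

The fused score per sample can then be thresholded exactly like any single-system score.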
\section{\label{sec:1}Introduction} Quantum electrodynamics in the presence of polarizable boundaries is a crucial element of the theory describing the interaction of quantum particles with surfaces. It is becoming increasingly relevant thanks to progress in both nanotechnology and experimental techniques in atomic physics. Modern state-of-the-art measurements in atomic physics have reached an impressive level of accuracy and subject any theory to previously unparalleled scrutiny \cite{Zimmermann,Harber,Chwedenczuk}. There is a vast number of theoretical tools at one's disposal for studying the interaction between surfaces and quantum objects, which can be thought of as being mediated by the electromagnetic field in its vacuum or a thermal state \cite{Bardeen,Casimir,McLachlan,Barton,Power,Sipe,Yeung,Robaschik,Bechler,ActaSlovaca}. All approaches are similar in one respect: they treat the coupling between the quantum object and the quantized electromagnetic field perturbatively. On the other hand, the interaction between the electromagnetic field and the boundary surface needs to be taken into account to all orders, i.e. an exact solution of the operator-valued Maxwell equations is required. It is this aspect of the theory, i.e. the quantization of the electromagnetic field in the presence of a boundary, that even today is a subject of ongoing discussions \cite{Bordag,MiltonNew}. Quantum electrodynamics (QED) in free space has been formulated in a variety of ways to suit every need and taste \cite{CT}. Which approach is taken is ultimately dictated merely by convenience, and the gauge invariance of the theory guarantees that final results coincide. However, the situation is different in macroscopic cavity QED, where one typically accounts for the presence of the material boundaries by introducing a spatially dependent dielectric function $\epsilon(\mathbf{r})$, which is usually taken to be a piecewise constant function of position $\mathbf{r}$.
Even in the presence of boundaries, the theory still needs to be gauge invariant, but the choice of gauges that are convenient to work with becomes rather restricted \cite{Dalton}. Macroscopic QED is one of the two fundamentally different approaches to formulating QED in the presence of polarizable media. Another way is to consider the electromagnetic field as interacting with the microscopic constituents of the macroscopic body, which is essentially equivalent, at least at the initial stage when the theory is formulated, to QED in free space. Throughout this article we will focus strictly on the macroscopic approach to cavity QED of non-relativistic particles, i.e. those describable by the Schr\"odinger equation. For a quantum theory to be consistently formulated, one needs a Hamiltonian and an appropriate set of commutation relations between canonically conjugate variables, which in turn allow Heisenberg equations of motion to be derived. In the case of QED, a clear-cut way of achieving this is to start from a classical Lagrangian that yields the macroscopic Maxwell equations. At this point, for suitably chosen generalised coordinates, one can unambiguously identify canonically conjugate momenta and proceed to write down the Hamiltonian by using a Legendre transformation. Quantization is then achieved by the correspondence principle, i.e. by converting Poisson brackets to commutators. Formally the transition from the Lagrangian, usually written down in terms of electromagnetic potentials $\mathbf{A}(\mathbf{r},t)$ and $\phi(\mathbf{r},t)$, to the Hamiltonian does not require any specific gauge to be chosen. The Hamiltonian may be written in a gauge invariant form, that is, in terms of the electric and magnetic fields alone \cite{Babiker}. However, there is a price to be paid for that, which is that the Hamiltonian takes on an unwieldy form in which the coupling terms are not manifestly apparent. Thus, for most practical purposes, e.g.
in order to apply perturbation theory, a specific gauge needs to be chosen, which in turn affects the workable form of the Hamiltonian. The choice of the gauge is usually restricted to the ones in which an explicit form of the non-interacting electromagnetic operators is easily derived. A common aim is to decouple the equations of motion for the potentials $\mathbf{A}(\mathbf{r},t)$ and $\phi(\mathbf{r},t)$ and deal with them separately. For QED in free space, this can be achieved in a number of gauges, the most popular being the Coulomb [$\boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)=0$] and the Lorenz [$\boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)+(1/c^2)\,\partial{\phi}(\mathbf{r},t)/\partial t=0$] gauge \cite{Jackson}. However, when a polarizable boundary is present and accounted for by introducing a piecewise-constant dielectric function $\epsilon=\epsilon(\mathbf{r})$, neither the Coulomb nor the Lorenz gauge results in the decoupling of the equations of motion for $\mathbf{A}(\mathbf{r},t)$ and $\phi(\mathbf{r},t)$. Instead, one is led to introduce the so-called generalized Coulomb gauge $\boldsymbol{\nabla}\cdot[\epsilon(\mathbf{r})\mathbf{A}(\mathbf{r},t)]=0$, which allows one to retain certain analogies between free-space QED in the Coulomb gauge and macroscopic QED in the presence of boundaries \cite{Glauber}. In particular, in both cases the scalar potential is not quantized and remains static, $\phi(\mathbf{r},t)=\phi(\mathbf{r})$, for a static charge distribution. This yields the instantaneous Coulomb interaction between free charges. However, in the case of macroscopic QED with boundaries, this interaction also includes the coupling of charges to the surface, which for simple enough geometries can be determined by the method of images \cite{Jackson}. This paper demonstrates how to arrive consistently at a correct formulation of QED in the presence of a polarizable boundary in the \emph{true} Coulomb gauge.
This is done by finding an explicit gauge transformation connecting the generalized Coulomb gauge $\boldsymbol{\nabla}\cdot[\epsilon(\mathbf{r})\mathbf{A}(\mathbf{r},t)]=0$ with the true Coulomb gauge $\boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)=0$. It will be shown how the Hamiltonian in the true Coulomb gauge can be obtained from that in the generalized Coulomb gauge by a unitary transformation. Once the Hamiltonian is known, one can use standard perturbation theory to calculate interaction energies between charges and surfaces; we shall be demonstrating the gauge invariance of macroscopic QED by an explicit calculation of the electrostatic contributions to the interaction of an electron with a dielectric surface. \section{\label{sec:GC}Generalised Coulomb gauge} Although the considerations we report here are quite general, we would like to explain them by referring to a specific example. To that end, we consider a dielectric half-space occupying the region of space $z<0$. For simplicity, the dielectric is assumed to be non-dispersive, i.e. its electromagnetic response is described by a single number, the index of refraction $n$, that is one and the same for all frequencies. This model is described by the dielectric constant \begin{equation} \epsilon(z)=1+\theta(-z)(n^2-1)\label{eqn:Epsilon} \end{equation} where $\theta(z)$ is the Heaviside step function. The quantization of the electromagnetic field that coexists with such a dielectric can be achieved by normal-mode expansion \cite{Glauber}.
We start with Maxwell's equations without sources, \begin{eqnarray} \boldsymbol{\nabla}\cdot\mathbf{D}(\mathbf{r},t)&=&0,\label{eqn:Max1}\\ \boldsymbol{\nabla}\cdot\mathbf{B}(\mathbf{r},t)&=&0,\label{eqn:Max2}\\ \boldsymbol{\nabla}\times\mathbf{E}(\mathbf{r},t)+\frac{\partial}{\partial t}\mathbf{B}(\mathbf{r},t)&=&0,\label{eqn:Max3}\\ \boldsymbol{\nabla}\times\mathbf{H}(\mathbf{r},t)-\frac{\partial}{\partial t}\mathbf{D}(\mathbf{r},t)&=&0.\label{eqn:Max4} \end{eqnarray} For a material that is non-magnetic and has the non-dispersive dielectric function (\ref{eqn:Epsilon}), the constitutive relations may be written as \begin{equation} \mathbf{B}(\mathbf{r},t)=\mu_0\mathbf{H}(\mathbf{r},t),\;\;\mathbf{D}(\mathbf{r},t)=\epsilon_0\epsilon(z)\mathbf{E}(\mathbf{r},t). \end{equation} Introducing the electromagnetic potentials in the usual way \cite{Jackson}, \begin{eqnarray} \mathbf{E}(\mathbf{r},t)=-\frac{\partial}{\partial t}\mathbf{A}(\mathbf{r},t)-\boldsymbol{\nabla}\phi(\mathbf{r},t),\label{eqn:potE}\\ \mathbf{B}(\mathbf{r},t)=\boldsymbol{\nabla}\times\mathbf{A}(\mathbf{r},t),\label{eqn:potB} \end{eqnarray} takes care of Eqs.~(\ref{eqn:Max2}) and (\ref{eqn:Max3}). The remaining two Maxwell equations (\ref{eqn:Max1}) and (\ref{eqn:Max4}) turn into: \begin{eqnarray} \boldsymbol{\nabla}\cdot\left[\epsilon(z)\boldsymbol{\nabla}\phi(\mathbf{r},t)\right]+\frac{\partial}{\partial t}\boldsymbol{\nabla}\cdot\left[\epsilon(z)\mathbf{A}(\mathbf{r},t)\right]&=&0,\label{eqn:Wave1}\\ \boldsymbol{\nabla}\times[\boldsymbol{\nabla}\times\mathbf{A}(\mathbf{r},t)]+\frac{\epsilon(z)}{c^2}\frac{\partial^2}{\partial t^2}\mathbf{A}(\mathbf{r},t)\nonumber\\ +\frac{\epsilon(z)}{c^2}\frac{\partial}{\partial t}\boldsymbol{\nabla}\phi(\mathbf{r},t)&=&0.\label{eqn:Wave2} \end{eqnarray} The solution of these coupled differential equations can be very much simplified by a suitable choice of gauge for the electromagnetic potentials. It is expedient to decouple the two equations.
In non-relativistic QED, the most convenient approach is to work in the generalized Coulomb gauge, where we require that \begin{eqnarray} \boldsymbol{\nabla}\cdot\left[\epsilon(z)\mathbf{A}(\mathbf{r},t)\right]&=&0,\nonumber\\ \epsilon(z)\boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)+(1-n^2)A_z(\mathbf{r},t)\delta(z)&=&0,\;\label{eqn:Generalized} \end{eqnarray} where the specific form of the dielectric constant of Eq.~(\ref{eqn:Epsilon}) has been used to get the second line. We note that, since $\epsilon(z)$ is not spatially uniform but has a finite jump at $z=0$, the generalised Coulomb gauge differs from the standard Coulomb gauge \begin{equation} \boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)=0\label{eqn:TrueCoulomb} \end{equation} by a surface term that is proportional to a $\delta(z)$-function. With Eq.~(\ref{eqn:Generalized}) it follows from Eq.~(\ref{eqn:Wave1}) that in the absence of sources we can set $\phi(\mathbf{r},t)=0$. Thus, in the generalized Coulomb gauge, Eq.~(\ref{eqn:Wave2}) reduces to \begin{equation} \boldsymbol{\nabla}\times[\boldsymbol{\nabla}\times\mathbf{A}(\mathbf{r},t)]+\frac{\epsilon(z)}{c^2}\dfrac{\partial^2}{\partial t^2}\mathbf{A}(\mathbf{r},t)=0. \end{equation} Therefore, only the vector potential undergoes quantization, which is accomplished by expanding $\mathbf{A}(\mathbf{r},t)$ in a complete set of mode functions that satisfy \begin{equation} \boldsymbol{\nabla}\times[\boldsymbol{\nabla}\times\mathbf{f}_\sigma(\mathbf{r})]-\epsilon(z)\dfrac{\omega_\sigma^2}{c^2}\mathbf{f}_\sigma(\mathbf{r})=0, \end{equation} and are supplemented by the condition that derives from the gauge we are working in, cf. Eq.~(\ref{eqn:Generalized}), \begin{equation} \boldsymbol{\nabla}\cdot\left[\epsilon(z)\mathbf{f}_\sigma(\mathbf{r})\right]=0.\label{eqn:GaugeForModes} \end{equation} We have labelled solutions corresponding to the eigenvalue $\omega_\sigma$ by $\sigma$.
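The surface term in the second line of Eq.~(\ref{eqn:Generalized}) can be made explicit by applying the product rule to the step-function profile (\ref{eqn:Epsilon}):

```latex
\begin{eqnarray}
\boldsymbol{\nabla}\cdot\left[\epsilon(z)\mathbf{A}(\mathbf{r},t)\right]
&=&\epsilon(z)\boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)
+A_z(\mathbf{r},t)\,\frac{{\rm d}\epsilon(z)}{{\rm d}z},\nonumber\\
\frac{{\rm d}\epsilon(z)}{{\rm d}z}
&=&(n^2-1)\,\frac{{\rm d}\theta(-z)}{{\rm d}z}
=(1-n^2)\,\delta(z),
\end{eqnarray}
```

since ${\rm d}\theta(-z)/{\rm d}z=-\delta(z)$.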
The double-curl operator can be rewritten using Eq.~(\ref{eqn:GaugeForModes}) \begin{eqnarray} \boldsymbol{\nabla}\times[\boldsymbol{\nabla}\times\mathbf{f}_\sigma(\mathbf{r})]&=&\boldsymbol{\nabla}\left[\boldsymbol{\nabla}\cdot\mathbf{f}_\sigma(\mathbf{r})\right]-\nabla^2\mathbf{f}_\sigma(\mathbf{r})\nonumber\\ &=&-\nabla^2\mathbf{f}_\sigma(\mathbf{r}),\hspace{5mm}\mbox{for}\; z\neq0.\;\;\;\nonumber \end{eqnarray} Thus away from the interface we can work with the Helmholtz equation \begin{equation} \nabla^2\mathbf{f}_\sigma(\mathbf{r})+\epsilon(z)\dfrac{\omega_\sigma^2}{c^2}\mathbf{f}_\sigma(\mathbf{r})=0,\;\;\;\mbox{for}\; z\neq 0, \end{equation} which can be solved as usual by considering the two distinct regions of space, $z<0$ and $z>0$, and using Maxwell boundary conditions to match solutions across the interface. Once the mode functions are known, the expansion of the vector potential is written as \begin{equation} \mathbf{A}^{\rm gc}(\mathbf{r},t)=\sum_\sigma\sqrt{\frac{\hbar}{2\epsilon_0\omega_{\sigma}}}\left[a_\sigma\mathbf{f}_\sigma(\mathbf{r})e^{-i\omega_\sigma t}+{\rm C.C.}\right],\label{eqn:AExpansion} \end{equation} where the superscript gc reminds us that the expansion is written down in the generalized Coulomb gauge, Eq.~(\ref{eqn:Generalized}). Quantization is accomplished by the promotion of the expansion coefficients $a_\sigma$ to operators that satisfy bosonic equal-time commutation rules \begin{eqnarray} &&[\hat{a}_\sigma, \hat{a}^\dagger_{\sigma'}]=\delta_{\sigma, \sigma'},\\ &&[\hat{a}_\sigma, \hat{a}_{\sigma'}]=0.
\nonumber \end{eqnarray} In the present geometry, described by the dielectric function (\ref{eqn:Epsilon}), the procedure outlined above yields the well-known Carniglia-Mandel modes for the vector-potential operator, which naturally split into two parts describing left-incident and right-incident photons, respectively \cite{Carnigila}: \begin{widetext} \begin{eqnarray} \hat{\mathbf{A}}^{\rm gc}(\mathbf{r},t)&=&\sum_\lambda\int\hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel \left\{\left[\int_0^\infty \hspace{-1mm}{\rm d} k_{zd}\sqrt{\frac{\hbar}{2\epsilon_0\omega_{\mathbf{k}\lambda}}}\;\hat{a}^{L}_{\mathbf{k}\lambda}(t)\mathbf{f}_{\mathbf{k}\lambda}^L(\mathbf{r})\right]+\left[\int_0^\infty\hspace{-1mm}{\rm d} k_{z}\sqrt{\frac{\hbar}{2\epsilon_0\omega_{\mathbf{k}\lambda}}}\;\hat{a}^{R}_{\mathbf{k}\lambda}(t)\mathbf{f}_{\mathbf{k}\lambda}^R(\mathbf{r})\right]\right\}+{\rm H.C.}\;\;\;\label{eqn:AOperator}\\ \mathbf{f}^L_{\mathbf{k}\lambda}(\mathbf{r})&=&\dfrac{\hat{\mathbf{e}}_\lambda(\boldsymbol{\nabla})}{(2\pi)^{3/2}n}\left\{ \theta(-z)\left[e^{i\mathbf{k}^+_d\cdot\mathbf{r}}+R_\lambda^L e^{i\mathbf{k}^-_d\cdot\mathbf{r}}\right]+ \theta(z)\left[T^L_\lambda e^{i\mathbf{k}^+\cdot\mathbf{r}} \right]\right\}\label{eqn:LeftIncident}\\ \mathbf{f}^R_{\mathbf{k}\lambda}(\mathbf{r})&=&\dfrac{\hat{\mathbf{e}}_\lambda(\boldsymbol{\nabla})}{(2\pi)^{3/2}}\left\{ \theta(z)\left[e^{i\mathbf{k}^-\cdot\mathbf{r}}+R_\lambda^R e^{i\mathbf{k}^+\cdot\mathbf{r}}\right]+ \theta(-z)\left[T^R_\lambda e^{i\mathbf{k}_d^+\cdot\mathbf{r}} \right]\right\}\label{eqn:RightIncident} \end{eqnarray} \end{widetext} Here $\lambda$ labels the polarization of the photons ${\lambda=\{{\rm TE}, {\rm TM} \}}$ as transverse electric and transverse magnetic, and a harmonic time-dependence of the annihilation and creation operators is implicitly assumed, i.e. ${a_{\mathbf{k}\lambda}(t)=a_{\mathbf{k}\lambda}(0)e^{-i\omega_{\mathbf{k}\lambda}t}}$.
The mode functions $\mathbf{f}_{\mathbf{k}\lambda}(\mathbf{r})$ entering the expansion (\ref{eqn:AOperator}) contain wavevectors $\mathbf{k}$ and $\mathbf{k}_d$ i.e. the wavevectors in vacuum and dielectric, respectively \begin{equation} \mathbf{k}^\pm=(\mathbf{k}_\parallel,\pm k_z),\;\;\mathbf{k}_d^\pm=(\mathbf{k}_\parallel,\pm k_{zd}). \end{equation} Their $z$-components are related to each other via the law of refraction, ${k_{zd}=\sqrt{n^2k_z^2+(n^2-1)\mathbf{k}_\parallel^2}}$. The sign of the square root is chosen in such a way that on the real axis we have ${\rm sgn}(k_{z})={\rm sgn}(k_{zd})$. This ensures that for a single mode of the electromagnetic field that consists of incident, reflected and transmitted waves, the direction of propagation is consistent between those waves. In Eqs.~(\ref{eqn:LeftIncident}) and (\ref{eqn:RightIncident}) a shorthand notation has been introduced to represent the unit polarization vectors $\hat{\mathbf{e}}_\lambda$. We have defined them as \begin{eqnarray} \hat{\mathbf{e}}_{\rm TE}(\boldsymbol{\nabla})&=& \left(-\nabla^2_\parallel\right)^{-1/2}\left(-i\nabla_y,i\nabla_x,0\right), \label{eqn:PW1}\\ \hat{\mathbf{e}}_{\rm TM}(\boldsymbol{\nabla})&=& \left(\nabla^2_\parallel\nabla^2\right)^{-1/2}\left(-\nabla_x\nabla_z,-\nabla_y\nabla_z,\nabla^2_\parallel\right), \label{eqn:PW2} \end{eqnarray} where it is understood that the derivatives are acting on plane waves and thus give the corresponding components of the wave vector of that wave, e.g. for the right-incident incoming wave $e^{i\mathbf{k}^-\cdot\mathbf{r}}$ the operator $\nabla_z$ gives $-ik_z$. We emphasize that our notation is such that the polarization vectors do not act on the step function in $\epsilon(z)$. This is a convenient notation as the polarization vectors point in different directions for incident, reflected and transmitted waves, respectively. 
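As an illustration of this shorthand, acting with (\ref{eqn:PW1}) and (\ref{eqn:PW2}) on the vacuum plane wave $e^{i\mathbf{k}^+\cdot\mathbf{r}}$, with $\mathbf{k}^+=(\mathbf{k}_\parallel,k_z)$ and $k=|\mathbf{k}^+|$, yields

```latex
\begin{eqnarray}
\hat{\mathbf{e}}_{\rm TE}(\boldsymbol{\nabla})\,e^{i\mathbf{k}^+\cdot\mathbf{r}}
&=&\frac{1}{|\mathbf{k}_\parallel|}\left(k_y,-k_x,0\right)e^{i\mathbf{k}^+\cdot\mathbf{r}},\nonumber\\
\hat{\mathbf{e}}_{\rm TM}(\boldsymbol{\nabla})\,e^{i\mathbf{k}^+\cdot\mathbf{r}}
&=&\frac{1}{|\mathbf{k}_\parallel|\,k}\left(k_xk_z,\,k_yk_z,\,-\mathbf{k}_\parallel^2\right)e^{i\mathbf{k}^+\cdot\mathbf{r}},
\end{eqnarray}
```

i.e. unit vectors orthogonal to $\mathbf{k}^+$ and to each other, as required for transverse polarizations.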
However, one needs to be careful when carrying out explicit calculations with the mode functions (\ref{eqn:LeftIncident})--(\ref{eqn:RightIncident}) and remember that the operator $\hat{\mathbf{e}}_\lambda(\boldsymbol{\nabla})$ is merely a shorthand notation. The Fresnel coefficients in the mode functions (\ref{eqn:LeftIncident}) and (\ref{eqn:RightIncident}) are given by \begin{eqnarray} R^R_{{\rm TE}}=\frac{k_z-k_{zd}}{k_z+k_{zd}},\;\;R^R_{{\rm TM}}=\frac{n^2k_z-k_{zd}}{n^2k_z+k_{zd}},\;\;R^L_{\lambda}=-R^R_{\lambda},\nonumber\\ T^R_{{\rm TE}}=\frac{2k_{z}}{k_z+k_{zd}},\;\;T^R_{{\rm TM}}=\frac{2nk_{z}}{n^2k_z+k_{zd}},\;\;T^L_\lambda=\dfrac{k_{zd}}{k_{z}}T^R_\lambda.\nonumber\\\label{eqn:Frnl} \end{eqnarray} The mode functions (\ref{eqn:LeftIncident})--(\ref{eqn:RightIncident}) need to satisfy a completeness relation, which can be written in the form \cite{Glauber} \begin{eqnarray} \sum_\lambda\int\hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel\bigg[\int_0^\infty \hspace{-1mm}{\rm d} k_z \;f_{\mathbf{k}\lambda, i}^R(\mathbf{r})f_{\mathbf{k}\lambda, j}^{*R}(\mathbf{r}')\hspace{25mm}\nonumber\\ +\int_0^\infty \hspace{-1mm}{\rm d} k_{zd} \;f_{\mathbf{k}\lambda, i}^L(\mathbf{r})f_{\mathbf{k}\lambda, j}^{*L}(\mathbf{r}') \bigg] =\delta^\epsilon_{ij}(\mathbf{r},\mathbf{r}')\hspace{0.2 cm}\label{eqn:Completeness} \end{eqnarray} where for definiteness throughout this paper we choose $\mathbf{r}'$ to refer to a point that lies outside the dielectric, i.e. $z'>0$.
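Before evaluating (\ref{eqn:Completeness}) explicitly, a quick consistency check of Eq.~(\ref{eqn:Frnl}) is the limit $n\to1$, in which $k_{zd}\to k_z$ and the interface must disappear:

```latex
\begin{equation}
n\to1:\qquad R^{R}_{\lambda}\to0,\qquad R^{L}_{\lambda}\to0,\qquad
T^{R}_{\lambda}\to1,\qquad T^{L}_{\lambda}\to1,
\end{equation}
```

so that the mode functions (\ref{eqn:LeftIncident})--(\ref{eqn:RightIncident}) reduce to free-space plane waves and the completeness relation must reproduce the usual transverse $\delta$-function.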
The proof of the relation \begin{eqnarray} \nabla^2\sum_\lambda\int\hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel\bigg[\int_0^\infty \hspace{-1mm}{\rm d} k_z \;f_{\mathbf{k}\lambda, i}^R(\mathbf{r})f_{\mathbf{k}\lambda, j}^{*R}(\mathbf{r}')\hspace{1.5cm}\nonumber\\ +\int_0^\infty \hspace{-1mm}{\rm d} k_{zd} \;f_{\mathbf{k}\lambda, i}^L(\mathbf{r})f_{\mathbf{k}\lambda, j}^{*L}(\mathbf{r}') \bigg]\hspace{2.0 cm}\nonumber\\ =\left(\nabla_i\nabla_j-\delta_{ij}\nabla^2\right)\delta^{(3)}(\mathbf{r}-\mathbf{r}')\label{eqn:Completeness1} \end{eqnarray} has been presented in \cite{Birula}. Equation (\ref{eqn:Completeness1}) is of course obtained by acting with the Laplace operator $\nabla^2$ on (\ref{eqn:Completeness}). However, at this point it is not obvious that \begin{equation} \nabla^2\delta_{ij}^{\epsilon}(\mathbf{r},\mathbf{r}')=\left(\nabla_i\nabla_j-\delta_{ij}\nabla^2\right)\delta^{(3)}(\mathbf{r}-\mathbf{r}').\label{eqn:BirulaDelta} \end{equation} The object $\delta_{ij}^{\epsilon}(\mathbf{r},\mathbf{r}')$ represents the unit kernel in the subspace of the mode functions that satisfy the generalised Coulomb gauge i.e. if $\mathbf{f}_{\mathbf{k}\lambda}(\mathbf{r})$ satisfies Eq. (\ref{eqn:GaugeForModes}) then \begin{equation} \int\hspace{-1mm}{\rm d} ^3\mathbf{r}'\delta^\epsilon_{ij}(\mathbf{r},\mathbf{r}')\mathbf{f}^j_{\mathbf{k}\lambda}(\mathbf{r}')=\mathbf{f}^i_{\mathbf{k}\lambda}(\mathbf{r}). \end{equation} Even less obvious is that, even though the generalized Coulomb gauge differs from the standard Coulomb gauge only by a surface term, cf. Eq. (\ref{eqn:Generalized}), the corresponding unit kernels in the position representation in these two gauges differ in the whole of space because of their non-local character, i.e. 
even though \begin{equation} \boldsymbol{\nabla}\cdot \mathbf{f}_{\mathbf{k}\lambda}(\mathbf{r})=\boldsymbol{\nabla}\cdot \left[\epsilon(z)\mathbf{f}_{\mathbf{k}\lambda}(\mathbf{r})\right],\;{\rm for}\;z\neq0, \end{equation} we have \begin{equation} \delta^\perp_{ij}(\mathbf{r}-\mathbf{r}')\neq\delta^\epsilon_{ij}(\mathbf{r},\mathbf{r}'),\;\;\;{\rm for\;all}\;z,z'. \end{equation} Here, $\delta^{\perp}_{ij}(\mathbf{r}-\mathbf{r}')$ is the usual transverse $\delta$-function \begin{equation} \delta^{\perp}_{ij}(\mathbf{r}-\mathbf{r}')=\frac{1}{(2\pi)^3}\int\hspace{-1mm}{\rm d}^3\mathbf{k}\left(\delta_{ij}-\frac{k_i k_j}{\mathbf{k}^2}\right)e^{i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}')},\label{eqn:TransverseDelta} \end{equation} i.e. the unit kernel in the subspace of mode functions that satisfy $\boldsymbol{\nabla}\cdot \mathbf{f}_{\mathbf{k}\lambda}(\mathbf{r})=0$. We also emphasize that $\delta^\epsilon_{ij}(\mathbf{r},\mathbf{r}')$ is not translation-invariant, because translation invariance is broken by the presence of the interface where waves are partially reflected. It turns out that it is possible to calculate the $\mathbf{r}$-representation of $\delta^\epsilon_{ij}(\mathbf{r},\mathbf{r}')$ directly by evaluating the integrals in (\ref{eqn:Completeness}). Before we do so, let us rewrite the transverse delta function (\ref{eqn:TransverseDelta}) as \begin{equation} \delta^{\perp}_{ij}(\mathbf{r}-\mathbf{r}')=\delta_{ij}\delta^{(3)}(\mathbf{r}-\mathbf{r}')-\nabla_i\nabla_j' G^0(\mathbf{r}-\mathbf{r}'),\label{eqn:TransverseDelta1} \end{equation} where we have introduced the Green's function of the Poisson equation in free space \begin{equation} G^0(\mathbf{r}-\mathbf{r}')=\frac{1}{4\pi}\frac{1}{|\mathbf{r}-\mathbf{r}'|}\;.\label{eqn:Poisson} \end{equation} Let us now turn to the explicit evaluation of the LHS of Eq. (\ref{eqn:Completeness}). First we deal with the case $z<0$ and $z'>0$ for which we provide a detailed calculation. 
Substituting the mode functions (\ref{eqn:LeftIncident})--(\ref{eqn:RightIncident}) into (\ref{eqn:Completeness}) and multiplying out we obtain \begin{eqnarray} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\frac{1}{(2\pi)^3}\sum_\lambda\int \hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{i\mathbf{k}_\parallel\cdot(\mathbf{r}_\parallel-\mathbf{r}_\parallel')}\hspace{1cm} \nonumber\\ \times \left\{\int_{0}^\infty \dfrac{\hspace{-1mm}{\rm d} k_{zd}}{n^2}\left[ T^{L*}_\lambda\;\hat{e}^i_\lambda(\mathbf{k}_d^+)\hat{e}^{*j}_{\lambda}(\mathbf{k}^+)e^{ik_{zd} z-ik_z^*z'} \right.\right. \nonumber\\ \left.\left. +R^{L}_\lambda T^{L*}_\lambda\; \hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{*j}_{\lambda}(\mathbf{k}^+)e^{-ik_{zd}z-ik_z^*z'}\right]\right. \nonumber\\ \left.+\int_0^\infty \hspace{-1mm}{\rm d} k_z \left[ T^{R}_\lambda \;\hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{*j}_{\lambda}(\mathbf{k}^-)e^{-ik_{zd}z+ik_zz'} \right.\right. \nonumber\\ \left.\left. +R^{R}_\lambda T^{R}_\lambda \;\hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{*j}_{\lambda}(\mathbf{k}^+)e^{-ik_{zd}z-ik_zz'}\right]\right\}\nonumber\\ \label{eqn:CompExplicitA1} \end{eqnarray} where $\hat{e}_\lambda^i(\mathbf{k}^\pm)\equiv \hat{e}_\lambda^i(\boldsymbol{\nabla}) e^{i\mathbf{k}^\pm\cdot\mathbf{r}}$. We proceed by focussing attention on the $k_z$ and $k_{zd}$ integrals. We convert the $k_{zd}$ integral using the relation ${k_{zd}=\sqrt{n^2k_z^2+(n^2-1)\mathbf{k}_\parallel^2}}$ \begin{equation} \int_0^\infty\hspace{-1mm}{\rm d} k_{zd} = n^2\int_{i\Gamma}^0\hspace{-1mm}{\rm d} k_z \dfrac{k_z}{k_{zd}}+n^2\int_0^\infty\hspace{-1mm}{\rm d} k_z\dfrac{k_z}{k_{zd}}, \end{equation} where $\Gamma=|\mathbf{k}_\parallel|(n^2-1)^{1/2}/n$. 
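The change of variables follows from differentiating the refraction law $k_{zd}^2=n^2k_z^2+(n^2-1)\mathbf{k}_\parallel^2$, which gives ${\rm d}k_{zd}=n^2(k_z/k_{zd})\,{\rm d}k_z$, with $k_{zd}$ vanishing at the branch point $k_z=i\Gamma$. A minimal numerical verification (illustrative parameter values; not part of the derivation):

```python
import math

n, kpar = 1.5, 0.9
Gamma = kpar * math.sqrt(n**2 - 1) / n

def kzd(kz):
    # works for real and complex kz alike
    return (n**2 * kz**2 + (n**2 - 1) * kpar**2) ** 0.5

# k_zd vanishes exactly at the branch point kz = i*Gamma
assert abs(kzd(1j * Gamma)) < 1e-6

# Jacobian dk_zd/dk_z = n^2 kz / k_zd, checked by a central finite difference
kz, h = 0.7, 1e-6
numeric = (kzd(kz + h) - kzd(kz - h)) / (2 * h)
assert abs(numeric - n**2 * kz / kzd(kz)) < 1e-8
```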
After this change of variables, the expression we wish to evaluate consists of an integral along the positive real axis (travelling modes) and an integral along part of the positive imaginary axis, $k_z \in [0,i\Gamma]$ (evanescent modes), \begin{eqnarray} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\frac{1}{(2\pi)^3}\sum_\lambda\int \hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{i\mathbf{k}_\parallel\cdot(\mathbf{r}_\parallel-\mathbf{r}_\parallel')}\hspace{1.0 cm}\nonumber\\ \times \left\{ \int_{i\Gamma}^{0^+} \hspace{-1mm}{\rm d} k_z \left[ \dfrac{k_z}{k_{zd}}T^{L*}_\lambda\;\hat{e}^i_\lambda(\mathbf{k}_d^+)\hat{e}^{j}_{\lambda}(\mathbf{k}^-)e^{ik_{zd} z+ik_z z'} \right.\right. \nonumber\\ \left.\left. + \dfrac{k_z}{k_{zd}}T^{L*}_\lambda R^{L}_\lambda\; \hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{j}_{\lambda}(\mathbf{k}^-)e^{-ik_{zd}z+ik_zz'} \right] \right.\nonumber\\ \left. +\int_{0}^\infty \hspace{-1mm}{\rm d} k_z \left[ \dfrac{k_z}{k_{zd}}T^{L}_\lambda\;\hat{e}^i_\lambda(\mathbf{k}_d^+)\hat{e}^{j}_{\lambda}(\mathbf{k}^+)e^{ik_{zd} z-ik_z z'} \right.\right. \nonumber\\ \left.\left. + T^{R}_\lambda\; \hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{j}_{\lambda}(\mathbf{k}^-)e^{-ik_{zd}z+ik_zz'} \right. \right.\nonumber\\ \left. \left. +\dfrac{k_z}{k_{zd}} T^{L}_\lambda R^L_\lambda \;\hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{j}_{\lambda}(\mathbf{k}^+)e^{-ik_{zd}z-ik_zz'} \right.\right. \nonumber\\ \left.\left. +R^{R}_\lambda T^{R}_\lambda \;\hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{j}_{\lambda}(\mathbf{k}^+)e^{-ik_{zd}z-ik_zz'} \right]\right\}.\label{eqn:CompExplicitA2} \end{eqnarray} Here the integral on the interval $k_z\in [i\Gamma, 0^+]$ runs on the right side of the branch cut due to $k_{zd}$, which runs from $k_z=-i\Gamma$ to $k_z=i\Gamma$.
The last two terms in Eq.~(\ref{eqn:CompExplicitA2}) cancel out by virtue of the relations (\ref{eqn:Frnl}), and the other two terms in that integral can be combined into a single integral running along the interval $k_z\in (-\infty, 0^-] \cup [0^+, \infty)$ \begin{eqnarray} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\frac{1}{(2\pi)^3}\sum_\lambda\int \hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{i\mathbf{k}_\parallel\cdot(\mathbf{r}_\parallel-\mathbf{r}_\parallel')}\hspace{23mm}\nonumber\\ \times\left\{ \int_{-\infty}^\infty \hspace{-1mm}{\rm d} k_z \left[ T^{R}_\lambda\; \hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{j}_{\lambda}(\mathbf{k}^-)e^{-ik_{zd}z+ik_zz'} \right]\right.\hspace{2mm}\nonumber\\ \left. +\int_{i\Gamma}^{0^+} \hspace{-1mm}{\rm d} k_z \left[ \dfrac{k_z}{k_{zd}}T^{L*}_\lambda\;\hat{e}^i_\lambda(\mathbf{k}_d^+)\hat{e}^{j}_{\lambda}(\mathbf{k}^-)e^{ik_{zd} z+ik_z z'} \right.\right.\hspace{-2mm} \nonumber\\ \left.\left. + \dfrac{k_z}{k_{zd}}T^{L*}_\lambda R^{L}_\lambda\; \hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{j}_{\lambda}(\mathbf{k}^-)e^{-ik_{zd}z+ik_zz'} \right] \right\}.\hspace{4mm} \label{eqn:CompExplicitA3} \end{eqnarray} To proceed any further, close inspection of Eq.~(\ref{eqn:CompExplicitA3}) is necessary. To illustrate the argument, we focus on the TM contributions to the integral. The TE contributions are treated in an exactly analogous way.
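The cancellation invoked above rests on the identities $(k_z/k_{zd})T^L_\lambda=T^R_\lambda$ and $R^L_\lambda=-R^R_\lambda$, which follow directly from (\ref{eqn:Frnl}). A numerical spot check (illustrative values; a sanity check, not part of the derivation):

```python
import math

n, kz, kpar = 1.5, 0.6, 1.1
kd = math.sqrt(n**2 * kz**2 + (n**2 - 1) * kpar**2)

for R, T in (((kz - kd) / (kz + kd), 2 * kz / (kz + kd)),                          # TE
             ((n**2 * kz - kd) / (n**2 * kz + kd), 2 * n * kz / (n**2 * kz + kd))):  # TM
    RL, TL = -R, (kd / kz) * T
    # the Jacobian factor turns T^L into T^R ...
    assert abs((kz / kd) * TL - T) < 1e-12
    # ... so the last two travelling terms cancel: (kz/kd) T^L R^L + R^R T^R = 0
    assert abs((kz / kd) * TL * RL + R * T) < 1e-12
```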
We start by noting that for purely imaginary $k_z$ we have $k_z^*=-k_z$ so that we get \begin{equation} \dfrac{k_z}{k_{zd}}T^{L*}_{\rm TM}=\dfrac{2nk_{z}}{k_{zd}-n^2k_z},\hspace{1 cm}\dfrac{k_z}{k_{zd}}T^{L*}_{\rm TM} R^L_{{\rm TM}}=\dfrac{2nk_{z}}{k_{zd}+n^2k_z}.\nonumber \end{equation} Therefore, the $k_z$-integral in the last two lines of Eq.~(\ref{eqn:CompExplicitA3}) can be written as \begin{eqnarray} \int_{i\Gamma}^{0^+} \hspace{-1mm}{\rm d} k_z \left( \dfrac{2nk_z}{k_{zd}+n^2k_z}\right)\hat{e}^i_{\rm TM}(\mathbf{k}_d^-)\hat{e}^{j}_{{\rm TM}}(\mathbf{k}^-)e^{-ik_{zd} z+ik_z z'}\nonumber\\ +\int_{i\Gamma}^{0^+} \hspace{-1mm}{\rm d} k_z\left(\dfrac{2nk_z}{k_{zd}-n^2k_z}\right) \hat{e}^i_{\rm TM}(\mathbf{k}_d^+)\hat{e}^{j}_{{\rm TM}}(\mathbf{k}^-)e^{+ik_{zd}z+ik_zz'}.\nonumber \end{eqnarray} Now we observe that the second integral differs from the first integral only by the sign of $k_{zd}$. This allows us to combine these two integrals into a single contour integral around the branch-cut due to $k_{zd}$ \begin{eqnarray} \int_\mathcal{C} \hspace{-1mm}{\rm d} k_z T^R_{\rm TM}\hat{e}^i_{\rm TM}(\mathbf{k}_d^-)\hat{e}^{j}_{{\rm TM}}(\mathbf{k}^-)e^{-ik_{zd} z+ik_z z'}\label{eqn:AroundCut1} \end{eqnarray} where the contour $\mathcal{C}$ is illustrated in Fig. \ref{fig:AroundCut}.
\begin{figure}[htbp] \centering \includegraphics[width=8cm,height=6cm]{CH2Contour1.eps} \caption{The dashed line represents the contour $\mathcal{C}$ used to evaluate the $k_z$-integral in equation (\ref{eqn:AroundCut1}).} \label{fig:AroundCut} \end{figure} Thus the completeness relation (\ref{eqn:CompExplicitA3}) may be written compactly as \begin{eqnarray} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\frac{1}{(2\pi)^3}\sum_\lambda\int \hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{i\mathbf{k}_\parallel\cdot(\mathbf{r}_\parallel-\mathbf{r}_\parallel')}\hspace{0.5cm}\nonumber\\ \times\int_\gamma \hspace{-1mm}{\rm d} k_z T^{R}_\lambda\; \hat{e}^i_\lambda(\mathbf{k}_d^-)\hat{e}^{j}_{\lambda}(\mathbf{k}^-)e^{-ik_{zd}z+ik_zz'} \label{eqn:CompExplicitA4} \end{eqnarray} where the contour $\gamma$ runs along the negative real axis from $k_z=-\infty$ to $k_z=0^-$, then around the branch-cut along the contour $\mathcal{C}$ depicted in Fig. \ref{fig:AroundCut} and then from $k_z=0^+$ to $k_z=\infty$. The $k_z$-integral may now be evaluated with the help of the residue theorem. We note that for $z<0$ and $z'>0$ the integrand in Eq.~(\ref{eqn:CompExplicitA4}) vanishes exponentially in the upper $k_z$-plane so that we can close the contour there. To do so we need to determine the position of the integrand's poles, if any. The Fresnel coefficients for the half-space geometry are analytic for ${\rm Im}(k_z)>0$ so that it remains to look at the analytic properties of the polarization vectors defined in Eqs.~(\ref{eqn:PW1})--(\ref{eqn:PW2}). For the TE mode we immediately note that $\hat{\mathbf{e}}_{\rm TE}$ is independent of $k_z$. Thus the transverse electric modes do not contribute to the integral (\ref{eqn:CompExplicitA4}). For the TM mode, each polarization vector contributes a factor of $1/|\mathbf{k}|$ where $|\mathbf{k}|=\sqrt{k_z^2+\mathbf{k}_\parallel^2}$. Thus for a TM mode the integrand has a simple pole in the upper half-plane at $k_z=i|\mathbf{k}_\parallel|$.
Using the residue theorem, one can now easily show that \begin{equation} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=-\nabla_i \nabla'_j G^T(\mathbf{r}-\mathbf{r}'),\mbox{ for }z<0,\;z'>0\label{eqn:GeneralizedDeltaLeft} \end{equation} where \begin{equation} G^T(\mathbf{r}-\mathbf{r}')=\frac{1}{4\pi n^2}\frac{2n^2}{n^2+1}\frac{1}{|\mathbf{r}-\mathbf{r}'|} \end{equation} is the transmitted part of the electrostatic Green's function in the half-space geometry, see e.g. \cite{Jackson}. In order to evaluate Eq.~(\ref{eqn:Completeness}) for the case $z>0,\;z'>0$ we again substitute the relevant mode functions (\ref{eqn:LeftIncident})--(\ref{eqn:RightIncident}) and, after utilizing straightforward properties of the Fresnel reflection coefficients, we arrive at \begin{eqnarray} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\frac{1}{(2\pi)^3}\sum_\lambda\int \hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{i\mathbf{k}_\parallel\cdot(\mathbf{r}_\parallel-\mathbf{r}_\parallel')}\hspace{24mm}\nonumber\\ \times\left\{\int_{-\infty}^\infty\hspace{-1mm}{\rm d} k_z \hat{e}^i_\lambda(\mathbf{k}^+)\hat{e}^j_\lambda(\mathbf{k}^+)e^{ik_z(z-z')}\right.\hspace{12mm}\nonumber\\ +\int_{-\infty}^\infty\hspace{-1mm}{\rm d} k_z R_\lambda^R\hat{e}^i_\lambda(\mathbf{k}^+)\hat{e}^j_\lambda(\mathbf{k}^-)e^{ik_z(z+z')} \hspace{4mm}\nonumber\\ \left.+\int_{i\Gamma}^0\hspace{-1mm}{\rm d} k_z \frac{k_z}{k_{zd}}\left| T^L_\lambda\right|^2\hat{e}^i_\lambda(\mathbf{k}^-)\hat{e}^j_\lambda(\mathbf{k}^-)e^{ik_z(z+z')}\right\}\; \hspace{2mm} \label{eqn:CompExplicit} \end{eqnarray} with $\Gamma=|\mathbf{k}_\parallel|(n^2-1)^{1/2}/n$ and $\hat{e}_\lambda^i(\mathbf{k}^\pm)\equiv \hat{e}_\lambda^i(\boldsymbol{\nabla}) e^{i\mathbf{k}^\pm\cdot\mathbf{r}}$. Now we note that, because of the completeness properties of the polarization vectors, the first $k_z$ integral in Eq.~(\ref{eqn:CompExplicit}) yields the transverse $\delta$-function, Eq. (\ref{eqn:TransverseDelta}).
The remaining two terms can be combined into a single contour integral around the branch cut due to $k_{zd}=\sqrt{n^2k_z^2+(n^2-1)\mathbf{k}_\parallel^2}$. This is done in exactly the same manner as in \cite{Birula,Robaschik}. Thus the result reads \begin{eqnarray} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\delta^\perp_{ij}(\mathbf{r}-\mathbf{r}')+\frac{1}{(2\pi)^3}\sum_\lambda\int \hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{i\mathbf{k}_\parallel\cdot(\mathbf{r}_\parallel-\mathbf{r}_\parallel')}\nonumber\\ \times\int_\gamma\hspace{-1mm}{\rm d} k_z R_\lambda^R\hat{e}^i_\lambda(\mathbf{k}^+)\hat{e}^j_\lambda(\mathbf{k}^-)e^{ik_z(z+z')}\;\; \label{eqn:CompExplicit1} \end{eqnarray} where the contour $\gamma$ runs along the negative real axis from $k_z=-\infty$ to $k_z=0^-$, then around the branch cut along the contour $\mathcal{C}$ depicted in Fig. \ref{fig:AroundCut} and then from $k_z=0^+$ to $k_z=\infty$. Since the reflection coefficient $R^R_\lambda$ has no poles in the upper $k_z$-plane we can close the contour there. Then, for the ${\rm TE}$ mode the integral vanishes because the polarization vectors do not depend on $k_z$. For the ${\rm TM}$ mode, however, the polarization vectors contribute a pole in the upper half-plane at $k_z=i|\mathbf{k}_\parallel|$. 
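The residue computation can be spot-checked numerically: at the TM pole $k_z=i|\mathbf{k}_\parallel|$ one also has $k_{zd}=i|\mathbf{k}_\parallel|$, and the Fresnel coefficients reduce to the electrostatic image factors $(n^2-1)/(n^2+1)$ and $2n/(n^2+1)$ that govern the reflected and transmitted parts of the electrostatic Green's function. A minimal check (illustrative values; not part of the text):

```python
n, kpar = 1.5, 1.1
kz = 1j * kpar                                       # TM pole of the polarization vectors
kd = (n**2 * kz**2 + (n**2 - 1) * kpar**2) ** 0.5    # equals i*kpar at the pole

R_TM = (n**2 * kz - kd) / (n**2 * kz + kd)
T_TM = 2 * n * kz / (n**2 * kz + kd)

# at the pole the Fresnel coefficients reduce to the electrostatic image factors
assert abs(kd - kz) < 1e-12
assert abs(R_TM - (n**2 - 1) / (n**2 + 1)) < 1e-12
assert abs(T_TM - 2 * n / (n**2 + 1)) < 1e-12
```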
The integral is easily evaluated using the residue theorem and leads to the final result that can be written explicitly as \begin{eqnarray} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')&\!\!=&\!\!\delta_{ij}\delta^{(3)}(\mathbf{r}-\mathbf{r}')\nonumber\\ &&\!\!-\nabla_i\nabla'_j\left[G^0(\mathbf{r}-\mathbf{r}')+G^R(\mathbf{r},\mathbf{r}')\right]\mbox{ for }z,z'>0\nonumber\\\label{eqn:GeneralizedDeltaRight} \end{eqnarray} with $G^{\rm R}(\mathbf{r},\mathbf{r}')$ being the reflected part of the electrostatic Green's function in the half-space geometry \begin{eqnarray} G^{R}(\mathbf{r},\mathbf{r}')=-\frac{1}{4\pi}\frac{n^2-1}{n^2+1}\frac{1}{|\mathbf{r}-\bar{\mathbf{r}}'|}\label{eqn:ReflectedPoisson} \end{eqnarray} where $\bar{\mathbf{r}}'=(x',y',-z')$. The results (\ref{eqn:GeneralizedDeltaLeft}) and (\ref{eqn:GeneralizedDeltaRight}) may be written in compact form as \begin{equation} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\delta_{ij}\delta^{(3)}(\mathbf{r}-\mathbf{r}') -\nabla_i\nabla'_j G(\mathbf{r},\mathbf{r}'),\mbox{ for }z'>0\label{eqn:GeneralizedDeltaTot} \end{equation} where \begin{eqnarray} G(\mathbf{r},\mathbf{r}')&=&\frac{1}{4\pi n^2}\frac{2n^2}{n^2+1}\frac{1}{|\mathbf{r}-\mathbf{r}'|}\theta(-z)\nonumber\\ &+&\left(\frac{1}{4\pi}\frac{1}{|\mathbf{r}-\mathbf{r}'|}-\frac{1}{4\pi}\frac{n^2-1}{n^2+1}\frac{1}{|\mathbf{r}-\bar{\mathbf{r}}'|}\right)\theta(z)\nonumber\\ \label{eqn:FullPoissonGreenF} \end{eqnarray} is the Green's function of the Poisson equation for the case of a source located outside the dielectric, which occupies the $z<0$ region of space. We see that the end result formally takes the same form as (\ref{eqn:TransverseDelta1}), except that the free-space Green's function of the Poisson equation is replaced by the Green's function in the presence of a dielectric half-space of refractive index $n$.
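That (\ref{eqn:FullPoissonGreenF}) is indeed the electrostatic Green's function of the half-space problem can be verified by checking the matching conditions at the interface: $G$ is continuous across $z=0$ and so is $\epsilon(z)\,\partial_z G$. A finite-difference sanity check (illustrative source point and field point; not part of the derivation):

```python
import math

n = 1.5
xp, yp, zp = 0.2, -0.4, 0.8          # source point r' with z' > 0

def G(x, y, z):
    """Green's function (FullPoissonGreenF) of the Poisson equation, source at r'."""
    d  = math.sqrt((x - xp)**2 + (y - yp)**2 + (z - zp)**2)
    db = math.sqrt((x - xp)**2 + (y - yp)**2 + (z + zp)**2)
    if z < 0:
        return (1 / (4 * math.pi * n**2)) * (2 * n**2 / (n**2 + 1)) / d
    return 1 / (4 * math.pi * d) - (n**2 - 1) / (n**2 + 1) / (4 * math.pi * db)

x, y, h = 0.5, 0.3, 1e-6
# potential continuous across the interface
assert abs(G(x, y, h) - G(x, y, -h)) < 1e-5
# epsilon(z) * dG/dz continuous (no free surface charge at z = 0)
dG_above = (G(x, y, 2 * h) - G(x, y, h)) / h
dG_below = (G(x, y, -h) - G(x, y, -2 * h)) / h
assert abs(dG_above - n**2 * dG_below) < 1e-4
```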
The result (\ref{eqn:GeneralizedDeltaTot}) may be formally written as \begin{equation} \delta_{ij}^\epsilon(\mathbf{r},\mathbf{r}')=\left(\delta_{ij}+\nabla_i\nabla'_j\nabla^{-2}\right)\delta^{(3)}(\mathbf{r}-\mathbf{r}')\label{eqn:GeneralizedDelta} \end{equation} provided an appropriate meaning is attached to the integral operator $\nabla^{-2}$. We would like to remark that it is in this sense that the completeness relation proven in \cite{Completeness} holds. There, of course, the Green's function is that in the slab geometry, see the appendix of Ref.~\cite{Slab}. Equation (\ref{eqn:GeneralizedDelta}) needs to be compared with Eq.~(\ref{eqn:BirulaDelta}). Note in particular, that the derivative $\nabla'_j$ which acts on $\mathbf{r}'$ cannot be shifted to act on $\mathbf{r}$ because of the reflection term in (\ref{eqn:FullPoissonGreenF}). This is possible only after one acts with the Laplace operator on (\ref{eqn:GeneralizedDelta}). Only then can one replace $\nabla_j'$ with $-\nabla_j$ and recover the result (\ref{eqn:BirulaDelta}) derived in \cite{Birula}. Once the completeness relation of the mode functions has been explicitly calculated, one can also evaluate the equal-time field commutator. Using Eq.~(\ref{eqn:AExpansion}) we have \begin{equation} \left[A_i^{\rm gc}(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]=-i\hbar\delta^{\epsilon}_{ij}(\mathbf{r},\mathbf{r}') \end{equation} so for the case of the electromagnetic field in the presence of a dielectric half-space the commutator between the vector potential and electric field operator reads \begin{eqnarray} \left[A_i^{\rm gc}(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]&=&-i\hbar\delta_{ij}\delta^{(3)}(\mathbf{r}-\mathbf{r}')\nonumber\\ &+&i\hbar\nabla_i\nabla'_j G(\mathbf{r},\mathbf{r}'),\label{eqn:CommGenGauge} \end{eqnarray} where $G(\mathbf{r},\mathbf{r}')$ is given by (\ref{eqn:FullPoissonGreenF}) and we remind the reader that we consider the case $z'>0$ only.
We see that, compared to the standard commutation relations of QED, the commutator in the presence of the dielectric gains an additional term that represents the reflection from the surface. Note that in the limit of perfect reflectivity, i.e. $n\rightarrow\infty$, we recover the results obtained in \cite{Milonni3,Power}. We will come back to this fact at the end of Section~\ref{sec:3}. \section{\label{sec:3}Coulomb gauge} The natural question arising is whether it is possible to quantize the electromagnetic field in the presence of a dielectric half-space but work in the true Coulomb gauge. The direct approach to solving the Maxwell equations (\ref{eqn:Wave1})--(\ref{eqn:Wave2}) proves intractable, but we shall show that one can exploit a gauge transformation to obtain the field operators in the true Coulomb gauge from those in the generalized Coulomb gauge. A gauge transformation from the generalized Coulomb gauge to the true Coulomb gauge may be written as follows \begin{eqnarray} \mathbf{A}^{\rm c}(\mathbf{r},t)&=&\mathbf{A}^{\rm gc}(\mathbf{r},t)-\boldsymbol{\nabla}\chi(\mathbf{r},t),\label{eqn:ChangeA}\\ \phi^{\rm c}(\mathbf{r},t)&=&\phi^{\rm gc}(\mathbf{r},t)+\frac{\partial}{\partial t}\chi(\mathbf{r},t),\label{eqn:changePhi} \end{eqnarray} where we set $\phi^{\rm gc}(\mathbf{r},t)=0$ in the absence of charges. It is clear that in the true Coulomb gauge, even in the absence of charges, the scalar potential does not vanish. In fact, we shall see shortly that in the true Coulomb gauge the scalar potential enters the Hamiltonian on an equal footing with the vector potential as a second-quantized operator. We note that the left-hand side of Eq.~(\ref{eqn:ChangeA}) is transverse, and since $\mathbf{A}^{\rm gc}$ is not, the gradient of the generating function $\chi(\mathbf{r},t)$ must compensate for its longitudinal part \cite{Babiker}.
In other words we have \cite{maximumPrincipleFootnote} \begin{equation} \nabla_i \chi(\mathbf{r},t)=\int\hspace{-1mm}{\rm d}^3 \mathbf{r}' \delta_{ij}^\parallel(\mathbf{r}-\mathbf{r}')A_j^{\rm gc}(\mathbf{r}', t).\label{eqn:ChiConstr} \end{equation} The form of $\chi$ is easily found by using the position representation of the longitudinal $\delta$-function, \begin{equation} \nabla_i \chi(\mathbf{r},t)=\frac{1}{4\pi}\int\hspace{-1mm}{\rm d}^3 \mathbf{r}'\left(\nabla_i\nabla'_j\frac{1}{|\mathbf{r}-\mathbf{r}'|}\right)A_j^{\rm gc}(\mathbf{r}', t) \end{equation} where the primed derivative acts only on the Green's function and not on $A_j^{\rm gc}$. After integrating by parts, we identify \begin{equation} \chi(\mathbf{r},t)=-\frac{1}{4\pi}\int\hspace{-1mm}{\rm d}^3\mathbf{r}'\frac{1}{|\mathbf{r}-\mathbf{r}'|}\boldsymbol{\nabla}'\cdot\mathbf{A}^{\rm gc}(\mathbf{r}',t).\label{eqn:GeneratingFunction} \end{equation} The generating function $\chi(\mathbf{r},t)$ can be obtained directly by using the explicit form of the field operator $\mathbf{A}^{\rm gc}$ from Eq.~(\ref{eqn:AOperator}) and evaluating the integrals in Eq.~(\ref{eqn:GeneratingFunction}).
Alternatively, we take the divergence of Eq.~(\ref{eqn:ChangeA}) followed by a time derivative and find that the scalar potential in the true Coulomb gauge $\phi^c=\dot{\chi}$ satisfies the Poisson equation \begin{equation} -\nabla^2\dot{\chi}(\mathbf{r},t)=\frac{\sigma(\mathbf{r}_\parallel,t)}{\epsilon_0}\;\delta(z)\label{eqn:PoissonEquationForChi}, \end{equation} with the surface charge density \begin{eqnarray} \sigma(\mathbf{r}_\parallel,t)=-2i\int\hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel|\mathbf{k}_\parallel|\hspace{4cm}\nonumber\\ \times\left\{\left[\int_0^\infty \hspace{-1mm}{\rm d} k_{zd}\sqrt{\frac{\hbar\epsilon_0}{2\omega_\mathbf{k}}}\;\hat{a}^{L}_{\mathbf{k}{\rm TM}}(t)g_{\mathbf{k}}^L(\mathbf{r}_\parallel)-{\rm H.C.}\right]\hspace{1 cm}\right.\nonumber\\ \left.+\left[\int_0^\infty\hspace{-1mm}{\rm d} k_{z}\sqrt{\frac{\hbar\epsilon_0}{2\omega_\mathbf{k}}}\;\hat{a}^{R}_{\mathbf{k}{\rm TM}}(t)g_{\mathbf{k}}^R(\mathbf{r}_\parallel)-{\rm H.C.}\right]\right\}.\;\;\label{eqn:Sigma} \end{eqnarray} Here we have introduced the two mode functions \begin{eqnarray} g^R_\mathbf{k}(\mathbf{r}_\parallel)&=&\frac{1}{(2\pi)^{3/2}}\frac{n^2-1}{2n^2}\left(1+R^R_{\rm TM}\right) e^{i\mathbf{k}_\parallel\cdot\mathbf{r}_\parallel},\label{eqn:GL}\\ g^L_\mathbf{k}(\mathbf{r}_\parallel)&=&\frac{1}{(2\pi)^{3/2}}\frac{n^2-1}{2n^2} \frac{T^L_{\rm TM}}{n}e^{i\mathbf{k}_\parallel\cdot\mathbf{r}_\parallel},\;\;\;\label{eqn:GR} \end{eqnarray} with reflection coefficients as given by Eqs.~(\ref{eqn:Frnl}). 
The solution of Eq.~(\ref{eqn:PoissonEquationForChi}) is easily found to be \begin{eqnarray} \dot{\chi}(\mathbf{r},t)&=&i\int\hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{-|\mathbf{k}_\parallel||z|}\nonumber\\ &\times&\left\{\left[\int_0^\infty \hspace{-1mm}{\rm d} k_{zd}\sqrt{\frac{\hbar}{2\epsilon_0\omega_\mathbf{k}}}\;\hat{a}^{L}_{\mathbf{k}{\rm TM}}(t)g_{\mathbf{k}}^L(\mathbf{r}_\parallel)-{\rm H.C.}\right]\right.\nonumber\\ &\;&+\left.\left[\int_0^\infty\hspace{-1mm}{\rm d} k_{z}\sqrt{\frac{\hbar}{2\epsilon_0\omega_\mathbf{k}}}\;\hat{a}^{R}_{\mathbf{k}{\rm TM}}(t)g_{\mathbf{k}}^R(\mathbf{r}_\parallel)-{\rm H.C.}\right]\right\}.\nonumber\\ \label{eqn:Phi} \end{eqnarray} As anticipated, the potential $\phi^c=\dot{\chi}$ turns out to be a second-quantized operator. It relates the vector potential in true Coulomb gauge to that in generalized Coulomb gauge via Eq.~(\ref{eqn:ChangeA}). It only affects photons with TM polarization and, interestingly, it is symmetric with respect to the interface, i.e. $\dot{\chi}(-z)=\dot{\chi}(z)$. The generating function $\chi$ is found by integrating Eq.~(\ref{eqn:Phi}) with respect to time, \begin{eqnarray} \chi(\mathbf{r},t)&=&-\int\hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{-|\mathbf{k}_\parallel||z|}\nonumber\\ &\times&\left\{\left[\int_0^\infty \hspace{-1mm}{\rm d} k_{zd}\sqrt{\frac{\hbar}{2\epsilon_0\omega^3_\mathbf{k}}}\;\hat{a}^{L}_{\mathbf{k}{\rm TM}}(t)g_{\mathbf{k}}^L(\mathbf{r}_\parallel)+{\rm H.C.}\right]\right.\nonumber\\ &\;&+\left.\left[\int_0^\infty\hspace{-1mm}{\rm d} k_{z}\sqrt{\frac{\hbar}{2\epsilon_0\omega^3_\mathbf{k}}}\;\hat{a}^{R}_{\mathbf{k}{\rm TM}}(t)g_{\mathbf{k}}^R(\mathbf{r}_\parallel)+{\rm H.C.}\right]\right\}.\nonumber\\ \label{eqn:chi_result} \end{eqnarray} Let us now come back to the issue of the commutation relations between the field operators.
In true Coulomb gauge we expect \begin{eqnarray} \left[A_i^{\rm c}(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]&=&-i\hbar\delta^\perp_{ij}(\mathbf{r}-\mathbf{r}')=-i\hbar\delta_{ij}\delta^{(3)}(\mathbf{r}-\mathbf{r}')\nonumber\\ &+&i\hbar\nabla_i\nabla'_jG^0(\mathbf{r}-\mathbf{r}')\label{eqn:CommTrueGauge} \end{eqnarray} which is a consequence of the fact that $\boldsymbol{\nabla} \chi$ is the longitudinal part of $\mathbf{A}^{gc}$, cf. Eq. (\ref{eqn:ChiConstr}). This can also be confirmed by an explicit calculation using the mode functions (\ref{eqn:GL})--(\ref{eqn:GR}). The commutator splits as follows \begin{eqnarray} \left[A_i^{\rm c}(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]=\left[A_i^{\rm gc}(\mathbf{r})-\nabla_i\chi(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]\hspace{11mm}\nonumber\\ =-i\hbar\delta^\epsilon_{ij}(\mathbf{r},\mathbf{r}')-\left[\nabla_i\chi(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]\hspace{3mm} \label{eqn:SCGaugeComm} \end{eqnarray} where $\delta^\epsilon_{ij}(\mathbf{r},\mathbf{r}')$ is given by Eq.~(\ref{eqn:GeneralizedDeltaTot}) and the reader is reminded that we consider the case $z'>0$ only. Substituting the mode functions (\ref{eqn:GL})--(\ref{eqn:GR}) into Eq.~(\ref{eqn:SCGaugeComm}), we find, using the same techniques as in the calculation of the completeness relation (\ref{eqn:Completeness}), that \begin{eqnarray} \left[\nabla_i\chi(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]\hspace{55mm}\nonumber\\ =i\hbar\nabla_i\nabla'_j \;\left\{ \begin{array}{lr} -\dfrac{n^2-1}{n^2+1}\;G^0(\mathbf{r}-\mathbf{r}')& \mbox{ for }z<0,z'>0,\\ \\ \;\;G^R(\mathbf{r},\mathbf{r}') & \mbox{ for }z>0, z'>0, \end{array} \right. \nonumber\\ \label{eqn:ChiECommutator} \end{eqnarray} where $G^0$ and $G^R$ are the Green's functions as introduced in Eqs.~(\ref{eqn:Poisson}) and (\ref{eqn:ReflectedPoisson}). 
Equation (\ref{eqn:ChiECommutator}) when combined with Eqs.~(\ref{eqn:GeneralizedDeltaTot}) and (\ref{eqn:SCGaugeComm}) confirms the assertion stated by Eq.~(\ref{eqn:CommTrueGauge}). The above considerations have clearly demonstrated that the commutator between the vector potential and the electric field operators is gauge dependent. Therefore, the modification of the QED commutation relations is not a physical effect but rather is related to the choice of gauge in which the electromagnetic field is quantized, which is of course ultimately only a matter of convenience. However, we note that the commutation relations between the physical fields retain the standard form, as they should. Consider the commutator \begin{equation} \left[\mathbf{B}(\mathbf{r}),\mathbf{E}(\mathbf{r}')\right]=\boldsymbol{\nabla}\times\left[\mathbf{A}(\mathbf{r}),\mathbf{E}(\mathbf{r}')\right].\label{eqn:GaugeIndepComm} \end{equation} We see from Eq.~(\ref{eqn:ChangeA}) that, regardless of the gauge one uses to calculate the right-hand side of the above relation, the end result is the same. The commutators (\ref{eqn:CommGenGauge}) and (\ref{eqn:CommTrueGauge}) differ only by a longitudinal part that is annihilated by the curl operator. Thus, the shape of the cavity has no impact on the fundamental commutation relations of physical fields. \section{Perfect reflectors} If the walls of the cavity are modelled as perfectly reflecting mirrors, the generalized Coulomb gauge (\ref{eqn:Generalized}) is meaningless. Then, a common way to quantize the electromagnetic field is to work with the free-space form of Eq. (\ref{eqn:Wave2}) in true Coulomb gauge (\ref{eqn:TrueCoulomb}) and demand that the fields are excluded from the interior of the perfect reflector, i.e.
one solves \begin{eqnarray} \left(\nabla^2-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)\mathbf{A}(\mathbf{r},t)&=&0,\nonumber\\ \boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)&=&0,\label{eqn:PREquations} \end{eqnarray} together with the condition that the electric field vanishes for $z\leq 0$. This implies in particular that \begin{equation} E_x(z=0^+)=0,\ \ E_y(z=0^+)=0. \end{equation} The relation between the vector potential and the electric field is taken to be \begin{equation} \mathbf{E}(\mathbf{r},t)=-\frac{\partial \mathbf{A}(\mathbf{r},t)}{\partial t},\label{eqn:AasE} \end{equation} and for this reason the boundary conditions for the electric field immediately imply boundary conditions for the vector potential. This method of quantization gives the vector field operator that can be obtained by taking the $n\rightarrow\infty$ limit of Eq.~(\ref{eqn:AOperator}). This in turn implies that the commutation relations for the field operators are given by the perfect-reflector limit of the commutation rule (\ref{eqn:CommGenGauge}) and \emph{not} by Eq. (\ref{eqn:CommTrueGauge}). Explicitly: \begin{eqnarray} \left[A_i(\mathbf{r}),\epsilon_0 E_j(\mathbf{r}')\right]=-i\hbar\delta_{ij}\delta^{(3)}(\mathbf{r}-\mathbf{r}')\hspace{2cm}\nonumber\\ +\frac{i\hbar}{4\pi}\nabla_i\nabla'_j \left(\frac{1}{|\mathbf{r}-\mathbf{r}'|}-\frac{1}{|\mathbf{r}-\bar{\mathbf{r}}'|}\right),\;\;\;z,z'>0,\;\;\label{eqn:CommGenGaugePR} \end{eqnarray} where $\bar{\mathbf{r}}=(x,y,-z)$. At first it seems surprising that, despite the Coulomb gauge condition having been imposed on the vector potential, the reflected part of the Green's function appears in the commutator. However, this can be explained as follows. In the presence of a perfect reflector the fluctuations of the quantized electromagnetic field imply the existence of a fluctuating charge density on the surface of the perfect reflector.
Gauss's law reads \begin{equation} \boldsymbol{\nabla}\cdot\mathbf{E}(\mathbf{r},t)=\frac{\sigma(\mathbf{r}_\parallel,t)}{\epsilon_0}\delta(z),\label{eqn:GaussLawPR} \end{equation} where $\sigma(\mathbf{r}_\parallel)$ is given as a perfect-reflector limit of Eq.~(\ref{eqn:Sigma}). Relation (\ref{eqn:GaussLawPR}) is a consequence of the boundary conditions applied to the electric field at $z=0$ (and vice versa). We observe that Eqs.~(\ref{eqn:PREquations}), (\ref{eqn:AasE}) and (\ref{eqn:GaussLawPR}) cannot be simultaneously satisfied on the surface of the perfect reflector. Thus, the gauge condition in Eq.~(\ref{eqn:PREquations}) must for a perfect reflector be amended to read \begin{equation} \boldsymbol{\nabla}\cdot\mathbf{A}(\mathbf{r},t)=0\;\;\;\mbox{ for }\;z\neq 0 \end{equation} which is in fact an adaptation of the generalized Coulomb gauge condition (\ref{eqn:Generalized}) to the case of the perfect reflector rather than the true Coulomb gauge. This is the origin of the reflected Green's function term appearing in the commutator (\ref{eqn:CommGenGaugePR}) as has also been pointed out in Ref.~\cite{Bimonte}. Our analysis also permits us to observe that the oversimplified model of perfectly reflecting cavity walls obscures the fact that the form of the commutation relation is actually determined by the choice of gauge. While it is claimed in Ref.~\cite{Bimonte} that the commutator between the physical fields (\ref{eqn:GaugeIndepComm}) is affected by the cavity walls if one assumes them to be perfectly reflecting, we have clearly shown this to be an erroneous conclusion. \section{Hamiltonians} Quantum electrodynamics in the presence of dielectrics is most conveniently formulated in the generalized Coulomb gauge. 
The minimal-coupling Hamiltonian of a charged particle that is placed near a dielectric half-space and coupled to the quantized electromagnetic field may be written as \cite{Dalton} \begin{eqnarray} H^{gc}&=&\dfrac{\left[\mathbf{p}-q\mathbf{A}^{gc}(\mathbf{r}_0)\right]^2}{2m}\nonumber\\ &+&\frac{1}{2}\int\hspace{-1mm}{\rm d}^3\mathbf{r}\left\{\epsilon_0\epsilon(z)\left[\frac{\partial\mathbf{A}^{gc}(\mathbf{r})}{\partial t}\right]^2+\frac{\mathbf{B}^2(\mathbf{r})}{\mu_0}\right\}\nonumber\\ &+&\frac{1}{2}\int\hspace{-1mm}{\rm d}^3\mathbf{r}\epsilon_0\epsilon(z)\boldsymbol{\nabla}\phi^{gc}(\mathbf{r})\cdot\boldsymbol{\nabla}\phi^{gc}(\mathbf{r}), \end{eqnarray} where $\mathbf{r}_0$ is the position of the particle. In the following, it will prove most convenient to write the Hamiltonian $H^f$ of the electromagnetic field in the form \begin{equation} H^f=\sum_{\mathbf{k},\lambda} \hbar\omega_\mathbf{k}\left(a^\dagger_{\mathbf{k}\lambda} a_{\mathbf{k}\lambda} +\frac{1}{2}\right). \end{equation} The integral involving the scalar potential $\phi^{gc}$ is a $c$-number; it contains the infinite self-energy of the particle $\Xi$ as well as the $z_0$-dependent electrostatic interaction between the dielectric and the charge \begin{equation} \frac{1}{2}\int\hspace{-1mm}{\rm d}^3\mathbf{r}\epsilon_0\epsilon(z)\boldsymbol{\nabla}\phi^{gc}(\mathbf{r})\cdot\boldsymbol{\nabla}\phi^{gc}(\mathbf{r})=\Xi + V^{es}, \end{equation} with \begin{equation} V^{es}=-\frac{q^2}{4\pi\epsilon_0}\frac{n^2-1}{n^2+1}\frac{1}{4z_0}.\label{eqn:Ves} \end{equation} Equation (\ref{eqn:Ves}) can be seen as an interaction energy of a static charge with its image in the dielectric, multiplied by a factor of $1/2$ because the image is not independent but a consequence of the charge \cite{Jackson}.
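As a sanity check (ours, not from the paper), Eq.~(\ref{eqn:Ves}) can be evaluated symbolically; in the perfect-reflector limit $n\rightarrow\infty$ it reduces to $-q^2/(16\pi\epsilon_0 z_0)$, i.e.\ half of the Coulomb energy of the charge and its image at separation $2z_0$, in line with the factor-$1/2$ remark above:

```python
import sympy as sp

q, z0, n, eps0 = sp.symbols("q z_0 n epsilon_0", positive=True)

# Electrostatic charge-surface interaction, Eq. (Ves) of the text.
V_es = -q**2 / (4 * sp.pi * eps0) * (n**2 - 1) / (n**2 + 1) / (4 * z0)

# Perfect-reflector limit n -> oo: the image-charge result
# -q^2/(16 pi eps0 z0), half of -q^2/(4 pi eps0 * 2 z0).
V_pr = sp.limit(V_es, n, sp.oo)
print(V_pr)
```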
Dropping the irrelevant self-energy of the particle $\Xi$, one can write the Hamiltonian $H^{gc}$ as \begin{equation} H^{gc}=\dfrac{\left[\mathbf{p}-q\mathbf{A}^{gc}(\mathbf{r}_0)\right]^2}{2m}+H^f+V^{es}.\label{eqn:HamGC} \end{equation} Perhaps the most instructive way of obtaining the Hamiltonian in the true Coulomb gauge, $H^c$, is by using the unitary transformation \begin{equation} H^c=e^{iS/\hbar}H^{gc}e^{-iS/\hbar}+i\hbar\left(\frac{d}{dt}e^{iS/\hbar}\right)e^{-iS/\hbar},\label{eqn:Unitary} \end{equation} where the operator $S$ is given by \begin{equation} S(\mathbf{r}_0,t)=-q\chi(\mathbf{r}_0,t). \end{equation} The generating function $\chi(\mathbf{r},t)$ is given by Eq.~(\ref{eqn:chi_result}) and now taken at the position of the particle $\mathbf{r}_0$. In what follows we take operators to be time-independent (adopting the Schr\"{o}dinger picture) so that the term containing the time derivative in Eq.~(\ref{eqn:Unitary}) does not contribute. Then, using the same methods as in the proof of the completeness relation (\ref{eqn:Completeness}), it is not difficult to show that \begin{equation} e^{iS/\hbar}\left[\mathbf{p}-q\mathbf{A}^{gc}(\mathbf{r}_0)\right]e^{-iS/\hbar}=\left[\mathbf{p}-q\mathbf{A}^{c}(\mathbf{r}_0)\right],\nonumber \end{equation} as well as \begin{eqnarray} e^{iS/\hbar}H^f e^{-iS/\hbar}&=&H^f+\frac{i}{\hbar}\left[S(\mathbf{r}_0),H^f\right]\nonumber\\ &&+\frac{1}{2}\left(\frac{i}{\hbar}\right)^2\left[S(\mathbf{r}_0),\left[S(\mathbf{r}_0),H^f\right]\right]\nonumber\\ &=&H^f+q\dot{\chi}(\mathbf{r}_0)-\frac{n^2-1}{2n^2}V^{es}.\label{eqn:UnitaryDone} \end{eqnarray} With this, we obtain for the Hamiltonian in the Coulomb gauge \begin{equation} H^{c}=\frac{\left[\mathbf{p}-q\mathbf{A}^{c}(\mathbf{r}_0)\right]^2}{2m}+H^f+q\dot{\chi}(\mathbf{r}_0)+\left(\frac{n^2+1}{2n^2}\right)V^{es}. \end{equation} We see that compared to the Hamiltonian of Eq.
(\ref{eqn:HamGC}) written out in the generalized Coulomb gauge, some of the electrostatic interaction energy has been redistributed and is now contained in the second-quantized part of the Hamiltonian $H^c$. Indeed, the electrostatic interaction energy is now shared between two terms \begin{equation} H^{es}_{int}=q\dot{\chi}(\mathbf{r}_0)+\left(\frac{n^2+1}{2n^2}\right)V^{es}.\label{eqn:TwoTerms} \end{equation} Using standard time-independent perturbation theory applied to the interaction term $q\dot{\chi}(\mathbf{r}_0)$, one finds that the first non-vanishing contribution is of second order in the perturbation and is given by \begin{eqnarray} \Delta E^{es}&\!\!=&\!\!\sum_{\mathbf{k},\mathbf{p}_f}\dfrac{\left|\langle \mathbf{p}_f ;1_{\mathbf{k}{\rm TM}}|q\dot{\chi}(\mathbf{r}_0)|\mathbf{p};0\rangle\right|^2}{\dfrac{\mathbf{p}^2}{2m}-\left(\dfrac{\mathbf{p}_f^2}{2m}+\omega_\mathbf{k}\right)} \approx -q^2\sum_\mathbf{k}\dfrac{\left|\dot{\chi}(\mathbf{r}_0)\right|^2}{\omega_\mathbf{k}}\nonumber\\ &\!\!=&\!\!-\frac{q^2}{2\epsilon_0}\int\hspace{-1mm}{\rm d}^2\mathbf{k}_\parallel e^{-2|\mathbf{k}_\parallel | z_0}\nonumber\\ &&\times\left[\int_0^\infty\hspace{-1mm}{\rm d} k_{zd}\dfrac{\left|g^L_\mathbf{k}(\mathbf{r}_\parallel)\right|^2}{\omega_\mathbf{k}^2}+\int_0^\infty\hspace{-1mm}{\rm d} k_{z}\dfrac{\left|g^R_\mathbf{k}(\mathbf{r}_\parallel)\right|^2}{\omega_\mathbf{k}^2}\right],\nonumber\\ \label{eqn:SecondOrderShift} \end{eqnarray} where we have used the no-recoil approximation. The mode functions $g$ are given in Eqs.~(\ref{eqn:GL})--(\ref{eqn:GR}). The resulting integrals in Eq.~(\ref{eqn:SecondOrderShift}) can be calculated analytically and the result is \begin{equation} \Delta E^{es}=\left(\frac{n^2-1}{2n^2}\right)V^{es}.
\end{equation} Thus, the contributions from both terms in Eq.~(\ref{eqn:TwoTerms}) add up to yield the whole of the electrostatic interaction energy \begin{equation} \left(\frac{n^2-1}{2n^2}\right)V^{es}+\left(\frac{n^2+1}{2n^2}\right)V^{es}=V^{es}. \end{equation} This is of course what one would expect, since both formulations of the theory must lead to the same physical results. \section{Conclusions} In this paper we have illustrated some intricacies involved in the quantization of the electromagnetic field when polarizable boundaries are present and modelled macroscopically by introducing a spatially varying, piecewise-constant dielectric function. Starting from the generalized Coulomb gauge, we have derived the expression for the coordinate representation of the unit kernel in that gauge, thereby explicitly verifying the completeness relation of the mode functions. While this calculation has its own merit, it has served to develop tools that allow us to explicitly carry out a gauge transformation from the generalized Coulomb gauge to the true Coulomb gauge, where the expression for the vector field operators is truly transverse even in the presence of the boundaries. This has shed light on some misconceptions about the nature of the commutation relations in macroscopic quantum electrodynamics, especially in the case when the boundaries are modelled as perfect reflectors. We have also written down the Hamiltonian for a charged particle near a dielectric boundary in the true Coulomb gauge and shown that, and why, it differs from the one in the generalized Coulomb gauge. It contains extra terms due to an induced fluctuating surface charge at the boundary, now represented as a second-quantized operator. These terms contain part of the electrostatic interaction of the particle and the surface, which in the generalized Coulomb gauge is represented by a $c$-number, namely the electrostatic potential obtained by classical methods, e.g. the method of images.
Finally, we have explicitly demonstrated the gauge invariance of the theory by working out the electrostatic parts of the charge-surface interactions. This work paves the way for more elaborate gauge transformations that provide a link between well-understood approaches to macroscopic QED and more sophisticated theories.
\section{Introduction} In \cite{JePu}, the notion of a dimension effect algebra was introduced as a counterpart of the notion of a dimension group. Recall that a dimension group (or a Riesz group) is a directed, unperforated interpolation group. By \cite{EHS}, dimension groups can also be characterized as direct limits of directed systems of simplicial groups. In analogy with the latter characterization, dimension effect algebras were defined as direct limits of directed systems of finite effect algebras with the Riesz decomposition property. It is well known that the latter class of effect algebras corresponds to the class of finite MV-algebras, and in analogy with simplicial groups, we call them simplicial effect algebras. It turns out that dimension effect algebras are exactly the unit intervals in unital dimension groups, and simplicial effect algebras are exactly the unit intervals in unital simplicial groups. In \cite{JePu}, an intrinsic characterization of dimension effect algebras was found, and a categorical equivalence between countable dimension effect algebras and unital AF C*-algebras was shown \cite[Theorem 5.2]{JePu}. In this paper we continue the study of dimension effect algebras. In particular, we study the tensor product of dimension effect algebras in the category of effect algebras. We recall that the tensor product in the category of effect algebras exists, and its construction was described in \cite{Dvu}. We first prove that the tensor product of simplicial effect algebras is again a simplicial effect algebra and is (up to isomorphism) the unit interval in the tensor product of the corresponding unital simplicial groups (Theorem \ref{th:tenprodfmv}). Then we extend this result to arbitrary dimension effect algebras, using the fact that every dimension effect algebra is a direct limit of a directed system of simplicial effect algebras.
Namely, we prove that the tensor product of dimension effect algebras is a dimension effect algebra (Theorem \ref{th:tenproddimea}), and is (up to isomorphism) the unit interval in the tensor product of the corresponding dimension groups (Corollary \ref{cor:unigroup}). We conjecture that this last statement holds more generally for tensor products of interval effect algebras and their universal groups. We note that the categorical equivalence between effect algebras with RDP and interpolation groups proved in \cite[Theorem 3.8]{JePu}, or the known constructions of tensor products in the category of interval effect algebras \cite{FGB}, cannot be applied here, since the category of effect algebras is much larger than the category of effect algebras with RDP or interval effect algebras. In the last section, we apply our results to the interval ${\mathbb R}^+[0,1]$ and construct a directed system of simplicial groups that has this interval as its direct limit. \section{Preliminaries} The notion of an effect algebra was introduced by D.J. Foulis and M.K. Bennett in \cite{FoBe}. An alternative definition, the so-called \emph{D-poset}, was introduced in \cite{KCh}. Effect algebras and D-posets are categorically equivalent structures \cite{DvPu}. \begin{definition}\label{de:ea} An \emph{effect algebra} is an algebraic system $(E;0,1,\oplus)$, where $\oplus$ is a partial binary operation and $0$ and $1$ are constants, such that the following axioms are satisfied for $a,b,c\in E$: \begin{enumerate} \item[{\rm(i)}] if $a\oplus b$ is defined then $b\oplus a$ is defined and $a\oplus b=b\oplus a$ (commutativity); \item[{\rm(ii)}] if $a\oplus b$ and $(a\oplus b)\oplus c$ are defined, then $a\oplus(b\oplus c)$ is defined and $(a\oplus b)\oplus c=a\oplus(b\oplus c)$ (associativity); \item[{\rm(iii)}] for every $a\in E$ there is a unique $a^{\perp}\in E$ such that $a\oplus a^{\perp}=1$; \item[{\rm(iv)}] if $a\oplus 1$ is defined then $a=0$.
\end{enumerate} \end{definition} In what follows, if we write $a\oplus b$, $a,b\in E$, we tacitly assume that $a\oplus b$ is defined in $E$. The operation $\oplus$ can be extended to the $\oplus$-sum of finitely many elements by recursion in an obvious way. Owing to commutativity and associativity, the element $a_1\oplus a_2\oplus \cdots \oplus a_n$ is unambiguously defined. In any effect algebra a partial order can be defined as follows: $a\leq b$ if there is $c\in E$ with $a\oplus c=b$. In this partial order, $0$ is the smallest and $1$ is the greatest element in $E$. Moreover, if $a\oplus c_1=a\oplus c_2$, then $c_1=c_2$, and we define $c=b\ominus a$ iff $a\oplus c=b$. In particular, $1\ominus a=a^{\perp}$ is called the \emph{orthosupplement} of $a$. We say that $a,b\in E$ are \emph{orthogonal}, written $a\perp b$, iff $a\oplus b$ exists in $E$. It can be shown that $a\perp b$ iff $a\leq b^{\perp}$. An effect algebra which is a lattice with respect to the above ordering is called a \emph{lattice effect algebra}. Let $E$ and $F$ be effect algebras. A mapping $\phi:E\to F$ is an \emph{effect algebra morphism} iff $\phi(1)=1$ and $\phi(e\oplus f)=\phi(e)\oplus \phi(f)$ whenever $e\oplus f$ is defined in $E$. The category of effect algebras with effect algebra morphisms will be denoted by $\ea$. \subsection{Interval effect algebras and RDP} Important examples of effect algebras are obtained in the following way. Let $(G,G^+,0)$ be an (additively written) partially ordered abelian group with a positive cone $G^+$ and neutral element $0$. For $a\in G^+$ define the interval $G[0,a]:=\{ x\in G: 0\leq x\leq a\}$. Then $G[0,a]$ can be endowed with a structure of an effect algebra by defining $x\perp y$ iff $x+y\leq a$, and then putting $x\oplus y:=x+y$. Effect algebras obtained in this way are called \emph{interval effect algebras}.
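To make the interval construction concrete, here is a small illustrative model (ours, not taken from the paper) of $G[0,a]$ with $G={\mathbb Z}^2$ under the componentwise order and $a=(2,1)$, checking commutativity and the orthosupplement axiom by brute force:

```python
from itertools import product

# Interval effect algebra G[0,a] for G = Z^2 and a = (2, 1); illustrative toy model.
a = (2, 1)
E = list(product(range(a[0] + 1), range(a[1] + 1)))

def oplus(x, y):
    """x (+) y = x + y, defined iff x + y <= a componentwise; None otherwise."""
    s = tuple(xi + yi for xi, yi in zip(x, y))
    return s if all(si <= ai for si, ai in zip(s, a)) else None

def perp(x):
    """Orthosupplement: the unique x' with x (+) x' = a."""
    return tuple(ai - xi for xi, ai in zip(x, a))

# Axiom checks: commutativity and the orthosupplement property.
assert all(oplus(x, y) == oplus(y, x) for x in E for y in E)
assert all(oplus(x, perp(x)) == a for x in E)
print(len(E))  # 6 elements: the product of chains {0,1,2} x {0,1}
```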
We note that a prototype of effect algebras is the interval $[0,I]$ in the group of self-adjoint operators on a Hilbert space, the so-called algebra of Hilbert space effects. Hilbert space effects play an important role in quantum measurement theory, and the abstract definition was motivated by this example. On the other hand, there are effect algebras that are not interval effect algebras, see e.g. \cite{Na}. A partially ordered abelian group $G$ is \emph{directed} if $G=G^+-G^+$. An element $u\in G^+$ is an \emph{order unit} if for all $a\in G$, $a\leq nu$ for some $n\in {\mathbb N}$. If $G$ has an order unit $u$, it is directed; indeed, if $g\leq nu$, then $g=nu-(nu-g)$. An element $u\in G^+$ is called a \emph{generating unit} if every $a\in G^+$ is a finite sum of (not necessarily different) elements of the interval $G[0,u]$. Clearly, a generating unit is an order unit; the converse may be false. If $G$ and $H$ are partially ordered abelian groups, then a group homomorphism $\phi:G\to H$ is \emph{positive} if $\phi(G^+)\subseteq H^+$. An isomorphism $\phi : G\to H$ is an \emph{order isomorphism} if $\phi(G^+)=H^+$. If $G$ and $H$ have order units $u$ and $v$, respectively, then a positive homomorphism $\phi:G\to H$ is called \emph{unital} if $\phi(u)=v$. The category of partially ordered abelian groups having an order unit, with positive unital homomorphisms, will be denoted by $\pog$. Relations between interval effect algebras and partially ordered abelian groups are described in the following theorem, proved in \cite{BeFo}. Recall that a mapping $\phi:E\to K$, where $E$ is an effect algebra and $K$ is any abelian group, is called a \emph{$K$-valued measure} on $E$ if $\phi(a\oplus b)=\phi(a)+\phi(b)$ whenever $a\oplus b$ is defined in $E$. \begin{theorem}\label{th:unigroup} Let $E$ be an interval effect algebra.
Then there exists a unique (up to isomorphism) partially ordered directed abelian group $(G,G^+)$ and an element $u\in G^+$ such that the following conditions are satisfied: \begin{enumerate} \item[{\rm(i)}] $E$ is isomorphic to the interval effect algebra $G^+[0,u]$. \item[{\rm(ii)}] $u$ is a generating unit. \item[{\rm(iii)}] Every $K$-valued measure $\phi:E\to K$ can be extended uniquely to a group homomorphism $\phi^*:G\to K$. \end{enumerate} \end{theorem} The group $G$ in the preceding theorem is called a \emph{universal group} for $E$, and will be denoted by $G_E$. In what follows we consider a property that ensures that a partially ordered group with order unit is the universal group for its unit interval. There are examples (see \cite[Example 11.3, 11.5]{FoGr}) that show that this is not true in general. A partially ordered abelian group $G$ is said to have the \emph{Riesz interpolation property} (RIP), or to be an \emph{interpolation group}, if given $a_i,b_j$ ($1\leq i\leq m, 1\leq j\leq n$) with $a_i\leq b_j$ for all $i,j$, there exists $c\in G$ such that $a_i\leq c\leq b_j$ for all $i,j$. The Riesz interpolation property is equivalent to the \emph{Riesz decomposition property} (RDP): given $a_i,b_j\in G^+$, ($1\leq i\leq m, 1\leq j\leq n$) with $\sum a_i=\sum b_j$, there exist $c_{ij}\in G^+$ with $a_i=\sum_jc_{ij}, b_j=\sum_ic_{ij}$. An equivalent definition of the RDP is as follows: given $a,b_i$ in $G^+$, $i\leq n$ with $a\leq \sum_{i\leq n}b_i$, there exist $a_i\in G^+$ with $a_i\leq b_i, i\leq n$, and $a=\sum_{i\leq n}a_i$. To verify these properties, it is only necessary to consider the case $m=n=2$ (cf. \cite{Fuchs, Good}). For interpolation groups we have the following theorem \cite{Pu}, \cite[Theorem 3.5]{JePu}. \begin{theorem}\label{th:rdpunigroup} Let $G$ be an interpolation group with order unit $u$. Put $E:=G^+[0,u]$. Then $(G,u)$ is the universal group for $E$. 
\end{theorem} In a similar way as for partially ordered abelian groups, RDP can be defined for effect algebras. We say that an effect algebra $E$ has the \emph{Riesz decomposition property} (RDP) if one of the following equivalent properties is satisfied: \begin{enumerate} \item[(R1)] $a\leq b_1\oplus b_2\oplus \cdots \oplus b_n$ implies $a=a_1\oplus a_2\oplus \cdots \oplus a_n$ with $a_i\leq b_i, i\leq n$; \item[(R2)] $\oplus_{i\leq m}a_i=\oplus_{j\leq n} b_j$, $m,n\in {\mathbb N}$, implies $a_i=\oplus_jc_{ij}, i\leq m$, and $b_j=\oplus_ic_{ij}, j\leq n$, where $c_{ij}\in E$. \end{enumerate} Similarly as for partially ordered groups, it suffices to consider the case $m=n=2$. Let us remark that RIP can also be defined for effect algebras. In contrast with the case of partially ordered abelian groups, RIP and RDP are not equivalent for effect algebras: RDP implies RIP, but there are examples of effect algebras with RIP which do not have RDP (e.g., the ``diamond'' is a lattice effect algebra that does not satisfy RDP, \cite{DvPu}). It was proved by Ravindran \cite{Rav} that every effect algebra with RDP is an interval effect algebra, and its universal group is an interpolation group. Ravindran's result can be extended to a categorical equivalence between the category of effect algebras with RDP with effect algebra morphisms and the category of interpolation groups with order unit with positive unital group homomorphisms, \cite[Theorem 3.8]{JePu}. \section{Dimension groups and dimension effect algebras} In this section, we study dimension groups and their effect algebra counterpart, introduced in \cite{JePu}. These are interpolation groups with some additional properties. A partially ordered abelian group $G$ is called \emph{unperforated} if given $n\in {\mathbb N}$ and $a\in G$, then $na\in G^+$ implies $a\in G^+$.
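Returning to property (R1) above: in the totally ordered case $E={\mathbb Z}^+[0,u]$ the decomposition can be produced greedily. The following sketch (an illustration of (R1) in this simplest case, not a general proof) constructs $a_i\leq b_i$ with $\sum_i a_i=a$ whenever $a\leq\sum_i b_i$:

```python
def riesz_decompose(a, bs):
    """Given a <= sum(bs) in Z^+, return parts a_i <= b_i with sum(a_i) = a.
    Greedy construction illustrating property (R1) for the chain Z^+[0, u]."""
    assert 0 <= a <= sum(bs)
    parts = []
    for b in bs:
        ai = min(a, b)  # take as much of a as b allows
        parts.append(ai)
        a -= ai
    return parts

parts = riesz_decompose(7, [3, 5, 4])
print(parts, sum(parts))  # [3, 4, 0] 7
```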
Every Archimedean, directed abelian group is unperforated \cite[Proposition 1.24]{Good}, and also every lattice ordered abelian group is unperforated \cite[Proposition 1.22]{Good}. \begin{definition}\label{de:dingr} {\rm \cite{Good}} A partially ordered group $G$ is a \emph{dimension group} (or a \emph{Riesz group}) if it is directed, unperforated and has the interpolation property. \end{definition} A simple example of a dimension group is as follows. \begin{definition}\label{de:simplicgr} {\rm \cite[Definition p. 183]{Good}, \cite{GoHa}} A \emph{simplicial group} is any partially ordered abelian group that is isomorphic (as partially ordered abelian group) to ${\mathbb Z}^n$ (with the product ordering) for some nonnegative integer $n$. A \emph{simplicial basis} for a simplicial group $G$ is any basis $(x_1,\ldots,x_n)$ for $G$ as a free abelian group such that $G^+={\mathbb Z}^+x_1+\cdots +{\mathbb Z}^+x_n$. \end{definition} It was proved by Effros, Handelman and Shen \cite{EHS} that the dimension groups with order unit are precisely the direct limits of directed systems of simplicial groups with an order unit in the category $\pog$. Note that an element $v\in {\mathbb Z}^r$ is an order unit if and only if all of its coordinates are strictly positive. In this case, the interval $({\mathbb Z}^+)^r[0,v]$ is the direct product of finite chains $(0,1,\ldots,v_i), i=1,2,\ldots,r$ and therefore is a finite effect algebra with RDP. Conversely, every finite effect algebra with RDP is a unit interval in a simplicial group. Below, such effect algebras will be called \emph{simplicial}. In analogy with dimension groups, in \cite{JePu}, direct limits of directed systems of simplicial effect algebras have been called \emph{dimension effect algebras}. It was shown that an effect algebra is a dimension effect algebra if and only if its universal group is a dimension group. An intrinsic characterization of dimension effect algebras was found in \cite[Theorem 4.2]{JePu}. 
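Concretely, a simplicial effect algebra $({\mathbb Z}^+)^r[0,v]$ can be enumerated as a product of finite chains; the following sketch (our illustration, with assumed example values $v=(2,1,3)$) also exhibits the atoms and the order unit:

```python
from itertools import product

# A simplicial effect algebra: the interval (Z^+)^r [0, v], here with
# order unit v = (2, 1, 3) (all coordinates strictly positive).
v = (2, 1, 3)
E = list(product(*(range(vi + 1) for vi in v)))

# It is the direct product of the finite chains {0, ..., v_i}:
assert len(E) == (v[0] + 1) * (v[1] + 1) * (v[2] + 1)

# The atoms are the standard basis vectors; the unit is v = sum_i v_i e_i.
atoms = [tuple(int(i == j) for j in range(len(v))) for i in range(len(v))]
unit = tuple(sum(vi * ei[k] for vi, ei in zip(v, atoms)) for k in range(len(v)))
print(len(E), unit)  # 24 (2, 1, 3)
```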
For the convenience of the readers, we give a short description of the directed system and direct limit of effect algebras \cite[Definition 1.9.36]{DvPu}. A \emph{directed system of effect algebras} is a family $A_I:=(A_i; (f_{ij}: A_j\to A_i), i,j\in I, j\leq i)$ where $(I,\leq)$ is a directed set, $A_i$ is an effect algebra for each $i\in I$, and $f_{ij}$ is a morphism such that \begin{enumerate} \item[(i1)] $f_{ii}=id_{A_i}$ for every $i\in I$; \item[(i2)] if $m\leq j\leq i$ in $I$, then $f_{ij}f_{jm}=f_{im}$. \end{enumerate} Let $A_I$ be a directed system of effect algebras; then $\underline{f}:=(A; (f_i:A_i\to A; i\in I))$ is called the \emph{direct limit} of $A_I$ iff the following conditions hold: \begin{enumerate} \item[(ii1)] $A$ is an effect algebra; $f_i$ is a morphism for each $i\in I$; \item[(ii2)] if $j\leq i$ in $I$, then $f_if_{ij}=f_j$ (i.e., $\underline{f}$ is compatible with $A_I$); \item[(ii3)] if $\underline{g}:=(B; (g_i:A_i\to B, i\in I))$ is any system compatible with $A_I$, then there exists exactly one morphism $g:A\to B$ such that $gf_i=g_i$, for every $i\in I$. \end{enumerate} It was proved (cf. \cite[Theorem 1.9.27]{DvPu}) that the direct limit in the category of effect algebras exists. A sketch of the construction of the direct limit is as follows. Let $A= \dot{\cup}_{i\in I}A_i$ be the disjoint union of $A_i, i\in I$. Define a relation $\equiv$ on $A$ as follows. Put $a\equiv b$ ($a\in A_i, b\in A_j$) if there exists a $k\in I$ with $i,j\leq k$ such that $f_{ki}(a)=f_{kj}(b)$ in $A_k$. Then $\equiv$ is an equivalence relation, and the quotient $\bar{A}:=A/\equiv$ can be organized into an effect algebra with the operation $\oplus$ defined as follows: let $\bar{a}$ denote the equivalence class corresponding to $a$.
For $a\in A_i, b\in A_j$, $\bar{a}\oplus \bar{b}$ is defined iff there is $k\in I$, $i,j\leq k$ such that $f_{ki}(a)\oplus f_{kj}(b)$ exists in $A_k$, and then $\bar{a}\oplus \bar{b}=\overline{(f_{ki}(a)\oplus f_{kj}(b))}$ in $\bar{A}$. For every $i\in I$, define $f_i: A_i\to A/\equiv$ as the natural projection $f_i(a)=\bar{a}$. Then $\lim_{\rightarrow} A:=(\bar{A}; f_i:A_i\to \bar{A}, i\in I)$ is the desired direct limit.\label{pg:direct} From this construction, it can be derived that properties involving a finite number of elements, such as RDP or being a dimension effect algebra (cf. the characterization in \cite[Thm. 4.2]{JePu}), are preserved under direct limits in $\ea$. \section{Tensor product of dimension effect algebras} The tensor product in the category ${\bf EA}$ is defined below as a universal bimorphism. We will show that such a tensor product always exists and that it is essentially given by the construction in \cite{Dvu}, see also \cite[Chap. 4.2]{DvPu}. Let $E,F,L$ be effect algebras. A mapping $\beta: E\times F\to L$ is called a \emph{bimorphism} if \begin{enumerate} \item[(i)] $a,b\in E$ with $a\perp b$, $q\in F$ imply $\beta(a,q)\perp \beta(b,q)$ and $\beta(a\oplus b,q)=\beta(a,q)\oplus \beta(b,q)$; \item[(ii)] $c,d\in F$ with $c\perp d$, $p\in E$ imply $\beta(p,c)\perp \beta(p,d)$ and $\beta(p,(c\oplus d))=\beta(p,c)\oplus \beta(p,d)$; \item[(iii)] $\beta(1,1)=1$. \end{enumerate} \begin{definition}\label{de:tenprodea} Let $E$ and $F$ be effect algebras. A pair $(T,\tau)$ consisting of an effect algebra $T$ and a bimorphism $\tau:E\times F \to T$ is said to be the \emph{tensor product} of $E$ and $F$ if whenever $L$ is an effect algebra and $\beta : E\times F \to L$ is a bimorphism, there exists a unique morphism $\phi :T\to L$ such that $\beta=\phi \circ \tau$. \end{definition} It is clear that if the tensor product exists, it is unique up to isomorphism.
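For intuition, here is a concrete bimorphism (our toy example, not from the paper): on chain effect algebras $C_u:={\mathbb Z}[0,u]$, multiplication $\beta(a,b)=ab$ maps $C_u\times C_v$ into $C_{uv}$ and satisfies conditions (i)--(iii), since $(a+b)q\leq uq\leq uv$ whenever $a\oplus b$ is defined and $q\leq v$. A brute-force check:

```python
# beta(a, b) = a*b as a bimorphism C_u x C_v -> C_{u*v},
# where C_k denotes the chain effect algebra {0, 1, ..., k}.
u, v = 3, 2
top = u * v

def defined(x, y, bound):
    return x + y <= bound  # x (+) y exists iff x + y stays in the interval

beta = lambda a, b: a * b

# Condition (i): additivity in the first argument (condition (ii) holds
# by the symmetry of multiplication); condition (iii): unit goes to unit.
for q in range(v + 1):
    for a in range(u + 1):
        for b in range(u + 1 - a):  # exactly the pairs with a (+) b defined
            assert defined(beta(a, q), beta(b, q), top)
            assert beta(a + b, q) == beta(a, q) + beta(b, q)
assert beta(u, v) == top
print("beta is a bimorphism into C_%d" % top)
```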
We will use the notation $E\otimes F$ for the effect algebra $T$ and $\otimes$ for the bimorphism $\tau$: $\tau(e,f)=e\otimes f\in E\otimes F$. \begin{theorem}\label{th:tenprodea} The tensor product always exists in ${\bf EA}$. \end{theorem} \begin{proof} The theorem was essentially proved in \cite[Theorem 7.2]{Dvu}, see also \cite[Theorem 4.2.2]{DvPu}. There a somewhat different definition of a tensor product is considered and the bimorphisms are assumed nontrivial, that is, the target algebra is required to satisfy $0\ne 1$. If at least one such bimorphism exists, it is easy to see that \cite{Dvu} provides a construction of a tensor product in our sense. On the other hand, if there are no nontrivial bimorphisms, then the tensor product is given by the one-element effect algebra $\{0=1\}$ and the unique bimorphism $E\times F\to \{0\}$. \end{proof} The tensor product of dimension groups in the category $\pog$ was studied by Goodearl and Handelman \cite{GoHa} and it was proved that such a tensor product is a dimension group as well. Recall that the tensor product of $G_1$ and $G_2$ in $\pog$ can be constructed as the abelian group tensor product $G_1\otimes G_2$, endowed with the positive cone $G_1^+\otimes G_2^+$, generated by simple tensors of positive elements. Our aim in this section is to describe the tensor product of dimension effect algebras in the category ${\bf EA}$. Note that we cannot directly apply the above result via the categorical equivalence of \cite[Theorem 3.8]{JePu}, since the category ${\bf EA}$ is much larger than the category of effect algebras with RDP. We first consider the case of simplicial effect algebras. Let $E$ and $F$ be simplicial effect algebras, with atoms \[ (e_1,\dots,e_n),\qquad (f_1,\dots,f_m) \] and unit elements \[ u=\sum_iu_ie_i, \qquad v=\sum_jv_jf_j, \] respectively. Then $G_E$ and $G_F$ are simplicial groups and $G_E\otimes G_F$ is a simplicial group with generators \[ g_{ij}=e_i\otimes f_j, i=1,\dots,n; j=1,\dots,m. 
\] Hence the unit interval $G_E\otimes G_F[0,u\otimes v]$ is a simplicial effect algebra with atoms $g_{ij}$ and unit element $w=\sum_{i,j} u_iv_jg_{ij}$. \begin{theorem}\label{th:tenprodfmv} The tensor product of simplicial effect algebras in the category ${\bf EA}$ is a simplicial effect algebra, namely \[ E\otimes F\simeq G_E\otimes G_F[0,u\otimes v]. \] \end{theorem} \begin{proof} Let $G$ denote the simplicial effect algebra on the right hand side. Obviously, (bi)morphisms on simplicial effect algebras are uniquely determined by their values on the atoms. Let $\tau: E\times F\to G$ be the bimorphism determined by \[ \tau(e_i,f_j)=g_{ij}, \qquad i=1,\dots,n, \ j=1,\dots,m. \] We need to prove that for any effect algebra $H$ and bimorphism $\beta: E\times F\to H$, there is a morphism $\psi: G\to H$, such that \[ \psi(g_{ij})=\beta(e_i,f_j),\qquad i=1,\dots,n, \ j=1,\dots,m. \] Since $g_{ij}$ generate $G$, uniqueness of such a morphism is clear. So let $z\in G$; then $z=\sum_{i,j}z_{ij}g_{ij}$ with $z_{ij}\le u_iv_j$ for all $i$ and $j$. There are nonnegative integers $q_{ij}$, $r_{ij}$ such that \[ z_{ij}= v_jq_{ij}+r_{ij},\qquad r_{ij}< v_j, \] then since $v_j q_{ij}\le z_{ij}\le u_iv_j$, we have $q_{ij}\le u_i$, with equality only if $r_{ij}=0$. Then $a_j:=\sum_iq_{ij} e_i\in E$ and $r_{ij}f_j\in F$. We have \begin{align*} z&=\sum_j (\sum_iq_{ij}v_j g_{ij}+ \sum_{i} r_{ij}g_{ij})\\ &= \sum_j \tau(a_j,v_j f_j)+ \sum_{i, r_{ij}>0}\tau(e_i,r_{ij}f_j) \end{align*} Put $a'_j:=\sum_{i, r_{ij}>0} e_i$; then $a_j\perp a_j'$.
Now we can write \begin{align*} H\ni 1=\beta(u,v)&=\sum_j\beta(u,v_jf_j)\\ &= \sum_j \left[\beta(a_j, v_jf_j)+ \beta(a'_j, v_jf_j)+\beta(u-(a_j+a_j'), v_jf_j)\right]\\ &= \sum_j [\beta(a_j, v_jf_j)+\sum_{i}\beta (e_i, r_{ij}f_j)+ \sum_{i, r_{ij}>0}\beta(e_i, (v_j-r_{ij})f_j)\\ &+\beta(u-(a_j+a_j'), v_jf_j)] \end{align*} It follows that \begin{align*} \sum_{i,j} z_{ij}\beta(e_i,f_j)&=\sum_{i,j} [q_{ij}v_j\beta(e_i,f_j)+r_{ij}\beta(e_i,f_j)]\\ &=\sum_j [\beta(a_j, v_jf_j)+\sum_i\beta (e_i, r_{ij}f_j)] \end{align*} is a well-defined element in $H$ and we may put \[ \psi(z)=\sum_{i,j} z_{ij}\beta(e_i,f_j), \] which clearly defines a morphism $G\to H$. \end{proof} Let \begin{align*} A_I&=(A_i; (f_{ij}:A_j \to A_i); i,j\in I, j\leq i),\\ B_J&=(B_k; (g_{k\ell}:B_\ell \to B_{k}); k,\ell \in J, \ell\leq k) \end{align*} be directed systems of simplicial effect algebras. Let us define the index set $(\mathcal I,\leq)$ as the product $I\times J$ with pointwise ordering. By the previous theorem, each $A_i\otimes B_k$, $(i,k)\in \mathcal I$, is a simplicial effect algebra. Let $(j,\ell)\in \mathcal I$ be such that $(j,\ell)\leq (i,k)$; then we have morphisms $f_{ij}:A_j\to A_i$ and $g_{k\ell}: B_\ell\to B_k$. For $a\in A_j$, $b\in B_\ell$, put $\beta(a,b)=f_{ij}(a)\otimes g_{k\ell}(b)\in A_i\otimes B_k$; this defines a bimorphism $A_j\times B_\ell\to A_i\otimes B_k$. By the universal property of the tensor product, this extends to a unique morphism $f_{ij}\otimes g_{k\ell}: A_j\otimes B_{\ell}\to A_i\otimes B_k$. \begin{theorem}\label{th:directed} Let \begin{eqnarray*} & A_I\otimes B_J:=(A_i\otimes B_k; (f_{ij}\otimes g_{k\ell}:A_j\otimes B_\ell \to A_i\otimes B_{k}), \\ & (i,k), (j,\ell)\in \mathcal I, (j,\ell)\leq (i,k)). \end{eqnarray*} Then $A_I\otimes B_J$ is a directed system of simplicial effect algebras. \end{theorem} \begin{proof} We have to check properties (i1) and (i2). For (i1), note that $f_{ii}=id_{A_i}$, $g_{kk}=id_{B_k}$ imply $f_{ii}\otimes g_{kk}=id_{A_i\otimes B_k}$.
For (i2), let $(m,n)\leq (j,\ell)\leq (i,k)$. Then \[ m\leq j\leq i \ \implies \ f_{ij}f_{jm}=f_{im} \] \[n\leq \ell \leq k\ \implies \ g_{k\ell} g_{\ell n}=g_{kn} \] and for $a_m\in A_m, b_n\in B_n$, \begin{eqnarray*} (f_{ij}\otimes g_{k\ell})(f_{jm}\otimes g_{\ell n})(a_m\otimes b_n) &=& (f_{ij}\otimes g_{k\ell})(f_{jm}(a_m)\otimes g_{\ell n}(b_n))\\ &=& f_{ij}f_{jm}(a_m)\otimes g_{k\ell}g_{\ell n}(b_n)\\ &=& f_{im}(a_m)\otimes g_{kn}(b_n)\\ &=& f_{im}\otimes g_{kn}(a_m\otimes b_n). \end{eqnarray*} Since this holds on simple tensors, it extends to the whole of $A_m\otimes B_n$. \end{proof} \begin{theorem}\label{th:tenproddimea} Let $A_I, B_J$ be directed systems of simplicial effect algebras, and let $(\bar{A};(f_i:A_i\to \bar{A}, i\in I))$ and $(\bar{B}; (g_j:B_j\to \bar{B}, j\in J))$ be their corresponding direct limits. Then $(\bar{A}\otimes \bar{B}; (f_i\otimes g_j:A_i\otimes B_j \to \bar{A}\otimes \bar{B}, i\in I, j\in J))$ is the direct limit of $A_I\otimes B_J$. \end{theorem} \begin{proof} We have to check properties (ii1), (ii2) and (ii3). The first one is clear: since $\bar{A}$, $\bar{B}$ are effect algebras, $\bar{A}\otimes \bar{B}$ is an effect algebra as well. To prove compatibility, let $(j,\ell) \leq(i,k)$. Then $j\leq i, \ell\leq k$ and we have \[ (f_i\otimes g_k)(f_{ij}\otimes g_{k\ell})= f_if_{ij}\otimes g_kg_{k\ell}=f_j\otimes g_{\ell}. \] For (ii3), let $(C; (h_{ij}:A_i\otimes B_j \to C, i\in I, j\in J))$ be another system compatible with $A_I\otimes B_J$ (i.e., $h_{ik}(f_{ij}\otimes g_{k\ell})=h_{j\ell}, j\leq i, \ell\leq k$). Let $a\in \bar A$, $b\in \bar B$. Since $\bar A$ and $\bar B$ are direct limits, there are some indices $i\in I$, $k\in J$ and elements $a_i\in A_i$ and $b_k\in B_k$ such that $a=f_i(a_i)$ and $b=g_k(b_k)$, see the construction on page \pageref{pg:direct}. Define $h(a,b):=h_{ik}(a_i\otimes b_k)$. Then $h:\bar{A}\times \bar{B} \to C$ is a bimorphism, which extends to a morphism $\bar{h}:\bar{A}\otimes \bar{B} \to C$.
\end{proof} \begin{corollary}\label{cor:unigroup} Let $E$ and $F$ be dimension effect algebras, and let $G_E$ and $G_F$ be their universal groups with units $u_E$ and $u_F$. Then the tensor product $E\otimes F$ is isomorphic to the unit interval $[0,u_E\otimes u_F]$ in the tensor product $G_E\otimes G_F$ of their universal groups, that is, \[ G_E[0,u_E]\otimes G_F[0,u_F]\simeq G_E\otimes G_F[0,u_E\otimes u_F]. \] \end{corollary} \begin{proof} Let $E=\bar A$, $F=\bar B$ be direct limits of directed systems $A_I$ and $B_J$. Each $A_i$, $i\in I$ and $B_k$, $k\in J$ is a simplicial effect algebra and $G_{A_i}$, $G_{B_k}$ are simplicial groups. By \cite[Theorem 4.1]{JePu}, we obtain that $G_E$ is a direct limit of $(G_{A_i}, f_{ij}^*)$, where $f_{ij}^*$ are the unique morphisms in $\pog$ extending $f_{ij}$; similarly for $G_F$. By Theorem \ref{th:tenprodfmv}, $A_i\otimes B_k$ is a simplicial effect algebra and $G_{A_i\otimes B_k}\simeq G_{A_i}\otimes G_{B_k}$. By Theorem \ref{th:tenproddimea}, $E\otimes F$ is the direct limit of the directed system $A_I\otimes B_J$. Since the members of $A_I\otimes B_J$ have RDP, it follows by \cite[Theorem 4.1]{JePu} that the universal group $G_{E\otimes F}$ is a direct limit of the system of universal groups \[ \{ G_{A_i\otimes B_k}\simeq G_{A_i}\otimes G_{B_k}, (f_{ij}\otimes g_{k\ell})^*\simeq f_{ij}^*\otimes g_{k\ell}^*\}. \] Using \cite[Lemma 2.2]{GoHa}, we obtain \[ G_{E\otimes F}\simeq G_E\otimes G_F,\qquad u_{E\otimes F}=u_E\otimes u_F. \] \end{proof} \section{Conclusions and a conjecture} We have proved that the $\ea$ tensor product of dimension effect algebras is again a dimension effect algebra. The tensor product $E\otimes F$ is proved to be the direct limit of a directed system of simplicial effect algebras, obtained as a ``tensor product'' of the directed systems corresponding to the dimension effect algebras $E$ and $F$.
It is also proved that $E\otimes F$ is (isomorphic to) the unit interval in the $\pog$ tensor product of the corresponding universal groups $G_E$ and $G_F$. We conjecture that this is true for general interval effect algebras. Note that in the category of \emph{interval} effect algebras, the tensor product exists \cite[Theorem 9.1]{FGB} and our conjecture says that it is (isomorphic to) the $\ea$ tensor product. A special class of interval effect algebras are the algebras with RDP. It is again an open question whether in this case the $\ea$ tensor product has RDP. If our conjecture is true, $E\otimes F$ is the unit interval in the $\pog$ tensor product of groups with RDP. As was shown in \cite[cf. Remark 2.13]{W}, the $\pog$ tensor product of groups with RDP might not have RDP, but in the presence of generating units, RDP holds in an asymptotic form in the sense of \cite{Par}. \section{An example: $\mathbb R[0,1]$} Let us consider the interval $[0,1]$ in $(\mathbb R,\mathbb R^+,0)$. This is clearly a dimension group with order unit $1$ and hence the interval $[0,1]$ is a dimension effect algebra. It was proved in \cite{Pu2} that the $\ea$ tensor product $[0,1]\otimes [0,1]$ is not lattice ordered and thus not isomorphic to $[0,1]$. By our results, $[0,1]\otimes [0,1]$ is a dimension effect algebra, which is the interval $\mathbb R\otimes \mathbb R[0,1\otimes 1]$. Note that the fact that the $\pog$ tensor product $\mathbb R\otimes \mathbb R$ is not lattice ordered was shown in \cite{W}. As an example, we will present $[0,1]$ as a direct limit of a directed system of simplicial effect algebras. The tensor product $[0,1]\otimes [0,1]$ is then obtained as a direct limit as in Theorem \ref{th:tenproddimea}. We first need to introduce some notation.
For any $n$-tuple \[ A=(x_1,\dots,x_n) \] of elements in $\mathbb R^+$, let $f_A$ denote the positive group homomorphism \[ f_A: \mathbb Z^n\to \mathbb R,\quad e^n_i\mapsto x_i,\ i=1,\dots,n \] and let \[ L(A):= f_A(\mathbb Z^n), \quad L(A)^+:= f_A((\mathbb Z^n)^+),\quad L_>(A)^+:=f_A((\mathbb Z^n_>)^+), \] where $(\mathbb Z^n_>)^+:= \{\sum_i z_i e^n_i : z_i>0 \mbox{ for all } i=1,\dots,n\}$. We also use the notations \[ Q(A):=Lin_{\mathbb Q}(A),\quad Q(A)^+:=Q(A)\cap\mathbb R^+. \] Let us define the index set as \[ \mathcal I:=\{A\subset [0,1], \mbox{finite, }\mathbb Q-\mbox{linearly independent, } 1\in L_>(A)^+\}. \] Any $A\subset \mathbb R^+$ with cardinality $n$ can be identified with the $n$-tuple of its elements $(x_1,\dots,x_n)$, indexed so that $x_1<\dots <x_n$. For $A,B\in \mathcal I$, write $B\preceq A$ if $B\subset L(A)^+$. It is easy to see that $\preceq$ is a preorder on $\mathcal I$. \begin{prop}\label{prop:directed} $(\mathcal I,\preceq)$ is directed. \end{prop} For the proof, we need some lemmas. \begin{lemma}\label{lemma:sums} Let $B=(y_1,\dots,y_k)$ be a tuple of elements in $\mathbb R^+$. Assume that for some $1\le N< k$, \[ \sum_{i=1}^Ny_i=\sum_{i=N+1}^ky_i. \] Then there is some tuple $A=(x_1,\dots, x_l)$ of elements in $Q(B)^+$ such that $l<k$ and $y_i\in L(A)^+$, $i=1,\dots,k$. \end{lemma} \begin{proof} We proceed by induction on $k$. By the assumptions, we see that $k$ is at least $2$; if $k=2$, we have $y_1=y_2$, so we may put $A:=\{y_2\}$ and we are done. Now let $k>2$ and assume that the assertion is true for tuples of length $k-1$. By reindexing and rearranging the sums, we may assume that $y_k=\min\{y_1,\dots,y_k\}$. Put $y_1':=y_1-y_k$, then $y_1'\in Q(B)^+$ and we have the equality \[ y_1'+y_2+\dots +y_N=y_{N+1}+\dots+ y_{k-1} \] containing only $k-1$ elements.
By the induction hypothesis, there is some tuple $A'=(x_1,\dots,x_{l'})$ with elements in $Q(B)^+$ and $l'< k-1$, and some $(k-1)\times l'$ matrix $Z'$ with nonnegative integer entries such that \[ y_1'=f_{A'}(z'_{1\cdot}),\quad y_i=f_{A'}(z'_{i\cdot}),\ i=2,\dots,k-1, \] where $z'_{i\cdot}$ denotes the $i$-th row of $Z'$. Let $A=(x_1,\dots,x_{l'},y_k)$ and \[ Z=\left(\begin{array}{cc} Z' & \begin{array}{c} 1\\ 0\\ \vdots\\ 0\end{array}\\ 0 & 1 \end{array}\right). \] Then $A$ is an $l$-tuple of elements in $Q(B)^+$, $l=l'+1<k$ and $y_i=f_A(z_{i\cdot})\in L(A)^+$ for all $i$. \end{proof} \begin{lemma}\label{lemma:basis_positive} Let $B=(y_1,\dots,y_k)$ be a tuple of elements in $\mathbb R^+$. Then there is a $\mathbb Q$-linearly independent tuple $A=(x_1,\dots,x_n)$ of elements in $Q(B)^+$ such that $y_i\in L(A)^+$, $i=1,\dots,k$. \end{lemma} \begin{proof} If $B$ is $\mathbb Q$-linearly independent, there is nothing to do. Otherwise, there are some $r_i\in\mathbb Q$ such that $\sum_i r_i y_i=0$ with some $r_i\ne 0$. Clearly, by multiplying by a common denominator, we may assume that $r_i\in \mathbb Z$. Assume that the elements are arranged in such a way that \[ r_i\left\{\begin{array}{cc} >0 & \mbox{ for } i=1,\dots,N\\ <0 & \mbox{ for } i=N+1,\dots M\\ =0 & \mbox{ for } i=M+1,\dots k. \end{array}\right. \] Put $p_i=\prod_{j\le M,\, j\ne i} |r_j|$ and let $y'_i=\frac{y_i}{p_i}$ for $i=1,\dots,M$. Clearly, $y_1',\dots, y_M'\in Q(B)^+$. Then by multiplying the equality by $\prod_{j=1}^M|r_j|^{-1}$, we obtain \[ \sum_{i=1}^Ny'_i=\sum_{i=N+1}^My'_i. \] Applying Lemma \ref{lemma:sums}, there is some $l$-tuple $A'=(x_1',\dots, x_l')$ of elements in $Q(B)^+$ with $l<M$ such that $y_i'\in L(A')^+$ for $i=1,\dots,M$, so that also $y_i=p_iy_i'\in L(A')^+$, $i=1,\dots,M$. We now repeat the same process with $B'=(x_1',\dots,x_l',y_{M+1},\dots,y_k)$.
Since $Q(B')=Q(B)$ and $|B'|<k$, after a finite number of steps we obtain a $\mathbb Q$-linearly independent set $A=\{x_1,\dots,x_n\}$ with the required properties. \end{proof} \noindent \textit{Proof of Proposition \ref{prop:directed}}. Let $B,C\in \mathcal I$, then by Lemma \ref{lemma:basis_positive} there is some $\mathbb Q$-linearly independent tuple $A=(x_1<\dots<x_n)$ of elements in $Q(B\cup C)^+$ such that $B\cup C\subset L(A)^+$. By assumptions, $1\in L_>(B)^+\subset L(A)^+$, so that $1=\sum_i z_ix_i$ for unique coefficients $z_1,\dots, z_n\in \mathbb Z^+$. Assume that $z_{i_0}=0$ for some $i_0$. Let $B=(y_1<\dots<y_k)$. There are some positive integers $v_1,\dots,v_k$ such that $1=\sum_{j=1}^k v_jy_j$ and some nonnegative integers $w^j_1,\dots,w^j_n$ such that $y_j=\sum_iw^j_ix_i$. It follows that \[ 1=\sum_{j=1}^k v_jy_j=\sum_i(\sum_j v_j w^j_i )x_i=\sum_i z_ix_i, \] so that $\sum_j v_j w^j_{i}=z_i$, in particular, $\sum_j v_j w^j_{i_0}=0$. Since all $v_j$ are positive, this implies that $w^j_{i_0}=0$ for all $j$ and we have \[ y_j=\sum_{i\ne i_0} w^j_ix_i. \] Hence $B\subset L(A\setminus \{x_{i_0}\})^+$, similarly also $C\subset L(A\setminus \{x_{i_0}\})^+$. It follows that we may assume that $1\in L_>(A)^+$. This means that $1=\sum_i z_i x_i$ for positive integers $z_i$, which implies that we must have $0<x_i\le 1$. It follows that $A\in \mathcal I$ and $\mathcal I$ is directed. \qed We now construct a directed system of simplicial effect algebras. Let $A\in \mathcal I$. Since $A$ is $\mathbb Q$-linearly independent, $f_A$ is a $\pog$ isomorphism onto its range. Let $E_A$ be the interval $[0,f_A^{-1}(1)]$ in $\mathbb Z^{|A|}$ and let $g_A=f_A|_{E_A}$. Then $g_A$ is an effect algebra isomorphism onto the interval $[0,1]$ in $(L(A),L(A)^+,0)$. Let $B\in \mathcal I$, $B\preceq A$, then since $L(B)^+\subseteq L(A)^+$, we have $g_B(E_B)\subseteq g_A(E_A)$. 
Put \[ g_{AB}: E_B\to E_A, \qquad g_{AB}=g_A^{-1}g_B, \] then it is clear that \[ \mathcal E=(E_A, A\in \mathcal I; g_{AB}, B\preceq A) \] is a directed system of simplicial effect algebras. \begin{prop} $([0,1]; g_A, A\in \mathcal I)$ is the direct limit of $\mathcal E$. \end{prop} \begin{proof} It is clear that $([0,1]; g_A, A\in \mathcal I)$ is compatible with $\mathcal E$. Note also that any $x\in [0,1]$ is contained in the range of some $g_A$. Indeed, if $x\in \mathbb Q\cap[0,1]$, then $x=\tfrac mn$ for $n\in \mathbb N$, $m\in \mathbb Z^+$. Let $A=\{\tfrac1n\}$, then $A\in \mathcal I$ and we have $E_A=[0,n]_{\mathbb Z}$, $x=g_A(m)$. If $x\notin \mathbb Q$, then $A=\{x,1-x\}\in \mathcal I$ and $x\in A\subset g_A(E_A)$. Now let $E$ be an effect algebra and let $k_A: E_A\to E$, $A\in \mathcal I$, be morphisms such that $(E; k_A, A\in \mathcal I)$ is compatible with $\mathcal E$. Let $x\in [0,1]$ be in the range of $g_A$ and put \[ \psi(x)=k_A(g_A^{-1}(x)). \] Assume that $B\in \mathcal I$ is such that $x$ is also in the range of $g_B$ and let $C\in \mathcal I$ be such that $A,B\preceq C$. Then $g_A(E_A)\subseteq g_C(E_C)$ and by compatibility \[ k_A(g_A^{-1}(x))=k_Cg_{CA}(g_A^{-1}(x))=k_C(g_C^{-1}(x)). \] Similarly we obtain that $k_B(g_B^{-1}(x))=k_C(g_C^{-1}(x))$, hence $\psi$ is a well-defined map. Let $I=\{1\}$, then clearly $I\in \mathcal I$, $E_I=\{0,1\}\subset \mathbb Z$ and we have \[ \psi(0)=k_I(0)=0,\qquad \psi(1)=k_I(1)=1, \] since $k_I$ is an effect algebra morphism. Further, let $x_1,x_2,x\in [0,1]$ be such that $x=x_1+x_2$. Let $A\in \mathcal I$ be such that $x_1,x_2\in g_A(E_A)$, then clearly also $x\in g_A(E_A)$ and we have $g_A^{-1}(x_1)+g_A^{-1}(x_2)=g_A^{-1}(x)$, since $g_A$ is an isomorphism onto its range. Hence \[ \psi(x)=k_A(g_A^{-1}(x))=k_A(g_A^{-1}(x_1)+g_A^{-1}(x_2))=\psi(x_1)+\psi(x_2). \] This proves that $\psi$ is an effect algebra morphism $[0,1]\to E$.
Further, for any $A\in \mathcal I$ and $z\in E_A$, \[ k_A(z)=k_A(g_A^{-1}g_A(z))=\psi g_A(z), \] so that $k_A=\psi g_A$. Since $\psi$ is obviously the unique map $[0,1]\to E$ with this property, this proves the statement. \end{proof}
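To illustrate the directedness of $(\mathcal I,\preceq)$ on a concrete pair of indices (a small worked example, consistent with the construction above but not needed for the proofs), take $A=\{\tfrac12\}$ and $B=\{\tfrac13\}$; both belong to $\mathcal I$, since $1=2\cdot\tfrac12$ and $1=3\cdot\tfrac13$. Neither $A\preceq B$ nor $B\preceq A$ holds, because $\tfrac12\notin L(B)^+$ and $\tfrac13\notin L(A)^+$. However, $C=\{\tfrac16\}\in\mathcal I$ is a common upper bound: $\tfrac12=3\cdot\tfrac16$ and $\tfrac13=2\cdot\tfrac16$, so $A,B\preceq C$. On the level of the simplicial effect algebras, $E_A=[0,2]_{\mathbb Z}$, $E_B=[0,3]_{\mathbb Z}$, $E_C=[0,6]_{\mathbb Z}$, and the connecting morphisms are \[ g_{CA}:E_A\to E_C,\ z\mapsto 3z, \qquad g_{CB}:E_B\to E_C,\ z\mapsto 2z. \]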
\section{Introduction} Simulated fluctuation amplitudes and turbulent fluxes often exhibit chaotic behavior, with no obvious regular period or amplitude, and may exhibit large variances and skewnesses. Unlike experimental measurements, where the sampling frequency combined with sufficiently long collection windows can often provide enough temporal samples to accurately infer the mean distribution of a quantity, obtaining such long time series from physics-rich simulations is often computationally prohibitive. However, rigorous assessment of simulated quantity statistics, especially the mean value and its uncertainty, is essential for verification and validation (V\&V) studies\cite{holland2016}. Therefore, accurate and computationally feasible estimates of the minimum turbulence simulation length are necessary for meaningful V\&V studies. Running simulations long enough to shrink the uncertainty of the simulated quantity is a worthwhile endeavor, as it minimizes the temporal uncertainties with respect to other uncertainties, such as uncertainty in fitting profiles to the experimental measurements\cite{chilenski2015}, or uncertainties in the input parameters of a computational model due to experimental measurement errors\cite{vaezi2018,vaezi2018b}. Among temporal uncertainties, determining the variance of the mean distribution of a simulated quantity (in this paper frequently referred to as the mean variance) is of significant importance. The variance of the mean distribution is needed for determining the temporal uncertainty in V\&V studies and for computing the fractional uncertainty of the predicted turbulence quantity\cite{holland2016}. Moreover, advanced reduced models of turbulent plasma transport, such as the trapped-gyro-Landau-fluid (TGLF)\cite{staebler2007} model, are calibrated to the results of nonlinear simulations. It is therefore essential to ensure that uncertainties in the time-averaged turbulence levels are small.
Different techniques to measure the fractional uncertainty of nonlinear turbulence simulations have been previously pursued within the magnetic confinement based fusion energy (MFE) research community\cite{mikkelsen2008,holland2016,anderson2017,parker2018}. However, to our knowledge, no rigorous review of methods for measuring and forecasting temporal fractional uncertainty has been performed within the MFE community. Hence, in this paper we first review the previous methods used within the community to address this issue. Second, we compare the previously used methods of mean variance measurement against the analytic results for model time series to study the pros and cons of each method. Third, we examine the convergence of turbulent energy flux means in nonlinear gyrokinetic simulations and forecast the temporal fractional uncertainty of turbulence quantities at later simulation times. To carry out this analysis, we use an Autoregressive Moving-Average (ARMA) model to forecast the variance of the mean distribution at later simulation times. In practice, we apply the mean variance techniques to gyrokinetic simulation cases at temperature gradients both near and well above the critical value. In this paper, our approach is to discuss practical mean variance measurement techniques without delving too deeply into their detailed statistics. However, references with more detailed statistical explanations are given for interested readers. In Sec. \ref{sec:meanvaroldtechniques}, we review the previous methods of measuring mean variance, including the integral correlation time method, and sub-interval averaging of correlated measurements. In Sec. \ref{sec:armatest}, we compare the variance results of these previously used techniques against analytic results calculated for ARMA model time series. In Sec.
\ref{simulationvar}, we test ARMA model fits to gyrokinetic simulations, to forecast the mean variance at later simulation times and determine the minimum length of simulation required to achieve a desired variance of the mean. In Sec. \ref{summary}, we summarize our study and propose future directions for investigation. \section{Historical Methods of Measuring Mean Variance of Autocorrelated Measurements} \label{sec:meanvaroldtechniques} Calculating the natural variation of turbulence levels within nonlinear initial value simulations is essential to quantifying the temporal uncertainty of turbulence quantities, as shown in Fig. \ref{fig:simtimeseries}. Determining the mean of a turbulence quantity in the saturation phase is straightforward, but determining its aleatory uncertainty and variance is more complicated. In many cases, determining the uncertainty of a mean turbulence quantity in nonlinear simulations is necessary for verification and validation purposes, as well as for the calibration of reduced models. In this section, we review some methods of temporal uncertainty estimation used within the MFE turbulence community, and compare these techniques with analytical solutions of the mean distribution variance for model time series. \begin{figure}[ht] \centering \includegraphics[width=0.37\textwidth]{chrsits-eps-converted-to.pdf} \caption{Time trace of a sample ion energy flux from a gyrokinetic turbulence simulation. Reprinted with permission from Holland\cite{holland2016}. Copyright 2016 American Institute of Physics.} \label{fig:simtimeseries} \end{figure} The simplest possible estimate of the fractional temporal uncertainty of a measured quantity is given by \begin{equation} \delta_{X} = {\frac{Std[\bar{X}]}{E[\bar{X}]}} = {\frac{\sqrt{Var[\bar{X}]}}{E[\bar{X}]}}, \end{equation} where $Std[\bar{X}]$ is the standard deviation of the sample mean $\bar{X}$, $Var[\bar{X}]$ is its variance, and $E[\bar{X}]$ is its expected value.
When the measurement samples are not autocorrelated, the variance of the sample mean of a quantity ${X}$ according to the Lindeberg--L\'evy Central Limit Theorem\cite{brown1971} is calculated as \begin{equation} \label{noncorrvar} Var[\bar{X}] = {\frac{\sigma_X^2}{n}}, \end{equation} where $\sigma^2_X = {1/n} \sum_{i=1}^n (X_i - \bar{X})^2$ is the sample variance, $\bar{X}$ is the sample mean, and $n$ is the number of repeated measurements. As is well known, the variance of the mean sampling distribution shrinks with the availability of more samples. However, if the measurements are correlated, due, for example, to a high sampling rate relative to physical timescales, the calculation of the variance of $\bar{X}$ becomes more complicated\cite{zhang2006}. For measurements with finite autocorrelation, Andrews\cite{andrews1991} suggests estimating the long-run variance using a kernel estimator, \begin{equation}\label{corrvar} \hat{\sigma}_X^2 = \sum_{|j| \leq q} \omega_q(j) \rho_{X}(j); \end{equation} \begin{equation} \rho_X(j) = {\frac{1}{n}} \sum_{i=1}^{n - |j|} \left(X_i - \bar{X}\right) \left(X_{i+|j|} - \bar{X} \right). \end{equation} In the above equations, $\hat{\sigma}_X^2$ is the variance of the time series accounting for autocorrelated lags, $q$ is a specified cut-off lag, and $\omega_q(j)$ is a symmetric, bounded, and integrable kernel estimator. Mathematically, the kernel estimator is a symmetric filtering function that filters out the noise of larger lags arising from the finite length of the sampled time series. Possible choices for the kernel estimator include the Bartlett kernel, truncated kernel, and Hanning kernel\cite{anderson2011}. If the measurements are uncorrelated, the covariance terms become zero and $\hat{\sigma}_X^2$ reduces to the sample variance, recovering Eqn. \ref{noncorrvar}. To approximate the mean variance of autocorrelated processes, different heuristic approaches have been used within the MFE community.
Two examples considered here are \begin{enumerate} \item Using an approximate integral correlation time in the calculation of the variance. \item Splitting the signal into sub-windows of sufficient length that the sub-window means are no longer correlated, so that the variance of the uncorrelated sub-window means approximates the mean variance of the original time series. \end{enumerate} In the following subsections, we describe these two approaches, and in Sec. \ref{sec:armatest} we compare their variance calculations against analytical values of the mean variance for model time series. We note that the methods described and pursued in this manuscript assume that the time series is stationary and ergodic, and therefore can only be applied to the saturated phase of simulations. Possible approaches for future studies which relax this restriction are discussed in Sec. \ref{summary}. \subsection{Integral Correlation Time} In this approach, the variance of $\bar{X}$ is derived from Eqn. \ref{corrvar} by treating each time step as a measurement sample, and the correction coefficient due to the finite autocorrelation of the measurements is approximated as \begin{equation} \label{ictmethod} Var[\bar{X}] = {\tau_{int}} {\frac{\sigma_X^2}{n}}. \end{equation} Essentially, the effective number of samples is reduced by the integral correlation time $\tau_{int}$. The problem is now reduced to estimating the integral correlation time $\tau_{int}$ over all the time lags. However, due to the finite sample size of the measurement, larger lags may introduce error into the estimate of $\tau_{int}$.
A practical estimate of $\tau_{int}$, proposed by Nevins\cite{nevins2004}, is \begin{equation} \tau_{int} = \int_{-\tau_{lag}}^{\tau_{lag}} d\tau \bar{C}_{X}(\tau), \end{equation} where $\tau_{lag} \approx \sqrt{\tau_c T}$, $\tau_c$ is the width of the lag region in which the autocorrelation is above the standard error \begin{equation} \label{standarderror} S.E.(\tau) = \sqrt{{\frac{1}{N}}\left(1 + 2 \sum_{i=1}^{\tau} \rho_i^2 \right)}, \end{equation} and $\bar{C}_{X}(\tau)$ is the auto-variance at time lag $\tau$ passed through a Hanning kernel, which can be obtained as \begin{equation} \bar{C}_{X}(\tau) = \mathbb{H}(\tau / \tau_{lag}) C_{X}(\tau), \end{equation} in which $\mathbb{H}$ is the Hanning function, and $C_{X}(\tau)$ is the standard estimate of the auto-variance, \begin{equation} C_{X}(\tau) = {\frac{1}{T}} \int_{0}^{T - \tau} dt X(t) X(t+\tau). \end{equation} Hence, in practice one can calculate the auto-variance at different time lags, pass the auto-variance function through a symmetric kernel estimator (here a Hanning function is used), and integrate the weighted auto-variance function over the significant time lags. If there are no correlated lags, the integral correlation time reduces to one, recovering the uncorrelated variance of the samples. To illustrate the integral correlation time variance measurement technique, in Fig. \ref{fig:intcorrtime} we show the autocorrelation function of a sample time series with the large-lag standard error. A Hanning kernel filter is applied to remove the noise arising from large-lag terms. The filtered auto-variance is then weighted to compute the integral correlation time. As a result, the variance of the sample can be adjusted to account for the autocorrelated measurements. As might be expected, this mean variance estimation technique is highly sensitive to the width of the lag region and the choice of kernel estimator.
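For concreteness, the procedure above can be sketched in a few lines of Python with NumPy. This is an illustrative sketch, not the published implementation: the function names are our own, the lag cutoff `tau_lag` is taken as a fixed input rather than estimated from $\sqrt{\tau_c T}$, and the standard-error-based width selection is omitted.

```python
import numpy as np

def integral_correlation_time(x, tau_lag):
    """Estimate tau_int by integrating the Hanning-windowed
    autocorrelation over lags |tau| <= tau_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    var = xc.var()
    # normalized autocorrelation rho(tau) for tau = 0..tau_lag
    rho = np.array([np.dot(xc[:n - t], xc[t:]) / (n * var)
                    for t in range(tau_lag + 1)])
    # Hanning window H(tau/tau_lag): 1 at tau=0, 0 at tau=tau_lag
    taus = np.arange(tau_lag + 1)
    hann = 0.5 * (1.0 + np.cos(np.pi * taus / tau_lag))
    # symmetric integral over -tau_lag..tau_lag: lags +-tau count twice
    return hann[0] * rho[0] + 2.0 * np.sum(hann[1:] * rho[1:])

def mean_variance_ict(x, tau_lag):
    """Adjusted variance of the mean: Var[xbar] ~ tau_int * sigma_X^2 / n."""
    x = np.asarray(x, dtype=float)
    return integral_correlation_time(x, tau_lag) * x.var() / len(x)
```

For an uncorrelated signal the estimated $\tau_{int}$ fluctuates around one, so the adjusted variance stays close to the uncorrelated estimate $\sigma_X^2/n$.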
\begin{figure}[ht] \centering \includegraphics[trim={0 0.5cm 0 0},clip,width=0.4\textwidth]{intcorrtime-eps-converted-to.pdf} \caption{Autocorrelation function shown with the shaded 95\% confidence intervals ($\pm 1.96\, S.E.$), as well as the fitted Hanning kernel estimator for an arbitrary time series of 500 samples. {The analysis shown here is taken from the time series of Fig. }\ref{fig:subintervalts}.} \label{fig:intcorrtime} \end{figure} \subsection{Sub-interval Averaging of Correlated Measurements} In the second approach\cite{mikkelsen2008b}, the measurement samples are taken as the means of $N$ non-overlapping sub-interval windows, each of length $T$, of the original time series. These sub-window means are denoted by $Y_i$ and can be obtained as \begin{eqnarray} Y_1 = {\frac{X_1 + \dots + X_{T}}{T}},\quad \dots,\quad Y_{N} = {\frac{X_{(N-1)T + 1} + \dots + X_{NT}}{T}}. \end{eqnarray} Here, by selecting a large enough sub-interval width $T$, the lagged autocorrelation of the sub-window means becomes statistically insignificant. We can therefore use the sub-window means to estimate the variance of the mean for the total signal via Eqn. \ref{noncorrvar}, which in this case reads as \begin{equation} \label{subwindowmethod} Var[\bar{X}] \sim Var[\bar{Y}] = {\frac{\sigma_{{Y}}^2}{N}}, \end{equation} where $\sigma_Y^2$ is the variance of the sub-interval means. The key question for this approach is how to obtain the minimum acceptable value of $T$. To obtain the minimum sub-interval width for uncorrelated sub-windows, we can test the statistical significance of the lagged correlations of the sub-window means. Typically, the sub-interval width should be larger than the turbulence timescale. For a stochastic process whose autocorrelation drops exponentially, a quick comparison of the first time lag autocorrelation of the sub-window means, $\rho_Y(1)$, against their standard error\cite{parzen1963} is sufficient to determine whether the width of the sub-window averaging is large enough.
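A minimal Python sketch of the sub-interval estimate of Eqn. \ref{subwindowmethod}, together with the first-lag significance check just described, is given below. The function names and the use of the white-noise lag-1 standard error $1/\sqrt{N}$ as the significance threshold are simplifying assumptions for illustration.

```python
import numpy as np

def subinterval_mean_variance(x, width):
    """Average non-overlapping windows of the given width and return
    (estimate of Var[xbar], first-lag autocorrelation rho_Y(1))."""
    x = np.asarray(x, dtype=float)
    n_win = len(x) // width
    y = x[:n_win * width].reshape(n_win, width).mean(axis=1)
    yc = y - y.mean()
    rho1 = np.dot(yc[:-1], yc[1:]) / np.dot(yc, yc)
    # uncorrelated-sample estimate sigma_Y^2 / N
    return yc.var() / n_win, rho1

def smallest_uncorrelated_width(x, widths):
    """Return the first width whose rho_Y(1) is statistically
    insignificant (below the white-noise lag-1 standard error),
    together with the corresponding variance estimate."""
    for w in widths:
        var_mean, rho1 = subinterval_mean_variance(x, w)
        if abs(rho1) < 1.0 / np.sqrt(len(x) // w):
            return w, var_mean
    return None
```

In practice one would scan candidate widths from one turbulence timescale upward and stop at the first width passing the significance test.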
If the first-lag autocorrelation of the sub-window means falls below the standard error, the sub-window means can be approximated as uncorrelated. To better illustrate this mean variance estimation technique, in Fig. \ref{fig:subintervalts} we show a sample time series with its ten sub-interval means. If the sub-window means have statistically insignificant correlation, we can simply calculate the uncorrelated variance of the sub-window means as the mean variance of the time series. \begin{figure}[ht] \centering \includegraphics[trim={0 0.5cm 0 0},clip,width=0.4\textwidth]{subintervalts-eps-converted-to.pdf} \caption{A sample time series with 10 sub-interval means, denoted by the horizontal lines.} \label{fig:subintervalts} \end{figure} \section{An Analytical Mean Variance Study} \label{sec:armatest} In this Section, we will examine the application of Auto-Regressive Moving-Average\cite{choi2012arma} (ARMA) models to analytical time series. ARMA models provide a description of stationary and ergodic processes in terms of two sets of polynomials, one for the autoregression and the second for the moving average, and are commonly used in a variety of research communities. The general form of ARMA$(p,q)$ models with $p$ autoregressive terms and $q$ moving-average terms is \begin{equation} X_t = \phi_1 X_{t-1} + \dots + \phi_p X_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \dots + \theta_q \varepsilon_{t-q}, \end{equation} where $\phi_i$ and $\theta_i$ are parameters of the model, and $\varepsilon_t, \varepsilon_{t-1}, \dots$ are white-noise terms distributed as $\mathcal{N}(0, \sigma_\varepsilon^2)$. Here, we will look at an example of using an ARMA model with one autoregressive term and no moving-average terms to generate sample time series, and compare the efficacy of the methods introduced in Sec. \ref{sec:meanvaroldtechniques} against the analytical derivation of the mean variance.
For the ARMA$(1,0)$ model $X_t = \phi_1 X_{t-1} + \varepsilon_t$, the variance of the sample mean analytically asymptotes to\cite{crack2004} \begin{equation} Var[\bar{X}] \sim {\frac{\sigma_X^2 (1 + \phi_1)}{n (1 - \phi_1)}}, \end{equation} and the autocorrelation function of ARMA$(1,0)$ is given by\cite{thompson2010comparison} \begin{equation} \rho_X(0) = 1; \quad \rho_X(i) = \left(\phi_1\right)^i. \end{equation} We observe that when $\phi_1$ is nonzero, the lag autocorrelations are finite and decrease with the lag number, and the mean variance is no longer equal to the uncorrelated case of Eqn. \ref{noncorrvar}. We now consider three time series generated using the ARMA$(1,0)$ model to study the effects of correlation persistence in finite-length time series. In Fig. \ref{fig:armats}, we show randomly generated finite-length ($n=1000$ and $\sigma_\varepsilon = 0.2$) ARMA$(1,0)$ processes with $\phi_1=\{0,0.75,0.99\}$. In Fig. \ref{fig:armaacf} we plot the sample autocorrelation function of each time series against the analytical value of the autocorrelation for an ARMA$(1,0)$ process. We observe that for smaller $\phi_1$, with less significant lags, the 1000 samples used are sufficient to converge to the analytical autocorrelation function. On the other hand, we observe that for larger $\phi_1$ values the autocorrelation persistence is significantly larger, and our finite number of samples is not enough to accurately approximate the analytical autocorrelation function. In such high-persistence processes, due to the cumulative summation of larger correlation lags, the standard error (Eqn. \ref{standarderror}) is much larger, resulting in much larger confidence interval bounds. We see therefore that larger autocorrelation effects and a finite sample number can have a drastic influence on the calculation of the mean variance.
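The asymptotic formula for $Var[\bar{X}]$ can be checked by Monte Carlo over many independent ARMA$(1,0)$ realizations. The sketch below is an illustrative check, not part of the original analysis; it takes $\sigma_X^2$ to be the stationary variance $\sigma_\varepsilon^2/(1-\phi_1^2)$ and uses the same parameters as the figures ($n=1000$, $\sigma_\varepsilon=0.2$, $\phi_1=0.75$).

```python
import numpy as np

def ar1_series(phi, n, sigma_eps, rng):
    """Generate one ARMA(1,0) realization X_t = phi*X_{t-1} + eps_t."""
    eps = rng.normal(0.0, sigma_eps, size=n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

phi, n, sigma_eps = 0.75, 1000, 0.2
rng = np.random.default_rng(0)

# Monte Carlo estimate of Var[xbar] over independent realizations
means = np.array([ar1_series(phi, n, sigma_eps, rng).mean()
                  for _ in range(2000)])
mc_var = means.var()

# analytic asymptote, with sigma_X^2 the stationary variance of X_t
sigma_x2 = sigma_eps**2 / (1.0 - phi**2)
analytic_var = sigma_x2 * (1.0 + phi) / (n * (1.0 - phi))
```

For these parameters the analytic value is roughly $6.6\times 10^{-4}$, consistent with the $\phi_1=0.75$ row of Table \ref{tbl:compmethods}, and the Monte Carlo estimate agrees to within its sampling error.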
\begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{armaphi0-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{armaphi075-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{armaphi099-eps-converted-to.pdf} \end{minipage} \caption{Randomly generated ARMA$(1,0)$ process with 1000 samples for different values of $\phi_1 = \{0, 0.75, 0.99\}$.} \label{fig:armats} \end{figure*} \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{acfphi0-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{acfphi075-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{acfphi099-eps-converted-to.pdf} \end{minipage} \caption{Autocorrelation function of ARMA$(1,0)$ process with 1000 samples for different values of $\phi_1 = \{0, 0.75, 0.99\}$ compared with analytical autocorrelation function of ARMA$(1,0)$ model. The shaded band shows standard error bounds.} \label{fig:armaacf} \end{figure*} To understand which method better approximates the mean variance of the signal, we have compared the mean variance estimates of the methods introduced in the previous Section (Eqns. \ref{noncorrvar}, \ref{ictmethod}, and \ref{subwindowmethod}) against the analytical value of the mean variance for ARMA$(1,0)$; the results are shown in Table \ref{tbl:compmethods}. We can gain some essential insights from Table \ref{tbl:compmethods} into how these different methods perform in different correlation-persistence settings.
Before we proceed with the comparison, we should note that for the sub-interval averaging method, we need to determine the size of the sub-window. Due to the finite sample size of the time series, a lag correlation below the standard error cannot statistically reject the hypothesis of uncorrelated sub-intervals. In Fig. \ref{fig:armarho1}, we show the autocorrelation of the first sub-interval averaged lag, $\rho_Y(1)$, as a function of sub-interval size. The zero-symmetric shaded area shows the statistically insignificant region where the autocorrelation of the first lag is below the standard error. Moreover, the line and its shaded area show the expected value and its deviation, obtained from independent calculations of $\rho_Y(1)$ for shifted sub-intervals. Since the lag autocorrelation of sub-interval averages diminishes for larger lags in a stochastic process, we focus only on the uncorrelated first lags for determining the minimum sub-interval size. We also show the corresponding changes in the variance of the mean distribution in Fig. \ref{fig:subwinvar}. We observe in the autocorrelated cases that, if the sample size is large enough (such as the case in Fig. \ref{fig:armarho1}b and Fig. \ref{fig:subwinvar}b), at a certain sub-interval size $\rho_Y(1)$ falls below the standard error. Traditionally, the sub-interval size is determined as the minimum interval size needed to calculate the variance of the series. However, when comparing against the analytical solution, we observe that this estimate of uncertainty still exhibits some error. Since the error is minimized when $\rho_Y(1) \simeq 0$, we propose that a better approach for this technique is to use the largest number of sub-windows possible which meets this condition. Ideally, when the autocorrelation of all the sub-interval lags is zero, the variance estimate is exact.
However, here we focus on selecting the sub-interval size that minimizes the first-lag value, to obtain a simple yet relatively accurate variance estimate. \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{rhoy1_phi0-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{rhoy1_phi750-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{rhoy1_phi990-eps-converted-to.pdf} \end{minipage} \caption{Autocorrelation of first sub-window average lag, $\rho_Y(1)$, of ARMA$(1,0)$ process as a function of sub-window size with 1000 samples for different values of $\phi_1 = \{0, 0.75, 0.99\}$. The zero-symmetric shaded band shows the region where the correlation is statistically insignificant.} \label{fig:armarho1} \end{figure*} \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{vary_phi0-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{vary_phi750-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{vary_phi990-eps-converted-to.pdf} \end{minipage} \caption{Mean variance of sub-interval averages, $Var[\bar{Y}]$, of ARMA$(1,0)$ process as a function of sub-window size with 1000 samples for different values of $\phi_1 = \{0, 0.75, 0.99\}$ compared with the analytical value of variance.} \label{fig:subwinvar} \end{figure*} In Table \ref{tbl:compmethods} we compare the variance measurement techniques against the analytical values.
When $\phi_1 = 0$, the signal is white noise and completely uncorrelated; thus the analytical value of the mean variance is equivalent to the variance of the sampling distribution of the mean obtained from the Lindeberg-Levy Central Limit Theorem for independent data. Moreover, in this case, the best window length for sub-interval averaging is a single timestep of the original series, as shown in Figs. \ref{fig:armarho1}(a) and \ref{fig:subwinvar}(a). On the other hand, using the integral correlation time approach introduces finite lag effects within the kernel estimator integration; hence the integral correlation time estimate of the mean variance exhibits a small error. \begin{table}[ht!] \caption{\label{tbl:compmethods} Mean variance of ARMA$(1,0)$ using different methods for $n=1000$ samples.} \setlength\tabcolsep{0pt} \begin{ruledtabular} \begin{tabularx}{0.95\textwidth}{ >{\centering\arraybackslash}m{1.2in} | >{\centering\arraybackslash}m{1.4in} | >{\centering\arraybackslash}m{1.4in} | >{\centering\arraybackslash}m{1.4in} | >{\centering\arraybackslash}m{1.4in} } Method & Analytical & Treating as uncorrelated signals & Integral Correlation Time & Sub-interval Averaging \\ \hline & ${\sigma_X^2 (1 + \phi_1) \over n (1 - \phi_1)}$ & ${\sigma_X^2 \over {n}}$ & ${\tau_{int}}{\sigma_X^2 \over n}$ & ${\sigma_{{Y}}^2 \over {T}}$ \\ \hline $\phi_1 = 0$ & $4.05 \times 10^{-5}$ & $4.05 \times 10^{-5}$ & $3.93\times 10^{-5}$ & $4.05 \times 10^{-5}$ \\ \hline $\phi_1 = 0.75$ & $6.61 \times 10^{-4}$ & $9.44 \times 10^{-5}$ & $2.76 \times 10^{-4}$ & $6.46 \times 10^{-4}$ \\ \hline $\phi_1 = 0.99$ & $0.12$ & $8.85 \times 10^{-4}$ & $0.037$ & $-$ \\ \end{tabularx} \end{ruledtabular} \end{table} In the case of $\phi_1=0.75$, we observe that, due to the finite autocorrelation of lags, the analytical value of the mean variance is larger than the value obtained by treating the time series as uncorrelated.
One should bear in mind that the mean variance is always larger than the value obtained by treating the time series as uncorrelated, and neglecting this fact can result in an underestimation of temporal uncertainties. Nonetheless, with $n=1000$ samples we have a sufficient number of time-steps that the sub-interval averaging mean variance asymptotes to the analytical value of the sample mean variance (see Fig. \ref{fig:subwinvar}b), and it can be used as a good estimate of the actual analytical mean variance in this case. Again we have chosen the largest number of sub-windows ($T=55$) for which $\rho_Y(1)$ is insignificant within the confidence intervals (see Fig. \ref{fig:armarho1}b). On the other hand, although the integral correlation time method does a better job than simply treating $X$ as an uncorrelated signal, it suffers from errors in the autocorrelation lags due to the finite sample size of the time series. The integral correlation time method shows significant sensitivity to both the choice of kernel estimator and the insignificance threshold of the autocorrelated lags. From this comparison, it is clear that sub-interval averaging is the better choice for estimating the mean variance. In the case of $\phi_1 = 0.99$, we are dealing with a long-memory process with many correlated lags, where the number of time-steps used to generate the signal is not enough for a good estimate of the mean variance with any of our considered approaches. Using the integral correlation time method, the mean variance exhibits a large error, mainly due to the fact that the standard error of the autocorrelation function is large for a small sample size with large correlation (see Fig. \ref{fig:armaacf}c). On the other hand, in Fig. \ref{fig:subwinvar}(c) we can observe that the mean variance of sub-intervals has not yet converged, and at no point does $\rho_Y(1)$ enter the statistical insignificance region in Fig. \ref{fig:armarho1}(c).
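For reference, a minimal version of the integral correlation time estimator is sketched below (Python; a simple significance cutoff stands in for the kernel estimator used in the paper, so this is illustrative rather than the exact implementation, and it inherits the sensitivity to the cutoff noted above):

```python
import numpy as np

def autocorr(x, max_lag):
    """Biased sample autocorrelation rho(1..max_lag)."""
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * c0)
                     for k in range(1, max_lag + 1)])

def ict_mean_variance(x, z=1.96):
    """Var[x-bar] ~= tau_int * sigma_X^2 / n, truncating the sum of rho(k)
    at the first statistically insignificant lag (crude kernel stand-in)."""
    n = len(x)
    rho = autocorr(x, max_lag=n // 4)
    insig = np.abs(rho) < z / np.sqrt(n)
    cut = int(np.argmax(insig)) if insig.any() else len(rho)
    tau_int = 1.0 + 2.0 * rho[:cut].sum()
    return tau_int * x.var(ddof=1) / n
```

For white noise the cutoff typically lands at the first lag, so $\tau_{int}=1$ and the naive $\sigma_X^2/n$ is recovered; for correlated series $\tau_{int}>1$ inflates the estimate, but the result moves with both $z$ and the truncation rule, which is exactly the sensitivity seen in Table \ref{tbl:compmethods}.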
Such a case can occur in plasma turbulence simulations with very slow dynamics; in these cases the simulation needs to be run longer for an accurate assessment of the mean variance. We note that in the $\phi_1=0.99$ case, simply by using a longer time series, the sub-interval mean variance technique converges to the analytical variance value, as shown in Fig. \ref{fig:largertssub}. \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.37\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{rhoy1_phi990ss10000-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.37\textwidth} \includegraphics[trim={0cm 0 0 0},clip, width=1.\textwidth]{vary_phi990ss10000-eps-converted-to.pdf} \end{minipage} \caption{(a) Autocorrelation of first sub-window average lag, $\rho_Y(1)$, and (b) Mean variance of sub-interval averages, $Var[\bar{Y}]$, of ARMA$(1,0)$ process as a function of sub-window size with 10000 samples for $\phi_1 = 0.99$ compared against the analytical value of variance.} \label{fig:largertssub} \end{figure*} \section{Assessing Mean and Mean Variance Convergence of Simulated Turbulence Quantities} \label{simulationvar} From Sec. \ref{sec:armatest}, we observed that the $Var[\bar{Y}]$ estimate of the sub-interval averaging technique is a reasonable approach for estimating the variance of the mean distribution, as long as the simulation time length is much larger than the autocorrelation timescale. One could run the simulation long enough and perform sub-interval averaging of the measurements to check whether the mean variance is small compared to the mean quantity, and whether the mean of the turbulence quantity can be confidently used for validation or calibration purposes. However, if the simulation is not of sufficient length to confidently estimate the variance, these approaches offer no guidance on how much more simulation time is needed to accurately estimate the mean variance.
Nevertheless, modeling the stochastic process of a turbulence quantity time series within the saturation phase of a simulation can provide us with the means of forecasting the temporal uncertainty at later simulation times. Hence, in this Section, we study the fitting of Gaussian ARMA processes to the gyrokinetic simulation energy flux within the saturation phase in order to determine the process of the turbulence quantity. By determining which process best fits the data, we can estimate and forecast the mean variance of a turbulence quantity at later simulation times. Specifically, we use ARMA models to determine the process of the turbulence quantity and assess whether a simulation has been run long enough. Once we determine the process of the simulation, we can use the fitted ARMA model coefficients to forecast how the variance of the mean decreases at later simulation times. This study builds upon recent work by Parker \textit{et al.}\cite{parker2018}, examining the performance of ARMA extrapolation methods for both strongly and weakly driven turbulence. \subsection{Simulation Details} For this study, we utilize gyrokinetic simulation predictions of the ion energy flux $Q_i$, using parameters taken from a series of ion-temperature gradient (ITG) mode dominated neutral beam heated DIII-D tokamak high-confinement mode (H-mode) plasmas, the details of which can be found in Luce \textit{et al.}\cite{luce2017}. More specifically, we utilize the results of three different simulations corresponding to three different values of the local normalized ion temperature gradient inverse scale length $a/L_{T_i} = -a\, d\ln(T_i)/dr$, where $r$ is the minor radius of a flux surface at the outboard midplane and $a$ its value at the separatrix.
The first simulation uses parameters corresponding to the nominal measured value of $a/L_{Ti}$ (as determined by standard profile curve-fitting analysis) at $\rho_{tor} = 0.6$ in a discharge with approximately $7$ MW of injected heating power but only $1.4$ N-m of injected torque, while the other two simulations use values of $a/L_{T_i}$ equal to $80\%$ and $50\%$ of the measured value, respectively. All other input parameters are held fixed at their measured values, which allows us to systematically quantify how the turbulence temporal characteristics change as the ITG mode drive is reduced. The simulations were performed with the nonlinear initial value continuum gyrokinetic code CGYRO\cite{candy2016}. The simulations span a domain size of $111\rho_s$ by $63\rho_s$ in the radial and binormal directions, where $\rho_s = c_s/\Omega_{c_i}$ is the ion sound-speed gyroradius. The simulations are fully spectral in the perpendicular plane, and include 320 radial modenumbers (resolving up to a maximum $k_x \rho_s = 9.0$) and 12 binormal modes (spanning $0.1 \le k_y \rho_s \le 1.1$). Parallel motion derivatives are treated with a sixth-order conservative upwind finite differencing scheme using 24 grid points in $\theta$. Velocity space is represented using the same ($\xi$,$v$) coordinates as the neoclassical NEO code\cite{belli2008,belli2012}, where $\xi = v_\parallel/v$ is the cosine of the pitch angle and $v$ the speed; 24 grid points in $\xi$ and 8 in $v$ are used. The simulations are local, and include magnetic flux surface shaping through the Miller representation\cite{miller1998,candy2009}, transverse magnetic fluctuations, electron and ion collisions, and equilibrium rotation and shear effects treated with a novel wavenumber advection algorithm\cite{candy2018}. 
Three ion species are included: thermal deuterium and carbon, as well as fast beam ions (modeled as having a Maxwellian distribution with $T_{fast}/T_e = 12.4$, whereas for thermal ions $T_i/T_e = 1.16$), but only transport from the thermal ions is considered here. All particle species (ions and electrons) are treated fully gyrokinetically. The time series of the energy fluxes, of length $1000 (a/c_s)$ or longer with a sampling rate of $1 (a/c_s)$, are shown in Fig. \ref{fig:simts} along with their running mean values throughout the simulation. Here, $c_s=\sqrt{T_e/m_i}$ is the local ion sound speed. We observe that for the experimental value of $a/L_{T_i}$, which is well above the critical value for instability, the ion energy flux exhibits a near-normal distribution, while at lower gradients the skewness of the time series distribution increases. Examination of the running means for each case (plotted as dashed lines in Fig. \ref{fig:simts}) shows that for the $(a/L_{T_i})^{exp}$ case, the mean of the ion energy flux converges after approximately $500 (a/c_s)$; for $0.8(a/L_{T_i})^{exp}$ the ion energy flux mean converges after about $1000(a/c_s)$; while for the near-marginal $0.5(a/L_{T_i})^{exp}$ case, the mean still has not converged after $2500 (a/c_s)$. {Here, we have deliberately kept the simulation length short to analyze the convergence of the introduced methods in the non-converged case}. Consistent with these results, the $0.5(a/L_{T_i})^{exp}$ case autocorrelation function exhibits a much larger number of autocorrelated lags, as shown in Fig. \ref{fig:simacf}.
\begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={1.75cm 0.5cm 1cm 0},clip, width=1.\textwidth]{qi10_TS-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={1.75cm 0.5cm 1cm 0},clip, width=1.\textwidth]{qi8_TS-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={.5cm 0.5cm 2.25cm 0},clip, width=1.\textwidth]{qi5_TS-eps-converted-to.pdf} \end{minipage} \caption{Ion energy flux time series with their cumulative mean, and the marginal distribution of ion energy flux gyrokinetic simulations of ion temperature length scale: (a) $a/L_{T_i} = (a/L_{T_i})^{exp}$, (b) $a/L_{T_i} = 0.8(a/L_{T_i})^{exp}$, (c) $a/L_{T_i} = 0.5(a/L_{T_i})^{exp}$. The histograms show simulation temporal sample distribution. Simulation times before saturation phase are not shown. Fluxes are normalized to gyro-Bohm energy flux.} \label{fig:simts} \end{figure*} \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={.25cm 0.75cm 2cm 0},clip, width=1.\textwidth]{acf_qi10-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={.25cm 0.75cm 2cm 0},clip, width=1.\textwidth]{acf_qi8-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={.25cm 0.75cm 2cm 0},clip, width=1.\textwidth]{acf_qi5-eps-converted-to.pdf} \end{minipage} \caption{Autocorrelation function of ion energy flux gyrokinetic simulations of ion temperature length scale: (a) $a/L_{T_i} = (a/L_{T_i})^{exp}$, (b) $a/L_{T_i} = 0.8(a/L_{T_i})^{exp}$, (c) $a/L_{T_i} = 0.5(a/L_{T_i})^{exp}$.
The shaded area shows statistically insignificant autocorrelation within the 95 percent confidence interval.} \label{fig:simacf} \end{figure*} \subsection{Determining the Stochastic Process of a Turbulence Quantity} To model the stochastic process of a turbulence quantity, we propose to use the Box-Jenkins methodology\cite{box2015time} to fit a suitable order of ARMA model parameters to the simulation data, and thereby obtain an approximate analytical relation that best fits the simulated quantity. {We note that the ARMA model assumes a stationary time series and can only be used to fit the simulation time series after nonlinear saturation, where the initial linear physics effects have completely vanished and there is good convergence in the mean of the turbulence quantity. Further discussion of determining the saturation phase start time and convergence of the mean can be found in Ref.} \cite{holland2016}. After finding the best ARMA fit for the nonlinear phase, the long-run variance can be determined, and thereby the mean variance can easily and cost-effectively be calculated for a variety of simulation lengths. From this information, the necessary simulation length can be determined. To address ARMA fitting of skewed time series (e.g.\ the near-marginal case), Box and Cox\cite{boxcox1992} suggested the use of an invertible link function $g(X_t)$ which transforms the original time series to a linear process for which ARMA fitting is appropriate. A popular link function that has been used in the literature for positively skewed marginal distributions is the logarithm function $g(X_t) = \log{X_t}$. Different variations of generalized non-normal ARMA models have also been formulated\cite{benjamin2003,zheng2015}. For the simulations considered here we find the logarithmic link function is sufficient, and defer the investigation of non-Gaussian processes to future studies.
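In practice, a statistics package such as statsmodels would handle the full ARMA$(p,q)$ family; the sketch below (Python, numpy only) illustrates just the log link and the residual whiteness check with a least-squares AR$(p)$ fit, which is the special case $q=0$. The helper names and the hard-coded critical value are our illustrative choices:

```python
import numpy as np

CHI2_95_10 = 18.307  # 95% critical value of chi-square with 10 dof

def fit_ar(y, p):
    """Least-squares fit of y_t = c + sum_i phi_i * y_{t-i} + eps_t."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] +
                        [y[p - i: len(y) - i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef, Y - X @ coef          # (c, phi_1..phi_p), residuals

def ljung_box(resid, h=10):
    """Ljung-Box Q statistic over the first h residual lags."""
    n = len(resid)
    r = resid - resid.mean()
    c0 = np.dot(r, r) / n
    rho = [np.dot(r[:-k], r[k:]) / (n * c0) for k in range(1, h + 1)]
    return n * (n + 2) * sum(rk**2 / (n - k) for k, rk in enumerate(rho, 1))

# Box-Cox style link for a positively skewed flux: g = np.log(q_i), then
# accept the smallest p whose residuals give ljung_box(resid) < CHI2_95_10.
```

A fit is accepted when its residuals look like white noise under the Ljung-Box test; the moving-average terms and the remaining diagnostics (coefficient significance, residual normality) follow the same pattern.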
Different orders of ARMA$(p,q)$ models can be fitted to $g(X_t)$, and the best $(p,q)$ can be selected based on standard statistical tests, e.g.\ significance of the fit coefficients\cite{hannan1982}, insignificant autocorrelation of the residual lags\cite{parzen1963}, normality of the residual distribution\cite{dagostino1973}, and significance of the lag Ljung-Box p-values\cite{ljung1978}. To assess the fitting statistics of the ARMA series, here we considered $p$ and $q$ values ranging from zero to ten, and for the statistical tests we considered the statistics of the first ten lags against the $95\%$ confidence interval for the autocorrelation function, with an assumed p-value threshold of 0.05 for the null hypotheses. We again emphasize that ARMA models assume stationarity of the time series; therefore, they can only be applied where the evolving mean has almost converged. As a result, based on the evolving mean shown in Fig. \ref{fig:simts}, we applied ARMA fitting after $250 (a/c_s)$ for the $(a/L_{T_i})^{exp}$ case, after $750 (a/c_s)$ for the $0.8(a/L_{T_i})^{exp}$ case, and after $1750(a/c_s)$ for the $0.5(a/L_{T_i})^{exp}$ case (which is not rigorously justifiable). \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={.25cm 0.75cm 1cm 0},clip, width=1.\textwidth]{armafit_qi10-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={.25cm 0.75cm 1cm 0},clip, width=1.\textwidth]{armafit_qi8-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={.25cm 0.75cm 1cm 0},clip, width=1.\textwidth]{armafit_qi5-eps-converted-to.pdf} \end{minipage} \caption{Statistically feasible ARMA fits to the simulation data with respect to simulation timesteps for simulated ion energy flux: (a) $a/L_{T_i} = (a/L_{T_i})^{exp}$, (b) $a/L_{T_i} = 0.8(a/L_{T_i})^{exp}$, (c) $a/L_{T_i} = 0.5(a/L_{T_i})^{exp}$. {Distinct colors show different ARMA models.}} \label{fig:armafitscan} \end{figure*} In Fig.
\ref{fig:armafitscan}, we show all ARMA fits that passed the statistical tests for the different simulation cases, as a function of fitting window length. We observe that at earlier simulation timesteps, the simulation data is not sufficient to accurately describe the stochastic process of the turbulence. Hence, many different ARMA processes can fit the simulation data and pass the statistical tests. However, as the number of simulation data points increases, the stochasticity of the turbulence is better determined statistically. We observe that after $750 (a/c_s)$ only one process, ARMA$(1,7)$, passes the tests for the $(a/L_{T_i})^{exp}$ simulation case. For the $0.8(a/L_{T_i})^{exp}$ simulation case, ARMA$(5,2)$ is the only process that can describe the simulation data after $1250 (a/c_s)$. However, in the near-marginal $0.5(a/L_{T_i})^{exp}$ simulation case, there are multiple feasible ARMA fits to the simulation data. This result can be associated with large autocorrelations and small sample sizes, and/or a breakdown of the assumption of Gaussian processes in the near-marginal case. The reasoning behind such behavior can also be seen in the autocorrelation function of the ion energy flux for each simulation case (see Fig. \ref{fig:simacf}). Theoretically, the turbulence timescale is larger than the current simulation length, showing that not all relevant timescale dynamics are resolved within the simulation. {We should note that the best practice in the case of obtaining multiple ARMA models for a time series is to run the simulation longer in order to rule out statistically infeasible ARMA models}\cite{ling2013uncertainty}{. If no ARMA model can fit a time series while the number of samples is very large, one can loosen the statistical hypotheses.
However, if the number of samples is small and no ARMA fit can be found, we can deduce that a Gaussian ARMA process cannot be applied to the simulation case, and non-Gaussian processes}\cite{benjamin2003} {should be modeled for that specific simulation case.} To further study the convergence of the ARMA model in the $(a/L_{T_i})^{exp}$ and $0.8(a/L_{T_i})^{exp}$ simulation cases, in Fig. \ref{fig:armacoefconv} we show the changes in the ARMA constant offset (the link-transformed time series mean) and the $\phi$ and $\theta$ coefficients. We observe that after $500 (a/c_s)$ in the $(a/L_{T_i})^{exp}$ simulation case and after $1250 (a/c_s)$ in the $0.8(a/L_{T_i})^{exp}$ simulation case, the ARMA coefficients and the mean of the turbulence quantity start to converge to a certain value, indicating that there is enough simulation data to obtain a good estimate of the mean and its fractional uncertainty. \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0cm 0cm 0}, width=.97\textwidth]{armapq_qi10-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0cm 0cm 0}, width=1.\textwidth]{armapq_qi8-eps-converted-to.pdf} \end{minipage} \caption{Convergence of ARMA process coefficients for simulated ion energy flux for: (a) $a/L_{T_i} = (a/L_{T_i})^{exp}$, (b) $a/L_{T_i} = 0.8(a/L_{T_i})^{exp}$ as a function of simulation timestep. The $0.5(a/L_{T_i})^{exp}$ case is not shown since there is no definite best ARMA model at the current simulation length.} \label{fig:armacoefconv} \end{figure*} \subsection{Forecasting the Mean Variance for Temporal Uncertainty Tolerance} We extend our analysis to forecast the variance changes at later simulation times. These forecasts are performed by using the best ARMA model fits to generate additional data points for the $Q_i$ time series beyond the gyrokinetic results.
Since the ARMA model inherently assumes stationarity of the time series, one is able to forecast the mean variance at later simulation times once the stochastic process of the turbulence is known. Here, we have fitted an ARMA model to the simulation data using a link function, extended the time series by continuing to simulate the best-fit ARMA process, and calculated the mean variance of the ARMA forecast using the sub-interval averaging technique with the $\min(\rho_Y(1))$ criterion described in Sec. \ref{sec:armatest}. Once $p$, $q$, and $\sigma_\varepsilon$ are found, we can continue generating the time series up to a desired length, use the inverse link $g^{-1}(\cdot)$ to transform the process back to the original series, and perform sub-interval averaging of the approximate link-inverted ARMA fit to estimate the mean variance of the simulation quantities at a later simulation time. In Fig. \ref{fig:armaforecast} we forecast the fractional uncertainty of the ion energy flux for up to $5000 (a/c_s)$ in each simulation case. \begin{figure*}[!htb] \centering \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0cm 0cm 0},clip, width=1.\textwidth]{uqforecast_qi10-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0cm 0cm 0},clip, width=1.\textwidth]{uqforecast_qi8-eps-converted-to.pdf} \end{minipage} \begin{minipage}[b]{0.325\textwidth} \includegraphics[trim={0cm 0cm 0cm 0},clip, width=1.\textwidth]{uqforecast_qi5-eps-converted-to.pdf} \end{minipage} \caption{Forecasting of ion energy flux fractional uncertainty at later simulation times for: (a) $a/L_{T_i} = (a/L_{T_i})^{exp}$, (b) $a/L_{T_i} = 0.8(a/L_{T_i})^{exp}$, (c) $a/L_{T_i} = 0.5(a/L_{T_i})^{exp}$. Red line shows a desired fractional uncertainty threshold of $5\%$.} \label{fig:armaforecast} \end{figure*} We observe that as the simulation length increases, the variance of the mean shrinks.
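The forecasting loop can be sketched as follows (Python; an AR(1)-with-log-link stand-in for the general best-fit ARMA model, with the closed-form AR(1) mean-variance expression from Sec. \ref{sec:armatest} used in place of sub-interval averaging, so the numbers are illustrative only and the back-transformed formula is an approximation):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(n, c, phi, sigma_eps):
    """Generate the link-transformed series g(X_t) from fitted coefficients."""
    y = np.empty(n)
    y[0] = c / (1.0 - phi)                 # start at the process mean
    for t in range(1, n):
        y[t] = c + phi * y[t - 1] + rng.normal(scale=sigma_eps)
    return y

def forecast_fractional_uncertainty(c, phi, sigma_eps, lengths):
    """For each target simulation length, extend the fitted process, invert the
    log link, and report std(mean)/mean via the AR(1) mean-variance formula."""
    out = []
    for n in lengths:
        q = np.exp(simulate_ar1(n, c, phi, sigma_eps))   # g^{-1} = exp
        var_mean = q.var(ddof=1) * (1 + phi) / (n * (1 - phi))
        out.append(np.sqrt(var_mean) / q.mean())
    return out
```

Scanning the forecast over increasing lengths shows the roughly $1/\sqrt{n}$ decay of the fractional uncertainty, which is how the crossing of a desired threshold such as $5\%$ can be located.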
However, the variance reduction does not follow the rate expected when treating the samples as uncorrelated, due to the finite autocorrelation of lags. We can observe that for the $(a/L_{T_i})^{exp}$ simulation case, if one is aiming for five percent fractional uncertainty, we tentatively need to run the simulation for up to $2000 (a/c_s)$. For the $0.8(a/L_{T_i})^{exp}$ simulation case, at the current simulation length, the fractional uncertainty is at the desired level of five percent. For the $0.5(a/L_{T_i})^{exp}$ simulation case, we have forecasted the fractional uncertainty for all four possible ARMA fits. We observe that in the near-marginal case the forecasted fractional uncertainty at $5000 (a/c_s)$ is between $6\%$ and $7\%$, and the simulation needs to be run even longer than $5000 (a/c_s)$ to achieve the desired accuracy of less than five percent fractional uncertainty. {We should note that, as observed from our analytical analysis in Sec.} \ref{sec:armatest}{, the variance of the mean distribution of each ARMA model is a function of $\phi_i$ and $\theta_i$; thus the variance varies from one ARMA model to another. Nonetheless, even for the non-converged $0.5(a/L_{T_i})^{exp}$ case, we have shown that with multiple ARMA fits we can still forecast a range of fractional uncertainties at later simulation times.} These procedures can be used in future validation studies to help determine simulation length and computational resource requirements. \section{Summary and Future Directions} \label{summary} In this paper, we reviewed some previous approaches used within the MFE community for estimating the variance of the mean distribution of simulated quantities. We compared the analytical mean variance of the ARMA$(1,0)$ process with two previously used mean variance techniques, namely the integral correlation time and sub-interval averaging approaches.
We found that the integral correlation time is very sensitive to the choice of kernel estimator, while sub-interval averaging of correlated measurements can be a robust method as long as we have enough temporal samples compared to the autocorrelation timescale. Moreover, we have studied the fitting of ARMA models to gyrokinetic simulated ion energy flux quantities. Through ARMA model fitting, we determined whether the simulation had been run long enough, and forecasted the fractional uncertainty of the turbulence quantity through later simulation times. We should note that only ARMA models with normal error terms have been explored in this publication. Generalized ARMA models\cite{benjamin2003,zheng2015} can be studied and explored with different types of turbulence simulations, to provide a tool to model non-Gaussian processes and to acquire non-symmetric confidence intervals on the mean within nonlinear simulations. As the MFE community further incorporates temporal UQ into plasma turbulence studies, more advanced methods such as Bayesian calibration of temporal models\cite{ling2013uncertainty} can also be seen as a future avenue for stacking different types of uncertainties. \begin{acknowledgments} The authors would like to thank W. M. Nevins and D. R. Mikkelsen for their initial work on this topic, including implementation of the approaches described in Sec. \ref{sec:meanvaroldtechniques} into analysis software. C.H. also thanks both, as well as J. Parker, for many useful discussions on this topic. {The authors would also like to thank O. Meneghini for useful discussions.} This work was supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, award numbers DE-SC0006957 (Center for Simulation of Plasma Microturbulence) and DE-SC0018287 (AToM: Advanced Tokamak Modeling Environment). The simulations were performed on computing resources of the National Energy Research Scientific Computing Center (NERSC), a U.S.
Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} \label{sec:Intro} Solving optimal control problems (OCPs) through numerical methods has become very popular in the fields of trajectory optimization and real-time optimization-based control. These methods often require direct transcription of the infinite-dimensional OCP into a nonlinear programming (NLP) problem of finite dimension, via the introduction of a discretization mesh. Since the NLP solver will only return samples of the solution at a finite number of points in the discretization mesh, additional steps and care must be taken when representing the continuous-time results at time instances other than these sampled points. The common practice today is to directly use interpolation schemes, which are selected in accordance with the type of the discretization mesh \cite{betts2010practical,kelly2017introduction}. The direct collocation method with direct interpolation has one major drawback: the ODE residual errors are forced to zero only at the collocation points. In general, no guarantees on accuracy and constraint satisfaction can be derived for the system trajectories in between collocation points. Most often, posterior analysis is used to identify the intervals where errors are high, and mesh refinement procedures are put in place to modify the discretization mesh. The problem has to be solved iteratively until all the errors are within user-defined tolerances. In this paper, we will present a solution representation method that can largely improve the solution accuracy for the same discretization mesh, so that results of higher quality are obtainable with relatively coarse meshes. This is achieved by minimizing the ODE residual error integrated over the solution trajectory. Section~\ref{sec: OptimizationBasedControl} will provide the background information for solving optimal control problems numerically with the direct collocation method, as well as the issues associated with it.
Section~\ref{sec: ResidualMinimization} will introduce the fundamental concept of the recently developed residual minimization method for optimal control and motivate the development of the proposed scheme, which will be presented in Section~\ref{sec: ProposedScheme}. The benefits of the method will be demonstrated in Section~\ref{sec: ExampleProblem} with two example problems. This will be followed by concluding remarks in Section~\ref{sec: conlclusions}. \section{Numerical Optimal Control} \label{sec: OptimizationBasedControl} Generally speaking, optimization-based control requires the solution of OCPs expressed in the general Bolza form: \begin{subequations} \label{eqn: OCPBolza} \begin{equation} \min_{x,u,p,t_0,t_f} \Phi(x(t_0),t_0,x(t_f),t_f,p) +\int_{t_0}^{t_f} L(x(t),u(t),t,p)\: dt \end{equation} subject to \begin{align} \dot{x}(t)=f(x(t),u(t),t,p),\ &\forall t \in [t_0,t_f] \label{eqn:OCPBolzaDynamics}\\ c(x(t),u(t),t,p)\le 0,\ &\forall t \in [t_0,t_f] \label{eqn:OCPBolzaPathConstraint}\\ \phi(x(t_0),t_0,x(t_f),t_f,p) =0,\ & \end{align} \end{subequations} where $x(t) \in \mathbb{R}^n$ is the state of the system, $u(t) \in \mathbb{R}^m$ is the control input, $p \in \mathbb{R}^s$ are static parameters, and $t_0 \in \mathbb{R}$ and $t_f \in \mathbb{R}$ are the initial and terminal times. $\Phi$ is the Mayer cost functional ($\Phi$: $\mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^s \to \mathbb{R}$), $L$ is the Lagrange cost functional ($L:\mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \times \mathbb{R}^s \to \mathbb{R}$), $f$ is the dynamic constraint ($f:\mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \times \mathbb{R}^s \to \mathbb{R}^n$), $c$ is the path constraint ($c:\mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R} \times \mathbb{R}^s \to \mathbb{R}^{n_g}$), and $\phi$ is the boundary condition ($\phi:\mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^s \to \mathbb{R}^{n_q}$).
In practice, most optimal control problems formulated as~\eqref{eqn: OCPBolza} need to be solved with numerical schemes. In many situations, indirect methods can be difficult to implement, since they require analytic expressions of the optimality conditions. Direct methods have consequently become the de facto standard for solving practical optimal control problems \cite{limebeer2015faster}. With direct methods, the OCP is first discretized, and the resulting NLP is then solved numerically. In this process, if the dynamic equations and the boundary conditions are solved altogether, the corresponding schemes are often referred to as direct collocation methods. \subsection{Direct collocation methods} \label{sec: DirectTranscriptionMethod} Direct collocation methods can be categorized into fixed-order $h$ methods (e.g.\ Euler, Trapezoidal, and Hermite-Simpson (H-S) as in \cite{betts2010practical}), and variable higher-order $p$/$hp$ methods (e.g.\ Legendre-Gauss-Radau (LGR) as in \cite{liu2014hp}). Here, we aim to provide a high-level overview of the method that is valid for both $h$ and $p$/$hp$ methods. With a mesh of size $N=\sum_{k=1}^K N^{(k)}$, the states can be approximated as \begin{equation} \label{eqn: LGRStateApproximation} x^{(k)}(\tau) \approx X^{(k)}(\tau) := \sum_{j=1}^{N^{(k)}}\mathcal{X}_j^{(k)}\mathcal{B}_{j}^{(k)}(\tau), \end{equation} with mesh interval $k \in \{1, \hdots, K\}$, $N^{(k)}$ denoting the number of collocation points for interval $k$, and $\mathcal{B}_{j}^{(k)}(\cdot)$ the basis functions. For classical $h$ methods, $\tau \in \mathbb{R}^{N}$ takes on values in the interval $[0,1]$ representing $[t_0,t_f]$, and $\mathcal{B}_{j}^{(k)}(\cdot)$ are chosen to be elementary B-splines of various orders. For $p$/$hp$ methods, $\mathcal{B}_{j}^{(k)}(\cdot)$ are Lagrange interpolating polynomials over the normalized time interval $\tau \in [-1,1]$.
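The state approximation above can be sketched directly (Python; a minimal scalar-state illustration of evaluating $X(\tau)=\sum_j \mathcal{X}_j \mathcal{B}_j(\tau)$ with a Lagrange basis, not drawn from any particular collocation library):

```python
import numpy as np

def lagrange_basis(tau_nodes, tau):
    """Values B_j(tau) of the Lagrange interpolating polynomials on tau_nodes."""
    tau = np.atleast_1d(np.asarray(tau, dtype=float))
    B = np.ones((len(tau), len(tau_nodes)))
    for j, tj in enumerate(tau_nodes):
        for m, tm in enumerate(tau_nodes):
            if m != j:
                B[:, j] *= (tau - tm) / (tj - tm)
    return B

def interp_state(X_nodes, tau_nodes, tau):
    """Evaluate X(tau) = sum_j X_j * B_j(tau), the p-method state approximation."""
    return lagrange_basis(tau_nodes, tau) @ X_nodes
```

Since $\mathcal{B}_j(\tau_i)=\delta_{ij}$, the approximation reproduces the nodal values exactly; with nodes $\{-1,0,1\}$ and nodal values $\tau_j^2$ it recovers $\tau^2$ everywhere on $[-1,1]$, illustrating why a degree-$(N^{(k)}-1)$ polynomial state is implicit in the transcription.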
We use $X_j^{(k)}$ to represent the approximated states at collocation points, i.e.\ $X_j^{(k)}=X^{(k)}(\tau_j^{(k)})$. Approximation of the input $u$ can be made analogously with $U_j^{(k)}$. Consequently, the OCP~\eqref{eqn: OCPBolza} can be approximated by \begin{subequations} \label{eqn: LGRStateApproximationAll} \begin{multline} \label{eqn: LGRStateApproximationCost} J_c:=\min_{X,U,p,t_0,t_f} \Phi(X_1^{(1)},t_0,X_{f}^{(K)},t_f,p)\\ +\sum_{k=1}^{K}\sum_{i=1}^{N^{(k)}} w_i^{(k)} L(X_i^{(k)},U_i^{(k)},\tau_i^{(k)},t_0,t_f,p) \end{multline} subject to, for $i=1,\hdots,N^{(k)}$ and $k=1,\hdots,K$: \begin{align} \label{eqn: LGRStateApproximationCostDefect} \sum_{j=1}^{N^{(k)}}\mathcal{A}_{ij}^{(k)}X_j^{(k)}+\mathcal{D}_{i}^{(k)}f(X_i^{(k)},U_i^{(k)},\tau_i^{(k)},t_0,t_f,p) = & 0 \\ \label{eqn: LGRStateApproximationPathConstraint} c(X_i^{(k)},U_i^{(k)},\tau_i^{(k)},t_0,t_f,p)\le & 0 \\ \phi(X_1^{(1)},t_0,X_{f}^{(K)},t_f,p) =& 0 \end{align} \end{subequations} where $w_i^{(k)}$ are the quadrature weights for the respective discretization method chosen, $\mathcal{A}$ is the numerical differentiation matrix with $\mathcal{A}_{ij}$ the element $(i,j)$ of the matrix, and $\mathcal{D}$ a constant matrix. The discretized problem can then be solved with off-the-shelf NLP solvers. \subsection{Representing the results} \label{subsec: ResultReconstruction} The NLP solver generates a discretized solution $\mathcal{Z} \coloneqq (X, U, p, \tau, t_0, t_f)$ as sampled data points. Interpolating splines may be used to construct an approximation of the continuous-time optimal trajectory $\tilde{z}(t) \coloneqq (\tilde{x}(\mathcal{Z},t), \tilde{u}(\mathcal{Z},t), t, p)$. \subsubsection{Representation via direct interpolation} \label{subsec: ReconstructionDefault} Conventionally, the interpolation of the solution corresponds to the discretization scheme used in the transcription process.
Thus, we must analyze how the state approximation \eqref{eqn: LGRStateApproximation} enters the optimal control problem formulation~\eqref{eqn: LGRStateApproximationCost}. It is not difficult to see that the only dependency on the basis function in \eqref{eqn: LGRStateApproximationCost} appears in the defect constraint~\eqref{eqn: LGRStateApproximationCostDefect}, through the first term representing the numerical differentiation of the approximated function $X(\cdot)$. For most commonly used numerical schemes, the numerical differentiation formulation has an equivalent integration form. Both forms are presented in Table \ref{tab: NumericalIntegrationSchemes}, where $h_k := \Delta t(\tau_{N}^{(k)}-\tau_1^{(k)})$, $\Delta t := t_f-t_0$, and \begin{equation} \label{eqn: Fi_Discretized} F_i^{(k)}:=f(X_i^{(k)},U_i^{(k)},\tau_i^{(k)},t_0,t_f,p). \end{equation} \begin{table}[b] \small \begin{center} \caption{Typical numerical schemes} \label{tab: NumericalIntegrationSchemes} \begin{tabular}{c|c} \textbf{Method} & \multirow{2}{*}{\textbf{Numerical Integration Scheme}} \\ \textbf{(Order)} & \\ \hline \multirow{2}{*}{Euler (1)} & \multirow{2}{*}{$X_{2}^{(k)}=X_1^{(k)}+h_kF_1^{(k)}$}\\ & \\ \hline \multirow{2}{*}{Trapezoidal (2)} & \multirow{2}{*}{$X_{2}^{(k)}=X_1^{(k)}+\frac{h_k}{2}(F_1^{(k)}+F_{2}^{(k)})$} \\ & \\ \hline Hermite & $X_{2}^{(k)}=\frac{1}{2}(X_{3}^{(k)}+X_1^{(k)})+\frac{h_k}{8}(F_1^{(k)}-F_{3}^{(k)})$\\ Simpson (3)& $X_{3}^{(k)}=X_1^{(k)}+\frac{h_k}{6}(F_1^{(k)}+4F_{2}^{(k)}+F_{3}^{(k)})$\\ \hline \multirow{2}{*}{LGR ($N^{(k)}$)} & $\mathcal{I}^{(k)}=[\mathcal{A}^{(k)}_{2:N+1}]^{-1}$ \\ & $X_{2:N+1}^{(k)}=X_1+\frac{\Delta t}{2}\mathcal{I}^{(k)}F_{1:N}^{(k)}$\\ \hline \end{tabular} \end{center} \end{table} For each numerical scheme, direct interpolation of the OCP solution is possible using splines with the type and order in accordance with Table \ref{tab: ReconstructionContinuity}.
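The $h$-method relations in Table \ref{tab: NumericalIntegrationSchemes} can be written as defect residuals that a discrete solution must drive to zero. The sketch below (function names are our own) checks that the exact linear solution of $\dot{x}=1$ satisfies all three defects:

```python
# One-step forms of the h-method schemes above, written as defect residuals
# (a minimal sketch; function names are our own, not from the paper).
def euler_defect(f, x1, x2, h):
    return x1 + h * f(x1) - x2

def trapezoidal_defect(f, x1, x2, h):
    return x1 + 0.5 * h * (f(x1) + f(x2)) - x2

def hermite_simpson_defects(f, x1, x2, x3, h):
    r_mid = 0.5 * (x1 + x3) + (h / 8.0) * (f(x1) - f(x3)) - x2
    r_end = x1 + (h / 6.0) * (f(x1) + 4.0 * f(x2) + f(x3)) - x3
    return r_mid, r_end

# the exact linear solution x(t) = t of xdot = 1 satisfies all three schemes
f = lambda x: 1.0
h = 0.1
assert abs(euler_defect(f, 0.0, h, h)) < 1e-12
assert abs(trapezoidal_defect(f, 0.0, h, h)) < 1e-12
r_mid, r_end = hermite_simpson_defects(f, 0.0, h / 2, h, h)
assert abs(r_mid) < 1e-12 and abs(r_end) < 1e-12
```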
\begin{table}[tb] \small \begin{center} \caption{Continuity of the reconstructed solution (a.m.: at most; p.w.: piecewise)} \label{tab: ReconstructionContinuity} \begin{tabular}{c|c|c|c} \hline \textbf{Method} & \textbf{Dynamics ($\dot{\tilde{x}}$)} & \textbf{States ($\tilde{x}$)} & \textbf{Inputs ($\tilde{u}$)} \\ \hline Euler & p.w.\ constant & a.m.p.w.\ linear & \multirow{4}{*}{\shortstack{same \\ as \\dynamics}} \\ \cline{1-3} Trape. & a.m.p.w.\ linear & a.m.p.w.\ quad. &\\ \cline{1-3} H-S & a.m.p.w.\ quad. & a.m.p.w.\ cubic & \\ \cline{1-3} LGR & a.m.\ order $N^{(k)}$ & a.m.\ order $N^{(k)}$+1 &\\ \hline \end{tabular} \end{center} \end{table} For example, with Hermite-Simpson transcription, the reconstructed state trajectory inside mesh interval $k$ using cubic splines will be \begin{multline} \label{eqn: HSStateReconstruction} \tilde{x}^{(k)}(\mathcal{Z},t)=X_1^{(k)}+F_1^{(k)}(t-t_1^{(k)})\\ +\frac{1}{2}\bigg(-3F_1^{(k)}+4F_2^{(k)}-F_3^{(k)}\bigg)\frac{(t-t_1^{(k)})^2}{h_k}\\ +\frac{2}{3}\bigg(F_1^{(k)}-2F_2^{(k)}+F_3^{(k)}\bigg)\frac{(t-t_1^{(k)})^3}{h_k^2}, \end{multline} the dynamics trajectory with quadratic splines will be \begin{multline} \label{eqn: HSDynamicsReconstruction} \dot{\tilde{x}}^{(k)}(\mathcal{Z},t)=F_1^{(k)}+\bigg(-3F_1^{(k)}+4F_2^{(k)}-F_3^{(k)}\bigg)\frac{t-t_1^{(k)}}{h_k}\\ +\bigg(2F_1^{(k)}-4F_2^{(k)}+2F_3^{(k)}\bigg)\bigg(\frac{t-t_1^{(k)}}{h_k}\bigg)^2, \end{multline} and the control trajectory with quadratic splines will have the expression \begin{multline} \label{eqn: HSInputReconstruction} \tilde{u}^{(k)}(\mathcal{Z},t)=\frac{2}{h_k^2}(t-\frac{1}{2}t_1^{(k)}-\frac{1}{2}t_3^{(k)})(t-t_3^{(k)})U_1^{(k)}\\ -\frac{4}{h_k^2}(t-t_1^{(k)})(t-t_3^{(k)})U_2^{(k)}\\ +\frac{2}{h_k^2}(t-t_1^{(k)})(t-\frac{1}{2}t_1^{(k)}-\frac{1}{2}t_3^{(k)})U_3^{(k)}, \end{multline} for all $t \in [t_1^{(k)}, t_3^{(k)}]$.
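The Hermite-Simpson reconstruction formulas \eqref{eqn: HSStateReconstruction} and \eqref{eqn: HSDynamicsReconstruction} can be transcribed directly into code and checked for internal consistency: the derivative interpolant must reproduce $F_1$, $F_2$, $F_3$ at the nodes, and the state interpolant must reproduce the Simpson update at the interval end. A minimal sketch (variable names are our own):

```python
# Hermite-Simpson interpolants for one mesh interval (scalar state,
# transcribed from the formulas above; names are our own)
def hs_state(t, t1, h, X1, F1, F2, F3):
    s = t - t1
    return (X1 + F1 * s
            + 0.5 * (-3 * F1 + 4 * F2 - F3) * s**2 / h
            + (2.0 / 3.0) * (F1 - 2 * F2 + F3) * s**3 / h**2)

def hs_state_dot(t, t1, h, F1, F2, F3):
    s = (t - t1) / h
    return (F1 + (-3 * F1 + 4 * F2 - F3) * s
            + (2 * F1 - 4 * F2 + 2 * F3) * s**2)

# consistency checks with arbitrary node data
t1, h = 0.0, 0.5
X1, F1, F2, F3 = 1.0, 0.3, -0.1, 0.7
assert abs(hs_state_dot(t1, t1, h, F1, F2, F3) - F1) < 1e-12
assert abs(hs_state_dot(t1 + h / 2, t1, h, F1, F2, F3) - F2) < 1e-12
assert abs(hs_state_dot(t1 + h, t1, h, F1, F2, F3) - F3) < 1e-12
simpson = X1 + h / 6 * (F1 + 4 * F2 + F3)   # end-point Simpson update
assert abs(hs_state(t1 + h, t1, h, X1, F1, F2, F3) - simpson) < 1e-12
```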
The whole trajectory $\tilde{x}(\mathcal{Z},t)$, $\dot{\tilde{x}}(\mathcal{Z},t)$ and $\tilde{u}(\mathcal{Z},t)$ can then be expressed as piecewise polynomials. For $p/hp$ methods, Lagrange interpolating polynomials are often used as basis functions during the transcription process. An alternative version, namely barycentric Lagrange interpolation, is often used instead for solution interpolation, due to its improved numerical stability. \subsubsection{Evaluation of errors} \label{subsec: SolRepresentationAndErrorAnalysis} The quality of the interpolated solution needs to be assured through error analysis, assessing the level of accuracy and constraint satisfaction. Firstly, any valid trajectory $\tilde{z}(t)$ must satisfy the system dynamics \eqref{eqn:OCPBolzaDynamics} with a good level of accuracy. Therefore, one measure for the error due to discretization and interpolation is through the calculation of the ODE residual $\varepsilon_r(t) \in \mathbb{R}^n$ defined as \begin{equation} \label{eqn: DiscretizationError} \varepsilon_r(t):=\dot{\tilde{x}}(\mathcal{Z},t)-f(\tilde{x}(\mathcal{Z},t), \tilde{u}(\mathcal{Z},t), t, p). \end{equation} For the discretized problem, the error in the state variables over each interval in between collocation points can then be estimated with the integral \begin{equation*} \eta_j:=\int^{t_{j+1}}_{t_j} \|\varepsilon_r(s)\|_{2}\: ds, \end{equation*} as a single metric for a multi-variable problem, or \begin{equation*} \sigma_{j,q}:=\int^{t_{j+1}}_{t_j} |\varepsilon_{r_q}(s)|\: ds, \text{ for } q=1,\hdots,n, \end{equation*} for each dynamics equation separately. $\eta \in \mathbb{R}^N$ or $\sigma \in \mathbb{R}^{N \times n}$ are typically referred to as the \emph{absolute local error}~\cite{betts2010practical}. The operator $\|\cdot\|_{2}$ is the vector 2-norm. The integral can be practically estimated by high-order quadrature.
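The quadrature estimate of the absolute local error $\eta_j$ can be sketched as follows (an illustrative implementation using Gauss-Legendre quadrature; function names are our own):

```python
import numpy as np

# Estimating eta_j = int ||eps_r(s)||_2 ds over [t_a, t_b]
# by Gauss-Legendre quadrature (minimal sketch, our own names).
def local_error(x_tilde, xdot_tilde, u_tilde, f, t_a, t_b, n_quad=5):
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    ts = 0.5 * (t_b - t_a) * nodes + 0.5 * (t_b + t_a)
    res = [np.linalg.norm(xdot_tilde(t) - f(x_tilde(t), u_tilde(t), t))
           for t in ts]
    return 0.5 * (t_b - t_a) * np.dot(weights, res)

# toy check: x(t) = t^2 against the (wrong) dynamics f = 0 gives
# eta = int_0^1 |2t| dt = 1
x_t = lambda t: np.array([t * t])
xd_t = lambda t: np.array([2.0 * t])
u_t = lambda t: np.array([0.0])
eta = local_error(x_t, xd_t, u_t, lambda x, u, t: np.array([0.0]), 0.0, 1.0)
assert abs(eta - 1.0) < 1e-12
```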
In addition, numerical discretization inevitably leads to possible constraint violations of the trajectories in between the collocation points. For path and box constraints that are expressed semi-explicitly as \eqref{eqn:OCPBolzaPathConstraint}, the \textit{absolute local constraint violation} $\varepsilon_{c_\zeta}(t) \in \mathbb{R}^{n_g}$ may be straightforwardly estimated by \begin{align*} \label{eqn: ConstraintViolationError} \varepsilon_{c_\zeta}(t):=\begin{cases} 0 & \text{if } c_\zeta(\tilde{z}(t)) \leq 0\\ c_\zeta(\tilde{z}(t)) & \text{if } c_\zeta(\tilde{z}(t)) > 0\\ \end{cases}, \text{ for } \zeta=1,\hdots,n_g. \end{align*} Once the distributions of errors are calculated, appropriate modifications can be made to the discretization mesh, to iteratively re-solve the problem until the obtained solution fulfills all predefined error tolerances ($\eta_{tol}$ and $\varepsilon_{c_{tol}}$). This process is called mesh refinement (MR). Common approaches for mesh refinement include adding intervals and/or changing the polynomial order. The NLP formulated based on the new mesh is warm started using the previous solution from the coarser mesh. This can often lead to significantly faster convergence, thus reducing the overall computation time. \subsubsection{Problems associated with direct reconstruction} Practical experience has shown that trajectory interpolation in accordance with the discretization scheme is not the best choice. In many cases large discretization errors and constraint violations occur inside the intervals in between collocation points. Furthermore, if the optimal control trajectory is discontinuous, direct interpolation using polynomials can often result in a Gibbs-like phenomenon, inducing non-physical oscillations in the solution. These issues are fundamentally rooted in the direct collocation formulation. Firstly, states, dynamics and controls can rarely all be approximated accurately by polynomials.
Even in the simple case where $f(x,u)=\dot{x}(t)=ax(t)+u(t)$ and $u(t)=1$ are both polynomials (thus can be represented exactly by polynomials), the corresponding state trajectory $x(t)=x(0)e^{at}+\int_0^t e^{a(t-s)}u(s)\ ds$ is clearly not a polynomial and approximation errors should be expected. It is then important to note that driving the defect constraint \eqref{eqn: LGRStateApproximationCostDefect} to zero (or machine precision) at collocation points does not imply that the polynomial functions used for the state and input approximations in the NLP will satisfy the dynamic equations and constraints in between collocation points. In fact, the opposite can and often does occur. It is well known in the field of curve fitting that if a function cannot be exactly represented by a polynomial, forcing the polynomial to pass exactly through some sampled data points generally results in larger errors in comparison to fitting with a least-squares criterion. The same analogy can be applied here: forcing the defect constraints to be zero at collocation points will generally result in larger overall defect errors for the whole trajectory, in comparison to a method that minimizes the integral of the defect errors in a least-squares manner. This observation motivated the development of the integrated residual minimization scheme. \section{Method of integrated residual minimization} \label{sec: ResidualMinimization} The concept of integrated residual minimization is motivated by the recently-proposed method in~\cite{neuenhofen2018dynamic}, which is a generalization of the least-squares approach for solving differential equations~\cite{ascher1978,locker1978} to solving dynamic optimization problems.
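The curve-fitting observation above can be checked numerically: fit $e^t$ on $[0,1]$ with a quadratic, once by interpolation at three points and once by least squares over a dense grid, and compare the overall error. This is an illustrative sketch of our own, not taken from the paper:

```python
import numpy as np

t_dense = np.linspace(0.0, 1.0, 1001)
target = np.exp(t_dense)                            # not a polynomial

# (a) degree-2 interpolation through three sample points
t_nodes = np.array([0.0, 0.5, 1.0])
p_interp = np.polyfit(t_nodes, np.exp(t_nodes), 2)  # exact at the nodes

# (b) degree-2 least-squares fit over the whole interval
p_lsq = np.polyfit(t_dense, target, 2)

err_interp = np.mean((np.polyval(p_interp, t_dense) - target) ** 2)
err_lsq = np.mean((np.polyval(p_lsq, t_dense) - target) ** 2)
assert err_lsq < err_interp   # least squares: smaller overall error
```

The interpolant is exact at its three nodes yet carries a larger mean squared error over the whole interval, mirroring the argument for minimizing integrated residuals instead of enforcing pointwise defects.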
The idea is that instead of forcing the ODE residuals \eqref{eqn: DiscretizationError} to be zero at collocation points with \eqref{eqn: LGRStateApproximationCostDefect}, the method tries to minimize the square of the 2-norm of the ODE residuals for the represented solution polynomials integrated along the whole trajectory, i.e. \begin{equation} \label{eqn: ResidualMinimizationOrg} \min_{\hat{x}, \hat{u}, t, p} \int_{t_0}^{t_f} r(\hat{x}(t), \hat{u}(t), t, p)\: dt \end{equation} with \begin{equation} \label{eqn: ResidualMinimizationOrgr} r(\hat{x}(t), \hat{u}(t), t, p):=\| \dot{\hat{x}}(t)-f(\hat{x}(t), \hat{u}(t), t, p)\|^2_{2}. \end{equation} As presented in \cite{neuenhofen2018dynamic}, the expressions for functions $\hat{x}$ and $\hat{u}$ can be polynomials of any standard types, with polynomial coefficients $P_{j,q}$ as decision variables. This choice of representation increases the computational complexity of the problem in comparison to direct collocation: \begin{itemize} \item One extra decision variable is required for every state and input variable in every mesh segment. \item Simple bounds need to be implemented as path constraints. \item Additional computations to obtain the initial guesses of the decision variables from an estimation of the solution trajectory are required. \item The magnitudes of decision variables may span a wide numerical range. This is detrimental in terms of ensuring consistent numerical accuracy in computations. \item If finite differences are used for obtaining the derivative information, the calculations can be less accurate. \item Proper scaling of decision variables can be difficult. \item State continuity inbetween mesh segments might need to be enforced with additional path constraints. \end{itemize} Therefore, we need to develop a method that avoids the above-listed drawbacks, and to a great extent, retains the computational efficiency of direct collocation formulation. 
\section{The proposed scheme} \label{sec: ProposedScheme} Based on the above observations, we propose a method to generate solution trajectories that can be orders of magnitude more accurate than direct interpolation in terms of the ODE defect error, without increasing the size of the discretization mesh. The method retains the same decision variables as in \eqref{eqn: LGRStateApproximationAll}, namely $\mathcal{Z}:=(X,U,p,t_0,t_f)$, and uses the interpolation polynomial formulas $\tilde{x}(\mathcal{Z},\cdot)$, $\dot{\tilde{x}}(\mathcal{Z},\cdot)$ and $\tilde{u}(\mathcal{Z},\cdot)$ to directly map $\mathcal{Z}$ to $\hat{x}(\cdot)$, $\dot{\hat{x}}(\cdot)$ and $\hat{u}(\cdot)$ in \eqref{eqn: ResidualMinimizationOrg} and \eqref{eqn: ResidualMinimizationOrgr}. For example, consider Hermite-Simpson discretization. The input trajectory inside mesh interval $k$ can be directly represented by the polynomial as in \eqref{eqn: HSInputReconstruction}, based on the values of the decision variables $U_1^{(k)}$, $U_2^{(k)}$ and $U_3^{(k)}$. However, for the state trajectory, one challenge arises. For solutions to \eqref{eqn: LGRStateApproximationAll}, continuity of the state variables is automatically fulfilled when using \eqref{eqn: HSStateReconstruction} as the interpolation equation; however, this is not generally the case for arbitrary solutions that do not fulfill the defect constraint~\eqref{eqn: LGRStateApproximationCostDefect}. To avoid imposing additional path constraints for state continuity, we make use of the original Hermite-Simpson numerical integration scheme (in Table \ref{tab: NumericalIntegrationSchemes}), and obtain the following relationships: \begin{align} \label{eqn: HSIntegrationF2} F_2^{(k)}=&-\frac{1}{2h_k}(5X_1^{(k)}-4X_2^{(k)}-X_3^{(k)}+F_1^{(k)}h_k)\\ \label{eqn: HSIntegrationF3} F_3^{(k)}=&\frac{1}{h_k}(4X_1^{(k)}-8X_2^{(k)}+4X_3^{(k)}+F_1^{(k)}h_k).
\end{align} As a sanity check, substituting $t=t_2^{(k)}=t_1^{(k)}+h_k/2$ and \eqref{eqn: HSIntegrationF2} into \eqref{eqn: HSStateReconstruction} will result in $X_2^{(k)}$, and substituting $t=t_3^{(k)}=t_1^{(k)}+h_k$ and \eqref{eqn: HSIntegrationF3} into \eqref{eqn: HSStateReconstruction} will result in $X_3^{(k)}$. Thus, with $F_1^{(k)}$, $F_2^{(k)}$ and $F_3^{(k)}$ calculated based on \eqref{eqn: Fi_Discretized}, \eqref{eqn: HSIntegrationF2} and \eqref{eqn: HSIntegrationF3}, respectively, the interpolation formula \eqref{eqn: HSStateReconstruction} will guarantee state trajectory continuity without the need to impose additional constraints. Thus, an optimization problem for representing the OCP solution can be formulated as \begin{subequations} \label{eqn: ResMinInterpolationAll} \begin{equation} \label{eqn: ResMinInterpolationCost} \min_{X,U,p,t_0,t_f} \sum_{k=1}^{K} R(X^{(k)},U^{(k)},\tau^{(k)},\tau_{q}^{(k)},t_0,t_f,p) \end{equation} subject to, for $i=1,\hdots,N^{(k)}$ and $k=1,\hdots,K$, \begin{align} \begin{split} \sum_{k=1}^{K}\sum_{i=1}^{N^{(k)}} w_i^{(k)} L(X_i^{(k)},U_i^{(k)},\tau_i^{(k)},t_0,t_f,p) \quad &\\ +\Phi(X_1^{(1)},t_0,X_{f}^{(K)},t_f,p) \le & J_c \end{split}\\ \label{eqn: ResMinInterpolationPathConstraint} c(X_i^{(k)},U_i^{(k)},\tau_i^{(k)},t_0,t_f,p)\le & 0 \\ \phi(X_1^{(1)},t_0,X_{f}^{(K)},t_f,p) =& 0 \end{align} \end{subequations} with $J_c \in \mathbb{R}$ the value of the objective obtained from direct collocation. $R$ is the residual cost: for certain problems, this can be calculated precisely with analytical expressions; for most practical problems, quadrature rules of sufficiently high order can be used, i.e. 
\begin{multline*} R(X^{(k)},U^{(k)},\tau^{(k)},\tau_{q}^{(k)},t_0,t_f,p):=\\ \sum_{\iota=1}^{N_q^{(k)}}w_{\iota}^{(k)}\|\dot{\tilde{x}}(\mathcal{Z},t_{q_{\iota}}^{(k)})-f(\tilde{x}(\mathcal{Z},t_{q_{\iota}}^{(k)}), \tilde{u}(\mathcal{Z},t_{q_{\iota}}^{(k)}), t_{q_{\iota}}^{(k)}, p)\|^2_{2} \end{multline*} with $t_{q_{\iota}}^{(k)}:=\frac{t_f^{(k)}-t_0^{(k)}}{2}\tau_{q_{\iota}}^{(k)}+\frac{t_f^{(k)}+t_0^{(k)}}{2}$, and $\tau_q^{(k)} \in \mathbb{R}^{N_q^{(k)}}$ the quadrature mesh for approximating the integral inside a mesh interval. Typically a Gaussian quadrature of order $N_q^{(k)}\ge 4N^{(k)}+1$ is required for good accuracy \cite{neuenhofen2018dynamic}, and $w_{\iota}^{(k)}$ are the corresponding quadrature weights. Unlike the penalty-barrier finite element method (PBF) proposed in \cite{neuenhofen2018dynamic}, which requires tailored solvers for good performance, the problem formulation in \eqref{eqn: ResMinInterpolationAll} can be efficiently solved with the same off-the-shelf sparse NLP solvers as in the case of direct collocation. The transcription process and the majority of the computational components can be shared between the two, and warm starting techniques can be exploited to further accelerate the computations. \section{Example Problems} \label{sec: ExampleProblem} Here, we present two example problems to demonstrate the main advantages of the proposed scheme. Both OCPs are transcribed using the optimal control software \texttt{ICLOCS2} \cite{ICLOCS2}, and numerically solved to a tolerance level of $10^{-9}$ with the interior-point NLP solver \texttt{IPOPT} \cite{wachter2006implementation} (version 3.12.9). Since extremely coarse meshes are used, the emphasis of the comparison will not be on yielding solutions that look similar to the true optimal trajectory. Instead, the goal is to obtain sub-optimal solutions that, when implemented, can result in low discrepancies between the represented solution and the implementation outcome.
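The continuity relations \eqref{eqn: HSIntegrationF2} and \eqref{eqn: HSIntegrationF3} used in the proposed scheme can also be verified numerically: with $F_2$ and $F_3$ computed from arbitrary node values, the cubic reconstruction passes through $X_2$ at the midpoint and $X_3$ at the end of the interval, so no extra continuity constraints are needed. A minimal sketch (variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
X1, X2, X3, F1 = rng.standard_normal(4)   # arbitrary node values
h = 0.5

# F2 and F3 recovered from the Hermite-Simpson integration relations
F2 = -(5 * X1 - 4 * X2 - X3 + F1 * h) / (2 * h)
F3 = (4 * X1 - 8 * X2 + 4 * X3 + F1 * h) / h

def x_tilde(s):
    # cubic Hermite-Simpson state reconstruction, s = t - t1
    return (X1 + F1 * s + 0.5 * (-3 * F1 + 4 * F2 - F3) * s**2 / h
            + (2.0 / 3.0) * (F1 - 2 * F2 + F3) * s**3 / h**2)

# the reconstruction recovers X2 at the midpoint and X3 at the end,
# mirroring the sanity check in the text
assert abs(x_tilde(h / 2) - X2) < 1e-9
assert abs(x_tilde(h) - X3) < 1e-9
```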
\subsection{Two-Link Robot Arm} The two-link robot arm problem presented here was adapted from Example 2, Section 12.4.2 of \cite{luus2000iterative}. Consider a system of two identical beams (each with mass $m=1$\,kg, length $l=1$\,m, and the same moment of inertia), connected at two actuated joints. The objective is to reposition a payload of mass $M=1$\,kg in minimum time with the addition of a regularization term: \begin{equation*} \min_{x,u,t_f} \quad t_f+0.01\int_{0}^{t_f} u_1(t)^2+u_2(t)^2\: dt. \\ \end{equation*} The system has angular rates $\omega_{\phi}$, $\omega_{\psi}$, and angles $\phi$, $\chi$ as state variables, and nondimensionalized torques $u_1$ and $u_2$ as inputs. Furthermore, the variable simple bounds and boundary conditions are imposed in accordance with the reference, except that $\chi(t_f)=0.5$\,rad, and $\phi(t_f)=0.522$\,rad. Figures \ref{fig:Coll_roboticArm} and \ref{fig:ResMin_roboticArm} illustrate the solutions to the two-link robot arm problem generated with the two different solution representation methods.
\begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{images/Examples/RoboticArm/Coll_roboticArm.eps} \caption{Solution to the two-link robot arm problem, direct interpolation method for solution representation, direct collocation with Hermite-Simpson discretization, 10 mesh intervals} \label{fig:Coll_roboticArm} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{images/Examples/RoboticArm/ResMin_roboticArm.eps} \caption{Solution to the two-link robot arm problem, integrated residual minimization method for solution representation, direct collocation with Hermite-Simpson discretization, 10 mesh intervals} \label{fig:ResMin_roboticArm} \end{center} \end{figure} Presented alongside are the outcomes from the actual implementation of the resultant input trajectory on the same dynamic model, solved with a non-stiff variable order ODE solver (Matlab \texttt{ode113}) with a time step 100 times smaller than the discretization grid of the optimization problem. Observe that: \begin{itemize} \item Despite a very small tolerance and successful termination of the NLP solver, the direct collocation solution and the direct interpolation of the solution exhibit large discretization errors, leading to significant deviations in the state trajectories when the inputs are directly implemented. In contrast, only minor discrepancies can be observed for the solutions represented using integrated residual minimization, on the same coarse grid with relatively low-order discretization. \item Although the constraints are implemented in the exact same way, the integrated residual minimization method alleviates the issues of constraint violations inside the mesh intervals to a greater extent compared to the solutions represented by direct interpolation.
This is because these constraint violations are often related to the large ODE defect errors in between collocation points, which are directly dealt with by the integrated residual minimization scheme. \end{itemize} \subsection{Aircraft Go-around in the Presence of Windshear} \label{sec: AircraftExample} Based on previous developments \cite{miele1988optimal}, a problem is presented in \cite{betts2010practical} where the aircraft needs to stay as high above the ground as possible after encountering a severe windshear. Firstly, the simplified dynamics of the aircraft can be described by \begin{subequations} \label{eqn: DynamicsAircraftGPW} \begin{align*} \dot{d}(t) =&V(t)\cos(\gamma(t))+w_d(d(t)) \\ \dot{h}(t) =&V(t)\sin(\gamma(t))+w_h(d(t),h(t)) \\ \begin{split} \dot{V}(t) =&\frac{1}{m}[T(V(t))\cos(\alpha(t)+\delta)-D(V(t),\alpha(t))]\\ &-g\sin(\gamma(t))-\dot{w_d}(d(t),\dot{d}(t))\cos(\gamma(t))\sin(\gamma(t))\\ &-\dot{w_h}(d(t),h(t),\dot{d}(t),\dot{h}(t))\sin(\gamma(t)) \end{split}\\ \begin{split} \dot{\gamma}(t) =&\frac{1}{mV(t)}[T(V(t))\sin(\alpha(t)+\delta)+L(V(t),\alpha(t))]\\ &-\frac{g\cos(\gamma(t))}{V(t)}+\frac{1}{V(t)}\dot{w_d}(d(t),\dot{d}(t))\sin(\gamma(t))\\ &-\frac{1}{V(t)}\dot{w_h}(d(t),h(t),\dot{d}(t),\dot{h}(t))\cos(\gamma(t)) \end{split} \end{align*} \end{subequations} with $d$ the horizontal distance, $h$ the altitude, $V$ the true airspeed, and $\gamma$ the flight path angle. The angle of attack $\alpha$ is the actual control input to the physical system (aircraft); however, in order to implement a constraint on its rate of change, $\nu$ is introduced as the angle of attack rate and serves as the control input with $\dot{\alpha}(t) = \nu(t)$. This implementation is known to exhibit singular arc behaviour \cite{nie2018should}, leading to fluctuations and a ringing phenomenon in the solutions. Polynomial models are used for the maximum thrust $T_{max}$, lift coefficient $C_L$ and drag coefficient $C_D$, to model the thrust $T$, lift $L$ and drag $D$.
A simplified windshear model is used with wind speed contributions represented by a horizontal component $w_d$ and a vertical component $w_h$. Other details about the aerodynamic modelling, parameter values, simple bounds and boundary conditions are all the same as in \cite{betts2010practical}. A static parameter $h_{min}$ is introduced to represent the minimum altitude. The objective is therefore to minimize $-h_{min}$ together with path constraint $h(t) \ge h_{min}$. The solutions to this problem are collectively shown in Figures \ref{fig:CollocationSolution} and \ref{fig:ResMinSolution}. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{images/Examples/WindshearLanding/CollocationSolution.eps} \caption{Solution to the aircraft go-around in the windshear problem, direct interpolation method for solution representation, direct collocation with Hermite-Simpson discretization, 15 mesh intervals} \label{fig:CollocationSolution} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{images/Examples/WindshearLanding/ResMinSolution.eps} \caption{Solution to the aircraft go-around in the windshear problem, integrated residual minimization method for solution representation, direct collocation with Hermite-Simpson discretization, 15 mesh intervals} \label{fig:ResMinSolution} \end{center} \end{figure} In addition to the advantages identified in the previous example, solution representation via integrated residual minimization has clear benefits in suppressing fluctuations. This ringing phenomenon is frequently observed in direct collocation solutions of singular control problems, as well as the directly interpolated solution trajectories. \section{Conclusions} \label{sec: conlclusions} Although direct interpolation of direct collocation solutions can sometimes yield results of good quality, it is very difficult to guarantee the level of accuracy without posterior error assessments and mesh design iterations.
As demonstrated by the two examples, despite successfully solving the NLP to negligibly small tolerance levels, the validity of the solution may still be questionable with large discrepancies. The flaws are rooted in direct collocation schemes, where the ODE defect errors are forced to zero at collocation points, regardless of the errors inside the intervals. The proposed solution representation method of integrated residual minimization fundamentally addresses this shortcoming by instead minimizing the integrated ODE residual error along the whole trajectory. As a result, solutions of higher accuracy are obtainable with the same discretization mesh, allowing the mesh to be relatively coarse. This benefit is clearly demonstrated with the example problems: despite being highly nonlinear, moderately complex and solved on a coarse low-order mesh, with one of them exhibiting singular arc behaviours, only minor differences are observed between the represented solution and the actual implementation.
\section{Consequences for $\mathbb{R}^d$} In this section we take balls to be closed. Unlike the case of the Heisenberg groups, where we have $E(X,d) < \infty$ and $L(X,d) = \infty$, in Banach spaces we always have $E(X,d) = L(X,d)$. \begin{theorem} \label{kiss} If $(X, \|\cdot\|)$ is a Banach space, then $E(X, \| \cdot\|) = L(X, \| \cdot\|)$. \end{theorem} \begin{proof} It suffices to show that $E(X, \| \cdot\|) \ge L(X, \| \cdot\|)$. Both $E (X, \| \cdot\|)$ and $L (X, \| \cdot\|)$ are defined as suprema, so it is enough to prove that given any finite, intersecting Besicovitch family $\mathcal{C} := \{B^{cl} (x_1, r_1), \dots, B^{cl} (x_n, r_n) \}$, we can produce an equal-radius intersecting Besicovitch family of the same cardinality. Choose $y \in \bigcap \mathcal{C}$, and let $r_y := \min\{ \|x_1 - y\| , \dots, \|x_n - y\|\}$. By a translation and a dilation, we may assume that $y = 0$ and $r_y = 1$. We claim that $\mathcal{C}^\prime := \{B^{cl} (x_1/\|x_1\|, 1), \dots, B^{cl} (x_n/\|x_n\|, 1) \}$ is a Besicovitch family. To show that any two vectors in $ \{ x_1/\|x_1\|, \dots, x_n/\|x_n\|\}$ are at distance $ > 1$, we choose a pair of centers $x_i$ and $x_j$ of balls from $\mathcal{C}$, with, say, $\|x_i\| \ge \|x_j\|$. Since $\|x_i - x_j\| > \|x_i\|$, using the lower bound for the angular distances from \cite[Corollary 1.2]{Ma}, we get \begin{equation*} \left\|\frac{x_i}{\|x_i\|}-\frac{x_j}{\|x_j\|}\right\| \ge \frac{\|x_i - x_j\| - \left| \|x_i\| - \|x_j\|\right| }{\min\left\{\|x_i\|, \|x_j\|\right\} } = \frac{\|x_i - x_j\| - \|x_i\| + \|x_j\|}{ \|x_j\| } > 1. \end{equation*} \end{proof} As is the case with the maximal operator, cf. \cite[Theorem 3.3]{Al1}, in $\mathbb{R}^d$ it is possible to construct a measure $\mu$ for which the supremum is attained, with $r = 1$. We omit the proof. \begin{theorem} \label{attained} Let $\|\cdot\|$ be any norm on $\mathbb{R}^d$.
Then there exists a discrete measure $\mu$ such that $\|A^{cl}_{1,\mu} \|_{L^1(\mu)\to L^1(\mu)} = E(\mathbb{R}^d, \| \cdot\|)$. \end{theorem} The equality $E(X, \| \cdot\|) = L(X, \| \cdot\|)$ allows one to transfer uniform bounds known for the centered maximal operator to uniform bounds for the averaging operators. In one dimension it is obvious that $E(X, \| \cdot\|) = 2$. This observation extends to arbitrary measures on the real line the upper bound 2 that appears in Theorem 4.2 for the standard exponential distribution (given by $d P(t) = \mathbf{1}_{(0,\infty)} (t) \ e^{-t} dt$). In higher dimensions, from Corollaries 3.4, 3.5 and 3.6 of \cite{Al1} we obtain the following \begin{corollary} \label{infinitybounds} Given any norm $ \|\cdot\| $ on the plane, if the unit ball is a parallelogram then $\sup_{r, \mu}\|A_{r,\mu} \|_{L^1(\mu)\to L^1(\mu)} = 4 $, while $\sup_{r, \mu}\|A_{r,\mu} \|_{L^1(\mu)\to L^1(\mu)} = 5$ in every other case. With balls defined using the $\ell_\infty$ norm, the sharp uniform bound for $\sup_{r, \mu}\|A_{r,\mu} \|_{L^1(\mu)\to L^1(\mu)}$ on $(\mathbb{R}^d , \|\cdot\|_\infty)$ is $2^d$. Furthermore, the bound is attained. For the Euclidean norm we have $\sup_{r, \mu}\|A_{r,\mu} \|_{L^1(\mu)\to L^1(\mu)} = 12$ in dimension 3, and the bound is attained. Asymptotically, in dimension $d$ the following bounds hold: \begin{equation}\label{asym} (1 + o(1)) \sqrt{\frac{3 \pi}{8}} \log {\frac{3}{2 \sqrt 2}} \ d^{3/2} \ \left(\frac{2}{\sqrt{3}}\right)^d \le \sup_{r, \mu}\|A_{r,\mu} \|_{L^1(\mu)\to L^1(\mu)} \le 2^{0.401 (1 + o(1)) d}. \end{equation} \end{corollary} \begin{remark} For $\mathbb{R}^d$ with the Euclidean norm and the standard Gaussian measure $\gamma$, it was shown in \cite[Theorem 4.3]{Al3} that $\sup_{r>0}\|A_r\|_{L^1(\gamma)\to L^1(\gamma)} \le (2 + \varepsilon)^d$, whenever $\varepsilon > 0$ and $d$ is large enough. The upper bounds from the preceding result (valid for all measures) represent a substantial improvement.
\end{remark}
\section{Introduction} There are several generalizations of classical Fourier heat conduction that can model second sound phenomena and ballistic propagation. These theories are increasingly important in nanostructures and are the subject of various challenging physical, mathematical and numerical investigations; see e.g. \cite{Sob14a,Sob16a,Sob17a,Sob18a,Zhu16a1,Zhu17a,Zhu17a1,ZhuEta18a,RieEta18a,SelEta17a,Res16a,CiaRes16a,CiaREs19a,VazRio12a,CarEta19a,NieCao19a,Mac19b}. Second sound, the wave-like propagation of heat, is due to the inertia of internal energy. This property can be modelled by a new nonequilibrium thermodynamic state variable. A straightforward choice for this additional vectorial state variable is the heat flux \cite{Mul67a1,Gya77a}. This choice leads to theories of Extended Thermodynamics (ET). There one requires compatibility with kinetic theory \cite{JouAta92b,LebEta08b,MulRug98b,CimEta14a,Van16a,SelEta16b}, and the structure of the continuum theory will be compatible with the equations derived by a moment series expansion of the Boltzmann equation, considering also a Callaway collision integral with two relaxation times. This compatibility with kinetic theory is a necessity for any phenomenology: a universal macroscopic approach must be valid for various micro- and mesostructures and, in particular, must be compatible with the theory of rarefied gases. The key to universality is to introduce only general requirements and a minimal number of assumptions regarding the structure of the material. In particular, one must use and exploit the second law of thermodynamics and introduce a proper functional characterisation of the deviation from local equilibrium. This can be accomplished most conveniently with the help of internal variables. One can achieve compatibility with kinetic theory if the variables have the same tensorial order as the corresponding moments; therefore, the tensorial order increases with every new variable.
However, the evolution equations of these fields are direct consequences of the second law: one can obtain them by solving the entropy production inequality. In this way, for heat conduction, one obtains the Maxwell-Cattaneo-Vernotte equation as well as the Guyer-Krumhansl one with a single vectorial internal variable \cite{Van01a2,VanFul12a}. With an additional tensorial variable a more general theory can be derived that also properly describes ballistic propagation, the propagation of heat with the speed of sound \cite{KovVan15a}. This approach, Non-Equilibrium Thermodynamics with Internal Variables (NET-IV), can reproduce the NaF experiments quantitatively, including the correct ballistic propagation speed \cite{KovVan16a,KovVan18a}. The universality of the derivation also indicates a broader range of validity, far beyond rarefied real or phonon gases. This broadened range of validity is a prediction; therefore one can expect non-Fourier heat conduction, e.g., in heterogeneous materials, too. Indeed, Guyer-Krumhansl type heat conduction was observed in various heterogeneous materials in heat pulse experiments at room temperature \cite{BotEta16a,VanEta17a}. Internal variables are powerful modelling concepts in other continuum theories, too \cite{Ver97b,Ott05b,SzuFul18m,JouRes18a1}. Naturally, the relation of NET-IV to theories of ET and to kinetic theory is not straightforward, and their performance should be analysed considering the complete theory, not only heat conduction \cite{RugSug15b,RogEta18a,KovEta18m}. Up to now, the solutions and analyses of wave-like and ballistic propagation have mostly been restricted to one spatial dimension. This is problematic from the point of view of experimental observations, especially the NaF experiments \cite{McNEta70a,JacWal71a}: the classical experimental setup is not one dimensional, and this fact is not considered in the modelling calculations \cite{KovVan16a,KovVan18a}. 
The related ET theory inherits the dimensional reduction from the particular collision integrals, e.g. the deviatoric and spherical contributions in the evolution equation of the heat flux have the same coefficient in the usual form of the Guyer-Krumhansl equation \cite{MulRug98b}, and this is preserved in nonlinear theories, too \cite{SelEta16b}. In this paper we give the complete three-dimensional form of the equations of a universal theory of heat conduction in isotropic materials, including the possible reciprocity relations and second law requirements for the transport coefficients. The paper is organised as follows. In the second section the theoretical framework is outlined and the basic balances and constitutive equations are given in a strictly linear anisotropic form. Then the isotropic form of the equations is treated, first in general and then with Onsager reciprocity. In the fourth section particular special theories are introduced. Then the conclusions are formulated. A detailed matrix form of the conductivity matrix is given in the Appendix, including the transformation of the sixth-order tensor to a form suitable for the calculation of the positive definiteness of the coefficients. \section{Basic equations of heat conduction with two internal variables} We consider the balance equations of a rigid heat conductor, i.e. the balance of internal energy and the balance of entropy: \begin{equation} \label{balance1} \rho\dot{e} + q_{i,i}=0, \end{equation} \begin{equation}\label{balance2} \rho\dot{s}+ J_{i,i} =\sigma^{(s)}. \end{equation} \noindent Here $\rho$ is the density, $e$ is the specific internal energy, $q_i$ is the current density of the internal energy, the heat flux, $s$ is the specific entropy and $J_i$ denotes the entropy flux. The entropy production rate $\sigma^{(s)}$ plays a central and constructive role in the theory. 
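The balance \eqref{balance1} expresses local conservation of internal energy: with insulated boundaries the total internal energy is constant in time. A minimal numerical sketch of this property (a 1D finite-volume discretization with an illustrative Fourier closure $q=-\lambda\, \partial_x T$, $e=cT$; all parameter values here are assumptions for illustration, not taken from the paper):

```python
import numpy as np

# Sketch: 1D finite-volume check that rho*de/dt + dq/dx = 0 conserves
# total internal energy when both ends are insulated (q = 0 on faces).
# rho, c, lam, grid and time step are illustrative assumptions.
rho, c, lam = 1.0, 1.0, 0.1
n = 50
dx, dt = 1.0 / n, 1e-4
e = np.sin(np.pi * np.linspace(0.0, 1.0, n)) + 2.0  # initial energy density
E0 = rho * e.sum() * dx                              # total internal energy

for _ in range(1000):
    T = e / c
    q = np.zeros(n + 1)                              # fluxes on cell faces
    q[1:-1] = -lam * (T[1:] - T[:-1]) / dx           # insulated: q[0] = q[-1] = 0
    e -= dt * (q[1:] - q[:-1]) / (rho * dx)          # rho*de/dt = -dq/dx

E1 = rho * e.sum() * dx
assert abs(E1 - E0) < 1e-9                           # total energy is conserved
```

The conservation is exact up to rounding because the face fluxes telescope in the sum over cells; only the profile of $e$ relaxes.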
$i,j,k$ are abstract spatial indices of vectors and tensors, and the notation is abstract in the sense that it does not assume Cartesian coordinates, but it is convenient for tensors of order higher than two. A comma in lower indices denotes spatial differentiation, and a dot denotes the substantial time derivative (e.g. $\dot e = \partial_t e + v^i e_{,i}$, where $\partial_t$ is the partial time derivative). For rigid conductors at rest the relative velocity of the continuum is zero, therefore the substantial time derivative is equal to the partial time derivative. Regarding the general usage of abstract indices in classical nonrelativistic continuum theories see e.g. \cite{Van17a,VanEta19a}. We assume that the entropy flux is zero if $q_i=0_i$ and $Q_{ij}=0_{ij}$, where $Q_{ij}$ is the second-order tensorial internal variable; therefore \begin{equation} J_i=b_{ij}q_j+B_{ijk}Q_{jk}, \end{equation} where the $b_{ij}$ and $B_{ijk}$ constitutive functions are the Ny\'iri multipliers, which conveniently represent the deviation from the local equilibrium form of the entropy flux \cite{Nyi91a1}. Then the $K$ vector of Müller \cite{Mul67a} for ballistic heat conductors can be given as $K_i = (b_{ij} - \delta_{ij}/T) q_j+B_{ijk}Q_{jk}$. Expanding the entropy function $s(e, q_i, Q_{ij})$ up to second order around an equilibrium state, we obtain \begin{equation} \label{eqn:entropy} s(e, q_i, Q_{ij})=s^{(eq)}(e)-\frac{1}{2\rho}m_{ij}q_iq_j-\frac{1}{2\rho}M_{ijkl}Q_{ij}Q_{kl}. \end{equation} We have the following symmetries: \[ m_{ij}=m_{ji}, \quad M_{ijkl}=M_{klij}. \] Thermodynamic stability requires that the inductivity tensors, $m_{ij}$, $M_{ijkl}$ (see \cite{MacOns53a,Gya77a}), are positive definite. 
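The positive definiteness of the inductivity tensors can be checked directly by flattening the pair indices into a matrix. A numerical sketch for an isotropic fourth-order tensor $M_{ijkl}=M_1\delta_{ij}\delta_{kl}+M_2\delta_{ik}\delta_{jl}+M_3\delta_{il}\delta_{jk}$ (the sample values of $M_1$, $M_2$, $M_3$ are illustrative assumptions):

```python
import numpy as np

# Sketch: check positive definiteness of the quadratic form
# M_ijkl Q_ij Q_kl for an isotropic fourth-order tensor.
# M1, M2, M3 are illustrative sample values, not taken from the paper.
M1, M2, M3 = 1.0, 2.0, 0.5
d = np.eye(3)
M = (M1 * np.einsum('ij,kl->ijkl', d, d)
     + M2 * np.einsum('ik,jl->ijkl', d, d)
     + M3 * np.einsum('il,jk->ijkl', d, d))

# Flatten the index pairs (ij) and (kl); the isotropic tensor already
# satisfies M_ijkl = M_klij, so the 9x9 matrix is symmetric.
A = M.reshape(9, 9)
assert np.allclose(A, A.T)
eig = np.linalg.eigvalsh(A)
assert np.all(eig > 0)  # positive definite for these sample values

# Eigenvalues fall into the known isotropic classes: 3*M1 + M2 + M3
# (spherical part), M2 + M3 (symmetric traceless), M2 - M3 (antisymmetric).
assert np.isclose(eig.max(), 3 * M1 + M2 + M3)
assert np.isclose(eig.min(), M2 - M3)
```

The eigenvalue classes show that positive definiteness here is equivalent to $3M_1+M_2+M_3>0$, $M_2+M_3>0$ and $M_2-M_3>0$.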
The second law of thermodynamics requires $\sigma^{(s)}\geq 0$, thus from \eqref{balance2} we have \begin{equation} \label{eqn:EN} \begin{split} \rho\dot{s}+J_{i,i}&=\rho\frac{d s^{(eq)}}{de}\dot{e}-\frac{1}{2}m_{ij}\dot{q_i}q_j-\frac{1}{2}m_{ij}q_i\dot{q_j}-\frac{1}{2}M_{ijkl}\dot{Q}_{ij}Q_{kl}+ \\ & \quad -\frac{1}{2}M_{ijkl}Q_{ij}\dot{Q}_{kl}+b_{ij,i}q_j+b_{ij}q_{j,i}+B_{ijk,i}Q_{jk}+B_{ijk}Q_{jk,i}= \\ &= \left(b_{ij}-\frac{1}{T}\delta_{ij}\right)q_{j,i}+\left(b_{ji,j}-m_{ij}\dot{q_j}\right)q_i+ \\ & \quad +\left(B_{kij,k}-M_{ijkl}\dot{Q}_{kl}\right)Q_{ij}+B_{ijk}Q_{jk,i}\geq 0. \end{split} \end{equation} Following the procedures of non-equilibrium thermodynamics we obtain the following \textit{general three-dimensional anisotropic linear relations} between the thermodynamic fluxes $b_{ij}-\frac{1}{T}\delta_{ij}, b_{ji,j}-m_{ij}\dot{q_j}, B_{ijk}, B_{kij,k}-M_{ijkl}\dot{Q}_{kl}$ and forces $q_i,q_{j,i},Q_{ij},Q_{jk,i}$ \begin{align} \label{eqn:general1} b_{ji,j}-m_{ij}\dot{q_j}&=L^{(1)}_{ij}q_j+L^{(1,2)}_{ijk}q_{j,k}+L^{(1,3)}_{ijk}Q_{jk}+L^{(1,4)}_{ijkl}Q_{jk,l} \\ b_{ij}-\frac{1}{T}\delta_{ij}&=L^{(2,1)}_{ijk}q_k+L^{(2)}_{ijkl}q_{k,l}+L^{(2,3)}_{ijkl}Q_{kl}+L^{(2,4)}_{ijklm}Q_{kl,m} \label{eqn:general2}\\ B_{kij,k}-M_{ijkl}\dot{Q}_{kl}&=L^{(3,1)}_{ijk}q_k+L^{(3,2)}_{ijkl}q_{k,l}+L^{(3)}_{ijkl}Q_{kl}+L^{(3,4)}_{ijklm}Q_{kl,m} \label{eqn:general3}\\ B_{ijk}&=L^{(4,1)}_{ijkl}q_l+L^{(4,2)}_{ijklm}q_{l,m}+L^{(4,3)}_{ijklm}Q_{lm}+L^{(4)}_{ijklmn}Q_{lm,n}. \label{eqn:general4} \end{align} Here the conductivity tensors $L^{(\alpha,\beta)}$ are restricted by material symmetries and by the second law, and reciprocity relations are also to be considered. \section{Onsager reciprocity relations} In NET-IV we do not assume anything about the microscopic structure of the material; therefore the time reversal symmetry conditions of Onsager cannot be applied \cite{VanAta08a}. Several theoretical and experimental results support this statement, e.g. 
in continuum mechanics \cite{AssEta15a,VanEta14a,BerVan17b}. Also for ballistic heat conduction a hyperbolic structure is unavoidable for the compatibility with kinetic theory \cite{KovVan15a,KovVan16a,KovVan18a}. In our particular approach to heat conduction we do not know anything about the symmetry or antisymmetry of the coefficients of the conductivity tensors. The antisymmetric part does not contribute to the entropy production, and the symmetric part is positive definite, ensuring the nonnegativity of the bilinear form for any value of the thermodynamic forces. If Onsager reciprocity is nevertheless imposed, the conductivity coefficients satisfy the symmetry relations \begin{align} \label{eqn:O-C1} L^{(1)}_{ik}&=L^{(1)}_{ki}, & L^{(1,2)}_{ijk}&=L^{(2,1)}_{jki}, \\ \label{eqn:O-C2} L^{(1,3)}_{ijk}&=L^{(3,1)}_{jki}, & L^{(1,4)}_{ijkl}&=L^{(4,1)}_{jkli}, \\ \label{eqn:O-C3} L^{(2)}_{ijkl}&=L^{(2)}_{klij}, & L^{(2,3)}_{ijkl}&=L^{(3,2)}_{klij}, \\ \label{eqn:O-C4} L^{(2,4)}_{ijklm}&=L^{(4,2)}_{klmij}, & L^{(3)}_{ijkl}&=L^{(3)}_{klij}, \\ \label{eqn:O-C5} L^{(3,4)}_{ijklm}&=L^{(4,3)}_{klmij}, & L^{(4)}_{ijklmn}&=L^{(4)}_{lmnijk}. 
\end{align} \section{Perfect isotropic case} In the perfect isotropic case, in which the symmetry properties of the body under consideration are invariant with respect to \textit{all rotations and to inversion of the frame of axes}, we have \cite{KeaFon75a}: \begin{gather} \label{eqn:13} m_{ij}=m\delta_{ij}, \\ L^{(1)}_{ij}\equiv \mathcal{L}^{(1)}_{ij}=L^{(1)}\delta_{ij}, \\ M_{ijkl}=M_1\delta_{ij}\delta_{kl}+M_2\delta_{ik}\delta_{jl}+M_3\delta_{il}\delta_{jk}, \\ \label{eqn:1,4} L^{(1,4)}_{ijkl}\equiv \mathcal{L}^{(1,4)}_{ijkl}=L^{(1,4)}_1\delta_{ij}\delta_{kl}+L^{(1,4)}_2\delta_{ik}\delta_{jl}+L^{(1,4)}_3\delta_{il}\delta_{jk}, \\ L^{(2)}_{ijkl}\equiv \mathcal{L}^{(2)}_{ijkl}=L^{(2)}_1\delta_{ij}\delta_{kl}+L^{(2)}_2\delta_{ik}\delta_{jl}+L^{(2)}_3\delta_{il}\delta_{jk}, \\ \label{eqn:2,3} L^{(2,3)}_{ijkl}\equiv \mathcal{L}^{(2,3)}_{ijkl}=L^{(2,3)}_1\delta_{ij}\delta_{kl}+L^{(2,3)}_2\delta_{ik}\delta_{jl}+L^{(2,3)}_3\delta_{il}\delta_{jk}, \\ \label{eqn:3,2} L^{(3,2)}_{ijkl}\equiv \mathcal{L}^{(3,2)}_{ijkl}=L^{(3,2)}_1\delta_{ij}\delta_{kl}+L^{(3,2)}_2\delta_{ik}\delta_{jl}+L^{(3,2)}_3\delta_{il}\delta_{jk}, \\ L^{(3)}_{ijkl}\equiv \mathcal{L}^{(3)}_{ijkl}=L^{(3)}_1\delta_{ij}\delta_{kl}+L^{(3)}_2\delta_{ik}\delta_{jl}+L^{(3)}_3\delta_{il}\delta_{jk}, \\ \label{eqn:4,1} L^{(4,1)}_{ijkl}\equiv \mathcal{L}^{(4,1)}_{ijkl}=L^{(4,1)}_1\delta_{ij}\delta_{kl}+L^{(4,1)}_2\delta_{ik}\delta_{jl}+L^{(4,1)}_3\delta_{il}\delta_{jk}, \\ \begin{split} \label{eqn:4,4} L^{(4)}_{ijklmn}\equiv \mathcal{L}^{(4)}_{ijklmn}&=L^{(4)}_1\delta_{ij}\delta_{kl}\delta_{mn}+L^{(4)}_2\delta_{ij}\delta_{km}\delta_{ln}+L^{(4)}_3\delta_{ij}\delta_{kn}\delta_{lm} \\ & \quad +L^{(4)}_4\delta_{ik}\delta_{jl}\delta_{mn}+L^{(4)}_5\delta_{ik}\delta_{jm}\delta_{ln}+L^{(4)}_6\delta_{ik}\delta_{jn}\delta_{lm} \\ & \quad +L^{(4)}_7\delta_{il}\delta_{jk}\delta_{mn}+L^{(4)}_8\delta_{im}\delta_{jk}\delta_{ln}+L^{(4)}_9\delta_{in}\delta_{jk}\delta_{lm} \\ & \quad 
+L^{(4)}_{10}\delta_{il}\delta_{jm}\delta_{kn}+L^{(4)}_{11}\delta_{im}\delta_{jl}\delta_{kn}+L^{(4)}_{12}\delta_{in}\delta_{jl}\delta_{km} \\ & \quad +L^{(4)}_{13}\delta_{in}\delta_{jm}\delta_{kl}+L^{(4)}_{14}\delta_{im}\delta_{jn}\delta_{kl}+L^{(4)}_{15}\delta_{il}\delta_{jn}\delta_{km}. \end{split} \end{gather} Furthermore, in the isotropic case (where the symmetry properties of the considered body are invariant only with respect to all rotations of the frame of axes) the third and fifth order tensors take the forms $L_{ijk}=L\in_{ijk}$ and $L_{ijklm}=A_1\in_{ijk}\delta_{lm}+A_2\in_{ijl}\delta_{km}+A_3\in_{ijm}\delta_{kl}+A_4\in_{ikl}\delta_{jm}+A_5\in_{ikm}\delta_{lj}+ \\ A_6\in_{ilm}\delta_{jk}$, respectively ($\in_{ijk}$ denotes the Levi-Civita tensor, and the quantities $L$ and $A_i$, $i=1,\dots ,6$, are the independent components of the tensors $L_{ijk}$ and $L_{ijklm}$), which vanish when the material properties are also invariant with respect to the inversion of the axes. Thus, we obtain: \begin{gather} \label{eqn:30} L^{(1,2)}_{ijk}=L^{(1,3)}_{ijk}=L^{(2,1)}_{ijk}=L^{(3,1)}_{ijk}=0, \\ L^{(2,4)}_{ijklm}=L^{(3,4)}_{ijklm}=L^{(4,2)}_{ijklm}=L^{(4,3)}_{ijklm}=0. 
\label{eqn:31} \end{gather} From relations \eqref{eqn:13}-\eqref{eqn:31}, the phenomenological equations \eqref{eqn:general1}-\eqref{eqn:general4} in the isotropic case read \begin{align} \label{eqn:I1} m\dot{q_i}-b_{ji,j}&=-L^{(1)}q_i-L^{(1,4)}_1Q_{ik,k}-L^{(1,4)}_2Q_{ki,k}-L^{(1,4)}_3Q_{kk,i}, \\ \begin{split} \label{eqn:I2} b_{ij}-\frac{1}{T}\delta_{ij}&=L^{(2)}_1\delta_{ij}q_{k,k}+L^{(2)}_2q_{i,j}+L^{(2)}_3q_{j,i}+L^{(2,3)}_1\delta_{ij}Q_{kk} \\ & \quad +L^{(2,3)}_2Q_{ij}+L^{(2,3)}_3Q_{ji}, \end{split} \\ \begin{split} \label{eqn:I3} B_{kij,k}&=M_1\delta_{ij}\dot{Q}_{kk}+M_2\dot{Q}_{ij}+M_3\dot{Q}_{ji}+L^{(3,2)}_1\delta_{ij}q_{k,k}+L^{(3,2)}_2q_{i,j} \\ & \quad +L^{(3,2)}_3q_{j,i}+L^{(3)}_1\delta_{ij}Q_{kk}+L^{(3)}_2Q_{ij}+L^{(3)}_3Q_{ji}, \end{split} \\ \begin{split} \label{eqn:I4} B_{ijk}&=L^{(4,1)}_1\delta_{ij}q_k+L^{(4,1)}_2\delta_{ik}q_j+L^{(4,1)}_3\delta_{jk}q_i \\ & \quad +\delta_{ij}\left(L^{(4)}_1Q_{kl,l}+L^{(4)}_2Q_{lk,l}+L^{(4)}_3Q_{ll,k}\right) \\ & \quad +\delta_{ik}\left(L^{(4)}_4Q_{jl,l}+L^{(4)}_5Q_{lj,l}+L^{(4)}_6Q_{ll,j}\right) \\ & \quad +\delta_{jk}\left(L^{(4)}_7Q_{il,l}+L^{(4)}_8Q_{li,l}+L^{(4)}_9Q_{ll,i}\right) \\ & \quad +L^{(4)}_{10}Q_{ij,k}+L^{(4)}_{11}Q_{ji,k}+L^{(4)}_{12}Q_{jk,i}+L^{(4)}_{13}Q_{kj,i} \\ & \quad +L^{(4)}_{14}Q_{ki,j}+L^{(4)}_{15}Q_{ik,j}. \end{split} \end{align} Perfect isotropy reduces the number of material coefficients to $4$ static and $34$ conductivity parameters. \subsection{Onsager symmetry with isotropy} If we require the Onsager reciprocity relations \eqref{eqn:O-C1}-\eqref{eqn:O-C5}, then \eqref{eqn:O-C2}$_2$ gives $\mathcal{L}^{(1,4)}_{ijkl}=\mathcal{L}^{(4,1)}_{jkli}$, and since \begin{equation} \label{eqn:81} \mathcal{L}^{(4,1)}_{jkli}=L^{(4,1)}_1\delta_{jk}\delta_{li}+L^{(4,1)}_2\delta_{jl}\delta_{ki}+L^{(4,1)}_3\delta_{ji}\delta_{kl}, \end{equation} we obtain \begin{equation} \label{eqn:32} L^{(1,4)}_1=L^{(4,1)}_3, \quad L^{(1,4)}_2=L^{(4,1)}_2, \quad L^{(1,4)}_3=L^{(4,1)}_1. 
\end{equation} \medskip Furthermore, for each isotropic fourth-order tensor $\mathcal{L}_{ijkl}$ we have the following symmetry relation \begin{equation} \label{eqn:property} \mathcal{L}_{ijkl}=\mathcal{L}_{klij}, \end{equation} \noindent since \begin{equation} \mathcal{L}_{ijkl}=T_1\delta_{ij}\delta_{kl}+T_2\delta_{ik}\delta_{jl}+T_3\delta_{il}\delta_{jk}=T_1\delta_{kl}\delta_{ij}+T_2\delta_{ki}\delta_{lj}+T_3\delta_{kj}\delta_{li}=\mathcal{L}_{klij}, \end{equation} \noindent where $T_1$, $T_2$ and $T_3$ indicate the independent components of $\mathcal{L}_{ijkl}$. Taking the property \eqref{eqn:property} into account, the Onsager-Casimir relations $\eqref{eqn:O-C3}_1$ and $\eqref{eqn:O-C4}_2$ are verified in the isotropic case, and from $\eqref{eqn:O-C3}_2$ we derive $\mathcal{L}^{(2,3)}_{ijkl}=\mathcal{L}^{(3,2)}_{klij}=\mathcal{L}^{(3,2)}_{ijkl}$, from which we have \begin{equation} \label{eqn:33} L^{(2,3)}_i=L^{(3,2)}_i \quad (i=1,2,3). \end{equation} \medskip Then, from \eqref{eqn:4,4} we obtain: \begin{equation} \begin{split} \label{eqn:4,4-2} \mathcal{L}^{(4)}_{lmnijk}&=L^{(4)}_1\delta_{lm}\delta_{ni}\delta_{jk}+L^{(4)}_2\delta_{lm}\delta_{nj}\delta_{ik}+L^{(4)}_3\delta_{lm}\delta_{nk}\delta_{ij}+L^{(4)}_4\delta_{ln}\delta_{mi}\delta_{jk} \\ & \quad +L^{(4)}_5\delta_{ln}\delta_{mj}\delta_{ik}+L^{(4)}_6\delta_{ln}\delta_{mk}\delta_{ij}+L^{(4)}_7\delta_{mn}\delta_{li}\delta_{jk}+L^{(4)}_8\delta_{mn}\delta_{lj}\delta_{ik} \\ & \quad +L^{(4)}_9\delta_{mn}\delta_{lk}\delta_{ij}+L^{(4)}_{10}\delta_{li}\delta_{mj}\delta_{nk}+L^{(4)}_{11}\delta_{lj}\delta_{mi}\delta_{nk}+L^{(4)}_{12}\delta_{lk}\delta_{mi}\delta_{nj} \\ & \quad +L^{(4)}_{13}\delta_{lk}\delta_{mj}\delta_{ni}+L^{(4)}_{14}\delta_{lj}\delta_{mk}\delta_{in}+L^{(4)}_{15}\delta_{li}\delta_{mk}\delta_{nj}. 
\end{split} \end{equation} Using $\eqref{eqn:O-C5}_2$, adding \eqref{eqn:4,4} and \eqref{eqn:4,4-2}, and dividing by $2$, we have \begin{equation} \label{eqn:34} \begin{split} \mathcal{L}^{(4)}_{ijklmn}&=C^{(4)}_1(\delta_{ij}\delta_{kl}\delta_{mn}+\delta_{in}\delta_{jk}\delta_{lm})+C^{(4)}_2(\delta_{ij}\delta_{km}\delta_{ln}+\delta_{ik}\delta_{jn}\delta_{lm}) \\ & \quad +C^{(4)}_3\delta_{ij}\delta_{kn}\delta_{lm}+C^{(4)}_4(\delta_{ik}\delta_{jl}\delta_{mn}+\delta_{im}\delta_{jk}\delta_{nl})+C^{(4)}_5\delta_{ik}\delta_{jm}\delta_{ln} \\ & \quad +C^{(4)}_6\delta_{il}\delta_{jk}\delta_{mn}+C^{(4)}_7\delta_{il}\delta_{jm}\delta_{kn}+C^{(4)}_8\delta_{il}\delta_{jn}\delta_{km}+C^{(4)}_9\delta_{im}\delta_{jl}\delta_{kn} \\ & \quad +C^{(4)}_{10}(\delta_{im}\delta_{jn}\delta_{kl}+\delta_{in}\delta_{jl}\delta_{km})+C^{(4)}_{11}\delta_{in}\delta_{jm}\delta_{kl}, \end{split} \end{equation} where \begin{gather} \label{eqn:C1-3} C^{(4)}_{1}=\frac{L^{(4)}_1+L^{(4)}_9}{2}, \quad C^{(4)}_{2}=\frac{L^{(4)}_2+L^{(4)}_6}{2}, \quad C^{(4)}_{3}=L^{(4)}_3, \\ C^{(4)}_{4}=\frac{L^{(4)}_4+L^{(4)}_8}{2}, \quad C^{(4)}_{5}=L^{(4)}_5, \quad C^{(4)}_{6}=L^{(4)}_7, \quad C^{(4)}_{7}=L^{(4)}_{10}, \\ \label{eqn:C8-11} C^{(4)}_{8}=L^{(4)}_{15}, \quad C^{(4)}_{9}=L^{(4)}_{11}, \quad C^{(4)}_{10}=\frac{L^{(4)}_{12}+L^{(4)}_{14}}{2}, \quad C^{(4)}_{11}=L^{(4)}_{13}. \end{gather} Thus, from the relation $\mathcal{L}^{(4)}_{ijklmn}=\mathcal{L}^{(4)}_{lmnijk}$ the significant components of the isotropic tensor $\mathcal{L}^{(4)}_{ijklmn}$ reduce to $11$. Therefore, in case of Onsager reciprocity, the number of conductivity coefficients is reduced altogether to $24$. 
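The count of $11$ independent coefficients can be verified numerically: symmetrizing the $15$ isotropic delta-product basis tensors of a sixth-order tensor under the block swap $(ijk)\leftrightarrow(lmn)$ spans an $11$-dimensional space. A sketch of this check (the construction of the basis via perfect matchings is an implementation choice, not notation from the paper):

```python
import numpy as np

# Sketch: the 15 isotropic sixth-order basis tensors are products of
# three Kronecker deltas, one per perfect matching of the six indices.
def delta3(pairs):
    T = np.zeros((3,) * 6)
    for idx in np.ndindex(*T.shape):
        T[idx] = all(idx[a] == idx[b] for a, b in pairs)
    return T

def matchings(elems):
    # Enumerate all perfect matchings (5 * 3 * 1 = 15 for six indices).
    if not elems:
        yield []
        return
    first = elems[0]
    for i in range(1, len(elems)):
        rest = elems[1:i] + elems[i + 1:]
        for m in matchings(rest):
            yield [(first, elems[i])] + m

basis = [delta3(p) for p in matchings(list(range(6)))]
assert len(basis) == 15

# Symmetrize under (i j k l m n) -> (l m n i j k), as Onsager
# reciprocity L4_ijklmn = L4_lmnijk requires, and count the rank.
sym = [0.5 * (B + np.transpose(B, (3, 4, 5, 0, 1, 2))) for B in basis]
rank = np.linalg.matrix_rank(np.stack([S.reshape(-1) for S in sym]))
assert rank == 11  # 11 independent coefficients survive
```

Four of the fifteen basis tensors pair up under the swap and seven are invariant, which reproduces the $4+7=11$ count.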
\subsection{Entropy production} In the perfect isotropic case, with the aid of relations \eqref{eqn:I1}-\eqref{eqn:I4}, \eqref{eqn:13}-\eqref{eqn:4,1}, \eqref{eqn:30}, \eqref{eqn:31}, \eqref{eqn:33}, \eqref{eqn:34}, entropy production \eqref{eqn:EN} can be written as \begin{equation} \label{eqn:36} \begin{split} \sigma^{(s)}&=\mathcal{L}^{(1)}_{ik}q_iq_k+\mathcal{L}^{(2)}_{ijkl}q_{j,i}q_{k,l}+ \mathcal{L}^{(3)}_{ijkl}Q_{ij}Q_{kl}+ \mathcal{L}^{(4)}_{ijklmn}Q_{jk,i}Q_{lm,n} \\ & \quad +\left(\mathcal{L}^{(1,4)}_{ijkl}+\mathcal{L}^{(1,4)}_{iljk}\right)q_iQ_{jk,l} +\left(\mathcal{L}^{(2,3)}_{ijkl}+\mathcal{L}^{(2,3)}_{klji}\right)q_{j,i}Q_{kl} \geq 0, \end{split} \end{equation} or in extended form: \begin{equation} \label{eqn:36-1} \begin{split} \sigma^{(s)}&=L^{(1)}\delta_{ik}q_iq_k+\left(L^{(2)}_1\delta_{ij}\delta_{kl}+L^{(2)}_2\delta_{ik}\delta_{jl}+L^{(2)}_3\delta_{il}\delta_{jk}\right)q_{j,i}q_{k,l} \\ & \quad +\left(L^{(3)}_1\delta_{ij}\delta_{kl}+L^{(3)}_2\delta_{ik}\delta_{jl}+L^{(3)}_3\delta_{il}\delta_{jk}\right)Q_{ij}Q_{kl} \\ & \quad +\left[C^{(4)}_1(\delta_{ij}\delta_{kl}\delta_{mn}+\delta_{in}\delta_{jk}\delta_{lm})+C^{(4)}_2(\delta_{ij}\delta_{km}\delta_{ln}+\delta_{ik}\delta_{jn}\delta_{lm})\right. \\ & \quad +C^{(4)}_3\delta_{ij}\delta_{kn}\delta_{lm}+C^{(4)}_4(\delta_{ik}\delta_{jl}\delta_{mn}+\delta_{im}\delta_{jk}\delta_{nl})+C^{(4)}_5\delta_{ik}\delta_{jm}\delta_{ln} \\ & \quad +C^{(4)}_6\delta_{il}\delta_{jk}\delta_{mn}+C^{(4)}_7\delta_{il}\delta_{jm}\delta_{kn}+C^{(4)}_8\delta_{il}\delta_{jn}\delta_{km}+C^{(4)}_9\delta_{im}\delta_{jl}\delta_{kn} \\ & \quad +\left.C^{(4)}_{10}(\delta_{im}\delta_{jn}\delta_{kl}+\delta_{in}\delta_{jl}\delta_{km})+C^{(4)}_{11}\delta_{in}\delta_{jm}\delta_{kl}\right]Q_{jk,i}Q_{lm,n} \\ & \quad +\left[\left(L^{(1,4)}_1+L^{(1,4)}_2\right)\delta_{ij}\delta_{kl}+\left(L^{(1,4)}_2+L^{(1,4)}_3\right)\delta_{ik}\delta_{jl}\right. 
\\ & \quad \left.+\left(L^{(1,4)}_1+L^{(1,4)}_3\right)\delta_{il}\delta_{jk}\right]q_iQ_{jk,l} \\ & \quad +\left[2L^{(2,3)}_1\delta_{ij}\delta_{kl}+\left(L^{(2,3)}_2+L^{(2,3)}_3\right)\left(\delta_{il}\delta_{jk}+\delta_{ik}\delta_{jl}\right)\right]q_{j,i}Q_{kl} \geq 0, \end{split} \end{equation} and also in the following form \begin{equation} \label{eqn:EI} \begin{split} \sigma^{(s)}&=L^{(1)}\delta_{ik}q_iq_k+\left(L^{(2)}_1\delta_{ji}\delta_{kl}+L^{(2)}_2\delta_{jk}\delta_{il}+L^{(2)}_3\delta_{jl}\delta_{ik}\right)q_{i,j}q_{k,l} \\ & \quad +\left(L^{(3)}_1\delta_{ij}\delta_{kl}+L^{(3)}_2\delta_{ik}\delta_{jl}+L^{(3)}_3\delta_{il}\delta_{jk}\right)Q_{ij}Q_{kl} \\ & \quad +\left[C^{(4)}_1(\delta_{pi}\delta_{jl}\delta_{mn}+\delta_{pn}\delta_{ij}\delta_{lm})+C^{(4)}_2(\delta_{pi}\delta_{jm}\delta_{ln}+\delta_{pj}\delta_{in}\delta_{lm})\right. \\ & \quad +C^{(4)}_3\delta_{pi}\delta_{jn}\delta_{lm}+C^{(4)}_4(\delta_{pj}\delta_{il}\delta_{mn}+\delta_{pm}\delta_{ij}\delta_{nl})+C^{(4)}_5\delta_{pj}\delta_{im}\delta_{ln} \\ & \quad +C^{(4)}_6\delta_{pl}\delta_{ij}\delta_{mn}+C^{(4)}_7\delta_{pl}\delta_{im}\delta_{jn}+C^{(4)}_8\delta_{pl}\delta_{in}\delta_{jm}+C^{(4)}_9\delta_{pm}\delta_{il}\delta_{jn} \\ & \quad +\left.C^{(4)}_{10}(\delta_{pm}\delta_{in}\delta_{jl}+\delta_{pn}\delta_{il}\delta_{jm})+C^{(4)}_{11}\delta_{pn}\delta_{im}\delta_{jl}\right]Q_{ij,p}Q_{lm,n} \\ & \quad +\left(L^{(1,4)}_1\delta_{il}\delta_{mn}+L^{(1,4)}_2\delta_{im}\delta_{ln}+L^{(1,4)}_3\delta_{in}\delta_{lm}\right)q_iQ_{lm,n} \\ & \quad +\left(L^{(1,4)}_1\delta_{kp}\delta_{ij}+L^{(1,4)}_2\delta_{ki}\delta_{pj}+L^{(1,4)}_3\delta_{kj}\delta_{pi}\right)Q_{ij,p}q_k \\ & \quad +\left(L^{(2,3)}_1\delta_{ji}\delta_{kl}+L^{(2,3)}_2\delta_{jk}\delta_{il}+L^{(2,3)}_3\delta_{jl}\delta_{ik}\right)q_{i,j}Q_{kl} \\ & \quad +\left(L^{(2,3)}_1\delta_{ij}\delta_{kl}+L^{(2,3)}_2\delta_{ik}\delta_{jl}+L^{(2,3)}_3\delta_{il}\delta_{jk}\right)Q_{ij}q_{k,l} \geq 0. 
\end{split} \end{equation} From \eqref{eqn:EI} it is seen that the entropy production is a non-negative bilinear form in the components of the heat flux and its gradient, and in the components of the internal variable and its gradient (see in the Appendix its matrix representation $\sigma^{(s)}=X_\alpha \mathcal{L}_{\alpha\beta}X_{\beta}$, with $X_\alpha$, $X_\beta$ and $\mathcal{L}_{\alpha\beta}$ suitable matrices). The following inequalities can be obtained for the components of the phenomenological tensors, resulting from the fact that all the elements of the main diagonal of the symbolic matrix $\{\mathcal{L}_{\alpha\beta}\}$ associated with the bilinear form \eqref{eqn:EI} must be non-negative (see Appendix): \begin{gather} \label{eqn:D1} L^{(1)}\geq 0, \qquad L^{(2)}_3\geq 0, \qquad L^{(3)}_2\geq 0, \\[0.5em] \label{eqn:D2} L^{(2)}_1+L^{(2)}_2+L^{(2)}_3\geq 0, \qquad L^{(3)}_1+L^{(3)}_2+L^{(3)}_3\geq 0, \\[0.5em] \begin{split} \label{eqn:D3} 2C^{(4)}_1+2C^{(4)}_2+C^{(4)}_3+2C^{(4)}_4&+C^{(4)}_5+C^{(4)}_6+C^{(4)}_7+C^{(4)}_8+ \\ & +C^{(4)}_9+2C^{(4)}_{10}+C^{(4)}_{11}\geq 0, \end{split} \\[0.5em] \label{eqn:D4} C^{(4)}_2+C^{(4)}_8+C^{(4)}_{10}\geq 0, \qquad C^{(4)}_4+C^{(4)}_9+C^{(4)}_{10}\geq 0, \\[0.5em] \label{eqn:D5} C^{(4)}_{10}\geq 0, \qquad C^{(4)}_1+C^{(4)}_{10}+C^{(4)}_{11}\geq 0. \end{gather} In particular, relations \eqref{eqn:D3}-\eqref{eqn:D5} come from the non-negativity of the elements of the main diagonal of the sub-matrix $\mathcal{L}^{(4)}_{pijlmn}$. Moreover, other relations can be obtained from the non-negativity of the major minors $P_r$ ($r=1,\ldots ,48$) of $\{\mathcal{L}_{\alpha\beta}\}$. For instance, the calculation of the major minors up to sixth order gives the relations $\eqref{eqn:D1}_1$, $\eqref{eqn:D1}_2$ and $\eqref{eqn:D2}_1$. 
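Conditions \eqref{eqn:D3}-\eqref{eqn:D5} can be reproduced symbolically as the distinct diagonal elements of the quadratic form $\mathcal{L}^{(4)}_{ijklmn}Q_{jk,i}Q_{lm,n}$ built from \eqref{eqn:34}. A sketch of this check (the index bookkeeping follows \eqref{eqn:34}; the enumeration itself is an implementation choice):

```python
import itertools
import sympy as sp

# Sketch: distinct diagonal entries of the sub-matrix coupling Q_{jk,i}
# with itself, for the Onsager-symmetric isotropic tensor of eq. (34).
C = sp.symbols('C1:12')  # C[0] ... C[10] stand for C^(4)_1 ... C^(4)_11

def kd(a, b):
    return sp.Integer(1) if a == b else sp.Integer(0)

def L4(i, j, k, l, m, n):
    return (C[0]*(kd(i,j)*kd(k,l)*kd(m,n) + kd(i,n)*kd(j,k)*kd(l,m))
          + C[1]*(kd(i,j)*kd(k,m)*kd(l,n) + kd(i,k)*kd(j,n)*kd(l,m))
          + C[2]*kd(i,j)*kd(k,n)*kd(l,m)
          + C[3]*(kd(i,k)*kd(j,l)*kd(m,n) + kd(i,m)*kd(j,k)*kd(n,l))
          + C[4]*kd(i,k)*kd(j,m)*kd(l,n)
          + C[5]*kd(i,l)*kd(j,k)*kd(m,n)
          + C[6]*kd(i,l)*kd(j,m)*kd(k,n)
          + C[7]*kd(i,l)*kd(j,n)*kd(k,m)
          + C[8]*kd(i,m)*kd(j,l)*kd(k,n)
          + C[9]*(kd(i,m)*kd(j,n)*kd(k,l) + kd(i,n)*kd(j,l)*kd(k,m))
          + C[10]*kd(i,n)*kd(j,m)*kd(k,l))

# Diagonal entries of L4_{ijklmn} Q_{jk,i} Q_{lm,n}: set (n,l,m) = (i,j,k).
diag = {sp.expand(L4(i, j, k, j, k, i))
        for i, j, k in itertools.product(range(3), repeat=3)}

expected = {
    sp.expand(2*C[0] + 2*C[1] + C[2] + 2*C[3] + C[4] + C[5]
              + C[6] + C[7] + C[8] + 2*C[9] + C[10]),   # (D3), i=j=k
    sp.expand(C[1] + C[7] + C[9]),                      # (D4)_1, i=j!=k
    sp.expand(C[3] + C[8] + C[9]),                      # (D4)_2, i=k!=j
    sp.expand(C[9]),                                    # (D5)_1, all distinct
    sp.expand(C[0] + C[9] + C[10]),                     # (D5)_2, j=k!=i
}
assert diag == expected
```

The five distinct index patterns of $(i,j,k)$ yield exactly the five left-hand sides of \eqref{eqn:D3}-\eqref{eqn:D5}.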
\noindent The non-negativity of the seventh order major minor of $\{\mathcal{L}_{\alpha\beta}\}$ \begin{equation} P_7= \begin{vmatrix} L^{(1)} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & L^{(1)} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & L^{(1)} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & L^{(2)} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & L^{(2)}_3 & 0 & L^{(2)}_2 \\ 0 & 0 & 0 & 0 & 0 & L^{(2)}_3 & 0 \\ 0 & 0 & 0 & 0 & L^{(2)}_2 & 0 & L^{(2)}_3 \\ \end{vmatrix} , \end{equation} with $L^{(2)}\equiv L^{(2)}_1+L^{(2)}_2+L^{(2)}_3$, gives the new relation \begin{equation} \left(L^{(2)}_3\right)^2-\left(L^{(2)}_2\right)^2\geq 0, \end{equation} and so on. In the Appendix we give a two dimensional form of the conductivity matrix $\{\mathcal{L}_{\alpha\beta}\}$, so that the calculation of the conditions of positive definiteness is straightforward. \subsection{Rate equations for q and Q} Interchanging indices $i$ and $j$ in \eqref{eqn:I2}, differentiating with respect to $x_j$ and substituting the result into \eqref{eqn:I1}, we deduce: \begin{equation} \label{eqn:A0} \begin{split} m\dot{q_i}+L^{(1)}q_i&=\left(L^{(2)}_1+L^{(2)}_2\right)q_{k,ki}+L^{(2)}_3q_{i,kk}+\left(L^{(2,3)}_1-L^{(1,4)}_3\right)Q_{kk,i} \\ & \quad +\left(L^{(2,3)}_3-L^{(1,4)}_1\right)Q_{ik,k}+\left(L^{(2,3)}_2-L^{(1,4)}_2\right)Q_{ki,k}+{\left(\frac{1}{T}\right)}_{,i}, \end{split} \end{equation} where \begin{gather} m>0, \quad L^{(1)}>0, \quad L^{(2)}_1+L^{(2)}_2>0, \quad L^{(2)}_3>0. 
\end{gather} Equation \eqref{eqn:A0} can be written as follows: \begin{equation} \label{eqn:A} \tau\dot{q_i}+q_i=-\lambda T_{,i}+l_1q_{i,kk}+l_2q_{k,ki}+l_{12}Q_{kk,i}+l_{13}Q_{ik,k}+l_{14}Q_{ki,k}\ , \end{equation} where \begin{gather} \label{eqn:54} \tau=\frac{m}{L^{(1)}}, \quad \lambda=\frac{1}{L^{(1)}T^2}, \quad l_1=\frac{L^{(2)}_3}{L^{(1)}}, \quad l_2=\frac{L^{(2)}_1+L^{(2)}_2}{L^{(1)}},\\ \label{eqn:55} l_{12}=\frac{L^{(2,3)}_1-L^{(1,4)}_3}{L^{(1)}}, \quad l_{13}=\frac{L^{(2,3)}_3-L^{(1,4)}_1}{L^{(1)}}, \quad l_{14}=\frac{L^{(2,3)}_2-L^{(1,4)}_2}{L^{(1)}}, \end{gather} with $\tau$ the relaxation time of the heat flux, which therefore propagates with a finite speed, and $\lambda$ the heat conductivity. In an analogous way, changing $i\rightarrow k$, $j\rightarrow i$, $k\rightarrow j$ in equation \eqref{eqn:I4}, differentiating with respect to $x_k$ and inserting the result into \eqref{eqn:I3}, we have: \begin{equation} \label{eqn:b} \begin{split} &M_1\delta_{ij}\dot{Q}_{kk}+M_2\dot{Q}_{ij}+M_3\dot{Q}_{ji}+L^{(3)}_1\delta_{ij}Q_{kk}+L^{(3)}_2Q_{ij}+L^{(3)}_3Q_{ji}\\ & \quad =\left(L^{(4,1)}_3-L^{(3,2)}_1\right)\delta_{ij}q_{k,k}+\left(L^{(4,1)}_2-L^{(3,2)}_2\right)q_{i,j}+\left(L^{(4,1)}_1-L^{(3,2)}_3\right)q_{j,i}\\ & \quad +\left(L^{(4)}_3+L^{(4)}_6\right)Q_{kk,ij}+L^{(4)}_{12}Q_{ij,kk}+L^{(4)}_{13}Q_{ji,kk}+\left(L^{(4)}_1+L^{(4)}_{15}\right)Q_{jk,ik}\\ & \quad +\left(L^{(4)}_2+L^{(4)}_{11}\right)Q_{kj,ik}+\left(L^{(4)}_4+L^{(4)}_{14}\right)Q_{ik,jk}+\left(L^{(4)}_5+L^{(4)}_{10}\right)Q_{ki,jk}\\ & \quad +\delta_{ij}\left[\left(L^{(4)}_7+L^{(4)}_8\right)Q_{kl,lk}+L^{(4)}_9Q_{ll,kk}\right], \end{split} \end{equation} i.e.: \begin{equation} \label{eqn:B} \begin{split} &\tau_1\delta_{ij}\dot{Q}_{kk}+\tau_2\dot{Q}_{ij}+\tau_3\dot{Q}_{ji}+\delta_{ij}Q_{kk}+l^3_2Q_{ij}+l^3_3Q_{ji}=l_{21}\delta_{ij}q_{k,k}+l_{31}q_{i,j}\\ & \quad +l_{41}q_{j,i}+L_1Q_{kk,ij}+L_2Q_{ij,kk}+L_3Q_{ji,kk}+L_4Q_{jk,ik}+L_5Q_{kj,ik} \\ & \quad 
+L_6Q_{ik,jk}+L_7Q_{ki,jk}+\delta_{ij}\left({L_8Q_{kl,kl}+L_9Q_{ll,kk}}\right), \end{split} \end{equation} where \begin{gather} \label{eqn:58} \tau_1=\frac{M_1}{L^{(3)}_1}, \quad \tau_2=\frac{M_2}{L^{(3)}_1}, \quad \tau_3=\frac{M_3}{L^{(3)}_1}, \quad l^3_2=\frac{L^{(3)}_2}{L^{(3)}_1}, \quad l^3_3=\frac{L^{(3)}_3}{L^{(3)}_1}, \\ l_{21}=\frac{L^{(4,1)}_3-L^{(3,2)}_1}{L^{(3)}_1}, \quad l_{31}=\frac{L^{(4,1)}_2-L^{(3,2)}_2}{L^{(3)}_1}, \quad l_{41}=\frac{L^{(4,1)}_1-L^{(3,2)}_3}{L^{(3)}_1}, \\ L_1=\frac{L^{(4)}_3+L^{(4)}_6}{L^{(3)}_1}, \quad L_2=\frac{L^{(4)}_{12}}{L^{(3)}_1}, \quad L_3=\frac{L^{(4)}_{13}}{L^{(3)}_1}, \\ L_4=\frac{L^{(4)}_1+L^{(4)}_{15}}{L^{(3)}_1}, \quad L_5=\frac{L^{(4)}_2+L^{(4)}_{11}}{L^{(3)}_1}, \quad L_6=\frac{L^{(4)}_4+L^{(4)}_{14}}{L^{(3)}_1}, \\ L_7=\frac{L^{(4)}_5+L^{(4)}_{10}}{L^{(3)}_1}, \quad {L_8=\frac{L^{(4)}_7+L^{(4)}_8}{L^{(3)}_1}}, \quad {L_9=\frac{L^{(4)}_9}{L^{(3)}_1}} \end{gather} and $\tau_1$, $\tau_2$ and $\tau_3$ have the dimension of time. Equations \eqref{eqn:54}-\eqref{eqn:B} (or \eqref{eqn:A0}-\eqref{eqn:b}) are the full three-dimensional versions of the one-dimensional equations (12)-(13) in \cite{KovVan15a}. We split the second-order tensor into orthogonal components, i.e. \begin{equation} Q_{ij}=Q_{\langle ij \rangle}+Q_{[ij]}+Q\delta_{ij}, \end{equation} where \begin{align} Q_{\langle ij \rangle}&=\frac{1}{2}(Q_{ij}+Q_{ji})-Q\delta_{ij} \quad \text{(deviator of the symmetric part of $Q_{ij}$)},\\ Q_{[ij]}&=\frac{1}{2}(Q_{ij}-Q_{ji}) \quad \text{(skew-symmetric part of $Q_{ij}$)},\\ Q&=\frac{1}{3}Q_{kk} \quad \text{(scalar part of $Q_{ij}$)}. \end{align} \noindent From equation \eqref{eqn:B} we derive the rate equation for $Q$ ($i=j$): \begin{equation} \begin{split} &3(3\tau_1+\tau_2+\tau_3)\dot{Q}+3(3+l^3_2+l^3_3)Q=(3l_{21}+l_{31}+l_{41})q_{k,k}\\ & \quad +3(L_1+L_2+L_3+3L_9)Q_{,kk}+ (L_4+L_5+L_6+L_7+3L_8)Q_{kl,kl}, \end{split} \end{equation} i.e. 
\begin{equation} \label{eqn:B0} \tau^0\dot{Q}+Q=l^0q_{k,k}+L^0_1Q_{,kk}+L^0_2Q_{kl,kl}\ , \end{equation} where \begin{gather} \label{eqn:69} \tau^0=\frac{3\tau_1+\tau_2+\tau_3}{3+l^3_2+l^3_3}, \quad l^0=\frac{3l_{21}+l_{31}+l_{41}}{3(3+l^3_2+l^3_3)}, \\ L^0_1=\frac{L_1+L_2+L_3+3L_9}{3+l^3_2+l^3_3}, \quad L^0_2=\frac{L_4+L_5+L_6+L_7+3L_8}{3(3+l^3_2+l^3_3)}, \end{gather} with $\tau^0$ the relaxation time of $Q$. The rate equation for $Q_{\langle ij \rangle}$ is \begin{equation} \label{eqn:B1} \overset{\wedge}{\tau}\dot{Q}_{\langle ij \rangle}+Q_{\langle ij \rangle}=\overset{\wedge}{l}q_{\langle i,j \rangle}+{\overset{\wedge}{L}_1Q_{kk,\langle ij \rangle}}+\overset{\wedge}{L}_2Q_{\langle ij \rangle,kk}+\overset{\wedge}{L}_3Q_{k\langle i,j\rangle k}+\overset{\wedge}{L}_4Q_{\langle ik,kj\rangle}\ , \end{equation} where \begin{gather} \label{eqn:72} \overset{\wedge}{\tau}=\frac{\tau_2+\tau_3}{l^3_2+l^3_3}, \quad \overset{\wedge}{l}=\frac{l_{31}+l_{41}}{l^3_2+l^3_3}, \quad {\overset{\wedge}{L}_1=\frac{L_1}{l^3_2+l^3_3}}, \\ \overset{\wedge}{L}_2=\frac{L_2+L_3}{l^3_2+l^3_3}, \quad \overset{\wedge}{L}_3=\frac{L_5+L_7}{l^3_2+l^3_3}, \quad \overset{\wedge}{L}_4=\frac{L_4+L_6}{l^3_2+l^3_3}, \end{gather} with $\overset{\wedge}{\tau}$ the relaxation time of $Q_{\langle ij \rangle}$. Finally, the rate equation for $Q_{[ij]}$ is \begin{equation} \label{eqn:B2} \overset{\vee}{\tau}\dot{Q}_{[ij]}+Q_{[ij]}=\overset{\vee}{l}q_{[i,j]}+\overset{\vee}{L}_1Q_{[ij],kk}+\overset{\vee}{L}_2Q_{k[i,j]k}+\overset{\vee}{L}_3Q_{[ik,kj]}\ , \end{equation} where \begin{gather} \label{eqn:75} \overset{\vee}{\tau}=\frac{\tau_2-\tau_3}{l^3_2-l^3_3}, \quad \overset{\vee}{l}=\frac{l_{31}-l_{41}}{l^3_2-l^3_3}, \quad \overset{\vee}{L}_1=\frac{L_2-L_3}{l^3_2-l^3_3}, \\ \overset{\vee}{L}_2=\frac{L_7-L_5}{l^3_2-l^3_3}, \quad \overset{\vee}{L}_3=\frac{L_6-L_4}{l^3_2-l^3_3}, \end{gather} with $\overset{\vee}{\tau}$ the relaxation time of $Q_{[ij]}$. \subsection{The rate equations for q and Q with Onsager 
reciprocity} From the Onsager reciprocity relations \eqref{eqn:32}-\eqref{eqn:34}, the phenomenological equations \eqref{eqn:I1}-\eqref{eqn:I4} become \begin{align} \label{eqn:I1bis} m\dot{q_i}-b_{ji,j}&=-L^{(1)}q_i-L^{(1,4)}_1Q_{ik,k}-L^{(1,4)}_2Q_{ki,k}-L^{(1,4)}_3Q_{kk,i}, \\ \begin{split} \label{eqn:I2bis} b_{ij}-\frac{1}{T}\delta_{ij}&=L^{(2)}_1\delta_{ij}q_{k,k}+L^{(2)}_2q_{i,j}+L^{(2)}_3q_{j,i}+L^{(2,3)}_1\delta_{ij}Q_{kk}\\ & \quad +L^{(2,3)}_2Q_{ij}+L^{(2,3)}_3Q_{ji}, \end{split} \\ \begin{split} \label{eqn:I3bis} B_{kij,k}&=M_1\delta_{ij}\dot{Q}_{kk}+M_2\dot{Q}_{ij}+M_3\dot{Q}_{ji}+{L^{(2,3)}_1}\delta_{ij}q_{k,k}+{L^{(2,3)}_2}q_{i,j} \\ & \quad +{L^{(2,3)}_3}q_{j,i}+L^{(3)}_1\delta_{ij}Q_{kk}+L^{(3)}_2Q_{ij}+L^{(3)}_3Q_{ji}, \end{split} \\ \begin{split} \label{eqn:I4bis} B_{ijk}&={L^{(1,4)}_3}\delta_{ij}q_k+{L^{(1,4)}_2}\delta_{ik}q_j+{L^{(1,4)}_1}\delta_{jk}q_i \\ & \quad +\delta_{ij}\left({C^{(4)}_1}Q_{kl,l}+{C^{(4)}_2}Q_{lk,l}+{C^{(4)}_3}Q_{ll,k}\right) \\ & \quad +\delta_{ik}\left({C^{(4)}_4}Q_{jl,l}+{C^{(4)}_5}Q_{lj,l}+{C^{(4)}_2}Q_{ll,j}\right) \\ & \quad +\delta_{jk}\left({C^{(4)}_6}Q_{il,l}+{C^{(4)}_4}Q_{li,l}+{C^{(4)}_1}Q_{ll,i}\right) \\ & \quad +{C^{(4)}_7}Q_{ij,k}+{C^{(4)}_8}Q_{ik,j}+{C^{(4)}_{10}}Q_{jk,i}+{C^{(4)}_{11}}Q_{kj,i} \\ & \quad +{C^{(4)}_9}Q_{ji,k}+{C^{(4)}_{10}}Q_{ki,j}. \end{split} \end{align} We observe that equations \eqref{eqn:I1bis} and \eqref{eqn:I2bis} coincide with \eqref{eqn:I1} and \eqref{eqn:I2}. Interchanging indices $i$ and $j$ in \eqref{eqn:I2bis}, differentiating with respect to $x_j$ and substituting the obtained equation into \eqref{eqn:I1bis}, we have: \begin{equation} \label{eqn:A0bis} \begin{split} m\dot{q_i}+L^{(1)}q_i&=\left(L^{(2)}_1+L^{(2)}_2\right)q_{k,ki}+L^{(2)}_3q_{i,kk}+\left(L^{(2,3)}_1-L^{(1,4)}_3\right)Q_{kk,i} \\ & \quad +\left(L^{(2,3)}_3-L^{(1,4)}_1\right)Q_{ik,k}+\left(L^{(2,3)}_2-L^{(1,4)}_2\right)Q_{ki,k}+{\left(\frac{1}{T}\right)}_{,i}. 
\end{split} \end{equation} Equation \eqref{eqn:A0bis} can be written as follows: \begin{equation} \label{eqn:Abis} \tau\dot{q_i}+q_i=-\lambda T_{,i}+l_1q_{i,kk}+l_2q_{k,ki}+l_{12}Q_{kk,i}+l_{13}Q_{ik,k}+l_{14}Q_{ki,k}\ , \end{equation} in which the coefficients are given by \eqref{eqn:54} and \eqref{eqn:55}. We observe that the rate equations \eqref{eqn:A0bis} and \eqref{eqn:Abis} are the same as \eqref{eqn:A0} and \eqref{eqn:A} in the perfect isotropic case. Furthermore, changing the indexes $i$, $j$, $k$ into the indexes $k$, $i$, $j$, respectively, in equation \eqref{eqn:I4bis}, differentiating it with respect to $x_k$ and inserting the obtained equation into \eqref{eqn:I3bis}, we derive: \begin{equation} \label{eqn:bbis} \begin{split} &M_1\delta_{ij}\dot{Q}_{kk}+M_2\dot{Q}_{ij}+M_3\dot{Q}_{ji}+L^{(3)}_1\delta_{ij}Q_{kk}+L^{(3)}_2Q_{ij}+L^{(3)}_3Q_{ji} \\ & \quad =\left({L^{(1,4)}_1-L^{(2,3)}_1}\right)\delta_{ij}q_{k,k}+\left({L^{(1,4)}_2-L^{(2,3)}_2}\right)q_{i,j}+\left({L^{(1,4)}_3-L^{(2,3)}_3}\right)q_{j,i} \\ & \quad +\left({C^{(4)}_2+C^{(4)}_3}\right)Q_{kk,ij}+{C^{(4)}_{10}}Q_{ij,kk}+{C^{(4)}_{11}}Q_{ji,kk}+\left({C^{(4)}_1+C^{(4)}_{10}}\right)Q_{jk,ik} \\ & \quad +\left({C^{(4)}_2+C^{(4)}_8}\right)Q_{kj,ik}+\left({C^{(4)}_4+C^{(4)}_9}\right)Q_{ik,jk}+\left({C^{(4)}_5+C^{(4)}_7}\right)Q_{ki,jk} \\ & \quad +\delta_{ij}\left[\left({C^{(4)}_4+C^{(4)}_6}\right)Q_{kl,kl}+{C^{(4)}_1}Q_{ll,kk}\right], \end{split} \end{equation} i.e.: \begin{equation} \label{eqn:86_1} \begin{split} &\tau_1\delta_{ij}\dot{Q}_{kk}+\tau_2\dot{Q}_{ij}+\tau_3\dot{Q}_{ji}+\delta_{ij}Q_{kk}+l^3_2Q_{ij}+l^3_3Q_{ji}=l_{21}\delta_{ij}q_{k,k}+l_{31}q_{i,j}\\ & \quad +l_{41}q_{j,i}+C_1Q_{kk,ij}+C_2Q_{ij,kk}+C_3Q_{ji,kk}+C_4Q_{jk,ik}+C_5Q_{kj,ik} \\ & \quad +C_6Q_{ik,jk}+C_7Q_{ki,jk}+\delta_{ij}\left(C_8Q_{kl,kl}+C_9Q_{ll,kk}\right), \end{split} \end{equation} where $\tau_1$, $\tau_2$, $\tau_3$, $l^3_2$ and $l^3_3$ are given by \eqref{eqn:58}, the coefficients $l_{21}$, $l_{31}$ and $l_{41}$ transform
according to relations \eqref{eqn:32} and \eqref{eqn:33}: \begin{gather} \label{eqn:86} l_{21}=\frac{{L^{(1,4)}_1-L^{(2,3)}_1}}{L^{(3)}_1}, \quad l_{31}=\frac{{L^{(1,4)}_2-L^{(2,3)}_2}}{L^{(3)}_1}, \quad l_{41}=\frac{{L^{(1,4)}_3-L^{(2,3)}_3}}{L^{(3)}_1}, \end{gather} and moreover \begin{gather} \label{eqn:89} C_1=\frac{{C^{(4)}_2+C^{(4)}_3}}{L^{(3)}_1}, \quad C_2=\frac{{C^{(4)}_{10}}}{L^{(3)}_1}, \quad C_3=\frac{{C^{(4)}_{11}}}{L^{(3)}_1}, \\ \label{eqn:90} C_4=\frac{{C^{(4)}_1+C^{(4)}_{10}}}{L^{(3)}_1}, \quad C_5=\frac{{C^{(4)}_2+C^{(4)}_8}}{L^{(3)}_1}, \quad C_6=\frac{{C^{(4)}_4+C^{(4)}_9}}{L^{(3)}_1}, \\ \label{eqn:91} C_7=\frac{{C^{(4)}_5+C^{(4)}_7}}{L^{(3)}_1}, \quad C_8=\frac{{C^{(4)}_4+C^{(4)}_6}}{L^{(3)}_1}, \quad C_9=\frac{{C^{(4)}_1}}{L^{(3)}_1}. \end{gather} We observe that from relations \eqref{eqn:55} and \eqref{eqn:86} we have: \begin{equation} l_{31}=-l_{14}, \quad l_{41}=-l_{12}-l_{13}-l_{21}. \end{equation} Furthermore, using $\eqref{eqn:89}_2$, $\eqref{eqn:90}_1$ and $\eqref{eqn:91}_3$ we obtain \begin{equation} C_9=C_4-C_2, \end{equation} so that equation \eqref{eqn:86_1} reads \begin{equation} \label{eqn:Bbis} \begin{split} &\tau_1\delta_{ij}\dot{Q}_{kk}+\tau_2\dot{Q}_{ij}+\tau_3\dot{Q}_{ji}+\delta_{ij}Q_{kk}+l^3_2Q_{ij}+l^3_3Q_{ji}=l_{21}\delta_{ij}q_{k,k}-l_{14}q_{i,j}\\ & \quad -(l_{12}+l_{13}+l_{21})q_{j,i}+C_1Q_{kk,ij}+C_2Q_{ij,kk}+C_3Q_{ji,kk}+C_4Q_{jk,ik}+C_5Q_{kj,ik} \\ & \quad +C_6Q_{ik,jk}+C_7Q_{ki,jk}+\delta_{ij}\left[C_8Q_{kl,kl}+({C_4-C_2})Q_{ll,kk}\right]. \end{split} \end{equation} As in the previous paragraph, we split the second order tensor $Q_{ij}$ into its orthogonal components, i.e. \begin{equation*} Q_{ij}=Q_{\langle ij \rangle}+Q_{[ij]}+Q\delta_{ij}.
\end{equation*} \noindent From equation \eqref{eqn:Bbis} we derive the rate equation for $Q$ ($i=j$): \begin{equation} \begin{split} &3(3\tau_1+\tau_2+\tau_3)\dot{Q}+3(3+l^3_2+l^3_3)Q=(2l_{21}-l_{12}-l_{13}-l_{14})q_{k,k}\\ & \quad +3[C_1-2C_2+C_3+3C_4]Q_{,kk}+(C_4+C_5+C_6+C_7+3C_8)Q_{kl,kl}, \end{split} \end{equation} i.e. \begin{equation} \label{eqn:B0bis} \tau^0\dot{Q}+Q=c^0q_{k,k}+C^0_1Q_{,kk}+C^0_2Q_{kl,kl}\ , \end{equation} where $\tau^0$ is given by $\eqref{eqn:69}_1$ and \begin{equation} c^0=\frac{2l_{21}-l_{12}-l_{13}-l_{14}}{3+l^3_2+l^3_3}, \quad C^0_1=\frac{C_1{-2C_2}+C_3+{3C_4}}{3+l^3_2+l^3_3}, \quad C^0_2=\frac{C_4+C_5+C_6+C_7+3C_8}{3(3+l^3_2+l^3_3)}; \end{equation} the rate equation for $Q_{\langle ij \rangle}$: \begin{equation} \label{eqn:B1bis} \overset{\wedge}{\tau}\dot{Q}_{\langle ij \rangle}+Q_{\langle ij \rangle}=\overset{\wedge}{c}q_{\langle i,j \rangle}+\overset{\wedge}{C}_1Q_{kk,\langle ij \rangle}+\overset{\wedge}{C}_2Q_{\langle ij \rangle,kk}+\overset{\wedge}{C}_3Q_{k\langle i,j\rangle k}+\overset{\wedge}{C}_4Q_{\langle ik,kj\rangle}\ , \end{equation} where $\overset{\wedge}{\tau}$ is given by $\eqref{eqn:72}_{1}$ and \begin{equation} \overset{\wedge}{c}=-\frac{l_{12}+l_{13}+l_{14}+l_{21}}{l^3_2+l^3_3}, \quad \overset{\wedge}{C}_1=\frac{C_1}{l^3_2+l^3_3}, \quad \overset{\wedge}{C}_2=\frac{C_2+C_3}{l^3_2+l^3_3}, \quad \overset{\wedge}{C}_3=\frac{C_5+C_7}{l^3_2+l^3_3}, \quad \overset{\wedge}{C}_4=\frac{C_4+C_6}{l^3_2+l^3_3}; \end{equation} finally the rate equation for $Q_{[ij]}$: \begin{equation} \label{eqn:B2bis} \overset{\vee}{\tau}\dot{Q}_{[ij]}+Q_{[ij]}=\overset{\vee}{c}q_{[i,j]}+\overset{\vee}{C}_1Q_{[ij],kk}+\overset{\vee}{C}_2Q_{k[i,j]k}+\overset{\vee}{C}_3Q_{[ik,kj]}\ , \end{equation} where $\overset{\vee}{\tau}$ is given by $\eqref{eqn:75}_{1}$ and \begin{equation} \overset{\vee}{c}=\frac{l_{12}+l_{13}-l_{14}+l_{21}}{l^3_2-l^3_3}, \quad \overset{\vee}{C}_1=\frac{C_2-C_3}{l^3_2-l^3_3}, \quad
\overset{\vee}{C}_2=\frac{C_7-C_5}{l^3_2-l^3_3}, \quad \overset{\vee}{C}_3=\frac{C_6-C_4}{l^3_2-l^3_3}. \end{equation} Therefore, in the case of Onsager reciprocity the rate equations are simplified and the number of coefficients is reduced from $38$ to $32$ when compared to the perfect isotropic case. \subsection{One-dimensional heat conduction} In the one-dimensional case, where \begin{equation} \mathbf{q}=(q,0,0), \quad \mathbf{Q}= \begin{bmatrix} Q & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} , \quad \mathbf{b}= \begin{bmatrix} b & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \end{equation} the unique component of the third-order tensor $\mathbf{B}$ is $B\equiv B_{111}$ and $Q=Q_{11}$ indicates the unique component of $Q_{ij}$, the system of equations \eqref{eqn:I1bis}-\eqref{eqn:I4bis} becomes \begin{align} \label{eqn:mon_1} m\dot{q}-b_{,x}&=-L^{(1)}q-L^{(1,4)}Q_{,x}, \\ \label{eqn:mon_2} b-\frac{1}{T}&=L^{(2)}q_{,x}+L^{(2,3)}Q, \\ \label{eqn:mon_3} M\dot{Q}-B_{,x}&=-L^{(2,3)}q_{,x}-L^{(3)}Q, \\ \label{eqn:mon_4} B&=L^{(1,4)}q+C^{(4)}Q_{,x}, \end{align} where \begin{gather} L^{(1,4)}=L^{(1,4)}_1+L^{(1,4)}_2+L^{(1,4)}_3, \quad L^{(2)}=L^{(2)}_1+L^{(2)}_2+L^{(2)}_3, \\[0.5em] L^{(2,3)}=L^{(2,3)}_1+L^{(2,3)}_2+L^{(2,3)}_3, \quad M=M_1+M_2+M_3, \\[0.5em] L^{(3)}=L^{(3)}_1+L^{(3)}_2+L^{(3)}_3, \\[0.5em] \begin{split} C^{(4)}=2C^{(4)}_1+2C^{(4)}_2+C^{(4)}_3+2C^{(4)}_4&+C^{(4)}_5+C^{(4)}_6+C^{(4)}_7+C^{(4)}_8+ \\ & +C^{(4)}_9+2C^{(4)}_{10}+C^{(4)}_{11}, \end{split} \end{gather} with $(\cdot)_{,x}$ indicating the derivative of $(\cdot)$ with respect to $x$.
In this case the entropy production \eqref{eqn:36} assumes the form \begin{equation} \label{eqn:sigma_mon} \sigma^{(s)}=L^{(1)}q^2+L^{(2)}(q_{,x})^2+L^{(3)}Q^2+C^{(4)}(Q_{,x})^2 + 2L^{(1,4)}qQ_{,x}+2L^{(2,3)}q_{,x}Q\geq 0, \end{equation} or in symbolic matrix notation: \begin{equation} \sigma^{(s)}= \begin{pmatrix} q & q_{,x} & Q & Q_{,x} \end{pmatrix} \underbrace{ \begin{pmatrix} L^{(1)} & 0 & 0 & L^{(1,4)} \\[0.5em] 0 & L^{(2)} & L^{(2,3)} & 0 \\[0.5em] 0 & L^{(2,3)} & L^{(3)} & 0 \\[0.5em] L^{(1,4)} & 0 & 0 & C^{(4)} \end{pmatrix} }_{\displaystyle \mathcal{B}} \begin{pmatrix} q \\[0.5em] q_{,x} \\[0.5em] Q \\[0.5em] Q_{,x} \end{pmatrix} \geq 0. \end{equation} \noindent Because the bilinear form \eqref{eqn:sigma_mon} must be non-negative, the matrix $\mathcal{B}$ associated with this form is positive semidefinite, so that the elements of its main diagonal and its principal minors must be non-negative: \begin{gather} L^{(1)}\geq 0, \quad L^{(2)}\geq 0, \quad L^{(3)}\geq 0, \quad C^{(4)}\geq 0, \\[0.5em] L^{(2)}L^{(3)}-\left(L^{(2,3)}\right)^2\geq 0, \quad L^{(1)}C^{(4)}-\left(L^{(1,4)}\right)^2\geq 0. \end{gather} Using \eqref{eqn:mon_2} and \eqref{eqn:mon_4}, equations \eqref{eqn:mon_1} and \eqref{eqn:mon_3} become \begin{align} \label{eqn:125} mq_{,t}+L^{(1)}q-L^{(2)}q_{,xx}&=\left(\frac{1}{T}\right)_{,x}-DQ_{,x},\\ \label{eqn:126} MQ_{,t}+L^{(3)}Q-C^{(4)}Q_{,xx}&=Dq_{,x}, \end{align} \noindent where $D=L^{(1,4)}-L^{(2,3)}$ and $\frac{M}{L^{(3)}}$ is the relaxation time of the internal variable $Q$, denoted in the following by $\tau^J$. Moreover, we have supposed that the body is at rest, so that the material derivative coincides with the partial time derivative, denoted by $(\cdot)_{,t}$.
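The non-negativity conditions on $\mathcal{B}$ derived above can be verified mechanically, in the same spirit as the computer-algebra checks of the full conductivity matrix. Below is a minimal pure-Python sketch; the numerical coefficient values are hypothetical, chosen only so that the stated inequalities hold, and are not taken from the paper:

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    # Laplace expansion along the first row; fine for tiny matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j, a in enumerate(m[0]))

def is_psd(m):
    # A symmetric matrix is positive semidefinite iff ALL of its
    # principal minors are non-negative (Sylvester-type criterion).
    n = len(m)
    return all(det([[m[i][j] for j in idx] for i in idx]) >= 0
               for k in range(1, n + 1)
               for idx in combinations(range(n), k))

# Hypothetical coefficient values (not from the paper), chosen to obey
# L1, L2, L3, C4 >= 0, L2*L3 - L23^2 >= 0 and L1*C4 - L14^2 >= 0.
L1, L2, L3, C4, L23, L14 = (Fraction(v) for v in (4, 3, 2, 5, 2, 3))

B = [[L1,  0,   0,   L14],
     [0,   L2,  L23, 0],
     [0,   L23, L3,  0],
     [L14, 0,   0,   C4]]

print(is_psd(B))  # True: the listed inequalities hold for these values
```

Because $\mathcal{B}$ decouples into the pairs $(q, Q_{,x})$ and $(q_{,x}, Q)$, checking all principal minors reduces here to exactly the diagonal and $2\times 2$ conditions listed above.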
Equations \eqref{eqn:125} and \eqref{eqn:126} are analogous to equations $(12)$ and $(13)$ of \cite{KovVan16a}. \noindent Eliminating $Q$ from \eqref{eqn:125} and \eqref{eqn:126}, i.e. applying the operator $M\partial_t+L^{(3)}-C^{(4)}\partial_{xx}$ to \eqref{eqn:125} and using \eqref{eqn:126}, we can write the following \textit{equation of heat conduction} \begin{equation} \label{eqn:heat} \begin{split} &mMq_{,tt}+\left(ML^{(1)}+mL^{(3)}\right)q_{,t}-\left(mC^{(4)}+ML^{(2)}\right)q_{,xxt}+C^{(4)}L^{(2)}q_{,xxxx} \\ & \quad -\left(L^{(1)}C^{(4)}+H\right)q_{,xx}+L^{(3)}L^{(1)}q=M\left(\frac{1}{T}\right)_{,xt}+L^{(3)}\left(\frac{1}{T}\right)_{,x} -C^{(4)}\left(\frac{1}{T}\right)_{,xxx}, \end{split} \end{equation} where \begin{equation} H=L^{(3)}L^{(2)}-D^2. \end{equation} \noindent Thus, we derive \begin{equation} \label{eqn:heat_1} \tau\tau^Jq_{,tt}+\tau^qq_{,t}+q-\alpha q_{,xxt}+\beta q_{,xxxx}-\gamma q_{,xx}=\nu\left(\frac{1}{T}\right)_{,xt}-\lambda\left(\frac{1}{T}\right)_{,x} -\zeta\left(\frac{1}{T}\right)_{,xxx}, \end{equation} where \begin{gather} H=L^{(3)}L^{(2)}-D^2, \quad \tau^q=\tau+\tau^J, \quad \alpha=\frac{mC^{(4)}+ML^{(2)}}{L^{(1)}L^{(3)}}, \quad \beta=\frac{C^{(4)}L^{(2)}}{L^{(1)}L^{(3)}}, \\ \gamma=\frac{L^{(1)}C^{(4)}+H}{L^{(1)}L^{(3)}}, \quad \nu=\frac{M}{L^{(1)}L^{(3)}}, \quad \zeta=\frac{C^{(4)}}{L^{(1)}L^{(3)}}. \end{gather} In \eqref{eqn:heat_1} we see that the relaxation time $\tau^q=\tau+\tau^J$ is given by two contributions: the first comes from the relaxation time of the flux and the second comes from the relaxation time of the internal variable. \subsubsection{Special cases} From \eqref{eqn:heat}, it is possible to derive some special cases. \medskip \paragraph{\textit{Ballistic-conductive}.} In the case where $C^{(4)}=L^{(2)}=0$, the heat equation \eqref{eqn:heat} becomes: \begin{equation} mMq_{,tt}+\left(ML^{(1)}+mL^{(3)}\right)q_{,t}-D^2q_{,xx}+L^{(3)}L^{(1)}q=M\left(\frac{1}{T}\right)_{,xt}+L^{(3)}\left(\frac{1}{T}\right)_{,x}.
\end{equation} Thus, we can write \begin{equation} \tau\tau^Jq_{,tt}+\tau^qq_{,t}+q-\eta q_{,xx}=\nu\left(\frac{1}{T}\right)_{,xt}-\lambda T_{,x}, \end{equation} where $\eta=\frac{D^2}{L^{(1)}L^{(3)}}$. \medskip \paragraph{\textit{Guyer-Krumhansl}.} In the case where $C^{(4)}=M=0$, the heat equation \eqref{eqn:heat} becomes: \begin{equation} mL^{(3)}q_{,t}-Hq_{,xx}+L^{(3)}L^{(1)}q=L^{(3)}\left(\frac{1}{T}\right)_{,x}, \end{equation} then we obtain \begin{equation} \tau q_{,t}-l^2 q_{,xx}+q=-\lambda T_{,x}, \end{equation} where \begin{equation} l^2=\frac{H}{L^{(1)}L^{(3)}}, \end{equation} with $l$ the mean free path of the heat carriers, i.e. the average length between successive collisions amongst them. We observe that only in the Guyer-Krumhansl heat equation does the coefficient multiplying the field $q_{,xx}$ have the physical meaning of $l^2$. \medskip \paragraph{\textit{Cahn-Hilliard type}.} In the case where $C^{(4)}=M=m=0$, the heat equation \eqref{eqn:heat} becomes: \begin{equation} L^{(3)}L^{(1)}q-Hq_{,xx}=L^{(3)}\left(\frac{1}{T}\right)_{,x}, \end{equation} from which we obtain \begin{equation} q-\gamma q_{,xx}=-\lambda T_{,x}. \end{equation} \medskip \paragraph{\textit{Jeffreys type}.} In the case where $C^{(4)}=L^{(2)}=m=D=0$, the heat equation \eqref{eqn:heat} becomes: \begin{equation} ML^{(1)}q_{,t}+L^{(3)}L^{(1)}q=M\left(\frac{1}{T}\right)_{,xt}+L^{(3)}\left(\frac{1}{T}\right)_{,x}, \end{equation} thus we derive: \begin{equation} \tau^Jq_{,t}+q=\nu\left(\frac{1}{T}\right)_{,xt}-\lambda T_{,x}. \end{equation} We note that in this equation $\tau^J$ is the relaxation time of $q$. \medskip \paragraph{\textit{Maxwell-Cattaneo-Vernotte}.} In the case where $C^{(4)}=M=L^{(2)}=D=0$, the heat equation \eqref{eqn:heat} becomes: \begin{equation} mq_{,t}+L^{(1)}q=\left(\frac{1}{T}\right)_{,x}, \end{equation} from which we have: \begin{equation} \tau q_{,t}+q=-\lambda T_{,x}.
\end{equation} \medskip \paragraph{\textit{Fourier}.} In the case where $C^{(4)}=M=L^{(2)}=D=m=0$, the heat equation \eqref{eqn:heat} becomes: \begin{equation} L^{(1)}q=\left(\frac{1}{T}\right)_{,x}, \end{equation} i.e. \begin{equation} q=-\lambda T_{,x}. \end{equation} \section{Discussion and conclusions} In this paper ballistic heat conduction in isotropic materials was treated in the framework of Non-Equilibrium Thermodynamics with Internal Variables (NET-IV). Onsager reciprocity was considered and the consequences were derived. Two-dimensional formulas that are suitable for numerical calculations are shown in the Appendix. The conditions of positive definiteness of the corresponding conductivity matrix can be calculated directly with the help of computer algebra programs. We have obtained a complete set of equations for generalized ballistic-conductive heat conduction in isotropic rigid conductors for the variables $T,q_i,Q_{ij}$. These are the balance of internal energy \eqref{balance1} with the caloric equation of state $s'_{eq}(e)=1/T$ and the balance-type constitutive equations \eqref{eqn:A} and \eqref{eqn:B} in the perfect isotropic case, or \eqref{eqn:Abis} and \eqref{eqn:Bbis} with Onsagerian reciprocity. Equations \eqref{eqn:A} and \eqref{eqn:Abis} turned out to be identical. There are two different aspects of ballistic heat conduction in continua. From the point of view of kinetic theory it is the propagation of phonons without collisions with the lattice. Then heat is reflected only at the boundaries of the medium. This microscopic understanding is the foundation of the so-called ballistic-diffusive integro-differential model of Chen \cite{Che01a,Che02a,TanEta16a,TanEta16a1,TanEta17a} and leads to two independent continuum representations. First, it is a particular boundary condition for continuum theories that can be introduced also into second sound models, like the Guyer-Krumhansl equation \cite{AlvEta12a}.
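The special-case hierarchy derived above from \eqref{eqn:heat} lends itself to a quick numerical sanity check. The sketch below (with hypothetical coefficient values, not taken from the paper) collects the left-hand-side coefficients of the normalized equation \eqref{eqn:heat_1} and confirms the relaxation-time identities $\tau=m/L^{(1)}$ and $\tau^J=M/L^{(3)}$ (as implied by $\tau^q=\tau+\tau^J$), as well as the Guyer-Krumhansl and Maxwell-Cattaneo-Vernotte limits:

```python
from fractions import Fraction as F

def heat_coeffs(m, M, L1, L2, L3, C4, D):
    """Left-hand-side coefficients of the normalized heat equation
    tau*tauJ*q_tt + tau_q*q_t + q - alpha*q_xxt + beta*q_xxxx - gamma*q_xx,
    obtained from the general equation by dividing through by L1*L3."""
    H = L3 * L2 - D * D
    N = L1 * L3
    return {
        "q_tt":   m * M / N,              # tau * tau^J
        "q_t":    (M * L1 + m * L3) / N,  # tau^q
        "q_xxt":  (m * C4 + M * L2) / N,  # alpha
        "q_xxxx": C4 * L2 / N,            # beta
        "q_xx":   (L1 * C4 + H) / N,      # gamma
    }

# Hypothetical sample values (not from the paper).
m, M, L1, L2, L3, C4, D = (F(v) for v in (2, 3, 4, 5, 6, 7, 1))
tau, tauJ = m / L1, M / L3  # relaxation times of q and of the internal variable

c = heat_coeffs(m, M, L1, L2, L3, C4, D)
assert c["q_tt"] == tau * tauJ
assert c["q_t"] == tau + tauJ

# Guyer-Krumhansl limit (C4 = M = 0): tau*q_t + q - l^2*q_xx survives,
# with l^2 = H / (L1*L3).
gk = heat_coeffs(m, F(0), L1, L2, L3, F(0), D)
assert gk["q_tt"] == gk["q_xxt"] == gk["q_xxxx"] == 0
assert gk["q_t"] == tau and gk["q_xx"] == (L2 * L3 - D * D) / (L1 * L3)

# Maxwell-Cattaneo-Vernotte limit (C4 = M = L2 = D = 0): tau*q_t + q only.
mcv = heat_coeffs(m, F(0), L1, F(0), L3, F(0), F(0))
assert mcv["q_t"] == tau and mcv["q_xx"] == 0 and mcv["q_xxxx"] == 0
print("all special-case reductions check out")
```

Exact rational arithmetic (`Fraction`) is used so the coefficient identities are checked without floating-point noise.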
On the other hand, for ballistic phonons the speed of propagation is equal to the speed of 'first' sound, the speed of elastic waves in the medium. The speed of propagation is independent of the boundary conditions in a continuum approach and this is the meaning of ballistic in our theory, in accordance with Rational Extended Thermodynamics (RET) \cite{DreStr93a,MulRug98b}. It is also remarkable that Chen's model is equivalent to a two-component extended continuum heat conduction theory, as it was shown by Lebon et al. \cite{LebEta11a,Leb14a}. Theories of Extended Thermodynamics (ET) assume that the constitutive equations are local, and the evolution equations are balances with a characteristic coupling, i.e. the previous fluxes being the consecutive state variables in the system. These two basic assumptions are consequences of the definition of the macroscopic fields as moments of the single particle phase space probability density and of the Boltzmann equation. In our case, with internal variables, this structure is the consequence of the second law and can be observed on the left hand side of (\ref{eqn:general1}) and (\ref{eqn:general3}). Thus the most important aspects of ET are well represented. On the other hand, NET-IV has many material coefficients that are missing in ET, in particular in Rational Extended Thermodynamics, where only the two relaxation times of the Callaway collision integral represent the material properties. This property of RET is very attractive, but the price is not only that the validity of the theory is connected to the particularities of the microscopic model, but also that the speed of the ballistic propagation, the speed of elastic waves, can be obtained exactly only when considering the complete moment series, or practically by using dozens of evolution equations (with consecutively increasing tensorial orders) \cite{MulRug98b}.
The low number of material coefficients thus leads to a large number of evolution equations when modelling the ballistic propagation of heat. The three-dimensional structure of ET and NET-IV for heat conduction in isotropic materials opens the field to building and solving realistic models of two- and three-dimensional experimental setups, where the two theories lead to different predictions. \section{Acknowledgement} The work was supported by the National Research, Development and Innovation Office grants NKFIH 116197 (116375), NKFIH 124366 (124508) and NKFIH 123815. The authors thank Robert Kovács for valuable discussions. \newpage \section*{Appendix} Here we give a two-dimensional symmetric representation of the conductivity matrix $\{\mathcal{L}_{\alpha\beta}\}$. This form is useful when the conditions of positive definiteness are to be calculated. Entropy production \eqref{eqn:EI} can also be written in symbolic matrix notation: \begin{equation} X_\alpha \mathcal{L}_{\alpha\beta}X_{\beta}\geq 0, \end{equation} where \begin{equation} \label{eqn:IXT} \begin{split} \{X_\alpha\}&= \{ q_i \ ; \ q_{i,j} \ ; \ Q_{ij} \ ; \ Q_{ij,p} \}= \\ & =\{ q_1 \ ; \ q_2 \ ; \ q_3 \ ; \ q_{1,1} \ ; \ q_{1,2} \ ; \ q_{1,3} \ ; \ q_{2,1} \ ; \ q_{2,2} \ ; \ q_{2,3} \ ; \ q_{3,1} \ ; \ q_{3,2} \ ; \ q_{3,3} \ ; \\ & \qquad Q_{11,1} \ ; \ Q_{11,2} \ ; \ Q_{11,3} \ ; \ Q_{12,1} \ ; \ Q_{12,2} \ ; \ Q_{12,3} \ ; \ Q_{13,1} \ ; \ Q_{13,2} \ ; \ Q_{13,3} \ ; \\ & \qquad Q_{21,1} \ ; \ Q_{21,2} \ ; \ Q_{21,3} \ ; \ Q_{22,1} \ ; \ Q_{22,2} \ ; \ Q_{22,3} \ ; \ Q_{23,1} \ ; \ Q_{23,2} \ ; \ Q_{23,3} \ ; \\ & \qquad Q_{31,1} \ ; \ Q_{31,2} \ ; \ Q_{31,3} \ ; \ Q_{32,1} \ ; \ Q_{32,2} \ ; \ Q_{32,3} \ ; \ Q_{33,1} \ ; \ Q_{33,2} \ ; \ Q_{33,3} \}, \\[0.5em] & \quad (\alpha=1,\ldots,48), \end{split} \end{equation} \begin{equation} \label{eqn:IX} \{X_\beta\}= \begin{Bmatrix} q_k \\[0.5em] q_{k,l} \\[0.5em] Q_{kl} \\[0.5em] Q_{lm,n} \end{Bmatrix}, \quad (\beta=1,\ldots,48), \end{equation} \noindent and for
$\mathcal{L}_{\alpha\beta}$ we introduce the following notation: \begin{equation} \label{eqn:Imatrix} \{\mathcal{L}_{\alpha\beta}\}= \left(\begin{array}{@{}c|c|c|c@{}} \overset{3\times 3}{\mathcal{L}^{(1)}_{ik}} & \overset{3\times 9}{0} & \overset{3\times 9}{0} & \overset{3\times 27}{\mathcal{L}^{(1,4)}_{ilmn}} \\[0.5em] \hline \overset{9\times 3}{0} & \overset{}{\overset{9\times 9}{\mathcal{L}^{(2)}_{jikl}}} & \overset{9\times 9}{\mathcal{L}^{(2,3)}_{jikl}} & \overset{9\times 27}{0} \\[0.5em] \hline \overset{9\times 3}{0} & \overset{}{\overset{9\times 9}{\mathcal{L}^{(3,2)}_{ijkl}}} & \overset{9\times 9}{\mathcal{L}^{(3)}_{ijkl}} & \overset{9\times 27}{0} \\[0.5em] \hline \overset{}{\overset{27\times 3}{\mathcal{L}^{(4,1)}_{kpij}}} & \overset{27\times 9}{0} & \overset{27\times 9}{0} & \overset{27\times 27}{\mathcal{L}^{(4)}_{pijlmn}} \end{array}\right) \quad (\alpha , \beta=1,\ldots,48), \end{equation} \noindent in which $\overset{n\times m}{0}$ is the symbolic null matrix of dimension $n\times m$. 
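Assembling the $48\times 48$ matrix \eqref{eqn:Imatrix} from its blocks is itself mechanical, which is what makes the computer-algebra route to the positive-definiteness conditions practical. Below is a minimal pure-Python sketch; the block entries are numerical placeholders, not the paper's coefficients — only the $3/9/9/27$ partition and the reciprocity relations $\mathcal{L}^{(4,1)}=(\mathcal{L}^{(1,4)})^T$, $\mathcal{L}^{(3,2)}=(\mathcal{L}^{(2,3)})^T$ are taken from the text:

```python
def zeros(n, m):
    return [[0.0] * m for _ in range(n)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def diag(n, value):
    return [[value if i == j else 0.0 for j in range(n)] for i in range(n)]

def assemble_blocks(grid):
    """Assemble a dense matrix from a 2-D grid of blocks; within each
    grid row all blocks share a height (here the 3/9/9/27 partition)."""
    rows = []
    for brow in grid:
        for i in range(len(brow[0])):
            rows.append([x for blk in brow for x in blk[i]])
    return rows

# Placeholder blocks with the dimensions of the partition in the text;
# the numerical entries are stand-ins, NOT the paper's coefficients.
L1, L2, L3, L4 = diag(3, 2.0), diag(9, 1.0), diag(9, 3.0), diag(27, 4.0)
L23 = diag(9, 0.2)                       # coupling block of q_{i,j} and Q_{kl}
L41 = [[0.5 if i % 3 == j else 0.0 for j in range(3)] for i in range(27)]  # 27x3

L = assemble_blocks([
    [L1,             zeros(3, 9),    zeros(3, 9),  transpose(L41)],
    [zeros(9, 3),    L2,             L23,          zeros(9, 27)],
    [zeros(9, 3),    transpose(L23), L3,           zeros(9, 27)],
    [L41,            zeros(27, 9),   zeros(27, 9), L4],
])

print(len(L), len(L[0]))  # 48 48
print(L == transpose(L))  # True: reciprocity makes the matrix symmetric
```

From here the matrix can be handed to a computer algebra or linear algebra system to test the non-negativity of its principal minors.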
In the following we have written some sub-matrices that appear in \eqref{eqn:Imatrix}, in particular \begin{equation} \mathcal{L}^{(1)}_{ik}= \begin{pmatrix} L^{(1)} & 0 & 0 \\ 0 & L^{(1)} & 0 \\ 0 & 0 & L^{(1)} \end{pmatrix} \end{equation} \begin{equation} \qquad \mathcal{L}^{(1,4)}_{ilmn}= \begin{pmatrix} \mathcal{L}^{(1,4)}_{1111} & \mathcal{L}^{(1,4)}_{2111} & \mathcal{L}^{(1,4)}_{3111} \\[0.5em] \mathcal{L}^{(1,4)}_{1112} & \mathcal{L}^{(1,4)}_{2112} & \mathcal{L}^{(1,4)}_{3112} \\[0.5em] \mathcal{L}^{(1,4)}_{1113} & \mathcal{L}^{(1,4)}_{2113} & \mathcal{L}^{(1,4)}_{3113} \\[0.5em] \mathcal{L}^{(1,4)}_{1121} & \mathcal{L}^{(1,4)}_{2121} & \mathcal{L}^{(1,4)}_{3121} \\[0.5em] \mathcal{L}^{(1,4)}_{1122} & \mathcal{L}^{(1,4)}_{2122} & \mathcal{L}^{(1,4)}_{3122} \\[0.5em] \mathcal{L}^{(1,4)}_{1123} & \mathcal{L}^{(1,4)}_{2123} & \mathcal{L}^{(1,4)}_{3123} \\[0.5em] \mathcal{L}^{(1,4)}_{1131} & \mathcal{L}^{(1,4)}_{2131} & \mathcal{L}^{(1,4)}_{3131} \\[0.5em] \mathcal{L}^{(1,4)}_{1132} & \mathcal{L}^{(1,4)}_{2132} & \mathcal{L}^{(1,4)}_{3132} \\[0.5em] \mathcal{L}^{(1,4)}_{1133} & \mathcal{L}^{(1,4)}_{2133} & \mathcal{L}^{(1,4)}_{3133} \\[0.5em] \mathcal{L}^{(1,4)}_{1211} & \mathcal{L}^{(1,4)}_{2211} & \mathcal{L}^{(1,4)}_{3211} \\[0.5em] \mathcal{L}^{(1,4)}_{1212} & \mathcal{L}^{(1,4)}_{2212} & \mathcal{L}^{(1,4)}_{3212} \\[0.5em] \mathcal{L}^{(1,4)}_{1213} & \mathcal{L}^{(1,4)}_{2213} & \mathcal{L}^{(1,4)}_{3213} \\[0.5em] \mathcal{L}^{(1,4)}_{1221} & \mathcal{L}^{(1,4)}_{2221} & \mathcal{L}^{(1,4)}_{3221} \\[0.5em] \mathcal{L}^{(1,4)}_{1222} & \mathcal{L}^{(1,4)}_{2222} & \mathcal{L}^{(1,4)}_{3222} \\[0.5em] \mathcal{L}^{(1,4)}_{1223} & \mathcal{L}^{(1,4)}_{2223} & \mathcal{L}^{(1,4)}_{3223} \\[0.5em] \mathcal{L}^{(1,4)}_{1231} & \mathcal{L}^{(1,4)}_{2231} & \mathcal{L}^{(1,4)}_{3231} \\[0.5em] \mathcal{L}^{(1,4)}_{1232} & \mathcal{L}^{(1,4)}_{2232} & \mathcal{L}^{(1,4)}_{3232} \\[0.5em] \mathcal{L}^{(1,4)}_{1233} & \mathcal{L}^{(1,4)}_{2233} &
\mathcal{L}^{(1,4)}_{3233} \\[0.5em] \mathcal{L}^{(1,4)}_{1311} & \mathcal{L}^{(1,4)}_{2311} & \mathcal{L}^{(1,4)}_{3311} \\[0.5em] \mathcal{L}^{(1,4)}_{1312} & \mathcal{L}^{(1,4)}_{2312} & \mathcal{L}^{(1,4)}_{3312} \\[0.5em] \mathcal{L}^{(1,4)}_{1313} & \mathcal{L}^{(1,4)}_{2313} & \mathcal{L}^{(1,4)}_{3313} \\[0.5em] \mathcal{L}^{(1,4)}_{1321} & \mathcal{L}^{(1,4)}_{2321} & \mathcal{L}^{(1,4)}_{3321} \\[0.5em] \mathcal{L}^{(1,4)}_{1322} & \mathcal{L}^{(1,4)}_{2322} & \mathcal{L}^{(1,4)}_{3322} \\[0.5em] \mathcal{L}^{(1,4)}_{1323} & \mathcal{L}^{(1,4)}_{2323} & \mathcal{L}^{(1,4)}_{3323} \\[0.5em] \mathcal{L}^{(1,4)}_{1331} & \mathcal{L}^{(1,4)}_{2331} & \mathcal{L}^{(1,4)}_{3331} \\[0.5em] \mathcal{L}^{(1,4)}_{1332} & \mathcal{L}^{(1,4)}_{2332} & \mathcal{L}^{(1,4)}_{3332} \\[0.5em] \mathcal{L}^{(1,4)}_{1333} & \mathcal{L}^{(1,4)}_{2333} & \mathcal{L}^{(1,4)}_{3333} \end{pmatrix} ^T= \begin{pmatrix} L^{(1,4)} & 0 & 0 \\[0.3em] 0 & L^{(1,4)}_3 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_3 \\[0.3em] 0 & L^{(1,4)}_2 & 0 \\[0.3em] L^{(1,4)}_1 & 0 & 0 \\[0.3em] 0 & 0 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_2 \\[0.3em] 0 & 0 & 0 \\[0.3em] L^{(1,4)}_1 & 0 & 0 \\[0.3em] 0 & L^{(1,4)}_1 & 0 \\[0.3em] L^{(1,4)}_2 & 0 & 0 \\[0.3em] 0 & 0 & 0 \\[0.3em] L^{(1,4)}_3 & 0 & 0 \\[0.3em] 0 & L^{(1,4)} & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_3 \\[0.3em] 0 & 0 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_2 \\[0.3em] 0 & L^{(1,4)}_1 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_1 \\[0.3em] 0 & 0 & 0 \\[0.3em] L^{(1,4)}_2 & 0 & 0 \\[0.3em] 0 & 0 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_1 \\[0.3em] 0 & L^{(1,4)}_2 & 0 \\[0.3em] L^{(1,4)}_3 & 0 & 0 \\[0.3em] 0 & L^{(1,4)}_3 & 0 \\[0.3em] 0 & 0 & L^{(1,4)} \end{pmatrix} ^T \end{equation} where $L^{(1,4)} \equiv L^{(1,4)}_1+L^{(1,4)}_2+L^{(1,4)}_3$. 
\begin{equation} \begin{split} \mathcal{L}^{(2)}_{jikl}&= \begin{pmatrix} \mathcal{L}^{(2)}_{1111} & \mathcal{L}^{(2)}_{1112} & \mathcal{L}^{(2)}_{1113} & \mathcal{L}^{(2)}_{1121} & \mathcal{L}^{(2)}_{1122} & \mathcal{L}^{(2)}_{1123} & \mathcal{L}^{(2)}_{1131} & \mathcal{L}^{(2)}_{1132} & \mathcal{L}^{(2)}_{1133} \\[0.5em] \mathcal{L}^{(2)}_{2111} & \mathcal{L}^{(2)}_{2112} & \mathcal{L}^{(2)}_{2113} & \mathcal{L}^{(2)}_{2121} & \mathcal{L}^{(2)}_{2122} & \mathcal{L}^{(2)}_{2123} & \mathcal{L}^{(2)}_{2131} & \mathcal{L}^{(2)}_{2132} & \mathcal{L}^{(2)}_{2133} \\[0.5em] \mathcal{L}^{(2)}_{3111} & \mathcal{L}^{(2)}_{3112} & \mathcal{L}^{(2)}_{3113} & \mathcal{L}^{(2)}_{3121} & \mathcal{L}^{(2)}_{3122} & \mathcal{L}^{(2)}_{3123} & \mathcal{L}^{(2)}_{3131} & \mathcal{L}^{(2)}_{3132} & \mathcal{L}^{(2)}_{3133} \\[0.5em] \mathcal{L}^{(2)}_{1211} & \mathcal{L}^{(2)}_{1212} & \mathcal{L}^{(2)}_{1213} & \mathcal{L}^{(2)}_{1221} & \mathcal{L}^{(2)}_{1222} & \mathcal{L}^{(2)}_{1223} & \mathcal{L}^{(2)}_{1231} & \mathcal{L}^{(2)}_{1232} & \mathcal{L}^{(2)}_{1233} \\[0.5em] \mathcal{L}^{(2)}_{2211} & \mathcal{L}^{(2)}_{2212} & \mathcal{L}^{(2)}_{2213} & \mathcal{L}^{(2)}_{2221} & \mathcal{L}^{(2)}_{2222} & \mathcal{L}^{(2)}_{2223} & \mathcal{L}^{(2)}_{2231} & \mathcal{L}^{(2)}_{2232} & \mathcal{L}^{(2)}_{2233} \\[0.5em] \mathcal{L}^{(2)}_{3211} & \mathcal{L}^{(2)}_{3212} & \mathcal{L}^{(2)}_{3213} & \mathcal{L}^{(2)}_{3221} & \mathcal{L}^{(2)}_{3222} & \mathcal{L}^{(2)}_{3223} & \mathcal{L}^{(2)}_{3231} & \mathcal{L}^{(2)}_{3232} & \mathcal{L}^{(2)}_{3233} \\[0.5em] \mathcal{L}^{(2)}_{1311} & \mathcal{L}^{(2)}_{1312} & \mathcal{L}^{(2)}_{1313} & \mathcal{L}^{(2)}_{1321} & \mathcal{L}^{(2)}_{1322} & \mathcal{L}^{(2)}_{1323} & \mathcal{L}^{(2)}_{1331} & \mathcal{L}^{(2)}_{1332} & \mathcal{L}^{(2)}_{1333} \\[0.5em] \mathcal{L}^{(2)}_{2311} & \mathcal{L}^{(2)}_{2312} & \mathcal{L}^{(2)}_{2313} & \mathcal{L}^{(2)}_{2321} & \mathcal{L}^{(2)}_{2322} & \mathcal{L}^{(2)}_{2323} & 
\mathcal{L}^{(2)}_{2331} & \mathcal{L}^{(2)}_{2332} & \mathcal{L}^{(2)}_{2333} \\[0.5em] \mathcal{L}^{(2)}_{3311} & \mathcal{L}^{(2)}_{3312} & \mathcal{L}^{(2)}_{3313} & \mathcal{L}^{(2)}_{3321} & \mathcal{L}^{(2)}_{3322} & \mathcal{L}^{(2)}_{3323} & \mathcal{L}^{(2)}_{3331} & \mathcal{L}^{(2)}_{3332} & \mathcal{L}^{(2)}_{3333} \end{pmatrix} = \\[1em] & = \begin{pmatrix} L^{(2)} & 0 & 0 & 0 & L^{(2)}_1 & 0 & 0 & 0 & L^{(2)}_1 \\ 0 & L^{(2)}_3 & 0 & L^{(2)}_2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & L^{(2)}_3 & 0 & 0 & 0 & L^{(2)}_2 & 0 & 0 \\ 0 & L^{(2)}_2 & 0 & L^{(2)}_3 & 0 & 0 & 0 & 0 & 0 \\ L^{(2)}_1 & 0 & 0 & 0 & L^{(2)} & 0 & 0 & 0 & L^{(2)}_1 \\ 0 & 0 & 0 & 0 & 0 & L^{(2)}_3 & 0 & L^{(2)}_2 & 0 \\ 0 & 0 & L^{(2)}_2 & 0 & 0 & 0 & L^{(2)}_3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & L^{(2)}_2 & 0 & L^{(2)}_3 & 0 \\ L^{(2)}_1 & 0 & 0 & 0 & L^{(2)}_1 & 0 & 0 & 0 & L^{(2)} \end{pmatrix} \end{split} \end{equation} where $L^{(2)} \equiv L^{(2)}_1+L^{(2)}_2+L^{(2)}_3$. \begin{equation} \begin{split} \mathcal{L}^{(2,3)}_{jikl}&= \begin{pmatrix} \mathcal{L}^{(2,3)}_{1111} & \mathcal{L}^{(2,3)}_{1112} & \mathcal{L}^{(2,3)}_{1113} & \mathcal{L}^{(2,3)}_{1121} & \mathcal{L}^{(2,3)}_{1122} & \mathcal{L}^{(2,3)}_{1123} & \mathcal{L}^{(2,3)}_{1131} & \mathcal{L}^{(2,3)}_{1132} & \mathcal{L}^{(2,3)}_{1133} \\[0.5em] \mathcal{L}^{(2,3)}_{2111} & \mathcal{L}^{(2,3)}_{2112} & \mathcal{L}^{(2,3)}_{2113} & \mathcal{L}^{(2,3)}_{2121} & \mathcal{L}^{(2,3)}_{2122} & \mathcal{L}^{(2,3)}_{2123} & \mathcal{L}^{(2,3)}_{2131} & \mathcal{L}^{(2,3)}_{2132} & \mathcal{L}^{(2,3)}_{2133} \\[0.5em] \mathcal{L}^{(2,3)}_{3111} & \mathcal{L}^{(2,3)}_{3112} & \mathcal{L}^{(2,3)}_{3113} & \mathcal{L}^{(2,3)}_{3121} & \mathcal{L}^{(2,3)}_{3122} & \mathcal{L}^{(2,3)}_{3123} & \mathcal{L}^{(2,3)}_{3131} & \mathcal{L}^{(2,3)}_{3132} & \mathcal{L}^{(2,3)}_{3133} \\[0.5em] \mathcal{L}^{(2,3)}_{1211} & \mathcal{L}^{(2,3)}_{1212} & \mathcal{L}^{(2,3)}_{1213} & \mathcal{L}^{(2,3)}_{1221} & \mathcal{L}^{(2,3)}_{1222} & 
\mathcal{L}^{(2,3)}_{1223} & \mathcal{L}^{(2,3)}_{1231} & \mathcal{L}^{(2,3)}_{1232} & \mathcal{L}^{(2,3)}_{1233} \\[0.5em] \mathcal{L}^{(2,3)}_{2211} & \mathcal{L}^{(2,3)}_{2212} & \mathcal{L}^{(2,3)}_{2213} & \mathcal{L}^{(2,3)}_{2221} & \mathcal{L}^{(2,3)}_{2222} & \mathcal{L}^{(2,3)}_{2223} & \mathcal{L}^{(2,3)}_{2231} & \mathcal{L}^{(2,3)}_{2232} & \mathcal{L}^{(2,3)}_{2233} \\[0.5em] \mathcal{L}^{(2,3)}_{3211} & \mathcal{L}^{(2,3)}_{3212} & \mathcal{L}^{(2,3)}_{3213} & \mathcal{L}^{(2,3)}_{3221} & \mathcal{L}^{(2,3)}_{3222} & \mathcal{L}^{(2,3)}_{3223} & \mathcal{L}^{(2,3)}_{3231} & \mathcal{L}^{(2,3)}_{3232} & \mathcal{L}^{(2,3)}_{3233} \\[0.5em] \mathcal{L}^{(2,3)}_{1311} & \mathcal{L}^{(2,3)}_{1312} & \mathcal{L}^{(2,3)}_{1313} & \mathcal{L}^{(2,3)}_{1321} & \mathcal{L}^{(2,3)}_{1322} & \mathcal{L}^{(2,3)}_{1323} & \mathcal{L}^{(2,3)}_{1331} & \mathcal{L}^{(2,3)}_{1332} & \mathcal{L}^{(2,3)}_{1333} \\[0.5em] \mathcal{L}^{(2,3)}_{2311} & \mathcal{L}^{(2,3)}_{2312} & \mathcal{L}^{(2,3)}_{2313} & \mathcal{L}^{(2,3)}_{2321} & \mathcal{L}^{(2,3)}_{2322} & \mathcal{L}^{(2,3)}_{2323} & \mathcal{L}^{(2,3)}_{2331} & \mathcal{L}^{(2,3)}_{2332} & \mathcal{L}^{(2,3)}_{2333} \\[0.5em] \mathcal{L}^{(2,3)}_{3311} & \mathcal{L}^{(2,3)}_{3312} & \mathcal{L}^{(2,3)}_{3313} & \mathcal{L}^{(2,3)}_{3321} & \mathcal{L}^{(2,3)}_{3322} & \mathcal{L}^{(2,3)}_{3323} & \mathcal{L}^{(2,3)}_{3331} & \mathcal{L}^{(2,3)}_{3332} & \mathcal{L}^{(2,3)}_{3333} \end{pmatrix} = \\[1em] & = \begin{pmatrix} L^{(2,3)} & 0 & 0 & 0 & L^{(2,3)}_1 & 0 & 0 & 0 & L^{(2,3)}_1 \\ 0 & L^{(2,3)}_3 & 0 & L^{(2,3)}_2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & L^{(2,3)}_3 & 0 & 0 & 0 & L^{(2,3)}_2 & 0 & 0 \\ 0 & L^{(2,3)}_2 & 0 & L^{(2,3)}_3 & 0 & 0 & 0 & 0 & 0 \\ L^{(2,3)}_1 & 0 & 0 & 0 & L^{(2,3)} & 0 & 0 & 0 & L^{(2,3)}_1 \\ 0 & 0 & 0 & 0 & 0 & L^{(2,3)}_3 & 0 & L^{(2,3)}_2 & 0 \\ 0 & 0 & L^{(2,3)}_2 & 0 & 0 & 0 & L^{(2,3)}_3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & L^{(2,3)}_2 & 0 & L^{(2,3)}_3 & 0 \\ L^{(2,3)}_1 & 0 & 0 & 0 
& L^{(2,3)}_1 & 0 & 0 & 0 & L^{(2,3)} \end{pmatrix} \end{split} \end{equation} where $L^{(2,3)} \equiv L^{(2,3)}_1+L^{(2,3)}_2+L^{(2,3)}_3$. \begin{equation} \begin{split} \mathcal{L}^{(3)}_{ijkl}&= \begin{pmatrix} \mathcal{L}^{(3)}_{1111} & \mathcal{L}^{(3)}_{1112} & \mathcal{L}^{(3)}_{1113} & \mathcal{L}^{(3)}_{1121} & \mathcal{L}^{(3)}_{1122} & \mathcal{L}^{(3)}_{1123} & \mathcal{L}^{(3)}_{1131} & \mathcal{L}^{(3)}_{1132} & \mathcal{L}^{(3)}_{1133} \\[0.5em] \mathcal{L}^{(3)}_{1211} & \mathcal{L}^{(3)}_{1212} & \mathcal{L}^{(3)}_{1213} & \mathcal{L}^{(3)}_{1221} & \mathcal{L}^{(3)}_{1222} & \mathcal{L}^{(3)}_{1223} & \mathcal{L}^{(3)}_{1231} & \mathcal{L}^{(3)}_{1232} & \mathcal{L}^{(3)}_{1233} \\[0.5em] \mathcal{L}^{(3)}_{1311} & \mathcal{L}^{(3)}_{1312} & \mathcal{L}^{(3)}_{1313} & \mathcal{L}^{(3)}_{1321} & \mathcal{L}^{(3)}_{1322} & \mathcal{L}^{(3)}_{1323} & \mathcal{L}^{(3)}_{1331} & \mathcal{L}^{(3)}_{1332} & \mathcal{L}^{(3)}_{1333} \\[0.5em] \mathcal{L}^{(3)}_{2111} & \mathcal{L}^{(3)}_{2112} & \mathcal{L}^{(3)}_{2113} & \mathcal{L}^{(3)}_{2121} & \mathcal{L}^{(3)}_{2122} & \mathcal{L}^{(3)}_{2123} & \mathcal{L}^{(3)}_{2131} & \mathcal{L}^{(3)}_{2132} & \mathcal{L}^{(3)}_{2133} \\[0.5em] \mathcal{L}^{(3)}_{2211} & \mathcal{L}^{(3)}_{2212} & \mathcal{L}^{(3)}_{2213} & \mathcal{L}^{(3)}_{2221} & \mathcal{L}^{(3)}_{2222} & \mathcal{L}^{(3)}_{2223} & \mathcal{L}^{(3)}_{2231} & \mathcal{L}^{(3)}_{2232} & \mathcal{L}^{(3)}_{2233} \\[0.5em] \mathcal{L}^{(3)}_{2311} & \mathcal{L}^{(3)}_{2312} & \mathcal{L}^{(3)}_{2313} & \mathcal{L}^{(3)}_{2321} & \mathcal{L}^{(3)}_{2322} & \mathcal{L}^{(3)}_{2323} & \mathcal{L}^{(3)}_{2331} & \mathcal{L}^{(3)}_{2332} & \mathcal{L}^{(3)}_{2333} \\[0.5em] \mathcal{L}^{(3)}_{3111} & \mathcal{L}^{(3)}_{3112} & \mathcal{L}^{(3)}_{3113} & \mathcal{L}^{(3)}_{3121} & \mathcal{L}^{(3)}_{3122} & \mathcal{L}^{(3)}_{3123} & \mathcal{L}^{(3)}_{3131} & \mathcal{L}^{(3)}_{3132} & \mathcal{L}^{(3)}_{3133} \\[0.5em] \mathcal{L}^{(3)}_{3211} & 
\mathcal{L}^{(3)}_{3212} & \mathcal{L}^{(3)}_{3213} & \mathcal{L}^{(3)}_{3221} & \mathcal{L}^{(3)}_{3222} & \mathcal{L}^{(3)}_{3223} & \mathcal{L}^{(3)}_{3231} & \mathcal{L}^{(3)}_{3232} & \mathcal{L}^{(3)}_{3233} \\[0.5em] \mathcal{L}^{(3)}_{3311} & \mathcal{L}^{(3)}_{3312} & \mathcal{L}^{(3)}_{3313} & \mathcal{L}^{(3)}_{3321} & \mathcal{L}^{(3)}_{3322} & \mathcal{L}^{(3)}_{3323} & \mathcal{L}^{(3)}_{3331} & \mathcal{L}^{(3)}_{3332} & \mathcal{L}^{(3)}_{3333} \end{pmatrix} = \\[1em] & = \begin{pmatrix} L^{(3)} & 0 & 0 & 0 & L^{(3)}_1 & 0 & 0 & 0 & L^{(3)}_1 \\ 0 & L^{(3)}_2 & 0 & L^{(3)}_3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & L^{(3)}_2 & 0 & 0 & 0 & L^{(3)}_3 & 0 & 0 \\ 0 & L^{(3)}_3 & 0 & L^{(3)}_2 & 0 & 0 & 0 & 0 & 0 \\ L^{(3)}_1 & 0 & 0 & 0 & L^{(3)} & 0 & 0 & 0 & L^{(3)}_1 \\ 0 & 0 & 0 & 0 & 0 & L^{(3)}_2 & 0 & L^{(3)}_3 & 0 \\ 0 & 0 & L^{(3)}_3 & 0 & 0 & 0 & L^{(3)}_2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & L^{(3)}_3 & 0 & L^{(3)}_2 & 0 \\ L^{(3)}_1 & 0 & 0 & 0 & L^{(3)}_1 & 0 & 0 & 0 & L^{(3)} \end{pmatrix} \end{split} \end{equation} where $L^{(3)} \equiv L^{(3)}_1+L^{(3)}_2+L^{(3)}_3$. 
\begin{equation} \begin{split} \mathcal{L}^{(3,2)}_{ijkl}&= \begin{pmatrix} \mathcal{L}^{(2,3)}_{1111} & \mathcal{L}^{(2,3)}_{1112} & \mathcal{L}^{(2,3)}_{1113} & \mathcal{L}^{(2,3)}_{1121} & \mathcal{L}^{(2,3)}_{1122} & \mathcal{L}^{(2,3)}_{1123} & \mathcal{L}^{(2,3)}_{1131} & \mathcal{L}^{(2,3)}_{1132} & \mathcal{L}^{(2,3)}_{1133} \\[0.5em] \mathcal{L}^{(2,3)}_{1211} & \mathcal{L}^{(2,3)}_{1212} & \mathcal{L}^{(2,3)}_{1213} & \mathcal{L}^{(2,3)}_{1221} & \mathcal{L}^{(2,3)}_{1222} & \mathcal{L}^{(2,3)}_{1223} & \mathcal{L}^{(2,3)}_{1231} & \mathcal{L}^{(2,3)}_{1232} & \mathcal{L}^{(2,3)}_{1233} \\[0.5em] \mathcal{L}^{(2,3)}_{1311} & \mathcal{L}^{(2,3)}_{1312} & \mathcal{L}^{(2,3)}_{1313} & \mathcal{L}^{(2,3)}_{1321} & \mathcal{L}^{(2,3)}_{1322} & \mathcal{L}^{(2,3)}_{1323} & \mathcal{L}^{(2,3)}_{1331} & \mathcal{L}^{(2,3)}_{1332} & \mathcal{L}^{(2,3)}_{1333} \\[0.5em] \mathcal{L}^{(2,3)}_{2111} & \mathcal{L}^{(2,3)}_{2112} & \mathcal{L}^{(2,3)}_{2113} & \mathcal{L}^{(2,3)}_{2121} & \mathcal{L}^{(2,3)}_{2122} & \mathcal{L}^{(2,3)}_{2123} & \mathcal{L}^{(2,3)}_{2131} & \mathcal{L}^{(2,3)}_{2132} & \mathcal{L}^{(2,3)}_{2133} \\[0.5em] \mathcal{L}^{(2,3)}_{2211} & \mathcal{L}^{(2,3)}_{2212} & \mathcal{L}^{(2,3)}_{2213} & \mathcal{L}^{(2,3)}_{2221} & \mathcal{L}^{(2,3)}_{2222} & \mathcal{L}^{(2,3)}_{2223} & \mathcal{L}^{(2,3)}_{2231} & \mathcal{L}^{(2,3)}_{2232} & \mathcal{L}^{(2,3)}_{2233} \\[0.5em] \mathcal{L}^{(2,3)}_{2311} & \mathcal{L}^{(2,3)}_{2312} & \mathcal{L}^{(2,3)}_{2313} & \mathcal{L}^{(2,3)}_{2321} & \mathcal{L}^{(2,3)}_{2322} & \mathcal{L}^{(2,3)}_{2323} & \mathcal{L}^{(2,3)}_{2331} & \mathcal{L}^{(2,3)}_{2332} & \mathcal{L}^{(2,3)}_{2333} \\[0.5em] \mathcal{L}^{(2,3)}_{3111} & \mathcal{L}^{(2,3)}_{3112} & \mathcal{L}^{(2,3)}_{3113} & \mathcal{L}^{(2,3)}_{3121} & \mathcal{L}^{(2,3)}_{3122} & \mathcal{L}^{(2,3)}_{3123} & \mathcal{L}^{(2,3)}_{3131} & \mathcal{L}^{(2,3)}_{3132} & \mathcal{L}^{(2,3)}_{3133} \\[0.5em] \mathcal{L}^{(2,3)}_{3211} & 
\mathcal{L}^{(2,3)}_{3212} & \mathcal{L}^{(2,3)}_{3213} & \mathcal{L}^{(2,3)}_{3221} & \mathcal{L}^{(2,3)}_{3222} & \mathcal{L}^{(2,3)}_{3223} & \mathcal{L}^{(2,3)}_{3231} & \mathcal{L}^{(2,3)}_{3232} & \mathcal{L}^{(2,3)}_{3233} \\[0.5em] \mathcal{L}^{(2,3)}_{3311} & \mathcal{L}^{(2,3)}_{3312} & \mathcal{L}^{(2,3)}_{3313} & \mathcal{L}^{(2,3)}_{3321} & \mathcal{L}^{(2,3)}_{3322} & \mathcal{L}^{(2,3)}_{3323} & \mathcal{L}^{(2,3)}_{3331} & \mathcal{L}^{(2,3)}_{3332} & \mathcal{L}^{(2,3)}_{3333} \end{pmatrix} = \\[1em] & = \begin{pmatrix} L^{(2,3)} & 0 & 0 & 0 & L^{(2,3)}_1 & 0 & 0 & 0 & L^{(2,3)}_1 \\ 0 & L^{(2,3)}_2 & 0 & L^{(2,3)}_3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & L^{(2,3)}_2 & 0 & 0 & 0 & L^{(2,3)}_3 & 0 & 0 \\ 0 & L^{(2,3)}_3 & 0 & L^{(2,3)}_2 & 0 & 0 & 0 & 0 & 0 \\ L^{(2,3)}_1 & 0 & 0 & 0 & L^{(2,3)} & 0 & 0 & 0 & L^{(2,3)}_1 \\ 0 & 0 & 0 & 0 & 0 & L^{(2,3)}_2 & 0 & L^{(2,3)}_3 & 0 \\ 0 & 0 & L^{(2,3)}_3 & 0 & 0 & 0 & L^{(2,3)}_2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & L^{(2,3)}_3 & 0 & L^{(2,3)}_2 & 0 \\ L^{(2,3)}_1 & 0 & 0 & 0 & L^{(2,3)}_1 & 0 & 0 & 0 & L^{(2,3)} \end{pmatrix} \end{split} \end{equation} \begin{equation} \qquad \mathcal{L}^{(4,1)}_{kpij}= \begin{pmatrix} \mathcal{L}^{(1,4)}_{1111} & \mathcal{L}^{(1,4)}_{2111} & \mathcal{L}^{(1,4)}_{3111} \\[0.5em] \mathcal{L}^{(1,4)}_{1211} & \mathcal{L}^{(1,4)}_{2211} & \mathcal{L}^{(1,4)}_{3211} \\[0.5em] \mathcal{L}^{(1,4)}_{1311} & \mathcal{L}^{(1,4)}_{2311} & \mathcal{L}^{(1,4)}_{3311} \\[0.5em] \mathcal{L}^{(1,4)}_{1112} & \mathcal{L}^{(1,4)}_{2112} & \mathcal{L}^{(1,4)}_{3112} \\[0.5em] \mathcal{L}^{(1,4)}_{1212} & \mathcal{L}^{(1,4)}_{2212} & \mathcal{L}^{(1,4)}_{3212} \\[0.5em] \mathcal{L}^{(1,4)}_{1312} & \mathcal{L}^{(1,4)}_{2312} & \mathcal{L}^{(1,4)}_{3312} \\[0.5em] \mathcal{L}^{(1,4)}_{1113} & \mathcal{L}^{(1,4)}_{2113} & \mathcal{L}^{(1,4)}_{3113} \\[0.5em] \mathcal{L}^{(1,4)}_{1213} & \mathcal{L}^{(1,4)}_{2213} & \mathcal{L}^{(1,4)}_{3213} \\[0.5em] \mathcal{L}^{(1,4)}_{1313} & 
\mathcal{L}^{(1,4)}_{2313} & \mathcal{L}^{(1,4)}_{3313} \\[0.5em] \mathcal{L}^{(1,4)}_{1121} & \mathcal{L}^{(1,4)}_{2121} & \mathcal{L}^{(1,4)}_{3121} \\[0.5em] \mathcal{L}^{(1,4)}_{1221} & \mathcal{L}^{(1,4)}_{2221} & \mathcal{L}^{(1,4)}_{3221} \\[0.5em] \mathcal{L}^{(1,4)}_{1321} & \mathcal{L}^{(1,4)}_{2321} & \mathcal{L}^{(1,4)}_{3321} \\[0.5em] \mathcal{L}^{(1,4)}_{1122} & \mathcal{L}^{(1,4)}_{2122} & \mathcal{L}^{(1,4)}_{3122} \\[0.5em] \mathcal{L}^{(1,4)}_{1222} & \mathcal{L}^{(1,4)}_{2222} & \mathcal{L}^{(1,4)}_{3222} \\[0.5em] \mathcal{L}^{(1,4)}_{1322} & \mathcal{L}^{(1,4)}_{2322} & \mathcal{L}^{(1,4)}_{3322} \\[0.5em] \mathcal{L}^{(1,4)}_{1123} & \mathcal{L}^{(1,4)}_{2123} & \mathcal{L}^{(1,4)}_{3123} \\[0.5em] \mathcal{L}^{(1,4)}_{1223} & \mathcal{L}^{(1,4)}_{2223} & \mathcal{L}^{(1,4)}_{3223} \\[0.5em] \mathcal{L}^{(1,4)}_{1323} & \mathcal{L}^{(1,4)}_{2323} & \mathcal{L}^{(1,4)}_{3323} \\[0.5em] \mathcal{L}^{(1,4)}_{1131} & \mathcal{L}^{(1,4)}_{2131} & \mathcal{L}^{(1,4)}_{3131} \\[0.5em] \mathcal{L}^{(1,4)}_{1231} & \mathcal{L}^{(1,4)}_{2231} & \mathcal{L}^{(1,4)}_{3231} \\[0.5em] \mathcal{L}^{(1,4)}_{1331} & \mathcal{L}^{(1,4)}_{2331} & \mathcal{L}^{(1,4)}_{3331} \\[0.5em] \mathcal{L}^{(1,4)}_{1132} & \mathcal{L}^{(1,4)}_{2132} & \mathcal{L}^{(1,4)}_{3132} \\[0.5em] \mathcal{L}^{(1,4)}_{1232} & \mathcal{L}^{(1,4)}_{2232} & \mathcal{L}^{(1,4)}_{3232} \\[0.5em] \mathcal{L}^{(1,4)}_{1332} & \mathcal{L}^{(1,4)}_{2332} & \mathcal{L}^{(1,4)}_{3332} \\[0.5em] \mathcal{L}^{(1,4)}_{1133} & \mathcal{L}^{(1,4)}_{2133} & \mathcal{L}^{(1,4)}_{3133} \\[0.5em] \mathcal{L}^{(1,4)}_{1233} & \mathcal{L}^{(1,4)}_{2233} & \mathcal{L}^{(1,4)}_{3233} \\[0.5em] \mathcal{L}^{(1,4)}_{1333} & \mathcal{L}^{(1,4)}_{2333} & \mathcal{L}^{(1,4)}_{3333} \end{pmatrix} = \begin{pmatrix} L^{(1,4)} & 0 & 0 \\[0.3em] 0 & L^{(1,4)}_1 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_1 \\[0.3em] 0 & L^{(1,4)}_3 & 0 \\[0.3em] L^{(1,4)}_2 & 0 & 0 \\[0.3em] 0 & 0 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_3 \\[0.3em] 0 & 
0 & 0 \\[0.3em] L^{(1,4)}_2 & 0 & 0 \\[0.3em] 0 & L^{(1,4)}_2 & 0 \\[0.3em] L^{(1,4)}_3 & 0 & 0 \\[0.3em] 0 & 0 & 0 \\[0.3em] L^{(1,4)}_1 & 0 & 0 \\[0.3em] 0 & L^{(1,4)} & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_1 \\[0.3em] 0 & 0 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_3 \\[0.3em] 0 & L^{(1,4)}_2 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_2 \\[0.3em] 0 & 0 & 0 \\[0.3em] L^{(1,4)}_3 & 0 & 0 \\[0.3em] 0 & 0 & 0 \\[0.3em] 0 & 0 & L^{(1,4)}_2 \\[0.3em] 0 & L^{(1,4)}_3 & 0 \\[0.3em] L^{(1,4)}_1 & 0 & 0 \\[0.3em] 0 & L^{(1,4)}_1 & 0 \\[0.3em] 0 & 0 & L^{(1,4)} \end{pmatrix} \end{equation} where $L^{(1,4)} \equiv L^{(1,4)}_1+L^{(1,4)}_2+L^{(1,4)}_3$. \newgeometry{top=3cm,bottom=2.5cm,left=2.5cm,right=2.5cm,heightrounded} \begin{landscape} \begin{equation*} \scalebox{0.54}{% $\mathcal{L}^{(4)}_{pijlmn}= \begin{pmatrix} \mathcal{L}^{(4)}_{111111} & \mathcal{L}^{(4)}_{111112} & \mathcal{L}^{(4)}_{111113} & \mathcal{L}^{(4)}_{111121} & \mathcal{L}^{(4)}_{111122} & \mathcal{L}^{(4)}_{111123} & \mathcal{L}^{(4)}_{111131} & \mathcal{L}^{(4)}_{111132} & \mathcal{L}^{(4)}_{111133} & \mathcal{L}^{(4)}_{111211} & \mathcal{L}^{(4)}_{111212} & \mathcal{L}^{(4)}_{111213} & \mathcal{L}^{(4)}_{111221} & \mathcal{L}^{(4)}_{111222} & \mathcal{L}^{(4)}_{111223} & \mathcal{L}^{(4)}_{111231} & \mathcal{L}^{(4)}_{111232} & \mathcal{L}^{(4)}_{111233} & \mathcal{L}^{(4)}_{111311} & \mathcal{L}^{(4)}_{111312} & \mathcal{L}^{(4)}_{111313} & \mathcal{L}^{(4)}_{111321} & \mathcal{L}^{(4)}_{111322} & \mathcal{L}^{(4)}_{111323} & \mathcal{L}^{(4)}_{111331} & \mathcal{L}^{(4)}_{111332} & \mathcal{L}^{(4)}_{111333} \\[0.5em] \mathcal{L}^{(4)}_{211111} & \mathcal{L}^{(4)}_{211112} & \mathcal{L}^{(4)}_{211113} & \mathcal{L}^{(4)}_{211121} & \mathcal{L}^{(4)}_{211122} & \mathcal{L}^{(4)}_{211123} & \mathcal{L}^{(4)}_{211131} & \mathcal{L}^{(4)}_{211132} & \mathcal{L}^{(4)}_{211133} & \mathcal{L}^{(4)}_{211211} & \mathcal{L}^{(4)}_{211212} & \mathcal{L}^{(4)}_{211213} & \mathcal{L}^{(4)}_{211221} & \mathcal{L}^{(4)}_{211222} & 
\mathcal{L}^{(4)}_{211223} & \mathcal{L}^{(4)}_{211231} & \mathcal{L}^{(4)}_{211232} & \mathcal{L}^{(4)}_{211233} & \mathcal{L}^{(4)}_{211311} & \mathcal{L}^{(4)}_{211312} & \mathcal{L}^{(4)}_{211313} & \mathcal{L}^{(4)}_{211321} & \mathcal{L}^{(4)}_{211322} & \mathcal{L}^{(4)}_{211323} & \mathcal{L}^{(4)}_{211331} & \mathcal{L}^{(4)}_{211332} & \mathcal{L}^{(4)}_{211333} \\[0.5em] \mathcal{L}^{(4)}_{311111} & \mathcal{L}^{(4)}_{311112} & \mathcal{L}^{(4)}_{311113} & \mathcal{L}^{(4)}_{311121} & \mathcal{L}^{(4)}_{311122} & \mathcal{L}^{(4)}_{311123} & \mathcal{L}^{(4)}_{311131} & \mathcal{L}^{(4)}_{311132} & \mathcal{L}^{(4)}_{311133} & \mathcal{L}^{(4)}_{311211} & \mathcal{L}^{(4)}_{311212} & \mathcal{L}^{(4)}_{311213} & \mathcal{L}^{(4)}_{311221} & \mathcal{L}^{(4)}_{311222} & \mathcal{L}^{(4)}_{311223} & \mathcal{L}^{(4)}_{311231} & \mathcal{L}^{(4)}_{311232} & \mathcal{L}^{(4)}_{311233} & \mathcal{L}^{(4)}_{311311} & \mathcal{L}^{(4)}_{311312} & \mathcal{L}^{(4)}_{311313} & \mathcal{L}^{(4)}_{311321} & \mathcal{L}^{(4)}_{311322} & \mathcal{L}^{(4)}_{311323} & \mathcal{L}^{(4)}_{311331} & \mathcal{L}^{(4)}_{311332} & \mathcal{L}^{(4)}_{311333} \\[0.5em] \mathcal{L}^{(4)}_{112111} & \mathcal{L}^{(4)}_{112112} & \mathcal{L}^{(4)}_{112113} & \mathcal{L}^{(4)}_{112121} & \mathcal{L}^{(4)}_{112122} & \mathcal{L}^{(4)}_{112123} & \mathcal{L}^{(4)}_{112131} & \mathcal{L}^{(4)}_{112132} & \mathcal{L}^{(4)}_{112133} & \mathcal{L}^{(4)}_{112211} & \mathcal{L}^{(4)}_{112212} & \mathcal{L}^{(4)}_{112213} & \mathcal{L}^{(4)}_{112221} & \mathcal{L}^{(4)}_{112222} & \mathcal{L}^{(4)}_{112223} & \mathcal{L}^{(4)}_{112231} & \mathcal{L}^{(4)}_{112232} & \mathcal{L}^{(4)}_{112233} & \mathcal{L}^{(4)}_{112311} & \mathcal{L}^{(4)}_{112312} & \mathcal{L}^{(4)}_{112313} & \mathcal{L}^{(4)}_{112321} & \mathcal{L}^{(4)}_{112322} & \mathcal{L}^{(4)}_{112323} & \mathcal{L}^{(4)}_{112331} & \mathcal{L}^{(4)}_{112332} & \mathcal{L}^{(4)}_{112333} \\[0.5em] \mathcal{L}^{(4)}_{212111} & 
\mathcal{L}^{(4)}_{212112} & \mathcal{L}^{(4)}_{212113} & \mathcal{L}^{(4)}_{212121} & \mathcal{L}^{(4)}_{212122} & \mathcal{L}^{(4)}_{212123} & \mathcal{L}^{(4)}_{212131} & \mathcal{L}^{(4)}_{212132} & \mathcal{L}^{(4)}_{212133} & \mathcal{L}^{(4)}_{212211} & \mathcal{L}^{(4)}_{212212} & \mathcal{L}^{(4)}_{212213} & \mathcal{L}^{(4)}_{212221} & \mathcal{L}^{(4)}_{212222} & \mathcal{L}^{(4)}_{212223} & \mathcal{L}^{(4)}_{212231} & \mathcal{L}^{(4)}_{212232} & \mathcal{L}^{(4)}_{212233} & \mathcal{L}^{(4)}_{212311} & \mathcal{L}^{(4)}_{212312} & \mathcal{L}^{(4)}_{212313} & \mathcal{L}^{(4)}_{212321} & \mathcal{L}^{(4)}_{212322} & \mathcal{L}^{(4)}_{212323} & \mathcal{L}^{(4)}_{212331} & \mathcal{L}^{(4)}_{212332} & \mathcal{L}^{(4)}_{212333} \\[0.5em] \mathcal{L}^{(4)}_{312111} & \mathcal{L}^{(4)}_{312112} & \mathcal{L}^{(4)}_{312113} & \mathcal{L}^{(4)}_{312121} & \mathcal{L}^{(4)}_{312122} & \mathcal{L}^{(4)}_{312123} & \mathcal{L}^{(4)}_{312131} & \mathcal{L}^{(4)}_{312132} & \mathcal{L}^{(4)}_{312133} & \mathcal{L}^{(4)}_{312211} & \mathcal{L}^{(4)}_{312212} & \mathcal{L}^{(4)}_{312213} & \mathcal{L}^{(4)}_{312221} & \mathcal{L}^{(4)}_{312222} & \mathcal{L}^{(4)}_{312223} & \mathcal{L}^{(4)}_{312231} & \mathcal{L}^{(4)}_{312232} & \mathcal{L}^{(4)}_{312233} & \mathcal{L}^{(4)}_{312311} & \mathcal{L}^{(4)}_{312312} & \mathcal{L}^{(4)}_{312313} & \mathcal{L}^{(4)}_{312321} & \mathcal{L}^{(4)}_{312322} & \mathcal{L}^{(4)}_{312323} & \mathcal{L}^{(4)}_{312331} & \mathcal{L}^{(4)}_{312332} & \mathcal{L}^{(4)}_{312333} \\[0.5em] \mathcal{L}^{(4)}_{113111} & \mathcal{L}^{(4)}_{113112} & \mathcal{L}^{(4)}_{113113} & \mathcal{L}^{(4)}_{113121} & \mathcal{L}^{(4)}_{113122} & \mathcal{L}^{(4)}_{113123} & \mathcal{L}^{(4)}_{113131} & \mathcal{L}^{(4)}_{113132} & \mathcal{L}^{(4)}_{113133} & \mathcal{L}^{(4)}_{113211} & \mathcal{L}^{(4)}_{113212} & \mathcal{L}^{(4)}_{113213} & \mathcal{L}^{(4)}_{113221} & \mathcal{L}^{(4)}_{113222} & \mathcal{L}^{(4)}_{113223} & 
\mathcal{L}^{(4)}_{113231} & \mathcal{L}^{(4)}_{113232} & \mathcal{L}^{(4)}_{113233} & \mathcal{L}^{(4)}_{113311} & \mathcal{L}^{(4)}_{113312} & \mathcal{L}^{(4)}_{113313} & \mathcal{L}^{(4)}_{113321} & \mathcal{L}^{(4)}_{113322} & \mathcal{L}^{(4)}_{113323} & \mathcal{L}^{(4)}_{113331} & \mathcal{L}^{(4)}_{113332} & \mathcal{L}^{(4)}_{113333} \\[0.5em] \mathcal{L}^{(4)}_{213111} & \mathcal{L}^{(4)}_{213112} & \mathcal{L}^{(4)}_{213113} & \mathcal{L}^{(4)}_{213121} & \mathcal{L}^{(4)}_{213122} & \mathcal{L}^{(4)}_{213123} & \mathcal{L}^{(4)}_{213131} & \mathcal{L}^{(4)}_{213132} & \mathcal{L}^{(4)}_{213133} & \mathcal{L}^{(4)}_{213211} & \mathcal{L}^{(4)}_{213212} & \mathcal{L}^{(4)}_{213213} & \mathcal{L}^{(4)}_{213221} & \mathcal{L}^{(4)}_{213222} & \mathcal{L}^{(4)}_{213223} & \mathcal{L}^{(4)}_{213231} & \mathcal{L}^{(4)}_{213232} & \mathcal{L}^{(4)}_{213233} & \mathcal{L}^{(4)}_{213311} & \mathcal{L}^{(4)}_{213312} & \mathcal{L}^{(4)}_{213313} & \mathcal{L}^{(4)}_{213321} & \mathcal{L}^{(4)}_{213322} & \mathcal{L}^{(4)}_{213323} & \mathcal{L}^{(4)}_{213331} & \mathcal{L}^{(4)}_{213332} & \mathcal{L}^{(4)}_{213333} \\[0.5em] \mathcal{L}^{(4)}_{313111} & \mathcal{L}^{(4)}_{313112} & \mathcal{L}^{(4)}_{313113} & \mathcal{L}^{(4)}_{313121} & \mathcal{L}^{(4)}_{313122} & \mathcal{L}^{(4)}_{313123} & \mathcal{L}^{(4)}_{313131} & \mathcal{L}^{(4)}_{313132} & \mathcal{L}^{(4)}_{313133} & \mathcal{L}^{(4)}_{313211} & \mathcal{L}^{(4)}_{313212} & \mathcal{L}^{(4)}_{313213} & \mathcal{L}^{(4)}_{313221} & \mathcal{L}^{(4)}_{313222} & \mathcal{L}^{(4)}_{313223} & \mathcal{L}^{(4)}_{313231} & \mathcal{L}^{(4)}_{313232} & \mathcal{L}^{(4)}_{313233} & \mathcal{L}^{(4)}_{313311} & \mathcal{L}^{(4)}_{313312} & \mathcal{L}^{(4)}_{313313} & \mathcal{L}^{(4)}_{313321} & \mathcal{L}^{(4)}_{313322} & \mathcal{L}^{(4)}_{313323} & \mathcal{L}^{(4)}_{313331} & \mathcal{L}^{(4)}_{313332} & \mathcal{L}^{(4)}_{313333} \\[0.5em] \mathcal{L}^{(4)}_{121111} & \mathcal{L}^{(4)}_{121112} & 
\mathcal{L}^{(4)}_{121113} & \mathcal{L}^{(4)}_{121121} & \mathcal{L}^{(4)}_{121122} & \mathcal{L}^{(4)}_{121123} & \mathcal{L}^{(4)}_{121131} & \mathcal{L}^{(4)}_{121132} & \mathcal{L}^{(4)}_{121133} & \mathcal{L}^{(4)}_{121211} & \mathcal{L}^{(4)}_{121212} & \mathcal{L}^{(4)}_{121213} & \mathcal{L}^{(4)}_{121221} & \mathcal{L}^{(4)}_{121222} & \mathcal{L}^{(4)}_{121223} & \mathcal{L}^{(4)}_{121231} & \mathcal{L}^{(4)}_{121232} & \mathcal{L}^{(4)}_{121233} & \mathcal{L}^{(4)}_{121311} & \mathcal{L}^{(4)}_{121312} & \mathcal{L}^{(4)}_{121313} & \mathcal{L}^{(4)}_{121321} & \mathcal{L}^{(4)}_{121322} & \mathcal{L}^{(4)}_{121323} & \mathcal{L}^{(4)}_{121331} & \mathcal{L}^{(4)}_{121332} & \mathcal{L}^{(4)}_{121333} \\[0.5em] \mathcal{L}^{(4)}_{221111} & \mathcal{L}^{(4)}_{221112} & \mathcal{L}^{(4)}_{221113} & \mathcal{L}^{(4)}_{221121} & \mathcal{L}^{(4)}_{221122} & \mathcal{L}^{(4)}_{221123} & \mathcal{L}^{(4)}_{221131} & \mathcal{L}^{(4)}_{221132} & \mathcal{L}^{(4)}_{221133} & \mathcal{L}^{(4)}_{221211} & \mathcal{L}^{(4)}_{221212} & \mathcal{L}^{(4)}_{221213} & \mathcal{L}^{(4)}_{221221} & \mathcal{L}^{(4)}_{221222} & \mathcal{L}^{(4)}_{221223} & \mathcal{L}^{(4)}_{221231} & \mathcal{L}^{(4)}_{221232} & \mathcal{L}^{(4)}_{221233} & \mathcal{L}^{(4)}_{221311} & \mathcal{L}^{(4)}_{221312} & \mathcal{L}^{(4)}_{221313} & \mathcal{L}^{(4)}_{221321} & \mathcal{L}^{(4)}_{221322} & \mathcal{L}^{(4)}_{221323} & \mathcal{L}^{(4)}_{221331} & \mathcal{L}^{(4)}_{221332} & \mathcal{L}^{(4)}_{221333} \\[0.5em] \mathcal{L}^{(4)}_{321111} & \mathcal{L}^{(4)}_{321112} & \mathcal{L}^{(4)}_{321113} & \mathcal{L}^{(4)}_{321121} & \mathcal{L}^{(4)}_{321122} & \mathcal{L}^{(4)}_{321123} & \mathcal{L}^{(4)}_{321131} & \mathcal{L}^{(4)}_{321132} & \mathcal{L}^{(4)}_{321133} & \mathcal{L}^{(4)}_{321211} & \mathcal{L}^{(4)}_{321212} & \mathcal{L}^{(4)}_{321213} & \mathcal{L}^{(4)}_{321221} & \mathcal{L}^{(4)}_{321222} & \mathcal{L}^{(4)}_{321223} & \mathcal{L}^{(4)}_{321231} & 
\mathcal{L}^{(4)}_{321232} & \mathcal{L}^{(4)}_{321233} & \mathcal{L}^{(4)}_{321311} & \mathcal{L}^{(4)}_{321312} & \mathcal{L}^{(4)}_{321313} & \mathcal{L}^{(4)}_{321321} & \mathcal{L}^{(4)}_{321322} & \mathcal{L}^{(4)}_{321323} & \mathcal{L}^{(4)}_{321331} & \mathcal{L}^{(4)}_{321332} & \mathcal{L}^{(4)}_{321333} \\[0.5em] \mathcal{L}^{(4)}_{122111} & \mathcal{L}^{(4)}_{122112} & \mathcal{L}^{(4)}_{122113} & \mathcal{L}^{(4)}_{122121} & \mathcal{L}^{(4)}_{122122} & \mathcal{L}^{(4)}_{122123} & \mathcal{L}^{(4)}_{122131} & \mathcal{L}^{(4)}_{122132} & \mathcal{L}^{(4)}_{122133} & \mathcal{L}^{(4)}_{122211} & \mathcal{L}^{(4)}_{122212} & \mathcal{L}^{(4)}_{122213} & \mathcal{L}^{(4)}_{122221} & \mathcal{L}^{(4)}_{122222} & \mathcal{L}^{(4)}_{122223} & \mathcal{L}^{(4)}_{122231} & \mathcal{L}^{(4)}_{122232} & \mathcal{L}^{(4)}_{122233} & \mathcal{L}^{(4)}_{122311} & \mathcal{L}^{(4)}_{122312} & \mathcal{L}^{(4)}_{122313} & \mathcal{L}^{(4)}_{122321} & \mathcal{L}^{(4)}_{122322} & \mathcal{L}^{(4)}_{122323} & \mathcal{L}^{(4)}_{122331} & \mathcal{L}^{(4)}_{122332} & \mathcal{L}^{(4)}_{122333} \\[0.5em] \mathcal{L}^{(4)}_{222111} & \mathcal{L}^{(4)}_{222112} & \mathcal{L}^{(4)}_{222113} & \mathcal{L}^{(4)}_{222121} & \mathcal{L}^{(4)}_{222122} & \mathcal{L}^{(4)}_{222123} & \mathcal{L}^{(4)}_{222131} & \mathcal{L}^{(4)}_{222132} & \mathcal{L}^{(4)}_{222133} & \mathcal{L}^{(4)}_{222211} & \mathcal{L}^{(4)}_{222212} & \mathcal{L}^{(4)}_{222213} & \mathcal{L}^{(4)}_{222221} & \mathcal{L}^{(4)}_{222222} & \mathcal{L}^{(4)}_{222223} & \mathcal{L}^{(4)}_{222231} & \mathcal{L}^{(4)}_{222232} & \mathcal{L}^{(4)}_{222233} & \mathcal{L}^{(4)}_{222311} & \mathcal{L}^{(4)}_{222312} & \mathcal{L}^{(4)}_{222313} & \mathcal{L}^{(4)}_{222321} & \mathcal{L}^{(4)}_{222322} & \mathcal{L}^{(4)}_{222323} & \mathcal{L}^{(4)}_{222331} & \mathcal{L}^{(4)}_{222332} & \mathcal{L}^{(4)}_{222333} \\[0.5em] \mathcal{L}^{(4)}_{322111} & \mathcal{L}^{(4)}_{322112} & \mathcal{L}^{(4)}_{322113} & 
\mathcal{L}^{(4)}_{322121} & \mathcal{L}^{(4)}_{322122} & \mathcal{L}^{(4)}_{322123} & \mathcal{L}^{(4)}_{322131} & \mathcal{L}^{(4)}_{322132} & \mathcal{L}^{(4)}_{322133} & \mathcal{L}^{(4)}_{322211} & \mathcal{L}^{(4)}_{322212} & \mathcal{L}^{(4)}_{322213} & \mathcal{L}^{(4)}_{322221} & \mathcal{L}^{(4)}_{322222} & \mathcal{L}^{(4)}_{322223} & \mathcal{L}^{(4)}_{322231} & \mathcal{L}^{(4)}_{322232} & \mathcal{L}^{(4)}_{322233} & \mathcal{L}^{(4)}_{322311} & \mathcal{L}^{(4)}_{322312} & \mathcal{L}^{(4)}_{322313} & \mathcal{L}^{(4)}_{322321} & \mathcal{L}^{(4)}_{322322} & \mathcal{L}^{(4)}_{322323} & \mathcal{L}^{(4)}_{322331} & \mathcal{L}^{(4)}_{322332} & \mathcal{L}^{(4)}_{322333} \\[0.5em] \mathcal{L}^{(4)}_{123111} & \mathcal{L}^{(4)}_{123112} & \mathcal{L}^{(4)}_{123113} & \mathcal{L}^{(4)}_{123121} & \mathcal{L}^{(4)}_{123122} & \mathcal{L}^{(4)}_{123123} & \mathcal{L}^{(4)}_{123131} & \mathcal{L}^{(4)}_{123132} & \mathcal{L}^{(4)}_{123133} & \mathcal{L}^{(4)}_{123211} & \mathcal{L}^{(4)}_{123212} & \mathcal{L}^{(4)}_{123213} & \mathcal{L}^{(4)}_{123221} & \mathcal{L}^{(4)}_{123222} & \mathcal{L}^{(4)}_{123223} & \mathcal{L}^{(4)}_{123231} & \mathcal{L}^{(4)}_{123232} & \mathcal{L}^{(4)}_{123233} & \mathcal{L}^{(4)}_{123311} & \mathcal{L}^{(4)}_{123312} & \mathcal{L}^{(4)}_{123313} & \mathcal{L}^{(4)}_{123321} & \mathcal{L}^{(4)}_{123322} & \mathcal{L}^{(4)}_{123323} & \mathcal{L}^{(4)}_{123331} & \mathcal{L}^{(4)}_{123332} & \mathcal{L}^{(4)}_{123333} \\[0.5em] \mathcal{L}^{(4)}_{223111} & \mathcal{L}^{(4)}_{223112} & \mathcal{L}^{(4)}_{223113} & \mathcal{L}^{(4)}_{223121} & \mathcal{L}^{(4)}_{223122} & \mathcal{L}^{(4)}_{223123} & \mathcal{L}^{(4)}_{223131} & \mathcal{L}^{(4)}_{223132} & \mathcal{L}^{(4)}_{223133} & \mathcal{L}^{(4)}_{223211} & \mathcal{L}^{(4)}_{223212} & \mathcal{L}^{(4)}_{223213} & \mathcal{L}^{(4)}_{223221} & \mathcal{L}^{(4)}_{223222} & \mathcal{L}^{(4)}_{223223} & \mathcal{L}^{(4)}_{223231} & \mathcal{L}^{(4)}_{223232} & 
\mathcal{L}^{(4)}_{223233} & \mathcal{L}^{(4)}_{223311} & \mathcal{L}^{(4)}_{223312} & \mathcal{L}^{(4)}_{223313} & \mathcal{L}^{(4)}_{223321} & \mathcal{L}^{(4)}_{223322} & \mathcal{L}^{(4)}_{223323} & \mathcal{L}^{(4)}_{223331} & \mathcal{L}^{(4)}_{223332} & \mathcal{L}^{(4)}_{223333} \\[0.5em] \mathcal{L}^{(4)}_{323111} & \mathcal{L}^{(4)}_{323112} & \mathcal{L}^{(4)}_{323113} & \mathcal{L}^{(4)}_{323121} & \mathcal{L}^{(4)}_{323122} & \mathcal{L}^{(4)}_{323123} & \mathcal{L}^{(4)}_{323131} & \mathcal{L}^{(4)}_{323132} & \mathcal{L}^{(4)}_{323133} & \mathcal{L}^{(4)}_{323211} & \mathcal{L}^{(4)}_{323212} & \mathcal{L}^{(4)}_{323213} & \mathcal{L}^{(4)}_{323221} & \mathcal{L}^{(4)}_{323222} & \mathcal{L}^{(4)}_{323223} & \mathcal{L}^{(4)}_{323231} & \mathcal{L}^{(4)}_{323232} & \mathcal{L}^{(4)}_{323233} & \mathcal{L}^{(4)}_{323311} & \mathcal{L}^{(4)}_{323312} & \mathcal{L}^{(4)}_{323313} & \mathcal{L}^{(4)}_{323321} & \mathcal{L}^{(4)}_{323322} & \mathcal{L}^{(4)}_{323323} & \mathcal{L}^{(4)}_{323331} & \mathcal{L}^{(4)}_{323332} & \mathcal{L}^{(4)}_{323333} \\[0.5em] \mathcal{L}^{(4)}_{131111} & \mathcal{L}^{(4)}_{131112} & \mathcal{L}^{(4)}_{131113} & \mathcal{L}^{(4)}_{131121} & \mathcal{L}^{(4)}_{131122} & \mathcal{L}^{(4)}_{131123} & \mathcal{L}^{(4)}_{131131} & \mathcal{L}^{(4)}_{131132} & \mathcal{L}^{(4)}_{131133} & \mathcal{L}^{(4)}_{131211} & \mathcal{L}^{(4)}_{131212} & \mathcal{L}^{(4)}_{131213} & \mathcal{L}^{(4)}_{131221} & \mathcal{L}^{(4)}_{131222} & \mathcal{L}^{(4)}_{131223} & \mathcal{L}^{(4)}_{131231} & \mathcal{L}^{(4)}_{131232} & \mathcal{L}^{(4)}_{131233} & \mathcal{L}^{(4)}_{131311} & \mathcal{L}^{(4)}_{131312} & \mathcal{L}^{(4)}_{131313} & \mathcal{L}^{(4)}_{131321} & \mathcal{L}^{(4)}_{131322} & \mathcal{L}^{(4)}_{131323} & \mathcal{L}^{(4)}_{131331} & \mathcal{L}^{(4)}_{131332} & \mathcal{L}^{(4)}_{131333} \\[0.5em] \mathcal{L}^{(4)}_{231111} & \mathcal{L}^{(4)}_{231112} & \mathcal{L}^{(4)}_{231113} & \mathcal{L}^{(4)}_{231121} & 
\mathcal{L}^{(4)}_{231122} & \mathcal{L}^{(4)}_{231123} & \mathcal{L}^{(4)}_{231131} & \mathcal{L}^{(4)}_{231132} & \mathcal{L}^{(4)}_{231133} & \mathcal{L}^{(4)}_{231211} & \mathcal{L}^{(4)}_{231212} & \mathcal{L}^{(4)}_{231213} & \mathcal{L}^{(4)}_{231221} & \mathcal{L}^{(4)}_{231222} & \mathcal{L}^{(4)}_{231223} & \mathcal{L}^{(4)}_{231231} & \mathcal{L}^{(4)}_{231232} & \mathcal{L}^{(4)}_{231233} & \mathcal{L}^{(4)}_{231311} & \mathcal{L}^{(4)}_{231312} & \mathcal{L}^{(4)}_{231313} & \mathcal{L}^{(4)}_{231321} & \mathcal{L}^{(4)}_{231322} & \mathcal{L}^{(4)}_{231323} & \mathcal{L}^{(4)}_{231331} & \mathcal{L}^{(4)}_{231332} & \mathcal{L}^{(4)}_{231333} \\[0.5em] \mathcal{L}^{(4)}_{331111} & \mathcal{L}^{(4)}_{331112} & \mathcal{L}^{(4)}_{331113} & \mathcal{L}^{(4)}_{331121} & \mathcal{L}^{(4)}_{331122} & \mathcal{L}^{(4)}_{331123} & \mathcal{L}^{(4)}_{331131} & \mathcal{L}^{(4)}_{331132} & \mathcal{L}^{(4)}_{331133} & \mathcal{L}^{(4)}_{331211} & \mathcal{L}^{(4)}_{331212} & \mathcal{L}^{(4)}_{331213} & \mathcal{L}^{(4)}_{331221} & \mathcal{L}^{(4)}_{331222} & \mathcal{L}^{(4)}_{331223} & \mathcal{L}^{(4)}_{331231} & \mathcal{L}^{(4)}_{331232} & \mathcal{L}^{(4)}_{331233} & \mathcal{L}^{(4)}_{331311} & \mathcal{L}^{(4)}_{331312} & \mathcal{L}^{(4)}_{331313} & \mathcal{L}^{(4)}_{331321} & \mathcal{L}^{(4)}_{331322} & \mathcal{L}^{(4)}_{331323} & \mathcal{L}^{(4)}_{331331} & \mathcal{L}^{(4)}_{331332} & \mathcal{L}^{(4)}_{331333} \\[0.5em] \mathcal{L}^{(4)}_{132111} & \mathcal{L}^{(4)}_{132112} & \mathcal{L}^{(4)}_{132113} & \mathcal{L}^{(4)}_{132121} & \mathcal{L}^{(4)}_{132122} & \mathcal{L}^{(4)}_{132123} & \mathcal{L}^{(4)}_{132131} & \mathcal{L}^{(4)}_{132132} & \mathcal{L}^{(4)}_{132133} & \mathcal{L}^{(4)}_{132211} & \mathcal{L}^{(4)}_{132212} & \mathcal{L}^{(4)}_{132213} & \mathcal{L}^{(4)}_{132221} & \mathcal{L}^{(4)}_{132222} & \mathcal{L}^{(4)}_{132223} & \mathcal{L}^{(4)}_{132231} & \mathcal{L}^{(4)}_{132232} & \mathcal{L}^{(4)}_{132233} & 
\mathcal{L}^{(4)}_{132311} & \mathcal{L}^{(4)}_{132312} & \mathcal{L}^{(4)}_{132313} & \mathcal{L}^{(4)}_{132321} & \mathcal{L}^{(4)}_{132322} & \mathcal{L}^{(4)}_{132323} & \mathcal{L}^{(4)}_{132331} & \mathcal{L}^{(4)}_{132332} & \mathcal{L}^{(4)}_{132333} \\[0.5em] \mathcal{L}^{(4)}_{232111} & \mathcal{L}^{(4)}_{232112} & \mathcal{L}^{(4)}_{232113} & \mathcal{L}^{(4)}_{232121} & \mathcal{L}^{(4)}_{232122} & \mathcal{L}^{(4)}_{232123} & \mathcal{L}^{(4)}_{232131} & \mathcal{L}^{(4)}_{232132} & \mathcal{L}^{(4)}_{232133} & \mathcal{L}^{(4)}_{232211} & \mathcal{L}^{(4)}_{232212} & \mathcal{L}^{(4)}_{232213} & \mathcal{L}^{(4)}_{232221} & \mathcal{L}^{(4)}_{232222} & \mathcal{L}^{(4)}_{232223} & \mathcal{L}^{(4)}_{232231} & \mathcal{L}^{(4)}_{232232} & \mathcal{L}^{(4)}_{232233} & \mathcal{L}^{(4)}_{232311} & \mathcal{L}^{(4)}_{232312} & \mathcal{L}^{(4)}_{232313} & \mathcal{L}^{(4)}_{232321} & \mathcal{L}^{(4)}_{232322} & \mathcal{L}^{(4)}_{232323} & \mathcal{L}^{(4)}_{232331} & \mathcal{L}^{(4)}_{232332} & \mathcal{L}^{(4)}_{232333} \\[0.5em] \mathcal{L}^{(4)}_{332111} & \mathcal{L}^{(4)}_{332112} & \mathcal{L}^{(4)}_{332113} & \mathcal{L}^{(4)}_{332121} & \mathcal{L}^{(4)}_{332122} & \mathcal{L}^{(4)}_{332123} & \mathcal{L}^{(4)}_{332131} & \mathcal{L}^{(4)}_{332132} & \mathcal{L}^{(4)}_{332133} & \mathcal{L}^{(4)}_{332211} & \mathcal{L}^{(4)}_{332212} & \mathcal{L}^{(4)}_{332213} & \mathcal{L}^{(4)}_{332221} & \mathcal{L}^{(4)}_{332222} & \mathcal{L}^{(4)}_{332223} & \mathcal{L}^{(4)}_{332231} & \mathcal{L}^{(4)}_{332232} & \mathcal{L}^{(4)}_{332233} & \mathcal{L}^{(4)}_{332311} & \mathcal{L}^{(4)}_{332312} & \mathcal{L}^{(4)}_{332313} & \mathcal{L}^{(4)}_{332321} & \mathcal{L}^{(4)}_{332322} & \mathcal{L}^{(4)}_{332323} & \mathcal{L}^{(4)}_{332331} & \mathcal{L}^{(4)}_{332332} & \mathcal{L}^{(4)}_{332333} \\[0.5em] \mathcal{L}^{(4)}_{133111} & \mathcal{L}^{(4)}_{133112} & \mathcal{L}^{(4)}_{133113} & \mathcal{L}^{(4)}_{133121} & \mathcal{L}^{(4)}_{133122} & 
\mathcal{L}^{(4)}_{133123} & \mathcal{L}^{(4)}_{133131} & \mathcal{L}^{(4)}_{133132} & \mathcal{L}^{(4)}_{133133} & \mathcal{L}^{(4)}_{133211} & \mathcal{L}^{(4)}_{133212} & \mathcal{L}^{(4)}_{133213} & \mathcal{L}^{(4)}_{133221} & \mathcal{L}^{(4)}_{133222} & \mathcal{L}^{(4)}_{133223} & \mathcal{L}^{(4)}_{133231} & \mathcal{L}^{(4)}_{133232} & \mathcal{L}^{(4)}_{133233} & \mathcal{L}^{(4)}_{133311} & \mathcal{L}^{(4)}_{133312} & \mathcal{L}^{(4)}_{133313} & \mathcal{L}^{(4)}_{133321} & \mathcal{L}^{(4)}_{133322} & \mathcal{L}^{(4)}_{133323} & \mathcal{L}^{(4)}_{133331} & \mathcal{L}^{(4)}_{133332} & \mathcal{L}^{(4)}_{133333} \\[0.5em] \mathcal{L}^{(4)}_{233111} & \mathcal{L}^{(4)}_{233112} & \mathcal{L}^{(4)}_{233113} & \mathcal{L}^{(4)}_{233121} & \mathcal{L}^{(4)}_{233122} & \mathcal{L}^{(4)}_{233123} & \mathcal{L}^{(4)}_{233131} & \mathcal{L}^{(4)}_{233132} & \mathcal{L}^{(4)}_{233133} & \mathcal{L}^{(4)}_{233211} & \mathcal{L}^{(4)}_{233212} & \mathcal{L}^{(4)}_{233213} & \mathcal{L}^{(4)}_{233221} & \mathcal{L}^{(4)}_{233222} & \mathcal{L}^{(4)}_{233223} & \mathcal{L}^{(4)}_{233231} & \mathcal{L}^{(4)}_{233232} & \mathcal{L}^{(4)}_{233233} & \mathcal{L}^{(4)}_{233311} & \mathcal{L}^{(4)}_{233312} & \mathcal{L}^{(4)}_{233313} & \mathcal{L}^{(4)}_{233321} & \mathcal{L}^{(4)}_{233322} & \mathcal{L}^{(4)}_{233323} & \mathcal{L}^{(4)}_{233331} & \mathcal{L}^{(4)}_{233332} & \mathcal{L}^{(4)}_{233333} \\[0.5em] \mathcal{L}^{(4)}_{333111} & \mathcal{L}^{(4)}_{333112} & \mathcal{L}^{(4)}_{333113} & \mathcal{L}^{(4)}_{333121} & \mathcal{L}^{(4)}_{333122} & \mathcal{L}^{(4)}_{333123} & \mathcal{L}^{(4)}_{333131} & \mathcal{L}^{(4)}_{333132} & \mathcal{L}^{(4)}_{333133} & \mathcal{L}^{(4)}_{333211} & \mathcal{L}^{(4)}_{333212} & \mathcal{L}^{(4)}_{333213} & \mathcal{L}^{(4)}_{333221} & \mathcal{L}^{(4)}_{333222} & \mathcal{L}^{(4)}_{333223} & \mathcal{L}^{(4)}_{333231} & \mathcal{L}^{(4)}_{333232} & \mathcal{L}^{(4)}_{333233} & \mathcal{L}^{(4)}_{333311} & 
\mathcal{L}^{(4)}_{333312} & \mathcal{L}^{(4)}_{333313} & \mathcal{L}^{(4)}_{333321} & \mathcal{L}^{(4)}_{333322} & \mathcal{L}^{(4)}_{333323} & \mathcal{L}^{(4)}_{333331} & \mathcal{L}^{(4)}_{333332} & \mathcal{L}^{(4)}_{333333} \end{pmatrix} =$} \end{equation*} \newpage \begin{equation} \scalebox{0.73}{% $= \begin{pmatrix} C^{(4)} & 0 & 0 & 0 & \Lambda_1 & 0 & 0 & 0 & \Lambda_1 & 0 & \Lambda_2 & 0 & \Lambda_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_2 & 0 & 0 & 0 & \Lambda_3 & 0 & 0 \\[0.5em] 0 & \Lambda_4 & 0 & \Lambda_5 & 0 & 0 & 0 & 0 & 0 & \Lambda_6 & 0 & 0 & 0 & \Lambda_1 & 0 & 0 & 0 & C^{(4)}_6 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_4 & 0 & C^{(4)}_1 & 0 \\[0.5em] 0 & 0 & \Lambda_4 & 0 & 0 & 0 & \Lambda_5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_1 & 0 & C^{(4)}_4 & 0 & \Lambda_6 & 0 & 0 & 0 & C^{(4)}_6 & 0 & 0 & 0 & \Lambda_1 \\[0.5em] 0 & \Lambda_7 & 0 & \Lambda_8 & 0 & 0 & 0 & 0 & 0 & \Lambda_4 & 0 & 0 & 0 & \Lambda_3 & 0 & 0 & 0 & C^{(4)}_1 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_2 & 0 & C^{(4)}_3 & 0 \\[0.5em] \Lambda_2 & 0 & 0 & 0 & \Lambda_5 & 0 & 0 & 0 & C^{(4)}_4 & 0 & \Lambda_9 & 0 & \Lambda_8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_5 & 0 & 0 & 0 & C^{(4)}_2 & 0 & 0 \\[0.5em] 0 & 0 & 0 & 0 & 0 & C^{(4)}_{10} & 0 & C^{(4)}_9 & 0 & 0 & 0 & C^{(4)}_{11} & 0 & 0 & 0 & C^{(4)}_{10} & 0 & 0 & 0 & C^{(4)}_7 & 0 & C^{(4)}_8 & 0 & 0 & 0 & 0 & 0 \\[0.5em] 0 & 0 & \Lambda_7 & 0 & 0 & 0 & \Lambda_8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_3 & 0 & C^{(4)}_2 & 0 & \Lambda_4 & 0 & 0 & 0 & C^{(4)}_1 & 0 & 0 & 0 & \Lambda_3 \\[0.5em] 0 & 0 & 0 & 0 & 0 & C^{(4)}_9 & 0 & C^{(4)}_{10} & 0 & 0 & 0 & C^{(4)}_7 & 0 & 0 & 0 & C^{(4)}_8 & 0 & 0 & 0 & C^{(4)}_{11} & 0 & C^{(4)}_{10} & 0 & 0 & 0 & 0 & 0 \\[0.5em] \Lambda_2 & 0 & 0 & 0 & C^{(4)}_4 & 0 & 0 & 0 & \Lambda_5 & 0 & C^{(4)}_5 & 0 & C^{(4)}_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_9 & 0 & 0 & 0 & \Lambda_8 & 0 & 0 \\[0.5em] 0 & \Lambda_8 & 0 & \Lambda_9 & 0 & 0 & 0 & 0 & 0 & \Lambda_5 & 0 & 0 & 0 & \Lambda_2 & 0 & 0 & 0 & C^{(4)}_4 & 0 & 0 & 0 & 
0 & 0 & C^{(4)}_5 & 0 & C^{(4)}_2 & 0 \\[0.5em] \Lambda_3 & 0 & 0 & 0 & \Lambda_4 & 0 & 0 & 0 & C^{(4)}_1 & 0 & \Lambda_8 & 0 & \Lambda_7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_2 & 0 & 0 & 0 & C^{(4)}_3 & 0 & 0 \\[0.5em] 0 & 0 & 0 & 0 & 0 & C^{(4)}_{11} & 0 & C^{(4)}_{10} & 0 & 0 & 0 & C^{(4)}_{10} & 0 & 0 & 0 & C^{(4)}_9 & 0 & 0 & 0 & C^{(4)}_8 & 0 & C^{(4)}_7 & 0 & 0 & 0 & 0 & 0 \\[0.5em] \Lambda_1 & 0 & 0 & 0 & \Lambda_6 & 0 & 0 & 0 & C^{(4)}_6 & 0 & \Lambda_5 & 0 & \Lambda_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_4 & 0 & 0 & 0 & C^{(4)}_1 & 0 & 0 \\[0.5em] 0 & \Lambda_3 & 0 & \Lambda_2 & 0 & 0 & 0 & 0 & 0 & \Lambda_1 & 0 & 0 & 0 & C^{(4)} & 0 & 0 & 0 & \Lambda_1 & 0 & 0 & 0 & 0 & 0 & \Lambda_2 & 0 & \Lambda_3 & 0 \\[0.5em] 0 & 0 & C^{(4)}_1 & 0 & 0 & 0 & C^{(4)}_4 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_4 & 0 & \Lambda_5 & 0 & C^{(4)}_6 & 0 & 0 & 0 & \Lambda_6 & 0 & 0 & 0 & \Lambda_1 \\[0.5em] 0 & 0 & 0 & 0 & 0 & C^{(4)}_7 & 0 & C^{(4)}_8 & 0 & 0 & 0 & C^{(4)}_9 & 0 & 0 & 0 & C^{(4)}_{10} & 0 & 0 & 0 & C^{(4)}_{10} & 0 & C^{(4)}_{11} & 0 & 0 & 0 & 0 & 0 \\[0.5em] 0 & 0 & C^{(4)}_3 & 0 & 0 & 0 & C^{(4)}_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_7 & 0 & \Lambda_8 & 0 & C^{(4)}_1 & 0 & 0 & 0 & \Lambda_4 & 0 & 0 & 0 & \Lambda_3 \\[0.5em] 0 & C^{(4)}_2 & 0 & C^{(4)}_5 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_4 & 0 & 0 & 0 & \Lambda_2 & 0 & 0 & 0 & \Lambda_5 & 0 & 0 & 0 & 0 & 0 & \Lambda_9 & 0 & \Lambda_8 & 0 \\[0.5em] 0 & 0 & \Lambda_8 & 0 & 0 & 0 & \Lambda_9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_2 & 0 & C^{(4)}_5 & 0 & \Lambda_5 & 0 & 0 & 0 & C^{(4)}_4 & 0 & 0 & 0 & \Lambda_2 \\[0.5em] 0 & 0 & 0 & 0 & 0 & C^{(4)}_{10} & 0 & C^{(4)}_{11} & 0 & 0 & 0 & C^{(4)}_8 & 0 & 0 & 0 & C^{(4)}_7 & 0 & 0 & 0 & C^{(4)}_{10} & 0 & C^{(4)}_9 & 0 & 0 & 0 & 0 & 0 \\[0.5em] \Lambda_3 & 0 & 0 & 0 & C^{(4)}_1 & 0 & 0 & 0 & \Lambda_4 & 0 & C^{(4)}_2 & 0 & C^{(4)}_3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_8 & 0 & 0 & 0 & \Lambda_7 & 0 & 0 \\[0.5em] 0 & 0 & 0 & 0 & 0 & C^{(4)}_8 & 0 & C^{(4)}_7 & 0 & 0 & 0 
& C^{(4)}_{10} & 0 & 0 & 0 & C^{(4)}_{11} & 0 & 0 & 0 & C^{(4)}_9 & 0 & C^{(4)}_{10} & 0 & 0 & 0 & 0 & 0 \\[0.5em] 0 & 0 & C^{(4)}_2 & 0 & 0 & 0 & C^{(4)}_5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_8 & 0 & \Lambda_9 & 0 & C^{(4)}_4 & 0 & 0 & 0 & \Lambda_5 & 0 & 0 & 0 & \Lambda_2 \\[0.5em] 0 & C^{(4)}_3 & 0 & C^{(4)}_2 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_1 & 0 & 0 & 0 & \Lambda_3 & 0 & 0 & 0 & \Lambda_4 & 0 & 0 & 0 & 0 & 0 & \Lambda_8 & 0 & \Lambda_7 & 0 \\[0.5em] \Lambda_1 & 0 & 0 & 0 & C^{(4)}_6 & 0 & 0 & 0 & \Lambda_6 & 0 & C^{(4)}_4 & 0 & C^{(4)}_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_5 & 0 & 0 & 0 & \Lambda_4 & 0 & 0 \\[0.5em] 0 & C^{(4)}_1 & 0 & C^{(4)}_4 & 0 & 0 & 0 & 0 & 0 & C^{(4)}_6 & 0 & 0 & 0 & \Lambda_1 & 0 & 0 & 0 & \Lambda_6 & 0 & 0 & 0 & 0 & 0 & \Lambda_5 & 0 & \Lambda_4 & 0 \\[0.5em] 0 & 0 & \Lambda_3 & 0 & 0 & 0 & \Lambda_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Lambda_3 & 0 & \Lambda_2 & 0 & \Lambda_1 & 0 & 0 & 0 & \Lambda_1 & 0 & 0 & 0 & C^{(4)} \end{pmatrix}$} \end{equation} \end{landscape} \newpage \restoregeometry where \begin{gather*} C^{(4)}=2C^{(4)}_1+2C^{(4)}_2+C^{(4)}_3+2C^{(4)}_4+C^{(4)}_5+C^{(4)}_6+C^{(4)}_7+C^{(4)}_8+ C^{(4)}_9+2C^{(4)}_{10}+C^{(4)}_{11}, \\ \Lambda_1=C^{(4)}_1+C^{(4)}_4+C^{(4)}_6, \quad \Lambda_2=C^{(4)}_2+C^{(4)}_4+C^{(4)}_5, \quad \Lambda_3=C^{(4)}_1+C^{(4)}_2+C^{(4)}_3, \\ \Lambda_4=C^{(4)}_1+C^{(4)}_{10}+C^{(4)}_{11}, \quad \Lambda_5=C^{(4)}_4+C^{(4)}_9+C^{(4)}_{10}, \quad \Lambda_6=C^{(4)}_6+C^{(4)}_7+C^{(4)}_8, \\ \Lambda_7=C^{(4)}_3+C^{(4)}_7+C^{(4)}_9, \quad \Lambda_8=C^{(4)}_2+C^{(4)}_8+C^{(4)}_{10}, \quad \Lambda_9=C^{(4)}_5+C^{(4)}_7+C^{(4)}_{11}. \end{gather*} \bibliographystyle{unsrt}
\section{Acknowledgments} The authors wish to thank the Amadeus Middleware Fraud Detection team directed by Virginie Amar and J\'er\'emie Barlet, led by the product owner Christophe Allexandre and composed of Jean-Blas Imbert, Jiang Wu, Yang Pu and Damien Fontanes for building the \name{rights}, \textsc{transactions}-\textsc{fr}\xspace and \textsc{transactions}-\textsc{mo}\xspace datasets. MF gratefully acknowledges support from the AXA Research Fund. \section{Conclusions} \label{sec:conclusion} This work studied the performance and scalability of state-of-the-art novelty detection methods based on a significant collection of real and synthetic datasets. The standard metric used in the literature to compare event sequences is \name{lcs}. Given the evidence provided, we found that although \name{lcs} produced more transparent insights than the Levenshtein distance, it did not detect better anomalies and was computationally more expensive. Our experiments suggest that \textit{k}-\textsc{nn}\xspace, \textit{k}-\textsc{medoids}\xspace, \textit{t}-\textsc{stide}\xspace and \textsc{lstm}-\textsc{ae}\xspace are suitable choices to identify outliers in genomics, and that \name{hmm} and \name{ripper} are efficient algorithms to detect intrusions. \name{hmm} is a strong candidate for most novelty detection applications, and shows good scalability and interpretability. These characteristics make \name{hmm} appropriate for user behavior analysis, along with \textit{k}-\textsc{nn}\xspace, \textit{k}-\textsc{medoids}\xspace and \name{ism}, which also provide good model accountability. The fast scoring achieved by \name{hmm}, \textit{t}-\textsc{stide}\xspace and \name{ism} makes them well suited to the heavy loads arising in production environments. Major scalability constraints are pointed out for \name{ripper} and distance-based methods, namely \textit{k}-\textsc{nn}\xspace, \textit{k}-\textsc{medoids}\xspace and \name{lof}.
We therefore recommend resorting to alternative approaches when tackling large volumes of data. The widely used \name{lstm} networks show a lack of interpretability, and we believe that improving the understanding of recurrent networks as performed in \cite{Karpathy2016} would strongly benefit the research community. Most approaches evaluated in this study are suitable for supervised tasks based on event sequences. Studying how these methods compare in a supervised context would be of interest. \section{Experimental setup} \label{sec:experiments} \subsection{Performance tests} \label{sec:perf_datasets} Our evaluation uses 81 datasets related to genomics, intrusion detection and user behavior analysis (\name{uba}). The datasets are divided into 9 categories detailed in Table \ref{table:datasets}, and cover a total of 68,832 sequences. For a given dataset, we use 70\% of the data for training, and 30\% for testing. We detail thereafter the metrics used to evaluate the novelty detection capabilities of the methods. At prediction time, each method provides us with continuous anomaly scores $s$ which allow us to rank novelties from a testing set. We can then define a threshold $\alpha$ and classify test points as anomalies when $s > \alpha$. The novelty detection capabilities of the algorithms can further be assessed by computing the \textit{precision} and \textit{recall} metrics on the resulting binary classification (eq. \ref{eq:precision:recall}). These metrics require a labelled testing dataset where novelties and nominal cases are defined as \textit{positive} and \textit{negative} samples. Data points correctly labelled as positives are called true positives (TP), examples incorrectly labelled as positives are called false positives (FP), and positive samples incorrectly labelled as negatives are referred to as false negatives (FN).
\begin{equation} \label{eq:precision:recall} precision = \frac{TP}{TP+FP} \qquad recall = \frac{TP}{TP+FN} \end{equation} By varying $\alpha$ over the range of values taken by $s$, we can compute different precision and recall measurements resulting in a precision-recall curve. The area under this curve is called average precision (\name{ap}) and is the recommended metric to assess the performance of novelty detection methods\cite{davis2006relationship}. An alternative metric used in the literature is the area under the receiver operating characteristic (\textsc{roc}) curve. While the latter is widely used for classification problems, Davis and Goadrich\cite{davis2006relationship} demonstrated that it was not appropriate when dealing with heavily imbalanced class distributions, which is inherent to novelty detection where anomalies constitute a small proportion of the labelled data. Indeed, false positives have very little impact on the \name{roc}, whereas \name{ap} is strongly penalized by them, even if their proportion is not significant compared to the size of the negative class. We thus measure the performance of the algorithms by computing the average precision (\name{ap}) over the testing data. To ensure stability and confidence in our results, we perform 5-fold cross-validation for each method and dataset. The final performance given in Table \ref{tab:map} is thus the \textit{mean average precision} (\name{map}), i.e. the \name{ap} averaged over the 5 iterations. A \textit{robust} method is able to learn a consistent model from noisy data, i.e. a training set contaminated by anomalies. We use the same proportion of outliers in the training and testing sets to showcase the robustness of the selected methods. The corpus of data described in Table \ref{table:datasets} includes 6 widely used public collections of datasets, in addition to 3 new collections of industrial datasets from the company Amadeus.
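As an illustration, the threshold-sweeping computation of \name{ap} described above can be sketched in a few lines of Python. This is a minimal sketch (ties between scores are not handled specially, and a real evaluation would typically rely on a library routine such as \texttt{sklearn.metrics.average\_precision\_score}); the function name is illustrative.

```python
def average_precision(scores, labels):
    """AP from continuous anomaly scores s and binary labels (1 = novelty).

    Ranking test points by decreasing score sweeps the threshold alpha over
    the range of s, tracing the precision-recall curve; AP accumulates the
    precision reached at each recall step.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp, ap, prev_recall = 0, 0.0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:  # this threshold admits one more true positive
            tp += 1
            precision = tp / rank
            recall = tp / total_pos
            ap += (recall - prev_recall) * precision
            prev_recall = recall
    return ap
```

A perfect ranking, where every novelty is scored above every nominal sample, yields $\name{ap} = 1$.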
\name{pfam} (v31.0) describes 5 families of proteins, namely \textsc{rub} (PF00301), \textsc{tet} (PF00335), \textsc{sno} (PF01174), \textsc{nad} (PF02540) and \textsc{rvp} (PF08284). \name{intrusions} contains \textsc{unix} system calls for the traces \textsc{lpr-mit}, \textsc{lpr-unm}, \textsc{sendmail-cert}, \textsc{sendmail-unm}, \textsc{stide} and \textsc{xlock}. Concerning industrial datasets, \name{rights} details the actions performed by users in a Web application designed to manage the permissions of users and roles. The dataset shows the sessions of the 10 most active users. For each user dataset, anomalies are introduced by sampling sessions from the 9 remaining users. \textsc{transactions}-\textsc{fr}\xspace and \textsc{transactions}-\textsc{mo}\xspace are generated from a business-oriented flight booking application and cover Web traffic coming from France and Morocco. User selection and anomaly generation were performed as described previously. \begin{table} \centering \caption{Datasets benchmarked, related to genomics (\textsc{gen}), intrusion detection (\textsc{int}) or user behavior analysis (\name{uba}). \textit{D} is the number of datasets in each collection.
The following characteristics are averaged over the collection of datasets: \textit{N} is the number of samples, \textit{A} and \textit{$p_A$} are the number and proportion of anomalies, respectively, \textit{$M_L$} is the length of the shortest sequence, \textit{$\mu_L$} is the average sequence length, \textit{$S_L$} is the entropy of the sequence lengths, \textit{$\sigma$} is the number of unique events, \textit{$S_\sigma$} is the entropy of the event distribution, \textit{$T_5$} (Top 5\%) is the proportion of events represented by the 5\% biggest events and \textit{$L_1$} (Lowest 1\%) is the proportion of the smallest events representing 1\% of the events.} \label{table:datasets} \resizebox{\columnwidth}{!}{ \begin{threeparttable} \renewcommand\TPTminimum{\linewidth} \begin{tabular}{llllllllllllll} \thickhline \textbf{Category} & \textbf{Area} & \textbf{D} & \textbf{N} & \textbf{A} \textbf{($p_A$)} & \textbf{$M_L$} & \textbf{$\mu_L$} & \textbf{$S_L$} & \textbf{$\sigma$} & \textbf{$S_\sigma$} & \textbf{$T_5$} & \textbf{$L_1$} \\ \thickhline \href{https://archive.ics.uci.edu/ml/datasets/Molecular+Biology+(Splice-junction+Gene+Sequences)}{\textsc{splice}-\textsc{junctions}\xspace} & \textsc{gen} & 1 & 1710 & 55 (3.22\%) & 60 & 60 & 0.00 & 6 & 1.39 & 25.76 & 16.67 \\ \href{https://archive.ics.uci.edu/ml/datasets/Molecular+Biology+\%28Promoter+Gene+Sequences\%29}{\name{promoter}} & \textsc{gen} & 1 & 59 & 6 (10.17\%) & 57 & 57 & 0.00 & 4 & 1.39 & 26.85 & 0.00 \\ \href{https://pfam.xfam.org/family/browse}{\name{pfam}} & \textsc{gen} & 5 & 5166 & 165 (3.19\%) & 117 & 1034 & 0.15 & 45 & 1.17 & 83.97 & 40.00 \\ \href{http://www.schonlau.net/intrusion.html}{\name{masquerade}} & \textsc{int} & 29 & 94 & 6 (6.29\%) & 100 & 100 & 0.00 & 113 & 3.40 & 49.69 & 29.55 \\ \href{https://www.cs.unm.edu/~immsec/systemcalls.htm}{\name{intrusions}} & \textsc{int} & 6 & 2834 & 202 (7.14\%) & 56 & 1310 & 4.27 & 43 & 2.01 & 66.91 & 36.43 \\ 
\href{http://archive.ics.uci.edu/ml/datasets/unix+user+data}{\name{unix}} & \name{uba} & 9 & 1045 & 33 (3.20\%) & 1 & 31 & 3.60 & 379 & 3.31 & 77.54 & 48.86 \\ \name{rights} & \name{uba} & 10 & 677 & 22 (3.18\%) & 1 & 15 & 3.31 & 67 & 2.19 & 70.03 & 55.95 \\ \textsc{transactions}-\textsc{fr}\xspace & \name{uba} & 10 & 215 & 7 (3.21\%) & 4 & 49 & 3.57 & 285 & 4.16 & 47.57 & 33.37 \\ \textsc{transactions}-\textsc{mo}\xspace & \name{uba} & 10 & 386 & 12 (3.19\%) & 5 & 37 & 3.88 & 416 & 4.18 & 67.08 & 33.46 \\ \thickhline \end{tabular} \end{threeparttable} } \end{table} \subsection{Scalability tests} \label{sec:exp:scala} Synthetic datasets are generated to measure the scalability of the selected methods. Nominal data is obtained by sampling $N$ sequences of fixed length $L$ from a Markov chain. The transition matrix used by the Markov chain is randomly generated from a uniform distribution and has dimension $\sigma$, where $\sigma$ is the size of the alphabet. Anomalies are sampled from a distinct random transition matrix of same dimension, to which we add the identity matrix. The default proportion of anomalies in the training and testing sets is 10\%. Both transition matrices are normalized to provide correct categorical distributions. We vary $N$, $L$ and the proportion of anomalies to generate datasets of increasing size and complexity. We also studied the impact of $\sigma$ on the methods, and found that it had little effect on the scalability and \name{map}. The training time, prediction time, memory usage and novelty detection abilities of the algorithms are measured during this process. For each configuration, we run the algorithms 3 times over distinct sampled datasets and average the metrics to increase confidence in our results. Training and testing datasets are generated from the same two transition matrices, and have the same number of samples and outliers. 
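The generation procedure above can be sketched with the standard library as follows. This is an illustrative sketch under the stated assumptions (uniform random transition weights, identity matrix added to the anomalous generator, uniform initial symbol); function names such as \texttt{make\_dataset} are ours, not the code used in the experiments.

```python
import random

def random_transition_matrix(sigma, rng, add_identity=False):
    # Uniformly random weights; adding the identity (as done for the
    # anomalous generator) biases anomalies toward repeated symbols.
    m = [[rng.random() + (1.0 if add_identity and i == j else 0.0)
          for j in range(sigma)] for i in range(sigma)]
    # Normalize each row into a proper categorical distribution.
    return [[w / sum(row) for w in row] for row in m]

def sample_sequence(matrix, length, rng):
    sigma = len(matrix)
    state = rng.randrange(sigma)  # uniform initial symbol
    seq = [state]
    for _ in range(length - 1):
        state = rng.choices(range(sigma), weights=matrix[state])[0]
        seq.append(state)
    return seq

def make_dataset(n, length, sigma, contamination=0.1, seed=0):
    rng = random.Random(seed)
    nominal = random_transition_matrix(sigma, rng)
    anomalous = random_transition_matrix(sigma, rng, add_identity=True)
    n_anom = int(n * contamination)
    data = [(sample_sequence(nominal, length, rng), 0)
            for _ in range(n - n_anom)]
    data += [(sample_sequence(anomalous, length, rng), 1)
             for _ in range(n_anom)]
    rng.shuffle(data)
    return data  # list of (sequence, label) pairs, label 1 = anomaly
```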
The experiments are performed on a VMWare platform running Ubuntu 14.04 LTS and powered by an Intel Xeon E5-4627 v4 CPU (10 cores at 2.6 GHz) and 256GB RAM. We use the Intel distribution of Python 3.5.2, Java 8 and R 3.3.2. Due to the number of algorithms and the size of the datasets, we interrupt training and scoring steps lasting more than 12 hours. Memory usage is measured by \href{https://pypi.org/project/memory_profiler/}{memory\_profiler} for algorithms written in Python and R, and by the \textsc{unix} \textit{ps} command for other languages. We perform a garbage collection for R and Python before starting the corresponding methods. Memory consumption is measured at intervals of $10^{-4}$ seconds, and we report the maximum usage observed during the training or scoring step. The memory required by the plain running environment and by the stored dataset is subtracted from the observed memory peak. \subsection{Algorithms} The implementation and configuration of the methods are detailed in Table \ref{table:algos_imp}. Parameter selection was achieved by grid-search and maximizes the \name{map} averaged over the testing datasets detailed in Section \ref{sec:perf_datasets}. We use \href{https://rpy2.readthedocs.io}{rpy2} to run algorithms written in R from Python, and create dedicated subprocesses for Java and C.
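The grid-search used for parameter selection can be sketched as follows. This is a generic sketch: the \texttt{evaluate} callback (returning the \name{map} averaged over the testing datasets) and the example grid are illustrative, not the actual parameter grids behind Table \ref{table:algos_imp}.

```python
from itertools import product

def grid_search(evaluate, grid):
    """Exhaustive grid search: try every parameter combination and keep
    the one maximizing the metric returned by `evaluate`."""
    best_params, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)  # e.g. MAP averaged over datasets
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

For example, `grid_search(run_cv, {"components": [2, 3, 5], "iters": [10, 30]})` would return the \name{hmm} configuration with the highest averaged \name{map}, given a hypothetical `run_cv` evaluation function.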
\begin{table}[h] \centering \caption{Parameters and implementations of the selected algorithms.} \label{table:algos_imp} \begin{threeparttable} \resizebox{\linewidth}{!}{ \renewcommand\TPTminimum{0.5\linewidth} \begin{tabular}{lll} \thickhline \textbf{Algorithm} & \textbf{Language} & \textbf{Parameters}\\ \thickhline \href{https://github.com/hmmlearn/hmmlearn}{\name{hmm}}~\tnote{1} & Python & $components=3, iters=30, tol=10^{-2}$ \\ \href{http://mlpy.sourceforge.net/docs/3.5/lcs.html}{\name{lcs}} & Python & n/a \\ \href{https://pypi.org/project/leven/}{Levenshtein} & Python & n/a \\ \href{http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html}{\textit{k}-\textsc{nn}\xspace} & Python & $k=\max(n*0.1, 20)$ \\ \href{http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html}{\name{lof}} & Python & $k=\max(n*0.1, 50)$ \\ \href{https://github.com/letiantian/kmedoids/blob/master/kmedoids.py}{\textit{k}-\textsc{medoids}\xspace} & Python & $k=2$ \\ \href{http://www.cs.unm.edu/~immsec/software/stide_v1.2.tar.gz}{\textit{t}-\textsc{stide}\xspace}~\tnote{2} & C & $k=6, t=10^{-5}$ \\ \href{https://cran.r-project.org/web/packages/RWeka/index.html}{\name{ripper}}~\tnote{2} & R & $K=9, F=2, N=1, O=2$ \\ \href{https://github.com/mast-group/sequence-mining}{\name{ism}} & Java & $iters=100, s=10^{5}$ \\ \href{https://www.tensorflow.org/tutorials/seq2seq}{\textsc{seq}\oldstylenums{2}\textsc{seq}\xspace}~\tnote{3} & Python & $iters=100, batch=128, hidden=40, enc\_dropout=0.5, dec\_dropout=0.$ \\ \textsc{lstm}-\textsc{ae}\xspace~\tnote{3} & Python & $batch=128, iters=50, hidden=40, \delta=10^{-4}$ \\ \thickhline \end{tabular} } \begin{tablenotes} \scriptsize \item [1] New symbols are not supported natively by the method. \item [2] Sequences were split into sliding windows of fixed length. \item [3] Padding symbols were added to the datasets to provide batches of sequences having the same length. 
\end{tablenotes} \end{threeparttable} \end{table} \section{Introduction} \label{sec:intro} Novelty detection is an unsupervised learning problem and an active research area \cite{Aggarwal2015outlier,Hodge2004outlier}. Given a set of training samples, novelty detection is the task of classifying new test samples as \textit{nominal} when the test data relates to the training set, or as \textit{anomalous} when they significantly differ. Anomalous data is called novelties or anomalies and is assumed to be generated by a different generative process. Since novelty detection can be considered a one-class classification problem, it has also been described as a semi-supervised problem\cite{Chandola2012survey} when the training set is free of outliers. While most anomaly detection problems deal with numerical data\cite{emmott2016anomaly,breunig2000lof,Ramaswamy2000kNN}, novelty detection methods have been successfully applied to categorical data\cite{Hodge2004outlier}, time-series\cite{Marchi2015denoising,Zbigniew2004change,Taylor2018prophet}, discrete sequences\cite{chandola2008comparative,stide1999intrusions,Cohen1995115} and mixed data types\cite{Domingues2018DGP}. This paper surveys the problem of detecting anomalies in temporal data, specifically in discrete sequences of events which have a temporal order. Such a problem can be divided into two categories. The first one is \textit{change point detection}, where datasets are long sequences in which we seek anomalous and contiguous subsequences, denoting a sudden change of behavior in the data. Use cases relating to this problem are sensor readings\cite{Zbigniew2004change} and first story detection\cite{Petrovic2010story}. A second category considers datasets as sets of sequences, and targets the detection of anomalous sequences with respect to nominal samples.
Our study focuses on the latter, which encompasses use cases such as protein identification for genomics\cite{chandola2008comparative,sun2006pst}, fraud and intrusion detection\cite{maxion2002masquerade,stide1999intrusions,chandola2008comparative} and user behavior analysis (\textsc{uba})\cite{sculley2006compression}. While this is a matter of interest in the literature, most reviews addressing the issue focus on theoretical aspects\cite{Gupta2014survey,Chandola2012survey}, and as such do not assess and compare performance. Chandola et al.\cite{chandola2008comparative} present an experimental comparison of novelty detection methods for sequential data, although this work uses a custom metric to measure the novelty detection capabilities of the algorithms and omits methods recently published in the field. Our work extends previous studies by bringing together the following contributions: (i) comparison of the novelty detection performance for 12 algorithms, including recent developments in neural networks, on 81 datasets containing discrete sequences from a variety of research fields; (ii) assessment of the robustness of the selected methods using datasets contaminated by outliers, in contrast to previous studies which rely on clean training data; (iii) scalability measurements for each algorithm, reporting the training and prediction time, memory usage and novelty detection capabilities on synthetic datasets with increasing numbers of samples, sequence lengths and anomalies; (iv) discussion of the interpretability of the different approaches, in order to provide insights and motivate the predictions resulting from the trained model. To our knowledge, this study is the first to perform an evaluation of novelty detection methods for discrete sequences with so many datasets and algorithms.
This work is also the first to assess the scalability of the selected methods, which is an important selection criterion for processes subject to fast response time commitments, in addition to resource-constrained systems such as embedded systems. The paper is organized as follows: Section \ref{sec:methods} presents the state of the art of novelty detection methods; Section \ref{sec:experiments} details the real-world and synthetic datasets used for the study, in addition to the relevant metrics and parameters; Sections \ref{sec:results} and \ref{sec:conclusion} report the results and conclusions of the work. \section{Methods} \label{sec:methods} The current section details novelty detection methods from the literature. In order to provide recommendations relevant to real-world use cases, only methods satisfying the following constraints were selected: (1) the method accepts \textit{discrete sequences of events} as input, where events are represented as categorical samples; (2) the sequences fed to the method may have \textit{variable lengths}, which implies dedicated support or tolerance for padding; (3) the novelty detection problem induces distinct training and testing datasets. As such, the selected approach should be able to perform \textit{predictions on unseen data} which was not presented to the algorithm during the training phase; (4) subject to user inputs and system changes, the set of discrete symbols in the sequences (alphabet) of the training set cannot be assumed to be complete. The algorithm should support \textit{new symbols from the test set}; (5) in order to perform an accurate evaluation of its novelty detection capabilities and to provide practical predictions on testing data, the method should provide \textit{continuous anomaly scores} rather than a binary decision.
This last point allows for a ranking of the anomalies, and hence a meaningful manual validation of the anomalies, or the application of a user-defined threshold in the case of automatic intervention. The ranking of anomalies is also required by the performance metric used in the study and described in Section \ref{sec:perf_datasets}. \subsection{Hidden Markov Model} \textbf{Hidden Markov Models} (\textsc{hmm}s\xspace)\cite{RabinerHMM18626} are popular graphical models used to describe temporal data and generate sequences. The approach fits a probability distribution over the space of possible sequences, and is widely used in speech recognition and protein modelling. An \name{hmm} is composed of $N$ states which are interconnected by state-transition probabilities, each state generating emissions according to its own emission probability distribution. To generate a sequence, an initial state is first selected based on initial probabilities. A sequence of states is then sampled according to the transition matrix of the \name{hmm}. Once the sequence of states is obtained, each state emits a symbol based on its emission distribution. The sequence of emissions is the observed data. Based on a dataset composed of emission sequences, we can perform the inverse process, i.e. estimate the transition matrix and the emission distributions of an \name{hmm} from the emissions observed. Possible sequences of \textit{hidden} states leading to these emissions are thus inferred during the process. Once we obtain a trained \name{hmm} $\lambda = (A, B, \pi)$ with $A$ the transition matrix, $B$ describing the emission probabilities and $\pi$ the initial state probabilities, we can compute the normalized likelihood of a sequence and use it as a score to detect novelties. \subsection{Distance-based methods} Distance-based approaches rely on pairwise distance matrices computed by applying a distance function to each pair of input sequences.
The resulting matrix is then used by clustering or nearest-neighbor algorithms to build a model of the data. At test time, a second distance matrix is computed to perform scoring, which contains the distance between each test sample and the training data. \subsubsection{Distance metrics} \name{lcs} is the \textbf{longest common subsequence}\cite{Bergroth2000Survey} shared between two sequences. A common subsequence is defined as a sequence of symbols appearing in the same order in both sequences, although they do not need to be consecutive. For example, $\name{lcs}(\textsc{xmjyauz}, \textsc{mzjawxu}) = \textsc{mjau}$. Since \name{lcs} expresses a similarity between sequences, we use the negative $\name{lcs}$ to obtain a distance. The \textbf{Levenshtein distance}\cite{1966levenshtein}, also called the \textit{edit distance}, is a widely used metric which computes the difference between two strings or sequences of symbols. It represents the minimum number of edit operations required to transform one sequence into another, namely insertions, deletions and substitutions of individual symbols. Both metrics are normalized by the sum of the sequence lengths (equation \ref{eq:norm_dist}), which makes them suitable for sequences of different lengths. \begin{equation} \label{eq:norm_dist} distance(x, y) = \frac{metric(x, y)}{|x| + |y|} \end{equation} \subsubsection{Algorithms} The \textbf{\textit{k}-nearest neighbors} (\textit{k}-\textsc{nn}\xspace) algorithm is often used for classification and regression. In the case of classification, \textit{k}-\textsc{nn}\xspace assigns to each test sample the label most represented among its \textit{k} nearest neighbors from the training set. In \cite{Ramaswamy2000kNN}, the scoring function used to detect outliers is the distance $d(x, n_k)$ or $d_k(x)$ between a point $x$ and its \textit{$k^{th}$} nearest neighbor $n_k$.
This approach was applied to sequences in \cite{chandola2008comparative} using the \name{lcs} metric, and outperformed methods such as \name{hmm} and \name{ripper}. \textbf{Local outlier factor} (\name{lof}) \cite{breunig2000lof} also studies the neighborhood of test samples to identify anomalies. It compares the local density of a point $x$ to the local density of its neighbors by computing the \textit{reachability distance} $rd_k(x,y)$ between $x$ and each of its \textit{k}-nearest neighbors $n_i$. \begin{equation} rd_k(x,n_i) = \max(d_k(n_i), d(x, n_i)) \end{equation} The computed distances are then aggregated into a final anomaly score detailed in \cite{breunig2000lof}. The method showed promising results when applied to intrusion detection \cite{Lazarevic2003Study}. \textbf{\textit{k}-medoids} \cite{park2009kmedoids} is a clustering algorithm which uses data points from the training set, called \textit{medoids}, to represent the center of a cluster. The algorithm first randomly samples \textit{k} medoids from the input data, then clusters the remaining data points by assigning each to its closest medoid. The medoids of each cluster are further replaced by a data point from the same cluster which minimizes the sum of distances between the new medoid and the points in the cluster. The method uses expectation-maximization and is very similar to \textit{k}-means, although the latter uses the arithmetic mean of a cluster as a center, called the \textit{centroid}. Since \textit{k}-means requires numerical data and is more sensitive to outliers \cite{park2009kmedoids}, it was not selected for this study. We use the distance to the closest medoid to detect anomalies, which is the method used in \cite{Budalakoti2009airline} and \cite{budalakoti2006anomaly}. Both papers used the \name{lcs} metric to preprocess the data given to \textit{k}-\textsc{medoids}\xspace.
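To make the distance-based workflow concrete, the normalized \name{lcs} distance of equation \ref{eq:norm_dist} and the $d_k(x)$ scoring rule can be sketched as follows. This is an illustrative sketch, not the benchmarked implementations; function names are ours.

```python
def lcs_length(x, y):
    # Classic O(|x| * |y|) dynamic program for the longest common
    # subsequence, keeping only the previous row of the DP table.
    prev = [0] * (len(y) + 1)
    for a in x:
        cur = [0]
        for j, b in enumerate(y, start=1):
            cur.append(prev[j - 1] + 1 if a == b else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def lcs_distance(x, y):
    # Negative similarity normalized by the summed lengths (eq. norm_dist),
    # so that more similar pairs get a more negative (smaller) distance.
    return -lcs_length(x, y) / (len(x) + len(y))

def knn_score(test_seq, train_seqs, k):
    # Anomaly score d_k(x): distance to the k-th nearest training sequence.
    dists = sorted(lcs_distance(test_seq, t) for t in train_seqs)
    return dists[k - 1]
```

A higher `knn_score` denotes a test sequence farther from its neighborhood, hence more anomalous.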
\subsection{Window-based techniques} The two following methods observe subsequences of fixed length, called \textit{windows}, within a given sequence to identify abnormal patterns. This workflow requires preprocessing the data by applying a sliding window to each sequence, shifting the window by one symbol at each iteration and resulting in a larger dataset due to overlapping subsequences. \textbf{\textit{t}-\textsc{stide}\xspace}\cite{stide1999intrusions}, which stands for \textit{threshold-based sequence time-delay embedding}, uses a dictionary or a tree to store subsequences of length \textit{k} observed in the training data, along with their frequency. Once this model is built, the anomaly score of a test sequence is the number of subsequences within the sequence which do not exist in the model, divided by the number of windows in the test sequence. For increased robustness, subsequences having a frequency lower than a given threshold are excluded from the model. This increases the anomaly score for uncommon patterns, and allows the algorithm to handle datasets contaminated by anomalous sequences. This scoring method is called Locality Frame Count (\textsc{lfc}) and was applied to intrusion detection\cite{stide1999intrusions}, where it performed almost as well as \name{hmm} at a reduced computational cost. \textbf{\name{ripper}} \cite{Cohen1995115} is a supervised classifier designed for association rule learning. The training data given to the algorithm is divided into a set of sequences of length \textit{k}, and the corresponding labels. For novelty detection, subsequences are generated by a sliding window, and the label is the symbol following each subsequence. This allows \name{ripper} to learn rules predicting upcoming events. This method was applied to intrusion detection in \cite{lee1997learning}.
To build an anomaly score for a test sequence, the authors retrieve the predictions obtained for each subsequence, along with the confidence of the rule which triggered the prediction. Each time a prediction does not match the upcoming event, the anomaly score is increased by $confidence \times 100$. The final score is then divided by the number of subsequences for normalization. \subsection{Pattern mining} Sequential Pattern Mining (SPM) is the unsupervised discovery of interesting and relevant subsequences in sequential databases. A recent algorithm from this field is \textbf{Interesting Sequence Miner} (\name{ism})\cite{Fowkes2016ISM}, a probabilistic and generative method which learns a set of patterns leading to the best compression of the database. From a training set, \name{ism} learns a set of interesting subsequences ranked by probability and interestingness. To score a test sequence, we count the number of occurrences of each interesting pattern returned by \name{ism}, and multiply the number of occurrences by the corresponding probability and interestingness. This score is normalized by the length of the test sequence, a low score denoting an anomaly. While alternatives to \name{ism} exist in the literature\cite{gan2018survey}, few provide both a probabilistic framework and open-source software. \subsection{Neural networks} Recurrent neural networks (\textsc{rnn}s) are widely used algorithms for a variety of supervised tasks related to temporal data\cite{Lipton2015Review}. Long Short-Term Memory (\name{lstm})\cite{hochreiter1997lstm}, a specific topology of \textsc{rnn}, has the ability to model long-term dependencies and thus arbitrarily long sequences of events. This network can be applied to unsupervised learning problems by using an autoencoder topology, i.e. using input and output layers of the same dimensions to present the same data in input and output to the network. This allows the method to learn a compressed representation of the data.
For this purpose, the following algorithms use two multilayer \name{lstm} networks, the first one encoding the data in a vector of fixed dimensionality (encoder), the second one decoding the target sequence from the vector (decoder). The \textbf{Sequence to Sequence} (\textsc{seq}\oldstylenums{2}\textsc{seq}\xspace)\cite{Sutskever2014Seq2Seq} network is a recent work designed for language translation. The method is based on \name{lstm} cells and uses various mechanisms such as \textit{dropout} to prevent overfitting and \textit{attention}\cite{luong2015effective} to focus on specific past events to establish correlations. As suggested in \cite{sakurada2014anomaly,Marchi2015denoising}, the reconstruction error is used to score anomalies. The reconstruction error is the distance between the input and the reconstructed output, computed by \name{lcs} in this study. We also include a simpler \textbf{\name{lstm} Autoencoder} (\textsc{lstm}-\textsc{ae}\xspace) for the sake of the comparison, paired with a different scoring system. This network is also composed of two \name{lstm} networks, and both \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace and \textsc{lstm}-\textsc{ae}\xspace perform masking to handle padding characters appended to the end of the sequences of variable length. However, \textsc{lstm}-\textsc{ae}\xspace does not benefit from the dropout and attention mechanisms. In addition, instead of comparing the input to the reconstructed output for scoring, we now apply a distinct novelty detection algorithm to the latent representation provided by the network. The goal of \textsc{lstm}-\textsc{ae}\xspace is thus to learn a numerical fixed-length vector to represent each input sequence. The resulting representation of the training set is given to \textbf{Isolation Forest}\cite{liu2008isolation}, an unsupervised novelty detection algorithm for numerical data recommended in \cite{emmott2016anomaly}. 
At test time, the input sequence is encoded into a vector which is scored by Isolation Forest. \section{\refname} \bibliographystyle{abbrv} \section{Results} \label{sec:results} \subsection{Novelty detection capabilities} \label{sec:perf} The mean average precision (\name{map}) resulting from the experiment detailed in Section \ref{sec:perf_datasets} is reported in Table \ref{tab:map} for each algorithm and dataset. When no significant difference can be observed between a given \name{map} and the best result achieved on the dataset, we highlight the corresponding \name{map} in bold. The null hypothesis is rejected based on a pairwise Friedman test\cite{GARCIA20102044} with a significance level of $0.05$. While we believe that no method outperforms all others, and that each problem may require a distinct method, we attempt to give a broad overview of how methods compare to one another. For this purpose, we extract the rank of each algorithm on each collection of datasets from Table \ref{tab:map} and aggregate them to produce an overall ranking reported in the last column. The aggregation is performed using the Cross-Entropy Monte Carlo algorithm \cite{pihur2009rankaggreg} and relies on the Spearman distance. \begin{table} \centering \caption{Mean area under the precision-recall curve (\name{map}) averaged per group of datasets over 5 cross-validation iterations. Results in bold indicate that we cannot reject the null hypothesis that the given \name{map} is identical to the best \name{map} achieved for the dataset.
Column \textit{Rank} reports the aggregated rank for each method based on the Spearman footrule distance.} \label{tab:map} \resizebox{\columnwidth}{!}{ \begin{threeparttable} \renewcommand\TPTminimum{\linewidth} \setlength\tabcolsep{4.5pt} \begin{tabular}{llllllllllll} \thickhline & \textsc{splice} & \textsc{promot.} & \name{pfam} & \textsc{masque.} & \textsc{intrus.} & \name{unix} & \name{rights} & \textsc{trans}-\textsc{fr} & \textsc{trans}-\textsc{mo} & \textbf{Mean} & \textbf{Rank}\\ \thickhline \name{hmm} & 0.027 & 0.336 & 0.387 & \textbf{0.166} & \textbf{0.580} & \textbf{0.302} & \textbf{0.246} & \textbf{0.260} & \textbf{0.164} & \textbf{0.274} & \textbf{1} \\ \textit{k}-\textsc{nn}-\textsc{lcs}\xspace & 0.032 & \textbf{0.437} & \textbf{0.516} & 0.132 & \textbf{0.425} & \textbf{0.207} & \textbf{0.270} & \textbf{0.179} & \textbf{0.097} & \textbf{0.255} & \textbf{3} \\ \textit{k}-\textsc{nn}-\textsc{lev}\xspace & 0.033 & \textbf{0.412} & \textbf{0.516} & 0.129 & \textbf{0.405} & \textbf{0.120} & \textbf{0.188} & \textbf{0.185} & 0.083 & \textbf{0.230} & \textbf{5} \\ \textsc{lof}-\textsc{lcs}\xspace & \textbf{0.042} & 0.150 & 0.029 & \textbf{0.167} & 0.141 & 0.073 & 0.042 & 0.091 & 0.041 & \textbf{0.086} & \textbf{12} \\ \textsc{lof}-\textsc{lev}\xspace & 0.031 & 0.226 & \textbf{0.517} & \textbf{0.156} & 0.181 & \textbf{0.132} & \textbf{0.191} & \textbf{0.192} & \textbf{0.099} & \textbf{0.192} & \textbf{4} \\ \textit{k}-\textsc{medoids}-\textsc{lcs}\xspace & 0.027 & \textbf{0.581} & \textbf{0.510} & 0.134 & 0.318 & \textbf{0.155} & \textbf{0.218} & \textbf{0.184} & 0.092 & \textbf{0.247} & \textbf{6} \\ \textit{k}-\textsc{medoids}-\textsc{lev}\xspace & \textbf{0.040} & \textbf{0.692} & \textbf{0.513} & \textbf{0.148} & 0.222 & 0.086 & 0.146 & \textbf{0.189} & 0.078 & \textbf{0.235} & \textbf{7} \\ \textit{t}-\textsc{stide}\xspace & \textbf{0.048} & \textbf{0.806} & \textbf{0.506} & 0.122 & \textbf{0.469} & 0.081 & 0.130 & 0.136 & \textbf{0.112} & \textbf{0.268} 
& \textbf{9} \\ \name{ripper} & 0.028 & \textbf{0.431} & 0.034 & \textbf{0.176} & \textbf{0.359} & 0.053 & 0.077 & 0.105 & 0.079 & \textbf{0.149} & \textbf{10} \\ \name{ism} & 0.027 & 0.205 & 0.116 & 0.140 & \textbf{0.559} & \textbf{0.220} & \textbf{0.217} & \textbf{0.211} & \textbf{0.111} & \textbf{0.201} & \textbf{2} \\ \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace & \textbf{0.072} & 0.341 & 0.035 & \textbf{0.178} & 0.113 & 0.076 & 0.083 & 0.092 & 0.063 & \textbf{0.117} & \textbf{11} \\ \textsc{lstm}-\textsc{ae}\xspace & 0.034 & \textbf{0.494} & \textbf{0.591} & \textbf{0.178} & 0.174 & 0.074 & 0.100 & \textbf{0.173} & 0.075 & \textbf{0.210} & \textbf{8} \\ \thickhline \end{tabular} \end{threeparttable} } \end{table} In order to infer the behavior of each method from the characteristics of the datasets, we learn an interpretable meta-model using the features introduced in Table \ref{table:datasets}. While the metrics given in Table \ref{table:datasets} are computed over entire datasets, then averaged over the corresponding collection, this experiment focuses on the training data and retains features for each of the 81 datasets. We use these features as input data, and fit one decision tree per algorithm in order to predict how a given method performs. The resulting models are binary classifiers where the target class is whether the average rank of the algorithm is among the top 25\% performers (ranks 1 to 3) or among the lowest 25\% (ranks 10 to 12). Figure \ref{fig:kMed-tree} shows the trained meta-model of \textit{k}-\textsc{medoids}-\textsc{lev}\xspace as an example. These trees expose the strengths and weaknesses of the methods studied, and highlight the most important factors impacting their performance.
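The meta-learning step described above can be sketched as follows; the features and labels here are synthetic stand-ins for the dataset descriptors of Table \ref{table:datasets} and the aggregated ranks:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic stand-ins for the dataset descriptors (number of samples,
# alphabet size, sequence-length statistics, ...), one row per dataset.
features = rng.uniform(size=(81, 5))
# Target: 1 if the algorithm reached the top 25% ranks on the dataset,
# 0 if it fell in the lowest 25% (derived here from a made-up rule).
top_performer = (features[:, 0] + 0.1 * rng.normal(size=81) > 0.5).astype(int)

meta_model = DecisionTreeClassifier(max_depth=3, random_state=0)
meta_model.fit(features, top_performer)

# The learnt rules expose which characteristics drive the method's rank.
print(export_text(meta_model, feature_names=[f"f{i}" for i in range(5)]))
```

Keeping the tree shallow is what makes the meta-model readable: each root-to-leaf path is a short conjunction of dataset characteristics.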
\begin{figure}[!htb] \centering \includegraphics[width=0.8\linewidth]{tree_k-Medoids-Lev.pdf} \caption{Decision tree showing the position of \textit{k}-\textsc{medoids}-\textsc{lev}\xspace in the overall ranking based on features extracted from the datasets. Ranks have been aggregated into the \textit{top} and \textit{low} classes which encompass the best (1 to 3) and worst (10 to 12) 25\% ranks, respectively.} \label{fig:kMed-tree} \end{figure} In order to provide a concise visual overview of this analysis, we report in Figure \ref{fig:heatmap} the performance of each method based on the datasets characteristics. For this purpose, we extract the rules of the nodes for which $depth < 4$ in all meta-models, then aggregate these rules per feature to identify values corresponding to the most important splits. The resulting filters are reported in the horizontal axis of the heatmap. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{heatmap.pdf} \caption{Novelty detection capabilities of the algorithms based on the datasets characteristics. The scores range from 0 to 10 and are based on the rank of the method averaged over the subset of datasets matching the corresponding filter applied to the 81 datasets. A score of 10 corresponds to an average rank of 1, while a score of 0 indicates that the method consistently ended in the last position. 
$N$ is the number of samples; $p_A$ is the proportion of anomalies; $M_L$, $\mu_L$ and $S_L$ are the minimum, average and entropy computed over the sequence length; $\sigma$ and $S_\sigma$ are the alphabet size and the corresponding entropy of its distribution, the entropy increasing with the number of events and the distribution uniformity; $T_5$ is the proportion of occurrences accounted for by the 5\% most frequent events, a high value denoting strong inequalities in the distribution; $L_1$ is the proportion of the rarest events which together represent 1\% of the data, a high value indicating numerous events with few occurrences; the genomics (\textsc{gen}), intrusion detection (\textsc{int}) and \textsc{uba} columns target datasets related to the corresponding field of study.} \label{fig:heatmap} \end{figure} Our experiments show that no algorithm consistently reaches better results than the competing methods, but that \name{hmm}, \textit{k}-\textsc{nn}\xspace and \name{ism} are promising novelty detection methods. While previous comparisons\cite{stide1999intrusions,chandola2008comparative,Budalakoti2009airline} use clean datasets free of anomalies, our study shows good robustness for the selected methods, even for datasets with a high proportion of outliers, namely \name{promoter}, \name{masquerade} and \name{intrusions}. Concerning the applications studied, \textit{k}-\textsc{nn}\xspace, \textit{k}-\textsc{medoids}\xspace, \textit{t}-\textsc{stide}\xspace and \textsc{lstm}-\textsc{ae}\xspace show good performance on datasets related to genomics, which are \textsc{splice}-\textsc{junctions}\xspace, \name{promoter} and \name{pfam}. Apart from \textit{t}-\textsc{stide}\xspace, these methods have successfully addressed numerous supervised numerical problems, and could thus reach good performance when applied to sequence-based supervised use cases.
The best methods for intrusion detection are \name{hmm} and \name{ripper}, while \textit{t}-\textsc{stide}\xspace shows reduced performance compared to \cite{stide1999intrusions}, likely because of the introduction of anomalies in the training sets. Our observations for genomics and intrusion detection corroborate the conclusions presented for \textit{t}-\textsc{stide}\xspace and \name{ripper} in \cite{chandola2008comparative}. However, our study shows much better performance for \name{hmm}, as the previous study used a custom likelihood for \name{hmm} based on an aggregated sequence of binary scores. With regard to user behavior analysis, \name{hmm}, \textit{k}-\textsc{nn}\xspace, \textit{k}-\textsc{medoids}-\textsc{lcs}\xspace and \name{ism} show the best ability to differentiate users. While the performance of \textit{t}-\textsc{stide}\xspace on \name{uba} is not sufficient to recommend the method, we believe that increasing the threshold of \textit{t}-\textsc{stide}\xspace would lead to better results. Indeed, user actions are often based on well-defined application flows, and most of the possible subsequences are likely to exist in the training sets. The amount of supplementary information that the models can provide about user behaviors will determine the most suitable methods for this field (Section \ref{sec:interpretability}). Figure \ref{fig:heatmap} shows that the performance of \name{hmm} improves significantly with the number of available samples. Both \name{hmm} and \name{ism} achieve good performance, even when a high discrepancy is observed among the sequence lengths. \name{hmm}, \name{ism} and \name{ripper} are able to efficiently handle a large alphabet of symbols. \name{ripper} also shows good performance for datasets containing a high proportion of outliers, while nearest-neighbor methods are strongly impacted by this characteristic.
Distance metrics are known to suffer from the curse of dimensionality inherent to a high number of features. Similarly, Figure \ref{fig:heatmap} shows a decrease in performance for \textit{k}-\textsc{nn}\xspace, \textit{k}-\textsc{medoids}\xspace and \name{lof} when $\sigma$ increases, these methods relying on the \name{lcs} and Levenshtein metrics for distance computations. While \name{lcs} is a metric widely used in the literature\cite{chandola2008comparative, Budalakoti2009airline, budalakoti2006anomaly}, our experiments show that it does not perform better than the Levenshtein distance. While both metrics provide satisfactory results for novelty detection, the combination of \name{lof} and \name{lcs} produces the lowest accuracy of our evaluation. Nonetheless, the efficiency of \textsc{lof}-\textsc{lev}\xspace prevents us from discarding this method, even though \textit{k}-\textsc{nn}-\textsc{lev}\xspace achieves a similar accuracy with a simpler scoring function. For completeness, we also evaluated the scoring function proposed for \textit{t}-\textsc{stide}\xspace in \cite{hofmeyr1998intrusion}. For each subsequence of fixed length in a test sequence, the authors compute the Hamming distance between the test window and all training windows, and return the shortest distance. This method was much slower than a binary decision based on the presence of the test window in the training set, and did not strongly improve the results. Neural networks do not stand out in this test. The reconstruction error showed good results for detecting numerical anomalies in previous studies\cite{sakurada2014anomaly,Marchi2015denoising}, but the approach may not be appropriate for event sequences. The reconstructed sequences provided by \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace are often longer than the input data, and the network regularly loops for a while over a given event.
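For reference, the two distances compared above can be implemented with standard dynamic programming; note that the normalization used here to turn the \name{lcs} length into a distance is an assumption, as several variants exist in the literature:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def lcs_distance(a, b):
    """Distance derived from the LCS length (one common normalization)."""
    return 1.0 - lcs_length(a, b) / max(len(a), len(b), 1)


def levenshtein(a, b):
    """Edit distance with unit insertion, deletion and substitution costs."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = curr
    return prev[len(b)]
```

Both run in $O(|a| \cdot |b|)$ time, which is consistent with the high cost of pairwise distance computations observed in our scalability measurements.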
Figure \ref{fig:heatmap} shows that \name{lstm} networks perform better with long sequences and a moderate alphabet size. We repeated our experiments using the Python library \href{https://docs.python.org/3.5/library/difflib.html#difflib.SequenceMatcher.ratio}{difflib} as an alternative to \name{lcs} for \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace, but it did not improve the performance of the network. \textsc{lstm}-\textsc{ae}\xspace shows decent novelty detection accuracy, which could be further improved with dropout and attention. Thanks to their moderate depth, these two networks do not require very large datasets to tune their parameters. For example, \textsc{lstm}-\textsc{ae}\xspace achieves a good \name{map} even for small datasets such as \name{promoter} and \name{masquerade}. Despite the use of masks to address padding, these methods have difficulty with datasets showing a large disparity in sequence length, such as \name{intrusions} and the four collections of \name{uba} datasets. \subsection{Robustness} Figures \ref{fig:robust_outliers} to \ref{fig:robust_seq_len} report the mean area under the precision-recall curve (\name{map}) for datasets of increasing proportion of outliers, number of samples and sequence length, respectively. The positive class represents the nominal samples in Figure \ref{fig:robust_outliers}, and the anomalies in Figures \ref{fig:robust_samples} and \ref{fig:robust_seq_len} (as in Section \ref{sec:perf}). Figure \ref{fig:robust_outliers} demonstrates a more complex test case than just identifying uniform background noise against a well-defined distribution. In this test, anomalies are sampled according to their own probability distribution, which will affect the models learnt once a sufficient proportion of anomalies is reached. The test thus highlights how algorithms deal with complex data based on multiple distributions.
We observe that most algorithms focus on the major distribution as long as the proportion of corresponding samples remains higher than 60\%. \name{hmm} uses 3 components and may thus learn the second distribution much earlier in the test. Conversely, most of the distance-based methods discard the smallest distribution even when it represents up to 40\% of the data. \textsc{lof}-\textsc{lcs}\xspace shows poor performance from the very beginning, which prevents us from drawing conclusions on the behavior of this method. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{roc_outliers.pdf} \caption{Robustness for increasing noise density} \label{fig:robust_outliers} \end{figure} Figure \ref{fig:robust_samples} shows that 200 samples are a good basis to reach stable novelty detection results. While we expected the performance of deep learning methods to improve with the number of samples, these networks did not significantly increase their detection accuracy with the size of the dataset. The best results on large datasets were achieved by distance-based methods, most of which rely on nearest-neighbor approaches that are particularly efficient when a high number of samples is available. Good performance was also achieved by \name{hmm}, presumably because the generation method for nominal samples and outliers is based on Markov chains, which matches the internal representation of \name{hmm}. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{roc_n_seq.pdf} \caption{Robustness for increasing number of samples} \label{fig:robust_samples} \end{figure} Despite the increasing volume of data over the scalability test reported in Figure \ref{fig:robust_seq_len}, important variations can be observed in the results, possibly related to the limited number of samples available for the generated datasets. \textit{k}-\textsc{medoids}\xspace achieves better performance than the other distance-based methods, which suggests it is a better approach for small datasets.
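A hypothetical generator in the spirit of these robustness datasets, mixing samples drawn from two distinct first-order Markov chains (the transition matrices are made up; the paper's actual generation procedure may differ), could look like:

```python
import numpy as np

def sample_markov_sequence(rng, transitions, length):
    """Draw a sequence from a first-order Markov chain over integer symbols."""
    n_symbols = transitions.shape[0]
    seq = [int(rng.integers(n_symbols))]
    for _ in range(length - 1):
        seq.append(int(rng.choice(n_symbols, p=transitions[seq[-1]])))
    return seq

rng = np.random.default_rng(2)
n_symbols = 4
# Two hypothetical chains: nominal samples and anomalies follow
# different transition dynamics (rows drawn from Dirichlet priors).
nominal_T = rng.dirichlet(np.ones(n_symbols) * 5.0, size=n_symbols)
anomaly_T = rng.dirichlet(np.ones(n_symbols) * 0.5, size=n_symbols)

nominal = [sample_markov_sequence(rng, nominal_T, 20) for _ in range(90)]
anomalies = [sample_markov_sequence(rng, anomaly_T, 20) for _ in range(10)]
dataset = nominal + anomalies  # 10% outliers
```

Varying the 90/10 split then reproduces the increasing-contamination setting of Figure \ref{fig:robust_outliers}.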
\name{hmm} achieves once again good results, while \name{lstm} networks show improved novelty detection capabilities for datasets containing sequences longer than 100 events. The performance of \name{ism} also increases with the volume of data, although the method requires larger datasets to reach comparable results. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{roc_seq_len.pdf} \caption{Robustness for increasing sequence length} \label{fig:robust_seq_len} \end{figure} In summary, our experiments show that robust models require at least 200 training samples to provide satisfactory results. \textsc{lof}-\textsc{lcs}\xspace and \textit{t}-\textsc{stide}\xspace do not provide satisfactory performance, even though fine-tuning \textit{t}-\textsc{stide}\xspace by increasing the frequency threshold could lead to better results. \subsection{Complexity} The computation time for the training and prediction steps is reported in Figures \ref{fig:fit_samples} to \ref{fig:pred_seq_len}. While time measurements are impacted by the hardware configuration (Sec. \ref{sec:exp:scala}), the slope of the curves and their ranking compared to other methods should remain the same for most running environments. The measurements from Figures \ref{fig:fit_samples} and \ref{fig:pred_samples} show a poor scalability for algorithms relying on pairwise distance matrices, namely \name{lof}, \textit{k}-\textsc{nn}\xspace and \textit{k}-\textsc{medoids}\xspace. Most of the training and prediction time of these methods is dedicated to the computation of the distance matrix, and thus to the \name{lcs} and Levenshtein algorithms. Since the training and testing sets have the same number of samples in this test, this assumption is confirmed by the similar training and prediction times observed for these methods. In addition, \textit{k}-\textsc{medoids}\xspace is the only distance-based algorithm with a faster prediction time, thanks to a smaller number of distances to compute.
The prediction step of this method only requires comparing a small number of medoids with the testing set, instead of performing a heavy pairwise comparison. Regarding distance metrics, \name{lcs} shows a much higher computation time than the Levenshtein distance. Despite a very small $\sigma$, the rule-learning algorithm \name{ripper} shows the highest training time, reaching our 12-hour timeout for 13,000 samples. Conversely, and as expected, the use of mini-batch learning by \textsc{lstm}-\textsc{ae}\xspace and \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace allows the two methods to efficiently handle the increasing number of sequences, although we recommend increasing the batch size or the number of iterations according to the size of the training set. However, this technique only applies to the training step, and both methods show a scoring scalability comparable to the other algorithms. The extreme simplicity of \textit{t}-\textsc{stide}\xspace, which essentially stores subsequences in a dictionary at train time, makes this algorithm one of the fastest methods. The increasing load does not much affect \name{ism}, since the method stops iterating over the dataset if it does not find new interesting patterns after a given number of sequences. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{fit_n_seq.pdf} \caption{Training time for increasing number of samples} \label{fig:fit_samples} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{pred_n_seq.pdf} \caption{Prediction time for increasing number of test samples} \label{fig:pred_samples} \end{figure} We now use a fixed number of samples while increasing the length of the sequences, and report the computation time in Figures \ref{fig:fit_seq_len} and \ref{fig:pred_seq_len}. The careful reader will notice that both scalability tests, i.e. the sample-based and the length-based tests, produce datasets containing the exact same number of symbols (e.g.
$10^5$ sequences $\times 20$ symbols $= 200$ sequences $\times 10^4$ symbols). This configuration reveals the true impact of the number of samples and of the sequence length on the scalability, while keeping the same volume of data. While we still observe a poor scalability for distance-based algorithms caused by the high cost of distance computations, the training and prediction times of these methods are reduced because the core algorithm handles a smaller number of samples. Conversely, \name{ripper} and \name{ism} show a much higher training time when dealing with long sequences. However, the prediction time of these two methods only depends on the volume of data, i.e. the total number of symbols in the dataset, and is impacted similarly by the number of samples and the sequence length. Mini-batch methods are now subject to training batches of increasing volume, which reveals a poor scalability for \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace. \textsc{lstm}-\textsc{ae}\xspace performs better thanks to an early stopping mechanism, which interrupts the training when the loss does not improve sufficiently over the iterations. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{fit_seq_len.pdf} \caption{Training time for increasing sequence length} \label{fig:fit_seq_len} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{pred_seq_len.pdf} \caption{Prediction time for increasing sequence length} \label{fig:pred_seq_len} \end{figure} These tests show the limitations of \name{ripper}, which suffers from a long training step, even for datasets of reasonable size. Distance-based methods and \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace also show limited scalability, although \textit{k}-\textsc{medoids}\xspace provides fast predictions and \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace easily supports datasets containing a large number of samples.
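The dictionary-based \textit{t}-\textsc{stide}\xspace approach described earlier (storing fixed-length windows at train time) admits a very compact sketch; the window length, frequency threshold and toy training strings below are illustrative choices, not the settings used in our experiments:

```python
from collections import Counter

def tstide_fit(sequences, k=3):
    """Count every length-k window observed in the training sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[tuple(seq[i:i + k])] += 1
    return counts

def tstide_score(counts, seq, k=3, threshold=1):
    """Anomaly score: fraction of test windows seen fewer than
    `threshold` times during training (simplified frequency test)."""
    windows = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    if not windows:
        return 0.0
    return sum(counts[w] < threshold for w in windows) / len(windows)

model = tstide_fit(["abcabc", "bcabca"])  # toy training set
```

Training is a single pass over the data and scoring is a dictionary lookup per window, which matches the very low computation times observed for this method.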
\name{ism} and \textit{t}-\textsc{stide}\xspace show the best computation time for both the training and prediction steps, and could even prove useful in lightweight applications. \subsection{Memory usage} Monitoring the memory consumption in Figures \ref{fig:mem_samples} and \ref{fig:mem_seq_len} highlights important scalability constraints for several algorithms. We first observe in Figure \ref{fig:mem_samples} that the memory usage of \name{ripper} and of the distance-based methods is strongly correlated with the number of input sequences. \name{ripper} shows a very high memory usage, although the method reaches our 12-hour timeout at train time before exceeding the limit of 256GB of RAM. Distance-based methods are also strongly impacted by the number of samples. However, most of the memory is here consumed by the pairwise distance matrix. Despite storage optimizations, e.g. a symmetric matrix, Python stores integers on 24 bytes, resulting in a memory usage of 114GB and 167GB for \textit{k}-\textsc{nn}-\textsc{lev}\xspace and \textsc{lof}-\textsc{lev}\xspace, respectively. Interestingly, \name{ism} stabilizes at 10GB after having discovered a sufficient number of patterns from the data. Mini-batch neural networks are not strongly impacted by the number of samples, and the small $\sigma$ limits the diversity of sequences, thus reducing the memory usage of \textit{t}-\textsc{stide}\xspace. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{mem_n_seq.pdf} \caption{Memory usage for increasing number of samples} \label{fig:mem_samples} \end{figure} The metrics reported in Figure \ref{fig:mem_seq_len} corroborate the previous conclusions. The experiment reveals that the number of rules learnt by \name{ripper} increases linearly with the number of events, the final model containing on average $\frac{\# events}{50}$ rules. The size of the decision tree built by association rule learning is thus correlated with the volume of the data.
Conversely, the memory usage of \name{ism} stabilizes again after convergence, showing a more efficient internal representation of the data than \name{ripper}. The memory consumption of distance-based methods is very low due to small distance matrices, although the computation of \name{lcs} shows a memory usage increasing with the length of the compared sequences. Neural networks, especially \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace, are more impacted by the increasing sequence length. This is caused by a network topology depending on the size of the padded sequences, in addition to matrix multiplications whose dimensionalities are directly impacted by the length of the sequences. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{mem_seq_len.pdf} \caption{Memory usage for increasing sequence length} \label{fig:mem_seq_len} \end{figure} We have observed that most algorithms have a memory consumption strongly related to the volume of input data. The requirements of \name{ripper} are too high for most systems, and distance-based methods are not suitable for problems involving more than 20,000 sequences. Interestingly, we did not observe correlations between training or prediction time and memory usage, while one could expect fast algorithms to consume more memory, performing faster computations thanks to a massive caching system. While this may be true when comparing similar methods, the important differences in time and memory are here caused by major discrepancies in the approaches taken by the algorithms. \subsection{Interpretability} \label{sec:interpretability} The ability for humans to understand a machine learning model and the resulting predictions is called \textit{interpretability}. This trait allows data scientists to validate the final model and provides useful insights on the targeted dataset, e.g. discovering valuable information about user behaviors, which has an important business value.
While continuous scores are usually sufficient for automatic intervention modules, this information and the corresponding ranking may not be sufficient when a manual investigation of the anomalies is required. This situation arises for critical applications, where false positives could strongly impact the brand image, e.g. denying access to services for a business partner, or incur heavy costs, e.g. component replacement based on failure prediction with applications to data centers and airplanes. In this case, especially if many alerts are raised every day, the time allocated to manual investigation could be greatly reduced if we could provide the motivations behind high scores to the human expert. Transparency is thus an essential criterion for the choice of algorithms in many applications, and data analysts may accept to trade performance for model accountability. While human eyes may differentiate outlying activity from the underlying patterns in numerical time-series, this task is much harder for discrete event sequences, which emphasizes the need for model interpretability. The internal representation of interpretable methods provides sufficient information to motivate a predicted score with respect to an input sequence. For example, \name{hmm} learns intuitive transition and emission matrices, providing an insightful weighted process flowchart. Unusual event transitions in the test sequence can be visually highlighted by putting a threshold on the transition and emission probabilities. Pairwise distance matrices also convey valuable information and can be turned into intuitive visualizations. The matrices can be plotted as Voronoi diagrams or heat maps, or fed into a multidimensional scaling (MDS) algorithm resulting in a scatter plot of chosen dimensionality. If additional insight into the distance computations is required, \name{lcs} is an intuitive metric and the subsequence common to two compared samples can be underlined.
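As a sketch of the MDS visualization suggested here, a pairwise distance matrix can be embedded in 2D with metric MDS on a precomputed dissimilarity matrix; the distances below are random stand-ins for \name{lcs} or Levenshtein values:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(3)

# Hypothetical symmetric pairwise distance matrix between 30 sequences,
# standing in for LCS or Levenshtein distances.
d = rng.uniform(0.0, 1.0, size=(30, 30))
dist = (d + d.T) / 2.0
np.fill_diagonal(dist, 0.0)

# Metric MDS on a precomputed dissimilarity matrix produces 2D points
# whose proximity reflects the original sequence distances.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dist)
```

Scoring decisions can then be overlaid on the resulting scatter plot, e.g. by highlighting a test sample and its $k^{th}$ neighbor or its closest medoid.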
On the other hand, the cost matrix computed by the Levenshtein algorithm is more difficult to read. Furthermore, the scoring performed by distance-based methods can easily be motivated in the previous 2D representations of distance matrices, e.g. by highlighting the test sample and its $k^{th}$ neighbor for \textit{k}-\textsc{nn}\xspace, or the corresponding medoid for \textit{k}-\textsc{medoids}\xspace. The scoring function of \name{lof} is more complex, as it studies the local density of a test sample and its neighbors. Moving back to standard sequence representations, \textit{t}-\textsc{stide}\xspace is extremely accountable, and subsequences can be underlined based on their frequency in the model, thus motivating the resulting score. Pointing out events incorrectly predicted by \name{ripper} should also provide some information, and interesting patterns learnt by \name{ism} could be emphasized similarly. Neural networks are closer to black-box systems, and their interpretability has recently gained a lot of attention \cite{Zhang2018interpretability}. However, recent efforts mostly focus on numerical and convolutional networks, which leaves room for future work on \name{lstm} representations. Differences between the input sequence and the reconstructed output could be highlighted for \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace, although this would not explain the underlying model. For \textsc{lstm}-\textsc{ae}\xspace, we could learn and plot a low-dimensional numerical representation based on the internal representation of the network, but dimensionality reduction methods will often produce an output biased towards the average sample of the dataset \cite{onderwater2015outlier} and must be selected with care. This is the reason why the reconstruction error is used with \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace to identify anomalies.
In order to overcome the lack of accountability of a given algorithm, an alternative approach is to infer meaningful rules based on the inputs and outputs predicted by a trained model\cite{fortuny2015extraction}. The rule extraction method should provide simple rules showing a transparent decision, while minimizing the prediction error. This is a popular approach used to improve the interpretability of classification models, in particular neural networks and support vector machines (\textsc{svm}s). Two good rule extraction methods for classifiers are \textsc{osre}\cite{etchells2006extraction} and \textsc{hypinv}\cite{saad2009inversion}. These methods are also compatible with novelty detection when the targeted model produces a binary output such as \textit{fraud} and \textit{non-fraud}. If a continuous anomaly score is required to rank anomalies, we should then resort to regression rule extraction methods which learn rules producing a continuous output, e.g. REFANN\cite{setiono2002extraction}, ITER\cite{huysmans2006extraction} or classification and regression trees (\textsc{cart})\cite{breiman2017cart}. Both regression and classification rule mining methods show good performance when applied to numerical or one-hot encoded input data. In order to feed temporal data to these algorithms (or to any standard regression or classification methods), numerical features should be extracted from the sequences during a preprocessing step. The feature selection must be performed with great care to minimize the amount of information lost, and was automated for continuous time-series in a previous work\cite{christ2016distributed}. While different features should be selected for discrete event sequences, either manually or based on existing techniques\cite{wang2001feature,saidi2010feature}, any regression rule extraction technique can be subsequently applied for both data types. 
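A minimal illustration of the \textsc{cart}-based regression rule extraction mentioned above: a shallow regression tree is fitted to mimic the continuous anomaly scores of an opaque model, computed here from hypothetical extracted features (both the features and the black-box scores are synthetic stand-ins):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(4)

# Hypothetical numerical features extracted from sequences during
# a preprocessing step (made up for the illustration).
X = rng.uniform(size=(500, 4))
# Stand-in for the continuous anomaly scores of an opaque model.
black_box_scores = 2.0 * X[:, 0] + np.sin(3.0 * X[:, 1]) \
    + 0.05 * rng.normal(size=500)

# A shallow CART tree acts as an interpretable surrogate of the scorer:
# its root-to-leaf paths approximate the black-box scoring function.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box_scores)
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```

The depth of the surrogate controls the usual trade-off between the fidelity of the approximation and the readability of the extracted rules.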
The numerical latent representation provided by \name{lstm} autoencoders could be used as input features for rule mining, but it would only improve the interpretability of the decoder, leaving aside the data transformation performed by the encoder. \begin{table} \centering \caption{Scalability and interpretability summary. Runtime and memory scalability are reported for datasets of increasing number of samples and sequence length.} \label{tab:evaluation-summary} \resizebox{\columnwidth}{!}{ \begin{tabular}{llllll} \thickhline & \multicolumn{2}{l}{\textbf{Training/prediction time}} & \multicolumn{2}{l}{\textbf{Mem. usage}} & \\ \textbf{Algorithm} & $\shortarrow{1}$ \textbf{Samples} & $\shortarrow{1}$ \textbf{Length} & $\shortarrow{1}$ \textbf{Samples} & $\shortarrow{1}$ \textbf{Length} & \textbf{Interpretability} \\ \thickhline \name{hmm} & Medium/Low & Low/Low & Low & Low & High \\ \textit{k}-\textsc{nn}-\textsc{lcs}\xspace & High/High & Medium/High & High & Low & High \\ \textit{k}-\textsc{nn}-\textsc{lev}\xspace & High/High & Medium/High & High & Low & Medium \\ \textsc{lof}-\textsc{lcs}\xspace & High/High & Medium/High & High & Low & Medium \\ \textsc{lof}-\textsc{lev}\xspace & High/High & Medium/High & High & Low & Medium \\ \textit{k}-\textsc{medoids}-\textsc{lcs}\xspace & High/Low & Medium/Medium & High & Low & High \\ \textit{k}-\textsc{medoids}-\textsc{lev}\xspace & High/Low & Medium/Medium & High & Low & Medium \\ \textit{t}-\textsc{stide}\xspace & Low/Low & Low/Low & Low & Low & High \\ \name{ripper} & High/Low & High/Medium & High & High & Medium \\ \name{ism} & Low/Low & Medium/Low & Medium & Medium & High \\ \textsc{seq}\oldstylenums{2}\textsc{seq}\xspace & Low/Medium & High/High & Low & High & Low \\ \textsc{lstm}-\textsc{ae}\xspace & Low/Low & Low/Low & Low & Medium & Low \\ \thickhline \end{tabular} } \end{table}
\section{Introduction} When a continuous dynamical system on a compact space $(f,X)$ admits a Markov partition, the Perron-Frobenius theorem implies that the exponential of its topological entropy, $e^{h_{top}(f)}$, is a weak Perron number, i.e. an algebraic integer whose modulus is greater than or equal to the moduli of its Galois conjugates. The \emph{Thurston set} of a family $\mathcal{F}$ of such systems is the closure in $\mathbb{C}$ of the set of Galois conjugates of numbers of the form $e^{h_{top}(f)}$ for $f \in \mathcal{F}$. In this work, $\mathcal{F}$ is the family of superattracting real quadratic polynomials, and we investigate the geometry and topology of the associated Thurston set, $\Omega_2$: \[ \Omega_2 = \overline{\{z \in \mathbb{C} \mid z \textrm{ is a Galois conjugate of } e^{h_{top}(f)} \textrm{ for some }f \in \mathcal{F} \}}. \] The \emph{Master Teapot} for $\mathcal{F}$, defined by W. Thurston in \cite{thurston}, is a three-dimensional set whose geometry encodes information about which maps in $\mathcal{F}$ correspond to which regions of the Thurston set: \[ \Upsilon_2=\overline{ \{(z,\lambda) \in \mathbb{C} \times \mathbb{R} \mid \lambda = e^{h_{top}(f)} \textrm{ for some }f \in \mathcal{F}, z \textrm{ is a Galois conjugate of } \lambda \}}. \] In \cite{thurston}, Thurston plotted the Galois conjugates of the \emph{growth rates} (the numbers $e^{h_{top}(f)}$) of a selection of postcritically finite (PCF) quadratic real polynomials; Thurston's visually stunning image (see Figure \ref{fig:thurston_set}) showed that the Thurston set has a rich geometric structure. Our first main theorem is a geometric description of the part of the Master Teapot, $\Upsilon_2$, inside the unit cylinder: \begin{mainthm}[Persistence] \label{mainthm:closurepersistence} Fix $(z,\lambda) \in \Upsilon_2$ with $z \in \mathbb{D}$.
Then $\{z\} \times [\lambda,2] \subset \Upsilon_2.$ \end{mainthm} \noindent In other words, $\Upsilon_2 \cap (\mathbb{D} \times \{c\})$ grows monotonically with $c$. The proof of Theorem \ref{mainthm:closurepersistence} is at the end of \S \ref{s:angledoubling}. In \cite[Figure 7.7]{thurston}, Thurston describes the part of the Master Teapot outside the unit cylinder as ``a network of very frizzy hairs, \ldots sometimes joining and splitting, but always transverse to the horizontal planes.'' As a counterpart to Thurston's ``frizzy hairs,'' Theorem \ref{mainthm:closurepersistence} suggests a description of the part of the Master Teapot inside the unit cylinder as a collection of ``icicles'' hanging down transverse to the horizontal planes. Thurston was aware of this phenomenon, writing: ``Roots in the closed unit disk do not depend continuously on $\lambda$, but they are confined to (and dense in) closed sets that include the unit circle and increases monotonically with $\lambda$, converging at $\lambda = 2$ to the inside portion of [the Thurston set]'' \cite[caption of Figure 7.8]{thurston}. However, \cite{thurston} gives no further explanation. Theorem \ref{mainthm:unitcylinder} describes the geometry of the Master Teapot in a neighborhood of the unit cylinder: \begin{mainthm} \label{mainthm:unitcylinder} There exists $R > 0$ such that for any $n \in \mathbb{N}$, $$\left\{\left(z,\lambda \right) \in \mathbb{C} \times \mathbb{R} \mid \left(R^{-1}\right)^{\frac{1}{2^n}} \leq |z| \leq 1, 2^{\frac{1}{2^n}} \leq \lambda \leq 2 \right\} \subset \Upsilon_2. $$ In particular, the Master Teapot contains the unit cylinder, i.e. \[ S^1 \times [1,2] \subset \Upsilon_2.
\] \end{mainthm} Connectivity of the Master Teapot follows from Theorems \ref{mainthm:closurepersistence} and \ref{mainthm:unitcylinder} together with a proof by Tiozzo \cite[proof of Theorem 1.3]{TiozzoGaloisConjugates} of connectivity of the region outside the unit cylinder: \begin{mainthm} \label{mainthm:connected} The Master Teapot, $\Upsilon_2$, is connected. Furthermore, $\Upsilon_2 \cap (\overline{\mathbb{D}} \times [1,2])$ is path-connected. \end{mainthm} \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{teapot_black.png} \caption{An approximation of Thurston's Master Teapot, $\Upsilon_2$. The horizontal plane is $\mathbb{C}$ and the vertical axis is $\mathbb{R}$. The projection onto $\mathbb{C}$ of the Master Teapot, $\Upsilon_2$, is the Thurston set, $\Omega_2$. The slice of the teapot at height $\lambda=1$ is the unit circle (blue); the unit circle is also shown at height $\lambda=2$ (red). The faint ``spout'' on the right consists of points of the form $(\beta,0,\beta) \in \mathbb{R}^3 \simeq \mathbb{C} \times \mathbb{R}$. } \label{fig:teapot} \end{center} \end{figure} A heretofore mysterious feature of plots of finite approximations of the Thurston set, formed by bounding the length of the postcritical orbits, was the appearance of visible ``gaps'' or holes at fourth roots of unity, sixth roots of unity, and certain other algebraic numbers (see Figure \ref{fig:thurston_set}). The gaps on the unit circle get filled in as the length of the postcritical orbits approaches infinity \cite[Proposition 6.1]{TiozzoGaloisConjugates}. It is known, however, that $\Omega_2 \cap \mathbb{D}$ does have a hole other than the large central hole around the origin \cite{ckw}. Theorem \ref{mainthm:gaps} provides an arithmetic explanation for these visible gaps in finite approximations of $\Omega_2$.
\begin{mainthm}[Gap theorem] \label{mainthm:gaps} For $n \in \mathbb{N}$, let $\omega_n$ denote the set of Galois conjugates of growth rates of superattracting tent maps with postcritical length at most $n$. Let $R$ be one of the rings $\mathbb{Z}[\sqrt{-D}]$ or $\mathbb{Z}[\frac{1+\sqrt{-D}}{2}]$ for $D=1,2,3$ or $5$, and set $c = \inf\{|z| : z \in R, z \not = 0\}$. Then for any $x \in R$, $$B_{r(x)}(x) \cap \omega_n \subset \{x\},$$ where $$r(x) = \begin{cases} \min \{ \frac{c}{(2n^2 + 3n+1) |x|^n e}, \frac{1}{n+1} \} & \textrm{ if } |x| \geq 1,\\ \min \{ \frac{c}{(2n^2+3n+1)|x| e}, \frac{1}{n+1} \} & \textrm{ if } |x| \leq 1.\\ \end{cases}$$ \end{mainthm} \noindent Tiozzo proves there is a hole of radius 1/2 around the origin in the Thurston set \cite[Lemma 2.4]{TiozzoGaloisConjugates}. Our proof strategy is different: we use techniques resembling those of Solomyak for $\beta$-transformations with standard signature $E=(1,1)$ \cite{solomyak}. We define the preperiodic Thurston set $\Omega_2^{pre}$ as the Thurston set for the family of postcritically finite tent maps. That is, $\Omega_2^{pre}$ is the closure of the set of Galois conjugates of growth rates of postcritically finite tent maps. This includes both superattracting and strictly preperiodic tent maps. \begin{mainthm}\label{mainthm:prepernotequal} The Thurston set $\Omega_2$ and the preperiodic Thurston set $\Omega_2^{pre}$ are not equal. \end{mainthm} \noindent The caption for Thurston's image \cite[Figure 1.1]{thurston} states that the image shows the roots of the defining polynomials for ``a sample of about $10^7$ postcritically finite quadratic maps of the interval with postcritical orbit of length $\leq 80$.'' We suspect that Thurston's image shows only roots of superattracting tent maps, i.e. shows $\Omega_2$ and not $\Omega_2^{pre}$ (cf. Figures \ref{fig:thurston_set}, \ref{fig:preperiodic}).
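To illustrate the scale of the bound in Theorem \ref{mainthm:gaps} (this worked instance is ours, not part of the statement): take $R = \mathbb{Z}[\sqrt{-1}]$, so $c = 1$, and $x = i$, a fourth root of unity with $|x| = 1$. For $n = 1$ we have $2n^2+3n+1 = 6$, so
\[
r(i) = \min\left\{ \frac{1}{6e}, \frac{1}{2} \right\} = \frac{1}{6e} \approx 0.061,
\]
and the ball of this radius about $i$ meets $\omega_1$ at most in $\{i\}$. Since the radius decays like $c/(2n^2e)$ as $n$ grows, the gap at $i$ is visible in finite approximations even though, lying on the unit circle, it fills in as $n \to \infty$.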
\begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{ts_large.png} \caption{An approximation of the Thurston set, $\Omega_2$, containing the roots of the Parry polynomials for all of the (approximately $10^7$) postcritically finite quadratic superattracting tent maps of the interval with postcritical orbit of length $\leq 29$. Notice the ``gaps'' visible at the fourth and sixth roots of unity. } \label{fig:thurston_set} \end{center} \end{figure} \bigskip A uniform $\lambda$-expander is a continuous, piecewise linear self-map of an interval such that the slope of each piece is either $\lambda$ or $-\lambda$ (by convention, $\lambda > 0$). Thanks to a theorem of Milnor and Thurston, from the point of view of topological entropy, it suffices to consider uniform expanders: \begin{theorem} \cite[Theorem 7.4]{MilnorThurston} Every continuous self-map $g$ of an interval with finitely many turning points and with $h_{top}(g) > 0$ is semi-conjugate to a uniform $\lambda$-expander $PL(g)$ with the same topological entropy $h_{top}(g) = \log \lambda$. If $g$ is postcritically finite, so is $PL(g)$. \end{theorem} \noindent A criterion for conjugacy to a uniform expander was also obtained in \cite{parry66}. Uniform expanders may be thought of as one-dimensional analogues of pseudo-Anosov surface diffeomorphisms. For topological quadratic maps (i.e. maps with one turning point), this amounts to studying tent maps on the unit interval. \bigskip There are numerous characterizations of $\Omega_2$ arising from different points of view, and our results build (directly or indirectly) on a long history of research in each of these areas: 1. \emph{Combinatorial.} The root of the combinatorial approach is the theory of $\beta$-expansions of real numbers and Parry polynomials. First introduced in \cite{parry60} for maps of the form $x \mapsto \beta x \mod{1}$ and later extended to larger classes of interval self-maps (e.g.
\cite{gora, ItoSadahiro, DombekMP, Steiner, IntermediateBetaShifts}), the Parry polynomial for a superattracting tent map is a monic polynomial with integer coefficients that is determined by combinatorial data about the critical orbit and has the growth rate of the tent map as a root. Parry polynomials are not necessarily irreducible, but the collection of roots of Parry polynomials associated to a family of functions contains the Thurston set for that family. Parry polynomials were used to study the Thurston sets in \cite{solomyak, thompson}. We prove the relationship between Parry polynomials and kneading determinants for superattracting tent maps in \S~\ref{sec:kneading-parry}. 2. \emph{Complex dynamics and kneading theory.} One may view a unimodal interval self-map as arising via the restriction to the real line of a quadratic polynomial with real coefficients on $\mathbb{C}$, and apply kneading theory (e.g. \cite{Guckenheimer, MilnorThurston}). The part of $\Omega_2$ that is outside the closed unit disk can be characterized as the set of points $z \in \mathbb{C} \setminus \mathbb{D}$ whose inverse is a root of a kneading determinant for a parameter in the real slice of the Mandelbrot set. The growth rate of a real PCF map can be viewed as a specific case of the core entropy of a complex polynomial \cite{TiozzoTopologicalEntropy,TiozzoContinuity,GaoYanTiozzo}. 3. \emph{Iterated function systems.} A point $z \in \mathbb{D}$ is in $\Omega_2$ if and only if $0$ is in the limit set of the iterated function system generated by the two maps $x \mapsto zx+1$ and $x \mapsto zx -1$ \cite{TiozzoGaloisConjugates}. These IFS and their limit sets are the focus of numerous works, including \cite{BarnsleyHarrington, BouschPaires, BouschConnexite, Bandt, SolomyakXu, SoloymakLocalGeom, SolomyakAsymptotic, ckw}. 4. \emph{Power series with prescribed coefficients.} The set $\Omega_2 \cap \mathbb{D}$ equals the set of roots of all power series with coefficients $\pm 1$.
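The power-series characterization lends itself to a quick numerical experiment (a heuristic sketch we add under that characterization, not a method from the cited literature): if $1+\sum_k \epsilon_k z^k = 0$ with $\epsilon_k = \pm 1$, then every partial sum must be cancellable by the remaining tail, whose modulus is at most $|z|^n/(1-|z|)$, so a pruned breadth-first search over sign choices detects when no such series can exist.

```python
def survives(z, depth=40, cap=4096):
    """Check whether some ±1 power series could vanish at z (|z| < 1).

    If 1 + sum(eps_k z^k) = 0, each partial sum p_n must satisfy
    |p_n| <= |z|^n / (1 - |z|) (the largest possible tail). We search
    breadth-first over partial sums, pruning violations; surviving to
    `depth` suggests z lies in Omega_2 ∩ D. Heuristic sketch, not a proof.
    """
    r = abs(z)
    frontier, power = {1 + 0j}, z
    for n in range(1, depth):
        tail = r ** n / (1 - r)
        frontier = {p + s * power for p in frontier for s in (1, -1)
                    if abs(p + s * power) <= tail + 1e-12}
        if not frontier:
            return False
        if len(frontier) > cap:  # keep the search bounded
            frontier = set(sorted(frontier, key=abs)[:cap])
        power *= z
    return True

# 1/phi is a root of 1 - z - z^2, hence of the ±1 power series with
# repeating sign pattern (+, -, -); points with |z| < 1/2 lie in the
# central hole and are pruned immediately.
phi_inv = (5 ** 0.5 - 1) / 2
print(survives(phi_inv), survives(0.3 + 0j))
```

The pruning bound is only necessary, not sufficient, so survival to finite depth is evidence rather than certification that $z \in \Omega_2 \cap \mathbb{D}$.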
There is a large body of literature investigating the roots of polynomials and power series with all coefficients in a prescribed set (see, for example, \cite{OdlyzkoPoonen, BeaucoupEtAl, BorweinEtAlLittlewoodType, Konyagin, ShmerkinSolomyak, BorweinErdelyiLittmann}). Different normalizations of the IFS give rise to power series with different coefficients. The polynomials most closely related to the Thurston set are perhaps Littlewood, Newman and Borwein polynomials, polynomials whose coefficients belong to the sets $\{\pm 1\}$, $\{0,1\}$ and $\{-1,0,+1\}$ respectively. \subsection{Structure of the paper} \ {\bf \S\ref{s:preliminaries}: Preliminaries} provides background on \emph{tent maps}, the transformations we study in this paper. We define the \emph{$\beta$-itinerary} of a point under such a transformation, the associated sequence of \emph{digits}, the \emph{cumulative sign} for sequences, the \emph{$\beta$-tent map expansion}, the notion of being \emph{postcritically finite}, and the \emph{Parry polynomials}. We define \emph{twisted lexicographic ordering} and give the \emph{admissibility criterion} for itineraries, which are key tools. Finally, we give some background on Milnor-Thurston kneading theory, discuss the connection with quadratic maps, and give an \emph{iterated function system} description. {\bf \S\ref{sec:auxiliary}: Auxiliary sequences} defines the \emph{auxiliary sequences} associated to sequences of digits, which we will use to characterize admissible sequences, and to define the important notion of \emph{dominant} words that will be essential in \S~\ref{sec:dominant}. {\bf\S\ref{sec:kneading-parry}: Relating kneading polynomials and Parry polynomials} shows how to convert between kneading polynomials and Parry polynomials.
{\bf\S\ref{sec:dominant}: Dominant Strings} shows that growth rates corresponding to dominant strings are dense in $\left[\sqrt 2, 2\right]$, by proving the same result for the leading roots of Parry polynomials of dominant strings, and the fact that growth rates and leading roots are equivalent. {\bf\S\ref{sec:compatibility}: Compatibility of orderings} shows that the orderings on the sets of admissible words, kneading determinants, and growth rates are compatible. {\bf\S\ref{sec:persistence}: Persistence on $[\sqrt{2},2]$} shows that roots of postcritically finite $\beta$-transformations \emph{persist} inside the unit disk, for growth rates in the interval $\left[\sqrt 2, 2\right]$. Using Thurston's terminology, this shows that this portion of the ``Master Teapot'' picture is connected. To do so, we first prove a technical fact: that certain words can be concatenated such that the concatenation is admissible. Dominant strings will be essential for this concatenation. {\bf\S\ref{sec:doubling}: Period doubling} introduces the tool of \emph{period doubling} to extend the persistence result to all growth rates in the interval $(1,2]$, proving Theorem \ref{mainthm:closurepersistence}. Previous sections gave results for growth rates in $[\sqrt 2, 2]$, and period doubling extends this to $[\sqrt[4] 2, \sqrt 2]$, then to $[\sqrt[8] 2, \sqrt[4] 2]$, and so on, which extends the results to all of $(1,2]$. {\bf\S\ref{sec:cylinder}: The unit cylinder and connectivity} shows that the Master Teapot is connected inside the unit cylinder, and uses this structure to prove Theorems \ref{mainthm:unitcylinder} and \ref{mainthm:connected}. {\bf\S\ref{sec:gaps}: Gaps in the Thurston set} explains why there appear to be ``holes'' near primitive roots of unity in the Thurston set (Figure \ref{fig:thurston_set}). We show that these holes are associated to discrete subgroups, proving Theorem \ref{mainthm:gaps}. 
{\bf \S\ref{sec:preperiodic}: $\Omega_2$ and $\Omega_2^{pre}$ are not equal} shows that the periodic and preperiodic Thurston sets are not equal, proving Theorem \ref{mainthm:prepernotequal}. \subsection{Acknowledgements} The authors gratefully acknowledge Giulio Tiozzo, Daniel Thompson, Sarah Koch, and Dylan Thurston for helpful conversations. This work began at the AMS Mathematics Research Communities program in June 2017. The authors are immensely grateful to the MRC program for stimulating this collaboration, and to Daniel Thompson for introducing us to this subject while at the MRC. This material is based upon work supported by the National Science Foundation under Grant Number DMS 1641020. The first author was supported in part by NSF RTG grant 1045119. \section{Preliminaries}\label{s:preliminaries} \subsection{Basic definitions} \label{Basic definitions} \label{s:tentMapBackground} Denote the unit interval $[0,1]$ by $I$. Throughout this work, a \emph{tent map} will mean a map $f_{\beta}:I \rightarrow I$ of the following form. Fix a real number $\beta \in (1,2]$, and let $I^\beta_0=[0,\frac1\beta]$ and $I^\beta_1=(\frac1\beta,1]$. The \emph{$\beta$-tent map} is the map $f_\beta\colon I\to I$ defined by \[ f_\beta= \left\{\begin{array}[]{ll} \beta x & \text{ for }x\in[0,\frac1\beta],\\ -\beta x + 2 & \text{ for }x\in[\frac1\beta,1]. \end{array} \right. \] The number $\beta$ is the \emph{growth rate} of the map $f_{\beta}$; equivalently, $\beta = e^{h_{top}(f_{\beta})}$. This equivalence follows from the fact that for a continuous self-map $f$ of an interval with finitely many turning points, \begin{equation} \label{eq:arclengthentropy} h(f) = \lim_{n \to \infty} \frac{1}{n} \log(\textrm{Var}(f^n)), \end{equation} where $\textrm{Var}(f)$ denotes the total variation of $f$ \cite{MS}.
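Equation (\ref{eq:arclengthentropy}) can be checked numerically. The sketch below (an illustration we add, not part of the argument) iterates the tent map on a uniform grid and compares $\frac{1}{n}\log \textrm{Var}(f_\beta^n)$ with $\log \beta$; we take $\beta = 2$, where the laps of $f^n$ have dyadic endpoints, so a dyadic grid computes the variation exactly.

```python
import math

def tent(beta, x):
    """The beta-tent map on [0, 1]."""
    return beta * x if x <= 1 / beta else -beta * x + 2

def entropy_estimate(beta, n, grid_points):
    """Estimate h_top(f_beta) = (1/n) log Var(f_beta^n) on a uniform grid."""
    ys = []
    for i in range(grid_points + 1):
        x = i / grid_points
        for _ in range(n):
            x = tent(beta, x)
        ys.append(x)
    var = sum(abs(ys[i + 1] - ys[i]) for i in range(grid_points))
    return math.log(var) / n

# For beta = 2 the turning points of f^n lie on the dyadic grid, so a grid
# of 2^(n+2) intervals captures Var(f^n) = 2^n exactly.
print(entropy_estimate(2.0, 10, 1 << 12), math.log(2))
```

For non-dyadic $\beta$ the grid sum only bounds $\textrm{Var}(f^n)$ from below, so finer grids and larger $n$ are needed before the estimate settles near $\log \beta$.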
The \emph{$\beta$-itinerary sequence} of a point $x$ in $I$ is the sequence $\It_\beta(x,\cdot)\colon \mathbb N\to\{0,1\}$ defined by $$\It_\beta(x,j)=k$$ where $f^{j-1}_\beta(x)\in I^\beta_k$. Equivalently, $\It_\beta(x,j)=\lfloor \beta\cdot f_\beta^{j-1}(x)\rfloor,$ where $\lfloor\cdot\rfloor$ is the integer floor. The sequence of \emph{digits} associated to the $\beta$-itinerary sequence of a point $x$ is the sequence $d_\beta(x,\cdot)\colon\mathbb N\to\{0,2\}$ defined by \[ d_\beta(x,j)=\left\{ \begin{array}[]{ll} 0 & \text{ if }\It_\beta(x,j)=0, \\ 2 & \text{ if }\It_\beta(x,j)=1. \end{array} \right. \] For any point $x \in I$, $\beta \in (1,2]$ and integer $j \geq 1$, define the \emph{sign} $e_{\beta}(x,j)$ by $$e_\beta(x,j)=\left\{ \begin{array}[]{ll} +1 & \text{ if }\It_{\beta}(x,j)=0, \\ -1 & \text{ if }\It_\beta(x,j)=1. \end{array} \right. $$ The \emph{sign vector} associated to any tent map is the function $E:\{0,1\} \rightarrow \{-1,+1\}$ defined by $E(0)=+1$ and $E(1)=-1$. The sign vector $E$ encodes the information that for any tent map $f_{\beta}$, the graph has positive slope on $I^{\beta}_0$ and negative slope on $I^{\beta}_1$. The {\em cumulative sign} associated to a $\beta$-itinerary sequence of a point $x$ is the sequence $s_\beta(x,\cdot):\mathbb{N} \rightarrow \{+1,-1\}$ defined inductively by $s_\beta(x,1)=1$ and \begin{equation} \label{eq:cumulativesigndef} s_\beta(x,j+1)=\prod_{k=1}^{j} e_\beta(x,k) \end{equation} for $j\geq1$. In fact, cumulative signs can be defined for any word in the alphabet $\{0,1\}$, not just those that arise as $\beta$-itineraries. For any sequence $w=w_1w_2w_3\dots \in \{0,1\}^\mathbb{N},$ define the sequence of cumulative signs $s_w:\mathbb{N} \rightarrow \{+1,-1\}$ inductively by $s_w(1)=+1$ and $s_w(i+1)= E(w_i)s_w(i)$ for $i \in \mathbb{N}$. For a finite string $w=w_1\dots w_n$, define the cumulative sign of $w$ to be $s_w(n)$.
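As a quick check of these definitions (a worked example we add for concreteness): for the word $w = 110$ we have $E(w_1) = E(w_2) = -1$ and $E(w_3) = +1$, so $s_w(1) = +1$, $s_w(2) = E(w_1)s_w(1) = -1$, and $s_w(3) = E(w_2)s_w(2) = +1$; the cumulative sign of $w$ is therefore $s_w(3) = +1$.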
\begin{remark} \label{rem:words_strings_seqs} We will use the term \emph{string} to refer to an ordered list of letters in some alphabet, and this list may be either finite or infinite. We adopt the convention that a \emph{word} is always a finite string, and a \emph{sequence} is always an infinite string. An itinerary is also assumed to be an infinite string. \end{remark} The formula for the $\beta$-tent map expansion of $x$ is well known, but since Parry polynomials, which we will use extensively, come from $\beta$-expansions, we include an (original) proof below for completeness. \begin{proposition}[$\beta$-tent map expansion of $x$] \label{fact:betaexpansion} For any $\beta \in (1,2]$ and any $x \in I$, \begin{equation} \label{eq:beta_expansion} x=\sum_{j=1}^\infty \frac{s(x,j)d(x,j)}{\beta^j}. \end{equation} \end{proposition} \begin{proof} Fix $1 < \beta \leq 2$ and let $f$ be the tent map of growth rate $\beta$. For any $x \in I$, $f(x) = d(x,1) + e(x,1)\beta x$. Then for any integer $n>1$, $f^n(x) = d(x,n)+ e(x,n)\beta f^{n-1}(x)$. By induction on $n$, one obtains that for any $n \in \mathbb{N}$ and $x \in [0,1]$, \begin{multline*} f^{n}(x) = d(x,n) + \beta^1d(x,n-1) \prod_{j=n}^n e(x,j)+\beta^2 d(x,n-2) \prod_{j=n-1}^n e(x,j) \\ + \dots + \beta^{n-1}d(x,1) \prod_{j=2}^n e(x,j) + \beta^{n} x\prod_{j=1}^n e(x,j). \end{multline*} Dividing through by $\beta^n \prod_{j=1}^n e(x,j)$ yields \begin{multline*}\frac{f^n(x)}{\beta^n \prod_{j=1}^n e(x,j)} = \frac{d(x,n)}{\beta^n \prod_{j=1}^n e(x,j)} + \frac{d(x,n-1)}{\beta^{n-1} \prod_{j=1}^{n-1} e(x,j)} + \frac{d(x,n-2)}{\beta^{n-2} \prod_{j=1}^{n-2} e(x,j)} \\ + \dots + \frac{d(x,1)}{\beta^1 \prod_{j=1}^1 e(x,j)} + x. \\ \end{multline*} Taking the limit as $n \to \infty$ gives \begin{equation}\label{eq:lim} 0 = x + \sum_{i=1}^{\infty} \frac{d(x,i)}{\beta^i s(x,i+1)}=x + \sum_{i=1}^{\infty} \frac{d(x,i)s(x,i)e(x,i)}{\beta^i}. 
\end{equation} Since for tent maps $d(x,i) \neq 0$ if and only if $e(x,i) = -1$, equation (\ref{eq:lim}) implies $$0 = x - \sum_{i=1}^{\infty} \frac{d(x,i)s(x,i)}{\beta^i}.$$ \end{proof} The \emph{topological critical points} of the tent map $f_{\beta}$ are the points $0,1/\beta$, and $1$. A tent map $f_\beta$ is said to be {\em postcritically finite} if the union of the forward orbits of the critical points of $f_\beta$ is a finite set. The definition of the tent map $f_{\beta}$ immediately implies that $f_{\beta}(0)=0$ and $f_{\beta}(1/\beta)=1$. Therefore, a tent map $f_{\beta}$ is postcritically finite if and only if the orbit of $1$ is finite. A postcritically finite orbit of $1$ may be (strictly) \emph{periodic}, meaning that there exists $n \in \mathbb{N}$ such that $f^n(1)=1$, or it may be (strictly) \emph{preperiodic}, meaning that the orbit is not strictly periodic, but there exist $k,n \in \mathbb{N}$ such that $f^n(f^k(1))=f^k(1)$. We call $f_\beta$ {\em superattracting} if the orbit of $1$ under $f_{\beta}$ is (strictly) periodic. The terminology ``superattracting'' is borrowed from complex dynamics (see \S \ref{ss:quadratic maps}). If $f_{\beta}$ is superattracting, meaning that $1$ is (strictly) periodic under $f_{\beta}$, the $\beta$-tent map expansion of $1$ (equation \ref{eq:beta_expansion}) becomes a geometric series. Denoting the period of $1$ by $p$ and substituting the value of the geometric series, the $\beta$-tent map expansion of $1$ becomes \begin{equation} \label{eq:definingpolynomial} 1 = \beta^p - \sum_{j=1}^p s(1,j)d(1,j)\beta^{p-j}. \end{equation} \begin{definition} \label{def:ParryPolynomial} The \emph{Parry polynomial} for a superattracting tent map $f_{\beta}$ with critical period $p$ is the polynomial \[ P_{\beta} (z) := z^p-s(1,1)d(1,1)z^{p-1}-\cdots-s(1,p)d(1,p) - 1.
\] \end{definition} \begin{remark} The Parry polynomial for a word $w$ in the alphabet $\{0,1\}$ is defined similarly; interpret the word $w$ as one period of the itinerary of $1$ under a tent map, compute the digits and cumulative signs, and form the Parry polynomial $P_w$ as above. \end{remark} Thus, if $f_{\beta}$ is a superattracting tent map, it follows from equation (\ref{eq:definingpolynomial}) that $\beta$ is a root of the associated Parry polynomial. The minimal polynomial for $\beta$ is a factor of $P_{\beta}$. However, $P_{\beta}$ is never irreducible, as it always has a factor of $(z-1)$ (see Proposition \ref{p:relatingpolynomials2}), and may also have other factors. In the case that $f_{\beta}$ is strictly preperiodic, a similar procedure, using the sum of a power series, produces a polynomial associated to $f_\beta$. \subsection{Irreducibility} To establish irreducibility, we will use two lemmas from \cite{TiozzoGaloisConjugates} which are derived from Eisenstein's criterion. \begin{lemma} \cite[Lemma 4.1]{TiozzoGaloisConjugates} \label{l:tiozzo4point1} Let $d=2^n - 1$ with $n \geq 1$, and choose a sequence $\epsilon_0,\epsilon_1,\dots,\epsilon_d$ with each $\epsilon_k \in \{\pm 1\}$ such that $\sum_{k=0}^d \epsilon_k \equiv 2 \bmod{4}$. Then the polynomial $$f(x):= \epsilon_0 + \epsilon_1 x + \dots + \epsilon_d x^d$$ is irreducible in $\mathbb{Z}[x]$. \end{lemma} \begin{lemma}\cite[Lemma 4.2]{TiozzoGaloisConjugates} \label{l:Tiozzo4.2} Let $f(x) = 1 + \sum_{k=1}^d \epsilon_k x^k$ be a polynomial with $\epsilon_k \in \{\pm 1\}$ for all $1 \leq k \leq d$ and $\epsilon_k = -1$ for some $k$. If $f(x)$ is irreducible in $\mathbb{Z}[x]$, then for all $n \geq 1$, the polynomial $f(x^{2^n})$ is irreducible in $\mathbb{Z}[x]$. \end{lemma} Parry polynomials and kneading polynomials may not be irreducible; all Galois conjugates of $\beta$ are roots of $P_{\beta}$, but $P_{\beta}$ may have roots which are not Galois conjugates of $\beta$.
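For instance (an illustration we add; it is not an example from \cite{TiozzoGaloisConjugates}): with $n = 2$, so $d = 3$, the choice $(\epsilon_0,\epsilon_1,\epsilon_2,\epsilon_3) = (1,1,1,-1)$ satisfies $\sum_{k=0}^3 \epsilon_k = 2 \equiv 2 \bmod{4}$, and indeed $f(x) = 1+x+x^2-x^3$ is irreducible in $\mathbb{Z}[x]$: it is primitive, and a cubic is reducible over $\mathbb{Q}$ only if it has a rational root, while $f(\pm 1) = 2$ rules out the only candidates $\pm 1$. Lemma \ref{l:Tiozzo4.2} then shows that $f(x^{2^m}) = 1 + x^{2^m} + x^{2^{m+1}} - x^{3 \cdot 2^m}$ is irreducible for every $m \geq 1$.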
The terms $\beta$-conjugates or generalized $\beta$-conjugates refer to the roots of a Parry polynomial associated to a $\beta$-map or generalized $\beta$-map. The distribution of $\beta$-conjugates was studied in \cite{VGJL,VGJL2}. \subsection{Ordering and admissibility of itineraries} \label{sec:admissibility} \begin{definition}[Twisted lexicographic ordering] \label{def:twistedlexicographic} {\color{white} for formatting only} \begin{enumerate} \item Define the ordering $\leq_E$ on the set of sequences in $\{0,1\}^{\mathbb{N}}$ as follows. Given two distinct sequences $w=w_1w_2\dots$ and $v=v_1v_2\dots$ in $\{0,1\}^{\mathbb{N}}$, define $w <_E v$ if and only if at the first integer $n$ such that $w_n\neq v_n$, \[\left\{ \begin{array}{ll} w_n<v_n &\text{ if } s_w(n)=+1,\\ w_{n}>v_{n} &\text{ if } s_w(n)=-1. \end{array} \right. \] \item Define the ordering $\leq_E$ on the set of words in the alphabet $\{0,1\}$ as follows. Given two words $w$ and $v$, write $w <_E v$ if and only if $w^\infty <_E v^\infty$. \end{enumerate} \end{definition} \noindent Notice in Definition \ref{def:twistedlexicographic} that $s_w(n)=s_v(n)$ since $n$ is the first index at which $w$ and $v$ differ. \noindent\begin{definition}[Admissibility] \label{def:admissible} \ \begin{enumerate} \item A sequence $w=(w_1 w_2 \dots )$ in the alphabet $\{0,1\}$ is \emph{admissible} if there exists $\beta \in (1,2]$ such that $w$ is the itinerary of $1$ under the tent map $f_{\beta}$. \item A word $w=(w_1\dots w_n)$ is \emph{admissible} if the infinite string $(w_1\dots w_n)^\infty$ is admissible. \end{enumerate} \end{definition} \noindent Let $\sigma:\{0,1\}^\mathbb{N} \to \{0,1\}^\mathbb{N}$ be the shift map, i.e. $\sigma(w_1w_2w_3\dots)= w_2w_3\dots$. \begin{theorem} \cite[Theorem 12.1]{MilnorThurston} \label{t:admissible} A sequence $w \in \{0,1\}^\mathbb{N}$ is admissible if and only if $\sigma^j(w) \leq_E w$ for all $j \in \mathbb{N}$.
\end{theorem} \subsection{The real slice of the Mandelbrot set \& Milnor-Thurston kneading theory} \label{ss:quadratic maps} Every quadratic polynomial on $\mathbb{C}$ is conformally equivalent to a unique polynomial of the form $f_c(z)=z^2+c$. The Mandelbrot set $\mathcal{M}$ is the set of parameters $c$ for which the filled Julia set for the map $f_c$ is connected. A parameter $c \in \mathcal{M}$ is said to be \emph{hyperbolic} if the critical point for $f_c$ tends to the (necessarily unique) attracting cycle in $\mathbb{C}$. The hyperbolic parameters of $\mathcal{M}$ form an open set; connected components of this set are called hyperbolic components. Each hyperbolic component $H$ is conformally equivalent to $\mathbb{D}$ under the map $\lambda$ which assigns to each $c \in H$ the multiplier of its (unique) attracting cycle. The \emph{center} and \emph{root} of $H$ are $\lambda^{-1}(0)$ and $\lambda^{-1}(1)$, respectively. The set of all real hyperbolic parameters is dense in $\mathcal{M} \cap \mathbb{R} = [-2,1/4]$; in particular, every component of the interior of $\mathcal{M}$ which meets the real line is hyperbolic \cite{Lyubich}. The parameter $c$ and the map $f_c$ are said to be \emph{superattracting} if the critical point $z=0$ is strictly periodic under $f_c$. Each superattracting parameter $c$ is the center of a hyperbolic component of the Mandelbrot set. For a parameter $c \in \partial \mathcal{M} \cap \mathbb{R}$, the \emph{dynamic root} $r_c$ of $f_c$ is defined to be the critical value $c$ if $c$ belongs to the Julia set of $f_c$, and the smallest real value of $J(f_c)$ larger than $c$ if $c$ does not belong to the Julia set.
For $c \in \partial \mathcal{M} \cap \mathbb{R}$, there exists a unique angle $\theta_c \in [0, 1/2]$ such that the dynamic rays $R_c(\pm \theta_c)$ land at the dynamic root $r_c$ of $f_c$; in the parameter plane, the two rays $R_\mathcal{M}(\pm \theta_c)$, and only these rays, contain $c$ in their impression \cite{Zakeri}. This angle $\theta_c$ is called the \emph{characteristic angle} for the parameter $c \in \partial \mathcal{M} \cap \mathbb{R}$. In the context of quadratic maps of the form $f_c(z)=z^2+c$, define the \emph{sign} of a real number $x \not = 0$ by $\epsilon(x) = -1$ if $x<0$ and $\epsilon(x) = +1$ if $x > 0$. Define the sequence of cumulative signs by $\eta_n(x) = \prod_{i=0}^{n-1} \epsilon(f^i(x))$. (The use of $\epsilon$ and $\eta_n$ in this context is analogous to $e$ and $s_n$ in \S \ref{Basic definitions}.) When the critical point $0$ is not a periodic point for $f_c$, the \emph{kneading series} of $x$, denoted by $K(x,t)$, is the formal series $$K(x,t) = 1 + \sum_{n=1}^{\infty}\eta_n(x)t^n.$$ For each $c \in \mathbb{C}$, define the \emph{kneading determinant} $K_c(t)$ of $f_c$ by $$K_c(t) = \begin{cases} K(c,t) \textrm{ if the critical point is not periodic under }f_c \\ \lim_{C \to c^+}K(C,t) \textrm{ if the critical point is periodic under }f_c\\ \end{cases}$$ where the limit as $C \to c^+$ is taken over the set of $C$'s such that the critical point is not periodic under $f_C$. \begin{theorem} \cite[Theorem 6.3]{MilnorThurston} \label{t:kneadingroots} Let $s$ be the growth rate of $f_c$. Then the function $K_c(t)$ has no zeros on the interval $[0,1/s)$, and if $s>1$ we have $K_c(1/s)=0$. \end{theorem} A formal power series with coefficients $\pm 1$ is said to be \emph{admissible} if it is the kneading determinant of some real quadratic polynomial. A formal power series $\phi(t)$ is said to be \emph{positive} if its first non-zero coefficient is positive.
Two formal power series satisfy \mbox{$\phi_1(t) < \phi_2(t)$} if $\phi_2(t) - \phi_1(t)$ is positive. The absolute value $|\phi(t)|$ of a power series equals $\phi(t)$ if $\phi(t)$ is positive and equals $-\phi(t)$ otherwise. \begin{theorem} \cite[Theorem 12.1]{MilnorThurston} \label{t:MTadmissibility} Let \[ \phi(t) = 1+\sum_{k=1}^{\infty} \epsilon_k t^k \] be a formal power series with $\epsilon_k \in \{\pm 1\}$. Then $\phi(t)$ is admissible if and only if \[ \phi(t) \leq \left| \sum_{k=n}^{\infty} \epsilon_kt^{k-n} \right| \] for each $n \geq 1$. \end{theorem} For a superattracting parameter $c$, denote the length of the critical orbit by $n$. Then the coefficients of the kneading determinant $K_c(t)$ are periodic, and so there exists a polynomial $P_{c,\textrm{knead}}(t)$ of degree $n-1$ with coefficients in $\{+1,-1\}$ such that \begin{equation} \label{eq:kneadingpolynomial} K_c(t) = \frac{P_{c,\textrm{knead}}(t)}{1-t^n}. \end{equation} The polynomial $P_{c,\textrm{knead}}(t)$ is the \emph{kneading polynomial} of $f_c$. \begin{theorem} \cite[Theorem 13.1, Corollary 13.2]{MilnorThurston} \label{t:entropycontinuous} The function $h_{top}(f_c|_{\mathbb{R}})$ is a continuous, nonincreasing function of $c$. \end{theorem} \begin{theorem} \cite[Theorem 1.1]{TiozzoTopologicalEntropy} \label{t:TiozzoGaloisConjugates} Let $c \in [-2,1/4]$. Then \[ \frac{h_{top}(f_c|_{\mathbb{R}})}{\log 2} = H.\text{dim} \{\theta \in S^1 \mid R_{\mathcal{M}}(\theta) \textrm{ lands on } \partial \mathcal{M} \cap [c,1/4]\}. \] \end{theorem} \noindent An immediate consequence of Theorem \ref{t:TiozzoGaloisConjugates} is that $h_{top}(f_c|_{\mathbb{R}})$ as a function of $c \in \mathbb{R}$ is constant on real hyperbolic components.
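As a quick sanity check of these definitions and of Theorem \ref{t:kneadingroots}, the following worked example (ours, not taken from the cited sources) computes a kneading determinant explicitly:

```latex
% Worked example (not from the cited sources): the Chebyshev parameter c = -2.
For $c=-2$ the critical orbit is $0 \mapsto -2 \mapsto 2 \mapsto 2$, so the
critical point is not periodic and $K_{-2}(t)=K(-2,t)$. Since
$\epsilon(-2)=-1$ while $\epsilon(f^i(-2))=\epsilon(2)=+1$ for all $i \geq 1$,
every cumulative sign is $\eta_n(-2)=-1$, and
\[
K_{-2}(t) \;=\; 1-\sum_{n=1}^{\infty} t^n \;=\; \frac{1-2t}{1-t}.
\]
The growth rate of $f_{-2}|_{\mathbb{R}}$ is $s=2$, and indeed $K_{-2}(t)$ is
nonzero on $[0,1/2)$ and vanishes at $t=1/2$, as Theorem \ref{t:kneadingroots}
asserts.
```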
\subsection{Iterated function system description} \label{ss:IFSdescription} A point $z \in \mathbb{D} \setminus \{0\}$ defines a contracting iterated function system (IFS) generated by the two maps $$f_z:x \mapsto zx+1, \quad g_z:x \mapsto zx-1.$$ The \emph{attractor} or \emph{limit set} $\Lambda_z$ of this IFS is defined to be the unique fixed, nonempty, compact set $S \subset \mathbb{C}$ such that $S = f_z(S) \cup g_z(S)$. The existence and uniqueness of $\Lambda_z$ is a consequence of the contraction mapping principle. The image of a point $x \in \mathbb{C}$ under a word $w$ of length $n$ in the alphabet $\{f,g\}$ is $$xz^n + \sum_{i=0}^{n-1} c_iz^i,$$ where $c_i \in \{-1,1\}$ is determined according to whether the $i^{\text{th}}$ letter of $w$ is $f_z$ or $g_z$. Thus, the limit set $\Lambda_z$ of the IFS generated by $f_z$ and $g_z$ is the set of values of power series in $z$ with coefficients $\pm 1$. Tiozzo showed, roughly speaking, that all finite strings occur as the suffixes of kneading sequences, thereby proving that $\Omega_2 \cap \mathbb{D}$ equals the closure of the set of roots in $\mathbb{D}$ of all power series with $\pm 1$ coefficients \cite[Proposition 5.2]{TiozzoGaloisConjugates}. Therefore, a point $z \in \mathbb{D}$ is in $\Omega_2$ if and only if $0$ is in the limit set of the iterated function system generated by $f_z,g_z$. \begin{lemma} \cite[Lemma 3.1.1]{ckw} \label{l:boundsonlimitset} $$\Lambda_z \subset B_{\frac{1}{1-|z|}}(0).$$ \end{lemma} The statement in \cite{ckw} uses a different normalization on the maps. Lemma \ref{l:boundsonlimitset} above and its proof below are exact translations of the versions in \cite{ckw}. \begin{proof} Let $D$ denote the ball of radius $R$ centered at $0$. Then $f(D)$ and $g(D)$ are balls of radius $|z|R$ centered at $1$ and $-1$, respectively. Hence, if $\frac{1}{1-|z|}<R$, we have $f(D),g(D) \subset D$. This implies $\Lambda_z \subset D$. Since this holds for every $R>\frac{1}{1-|z|}$, the claim follows.
\end{proof} \section{Auxiliary sequences}\label{sec:auxiliary} Auxiliary strings will serve two purposes: first, admissible sequences can be characterized in terms of auxiliary strings, and second, auxiliary strings feature in the definition of dominant words, which we will use to obtain a set of tent maps whose growth rates are dense in $[1,2]$. The definitions of auxiliary and dominant used here are translations from the complex dynamics setting of notions with the same names introduced in \cite{TiozzoTopologicalEntropy}. \begin{lemma} \label{lem:10start} Let $w$ be a word in the alphabet $\{0,1\}$ such that $w^\infty$ is admissible. Then the first letter of $w$ is $1$ and the second letter is $0$. \label{lem:firstletters} \end{lemma} \begin{proof} Let $w$ be a word in the alphabet $\{0,1\}$ such that $w^\infty$ is the itinerary of $1$ under the tent map of slope $\beta$, $f_{\beta}$. The first letter is $1$ because $1$ itself lies in the right-hand interval of the partition; for the second letter, it suffices to prove that $f_{\beta}(1)\in[0,\frac1\beta]$. Since $f_{\beta}(1)=2-\beta$, this holds if and only if $2-\beta \leq \frac1\beta$, which is equivalent to $-\beta^2+2\beta-1 \leq 0$ since $\beta>0$, and thus also to $(\beta-1)^2 \geq 0$. Since $\beta > 1$ (by the definition of a tent map), the claim follows. \end{proof} \begin{lemma} \label{l:equivalentAdmissibility} Let $w$ be a word in the alphabet $\{0,1\}$ that starts with $10$. Then $w^\infty$ is admissible if and only if for every nontrivial decomposition $w=xy$ such that $y$ starts with $10$, $yx \leq_E xy$. \end{lemma} \begin{remark}\label{r:admissible_prefix_suffix} If $w$ is a word for which $w^\infty$ is admissible, then an immediate consequence of Lemma \ref{l:equivalentAdmissibility} is that any suffix of $w$ is smaller than or equal to the prefix of $w$ of the same length in the twisted lexicographical ordering.
\end{remark} \begin{proof}[Proof of Lemma \ref{l:equivalentAdmissibility}] By Theorem \ref{t:admissible}, a word $w$ is admissible if and only if for every nontrivial decomposition of $w$ as $w=xy$, we have \begin{equation} \label{eq:admissibleequivalentcharacterization} yx\leq_Exy. \end{equation} If $w$ is admissible, then equation (\ref{eq:admissibleequivalentcharacterization}) holds for every nontrivial decomposition $w=xy$, including those for which $y$ starts with $10$, proving one direction of the statement. Now suppose $w$ starts with $10$ and $yx \leq_E xy$ for every decomposition $w=xy$ such that $y$ starts with $10$. Since $w=xy$ starts with $10$, which is maximal in the ordering $\leq_E$, equation (\ref{eq:admissibleequivalentcharacterization}) automatically holds for any decomposition $w={x'}{y'}$ such that ${y'}$ does not start with $10$. Therefore equation (\ref{eq:admissibleequivalentcharacterization}) holds for every decomposition $w=xy$, and so $w$ is admissible. \end{proof} \begin{definition}[Auxiliary string] \label{def:auxiliary} \ \begin{enumerate} \item Let $w=w_1w_2\dots $ be an infinite string in the alphabet $\{0,1\}$ such that $w_1=1$. Let $i_1,i_2,\dots$ be the increasing sequence of indices $i$ such that $w_i = 1$. For each $j \in \mathbb{N}$, define $n_j = i_{j+1}-i_{j}-1$. The \emph{auxiliary string} $w_{aux}$ associated to $w$ is the sequence of nonnegative integers \[ w_{aux} = n_1 n_2 n_3 \dots. \] \item Let $w=w_1 \dots w_n$ be a word in the alphabet $\{0,1\}$ such that $w_1=1$. Let $i_1,\dots,i_p$, $p\geq 1$, be the increasing sequence of indices $i$ such that $w_i =1$. For each $j < p$, define $n_j = i_{j+1}-i_{j}-1$, and define $n_p$ to be the number of $0$'s to the right of $w_{i_p}$ in $w$.
The \emph{auxiliary string} $w_{aux}$ associated to $w$ is the finite string of nonnegative integers $$w_{aux} = n_1 \dots n_p.$$ \end{enumerate} \end{definition} \begin{remark} Note that the auxiliary string is always defined for admissible sequences; since $f_\beta$ is uniformly expanding with slope $\beta>1$ in the first interval $I_0$, the $f_\beta$-orbit of $1$ must eventually leave the interval $I_0$ if it ever enters $I_0$. The term $n_j$ in $w_{aux}$ represents the number of $0$'s after the $j^\text{th}$ occurrence of $1$ in the string $w$. If the last letter of a finite string $w$ is a $1$, there are zero $0$'s to the right, so $n_p=0$. Otherwise, the value of $n_j$ is zero if and only if the $j^\text{th}$ $1$ and the $(j+1)^\text{th}$ $1$ are adjacent. Notice that if $w$ is a finite string in the alphabet $\{0,1\}$ that begins with $1$, $(w^{\infty})_{aux} = (w_{aux})^{\infty}$. \end{remark} \begin{definition} The {\em alternating lexicographical order} on the set of length-$n$ strings of nonnegative integers (where $n$ is either a finite positive integer or $\infty$) is defined as follows: $(a_i)_{i=1}^n<_{alt}(b_i)_{i=1}^n$ if, denoting by $k$ the index of the first digit in which the sequences differ, $$\begin{cases} a_k < b_k \qquad \text{ if }k\text{ is even,}\\ a_k > b_k\qquad \text{ if }k\text{ is odd.} \end{cases}$$ If there is no such $k$, meaning that the two strings are the same, write $(a_i)_{i=1}^n \leq_{alt}(b_i)_{i=1}^n$ and $(b_i)_{i=1}^n\leq_{alt}(a_i)_{i=1}^n$. \end{definition} \noindent For example, $21<_{alt}11<_{alt}12$. \begin{definition} Let $A=(a_1,\dots,a_n)$ and $B=(b_1,\dots,b_m)$ be two finite strings of positive integers (possibly of different lengths). Write $$A \ll_{alt} B$$ if there exists a positive integer index $k \leq \min\{m,n\}$ such that $(a_1,\dots,a_{k-1}) = (b_1,\dots,b_{k-1})$ and $(a_1,\dots,a_k) <_{alt} (b_1,\dots,b_k)$.
\end{definition} \begin{definition} \ \begin{enumerate} \item A finite string of nonnegative integers $w$ is {\em extremal} if for any decomposition \mbox{$w=xy$} where $x$ and $y$ are nontrivial, $xy \leq _{alt}yx$. \item An infinite string of nonnegative integers $S$ is \emph{extremal} if for any decomposition $S=xy$ where $x$ has finite length, $$xy \leq_{alt} y.$$ \end{enumerate} \end{definition} Recall (Definition \ref{def:admissible}) that a word $w$ is {\em admissible} if $w^\infty$ is the itinerary of $1$ under a PCF tent map. \begin{proposition} \label{prop:auxiliary} Let $w$ be a word in the alphabet $\{0,1\}$ with first letters $10$. Then $w$ is admissible if and only if $w_{aux}$ is extremal. \end{proposition} \begin{proof} Lemma \ref{l:equivalentAdmissibility} allows us to only consider decompositions in which $y$ starts with $10$, meaning that the auxiliary sequence for $y$ is defined, and if $x_{aux}=(n_1,\ldots,n_{\ell})$ and $y_{aux}=(n_{\ell+1},\ldots,n_p)$, then $(xy)_{aux}=(n_1,\ldots,n_{\ell},n_{\ell+1},\ldots,n_p)$. Let $w=xy$ be any such decomposition. Compare $w=xy$ to the shift $yx$. The case in which $w$ equals its shift is trivial, so consider the case where they differ. Let the $t^\text{th}$ 1 of $w=xy$ be the last 1 at which $xy$ and $yx$ agree. More precisely, we are assuming that every $k^\text{th}$ term in $w$ up to (and including) this $t^\text{th}$ 1 agrees with the $k^\text{th}$ term of the shift $yx$. Then $w$ and its shift $yx$ differ in the number of consecutive zeros following the $t^\text{th}$ 1. Let us express this in terms of the auxiliary sequences. If we denote the auxiliary sequence of $w$ by \[ w_{aux}=(xy)_{aux}=(n_1,\ldots,n_t,\ldots,n_p) \] then the auxiliary sequence of the shift $yx$ is given by \[ (yx)_{aux}=(n_{\ell+1},\ldots,n_{t+\ell},\ldots,n_p,n_1,\ldots,n_{\ell}) \] where $\ell$ is the length of the auxiliary sequence for $x$.
Thus, $xy$ and $yx$ agree up to the $t^\text{th}$ 1 of $w=xy$ if and only if $n_{t}$, which is the $t^\text{th}$ term of $(xy)_{aux}$, and $n_{t+\ell}$, the $t^\text{th}$ term of $(yx)_{aux}$, are the first terms at which the sequences $(xy)_{aux}$ and $(yx)_{aux}$ differ. The direction of the inequality will be determined by the parity of $t$. For this special case of the tent map, $E(0)=+1$ and $E(1)=-1$, so the cumulative sign at the $m^\text{th}$ term of a string is equal to $(-1)^{n(m)}$ where $n(m)$ is the number of 1's in the string before the $m^\text{th}$ term. Thus, $t$ even implies the cumulative sign at the point where $xy$ and $yx$ differ is positive. It follows that $xy>_{E}yx$ if and only if the $(t+1)^\text{st}$ 1 of $xy$ appears earlier in the sequence than the $(t+1)^\text{st}$ 1 of $yx$, which is equivalent to $n_t<n_{\ell+t}$. Since $t$ is even, $xy_{aux}<_{alt}yx_{aux}$. Similarly, if $t$ is odd, then the cumulative sign at the point where $xy$ and $yx$ differ is negative. So $xy>_{E}yx$ if and only if $n_t>n_{\ell+t}$ if and only if $xy_{aux}<_{alt}yx_{aux}$. \end{proof} \begin{remark} Proposition \ref{prop:auxiliary} is equivalent to Lemma 9.3 of \cite{TiozzoTopologicalEntropy}, which is developed from the point of view of complex dynamics (e.g. using external angles of the Mandelbrot set).
\end{remark} \section{Relating kneading polynomials and Parry polynomials}\label{sec:kneading-parry} For a characteristic angle $\theta_c$ of a real hyperbolic parameter, Tiozzo associates an auxiliary string $w_c$ as follows: Write the binary expansion of $\theta_c$, and let $w_c$ be the sequence \mbox{$w_c=a_1 a_2 a_3 \dots$} whose entries count how many digits in a row of the binary expansion of $\theta_c$ are the same: \begin{equation} \label{eq:TiozzoAux} \theta_c = 0.\underbrace{0\dots0}_{a_1} \underbrace{1\dots1}_{a_2} \underbrace{0\dots0}_{a_3} \dots \end{equation} (Notice that the sequence $w_c = a_1 a_2 \dots$ is independent of whether one uses the binary expansion of $+\theta_c$ or $-\theta_c$.) \begin{definition} \label{d:tiozzo18aux} For a word $w$ in the alphabet $\{0,1\}$, let $w^{T}_{aux}$ be the sequence $a_1,\dots,a_n$ defined as above in equation (\ref{eq:TiozzoAux}) for the binary expansion $(0.w)$. \end{definition} \begin{proposition} \label{p:relatingpolynomials2} Let $w=(w_1 \dots w_p)$ be an admissible word in the alphabet $\{0,1\}$ such that $\sum_{i=1}^p w_i$ is even. Let $w_{aux}=(a_1,\ldots,a_n)$ and $w_{aux}^T=(b_1,\ldots,b_\ell)$. Then $n=\ell$ and $a_i=b_i-1$ for all $i=1,\ldots,n$. Furthermore, \[ (t-1)t^{p-1}P_{c,\textrm{knead}}(t^{-1}) = P_{\textrm{Parry}}(t), \] where $P_{\textrm{Parry}}$ is the Parry polynomial for the tent map associated to $w$ and $P_{c,\textrm{knead}}$ is the kneading polynomial associated to a real quadratic map $f_c$ whose auxiliary sequence is $w_{aux}^T$. \end{proposition} \begin{proof} Let $w$ be an admissible word of length $p$ with positive cumulative sign, and let $(b_1,\ldots,b_n)=w^T_{aux}$ be the Tiozzo auxiliary string for $w$.
By the Milnor-Thurston admissibility criterion (Theorem \ref{t:MTadmissibility}), there exists a parameter $c\in[-2,1/4]$ such that \begin{equation}\label{tiozzo18_polynomial} P_{c,\textrm{knead}} (t)=1+\left(\sum_{k=1}^n (-1)^k \sum_{j=b_1+\cdots+b_{k-1}+1}^{b_1+\cdots+b_k} t^j\right)-t^p \end{equation} and the smallest root of $P_{c,\textrm{knead}}$ is $1/\beta$ for some $\beta\in[1,2]$. Since $n$ is even, the last term of $P_{c,\textrm{knead}}(t)$ in the summation at $k=n$ over $j$ is $t^{b_1+\cdots+b_n}=t^p$, which cancels with $-t^p$. Thus, \begin{align*} P_{c,\textrm{knead}}(t) = \left\{ \begin{array}[]{lc} 1+\left( \sum_{k=1}^{n-1}(-1)^k \sum_{j=b_1+\cdots+b_{k-1}+1}^{b_1+\cdots+b_k}t^j \right) + \sum_{j=b_1+\cdots+b_{n-1}+1}^{b_1+\cdots+b_n-1}t^j & \text{ if }b_n>1, \\ 1+\sum_{k=1}^{n-1}(-1)^k \sum_{j=b_1+\cdots+b_{k-1}+1}^{b_1+\cdots+b_k} t^j & \text{ if } b_n=1. \end{array} \right. \end{align*} Then we compute when $b_n>1$ that \begin{align*} P(t)&:=(t-1)t^{p-1}P_{c,\text{knead}}(t^{-1}) \\ &=(t^p-t^{p-1}) \left( 1+\left( \sum_{k=1}^{n-1}(-1)^k \sum_{j=b_1+\cdots+b_{k-1}+1}^{b_1+\cdots+b_k}t^{-j} \right) + \sum_{j=b_1+\cdots+b_{n-1}+1}^{b_1+\cdots+b_n-1}t^{-j} \right) \\ & = t^p - t^{p-1} + \left( \sum_{k=1}^{n-1}(-1)^k \sum_{j=b_1+\cdots+b_{k-1}+1}^{b_1+\cdots+b_k} \left(t^{p-j}-t^{p-(j+1)}\right) \right) + \sum_{j=b_1+\cdots+b_{n-1}+1}^{b_1+\cdots+b_n-1} \left(t^{p-j}-t^{p-(j+1)}\right) \\ & = t^p - t^{p-1} + \left(\sum_{k=1}^{n-1}(-1)^k \big(t^{p-(b_1+\cdots+b_{k-1}+1)}-t^{p-(b_1+\cdots+b_k+1)}\big)\right) + t^{p-(b_1+\cdots+b_{n-1}+1)}-1 \\ & = t^p - 2t^{p-1} + 2t^{p-(b_1+1)}-2t^{p-(b_1+b_2+1)}+\cdots +2t^{p-(b_1+b_2+\cdots+b_{n-1}+1)}-1 \\ & = t^p - 2t^{p-1}-2\left(\sum_{k=1}^{n-1} (-1)^kt^{p-(b_1+\cdots+b_k+1)}\right) - 1.
\end{align*} When $b_n=1$, \begin{align*} P(t) & = (t^p-t^{p-1}) \left(1+\sum_{k=1}^{n-1}(-1)^k \sum_{j=b_1+\cdots+b_{k-1}+1}^{b_1+\cdots+b_k} t^{-j} \right) \\ & = t^p-t^{p-1} + \sum_{k=1}^{n-1} (-1)^k \sum_{j=b_1+\cdots+b_{k-1}+1}^{b_1+\cdots+b_k} \left(t^{p-j}-t^{p-(j+1)}\right) \\ & = t^p - 2 t^{p-1} + 2t^{p-(b_1+1)} - \cdots +t^{p-(b_1+\cdots+b_{n-1}+1)} \\ & = t^p - 2t^{p-1} - 2\left( \sum_{k=1}^{n-1}(-1)^k t^{p-(b_1+\cdots+b_k+1)} \right) - 1 \end{align*} because $t^{p-(b_1+\cdots+b_{n-1}+1)}=t^{p-(b_1+\cdots+b_n)}=1$ when $b_n=1$. Therefore, we recover the same polynomial regardless of whether $b_n=1$ or $b_n>1$. The final expression of $P(t)$ has the form of an admissible Parry polynomial. Note that by definition, since the smallest root of $P_{c,\textrm{knead}}$ is $1/\beta$, the leading root of $P$ is $\beta$. The first term of the itinerary associated to $P$ is a 1 because of the coefficient $-2$ in front of the $t^{p-1}$ term. The next 1 appears at the $(p-1)-(p-b_1-1)=b_1^{\text{th}}$ term, so there are $b_1-1$ 0's between the first 1 and the second 1, and so on. (Note that $b_1$ should always be at least 2, so there is at least one 0 before the second 1, and then $b_i\geq1$ for all $i=2,\ldots,n$.) Thus, the $i$th term of the auxiliary sequence we extract from this polynomial is $b_i-1$, where $b_i$ is the $i$th term of Tiozzo's auxiliary sequence $w_{aux}^T$. From this we recover the same itinerary $w$, and we see that $w_{aux}=(a_1,\ldots,a_n)$ and $w_{aux}^T=(b_1,\ldots,b_n)$ if and only if $a_i=b_i-1$ for $i=1,\ldots,n$. \end{proof} \section{Dominant strings}\label{sec:dominant} The main goal of this section is to prove the following result: \begin{proposition}\label{prop:densedominant} The leading roots of Parry polynomials of dominant strings are dense in $[\sqrt2,2]$.
\end{proposition} Proposition \ref{prop:densedominant} is a reformulation of a result (Proposition \ref{t:tiozzo18density} below) by Tiozzo \cite{TiozzoTopologicalEntropy}; translating it into non-complex-dynamics language is somewhat delicate. Proposition \ref{prop:densedominant} makes no guarantee that the leading roots of these polynomials correspond to growth rates (since the polynomials may have multiple factors). Proposition \ref{prop:extensions_for_irreducibility} will show how to add suffixes to these dominant strings so that the associated Parry polynomial is, after dividing by a factor of $(1-z)$, irreducible. \begin{definition} A finite string $S$ of positive integers is {\em dominant} if $XY\ll_{alt}Y$ for any nontrivial decomposition $S=XY$. \end{definition} \begin{remark} It is straightforward to verify that $S$ is dominant if and only if proper prefixes of $S$ are smaller than proper suffixes of $S$ of the same length. More precisely, for any proper prefix $X$ of $S$, if $Y$ is the suffix of $S$ with $|X|=|Y|$ then $X\ll_{alt} Y$. Note that the last letter of $S$ must be $0$; otherwise the required inequality would fail to be strict when comparing the length-one prefix with the length-one suffix. In the alternating ordering, $1<_{alt}0$ does indeed hold. \end{remark} \begin{definition} \label{def:dominant} Define a word $w=w_1\dots w_n$ in the alphabet $\{0,1\}$ such that $w_1=1$ to be {\em dominant} if and only if $w$ has positive cumulative sign and the auxiliary sequence $w_{aux}$ is dominant. \end{definition} \begin{definition} A word $w$ in the alphabet $\{0,1\}$ is \emph{irreducible} if there exists no shorter word $w_0$ in the alphabet $\{0,1\}$ and integer $n \geq 2$ such that $w=(w_0)^n$. \end{definition} \noindent The definition of dominant immediately implies that dominant words are irreducible. \begin{corollary} \label{c:dominantimpliesadmissible} Dominant strings are admissible.
\end{corollary} \begin{proof} It is clear that dominant strings are extremal strings, so the statement follows immediately from Proposition \ref{prop:auxiliary}. \end{proof} We prove an equivalent characterization of dominance of a word which is intrinsic to the word and the twisted lexicographical ordering: \begin{lemma} \label{l:dominantequivalentcharacterization} \label{lem:equivalentdefinitiondominance} Let $w$ be a word in the alphabet $\{0,1\}$ that starts with $10$ and has positive cumulative sign. Then $w$ is dominant if and only if for any proper suffix $b$ of $w$, the word $b1$ is (strictly) smaller than the prefix of $w$ of length $|b|+1$ in the twisted lexicographical ordering $<_E$. \end{lemma} \begin{proof} First assume $w$ is dominant. Let $b$ be any proper suffix of $w$. Any suffix $b$ with first term $0$ immediately satisfies $b1<_Ew$, since the words differ already in the first letter, so we consider the case when the first letter of $b$ is 1. Then $b$ has a well-defined auxiliary string; if we denote $w_{aux}=(a_1,\ldots,a_n)$ and assume the first term of $b$ is the $k$th 1 of $w$, then $b_{aux}=(a_k,\ldots,a_n)$. Let $m\in\{1,\ldots,n-k\}$ be the index of the first term where $b_{aux}$ and $w_{aux}$ differ, which exists by dominance of $w$. For such an $m$, if $m$ is even, then \begin{align} a_{k-1+m}>a_m&\iff (a_k,\ldots,a_{k-1+m})>_{alt}(a_1,\ldots,a_m)\\ & \iff (a_k,\ldots,a_{k-1+m},\ldots,a_{n})>_{alt}(a_1,\ldots,a_m,\ldots,a_{n-k}), \label{eqn:equivdominantlemma1} \end{align} and if $m$ is odd, then \begin{align} a_{k-1+m}<a_m&\iff (a_k,\ldots,a_{k-1+m})>_{alt}(a_1,\ldots,a_m)\\ & \iff (a_k,\ldots,a_{k-1+m},\ldots,a_{n})>_{alt}(a_1,\ldots,a_m,\ldots,a_{n-k}). \label{eqn:equivdominantlemma2} \end{align} Note that in Equations \eqref{eqn:equivdominantlemma1} and \eqref{eqn:equivdominantlemma2}, we compare a proper suffix of $w_{aux}$ to a proper prefix of $w_{aux}$ of the same length, where properness follows because $b$ was a proper suffix of $w$ by assumption.
Since $w$ is dominant, these inequalities are true by definition. In the case where $m$ is even, $a_{k-1+m}>a_m$ is equivalent to more 0's appearing after the $m^\text{th}$ 1 in $b$ than after the $m^\text{th}$ 1 in $w$. Equivalently, the $(m+1)^\text{st}$ 1 of $(b1)$ appears later in the sequence than the $(m+1)^\text{st}$ 1 in $w$ (note that adding a 1 to $b$ allows for the case $m=n-k\leq n-1$). Since $m$ is even, at this point where $b$ and $w$ first differ, i.e. $w$ has a 1 but $b$ has a 0, there are an even number of 1's. Hence the strings have positive cumulative sign, and $(b1)<_E w$ as desired. In the case where $m$ is odd, $a_{k-1+m}<a_m$ is equivalent to fewer 0's appearing after the $m^\text{th}$ 1 in $b$ than after the $m^\text{th}$ 1 in $w$. In other words, the $(m+1)^\text{st}$ 1 of $(b1)$ appears earlier than the $(m+1)^\text{st}$ 1 of $w$. Since $m$ is odd, the ordering is reversed at the first point where $(b1)$ and $w$ differ. Thus, $(b1)<_E w$ again. Conversely, consider any proper suffix $(a_k,\ldots,a_n)$ of $w_{aux}$. Then there exists a proper suffix $b$ of $w$ whose first letter is the $k^\text{th}$ 1 of $w$; in other words, $b$ admits an auxiliary string, and that string must be $b_{aux}=(a_k,\ldots,a_n)$ by design. By assumption, $(b1)<_E w$; define $m\in\{1,\ldots,n-k\}$ such that the initial difference between $(b1)$ and $w$ follows the $m^\text{th}$ 1 of $w$. Then indeed $a_{k-1+m}\neq a_m$. Again by the definition of the twisted lexicographical ordering, as in the previous arguments, if $m$ is even then $(b1)<_E w$ implies $a_{k-1+m}>a_m$; and if $m$ is odd then $(b1)<_Ew$ implies $a_{k-1+m}<a_m$. In both cases, in Equations \eqref{eqn:equivdominantlemma1} and \eqref{eqn:equivdominantlemma2} we see that the proper suffix $(a_k,\ldots,a_n)$ of $w_{aux}$ is larger than the proper prefix of $w_{aux}$ of the same length in the alternating ordering. By definition, $w_{aux}$ is dominant and hence $w$ is dominant.
\end{proof} Tiozzo defines a real parameter $c$ to be dominant if there exists a finite string $S$ of positive integers such that $w_c = \overline{S}$ and $S$ is dominant. To distinguish between dominant in the sense of Definition \ref{def:dominant} (which uses $w_{aux}$) and dominant in the sense of Tiozzo (which uses $w_{aux}^T$), we will call a word $w$ for which $w^T_{aux}$ is dominant \emph{T-dominant}. Lemma \ref{l:dominantAndTDominantEquivalent} below shows, as a consequence of Proposition \ref{p:relatingpolynomials2}, that these two notions of dominant are in fact equivalent: a word $w$ is dominant if and only if it is T-dominant. \begin{definition} Let $w=w_1\dots w_p$ be a word in the alphabet $\{0,1\}$ such that $w_1 = 1$ and $w$ has positive cumulative sign. The word $w$ is defined to be \emph{T-dominant} if $w_{aux}^T$ is dominant. \end{definition} \begin{lemma}\label{l:dominantAndTDominantEquivalent} A word $w$ is dominant if and only if it is $T$-dominant. \end{lemma} \begin{proof} As a consequence of Proposition \ref{p:relatingpolynomials2}, any $w$ that satisfies the assumptions of the proposition is $T$-dominant if and only if $w$ is dominant. Note that to be dominant, $w$ must satisfy the hypotheses of the proposition: $w$ must start with $10$ and must have positive cumulative sign. \end{proof} \begin{proposition} \cite[Proposition 9.6]{TiozzoTopologicalEntropy} \label{t:tiozzo18density} Let $\theta_c \in [0,1/2]$ be the characteristic angle of a real, non-renormalizable parameter $c$, with $c \not = -1$. Then $\theta_c$ is the limit point from below of characteristic angles of T-dominant parameters. \end{proposition} A non-renormalizable parameter $c \in \mathbb{C}$ is a parameter in the Mandelbrot set $\mathcal{M}$ that does not live inside a ``baby Mandelbrot set.'' A hyperbolic component $W$ of the Mandelbrot set is a connected component of the interior of $\mathcal{M}$ such that for all $c \in W$, the orbit of the critical point is attracted to a periodic cycle under $f_c$.
Associated to any hyperbolic component $W$ of $\mathcal{M}$ there is a tuning map $\iota_W:\mathcal{M} \to \mathcal{M}$ that sends the main cardioid of $\mathcal{M}$ to $W$ and all of $\mathcal{M}$ to a baby Mandelbrot set. Denote by $\tau_W$ the associated map on external angles, i.e. if $\theta$ is a characteristic angle for $c \in \partial \mathcal{M}$, then $\tau_W(\theta)$ is a characteristic angle of $\iota_W(c)$. Tiozzo proves the following: \begin{proposition} \cite[Proposition 11.2]{TiozzoTopologicalEntropy}\label{p:tiozzo18tuning} Let $W$ be a hyperbolic component of period $p$ and let $c \in \mathcal{M}$. Then $\textrm{H.dim } \tau_W(H_c) = \frac{1}{p} \textrm{H.dim } H_c$. \end{proposition} \noindent Here $\textrm{H.dim } H_c$ is equal to $h_{top}(f_c | T_c)/\log 2$ (by \cite[Theorem 7.1]{TiozzoTopologicalEntropy}), where $T_c$ is the Hubbard tree of $f_c$, in the case that $f_c$ is topologically finite (meaning that its Julia set is connected and locally connected and its Hubbard tree is homeomorphic to a finite tree). The set of topologically finite parameters contains all postcritically finite parameters \cite{TiozzoTopologicalEntropy}. Since $2$ is the minimum possible value for $p$, Proposition \ref{p:tiozzo18tuning} implies that if $c$ is renormalizable and PCF, $$\frac{h_{top}(f_c | T_c)}{\log 2} =\textrm{H.dim } H_c \leq \frac{1}{2} \sup_{c \in \mathcal{M}} \{{\textrm{H.dim } H_c} \}=\frac{1}{2},$$ and hence \begin{equation} \label{eq:rewritingtiozzo18} e^{h_{top}(f_c | T_c)} \leq \sqrt{2}. \end{equation} Combining Theorem \ref{t:entropycontinuous}, Proposition \ref{t:tiozzo18density} and equation (\ref{eq:rewritingtiozzo18}), we have now proven the following: \begin{proposition} \label{p:limitfrombelow} If $\sqrt{2} < \lambda \leq 2$ is the growth rate of a PCF tent map, then $\lambda$ is the limit from below of a sequence of growth rates of maps corresponding to T-dominant parameters.
\end{proposition} In \cite{TiozzoGaloisConjugates}, Tiozzo expresses the kneading polynomial for a parameter $c$ in terms of the associated auxiliary word $w_{aux}^{T}$. Namely, from \cite{TiozzoGaloisConjugates}, if $c$ is a T-dominant (real) parameter with auxiliary string $S=(a_1,\dots,a_n)$, then the associated kneading polynomial $P_{c,\textrm{knead}}$ can be written as \begin{equation} \label{eq:kneading} P_{c,\textrm{knead}}(t)=1+\left(\sum_{k=1}^n (-1)^k \sum_{j=a_1+\cdots+a_{k-1}+1}^{a_1+\cdots+a_k} t^j\right)-t^p. \end{equation} Here $p=a_1+\cdots+a_n$. Recall that if $s$ is the growth rate of a superattracting map $f_c$, then $1/s$ is a root of $P_{c,\textrm{knead}}$. \begin{proof}[Proof of Proposition \ref{prop:densedominant}] By Lemma \ref{l:dominantAndTDominantEquivalent}, for a word $w$ in the alphabet $\{0,1\}$, the auxiliary sequence $w_{aux}$ is dominant if and only if $w_{aux}^{T}$ is dominant. By Proposition \ref{p:limitfrombelow}, any $\lambda \in (\sqrt{2},2]$ is the limit from below of a sequence of growth rates of tent maps for which the associated word $w_{aux}^{T}$ is dominant. \end{proof} \section{Compatibility of orderings}\label{sec:compatibility} We will make use of the compatibility of corresponding orderings on three related sets: the set of admissible words (with the twisted lexicographic ordering), kneading determinants, and growth rates. Recall that the ordering on the additive group $\mathbb{Z}[[t]]$ of formal power series with integer coefficients is defined by setting $\alpha = a_0 + a_1t + \dots > 0$ whenever $a_0 = \dots = a_{n-1} = 0$ but $a_n > 0$ for some $n \geq 0$. \begin{lemma} \label{l:kneadingdetsgrowthrate} For tent maps, the kneading determinant is a monotone decreasing function of the growth rate.
\end{lemma} \begin{proof} For the real one-parameter family of maps $f_a(x) = (x^2-a)/2$, \cite[Theorem 13.1]{MilnorThurston} asserts that the kneading determinant $D(f_a) \in \mathbb{Z}[[t]]$ is monotone decreasing as a function of the parameter $a$; and Corollary 13.2 asserts the growth rate is monotone increasing as a function of $a$. The family of maps $\{f_a\}$ takes on all possible growth rates; this can be seen from the fact that $f_a$ is conjugate to the map $q_{(-a/4)}(z) = z^2+(-a/4)$ via the conjugation map $h(z)=z/2$, the fact that the growth rate is a continuous function of the parameter (Theorem \ref{t:entropycontinuous}), and the Intermediate Value Theorem. \end{proof} \begin{comment} \begin{theorem}\cite[Intermediate Value Theorem 12.2]{MilnorThurston} \label{t:MTIVT} Consider a one-parameter family of $C^1$-smooth maps $g_b:I \to I$, depending $C^1$-smoothly on the parameter $b$ for $b_0 \leq b \leq b_1$, all with the single turning point $c_1$. Then any admissible power series $1 \pm t \pm t^2 \pm \dots$ which lies between the kneading determinants of $g_{b_0}$ and $g_{b_1}$ must actually occur as the kneading determinant of $g_b$ for some parameter value between $b_0$ and $b_1$. \end{theorem} We wish to apply Theorem \ref{t:MTIVT} to the collection of tent maps parameterized by growth rate, but these maps do not satisfy the assumptions of Theorem \ref{t:MTIVT}. The proof of Proposition \ref{p:monotonicitykneadingdeterminants} performs a somewhat cumbersome modification of these maps in order to utilize this theorem. \comkathryn{Are kneading determinants and tent maps in bijection? Or can multiple tent maps correspond to the same kneading determinant? This affects whether the proposition below is if and only if.} \comkathryn{Edit this statement, so it is clear whether the orderings agree or are opposite.} \begin{proposition} \label{p:monotonicitykneadingdeterminants} Kneading determinants of superattracting tent maps depend monotonically on their growth rates.
\end{proposition} \begin{proof} Consider the class of maps $F_{\lambda}:\mathbb{R} \to \mathbb{R}$ for $\lambda \in (1,2]$ defined by $$F_{\lambda}(x) = 1-\lambda|x|.$$ For each $\lambda$, $F_{\lambda}$ fixes the point $(1/(1-\lambda)$ and the interval $I_{\lambda}:=[1/(1-\lambda),1]$ satisfies $F_{\lambda}(I_{\lambda}) \subset I_{\lambda}$. The restriction $F_{\lambda}$ to $I_{\lambda}$ is conjugate via an affine map to the tent map of growth rate $\lambda$ on $[0,1]$. First, for each $N \in \mathbb{N}$, we define a family of maps $G_{\lambda}$ as follows. Fix $N \in \mathbb{N}$. Let $\mathcal{C}$ be the family of maps $F_{\lambda}$ such that the point $1$ is strictly periodic under $F_{\lambda}$ and has period at most $N$. There are finitely many maps in $\mathcal{C}$. Set $I = I_{\lambda_0}$ where $\lambda_0$ is the minimum growth rate of the maps in $\mathcal{C}$. Then $I_{\lambda} \subset I$ for all $\lambda$ such that $F_{\lambda} \in \mathcal{C}$. For each $F_{\lambda} \in \mathcal{C}$ set $m(\lambda)$ to be the closest the $F_{\lambda}$ orbit of $1$ ever comes to either $0$ or the fixed point $1/(1-\lambda)$, i.e. $$m(\lambda) = \min \left( \{|F_{\lambda}^n(1)| : n \in \mathbb{N}, F_{\lambda}^n(1)\neq 0\} \cup \left \{\left|F_{\lambda}^n(1) - \frac{1}{1-\lambda} \right | : n \in \mathbb{N}, F_{\lambda}^n(1)\neq \frac{1}{1-\lambda} \right \} \right) .$$ Set $\epsilon = \frac{1}{2} \min \{m(\lambda): F_{\lambda} \in \mathcal{C}\}$. For each $\lambda \in [\lambda_0,2]$, we now define a $C^1$ map $G_{\lambda}:I \to I $ for $\lambda \in [\lambda_0,2]$ by modifying $F_{\lambda}$ on the intervals $(-\epsilon,\epsilon)$ and $[\frac{1}{1-\lambda_0},\frac{1}{1-\lambda}+\epsilon]$. 
The modification on $(-\epsilon,\epsilon)$ will ``smooth out" the peak of the tent; the modification on $[\frac{1}{1-\lambda_0},\frac{1}{1-\lambda}+\epsilon]$ will make $G_{\lambda}$ be the constant function $\frac{1}{1-\lambda}$ on the interval $[\frac{1}{1-\lambda_0},\frac{1}{1-\lambda}]$, so that $G_{\lambda}(I) \subset I$. Pick any smooth function $s_0:[-\epsilon,\epsilon]\to \mathbb{R}$ so that $s$ and all derivatives of $s$ agree with those of $F_{\lambda}$ at the points $\epsilon$ and $-\epsilon$, $s(0)=1$, and is strictly increasing on $[-\epsilon,0)$ and strictly decreasing on $(0,\epsilon)$. For each $\lambda \in [\lambda_0,2]$, form $G_{\lambda}$ by replacing the section of the graph of $F_{\lambda}$ over $[-\epsilon,\epsilon]$ by a vertically scaled copy of the graph of $s$. Define $G_{\lambda}$ to be the constant function $\frac{1}{1-\lambda}$ on the interval $[\frac{1}{1-\lambda_0},\frac{1}{1-\lambda}]$. On the interval $[\frac{1}{1-\lambda},\frac{1}{1-\lambda}+\epsilon]$ replace the graph of $F_{\lambda}$ by a vertically stretched copy of some fixed monotone increasing $C^1$ function whose values and derivatives at $\frac{1}{1-\lambda}$ and $\frac{1}{1-\lambda}+\epsilon$ match those of $F_{\lambda}$. The resulting family of functions $G_{\lambda}$ satisfies the assumptions of Theorem \ref{t:MTIVT}. Because the orbit of $1$ under each $F_{\lambda}$ in $\mathcal{C}$ does not enter the region on which we modified $F_{\lambda}$ to form $G_{\lambda}$, we have that the kneading determinant $D(G_{\lambda}) $ of $G_{\lambda}$ equals the kneading determinant $D(F_{\lambda})$ of $F_{\lambda}$ whenever $F_{\lambda} \in \mathcal{C}$. Since topological entropy of an interval self-map is a function of its total variation (via equation (\ref{eq:arclengthentropy})) and $\textrm{Var}(F_{\lambda}|_{I_{\lambda}})=\textrm{Var}(G_{\lambda})$, it follows that $h_{top}(F_{\lambda}|_{I_{\lambda}}) = h_{top}(G_{\lambda})$. 
Since the topological entropy of any map equals the topological entropy of the restriction of that map to its nonwandering set, it follows that for any $\lambda \in [\lambda_0,2]$, $h_{top}(F_{\lambda}) = h_{top}(F_{\lambda}|_I)$. Hence $h_{top}(F_{\lambda}) = h_{top}(G_{\lambda})$. To prove the proposition, let $F_{\lambda_a}$, $F_{\lambda_b}$ and $F_{\lambda_c}$ be superattracting tent maps with kneading determinants $D(F_{\lambda_a})$, $D(F_{\lambda_b})$ and $D(F_{\lambda_c})$, respectively. Suppose $D(F_{\lambda_c})$ is between $D(F_{\lambda_a})$ and $D(F_{\lambda_b})$. Pick $N$ greater than the period of the orbit of $1$ under each of $F_{\lambda_a}$, $F_{\lambda_b}$ and $F_{\lambda_c}$, and construct the family of maps $G_{\lambda}$ as above. Then Theorem \ref{t:MTIVT} implies $\lambda_c$ is between $\lambda_a$ and $\lambda_b$. \end{proof} Although the remainder of this paper uses monotonicity of kneading determinants only for superattracting tent maps, the analogous statement for \emph{all} tent maps seems significant, so we prove it below. \begin{theorem} Kneading determinants of tent maps depend monotonically on growth rate. \end{theorem} \color{blue} \begin{proof} [Idea: extend to nonperiodic case by approximating orbits by periodic ones and applying previous proposition. ] \end{proof} \color{black} \end{comment} \begin{comment} \comkathryn{I expanded below on the proof Chenxi wrote (I just added more words/notation to his proof). His proof is really showing that the orderings on kneading determinants and words are consistent, so I changed the statement to stay this. We can make the connection to growth rate a corollary..} \begin{lemma} \label{l:kneadingdetswordsrelationship} Let $f$ be a tent map with kneading determinant $\alpha$ and denote the itinerary of $1$ under $f$ by $w_{\alpha}$; let $g$ be a tent map with kneading determinant $\beta$ and denote the itinerary of $1$ under $g$ by $w_{\beta}$. If $\alpha > \beta$, then $w_{\alpha} >_E w_{\beta}$. 
\end{lemma} \begin{proof} \cite[Lemma 4.5]{MilnorThurston} implies that if $f$ is a tent map and $\alpha= 1+\sum a_i t^i$ is the kneading determinant associated to $f$, then $$a_n = - \textrm{sign}\left(\frac{d}{dx}f^{n-1}(x)\Big|_{x=1}\right).$$ By the definition of the cumulative sign (equation \ref{eq:cumulativesigndef}), $$ \textrm{sign}\left(\frac{d}{dx}f^{n-1}(x)\Big|_{x=1}\right) = s(1,n),$$ so $a_n = -s(1,n)$. Now suppose $\alpha$ is the kneading determinant $\alpha= 1+\sum a_i t^i$, $\beta$ is the kneading determinant $\beta=1+\sum b_i t^i$, and $\alpha > \beta$. Let $n$ be the smallest natural number such that $a_n \neq b_n$. \comkathryn{Deal with $n=1$ case, then say so assume $n \geq 2$.} Denoting the cumulative signs for the tent map with kneading determinant $\alpha$ by $s_{\alpha}(1,\cdot)$ and with kneading determinant $\beta$ by $s_{\beta}(1,\cdot)$, the statement $\alpha > \beta$ means $s_{\alpha}(1,j) = s_{\beta}(1,j)$ for all $1 \leq j \leq n-1$ and $s_{\alpha}(1,n) < s_{\beta}(1,n)$. Hence $\textrm{It}_{\alpha}(1,j) = \textrm{It}_{\beta}(1,j)$ for $1 \leq j \leq n-2$ and $\textrm{It}_{\alpha}(1,n-1) \neq \textrm{It}_{\beta}(1,n-1)$. There are two possibilities: $$s_{\alpha}(1,n-1)=s_{\beta}(1,n-1) = +1, \quad \textrm{It}_{\alpha}(1,n-1) = 1, \quad \textrm{It}_{\beta}(1,n-1) = 0, \textrm{ or}$$ $$s_{\alpha}(1,n-1)=s_{\beta}(1,n-1) = -1,\quad \textrm{It}_{\alpha}(1,n-1) = 0, \quad \textrm{It}_{\beta}(1,n-1) = 1.$$ In both cases, $$\textrm{It}_{\alpha}(1,1) \dots \textrm{It}_{\alpha}(1,n-1) >_E \textrm{It}_{\beta}(1,1) \dots \textrm{It}_{\beta}(1,n-1).$$ \end{proof} \color{black} \begin{corollary} \label{prop:word_monotonicity} Let $f$ be a tent map with growth rate $\lambda_f$ and denote the itinerary of $1$ under $f$ by $w_f$; let $g$ be a tent map with growth rate $\lambda_g$ and denote the itinerary of $1$ under $g$ by $w_g$. If $\lambda_f > \lambda_g$, then $w_f <_E w_g$. \end{corollary} \begin{proof} Suppose $\lambda_f > \lambda_g$. 
By Lemma \ref{l:kneadingdetsgrowthrate}, $D(f) < D(g)$, where $D(f)$ and $D(g)$ denote the kneading determinants of $f$ and $g$, respectively. Then by Lemma \ref{l:kneadingdetswordsrelationship}, $w_f <_E w_g$. \end{proof} \comkathryn{I edited/fixed the proofs in this section, and the above lemma is what I got. It says that growth rates and words have OPPOSITE orderings does this make sense? If so, we need to make fixes wherever we use this lemma. On second thought, NO, this is WRONG. Because the word for slope 2 is clearly maximal. I am going to write an alternative version of the last 2 lemmas below, where I use the positive cumulative signs as the kneading coeffs, because I think this is what it must be. Then we can pick which version is correct. } \end{comment} \begin{lemma} \label{l:kneadingdetswordsrelationship} Let $f$ be a tent map with kneading determinant $\alpha$ and denote the itinerary of $1$ under $f$ by $w_{\alpha}$; let $g$ be a tent map with kneading determinant $\beta$ and denote the itinerary of $1$ under $g$ by $w_{\beta}$. If $\alpha > \beta$, then $w_{\alpha} >_E w_{\beta}$. \end{lemma} \begin{proof} \cite[Lemma 4.5]{MilnorThurston} implies that if $f$ is a tent map and $\alpha= 1+\sum a_i t^i$ is the kneading determinant associated to $f$, then $$a_n = \textrm{sign}\left(\frac{d}{dx}f^{n-1}(x)\Big|_{x=1}\right).$$ By the definition of the cumulative sign (equation \ref{eq:cumulativesigndef}), $$ \textrm{sign}\left(\frac{d}{dx}f^{n-1}(x)\Big|_{x=1}\right) = s(1,n),$$ so $a_n = s(1,n)$. Now suppose $\alpha$ is the kneading determinant $\alpha= 1+\sum_{i=1}^{\infty} a_i t^i$, $\beta$ is the kneading determinant $\beta=1+\sum_{i=1}^{\infty} b_i t^i$, and $\alpha > \beta$. Let $n$ be the smallest natural number such that $a_n \neq b_n$. We must have $a_1=b_1$, so we may assume $n \geq 2$. 
Denoting the cumulative signs for the tent map with kneading determinant $\alpha$ by $s_{\alpha}(1,\cdot)$ and with kneading determinant $\beta$ by $s_{\beta}(1,\cdot)$, the statement $\alpha > \beta$ means $s_{\alpha}(1,j) = s_{\beta}(1,j)$ for all $1 \leq j \leq n-1$ and $s_{\alpha}(1,n) > s_{\beta}(1,n)$. Hence $\textrm{It}_{\alpha}(1,j) = \textrm{It}_{\beta}(1,j)$ for $1 \leq j \leq n-2$ and $\textrm{It}_{\alpha}(1,n-1) \neq \textrm{It}_{\beta}(1,n-1)$. There are two possibilities: $$s_{\alpha}(1,n-1)=s_{\beta}(1,n-1) = +1, \quad \textrm{It}_{\alpha}(1,n-1) = 0, \quad \textrm{It}_{\beta}(1,n-1) = 1, \textrm{ or}$$ $$s_{\alpha}(1,n-1)=s_{\beta}(1,n-1) = -1,\quad \textrm{It}_{\alpha}(1,n-1) = 1, \quad \textrm{It}_{\beta}(1,n-1) = 0.$$ In both cases, $$\textrm{It}_{\alpha}(1,1) \dots \textrm{It}_{\alpha}(1,n-1) <_E \textrm{It}_{\beta}(1,1) \dots \textrm{It}_{\beta}(1,n-1).$$ \end{proof} \begin{corollary} \label{prop:word_monotonicity} Let $f$ be a tent map with growth rate $\lambda_f$ and denote the itinerary of $1$ under $f$ by $w_f$; let $g$ be a tent map with growth rate $\lambda_g$ and denote the itinerary of $1$ under $g$ by $w_g$. If $\lambda_f > \lambda_g$, then $w_f >_E w_g$. \end{corollary} \begin{proof} Suppose $\lambda_f > \lambda_g$. By Lemma \ref{l:kneadingdetsgrowthrate}, $D(f) < D(g)$, where $D(f)$ and $D(g)$ denote the kneading determinants of $f$ and $g$, respectively. Then by Lemma \ref{l:kneadingdetswordsrelationship}, $w_f >_E w_g$. \end{proof} \section{Persistence on $[\sqrt{2},2]$}\label{sec:persistence} In this section, we prove a restriction of the persistence theorem for Galois conjugates inside the unit disk associated to growth rates in the interval $[\sqrt{2},2]$. This proof relies on the fact that growth rates of dominant strings are dense in $[\sqrt2,2]$ (Proposition \ref{prop:densedominant}).
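The monotonicity statement in Corollary \ref{prop:word_monotonicity} can also be checked numerically. The short Python sketch below is an illustration only and is not used in any proof. It computes initial segments of the itinerary of the critical value $1$ under the tent maps $F_{\lambda}(x)=1-\lambda|x|$ (symbol $1$ for a positive orbit point, $0$ for a negative one, so that the itinerary of $\sqrt2$ is $10\cdot 1^\infty$), and compares them with the usual parity rule for the twisted lexicographic ordering: the base order $0<1$ at the first difference, reversed when the common prefix contains an odd number of $1$'s (negative cumulative sign). The sample slopes $1.5$ and $1.9$ are arbitrary choices in $(\sqrt2,2]$, and the finite truncation length is our own convenience here.

```python
def itinerary(lam, length):
    """First `length` symbols of the itinerary of 1 under F(x) = 1 - lam*|x|:
    '1' when the orbit point is positive, '0' when it is negative."""
    x, word = 1.0, []
    for _ in range(length):
        word.append('1' if x > 0 else '0')
        x = 1.0 - lam * abs(x)
    return ''.join(word)

def twisted_greater(w, v):
    """True if w >_E v for words of equal length: compare at the first
    difference, reversing the base order 0 < 1 when the common prefix
    contains an odd number of 1's (negative cumulative sign)."""
    for k, (a, b) in enumerate(zip(w, v)):
        if a != b:
            positive_sign = w[:k].count('1') % 2 == 0
            return (a > b) if positive_sign else (a < b)
    return False  # the words are equal

# A larger slope yields a larger itinerary in the twisted ordering:
w_small, w_big = itinerary(1.5, 12), itinerary(1.9, 12)
```

Comparing only finitely many symbols suffices for these samples because the two itineraries already differ within the first few entries; for instance `itinerary(2**0.5, 6)` reproduces the prefix of $10\cdot 1^\infty$.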
To prove the full persistence theorem, we will need to apply the period doubling procedure, which is treated in the next section. To motivate this approach to the persistence theorem, we prove in the following proposition that the interval $[\sqrt2,2]$ in which dominant strings are dense is indeed optimal. The result is well understood, and the proof is included only for completeness. Our proof is a combinatorial argument, but the result can also be obtained from the perspective of complex dynamics. \begin{proposition} \label{p:dominantgrowthratesarebig} The set of growth rates of dominant words is contained in $[\sqrt{2},2]$. \end{proposition} \begin{remark} Recall from \S\ref{Basic definitions} that a word in the alphabet $\{0,1\}$ has {\em positive cumulative sign} if it contains an even number of 1's, and otherwise has {\em negative cumulative sign}. It is straightforward to check from the definition of the twisted lexicographical ordering that if a word $a$ has positive cumulative sign, then for any words $v,w$, we have $w<_E v$ if and only if $aw<_E av$. Similarly, if $a$ has negative cumulative sign, then $w<_E v$ if and only if $aw>_Eav$. \label{rem:flippingsigns} \end{remark} \begin{proof}[Proof of Proposition \ref{p:dominantgrowthratesarebig}] By contrapositive, assume $w$ is an admissible word and that the growth rate of $w$ is at most $\sqrt2$. By monotonicity (Corollary \ref{prop:word_monotonicity}) and the fact that the itinerary of $\sqrt2$ is strictly preperiodic, we conclude \[w^\infty<_E \textrm{It}_{\sqrt2}(1)=10\cdot 1^\infty,\] which implies \begin{equation} w \leq_E 10\cdot 1^{|w|-2}. \label{eqn:biggerthansqrt2} \end{equation} In the case of equality, there are two possibilities. If $w$ has an even number of 1's, then $(w1)$ has an odd number of 1's.
Then equation \eqref{eqn:biggerthansqrt2} and Remark \ref{rem:flippingsigns} imply \[w \cdot 10>_E 10\cdot 1^{|w|-2} \cdot 11,\] which implies $w^\infty >_E 10\cdot 1^\infty$ because admissible words start with $10$ (Lemma \ref{lem:10start}). This violates our assumption. On the other hand, if $w$ has an odd number of 1's then $w$ cannot be dominant by the definition (Definition \ref{def:dominant}), as desired. Now consider the case where $w<_E 10\cdot 1^{|w|-2}$. Then $w$ has at least two 0's. Moreover, there is at least one other term of $10$ in $w$ besides the first two letters in $w$, since $101<_E100$ implies $w$ must start with $101$. Let $b$ be a proper suffix of $w$ which begins with a term of $10$, and assume that $b$ is the shortest possible such choice. Then $b \cdot 1=10\cdot 1^{|b|-1}$, which by the assumption (equation \eqref{eqn:biggerthansqrt2}) is greater than or equal to the prefix of $w$ of length $|b|+1$ in the twisted lexicographical ordering. By Lemma \ref{lem:equivalentdefinitiondominance}, $w$ is not dominant. \end{proof} \subsection{Constructing dominant extensions} The development of persistence on $[\sqrt2,2]$ hinges on a series of technical combinatorial lemmas. \begin{proposition} Assume $w_1$ is dominant, $w_2$ is admissible and irreducible, $n$ is a positive integer such that \[ 2n|w_2| > |w_1| > n|w_2|, \] $w_1^\infty>_E w_2^\infty$, and $w_2^n$ has positive cumulative sign. Then $(w_1w_2^n)^{\infty}$ is admissible. \label{prop:thekeylemma} \end{proposition} \begin{proof} It suffices to show that \[\sigma^k(w_1w_2^n)^\infty\leq_E (w_1w_2^n)^\infty\] for all $k<|w_1|+n|w_2|$. If $1 \leq k<|w_1|$, denote by $b$ the proper suffix of $w_1$ of length $|w_1|-k$. Then $(b1)$ is a prefix of $\sigma^k (w_1w_2^n)$ because the first letter of $w_2$ is 1 by admissibility and Lemma \ref{lem:firstletters}. By dominance of $w_1$ and Lemma \ref{l:dominantequivalentcharacterization}, $(b1)$ is smaller than the prefix of $w_1$ of length $|b|+1$ in the twisted lexicographical ordering, which proves \[\sigma^k(w_1w_2^n)=bw_2^n<_E w_1\] and provides the desired inequality. If $k=|w_1|$, suppose for contradiction that $w_2^n\geq_E w_1$. This would imply \[ w_2^\infty <_Ew_1^\infty \leq_E (w_2^n)^{\infty}=w_2^\infty, \] which is impossible given the assumption that $w_2^\infty$ is smaller than $w_1^\infty$ in the twisted lexicographical ordering. Thus, \[\sigma^{|w_1|}(w_1w_2^n)^\infty =w_2^n (w_1w_2^n)^\infty <_E (w_1w_2^n)^\infty.\] Lastly, we consider the shift by $k$ where $|w_1|<k<|w_1|+n|w_2|$. Let $r=k-|w_1|$, so that $1 \leq r<n|w_2|$. Observe that $\sigma^rw_2^n>_Ew_1$ is impossible: together with admissibility of $w_2$, it would imply \[w_1^\infty<_E\sigma^r(w_2)^\infty\leq_E w_2^\infty,\] a contradiction. We conclude that $\sigma^rw_2^n\leq_E w_1$. If this inequality is strict, we are done: we would have \[\sigma^k(w_1w_2^n) =\sigma^{|w_1|+r}(w_1w_2^n)=\sigma^rw_2^n<_Ew_1\] as desired. We must now consider when this inequality is not strict; in other words, $\sigma^rw_2^n$ is a prefix of $w_1$. We will need to prove that such a string must always have negative cumulative sign.
If it does, then $n|w_2|-r<|w_1|$ implies \[ \sigma^{n|w_2|-r}(w_1w_2^n)^\infty \leq_E (w_1w_2^n)^\infty \] by dominance of $w_1$ discussed above. Then by Remark \ref{rem:flippingsigns} and the fact that $\sigma^rw_2^n$ has negative cumulative sign, \begin{multline*} (w_1w_2^n)^\infty =\sigma^rw_2^n\,\sigma^{n|w_2|-r}w_1\, w_2^n (w_1w_2^n)^\infty \geq_E \sigma^rw_2^n(w_1w_2^n)^\infty \\ = \sigma^{k-|w_1|}w_2^n(w_1w_2^n)^\infty = \sigma^k(w_1w_2^n)^\infty. \end{multline*} It remains to prove that if $\sigma^rw_2^n$ is a prefix of $w_1$, then it cannot have positive cumulative sign. Consider the suffix $b=\sigma^rw_2^n$ of $w_2^n$, and suppose for contradiction that $b$ has positive cumulative sign. Since $w_2^n$ is admissible, $b\leq_E a$ where $a$ is the prefix of $w_2^n$ of the same length (see Remark \ref{r:admissible_prefix_suffix}). Since $w_2^\infty <_E w_1^\infty$, moreover $a$ is smaller than or equal to the prefix of $w_1$ of the same length, which is assumed to be equal to $b$. Then $b\leq_E a\leq_E b$ implies equality, and we conclude $w_2^n=ac=db=da$. Now \[ w_2^\infty=(ac)^\infty=(da)^\infty\geq_E a \hspace{.15em}{\cdot}\hspace{.15em} (da)^\infty \] implying \[ (ca)^\infty\geq_E(da)^\infty=w_2^\infty\geq_E (ca)^\infty \] because we assumed $a$ has positive cumulative sign (see Remark \ref{rem:flippingsigns}) and $w_2$ is admissible, hence $(ca)^\infty=(ac)^\infty$. Then, \[ w_2^\infty= (ac)^\infty =a \hspace{.15em}{\cdot}\hspace{.15em}(ca)^\infty=a \hspace{.15em}{\cdot}\hspace{.15em}(ac)^\infty=a^2(ca)^\infty = \cdots=a^\infty \] implies $a=w_2^m$ for some $m$ because $w_2$ is irreducible. Then $w_1=af = w_2^mf$ for some suffix $f$, and again by dominance of $w_1$ and Lemma \ref{lem:equivalentdefinitiondominance}, \[ w_1^\infty=(w_2^mf)^\infty=w_2^m(fw_2^m)^\infty\leq_E (w_2^mw_1)^\infty=w_2^{2m}(fw_2^m)^\infty\leq_E\dots\leq_E w_2^\infty \] which contradicts the assumption that $w_1^\infty>_Ew_2^\infty$. \end{proof} \begin{definition} We say that a string $v$ is an {\em extension} of a word $w$ if $w$ is a proper prefix of $v$.
If $v$ is finite then such a $v$ is a {\em finite extension} of $w$. \end{definition} If the kneading polynomial of $(w_1w_2^n)$ were irreducible, then we would be able to proceed immediately to the proof of persistence on $[\sqrt2,2]$. However, there is no such guarantee. We next prove that we can extend $w_1$ to a dominant word $w_1'$ which guarantees, via Lemma \ref{l:tiozzo4point1}, that the kneading polynomial of the concatenation is irreducible. We will exploit the monotonicity of itineraries as a function of the growth rate (Corollary \ref{prop:word_monotonicity}) and the fact that we are currently only studying strings with growth rate larger than $\sqrt2$. This allows us to append truncations of the itinerary of $\sqrt2$ to $w_1$ without compromising dominance. Before the next Proposition, we recall (or invite the reader to verify) that the itinerary of $\sqrt2$ is $10\cdot 1^\infty$. \begin{proposition}\label{prop:extensions_for_irreducibility} Let $w_1$ and $w_2$ be words in the alphabet $\{0,1\}$ such that $w_1$ is dominant, $w_2$ is admissible and irreducible, $w_1^\infty>_E w_2^\infty$, and there exists an $m$ such that \[ 2m|w_2|>|w_1|>m|w_2|. \] Then there exists a finite extension $w_1'$ of $w_1$ and an integer $m'\geq m$ such that $(w_1'w_2^{m'})^\infty$ is admissible, $|w_1'|>m'|w_2|$, and $P(z)/(z-1)$ is an irreducible polynomial, where $P$ is the Parry polynomial of $(w_1' w_2^{m'})$. \end{proposition} The following Lemma will give us a recipe for extending $w_1$. \begin{lemma} Let $w$ be a dominant string.
Then the words \[ w \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \hspace{.15em}{\cdot}\hspace{.15em} \mathbf{01} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \text{ and }w \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \hspace{.15em}{\cdot}\hspace{.15em} \mathbf{10} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \] for any odd natural number $\kappa>|w|$, and \[ w \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \hspace{.15em}{\cdot}\hspace{.15em} \mathbf{01} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \text{ and }w \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \hspace{.15em}{\cdot}\hspace{.15em} \mathbf{10} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w|} \] for any even natural number $\kappa>|w|$, are all dominant extensions of $w$. Moreover, for each $\kappa$, the sums of the coefficients of the kneading polynomials for the two extensions differ by 2. \label{lem:extension_options} \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:extension_options}] The parity condition on $\kappa$ is to guarantee that the new word has an even number of 1's, which is part of the definition of dominance. We apply the alternate definition of dominance from Lemma \ref{lem:equivalentdefinitiondominance}. Let $w'$ be one of the possible extensions in the statement of the Lemma. Let $b$ be any suffix of $w'$. If a prefix of $b$ is a suffix of $w$, then $(b1)$ is smaller than the prefix of $w'$ of the same length in the twisted lexicographical ordering by dominance of $w$ and the construction of $w'$. 
If not, then if $b$ starts with $0$ or $11$, the desired inequality is immediate, so the interesting case is if $b$ starts with $10$ and no prefix of $b$ is a suffix of $w$. By construction, including our choice of $\kappa>|w|$ in the $\kappa$ odd case, we are comparing a prefix of $\It_{\sqrt2}(1)$ with length at least $|w|+1$ to $w$, which must be smaller by monotonicity (Corollary \ref{prop:word_monotonicity}). For any natural number $\kappa$, odd or even, there are now two choices to extend $w$ to a dominant word. The two choices only differ by an exchange of $01$ with $10$ in one position. This exchange changes the sum of the coefficients of the kneading polynomial by 2. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:extensions_for_irreducibility}] We need to choose for $w_1'$ one of the extensions of $w_1$ from Lemma \ref{lem:extension_options}, and select $n$, $\kappa$, and $m'$ so that $w_1'$ has length $2^n-1-m'|w_2|$ and \[ 2m'|w_2|>|w_1'|>m'|w_2|. \] To do so, first define constants $C_1=1+|w_1|+m|w_2|$ and $C_2=|w_2|$. Then choose $n$ for which \[2^n>\max\{2C_2(10m+3)+C_1,18C_2+C_1\}\] and define \begin{equation} \label{eqn:def_k_n} k_n=\left\lceil \frac{2^n-C_1}{2C_2}\right\rceil-2,\qquad k_n'=\left\lceil \frac{2^n-C_1}{2C_2}\right\rceil-3. \end{equation} The two options $k_n$ and $k_n'$ are needed for parity reasons. Choosing $2^n>2C_2(10m+3)+C_1$ ensures that \begin{equation} k_n>k_n'>10m, \label{eqn:bound_k_n_below} \end{equation} which becomes useful later in the proof when we define the length of the extension. The choice of $2^n>18C_2+C_1$ and the definition of $k_n,k_n'$ ensure (respectively) that \begin{equation} 3k_n>3k_n'>\frac{2^n-C_1}{C_2}>2k_n>2k_n'. \label{eqn:controlling_extension_length} \end{equation} Let $m'=k_n+m$ if this is even; otherwise, replace $k_n$ with $k_n'$.
We will proceed with the notational choice $m'=k_n+m$ and assume $m'$ is even, but note that the needed inequalities hold for both $k_n$ and $k_n'$. Now, replacing $C_1,C_2$ with their definitions, applying Equation \eqref{eqn:controlling_extension_length}, and invoking the assumed relationship between $|w_1|$ and $|w_2|$, we see that \begin{equation*} 3m'|w_2|>3k_n|w_2|+m|w_2|+|w_1|>2^n-1>2k_n|w_2|+m|w_2|+|w_1|>2m'|w_2| \end{equation*} which implies \begin{equation} 2m'|w_2|>2^n-1-m'|w_2|>m'|w_2|. \label{eqn:needed_to_apply_prop} \end{equation} We now adjust the extension $w_1'$ of $w_1$ to have length $|w_1'|=2^n-1-m'|w_2|$, so that $(w_1'w_2^{m'})$ has total length $2^n-1$. If $|w_1|$ is odd, then $\kappa=(2^n-1-m'|w_2|)-4-3|w_1|$ is even, as needed for \[ w_1 \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \hspace{.15em}{\cdot}\hspace{.15em} {\mathbf{01}} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \text{ and }w_1 \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \hspace{.15em}{\cdot}\hspace{.15em} {\mathbf{10}} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \] to both be dominant extensions of $w_1$ by Lemma \ref{lem:extension_options}, each of length $2^n-1-m'|w_2|$.
If $|w_1|$ is even, then $\kappa=(2^n-1-m'|w_2|)-6-3|w_1|$ is odd, as needed for \[ w_1 \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \hspace{.15em}{\cdot}\hspace{.15em} {\mathbf{01}} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \text{ and }w_1 \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^\kappa \hspace{.15em}{\cdot}\hspace{.15em} 10 \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \hspace{.15em}{\cdot}\hspace{.15em} {\mathbf{10}} \hspace{.15em}{\cdot}\hspace{.15em} 1^{|w_1|} \] to both be dominant extensions of $w_1$ by Lemma \ref{lem:extension_options}, each of length $2^n-1-m'|w_2|$. In both cases, $\kappa>|w_1|$ follows from Equation \eqref{eqn:bound_k_n_below}. For each choice, $w_1^\infty>_Ew_2^\infty$ implies $w_1'^\infty>_Ew_2^\infty$, and $w_2^{m'}$ has positive cumulative sign because we ensured that $m'$ is even. Combined with Equation (\ref{eqn:needed_to_apply_prop}), we have all the necessary hypotheses to apply Proposition \ref{prop:thekeylemma} and conclude that $(w_1'w_2^{m'})^\infty$ is admissible. We also designed $w_1'$ so that $|w_1'|>m'|w_2|$. The sum of the coefficients of the kneading polynomial of $w_1'w_2^{m'}$ is even, because it has $2^n$ coefficients, each of which is either $-1$ or $+1$. By the final observation in Lemma \ref{lem:extension_options}, we can choose the extension so that the sum of the coefficients of the kneading polynomial for $w_1'w_2^{m'}$ is congruent to $2 \bmod 4$. Since the kneading polynomial has degree $2^n-1$, we apply Lemma \ref{l:tiozzo4point1} to conclude irreducibility. \end{proof} \subsection{Controlling Galois conjugates and core entropies of concatenations} \begin{lemma} \label{l:smallrootsdontmovemuch} Let $w_2$ be a word whose Parry polynomial has a root at $z_0 \in \mathbb{D}$.
Then for any $\epsilon > 0$, there exists an integer $N = N(\epsilon,w_2) \in \mathbb{N}$ such that $n > N$ implies that for every word $w_1$ for which $w_1w_2^n$ is admissible, the Parry polynomial associated to $(w_1w_2^n)$ has a root within distance $\epsilon$ of $z_0$. \end{lemma} \begin{proof} First, for any word $w$, denote the Parry polynomial associated to $w$ by $P_w$. Let $D$ be the closed disk of radius $\epsilon$ centered at $z_0$, and let $C$ be the boundary of $D$. Without loss of generality, assume $\epsilon$ is small enough that $D \subset \mathbb{D}$, and that $D$ contains no root of $P_{w_2}$ except $z_0$. For any $n \in \mathbb{N}$, it is straightforward to see that \[ P_{w_1w_2^n}(z) = z^{n|w_2|}P_{w_1}(z) + \left(z^{(n-1)|w_2|} + z^{(n-2)|w_2|}+ \dots + 1 \right)P_{w_2}(z). \] \noindent Set $\alpha = \min_{z \in C} |P_{w_2}(z)|$, which exists and is positive by compactness and the assumption that $D$ contains no root of $P_{w_2}$ except $z_0$. Set \[ 0<\beta := \min_{z \in C} \left(1-|z|^{|w_2|}\right)/\left(1+|z|^{|w_2|}\right).\] Then for all $z \in C$, we have \[ \left|\left(z^{(n-1)|w_2|} + z^{(n-2)|w_2|}+ \dots + 1\right)P_{w_2}(z)\right| \geq \left| \frac{1-(z^{|w_2|})^n}{1-z^{|w_2|}} \right| \alpha \geq \frac{1-|z^{|w_2|}|}{1+|z^{|w_2|}|}\alpha \geq \beta \alpha>0 \] where the middle nonstrict inequality follows from the triangle inequality and the fact that $\left|z^{|w_2|}\right|<1$. Set $m := \max_{z \in D} |z| < 1$. Also for all $z \in C$, since all coefficients of $P_{w_1}$ have absolute value at most $3$, \[ \left| z^{n|w_2|}P_{w_1}(z) \right| \leq | z|^{n|w_2|} \left(1+3 \sum_{i=0}^{\infty} |z|^i \right) \leq m^{n|w_2|} \left(1+3 \sum_{i=0}^{\infty} m^i \right). \] \noindent Therefore, for sufficiently large $n \in \mathbb{N}$ depending only on $\epsilon$ and $w_2$, we have \[ \left| z^{n |w_2|}P_{w_1}(z) \right| < \frac{\beta \alpha}{2}.
\] Consequently, by Rouch\'e's theorem, the winding number around $0$ of the image of $C$ under $P_{w_1w_2^n}$ equals the winding number around $0$ of the image of $C$ under the map $$z \mapsto \left(z^{(n-1)|w_2|} + z^{(n-2)|w_2|}+ \dots + 1\right)P_{w_2}(z).$$ The winding number of the image around $0$ is related to the number of zeros via the Argument Principle; for a holomorphic function $f$ and a simple closed contour $\Gamma$, the number $N$ of zeros of $f$ inside $\Gamma$ is given by \begin{equation} \label{eq:argumentprinciple} N = \frac{1}{2\pi i} \int_{\Gamma} \frac{f^{\prime}(z)}{f(z)}dz = \frac{1}{2\pi i} \int_{f(\Gamma)} \frac{dw}{w} \end{equation} where $w=f(z)$. Since $P_{w_2}$ has a root in $D$, this implies $P_{w_1w_2^n}$ also has a root in $D$ for sufficiently large $n$. \end{proof} \begin{lemma} Let $w_1$ be an admissible word whose Parry polynomial $P_{w_1}$ has leading root $z_0 > 1$. For any $\epsilon>0$, there exists an integer $N = N(\epsilon,w_1)$ such that $n>N$ implies that for every word $w_2$ for which $w_1^n w_2$ is admissible, the leading root of the Parry polynomial $P_{w_1^n w_2}$ associated to $(w_1^n w_2)$ is within distance $\epsilon$ of $z_0$. \label{l:leadingrootdontmovemuch} \end{lemma} \begin{proof} The proof consists of three main steps. Step 1: Compute the Parry and kneading polynomials associated to $(w_1^n w_2)$. Step 2: Show that there exists $N$ such that $n > N$ implies that for every word $w_2$ for which $w_1^n w_2$ is admissible, the Parry polynomial $P_{w_1^n w_2}$ has a root within distance $\epsilon$ of $z_0$. Step 3: Show that no root of $P_{w_1^n w_2}$ is greater in modulus than $|z_0| + \epsilon$. \emph{Step 1:} First, for any word $v$, denote the kneading polynomial associated to $v$ by $K_v$. It suffices to show that $K_{w_1^nw_2}$ can be made to have a root arbitrarily close to $1/z_0$ by choosing $n$ sufficiently large, with the choice of $n$ not depending on $w_2$.
For any $n \in \mathbb{N}$, the Parry polynomial $P_{w_1^n w_2}$ is given by \[ P_{w_1^n w_2}(z) = \left( z^{(n-1)|w_1|} + \cdots + z^{2|w_1|}+ z^{|w_1|}+ 1 \right) \left( z^{|w_2|} \right)P_{w_1}(z) + P_{w_2}(z). \] By Proposition \ref{p:relatingpolynomials2}, \[ (z-1)z^{|w_1^n w_2|} K_{w_1^nw_2}(z^{-1}) = P_{w_1^nw_2}(z). \] Hence, for $z \neq 1$, \[ K_{w_1^nw_2}(z) = \frac{z}{1-z} z^{|w_1^n w_2|}P_{w_1^nw_2}(z^{-1}). \] So \[ K_{w_1^nw_2}(z) = \frac{z}{1-z} \left( z^{n|w_1|+|w_2|}P_{w_2}(1/z) +(z^{|w_1|} + \dots + z^{n|w_1|})P_{w_1}(1/z) \right). \] Denote by $Q_{w_2}$ the reciprocal polynomial for $P_{w_2}$, i.e. $Q_{w_2} =z^{|w_2|} P_{w_2}(1/z)$. Notice $Q_{w_2}$ is a polynomial whose coefficients are all at most $3$ in absolute value. Then \begin{align} K_{w_1^nw_2}(z) & = \frac{z}{1-z} \left( z^{n|w_1|} Q_{w_2}(z) +(z^{|w_1|} + \dots + z^{n|w_1|})P_{w_1}(1/z) \right) \\ &= \frac{z^{|w_1|+1}}{1-z} \left( z^{(n-1)|w_1|} Q_{w_2}(z) +(1+ \dots + z^{(n-1)|w_1|})P_{w_1}(1/z) \right) \\ &= \frac{z^{|w_1|+1}}{1-z} \left( z^{(n-1)|w_1|} Q_{w_2}(z) + \frac{1-(z^{|w_1|})^n}{1-z^{|w_1|}} P_{w_1}(1/z) \right) \label{eq:kneadingwithQ} \end{align} \emph{Step 2:} For any fixed $\epsilon_0>0$, let $D$ be the closed disk of radius $\epsilon_0$ centered at $1/z_0$, and let $C$ be the boundary of $D$. Without loss of generality, assume $\epsilon_0$ is small enough that $ D \subset \mathbb{D} $, that $D$ contains no root of $z \mapsto P_{w_1}(1/z)$ except $1/z_0$, and that $D$ does not contain $0$. We will show that on $C$, we can make the size of $z^{(n-1)|w_1|}Q_{w_2}(z)$ small enough relative to the size of $(1 + \dots + z^{(n-1)|w_1|})P_{w_1}(1/z)$ that the winding number around $0$ of the image of $C$ under $K_{w_1^nw_2}$ equals the winding number around $0$ of the image of $C$ under $z \mapsto (1 + \dots + z^{(n-1)|w_1|})P_{w_1}(1/z)$. Set $\alpha = \min_{z \in C} |P_{w_1}(1/z)|$, which exists and is positive by compactness and the assumption that $D$ contains no root of $z \mapsto P_{w_1}(1/z)$ except $1/z_0$.
Set \[ 0<\beta := \min_{z \in C} \left\{ \frac{1-|z|^{|w_1|}}{1+|z|^{|w_1|}} \right\}.\] Then for all $z \in C$, we have \begin{equation} \label{eq:circlebigpart} \left| \frac{1-(z^{|w_1|})^n}{1-z^{|w_1|}} P_{w_1}(1/z) \right| \geq \left( \frac{1-|z|^{|w_1|n}}{1+|z|^{|w_1|}} \right) \alpha \geq \left( \frac{1-|z|^{|w_1|}}{1+|z|^{|w_1|}} \right) \alpha \geq \beta \alpha. \end{equation} Set $1> m := \max_{z \in D} \{|z|\}$. Also for all $z \in C$, \begin{equation} \label{eq:circlesmallpart} \left| z^{(n-1)|w_1|}Q_{w_2}(z) \right| \leq | z|^{(n-1)|w_1|} \left(1+3 \sum_{i=0}^{\infty} |z|^i \right) \leq m^{(n-1)|w_1|} \left(1+3 \sum_{i=0}^{\infty} m^i \right). \end{equation} Therefore, for sufficiently large $n$, $\left| z^{(n-1)|w_1|}Q_{w_2}(z) \right| < \frac{\alpha \beta}{2}$. Consequently, the winding number around $0$ of the image of $C$ under the map $$k_{w_1^nw_2}: z \mapsto z^{(n-1)|w_1|} Q_{w_2}(z) + \frac{1-(z^{|w_1|})^n}{1-z^{|w_1|}} P_{w_1}(1/z)$$ equals the winding number $W$ of the image of $C$ around $0$ under the map $$g_{w_1^nw_2}:z \mapsto \frac{1-(z^{|w_1|})^n}{1-z^{|w_1|}} P_{w_1}(1/z).$$ Since $z \mapsto P_{w_1}(1/z)$ has a root at $1/z_0 \in D$, the argument principle (equation \ref{eq:argumentprinciple}) implies the winding number $W$ is nonzero. Hence, the winding number around $0$ of the image of $C$ under $k_{w_1^nw_2}$ is nonzero. Therefore, $k_{w_1^nw_2}$ has a root in $D$, and thus $K_{w_1^nw_2}$ has a root in $D$. This implies $P_{w_1^nw_2}$ has a root in the set $\{z:1/z \in D\}$. The diameter of this set decreases to $0$ as $\epsilon_0$ decreases to $0$, and $\epsilon_0$ was arbitrary. \emph{Step 3:} Set $r = |1/z_0| - \epsilon_0$. Without loss of generality, assume $\epsilon_0$ is small enough that $r > 0$ and $|1/z_0|+\epsilon_0 < 1$. Let $E$ be the closed disk of radius $r$ centered at $0$. Let $F$ be the boundary of $E$. Since $z_0$ is the leading root of $P_{w_1}$, the map $z \mapsto P_{w_1}(1/z)$ has no roots in $E$.
Hence the map $g_{w_1^n w_2}$ has no roots in $E$, as $|z| < 1$ for all $z \in E$. Set $\tilde{\alpha} = \min_{z \in F} |P_{w_1}(1/z)|$, which exists and is positive by compactness. Set \[ 0< \tilde{\beta } := \min_{z \in F} \left\{ \frac{1-|z|^{|w_1|}}{1+|z|^{|w_1|}} \right\}.\] By equation (\ref{eq:circlebigpart}), for any $n$ and for any $z \in F$, \[ |g_{w_1^nw_2}(z)|=\left| \frac{1-(z^{|w_1|})^n}{1-z^{|w_1|}} P_{w_1}(1/z)\right| \geq \tilde{\beta} \tilde{\alpha}. \] Thus, for any $n$, the image of the circle $F$ under $g_{w_1^nw_2}$ is a closed curve that has winding number $0$ about the origin and is contained in the set of points with absolute value at least $\tilde{\beta}\tilde{\alpha}$. By equation (\ref{eq:circlesmallpart}), for any $n$ and any $z \in F$, \begin{equation} \label{eq:yetanotherequation} \left| z^{(n-1)|w_1|}Q_{w_2}(z) \right| \leq (|1/z_0|+\epsilon_0)^{(n-1)|w_1|} \left(1+3 \sum_{i=0}^{\infty} (|1/z_0|+\epsilon_0)^i \right). \end{equation} Since $|1/z_0| + \epsilon_0 < 1$ by assumption, equation (\ref{eq:yetanotherequation}) implies that for sufficiently large $n$, $\left| z^{(n-1)|w_1|}Q_{w_2}(z) \right| < \tilde{\beta}\tilde{\alpha}/2$ for all $z \in F$. Since every point of the image of $F$ under $g_{w_1^nw_2}$ has modulus at least $\tilde{\beta}\tilde{\alpha}$, perturbing each point of the curve by less than $\tilde{\beta}\tilde{\alpha}$ cannot change its winding number around the origin. Therefore, for sufficiently large $n$, the image of $F$ under $k_{w_1^n w_2}$ has zero winding number around $0$. The argument principle then implies $k_{w_1^n w_2}$ has no roots in $E$. By equation (\ref{eq:kneadingwithQ}), the only root of $K_{w_1^nw_2}$ in $E$ is $z=0$. Therefore $P_{w_1^n w_2}$ has no roots in $\mathbb{C}$ of modulus greater than $1/(|1/z_0| - \epsilon_0)$. \end{proof} \begin{lemma} Let $v$ be a dominant word with growth rate $\beta$.
Then the string $v^n\cdot 1^\infty$ is admissible for all $n$, and the growth rate of $v^n\cdot 1^\infty$ converges to $\beta$ as $n \to \infty$. \label{lem:approaching_dominant_growthrate} \end{lemma} \begin{proof} Denote the growth rate of $v^n\cdot 1^\infty$ by $\zeta_n$. First, we show that $v^n\cdot 1^\infty$ is admissible. It is evident that $v^n\cdot 1^\infty\geq_E 1^\infty$, so one need only show that \begin{equation} \label{eq:admisscond} v^n\cdot 1^\infty\geq_E\sigma^k(v^n\cdot 1^\infty) \end{equation} for all $0<k<n|v|$. If $k$ is a multiple of $|v|$, then $\sigma^k(v^n\cdot 1^\infty)$ is of the form $v^m\cdot 1^\infty$ for some natural number $m<n$. In this case, equation (\ref{eq:admisscond}) follows from the fact that $v^m$ has positive cumulative sign and $v^{n-m}\cdot 1^\infty\geq_E 1^\infty$. If $k$ is not a multiple of $|v|$, then $\sigma^k(v^n\cdot 1^\infty)$ starts with a word of the form $b\cdot 1$ where $b$ is a proper suffix of $v$. Hence by dominance of $v$ and Lemma \ref{l:dominantequivalentcharacterization}, equation (\ref{eq:admisscond}) holds with strict inequality in this case. Proposition \ref{fact:betaexpansion} gives us: \begin{align*} 1 &=\sum_{j=1}^\infty {s(1,j)d(1,j)\over \beta^j}={1\over 1-\beta^{-|v|}}\sum_{j=1}^{|v|}{s(1,j)d(1,j)\over \beta^j}, \\ 1 &=\sum_{j=1}^\infty {s_{\zeta_n}(1,j)d_{\zeta_n}(1,j)\over \zeta_n^j}={1-\zeta_n^{-n|v|}\over 1-\zeta_n^{-|v|}}\sum_{j=1}^{|v|}{s(1,j)d(1,j)\over \zeta_n^j}+{2\zeta_n^{-n|v|-1}\over 1+\zeta_n^{-1}}, \end{align*} where $d(1,j)$ and $s(1,j)$ are the digits and cumulative signs associated to the string $v^{\infty}$, and $d_{\zeta_n}(1,j)$ and $s_{\zeta_n}(1,j)$ are the digits and cumulative signs associated to the string $v^n \cdot 1^{\infty}$.
Hence, the corresponding Parry polynomials are: \begin{align*} \beta^{|v|}- \left( \sum_{j=1}^{|v|} s(1,j)d(1,j)\beta^{|v|-j} \right)-1&=0,\\ (\zeta_n+1)\zeta_n^{n|v|}-(\zeta_n+1){\zeta_n^{n|v|}-1\over \zeta_n^{|v|}-1} \left(\sum_{j=1}^{|v|}{s(1,j)d(1,j)\zeta_n^{|v|-j}} \right) -2&=0. \end{align*} It follows from kneading theory (Theorem \ref{t:kneadingroots}) that $\beta$ and $\zeta_n$, respectively, are the leading roots of these Parry polynomials. Hence, $1/\beta$ and $1/\zeta_n$ are the smallest zeroes of the following analytic functions: \begin{align*} Q_\beta(z)&=1-\left(\sum_{j=1}^{|v|}s(1,j)d(1,j)z^j\right)-z^{|v|},\\ Q_{\zeta_n}(z)&=Q_\beta(z)-z^{n|v|}(Q_\beta(z)-1)+z^{n|v|}(1-z^{|v|})-{2z^{n|v|+1}(z^{|v|}-1)\over z+1}. \end{align*} Now it is evident that $Q_{\zeta_n}-Q_\beta$ converges uniformly to $0$ as $n\rightarrow\infty$ on any compact subset of the open unit disc; hence by the same winding number argument used in the proof of Lemma \ref{l:leadingrootdontmovemuch}, the smallest zeroes of $Q_{\zeta_n}$ converge to the smallest zero of $Q_\beta$. \end{proof} \begin{proposition} For all $y\in(\sqrt2,2)$ and all $\epsilon>0$, there exists a sequence of dominant words $(w_n)_{n=1}^\infty$ such that for any admissible extension $w_n'$ of $w_n$, including the empty extension, the growth rate of $(w_n')^\infty$ is within $\epsilon$ of $y$. \label{prop:powerful_dominant_words} \end{proposition} \begin{proof} By Proposition \ref{prop:densedominant}, there exists a dominant word $v$ with growth rate $\beta$ within $\epsilon/2$ of $y$. For each $n \in \mathbb{N}$, consider the admissible string $v^n\cdot 1^\infty$; denote the growth rate of the tent map associated to $v^n\cdot 1^\infty$ by $\zeta_{n}$. Denote by $I_j^\eta$ the subinterval of $[0,1]$, in the partition into subintervals determined by the growth rate $\eta$ (as in \S\ref{Basic definitions}), that contains the point $f^j_\eta(1)$.
For each pair $k,n \in \mathbb{N}$, define the set of growth rates \[ U_k^n=\left\{\eta\in[\sqrt2,2] \mid f^j_\eta(1)\in \interior(I_{j}^{\zeta_n}) \text{ for all }j=1,\ldots,k\right\}. \] Note that $\zeta_n \in U_k^n$ for all $k$ and $n$, since if at any point the $f_{\zeta_n}$-orbit of $1$ landed on the boundary of $I_0$ or $I_1$, then either the tail of the itinerary would be $0^{\infty}$ or $1$ would be periodic under $f_{\zeta_n}$, either of which contradicts the construction of the itinerary $v^n\cdot1^\infty$. The set $U_k^n$ is open for all $k,n$ by construction. By Lemma \ref{lem:approaching_dominant_growthrate}, there exists $N_1 \in \mathbb{N}$ such that whenever $n \geq N_1$, the growth rate $\zeta_n$ is within $\epsilon/2$ of $\beta$, and hence within $\epsilon$ of $y$. Therefore, for all $n, k \in \mathbb{N}$ with $n \geq N_1$, the set $U_k^n$ has nontrivial intersection with $(y-\epsilon,y+\epsilon)$. For integer $n \geq N_1$, fix an integer $k_n>|v^n|$. Since $U_{k_n}^n\cap(y-\epsilon,y+\epsilon)$ is open and nonempty, Proposition \ref{prop:densedominant} implies there exists a dominant word $w_n$ with growth rate \[ \beta_{n}\in U_{k_n}^n\cap(y-\epsilon,y+\epsilon). \] By the definition of the set $U_{k_n}^n$, the itinerary of $1$ under the tent map with growth rate $\beta_n$ agrees with $v^n\cdot 1^\infty$ in its first $k_n > |v^n|$ letters. Therefore, any extension $w_n^{\prime}$ of $w_n$, including the empty extension, is also an extension of $v^n$. Let $N_2=N_2(\epsilon/2,v)$ be the integer whose existence is guaranteed by Lemma \ref{l:leadingrootdontmovemuch}, and let $N = \max\{N_1,N_2\}$. Then whenever $n >N$, for any admissible extension $w_n^{\prime}$ of $w_n$, the leading root of the Parry polynomial $P_{w_n^{\prime}}$ is within $\epsilon/2$ of $\beta$, and hence within $\epsilon$ of $y$. \end{proof} \subsection{Proof of main theorem of section} \begin{theorem} \label{thm:persistence} Let $\alpha \in \mathbb{D}$ be a Galois conjugate of a superattracting $\beta\in(\sqrt{2},2)$.
Then for any $y\in[\beta,2]$ and any $\epsilon>0$, there exists a superattracting $\beta'$ within $\epsilon$ of $y$ which has some Galois conjugate within $\epsilon$ of $\alpha$. \end{theorem} \begin{proof} Let $w$ be an irreducible, admissible word with growth rate $\beta \in [\sqrt{2} ,2]$ and fix $y \in [\beta,2]$. If $y = \beta$ the statement is trivial, so assume $y > \beta$. Fix $$0<\epsilon<\frac{y-\beta}2.$$ Construct the sequence of dominant words $(w_n)$ as in Proposition \ref{prop:powerful_dominant_words}; the words $w_n$ satisfy that for any admissible extension $w_n^{\prime}$ of $w_n$, the growth rate of $w_n^{\prime}$ is within $\epsilon$ of $y$. Denote the growth rate of $w_n$ by $\beta_n$. The inequality $\beta_n>\beta$, for all $n$, follows from $\epsilon < \frac{y-\beta}2$. Because $\beta_n > \beta$, monotonicity (Corollary \ref{prop:word_monotonicity}) implies $w_n^\infty>_E w^\infty$. Passing to subsequences as needed, we may assume that $|w_n| \to \infty$ as $n \to \infty$, since there are only finitely many words of bounded length. For each $n$, let $M_n=\left\lceil \frac{|w_n|}{|w|}\right\rceil - 2 $. Then \begin{equation} \label{eq:iffstatement} 2M_n|w|\geq 2\left(\frac{|w_n|}{|w|}-2\right)|w| = 2|w_n|-4|w|. \end{equation} Since $2|w_n|-4|w| > |w_n|$ if and only if $|w_n|>4|w|$, we have from equation (\ref{eq:iffstatement}) that \begin{equation*} |w_n|>4|w| \implies 2M_n |w| > |w_n|. \end{equation*} Observe that \[ |w_n|=\frac{|w_n|}{|w|}|w| > \left( \left\lceil \frac{|w_n|}{|w|}\right\rceil-2 \right)|w| =M_n|w| \quad \text{ for all }n \] and $|w_n| \to \infty$. Therefore, for all $n$ large enough that $|w_n|>4|w|$, the positive integer $M_n$ satisfies \[ 2M_n|w| > |w_n| > M_n |w|. \] Note also that $M_n \to \infty$ as $n \to \infty$. Thus, for sufficiently large $n$, the hypotheses of Proposition \ref{prop:extensions_for_irreducibility} hold, using $w_n$ in place of $w_1$ and $w$ in place of $w_2$.
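The elementary inequalities governing the choice of $M_n$ can be checked numerically (an illustration only; the integer values below are arbitrary stand-ins for $|w_n|$ and $|w|$):

```python
import math

# With M = ceil(a/b) - 2 (a standing in for |w_n| and b for |w|),
# verify that 2*M*b > a > M*b whenever a > 4*b.
for b in range(1, 20):
    for a in range(4 * b + 1, 40 * b):
        M = math.ceil(a / b) - 2
        assert 2 * M * b > a > M * b, (a, b, M)
print("inequalities verified")
```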
Then by Proposition \ref{prop:extensions_for_irreducibility}, there exists an integer $m_n'>M_n$ and a dominant extension $w_n'$ of $w_n$ so that $(w_n'w^{m_n'})^\infty$ is admissible and the polynomial \[\frac{P_{w_n'w^{m_n'}}(z)}{1-z}\] is irreducible, where $P_{w_n'w^{m_n'}}$ is the Parry polynomial of the admissible word $w_n'w^{m_n'}$. Because $w_n'w^{m_n'}$ is an admissible extension of $w_n$, which was constructed via Proposition \ref{prop:powerful_dominant_words}, the growth rate of $w_n'w^{m_n'}$ is within $\epsilon$ of $y$. Since $M_n \to \infty$ as $n \to \infty$ and $m_n^{\prime} > M_n$, we have $m_n' \to \infty$ as $n\to \infty$. Then by Lemma \ref{l:smallrootsdontmovemuch}, for sufficiently large $n \in \mathbb{N}$, $P_{w_n^{\prime} w^{m_n^\prime}}$ has a root within $\epsilon$ of $\alpha$. \end{proof} \section{Period doubling} \label{s:angledoubling}\label{sec:doubling} This section shows that if $1<\lambda \leq 2$ is the growth rate of a superattracting tent map, then so is $\sqrt{\lambda}$, and relates the itineraries of these two maps; we refer to this mechanism as Period Doubling. We then use Period Doubling to extend Theorem \ref{thm:persistence}, which holds for $\beta \in [\sqrt 2,2]$, to work for $\beta \in [1,2]$ (Proposition \ref{p:fullpersistence}), and then use this to prove Theorem \ref{mainthm:closurepersistence}. Period doubling is related to the process of ``tuning" the Mandelbrot set in complex dynamics; see e.g. \cite[\S~7.2]{TiozzoGaloisConjugates}. \medskip Define $s:\{0,1\}^{\mathbb{N}} \rightarrow \{0,1\}^{\mathbb{N}}$ to be the map that interchanges $0$s and $1$s, i.e. $$s(b_1, b_2, b_3,\dots) = (b_1+1 \bmod{2}, b_2 +1 \bmod{2}, b_3+1 \bmod{2}, \dots ).$$ \begin{lemma} \label{l:squaredmap} Let $f$ be a tent map on $[0,1]$ with growth rate $1<\lambda < \sqrt{2}$, and denote the itinerary of $1$ under $f$ by $$\textrm{It}_f(1) = a_1,a_2,a_3,\dots.$$ Then \begin{enumerate} \item $a_{2k+1} = 1$ for all nonnegative integers $k$, and \item there exists a tent map $g$ of growth rate $\lambda^2$ such that $$a_2,a_4,a_6,\dots = s(\textrm{It}_g(1)).$$ \end{enumerate} Furthermore, $g$ is conjugate to the restriction of $f^2$ to the interval $[2-\lambda,\frac{2}{1+\lambda}]$ via an affine scaling and flipping of the interval. \end{lemma} \begin{proof} Fix $1 < \lambda < \sqrt{2}$ and let $f$ be the tent map with growth rate $\lambda$. Let \begin{equation} \label{eq:Jintervals} J_1=\left[2-\lambda,\frac{2}{1+\lambda}\right], \hspace{1cm} J_2=\left[\frac{2}{1+\lambda},1\right]. \end{equation} Notice that $2-\lambda = f(1) \leq 1/\lambda$, and that $\frac{2}{1+\lambda}$ is the non-zero fixed point of $f$. Since $f(1) \in I_0$, $f^2(1) = \lambda\cdot f(1) = 2\lambda - \lambda^2$.
The inequality $2\lambda-\lambda^2 \geq \frac{2}{1+\lambda}$ holds for $1 \leq \lambda \leq \sqrt{2}$, with equality precisely at the endpoints (note that $\frac{2}{1+\sqrt{2}} = 2\sqrt{2}-2$). Hence, $f^2(1) \in J_2$. Thus, $f(J_1) \subset J_2$ and $f(J_2) \subset J_1$. The inequality $1/\lambda < 2/(1+\lambda)$ holds for $\lambda > 1$, so the critical point $1/\lambda$ of $f$ is in the interior of the interval $J_1$. It follows that the restrictions $f^2:J_1 \to J_1$ and $f^2:J_2 \to J_2$ are piecewise linear, continuous, have one turning point, and have growth rate $\lambda^2$. The map $f^2|_{J_2}$ is a tent map on $J_2$. The map $f^2|_{J_1}$ is an ``inverted tent map" on $J_1$. It is conjugate (via scaling and then flipping the interval so as to exchange the endpoints) to a tent map $g$ on $[0,1]$ of growth rate $\lambda^2$. Denote the itinerary of $1$ under $f$ by $a_1, a_2, a_3,\dots$. Since $f^2(J_2) \subset J_2 \subset I_1$, all odd terms $a_{2k+1}$ are equal to $1$. Now consider the even terms. By definition, the term $a_{2k} = 1$ if and only if $f^{2k-1}(1) \in I_1$. This happens if and only if \[f^{2k-1}(1) \in J_1 \cap [1/\lambda,2/(1+\lambda)],\] which is equivalent to \[(f^2)^{k-1} (f(1)) \in J_1 \cap [1/\lambda,2/(1+\lambda)].\] Because the map that conjugates $f^2|_{J_1}$ and $g$ involves an isometric flip that exchanges the endpoints of the interval, we have that \[(f^2)^{k-1} (f(1)) \in J_1 \cap [1/\lambda,2/(1+\lambda)]\] if and only if the point $g^{k-1}(1)$ lies to the left of the critical point for $g$. Thus, $a_{2k} = 1$ if and only if the $(k-1)^\textrm{th}$ digit of the itinerary of $1$ under $g$ equals $0$.
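The digit relation just derived can be checked numerically. The sketch below (an illustration only; the slope $\lambda = 1.3$ is an arbitrary sample in $(1,\sqrt{2})$) computes the itineraries of $1$ under the tent maps of slopes $\lambda$ and $\lambda^2$, normalized as above with turning point $1/\lambda$:

```python
def itinerary(lam, n):
    """First n digits of the itinerary of 1 under the tent map
    f(x) = min(lam * x, 2 - lam * x); a digit is 1 iff the point is >= 1/lam."""
    x, digits = 1.0, []
    for _ in range(n):
        digits.append(1 if x >= 1 / lam else 0)
        x = min(lam * x, 2 - lam * x)
    return digits

lam = 1.3                   # arbitrary sample slope in (1, sqrt(2))
a = itinerary(lam, 12)      # itinerary of 1 under f (slope lam)
b = itinerary(lam ** 2, 6)  # itinerary of 1 under g (slope lam**2)

# Odd-position digits a_1, a_3, ... are all 1; even-position digits
# are the flipped digits of the itinerary of 1 under g.
assert all(a[2 * k] == 1 for k in range(6))
assert all(a[2 * k + 1] == 1 - b[k] for k in range(6))
```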
Hence $$(a_2,a_4,a_6,\dots) = s(\textrm{It}_g(1)).$$ \end{proof} \begin{proposition}[Period Doubling] \label{p:squareroot} Let $g$ be a superattracting tent map with growth rate \mbox{$1 < \lambda \leq 2$}, and denote the itinerary of $1$ under $g$ by $$\textrm{It}_g(1) = b_1,b_2,b_3,\dots.$$ Then the sequence $$a_1,a_2,a_3,\dots$$ defined by $$\begin{cases} a_{2k+1} = 1 & \textrm{ for nonnegative integers } k \\ a_{2k} = b_k +1 \pmod{2} & \textrm{ for positive integers } k \end{cases}$$ is the itinerary of $1$ under the superattracting tent map of growth rate $\sqrt{\lambda}$. \end{proposition} \begin{proof} Denote by $g$ the superattracting tent map of growth rate $1 < \lambda \leq 2$, and denote by $f$ the tent map of growth rate $\sqrt{\lambda}$. Let $J_1$ and $J_2$ be the intervals defined as in equation (\ref{eq:Jintervals}) for $f$ (with growth rate $\sqrt{\lambda}$). By Lemma \ref{l:squaredmap}, $g$ is conjugate to $f^2 |_{J_1}$ via an affine map that scales and flips $J_1$ (exchanging the endpoints). Since $g$ is superattracting, the left endpoint of $J_1$, $2-\sqrt{\lambda}$, is a (strictly) periodic point for $f^2$. Since $f(J_1) \subset J_2$ and $f(J_2) \subset J_1$ and $f$ is injective on $J_2$, this implies $1$ is a strictly periodic point for $f$. Hence $f$ is superattracting. The statement about the itineraries is a restatement of Lemma \ref{l:squaredmap}. \end{proof} \begin{proposition} \label{p:fullpersistence} Let $\alpha \in \mathbb{D}$ be a Galois conjugate of a superattracting $\beta\in[1,2]$. Then for any $y\in[\beta,2]$ and any $\epsilon>0$, there exists a superattracting $\beta'$ within $\epsilon$ of $y$ which has some Galois conjugate within $\epsilon$ of $\alpha$. \end{proposition} \begin{proof} We will use Period Doubling (Proposition \ref{p:squareroot}) to extend the conclusion of Theorem \ref{thm:persistence}, which gives the desired result for $\beta \in (\sqrt{2},2]$, to all $\beta$ in the interval $(1,2]$.
Let $\alpha, \beta, y, \epsilon$ be as in the statement of the theorem; without loss of generality, assume $\epsilon < 1$. Assume first that $\beta>1$. The case $y \in [\sqrt{2},2]$ is covered by Theorem \ref{thm:persistence}, so assume $y \in (1,\sqrt{2})$. Define $k \in \mathbb{N}$ so that $y^{2^k} \in [\sqrt{2},2]$. Set $\tilde{y} = y^{2^k}$ and $\tilde{\alpha} = \alpha^{2^k}$. By Theorem \ref{thm:persistence}, applied with $\epsilon^{2^k}$ in place of $\epsilon$, there exists a superattracting $\tilde{\beta}^{\prime}$ within $\epsilon^{2^k} \leq \epsilon$ of $\tilde{y}$ which has a Galois conjugate $\tilde{z}$ within $\epsilon^{2^k}$ of $\tilde{\alpha}$. Without loss of generality, we may assume $\tilde{\beta}^{\prime} \in [\sqrt{2},2]$. Set $\beta^{\prime}$ to be the positive real $2^k$-th root of $\tilde{\beta}^{\prime}$, and pick $z$ to be a $2^k$-th root of $\tilde{z}$ that minimizes the distance to $\alpha$. Let $f$ be the minimal polynomial for $\tilde{\beta}^{\prime} \in [\sqrt{2},2]$. The polynomial $f$ is, by definition, irreducible in $\mathbb{Z}[z]$, and so satisfies the assumptions of Lemma \ref{l:Tiozzo4.2}. Thus, for all $n \geq 1$, the polynomial $f(z^{2^n})$ is irreducible in $\mathbb{Z}[z]$. By Period Doubling (Proposition \ref{p:squareroot}), if a growth rate $1<\lambda<2$ is admissible, then the growth rate $\sqrt{\lambda}$ is also admissible. Consequently, $\beta^{\prime}$ is an admissible slope and $z$ is a Galois conjugate of $\beta^{\prime}$. Taking positive square roots of real numbers greater than $1$ does not increase distance, so $|\beta^{\prime}-y| < \epsilon$. To bound $|z-\alpha|$, note that the $2^k$-th roots $\zeta_1, \dots, \zeta_{2^k}$ of $\tilde{z}$ are exactly the roots of the monic polynomial $w^{2^k} - \tilde{z}$, so $$\prod_{j=1}^{2^k} |\alpha - \zeta_j| = \left|\alpha^{2^k} - \tilde{z}\right| = |\tilde{\alpha} - \tilde{z}|.$$ Since $z$ realizes the minimum of $|\alpha - \zeta_j|$ over $j$, and the minimum is at most the geometric mean, $$|z-\alpha| \leq |\tilde{\alpha} - \tilde{z}|^{1/2^k} < \left(\epsilon^{2^k}\right)^{1/2^k} = \epsilon.$$ For $\beta = 1$, since $1$ has no nontrivial Galois conjugates, $(\alpha,1)$ must be the limit of a sequence of points $(z_n,\lambda_n) \in \Upsilon_2$ with $z_n \in \mathbb{D}$ and $\lambda_n >1$.
By the previous argument, we can approximate each $(z_n,\lambda_n)$ by a point $(c_n,\beta_n^{\prime})$, where $\beta_n^{\prime}$ is a superattracting growth rate with a Galois conjugate $c_n$ within $\epsilon/2$ of $z_n$. The claim follows. \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm:closurepersistence}] For $(z,\lambda) \in \Upsilon_2$ with $z \in \mathbb{D}$ and $\lambda>1$, the statement $\{z\} \times [\lambda,2] \subset \Upsilon_2$ follows immediately from Proposition \ref{p:fullpersistence} and the fact that the Master Teapot $\Upsilon_2$ is closed. Thus, it suffices to deal with the case $(z,1) \in \Upsilon_2$ with $z \in \mathbb{D}$. Since $1$ has no nontrivial Galois conjugates, $(z,1)$ must be the limit of a sequence of points $(z_n,\lambda_n) \in \Upsilon_2$ with $z_n \in \mathbb{D}$ and $\lambda_n >1$. Then $\{z\} \times [1,2] \subset \Upsilon_2$ by the first part and the fact that $\Upsilon_2$ is closed. \end{proof} In fact, the case $(z,1) \in \Upsilon_2$ with $z \in \mathbb{D}$ discussed in the proofs of Theorem \ref{mainthm:closurepersistence} and Proposition \ref{p:fullpersistence} cannot occur; Proposition \ref{p:bottomlevelunitcircle} will show that the bottom level of the Master Teapot is the unit circle. \section{The unit cylinder and connectivity}\label{sec:cylinder} \begin{proposition} \label{p:bottomlevelunitcircle} The unit circle is equal to the bottom level of the Master Teapot, i.e. $$S^1 \times \{1\} = \Upsilon_2 \cap (\mathbb{C} \times \{1\}).$$ \end{proposition} \begin{proof} We will first show $S^1 \times \{1\} \subset \Upsilon_2$. By Proposition \ref{p:squareroot}, if the tent map of growth rate $1 < \lambda \leq 2$ is superattracting, then the tent map of growth rate $\sqrt{\lambda}$ is also superattracting. Fix $1<\lambda \leq 2$ such that the tent map of growth rate $\lambda$ is superattracting and such that the kneading polynomial for that map satisfies the assumptions of Lemma \ref{l:Tiozzo4.2}.
Thus, for any Galois conjugate $\alpha$ of $\lambda$ and for any $n \in \mathbb{N}$, each of the $2^n$ complex points $\alpha^{\frac{1}{2^n}}$ is a Galois conjugate of the positive real root $\lambda^{\frac{1}{2^n}}$. So each of the $2^n$ points $(\alpha^{\frac{1}{2^n}},\lambda^{\frac{1}{2^n}})$ lies in $\Upsilon_2$. Taking the closure over all $n$, we have that $S^1 \times \{1\} \subset \Upsilon_2$. To show $\Upsilon_2 \cap (\mathbb{C} \times \{1\}) \subset S^1 \times \{1\}$, suppose there exists a point $(y,1)\in\Upsilon_2$ such that $|y|\neq 1$. Since $1$ has no nontrivial Galois conjugates, $(y,1) \in \mathbb{C} \times \mathbb{R}$ must be the limit of a sequence of points $(\alpha_n,\beta_n) \in \mathbb{C} \times \mathbb{R}$ such that $\beta_n$ is the growth rate of a superattracting tent map and $\alpha_n$ is a Galois conjugate of $\beta_n$. Thus, reindexing the sequence as necessary, we have that for any $k>0$, there exists $\beta_{k}$ with $1<\beta_{k}<1+\frac{1}{k}$ with Galois conjugate $\alpha_{k}$, so that $|\alpha_k-y|<\frac{1}{k}$. Now by Lemma \ref{l:squaredmap}, $\beta_k^{2^{n_k}} \leq 2$ is admissible, where $n_k$ is the maximal value of $n$ for which $\beta_k^{2^{n}}\leq 2$. The fact that $\alpha_k^{2^{n_k}}$ is a Galois conjugate of $\beta_k^{2^{n_k}}$ follows immediately from the definition of a Galois automorphism. Thus $(\alpha_k^{2^{n_k}},\beta_k^{2^{n_k}}) \in \Upsilon_2$. Now, $|\alpha_k|$ is bounded away from $1$ for $k$ sufficiently large (because $\alpha_k \to y$ and $|y| \neq 1$), and $n_k\to\infty$ as $k \to \infty$, since $\beta_k \to 1$ as $k \to \infty$. Consequently, either $\alpha_k^{2^{n_k}} \to 0$ or $\alpha_k^{2^{n_k}} \to \infty$ as $k\to \infty$. This is a contradiction because $$\Omega_2 \subset \{z \in \mathbb{C}: 1/2 \leq |z| \leq 2\}$$ by \cite[Lemma 2.4]{TiozzoGaloisConjugates}, and the projection of $\Upsilon_2$ onto the first coordinate is $\Omega_2$.
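As a numerical aside (an illustration only, with an arbitrary sample point), the accumulation of the points $\alpha^{1/2^n}$ on the unit circle used above can be seen from the moduli of iterated square roots:

```python
alpha = complex(0.3, 0.4)  # arbitrary nonzero sample point in the unit disk
r = abs(alpha)             # |alpha| = 0.5
for n in range(60):
    r = r ** 0.5           # modulus of a 2^(n+1)-th root of alpha
assert abs(r - 1.0) < 1e-9
print("moduli of iterated square roots tend to 1")
```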
\end{proof} \begin{proposition}\label{p:irreducibleapproximation} Fix $z \in \mathbb{D} \cap \Omega$ and $\epsilon > 0$. Then there exists $(y,\beta) \in \mathbb{C} \times \mathbb{R} \simeq \mathbb{R}^3$ such that \begin{enumerate} \item $d((z,2),(y,\beta)) < \epsilon$, \item $y$ is a Galois conjugate of $\beta$, and \item the minimal polynomial for $\beta$ has coefficients in $\{\pm 1\}$, and not all its coefficients are equal. \end{enumerate} \end{proposition} \begin{proof} Fix any sequence $\{\epsilon_i\}_{i \in \mathbb{N}}$, $\epsilon_i \in \{\pm 1\}$. In the proof of \cite[Corollary 5.3]{TiozzoGaloisConjugates}, Tiozzo shows that for any $n \in \mathbb{N}$, there exist arbitrarily large $N \in \mathbb{N}$ and $\eta=\eta(N,n) \in \{\pm 1\}$ such that $$P_{N,n}(t) = 1 - \left(\sum_{k=1}^N t^k \right) + \eta t^{N+1} + \left(\sum_{k=0}^n \epsilon_{n-k}t^{N+2+k}\right)$$ is an admissible kneading determinant for a superattracting tent map and the polynomial \begin{align*} Q_{N,n}(t) & = t^{N+n+2}P_{N,n}\left(\frac{1}{t}\right) \\ & = t^{N+n+2} - \left(\sum_{k=1}^N t^{N+n+2-k}\right) + \eta t^{n+1} + \left(\sum_{k=0}^n \epsilon_{n-k}t^{n-k} \right) \end{align*} is irreducible. The leading (real) root of $Q_{N,n}$ is the growth rate of the associated superattracting tent map, and its Galois conjugates are the other roots of $Q_{N,n}$. By Rouch\'{e}'s Theorem, for any sequence $\{N_i\}_{i \in \mathbb{N}}$, each root in $\mathbb{D}$ of $\sum_{k=0}^{\infty} \epsilon_k x^k$ is the limit of roots of $Q_{N_i,i}(x)$ as $i \to \infty$. We claim that for any fixed $n$, the limit as $N \rightarrow \infty$ of the leading root of $Q_{N,n}$ equals $2$. Suppose $\{\lambda_N\}_{N \in \mathbb{N}}$ is a sequence of nonzero complex numbers with $3/2<|\lambda_N| \leq 2$ such that $0=Q_{N,n}(\lambda_N)$.
Then $$ 0 = P_{N,n}\left(\frac{1}{\lambda_N}\right) = 1 - \sum_{k=1}^N \left(\frac{1}{\lambda_N}\right)^k + \frac{1}{\lambda_N^{N+n+2}} \left( \eta \lambda_N^{n+1} + \sum_{k=0}^n \epsilon_{n-k} \lambda_N^{n-k}\right). $$ Now \begin{multline} \label{eq:smallpart} \left| \frac{1}{\lambda_N^{N+n+2}} \left( \eta \lambda_N^{n+1} + \sum_{k=0}^n \epsilon_{n-k} \lambda_N^{n-k}\right) \right| \leq \frac{1}{|\lambda_N|^{N+n+2}} \left( \sum_{k=0}^{n+1} |\lambda_N|^{k}\right) \\ \leq \frac{1}{|\lambda_N|^{N+n+2}} \cdot \frac{|\lambda_N|^{n+2}}{|\lambda_N|-1} = \frac{1}{(|\lambda_N|-1)\,|\lambda_N|^{N}} \leq \frac{2}{(3/2)^{N}}. \end{multline} Hence, $$0 = \lim_{N \to \infty} P_{N,n}\left(\frac{1}{\lambda_N}\right) = 1 - \lim_{N \to \infty} \sum_{k=1}^{N} \left(\frac{1}{\lambda_N}\right)^k + \lim_{N \rightarrow \infty} \frac{1}{\lambda_N^{N+n+2}} \left( \eta \lambda_N^{n+1} + \sum_{k=0}^n \epsilon_{n-k} \lambda_N^{n-k}\right). $$ Thus, since the limit of the right hand term is $0$ by (\ref{eq:smallpart}), $$1 = \lim_{N \to \infty} \sum_{k=1}^N \left(\frac{1}{\lambda_N}\right)^k.$$ Since $\sum_{k=1}^{\infty} \frac{1}{2^k} = 1$, this implies $\lim_{N \to \infty} \lambda_N = 2$. \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm:unitcylinder}] By \cite[Proposition 6.1]{TiozzoGaloisConjugates}, there exists $R > 1$ such that the inclusion $$\{z \mid R^{-1} < |z| < R\} \subset \Omega_2$$ holds. 
Therefore, by the Persistence Theorem \ref{mainthm:closurepersistence}, for this same $R$ the annulus $$A:=\{(z,2)\in \mathbb{C} \times \mathbb{R} \mid R^{-1} < |z| < 1\} \subset \Upsilon_2.$$ By Proposition \ref{p:irreducibleapproximation}, each point in $A$ is the limit of a sequence of points of the form $(y,\beta) \in \mathbb{C} \times \mathbb{R}$ such that $y$ is a Galois conjugate of $\beta<2$, $\beta$ is the growth rate of a superattracting tent map, and the minimal polynomial for $\beta$ has all coefficients in $\{\pm 1\}$, not all of which are equal. Consider any such fixed $(y,\beta)$. By Period Doubling (Proposition \ref{p:squareroot}), for any $n \in \mathbb{N}$, we have that $\beta^{\frac{1}{2^n}}$ is the growth rate of a superattracting tent map. By Lemma \ref{l:Tiozzo4.2} \cite[Lemma 4.2]{TiozzoGaloisConjugates}, if $f(x)$ is the minimal polynomial for $\beta$, then $f(x^{2^n})$ is irreducible for all $n \in \mathbb{N}$. Hence, if $\gamma$ is any $(2^n)^{\textrm{th}}$ root of $y$, then $\gamma$ is a Galois conjugate of $\beta^{\frac{1}{2^n}}$. Consequently, for any $n \in \mathbb{N}$, the set $$\left\{\left(z,2^{\frac{1}{2^n}}\right) \in \mathbb{C} \times \mathbb{R} \mid \left(R^{-1}\right)^{\frac{1}{2^n}} < |z| < 1 \right\} \subset \Upsilon_2. $$ Therefore, by the Persistence Theorem \ref{mainthm:closurepersistence}, for each $n \in \mathbb{N}$, we have the inclusion $$\left\{\left(z,\lambda \right) \in \mathbb{C} \times \mathbb{R} \mid \left(R^{-1}\right)^{\frac{1}{2^n}} < |z| < 1, 2^{\frac{1}{2^n}} \leq \lambda \leq 2 \right\} \subset \Upsilon_2. $$ Since $\Upsilon_2$ is closed, in fact we have the stronger inclusion $$\left\{\left(z,\lambda \right) \in \mathbb{C} \times \mathbb{R} \mid \left(R^{-1}\right)^{\frac{1}{2^n}} \leq |z| \leq 1, 2^{\frac{1}{2^n}} \leq \lambda \leq 2 \right\} \subset \Upsilon_2. 
$$ \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm:connected}] Connectivity of the part of the Master Teapot outside of the unit cylinder is due to Tiozzo \cite{TiozzoGaloisConjugates}. Namely, by \cite[Lemma 7.3]{TiozzoGaloisConjugates}, for any point $(z,\beta) \in \mathbb{C} \times \mathbb{R}$ such that $\beta$ is the growth rate of a superattracting tent map, $z$ is a Galois conjugate of $\beta$, and $|z| > 1$, there exists a continuous path $(\gamma(x),x)$ in $\Upsilon_2$ connecting $(z,\beta)$ to a point $(w,1)$. Consequently, since the unit cylinder is in $\Upsilon_2$ by Theorem \ref{mainthm:unitcylinder}, and since $\Upsilon_2$ is closed, this implies $\Upsilon_2 \cap (\{z:|z| \geq 1\} \times \mathbb{R})$ is connected. By the Persistence Theorem \ref{mainthm:closurepersistence}, the part of the Master Teapot inside the unit cylinder is connected. Thus, the entire Master Teapot, $\Upsilon_2$, is connected. \end{proof} \section{Gaps in the Thurston set} \label{sec:gaps} Plots of finite approximations of the Thurston set consisting of the roots of all defining polynomials associated to superattracting tent maps of critical orbit length at most $n$, for fixed $n \in \mathbb{N}$, have ``gaps'' at certain algebraic integers, some of which are on the unit circle. The Thurston set contains a neighborhood of the unit circle \cite{TiozzoGaloisConjugates}, but these gaps get filled in more slowly with $n$ than some other regions. See Figure \ref{fig:thurston_set} for a picture of the entire Thurston set, and Figure \ref{fig:gaps} for a closeup of one such gap. In this section, we prove an arithmetic justification for these gaps: \begin{figure}[!h] \begin{center} \includegraphics[width=0.7\linewidth]{n2m23pt01.png} \caption{A closeup of how the ``gap'' around the point $i$ fills in as postcritical length increases, for an approximation of the Thurston set. The points are color-coded by the length of the associated post-critical orbit. 
Blue is the shortest, followed by green, yellow, orange, and finally red with the longest orbit, of length 23. } \label{fig:gaps} \end{center} \end{figure} \begin{theorem} \label{t:gaps} Let $\alpha$ be an algebraic integer such that $\mathbb{Z}[\alpha]$ is a discrete subgroup of $\mathbb{C}$ and let $x \in \mathbb{Z}[\alpha]$. Set $c = \min \{|z| : z \in \mathbb{Z}[\alpha], z \neq 0\}$. Suppose there exists a superattracting tent map with postcritical length $n$ whose growth rate has a Galois conjugate of the form $x+\epsilon$ for some $\epsilon \in \mathbb{C}$ with $|\epsilon| \leq \frac{1}{n+1}$. Then \begin{enumerate} \item if $|x| \geq 1$, then $\displaystyle\frac{c}{(2n^2 + 3n+1) |x|^n e} \leq |\epsilon|.$ \item if $|x| \leq 1$, then $\displaystyle\frac{c}{(2n^2+3n+1)|x| e} \leq |\epsilon|.$ \end{enumerate} \end{theorem} \begin{proof} Fix $x \in \mathbb{Z}[\alpha]$ and suppose there exists a real number $\beta$ associated to a generalized PCF $\beta$-map with $m$ intervals and postcritical length $n$ that has a Galois conjugate of the form $x+\epsilon$ for some $\epsilon \in \mathbb{C}$ with $|\epsilon| \leq \frac{1}{n+1}$. Then $\beta$ is a root of the associated Parry polynomial $P_{\beta,E}$; $$0 = z^{n+1}-(a_0z^n + a_1 z^{n-1}+\cdots + a_n) - 1,$$ where $a_i \in \{-2,0,2\}$. Hence $(x+\epsilon)$ is also a root of $P_{\beta,E}$: \begin{equation*} \label{eq:epsilonpoly2} 0 = (x+\epsilon)^{n+1} - (a_0(x+\epsilon)^n + a_1(x+\epsilon)^{n-1}+ \cdots + a_n)-1. \end{equation*} Therefore \begin{align*} 1-x^{n+1}+a_0x^n + \dots +a_n &= (x+\epsilon)^{n+1}-x^{n+1} -\big( a_0((x+\epsilon)^{n}-x^n) \\ &\qquad + a_1((x+\epsilon)^{n-1}-x^{n-1}) +\dots + a_{n-1}((x+\epsilon)-x)\big). \end{align*} We have $1-x^{n+1}+a_0x^n + \dots +a_n \in \mathbb{Z}[\alpha]$, so $c \leq |1-x^{n+1}+a_0x^n + \dots +a_n|$. 
Then by the triangle inequality, \begin{align}\label{eq:firsttriangleinequality} \begin{split} c &\leq |1-x^{n+1}+a_0x^n + \dots +a_n| \\ & \leq |(x+\epsilon)^{n+1}-x^{n+1}| + |a_0||(x+\epsilon)^{n}-x^n| + |a_1||(x+\epsilon)^{n-1}-x^{n-1}| + \dots + |a_{n-1}||(x+\epsilon)-x|. \end{split} \end{align} We now restrict to the case $|x| \geq 1$. For any $k \leq n+1$, by the binomial theorem, the bound $\binom{k}{i} \leq \frac{k^i}{i!}$, and $|\epsilon| \leq \frac{1}{n+1}$, \begin{align} \label{eq:biginequalitybigx} \begin{split} | (x+\epsilon)^k - x^k | &= \quad\left| \sum_{i=1}^k \begin{pmatrix} k \\ i \end{pmatrix} x^{k-i} \epsilon^i \right| \quad\leq\quad \sum_{i=1}^k \left| \begin{pmatrix} k \\ i \end{pmatrix} x^{k-i} \epsilon^i \right| \\ &\leq \quad \sum_{i=1}^k \frac{k^i}{i!}\, |x|^{k-i}\, \frac{1}{(n+1)^{i-1}}\, |\epsilon| \quad=\quad \sum_{i=1}^k \left( \frac{k}{n+1} \right)^{i-1} \frac{k}{i!} \ |\epsilon| \ |x|^{k-i} \\ &\leq\quad |\epsilon|\, k\, |x|^{k-1} \sum_{i=1}^k \frac{1}{i!} \quad\leq \quad |\epsilon|\, k\, |x|^{k-1} \sum_{i=0}^{\infty} \frac{1}{i!} \quad=\quad |\epsilon|\, k\, |x|^{k-1}\, e, \end{split} \end{align} where in the third step we used $k \leq n+1$ and $|x|^{k-i} \leq |x|^{k-1}$. \noindent Combining equations (\ref{eq:firsttriangleinequality}) and (\ref{eq:biginequalitybigx}) yields \begin{align*} \begin{split} c & \leq |\epsilon| (n+1) e |x|^n + |a_0|\, |\epsilon|\, n e |x|^{n-1} + \dots + |a_{n-1}|\, |\epsilon|\, e \\ & \leq |\epsilon| (n+1) e |x|^n \left(1 + |a_0| + \dots + |a_{n-1}| \right) \\ & \leq |\epsilon| (n+1) e |x|^n \left(1 +2n \right). \\ \end{split} \end{align*} \noindent Thus $$\frac{c}{e (1+2n) (n+1) |x|^n} \leq |\epsilon|.$$ We now restrict to the case $|x| \leq 1$. In this case, the estimate (\ref{eq:biginequalitybigx}) becomes \begin{equation} \label{eq:littlexcase} \left|(x+\epsilon)^k - x^k\right| \leq |\epsilon|\, k\, |x|\, e. 
\end{equation} Combining equations (\ref{eq:firsttriangleinequality}) and (\ref{eq:littlexcase}) yields $$c \leq |\epsilon| (n+1) e |x| (1+|a_0| + |a_1| + \dots +|a_{n-1}|) \leq |\epsilon| (n+1) e |x| (1+2n) .$$ Hence, for $|x| \leq 1$, $$ \frac{c}{(n+1)(1+2n)|x| e} \leq |\epsilon|.$$ \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm:gaps}] In view of Theorem \ref{t:gaps}, it suffices to classify the discrete subrings of $\mathbb{C}$. This classification is well-known, and we include it for completeness: first, since such a ring is a discrete additive subgroup, it is either $\mathbb{Z}$ or a lattice of rank $2$. In the latter case, let $\{1, a\}$ be a basis of the lattice; then $a$ must be an algebraic integer of degree $2$, so it can be chosen to be of either the form $\sqrt{-D}$ or ${1+\sqrt{-D}\over 2}$ (the latter only when $D=4n+1$), where $D$ is some positive integer. Requiring that there is an element not on the real line with absolute value less than $2$ means that $D=1, 2, 3, 5$. \end{proof} \section{$\Omega_2$ and $\Omega_2^{pre}$ are not equal}\label{sec:preperiodic} In this section we prove Theorem \ref{mainthm:prepernotequal}, that $\Omega_2$ and $\Omega_2^{pre}$ are not equal. $\Omega_2$ is shown in Figure \ref{fig:thurston_set}, and $\Omega_2^{pre}$ is shown in Figure \ref{fig:preperiodic}. \begin{figure}[!hb] \begin{center} \includegraphics[width=\linewidth]{preperiod19smaller.jpg} \caption{An approximation of the preperiodic Thurston set, $\Omega_2^\text{pre}$, consisting of the roots of all minimal polynomials associated to postcritically finite tent maps for which the sum of the pre-critical length and the period is at most 19. Compare this with the Thurston set $\Omega_2$ in Figure \ref{fig:thurston_set}, and note in particular the difference in a large neighborhood of the point 1. 
} \label{fig:preperiodic} \end{center} \end{figure} As outlined in \S\ref{ss:IFSdescription}, a point $z \in \mathbb{D}$ is in $\Omega_2$ if and only if $0$ is in the limit set of the iterated function system generated by $f_z,g_z$, where $$f_z:x \mapsto zx+1, \quad g_z:x \mapsto zx-1.$$ Denote the alphabet $\{f_z,g_z\}$ by $\mathcal{F}_z$ and denote the alphabet of inverses $\{f^{-1}_z,g^{-1}_z\}$ by $\mathcal{F}^{-1}_z$. For a word $w=w_1,\dots,w_n$ in the alphabet $\mathcal{F}_z$ or in the alphabet $\mathcal{F}^{-1}_z$, define the action of $w$ on $\mathbb{C}$ by $$w(x) = w_n \circ \dots \circ w_1 (x).$$ \begin{lemma} \label{l:notinThurstonSetCriterion} Fix $z \in \mathbb{D} \setminus \{0\}$. If there exists $n \in \mathbb{N}$ such that $$\min \left \{ |v(0)| : v \in (\mathcal{F}^{-1}_z)^n \right \} > \frac{1}{1-|z|},$$ then $z \not \in \Omega_2$. \end{lemma} \begin{proof} Suppose $z \in \mathbb{D} \cap \Omega_2$. Then $0$ is in the limit set $\Lambda_z$. Since $\Lambda_z = f_z(\Lambda_z) \cup g_z(\Lambda_z)$, it follows that $\Lambda_z$ is fixed by taking the union of the images of $\Lambda_z$ under all words of length $n$, for any $n \in \mathbb{N}$: $$\Lambda_z = \bigcup_{w \in (\mathcal{F}_z)^n} w(\Lambda_z).$$ Hence, for any $n \in \mathbb{N}$, each point in $\Lambda_z$ is the image of a point in $\Lambda_z$ under some word in $\mathcal{F}_z$ of length $n$. In particular, $0$ is the image of a point in $\Lambda_z$ under some word in $\mathcal{F}_z$ of length $n$. Since $\Lambda_z \subset B_{\frac{1}{1-|z|}}(0)$ by Lemma \ref{l:boundsonlimitset}, this implies that for any $n \in \mathbb{N}$, $$ \left( \bigcup_{v \in (\mathcal{F}^{-1}_z)^n} v(0) \right) \cap B_{\frac{1}{1-|z|}}(0) \neq \emptyset.$$ In particular, $\min \{ |v(0)| : v \in (\mathcal{F}^{-1}_z)^n \} \leq \frac{1}{1-|z|}$ for every $n \in \mathbb{N}$, and the lemma follows by contraposition. \end{proof} \begin{proof}[Proof of Theorem \ref{mainthm:prepernotequal}] We will exhibit a point that is in $\Omega_2^{pre}$ but not in $\Omega_2$. 
Let $w$ be the preperiodic itinerary $$w=1000011100(101000)^\infty.$$ One may verify that $\sigma^j(w) \leq_E w$ for every integer $j \geq 0$. Hence, by the Admissibility Criterion (Fact \ref{t:admissible}), $w$ is the itinerary of $1$ under a preperiodic tent map. One may then calculate from $w$ the sequence of digits: $$2000022200(202000)^\infty$$ and the sequence of cumulative signs: $$ +-----+-++ (+--+++)^{\infty}. $$ Then the $\beta$-expansion of $1$, where $\beta$ is the slope of the associated tent map (Fact \ref{fact:betaexpansion}), is given by \begin{equation}\label{eq:examplebetaexpansion} 1=\frac{2}{\beta}- \frac{2}{\beta^6}+\frac{2}{\beta^7}-\frac{2}{\beta^8}+ \frac{1}{\beta^{10}} \left(\frac{2}{\beta^1}-\frac{2}{\beta^3} \right) \sum_{n=0}^{\infty} \left(\frac{1}{\beta^6}\right)^n. \end{equation} Substituting in the sum of the geometric series and clearing denominators, the equation (\ref{eq:examplebetaexpansion}) becomes $$ 0=2 - 4 \beta + 2 \beta^2 + 2 \beta^3 - 2\beta^6 - \beta^8 + 2\beta^{13} - \beta^{14},$$ which factors as $$0 = \big(-1 + \beta\big) \big(1 + \beta\big) \big(2 - 4 \beta + 4 \beta^2 - 2 \beta^3 + 4 \beta^4 - 2 \beta^5 + 2 \beta^6 - 2 \beta^7 + \beta^8 - 2 \beta^9 + \beta^{10} - 2 \beta^{11} + \beta^{12}\big).$$ Let $P$ be the irreducible polynomial $$P(x)=x^{12} - 2x^{11} + x^{10} - 2x^9 + x^8 - 2x^7 + 2x^6 - 2x^5 + 4x^4 - 2x^3 + 4x^2 - 4x + 2.$$ By construction, the roots of $P$ are in $\Omega_2^{pre}$. Let $p$ be the root of $P$ with approximate value $$p \approx 0.5393738531461442 + 0.4050155839374199i.$$ Since $|p|$ is approximately $0.674509$, $p \in \mathbb{D} \cap \Omega_2^{pre}$. Let $\mathcal{F}^{-1}_p$ be the alphabet consisting of the two maps $f_p^{-1}$ and $g_p^{-1}$, where $$f_p^{-1}:x \mapsto \frac{x-1}{p}, \quad g_p^{-1}:x \mapsto \frac{x+1}{p}.$$ Computation shows that $$\min \left \{ |v(0)| : v \in (\mathcal{F}^{-1}_p)^5 \right \} \approx 4.3792,$$ which is much bigger than $\frac{1}{1-|p|} \approx 3.07228.$ 
Consequently, Lemma \ref{l:notinThurstonSetCriterion} implies that $p \not \in \Omega_2$. \end{proof} \bibliographystyle{alpha}
\section{Introduction} The problem of quantifying the complicated and fascinating microstructures of materials like metals has been around for many years. It is an important issue in Materials Science because modeling 3D microstructures and relating these models to specific properties of the metals can give rise to new kinds of metals with desired performance. Indeed, given a good model for the microstructure, simulations can be performed to generate `digital versions' of the microstructure and to test its properties, for instance mechanical properties, using yet other models that establish the relation between microstructural and mechanical properties. These simulations, approximating reality, allow the researcher to test material relatively quickly and at relatively low cost, compared to real physical experiments. It is clear that an important and challenging statistical question to be answered is whether a specific model for a microstructure is adequate, given measured data. In attempting to answer this question, several points need to be touched upon. The first point concerns the choice of a model. There exists a vast choice of models and among them, Voronoi diagrams have been extensively studied and used \cite{okabe09}. In particular, Poisson-Voronoi diagrams, only involving one nonnegative intensity parameter $\lambda$, represent the most basic case for modeling microstructures. In fact, they are often used in applications involving single-phase steel \cite{okabe09, kumar94, lorzkrawietz91}. More sophisticated models have been proposed, but in this paper we will concentrate on the Poisson-Voronoi model. A second point concerns the available data. While the microstructure of a material is the arrangement of grains and phases in a three dimensional (3D) space, the material is typically observed in two dimensions (2D). Usually, a small sample from inside the material is obtained and the exposed surface is examined under a microscope. 
Therefore, the work involves the study of 2D sections from which 3D microstructure information has to be extracted. Under the 3D Poisson-Voronoi model, the observable 2D section is a realization of a so-called \textit{2D sectional Poisson-Voronoi diagram}, often denoted by $\mathcal{V}_{\Phi}(2,3)$. It is the result of the intersection of a fixed plane and a 3D Poisson-Voronoi diagram. Only limited analytical results about the geometrical characteristics of its grains are available; for most of them, numerical results have been obtained through Monte Carlo simulations \cite{lorz90}. If the Poisson-Voronoi diagram is a good model, the intensity parameter $\lambda$ can be estimated from 2D sections, and it is then possible to infer distributions of almost all 3D microstructural properties, such as grain volume, grain surface area and grain number of faces \cite{vittorietti17}. The last point is about model validation. The question that this paper wants to answer is ``Given a real 2D materials section, could a Poisson-Voronoi diagram be a good model for approximating the 3D materials microstructure?'' We propose several tests for the Poisson-Voronoi hypothesis. These are all based on contrasts between features of the observed 2D picture and the features one would expect if the data were generated according to the Poisson-Voronoi model. The structure of the paper is as follows. After reviewing the basic concepts of Voronoi diagrams (Section \textbf{\ref{sec:PVdiagrams}}), we recall the main stereological relations which can be used to estimate $\lambda$ based on a 2D sectional Poisson-Voronoi diagram and the most used intensity estimators introduced in \cite{lorzhahn94} (Section \textbf{\ref{sec:estimators}}). Then, we move to the testing framework (Section \textbf{\ref{sec:test}}). We distinguish periodic and non periodic boundary conditions. 
The former case is very popular in materials science practice; it makes it possible to approximate `infinite structures', giving nice scaling properties and avoiding so-called `edge effects'. The latter more closely resembles real situations. Assuming periodic boundary conditions, in Section \textbf{\ref{subsec:testper}} the distributions of the main geometrical characteristics of the 2D sectional cells are numerically obtained and two model tests are proposed. The first, already introduced in \cite{lorzhahn93}, is based on the coefficient of variation of the cell (or grain) areas; the second is a Kolmogorov-Smirnov type test based on the cumulative distribution function of the cell areas. In Section \textbf{\ref{subsec:testnonper}} the two tests previously mentioned are adapted to the non periodic boundaries setting. An additional test is defined, using tools from the emergent area of Topological Data Analysis (TDA), which combines the two disciplines of Statistics and Topology. The focus is on \textit{persistent homology}, the branch of TDA that summarizes the 2D picture using various functions. After briefly and intuitively explaining the basic concepts of persistent homology and the common ways of representing it (the \textit{persistence diagram}), a test based on the squared distances between \textit{persistence landscapes} is presented (Section \textbf{\ref{subsubsec:perapproach}}). In Section \textbf{\ref{sec:quantile}}, we carry out a computer simulation to estimate the quantiles of the proposed model test statistics. We consider null distributions for the test statistics conditional on the number of visible cells in 2D. For a general test statistic, the conditional distribution is expressed in terms of quantities that involve the (unknown) intensity parameter $\lambda$ of the 3D Poisson process and quantities independent of $\lambda$. 
Finally, in Section \textbf{\ref{sec:application}}, we show an application of our work based on scanned images of alumina ceramics from \cite{lorzhahn93}. The different tests belonging to the different approaches are performed and the results are compared. A brief discussion of future developments follows in Section \textbf{\ref{sec:discussion}}. \section{Voronoi Diagrams} \label{sec:PVdiagrams} We begin by reviewing the generic definition and the basic properties of the Voronoi diagram. Given a denumerable set of distinct points in $\mathbb{R}^d$, $\textbf{\textrm{X}}=\{x_i: i\ge1\}$, the Voronoi diagram of $\mathbb{R}^d$ with \textit{nuclei} $\{x_i\}$ (also called \textit{sites} or \textit{generator points}) is a partition of $\mathbb{R}^d$ consisting of cells \begin{equation}\notag C_i=\{ y\in\mathbb{R}^d\,:\, \| x_i-y \| \le \|x_j-y\| \text{ for } j\ne i \},\,\,\, i=1,2,\dots \label{eq:vorcell} \end{equation} where $\|\cdot\|$ is the usual Euclidean norm. This means that, given a set of two or more distinct points, every location in the space is associated with the closest member(s) of the point set with respect to the Euclidean distance. If $\textbf{\textrm{X}}=\mathrm{\Phi}=\{x_i\}$ is the realization of a homogeneous Poisson point process, then we will refer to the resulting structure as the \textit{Poisson-Voronoi diagram} and denote it by $\mathcal{V}_\Phi$. This model is characterized by one single intensity parameter $\lambda$, the mean number of points generated according to the Poisson point process per unit volume. Okabe et al. \cite{okabe09} synthesize previous research activity on the properties of Poisson-Voronoi diagrams. Although the moments of several geometrical characteristics are known, the distributions of the main features, especially in 3D, are not. In \cite{vittorietti17} a simulation study is conducted for finding accurate approximations for these distributions. 
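The nearest-nucleus rule in eq. \textbf{\ref{eq:vorcell}} is straightforward to sketch in code. The following toy Python snippet (ours, for illustration only) assigns a query location to its Voronoi cell; conditional on their number, the points of a homogeneous Poisson process in a box are i.i.d. uniform, which is what the toy generator below mimics.

```python
import math
import random

def nearest_nucleus(y, nuclei):
    """Index i of the Voronoi cell C_i containing y: the nucleus x_i
    minimizing the Euclidean distance ||x_i - y||."""
    return min(range(len(nuclei)), key=lambda i: math.dist(nuclei[i], y))

random.seed(0)
# Toy Poisson-Voronoi generator points: conditional on their number,
# Poisson points are i.i.d. uniform in the unit cube.
nuclei = [tuple(random.random() for _ in range(3)) for _ in range(20)]
cell_index = nearest_nucleus((0.5, 0.5, 0.5), nuclei)
```

Ties (locations equidistant from two or more nuclei) lie on cell boundaries; `min` simply returns the lowest index there.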
A Generalized Gamma distribution is found to be the best approximating distribution among the well-known parametric densities frequently used in this framework. Exploiting the scaling property of the Poisson process, one obtains the distribution of the main geometrical characteristics for arbitrary $\lambda$. In real experiments, it is often not possible to deal directly with 3D structures. Instead, one has to base inference on pictures of 2D sections of the 3D structure. In \cite{chiu96}, Chiu et al.\ answer a fundamental question: ``For integers $2\le t\le d-1$, is the intersection between an arbitrary but fixed $t$-dimensional linear affine subspace of $\mathbb{R}^d$ and the $d$-dimensional Voronoi tessellation generated by a point process $\Phi$ a $t$-dimensional Voronoi tessellation?'' The answer to this question is negative when $\Phi$ is a Poisson point process \cite{mecke84, chiu96}. Moreover, each cell in a sectional Poisson-Voronoi tessellation is almost surely a \textit{non-Voronoi} cell \cite{chiu96}. For 2D and 3D Poisson-Voronoi diagrams, and also for 2D sectional Poisson-Voronoi diagrams, much information about moments and scaling of the main geometrical characteristics is known, but no analytic expressions for their distributions are available so far. In this paper, we focus on the distribution of the area, the perimeter and the number of edges of cells in 2D sectional Voronoi diagrams. The major results are summarized in Table \textbf{\ref{tab:moments}}. 
\begin{table}[!h] \centering \caption{The first and second order moments of the main geometrical characteristics of a 2-dimensional sectional Poisson-Voronoi diagram} \begin{threeparttable} \begin{tabular}{lcc} \hline & Expected value & Second moment \\ \hline Number of vertices/edges & $6$&$38.827$\tnote{*} \\ Area & $0.686\lambda^{-2/3}$&$0.699\lambda^{-4/3}$ \\ Perimeter & $3.136\lambda^{-1/3}$&$11.308\lambda^{-2/3}$\tnote{*} \\ \hline \end{tabular} \begin{tablenotes} \item[*] \footnotesize{The constants are estimated values; \cite{okabe09}.} \end{tablenotes} \end{threeparttable} \label{tab:moments} \end{table} In the next section, we will see how stereological relations can be used to obtain estimates of the intensity parameter $\lambda$ of the 3D generating Poisson process based on the 2D sections. \section{Stereological estimators for the intensity parameter $\lambda$} \label{sec:estimators} Basic stereological relationships exist which are independent of any underlying tessellation model. Moreover, in the literature explicit (scaling) relations are known expressing the expected number of vertices per unit area, $P_A$, the expected number of cells per unit area, $N_A$, and the mean total edge length per unit area, $L_A$, in terms of the intensity parameter $\lambda$ for a generating 3D Poisson process. Combining stereological and scaling relationships, the following expressions hold, see e.g.\ \cite{lorzhahn94}. \begin{align*}\notag P_A&=\frac{8}{15}\cdot\left(\frac{3}{4}\right)^{1/3}\cdot\pi^{5/3}\Gamma\left(\frac{4}{3}\right)\cdot\lambda^{2/3}=c_1\cdot\lambda^{2/3}\\ N_A&=\frac{4}{15}\cdot\left(\frac{3}{4}\right)^{1/3}\cdot\pi^{5/3}\Gamma\left(\frac{4}{3}\right)\cdot\lambda^{2/3}=\frac{c_1}{2}\cdot\lambda^{2/3} \mbox{ and }\\ L_A&=\pi\cdot\left(\frac{\pi}{6}\right)^{1/3}\cdot\Gamma\left(\frac{5}{3}\right)\cdot\lambda^{1/3}=c_2\cdot\lambda^{1/3}. 
\end{align*} Furthermore, exploiting the simple relation between $N_A$ and the expected area of the cell profiles, $\mathbb{E}(a)$, $N_A=\frac{1}{\mathbb{E}(a)}$, four estimators for $\lambda$ can be obtained: \begin{align} \notag \hat{\lambda}_P&=\biggl(\frac{\hat{P}_A}{c_1}\biggr)^{3/2}\approx 0.2008\cdot\hat{P}_A^{3/2},\,\, \hat{\lambda}_N=\biggl(\frac{2\hat{N}_A}{c_1}\biggr)^{3/2}\approx 0.5680\cdot\hat{N}_A^{3/2}\\ \label{eq:estimators} \hat{\lambda}_L&=\biggl(\frac{\hat{L}_A}{c_2}\biggr)^{3}\approx 0.0837\cdot\hat{L}_A^{3},\,\, \hat{\lambda}_a=\biggl(\frac{2}{c_1\bar{a}}\biggr)^{3/2}\approx 0.5680\cdot\bar{a}^{-3/2}. \end{align} Here the hats indicate natural estimates for the mean quantities based on the data (like `number of cells divided by observed area' for $\hat{\lambda}_N$). In \cite{lorzhahn94}, the behavior of the estimators is investigated by means of a computer simulation. The authors state that the estimators show hardly any difference concerning bias and variance, that the biases are less than 1\% for sample size $n=50$, and that they decrease rapidly with increasing sample size. Once we have an estimate $\hat{\lambda}$ of the intensity parameter, it can be used for estimating the distribution of the main geometrical 3D features of the grains \cite{vittorietti17}. An additional important issue of interest is whether the Poisson-Voronoi assumption is suitable in view of the observed 2D picture. We will consider this problem in the next section. \section{Model tests for validity of the Poisson-Voronoi assumption} \label{sec:test} In \cite{lorzkrawietz91,lorzhahn93,stamm97} several model tests based on the distribution of geometrical features of the grains in random plane sections of a spatial tessellation are proposed. More precisely, in \cite{lorzkrawietz91} the authors propose five stereological model tests based on the distribution of the number of cell vertices. 
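The constants $c_1$, $c_2$ and the four estimators of eq. \textbf{\ref{eq:estimators}} can be transcribed directly into code; the following sketch (ours) uses only the Python standard library and reproduces the approximate numerical constants.

```python
import math

# Constants c1, c2 from the stereological/scaling relations above.
c1 = (8 / 15) * (3 / 4) ** (1 / 3) * math.pi ** (5 / 3) * math.gamma(4 / 3)
c2 = math.pi * (math.pi / 6) ** (1 / 3) * math.gamma(5 / 3)

def lambda_P(P_A_hat):   # from the observed vertex intensity
    return (P_A_hat / c1) ** 1.5        # approx 0.2008 * P_A^(3/2)

def lambda_N(N_A_hat):   # from the observed cell intensity
    return (2 * N_A_hat / c1) ** 1.5    # approx 0.5680 * N_A^(3/2)

def lambda_L(L_A_hat):   # from the observed edge-length intensity
    return (L_A_hat / c2) ** 3          # approx 0.0837 * L_A^3

def lambda_a(a_bar):     # from the mean cell area, via N_A = 1/E(a)
    return (2 / (c1 * a_bar)) ** 1.5    # approx 0.5680 * a_bar^(-3/2)
```

As a consistency check, plugging in the mean sectional cell area $0.686$ from Table \ref{tab:moments} returns an intensity estimate close to $1$.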
The power of the model tests is investigated under some special parametric alternative hypotheses: a Mat\'ern cluster point process, a Mat\'ern hard-core point process and a simple sequential inhibition point process. Secondly, in \cite{lorzhahn93} three different model tests are considered: the first two are based on the variability of the section cell areas, while the third is motivated by a well-known relationship between the specific edge length $L_A$ and the point process intensities $\lambda$ and $P_A$. In line with their previous work, the authors propose one-sided and two-sided tests for distinguishing a Poisson-Voronoi tessellation from more regular or more irregular tessellations. The null distributions of the test statistics are approximated using simulation. Simulations also show that the model tests are quite powerful in discriminating the different kinds of plane sections. It is interesting to note that all their tests are based on summarizing indices like the coefficient of variation, the skewness index, etc., and that the best behavior among them is reported for the test based on the coefficient of variation of the cell areas (eq. \textbf{\ref{eq:cv}}), also used by the authors in \cite{stamm97}. In this paper we introduce test statistics that use more of the information contained in the data than summarizing indices alone. To this end, we use tools belonging to different branches of statistics. Moreover, we describe a partly simulation-based framework to approximate the null distributions of the test statistics considered. Before going more deeply into the testing problem, it is necessary to make a distinction between periodic and non periodic boundary conditions. On the one hand, periodic boundary conditions are mathematically convenient as they provide a natural way to deal with edge effects. 
Moreover, for large volumes and large values of $\lambda$, the construction mimics the infinite volume situation in which the convenient scaling results mentioned in Section \textbf{\ref{sec:estimators}} hold. For real materials, on the other hand, the periodic boundary constraint is not realistic, and the approach without periodic boundary conditions more closely resembles the actual data. It will be seen that, in determining null distributions of the test statistics, this approach is slightly more simulation based, but also more tailored to the data and the 3D object at hand. \subsection{Periodic Boundary Conditions} \label{subsec:testper} The first simulation study involves a Monte Carlo procedure. The following results are obtained by randomly generating approximately $1\,000$ points in a box of dimension $10\times10\times10$ and using eq. \textbf{\ref{eq:vorcell}} for creating 3D Poisson-Voronoi cells. This is equivalent to saying that the generator points of the Poisson-Voronoi diagram are generated according to a Poisson process with intensity parameter $\lambda=1$. Then, sections with dimensions $10\times10$ (parallel to a cube face, to reduce boundary effects) are randomly taken. On average, the number of 2D cells in a section turns out to be approximately $146$. The simulation is conducted using the software provided by TATA Steel. The algorithm that the software exploits is described in \cite{vittorietti17}. The procedure consists of three main steps:\\ Repeat $1\,000\,000$ times: \begin{description} \item[Step 1]: Generate a 3D Poisson-Voronoi diagram with intensity parameter $\lambda=1$ applying periodic boundary conditions; \item[Step 2]: Take a random 2D section of the 3D structure; \item[Step 3]: Determine the geometrical characteristics of all cells in the 2D section. \end{description} Graphical representations of the results are shown in Figures \textbf{\ref{fig:areaDensDistr}, \ref{fig:perDensDistr}, \ref{fig:edgesDensDistr}}. 
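A deliberately coarse, pure-Python version of Steps 1-3 can be sketched as follows (our toy, not the TATA Steel software): exact cell geometry is replaced by a pixel grid on the section plane, and periodic boundary conditions are handled via minimum-image distances.

```python
import random

def periodic_dist2(a, b, size):
    """Squared minimum-image distance between 3D points a, b in a periodic
    cube of side `size`."""
    return sum(min(abs(u - v), size - abs(u - v)) ** 2 for u, v in zip(a, b))

def visible_cells(nuclei, size, z, grid=25):
    """Steps 2-3 (toy version): label a pixel grid on the plane {Z = z} by
    the nearest nucleus and return the set of visible cell labels."""
    labels = set()
    for i in range(grid):
        for j in range(grid):
            y = (size * (i + 0.5) / grid, size * (j + 0.5) / grid, z)
            labels.add(min(range(len(nuclei)),
                           key=lambda k: periodic_dist2(nuclei[k], y, size)))
    return labels

random.seed(1)
size, n_pts = 10.0, 1000  # Step 1: on average lambda = 1 in a 10x10x10 box
nuclei = [tuple(random.uniform(0, size) for _ in range(3)) for _ in range(n_pts)]
cells = visible_cells(nuclei, size, z=5.0)
```

With a grid this coarse, small sectional cells can be missed, so the number of visible labels slightly undershoots the theoretical average of about $146$.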
For estimating the grain area and grain perimeter distributions, a simple boundary correction for kernel density estimation is adopted \cite{jones93}. In fact, given the nonnegative support of the probability density functions of the two variables, the linear correction approach proposed in \cite{jones93} prevents the estimate from assigning mass outside $[0,\infty)$. The values in Table \textbf{\ref{tab:moments}} are the estimated values of the main geometrical features for a 2D sectional Poisson-Voronoi diagram. They are in agreement with both theoretical and simulation results known in the literature (cf. \cite{okabe09}). \begin{figure}[!h] \centering \subfloat[]{\includegraphics[width=7cm]{densityarea2d}} \subfloat[]{\includegraphics[width=7cm]{cumulativearea2dbis}} \caption{(a) Boundary corrected Kernel density estimate (Epanechnikov kernel, linear combination correction, $h=0.2$ \cite{jones93}) and (b) empirical cumulative distribution function of the area of 36\,480\,600 (originating from the 1\,000\,000 slices) 2D sectional cells, $\lambda=1$ } \label{fig:areaDensDistr} \end{figure} \begin{figure}[!h] \centering \subfloat[]{\includegraphics[width=7cm]{densityper}} \subfloat[]{\includegraphics[width=7cm]{cumulativeperimeter2d}} \caption{(a) Boundary corrected Kernel density estimate (Epanechnikov kernel, linear combination correction, $h=0.1$ \cite{jones93}) and (b) empirical cumulative distribution function of the perimeter of 36\,480\,600 2D sectional cells, $\lambda=1$ } \label{fig:perDensDistr} \end{figure} \begin{figure}[!h] \centering \subfloat[]{\includegraphics[width=7cm]{densityedges2d}} \subfloat[]{\includegraphics[width=7cm]{cumulativeedges2d}} \caption{(a) Relative frequencies and (b) empirical cumulative distribution function of the number of edges of 36\,480\,600 2D sectional cells, $\lambda=1$} \label{fig:edgesDensDistr} \end{figure} \begin{table}[!h] \centering \caption{Estimated moments of the geometrical features of 36\,480\,600 2D 
sectional cells, $\lambda$=1} \subfloat[][\emph{Area}] {\begin{tabular}{rr} \hline $\mu_1$ & 0.68524 \\ $\sigma$ & 0.47342 \\ $\mu_2$ & 0.69367 \\ $\mu_3$ & 30.37169 \\ $\mu_4$ & 40.94590 \\ \hline \end{tabular}} \subfloat[][\emph{Perimeter}] {\begin{tabular}{rr} \hline $\mu_1$ & 3.13345 \\ $\sigma$ & 1.60552 \\ $\mu_2$ & 12.39622 \\ $\mu_3$ & 2072.73503 \\ $\mu_4$ & 10695.17596 \\ \hline \end{tabular}} \subfloat[][\emph{Number of edges}] {\begin{tabular}{rr} \hline $\mu_1$ & 6.00000 \\ $\sigma$ & 1.69195 \\ $\mu_2$ & 38.86268 \\ $\mu_3$ & 9818.30810 \\ $\mu_4$ & 72107.17324 \\ \hline \end{tabular}} \label{tab:moments} \end{table} When it comes to the study of mechanical properties of metals, grain size is known to be an important parameter. In 2D, grain area therefore represents one of the most interesting features for sections of real materials, especially for single-phase materials \cite{hermann1989}. In this paper we therefore restrict ourselves to tests based on observed cell areas. The first test, mentioned before and already used in \cite{lorzhahn93,stamm97}, is based on the coefficient of variation of the observed cell areas: \begin{equation} C=\frac{\sqrt{\frac{1}{n-1}\sum_{i=1}^n(a_i-\bar{a})^2}}{\bar{a}}. \label{eq:cv} \end{equation} Here $a_i$ is the area of the $i$-th sectional cell and $\bar{a}$ is the mean cell area in the section. As the coefficient of variation is scale invariant, one just needs to compute the coefficient of variation of the cell areas of a real section and compare it with the quantiles of the distribution of this test statistic obtained under periodic boundary conditions. In fact, the information contained in the 2D section is clearly related to the number of observed cells ($n$), so the observed value of $C$ is compared with a quantile of the conditional distribution of $C$ given $n$, which only depends on the number of cells observed in the 2D section.
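Computing the statistic in eq. \textbf{\ref{eq:cv}} is straightforward; a minimal sketch in Python (the cell areas below are hypothetical):

```python
import numpy as np

def coefficient_of_variation(areas):
    """Test statistic C: sample standard deviation of the observed
    cell areas (denominator n-1) divided by their mean."""
    areas = np.asarray(areas, dtype=float)
    return areas.std(ddof=1) / areas.mean()

# Hypothetical areas of the cells observed in a 2D section:
c = coefficient_of_variation([0.3, 0.9, 1.2, 0.5, 0.7])
```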
The second test is based on the cumulative distribution function (CDF) of the area of the 2D sectional cells. More precisely, it is a Kolmogorov-Smirnov type test given by the supremum distance between the CDF of the area of the cells of the section for which one wants to test the Poisson-Voronoi hypothesis and a function that reflects our expectation of the empirical distribution function under the Poisson-Voronoi assumption. For the latter we choose a very accurate simulation-based approximation of the CDF of the area of $36\,480\,600$ sectional Poisson-Voronoi cells. Let $F_1$ be the cumulative distribution function of the areas of the 2D sectional cells with intensity parameter $\lambda=1$, approximated via simulation as described above, and let $\hat G$ be the empirical distribution function of the area of $n$ cells of a 2D section from a 3D structure with intensity parameter $\lambda$. First, we use eq. \textbf{\ref{eq:estimators}} to estimate the intensity based on the considered section, obtaining $\hat{\lambda}_a$. Furthermore, inspired by \textbf{Lemma 3} in \cite{vittorietti17}, we define the next test statistic as the supremum distance between the two functions: \begin{equation} D(F_1,\hat G)=\sup_{x\ge0}|F_1(x)-\hat G(\hat{\lambda}_a^{\frac{2}{3}}x)|. \end{equation} We will return to the issue of approximating the null distribution of this test statistic in Section \textbf{\ref{sec:quantile}}. \subsection{Non Periodic Boundary Conditions} \label{subsec:testnonper} In most real situations, the data available refer to a material section with completely visible as well as partially visible grains. In such situations it is not realistic to use periodic boundary conditions in the model. We fix the geometry of the 3D volume and 2D slice as in the periodic boundary case (Section \textbf{\ref{subsec:testper}}).
Then the procedure can be summarized in three main steps:\\ Repeat $1\,000\,000$ times: \begin{description} \item[Step 1]: Generate a 3D Poisson-Voronoi diagram with intensity parameter $\lambda$ without applying periodic boundary conditions. In this paper, for reasons that will become clear later, $\lambda=0.2$ is chosen; \item[Step 2]: Take a random 2D section of the 3D structure; \item[Step 3]: Determine the geometrical characteristics of the completely visible and the partially visible cells in the 2D section. \end{description} In this setting we consider three different tests. The first is exactly the one introduced in Section \textbf{\ref{subsec:testper}} (eq.\textbf{ \ref{eq:cv}}), based on the coefficient of variation of the areas of the totally and partially visible cells. Obviously, the corresponding quantiles of the distribution of the test statistic differ from those of the previous case. The second statistic is in line with the test based on the cumulative distribution function of the cell areas seen in Section \textbf{\ref{subsec:testper}}, but the formulation is slightly different. It is expressed by \begin{equation} D(\bar F_{\lambda \,n_{2D}},\hat G_{n_{2D}})=\sup_{x\ge0}|\bar F_{\lambda \,n_{2D}}(x)- \hat G_{n_{2D}}(x)| \label{eq:testcdf} \end{equation} where $\bar F_{\lambda \,n_{2D}}$ is the expected CDF conditioned on the event of observing exactly $n_{2D}$ sectional cells, with estimated parameter $\lambda$. In Section \textbf{\ref{sec:quantile}}, it will be explained in more detail how this can be computed. $\hat G_{n_{2D}}$ is the empirical CDF of the areas of the totally and partially visible cells of the section under study. The last test exploits tools from the emerging field of Topological Data Analysis. We will now explain the main concepts of persistent homology necessary for our model test.
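As a minimal numerical sketch of the statistic in eq. \textbf{\ref{eq:testcdf}} (in Python, assuming the expected CDF has already been tabulated on a grid of area values; the exponential reference curve and the cell areas below are placeholders, not the simulated $\bar F_{\lambda \,n_{2D}}$):

```python
import numpy as np

def sup_distance(expected_cdf_on_grid, areas, grid):
    """Supremum distance between an expected CDF (tabulated on `grid`)
    and the empirical CDF of the observed cell areas, with the supremum
    approximated over the finite grid."""
    areas = np.asarray(areas, dtype=float)
    g_hat = np.array([(areas <= x).mean() for x in grid])  # empirical CDF
    return float(np.max(np.abs(np.asarray(expected_cdf_on_grid) - g_hat)))

grid = np.linspace(0.0, 10.0, 1001)
expected = 1.0 - np.exp(-grid)          # placeholder reference CDF
d = sup_distance(expected, [0.4, 1.1, 0.8, 2.0], grid)
```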
\subsubsection*{Test based on Persistence plots} \label{subsubsec:perapproach} Instead of giving rigid mathematical and topological definitions, the aim of this section is to guide the reader, via intuitive concepts, through the construction of the \textit{persistence diagrams} and \textit{persistence landscapes} used for the last model test. Looking at one 2D image, it is hard to identify the really `important' features that uniquely characterize it. Topological data analysis (TDA) is a relatively new discipline that has provided new insight into the study of qualitative features of data. In particular, persistent homology is the branch of TDA that provides tools both for identifying qualitative features of data and for measuring the importance of those features. Key topological features of a set include connected components, holes, voids \dots. The main aim of persistent homology is to record the evolution of those characteristics with respect to a scale parameter $r$ that can usually be interpreted as time. To avoid long digressions from the real scope of the paper, most of the main concepts of homology and persistent homology are only roughly sketched; readers aiming at a formal definition of the following procedure are referred to \cite{hatcher2002} for more details. For illustrative reasons, and because in this study 2D images are used, the 2D case is considered, but the generalization to higher dimensions is not complicated. The input of the analysis typically takes the form of a point cloud $\textbf{\textrm{X}}$ (Fig. \textbf{\ref{fig:perexpl}(a)}). Based on that, a special structure is built, which provides information about the qualitative features discussed above. This structure is based on so called \textit{simplices}. A \textit{geometric} $k$\textit{-simplex} is the convex hull of $k+1$ affinely independent points $v_0,v_1,\dots,v_k$.
More precisely, $0$-simplices are vertices, $1$-simplices line segments and $2$-simplices triangles. \begin{figure}[!h] \centering \subfloat[]{\includegraphics[width=5cm]{pr1}} \subfloat[]{\includegraphics[width=5.5cm]{pr2}} \subfloat[]{\includegraphics[width=5.5cm]{pr3}} \subfloat[]{\includegraphics[width=5cm]{pr4}} \subfloat[]{\includegraphics[width=5cm]{pr5}} \subfloat[]{\includegraphics[width=5cm]{pr6}} \subfloat[]{\includegraphics[width=5cm]{pr7}} \subfloat[]{\includegraphics[width=5cm]{pr8}} \subfloat[]{\includegraphics[width=5cm]{pr9}} \caption{(a) Set of points $\textbf{\textrm{X}}$ (b) Voronoi Diagram (dashed) and Delaunay Triangulation (solid) (c) Circles with radius $0.47$ around the points of $\textbf{\textrm{X}}$; the Alpha complex $\alpha_r(\textbf{\textrm{X}})$ consists of the individual points of $\textbf{\textrm{X}}$ and the one edge corresponding to the two touching circles (d) Alpha complex for $r=1.32$ (e) Alpha complex, $r=1.35$ (f) Alpha complex, $r=1.66$ (g) Alpha complex, $r=2.76$ (h) Alpha complex, $r=3.61$ (i) Alpha complex, $r=3.68$; the entire Delaunay Triangulation.} \label{fig:perexpl} \end{figure} One way of building this structure starts off with the so called \textit{Delaunay Triangulation}, $DT(\textbf{\textrm{X}})$, of $\textbf{\textrm{X}}$. Basically, this is a graph consisting of the vertices in $\textbf{\textrm{X}}$, with an edge between two points if and only if their Voronoi cells share an edge (Fig. \textbf{\ref{fig:perexpl}(b)}). Then, circles are grown with increasing radius $r$, centered at the points in $\textbf{\textrm{X}}$. The \textit{Alpha complex}\footnote{Other common choices of simplicial complexes are \v{C}ech and Rips complexes; see \cite{hatcher2002,edelsbrunner2010}.} at radius $r$, $\alpha_r(\textbf{\textrm{X}})$, is a subcomplex of $DT(\textbf{\textrm{X}})$. In fact, for $r$ very small, the Alpha complex is nothing but the set $\textbf{\textrm{X}}$ of the generator points.
Then $r$ grows and once two circles intersect, the edge of the underlying Delaunay triangulation between the two circle centers is added to $\alpha_r(\textbf{\textrm{X}})$. Eventually, for $r$ very large, the Alpha complex is the Delaunay Triangulation itself (Fig. \textbf{\ref{fig:perexpl}(c-i)}). Now, rather than considering this structure for some fixed value of $r$, its evolution for growing $r>0$ is recorded. In particular, we keep track of the birth time $b$ and the death time $d$ of connected components and holes\footnote{In three dimensions one could also consider other topological features, like loops or voids.}, where the `time' is given by the radius of the circles corresponding to those events. One can think of the circles' radii as growing at a constant rate. At time zero, the Alpha complex equals $\textbf{\textrm{X}}$, and each point is a connected component on its own. These components are born at time zero. After some time, when the first two points get connected because their circles touch, two connected components merge, or, in other words, one connected component `dies'. In Figure \textbf{\ref{fig:perexpl}}, this happens for $r=0.47$; see subplot (c). For one connected component, we therefore have $(b,d)=(0,0.47)$. Increasing $r$ further, more connected components will `die' until only one remains for all $r$ large enough, because then all points are covered by the union of the circles. During the same process, it is also possible that holes appear. This happens when a triangle appears in the picture such that the $r$-circles around the three corner points of this triangle do not cover the whole triangle. At this time a hole is `born', yielding a birth time $b$ for this feature. It will also `die' again when $r$ is further increased and the circles centered at the corners do cover the whole triangle. Note that not all triangles that appear correspond to the birth of a hole.
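The bookkeeping of component deaths described above can be made concrete: two circles of radius $r$ touch when $r$ equals half the distance between their centers, so the death times of the connected components are half the edge lengths of the Euclidean minimum spanning tree of $\textbf{\textrm{X}}$. A sketch (Prim's algorithm, plain NumPy; holes require the full alpha-complex machinery and are not covered here):

```python
import numpy as np

def h0_death_times(points):
    """Death times of connected components in the growing-circles
    filtration: half the Euclidean minimum-spanning-tree edge lengths,
    found with Prim's algorithm on the full distance matrix."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()             # cheapest connection to the tree so far
    deaths = []
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = int(np.argmin(best))      # next merge of two components
        deaths.append(best[j] / 2.0)  # circles touch at r = distance / 2
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return sorted(deaths)             # n-1 deaths; one component survives

# Three collinear points: merges happen at r = 1/2 and r = 4/2.
deaths = h0_death_times([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
```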
For instance, in Figure \textbf{\ref{fig:perexpl}(g)} a triangle appears but the circles centered at the three corners immediately cover the whole triangle. The points $(b,d)$ thus obtained can be used as coordinates and plotted on a plane, resulting in the so called \textit{persistence diagram}. Since the topological features (connected components, holes) can only die \textit{after} they are born ($d\ge b$), necessarily each point appears on or above the diagonal line $y = x$. The persistence diagram corresponding to the data in Figure \textbf{\ref{fig:perexpl}} is shown in Figure \textbf{\ref{fig:perexpl2}}. The black dots, $D_{0i}$, on the vertical axis represent the `deaths' of connected components; the lowest being the aforementioned $(b,d)=(0,0.47)$, the highest, $(b,d)=(0,2.67)$, corresponding to Figure \textbf{\ref{fig:perexpl} (g)}. The red triangles $D_{1i}$, represent the birth- and death times of the holes. Based on persistence diagrams, several descriptive summarizing functions have been proposed in the literature. For example rank functions \cite{robins2016}, landscapes and silhouettes \cite{bubenik2015, chazal2014} and accumulated persistence functions \cite{biscio2016}. In this paper we follow the persistence landscapes approach, but any other summary statistic could also be used for testing. We first describe in words how to construct a landscape from a persistence diagram. Then, the formal definition follows. For each point $(b,d)$ in the persistence diagram, count the number of points to its left top (north-west). This is the \textit{rank} of the point $(b,d)$ and it can be interpreted as the number of features that are alive at time $b$ and that are still alive at time $d$. Then, draw horizontal and vertical lines from each point $(b,d)$ in the persistence diagram to the diagonal and `tip the diagram on its side'. Then take the contour of the projection of the points with the same rank. This results in the so-called landscape. 
This is done for connected components and holes separately, see Figure \textbf{\ref{fig:perrank}}. \begin{figure}[!h] \centering \includegraphics[width=6cm]{perdiag} \caption{Persistence Diagram. The black dots indicate the birth- and death time of connected components and the red triangles the birth- and death times of the holes. The data are the same as those used for Figure \textbf{\ref{fig:perexpl}}.} \label{fig:perexpl2} \end{figure} \begin{figure}[!h] \centering \subfloat[]{\includegraphics[width=6cm,angle=-90]{rank0}} \subfloat[]{\includegraphics[width=6cm,angle=-90]{rank1}} \subfloat[]{\includegraphics[width=7cm]{land0}} \subfloat[]{\includegraphics[width=7cm]{land1}} \caption{Rank function for connected components (a) and holes (b) Persistence Landscapes for connected components (c) and holes (d)} \label{fig:perrank} \end{figure} More formally, a persistence landscape is a sequence of continuous, piecewise linear functions $\lambda(k,\cdot):\mathbb{R}^+\rightarrow\mathbb{R}^+, \,\,\,\, k=1,2,\dots$. Denote the set of `persistence points' in the persistence diagram by $D$. Then for each $p=(b,d)\in D$ define the triangular functions \begin{equation*} \Lambda_p(t) =\begin{cases} t-b & t\in [b,\frac{b+d}{2}]\\ d-t & t\in (\frac{b+d}{2},d]\\ 0 & \text{otherwise}. \end{cases} \label{eq:trianglefun} \end{equation*} Then, the persistence landscape of the persistence diagram is defined by \begin{equation} \lambda_D(k,t)=k\max_{p\in D}\Lambda_p(t), \,\,\,\,\,\,\,\, t\ge0, \,\, k\in \mathbb{N}, \end{equation} where $k\max$ is the $k$th largest value in the set. Our test will be the contrast between the observed landscape and a landscape one would expect under the null hypothesis that the 3D structure is Poisson-Voronoi. For this \textit{mean landscape}, we use the conditional expectation of the landscape given that $N_{2D}=n_{2D}$ and approximate this using the simulation procedure described in Section \textbf{\ref{subsec:testper}}.
To be more specific, \begin{equation} \bar \lambda_{D_j}(k,t)=\frac{1}{n}\sum_{i=1}^n \lambda_{D_j (i)}(k,t) \,\,\, j=0,1,\,\,\,\, t\ge0, \end{equation} where $n$ is the number of 2D Poisson-Voronoi sections generated with $N_{2D}=n_{2D}$. Inspired by the approach proposed in \cite{robins2016}, the test statistics are then given by the distance between persistence landscapes and mean persistence landscapes using $L^2$ norm, \begin{equation} \label{eq:testperland} \begin{split} L_0&=\|\hat \lambda_{D_0}- \bar \lambda_{D_0}\|_2=\biggl[\sum_{k=1}^{n_{2D}-1}\int_{0}^T(\hat \lambda_{D_0}(k,t)- \bar \lambda_{D_0}(k,t))^2 \mathrm{dt}\biggr]^{\frac{1}{2}} \\ L_1&=\|\hat \lambda_{D_1}- \bar \lambda_{D_1}\|_2=\biggl[\sum_{k=1}^{\infty}\int_{0}^T(\hat \lambda_{D_1}(k,t)- \bar \lambda_{D_1}(k,t))^2 \mathrm{dt} \biggr]^{\frac{1}{2}}. \end{split} \end{equation} Here $\hat \lambda_{D_j}(k,\cdot), \,\,\,j=0,1$ is the $k$-th landscape for the connected components ($j=0$) and for the holes ($j=1$) for the 2D section under study. If both $L_0$ and $L_1$ are less than the threshold quantiles, the Poisson-Voronoi hypothesis is not rejected. \section{Bootstrap Confidence Interval for $\lambda$ and Quantiles of the model tests} \label{sec:quantile} In \cite{lorzhahn93}, the authors carry out a simulation for estimating the quantiles of the test statistics proposed there. Cells of 3D spatial Poisson-Voronoi diagrams are generated with $\lambda=1$. Then, a random planar section of the 3D structure is taken and square observation windows are drawn in the section planes with an expected number of 50, 100, 150 and 200 cells, respectively. We provide an expression for the distribution of any test statistic given the number of observed cells in the section, separating a part that depends on the parameter $\lambda$ and a part that does not. 
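Before turning to the null distributions, the landscape construction and the statistics of eq. \textbf{\ref{eq:testperland}} can be sketched numerically (Python; the toy diagram below and the zero mean landscape are placeholders for the simulated quantities):

```python
import numpy as np

def landscape(diagram, k, t_grid):
    """k-th persistence landscape of a diagram given as (birth, death)
    pairs: at each t, the k-th largest triangular function value."""
    t = np.asarray(t_grid, dtype=float)
    tri = np.array([np.maximum(np.minimum(t - b, d - t), 0.0)
                    for (b, d) in diagram])        # Lambda_p on the grid
    tri.sort(axis=0)                               # ascending in each column
    return tri[-k] if k <= len(diagram) else np.zeros_like(t)

def landscape_l2_distance(lams_obs, lams_mean, t_grid):
    """Sum over k of the integral of the squared landscape difference
    (trapezoidal rule on t_grid), followed by a square root."""
    t = np.asarray(t_grid, dtype=float)
    total = 0.0
    for obs, mean in zip(lams_obs, lams_mean):     # one pair per level k
        sq = (np.asarray(obs) - np.asarray(mean)) ** 2
        total += np.sum((sq[1:] + sq[:-1]) / 2.0 * np.diff(t))
    return float(np.sqrt(total))

# Toy diagram with two features, evaluated on a grid over [0, 3]:
diag = [(0.0, 2.0), (1.0, 3.0)]
grid = np.linspace(0.0, 3.0, 301)
lam1, lam2 = landscape(diag, 1, grid), landscape(diag, 2, grid)
# Distance of the toy landscapes from the zero (empty) landscape:
dist = landscape_l2_distance([lam1, lam2], [np.zeros_like(grid)] * 2, grid)
```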
We consider the situation where we see a window (with known shape and size) of a $2D$ planar section of a $3D$ Poisson-Voronoi diagram in a $3D$ object of known geometry. As before, denote by $N_{3D}$ the number of cells in the $3D$ object and by $N_{2D}$ the number of $2D$ cells visible in the $2D$ window. Lemma \textbf{\ref{lemma:lambdadep}} below gives an expression for the null distribution of a test statistic $T$, given that $n_{2D}$ cells are observed in the section. It separates a part that depends on the intensity parameter $\lambda$ from a part that does not. \begin{lemma} Let $T$ denote a general test statistic for validating the Poisson-Voronoi assumption. The conditional probability $P_\lambda(T\ge t|N_{2D}=n_{2D})$ can be expressed as \begin{equation} \label{eq1lambda} \small \begin{split} P_\lambda(T\ge t|N_{2D}=n_{2D}) & =\frac{\sum_{k=n_{2D}}^\infty P(T\ge t |N_{3D}=k,\, N_{2D}=n_{2D})\, P(N_{2D}=n_{2D}|N_{3D}=k)\frac{(\lambda \mathcal{V})^k}{k!}}{\sum_{j=n_{2D}}^\infty P(N_{2D}=n_{2D}|N_{3D}=j)\frac{(\lambda \mathcal{V})^j}{j!}}. \end{split} \end{equation} \label{lemma:lambdadep} \end{lemma} \begin{proof} \begin{equation} \label{eq2lambda} \small \begin{split} P_\lambda(T\ge t|N_{2D}=n_{2D}) & = \sum_{k=0}^\infty P_\lambda(T\ge t,\, N_{3D}=k| N_{2D}=n_{2D}) \\ & = \sum_{k=n_{2D}}^\infty P_\lambda(T\ge t,\, N_{3D}=k| N_{2D}=n_{2D}) \\ & = \sum_{k=n_{2D}}^\infty P_\lambda(T\ge t |N_{3D}=k,\, N_{2D}=n_{2D})\, P_\lambda(N_{3D}=k|N_{2D}=n_{2D})\\ & = \sum_{k=n_{2D}}^\infty P(T\ge t |N_{3D}=k,\, N_{2D}=n_{2D})\, P_\lambda(N_{3D}=k|N_{2D}=n_{2D}). \end{split} \end{equation} In the last equality the $\lambda$-dependence disappears from the first factor because, conditionally on $N_{3D}$, the distribution of $T$ does not depend on $\lambda$. The $\lambda$-dependent part in eq.
\textbf{\ref{eq2lambda}} can be made more explicit, using also that, conditionally on $N_{3D}$, the distribution of $N_{2D}$ does not depend on $\lambda$: \begin{equation} \label{eq3lambda} \small \begin{split} P_\lambda(N_{3D}=k|N_{2D}=n_{2D})&=\frac{P(N_{2D}=n_{2D}|N_{3D}=k)P_\lambda(N_{3D}=k)}{P_\lambda(N_{2D}=n_{2D})}\\ & = \frac{P(N_{2D}=n_{2D}|N_{3D}=k)P_\lambda(N_{3D}=k)}{\sum_{j=n_{2D}}^\infty P(N_{2D}=n_{2D}|N_{3D}=j)P_\lambda(N_{3D}=j)}\\ & = \frac{P(N_{2D}=n_{2D}|N_{3D}=k)\frac{(\lambda \mathcal{V})^k}{k!}}{\sum_{j=n_{2D}}^\infty P(N_{2D}=n_{2D}|N_{3D}=j)\frac{(\lambda \mathcal{V})^j}{j!}}. \end{split} \end{equation} Combining eqs. \textbf{\ref{eq2lambda}} and \textbf{\ref{eq3lambda}} yields eq. \textbf{\ref{eq1lambda}}. \end{proof} For computing p-values in practice, the value of $\lambda$ is needed. As this value is not known, we take the uncertainty in the estimate of $\lambda$ into account by computing a 90\% confidence interval for $\lambda$ via a bootstrap approach. More precisely, we want to compute \begin{equation} \label{eqboot} \small \begin{split} P_\lambda(\sqrt{\hat{\lambda}}-\sqrt{\lambda}\le u)&=\sum_{k=0}^\infty P_\lambda(\sqrt{\hat{\lambda}}-\sqrt{\lambda}\le u, N_{3D}=k)\\ & = \sum_{k=0}^\infty P_\lambda(\sqrt{\hat{\lambda}}-\sqrt{\lambda}\le u|N_{3D}=k) P_\lambda(N_{3D}=k). \end{split} \end{equation} The procedure can be summarized as follows: first, we estimate $\lambda$ from a real 2D image, using $\hat \lambda_a$ (eq. \textbf{\ref{eq:estimators}}). For computing $P_\lambda(N_{3D}=k)$, $P_{\hat \lambda_a}(N_{3D}=k)$ is then used. Secondly, for computing $P_\lambda(\sqrt{\hat{\lambda}}-\sqrt{\lambda}\le u|N_{3D}=k)$, $10\,000$ Poisson-Voronoi diagrams are generated, each from a realization of a Poisson process with intensity $\hat \lambda_a$ in a cube. Then a 2D section of each 3D diagram is taken at random and the number of cells in the section is used for estimating $\lambda$.
Next, the probability of having exactly $k$ cells in 3D, $P(N_{3D}=k)$, is used as a weight for computing a weighted mean cumulative distribution function. Finally, a square root transformation, normalizing and stabilizing the variance, is used for computing the confidence set \cite{sahai1993}: \begin{equation} P[\sqrt{\hat{\lambda}}-l_{0.95}\le \sqrt{\lambda} \le \sqrt{\hat{\lambda}}-l_{0.05}]\approx 0.90. \end{equation} As an example, if $\hat \lambda_a=0.2$ (as in the application shown in Section \textbf{\ref{sec:application}}, with $n_{2D}=50$ and window size $10\times10$), the resulting $90\%$-confidence set is given by: \begin{equation} [0.1498;0.2439] \label{eq:confintlambda} \end{equation} Having a confidence set for $\lambda$ at hand, the next step is to compute the null distribution described in \textbf{Lemma \ref{lemma:lambdadep}} for the various test statistics. The resulting p-values depend on $\lambda$, but we can consider them for all $\lambda$ in the constructed confidence set. We start with the coefficient of variation as test statistic (eq.\textbf{ \ref{eq:cv}}). Figure \textbf{\ref{fig:cdfcvcomp}} shows the difference between the cumulative distribution functions of the coefficient of variation of the 2D sectional cell areas, unconditional and conditional on observing exactly $50$ cells in the 2D section. Moreover, the green dotted lines represent the cumulative distribution function of the coefficient of variation for the lower and upper bounds of the $\lambda$ confidence set. Note that the distance between the two cdfs is small, showing that the approach of \cite{lorzhahn93}, which uses an unconditional distribution, leads to comparable results in this particular setting. In Table \textbf{\ref{tab:quantilecv}}, quantiles of the conditional distribution of the coefficient of variation of the cell areas are shown ($\hat \lambda=0.2$).
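The final quantile step of the bootstrap procedure can be sketched as follows, assuming replicates of $\hat \lambda$ have already been produced by the simulation just described (the replicates below are synthetic placeholders):

```python
import numpy as np

def lambda_confidence_set(lam_hat, lam_boot, level=0.90):
    """Confidence set for lambda based on the square-root transformed
    pivot sqrt(lambda_hat) - sqrt(lambda): quantiles of the bootstrap
    pivot are subtracted from sqrt(lambda_hat), then squared back."""
    pivot = np.sqrt(np.asarray(lam_boot, dtype=float)) - np.sqrt(lam_hat)
    alpha = (1.0 - level) / 2.0
    l_lo, l_hi = np.quantile(pivot, [alpha, 1.0 - alpha])
    return (np.sqrt(lam_hat) - l_hi) ** 2, (np.sqrt(lam_hat) - l_lo) ** 2

# Synthetic bootstrap replicates of lambda_hat around 0.2:
rng = np.random.default_rng(1)
lam_boot = rng.normal(0.2, 0.03, size=10_000).clip(min=0.01)
lo, hi = lambda_confidence_set(0.2, lam_boot)
```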
\begin{figure}[!h] \centering \includegraphics[width=8cm]{unimodprob50} \caption{Monte Carlo approximation of $P(N_{2D}=50|N_{3D}=k)$ } \label{fig:p50n3d} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=10cm]{cdfcvcb} \caption{Cumulative distribution function of the coefficient of variation of the 2D sectional cells area conditioned on $N_{2D}=50$ (black line; green dotted lines are obtained using the upper and lower limit of the confidence set for $\lambda$ (eq.\textbf{\ref{eq:confintlambda}})) and unconditioned (red line)} \label{fig:cdfcvcomp} \end{figure} \begin{table}[!h] \centering \caption{Quantiles of the conditional distribution of the coefficient of variation of the 2D sectional cells area given that $N_{2D}=50$, ($\lambda=0.2$)} \small \begin{tabular}{rrrrrrrrrrrrr} \hline $\alpha$& 0.005& 0.01& 0.0125&0.025&0.05& 0.1&0.9&0.95&0.975&0.9875&0.99&0.995 \\ \hline $c_\alpha$&0.531& 0.547& 0.553 &0.571& 0.591 &0.615 &0.798& 0.826& 0.853 &0.875& 0.883& 0.903\\ \end{tabular} \label{tab:quantilecv} \end{table} In Figure \textbf{\ref{fig:cdf50c}}, the conditional weighted mean CDF of the cell areas (black line), its confidence bands (green dotted lines) and the unconditional mean are shown. More precisely, we define \begin{equation} \begin{split} \bar F_{\lambda \,n_{2D}}(x)&=\mathbb{E}_\lambda\{F_{N_{2D}}(x)|N_{2D}=n_{2D}\}=\mathbb{E}_\lambda\{\mathbb{E}(F_{N_{2D}}(x)|N_{2D}=n_{2D},\,N_{3D})\}\\ &=\sum_{k=n_{2D}}^\infty P_\lambda(N_{3D}=k|N_{2D}=n_{2D})\cdot\mathbb{E}(F_{N_{2D}=n_{2D},N_{3D}=k}(x)), \end{split} \end{equation} where $F_{N_{2D}=n_{2D},N_{3D}=k}(x)$ is the empirical distribution function of the areas given $k$ cells in the $3D$ structure and $n_{2D}$ visible on the slice. The same type of expression is used also for $\bar \lambda_{D_0}(1,t)$ and $\bar \lambda_{D_1}(1,t)$. In Figure \textbf{\ref{fig:cdfecdfcomp}} the CDF of the test based on the supremum distance between ecdfs of the 2D sectional cell areas is shown.
As for the test based on the coefficient of variation, the difference between the conditional and unconditional approaches is relatively small. Switching to the test based on persistence landscapes, Figures \textbf{\ref{mlan0cond}-\ref{mland1cond}} show the $k$ mean persistence landscapes conditioned on $N_{2D}=50$ for connected components and holes respectively, when $\hat \lambda=0.2$. Figures \textbf{\ref{maxmeanland0cb2}-\ref{maxmeanland1cb2}}, instead, show the conditional maximum weighted means (black lines) and their confidence bands (green dotted lines). In Figures \textbf{\ref{fig:cdfl0comp}-\ref{fig:cdfl1comp}} the CDFs of the tests based on the $L_2$ distance between persistence landscapes (connected components and holes) are shown. Also in this case, the difference between the conditional and unconditional approaches appears negligible. \begin{figure}[!h] \centering \includegraphics[width=8cm]{cdf50c} \caption{Cumulative distribution function of the 2D sectional cells area conditioned on $N_{2D}=50$ (black line; green dotted lines are obtained using the upper and lower limit of the confidence set for $\lambda$ (eq.\textbf{\ref{eq:confintlambda}})) and unconditioned (red line)} \label{fig:cdf50c} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=10cm]{cdfecdfcb} \caption{Cumulative distribution function of the ecdf test of the 2D sectional cells area conditioned on $N_{2D}=50$ (black line; green dotted lines are obtained using the upper and lower limit of the confidence set for $\lambda$ (eq.\textbf{\ref{eq:confintlambda}})) and unconditioned (red line)} \label{fig:cdfecdfcomp} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=9cm]{meanland0k149corr} \caption{$k$ mean landscapes conditioned on $N_{2D}=50$ (connected components) ($\hat \lambda=0.2$)} \label{mlan0cond} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=9cm]{meanland1k149corr} \caption{$k$ mean landscapes conditioned on $N_{2D}=50$ (holes) ($\hat
\lambda=0.2$)} \label{mland1cond} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=8cm]{maxmeanland0cb2} \caption{Max weighted mean landscape (connected components) for sections with exactly $50$ 2D sectional cells (black line; green dotted lines are obtained using the upper and lower limit of the confidence set for $\lambda$ (eq.\textbf{\ref{eq:confintlambda}}))} \label{maxmeanland0cb2} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=8cm]{maxmeanland1cb2} \caption{Max weighted mean landscape (holes) for sections with exactly $50$ 2D sectional cells (black line; green dotted lines are obtained using the upper and lower limit of the confidence set for $\lambda$ (eq.\textbf{\ref{eq:confintlambda}}))} \label{maxmeanland1cb2} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=8cm]{cdfl0cb} \caption{Cumulative distribution function of the test statistic based on the $L_2$ distance between persistence landscapes $L_0$, (\textbf{\ref{eq:testperland}}), of the 2D sectional cells area conditioned on $N_{2D}=50$ (black line; green dotted lines are obtained using the upper and lower limit of the confidence set for $\lambda$ (eq.\textbf{\ref{eq:confintlambda}})) and unconditioned (red line)} \label{fig:cdfl0comp} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=8cm]{cdfl1cb} \caption{Cumulative distribution function of the test statistic based on the $L_2$ distance between persistence landscapes $L_1$, (\textbf{\ref{eq:testperland}}), of the 2D sectional cells area conditioned on $N_{2D}=50$ (black line; green dotted lines are obtained using the upper and lower limit of the confidence set for $\lambda$(eq.\textbf{\ref{eq:confintlambda}})) and unconditioned (red line)} \label{fig:cdfl1comp} \end{figure} For computing the quantiles of the distribution of the model tests based on CDF and on persistence landscape, we use a `leave one out' procedure. 
Here we use the $B$ generated 2D slices as follows: \begin{itemize} \item For the test based on the difference between CDFs (\textbf{\ref{eq:testcdf}}): \begin{equation}\label{eq:cdfquant} d_i=\sup_{x\in \mathbb{R}}|\bar F_{\lambda \,n_{2D}(-i)}(x)-\hat F_{n_{2D}(i)}(x)|, \,\,\,\,\,\, 1\le i\le B \end{equation} \item For the test based on the difference between persistence landscapes (\textbf{\ref{eq:testperland}}): \begin{equation} \label{eq:perlandquant} \begin{split} l_{0(i)}&=\biggl[\sum_{k=1}^{n_{2D}-1}\int_{0}^T(\hat \lambda_{D_0(i)}(k,t)- \bar \lambda_{D_0(-i)}(k,t))^2 \mathrm{dt}\biggr]^{\frac{1}{2}} \,\,\,\,\,\, 1\le i\le B\\ l_{1(i)}&=\biggl[\sum_{k=1}^\infty\int_{0}^T(\hat \lambda_{D_1(i)}(k,t)- \bar \lambda_{D_1(-i)}(k,t))^2 \mathrm{dt}\biggr]^{\frac{1}{2}} \,\,\,\,\,\, 1\le i\le B \end{split} \end{equation} \end{itemize} Here $\hat F_{n_{2D}(i)}$, $\hat \lambda_{D_0(i)}$ and $\hat \lambda_{D_1(i)}$ are the empirical results for section $i$, and $\bar F_{\lambda \,n_{2D}(-i)}$, $\bar \lambda_{D_0(-i)}$ and $\bar \lambda_{D_1(-i)}$ are the mean results computed from all $B$ sections leaving out the $i$-th.
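A sketch of the leave-one-out computation in eq. \textbf{\ref{eq:cdfquant}}, with the simplification that the mean CDF is a plain average of the remaining empirical CDFs rather than the weighted conditional mean used above (the area samples below are hypothetical):

```python
import numpy as np

def leave_one_out_sup_distances(area_samples, grid):
    """For each section i, the supremum distance (over a finite grid)
    between its empirical CDF and the average CDF of the other B-1
    sections; returns the array (d_1, ..., d_B)."""
    ecdfs = np.array([[(np.asarray(a, dtype=float) <= x).mean() for x in grid]
                      for a in area_samples])      # one ECDF row per section
    B = len(ecdfs)
    total = ecdfs.sum(axis=0)
    d = []
    for i in range(B):
        mean_minus_i = (total - ecdfs[i]) / (B - 1)  # average without section i
        d.append(np.max(np.abs(mean_minus_i - ecdfs[i])))
    return np.array(d)

# Hypothetical cell-area samples from B = 3 simulated sections:
area_samples = [[0.4, 1.0], [0.5, 0.9], [2.0, 0.3]]
grid = np.linspace(0.0, 3.0, 301)
d = leave_one_out_sup_distances(area_samples, grid)
```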
\begin{table}[!h] \centering \caption{Quantiles of the conditional distribution of the test based on the difference between cumulative distribution functions of the 2D sectional cells area given that $N_{2D}=50$, ($\hat \lambda=0.2$)} \footnotesize \begin{tabular}{rrrrrrrrrrrrr} \hline $\alpha$& 0.005& 0.01& 0.0125&0.025&0.05& 0.1&0.9&0.95&0.975&0.9875&0.99&0.995 \\ \hline $d_\alpha$&0.047& 0.050& 0.051&0.054&0.058& 0.064 &0.123 &0.135& 0.146&0.155&0.159& 0.168\\ \end{tabular} \label{tab:quantiled} \end{table} \begin{table}[!h] \centering \caption{Quantiles of the conditional distribution of the test based on the difference between the observed landscapes and the conditional mean landscapes (connected components) of the 2D sectional cells area given that $N_{2D}=50$, ($\hat \lambda=0.2$)} \footnotesize \begin{tabular}{lrrrrrrrrrrrr} \hline $\alpha$& 0.005& 0.01& 0.0125&0.025&0.05& 0.1&0.9&0.95&0.975&0.9875&0.99&0.995 \\ \hline $l_{0\alpha}\times 10^{-5}$&$2.402$&$2.602$& $2.802$&$3.003$&$3.403$& $3.803$&10 &20& 20&30&30&40\\ \end{tabular} \label{tab:quantilet0} \end{table} \begin{table}[!h] \centering \caption{Quantiles of the conditional distribution of the test based on the difference between the observed landscapes and the conditional mean landscapes (holes) of the 2D sectional cells area given that $N_{2D}=50$, ($\hat \lambda=0.2$)} \footnotesize \begin{tabular}{rrrrrrrrrrrrr} \hline $\alpha$& 0.005& 0.01& 0.0125&0.025&0.05& 0.1&0.9&0.95&0.975&0.9875&0.99&0.995 \\ \hline $l_{1\alpha}\times 10^{-5}$&$9.109$&$9.810$& $9.810$&$10$&$10$& $10$&50 &70& 100&140&150& 190\\ \end{tabular} \label{tab:quantilet1} \end{table} \clearpage \section{Application to single-phase alumina ceramics} In \cite{lorzhahn93}, it is stated that single-phase microstructures, e.g. alumina ceramics, can be well approximated by Poisson-Voronoi diagrams.
Using the same images shown in \cite{lorzhahn93}, the tests proposed in the previous section (Sec.\textbf{ \ref{sec:test}}) are performed. First, all the cells in the images (Fig. \textbf{\ref{fig:lorzhahn93}} (a)) are used for computing the tests. Then, for illustrative purposes and for a better comparison with the theoretical results shown in the previous section, we consider only part of the images used in \cite{lorzhahn93}: the original window size is reduced until exactly $50$ cells are visible or partially visible (Fig.\textbf{ \ref{fig:lorzhahn93}} (b)). In Tables\textbf{ \ref{tab:lorztestall}-\ref{tab:lorztest}}, the test statistics and the p-values (in brackets) are shown for the four model tests, following the two different approaches. Figures (\textbf{\ref{fig:cdf50ctestall}-\ref{fig:landtest1all}}) and (\textbf{\ref{fig:cdf50ctest}-\ref{fig:landtest1}}) are graphical representations of the cumulative distribution function test and of the steps of the persistence approach. In particular, for applying the test based on the difference between persistence landscapes, we take the centers of mass of the cells in the images (Fig. \textbf{\ref{fig:lorzpp50all}, \ref{fig:lorzpp50}}), then compute the persistence diagrams (Fig. \textbf{\ref{fig:perdiaglorzall},\ref{fig:perdiaglorz}}) and finally the persistence landscapes (Fig. \textbf{\ref{fig:landtest0all}-\ref{fig:landtest1all}, \ref{fig:landtest0}-\ref{fig:landtest1}}) as explained in Section \textbf{\ref{subsubsec:perapproach}}. The two approaches lead to slightly different conclusions for the first two images (Fig. \textbf{\ref{fig:lorzhahn93}} 1(a), 1(b), 2(a), 2(b)).
For the first image, considering all the cells, the coefficient-of-variation test and the test based on the cdf of the cell areas suggest that the Poisson-Voronoi model could reasonably be used for approximating alumina ceramics; looking at the cuts, instead, the hypothesis is rejected by both tests. For the second image, the coefficient-of-variation test based on all the cells is in agreement with the results obtained for the reduced sections; only the test based on the cdf considering all the cells does not reject the Poisson-Voronoi hypothesis. Using the tests from the persistence approach, instead, the use of the Poisson-Voronoi model is discouraged in both cases. \label{sec:application} \begin{figure}[!h] \centering \captionsetup[subfigure]{labelformat=empty} \subfloat[1a]{\includegraphics[width=5cm]{1b}} \subfloat[1b]{\includegraphics[width=4cm]{lorz150wn}} \subfloat[2a]{\includegraphics[width=5cm]{2b}} \subfloat[2b]{\includegraphics[width=4cm]{lorz250wn}} \subfloat[3a]{\includegraphics[width=5cm]{3b}} \subfloat[3b]{\includegraphics[width=4cm]{lorz350wn}} \caption{Schemes as planar tessellations of plane sections of alumina ceramics: (a) preprocessing from Hahn \& Lorz \cite{lorzhahn93}, (b) cut of the plane sections with exactly 50 cells} \label{fig:lorzhahn93} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=8cm]{cdfallcmeantest} \caption{Cumulative distribution function comparison of the cells area of the schemes of plane sections of alumina ceramics (Fig.~\textbf{\ref{fig:lorzhahn93}} 1(a) black line, 2(a) yellow line, 3(a) green line) and of the 2D sectional Poisson-Voronoi cells area (red line)} \label{fig:cdf50ctestall} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=16cm]{lorzppall} \caption{From left to right centers of mass of the schemes of plane sections of alumina ceramics (Fig.~\textbf{\ref{fig:lorzhahn93}} 1(a), 2(a), 3(a))} \label{fig:lorzpp50all} \end{figure} \begin{figure}[!h] \centering
\includegraphics[width=16cm]{perdiaglorzall} \caption{From left to right persistence diagrams of the centers of mass of the schemes of plane sections of alumina ceramics (Fig.\textbf{\ref{fig:lorzhahn93}} 1(a), 2(a), 3(a)) } \label{fig:perdiaglorzall} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=16cm]{landtest0all} \caption{From left to right persistence landscapes (connected components) of the schemes of plane sections of alumina ceramics (Fig.\textbf{\ref{fig:lorzhahn93}} 1(a), 2(a), 3(a)) } \label{fig:landtest0all} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=16cm]{landtest1all} \caption{From left to right persistence landscapes (holes) of the schemes of plane sections of alumina ceramics (Fig.\textbf{\ref{fig:lorzhahn93}} 1(a), 2(a), 3(a)) }\label{fig:landtest1all} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=8cm]{cdf50cmeantest1} \caption{Cumulative distribution function comparison of the cuts of the sections of alumina ceramics with exactly 50 cells (Fig.\textbf{\ref{fig:lorzhahn93}} 1(b) black line, 2(b) yellow line, 3(b) green line) and of the 2D sectional Poisson-Voronoi cells area conditioned on $N_{2D}=50$ (red line)}\label{fig:cdf50ctest} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=16cm]{lorzpp50} \caption{From left to right centers of mass of the cuts of the sections of alumina ceramics with exactly 50 cells (Fig.\textbf{\ref{fig:lorzhahn93}} 1(b), 2(b), 3(b)) }\label{fig:lorzpp50} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=16cm]{perdiaglorz501} \caption{From left to right persistence diagrams of the centers of mass of the cuts of the sections of alumina ceramics with exactly 50 cells (Fig.\textbf{\ref{fig:lorzhahn93}} 1(b), 2(b), 3(b))} \label{fig:perdiaglorz} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=16cm]{landtest01} \caption{From left to right persistence landscapes (connected components) of the cuts of the sections of 
alumina ceramics with exactly 50 cells (Fig.\textbf{\ref{fig:lorzhahn93}} 1(b), 2(b), 3(b)) } \label{fig:landtest0} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=16cm]{landtest11} \caption{From left to right persistence landscapes (holes) of the cuts of the sections of alumina ceramics with exactly 50 cells (Fig.\textbf{\ref{fig:lorzhahn93}} 1(b), 2(b), 3(b)) } \label{fig:landtest1} \end{figure} \begin{table}[!h] \centering \caption{Values of the different model tests for the schemes of plane sections of alumina ceramics (Fig.\textbf{\ref{fig:lorzhahn93}} 1(a), 2(a), 3(a)); p-values in brackets} \subfloat[][\emph{1a}] {\begin{tabular}{cl} \hline $c$ & 0.848 (0.073) \\ $d$& 0.078 (0.710)\\ $l_0$& 0.058 (0)\\ $l_1 $& 0.019 (0) \\ \hline \end{tabular}} \subfloat[][\emph{2a}] {\begin{tabular}{cl} \hline $c$ & 0.959 (0.002) \\ $d$& 0.121 (0.172)\\ $l_0$& 0.057 (0)\\ $l_1 $& 0.028 (0)\\ \hline \end{tabular}} \subfloat[][\emph{3a}] {\begin{tabular}{cl} \hline $c$ & 1.492 (0) \\ $d$& 0.168 (0.006)\\ $l_0$& 0.062 (0) \\ $l_1 $& 0.018 (0)\\ \hline \end{tabular}} \label{tab:lorztestall} \end{table} \begin{table}[!h] \centering \caption{Values of the different model tests for the cuts of the sections of alumina ceramics with exactly 50 cells (Fig.\textbf{\ref{fig:lorzhahn93}} 1(b), 2(b), 3(b)); p-values in brackets} \subfloat[][\emph{1b}] {\begin{tabular}{cl} \hline $c$ & 0.931 (0.004) \\ $d$& 0.154 (0.014)\\ $l_0$& 0.077 (0)\\ $l_1 $& 0.041 (0) \\ \hline \end{tabular}} \subfloat[][\emph{2b}] {\begin{tabular}{cl} \hline $c$ & 1.002 (0.0002) \\ $d$& 0.172 (0.004)\\ $l_0$& 0.116 (0)\\ $l_1 $& 0.024 (0)\\ \hline \end{tabular}} \subfloat[][\emph{3b}] {\begin{tabular}{cl} \hline $c$ & 1.328 (0) \\ $d$& 0.248 (0)\\ $l_0$& 0.137 (0) \\ $l_1 $& 0.009 (0)\\ \hline \end{tabular}} \label{tab:lorztest} \end{table} \clearpage \section{Results and Discussion} \label{sec:discussion} This work provides a general setting for testing whether a microstructure is
generated by a Poisson-Voronoi diagram, based on a cross section of the microstructure. Taking inspiration from previous work in this field \cite{lorzkrawietz91,lorzhahn93}, we widen the testing framework by proposing new model tests. In particular, we introduce test statistics using tools from different statistical branches, such as goodness-of-fit testing and Topological Data Analysis. We consider both the situation with periodic boundary conditions, which is popular in materials science applications, and the situation without them. Our approach is very general and can also be extended to test hypotheses for more complicated models describing the 3D structure based on a 2D section. Being able to accept the Poisson-Voronoi model on the basis of real 2D metal sections means having complete probabilistic information on the underlying 3D structure. Based on this model, one can then perform mechanical experiments using virtual microstructures, avoiding waste of material and discovering interesting new relations between microstructure features and mechanical properties much faster than is possible using physical experiments. Future developments involve the testing of more general and less understood Voronoi structures for more complicated microstructures, such as multi-level Voronoi diagrams. Another interesting direction is to consider fully data-based approaches for analyzing 2D sections. For instance, analyzing such a section using a persistence landscape does not need the rigid restrictions on the geometry of the cells present in the Poisson-Voronoi model. \section*{Acknowledgements} This research was carried out under project number S41.5.14547b in the framework of the Partnership Program of the Materials innovation institute M2i (www.m2i.nl) and (partly) financed by the Netherlands Organisation for Scientific Research (NWO). We also thank Vanessa Robins and Jeroen Spandaw for the inspiring discussion about Persistent Homology.
\section{Introduction} \label{sec:intro} The external gravity field of an isolated, uniformly rotating, hydrostatic planet is expressible in the form \begin{equation} V_{ext} (r, \theta) = \frac{G M}{r} \left[ 1 - \sum_{n=1}^{\infty} \bigg(\frac{a}{r}\bigg)^{2n} J_{2n} P_{2n}(\cos \theta) \right] \label{eq:V_ext} \end{equation} where the gravitational moments $J_n$ are given by: \begin{equation} J_n = - \frac{1}{M a^n} \int_{R^3} \rho (\bm{r}) r^n P_n (\cos \theta) \bm{\mathrm{d}^3r} \label{eq:moment} \end{equation} where $M$ is the planet's mass, $a$ is the planet's radius (equatorial or volume averaged), $P_n$ is the order-$n$ Legendre polynomial, $\rho$ is the density, $G$ is the gravitational constant, and $\theta$ is the colatitude. The dimensionless coefficients of this expansion are denoted $J_{2n}$, where $n$ is a positive integer; they exist because the planet rotates. The expansion contains only even moments because the rotational perturbation is quadratic in the rotation rate and therefore does not distinguish between the Northern and Southern hemispheres. In linear response theory, $J_2$ is proportional to $q = \Omega^2 R^3 / G M$ and all higher $J$'s are zero, where $\Omega$ is the rotation rate, $R$ is the planet's radius, $M$ is the planet's mass, and $G$ is the gravitational constant. This assumes rigid body rotation. However, $q$ is not very small for giant planets, so the linear response is far from adequate, especially given the high accuracy with which the gravity field can be measured by orbiting spacecraft such as {\it Juno} and {\it Cassini}. The higher $J$'s arise explicitly from non-linearity; that is, the centrifugal potential is exactly of degree 2, but the change in shape and the resulting self gravity introduce the higher order terms, so that $J_{2n}$ is approximately proportional to $q^n$ \citep{Hubbard1975,Hubbard1982}. Moreover, even low order moments are affected by higher order corrections.
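As a concrete check of the magnitude of $q$, the snippet below evaluates $q = \Omega^2 R^3 / G M$ for illustrative, roughly Jupiter-like parameter values (the numbers are ours, not taken from the text):

```python
# Illustrative, roughly Jupiter-like numbers (not from the text):
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 1.898e27       # planet mass, kg
R = 7.149e7        # equatorial radius, m
Omega = 1.76e-4    # rotation rate, s^-1 (~9.9 h period)

q = Omega ** 2 * R ** 3 / (G * M)
print(q)  # ~0.09: not a small parameter, so linear response in q is inadequate
```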
Although the $J$'s are linearly related to the density anomalies of the same harmonic degree, the underlying non-linearity makes the theory complicated, even for simple models of planetary structure, and it can be difficult to develop an intuition for the dependence of the $J$'s on features of that structure, which is, after all, the whole point of making the precise measurements. However, a great deal of successful effort has been put into forward modeling of the $J$'s from physical models of the planet's interior \citep[e.g.][]{Hubbard2013,Galanti2017,Cao2017a}. This is necessarily non-unique (one cannot invert the $J$'s to determine planet structure except in the context of models with only a few degrees of freedom) but it has provided us with a great deal of understanding of fluid planets. Fluid bodies, however, do not rotate as rigid bodies. This is self-evident in the zonal flows observed in their atmospheres. In general, this flow need not be purely cylindrical (i.e., with no variation of the East-West flow along an axis parallel to the rotation axis), the special case described by the Taylor-Proudman theorem. Deviations from the Taylor-Proudman theorem in a fluid with large density variations but no MHD effects arise primarily from latitudinal entropy gradients but also potentially from the Reynolds stress \citep[e.g., Equation 22 of][]{Liu2008,Kaspi2009}. The advantage of assuming cylindrical differential rotation is that it allows for the definition of a centrifugal potential, the gradient of which provides the acceleration and thus the wind strength at each cylindrical radius. At first sight, it might seem that even this simple kind of differential rotation only exacerbates the difficulty of developing an intuitive or approximate understanding of the $J$'s. However, there is a sense in which the effect of differential rotation may actually be easier to understand than the part due to rigid body rotation.
The reason is that the centrifugal potential is no longer just degree 2, but contains higher order moments, much higher if the differential rotation is largely confined to a thin shell. One can then compute the gravitational response to that part alone using linear theory. Moreover, we can then approximate the planet as spherical. This will be an adequate approximation (with a fractional error of order $q$) provided the differential rotation is sufficiently small relative to the total rotation. This is the approach presented in this paper. It assumes that one can isolate the contribution to the $J$'s that arises from the differential rotation from the often larger or comparable part that arises from the rigid part of the rotation. This can often be done because the higher order $J$'s arising from rigid body rotation are usually confined to a narrow range of possible values based on interior models that fit the lower $J$'s \footnote{This is predicated on the standard view that we actually know the structure of the outermost ten percent or so of the planet very well: an adiabatic hydrogen-dominated region for Jupiter and Saturn.} \citep[e.g.][]{Iess2019}. Moreover, the emphasis on great precision that usually accompanies analysis of the interior models need no longer apply -- one might well be content with determining the differential rotation of a planet to within tens of percent, especially if the inference is necessarily non-unique, but also because the observational data can have substantial errors at the higher order moments. This paper is motivated by a desire to develop a better intuition about what determines the differential rotation contribution to the $J$'s, more specifically how the amplitude and pattern of their contribution to the $J$'s are set by the amplitude, depth, and shape of the differential flow profile. It focuses on the high order $J$'s where the effect of differential rotation is more readily discerned in the data.
In keeping with our emphasis on transparency and simplicity, we use an n = 1 polytropic model (pressure everywhere proportional to the square of the density). This is not essential to the idea of our approach but it leads to a linear equation for hydrostatic equilibrium and enables the derivation of analytical formulas. As we shall explain, it is not necessary that the polytrope apply everywhere; models with a separate but small core can also be described in this way. The biggest error in this assumption lies in the breakdown of that equation of state as one approaches the surface, but this is still a smaller error than the error arising from assuming sphericity. \begin{figure} \centering \includegraphics[width=\linewidth]{model_sketch.pdf} \caption{A depiction of the cylindrical differential rotation in the meridional plane. 1D projections of the flow speed as a function of the cylindrical radius and latitude are also shown and they are clearly coupled. A deeper flow would therefore extend to higher latitudes and a shallower flow would be constrained to lower latitudes. In this work, we use the terms `deep' and `shallow' to refer to the {\it cylindrical} depth of the flow.} \label{fig:model_sketch} \end{figure} In Section 2, we describe our model and derive the main results for an arbitrary choice of functional form for the cylindrical differential rotation. In Section 3, we show how these results can yield simple scaling laws and simple analytical formulas for the case of differential flow that is exponentially declining with depth. We also show how to estimate the corrections that arise from planetary oblateness (i.e. corrections of order $q$). In Section 4, we apply our procedure to the {\it Cassini} observations of Saturn and show how we can recover flows similar to those suggested by more precise calculation, but also how they can be quite non-unique. 
We conclude with a discussion of what this approach tells us about the general features of gravitational effects caused by differential rotation. \section{Theoretical Model} \label{sec:theory} If the angular velocity within the planet depends only on the cylindrical radius $r_c = r \sin \theta$, then the sole dependence of flow properties on $r_c$ allows us to write their effect on the hydrostatic equilibrium of the planet as a gradient of a potential $V_{rot}$: \begin{equation} \frac{1}{\rho} \nabla P = \nabla (V + V_{rot}) \label{eq:hydrostatic} \end{equation} where $\rho$ is the density, $P$ is the pressure, and $V$ is the gravitational potential. Figure~\ref{fig:model_sketch} shows an example of the differential flows considered in this work. All references to the depth of a flow apply to the {\it cylindrical} radius. We need an equation of state to relate the pressure to the density. This hydrostatic equation is difficult to solve for an arbitrary equation of state but it has a particularly simple form for one with polytropic index $n = 1$, i.e., $P = K \rho^2$. Here, $K$ is a constant of proportionality that is mainly determined by the properties of the degenerate electron gas but is also somewhat affected by the planet's entropy. Using this polytropic equation, the hydrostatic equilibrium simplifies to: \begin{equation} \nabla (2 K \rho -V - V_{rot}) = 0 \end{equation} This can only be satisfied if $2 K \rho = V + V_{rot}$. We assume that the contribution of differential rotation to the rotational potential is small compared to the background solid-body rotation potential. 
We normalize the potential to the monopole, $G M / R$, so we can write the rotational potential as: \begin{equation} V_{rot} = q \; \int_{0}^{s} (1 + \epsilon(s'))^2 s' \mathrm{d}s' \approx q \; \int_{0}^{s} (1 + 2\epsilon(s')) s' \mathrm{d}s' \label{eq:Vrot_integrate} \end{equation} where $s = r_c / R$ is the cylindrical radius normalized to the planet's radius, $\epsilon (s)$ is the differential flow profile as a function of $s$, the integrand is Taylor-expanded to first order under the assumption that $\epsilon \ll 1$, and $q$ is given by \begin{equation} q = \Omega^2 R^3 / G M \end{equation} Here, $G$ is the gravitational constant, $M$ is the planet's mass, $R$ its radius, and $\Omega$ is the background solid body rotation rate. The first term is simply the solid body rotation term. The second term, linear in $\epsilon$, is henceforth denoted $\delta V_{rot}$. The Poisson equation relates the Laplacian of the potential to the density; using the above relationship between density and potential, we obtain: \begin{equation} \nabla^2 \delta V = - 4 \pi G \rho = - k^2 (\delta V + \delta V_{rot}) \end{equation} where $k^2 = 2 \pi G / K$ and $\delta V$ is the self-gravity potential arising from $\delta V_{rot}$. We can solve this by writing $\delta V = V_h + V_p$ as a sum of a general (homogeneous) and a particular solution. The general solution is the solution to the homogeneous Helmholtz equation and is given by: \begin{equation} V_h = \sum_{Even \; n} A_n \; j_n(kr) \; P_n(\cos \theta) \label{eq:homogeneous} \end{equation} where $j_n$ are the spherical Bessel functions, $P_n$ are the Legendre polynomials of order $n$, and $A_n$ are calculable coefficients. To calculate the particular solution $V_p$, we make the additional assumption that the differential rotation is significant near the surface of the planet and decays away rapidly inside the planet. This is a reasonable assumption for Jupiter, perhaps less so for Saturn.
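For a concrete example, take the exponentially decaying profile $\epsilon(s) = a\,e^{-b(1-s)}$ used later in the text; the term of Equation~\ref{eq:Vrot_integrate} that is linear in $\epsilon$ can then be integrated in closed form, which the sketch below verifies against numerical quadrature (the parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad

a, b, q = 0.04, 20.0, 0.14   # illustrative amplitude, decay rate, and q

def eps(s):
    # exponentially decaying differential-flow profile
    return a * np.exp(-b * (1.0 - s))

def dVrot_numeric(s):
    # delta V_rot(s) = 2 q * integral_0^s eps(s') s' ds'  (the linear-in-eps term)
    val, _ = quad(lambda sp: eps(sp) * sp, 0.0, s)
    return 2.0 * q * val

def dVrot_closed(s):
    # closed form of the same integral
    return 2.0 * q * a * (np.exp(-b * (1.0 - s)) * (s / b - 1.0 / b ** 2)
                          + np.exp(-b) / b ** 2)

print(abs(dVrot_numeric(1.0) - dVrot_closed(1.0)))  # agreement to quadrature accuracy
```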
In this approximation, the curvature terms in the cylindrical Laplacian can be ignored: \begin{equation} \frac{\mathrm{d}^2 V_p}{\mathrm{d} r_c^2} = - k^2 \delta V_{rot} \label{eq:V_p_diffeq} \end{equation} Solving for $V_p$, we get an expression which is a function of the cylindrical radius $r_c$. However, the homogeneous solution ($V_h$) and the external potential ($V_{ext}$, obtained from solving $\nabla^2 V_{ext} = 0$) are normally expressed as functions of the spherical coordinates. We therefore need to project $V_p$ from cylindrical to spherical coordinates to match the boundary condition for the potential at the planet's surface. This can be done using the Legendre polynomials, which form a complete basis set for axisymmetric functions of the spherical coordinates: \begin{eqnarray} V_p (r \sin\theta) = \sum_{Even \; n} \; V_{pn}(r) \; P_n (\cos\theta) \label{eq:V_p_projection1} \\ V_{pn}(r) = \frac{2n+1}{2} \int_{-1}^{1} V_p(r \sin\theta) \; P_n(\cos\theta) \; \mathrm{d}(\cos\theta) \label{eq:V_p_projection2} \end{eqnarray} We want to calculate the deviation in the gravitational moments due to a small shallow flow, and we can do that by applying boundary conditions to the potential at the planetary surface. We refer to these deviations as $\Delta J_n$, where the $J$'s are defined by Equation~\ref{eq:moment}. We assume that this surface can be approximated by a sphere (we deal in Section~\ref{sec:oblate} with the error introduced by this). The potential and its derivative must be continuous at the surface, which gives us two equations that relate the quantities $V_{pn}$, $A_n j_n (kr)$, and $\Delta J_n$ at $r = R$ and their derivatives. \begin{eqnarray} V_{pn} (R) + A_n j_n (kR) = - \Delta J_n \\ R V_{pn}' + k R A_n j_n'(kR) = (n+1) \Delta J_n \label{eq:boundary_cond} \end{eqnarray} Here, the prime $'$ on $V_{pn}$ denotes differentiation with respect to $r$, while on $j_n$ it denotes differentiation with respect to the argument (so $k \, j_n'(kR) = {\rm d} j_n(kr)/{\rm d}r$ evaluated at $r = R$).
Eliminating $A_n$, we obtain the following expression for $\Delta J_n$: \begin{equation} \Delta J_n = \frac{R V'_{pn}(R) j_n(kR) - kR V_{pn}(R) j'_n(kR)}{(n+1)j_n(kR) + kR j'_n(kR)} \label{eq:deltaj_n} \end{equation} For a planet with no core, $k R = \pi$. For a planet with a high density (rock or ice) core, $k R = \pi (1-\delta)$, where $\delta$ is the ratio of core mass to planet mass. However, this correction to the $\Delta J_n$'s is small for the models we consider. In the next section, we demonstrate the utility and power of this model, which arise out of its simplicity. \section{Analytical Solutions} \label{sec:analytic} We use the theoretical framework established in the previous section to perform analytic calculations of the gravitational moments due to differential flow in the limit of very shallow flow. Although we do not use this analytic solution directly to fit Saturn's higher order gravitational moments, it is instructive to perform this exercise to gain an understanding of how the deviations in the gravitational moments relate to the properties of the differential flow. \subsection{Limit of Shallow Flow} Let us assume that the differential flow profile can be represented by a steeply declining exponential which is only a function of the cylindrical radius of a fluid planet: \begin{equation} \epsilon (s) = a \; e^{-b \; (1-s)} \end{equation} Here, $a$ and $b$ are constants that characterize the amplitude and the depth of the flow, and $s = r_c / R$ is again the cylindrical radius as a fraction of the equatorial radius. We can integrate this function analytically to obtain $V_{rot}$ and calculate $V_p$ from the differential equation~\ref{eq:V_p_diffeq}, with the boundary condition that $V_p$ and its derivative vanish at $s = 0$. To proceed further with the analytic calculation, we need to project $V_p$ onto spherical coordinates using the Legendre polynomials (equation~\ref{eq:V_p_projection1}).
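The projection of Equation~\ref{eq:V_p_projection2} can also be carried out numerically; the sketch below implements it with Gauss-Legendre quadrature and checks it against the orthogonality of the Legendre polynomials (the peaked profile at the end is a toy stand-in for $V_p$, not the paper's actual solution):

```python
import numpy as np
from scipy.special import eval_legendre

def legendre_coeff(f, n, num=200):
    """V_{pn}-style projection: (2n+1)/2 * integral_{-1}^{1} f(x) P_n(x) dx,
    with x = cos(theta), via Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(num)
    return (2 * n + 1) / 2.0 * np.sum(w * f(x) * eval_legendre(n, x))

# Orthogonality sanity check: P_4 projects onto itself with coefficient 1
print(legendre_coeff(lambda x: eval_legendre(4, x), 4))  # approx 1.0
print(legendre_coeff(lambda x: eval_legendre(4, x), 2))  # approx 0.0

# Toy stand-in for V_p(R sin(theta)), peaked near the equator (x = cos(theta) near 0);
# by symmetry only even n give nonzero projections.
V_p = lambda x: np.exp(-20.0 * (1.0 - np.sqrt(1.0 - x ** 2)))
V_p6 = legendre_coeff(V_p, 6)
```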
For very shallow flow, we can use a Taylor series expansion of the Legendre polynomials up to a certain order (that we choose) and calculate the integrals analytically. We choose to expand the even Legendre polynomials to second order in $\cos \theta$ around $\theta = \pi/2$: \begin{equation} P_n (\cos \theta) = {n \choose n/2} \frac{(-1)^{n/2} }{2^n} \bigg[ 1 - \frac{n \; (n+1)}{2} \cos^2 \theta \bigg] \end{equation} where $n \choose n/2$ is the binomial coefficient. With this approximation, we can project $V_p$ onto a basis set of Legendre polynomials with $s = r \sin \theta / R$. We then have an expression for $V_{pn}$ from equation~\ref{eq:V_p_projection2}. Given that we calculate this integral in the approximation of shallow flow, we can neglect all terms that do not contain the large exponential term in the expression for $V_p$. To calculate $V_{pn}$, we resort to the saddle point method (Appendix A), which works well for strongly peaked functions such as the exponential in the expression for $V_p$.
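A quick numerical check of this quadratic expansion against the exact Legendre polynomials, for small $\cos\theta$ (i.e., $\theta$ near $\pi/2$):

```python
import numpy as np
from math import comb
from scipy.special import eval_legendre

def legendre_quadratic(n, c):
    """Second-order expansion of P_n(cos theta) around theta = pi/2, even n;
    c stands for cos(theta)."""
    return comb(n, n // 2) * (-1) ** (n // 2) / 2 ** n * (1.0 - n * (n + 1) / 2.0 * c ** 2)

c = np.linspace(-0.05, 0.05, 101)
for n in (6, 8, 10):
    err = np.max(np.abs(eval_legendre(n, c) - legendre_quadratic(n, c)))
    print(n, err)  # error shrinks as theta approaches pi/2 (c -> 0)
```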
Evaluating $V_{pn}$ and its derivative at $r = R = 1$ (fractional radius in our calculations) and substituting it in our expression for $\Delta J_n$, we obtain (see Appendix B for a detailed derivation): \begin{multline} \Delta J_n = - \frac{2n + 1}{2} {n \choose n/2} \frac {2 a \; q \; \pi^2} {b^3} \frac{(-1)^{n/2} }{2^n} \sqrt{\frac{2 \pi}{b}} \; \times \\ \frac{\bigg[ \left( -\frac{5}{2} + b + \frac{3}{2b} \right) + \frac{n (n+1)}{4b} \left( \frac{7}{2} -b - \frac{9}{2b} \right) \bigg] j_n(\pi) - \pi \bigg[ \left(1 - \frac{3}{b} \right) \left( 1 - \frac{n(n+1)}{4b} \right) \bigg] j'_n(\pi)}{(n+1)j_n(\pi) + \pi j'_n(\pi)} \label{eq:O1border} \end{multline} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig_analytical.pdf} \caption{A comparison of the results obtained from the analytical model with terms of order $O(1)$ (Equation~\ref{eq:O1order}), the analytical model with terms of order $O(1/b)$ (Equation~\ref{eq:O1border}), and numerical calculations. The amplitude of differential rotation relative to solid body rotation, $a$, is chosen to be 0.04. The agreement between all the models only becomes good for large $b$. The value of $b$ at which agreement begins to improve is higher for higher order $n$ (as expected). The analytical model is a rather poor approximation for small $b$ because we are calculating high order gravitational moments.} \label{fig:deltajs} \end{figure} This result uses the saddle point method expansion (Appendix A) to first order in $O(1/b)$ (constant term plus first order term), as the contributions from higher orders are negligible. If we consider only the lowest order term, then we find that \begin{equation} \Delta J_{n} = C_n (-1)^{n/2+1} \; a \; q \; b^{-5/2} \label{eq:O1order} \end{equation} where $C_6 = 8.16$, $C_8 = 6.98$, and $C_{10} = 6.22$. However, these asymptotic (very small $d = 1/b$) limiting forms are not sufficiently accurate for the case of Saturn, as we shall discuss.
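The lowest-order scaling can be evaluated directly; the snippet below uses the quoted constants $C_n$ with illustrative values of $a$, $q$, and $b$ (the $b^{-5/2}$ decay with the depth parameter is the point of interest):

```python
# Leading-order scaling quoted in the text: Delta J_n = C_n (-1)^(n/2+1) a q b^(-5/2)
C = {6: 8.16, 8: 6.98, 10: 6.22}

def delta_J_leading(n, a, q, b):
    return C[n] * (-1) ** (n // 2 + 1) * a * q * b ** -2.5

# Illustrative parameters (a and q roughly Saturn-like, b = 30 a shallow flow)
for n in (6, 8, 10):
    print(n, delta_J_leading(n, a=0.04, q=0.14, b=30.0))
```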
The calculations from this analytical model are compared with the numerical calculations for a shallow flow that decays exponentially in Figure~\ref{fig:deltajs}. The agreement between the analytical model and the numerical calculation improves with the inclusion of the $O (1/b)$ term as well as for high $b$ (increasingly shallow flow). The flow depth at which agreement becomes good is also a function of the order of the gravitational moment, as expected. This is simply because $\cos \theta$ changes substantially while $\sin \theta$ changes only slightly when $\theta$ is near $\pi/2$. Consequently, the Legendre polynomials of high order $n$ vary rapidly in the region of interest where the flow is non-negligible. The relationship between the flow properties and the gravitational moments also becomes apparent in the plot. The deviations in the gravitational moments decrease as the flow becomes shallower and as $n$ becomes larger. \subsection{Effect of Planet's Oblateness} \label{sec:oblate} We now address one of the key assumptions that we made in our model, i.e., the sphericity of the planet. Using our framework and perturbation theory, we can estimate the expected alteration in the calculated $\Delta J_n$ due to the non-spherical shape of the planet. To first approximation, the small deviation from sphericity can be captured with the second order Legendre polynomial: \begin{equation} r \rightarrow r \left(1 + \varepsilon P_2 \left( \cos \theta \right) \right) \end{equation} We Taylor-series expand the expression for the gravitational moment $J_n$ (equation~\ref{eq:moment}), the spherical Bessel function $j_n$ (equation~\ref{eq:homogeneous}), and $V_{pn}$ (equation~\ref{eq:V_p_projection2}) in the small quantity $\varepsilon$, which scales with $q$.
The product of $P_2$ with $P_n$ can be written as a sum of three Legendre polynomials, $P_{n-2}, P_{n},$ and $P_{n+2}$, where each of the polynomials is weighted by the corresponding Clebsch-Gordan coefficient \citep{Adams1878}. Expanding in small $\varepsilon$ and keeping the first order terms (see Appendix C), we get: \begin{equation} \Delta J_{n,oblate} \approx \Delta J_{n,sph} - \varepsilon \bigg[ \frac{1}{4} (n+1) \Delta J_{n,sph} + \frac{3}{8} (n-1) \Delta J_{n-2,sph} + \frac{3}{8} (n+3) \Delta J_{n+2,sph} \bigg] \end{equation} where the subscript $sph$ stands for spherical, i.e., values of $\Delta J_n$ calculated under the spherical planet approximation. The oblateness of a planet couples gravitational moments of different order. Importantly, the effect of oblateness is small, of the order $\varepsilon \sim 0.1$ for Saturn. Note also that $\Delta J_{n-2}$ and $\Delta J_{n+2}$ are opposite in sign to $\Delta J_n$ for the flow profiles considered here\footnote{This does not necessarily have to be the case. The sign of $\Delta J_n$ depends on the functional form of the differential flow $\epsilon$ and the decay depth adopted, and can change when the depth is changed.} and the expression in the bracket is small. The effect of oblateness is roughly proportional to $n$, which implies that it has a greater effect on higher order $\Delta J_n$. This makes intuitive sense because higher order $\Delta J_n$ probe the density in regions very close to the surface and are therefore more sensitive to the shape and oblateness of the planet. Numerous studies have investigated the impact of assuming a spherical background state on the $\Delta J_n$'s and compared their values from the (spherical) Thermal Wind equation and methods such as Concentric Maclaurin Spheroids (CMS) or the Euler equation that account for the non-spherical shape of the planet \citep[e.g.][]{Kaspi2016, Cao2017a}.
The differences in the $\Delta J_n$ values from these two approaches do not exhibit the simple proportionality to the gravity moment degree $n$ that we propose above. We suspect this is because our perturbative approach breaks down for flows of moderate depth. \section{Characterisation of Saturn's Differential Rotation} \subsection{Numerical Calculations} \label{sec:calculation} {\it Cassini's} measurements have revealed the deviations of the higher order gravitational moments from solid body rotation, and we adopt rough values of these in our work \citep{Iess2019}: \begin{equation} \Delta J_6 \sim (5 \pm 0.5) \times 10^{-6} \; , \; \Delta J_8 \sim (-5.5 \pm 0.5) \times 10^{-6} \; , \; \Delta J_{10} \sim (3.5 \pm 0.5) \times 10^{-6} \end{equation} What is remarkable about these values is their similarity in magnitude even though $n$ increases from 6 to 10. It is immediately clear that an exponentially decaying flow cannot match these observations for any reasonable value of $a$, the amplitude of the differential flow (Figure~\ref{fig:deltajs}). We therefore must resort to another form for the flow profile. The next natural step in formulating a flow profile is to consider the observed cloud-top motion as a surface manifestation of differential rotation in the planet's interior. There is no reason to suppose that winds and flows in the deep interior match the flow observed at the cloud-top level \citep[in fact, the observations indicate some vertical shear, see][]{Garcia-Melendo2011}. However, a sinc-like cylindrical flow profile that matches the surface flows of Saturn roughly reproduces the higher order gravitational moments surprisingly well \citep{Iess2019,Galanti2019} and also happens to have nice analytical properties (the sinc function is coincidentally the zeroth-order spherical Bessel function $j_0$). In reality, it is not the lack of a differential flow profile that fits the data that has been the problem for Saturn.
Degeneracy is the reason we cannot make conclusive statements about Saturn's flow properties using these measurements \citep{Iess2019,Galanti2019}. Cloud-top motion extended deep into the planet as flows on cylinders \citep{Busse1976} is plausible provided the cylinders are external to the electrically conducting region. It is a useful working hypothesis due to the simplicity of the centrifugal potential that arises as a result of this flow. A more realistic model would include a radial, depth-dependent decay term \citep[e.g.][]{Galanti2017a,Galanti2019}, but we cannot accommodate a radial dependence in our simple model because it violates Equation~\ref{eq:hydrostatic}. We therefore adopt a general sinc-like flow profile in our study. For Saturn, differential rotation, as manifested on the surface and in the zonal flows, is known to be of the order of a few percent of the solid body rotation rate ($\sim 4 \%$). This is a rough estimate and the exact value is not known.\footnote{This is due to the difficulty posed by the inference of the solid body rotation rate for Saturn \citep[see][]{Smith1982,Helled2015}, which still has some uncertainty and cannot be inferred from the highly axisymmetric magnetic field \citep{Cao2011}. This uncertainty is, however, much smaller than the zonal flows.} Given these uncertainties and the fact that flow in the interior may be different from cloud-top motion, we allow the amplitude and the depth of the flow to vary significantly from the values inferred from cloud-top motion. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{profiles_combined.pdf} \caption{Sample differential rotation profiles considered in this study: a sinc profile truncated at its first zero (red), a sinc profile with truncation at the second zero (blue), and a sinc$^2$ profile truncated at the second zero (green). The width and height of each oscillation are varied and the corresponding deviations in the gravitational moments are calculated.
The cloud-top motions of Saturn are shown in black for two different rotation rates: 10h 39m 24s and 10h 34m 13s \citep{Read2009}.} \label{fig:sample_profiles} \end{figure} Three broad functional forms based on the sinc function are chosen in the study (Figure~\ref{fig:sample_profiles}). The first is a sinc function truncated at its first zero, with no flow interior to it. \begin{equation} \epsilon (s) = \begin{cases} A \; \mathrm{sinc} \left( B \left(1-s \right) \right), & 1 - \pi / B < s < 1 \\ 0 , & 0 < s < 1 - \pi / B \end{cases} \label{eq:1stprofile} \end{equation} The second is a sinc squared functional form, which has only prograde flow and is truncated at the position of its second zero. \begin{equation} \epsilon (s) = \begin{cases} A_1 \; \mathrm{sinc^2} \left( B_1 \left(1-s \right) \right), & 1 - \pi / B_1 < s < 1 \\ A_2 \; \mathrm{sinc^2} \left( B_2 \left(1-s \right) \right), & 1 - \pi / B_1 - \pi / B_2 < s < 1 - \pi / B_1 \\ 0 , & 0 < s < 1 - \pi / B_1 - \pi / B_2 \end{cases} \label{eq:2ndprofile} \end{equation} The third form has equatorial prograde flow and mid-latitude retrograde flow, with the sinc function truncated at the position of its second zero. \begin{equation} \epsilon (s) = \begin{cases} A_p \; \mathrm{sinc} \left( B_p \left(1-s \right) \right), & 1 - \pi / B_p < s < 1 \\ A_r \; \mathrm{sinc} \left( B_r \left(1-s \right) \right), & 1 - \pi / B_p - \pi / B_r < s < 1- \pi / B_p \\ 0 , & 0 < s < 1 - \pi / B_p - \pi / B_r \end{cases} \label{eq:3rdprofile} \end{equation} For the second and third functional forms, which have mid-latitude flow (prograde or retrograde), we allow the equatorial and mid-latitude flows to have independent amplitudes and depths. Although analytic forms are chosen for $\epsilon (s)$, subsequent calculations unavoidably involve numerical solutions and approximations, because fully analytical calculations for the sinc profile are prohibitively time consuming.
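As a concrete illustration, the truncated profile of Equation~\ref{eq:1stprofile} is straightforward to evaluate numerically. The Python sketch below uses purely illustrative values for $A$ and $B$; note that NumPy's \texttt{sinc} is the normalized $\sin(\pi x)/(\pi x)$, so its argument must be rescaled to recover the $\sin(x)/x$ form used in the text:

```python
import numpy as np

def sinc_profile(s, A, B):
    """Truncated-sinc flow profile of the first functional form:
    eps(s) = A * sinc(B (1 - s)) for 1 - pi/B < s < 1, and 0 otherwise.
    A and B are illustrative, not fitted, values."""
    s = np.asarray(s, dtype=float)
    x = B * (1.0 - s)
    # np.sinc(t) = sin(pi t)/(pi t), so divide the argument by pi
    # to obtain the unnormalized sinc(x) = sin(x)/x.
    eps = A * np.sinc(x / np.pi)
    return np.where(s > 1.0 - np.pi / B, eps, 0.0)

A = 0.04             # ~4% of the solid body rotation rate
B = np.pi / 0.1      # truncation at a cylindrical depth of 10% of R
s = np.linspace(0.0, 1.0, 1001)
eps = sinc_profile(s, A, B)
```

At the surface ($s = 1$) the profile equals $A$, and it vanishes for $s \le 1 - \pi/B$, matching the piecewise definition above.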
The entire calculation can be performed rapidly for exponential flow profiles. Therefore, we choose to represent the functional forms presented above as a linear combination of exponentials. A set of 15 exponentials with a wide range of decay depths, sufficient to give the desired accuracy, is chosen for this purpose: \begin{equation} \epsilon (s) = \sum_{i=1}^{15} \alpha_i e^{-2i (1-s)} \label{eq:exponentials} \end{equation} The details of the numerical calculations are given in Appendix D. We now use this simple model to calculate gravitational moment deviations due to differential flow in Saturn and statistically characterize the dependence of these deviations on flow properties. This allows us to quantify the degeneracy in flow properties: a task that cannot be accomplished using the available suite of complex models due to their computational cost. Note that our approach does not work for Jupiter because it has significant hemispheric asymmetry, as indicated by the measurement of the odd moments \citep{Iess2018}. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.48\linewidth]{profiles_1st_oscillation.pdf}} \subfigure[]{\includegraphics[width=0.48\linewidth]{sinc_1st_osc_Deltaj10_dependence.pdf}} \caption{(a) Sinc profiles with varying amplitudes and depths truncated at the first zero that are used in this study (colored by $\Delta J_{10}$ values). (b) The calculated values of $\Delta J_8$ plotted against $\Delta J_6$, with $\Delta J_{10}$ indicated by color. The size of the scatter points corresponds to the three differential rotation amplitudes in (a). The grey dashed box indicates acceptable values of $\Delta J_n$ that agree with the observations.
The magnitudes of $\Delta J_8$ and $\Delta J_{10}$ for a given $\Delta J_6$ are lower than they need to be to match the observations.} \label{fig:sinc_1st_osc} \end{figure} \subsection{The Necessity of Mid-Latitude Retrograde Flow} Efforts to fit the observed gravitational moments using highly accurate and sophisticated forward models have indicated that mid-latitude retrograde flow seems to be necessary to match the data \citep{Iess2019,Galanti2019}. We use our simple linear model to test whether we reach the same conclusion, which serves as a check of both our model and the conclusion reached by other studies. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.48\linewidth]{profiles_sinc2_oscillation.pdf}} \subfigure[]{\includegraphics[width=0.48\linewidth]{sinc2_param_Deltaj10_dependence.pdf}} \caption{(a) Sinc squared profiles truncated at the second zero that are used in this study. The amplitude and depth of each oscillation are allowed to vary to determine how the $\Delta J_n$ values depend on them. (b) The calculated values of $\Delta J_8$ plotted against $\Delta J_6$, with $\Delta J_{10}$ indicated by color. The size of the scatter points corresponds to the three differential rotation amplitudes in (a). The grey dashed box indicates acceptable values of $\Delta J_n$ that agree with the observations. It is clear that none of the amplitudes or depths match the observations with such a differential flow profile.} \label{fig:sinc_2_osc} \end{figure} The deviations in gravitational moments due to prograde flows (equatorial and/or mid-latitude; defined in Equations~\ref{eq:1stprofile} and \ref{eq:2ndprofile}) are shown in Figures~\ref{fig:sinc_1st_osc} and \ref{fig:sinc_2_osc}. The different flow profiles are plotted in the left panels of the figures and the corresponding values of $\Delta J_n$ are shown in the right panels. The dashed lines in the plots mark the measured values of the gravitational moments.
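Since the representation in Equation~\ref{eq:exponentials} is linear in the coefficients $\alpha_i$, it can be obtained by an ordinary least-squares fit. A minimal Python sketch follows; the basis $e^{-2i(1-s)}$, $i = 1, \dots, 15$, is as given in the text, while the target profile parameters are illustrative:

```python
import numpy as np

# Fit a truncated-sinc flow profile with the fixed exponential basis
# e^{-2i(1-s)}, i = 1..15, via linear least squares.
s = np.linspace(0.0, 1.0, 2001)
B = np.pi / 0.2                           # illustrative depth: 20% of R
target = np.where(s > 1.0 - np.pi / B,
                  0.04 * np.sinc(B * (1.0 - s) / np.pi),  # sin(x)/x form
                  0.0)

# Design matrix: one column per basis exponential
basis = np.column_stack([np.exp(-2.0 * i * (1.0 - s)) for i in range(1, 16)])
alpha, *_ = np.linalg.lstsq(basis, target, rcond=None)

residual = basis @ alpha - target
rms = np.sqrt(np.mean(residual**2))
```

With 15 decay depths the residual is small compared to the profile amplitude, which is what makes the rapid exponential-flow machinery applicable to sinc-like profiles.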
None of the flow profiles agree with the observations. The calculated $\Delta J_n$ for prograde flows exhibit behaviour similar to that obtained for exponentially decaying profiles: prograde flows are unable to produce $\Delta J_n$ of similar magnitude for $n = 6, 8,$ and 10. We therefore conclude that, for cylindrical differential flow profiles, the absence of a retrograde flow at intermediate latitudes makes it difficult to fit the observed values of $\Delta J_n$. This is in agreement with other studies that use more sophisticated models. \subsection{Non-Uniqueness of Flow Properties} Given that a prograde flow is not able to reproduce the observed $\Delta J_n$ values, we focus on our third general flow profile: a sinc function truncated at its second zero (Equation~\ref{eq:3rdprofile}). This function is characterised by four parameters: the prograde flow depth and amplitude ($D_p$ and $A_p$) and the retrograde flow depth and amplitude ($D_r$ and $A_r$)\footnote{The depth and amplitude are normalised by Saturn's radius and solid body rotation amplitude, respectively.}. To understand the degeneracy in flow properties, we use affine-invariant Markov Chain Monte Carlo (MCMC) simulations to constrain the parameters that characterize Saturn's differential flow. This is made possible by our choice to write all flow profiles as linear combinations of the same set of exponentials (Equation~\ref{eq:exponentials}). The gravitational moments due to each of these exponentially decaying flows are calculated in advance, and for each flow profile we simply sum the weighted contributions of these moments (see Appendix D). The widely used Python package {\it emcee} is used to perform the MCMC simulations \citep{Foreman-Mackey2013}.
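The quantity sampled in such an analysis is a log-probability that combines uniform priors with a $\chi^2$ likelihood against the measured $\Delta J_n$. Below is a minimal Python sketch of its structure; the prior bounds and the forward model are placeholders (the paper's actual forward model is the precomputed linear combination of exponential-flow moments), and only the overall form corresponds to what {\it emcee} samples:

```python
import numpy as np

# Measured deviations and rough uncertainties quoted earlier in this section.
OBS = np.array([5.0e-6, -5.5e-6, 3.5e-6])     # Delta J_6, J_8, J_10
ERR = np.array([0.5e-6, 0.5e-6, 0.5e-6])

# Hypothetical uniform prior bounds on (A_p, D_p, A_r, D_r);
# the ranges actually used are inspired by the observed surface winds.
BOUNDS = [(0.0, 0.1), (0.01, 0.5), (-0.1, 0.0), (0.01, 0.5)]

def forward_model(theta):
    """Placeholder mapping flow parameters to (dJ6, dJ8, dJ10).
    This stand-in is NOT the paper's actual linear model."""
    A_p, D_p, A_r, D_r = theta
    return np.array([A_p * D_p, -A_p * D_p, A_r * D_r]) * 1.0e-3

def log_prob(theta):
    """Uniform priors plus log L = -chi^2 (the convention used in the
    text); this is the callable an emcee.EnsembleSampler would receive."""
    for x, (lo, hi) in zip(theta, BOUNDS):
        if not lo <= x <= hi:
            return -np.inf                    # outside the uniform prior
    chi2 = np.sum(((forward_model(theta) - OBS) / ERR) ** 2)
    return -chi2
```

With {\it emcee}, this function would be passed to \texttt{emcee.EnsembleSampler(nwalkers, ndim, log\_prob)} and sampled with \texttt{run\_mcmc}.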
We assume uniform priors for the four model parameters that characterize the flow and plot the corresponding posterior probability distributions from the MCMC analysis in order to determine the information provided by these new data. As mentioned before, the chosen range of parameters is inspired by the observed surface zonal wind profile. However, we allow for variation in flow properties to account for the possibility that surface flows do not trace the flows in the interior. We use 20 `walkers' and 35,000 steps in each chain, which gives us a total of 700,000 samples of the posterior probability distribution. We initialize our walkers at randomly chosen positions clustered near a solution that is in reasonably good agreement with the data. We then run the MCMC for an initial 1000-step burn-in phase in order to ensure that all walkers have reached the preferred region of parameter space. The integrated auto-correlation time ($\tau_{f}$) for each of the model parameters is $\sim 100 - 150$. This implies that we draw $700,000/\tau_{f} \sim 4500$ independent samples from the posterior distribution, indicating that we have adequately sampled the relevant parameter space \citep{Foreman-Mackey2013}. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{Diff_Rotation_MCMC_35000.pdf} \caption{Posterior probability distribution for the four parameters used in our model. The red line indicates parameters roughly corresponding to the equatorial prograde cloud-top motion of Saturn, i.e. a $\sim$ 4 \% flow amplitude with respect to the background rotation rate and a latitudinal extent of $\pm 30^{\circ}$ \citep{Read2009}. The black dashed lines mark the 16th, 50th, and 84th quantiles for the 1D histograms. We note that there is a large degeneracy between the flow depth and amplitude for both the prograde and the retrograde flow. In addition, there is some anti-correlation between the properties of the prograde and the retrograde flow.
The posteriors for each parameter show that a wide range of values yield acceptable gravitational moments given the data.} \label{fig:sinc_4param_MCMC} \end{figure} The posterior probability distributions for the prograde and retrograde flow amplitude and depth are shown in Figure~\ref{fig:sinc_4param_MCMC}. This corner plot gives us the posteriors for each of these parameters as well as the correlations for all pairs of parameters. The surface (cloud-top) flow amplitude and the extent of the prograde jet on Saturn are marked by red lines in these plots. One thing that becomes immediately evident is that the data do not constrain the parameters very tightly. The posteriors for the parameters taper off at the edges of the parameter space but are not that different from the assumed flat prior for the MCMC simulations. This implies that a wide range of flow parameters can reasonably explain the observed gravitational moments, given the errors in our measurements. Correlations between flow parameters allow us to understand the non-uniqueness of the possible solutions and place greater constraints than the 1D histograms. We find that the flow depth and amplitude for both the prograde and retrograde flow are strongly correlated. A shallow flow with a large amplitude and a deep flow with a small amplitude produce similar values for the deviations in gravitational moments. Moreover, we note that correlations between properties of the prograde and retrograde flow are weak. The amplitude of the prograde flow and the depth of the retrograde flow are almost uncorrelated. As for the other parameter pairs, we find that the depth and the amplitude of prograde and retrograde flow are {\it roughly} anti-correlated with each other (the actual posterior distribution is not simple). That is, the retrograde flow tends to be shallow if the prograde flow is deep (and vice versa) and the amplitude of the retrograde flow tends to be smaller if the prograde flow is strong (and vice versa). 
In Figure~\ref{fig:sinc_4param}a, we plot a sample of the flow profiles for which the MCMC calculations are performed, colored according to their log-likelihood ($- \chi^2$). These profiles are part of the MCMC `chain' as the simulation explores the parameter space to fit the observed gravitational moments. The profiles with the highest log-likelihood (equivalent to the smallest $\chi^2$) best fit the observations. Small-amplitude flows that are deep tend to fit the observations as well as large-amplitude shallow flows. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.55\linewidth]{chain_profiles_35000.pdf}} \subfigure[]{\includegraphics[width=0.49\linewidth]{sinc_4param_Deltaj10_dependence_2.pdf}} \subfigure[]{\includegraphics[width=0.49\linewidth]{sinc_4param_Deltaj10_dependence.pdf}} \caption{(a) A sample of profiles corresponding to MCMC runs, colored according to their log-likelihood values. The profiles with the largest log-likelihood (smallest $\chi^2$) best fit the data. (b) The inclusion of retrograde flow at mid-latitudes enables us to match the model to the data. The region of interest is densely populated with models with varying differential flow properties. (c) All the plotted points here correspond to model properties that agree with the observed values of $\Delta J_8$ and $\Delta J_6$. The grey dashed lines demarcate acceptable values for $\Delta J_{10}$. $\Delta J_{10}$ is plotted against the flow depth ($D_p + D_r$) and the color indicates the amplitude of the prograde flow. The size of the scatter points is a proxy for the amplitude of the retrograde flow. The black line traces models that have all properties held constant apart from the depth of the retrograde flow. A deeper and stronger retrograde flow yields a higher $\Delta J_{10}$ value.} \label{fig:sinc_4param} \end{figure} Inclusion of retrograde flow clearly improves the agreement between the data and the model (Figure~\ref{fig:sinc_4param}b).
It is therefore worth noting how the properties of retrograde flow influence the gravitational moments. This helps us understand why including retrograde flow seems necessary to fit the data. Essentially, models with only prograde flow that match $\Delta J_6$ and $\Delta J_8$ underestimate the value of $\Delta J_{10}$ (see Figures~\ref{fig:sinc_1st_osc} and \ref{fig:sinc_2_osc}). Including retrograde flow and increasing its amplitude or depth increases $\Delta J_{10}$ while leaving the other two gravitational moments relatively unaltered (Figure~\ref{fig:sinc_4param}c). This is what allows us to obtain a reasonable match between the model and the observations. \section{Discussion and Conclusions} The ideas and theory presented here aid in the development of an intuitive understanding of how the properties of the differential rotation are related to the deviations (from solid body rotation) in gravitational moments. Gravitational moments are given by the integral of the density with a Legendre polynomial of a certain order. In our formalism, density and gravitational potential are related simply: $2 K \rho = V + V_{rot}$. The contribution due to differential rotation, $\delta V_{rot}$, therefore directly relates to the density, and hence to the gravitational moments, which are now given by the integral of the gravitational potential with the Legendre polynomial. Additionally, $\delta V_{rot} = q \int_{0}^{s} 2 \epsilon(s') s' \mathrm{d}s'$ is related to the cylindrical flow properties contained in $\epsilon$. This establishes a somewhat more comprehensible relation between differential rotation properties and the gravitational moments. Using this model, we have shown that deviations in gravitational moments alone do not suffice to constrain the differential flow properties of Saturn very accurately, owing to the errors in the measurements and the modeling.
Even using a single sinc-like cylindrical flow profile, which already severely restricts the parameter space, we are able to match the observed gravitational moments for a wide range of flow properties. There is added non-uniqueness in the functional form of the flow profile as well, and the surface flows, although coupled to the interior dynamics, may be substantially different from those in the interior \citep[e.g.][]{Kong2018}. To deal with this non-uniqueness, we must seek other ways of constraining the flow depth or amplitude that will narrow down the range of possible flow properties. This requires invoking additional physics that might place constraints on the differential flow of Saturn. One promising way of constraining flow depth is by studying the interaction of interior differential flow with the planetary magnetic field. Electrical conductivity rises steadily as one ventures deeper into the planet's interior, and any differential flow would lead to the generation of currents and Ohmic dissipation. In the deep interior, the conductivity is very high and any differential rotation is quickly damped out by MHD drag. However, in the semi-conducting region, it is possible to have small amplitude flows with speeds of $\sim$ 1 cm/s to 1 m/s. Electrical conductivity and Ohmic dissipation, along with the measured interior heat flux of the planet, place this upper limit on the amplitude of differential flow because strong flows would generate a large amount of heat, leading to a discrepancy between the expected and the measured heat flux. \cite{Cao2017b} studied this interaction in the semi-conducting regions of Jupiter and Saturn to determine whether flow-induced magnetic field variations can be measured with the Juno mission or the {\it Cassini} grand finale orbits. Their work indicates that for an intermediate electrical conductivity $\sigma \approx 10^{-2}$ S/m, it is not possible to have large amplitude (100 m/s) flows. However, 1 m/s flows are possible.
The electrical conductivity reaches this value at $\sim$ 0.85 R$_{\mathrm{Saturn}}$, which implies that the differential flow must decay from hundreds of m/s at the surface to a negligible 1 m/s at this radius. This could be used as a constraint on the flow depth in our efforts to determine the differential flow properties. Indeed, the measured gravity moments of Saturn seem to imply an upper limit on the flow depth that is consistent with this constraint placed by MHD \citep{Iess2019, Galanti2019}. Notably, Ohmic dissipation places a constraint on the {\it radial} depth of the flow, not just the cylindrical depth $r_c$ considered in this work. It is therefore possible to have differential rotation at high latitudes as long as it becomes negligible at this transition radius. Using such a constraint and our results (in particular, Figures~\ref{fig:sinc_4param_MCMC} and \ref{fig:sinc_4param}), we note that deep flows with small amplitudes are excluded and shallow flows with large amplitudes are preferable because they agree both with the gravity data and with the theoretical expectations from magnetohydrodynamics and Ohmic dissipation. More specifically, the flow depth needs to meet the following conditions: $1/D_p + 1/D_r \lessapprox 25$ with both $ 1/D_p \geq 6$ and $1/D_r \geq 6$. The required amplitude of the flow depends on the flow depths chosen and can vary from $\sim 4 - 8\%$. In conclusion, we have presented in this paper a linear model that relates differential flow to gravitational moments. We demonstrated its utility both in developing an intuitive understanding of the underlying phenomenon and in enabling calculations rapid enough for a statistical analysis of the flow parameters. Our analytical calculations for a shallow flow are useful for understanding the relationship between flow properties and the resulting gravitational moments.
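The depth conditions quoted above can be encoded as a simple feasibility check; a sketch, with thresholds as quoted in the text and depths normalised to Saturn's radius:

```python
def depth_allowed(D_p, D_r):
    """Check the flow-depth conditions quoted in the text:
    1/D_p + 1/D_r <~ 25, with both 1/D_p >= 6 and 1/D_r >= 6.
    D_p and D_r are the prograde/retrograde depths as fractions of R."""
    return (1.0 / D_p + 1.0 / D_r <= 25.0
            and 1.0 / D_p >= 6.0
            and 1.0 / D_r >= 6.0)

# e.g. two flows each extending ~10% of the radius satisfy the conditions,
# while two very shallow (5%) or very deep (50%) flows do not.
```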
In addition, we have derived an expression that quantifies the effect of a planet's oblateness on its gravitational moments, providing first order corrections to the assumption of sphericity that we made to make our model linear. This expression will prove useful in comparing the effect of oblateness with other small corrections to the gravitational moments. Moreover, the linearity of the model allows us to make a quick forward calculation for the gravitational moments given a (cylindrical) flow profile. We used this feature of our model to constrain the properties of Saturn's differential flow based on the higher order gravitational moments measured by {\it Cassini} by performing MCMC simulations. This calculation allowed us to quantify the widely suspected non-uniqueness of Saturn's flow parameters inferred from the gravity data. We found that retrograde flow seems necessary to explain the observed gravitational moments as flows that are purely prograde tend to underestimate $\Delta J_{10}$. A wide range of flow parameters yield gravitational moments that agree with the observations and the flow depth and amplitude are anti-correlated. Given the non-uniqueness of the flow properties, additional physics needs to be invoked to place tighter constraints on them. Matching the gravitational moments along with the theoretical expectations from Ohmic dissipation due to flow in the semi-conducting region of Saturn and its heat flux might provide an additional way of diminishing the allowed parameter space of differential flow properties. \section*{Acknowledgements} Y.C. thanks Hao Cao for stimulating discussions, Steve Markham for help with {\it Mathematica} \textcopyright, and Heather Knutson for discussions on MCMC. We are grateful to the reviewers for their thoughtful comments and suggestions which improved this manuscript. This work uses matplotlib \citep{Hunter2007}, scipy \citep{scipy}, and sympy \citep{Meurer2017}. 
\bibliographystyle{apalike} \section{Introduction} \label{sec:intro} The external gravity field of an isolated, uniformly rotating, hydrostatic planet is expressible in the form \begin{equation} V_{ext} (r, \theta) = \frac{G M}{r} \left[ 1 - \sum_{n=1}^{\infty} \bigg(\frac{a}{r}\bigg)^{2n} J_{2n} P_{2n}(\cos \theta) \right] \label{eq:V_ext} \end{equation} where the gravitational moments $J_n$ are given by \begin{equation} J_n = - \frac{1}{M a^n} \int_{R^3} \rho (\bm{r}) r^n P_n (\cos \theta) \bm{\mathrm{d}^3r} \label{eq:moment} \end{equation} Here $M$ is the planet's mass, $a$ is its radius (equatorial or volume averaged), $P_n$ is the Legendre polynomial of order $n$, $\rho$ is the density, $G$ is the gravitational constant, and $\theta$ is the colatitude. The dimensionless coefficients $J_{2n}$ of this expansion, with $n$ a positive integer, exist because the planet rotates. The expansion contains only even moments because the rotational perturbation is quadratic in the rotation rate and therefore does not distinguish between the Northern and Southern hemispheres. In linear response theory, $J_2$ is proportional to $q = \Omega^2 R^3 / G M$ and all higher $J$'s are zero, where $\Omega$ is the rotation rate and $R$ is the planet's radius. This assumes rigid body rotation. However, $q$ is not very small for giant planets and so the linear response is far from adequate, especially given the high accuracy with which the gravity field can be measured by orbiting spacecraft such as {\it Juno} and {\it Cassini}. The higher $J$'s arise explicitly from non-linearity; that is, the centrifugal potential is exactly of degree 2, but the change in shape and the resulting self-gravity introduce the higher order terms, so that $J_{2n}$ is approximately proportional to $q^n$ \citep{Hubbard1975,Hubbard1982}. Moreover, even low order moments are affected by higher order corrections.
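For concreteness, $q$ can be evaluated for Saturn with round published values; the numbers below are illustrative inputs, not necessarily those used elsewhere in the paper:

```python
import math

# q = Omega^2 R^3 / (G M) for Saturn, with round published values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.683e26         # Saturn's mass, kg
R = 6.0268e7         # Saturn's equatorial radius, m
period = 10.56 * 3600.0          # ~10 h 34 m rotation period, s
Omega = 2.0 * math.pi / period   # solid body rotation rate, rad/s

q = Omega**2 * R**3 / (G * M)
```

This gives $q \approx 0.16$, which is indeed not small, illustrating why a purely linear response in $q$ is inadequate for the giant planets.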
Although the $J$'s are linearly related to the density anomalies of the same harmonic degree, the underlying non-linearity makes the theory complicated, even for simple models of planetary structure, and it can be difficult to develop an intuition for the dependence of $J$'s on features of that structure, which is, after all, the whole point of making the precise measurements. However, a great deal of successful effort has been put into forward modeling of the $J$'s from physical models of the planet's interior \citep[e.g.][]{Hubbard2013,Galanti2017,Cao2017a}. This is necessarily non-unique (you cannot invert the $J$'s to determine planet structure except in the context of models with only a few degrees of freedom) but it has provided us with a great deal of understanding of fluid planets. However, fluid bodies do not rotate as rigid bodies. This is self-evident in the zonal flows observed in their atmospheres. In general, this flow need not be purely cylindrical; purely cylindrical flow, with no variation of the east-west flow along an axis parallel to the rotation axis, is the special case described by the Taylor-Proudman theorem. Deviations from the Taylor-Proudman theorem in a fluid with large density variations but no MHD effects arise primarily from latitudinal entropy gradients but also potentially from the Reynolds stress \citep[e.g., Equation 22 of][]{Liu2008,Kaspi2009}. The advantage of assuming cylindrical differential rotation is that it allows for the definition of a centrifugal potential, the gradient of which provides the acceleration and thus the wind strength at each cylindrical radius. At first sight, it might seem that even this simple kind of differential rotation only exacerbates the difficulty of developing an intuitive or approximate understanding of the $J$'s. However, there is a sense in which the effect of differential rotation may actually be easier to understand than the part due to rigid body rotation.
The reason is that the centrifugal potential is no longer just degree 2, but contains higher order moments, much higher if the differential rotation is largely confined to a thin shell. One can then compute the gravitational response to that part alone using linear theory. Moreover, we can then approximate the planet as spherical. This will be an adequate approximation (with a fractional error of order $q$) provided the differential rotation is sufficiently small relative to the total rotation. This is the approach presented in this paper. It assumes that one can isolate the contribution to the $J$'s that arises from the differential rotation from the often larger or comparable part that arises from the rigid part of the rotation. This can often be done because the higher order $J$'s arising from rigid body rotation are usually confined to a narrow range of possible values based on interior models that fit the lower $J$'s\footnote{This is predicated on the standard view that we actually know the structure of the outermost ten percent or so of the planet very well: an adiabatic hydrogen-dominated region for Jupiter and Saturn.} \citep[e.g.][]{Iess2019}. Moreover, the emphasis on great precision that usually accompanies analysis of the interior models need no longer apply -- one might well be content with getting the differential rotation of a planet to just tens of percent, especially if the inference is necessarily non-unique, but also because the observational data can have substantial errors at the higher order moments. This paper is motivated by a desire to develop a better intuition about what determines the differential rotation contribution to the $J$'s, more specifically how the amplitude and pattern of that contribution are set by the amplitude, depth, and shape of the differential flow profile. It focuses on the high order $J$'s where the effect of differential rotation is more readily discerned in the data.
In keeping with our emphasis on transparency and simplicity, we use an $n = 1$ polytropic model (pressure everywhere proportional to the square of the density). This is not essential to the idea of our approach but it leads to a linear equation for hydrostatic equilibrium and enables the derivation of analytical formulas. As we shall explain, it is not necessary that the polytrope apply everywhere; models with a separate but small core can also be described in this way. The biggest error in this assumption lies in the breakdown of that equation of state as one approaches the surface, but this is still a smaller error than the error arising from assuming sphericity. \begin{figure} \centering \includegraphics[width=\linewidth]{model_sketch.pdf} \caption{A depiction of the cylindrical differential rotation in the meridional plane. 1D projections of the flow speed as a function of the cylindrical radius and latitude are also shown; the two are clearly coupled. A deeper flow would therefore extend to higher latitudes and a shallower flow would be constrained to lower latitudes. In this work, we use the terms `deep' and `shallow' to refer to the {\it cylindrical} depth of the flow.} \label{fig:model_sketch} \end{figure} In Section 2, we describe our model and derive the main results for an arbitrary choice of functional form for the cylindrical differential rotation. In Section 3, we show how these results can yield simple scaling laws and simple analytical formulas for the case of a differential flow that declines exponentially with depth. We also show how to estimate the corrections that arise from planetary oblateness (i.e. corrections of order $q$). In Section 4, we apply our procedure to the {\it Cassini} observations of Saturn and show how we can recover flows similar to those suggested by more precise calculations, but also how they can be quite non-unique.
We conclude with a discussion of what this approach tells us about the general features of gravitational effects caused by differential rotation. \section{Theoretical Model} \label{sec:theory} If the angular velocity within the planet depends only on the cylindrical radius $r_c = r \sin \theta$, then the sole dependence of flow properties on $r_c$ allows us to write their effect on the hydrostatic equilibrium of the planet as a gradient of a potential $V_{rot}$: \begin{equation} \frac{1}{\rho} \nabla P = \nabla (V + V_{rot}) \label{eq:hydrostatic} \end{equation} where $\rho$ is the density, $P$ is the pressure, and $V$ is the gravitational potential. Figure~\ref{fig:model_sketch} shows an example of the differential flows considered in this work. All references to the depth of a flow apply to the {\it cylindrical} radius. We need an equation of state to relate the pressure to the density. This hydrostatic equation is difficult to solve for an arbitrary equation of state but it has a particularly simple form for one with polytropic index $n = 1$, i.e., $P = K \rho^2$. Here, $K$ is a constant of proportionality that is mainly determined by the properties of the degenerate electron gas but is also somewhat affected by the planet's entropy. Using this polytropic equation, the hydrostatic equilibrium simplifies to: \begin{equation} \nabla (2 K \rho -V - V_{rot}) = 0 \end{equation} This can only be satisfied if $2 K \rho = V + V_{rot}$. We assume that the contribution of differential rotation to the rotational potential is small compared to the background solid-body rotation potential. 
We normalize the potential to the monopole, $G M / R$, so we can write the rotational potential as: \begin{equation} V_{rot} = q \; \int_{0}^{s} (1 + \epsilon(s'))^2 s' \mathrm{d}s' \approx q \; \int_{0}^{s} (1 + 2\epsilon(s')) s' \mathrm{d}s' \label{eq:Vrot_integrate} \end{equation} where $s = r_c / R$ is the cylindrical radius normalised to the planet's radius, $\epsilon (s)$ is the differential flow profile as a function of $s$, the integrand is Taylor-expanded to first order under the assumption that $\epsilon \ll 1$, and $q$ is given by \begin{equation} q = \Omega^2 R^3 / G M \end{equation} Here, $G$ is the gravitational constant, $M$ is the planet's mass, $R$ its radius, and $\Omega$ is the background solid body rotation rate. The first term is simply the solid body rotation term. The second term, linear in $\epsilon$, is henceforth denoted $\delta V_{rot}$. The Poisson equation relates the Laplacian of the potential to the density, and using the above relationship between density and potential, we obtain: \begin{equation} \nabla^2 \delta V = - 4 \pi G \, \delta\rho = - k^2 (\delta V + \delta V_{rot}) \end{equation} where $k^2 = 2 \pi G / K$, $\delta \rho$ is the density perturbation induced by the differential rotation, and $\delta V$ is the self-gravity potential arising from $\delta V_{rot}$. We can solve this by writing $\delta V = V_h + V_p$ as a sum of a general (homogeneous) and a particular solution. The general solution is the solution to the homogeneous Helmholtz equation and is given by: \begin{equation} V_h = \sum_{Even \; n} A_n \; j_n(kr) \; P_n(\cos \theta) \label{eq:homogeneous} \end{equation} where the $j_n$ are the spherical Bessel functions, the $P_n$ are Legendre polynomials of order $n$, and the $A_n$ are coefficients to be determined. To calculate the particular solution $V_p$, we make the additional assumption that the differential rotation is significant near the surface of the planet and decays away rapidly inside the planet. This is a reasonable assumption for Jupiter, perhaps less so for Saturn.
In this approximation, the curvature terms in the cylindrical Laplacian can be ignored: \begin{equation} \frac{\mathrm{d}^2 V_p}{\mathrm{d} r_c^2} = - k^2 \delta V_{rot} \label{eq:V_p_diffeq} \end{equation} Solving for $V_p$, we get an expression which is a function of the cylindrical radius $r_c$. However, the homogeneous solution ($V_h$) and the external potential ($V_{ext}$, obtained from solving $\nabla^2 V_{ext} =0$) are normally expressed as functions of the spherical coordinates. We therefore need to project $V_p$ from cylindrical to spherical coordinates to match the boundary condition for the potential at the planet's surface. This can be done using the Legendre polynomials, which form a complete basis set for axisymmetric functions of the spherical coordinates: \begin{eqnarray} V_p (r \sin\theta) = \sum_{Even \; n} \; V_{pn}(r) \; P_n (\cos\theta) \label{eq:V_p_projection1} \\ V_{pn}(r) = \frac{2n+1}{2} \int_{-1}^{1} V_p(r \sin\theta) \; P_n(\cos\theta) \; \mathrm{d}(\cos\theta) \label{eq:V_p_projection2} \end{eqnarray} We want to calculate the deviation in gravitational moments due to a small shallow flow, and we can do that by applying boundary conditions to the potential at the planetary surface. We refer to these deviations as $\Delta J_n$, where the $J$'s are defined by Equation~\ref{eq:moment}. We assume that this surface can be approximated by a sphere (we deal in Section~\ref{sec:oblate} with the error introduced by this). The potential and its derivative must be continuous at the surface, which gives us two equations that relate the quantities $V_{pn}$, $A_n j_n (kr)$, and $\Delta J_n$ at $r = R$ and their derivatives. \begin{eqnarray} V_{pn} (R) + A_n j_n (kR) = - \Delta J_n \\ R V_{pn}' + k R A_n j_n'(kR) = (n+1) \Delta J_n \label{eq:boundary_cond} \end{eqnarray} Here, the prime on $V_{pn}$ denotes differentiation with respect to $r$, while $j_n'(kR) = \mathrm{d} j_n(x)/\mathrm{d}x$ evaluated at $x = kR$, i.e., the derivative of $j_n$ with respect to its argument.
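The projection of Equation~\ref{eq:V_p_projection2} is easy to sketch numerically. As a check (our own test case, not from the text), a particular solution $V_p \propto \sin^2\theta$ at $r = R = 1$ projects exactly onto $P_0$ and $P_2$ with coefficients $2/3$ and $-2/3$, since $\sin^2\theta = \tfrac{2}{3} - \tfrac{2}{3} P_2(\cos\theta)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_legendre

def project_onto_legendre(v_p, n):
    """V_pn = (2n+1)/2 * Int_{-1}^{1} V_p(sin theta) P_n(cos theta) d(cos theta),
    written with u = cos(theta), sin(theta) = sqrt(1 - u^2)."""
    val, _ = quad(lambda u: v_p(np.sqrt(1.0 - u * u)) * eval_legendre(n, u),
                  -1.0, 1.0)
    return 0.5 * (2 * n + 1) * val

# Check: sin^2(theta) = (2/3) P_0 - (2/3) P_2(cos theta), so the projection
# must return 2/3, -2/3, and 0 for n = 0, 2, 4 respectively.
coeffs = [project_onto_legendre(lambda s: s ** 2, n) for n in (0, 2, 4)]
```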
Eliminating $A_n$, we obtain the following expression for $\Delta J_n$: \begin{equation} \Delta J_n = \frac{R V'_{pn}(R) j_n(kR) - kR V_{pn}(R) j'_n(kR)}{(n+1)j_n(kR) + kR j'_n(kR)} \label{eq:deltaj_n} \end{equation} For a planet with no core, $k R = \pi$. For a planet with a high density (rock or ice) core, $k R = \pi (1-\delta)$, where $\delta$ is the ratio of core to planet mass. However, for the models we consider, this correction to the $\Delta J_n$'s is small. In the next section, we demonstrate the utility of this model, which arises from its simplicity. \section{Analytical Solutions} \label{sec:analytic} We use the theoretical framework established in the previous section to carry out analytic calculations of the gravitational moments due to differential flow in the limit of very shallow flow. Although we do not use this analytic solution directly to fit Saturn's higher order gravitational moments, it is instructive to perform this exercise to gain an understanding of how the deviations in gravitational moments relate to the properties of differential flow. \subsection{Limit of Shallow Flow} Let us assume that the differential flow profile of a fluid planet can be represented by a steeply declining exponential which is a function only of the cylindrical radius: \begin{equation} \epsilon (s) = a \; e^{-b \; (1-s)} \end{equation} Here, $a$ and $b$ are constants that characterize the amplitude and the depth of the flow, and $s = r_c / R$ is again the normalised cylindrical radius. We can integrate this function analytically to obtain $V_{rot}$ and calculate $V_p$ from Equation~\ref{eq:V_p_diffeq}, with the boundary condition that $V_p$ and its derivative vanish at $s = 0$. To proceed further with the analytic calculation, we need to project $V_p$ onto spherical coordinates using the Legendre polynomials (Equation~\ref{eq:V_p_projection1}).
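Equation~\ref{eq:deltaj_n} can be evaluated directly with standard spherical Bessel routines. A useful sanity check (our own): if $V_{pn}$ happens to coincide with a homogeneous solution $\propto j_n(kr)$, the numerator cancels identically and $\Delta J_n = 0$, as it must:

```python
import numpy as np
from scipy.special import spherical_jn

def delta_j(n, k, R, v_pn, dv_pn):
    """Deviation in moment J_n given the particular solution V_pn(R) and its
    radial derivative V'_pn(R); primes on j_n are w.r.t. the argument."""
    jn = spherical_jn(n, k * R)
    djn = spherical_jn(n, k * R, derivative=True)
    num = R * dv_pn * jn - k * R * v_pn * djn
    den = (n + 1) * jn + k * R * djn
    return num / den

k, R, n = np.pi, 1.0, 6   # coreless planet: kR = pi
# Feeding in the homogeneous solution j_n(kr) must give Delta J_n = 0, since
# that piece is already accounted for by the A_n term.
dj_hom = delta_j(n, k, R, spherical_jn(n, k * R),
                 k * spherical_jn(n, k * R, derivative=True))
```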
For very shallow flow, we can use a Taylor series expansion of the Legendre Polynomials up to a certain order (that we choose) and calculate the integrals analytically. We choose to expand the even Legendre Polynomials to second order in $\mathrm{cos} \; \theta$ around $\theta = \pi/2$: \begin{equation} P_n (\mathrm{cos} \; \theta) = {n \choose n/2} \frac{(-1)^{n/2} }{2^n} \bigg[ 1 - \frac{n \; (n+1)}{2} (\mathrm{cos} \theta)^2 \bigg] \end{equation} where $n \choose n/2$ is the binomial coefficient. With this approximation, we can project $V_p$ onto a basis set of Legendre polynomials with $s = r \; \mathrm{sin} \theta / R$. We then have an expression for $V_{pn}$ from equation~\ref{eq:V_p_projection2}. Given that we calculate this integral in the approximation of shallow flow, we can neglect all terms that do not contain the large exponential term in the expression for $V_p$. To calculate $V_{pn}$, we resort to the saddle point method (Appendix A) which works well for strongly peaked functions such as the exponential in the expression for $V_p$. 
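This expansion can be checked against exact Legendre polynomials: it is exact at $\theta = \pi/2$, where $P_n(0) = \binom{n}{n/2}(-1)^{n/2}/2^n$, and the quadratic coefficient follows from Legendre's equation, which gives $P_n''(0) = -n(n+1)P_n(0)$. A quick numerical sketch:

```python
from math import comb
from scipy.special import eval_legendre

def legendre_near_equator(n, u):
    """Quadratic expansion of P_n(u) about u = cos(theta) = 0 for even n:
    P_n(0) * [1 - n(n+1) u^2 / 2], with P_n(0) = C(n, n/2) (-1)^(n/2) / 2^n."""
    p0 = comb(n, n // 2) * (-1) ** (n // 2) / 2 ** n
    return p0 * (1.0 - 0.5 * n * (n + 1) * u ** 2)

# Exact at the equator; the error a small distance away is O(u^4).
err_at_0 = abs(legendre_near_equator(6, 0.0) - eval_legendre(6, 0.0))
err_nearby = abs(legendre_near_equator(6, 0.05) - eval_legendre(6, 0.05))
```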
Evaluating $V_{pn}$ and its derivative at $r = R = 1$ (fractional radius in our calculations) and substituting into our expression for $\Delta J_n$, we obtain (see Appendix B for a detailed derivation): \begin{multline} \Delta J_n = - \frac{2n + 1}{2} {n \choose n/2} \frac {2 a \; q \; \pi^2} {b^3} \frac{(-1)^{n/2} }{2^n} \sqrt{\frac{2 \pi}{b}} \; \times \\ \frac{\bigg[ \left( -\frac{5}{2} + b + \frac{3}{2b} \right) + \frac{n \; (n+1)}{4b} \left( \frac{7}{2} -b - \frac{9}{2b} \right) \bigg] j_n(\pi) - \pi \bigg[ \left(1 - \frac{3}{b} \right) \left( 1 - \frac{n(n+1)}{4b} \right) \bigg] j'_n(\pi)}{(n+1)j_n(\pi) + \pi j'_n(\pi)} \label{eq:O1border} \end{multline} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig_analytical.pdf} \caption{A comparison of the results obtained from the analytical model with terms of order O(1) (Equation~\ref{eq:O1order}), the analytical model with terms of order O(1/b) (Equation~\ref{eq:O1border}), and numerical calculations. The amplitude of differential rotation relative to solid body rotation, $a$, is chosen to be 0.04. The agreement between all the models only becomes good for large $b$. The value of $b$ at which agreement begins to improve is higher for higher order $n$ (as expected). The analytical model is a rather poor approximation for small $b$ because we are calculating high order gravitational moments.} \label{fig:deltajs} \end{figure} This result uses the saddle point expansion (Appendix A) to first order in $1/b$ (constant term plus first order term), as the contributions from higher orders are negligible. If we consider only the lowest order term, we find that \begin{equation} \Delta J_{n} = C_n (-1)^{n/2+1} \; a \; q \; b^{-5/2} \label{eq:O1order} \end{equation} where $C_6 = 8.16$, $C_8 = 6.98$, and $C_{10} = 6.22$. However, these asymptotic (very large $b$) limiting forms are not sufficiently accurate for the case of Saturn, as we shall discuss.
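As we read the bookkeeping, the coefficients $C_n$ come from retaining only the term linear in $b$ in the numerator of Equation~\ref{eq:O1border} (everything else contributes at higher order in $1/b$); a sketch of that reduction (our reconstruction, not the paper's code):

```python
from math import comb, pi, sqrt
from scipy.special import spherical_jn

def c_n(n):
    """Prefactor in Delta J_n = C_n (-1)^(n/2 + 1) a q b^(-5/2), from keeping
    only the O(b) piece of the numerator of the full expression."""
    jn = spherical_jn(n, pi)
    djn = spherical_jn(n, pi, derivative=True)
    prefac = (2 * n + 1) * comb(n, n // 2) * pi ** 2 * sqrt(2 * pi) / 2 ** n
    return prefac * jn / ((n + 1) * jn + pi * djn)

coeffs = {n: c_n(n) for n in (6, 8, 10)}
```

With this reduction, the value $C_6 \approx 8.16$ and the decreasing trend toward $C_{10}$ should be recovered.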
The calculations from this analytical model are compared with the numerical calculations for a shallow flow that decays exponentially in Figure~\ref{fig:deltajs}. The agreement between the analytical model and the numerical calculation improves with the inclusion of the $O (1/b)$ term as well as for high $b$ (increasingly shallow flow). The flow depth at which agreement becomes good is also a function of the order of gravitational moment, as expected. This is simply because $\mathrm{cos} \theta$ changes a lot as $\mathrm{sin} \theta$ changes by a small amount, when $\theta$ is near $\pi/2$. Consequently, the Legendre polynomials of high order $n$ vary rapidly in the region of interest where the flow is non-negligible. The relationship between the flow properties and gravitational moments also becomes apparent in the plot. The deviations in gravitational moments decrease as the flow becomes shallower and as $n$ becomes larger. \subsection{Effect of Planet's Oblateness} \label{sec:oblate} We now address one of the key assumptions that we made in our model, i.e., the sphericity of the planet. Using our framework and perturbation theory, we can estimate the expected alteration in the calculated $\Delta J_n$ due to the non-spherical shape of the planet. To first approximation, the small deviation from sphericity can be captured with the second order Legendre polynomial: \begin{equation} r \rightarrow r \left(1 + \varepsilon P_2 \left( \mathrm{cos} \theta \right) \right) \end{equation} We Taylor-series expand the expression for gravitational moment $J_n$ (equation~\ref{eq:moment}), spherical Bessel function $j_n$ (equation~\ref{eq:homogeneous}), and $V_{pn}$ (equation~\ref{eq:V_p_projection2}) in the small quantity $\varepsilon$, which scales with $q$. 
The product of $P_2$ with $P_n$ can be written as a sum of three Legendre polynomials, $P_{n-2}, P_{n},$ and $P_{n+2}$, where each of the polynomials is weighted by the corresponding Clebsch-Gordan coefficient \citep{Adams1878}. Expanding in small $\varepsilon$ and keeping the first order terms (see Appendix C), we get: \begin{equation} \Delta J_{n,oblate} \approx \Delta J_{n,sph} - \varepsilon \bigg[ \frac{1}{4} (n+1) \Delta J_{n,sph} + \frac{3}{8} (n-1) \Delta J_{n-2,sph} + \frac{3}{8} (n+3) \Delta J_{n+2,sph} \bigg] \end{equation} where the subscript $sph$ stands for spherical, i.e., values of $\Delta J_n$ calculated under the spherical planet approximation. The oblateness of a planet thus couples gravitational moments of different order. Importantly, the effect of oblateness is small, of order $\varepsilon \sim 0.1$ for Saturn. Note also that $\Delta J_{n-2}$ and $\Delta J_{n+2}$ are opposite in sign to $\Delta J_n$ for the flow profiles considered here\footnote{This does not necessarily have to be the case. The sign of $\Delta J_n$ depends on the functional form of the differential flow $\epsilon$ and the decay depth adopted, and can change when the depth is changed.}, so the expression in the bracket is small. The effect of oblateness is roughly proportional to $n$, which implies that it has a greater effect on higher order $\Delta J_n$. This makes intuitive sense because higher order $\Delta J_n$ probe the density in regions very close to the surface and are therefore more sensitive to the shape and oblateness of the planet. Numerous studies have investigated the impact of assuming a spherical background state on the $\Delta J_n$'s by comparing their values from the (spherical) Thermal Wind equation with methods such as Concentric Maclaurin Spheroids (CMS) or the Euler equation that account for the non-spherical shape of the planet \citep[e.g.][]{Kaspi2016, Cao2017a}.
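The first-order oblateness mapping above is simple to apply as a post-processing step on spherical results; a minimal sketch, where the $\Delta J_{n,sph}$ values and $\varepsilon$ are purely illustrative:

```python
def oblate_correction(n, dj_sph, eps):
    """First-order correction mapping spherical Delta J_n values (dict keyed
    by n) to those of a planet with shape r -> r (1 + eps P_2(cos theta))."""
    bracket = (0.25 * (n + 1) * dj_sph[n]
               + 0.375 * (n - 1) * dj_sph.get(n - 2, 0.0)
               + 0.375 * (n + 3) * dj_sph.get(n + 2, 0.0))
    return dj_sph[n] - eps * bracket

# Illustrative spherical values (in units of 1e-6), alternating in sign:
dj_sph = {4: -8.0, 6: 5.0, 8: -5.5, 10: 3.5, 12: -2.5}
dj6_oblate = oblate_correction(6, dj_sph, eps=0.1)
```

The sign alternation of the neighbouring moments partially cancels the $\Delta J_{n,sph}$ term inside the bracket, as noted in the text.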
The differences in the $\Delta J_n$ values from these two approaches do not exhibit the simple proportionality to the gravity moment degree $n$ that we propose above. We suspect this is because our perturbative approach breaks down for flows of moderate depth. \section{Characterisation of Saturn's Differential Rotation} \subsection{Numerical Calculations} \label{sec:calculation} {\it Cassini's} measurements have revealed the deviations of the higher order gravitational moments from solid body rotation, and we adopt rough values of these in our work \citep{Iess2019}: \begin{equation} \Delta \mathrm{J}_6 \sim (5 \pm 0.5) \times 10^{-6} \; , \; \Delta \mathrm{J}_8 \sim (-5.5 \pm 0.5) \times 10^{-6} \; , \; \Delta \mathrm{J}_{10} \sim (3.5 \pm 0.5) \times 10^{-6} \end{equation} What is remarkable about these values is their similarity in magnitude even though $n$ increases from 6 to 10. It is immediately clear that an exponentially decaying flow cannot match these observations for any reasonable value of $a$, the amplitude of differential flow (Figure~\ref{fig:deltajs}). We therefore must resort to another form for the flow profile. The next natural step in formulating a flow profile is to consider the observed cloud-top motion as a surface manifestation of differential rotation in the planet's interior. There is no reason to suppose that winds and flows in the deep interior match the flow observed at the cloud-top level \citep[in fact, the observations indicate some vertical shear; see][]{Garcia-Melendo2011}. However, a sinc-like cylindrical flow profile that matches the surface flows of Saturn reproduces the higher order gravitational moments surprisingly well \citep{Iess2019,Galanti2019} and also happens to have nice analytical properties (the sinc function is coincidentally the zeroth-order spherical Bessel function $j_0$). In fact, the problem for Saturn has not been a lack of differential flow profiles that fit the data.
Rather, degeneracy is the reason we cannot make conclusive statements about Saturn's flow properties using these measurements \citep{Iess2019,Galanti2019}. Cloud-top motion extended deep into the planet as flow on cylinders \citep{Busse1976} is plausible provided the cylinders are external to the electrically conducting region, and it is a useful working hypothesis because of the simplicity of the centrifugal potential that results from such a flow. A more realistic model would include a radial, depth-dependent decay term \citep[e.g.][]{Galanti2017a,Galanti2019}, but we cannot accommodate a radial dependence in our simple model because it violates Equation~\ref{eq:hydrostatic}. We therefore adopt a general sinc-like flow profile in our study. For Saturn, differential rotation, as manifested at the surface in the zonal flows, is known to be of the order of a few percent of the solid body rotation rate ($\sim 4 \%$). This is a rough estimate; the exact value is not known.\footnote{This is due to the difficulty posed by the inference of the solid body rotation rate for Saturn \citep[see][]{Smith1982,Helled2015}, which still has some uncertainty and cannot be inferred from the highly axisymmetric magnetic field \citep{Cao2011}. This uncertainty is, however, much smaller than the zonal flow amplitudes.} Given these uncertainties, and the fact that flow in the interior may be different from cloud-top motion, we allow the amplitude and the depth of the flow to vary significantly from the values inferred from cloud-top motion. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{profiles_combined.pdf} \caption{Sample differential rotation profiles considered in this study: a sinc profile truncated at its first zero (red), a sinc profile truncated at its second zero (blue), and a sinc$^2$ profile truncated at its second zero (green). The width and height of each oscillation are varied and the corresponding deviations in gravitational moments are calculated.
The cloud-top motions of Saturn are shown in black for two different rotation rates: 10h 39m 24s and 10h 34m 13s \citep{Read2009}.} \label{fig:sample_profiles} \end{figure} Three broad functional forms based on the sinc function are considered in this study (Figure~\ref{fig:sample_profiles}). The first is a sinc function truncated at its first zero, with no flow interior to it. \begin{equation} \epsilon (s) = \begin{cases} A \; \mathrm{sinc} \left( B \left(1-s \right) \right), & 1 - \pi / B < s < 1 \\ 0 , & 0 < s < 1 - \pi / B \end{cases} \label{eq:1stprofile} \end{equation} The second is a sinc squared functional form, which has only prograde flow and is truncated at the position of the second zero. \begin{equation} \epsilon (s) = \begin{cases} A_1 \; \mathrm{sinc^2} \left( B_1 \left(1-s \right) \right), & 1 - \pi / B_1 < s < 1 \\ A_2 \; \mathrm{sinc^2} \left( B_2 \left(1-s \right) \right), & 1 - \pi / B_1 - \pi / B_2 < s < 1 - \pi / B_1 \\ 0 , & 0 < s < 1 - \pi / B_1 - \pi / B_2 \end{cases} \label{eq:2ndprofile} \end{equation} The third form has equatorial prograde flow and mid-latitude retrograde flow, with the sinc function truncated at the position of the second zero. \begin{equation} \epsilon (s) = \begin{cases} A_p \; \mathrm{sinc} \left( B_p \left(1-s \right) \right), & 1 - \pi / B_p < s < 1 \\ A_r \; \mathrm{sinc} \left( B_r \left(1-s \right) \right), & 1 - \pi / B_p - \pi / B_r < s < 1- \pi / B_p \\ 0 , & 0 < s < 1 - \pi / B_p - \pi / B_r \end{cases} \label{eq:3rdprofile} \end{equation} For the second and third functional forms, which have mid-latitude flow (prograde or retrograde), we allow the equatorial and mid-latitude flow to have independent amplitudes and depths. Although analytic forms are chosen for $\epsilon (s)$, subsequent calculations necessarily involve numerical solutions and approximations, because analytical calculations for the sinc profile are time consuming.
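As an illustration (with our own grid and parameter choices), the first profile, Equation~\ref{eq:1stprofile}, can be represented by a linear combination of decaying exponentials $e^{-2i(1-s)}$ via a least-squares fit, the device adopted below to make the moment calculation fast:

```python
import numpy as np

def sinc_profile(s, A, B):
    """Eq. (1stprofile): sinc truncated at its first zero, zero interior to it.
    np.sinc(x) = sin(pi x)/(pi x), hence the division by pi."""
    x = B * (1.0 - s)
    return np.where(x < np.pi, A * np.sinc(x / np.pi), 0.0)

# Fit epsilon(s) on a grid using the 15 exponentials e^{-2i(1-s)}, i = 1..15.
s = np.linspace(0.0, 1.0, 2001)
target = sinc_profile(s, A=0.04, B=20.0)
basis = np.stack([np.exp(-2.0 * i * (1.0 - s)) for i in range(1, 16)], axis=1)
alpha, *_ = np.linalg.lstsq(basis, target, rcond=None)
residual = np.sqrt(np.mean((basis @ alpha - target) ** 2))
```

The RMS residual of the fit is small compared with the flow amplitude, consistent with the statement below that 15 exponentials give the desired accuracy.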
The entire calculation can be performed rapidly for exponential flow profiles. Therefore, we choose to represent the functional forms presented above as a linear combination of exponentials. A set of 15 exponentials (sufficient for the desired accuracy) with a wide range of decay depths is chosen for this purpose: \begin{equation} \epsilon (s) = \sum_{i=1}^{15} \alpha_i e^{-2i (1-s)} \label{eq:exponentials} \end{equation} The details of the numerical calculations are given in Appendix D. We now use this simple model to calculate gravitational moment deviations due to differential flow in Saturn and statistically characterize the dependence of these deviations on the flow properties. This allows us to quantify the degeneracy in flow properties: a task that cannot be accomplished with the available suite of complex models because of their computational cost. Note that our approach does not work for Jupiter because it has significant hemispheric asymmetry, as indicated by the measurement of the odd moments \citep{Iess2018}. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.48\linewidth]{profiles_1st_oscillation.pdf}} \subfigure[]{\includegraphics[width=0.48\linewidth]{sinc_1st_osc_Deltaj10_dependence.pdf}} \caption{(a) Sinc profiles with varying amplitudes and depths truncated at the first zero that are used in this study (colored by $\Delta J_{10}$ values). (b) The calculated values of $\Delta J_8$ plotted against $\Delta J_6$, with $\Delta J_{10}$ indicated by color. The size of the scatter points corresponds to the three differential rotation amplitudes in (a). The grey dashed box indicates acceptable values of $\Delta J_n$ that agree with the observations.
The magnitudes of $\Delta J_8$ and $\Delta J_{10}$ for a given $\Delta J_6$ are lower than they need to be to match the observations.} \label{fig:sinc_1st_osc} \end{figure} \subsection{The Necessity of Mid-Latitude Retrograde Flow} Efforts to fit the observed gravitational moments using highly accurate and sophisticated forward models have indicated that mid-latitude retrograde flow seems to be necessary to match the data \citep{Iess2019,Galanti2019}. We use our simple linear model to test whether we reach the same conclusion; this serves as a check of both our model and the conclusion reached by other studies. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.48\linewidth]{profiles_sinc2_oscillation.pdf}} \subfigure[]{\includegraphics[width=0.48\linewidth]{sinc2_param_Deltaj10_dependence.pdf}} \caption{(a) Sinc squared profiles truncated at the second zero that are used in this study. The amplitude and depth of each oscillation are allowed to vary to determine how the $\Delta J_n$ values depend on them. (b) The calculated values of $\Delta J_8$ plotted against $\Delta J_6$, with $\Delta J_{10}$ indicated by color. The size of the scatter points corresponds to the three differential rotation amplitudes in (a). The grey dashed box indicates acceptable values of $\Delta J_n$ that agree with the observations. It is clear that no combination of amplitudes or depths matches the observations with such a differential flow profile.} \label{fig:sinc_2_osc} \end{figure} The deviations in gravitational moments due to prograde flows (equatorial and/or mid-latitude; defined in Equations~\ref{eq:1stprofile} and \ref{eq:2ndprofile}) are shown in Figures~\ref{fig:sinc_1st_osc} and \ref{fig:sinc_2_osc}. The different flow profiles are plotted in the left panels of the figures and the corresponding values of $\Delta J_n$ are shown in the right panels. The dashed lines in the plots mark the measured values of the gravitational moments.
None of the flow profiles agree with the observations. The calculated $\Delta J_n$ for prograde flows exhibit behaviour similar to that obtained for exponentially decaying profiles. That is, prograde flows are unable to produce $\Delta J_n$ of similar magnitude for $n = 6, 8,$ and 10. We therefore conclude that, for cylindrical differential flow profiles, the absence of a retrograde flow at intermediate latitudes makes it difficult to fit the observed values of $\Delta J_n$. This is in agreement with other studies that use more sophisticated models. \subsection{Non-Uniqueness of Flow Properties} Given that a prograde flow is not able to reproduce the observed $\Delta J_n$ values, we focus on our third general flow profile: a sinc function truncated at its second zero (Equation~\ref{eq:3rdprofile}). This function is characterised by four parameters: the prograde flow depth and amplitude ($D_p$ and $A_p$) and the retrograde flow depth and amplitude ($D_r$ and $A_r$)\footnote{The depth and amplitude are normalised by Saturn's radius and the solid body rotation amplitude, respectively.}. To understand the degeneracy in flow properties, we use affine-invariant Markov Chain Monte Carlo (MCMC) simulations to constrain the parameters that characterize Saturn's differential flow. This is made possible by our choice to write all flow profiles as linear combinations of the same set of exponentials (Equation~\ref{eq:exponentials}). The gravitational moments due to each of these exponentially decaying flows are calculated in advance, and for each flow profile we simply sum the weighted contributions of these moments (see Appendix D). The widely used Python package {\it emcee} is used to perform the MCMC simulations \citep{Foreman-Mackey2013}.
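The structure of the fit can be sketched with a minimal random-walk Metropolis sampler (our production runs use the affine-invariant {\it emcee} sampler; the forward model below is a stand-in for the weighted sum over precomputed exponential-basis moments, so all numbers here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed deviations (units of 1e-6) and rough errors quoted above.
obs = np.array([5.0, -5.5, 3.5])
err = np.array([0.5, 0.5, 0.5])

def forward(theta):
    """Stand-in forward model mapping four flow parameters to
    (DJ6, DJ8, DJ10); the real model sums precomputed basis moments."""
    a_p, d_p, a_r, d_r = theta
    amp = a_p * d_p - 0.5 * a_r * d_r
    return np.array([amp, -1.1 * amp, 0.7 * (amp + a_r * d_r)])

def log_prob(theta, lo=0.0, hi=10.0):
    if np.any(theta < lo) or np.any(theta > hi):   # uniform prior box
        return -np.inf
    r = (forward(theta) - obs) / err
    return -0.5 * np.sum(r * r)                    # Gaussian log-likelihood

# Random-walk Metropolis: propose, accept with probability min(1, exp(dlogp)).
theta = np.array([2.0, 2.5, 2.0, 2.5])
lp = log_prob(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal(4)
    lp_prop = log_prob(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)
```

The accepted samples trace out the degenerate combinations of depth and amplitude that fit the three moments, which is the behaviour quantified by the full corner plot below.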
We assume uniform priors for the four model parameters that characterize the differential flow and plot the corresponding posterior probability distributions from the MCMC analysis in order to determine the information provided by these new data. As mentioned before, the chosen range of parameters is inspired by the observed surface zonal wind profile. However, we allow for variation in the flow properties to account for the possibility that the surface flows do not trace the flows in the interior. We use 20 `walkers' and 35,000 steps in each chain, which gives us a total of 700,000 samples of the posterior probability distribution. We initialize our walkers at randomly chosen positions clustered near a solution that is in reasonably good agreement with the data. We then run the MCMC for an initial 1000-step burn-in phase to ensure that all walkers have reached the preferred region of parameter space. The integrated auto-correlation time ($\tau_{f}$) for each of the model parameters is $\sim 100 - 150$. This implies that we draw $700,000/\tau_{f} \sim 4500$ independent samples from the posterior distribution, indicating that we have adequately sampled the relevant parameter space \citep{Foreman-Mackey2013}. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{Diff_Rotation_MCMC_35000.pdf} \caption{Posterior probability distribution for the four parameters used in our model. The red lines indicate parameters roughly corresponding to the equatorial prograde cloud-top motion of Saturn, i.e. a $\sim$ 4 \% flow amplitude with respect to the background rotation rate and a latitudinal extent of $\pm 30^{\circ}$ \citep{Read2009}. The black dashed lines mark the 16th, 50th, and 84th quantiles of the 1D histograms. We note that there is a large degeneracy between the flow depth and amplitude for both the prograde and the retrograde flow. In addition, there is some anti-correlation between the properties of the prograde and the retrograde flow.
The posteriors for each parameter show that a wide range of values yield acceptable gravitational moments given the data.} \label{fig:sinc_4param_MCMC} \end{figure} The posterior probability distributions for the prograde and retrograde flow amplitude and depth are shown in Figure~\ref{fig:sinc_4param_MCMC}. This corner plot gives us the posteriors for each of these parameters as well as the correlations for all pairs of parameters. The surface (cloud-top) flow amplitude and the extent of the prograde jet on Saturn are marked by red lines in these plots. It is immediately evident that the data do not constrain the parameters very tightly. The posteriors for the parameters taper off at the edges of the parameter space but are not very different from the flat prior assumed for the MCMC simulations. This implies that a wide range of flow parameters can reasonably explain the observed gravitational moments, given the errors in the measurements. Correlations between flow parameters allow us to understand the non-uniqueness of the possible solutions and place greater constraints than the 1D histograms. We find that the flow depth and amplitude are strongly correlated for both the prograde and the retrograde flow: a shallow flow with a large amplitude and a deep flow with a small amplitude produce similar deviations in the gravitational moments. Cross-correlations between the properties of the prograde and the retrograde flow are weaker. The amplitude of the prograde flow and the depth of the retrograde flow are almost uncorrelated. For the other parameter pairs, we find that the depth and the amplitude of the prograde and retrograde flow are {\it roughly} anti-correlated with each other (the actual posterior distribution is not simple). That is, the retrograde flow tends to be shallow if the prograde flow is deep (and vice versa), and the amplitude of the retrograde flow tends to be smaller if the prograde flow is strong (and vice versa).
We plot a sample of flow profiles, colored according to their log-likelihood ($- \chi^2$), for which the MCMC calculations are performed in Figure~\ref{fig:sinc_4param}a. These profiles are part of the MCMC `chain' as the simulation explores the parameter space to fit the observed gravitational moments. The profiles with the highest log-likelihood (equivalent to the smallest $\chi^2$) best fit the observations. Smaller amplitude flows that are deep tend to fit the observations as well as larger amplitude shallow flows. \begin{figure} \centering \subfigure[]{\includegraphics[width=0.55\linewidth]{chain_profiles_35000.pdf}} \subfigure[]{\includegraphics[width=0.49\linewidth]{sinc_4param_Deltaj10_dependence_2.pdf}} \subfigure[]{\includegraphics[width=0.49\linewidth]{sinc_4param_Deltaj10_dependence.pdf}} \caption{(a) A sample of profiles corresponding to MCMC runs, colored according to their log-likelihood values. The profiles with the largest log-likelihood (smallest $\chi^2$) best fit the data. (b) The inclusion of retrograde flow at mid-latitudes enables the model to match the data. The region of interest is densely populated with models with varying differential flow properties. (c) All the plotted points here correspond to model properties that agree with the observed values of $\Delta J_8$ and $\Delta J_6$. The grey dashed lines demarcate acceptable values of $\Delta J_{10}$. $\Delta J_{10}$ is plotted against the flow depth ($D_p + D_r$) and the color indicates the amplitude of the prograde flow. The size of the scatter points is a proxy for the amplitude of the retrograde flow. The black line traces models that have all properties constant apart from the depth of the retrograde flow. A deeper and stronger retrograde flow yields a higher $\Delta J_{10}$ value.} \label{fig:sinc_4param} \end{figure} The inclusion of retrograde flow clearly improves the agreement between the data and the model (Figure~\ref{fig:sinc_4param}b).
It is therefore worth noting how the properties of retrograde flow influence the gravitational moments. This will help us understand why including retrograde flow seems necessary to fit the data. Essentially, models with only prograde flow that match $\Delta J_6$ and $\Delta J_8$ underestimate the value of $\Delta J_{10}$ (see Figure~\ref{fig:sinc_1st_osc} and \ref{fig:sinc_2_osc}). Including retrograde flow and increasing its amplitude or depth increases $\Delta J_{10}$ while leaving the other two gravitational moments relatively unaltered (Figure~\ref{fig:sinc_4param}c). This is what allows us to obtain a reasonable match between the model and the observations. \section{Discussion and Conclusions} The ideas and theory presented here aid in the development of an intuitive understanding of how the properties of the differential rotation are related to the deviations (from solid body rotation) in gravitational moments. Gravitational moments are given by the integral of density and a Legendre polynomial of certain order. In our formalism, density and gravitational potential are related simply: $2 K \rho = V + V_{rot}$. The contribution due to differential rotation, $\delta V_{rot}$, therefore directly relates to the density, and hence to the gravitational moments, which are now given by the integral of gravitational potential with the Legendre polynomial. Additionally, $\delta V_{rot} = q \int_{0}^{s} 2 \epsilon(s') s' \mathrm{d}s'$ is related to cylindrical flow properties contained in $\epsilon$. This establishes a somewhat more comprehensible relation between differential rotation properties and the gravitational moments. Using this model, we have shown that deviations in gravitational moments alone do not suffice to constrain the differential flow properties of Saturn very accurately due to the errors in the measurement and the modeling. 
Even using a single sinc-like cylindrical flow profile, which already severely restricts the parameter space, we are able to match the observed gravitational moments for a wide range of flow properties. There is added non-uniqueness in the functional form of the flow profile as well, and the surface flows, although coupled to the interior dynamics, may be substantially different from those in the interior \citep[e.g.][]{Kong2018}. To deal with this non-uniqueness, we must seek other ways of constraining the flow depth or amplitude that will narrow down the range of possible flow properties. This requires invoking additional physics that might place constraints on the differential flow of Saturn. One promising way of constraining the flow depth is by studying the interaction of the interior differential flow with the planetary magnetic field. Electrical conductivity rises steadily as one ventures deeper into the planet's interior, and any differential flow leads to the generation of currents and Ohmic dissipation. In the deep interior, the conductivity is very high and any differential rotation is quickly damped out by MHD drag. However, in the semi-conducting region, it is possible to have small amplitude flows with speeds of $\sim$ 1 cm/s to 1 m/s. Electrical conductivity and Ohmic dissipation, along with the measured interior heat flux of the planet, place this upper limit on the amplitude of differential flow because strong flows would generate a large amount of heat, leading to a discrepancy between the expected and the measured heat flux. \cite{Cao2017b} studied this interaction in the semi-conducting regions of Jupiter and Saturn to determine whether flow-induced magnetic field variations could be measured by the Juno mission or the {\it Cassini} grand finale orbits. Their work indicates that for an intermediate electrical conductivity $\sigma \approx 10^{-2}$ S/m, it is not possible to have large amplitude (100 m/s) flows. However, 1 m/s flows are possible.
The electrical conductivity reaches this value at $\sim$ 0.85 R$_{\mathrm{Saturn}}$, which implies that the differential flow must decay from 100s of m/s at the surface to a negligible 1 m/s at this radius. This could be used as a constraint on the flow depth in our efforts to determine the differential flow properties. Indeed, the measured gravity moments of Saturn seem to imply an upper limit on the flow depth that is consistent with this constraint placed by MHD \citep{Iess2019, Galanti2019}. Notably, Ohmic dissipation places a constraint on the {\it radial} depth of the flow, not just the cylindrical depth $r_c$ considered in this work. It is therefore possible to have differential rotation at high latitudes as long as it becomes negligible at this transition radius. Using such a constraint and our results (in particular, Figures~\ref{fig:sinc_4param_MCMC} and \ref{fig:sinc_4param}), we note that deep flows with small amplitudes are excluded, and shallow flows with large amplitudes are preferable because they agree with the gravity data as well as the theoretical expectations from magnetohydrodynamics and Ohmic dissipation. More specifically, the flow depth needs to meet the following conditions: $1/D_p + 1/D_r \lessapprox 25$, with both $1/D_p \geq 6$ and $1/D_r \geq 6$. The required amplitude of the flow depends on the flow depths chosen and can vary from $\sim 4 - 8\%$. In conclusion, we have presented a linear model that relates differential flow to gravitational moments. We demonstrated its utility both for developing an intuitive understanding of the underlying phenomenon and for rapid calculations that enable a statistical analysis of the flow parameters. Our analytical calculations for a shallow flow are useful for understanding the relationship between flow properties and the resulting gravitational moments.
In addition, we have derived an expression that quantifies the effect of a planet's oblateness on its gravitational moments, providing first-order corrections to the assumption of sphericity that we made to keep our model linear. This expression will prove useful in comparing the effect of oblateness with other small corrections to the gravitational moments. Moreover, the linearity of the model allows us to make a quick forward calculation of the gravitational moments given a (cylindrical) flow profile. We used this feature of our model to constrain the properties of Saturn's differential flow based on the higher order gravitational moments measured by {\it Cassini} by performing MCMC simulations. This calculation allowed us to quantify the widely suspected non-uniqueness of Saturn's flow parameters inferred from the gravity data. We found that retrograde flow seems necessary to explain the observed gravitational moments, as flows that are purely prograde tend to underestimate $\Delta J_{10}$. A wide range of flow parameters yield gravitational moments that agree with the observations, and the flow depth and amplitude are anti-correlated. Given the non-uniqueness of the flow properties, additional physics needs to be invoked to place tighter constraints on them. Matching the gravitational moments along with the theoretical expectations from Ohmic dissipation due to flow in the semi-conducting region of Saturn and its heat flux might provide an additional way of narrowing the allowed parameter space of differential flow properties. \section*{Acknowledgements} Y.C. thanks Hao Cao for stimulating discussions, Steve Markham for help with {\it Mathematica} \textcopyright, and Heather Knutson for discussions on MCMC. We are grateful to the reviewers for their thoughtful comments and suggestions which improved this manuscript. This work uses matplotlib \citep{Hunter2007}, scipy \citep{scipy}, and sympy \citep{Meurer2017}. \bibliographystyle{apalike}
\section{Block Hessenberg pencils} \label{sec:bHessenberg} In the first part of this section we define block Hessenberg matrices and pencils and study their characteristics. The second part of this section recapitulates the main results on rational Krylov theory from \cite{Camps2018} to prove that rational Krylov spaces generated from block Hessenberg pencils have a block structure. The third and last part of this section describes two relevant operations on a block Hessenberg pencil. \subsection{Definitions and elementary results} \label{ssec:bHessDef} We first define a block upper triangular matrix and the notation we will use for it. \begin{definition} \label{def:blocktriu} A matrix $R \in \mathbb{F}^{n{\times}n}$ is called a block upper triangular matrix with block partition $\bm{s} = (s_1, \hdots, s_m)$, $s_1{+}\hdots{+}s_m = n$, if it admits the form, \begin{equation} \label{eq:blocktriu} \begin{bmatrix} R_{11} & R_{12} & \hdots & R_{1m} \\ & R_{22} & \hdots & R_{2m} \\ & & \ddots & \vdots \\ & & & R_{mm} \end{bmatrix}, \end{equation} with block $R_{jk}$ of size $ s_j{\times}s_k$ for $1 \leq j \leq k \leq m$. The vector $\bm{s}$ defines the sizes of the blocks and is called the partition vector. For the sake of clarity, the block partition can be explicitly denoted as $R_{(s_1,\hdots,s_m)}$ or $R_{\bm{s}}$. \end{definition} A special case of a block upper triangular matrix is a block diagonal matrix $D_{\bm{s}}$ in which all off-diagonal blocks are zero: \begin{equation} \label{eq:blockdiag} D_{\bm{s}} = \begin{bmatrix} D_{11} & & & \\ & D_{22} & & \\ & & \ddots & \\ & & & D_{mm} \end{bmatrix}. \end{equation} We sometimes use the notation $D_{\bm{s}} = \text{diag}(D_{11}, D_{22}, \hdots, D_{mm})$ for block diagonal matrices. Further note that if $R_{\bm{s}}$ is a nonsingular block upper triangular matrix, $\hat{R}_{\bm{s}} = R_{\bm{s}}^{-1}$ is also a block upper triangular matrix with an identical block partition $\bm{s}$.
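As a concrete illustration of \Cref{def:blocktriu} and of the closing remark that inversion preserves the block partition, the following NumPy sketch (the helper functions are ours, not part of the paper) checks the zero pattern below the diagonal blocks:

```python
import numpy as np

def is_block_upper_triangular(R, s):
    """True if R vanishes below the diagonal blocks of partition s."""
    c = np.cumsum((0,) + tuple(s))                     # block boundaries
    return all(np.allclose(R[c[j + 1]:, c[j]:c[j + 1]], 0.0, atol=1e-12)
               for j in range(len(s)))

def random_block_upper_triangular(s, rng):
    """Random nonsingular R_s: zero the entries below the diagonal blocks."""
    n = sum(s)
    R = rng.standard_normal((n, n)) + n * np.eye(n)    # keeps R well conditioned
    c = np.cumsum((0,) + tuple(s))
    for j in range(len(s)):
        R[c[j + 1]:, c[j]:c[j + 1]] = 0.0
    return R

rng = np.random.default_rng(0)
s = (2, 1, 3, 2)
R = random_block_upper_triangular(s, rng)
Rinv = np.linalg.inv(R)
# the inverse has the same block partition s
print(is_block_upper_triangular(Rinv, s))
```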
Next we define a \emph{block upper Hessenberg} matrix based on the definition of a block upper triangular matrix. \begin{definition} \label{def:blockHess} A matrix $H \in \mathbb{F}^{n{\times}n}$ is called a block upper Hessenberg matrix with block partition $\bm{s} = (s_1, \hdots, s_m)$, $s_1{+}\hdots{+}s_m = n{-}1$, if it admits the form, \begin{equation} \label{eq:blockHess} H_{\bm{s}} = \begin{bmatrix} \bm{h}_{11}^T & h_{12} \\ H_{21} & \bm{h}_{22} \end{bmatrix}, \end{equation} with $H_{21}$ an $(n{-}1){\times}(n{-}1)$ block upper triangular matrix with block partition $\bm{s}$, $\bm{h}_{11}$ and $\bm{h}_{22}$ vectors of length $n{-}1$, and $h_{12}$ a scalar. \end{definition} \Cref{def:blockHess} is now extended in an evident manner to matrix pencils. In addition, we introduce the notion of the \emph{pole pencil} and the \emph{pole tuple} of a block Hessenberg pencil. \begin{definition} \label{def:blockHesspencil} The $n{\times}n$ matrix pencil $(A,B)$ is called a block upper Hessenberg pencil with block partition $\bm{s} = (s_1, \hdots, s_m)$ if both $A$ and $B$ are block upper Hessenberg matrices with a coinciding block partition, \begin{equation} \label{eq:blockHesspencil} A = \begin{bmatrix} \bm{a}_{11}^T & a_{12} \\ A_{21} & \bm{a}_{22} \end{bmatrix}, \quad B = \begin{bmatrix} \bm{b}_{11}^T & b_{12} \\ B_{21} & \bm{b}_{22} \end{bmatrix}, \end{equation} with $A_{21}$ and $B_{21}$ both $(n{-}1){\times}(n{-}1)$ block upper triangular matrices with block partition $\bm{s} = (s_1, \hdots, s_m)$. The block upper triangular pencil $(A_{21}, B_{21})$ in \cref{eq:blockHesspencil} is called the \emph{pole pencil} of $(A,B)$. If the pole pencil is regular, the poles $\Xi(A,B)$ are defined as the eigenvalues of the pole pencil, $\Lambda(A_{21},B_{21})$.
Since $(A_{21},B_{21})$ admits the partition $\bm{s} = (s_1, \hdots, s_m)$, the pole tuple, \begin{equation} \label{eq:poleset} \Xi(A,B) = \Lambda(A_{21},B_{21}) = (\Xi^1, \hdots, \Xi^m) = ( \lbrace \xi^1_{1}, \hdots, \xi^1_{s_1} \rbrace, \hdots, \lbrace \xi^m_1, \hdots, \xi^m_{s_m} \rbrace ), \end{equation} admits the same partition. This imposes no specific ordering on the poles within a block, but the blocks themselves are ordered. \end{definition} The previous definitions are illustrated in more detail in the next example. \begin{example} \label{ex:blockHesspencil} The $n{\times}n$ matrices $A$, $B$ form a block Hessenberg pencil with partition vector $\bm{s} = (s_1, \hdots, s_m)$, if they can be partitioned as: \begin{equation} \resizebox{0.9\textwidth}{!}{$ \begin{bmatrix} \bm{a}_{1,1}^T & \bm{a}_{1,2}^T & \hdots & \bm{a}_{1,m}^T & a_{1,m+1} \\ A_{2,1} & A_{2,2} & \hdots & A_{2,m} & \bm{a}_{2,m+1}\\ & A_{3,2} & \hdots & A_{3,m} & \bm{a}_{3,m+1}\\ & & \ddots & \vdots & \vdots \\ & & & A_{m+1,m} & \bm{a}_{m+1,m+1} \end{bmatrix}, \quad \begin{bmatrix} \bm{b}_{1,1}^T & \bm{b}_{1,2}^T & \hdots & \bm{b}_{1,m}^T & b_{1,m+1} \\ B_{2,1} & B_{2,2} & \hdots & B_{2,m} & \bm{b}_{2,m+1}\\ & B_{3,2} & \hdots & B_{3,m} & \bm{b}_{3,m+1}\\ & & \ddots & \vdots & \vdots \\ & & & B_{m+1,m} & \bm{b}_{m+1,m+1} \end{bmatrix}$}, \end{equation} with all subdiagonal blocks $A_{j+1,j}$, $B_{j+1,j}$ of size $s_j{\times}s_j$ (square) and $s_1 + \hdots + s_m = n{-}1$. As a specific example, the pencil $(A,B)$ is a $9{\times}9$ block upper Hessenberg pencil with block partition $\bm{s} = (2, 1, 3, 2)$ if it has the form: \begin{center} \input{fig/bHessExample.tikz} \end{center} The shaded part of the matrices is the pole pencil, which is clearly in block upper triangular form with partition $\bm{s} = (2,1,3,2)$.
The pole tuple is in this case given by, \begin{equation*} \Xi(A,B) = (\Xi^1 = \lbrace \xi_1^1,\xi_2^1 \rbrace, \; \Xi^2 = \lbrace \xi_1^2 \rbrace, \; \Xi^3 = \lbrace \xi_1^3, \xi_2^3, \xi_3^3 \rbrace, \; \Xi^4 = \lbrace \xi_1^4, \xi_2^4 \rbrace ). \end{equation*} \end{example} Notice that a given block Hessenberg pencil can admit more than one partition. If $(A,B)$ is a block Hessenberg pencil with partition $\bm{s} = (s_1, \hdots, s_k, s_{k+1}, \hdots, s_m)$, it also admits the partition $\hat{\bm{s}} = (s_1, \hdots, s_k + s_{k+1}, \hdots, s_m)$. Consecutive blocks can be grouped together. Similarly, every ${n{\times}n}$ pencil $(A,B)$ can be considered a block Hessenberg pencil with the trivial partition $(n{-}1)$. We say that $\bm{s}^{\scriptstyle \max} = (s_1, \hdots, s_m)$ is the \emph{maximal partition} of a block Hessenberg pencil if none of its blocks can be split into smaller blocks. For example, a Hessenberg pencil has maximal partition $\bm{s}^{\scriptstyle \max} = (1, 1, \hdots, 1)$, but admits any other partition. The \emph{cumulative partition} vector $\bm{s}^{c}$ of a block Hessenberg pencil with partition $\bm{s} = (s_1, \hdots, s_m)$, is defined as: \begin{equation} \label{eq:cumulativepartition} \bm{s}^{c} = (s_1, s_1 + s_2, \hdots, \sum_{i=1}^m s_i = n{-}1). \end{equation} The last definition we generalize from the Hessenberg pencils of the RQZ method to the block Hessenberg pencils for the multishift, multipole RQZ method is the concept of \emph{properness} or \emph{irreducibility}. Properness of the pencil guarantees that there are no obvious options for deflations that split the problem into smaller, independent problems. \begin{definition} \label{def:properness} An $n{\times}n$ block upper Hessenberg pair $(A,B)$ with partition $\bm{s} = (s_1, \hdots, s_m)$ is said to be proper (or irreducible) if: \begin{enumerate}[I.] 
\item Its pole pencil is regular; \item The first block column of $(A,B)$ of size $(s_1{+}1){\times}s_1$, \begin{equation*} \begin{bmatrix} \bm{{a}}_{1,1}^T \\ {A}_{2,1} \end{bmatrix} = \begin{bmatrix} \bm{a}_1 & \hdots & \bm{a}_{s_1} \end{bmatrix}, \quad \begin{bmatrix} \bm{{b}}_{1,1}^T \\ {B}_{2,1} \end{bmatrix} = \begin{bmatrix} \bm{b}_1 & \hdots & \bm{b}_{s_1} \end{bmatrix}, \quad \bm{a}_i, \bm{b}_i \in \mathbb{F}^{s_1+1}, \end{equation*} satisfies for $i=1, \hdots, s_1$, \begin{equation*} \mathcal{R}(\bm{a}_1, \hdots, \bm{a}_i) \neq \mathcal{R}(\bm{b}_1, \hdots, \bm{b}_i); \end{equation*} \item The last block row of $(A,B)$ of size $s_m{\times}(s_m{+}1)$, \begin{equation*} \resizebox{.9 \textwidth}{!} {$ \begin{bmatrix} A_{m+1,m} & \bm{a}_{m+1,m+1} \end{bmatrix} = \begin{bmatrix} \bm{a}_{s_m}^T \\ \vdots \\ \bm{a}_{1}^T \end{bmatrix}, \quad \begin{bmatrix} B_{m+1,m} & \bm{b}_{m+1,m+1} \end{bmatrix} = \begin{bmatrix} \bm{b}_{s_m}^T \\ \vdots \\ \bm{b}_{1}^T \end{bmatrix}, \quad \bm{a}_i, \bm{b}_i \in \mathbb{F}^{s_m+1}, $} \end{equation*} satisfies for $i=1, \hdots, s_m$, \begin{equation*} \mathcal{R}(\bm{a}_1, \hdots, \bm{a}_i) \neq \mathcal{R}(\bm{b}_1, \hdots, \bm{b}_i). \end{equation*} \end{enumerate} \end{definition} We remark that condition~III is the same as condition~II for the \emph{pertransposed} pencil, that is, the pencil obtained after transposition along the anti-diagonal. Furthermore, observe that if $(A,B)$ is a Hessenberg pair then the conditions of \Cref{def:properness} reduce to the same conditions as \cite[Definition 2.1]{Camps2018}. Condition~II also ensures that property~IV of \cite[Lemma 2.2]{Camps2018} is satisfied within the first block column. We illustrate the notion of (im)properness of a block Hessenberg pencil on a small example to clarify \Cref{def:properness}.
\begin{example} Consider the $4{\times}4$ real-valued block Hessenberg pencil $(A,B)$ with maximal partition $(2,1)$ given by: \begin{equation} \label{eq:improperpencil} \begin{bmatrix} -0.3 & 0.075 & 0.5 & 0.25 \\ 0.395 & 0.52 & -0.35 & 2 \\ -0.14 & 0.86 & 1.35 & -0.8 \\ & & 1 & 0.85 \end{bmatrix}, \qquad \begin{bmatrix} -0.15 & -0.6 & 0.15 & -1.5 \\ 0.16 & 0.94 & -5 & 1.35 \\ -0.12 & -0.08 & -2.4 & -1 \\ & & 0.2 & 1.8 \end{bmatrix}. \end{equation} Condition~I of \Cref{def:properness} is satisfied: the pole pencil is regular and the pole tuple of $(A,B)$ is given by: \begin{equation} \label{eq:polesimproperpencil} \Xi = (\lbrace 1.5 + i\sqrt{15/8}, 1.5 - i\sqrt{15/8} \rbrace, 5). \end{equation} The $2{\times}2$ block thus contains a pair of complex conjugate poles. Condition~III of \Cref{def:properness} is also satisfied. For the last block row of $(A,B)$, we clearly have that $\mathcal{R}(\begin{bmatrix} 1 & 0.85 \end{bmatrix}) \neq \mathcal{R}(\begin{bmatrix} 0.2 & 1.8 \end{bmatrix}) $. Notice that this implies that we cannot simultaneously create a zero in position $(4,3)$ of both $A$ and $B$ by rotating the last two columns. The block Hessenberg pencil \eqref{eq:improperpencil} is however \emph{improper} since condition~II of \Cref{def:properness} is violated. We have that $\mathcal{R}(\bm{a}_1) \neq \mathcal{R}(\bm{b}_1)$, but $\mathcal{R}(\bm{a}_1,\bm{a}_2) = \mathcal{R}(\bm{b}_1,\bm{b}_2)$. If we compute an orthonormal basis $Q_1$ of $\mathcal{R}(\bm{a}_1,\bm{a}_2)$ and extend it to an orthonormal matrix $Q = \begin{bmatrix} Q_1 & \bm{q}_2\end{bmatrix}$, then $(\hat{A},\hat{B}) = Q^T (A,B)$ has zero elements in positions $(3,1)$ and $(3,2)$. This deflates the complex conjugate pair of poles in \cref{eq:polesimproperpencil} as eigenvalues of the pencil. \end{example} The next lemma shows that any proper block Hessenberg pair can be transformed to a proper Hessenberg pair with the same poles.
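The claims of this example are easy to check numerically. The following NumPy sketch (ours, not part of the paper) computes the pole tuple as the generalized eigenvalues of the pole pencil and verifies the range conditions of the first block column by rank computations:

```python
import numpy as np

A = np.array([[-0.3,   0.075,  0.5,  0.25],
              [ 0.395, 0.52,  -0.35, 2.0 ],
              [-0.14,  0.86,   1.35,-0.8 ],
              [ 0.0,   0.0,    1.0,  0.85]])
B = np.array([[-0.15, -0.6,    0.15,-1.5 ],
              [ 0.16,  0.94,  -5.0,  1.35],
              [-0.12, -0.08,  -2.4, -1.0 ],
              [ 0.0,   0.0,    0.2,  1.8 ]])

# pole pencil (A21, B21): the subdiagonal (n-1)x(n-1) blocks
A21, B21 = A[1:, :3], B[1:, :3]
poles = np.linalg.eigvals(np.linalg.solve(B21, A21))   # B21 is nonsingular here
print(np.sort_complex(poles))          # 1.5 -/+ i*sqrt(15/8) and 5

# condition II of properness, tested in the first block column
a1, a2 = A[:3, 0], A[:3, 1]
b1, b2 = B[:3, 0], B[:3, 1]
rank = np.linalg.matrix_rank
print(rank(np.column_stack([a1, b1])))          # 2: R(a1) != R(b1)
print(rank(np.column_stack([a1, a2, b1, b2])))  # 2: R(a1,a2) == R(b1,b2), improper
```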
\begin{lemma} \label{lemma:blockHessToHess} Given an $n{\times}n$ proper block Hessenberg pair $(A,B)$ with partition $\bm{s} = (s_1, \hdots, s_m)$ and accordingly partitioned poles $\Xi(A,B)$. Then there exist $n{\times}n$ unitary block diagonal matrices $Q, Z$, \begin{equation} \label{eq:reduction} Q = \text{diag}(1,Q_1, \hdots, Q_m) \qquad \text{and} \qquad Z = \text{diag}(Z_1, \hdots, Z_m,1), \end{equation} with $Q_j$, $Z_j$ unitary matrices of size $s_j{\times}s_j$, such that $(\hat{A},\hat{B}) = Q^* (A,B) Z$ is a proper Hessenberg pair according to \cite[Definition 2.1]{Camps2018} with poles $\Xi = (\pi_1(\Xi^1), \hdots, \pi_m(\Xi^m))$. Here, $\pi_j(\Xi^j)$ is a permutation of $\xi^j_1, \hdots, \xi^j_{s_j}$. \end{lemma} \begin{proof} Since $(A,B)$ is a proper block Hessenberg pencil, the pole pencil is regular and any Schur decomposition of it reduces the block Hessenberg pair to a Hessenberg pair with the same pole tuple as the block Hessenberg pencil. The order of the poles in the Hessenberg pair is, however, dependent on the Schur decomposition. Moreover, since the pole pencil is a block upper triangular pencil with $m$ blocks, $m$ independent Schur decompositions can be combined as in \cref{eq:reduction}. The pole tuple of the Hessenberg pencil is in this case clearly as described: the poles of the different blocks remain mutually ordered, but within a block any order, or permutation $\pi_j$, of the poles is permissible. It remains to verify that conditions~II and III of \Cref{def:properness} are preserved under this transformation. Denote $\hat{Q} = \text{diag}(Q_1,\hdots,Q_m)$ and $\hat{Z} = \text{diag}(Z_1,\hdots,Z_m)$, with $Q_j$, $Z_j$ as in \cref{eq:reduction}. 
Then, \begin{equation*} \begin{split} \hat{A} & = Q^* A Z = \text{diag}(1, \hat{Q}^*) \begin{bmatrix} \bm{a}_{11}^{T} & a_{12} \\ A_{21} & \bm{a}_{22} \end{bmatrix} \text{diag}(\hat{Z},1) = \begin{bmatrix} \bm{a}_{11}^{T} \hat{Z} & a_{12} \\ \hat{Q}^* A_{21} \hat{Z} & \hat{Q}^* \bm{a}_{22} \end{bmatrix}, \\ \hat{B} & = Q^* B Z = \text{diag}(1, \hat{Q}^*) \begin{bmatrix} \bm{b}_{11}^{T} & b_{12} \\ B_{21} & \bm{b}_{22} \end{bmatrix} \text{diag}(\hat{Z},1) = \begin{bmatrix} \bm{b}_{11}^{T} \hat{Z} & b_{12} \\ \hat{Q}^* B_{21} \hat{Z} & \hat{Q}^* \bm{b}_{22} \end{bmatrix}. \end{split} \end{equation*} The first block column of $(\hat{A},\hat{B})$ is equal to, \begin{equation*} \left( \begin{bmatrix} \hat{\bm{a}}_{1,1}^{T} \\ \hat{A}_{2,1} \end{bmatrix}, \begin{bmatrix} \hat{\bm{b}}_{1,1}^{T} \\ \hat{B}_{2,1} \end{bmatrix} \right) = \begin{bmatrix} 1 & \\ & Q_1^* \end{bmatrix} \left( \begin{bmatrix} \bm{a}_{1,1}^{T}\\ A_{2,1} \end{bmatrix}, \begin{bmatrix} \bm{b}_{1,1}^{T}\\ B_{2,1} \end{bmatrix} \right) Z_1. \end{equation*} The left and right multiplication of the first block column of $(A,B)$ with unitary matrices clearly preserves condition~II of \Cref{def:properness}. Condition~III is likewise preserved under the equivalence transformation of \cref{eq:reduction}. This directly implies that the resulting Hessenberg pair is also proper according to \cite[Definition 2.1]{Camps2018}. \end{proof} We remark that, since a real-valued block Hessenberg pencil can have complex conjugate pairs of poles, its proper Hessenberg form from \Cref{lemma:blockHessToHess} will be complex-valued. \subsection{Rational Krylov and block Hessenberg pencils} \label{ssec:RK} In this section we study the structure of \emph{rational Krylov subspaces} generated by proper block Hessenberg matrices.
These results are useful for the analysis of the \emph{pole introduction} operation introduced in Section \ref{ssec:manipulating} and to study \emph{uniqueness} of a multishift, multipole RQZ step in \Cref{sec:theory}. We give a brief introduction to rational Krylov matrices and subspaces for the sake of completeness. For a more detailed overview of this subject matter, we refer the interested reader to \cite{Camps2018} and the references therein. We use the same notational conventions as in \cite{Camps2018}. Given a matrix pair $(A, B) \in \mathbb{F}^{n \times n}$, shift $\varrho = \mu/\nu \in \bar{\mathbb{C}}$ and pole $\xi = \alpha/\beta \in \bar{\mathbb{C}}\setminus\Lambda$, we define the following elementary rational matrices, \begin{equation} \label{eq:elemRational} \begin{split} M(\varrho,\xi) = \LinOp{\mu}{\nu} \LinOpInv{\alpha}{\beta}, \\ N(\varrho,\xi) = \LinOpInv{\alpha}{\beta} \LinOp{\mu}{\nu}. \end{split} \end{equation} Notice that the matrices $M(\varrho,\xi)$ and $N(\varrho,\xi)$ each represent an entire class of matrices that are all nonzero scalar multiples of each other. Any representative can be used, as the results we present are scale invariant. The elementary rational matrices satisfy a few basic properties. The inverse $M(\varrho,\xi)^{-1}$ is defined if $\varrho \notin \Lambda$ and is equal to $M(\xi,\varrho)$. They are commutative, $M(\varrho_1,\xi_1) M(\varrho_2,\xi_2) = M(\varrho_2,\xi_2) M(\varrho_1,\xi_1)$, and they can be merged together, $M(\varrho,\xi_1) M(\xi_1,\xi_2) = M(\varrho,\xi_2)$, if a pole and shift are equal. Analogous results hold for $N(\varrho,\xi)$. These elementary rational matrices are used to construct rational Krylov matrices generated by a regular matrix pair, a starting vector, and a tuple of poles.
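These basic properties are easy to verify numerically. The sketch below assumes, for finite shifts and poles, the representative $M(\varrho,\xi) = (A - \varrho B)(A - \xi B)^{-1}$ (any nonzero scalar multiple would do) and checks the inverse, commutativity, and merging properties on a random regular pair:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

def M(rho, xi):
    """One representative of M(rho, xi) for finite shift and pole:
    (A - rho*B)(A - xi*B)^{-1}; scale invariance makes the choice free."""
    return (A - rho * B) @ np.linalg.inv(A - xi * B)

r1, r2, x1, x2 = 0.3, -1.1, 2.0, 0.7   # shifts and poles, away from eigenvalues

# inverse: M(rho, xi)^{-1} = M(xi, rho)
print(np.allclose(np.linalg.inv(M(r1, x1)), M(x1, r1)))
# commutativity
print(np.allclose(M(r1, x1) @ M(r2, x2), M(r2, x2) @ M(r1, x1)))
# merging a matching pole and shift: M(rho, xi1) M(xi1, xi2) = M(rho, xi2)
print(np.allclose(M(r1, x1) @ M(x1, x2), M(r1, x2)))
```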
\begin{definition} \label{def:ratkrylmat} Let $A,B \in \mathbb{F}^{n{\times}n}$ form a regular matrix pair, $\bm{v} \in \mathbb{F}^n$ a nonzero vector, $k{\leq}n$, $\Xi = (\xi_1, \hdots, \xi_{k-1})$ a tuple of poles that are distinct from the eigenvalues, and $\mathrm{P} = (\varrho_1, \hdots, \varrho_{k-1}) \subset \bar{\mathbb{C}}$ a tuple of shifts distinct from the poles. The corresponding rational Krylov matrices are defined as: \begin{equation} \label{eq:ratkrylmat} \begin{split} K^{\text{rat}}_{k}(A,B,\bm{v}, \Xi, \mathrm{P}) & = \left[ \bm{v}, M(\varrho_1,\xi_1) \bm{v}, M(\varrho_2,\xi_2) M(\varrho_1,\xi_1) \bm{v}, \, \hdots, \left( \prod_{i=1}^{k-1} M(\varrho_i,\xi_i) \right) \bm{v} \right], \\ L^{\text{rat}}_{k}(A,B,\bm{v}, \Xi, \mathrm{P}) & = \left[ \bm{v}, N(\varrho_1,\xi_1) \bm{v}, N(\varrho_2,\xi_2) N(\varrho_1,\xi_1) \bm{v}, \, \hdots, \left( \prod_{i=1}^{k-1} N(\varrho_i,\xi_i) \right) \bm{v} \right]. \end{split} \end{equation} \end{definition} The column spaces of these matrices span the \emph{rational Krylov subspaces}.
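A sketch (ours) of building $K^{\text{rat}}_k$ column by column, again using the representative $M(\varrho,\xi) = (A - \varrho B)(A - \xi B)^{-1}$ for finite shifts and poles; it also illustrates the shift invariance of the column space by comparing two different shift tuples:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
v = rng.standard_normal(n)

def M(rho, xi):
    # representative (A - rho*B)(A - xi*B)^{-1} for finite shifts and poles
    return (A - rho * B) @ np.linalg.inv(A - xi * B)

def K_rat(v, poles, shifts):
    """Rational Krylov matrix [v, M1 v, M2 M1 v, ...] of Definition ratkrylmat."""
    cols, w = [v], v
    for rho, xi in zip(shifts, poles):
        w = M(rho, xi) @ w
        cols.append(w)
    return np.column_stack(cols)

poles = [1.3, -0.4, 2.2]
K1 = K_rat(v, poles, shifts=[0.1, 0.9, -2.0])
K2 = K_rat(v, poles, shifts=[5.0, -1.5, 0.3])

# shift invariance: the two column spaces coincide
P1 = np.linalg.qr(K1)[0]                       # orthonormal basis of R(K1)
print(np.linalg.norm(K2 - P1 @ (P1.T @ K2)))   # residual close to zero
```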
\begin{definition} \label{def:rksubspace} The rational Krylov subspaces $\mathcal{K}_{k}^{\text{rat}}$ and $\mathcal{L}_{k}^{\text{rat}}$, $k{\leq}n$, associated with the ${n{\times}n}$ regular pair $(A,B)$, a nonzero vector $\bm{v} \in \mathbb{F}^n$, and pole tuple $\Xi = (\xi_1, \hdots, \xi_{k-1})$ distinct from the eigenvalues, are defined as, \begin{equation} \label{eq:rksubspace_definition} \begin{split} \mathcal{K}_{k}^{\text{rat}}(A,B,\bm{v},\Xi) & \equiv \mathcal{R}(K_{k}^{\text{rat}}(A,B,\bm{v}, \Xi, \mathrm{P})) = \prod_{i=1}^{k-1} M(\hat{\varrho},\xi_i) \cdot \mathcal{K}_k(M(\check{\varrho},\hat{\varrho}),\bm{v}) , \\ \mathcal{L}_{k}^{\text{rat}}(A,B,\bm{v},\Xi) & \equiv \mathcal{R}(L_{k}^{\text{rat}}(A,B,\bm{v}, \Xi, \mathrm{P})) = \prod_{i=1}^{k-1} N(\hat{\varrho},\xi_i) \cdot \mathcal{K}_k(N(\check{\varrho},\hat{\varrho}),\bm{v}) , \end{split} \end{equation} where the shift tuple $\mathrm{P}$ is freely chosen in agreement with \Cref{def:ratkrylmat}, $\hat{\varrho}$ is a shift different from the eigenvalues and poles, and $\check{\varrho}$ is an alternative shift different from $\hat{\varrho}$. \end{definition} The first equality in \cref{eq:rksubspace_definition} defines the rational Krylov subspaces, while the second equality repeats \cite[Lemma 5.6.II]{Camps2018}. This result shows that rational Krylov subspaces are \emph{shift invariant}: they are independent of the choice of shifts $\mathrm{P}$. The following theorem is a block generalization of \cite[Theorem 5.6]{Camps2018} and shows that the rational Krylov subspaces $\mathcal{K}^{\text{rat}}$ and $\mathcal{L}^{\text{rat}}$ have a specific structure if they are generated from a proper block Hessenberg pair.
\begin{theorem} \label{thm:blockHessspaces} Given an $n{\times}n$ proper block Hessenberg pair $(A,B)$ with partition $\bm{s} = (s_1$, $\hdots$, $s_m)$, cumulative partition $\bm{s^{c}}$, and poles $\Xi = (\Xi^1, \hdots, \Xi^m)$ with $\Xi^i = \lbrace \xi^{i}_{1}, \hdots, \xi^{i}_{s_i} \rbrace$ that are all different from the eigenvalues. Then for $j=0,1,\hdots,m$, \begin{equation} \label{eq:Kratspan} \mathcal{K}^{\text{rat}}_{s^{c}_{j}+1} (A,B,\bm{e}_1, (\Xi^1, \hdots, \Xi^j)) = \mathcal{E}_{s^{c}_{j}+1}, \end{equation} with $s^{c}_{0} \equiv 0$. While for $j=1,\hdots,m$, \begin{equation} \label{eq:Lratspan} \mathcal{L}^{\text{rat}}_{s^{c}_{j}} (A,B,\bm{z}_1, (\breve{\Xi}^1, \Xi^2, \hdots, \Xi^j)) = \mathcal{E}_{s^{c}_{j}}, \end{equation} with $\breve{\Xi}^1 = \lbrace \xi^1_1, \hdots, \xi^1_{s_{1}-1} \rbrace$, and $\bm{z}_1$ the right eigenvector of the pole pencil corresponding to the pole $\xi^1_{s_1}$. Here $\xi^1_{s_1}$ can be any of the poles in $\Xi^1$. \end{theorem} \begin{proof} We rely on the transformation $(\hat{A}, \hat{B}) = Q^* (A,B) Z$ from the proper block Hessenberg pencil $(A,B)$ to the proper Hessenberg pencil $(\hat{A},\hat{B})$ as defined in \Cref{lemma:blockHessToHess}. Denote with $\hat{\Xi} = (\xi_1, \hdots, \xi_{n-1})$ the pole tuple of the proper Hessenberg pair $(\hat{A},\hat{B})$ after renumbering. Note that, by construction, in \cref{eq:reduction}, $\bm{q}_1 = \bm{e}_1$ and denote $\hat{M}(\varrho,\xi) = Q^* M(\varrho,\xi) Q$ as the elementary rational matrix of \cref{eq:elemRational} in terms of $(\hat{A},\hat{B})$.
Further we apply \cite[Theorem 5.6]{Camps2018} to $(\hat{A}, \hat{B})$ such that for $k$ from $1$ to $n$, \begin{equation*} \begin{split} \mathcal{E}_k & = \mathcal{K}^{\text{rat}}_{k}(\hat{A},\hat{B}, \bm{e}_1, (\xi_1, \hdots, \xi_{k-1})) = \prod_{i=1}^{k-1} \hat{M}(\hat{\varrho},\xi_i) \cdot \mathcal{K}_{k}(\hat{M}(\check{\varrho},\hat{\varrho}),\bm{e}_1) \\ & = Q^* \prod_{i=1}^{k-1} M(\hat{\varrho},\xi_i) \cdot \mathcal{K}_{k}(M(\check{\varrho},\hat{\varrho}),\bm{q}_1) = Q^* \mathcal{K}^{\text{rat}}_{k}(A,B, \bm{e}_1, (\xi_1, \hdots, \xi_{k-1})). \end{split} \end{equation*} Multiplying both sides of this equation by $Q$ and using that \cref{eq:reduction} implies $Q \mathcal{E}_{k} = \mathcal{E}_{k}$ for $k \in \lbrace 1, s^{c}_{1}+1, \hdots, s^{c}_{m}+1=n \rbrace$ proves the first part of the theorem. The second part of the theorem can be proven in an analogous manner. Denote $\hat{N}(\varrho,\xi) = Z^* N(\varrho,\xi) Z$ as the second elementary rational matrix of \cref{eq:elemRational} in terms of $(\hat{A},\hat{B})$ and apply again \cite[Theorem 5.6]{Camps2018} to $(\hat{A}, \hat{B})$ such that for $k$ from $1$ to $n{-}1$, \begin{equation*} \begin{split} \mathcal{E}_k & = \mathcal{L}^{\text{rat}}_{k}(\hat{A},\hat{B}, \bm{e}_1, (\xi_2, \hdots, \xi_{k})) = \prod_{i=2}^{k} \hat{N}(\hat{\varrho},\xi_i) \cdot \mathcal{K}_{k}(\hat{N}(\check{\varrho},\hat{\varrho}),\bm{e}_1) \\ & = Z^* \prod_{i=2}^{k} N(\hat{\varrho},\xi_i) \cdot \mathcal{K}_{k}(N(\check{\varrho},\hat{\varrho}),\bm{z}_1) = Z^* \mathcal{L}^{\text{rat}}_{k}(A,B, \bm{z}_1, (\xi_2, \hdots, \xi_{k})). \end{split} \end{equation*} Now multiply both sides by $Z$ and again use \cref{eq:reduction} to show that $Z \mathcal{E}_{k} = \mathcal{E}_k$ for $k \in \lbrace s^{c}_{1}, \hdots, s^{c}_{m}, n \rbrace$, which proves the second part of the theorem.
Recall from \Cref{lemma:blockHessToHess} that $\bm{z}_1$ is the right eigenvector of the pole pencil related to $\xi_1$ and that $\xi_1$ can be any of the poles of $\Xi^1$, since for any $\xi^1_j$ there exists a block Schur decomposition \eqref{eq:reduction} that places $\xi^1_j$ as the first pole in the Hessenberg pencil $(\hat{A},\hat{B})$. \end{proof} \subsection{Manipulating poles of block Hessenberg pencils} \label{ssec:manipulating} Throughout this section, the pencil $(A,B)$ is assumed to be an $n{\times}n$ proper block Hessenberg pencil with maximal partition $\bm{s} = (s_1,\hdots,s_m)$ and pole tuple $\Xi =(\Xi^1, \hdots, \Xi^m)$, where $\Xi^j = \lbrace \xi^j_1, \hdots, \xi^j_{s_j} \rbrace$. All poles are assumed different from the eigenvalues. We review two different operations to change the pole tuple $\Xi$. The first operation changes the first or last $\ell$ poles of the pencil; the second operation swaps two adjacent pole blocks $\Xi^i$ and $\Xi^{i+1}$. \paragraph{Changing poles at the boundary} The first $\ell=s_1{+}\hdots{+}s_i=s^{c}_i$ poles in the first $i$ pole blocks $\Xi^1, \hdots, \Xi^i$ can be changed to $\ell$ new poles $\mathrm{P} = \lbrace \varrho_1, \hdots, \varrho_{\ell} \rbrace$. We assume that the shifts in $\mathrm{P}$ are distinct from the original poles. For this purpose consider the vector, \begin{equation} \label{eq:vec_multishift} \bm{x} = \gamma \; \prod_{j=1}^{\ell} M(\varrho_j,\xi_j) \; \bm{e}_1, \end{equation} with $\xi_1, \hdots, \xi_{\ell}$ the poles of $\Xi^1, \hdots, \Xi^i$. The following procedure can be used to compute $\bm{x}$, \begin{equation} \label{eq:proc_multishift} \begin{split} & \bm{x} \leftarrow \bm{e}_1 \\ & \text{for $j =1,\hdots,\ell$} \\ & \; \begin{sqcases} \bm{x} \leftarrow \gamma_j \; M(\varrho_j,\xi_j) \bm{x} \end{sqcases} \end{split}. \end{equation} The scalars $\gamma_j$ can be chosen as suitable scaling factors, for example to normalize $\bm{x}$ in every step.
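For a Hessenberg pencil (partition $(1,\hdots,1)$), the procedure above can be sketched as follows (ours); we again assume the representative $M(\varrho,\xi) = (A - \varrho B)(A - \xi B)^{-1}$ for finite shifts and poles and let $\gamma_j$ normalize $\bm{x}$. By \cref{eq:Kratspan}, the computed $\bm{x}$ lies in $\mathcal{E}_{\ell+1}$, so only its leading $\ell{+}1$ entries are nonzero:

```python
import numpy as np

rng = np.random.default_rng(3)
n, ell = 8, 3
# a random Hessenberg pair (partition (1,...,1)); we plant known poles
# xi_j = a_{j+1,j}/b_{j+1,j} in the first ell subdiagonal positions
A = np.triu(rng.standard_normal((n, n)), -1)
B = np.triu(rng.standard_normal((n, n)), -1)
poles = np.array([2.0, -1.0, 0.5])
for j in range(ell):
    A[j + 1, j] = poles[j] * B[j + 1, j]
shifts = np.array([0.4, 1.7, -0.9])            # new shifts to introduce

# the procedure: x <- gamma_j * M(rho_j, xi_j) x, with gamma_j = 1/||x||
x = np.zeros(n); x[0] = 1.0
for rho, xi in zip(shifts, poles):
    x = (A - rho * B) @ np.linalg.solve(A - xi * B, x)
    x /= np.linalg.norm(x)

# x lies in E_{ell+1}: its trailing n - ell - 1 entries vanish
print(np.abs(x[ell + 1:]).max())
```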
Now compute a unitary matrix $Q$ such that, \begin{equation} \label{eq:Qmultishift} Q^* \bm{x} = \alpha \bm{e}_1. \end{equation} We claim that the new poles $\mathrm{P}$ are introduced in the block Hessenberg pair by updating $(\hat{A},\hat{B}) = Q^* (A,B)$. Specifically, $(\hat{A},\hat{B})$ is a block Hessenberg pair with maximal partition $\hat{\bm{s}} = (\ell, s_{i+1}, \hdots, s_m)$ and poles $\hat{\Xi} = (\mathrm{P}, \Xi^{i+1}, \hdots, \Xi^{m})$. From \cref{eq:rksubspace_definition} and \Cref{thm:blockHessspaces} we have that, \begin{equation} \bm{x} \in \mathcal{K}^{\text{rat}}_{\ell+1}(A,B,\bm{e}_1,\Xi) = \mathcal{E}_{\ell+1}. \end{equation} This implies that $Q$ in \cref{eq:Qmultishift} is of the form $\text{diag}(Q_{\ell+1},I)$, with $Q_{\ell+1}$ an $(\ell{+}1){\times}(\ell{+}1)$ unitary matrix. It follows that the first block $\hat{\Xi}^1$ in $(\hat{A},\hat{B})$ is indeed of size $\ell$. Furthermore, for $j = 0, 1, \hdots, m-i+1$, \begin{equation*} \resizebox{1\textwidth}{!}{$ \begin{aligned} \mathcal{K}&^{\text{rat}}_{\hat{s}^{c}_{j}+1}(\hat{A},\hat{B},\bm{e}_1, (\mathrm{P}, \Xi^{i+1}, \hdots, \Xi^m)) = \prod_{k=1}^{\hat{s}^{c}_{j}} \hat{M}(\hat{\varrho},\hat{\xi}_k) \cdot \mathcal{K}_{\hat{s}^{c}_{j}+1}(\hat{M}(\check{\varrho},\hat{\varrho}),\bm{e}_1) \\ & = Q^* M(\hat{\varrho},\varrho_1) \hdots M(\hat{\varrho},\varrho_{\ell}) M(\hat{\varrho},\xi_{\ell+1}) \hdots M(\hat{\varrho},\xi_{\hat{s}^{c}_{j}}) \cdot \mathcal{K}_{\hat{s}^{c}_{j}+1}(M(\check{\varrho},\hat{\varrho}),\bm{q}_1) \\ & = Q^* M(\hat{\varrho},\varrho_1) \hdots M(\hat{\varrho},\varrho_{\ell}) M(\hat{\varrho},\xi_{\ell+1}) \hdots M(\hat{\varrho},\xi_{\hat{s}^{c}_{j}}) \cdot \mathcal{K}_{\hat{s}^{c}_{j}+1}(M(\check{\varrho},\hat{\varrho}),\prod_{k=1}^{\ell} M(\varrho_k,\xi_k)\bm{e}_1) \\ & = Q^* M(\hat{\varrho},\xi_1) \hdots M(\hat{\varrho},\xi_{\ell}) M(\hat{\varrho},\xi_{\ell+1}) \hdots M(\hat{\varrho},\xi_{\hat{s}^{c}_{j}}) \cdot
\mathcal{K}_{\hat{s}^{c}_{j}+1}(M(\check{\varrho},\hat{\varrho}),\bm{e}_1) \\ & = Q^* \mathcal{K}^{\text{rat}}_{\hat{s}^{c}_{j}+1}(A,B,\bm{e}_1, (\Xi^1, \hdots, \Xi^i, \Xi^{i+1}, \hdots, \Xi^m)) \\ & = Q^* \mathcal{E}_{\hat{s}^{c}_{j}+1} = \mathcal{E}_{\hat{s}^{c}_{j}+1}. \end{aligned} $} \end{equation*} The first equality uses \cref{eq:rksubspace_definition}, the second applies $\hat{M} = Q^* M Q$, and the third combines \cref{eq:vec_multishift,eq:Qmultishift} to get $\bm{q}_1 = \prod_{k=1}^{\ell} M(\varrho_k,\xi_k) \bm{e}_1$. The fourth equality uses the commutativity of the $M$ matrices and the property that $M(\hat{\varrho},\varrho_k) M(\varrho_k,\xi_k)$ can be merged to $M(\hat{\varrho},\xi_k)$. This results in the rational Krylov subspace of the original pencil with the original poles in the fifth equality, which by \Cref{thm:blockHessspaces} is equal to $\mathcal{E}_{\hat{s}^{c}_{j}+1}$. Finally, since $Q$ has a block diagonal structure, it leaves $\mathcal{E}_{\hat{s}^{c}_{j}+1}$ invariant for the given sizes. It is clear that $(\hat{A}, \hat{B})$ is a proper block Hessenberg pencil with partition $\hat{\bm{s}} = (\ell, s_{i+1}, \hdots, s_m)$ by construction. The last poles are unchanged by the block diagonal structure of $Q$, and the first $\ell$ poles are changed to $\mathrm{P}$, which follows from the uniqueness of block Hessenberg pencils, see \Cref{thm:implicitQ}. We remark that in order to compute the vector $\bm{x}$ in \cref{eq:proc_multishift}, $\ell$ shifted linear systems need to be solved as $M(\varrho_i,\xi_i) = \LinOp{\mu_i}{\nu_i} \LinOpInv{\alpha_i}{\beta_i}$. These linear systems are essentially of size $\ell$ because $\LinOpInv{\alpha_\ell}{\beta_\ell}$ is a block upper triangular matrix with a leading block of size $\ell{\times}\ell$. This limits the computational cost of computing $\bm{x}$ to $O(\ell^4)$, which is small as long as $\ell \ll n$.
It also follows that the vector $\bm{x}$ can be computed even when poles in $\Xi^{1}$,$\hdots$, $\Xi^{i}$ are equal to eigenvalues of the pencil. Properness ensures that the leading $\ell{\times}\ell$ block is nonsingular. The last $\ell$ poles in the last $i$ blocks $\Xi^{m-i+1}, \hdots, \Xi^{m}$ of $(A,B)$ can be changed to $\mathrm{P} = \lbrace \varrho_1, \hdots, \varrho_\ell \rbrace$ in a similar fashion. We compute first the row vector, \begin{equation} \label{eq:vec_multishift_end} \bm{x}^T = \gamma \bm{e}_{n}^T \prod_{j=m-\ell+1}^{m} N(\varrho_j,\xi_j), \end{equation} and then a unitary matrix $Z = \text{diag}(I,Z_{\ell+1})$ such that $\bm{x}^T Z = \alpha \bm{e}_n^T$. The pencil $(\hat{A},\hat{B}) = (A,B) Z$ then becomes a block Hessenberg pencil with pole tuple $(\Xi^1$, $\hdots$, $\Xi^{m-i}$, $\mathrm{P})$. We remark that if $(A,B)$ is a real-valued pencil and the poles and shifts considered in \cref{eq:vec_multishift,eq:vec_multishift_end} are both closed under complex conjugation, then the vectors $\bm{x}$ and $\bm{x}^T$ and consequently the matrices $Q$ and $Z$ are also real-valued. This follows from the commutativity property in combination with the property that $M(\bar{\varrho},\bar{\xi}) = \overline{M(\varrho,\xi)}$ for real-valued pencils. We have, \begin{equation} \label{eq:realvalued} \overline{M(\varrho,\xi) M(\bar{\varrho},\bar{\xi})} = \overline{M(\bar{\varrho},\bar{\xi}) M(\varrho,\xi)} = M(\varrho,\xi) M(\bar{\varrho},\bar{\xi}) \end{equation} so $M(\varrho,\xi) M(\bar{\varrho},\bar{\xi})$ is a real-valued matrix if $A$ and $B$ are real-valued. \paragraph{Swapping adjacent pole blocks} A second operation to change the pole tuple of a block Hessenberg pencil is swapping two consecutive blocks in the pole pencil. 
Swapping block $i$ with block $i{+}1$ requires the computation of a unitary equivalence essentially of size $(s_i{+}s_{i+1}){\times}(s_i{+}s_{i+1})$ that updates the pencil $(\hat{A},\hat{B}) = Q^* (A,B) Z$ in such a way that the new pole tuple and partition vector are given by, \begin{equation*} \begin{split} \hat{\Xi} &= (\Xi^1, \hdots, \Xi^{i-1}, \Xi^{i+1}, \Xi^{i}, \Xi^{i+2}, \hdots, \Xi^m), \\ \hat{\bm{s}} &= (s_1, \hdots, s_{i-1}, s_{i+1}, s_{i}, s_{i+2}, \hdots, s_m). \end{split} \end{equation*} This problem is equivalent to reordering eigenvalues in the generalized Schur form. Two different approaches to solve this problem have been proposed in the literature. The first approach, studied by K{\aa}gstr{\"{o}}m \cite{Kagstrom1993,Kagstrom1996}, requires the solution of a coupled Sylvester equation. This method is applicable for general block sizes $s_i$, $s_{i+1}$. The second approach, studied by Van Dooren \cite{VanDooren1981}, is a direct method that relies on the computation of a right eigenvector of a pole in block $i{+}1$. This method has been studied for swapping a block of dimension $1{\times}1$ or $2{\times}2$ with a block of dimension $1{\times}1$, or vice versa. \subsection{Multishift, multipole RQZ step} \label{ssec:msmpstep} Combining the operations from the previous subsection, we propose the following three-step procedure as the generic multishift, multipole RQZ step. \begin{enumerate}[I.] \item \label{step:MSMPRQZ1} Starting from a proper block Hessenberg pencil with pole tuple $\Xi = (\Xi^1, \hdots, \Xi^m)$ and partition $\bm{s} = (s_1, \hdots, s_m)$, select or compute $\ell = s_1{+}\hdots{+}s_i=s^{c}_i$ shifts $\mathrm{P}$. Introduce the shifts in the block Hessenberg pencil by computing the vector $\bm{x}$ via \cref{eq:proc_multishift} and the orthonormal matrix $Q$ via \cref{eq:Qmultishift} and updating the pencil accordingly.
The pencil now has pole tuple $\Xi = (\mathrm{P}, \Xi^{i+1}, \hdots, \Xi^m)$ and partition vector $\bm{s} = (\ell, s_{i+1}, \hdots, s_m)$. \item \label{step:MSMPRQZ2} Repeatedly use the swapping procedure to construct a unitary equivalence that moves the shifts $\mathrm{P}$ to the last pole block. This changes the pole tuple to $\Xi = (\Xi^{i+1}, \hdots, \Xi^m,\mathrm{P})$ and the partition vector to $\bm{s} = (s_{i+1}, \hdots, s_m, \ell)$. \item \label{step:MSMPRQZ3} Compute or select $\ell$ new poles $\Xi^{m+1}$ and introduce them at the end of the pencil to change the pole tuple to $\Xi = (\Xi^{i+1}, \hdots, \Xi^m,\Xi^{m+1})$. \end{enumerate} These three steps constitute a single multishift, multipole RQZ sweep. After every sweep, the properness of the pencil is checked and the problem is split into independent subproblems wherever possible. The multishift QZ method is a special case of this algorithm where the pencil initially has pole tuple $(\infty, \hdots, \infty)$ and partition $(1,\hdots,1)$ and where this form is always restored in step~\ref{step:MSMPRQZ3} of the algorithm. The single shift RQZ method is also a special case of this algorithm. In \Cref{sec:numerical,sec:deflation} we address a couple of numerical challenges that make the multishift, multipole RQZ step stable and efficient in finite precision arithmetic. First, \Cref{sec:theory} provides further theoretical foundation for the implicit approach. \section{Conclusion and future work} \label{sec:conclusion} In this paper we have generalized the rational QZ method from Hessenberg to block Hessenberg pencils. This allows for the use of complex conjugate shifts and poles in real arithmetic. Numerical considerations have shown that medium to large multiplicities are unfavorable due to inherent inaccuracies and an increasing computational complexity. 
In the spirit of recent developments for the QR \cite{Braman2002, Braman2002a} and QZ \cite{Kagstrom2007} methods, this urged us to use small shift and pole multiplicities, but to pack them tightly together. This approach maintains accurate shifts and poles while still allowing level 3 BLAS performance. We also implemented the aggressive early deflation strategy for block Hessenberg pencils. Numerical experiments indicated that this combination leads to an efficient algorithm for the generalized eigenvalue problem that can outperform LAPACK \cite{lapack} in terms of speed, accuracy, and time complexity. In a future update of \texttt{libRQZ}, we plan to include the option to use \emph{bidirectional} RQZ sweeps that actively chase poles from the bottom-right of the pencil to the upper-left side in parallel to chasing shifts from the upper-left to the bottom-right side of the pencil. Bidirectional chasing can, for a large part, be performed independently in both directions. It is hence an excellent opportunity for shared-memory parallelization. On the theoretical side, a further investigation of shift and pole selection strategies that stimulate interior deflations would be an interesting undertaking. \section*{Acknowledgements} The authors are grateful to Paul Van Dooren and Nicola Mastronardi for their help with the iterative refinement procedure for $2{\times}2$ with $2{\times}2$ swaps \cite{Camps2019}, which was essential for handling $2{\times}2$ blocks accurately. \section{Numerics} \label{sec:numerics} The numerical tests have been performed on an Intel Xeon E5-2697 v3 CPU with $14$ cores and $128$\,GB of RAM. Our implementation of the multishift, multipole RQZ method with aggressive deflation is compiled with \emph{gfortran} version 4.8.5 using compilation flag \texttt{-O3}. LAPACK \cite{lapack} and BLAS are referenced via \texttt{-llapack} and \texttt{-lblas}.
The library \texttt{libRQZ} supports both real-valued (\texttt{dRQZm}) and complex-valued (\texttt{zRQZm}) problems. \subsection{dRQZm and zRQZm} As discussed in \Cref{sec:numerical}, \texttt{dRQZm} uses $1{\times}1$ blocks for real-valued poles and shifts and $2{\times}2$ blocks for complex-conjugate pairs, while \texttt{zRQZm} always uses $1{\times}1$ blocks. Both algorithms proceed as follows: \begin{enumerate}[I.] \item Check for deflations at the upper-left side of the pencil using AED with window size $w_s$. \item Check for interior deflations along the subdiagonal. \item Compute $m$ shifts as the eigenvalues of the trailing $m{\times}m$ block with the RQZ method and introduce these as consecutive poles in the first $m$ subdiagonal positions of the block Hessenberg pencil. This is achieved by using the operations of \Cref{ssec:manipulating}. The involved transformations are accumulated and the pencil is updated by level 3 BLAS matrix-matrix multiplication. \item Chase the batch of $m$ shifts to the last $m$ positions on the subdiagonal of the block Hessenberg pencil. The chasing is performed by repeatedly swapping the $m$ shifts with the next $k$ poles. Every time one sequence of swaps is computed, all transformations are accumulated and the pencil is updated by level 3 BLAS matrix-matrix multiplication. \item Check for deflations at the bottom-right side of the pencil using AED with window size $w_e$. \item Compute $m$ poles as the eigenvalues of the leading $m{\times}m$ block with the RQZ method and introduce these as consecutive poles in the last $m$ subdiagonal positions of the block Hessenberg pencil. This is achieved by using the operations of \Cref{ssec:manipulating}. The involved transformations are accumulated and the pencil is updated by level 3 BLAS matrix-matrix multiplication. \end{enumerate} \noindent This algorithm actively chases shifts from the upper-left corner to the bottom-right corner.
This typically leads to fast convergence of eigenvalues near the bottom-right side of the pencil. The swapping also slowly moves the poles that are introduced at the bottom-right corner to the upper-left side of the pencil which, in turn, induces convergence of eigenvalues near the upper-left corner of the pencil. The settings used in \texttt{libRQZ} are summarized in Table~\ref{tab:settings}. The first column lists the size of the pencil. The second column lists the batch size $m$ of shifts that are handled in one iteration. The third column lists the swap size $k$: after the $m$ shifts have been swapped with the next $k$ poles, the transformations are accumulated and the entire pencil is updated with a BLAS call. In our experience, choosing $k$ equal to $m$ gives the best performance. The fourth column lists the window size $w_e$ for aggressive early deflation at the bottom-right side of the pencil. Finally, the fifth column lists the window size $w_s$ for aggressive early deflation at the upper-left side of the pencil. \begin{table}[htp] \centering \caption{\texttt{libRQZ} settings: $n$ problem size, $m$ step multiplicity, $k$ swap range, $w_e$ AED window size at the bottom-right side of the pencil, $w_s$ AED window size at the upper-left side of the pencil.} \begin{tabular}{l|c|c|c|c} $n$ & $m$ & $k$ & $w_e$ & $w_s$\\ \hline $\left[1;80\right[$ & $1$---$2$ & $1$---$2$ & $1$---$2$ &$1$---$2$\\ $\left[80;150\right[$ & $4$ & $4$ & $6$ &$4$\\ $\left[150;250\right[$ & $8$ & $8$ & $10$ &$4$\\ $\left[250;501\right[$ & $16$ & $16$ & $18$ &$6$\\ $\left[501;1001\right[$ & $32$ & $32$ & $34$ &$10$\\ $\left[1001;3000\right[$ & $64$ & $64$ & $66$ &$16$\\ $\left[3000;6000\right[$ & $128$ & $128$ & $130$ &$32$\\ $\left[6000;\infty\right[$ & $256$ & $256$ & $266$ &$48$\\ \end{tabular} \label{tab:settings} \end{table} We compare \texttt{zRQZm} and \texttt{dRQZm} with respectively \texttt{ZHGEQZ} and \texttt{DHGEQZ} from LAPACK \cite{lapack} in terms of speed and accuracy. 
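The size-dependent parameter selection of Table~\ref{tab:settings} amounts to a simple threshold lookup. A pure-Python transcription of the table (the problem-dependent $1$--$2$ values of the smallest regime are simplified to $2$ here):

```python
# (upper bound on n, m, k, w_e, w_s), transcribed from the settings table;
# each row applies to problem sizes n strictly below the given bound.
SETTINGS = [
    (80,             2,   2,   2,  2),   # table lists 1-2, simplified to 2
    (150,            4,   4,   6,  4),
    (250,            8,   8,  10,  4),
    (501,           16,  16,  18,  6),
    (1001,          32,  32,  34, 10),
    (3000,          64,  64,  66, 16),
    (6000,         128, 128, 130, 32),
    (float("inf"), 256, 256, 266, 48),
]

def rqz_settings(n):
    """Return (m, k, w_e, w_s) for a pencil of size n >= 1."""
    for bound, m, k, w_e, w_s in SETTINGS:
        if n < bound:
            return m, k, w_e, w_s
```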
\subsection{Random problems} \label{ssec:random} For our first numerical experiment, we have generated random matrix pairs of increasing dimension. The entries of the matrices are drawn from the standard normal distribution. The experiment is performed both for real-valued and complex-valued matrix pairs; for the latter class of problems, both the real and imaginary part are randomly generated. The matrix pairs are initially reduced to Hessenberg, triangular form by means of the LAPACK \cite{lapack} routines \texttt{xGEQRFP} and \texttt{xGGHRD}. After this initial reduction, the matrix pairs are further reduced to (real) generalized Schur form, $ (T,S) = Q^*(A,B)Z, $ with \texttt{libRQZ} and LAPACK \cite{lapack}. In all cases, the entire Schur decomposition is computed. \begin{figure}[htp] \centering \resizebox{\textwidth}{!}{% \input{fig/numexp/d/random/fig_numexp_d_randn.tikz} } \caption{CPU time of \texttt{DHGEQZ} of LAPACK and \texttt{dRQZm} of libRQZ on randomly generated real-valued matrix pencils (\emph{left}). Speedup of libRQZ over LAPACK (\emph{right}).} \label{fig:timereal} \end{figure} The left part of \Cref{fig:timereal} shows the CPU time of \texttt{dRQZm} and \texttt{DHGEQZ} for problems of size $1000$ up to $8000$ on a loglog scale. The dashed lines indicate the slopes of the time complexity as a function of problem size, which are estimated in a least-squares sense. The least-squares fits are computed based on the $(n_i,t_i)$ data indicated with the circular markers that show the exact height of the bars in the graph. For \texttt{DHGEQZ} we observe an empirical time complexity close to $O(n^3)$, while the empirical time complexity of \texttt{dRQZm} is significantly lower than $O(n^3)$ with a leading exponent close to $2.2$. This improved time complexity can be attributed to the effectiveness of aggressive early deflation in combination with the rational iteration leading to occasional deflations situated more in the interior part of the pencil.
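The quantities measured in this experiment are easy to reproduce on a small scale. A sketch using SciPy's \texttt{qz}, which wraps the same LAPACK functionality (problem size and seed are of course placeholders):

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Real generalized Schur form (T, S) = Q^T (A, B) Z
T, S, Q, Z = qz(A, B, output="real")

# Relative backward errors in the Frobenius norm
err_A = np.linalg.norm(T - Q.T @ A @ Z) / np.linalg.norm(A)
err_B = np.linalg.norm(S - Q.T @ B @ Z) / np.linalg.norm(B)
```

Both residuals are at the level of roundoff, mirroring the backward-error measurements used for the large-scale runs.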
The right part of \Cref{fig:timereal} shows the speedup achieved by \texttt{dRQZm} over \texttt{DHGEQZ}. The crossover point where \texttt{dRQZm} becomes faster than \texttt{DHGEQZ} is situated between $n=1000$ and $n=1414$. Our method, \texttt{dRQZm}, is slower than \texttt{DHGEQZ} for problems of smaller size because the computational overhead of computing swaps of $2{\times}2$ with $2{\times}2$ blocks and $2{\times}2$ with $1{\times}1$ blocks leads to larger lower-order terms in the time complexity. \begin{figure}[htp] \centering \resizebox{\textwidth}{!}{% \input{fig/numexp/d/random/fig_numexp_dz_bwe.tikz} } \caption{Relative backward error on Schur decomposition computed with LAPACK (\emph{circles}) and libRQZ (\emph{triangles}) on $A$ (\emph{full lines}) and $B$ (\emph{dashed lines}). Both for real-valued (\emph{left}) and complex-valued (\emph{right}) randomly generated matrix pairs.} \label{fig:bwerealcomplex} \end{figure} The left part of \Cref{fig:bwerealcomplex} shows the relative backward errors, $$ \|T - Q^T A Z\|_{F} / \|A\|_{F}, \quad \text{and,} \quad \|S - Q^T B Z\|_{F} / \|B\|_{F}, $$ on the generalized real Schur decompositions obtained with \texttt{dRQZm} and \texttt{DHGEQZ}. We observe that the relative backward errors of \texttt{dRQZm} are about half of those of \texttt{DHGEQZ}. \begin{figure}[htp] \centering \resizebox{\textwidth}{!}{% \input{fig/numexp/z/random/fig_numexp_z_randn.tikz} } \caption{CPU time of \texttt{ZHGEQZ} of LAPACK and \texttt{zRQZm} of libRQZ on randomly generated complex-valued matrix pencils (\emph{left}). Speedup of libRQZ over LAPACK (\emph{right}).} \label{fig:timecomplex} \end{figure} \Cref{fig:timecomplex} shows the results of an experiment similar to \Cref{fig:timereal} but for complex-valued pencils. Again, \texttt{ZHGEQZ} shows an empirical time complexity larger than $O(n^3)$, while \texttt{zRQZm} stays below $O(n^3)$.
The crossover point where \texttt{zRQZm} becomes faster than \texttt{ZHGEQZ} is not shown in \Cref{fig:timecomplex}, but is situated around $n=200$. This is significantly lower than for \texttt{dRQZm} and is explained by the fact that only $1{\times}1$ with $1{\times}1$ swaps are used in this case. These have a lower computational overhead than larger swaps. The right part of \Cref{fig:bwerealcomplex} shows the relative backward errors on the generalized Schur decompositions for the complex-valued pencils. Again, the relative backward error of \texttt{zRQZm} is about half of that of \texttt{ZHGEQZ}. \subsection{Problems from applications} \label{ssec:mmarket} In this section we test \texttt{libRQZ} on five pencils originating from applications. We study the \emph{cavity} and \emph{obstacle flow} pencils generated with IFISS \cite{Elman07,Elman14}. The same pencils were studied in our initial paper on the RQZ method \cite{Camps2018}. Besides these pencils, we have selected two pencils from Matrix Market \cite{Boisvert1997} originating from the MHD collection and the \emph{rail} pencil from the Oberwolfach benchmark collection \cite{Oberwolfach}. The results of the numerical tests are summarized in \Cref{tab:numexp}. The table lists the CPU time and maximum of the relative backward errors on $A$ and $B$ for the generalized real Schur form computed with LAPACK \cite{lapack} and \texttt{libRQZ}. Again, \texttt{libRQZ} requires less CPU time and has the smaller backward error.
\begin{table}[htp] \centering \caption{CPU times and maximum relative backward error on the generalized real Schur form computed with LAPACK and \texttt{libRQZ} for pencils originating from applications.} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{l|c|c|c|c|c} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{\texttt{DHGEQZ}} & \multicolumn{2}{c}{\texttt{dRQZm}} \\ Problem & $n$ & $t_{\text{CPU}} (s)$ & max error $(A,B)$ & $t_{\text{CPU}} (s)$ & max error $(A,B)$ \\ \hline Cavity Flow & $2467$ & $50.1$ & $1.2 \cdot 10^{-14}$ & $20.4$ & $4.7 \cdot 10^{-15}$\\ Obstacle Flow & $2488$ & $64.0$ & $9.9 \cdot 10^{-15}$ & $27.9$ & $5.9 \cdot 10^{-15}$\\ MHD3200 & $3200$ & $60.8$ & $9.0 \cdot 10^{-15}$ & $39.6$ & $3.1\cdot 10^{-15}$\\ MHD4800 & $4800$ & $194.1$ & $1.4 \cdot 10^{-14}$ & $92.2$ & $3.4\cdot 10^{-15}$\\ RAIL & $5177$ & $1287.5$ & $1.7 \cdot 10^{-13}$ & $87.5$ & $1.1 \cdot 10^{-14}$ \end{tabular} \end{adjustbox} \label{tab:numexp} \end{table} \section{Uniqueness and convergence} \label{sec:theory} In this section we motivate the implicit approach used in the multishift, multipole RQZ step in the form of an implicit Q theorem for block Hessenberg pencils. We also discuss the subspace iteration that is implicitly performed during the multishift, multipole RQZ step. The following lemma first extends the essential uniqueness of the QR factorization to a form of essential uniqueness in the factorization of a matrix as a product of a unitary matrix and a block upper triangular matrix. \begin{lemma} \label{lemma:blockQR} Let $A$ be a nonsingular $n{\times}n$ matrix and consider two block QR factorizations $A = \hat{Q} \hat{R}_{\bm{s}}$ and $A = \check{Q} \check{R}_{\bm{s}}$, where $\hat{Q}, \check{Q}$ are unitary matrices and $\hat{R}_{\bm{s}}, \check{R}_{\bm{s}}$ are block upper triangular matrices with the same partition $\bm{s} = (s_1,\hdots,s_m)$.
Then $\hat{Q} = \check{Q} D_{\bm{s}}$ with $D_{\bm{s}}$ a unitary block diagonal matrix with the same partition $\bm{s}$ as $\hat{R}_{\bm{s}}$ and $\check{R}_{\bm{s}}$. \end{lemma} \begin{proof} From $\hat{Q} \hat{R}_{\bm{s}} = \check{Q} \check{R}_{\bm{s}}$ it follows that, $ \check{Q}^* \hat{Q} = \check{R}_{\bm{s}} \hat{R}_{\bm{s}}^{-1} = \tilde{R}_{\bm{s}} = D_{\bm{s}}, $ with $\tilde{R}_{\bm{s}}$ a unitary block upper triangular matrix with partition ${\bm{s}}$. The only unitary block upper triangular matrices are block diagonal matrices $D_{\bm{s}}$. \end{proof} Before presenting the implicit Q theorem, we first give this direct corollary of \Cref{thm:blockHessspaces} that considers the structure of rational Krylov matrices instead of the subspaces. This is the block generalization of \cite[Corollary 5.7]{Camps2018}. \begin{corollary} \label{cor:blockHessmatrices} Given an $n{\times}n$ proper block Hessenberg pair $(A,B)$ with partition $\bm{s} = (s_1$, $\hdots$, $s_m)$ and poles $\Xi = (\Xi^1, \hdots, \Xi^m)$ that are different from the eigenvalues. Then for a tuple of shifts $\mathrm{P}$ different from the poles, $ K^{\text{rat}}_n(A,B,\bm{e}_1,\Xi, \mathrm{P}) $ is a full rank $n{\times}n$ block upper triangular matrix with partition $(1,s_1, s_2,\hdots, s_m)$. Similarly, $ L^{\text{rat}}_{n-1}(A,B,\bm{z}_1,(\breve{\Xi}^1, \Xi^2, \hdots, \Xi^m), \mathrm{P}) $ is a full rank $n{\times}(n{-}1)$ block upper triangular matrix with partition $(s_1, s_2,\hdots, s_m)$. Here $\bm{z}_1$ and $\breve{\Xi}^1$ are chosen as described in \Cref{thm:blockHessspaces}. \end{corollary} We are now ready to state the block implicit Q theorem.
\begin{theorem} \label{thm:implicitQ} Let $(A,B)$ be a regular matrix pair and let $\hat{Q}, \check{Q}, \hat{Z}, \check{Z}$ be unitary matrices with $\hat{Q}\bm{e}_1 = \sigma \check{Q} \bm{e}_1$, $|\sigma| = 1$, such that, \begin{equation*} (\hat{A}, \hat{B}) = \hat{Q}^* (A,B) \hat{Z} \quad \text{and} \quad (\check{A}, \check{B}) = \check{Q}^* (A,B) \check{Z}, \end{equation*} are both proper block Hessenberg pairs with the same partition $(s_1, \hdots, s_m)$ and poles $\Xi = (\Xi^1, \hdots, \Xi^m)$ different from the eigenvalues. Then the pairs $(\hat{A}, \hat{B})$ and $(\check{A},\check{B})$ are identical up to multiplication with two unitary block diagonal matrices, \begin{align*} \hat{A} = D^*_1 \check{A} D_2 \quad \text{and} \quad \hat{B} = D^*_1 \check{B} D_2, \end{align*} with $D_1$ having partition $(1, s_1, \hdots, s_m)$ and $D_2$ having partition $(s_1, \hdots, s_m, 1)$. \end{theorem} \begin{proof} \Cref{cor:blockHessmatrices} states that $K^{\text{rat}}_n(\hat{A},\hat{B},\bm{e}_1,\Xi, \mathrm{P})$ and $K^{\text{rat}}_n(\check{A},\check{B},\bm{e}_1,\Xi, \mathrm{P})$ are both block upper triangular matrices of full rank with block partition $(1,s_1, \hdots, s_m)$. 
We thus have, \begin{equation*} \begin{split} & \hat{Q} \, K_{n}^{\text{rat}}(\hat{A}, \hat{B}, \bm{e}_1, \Xi, \mathrm{P}) \\ = \ & \hat{Q} \, \left[ \bm{e}_1, \ \hat{M}(\varrho_1,\xi_1) \; \bm{e}_1, \ \hdots, \ \left( \prod_{i=1}^{n-1} \hat{M}(\varrho_i,\xi_i) \right) \; \bm{e}_1 \right] \\ = \ & \hat{Q} \, \left[ \bm{e}_1, \ \hat{Q}^* M(\varrho_1,\xi_1) \hat{Q} \; \bm{e}_1, \ \hdots, \ \hat{Q}^* \left( \prod_{i=1}^{n-1} M(\varrho_i,\xi_i) \right) \hat{Q} \; \bm{e}_1 \right] \\ = \ & \phantom{Q} \, \left[ \hat{\bm{q}}_1, \ M(\varrho_1,\xi_1) \; \hat{\bm{q}}_1, \ \hdots, \ \left( \prod_{i=1}^{n-1} M(\varrho_i,\xi_i) \right) \; \hat{\bm{q}}_1 \right] \\ = \ & \sigma \; \left[ \check{\bm{q}}_1, \ M(\varrho_1,\xi_1) \; \check{\bm{q}}_1, \ \hdots, \ \left( \prod_{i=1}^{n-1} M(\varrho_i,\xi_i) \right) \; \check{\bm{q}}_1 \right] \\ = \ & \sigma \check{Q} \, K_{n}^{\text{rat}}(\check{A},\check{B}, \bm{e}_1, \Xi,\mathrm{P}). \end{split} \end{equation*} From \Cref{lemma:blockQR} we have that this equality between two block QR factorizations implies that $\hat{Q} = \check{Q} D_{(1,s_1,\hdots,s_m)}$. For the relation between $\hat{Z}$ and $\check{Z}$, consider, \begin{equation*} (\doublehat{A},\doublehat{B}) = \doublehat{Q}^* \, (\hat{A},\hat{B}) \, \doublehat{Z}, \quad \text{and,} \quad (\doublecheck{A},\doublecheck{B}) = \doublecheck{Q}^* \, (\check{A},\check{B}) \, \doublecheck{Z}, \end{equation*} both reductions of the block Hessenberg pencils to a proper Hessenberg pencil as defined in \Cref{lemma:blockHessToHess} and assume, without loss of generality, that $\xi_{s_1}^1$ is the first pole in both $(\doublehat{A},\doublehat{B})$ and $(\doublecheck{A},\doublecheck{B})$. Thus $\doublehat{\bm{z}}_1$ is the right eigenvector of the pole pencil of $(\doublehat{A},\doublehat{B})$ associated with the eigenvalue $\xi_{s_1}^1$ and the same holds for $\doublecheck{\bm{z}}_1$ and $(\doublecheck{A},\doublecheck{B})$. 
This implies, \begin{equation*} \doublehat{Q}^* \, (\hat{A} - \xi_{s_1}^1 \hat{B}) \doublehat{\bm{z}}_1 = \hat{\gamma} \bm{e}_1 , \quad \text{and,} \quad \doublecheck{Q}^* \, (\check{A} - \xi_{s_1}^1 \check{B}) \doublecheck{\bm{z}}_1 = \check{\gamma} \bm{e}_1, \end{equation*} by the proper Hessenberg structure of $(\doublehat{A},\doublehat{B})$ and $(\doublecheck{A},\doublecheck{B})$. Since by \cref{eq:reduction}, $\doublehat{Q} \bm{e}_1 = \doublecheck{Q} \bm{e}_1 = \bm{e}_1$, we also have, \begin{equation*} (\hat{A} - \xi_{s_1}^1 \hat{B}) \doublehat{\bm{z}}_1 = \hat{\gamma} \bm{e}_1 , \quad \text{and,} \quad (\check{A} - \xi_{s_1}^1 \check{B}) \doublecheck{\bm{z}}_1 = \check{\gamma} \bm{e}_1. \end{equation*} Thus, \begin{equation*} \hat{Q}^* (A - \xi_{s_1}^1 B) \hat{Z} \doublehat{\bm{z}}_1 = \hat{\gamma} \bm{e}_1, \quad \text{and,} \quad \check{Q}^* (A - \xi_{s_1}^1 B) \check{Z} \doublecheck{\bm{z}}_1 = \check{\gamma} \bm{e}_1. \end{equation*} Using $\hat{Q} = \check{Q} D_{(1,s_1,\hdots,s_m)}$ and $D_{(1,s_1,\hdots,s_m)} \bm{e}_1 = \sigma \bm{e}_1$, we get that, \begin{equation*} \begin{split} \hat{Z} \doublehat{\bm{z}}_1 &= \hat{\gamma} (A- \xi_{s_1}^1 B)^{-1} \check{Q} D_{(1,s_1,\hdots,s_m)} \bm{e}_1 = \sigma \hat{\gamma} (A- \xi_{s_1}^1 B)^{-1} \check{Q}\bm{e}_1 \\ \check{Z} \doublecheck{\bm{z}}_1 & = \check{\gamma} (A- \xi_{s_1}^1 B)^{-1} \check{Q} \bm{e}_1, \end{split} \end{equation*} from which we conclude that $\hat{Z} \doublehat{\bm{z}}_1 = \tilde{\sigma} \check{Z} \doublecheck{\bm{z}}_1$ for some $\tilde{\sigma}$ with $|\tilde{\sigma}| = 1$.
Now use this result in combination with \Cref{cor:blockHessmatrices}, \begin{equation*} \begin{split} & \hat{Z} \, L_{n-1}^{\text{rat}}(\hat{A}, \hat{B}, \doublehat{\bm{z}}_1, \Xi, \mathrm{P}) \\ = \ & \hat{Z} \, \left[ \doublehat{\bm{z}}_1, \ \hat{N}(\varrho_1,\xi_1) \; \doublehat{\bm{z}}_1, \ \hdots, \ \left( \prod_{i=2}^{n-1} \hat{N}(\varrho_i,\xi_i) \right) \; \doublehat{\bm{z}}_1 \right] \\ = \ & \phantom{Z} \, \left[\hat{Z}\doublehat{\bm{z}}_1 , \ N(\varrho_1,\xi_1) \; \hat{Z}\doublehat{\bm{z}}_1, \ \hdots, \ \left( \prod_{i=2}^{n-1} N(\varrho_i,\xi_i) \right) \; \hat{Z}\doublehat{\bm{z}}_1 \right] \\ = \ & \tilde{\sigma} \; \left[ \check{Z}\doublecheck{\bm{z}}_1, \ N(\varrho_1,\xi_1) \; \check{Z}\doublecheck{\bm{z}}_1, \ \hdots, \ \left( \prod_{i=2}^{n-1} N(\varrho_i,\xi_i) \right) \; \check{Z}\doublecheck{\bm{z}}_1 \right] \\ = \ & \tilde{\sigma} \; \check{Z} \, L_{n-1}^{\text{rat}}(\check{A}, \check{B}, \doublecheck{\bm{z}}_1, \Xi, \mathrm{P}). \end{split} \end{equation*} Based on \Cref{lemma:blockQR}, we can now guarantee that the first $n{-}1$ columns of $\hat{Z}$ are equal to the first $n{-}1$ columns of $\check{Z}$ multiplied with some $(n{-}1){\times}(n{-}1)$ unitary block diagonal matrix $D_{(s_1,\hdots,s_m)}$. Observe that this also determines $\hat{\bm{z}}_n = \breve{\sigma} \check{\bm{z}}_n$, $|\breve{\sigma}| = 1$. This concludes the proof. \end{proof} In \cite[Theorem 6.1]{Camps2018} it is shown that an RQZ step with shift $\varrho$ on a Hessenberg pencil with pole tuple $\Xi = (\xi_1, \hdots, \xi_{n-1})$ and new pole $\xi_n$ performs nested subspace iteration accelerated by \begin{equation} q_{k}^{Q}(z) = \frac{z - \varrho}{z - \xi_k}, \quad \text{and} \quad q_{k}^{Z}(z) = \frac{z - \varrho}{z - \xi_{k+1}}, \end{equation} for the $k$th column vector of respectively $Q$ and $Z$.
Based on \Cref{lemma:blockHessToHess}, this can be extended to block Hessenberg pencils under the condition that the partition $\bm{s}$ prior to the multishift, multipole RQZ step is the same as the partition $\hat{\bm{s}}$ afterwards. We omit this generalization as the condition $\bm{s}= \hat{\bm{s}}$ limits the practical application of the theoretical result. Combining \cite[Theorem 6.1]{Camps2018} with \Cref{lemma:blockHessToHess}, it is clear, however, that in the multishift, multipole RQZ method shifts that have been swapped along the subdiagonal of the block Hessenberg pencil will lead to deflations at the end of the pencil, while poles that have been moved to the front of the pencil lead to convergence of eigenvalues at the beginning. This holds under the assumption that a good choice of poles and shifts is made. \section{Aggressive early deflation} \label{sec:deflation} Aggressive early deflation (AED) significantly speeds up the convergence of the QR \cite{Braman2002a} and QZ \cite{Kagstrom2007} methods by identifying deflatable eigenvalues before classical deflation criteria are able to detect them. This avoids the reuse of converged shifts in subsequent iterations, thereby initiating convergence of other eigenvalues sooner. In this section, we describe how aggressive early deflation is implemented for the RQZ method. The process consists of three stages and is summarized in \Cref{fig:aed}. Because the shifts lead to convergence in the bottom-right corner of the pencil and the poles cause convergence in the upper-left corner, AED can be performed at both sides of the pencil. We present the description of the AED process simultaneously for the upper-left and bottom-right sides of the pencil, but they can be treated separately in a practical implementation. The deflation window sizes are $w_e$ for the bottom-right side and $w_s$ for the upper-left side of the pencil.
The window sizes are chosen such that they cover an integer number of blocks, thereby avoiding subdivision of $2{\times}2$ blocks. The deflation windows are shown in pane~I of \Cref{fig:aed}. \begin{figure}[htp] \centering \resizebox{\textwidth}{!}{% \input{fig/figaed.tikz} } \caption{Visualization of the three stages of aggressive early deflation for block Hessenberg pencils; both at the front and back of the pencil. The matrix $A$ is in block Hessenberg form with $2{\times}2$ blocks representing complex conjugate pairs of shifts; the matrix $B$ is in Hessenberg form.} \label{fig:aed} \end{figure} In the first phase, shown in pane~II of \Cref{fig:aed}, the parts of the pencil within the deflation windows are reduced to real Schur form. This can be done with the RQZ method as all subpencils in the deflation windows are in block Hessenberg form. The pencil $(A,B)$ is subdivided as, \begin{equation} \label{eq:aed_part1} A = \begin{blockarray}{c|cc|cc} w_s & & 1 \text{ or } 2 & w_e \\ \begin{block}{[c|cc|c]c} A_{11} & A_{12} & A_{13} & A_{14} & w_s \\ \cline{1-5} A_{21} & A_{22} & A_{23} & A_{24} & 1 \text{ or } 2 \\ & A_{32} & A_{33} & A_{34} & \\ \cline{1-5} & & A_{43} & A_{44} & w_e \\ \end{block} \end{blockarray}, \quad B = \begin{blockarray}{c|cc|cc} w_s & & 1 & w_e \\ \begin{block}{[c|cc|c]c} B_{11} & B_{12} & B_{13} & B_{14} & w_s \\ \cline{1-5} B_{21} & B_{22} & B_{23} & B_{24} & 1 \\ & B_{32} & B_{33} & B_{34} & \\ \cline{1-5} & & B_{43} & B_{44} & w_e \\ \end{block} \end{blockarray}, \end{equation} and the subpencils $(A_{11},B_{11})$ and $(A_{44},B_{44})$ are the upper-left and bottom-right deflation windows.
Their reduction to real Schur form is given by, \begin{equation} \label{eq:aed_part2} (T_{11},S_{11}) = Q^{T}_{s} (A_{11},B_{11}) Z_{s}, \quad \text{and,} \quad (T_{44},S_{44}) = Q^{T}_{e} (A_{44},B_{44}) Z_{e}, \end{equation} which, when applied as an equivalence transformation to $(A,B)$ gives the following result: \begin{equation} \label{eq:aed_part3} \resizebox{.99\hsize}{!}{$ \check{A} = \begin{blockarray}{c|cc|c} \begin{block}{[c|cc|c]} T_{11} & Q^{T}_{s} A_{12} & Q^{T}_{s} A_{13} & Q^{T}_{s} A_{14} Z_{e}\\ \cline{1-4} A_{21} Z_{s} & A_{22} & A_{23} & A_{24} Z_{e} \\ & A_{32} & A_{33} & A_{34} Z_{e} \\ \cline{1-4} & & Q^{T}_{e} A_{43} & T_{44} \\ \end{block} \end{blockarray}, \check{B} = \begin{blockarray}{c|cc|c} \begin{block}{[c|cc|c]} S_{11} & Q^{T}_{s} B_{12} & Q^{T}_{s} B_{13} & Q^{T}_{s} B_{14} Z_{e} \\ \cline{1-4} B_{21} Z_{s} & B_{22} & B_{23} & B_{24} Z_{e} \\ & B_{32} & B_{33} & B_{34} Z_{e} \\ \cline{1-4} & & Q^{T}_{e} B_{43} & S_{44} \\ \end{block} \end{blockarray} $}. \end{equation} The blocks $(A_{21},B_{21}) Z_s$ and $Q^{T}_{e} (A_{43},B_{43})$ are the spikes shown in pane~II of \Cref{fig:aed}. Because $B$ is an upper Hessenberg matrix by \cref{eq:22stdform}, $B_{21} = b_{w_s+1,w_s} \bm{e}_{w_s}^T$ is of dimension $1{\times}w_s$ and $B_{43} = b_{n-w_e+1,n-w_e} \bm{e}_1$ is of dimension $w_e{\times}1$. The spikes at the side of $A$ can be of dimension $2{\times}w_s$ or $w_e{\times}2$ if there is a $2{\times}2$ block just after the deflation window in the upper-left side of the pencil (the example of \Cref{fig:aed} illustrates this situation), or right before the deflation window at the bottom-right side of the pencil. In this case, the $2$ rows of $A_{21} Z_s$ are scalar multiples of each other. The same holds for the $2$ columns of $Q^{T}_{e} A_{43}$. We denote with $\bm{p}_{s}^{B} = b_{w_s+1,w_s} \bm{e}_{w_s}^T Z_{s}$ the spike at the upper-left deflation window of $B$. 
Similarly, $\bm{p}_{s}^{A} = \zeta \bm{e}_{w_s}^T Z_{s}$, with $\zeta$ equal to the maximum of $|a_{w_s+1,w_s}|$ and $|a_{w_s+2,w_s}|$, denotes the spike at the upper-left side of $A$. The second phase in the AED process is illustrated in pane~III of \Cref{fig:aed} and entails testing for deflatable eigenvalues inside the deflation windows. The deflation test starts at the left of the spikes $\bm{p}_{s}^{A}$ and $\bm{p}_{s}^{B}$. If there is a $1{\times}1$ real eigenvalue located at this position, we test if: \begin{equation} \label{eq:aed_part5} | \bm{p}_{s,1}^{A} | < c \epsilon_{m} (|a_{1,1}| + |a_{2,2}|), \quad \text{and,} \quad | \bm{p}_{s,1}^{B} | < c \epsilon_{m} (|b_{1,1}| + |b_{2,2}|). \end{equation} If there is a $2{\times}2$ complex conjugate pair of eigenvalues at this position, we test if: \begin{equation} \label{eq:aed_part6} | \bm{p}_{s,1}^{A} | + | \bm{p}_{s,2}^{A} | < c \epsilon_{m} \|A_{1:2,1:2}\|_{F}, \quad \text{and,} \quad | \bm{p}_{s,1}^{B} | + | \bm{p}_{s,2}^{B} | < c \epsilon_{m} \|B_{1:2,1:2}\|_{F}. \end{equation} If the first eigenvalue is deflatable according to \cref{eq:aed_part5} or \cref{eq:aed_part6}, the corresponding spike elements in $\bm{p}_{s}^{A}$ and $\bm{p}_{s}^{B}$ are set to zero and the next eigenvalue is tested according to the same criterion. If the first eigenvalue is not deflatable, another eigenvalue that has not yet been tested is swapped to the front of the spike. It is then checked whether this eigenvalue is deflatable according to \cref{eq:aed_part5} or \cref{eq:aed_part6}. This procedure is continued until all deflatable eigenvalues inside the deflation window are identified. The swapping of eigenvalues within the deflation window does not change the form of \cref{eq:aed_part2}, but the equivalences $\hat{Q}_s$ and $\hat{Z}_s$ are changed, which also changes $\hat{\bm{p}}_{s}$ and $(\hat{T}_{11},\hat{S}_{11})$. The same strategy is used for AED at the bottom-right side of the pencil.
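The deflation sweep of the second phase can be prototyped in isolation. The toy model below (all names hypothetical) treats $1{\times}1$ eigenvalues only, condenses the diagonal data of \cref{eq:aed_part5} to a single magnitude per eigenvalue, and models the swap of an untested eigenvalue to the front of the spike as a list rotation:

```python
EPS = 2.0 ** -52   # unit roundoff in double precision

def aed_deflation_count(spike_a, spike_b, diag_a, diag_b, c=10.0):
    """Count deflatable 1x1 eigenvalues in an AED window (toy model).

    spike_a/spike_b hold the spike entry per eigenvalue; diag_a/diag_b
    hold a representative diagonal magnitude of the Schur factors.
    """
    work = list(zip(spike_a, spike_b, diag_a, diag_b))
    deflated, fails = 0, 0
    while work and fails < len(work):
        pa, pb, da, db = work[0]
        if abs(pa) < c * EPS * abs(da) and abs(pb) < c * EPS * abs(db):
            deflated += 1              # spike entries zeroed: deflation
            work.pop(0)
            fails = 0
        else:
            work.append(work.pop(0))   # swap an untested eigenvalue to front
            fails += 1
    return deflated
```

The sweep stops once every remaining eigenvalue has failed the test, mirroring the termination condition described above.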
In pane~III of \Cref{fig:aed} all spike elements that have been zeroed are marked in red. In the third and last phase, the nonzero spike elements are handled in such a way that the (block) Hessenberg form is restored. The restored form is shown in pane~IV of \Cref{fig:aed}, where the larger block in the middle is in block Hessenberg form and the smaller blocks at the upper-left and bottom-right side of the pencil are in real Schur form. The block Hessenberg restoration is achieved by a sequence of rotations as follows. Assume that the spike after the deflation procedure of the second phase is $\hat{\bm{p}}_s = \hat{\zeta} \bm{e}_{w_s}^T \hat{Z}_{s}$ and that the first $i$ entries in $\hat{\bm{p}}_s$ are zeroed during the deflation step. We then compute rotations $G_{i+1}, \hdots, G_{w_s-1}$ such that, \begin{equation} \label{eq:aed_part7} \hat{\bm{p}}_s G_{i+1} \hdots G_{w_s-1} = \hat{\zeta} \bm{e}_{w_s}^T \hat{Z}_{s} G_{i+1} \hdots G_{w_s-1} = \sigma \hat{\zeta} \bm{e}_{w_s}^T. \end{equation} Updating $\tilde{Z}_s = \hat{Z}_{s} G_{i+1} \hdots G_{w_s-1}$ gives the final equivalence such that the block Hessenberg form is restored. The same idea is used for the deflation window at the bottom-right side of the pencil. We remark that for complex-valued problems the Hessenberg form can be restored in the third phase by a row or column permutation for AED at the upper-left or bottom-right side of the pencil, respectively. \section{Introduction} \label{sec:introduction} The rational QZ method (RQZ) \cite{Camps2018} generalizes the standard QZ method of Moler \& Stewart \cite{Moler1973}. Both are methods for the numerical solution of the dense, unsymmetric generalized eigenvalue problem defined by a pair of matrices $A, B \in \mathbb{F}^{n{\times}n}$, $\mathbb{F} \in \lbrace \mathbb{C}, \mathbb{R} \rbrace$.
The set of eigenvalues of the pencil $(A,B)$ is denoted as $\Lambda$ and defined by, \begin{equation} \label{eq:GEP} \Lambda = \lbrace \lambda = \alpha/\beta \in \bar{\mathbb{C}}: \det(\beta A - \alpha B) = 0 \rbrace, \end{equation} with $\bar{\mathbb{C}} = \mathbb{C} \cup \lbrace \infty \rbrace$. Eigenvalues with $\beta = 0$ are located at $\infty$. We assume throughout this paper that the pair $(A,B)$ is \emph{regular}, which means that its characteristic polynomial is not identically zero. This implies that there are exactly $n$ eigenvalues, including those at $\infty$. The RQZ method acts on pencils in Hessenberg, Hessenberg form instead of the Hessenberg, triangular form used in the QZ method. It relies on \emph{pole swapping} instead of \emph{bulge chasing}. Both the single shift RQZ method and the RQZ method with tightly-packed shifts, as formulated in \cite{Camps2018}, are applicable to real- and complex-valued pencils. However, both require complex arithmetic for real-valued pencils having complex conjugate eigenvalues. The RQZ method computes the generalized Schur form of $(A,B)$. This is a unitary equivalence transformation, \begin{equation} \label{eq:QZequivalence} (T,S) = Q^* \, (A,B) Z, \end{equation} such that $(T,S)$ is a triangular, triangular pencil equivalent to $(A,B)$. The eigenvalues of $(A,B)$ are readily available as the ratios $t_{ii}/s_{ii}$ of the diagonal elements. In this paper we introduce the multishift, multipole RQZ method which acts on pencils where both matrices are in \emph{block Hessenberg} form. The main benefit of using shifts and poles of higher multiplicity is that complex conjugate pairs of shifts and poles can be represented in real arithmetic for real-valued pencils. This is similar to the well-known implicit double-shift QR step introduced by Francis \cite{Fra62} and the double-shift QZ step \cite{Moler1973}. The focus of this paper is thus on the case $\mathbb{F} = \mathbb{R}$.
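For readers who wish to experiment, the generalized Schur form \eqref{eq:QZequivalence} and the eigenvalue ratios $t_{ii}/s_{ii}$ can be verified numerically. The following is a minimal sketch using SciPy's QZ routine; the variable names are illustrative and the code is not part of \texttt{libRQZ}.

```python
# Minimal sketch: generalized Schur form (T, S) = Q^* (A, B) Z of a
# random pencil via SciPy's QZ, with the eigenvalues read off as the
# ratios t_ii / s_ii of the diagonal elements.
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Complex output yields a triangular, triangular pencil (T, S).
T, S, Q, Z = qz(A, B, output='complex')

# A = Q T Z^* and B = Q S Z^*, i.e. (T, S) = Q^* (A, B) Z.
assert np.allclose(Q @ T @ Z.conj().T, A)
assert np.allclose(Q @ S @ Z.conj().T, B)

# Both factors are upper triangular ...
assert np.allclose(np.tril(T, -1), 0)
assert np.allclose(np.tril(S, -1), 0)

# ... and the eigenvalues are the diagonal ratios t_ii / s_ii.
eigenvalues = np.diag(T) / np.diag(S)
print(eigenvalues)
```

A ratio with a (numerically) zero denominator signals an eigenvalue at $\infty$, in line with \eqref{eq:GEP}.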
The multishift, multipole RQZ method no longer converges to the triangular, triangular pencil of the generalized Schur form \eqref{eq:QZequivalence}. Instead, for $A,B \in \mathbb{R}^{n \times n},$ it will converge to the real generalized Schur form, \begin{equation} \label{eq:realQZequivalence} (T,S) = Q^T \, (A,B) Z = \left( \begin{bmatrix} T_{11} & T_{12} & \hdots & T_{1m} \\ 0 & T_{22} & \ddots & T_{2m} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \hdots & 0 & T_{mm} \end{bmatrix}, \begin{bmatrix} S_{11} & S_{12} & \hdots & S_{1m} \\ 0 & S_{22} & \ddots & S_{2m} \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \hdots & 0 & S_{mm} \end{bmatrix} \right), \end{equation} where the diagonal subpencils $(T_{ii},S_{ii})$, $i=1,\hdots,m$, are of dimension $1{\times}1$ or $2{\times}2$ and correspond to the real and complex conjugate eigenvalues of $(A,B)$, respectively. The remainder of this article consists of two parts. \Cref{sec:bHessenberg,sec:theory} make up the theoretical part of the paper. In \Cref{sec:bHessenberg}, we study matrix pencils in block Hessenberg form. We define properness of block Hessenberg pencils, their \emph{pole pencil}, and their \emph{pole tuple}. We show how the pole tuple can be altered by changing pole blocks at the edge of the pencil and by swapping neighboring pole blocks. The multishift, multipole RQZ step follows directly from this discussion. \Cref{sec:theory} extends the implicit Q theorem for Hessenberg pencils \cite{Camps2018} to block Hessenberg pencils and briefly discusses the convergence behaviour of the method. In the second part of the paper, we follow a practical approach and discuss how a multishift, multipole RQZ method can be implemented in finite precision arithmetic. The QR method suffers from degraded performance when moderate to large shift multiplicities are used. Watkins \cite{Watkins1996} studied this phenomenon and demonstrated that shifts become \emph{blurred} during a QR iteration of higher shift multiplicity.
This severely decreases the effectiveness of the shifts. For the QR method, this issue is mitigated in the \emph{small bulge} multishift variant introduced by Braman, Byers \& Mathias \cite{Braman2002}. This approach is extended to the QZ method by K{\aa}gstr{\"o}m \& Kressner \cite{Kagstrom2007}. In \Cref{sec:numerical}, we demonstrate that the multishift, multipole RQZ method is also prone to numerical issues when shifts and poles of moderate to large multiplicities are used. To overcome these numerical difficulties, we propose a multishift, multipole RQZ method that uses \emph{tightly-packed}, small blocks. Specifically, we use blocks of dimension $2{\times}2$ for complex conjugate shifts and poles in real pencils, and blocks of dimension $1{\times}1$ for real shifts and poles in real pencils and for complex pencils. The last tool we adapt to the RQZ method from recent improvements to the QR \cite{Braman2002a} and QZ \cite{Kagstrom2007} methods is the use of advanced deflation strategies. Specifically, we implement the \emph{aggressive early deflation} technique during the RQZ iteration in order to obtain level 3 BLAS performance. The resulting methods are implemented as part of the Fortran package \texttt{libRQZ} which is made publicly available at \url{numa.cs.kuleuven.be/software/rqz}. \Cref{sec:numerics} illustrates the performance of \texttt{libRQZ} with some numerical experiments. We conclude the paper in \Cref{sec:conclusion}. \subsection*{Notation and elementary definitions} $A, B, \hdots$ are matrices, $\bm{a}, \bm{b}, \hdots$ vectors, and $\alpha, \beta, \hdots$ scalars. Subspaces are denoted with calligraphic letters. For example, $\mathcal{R}(A) = \mathcal{R}(\bm{a}_1,\hdots,\bm{a}_n)$ is the column space of $A = [ \bm{a}_1 \hdots \bm{a}_n ]$, and $\mathcal{E}_k = \mathcal{R}(\bm{e}_1, \hdots, \bm{e}_k)$ with $\bm{e}_j$ the $j$th canonical basis vector of appropriate dimension.
The $k$th order Krylov subspace generated by $A$ from starting vector $\bm{v}$ is $\mathcal{K}_k(A,\bm{v}) = \mathcal{R}(\bm{v}, A\bm{v}, \hdots, A^{k-1}\bm{v})$. The tuple $(\alpha, \beta)$ is ordered and $\lbrace \alpha, \beta \rbrace$ denotes an unordered multiset with repetition. The complex plane extended with the point at infinity, $\mathbb{C} \cup \lbrace \infty \rbrace$, is denoted as $\bar{\mathbb{C}}$. Division of a nonzero $\alpha \in \mathbb{C}$ by $0$ results in infinity. \section{Numerical considerations} \label{sec:numerical} In this section, we discuss numerical experiments related to the pole introduction and swapping operations and draw conclusions for the practical implementation of the multishift, multipole RQZ method. \subsection{Introducing pole blocks} In finite precision arithmetic, the introduction of a large number of poles in a block Hessenberg pencil via the computation of the vectors as described in \cref{eq:vec_multishift,eq:vec_multishift_end} becomes increasingly inaccurate even for small to medium blocksizes. This comes as no surprise. Kressner \cite{Kressner2005a} studied the use of larger bulges in the QR method and made a connection between the introduction of the multishift block in the Hessenberg matrix and the pole placement problem in systems and control theory. It has been shown in control theory that placing many poles in a high dimensional system is intrinsically ill-conditioned \cite{He1995}. To illustrate the increasing inaccuracy of the pole introduction we have performed a numerical experiment for which the results are summarized in \Cref{fig:poleintro}. We introduced pole blocks containing $\ell = 2, 4, 6, \hdots, 30$ randomly generated pairs of complex-conjugate shifts $\varrho_i$ in a real-valued Hessenberg matrix, a real-valued Hessenberg pencil, and a real-valued block Hessenberg pencil with leading blocksize $\ell$. The procedure based on \cref{eq:vec_multishift} was used for this.
All problems are of size $n{=}100$. The Hessenberg matrix is obtained from the Hessenberg reduction of a randomly generated matrix with normally distributed entries with mean $0$ and variance $1$. In this case the shift vector $\bm{x}$ is computed in the classical way \cite{Watkins1996}, which is compatible with \cref{eq:vec_multishift}. Then an orthogonal matrix $Q$ is computed having $\bm{q}_1 = \bm{x}$. The shifts are introduced as $Q^T (A,I)$, which is a block Hessenberg pencil. The actual shifts $\hat{\varrho}_i$ are then computed as the eigenvalues of the leading subdiagonal block of $Q^T (A,I)$. The black line in \Cref{fig:poleintro} shows the median absolute error $| \varrho_i - \hat{\varrho}_i|$ over all shifts and $100$ repetitions of the experiment. The red line in \Cref{fig:poleintro} shows the results of the same experiment but now starting from a Hessenberg pencil $(A,B)$ where each individual matrix is generated as before. Now a procedure based on \cref{eq:vec_multishift} is used to compute $\bm{x}$. Finally, the blue line shows the results when $(A,B)$ is initially a block Hessenberg pencil with leading blocksize $s_1 = \ell$ and all other blocks of size $1$. We remark that, in all three experiments, we obtain a block Hessenberg pencil with partition $(\ell,1,\hdots,1)$ after the pole block has been introduced. The only difference is the procedure to compute $\bm{x}$ and the form of the pencil prior to the pole introduction. \begin{figure}[htp] \centering \input{fig/pole_introduction.tikz} \caption{Initialization error as a function of blocksize for multishift QR (Hessenberg matrix), RQZ (Hessenberg pencil), and multishift, multipole RQZ (block Hessenberg pencil). Median result over $100$ randomly generated problems of size $n{=}100$. } \label{fig:poleintro} \end{figure} We observe from \Cref{fig:poleintro} that the accuracy of the shifts rapidly decreases for larger blocksizes in all three cases.
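The single-matrix (QR-style) variant of this experiment can be sketched as follows. This is illustrative code, not the implementation behind \Cref{fig:poleintro}; the helper name and parameter choices are ours.

```python
# Minimal sketch of the pole-introduction experiment for the Hessenberg
# matrix case: form the classical shift vector x = p(H) e_1 in real
# arithmetic, build an orthogonal Q whose first column is parallel to x,
# and recover the shifts as the eigenvalues of the leading subdiagonal
# block of the pencil Q^T (H, I).
import numpy as np
from scipy.linalg import eig, hessenberg, qr

def shift_intro_error(n=50, num_pairs=1, seed=0):
    rng = np.random.default_rng(seed)
    H = hessenberg(rng.standard_normal((n, n)))
    # well-separated complex-conjugate shift pairs a_i +/- b_i * 1j
    a = rng.uniform(0.5, 1.5, num_pairs)
    b = rng.uniform(0.5, 1.5, num_pairs)
    shifts = np.concatenate([a + 1j * b, a - 1j * b])
    ell = 2 * num_pairs                # blocksize of the pole block
    # x = prod_i (H^2 - 2 a_i H + (a_i^2 + b_i^2) I) e_1, real arithmetic
    x = np.zeros(n)
    x[0] = 1.0
    for ai, bi in zip(a, b):
        y = H @ x
        x = H @ y - 2 * ai * y + (ai**2 + bi**2) * x
        x /= np.linalg.norm(x)         # rescaling only, for safety
    Q, _ = qr(x.reshape(-1, 1))        # full Q with q_1 = +/- x / ||x||
    Ah, Bh = Q.T @ H, Q.T              # the pencil Q^T (H, I)
    # leading pole block: rows 2..ell+1, columns 1..ell (1-based)
    rho_hat = eig(Ah[1:ell + 1, :ell], Bh[1:ell + 1, :ell], right=False)
    # worst-case distance between intended and recovered shifts
    return max(min(abs(rho_hat - s)) for s in shifts)
```

For a single conjugate pair the recovered shifts agree with the intended ones up to roundoff; increasing \texttt{num\_pairs} reproduces the loss of accuracy discussed above.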
We conclude from this experiment that the blocksize should be limited in a practical implementation in order to avoid losing all accuracy in the shifts already at the initialization stage. Indeed, there is not much hope for an effective multishift, multipole RQZ sweep if the shifts that are introduced in the block Hessenberg pencil have few to no significant digits in common with the intended shifts. Nonetheless, Watkins \cite{Watkins1996} showed that in a multishift QR iteration shifts that are off at the start of the sweep can still come into focus later on. \subsection{Swapping pole blocks} Swapping pole blocks of sizes $s_i$ and $s_{i+1}$ requires the solution of a linear system with Kronecker product structure of size $2 s_i s_{i+1} \times 2 s_i s_{i+1}$ \cite{Kagstrom1993,Kagstrom1996}. The computational cost rapidly grows for increasing blocksize. In case at least one of $s_i$ and $s_{i+1}$ is equal to $1$, the swap can be performed directly based on the computation of an eigenvector \cite{VanDooren1981}. This procedure is norm-wise backward stable. \subsection{Conclusion} In order to limit both the computational cost of the pole introduction and swapping, and the loss of accuracy, we propose a \emph{tightly-packed small-block} multishift, multipole RQZ sweep. The shifts and poles are tightly-packed similarly to \cite[Section 4.4]{Camps2018}. We represent real poles as subdiagonal blocks of dimension $1{\times}1$ and complex-conjugate pairs as subdiagonal blocks of dimension $2{\times}2$ of the form, \begin{equation} \label{eq:22stdform} \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \begin{bmatrix} b_{11} & b_{12} \\ & b_{22} \end{bmatrix}. \end{equation} This standard form has the advantage that $B$ is always an upper Hessenberg matrix throughout the iteration and it simplifies the deflation criterion based on \Cref{def:properness}.
The $i$th pole along the subdiagonal is considered deflated if, \begin{equation} \label{eq:defsing} |a_{i+1,i}| < c \epsilon_m (|a_{i,i}|+|a_{i+1,i+1}|), \quad \text{and,} \quad |b_{i+1,i}| < c \epsilon_m (|b_{i,i}|+|b_{i+1,i+1}|), \end{equation} in the case of a single pole. Here, $\epsilon_m$ is the machine precision and $c$ is a small constant. If the $i$th pole is a double pole in standard form \cref{eq:22stdform}, we consider it deflated if either, \begin{equation} \label{eq:defdbl1} \begin{split} |a_{i+1,i}| + |a_{i+2,i}| & < c \epsilon_m (|a_{i,i}|+|a_{i+1,i+1}|), \quad \text{and,} \\ |b_{i+1,i}| & < c \epsilon_m (|b_{i,i}|+|b_{i+1,i+1}|), \end{split} \end{equation} or, \begin{equation} \label{eq:defdbl2} \begin{split} |a_{i+2,i}| + |a_{i+2,i+1}| & < c \epsilon_m (|a_{i+1,i+1}|+|a_{i+2,i+2}|), \quad \text{and,} \\ |b_{i+2,i+1}| & < c \epsilon_m (|b_{i+1,i+1}|+|b_{i+2,i+2}|). \end{split} \end{equation} Deflations in the first block column and last block row of the pencil are also checked according to \Cref{def:properness}. The first pole block of size $s_1 = 1$ or $2$ can be deflated whenever there exists an $(s_{1}+1){\times}(s_1+1)$ orthogonal matrix $Q$ such that, \begin{equation} \label{eq:defstart} Q^{T}\left( \begin{bmatrix} \bm{a}_{1,1}^T \\ A_{2,1} \end{bmatrix}, \begin{bmatrix} \bm{b}_{1,1}^T \\ B_{2,1} \end{bmatrix} \right) = \left( \begin{bmatrix} A_{1,1} \\ \bm{0}^T \end{bmatrix}, \begin{bmatrix} B_{1,1} \\ \bm{0}^T \end{bmatrix} \right). \end{equation} Here, the last row is considered numerically zero according to a relative tolerance similar to \cref{eq:defsing,eq:defdbl1,eq:defdbl2}. Again, we make use of the standard form \cref{eq:22stdform} to efficiently check if a suitable deflation transformation $Q$ can be computed in case $s_1 = 2$. A similar approach is used to check for deflations in the last block row.
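As an illustration, the single-pole criterion \cref{eq:defsing} amounts to only a few lines of code. The helper below and the choice $c = 10$ are illustrative and not taken from \texttt{libRQZ}.

```python
# Minimal sketch of the single-pole deflation test: the i-th subdiagonal
# pole is considered deflated when the subdiagonal entries of both A and
# B are negligible relative to their neighbouring diagonal entries.
import numpy as np

def single_pole_deflated(A, B, i, c=10.0):
    eps_m = np.finfo(float).eps  # machine precision
    small_a = abs(A[i + 1, i]) < c * eps_m * (abs(A[i, i]) + abs(A[i + 1, i + 1]))
    small_b = abs(B[i + 1, i]) < c * eps_m * (abs(B[i, i]) + abs(B[i + 1, i + 1]))
    return small_a and small_b

# Example: a subdiagonal entry at roundoff level is deflatable,
# a visibly nonzero one is not.
A = np.triu(np.ones((3, 3)))
B = np.triu(np.ones((3, 3)))
A[1, 0] = 1e-18
print(single_pole_deflated(A, B, 0))   # True
A[1, 0] = 1e-3
print(single_pole_deflated(A, B, 0))   # False
```

The double-pole tests \cref{eq:defdbl1,eq:defdbl2} follow the same pattern with the additional subdiagonal entries of the $2{\times}2$ block.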
The swapping transformations are computed according to the procedures described in \cite{VanDooren1981} in case $s_i=1$, $s_{i+1}=1$, or both are equal to $1$. If $s_i=s_{i+1}=2$, the transformations are computed according to \cite{Kagstrom1993,Kagstrom1996}. However, this method is not norm-wise backward stable. It leads to occasional non-negligible off-diagonal blocks when an ill-conditioned $2{\times}2$ block close to convergence is involved. In this case, the transformation is refined \cite{Camps2019} to decrease the norm of the off-diagonal block. Our numerical experiments indicated that iterative refinement is required in about $5\%$ of all $2{\times}2$ with $2{\times}2$ swaps during a typical RQZ iteration. In \Cref{sec:numerics}, we will describe our final multishift, multipole RQZ algorithm and test it in a series of numerical experiments.
\subsection{Dataset Creation Methodology} \label{apx:cifar10_dataset_creation} Our overall goal was to create a new test set that is as close as possible to being drawn from the same distribution as the original CIFAR-10 dataset. One crucial aspect here is that the CIFAR-10 dataset did not exhaust any of the Tiny Image keywords it is drawn from. So by collecting new images from the same keywords as CIFAR-10, our new test set can match the sub-class distribution of the original dataset. \paragraph{Understanding the Sub-Class Distribution.} As the first step, we determined the Tiny Image keyword for every image in the CIFAR-10 dataset. A simple nearest-neighbor search sufficed since every image in CIFAR-10 had an exact duplicate ($\ell_2$-distance $0$) in Tiny Images. Based on this information, we then assembled a list of the 25 most common keywords for each class. We decided on 25 keywords per class since the 250 total keywords make up more than 95\% of CIFAR-10. Moreover, we wanted to avoid accidentally creating a harder dataset with infrequent keywords that the classifiers had little incentive to learn based on the original CIFAR-10 dataset. The keyword distribution can be found in Appendix \ref{apx:v4_keywords}. Inspecting this list reveals the importance of matching the sub-class distribution. For instance, the most common keyword in the \class{airplane}{} class is \keyword{stealth\_bomber} and not a more common civilian type of airplane. In addition, the third most common keyword for the \class{airplane}{} class is \keyword{stealth\_fighter}. Both types of planes are highly distinctive. There are more examples where certain sub-classes are considerably different. For instance, trucks from the keyword \keyword{fire\_truck} are mostly red, which is quite different from pictures for \keyword{dump\_truck} or other keywords. \paragraph{Collecting New Images.} After determining the keywords, we collected corresponding images. 
To simulate the student / researcher split in the original CIFAR-10 collection procedure, we introduced a similar split between two authors of this paper. Author A took the role of the original student annotators and selected suitable new images for the 250 keywords. In order to ensure a close match between the original and new images for each keyword, we built a user interface that allowed Author A to first look through existing CIFAR-10 images for a given keyword and then select new candidates from the remaining pictures in Tiny Images. Author A followed the labeling guidelines in the original instruction sheet \cite{krizhevsky2009learning}. The number of images Author A selected per keyword was chosen so that our final dataset would contain between 2,000 and 4,000 images. We decided on 2,000 images as a target number for two reasons: \begin{itemize} \item While the original CIFAR-10 test set contains 10,000 images, a test set of size 2,000 is already sufficient for a fairly small confidence interval. In particular, a conservative confidence interval (Clopper-Pearson at confidence level 95\%) for accuracy 90\% has size about $\pm 1\%$ with $n =$ 2,000 (to be precise, $[88.6\%, \, 91.3\%]$). Since we considered a potential discrepancy between original and new test accuracy only interesting if it is significantly larger than 1\%, we decided that a new test set of size 2,000 was large enough for our study. \item As with very infrequent keywords, our goal was to avoid accidentally creating a harder test set. Since some of the Tiny Image keywords have only a limited supply of remaining adequate images, we decided that a smaller target size for the new dataset would reduce the bias toward including images of more questionable difficulty. \end{itemize} After Author A had selected a set of about 9,000 candidate images, Author B adopted the role of the researchers in the original CIFAR-10 dataset creation process.
In particular, Author B reviewed all candidate images and removed images that were unclear to Author B or did not conform to the labeling instructions in their opinion (some of the criteria are subjective). In the process, a small number of keywords did not have enough images remaining to reach the $n =$ 2,000 threshold. Author B then notified Author A about the respective keywords and Author A selected a further set of images for these keywords. In this process, there was only one keyword where Author A had to carefully examine all available images in Tiny Images. This keyword was \keyword{alley\_cat} and comprises less than 0.3\% of the overall CIFAR-10 dataset. \paragraph{Final Assembly.} After collecting a sufficient number of high-quality images for each keyword, we sampled a random subset from our pruned candidate set. The sampling procedure was such that the keyword-level distribution of our new dataset matches the keyword-level distribution of CIFAR-10 (see Appendix \ref{apx:v4_keywords}). In the final stage, we again proceeded similarly to the original CIFAR-10 dataset creation process and used $\ell_2$-nearest neighbors to filter out near duplicates. In particular, we removed near-duplicates within our new dataset and also images that had a near duplicate in the original CIFAR-10 dataset (train or test). The latter aspect is particularly important since our reproducibility study is only interesting if we evaluate on truly unseen data. Hence we manually reviewed the top-10 nearest neighbors for each image in our new test set. After removing near-duplicates in our dataset, we re-sampled the respective keywords until this process converged to our final dataset. Figure \ref{fig:testexamples} shows a random subset of images from the original and our new test set. We remark that we did not run any classifiers on our new dataset during the data collection phase of our study.
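A minimal sketch of this $\ell_2$ nearest-neighbor screen, assuming images flattened to floating-point vectors (the helper name is illustrative, not our actual pipeline):

```python
# Minimal sketch of the l2 near-duplicate screen: for each candidate
# image, compute its top-k nearest neighbors in a reference set so that
# the closest matches can be reviewed and near duplicates removed.
import numpy as np

def top_k_neighbors(candidates, reference, k=10):
    """candidates: (m, d), reference: (n, d) flattened image arrays."""
    # squared distances via ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2
    d2 = (
        (candidates ** 2).sum(axis=1)[:, None]
        - 2.0 * candidates @ reference.T
        + (reference ** 2).sum(axis=1)[None, :]
    )
    idx = np.argsort(d2, axis=1)[:, :k]          # nearest-first indices
    rows = np.arange(len(candidates))[:, None]
    dist = np.sqrt(np.maximum(d2[rows, idx], 0.0))
    return idx, dist

# An exact duplicate shows up as a neighbor at distance 0.
cand = np.array([[0.0, 0.0], [3.0, 4.0]])
ref = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
idx, dist = top_k_neighbors(cand, ref, k=2)
print(idx[0, 0], dist[0, 0])   # 0 0.0
```

The returned neighbor lists correspond to the top-10 matches that were reviewed manually.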
In order to ensure that the new data does not depend on the existing classifiers, it is important to strictly separate the data collection phase from the following evaluation phase. \begin{figure*} \centering \newlength{\imagedim} \setlength{\imagedim}{1.2cm} \newlength{\imagexspacing} \setlength{\imagexspacing}{0.1cm} \newlength{\imageyspacing} \setlength{\imageyspacing}{0.1cm} \begin{subfigure}[t]{0.49\textwidth} \centering \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim]{figures/new_images/0.png}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim]{figures/new_images/1.png}}; \node [img,anchor=west,at=(image1.east),xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim]{figures/new_images/2.png}}; \node [img,anchor=west,at=(image2.east),xshift=\imagexspacing] (image3) {\includegraphics[width=\imagedim]{figures/new_images/3.png}}; \node [img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim]{figures/new_images/4.png}}; \node [img,anchor=north,at=(image0.south),yshift=-\imageyspacing] (image5) {\includegraphics[width=\imagedim]{figures/new_images/5.png}}; \node [img,anchor=west,at=(image5.east),xshift=\imagexspacing] (image6) {\includegraphics[width=\imagedim]{figures/new_images/6.png}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim]{figures/new_images/7.png}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) {\includegraphics[width=\imagedim]{figures/new_images/8.png}}; \node [img,anchor=west,at=(image8.east),xshift=\imagexspacing] (image9) {\includegraphics[width=\imagedim]{figures/new_images/9.png}}; \node [img,anchor=north,at=(image5.south),yshift=-\imageyspacing] (image10) {\includegraphics[width=\imagedim]{figures/new_images/10.png}}; \node 
[img,anchor=west,at=(image10.east),xshift=\imagexspacing] (image11) {\includegraphics[width=\imagedim]{figures/new_images/11.png}}; \node [img,anchor=west,at=(image11.east),xshift=\imagexspacing] (image12) {\includegraphics[width=\imagedim]{figures/new_images/12.png}}; \node [img,anchor=west,at=(image12.east),xshift=\imagexspacing] (image13) {\includegraphics[width=\imagedim]{figures/new_images/13.png}}; \node [img,anchor=west,at=(image13.east),xshift=\imagexspacing] (image14) {\includegraphics[width=\imagedim]{figures/new_images/14.png}}; \node [img,anchor=north,at=(image10.south),yshift=-\imageyspacing] (image15) {\includegraphics[width=\imagedim]{figures/new_images/15.png}}; \node [img,anchor=west,at=(image15.east),xshift=\imagexspacing] (image16) {\includegraphics[width=\imagedim]{figures/new_images/16.png}}; \node [img,anchor=west,at=(image16.east),xshift=\imagexspacing] (image17) {\includegraphics[width=\imagedim]{figures/new_images/17.png}}; \node [img,anchor=west,at=(image17.east),xshift=\imagexspacing] (image18) {\includegraphics[width=\imagedim]{figures/new_images/18.png}}; \node [img,anchor=west,at=(image18.east),xshift=\imagexspacing] (image19) {\includegraphics[width=\imagedim]{figures/new_images/19.png}}; \end{tikzpicture} \caption{Test set A} \label{fig:new_test} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim]{figures/original_images/0.png}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim]{figures/original_images/1.png}}; \node [img,anchor=west,at=(image1.east),xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim]{figures/original_images/2.png}}; \node [img,anchor=west,at=(image2.east),xshift=\imagexspacing] (image3) {\includegraphics[width=\imagedim]{figures/original_images/3.png}}; \node 
[img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim]{figures/original_images/4.png}}; \node [img,anchor=north,at=(image0.south),yshift=-\imageyspacing] (image5) {\includegraphics[width=\imagedim]{figures/original_images/5.png}}; \node [img,anchor=west,at=(image5.east),xshift=\imagexspacing] (image6) {\includegraphics[width=\imagedim]{figures/original_images/6.png}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim]{figures/original_images/7.png}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) {\includegraphics[width=\imagedim]{figures/original_images/8.png}}; \node [img,anchor=west,at=(image8.east),xshift=\imagexspacing] (image9) {\includegraphics[width=\imagedim]{figures/original_images/9.png}}; \node [img,anchor=north,at=(image5.south),yshift=-\imageyspacing] (image10) {\includegraphics[width=\imagedim]{figures/original_images/10.png}}; \node [img,anchor=west,at=(image10.east),xshift=\imagexspacing] (image11) {\includegraphics[width=\imagedim]{figures/original_images/11.png}}; \node [img,anchor=west,at=(image11.east),xshift=\imagexspacing] (image12) {\includegraphics[width=\imagedim]{figures/original_images/12.png}}; \node [img,anchor=west,at=(image12.east),xshift=\imagexspacing] (image13) {\includegraphics[width=\imagedim]{figures/original_images/13.png}}; \node [img,anchor=west,at=(image13.east),xshift=\imagexspacing] (image14) {\includegraphics[width=\imagedim]{figures/original_images/14.png}}; \node [img,anchor=north,at=(image10.south),yshift=-\imageyspacing] (image15) {\includegraphics[width=\imagedim]{figures/original_images/15.png}}; \node [img,anchor=west,at=(image15.east),xshift=\imagexspacing] (image16) {\includegraphics[width=\imagedim]{figures/original_images/16.png}}; \node [img,anchor=west,at=(image16.east),xshift=\imagexspacing] (image17) {\includegraphics[width=\imagedim]{figures/original_images/17.png}}; \node 
[img,anchor=west,at=(image17.east),xshift=\imagexspacing] (image18) {\includegraphics[width=\imagedim]{figures/original_images/18.png}}; \node [img,anchor=west,at=(image18.east),xshift=\imagexspacing] (image19) {\includegraphics[width=\imagedim]{figures/original_images/19.png}}; \end{tikzpicture} \caption{Test set B} \label{fig:original_test} \end{subfigure} \caption{Randomly selected images from the original and new CIFAR-10 test sets. Each grid contains two images for each of the ten classes. The following footnote reveals which of the two grids corresponds to the new test set.\protect \footnotemark} \label{fig:testexamples} \end{figure*} \subsection{Follow-up Hypotheses} \label{apx:explain_gap_cifar} \input{cifar10_explain_gap} \subsection{Additional Figures, Tables, and Lists} In this appendix we provide large figures etc.\ that did not fit into the preceding sections about our CIFAR-10 experiments. \subsubsection{Keyword Distribution in CIFAR-10} \label{apx:v4_keywords} The sub-tables in Table \ref{tab:keywords} show the keyword distribution for each of the ten classes in the original CIFAR-10 test set and our new test set. \captionsetup[subtable]{labelformat=empty} \captionsetup[subtable]{position=top} \begin{table*}[h!] \caption{Distribution of the top 25 keywords in each class for the new and original test set.} \label{tab:keywords} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/frog} \label{tab:frog} \end{subtable}% \hspace*{10em} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/cat} \label{tab:cat} \end{subtable} \end{table*} \begin{table*}[h!] 
\begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/dog} \label{tab:dog} \end{subtable} \hspace*{10em} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/deer} \label{tab:deer} \end{subtable} \end{table*} \begin{table*} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/bird} \label{tab:bird} \end{subtable} \hspace*{10em} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/ship} \label{tab:ship} \end{subtable} \end{table*} \begin{table*} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/truck} \label{tab:truck} \end{subtable} \hspace*{10em} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/horse} \label{tab:horse} \end{subtable} \end{table*} \begin{table*} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/airplane} \label{tab:airplane} \end{subtable} \hspace*{10em} \begin{subtable}{0.3\linewidth} \centering \scriptsize \input{keyword_percentages/automobile} \label{tab:Automobile} \end{subtable} \end{table*} \clearpage \subsubsection{Full List of Models Evaluated on CIFAR-10} \label{apx:cifar10_model_descriptions} The following list contains all models we evaluated on CIFAR-10 with references and links to the corresponding source code. 
\begin{enumerate} \item \model{autoaug\_pyramid\_net} \cite{autoaugment, pyramidnet} \url{https://github.com/tensorflow/models/tree/master/research/autoaugment} \item \model{autoaug\_shake\_shake\_112} \cite{autoaugment, shakeshake} \url{https://github.com/tensorflow/models/tree/master/research/autoaugment} \item \model{autoaug\_shake\_shake\_32} \cite{autoaugment, shakeshake} \url{https://github.com/tensorflow/models/tree/master/research/autoaugment} \item \model{autoaug\_shake\_shake\_96} \cite{autoaugment, shakeshake} \url{https://github.com/tensorflow/models/tree/master/research/autoaugment} \item \model{autoaug\_wrn} \cite{autoaugment, wrn} \url{https://github.com/tensorflow/models/tree/master/research/autoaugment} \item \model{cudaconvnet} \cite{alexnet} \url{https://github.com/akrizhevsky/cuda-convnet2} \item \model{darc} \cite{darc} \url{http://lis.csail.mit.edu/code/gdl.html} \item \model{densenet\_BC\_100\_12} \cite{densenet} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{nas} \cite{nas} \url{https://github.com/tensorflow/models/blob/master/research/slim/nets/nasnet/nasnet.py#L32} \item \model{pyramidnet\_basic\_110\_270} \cite{pyramidnet} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{pyramidnet\_basic\_110\_84} \cite{pyramidnet} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{random\_features\_256k\_aug} \cite{rf} \url{https://github.com/modestyachts/nondeep} Random 1 layer convolutional network with 256k filters sampled from image patches, patch size = $6$, pool size $15$, pool stride $6$, and horizontal flip data augmentation. \item \model{random\_features\_256k} \cite{rf} \url{https://github.com/modestyachts/nondeep} Random 1 layer convolutional network with 256k filters sampled from image patches, patch size = $6$, pool size $15$, pool stride $6$. 
\item \model{random\_features\_32k\_aug} \cite{rf} \url{https://github.com/modestyachts/nondeep} Random 1 layer convolutional network with 32k filters sampled from image patches, patch size = $6$, pool size $15$, pool stride $6$, and horizontal flip data augmentation. \item \model{random\_features\_32k} \cite{rf} Random 1 layer convolutional network with 32k filters sampled from image patches, patch size = $6$, pool size $15$, pool stride $16$. \item \model{resnet\_basic\_32} \cite{resnet} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{resnet\_basic\_44} \cite{resnet} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{resnet\_basic\_56} \cite{resnet} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{resnet\_basic\_110} \cite{resnet} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{resnet\_preact\_basic\_110} \cite{resnet_preact} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{resnet\_preact\_bottleneck\_164} \cite{resnet_preact} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{resnet\_preact\_tf} \cite{resnet_preact} \url{https://github.com/tensorflow/models/tree/b871670b5ae29aaa6cad1b2d4e004882f716c466/resnet} \item \model{resnext\_29\_4x64d} \cite{resnext} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{resnext\_29\_8x64d} \cite{resnext} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{shake\_drop} \cite{shakedrop} \url{https://github.com/imenurok/ShakeDrop} \item \model{shake\_shake\_32d} \cite{shakeshake} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{shake\_shake\_64d} \cite{shakeshake} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{shake\_shake\_96d} \cite{shakeshake} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{shake\_shake\_64d\_cutout} 
\cite{shakeshake,cutout} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{vgg16\_keras} \cite{vgg, vgg_cifar} \url{https://github.com/geifmany/cifar-vgg} \item \model{vgg\_15\_BN\_64} \cite{vgg, vgg_cifar} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{wide\_resnet\_tf} \cite{wrn} \url{https://github.com/tensorflow/models/tree/b871670b5ae29aaa6cad1b2d4e004882f716c466/resnet} \item \model{wide\_resnet\_28\_10} \cite{wrn} \url{https://github.com/hysts/pytorch\_image\_classification/} \item \model{wide\_resnet\_28\_10\_cutout} \cite{wrn,cutout} \url{https://github.com/hysts/pytorch\_image\_classification/} \end{enumerate} \subsubsection{Full Results Table} \label{apx:cifar10_model_accuracies} Table \ref{tab:v4_results} contains the detailed accuracy scores for the original CIFAR-10 test set and our new test set. \begin{table*}[ht!] \caption{Model accuracy on the original CIFAR-10 test set and our new test set. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:cifar10_model_descriptions}. } \label{tab:v4_results} \centering \rowcolors{3}{white}{gray!15} \input{tables/cifarv4_model_results_table} \end{table*} \subsubsection{Full Results Table for the Exactly Class-Balanced Test Set} \label{apx:cifar10_model_accuracies_balanced} Table \ref{tab:v6_results} contains the detailed accuracy scores for the original CIFAR-10 test set and the exactly class-balanced variant of our new test set. \begin{table*}[h!] \caption{Model accuracy on the original CIFAR-10 test set and the exactly class-balanced variant of our new test set. 
$\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:cifar10_model_descriptions}. } \label{tab:v6_results} \centering \rowcolors{3}{white}{gray!15} \input{tables/cifarv6_model_results_table} \end{table*} \subsubsection{Hard Images} \label{app:cifar_hard_images} Figure \ref{fig:hardtest} shows the images in our new CIFAR-10 test set that were misclassified by all models in our testbed. As can be seen in the figure, the class labels for these images are correct. \begin{figure*}[htb] \centering \setlength{\imagedim}{2cm} \setlength{\imagexspacing}{1cm} \setlength{\imageyspacing}{1.5cm} \newlength{\labelspacingtwo} \setlength{\labelspacingtwo}{.2cm} \centering \begin{tikzpicture} \footnotesize \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \tikzstyle{imglabel}=[anchor=north,inner sep=0pt,yshift=-\labelspacingtwo]; \node [img] (image0) {\includegraphics[width=\imagedim]{figures/hard_images/0.png}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim]{figures/hard_images/1.png}}; \node [img,anchor=west,at=(image1.east),xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim]{figures/hard_images/2.png}}; \node [img,anchor=west,at=(image2.east),xshift=\imagexspacing] (image3) {\includegraphics[width=\imagedim]{figures/hard_images/3.png}}; \node [img,anchor=north,at=(image0.south),yshift=-\imageyspacing] (image4) {\includegraphics[width=\imagedim]{figures/hard_images/4.png}}; \node [img,anchor=west,at=(image4.east),xshift=\imagexspacing] (image5) {\includegraphics[width=\imagedim]{figures/hard_images/5.png}}; \node [img,anchor=west,at=(image5.east),xshift=\imagexspacing] (image6) 
{\includegraphics[width=\imagedim]{figures/hard_images/6.png}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim]{figures/hard_images/7.png}}; \node [img,anchor=north,at=(image4.south),yshift=-\imageyspacing] (image8) {\includegraphics[width=\imagedim]{figures/hard_images/8.png}}; \node [img,anchor=west,at=(image8.east),xshift=\imagexspacing] (image9) {\includegraphics[width=\imagedim]{figures/hard_images/9.png}}; \node [img,anchor=west,at=(image9.east),xshift=\imagexspacing] (image10) {\includegraphics[width=\imagedim]{figures/hard_images/10.png}}; \node [img,anchor=west,at=(image10.east),xshift=\imagexspacing] (image11) {\includegraphics[width=\imagedim]{figures/hard_images/11.png}}; \node [img,anchor=north,at=(image8.south),yshift=-\imageyspacing] (image12) {\includegraphics[width=\imagedim]{figures/hard_images/12.png}}; \node [img,anchor=west,at=(image12.east),xshift=\imagexspacing] (image13) {\includegraphics[width=\imagedim]{figures/hard_images/13.png}}; \node [img,anchor=west,at=(image13.east),xshift=\imagexspacing] (image14) {\includegraphics[width=\imagedim]{figures/hard_images/14.png}}; \node [img,anchor=west,at=(image14.east),xshift=\imagexspacing] (image15) {\includegraphics[width=\imagedim]{figures/hard_images/15.png}}; \node [imglabel] (label0) at (image0.south) [align=left]{True: \class{automobile}\\ Predicted: \class{airplane}}; \node [imglabel] (label1) at (image1.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] (label2) at (image2.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] (label3) at (image3.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] (label4) at (image4.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] (label5) at (image5.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] 
(label6) at (image6.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] (label7) at (image7.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] (label8) at (image8.south) [align=left]{True: \class{automobile}\\ Predicted: \class{truck}}; \node [imglabel] (label9) at (image9.south) [align=left]{True: \class{bird}\\ Predicted: \class{frog}}; \node [imglabel] (label10) at (image10.south) [align=left]{True: \class{horse}\\ Predicted: \class{frog}}; \node [imglabel] (label11) at (image11.south) [align=left]{True: \class{cat}\\ Predicted: \class{dog}}; \node [imglabel] (label12) at (image12.south) [align=left]{True: \class{cat}\\ Predicted: \class{dog}}; \node [imglabel] (label13) at (image13.south) [align=left]{True: \class{cat}\\ Predicted: \class{deer}}; \node [imglabel] (label14) at (image14.south) [align=left]{True: \class{dog}\\ Predicted: \class{cat}}; \node [imglabel] (label15) at (image15.south) [align=left]{True: \class{dog}\\ Predicted: \class{cat}}; \end{tikzpicture} \caption{Hard images from our new test set that no model classified correctly. The caption of each image states the correct class label (``True'') and the label predicted by most models (``Predicted'').} \label{fig:hardtest} \end{figure*} \subsubsection{Statistical Error} A first natural guess is that the gap is simply due to statistical fluctuations. But as noted before, the sample size of our new test set is large enough so that a 95\% confidence interval has size about $\pm 1.2\%$. Since a 95\% confidence interval for the original CIFAR-10 test accuracy is even smaller (roughly $\pm 0.6\%$ for 90\% classification accuracy and $\pm 0.3\%$ for 97\% classification accuracy), we can rule out statistical error as the main explanation.
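The interval widths quoted above are straightforward to reproduce. The following sketch (ours, not the code behind the tables) uses the normal approximation from the Python standard library, which is close to the exact Clopper-Pearson intervals at these sample sizes:

```python
from statistics import NormalDist

def ci_half_width(acc: float, n: int, conf: float = 0.95) -> float:
    """Half-width of a binomial confidence interval for a test accuracy,
    via the normal approximation (the tables use exact Clopper-Pearson
    intervals, which are nearly identical at these sample sizes)."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return z * (acc * (1 - acc) / n) ** 0.5

# New test set: n = 2,000 images at ~90% accuracy -> about +/-1.3%
print(round(ci_half_width(0.90, 2000), 4))
# Original test set: n = 10,000 at 90% / 97% -> about +/-0.6% / +/-0.3%
print(round(ci_half_width(0.90, 10000), 4))
print(round(ci_half_width(0.97, 10000), 4))
```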
\subsubsection{Differences in Near-Duplicate Removal} As mentioned in Section \ref{apx:cifar10_dataset_creation}, the final step of both the original CIFAR-10 and our dataset creation procedure is to remove near-duplicates. While removing near-duplicates between our new test set and the original CIFAR-10 dataset, we noticed that the original test set contained images that we would have ruled out as near-duplicates. A large number of near-duplicates between CIFAR-10 train and test, combined with our more stringent near-duplicate removal, could explain some of the accuracy drop. Indeed, we found about 800 images in the original CIFAR-10 test set that we would classify as near-duplicates (8\% of the entire test set). Moreover, most classifiers have accuracy between 99\% and 100\% on these near-duplicates (recall that most models achieve 100\% training accuracy). However, the following calculation shows that the near-duplicates can explain at most 1\% of the observed difference. For concreteness, we consider a model with 93\% original test set accuracy such as a common VGG or ResNet architecture. Let $\ensuremath{\text{acc}}_{\text{true}}$ be the ``true'' accuracy of the model on test images that are not near-duplicates, and let $\ensuremath{\text{acc}}_{\text{nd}}$ be the accuracy on near-duplicates. Then for 8\% near-duplicates, the overall accuracy is given by \[ \ensuremath{\text{acc}} \; = \; 0.92 \cdot \ensuremath{\text{acc}}_{\text{true}} + 0.08 \cdot \ensuremath{\text{acc}}_{\text{nd}} \; . \] Using $\ensuremath{\text{acc}} = 0.93$, $\ensuremath{\text{acc}}_{\text{nd}} = 1.0$, and solving for $\ensuremath{\text{acc}}_{\text{true}}$ then yields $\ensuremath{\text{acc}}_{\text{true}} \approx 0.924$. So the accuracy on original test images that are not near-duplicates is indeed lower, but only by a small amount (0.6\%). This is in contrast to the 8\%--9\% accuracy drop that VGG and ResNet models with 93\% original accuracy see in our experiments.
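The calculation above amounts to solving the accuracy decomposition for $\ensuremath{\text{acc}}_{\text{true}}$; a minimal illustrative sketch:

```python
def true_accuracy(overall_acc: float, nd_frac: float, nd_acc: float) -> float:
    """Solve acc = (1 - nd_frac) * acc_true + nd_frac * acc_nd
    for acc_true, the accuracy on non-near-duplicate test images."""
    return (overall_acc - nd_frac * nd_acc) / (1 - nd_frac)

# 93% overall accuracy with 8% near-duplicates classified at 100%:
print(round(true_accuracy(0.93, 0.08, 1.0), 3))  # -> 0.924
```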
For completeness, we describe our process for finding near-duplicates in detail. For every test image, we visually inspected the top-10 nearest neighbors in both $\ell_2$-distance and the SSIM (structural similarity) metric. We compared the original test set to the CIFAR-10 training set, and our new test set to both the original training and test sets. We considered an image pair to be near-duplicates if both images showed the same object in the same pose. We included images that had different zoom, color scale, stretch in the horizontal or vertical direction, or small shifts in vertical or horizontal position. If the object was rotated or in a different pose, we did not count the pair as a near-duplicate. \subsubsection{Hyperparameter Tuning} Another conjecture is that we can recover some of the missing accuracy by re-tuning the hyperparameters of a model. To this end, we performed a grid search over multiple parameters of a VGG model. We selected three standard hyperparameters known to strongly influence test set performance: initial learning rate, dropout, and weight decay. The \model{vgg16\_keras} architecture uses different amounts of dropout across different layers of the network, so we chose to tune a multiplicative scaling factor for the amount of dropout. This keeps the ratio of dropout across different layers constant. \footnotetext{Test Set A is the new test set and Test Set B is the original test set.} We initialized a hyperparameter configuration from values tuned to the original test set (learning rate $0.1$, dropout ratio $1$, weight decay $\num{5e-4}$), and performed a grid search across the following values: \begin{itemize} \item Learning rate in $\{0.0125, 0.025, 0.05, 0.1, 0.2, 0.4, 0.8\}$. \item Dropout ratio in $\{0.5, 0.75, 1, 1.25, 1.75\}$. \item Weight decay in $\{\num{5e-5}, \num{1e-4}, \num{5e-4}, \num{1e-3}, \num{5e-3} \}$.
\end{itemize} We ensured that the best performance was never at an extreme point of any range we tested for an individual hyperparameter. Overall, we did not find a hyperparameter setting with a significantly better accuracy on the new test set (the biggest improvement was from 85.3\% to 85.8\%). \subsubsection{Visually Inspecting Hard Images} It is also possible that we accidentally created a more difficult test set by including a set of ``harder'' images. To explore this question, we visually inspected the set of images that most models incorrectly classified. Figure \ref{fig:hardtest} in Appendix \ref{app:cifar_hard_images} shows examples of the hard images in our new test set that no model correctly classified. We find that all the new images are valid images that are recognizable to humans. \subsubsection{Human Accuracy Comparison} \label{app:cifar_human} The visual inspection of hard images in the previous section is one way to compare the original and new test sets. However, our conclusion may be biased since we have created the new test set ourselves. To compare the relative hardness of the two test sets more objectively, we also conducted a small experiment to measure human accuracy on the two test sets.\footnote{Use of this data was permitted by the Berkeley Committee for Protection of Human Subjects (CPHS).} The goal of the experiment was to measure whether human accuracy is significantly different on the original and new test sets. Since we conjectured that our new test set included particularly hard images, we focused our experiment on the approximately 5\% hardest images in both test sets. Here, ``hardness'' is defined by how many models correctly classified an image. After rounding to include all images that were classified by the same number of models, we obtained 500 images from the original test set and 115 images from our new test set.
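The hardest-image selection just described can be sketched as follows (our illustration; \texttt{correct\_counts} is a hypothetical mapping from image id to the number of testbed models that classified the image correctly):

```python
def hardest_images(correct_counts, target_frac=0.05):
    """Select roughly the hardest target_frac of images.

    Hardness = number of testbed models that classified the image
    correctly (fewer correct models -> harder).  All images tied at the
    cutoff count are included, so the selection can be slightly larger
    than target_frac, mirroring the rounding described in the text.
    """
    target = target_frac * len(correct_counts)
    chosen = []
    for c in sorted(set(correct_counts.values())):  # hardest bucket first
        chosen.extend(img for img, k in correct_counts.items() if k == c)
        if len(chosen) >= target:
            break
    return chosen

# Toy example: 100 images where image i was classified correctly by i models.
print(sorted(hardest_images({i: i for i in range(100)})))  # -> [0, 1, 2, 3, 4]
```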
We recruited nine graduate students from three different research groups in the Electrical Engineering \& Computer Sciences Department at UC Berkeley. We wrote a simple user interface that allowed the participants to label images with one of the ten CIFAR-10 classes. To ensure that the participants did not know which dataset an image came from, we presented the images in random order. Table \ref{tab:cifar10_human} shows the results of our experiment. We find that four participants performed better on the original test set and five participants were better on our new test set. The average difference is -0.8\%, i.e., the participants do not see a drop in average accuracy on this subset of original and new test images. This suggests that our new test set is not significantly harder for humans. However, we remark that our results here should only be seen as a preliminary study. Understanding human accuracy on CIFAR-10 in more detail will require further experiments. \begin{table*}[ht!] \centering \rowcolors{3}{white}{gray!15} \begin{tabular}{cccc} \toprule & \multicolumn{3}{c}{Human Accuracy ($\%$)} \\ \midrule & Original Test Set & New Test Set & Gap \\ \midrule Participant 1 & 85 {\footnotesize \textcolor{gray}{[81.6, 88.0]}} & 83 {\footnotesize \textcolor{gray}{[74.2, 89.8]}} & 2 \\ Participant 2 & 83 {\footnotesize \textcolor{gray}{[79.4, 86.2]}} & 81 {\footnotesize \textcolor{gray}{[71.9, 88.2]}} & 2 \\ Participant 3 & 82 {\footnotesize \textcolor{gray}{[78.3, 85.3]}} & 78 {\footnotesize \textcolor{gray}{[68.6, 85.7]}} & 4 \\ Participant 4 & 79 {\footnotesize \textcolor{gray}{[75.2, 82.5]}} & 84 {\footnotesize \textcolor{gray}{[75.3, 90.6]}} & -5 \\ Participant 5 & 76 {\footnotesize \textcolor{gray}{[72.0, 79.7]}} & 77 {\footnotesize \textcolor{gray}{[67.5, 84.8]}} & -1 \\ Participant 6 & 75 {\footnotesize \textcolor{gray}{[71.0, 78.7]}} & 73 {\footnotesize \textcolor{gray}{[63.2, 81.4]}} & 2 \\ Participant 7 & 74 {\footnotesize \textcolor{gray}{[69.9, 77.8]}} & 
79 {\footnotesize \textcolor{gray}{[69.7, 86.5]}} & -5 \\ Participant 8 & 74 {\footnotesize \textcolor{gray}{[69.9, 77.8]}} & 76 {\footnotesize \textcolor{gray}{[66.4, 84.0]}} & -2 \\ Participant 9 & 67 {\footnotesize \textcolor{gray}{[62.7, 71.1]}} & 71 {\footnotesize \textcolor{gray}{[61.1, 79.6]}} & -4 \\ \bottomrule \end{tabular} \caption{Human accuracy on the ``hardest'' images in the original and our new CIFAR-10 test set. We ordered the images by number of incorrect classifications from models in our testbed and then selected the top 5\% images from the original and new test set (500 images from the original test set, 115 images from our new test set). The results show that on average humans do not see a drop in accuracy on this subset of images. } \label{tab:cifar10_human} \end{table*} \subsubsection{Training on Part of Our New Test Set} If our new test set distribution is significantly different from the original CIFAR-10 distribution, retraining on part of our new test set (plus the original training data) may improve the accuracy on the held-out fraction of our new test set. We conducted this experiment by randomly drawing a class-balanced split containing about 1,000 images from the new test set. We then added these images to the full CIFAR-10 training set and retrained the \model{vgg16\_keras} model. After training, we tested the model on the remaining half of the new test set. We repeated this experiment twice with different randomly selected splits from our test set, obtaining accuracies of 85.1\% and 85.4\% (compared to 84.9\% without the extra training data\footnote{This number is slightly lower than the accuracy of \model{vgg16\_keras} on our new test set in Table \ref{tab:v4_results}, but still within the 95\% confidence interval $[83.6, 86.8]$. Hence we conjecture that the difference is due to the random fluctuation arising from randomly initializing the model.}). 
This provides evidence that there is no large distribution shift between our new test set and the original CIFAR-10 dataset, or that the model is unable to learn the modified distribution. \subsubsection{Cross-validation} Cross-validation can be a more reliable way of measuring a model's generalization ability than using only a single train / test split. Hence we tested if cross-validation on the original CIFAR-10 dataset could predict a model's error on our new test set. We created cross-validation data by randomly dividing the training set into 5 class-balanced splits. We then randomly shuffled together 4 out of the 5 training splits with the original test set. The leftover held-out split from the training set then became the new test set. We retrained the models \model{vgg\_15\_BN\_64}, \model{wide\_resnet\_28\_10}, and \model{shake\_shake\_64d\_cutout} on each of the 5 new datasets we created. The accuracies are reported in Table \ref{tab:cifar10_cross_validation}. The accuracies on the cross-validation splits did not differ much from the accuracy on the original test set. The variation among the cross-validation splits is significantly smaller than the drop on our new test set. \begin{table*}[ht!] 
\centering \rowcolors{4}{white}{gray!15} \begin{tabular}{cccc} \toprule & \multicolumn{3}{c}{Model Accuracy ($\%$)} \\ \midrule Dataset & \model{vgg\_15\_BN\_64} & \model{wide\_resnet\_28\_10} & \model{shake\_shake\_64d\_cutout} \\ \midrule Original Test Set & 93.6 {\footnotesize \textcolor{gray}{[93.1, 94.1]}} & 95.7 {\footnotesize \textcolor{gray}{[95.3, 96.1]}} & 97.1 {\footnotesize \textcolor{gray}{[96.8, 97.4]}} \\ \midrule Split 1 & 93.9 {\footnotesize \textcolor{gray}{[93.4, 94.3]}} & 96.2 {\footnotesize \textcolor{gray}{[95.8, 96.6]}} & 97.2 {\footnotesize \textcolor{gray}{[96.9, 97.5]}}\\ Split 2 & 93.8 {\footnotesize \textcolor{gray}{[93.3, 94.3]}} & 96.0 {\footnotesize \textcolor{gray}{[95.6, 96.4]}}& 97.3 {\footnotesize \textcolor{gray}{[97.0, 97.6]}}\\ Split 3 & 94.0 {\footnotesize \textcolor{gray}{[93.5, 94.5]}} & 96.4 {\footnotesize \textcolor{gray}{[96.0, 96.8]}}& 97.4 {\footnotesize \textcolor{gray}{[97.1, 97.7]}}\\ Split 4 & 94.0 {\footnotesize \textcolor{gray}{[93.5, 94.5]}} & 96.2 {\footnotesize \textcolor{gray}{[95.8, 96.6]}}& 97.4 {\footnotesize \textcolor{gray}{[97.1, 97.7]}}\\ Split 5 & 93.5 {\footnotesize \textcolor{gray}{[93.0, 94.0]}} & 96.5 {\footnotesize \textcolor{gray}{[96.1, 96.9]}}& 97.4 {\footnotesize \textcolor{gray}{[97.1, 97.7]}}\\ \midrule New Test Set & 84.9 {\footnotesize \textcolor{gray}{[83.2, 86.4]}} & 89.7 {\footnotesize \textcolor{gray}{[88.3, 91.0]}} & 93.0 {\footnotesize \textcolor{gray}{[91.8, 94.1]}} \\ \bottomrule \end{tabular} \caption{Model accuracies on cross-validation splits for the original CIFAR-10 data. The difference in cross-validation accuracies is significantly smaller than the drop to the new test set.} \label{tab:cifar10_cross_validation} \end{table*} \subsubsection{Training a Discriminator for Original vs.\ New Test Set} Our main hypothesis for the accuracy drop is that small variations in the test set creation process suffice to significantly reduce a model's accuracy. 
To test whether these variations could be detected by a convolutional network, we investigated whether a discriminator model could distinguish between the two test sets. We first created a training set consisting of $3,200$ images (1,600 from the original test set and 1,600 from our new test set) and a test set of $800$ images (400 images each from the original and new test sets). Each image had a binary label indicating whether it came from the original or new test set. Additionally, we ensured that both datasets were class balanced. We then trained \model{resnet\_32} and \model{resnet\_110} models for 160 epochs using a standard SGD optimizer to learn a binary classifier between the two datasets. We conducted two variants of this experiment: in one variant, we trained the model from scratch. In the other variant, we started with a model pre-trained on the regular CIFAR-10 classification task. Our results are summarized in Table~\ref{tab:cifar_discrim}. Overall we found that the resulting models could not discriminate well between the original and our new test set: the best accuracy we obtained was 53.1\%. \begin{table*}[ht!] \centering \rowcolors{3}{gray!15}{white} \begin{tabular}{c c c} \toprule Model & Discriminator Accuracy ($\%$) & Discriminator Accuracy ($\%$) \\ & random initialization & pre-trained \\ \midrule \model{resnet\_32} & $50.1$ {\footnotesize \textcolor{gray}{[46.6, 53.6]}} & $52.9$ {\footnotesize \textcolor{gray}{[49.4, 56.4]}} \\ \model{resnet\_110} & $50.3$ {\footnotesize \textcolor{gray}{[46.7, 53.8]}} & $53.1$ {\footnotesize \textcolor{gray}{[49.6, 56.6]}} \\ \bottomrule \end{tabular} \caption{Accuracies for discriminator models trained to distinguish between the original and new CIFAR-10 test sets. The models were initialized either randomly or using a model pre-trained on the original CIFAR-10 dataset.
Although the models performed slightly better than random chance, the confidence intervals (95\% Clopper-Pearson) still overlap with 50\% accuracy. \label{tab:cifar_discrim}} \end{table*} \subsubsection{An Exactly Class-balanced Test Set} The top 25 keywords of each class in CIFAR-10 capture approximately 95\% of the dataset. However, the remaining 5\% of the dataset is skewed towards the class \class{ship}. As a result, our new dataset was not exactly class-balanced and contained only 8\% images of class \class{ship} (as opposed to 10\% in the original test set). To measure whether this imbalance affected the accuracy scores, we created an exactly class-balanced version of our new test set with 2,000 images (200 per class). In this version, we selected the top 50 keywords in each class and computed a fractional number of images for each keyword. We then rounded these numbers so that images for keywords with the largest fractional part were added first. The resulting model accuracies can be found in Table \ref{tab:v6_results} (Appendix \ref{apx:cifar10_model_accuracies_balanced}). Models with lower original accuracies achieve a small accuracy improvement on the exactly class-balanced test set (around 0.3\%), but the accuracy drop of the best-performing model remains unchanged. \subsection{Adaptivity Gap} In its prototypical form, \emph{adaptive} overfitting would manifest itself in diminishing returns observed on the new test set (see Section \ref{sec:formal_multiple}). However, we do not observe this pattern on either CIFAR-10 or ImageNet. On both datasets, the slope of the linear fit is \emph{greater} than 1, i.e., each point of accuracy improvement on the original test set translates to more than 1\% on the new test set. This is the opposite of the standard overfitting scenario. So at least on CIFAR-10 and ImageNet, multiple years of competitive test set adaptivity did not lead to diminishing accuracy numbers.
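The slope referred to here is an ordinary least-squares coefficient fit to the (original, new) accuracy pairs of all models. A minimal sketch, using hypothetical accuracy pairs rather than our measured values:

```python
def fit_slope_intercept(x, y):
    """Ordinary least-squares fit of y ~ a * x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    var = sum((xi - mx) ** 2 for xi in x)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = cov / var
    return a, my - a * mx

# Hypothetical (original, new) accuracy pairs, illustrative values only:
orig = [0.87, 0.90, 0.93, 0.95, 0.97]
new = [0.76, 0.80, 0.85, 0.88, 0.93]
slope, intercept = fit_slope_intercept(orig, new)
print(slope > 1)  # a slope above 1 means no diminishing returns
```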
While our experiments rule out the most dangerous form of adaptive overfitting, we remark that they do not exclude all variants. For instance, it could be that any test set adaptivity leads to a roughly constant drop in accuracy. Then all models are affected equally and we would see no diminishing returns since later models could still be better. Testing for this form of adaptive overfitting likely requires a new test set that is truly i.i.d.\ and not the result of a separate data collection effort. Finding a suitable dataset for such an experiment is an interesting direction for future research. The lack of adaptive overfitting contradicts conventional wisdom in machine learning. We now describe two mechanisms that could have prevented adaptive overfitting: \vspace{\negspacerw} \paragraph{The Ladder Mechanism.} Blum and Hardt introduced the Ladder algorithm to protect machine learning competitions against adaptive overfitting \cite{BH15}. The core idea is that constrained interaction with the test set can allow a large number of model evaluations to succeed, even if the models are chosen adaptively. Due to the natural form of their algorithm, the authors point out that it can also be seen as a mechanism that the machine learning community \emph{implicitly} follows. \vspace{\negspacerw} \paragraph{Limited Model Class.} Adaptivity is only a problem if we can choose among models for which the test set accuracy differs significantly from the population accuracy. Importantly, this argument does not rely on the number of \emph{all} possible models (e.g., all parameter settings of a neural network), but only on those models that could actually be evaluated on the test set. For instance, the standard deep learning workflow only produces models trained with SGD-style algorithms on a fixed training set, and requires that the models achieve high training accuracy (otherwise we would not consider the corresponding hyperparameters). 
Hence the number of different models arising from the current methodology may be small enough so that uniform convergence holds. Our experiments offer little evidence for favoring one explanation over the other. One observation is that the convolutional networks shared many errors on CIFAR-10, which could be an indicator that the models are rather similar. But to gain a deeper understanding into adaptive overfitting, it is likely necessary to gather further data from more machine learning benchmarks, especially in scenarios where adaptive overfitting \emph{does} occur naturally. \vspace{\negspacerw} \subsection{Distribution Gap} The lack of diminishing returns in our experiments points towards the distribution gap as the primary reason for the accuracy drops. Moreover, our results on ImageNet show that changes in the sampling strategy can indeed affect model accuracies by a large amount, even if the data source and other parts of the dataset creation process stay the same. So in spite of our efforts to match the original dataset creation process, the distribution gap is still our leading hypothesis for the accuracy drops. This demonstrates that it is surprisingly hard to accurately replicate the distribution of current image classification datasets. The main difficulty likely is the subjective nature of the human annotation step. There are many parameters that can affect the quality of human labels such as the annotator population (MTurk vs.\ students, qualifications, location \& time, etc.), the exact task format, and compensation. Moreover, there are no exact definitions for many classes in ImageNet (e.g., see Appendix \ref{app:ambiguous_imagenet}). Understanding these aspects in more detail is an important direction for designing future datasets that contain challenging images while still being labeled correctly. 
The difficulty of clearly defining the data distribution, combined with the brittle behavior of the tested models, calls into question whether the black-box and i.i.d.\ framework of learning can produce reliable classifiers. Our analysis of selection frequencies in Figure \ref{fig:rainbow_plot} (Appendix \ref{sec:rainbow}) shows that we could create a new test set with even lower model accuracies. The images in this hypothetical dataset would still be correct, from Flickr, and selected by more than half of the MTurk labelers on average. So in spite of the impressive accuracy scores on the original validation set, current ImageNet models still have difficulty generalizing from ``easy'' to ``hard'' images. \subsection{A Model for the Linear Fit} \label{sec:probitmodel} Finally, we briefly comment on the striking linear relationship between original and new test accuracies that we observe in all our experiments (for instance, see Figure \ref{fig:intro_plot} in the introduction or Figures \ref{fig:imagenet_plotpage} and \ref{fig:imagenet_probit_plotpage} in the appendix). To illustrate how this phenomenon could arise, we present a simple data model where a small modification of the data distribution can lead to significant changes in accuracy, yet the relative order of models is preserved as a linear relationship. We emphasize that this model should not be seen as the true explanation. Instead, we hope it can inform future experiments that explore natural variations in test distributions. First, as we describe in Appendix~\ref{app:imagenetresults}, we find that we achieve better fits to our data under a \emph{probit scaling} of the accuracies. 
Over a wide range from 21\% to 83\% (all models in our ImageNet testbed), the accuracies on the new test set, $\alpha_{\mathrm{new}}$, are related to the accuracies on the original test set, $\alpha_{\mathrm{orig}}$, by the relationship \[ \Phi^{-1}(\alpha_{\mathrm{new}}) \; = \; u \cdot \Phi^{-1}(\alpha_{\mathrm{orig}})+v \] where $\Phi$ is the Gaussian CDF, and $u$ and $v$ are scalars. The probit scale is in a sense more natural than a linear scale as the accuracy numbers are probabilities. When we plot accuracies on a probit scale in Figures \ref{fig:linear_vs_probit} and \ref{fig:imagenet_probit_plotpage}, we effectively visualize $\Phi^{-1}(\alpha)$ instead of $\alpha$. We now provide a simple plausible model where the original and new accuracies are related linearly on a probit scale. Assume that every example $i$ has a scalar ``difficulty'' $\tau_i \in \ensuremath{\mathbb{R}}$ that quantifies how easy it is to classify. Further assume the probability of a model $j$ correctly classifying an image with difficulty $\tau$ is given by an increasing function $\zeta_j(\tau)$. We show that for restricted classes of difficulty functions $\zeta_j$, we find a linear relationship between average accuracies after distribution shifts. To be specific, we focus on the following parameterization. Assume the difficulty distribution of images in a test set follows a normal distribution with mean $\mu$ and variance $\sigma^2$. Further assume that \[ \zeta_j(\tau) \; = \; \Phi(s_j - \tau) \; , \] where $\Phi: \ensuremath{\mathbb{R}} \rightarrow (0, 1)$ is the CDF of a standard normal distribution, and $s_j$ is the ``skill'' of model $j$. Models with higher skill have higher classification accuracy, and images with higher difficulty lead to smaller classification accuracy. Again, the choice of $\Phi$ here is somewhat arbitrary: any sigmoidal function that maps $(-\infty, +\infty)$ to $(0, 1)$ is plausible. 
But using the Gaussian CDF yields a simple calculation illustrating the linear phenomenon. Using the above notation, the accuracy $\alpha_{j, \mu,\sigma}$ of a model $j$ on a test set with difficulty mean $\mu$ and variance $\sigma^2$ is then given by \[ \alpha_{j, \mu, \sigma} \; = \; \Eop_{\tau \sim \ensuremath{\mathcal{N}}(\mu, \sigma)} \left[ \Phi(s_j - \tau) \right] \; . \] We can expand the CDF into an expectation and combine the two expectations by utilizing the fact that the sum of two independent Gaussians is again Gaussian. This yields: \[ \alpha_{j, \mu, \sigma} \; = \; \Phi\left( \frac{s_j - \mu}{\sqrt{\sigma^2 + 1}} \right) \; . \] On a probit scale, the quantities we plot are given by \[ \tilde{\alpha}_{j, \mu, \sigma} \; = \; \Phi^{-1}(\alpha_{j, \mu, \sigma}) \; = \; \frac{s_j - \mu}{\sqrt{\sigma^2 + 1}} \; . \] Next, we consider the case where we have multiple models and two test sets with difficulty parameters $\mu_k$ and $\sigma_k$ respectively for $k \in \{1, 2\}$. Then $\tilde{\alpha}_{j, 2}$, the probit-scaled accuracy on the second test set, is a linear function of the accuracy on the first test set, $\tilde{\alpha}_{j, 1}$: \[ \tilde{\alpha}_{j, 2} \; = \; u \cdot \tilde{\alpha}_{j, 1} + v \; , \] with \begin{align*} u \; = \; \frac{\sqrt{\sigma^2_1 + 1}}{\sqrt{\sigma^2_2 + 1}} ~~~\mbox{and} ~~~v \; = \; \frac{\mu_1 - \mu_2}{\sqrt{\sigma^2_2 + 1}} \; . \end{align*} Hence, we see that the Gaussian difficulty model above yields a linear relationship between original and new test accuracy in the probit domain. While the Gaussian assumptions here made the calculations simple, a variety of simple classes of $\zeta_j$ will give rise to the same linear relationship between the accuracies on two different test sets. \section{Related Work} We now briefly discuss related threads in machine learning. To the best of our knowledge, there are no reproducibility experiments directly comparable to ours in the literature.
\vspace{\negspacerw} \paragraph{Dataset Biases.} The computer vision community has a rich history of creating new datasets and discussing their relative merits, e.g., \cite{caltech101,lotushill,PBEFHLMSRTWZZ06,TE11,pascalvoc,imagenet,RDSKSMHKKBBL15,mscoco}. The paper closest to ours is \cite{TE11}, which studies dataset biases by measuring how models trained on one dataset generalize to other datasets. The main difference from our work is that the authors test generalization across \emph{different} datasets, where larger changes in the distribution (and hence larger drops in accuracy) are expected. In contrast, our experiments explicitly attempt to reproduce the original data distribution and demonstrate that even small variations arising in this process can lead to significant accuracy drops. Moreover, \cite{TE11} do not test on previously unseen data, so their experiments cannot rule out adaptive overfitting. \vspace{\negspacerw} \paragraph{Transfer Learning From ImageNet.} \citet{KSL18} study how well accuracy on ImageNet transfers to other image classification datasets. An important difference from both our work and \cite{TE11} is that the ImageNet models are re-trained on the target datasets. The authors find that better ImageNet models usually perform better on the target dataset as well. Similar to \cite{TE11}, these experiments cannot rule out adaptive overfitting since the authors do not use new data. Moreover, the experiments do not measure accuracy drops due to small variations in the data generating process since the models are evaluated on a different task with an explicit adaptation step. Interestingly, the authors also find an approximately linear relationship between ImageNet and transfer accuracy.
\vspace{\negspacerw} \paragraph{Adversarial Examples.} While adversarial examples \cite{intriguing,biggio2017wild} also show that existing models are brittle, the perturbations have to be finely tuned since models are much more robust to random perturbations. In contrast, our results demonstrate that even small, benign variations in the data sampling process can already lead to a significant accuracy drop without an adversary. A natural question is whether adversarially robust models are also more robust to the distribution shifts observed in our work. As a first data point, we tested the common $\ell_\infty$-robustness baseline from \cite{madry2017towards} for CIFAR-10. Interestingly, the accuracy numbers of this model fall almost exactly on the linear fit given by the other models in our testbed. Hence $\ell_\infty$-robustness does not seem to offer benefits for the distribution shift arising from our reproducibility experiment. However, we note that other forms of adversarial robustness, such as spatial transformations or color space changes, have been studied \cite{rotations,semanticadversarial,xiao2018spatially,Fawzi2015ManitestAC,Kanbak2018GeometricRO}. Testing these variants is an interesting direction for future work. \vspace{\negspacerw} \paragraph{Non-Adversarial Image Perturbations.} Recent work also explores less adversarial changes to the input, e.g., \cite{Geirhos2018,hendrycks2018benchmarking}. In these papers, the authors modify the ImageNet validation set via well-specified perturbations such as Gaussian noise, a fixed rotation, or adding a synthetic snow-like pattern. Standard ImageNet models then achieve significantly lower accuracy on the perturbed examples than on the unmodified validation set. While this is an interesting test of robustness, the mechanism underlying the accuracy drops is significantly different from our work.
The aforementioned papers rely on an intentional, clearly-visible, and well-defined perturbation of existing validation images. Moreover, some of the interventions are quite different from the ImageNet validation set (e.g., ImageNet contains few images of falling snow). In contrast, our experiments use new images and match the distribution of the existing validation set as closely as possible. Hence it is unclear what properties of our new images cause the accuracy drops. \subsection{Choice of Datasets} We focus on image classification since it has become the most prominent task in machine learning and underlies a broad range of applications. The cumulative progress on ImageNet is often cited as one of the main breakthroughs in computer vision and machine learning \cite{MalikCACM}. State-of-the-art models now surpass human-level accuracy by some measures \cite{superhuman,RDSKSMHKKBBL15}. This makes it particularly important to check if common image classification models can reliably generalize to new data from the same source. We decided on CIFAR-10 and ImageNet, two of the most widely-used image classification benchmarks \cite{hamnerpopular}. Both datasets have been the focus of intense research for almost ten years now. Due to the competitive nature of these benchmarks, they are an excellent test case for whether adaptivity has led to overfitting. In addition to their popularity, their carefully documented dataset creation process makes them well suited for a reproducibility experiment \cite{krizhevsky2009learning,imagenet,RDSKSMHKKBBL15}. Each of the two datasets has specific features that make it especially interesting for our replication study. CIFAR-10 is small enough that many researchers were able to develop and test new models for this dataset. In contrast, ImageNet requires significantly more computational resources, and experimenting with new architectures has long been out of reach for many research groups.
As a result, CIFAR-10 has likely experienced more hyperparameter tuning, which may also have led to more adaptive overfitting. On the other hand, the limited size of CIFAR-10 could also make the models more susceptible to small changes in the distribution. Since the CIFAR-10 models are only exposed to a constrained visual environment, they may be unable to learn a robust representation. In contrast, ImageNet captures a much broader variety of images: it contains about $24\times$ more training images than CIFAR-10 and roughly $100 \times$ more pixels per image. So conventional wisdom (such as the claims of human-level performance) would suggest that ImageNet models also generalize more reliably. As we will see, neither of these conjectures is supported by our data: CIFAR-10 models do not suffer from more adaptive overfitting, and ImageNet models do not appear to be significantly more robust. \subsection{Dataset Creation Methodology} One way to test generalization would be to evaluate existing models on new i.i.d.\ data from the original test distribution. For example, this would be possible if the original dataset authors had collected a larger initial dataset and randomly split it into two test sets, keeping one of the test sets hidden for several years. Unfortunately, we are not aware of such a setup for CIFAR-10 or ImageNet. In this paper, we instead mimic the original distribution as closely as possible by repeating the dataset curation process that selected the original test set\footnote{For ImageNet, we repeat the creation process of the \emph{validation set} because most papers developed and tested models on the validation set. We discuss this point in more detail in Appendix \ref{sec:imagenet_building_new_test_set}. In the context of this paper, we use the terms ``validation set'' and ``test set'' interchangeably for ImageNet.} from a larger data source.
While this introduces the difficulty of disentangling the adaptivity gap from the distribution gap, it also enables us to check whether independent replication affects current accuracy scores. In spite of our efforts, we found that it is astonishingly hard to replicate the test set distributions of CIFAR-10 and ImageNet. At a high level, creating a new test set consists of two parts: \paragraph{Gathering Data.} To obtain images for a new test set, a simple approach would be to use a different dataset, e.g., Open Images \cite{openimages}. However, each dataset comes with specific biases \cite{TE11}. For instance, CIFAR-10 and ImageNet were assembled in the late 2000s, and some classes such as \class{car} or \class{cell\_phone} have changed significantly over the past decade. We avoided such biases by drawing new images from the same source as CIFAR-10 and ImageNet. For CIFAR-10, this was the larger Tiny Image dataset \cite{tinyimages}. For ImageNet, we followed the original process of utilizing the Flickr image hosting service and only considered images uploaded in a time frame similar to that of the original ImageNet collection. In addition to the data source and the class distribution, both datasets also have rich structure \emph{within} each class. For instance, each class in CIFAR-10 consists of images from multiple specific keywords in Tiny Images. Similarly, each class in ImageNet was assembled from the results of multiple queries to the Flickr API. We relied on the documentation of the two datasets to closely match the sub-class distribution as well. \vspace{\negspaceow} \paragraph{Cleaning Data.} Many images in Tiny Images and the Flickr results are only weakly related to the query (or not related at all). To obtain a high-quality dataset with correct labels, it is therefore necessary to manually select valid images from the candidate pool. While this step may seem trivial, our results in Section \ref{sec:imagenet_details} will show that it has a major impact on the model accuracies.
The authors of CIFAR-10 relied on paid student labelers to annotate their dataset. The researchers in the ImageNet project utilized Amazon Mechanical Turk (MTurk) to handle the large size of their dataset. We again replicated both annotation processes. Two graduate student authors of this paper impersonated the CIFAR-10 labelers, and we employed MTurk workers for our new ImageNet test set. For both datasets, we also followed the original labeling instructions, MTurk task format, etc.\ After collecting a set of correctly labeled images, we sampled our final test sets from the filtered candidate pool. We decided on a test set size of 2,000 for CIFAR-10 and 10,000 for ImageNet. While these are smaller than the original test sets, the sample sizes are still large enough to obtain 95\% confidence intervals of about $\pm 1\%$. Moreover, our aim was to avoid bias due to CIFAR-10 and ImageNet possibly leaving only ``harder'' images in the respective data sources. This effect is minimized by building test sets that are small compared to the original datasets (about 3\% of the overall CIFAR-10 dataset and less than 1\% of the overall ImageNet dataset). \subsection{Results on the New Test Sets} \begin{table*}[ht!] \centering \begin{subtable}{\linewidth} \rowcolors{4}{white}{gray!15} \input{tables/subsampled_cifarv4_model_results_table} \label{tab:subsampled_cifar_model_results} \end{subtable} \begin{subtable}{\linewidth} \rowcolors{4}{white}{gray!15} \input{tables/subsampled_imagenetv2-b-33_model_results_table} \label{tab:subsampled_imagenet_model_results} \end{subtable} \vspace{-3mm} \caption{Model accuracies on the original CIFAR-10 test set, the original ImageNet validation set, and our new test sets. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set in the full ordering of all models (see Appendices \ref{apx:cifar10_model_accuracies} and \ref{sec:imagenettable}).
For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendices \ref{apx:cifar10_model_descriptions} and \ref{apx:imagenet_model_descriptions}.} \label{tab:subsampled_model_results} \vspace{-3.0mm} \end{table*} After assembling our new test sets, we evaluated a broad range of image classification models spanning a decade of machine learning research. The models include the seminal AlexNet \cite{alexnet}, widely used convolutional networks \cite{vgg,resnet,densenet,inceptionv3}, and the state-of-the-art \cite{autoaugment, pnasnet}. For all deep architectures, we used code previously published online. We relied on pre-trained models whenever possible and otherwise ran the training commands from the respective repositories. In addition, we also evaluated the best-performing approaches preceding convolutional networks on each dataset. These are random features for CIFAR-10 \cite{rahimi2009weighted,rf} and Fisher vectors for ImageNet \cite{fishervectors}.\footnote{We remark that our implementation of Fisher vectors yields top-5 accuracy numbers that are 17\% lower than the published numbers in ILSVRC 2012 \cite{RDSKSMHKKBBL15}. Unfortunately, there is no publicly available reference implementation of Fisher vector models achieving this accuracy score. Hence our implementation should not be seen as an exact reproduction of the state-of-the-art Fisher vector model, but as a baseline inspired by this approach. 
The main goal of including Fisher vector models in our experiment is to investigate if they follow the same overall trends as convolutional neural networks.} We wrote our own implementations for these models, which we also release publicly.\footnote{\url{https://github.com/modestyachts/nondeep}} Overall, the top-1 accuracies range from 83\% to 98\% on the original CIFAR-10 test set and 21\% to 83\% on the original ImageNet validation set. We refer the reader to Appendices \ref{apx:imagenet_model_descriptions} and \ref{apx:cifar10_model_descriptions} for a full list of models and source repositories. Figure \ref{fig:intro_plot} in the introduction plots original vs.\ new accuracies, and Table \ref{tab:subsampled_model_results} in this section summarizes the numbers of key models. The remaining accuracy scores can be found in Appendices \ref{apx:cifar10_model_accuracies} and \ref{sec:imagenettable}. We now briefly describe the two main trends and discuss the results further in Section \ref{sec:discussion}. \paragraph{A Significant Drop in Accuracy.} All models see a large drop in accuracy from the original test sets to our new test sets. For widely used architectures such as VGG \cite{vgg} and ResNet \cite{resnet}, the drop is 8\% on CIFAR-10 and 11\% on ImageNet. On CIFAR-10, the state of the art \cite{autoaugment} is more robust and only drops by 3\% from 98.4\% to 95.5\%. In contrast, the best model on ImageNet \cite{pnasnet} sees an 11\% drop from 83\% to 72\% in top-1 accuracy and a 6\% drop from 96\% to 90\% in top-5 accuracy. So the top-1 drop on ImageNet is larger than what we observed on CIFAR-10. To put these accuracy numbers into perspective, we note that the best model in the ILSVRC\footnote{ILSVRC is the ImageNet Large Scale Visual Recognition Challenge \cite{RDSKSMHKKBBL15}.} 2013 competition achieved 89\% top-5 accuracy, and the best model from ILSVRC 2014 achieved 93\% top-5 accuracy. 
So the 6\% drop in \mbox{top-5} accuracy from the 2018 state-of-the-art corresponds to approximately five years of progress in a very active period of machine learning research. \paragraph{Few Changes in the Relative Order.} When sorting the models in order of their original and new accuracy, there are few changes in the respective rankings. Models with comparable original accuracy tend to see a similar decrease in performance. In fact, Figure \ref{fig:intro_plot} shows that the original accuracy is highly predictive of the new accuracy and that the relationship can be summarized well with a linear function. On CIFAR-10, the new accuracy of a model is approximately given by the following formula: \[ \ensuremath{\text{acc}_\text{new}} \; = \; 1.69 \cdot \ensuremath{\text{acc}_\text{orig}} - 72.7\% \; . \] On ImageNet, the top-1 accuracy of a model is given by \[ \ensuremath{\text{acc}_\text{new}} \; = \; 1.11 \cdot \ensuremath{\text{acc}_\text{orig}} - 20.2\% \; . \] Computing a 95\% confidence interval from 100,000 bootstrap samples gives $[1.63, 1.76]$ for the slope and $[-78.6, -67.5]$ for the offset on CIFAR-10, and $[1.07, 1.19]$ and $[-26.0, -17.8]$ respectively for ImageNet. On both datasets, the slope of the linear fit is \emph{greater} \mbox{than 1.} So models with higher original accuracy see a smaller drop on the new test sets. In other words, model robustness \emph{improves} with increasing accuracy. This effect is less pronounced on ImageNet (slope 1.1) than on CIFAR-10 (slope 1.7). In contrast to a scenario with strong adaptive overfitting, neither dataset sees diminishing returns in accuracy scores when going from the original to the new test sets. \subsection{Experiments to Test Follow-Up Hypotheses} \label{sec:imagenet_explaining_the_gap} Since the drop from original to new accuracies is concerningly large, we investigated multiple hypotheses for explaining this drop. 
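The two linear fits above are straightforward to apply. As a small illustration (our own sketch, with accuracies in percent), the code below evaluates the reported coefficients and makes concrete why a slope greater than one implies that the absolute drop shrinks as the original accuracy grows:

```python
def cifar10_fit(acc_orig):
    # Reported linear fit on CIFAR-10 (accuracies in percent).
    return 1.69 * acc_orig - 72.7

def imagenet_top1_fit(acc_orig):
    # Reported linear fit for ImageNet top-1 (accuracies in percent).
    return 1.11 * acc_orig - 20.2

def drop(fit, acc_orig):
    # Absolute accuracy drop predicted by a linear fit.
    return acc_orig - fit(acc_orig)

# The best ImageNet model (83% top-1) is predicted to land near 72%,
# matching the observed drop.
assert round(imagenet_top1_fit(83.0)) == 72
# Slope > 1 means that higher-accuracy models lose less:
assert drop(cifar10_fit, 98.0) < drop(cifar10_fit, 90.0)
assert drop(imagenet_top1_fit, 83.0) < drop(imagenet_top1_fit, 60.0)
```

The example accuracy values plugged in here (90\% and 60\%) are arbitrary illustration points, not models from the testbed.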
Appendices \ref{apx:explain_gap_cifar} and \ref{apx:imagenetfollowup} list a range of follow-up experiments we conducted, e.g., re-tuning hyperparameters, training on part of our new test set, or performing cross-validation. However, none of these effects can explain the size of the drop. We conjecture that the accuracy drops stem from small variations in the human annotation process. As we will see in the next section, the resulting changes in the test sets can significantly affect model accuracies. \section{Understanding the Impact of Data Cleaning on ImageNet} \label{sec:imagenet_details} A crucial aspect of ImageNet is the use of MTurk. There is a broad range of design choices for the MTurk tasks and how the resulting annotations determine the final dataset. To better understand the impact of these design choices, we assembled three different test sets for ImageNet. All of these test sets consist of images from the same Flickr candidate pool, are correctly labeled, and selected by more than 70\% of the MTurk workers on average. Nevertheless, the resulting model accuracies vary by 14\%. To put these numbers in context, we first describe our MTurk annotation pipeline. \vspace{\negspaceow} \paragraph{MTurk Tasks.} We designed our MTurk tasks and user interface to closely resemble those originally used for ImageNet. As in ImageNet, each MTurk task contained a grid of 48 candidate images for a given target class. The task description was derived from the original ImageNet instructions and included the definition of the target class with a link to a corresponding Wikipedia page. We asked the MTurk workers to select images belonging to the target class regardless of ``occlusions, other objects, and clutter or text in the scene'' and to avoid drawings or paintings (both as in ImageNet). Appendix \ref{apx:mturk_ui} shows a screenshot of our UI and a screenshot of the original UI for comparison. 
For quality control, we embedded at least six randomly selected images from the original validation set in each MTurk task (three from the same class, three from a class that is nearby in the WordNet hierarchy). These images appeared in random locations of the image grid for each task. In total, we collected sufficient MTurk annotations so that we have at least 20 annotated validation images for each class. The main outcome of the MTurk tasks is a \emph{selection frequency} for each image, i.e., what fraction of MTurk workers selected the image in a task for its target class. We recruited at least ten MTurk workers for each task (and hence for each image), which is similar to ImageNet. Since each task contained original validation images, we could also estimate how often images from the original dataset were selected by our MTurk workers. \vspace{\negspaceow} \paragraph{Sampling Strategies.} In order to understand how the MTurk selection frequency affects the model accuracies, we explored three sampling strategies. \begin{itemize} \item \textbf{\textsf{MatchedFrequency}:} First, we estimated the selection frequency distribution for each class from the annotated original validation images. We then sampled ten images from our candidate pool for each class according to these class-specific distributions (see Appendix \ref{sec:imagenetsampling} for details). \item \textbf{\textsf{Threshold0.7}:} For each class, we sampled ten images with selection frequency at least 0.7. \item \textbf{\textsf{TopImages}:} For each class, we chose the ten images with highest selection frequency. \end{itemize} In order to minimize labeling errors, we manually reviewed each dataset and removed incorrect images. The average selection frequencies of the three final datasets range from 0.93 for \textsf{TopImages}{} through 0.85 for \textsf{Threshold0.7}{} to 0.73 for \textsf{MatchedFrequency}.
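As an illustration, the three strategies can be sketched as follows. This is our own simplification: candidates are dictionaries with a hypothetical `freq` field holding the MTurk selection frequency, and `matched_frequency` approximates the per-class distribution matching (detailed in the appendix) by greedily picking the unused candidate closest to each sampled target frequency:

```python
import random

def top_images(candidates, k=10):
    # TopImages: the k candidates with the highest selection frequency.
    return sorted(candidates, key=lambda c: c["freq"], reverse=True)[:k]

def threshold_07(candidates, k=10, cutoff=0.7, seed=0):
    # Threshold0.7: sample k candidates with selection frequency >= cutoff.
    eligible = [c for c in candidates if c["freq"] >= cutoff]
    return random.Random(seed).sample(eligible, k)

def matched_frequency(candidates, val_freqs, k=10, seed=0):
    # MatchedFrequency (simplified): draw target frequencies from the
    # annotated original validation images, then greedily pick the unused
    # candidate whose selection frequency is closest to each target.
    rng = random.Random(seed)
    pool, chosen = list(candidates), []
    for target in rng.choices(val_freqs, k=k):
        best = min(pool, key=lambda c: abs(c["freq"] - target))
        pool.remove(best)
        chosen.append(best)
    return chosen

# Toy candidate pool for one class: 21 images with frequencies 0.0 ... 1.0.
pool = [{"id": i, "freq": i / 20} for i in range(21)]
assert min(c["freq"] for c in top_images(pool, 5)) >= 0.8
assert all(c["freq"] >= 0.7 for c in threshold_07(pool, 5))
assert len(matched_frequency(pool, [0.5, 0.7, 0.9], 5)) == 5
```

The toy pool and the `id`/`freq` field names are invented for the demonstration; the real pipeline operates per class over the Flickr candidate pool.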
For comparison, the original validation set has an average selection frequency of 0.71 in our experiments. Hence all three of our new test sets have higher selection frequencies than the original ImageNet validation set. In the preceding sections, we presented results on \textsf{MatchedFrequency}{} for ImageNet since it is closest to the validation set in terms of selection frequencies. \vspace{\negspaceow} \paragraph{Results.} Table \ref{tab:sampling_results_summary} shows that the MTurk selection frequency has significant impact on both top-1 and top-5 accuracy. In particular, \textsf{TopImages}{} has the highest average MTurk selection frequency and sees a small \emph{increase} of about 2\% in both average top-1 and top-5 accuracy compared to the original validation set. This is in stark contrast to \textsf{MatchedFrequency}{}, which has the lowest average selection frequency and exhibits a significant drop of 12\% and 8\%, respectively. The \textsf{Threshold0.7}{} dataset is in the middle and sees a small decrease of 3\% in top-1 and 1\% in top-5 accuracy. In total, going from \textsf{TopImages}{} to \textsf{MatchedFrequency}{} decreases the accuracies by about 14\% (top-1) and 10\% (top-5). For comparison, note that after excluding AlexNet (and the SqueezeNet models tuned to match AlexNet \cite{squeezenet}), the range of accuracies spanned by all remaining convolutional networks is roughly 14\% (top-1) and 8\% (top-5). So the variation in accuracy caused by the three sampling strategies is larger than the variation in accuracy among all post-AlexNet models we tested. \begin{table*}[tb!] 
\centering \rowcolors{2}{white}{gray!15} \begin{tabular}{C{3cm} C{3.75cm} C{3.75cm} C{3.75cm} } \toprule \textbf{Sampling Strategy} & \textbf{Average MTurk Selection Freq.} & \textbf{Average Top-1 Accuracy Change} & \textbf{Average Top-5 Accuracy Change} \\ \midrule \textsf{MatchedFrequency}{} & 0.73 & -11.8\% & -8.2\% \\ \textsf{Threshold0.7}{} & 0.85 & -3.2\% & -1.2\% \\ \textsf{TopImages}{} & 0.93 & +2.1\% & +1.8\% \\ \bottomrule \end{tabular} \caption{Impact of the three sampling strategies for our ImageNet test sets. The table shows the average MTurk selection frequency in the resulting datasets and the average changes in model accuracy compared to the original validation set. We refer the reader to Section \ref{sec:imagenet_details} for a description of the three sampling strategies. All three test sets have an average selection frequency of more than 0.7, yet the model accuracies still vary widely. For comparison, the original ImageNet validation set has an average selection frequency of 0.71 in our MTurk experiments. The changes in average accuracy span 14\% and 10\% in top-1 and top-5, respectively. This shows that details of the sampling strategy have large influence on the resulting accuracies. } \label{tab:sampling_results_summary} \end{table*} Figure \ref{fig:imagenet_a_and_c_top1} plots the new vs.\ original top-1 accuracies on \textsf{Threshold0.7}{} and \textsf{TopImages}{}, similar to Figure \ref{fig:intro_plot} for \textsf{MatchedFrequency}{} before. For easy comparison of top-1 and top-5 accuracy plots on all three datasets, we refer the reader to Figure \ref{fig:intro_plot} in Appendix \ref{sec:imagenettable}. All three plots show a good linear fit. \begin{figure*}[ht!] 
\centering \iftoggle{isicml}{ \begin{subfigure}{0.39\textwidth} \includegraphics[width=\linewidth]{figures/imagenet_ac_plot_a_without_legend.pdf} \end{subfigure} \begin{subfigure}{0.39\textwidth} \includegraphics[width=\linewidth]{figures/imagenet_ac_plot_c_without_legend.pdf} \end{subfigure} \begin{subfigure}{0.21\textwidth} \includegraphics[width=\linewidth]{figures/imagenet_ac_plot_separate_legend_vertical.pdf} \end{subfigure} \vspace{-.3cm} }{ \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{figures/imagenet_ac_plot_a_without_legend.pdf} \end{subfigure} \hfill \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{figures/imagenet_ac_plot_c_without_legend.pdf} \end{subfigure} \begin{subfigure}{\textwidth} \vspace{-.15cm} \centering \includegraphics[width=.75\linewidth]{figures/imagenet_ac_plot_separate_legend_horizontal.pdf} \end{subfigure} \vspace{-.6cm} } \caption{Model accuracy on the original ImageNet validation set vs.\ accuracy on two variants of our new test set. We refer the reader to Section \ref{sec:imagenet_details} for a description of these test sets. Each data point corresponds to one model in our testbed (shown with 95\% Clopper-Pearson confidence intervals). On \textsf{Threshold0.7}{}, the model accuracies are 3\% lower than on the original test set. On \textsf{TopImages}{}, which contains the images most frequently selected by MTurk workers, the models perform 2\% \emph{better} than on the original test set. The accuracies on both datasets closely follow a linear function, similar to \textsf{MatchedFrequency}{} in Figure \ref{fig:intro_plot}. 
The red shaded region is a 95\% confidence region for the linear fit from 100,000 bootstrap samples.\\[-.7cm]} \label{fig:imagenet_a_and_c_top1} \end{figure*} \subsection{Dataset Creation Methodology} \label{sec:imagenet_building_new_test_set} Since the existing training, validation, and test sets are not strictly i.i.d.\ (see above), the first question was which dataset part to replicate. For our experiment, we decided to match the distribution of the \emph{validation set}. There are multiple reasons for this choice: \begin{itemize} \item In contrast to the training set, the validation set comes from only one data source (Flickr). Moreover, the Flickr API allows fine-grained searches, which makes it easier to control the data source and match the original distribution. \item In contrast to the original test set, the validation set comes with label information. This makes it easier to inspect the existing image distribution for each class, which is important to ensure that we match various intricacies of the dataset (e.g., see Appendix \ref{app:ambiguous_imagenet} for examples of ambiguous classes). \item Most papers report accuracy numbers on the validation set. Hence comparing new vs.\ existing accuracies is most relevant for the validation set. \item The validation set is commonly used to develop new architectures and tune hyperparameters, which leads to the possibility of adaptive overfitting. If we again observe no diminishing returns in accuracy on our new test set, this indicates that even the validation set is resilient to adaptive overfitting. \end{itemize} Therefore, our goal was to replicate the distribution of the original validation set as closely as possible. We aimed for a new test set of size 10,000 since this would already result in accuracy scores with small confidence intervals (see Section \ref{sec:formal}). 
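The choice of 10,000 images can be sanity-checked with a short calculation. As an illustration (ours), the sketch below uses the normal approximation to the binomial confidence interval; the Clopper-Pearson intervals reported in the paper are very close to this at these sample sizes:

```python
import math

def ci_halfwidth(p, n, z=1.96):
    # Approximate 95% confidence-interval half-width for an accuracy
    # estimate p computed from n test images (normal approximation).
    return z * math.sqrt(p * (1 - p) / n)

# 10,000 images keep the interval at roughly +/- 1% even in the
# worst case p = 0.5 ...
assert ci_halfwidth(0.5, 10_000) < 0.01
# ... while ten times fewer images would roughly triple the half-width.
assert ci_halfwidth(0.5, 1_000) > 0.03
```

At the higher accuracies typical of current models ($p$ closer to 0.8), the interval is narrower still, so 10,000 images comfortably meet the $\pm 1\%$ target.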
While a larger dataset would result in even smaller confidence intervals, we were also concerned that searching for more images might lead to a larger distribution shift. In particular, we decided to use a time range for our Flickr queries \emph{after} the original ImageNet collection period (see below for the corresponding considerations). Since a given time period only has a limited supply of high quality images, a larger test set would have required a longer time range. This in turn may create a larger temporal distribution shift. To balance these two concerns, we decided on a size of 10,000 images for the new test set. Figure \ref{fig:imagenet_dataset_pipeline} presents a visual overview of our dataset creation pipeline. It consists of two parts: creating a pool of candidate images and sampling a clean dataset from this candidate pool. We now describe each part in detail to give the reader insights into the design choices potentially affecting the final distribution. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{figures/imagenet/imagenet_dataset_pipeline.png} \caption{The pipeline for the new ImageNet test set. It consists of two parts: creating the candidate pool and sampling the final dataset from this candidate pool.} \label{fig:imagenet_dataset_pipeline} \end{figure*} \subsubsection{Creating a Candidate Pool} Similar to the creation procedure for the original ImageNet validation set, we collected candidate images from the Flickr image hosting service and then annotated them with Amazon Mechanical Turk (MTurk). \paragraph{Downloading images from Flickr.} The Flickr API has a range of parameters for image searches such as the query terms, an allowed time range, a maximum number of returned images, and a sorting order. We summarize the main points here: \begin{itemize} \item \textbf{Query terms:} For each class, we used each of the WordNet synonyms as a search term in separate queries. 
\item \textbf{Date range:} There were two main options for the date range associated with our queries to Flickr: either the same date range as the original ImageNet data collection, or a date range directly after ImageNet. The advantage of using the ImageNet date range is that it avoids a distribution shift due to the time the images were taken. However, this option also comes with two important caveats: First, the pool of high quality images in the original ImageNet date range could have been largely exhausted by ImageNet. Second, the new dataset could end up with near-duplicates of images in the original validation or training set that are hard to detect. The first issue in particular is difficult to quantify, so we decided on a time range directly after the ImageNet collection period. In particular, we initially searched for images taken and uploaded to Flickr between July 11, 2012 and July 11, 2013 because the final ILSVRC2012 public data release was on July 10, 2012. Since we used a period of only one year (significantly shorter than the ImageNet collection period), we believe that the temporal component of the distribution shift is small. \item \textbf{Result size:} We initially downloaded up to 100 images for each class. If a class has $k$ synonyms associated with it, we requested $100/k$ images for each synonym. We decided on 100 images per class since we aimed for 10,000 images overall and estimated that 10\% of the candidate images would be of sufficiently high quality (similar to ImageNet \cite{imagenet}). \item \textbf{Result order:} Flickr offers the sorting options ``relevance'', ``interestingness'', and various temporal orderings. Note that the ``relevance'' and ``interestingness'' orderings may rely on machine learning models trained on ImageNet. Since these orderings may introduce a significant bias (e.g., by mainly showing images that current ImageNet models recognize for the respective search term), we chose to order the images by their upload date.
This helps to ensure that our new test set is independent of current classifiers. \end{itemize} After our first data collection, we found it necessary to expand the initial candidate pool for particular classes in order to reach a sufficient number of valid images. This is similar to the original ImageNet creation process, where the authors expanded the set of queries using two methods \cite{imagenet,RDSKSMHKKBBL15}. The first method appended a word from the parent class to the queries if this word also appeared in the gloss of the target class. The second method included translations of the queries into other languages such as Chinese, Spanish, Dutch, and Italian. We took the following steps to expand our search queries, only proceeding to the next step for a given class when in need of more images. \begin{enumerate} \item Append a word from the parent class if the word appears in the gloss of the target class. \item Expand the maximum number of images to 200 for this class. \item Expand the search range to include photos taken or uploaded before July 11, 2014 (i.e., a time span of two years instead of one). \item Concatenate compound queries, i.e., search for ``dialphone'' instead of ``dial phone''. \item Manually pick alternative query words, including translations of the queries. \end{enumerate} In total, we obtained 208,145 candidate images from Flickr. \paragraph{Amazon Mechanical Turk.} While the candidate images from Flickr are correlated with their corresponding class, a large number of images are still unsuitable for an image classification dataset. For instance, the images may be of low quality (blurry, unclear object presence, etc.), violate dataset rules (e.g., no paintings), or be simply unrelated to the target class. So similar to ImageNet, we utilized MTurk to filter our pool of candidate images. We designed our MTurk tasks and UI to be close to those used in ImageNet. 
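As a concrete illustration, the per-class query construction described above (a budget of 100 images split evenly across $k$ synonyms, a one-year date range, and upload-date ordering) can be sketched as follows. This is a hypothetical sketch: \texttt{build\_queries} is our own illustrative helper, and the parameter names only mirror, and do not guarantee, the actual Flickr API fields.

```python
from datetime import date

def build_queries(synonyms, max_per_class=100):
    # Split the per-class budget of 100 images evenly across the k synonyms.
    per_synonym = max_per_class // len(synonyms)
    start, end = date(2012, 7, 11), date(2013, 7, 11)
    return [
        {
            "text": synonym,
            "per_page": per_synonym,
            "min_upload_date": start.isoformat(),
            "max_upload_date": end.isoformat(),
            # Order by upload date rather than "relevance" or
            # "interestingness", which may rely on models trained on ImageNet.
            "sort": "date-posted-asc",
        }
        for synonym in synonyms
    ]

queries = build_queries(["dial phone", "dial telephone"])
```

With two synonyms, each query requests 50 images, so the per-class budget of 100 is preserved.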
As in ImageNet, we showed each MTurk worker a grid of 48 candidate images for a given target class. The task description was derived from the original ImageNet instructions and included the definition of the target class with a link to a corresponding Wikipedia page. We asked the MTurk workers to select images belonging to the target class regardless of ``occlusions, other objects, and clutter or text in the scene'' and to avoid drawings or paintings (both as in ImageNet). Appendix \ref{apx:mturk_ui} shows a screenshot of our UI and a screenshot of the original UI for comparison. For quality control, we embedded at least six randomly selected images from the original validation set in each MTurk task (three from the same class, three from a class that is nearby in the WordNet hierarchy). These images appeared in random locations of the image grid for each task. We obfuscated all image URLs and resized our images to match the most common size of the existing validation images so that the original validation images were not easy to spot. The main outcome of the MTurk tasks is a \emph{selection frequency} for each image, i.e., what fraction of MTurk workers selected the image in a task for its target class. We recruited at least ten MTurk workers for each task (and hence for each image), which is similar to ImageNet. Since each task contained original validation images, we could also estimate how often images from the original dataset were selected by our MTurk workers. \paragraph{Removing near-duplicate images.} The final step in creating the candidate pool was to remove near-duplicates, both within our new test set and between our new test set and the original ImageNet dataset. Both types of near-duplicates could harm the quality of our dataset. 
Since we obtained results from Flickr in a temporal ordering, certain events (e.g., the 2012 Olympics) led to a large number of similar images depicting the same scene (e.g., in the class for the ``horizontal bar'' gymnastics instrument). Inspecting the ImageNet validation set revealed only very few sets of images from a single event. Moreover, the ImageNet paper also remarks that they removed near-duplicates \cite{imagenet}. Hence we decided to remove near-duplicates within our new test set. Near-duplicates between our dataset and the original test set are also problematic. Since the models typically achieve high accuracy on the training set, testing on a near-duplicate of a training image checks for memorization more than generalization. A near-duplicate between the existing validation set and our new test set also defeats the purpose of measuring generalization to previously unseen data (as opposed to data that may already have been the victim of adaptive overfitting). To find near-duplicates, we computed the 30 nearest neighbors for each candidate image in three different metrics: $\ell_2$-distance on raw pixels, $\ell_2$-distance on features extracted from a pre-trained VGG \cite{vgg} model (fc7), and SSIM (structural similarity) \cite{WBSS04}, which is a popular image similarity metric. For metrics that were cheap to evaluate ($\ell_2$-distance on pixels and $\ell_2$-distance on fc7), we computed nearest neighbor distances to all candidate images and all of the original ImageNet data. For the more compute-intensive SSIM metric, we restricted the set of reference images to include all candidate images and the five closest ImageNet classes based on the tree distance in the WordNet hierarchy. We then manually reviewed nearest neighbor pairs below certain thresholds for each metric and removed any duplicates we discovered. To the best of our knowledge, ImageNet used only nearest neighbors in the $\ell_2$-distance to find near-duplicates \cite{bergpersonal}. 
While this difference may lead to a small change in distribution, we still decided to use multiple metrics since including images that have near-duplicates in ImageNet would be contrary to the main goal of our experiment. Moreover, a manual inspection of the original validation set revealed only a very small number of near-duplicates within the existing dataset. \subsubsection{Sampling a Clean Dataset} \label{sec:imagenetsampling} The result of collecting a candidate pool was a set of images with annotations from MTurk, most importantly the selection frequency of each image. In the next step, we used this candidate pool to sample a new test set that closely resembles the distribution of the existing validation set. There were two main difficulties in this process. First, the ImageNet publications do not provide the agreement thresholds for each class that were used to determine which images were valid. One possibility could be to run the algorithm the ImageNet authors designed to compute the agreement thresholds. However, this algorithm would need to be exactly specified, which is unfortunately not the case to the best of our knowledge.\footnote{To be precise: Jia Deng's PhD thesis \cite{jiadengthesis} provides a clear high-level description of their algorithm for computing agreement thresholds. However -- as is commonly the case in synopses of algorithms -- the description still omits some details such as the binning procedure or the number of images used to compute the thresholds. Since it is usually hard to exactly reconstruct a non-trivial algorithm from an informal summary, we instead decided to implement three different sampling strategies and compare their outcomes. 
Potential deviations from the ImageNet sampling procedure are also alleviated by the fact that our MTurk tasks always included at least a few images from the original validation set, which allowed us to calibrate our sampling strategies to match the existing ImageNet data.} Second, and more fundamentally, it is impossible to exactly replicate the MTurk worker population from 2010 -- 2012 with a reproducibility experiment in 2018. Even if we had access to the original agreement thresholds, it is unclear if they would be meaningful for our MTurk data collection (e.g., because the judgments of our annotators could be different). Similarly, re-running the algorithm for computing agreement thresholds could give different results with our MTurk worker population. So instead of attempting to directly replicate the original agreement thresholds, we explored three different sampling strategies. An important asset in this part of our experiment was that we had inserted original validation images into the MTurk tasks (see the previous subsection). So at least for \emph{our} MTurk worker population, we could estimate how frequently the MTurk workers select the original validation images. In this subsection, we describe our sampling strategy that closely matches the selection frequency distribution of the original validation set. The follow-up experiments in Section \ref{sec:imagenet_details} then explore the impact of this design choice in more detail. As we will see, the sampling strategy has significant influence on the model accuracies. \paragraph{Matching the Per-class Selection Frequency.} A simple approach to matching the selection frequency of the existing validation set would be to sample new images so that the mean selection frequency is the same as for the original dataset. However, a closer inspection of the selection frequencies reveals significant differences between the various classes. 
For instance, well-defined and well-known classes such as ``African elephant'' tend to have high selection frequencies ranging from 0.8 to 1.0. At the other end of the spectrum are classes with an unclear definition or easily confused alternative classes. For instance, the MTurk workers in our experiment often confused the class ``nail'' (the fastener) with fingernails, which led to significantly lower selection frequencies for the original validation images belonging to this class. In order to match these class-level details, we designed a sampling process that approximately matches the selection frequency distribution for each class. As a first step, we built an estimate of the per-class distribution of selection frequencies. For each class, we divided the annotated validation images into five histogram bins based on their selection frequency. These frequency bins were $[0.0, 0.2)$, $[0.2, 0.4)$, $[0.4, 0.6)$, $[0.6, 0.8)$, and $[0.8, 1.0]$. Intuitively, these bins correspond to a notion of image quality assessed by the MTurk workers, with the $[0.0, 0.2)$ bin containing the worst images and the $[0.8, 1.0]$ bin containing the best images. Normalizing the resulting histograms then yielded a distribution over these selection frequency bins for each class. Next, we sampled ten images for each class from our candidate pool, following the distribution given by the class-specific selection frequency histograms. More precisely, we first computed the target number of images for each histogram bin, and then sampled from the candidate images falling into this histogram bin uniformly at random. Since we only had a limited number of images for each class, this process ran out of images for a small number of classes. In these cases, we then sampled candidate images from the next higher bin until we found a histogram bin that still had images remaining. 
While this slightly changes the distribution, we remark that it makes our new test set easier and only affected 0.8\% of the images in the new test set. At the end of this sampling process, we had a test set with $10,000$ images and an average selection frequency of $0.73$. This is close to the average selection frequency of the annotated validation images~($0.71$). \paragraph{Final Reviewing.} While the methodology outlined so far closely matches the original ImageNet distribution, it is still hard to ensure that no unintended biases crept into the dataset (e.g., our MTurk workers could interpret some of the class definitions differently and select different images). So for quality control, we added a final reviewing step to our dataset creation pipeline. Its purpose was to rule out obvious biases and ensure that the dataset satisfies our quality expectations \emph{before} we ran any models on the new dataset. This minimizes dependencies between the new test set and the existing models. In the final reviewing step, the authors of this paper manually reviewed every image in the dataset. Appendix \ref{apx:reviewing_ui} includes a screenshot and brief description of the user interface. When we found an incorrect image or a near-duplicate, we removed it from the dataset. After a pass through the dataset, we then re-sampled new images from our candidate pool. In some cases, this also required new targeted Flickr searches for certain classes. We repeated this process until the dataset converged after 33 iterations. We remark that the majority of iterations only changed a small number of images. One potential downside of the final reviewing step is that it may lead to a distribution shift. However, we accepted this possibility since we view dataset correctness to be more important than minimizing distribution shift. In the end, a test set is only interesting if it has correct labels. 
Note also that removing incorrect images from the dataset makes it easier, which goes \emph{against} the main trend we observe (a drop in accuracy). Finally, we kept track of all intermediate iterations of our dataset so that we could measure the impact of this final reviewing step (see Section \ref{apx:impact_of_dataset_revisions}). This analysis shows that the main trends (a significant accuracy drop and an approximately linear relationship between original and new accuracy) also hold for the first iteration of the dataset without any additional reviewing. \subsection{Model Performance Results} \label{app:imagenetresults} After assembling our new test sets, we evaluated a broad range of models on both the original validation set and our new test sets. Section \ref{apx:imagenet_model_descriptions} contains a list of all models we evaluated with corresponding references and links to source code repositories. Tables \ref{tab:imagenetv2-b-33_top1_full_results} and \ref{tab:imagenetv2-b-33_top5_full_results} show the top-1 and top-5 accuracies for our main test set \textsf{MatchedFrequency}{}. Figure \ref{fig:imagenet_plotpage} visualizes the top-1 and top-5 accuracies on all three test sets. In the main text of the paper and Figure \ref{fig:imagenet_plotpage}, we have chosen to exclude the Fisher Vector models and show accuracies only for the convolutional neural networks (convnets). Since the Fisher Vector models achieve significantly lower accuracy, a plot involving both model families would have sacrificed resolution among the convnets. We decided to focus on convnets in the main text because they have become the most widely used model family on ImageNet. Moreover, a linear model of accuracies (as shown in previous plots) is often not a good fit when the accuracies span a wide range. Instead, a non-linear model such as a logistic or probit model can sometimes describe the data better. Indeed, this is also the case for our data on ImageNet. 
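To make the probit comparison concrete, the transformation simply applies $\Phi^{-1}$ to each accuracy before fitting a line. The following is a minimal sketch; the accuracy pairs below are made up for illustration and are not values from our testbed.

```python
from statistics import NormalDist

def probit(accuracy):
    # Map an accuracy in (0, 1) to the probit scale, Phi^{-1}(accuracy).
    return NormalDist().inv_cdf(accuracy)

def least_squares(xs, ys):
    # Ordinary least-squares slope and intercept (no external dependencies).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical (original, new) accuracy pairs for illustration only.
orig = [0.55, 0.65, 0.72, 0.76, 0.80]
new = [0.41, 0.52, 0.60, 0.65, 0.70]
slope, intercept = least_squares([probit(a) for a in orig],
                                 [probit(a) for a in new])
```

A good linear fit in the probit domain corresponds to a slope near one and a negative intercept when the new accuracies are uniformly lower, which is the qualitative pattern we observe.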
Figure \ref{fig:linear_vs_probit} shows the accuracies both on a linear scale as in the previous plots, and on a \emph{probit} scale, i.e., after applying the inverse of the Gaussian CDF to all accuracy scores. As can be seen by comparing the two plots in Figure \ref{fig:linear_vs_probit}, the probit model is a better fit for our data. It accurately summarizes the relationship between original and new test set accuracy for all models from both model families in our testbed. Similar to Figure \ref{fig:imagenet_plotpage}, we also show the top-1 and top-5 accuracies for all three datasets in the probit domain in Figure \ref{fig:imagenet_probit_plotpage}. Section \ref{sec:probitmodel} describes a possible generative model that leads to a linear fit in the probit domain as exhibited by the plots in Figures \ref{fig:linear_vs_probit} and \ref{fig:imagenet_probit_plotpage}. \begin{figure*}[tb!] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{figures/linear_vs_probit_linear_without_legend.pdf} \end{subfigure} \hfill \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{figures/linear_vs_probit_probit_without_legend.pdf} \end{subfigure} \begin{subfigure}{\textwidth} \vspace{-.15cm} \centering \includegraphics[width=.75\linewidth]{figures/linear_vs_probit_legend_horizontal.pdf} \end{subfigure} \vspace{-.6cm} \caption{ Model accuracy on the original ImageNet validation set vs.\ our new test set \textsf{MatchedFrequency}{}. Each data point corresponds to one model in our testbed (shown with 95\% Clopper-Pearson confidence intervals), and we now also include the Fisher Vector models. The left plot shows the model accuracies with a linear scale on the axes. The right plot instead uses a \emph{probit} scale, i.e., accuracy $\alpha$ appears at $\Phi^{-1}(\alpha)$, where $\Phi$ is the Gaussian CDF. Comparing the two plots provides evidence that the probit model is a better fit for the accuracy scores. 
Over a range of 60 percentage points, the linear fit in the probit domain accurately describes the relationship between original and new test set accuracy. The shaded region around the linear fit is a 95\% confidence region from 100,000 bootstrap samples. The confidence region is present in both plots but is significantly smaller in the right plot due to the better fit in the probit domain. \vspace{-.3cm} } \label{fig:linear_vs_probit} \end{figure*} \subsection{Follow-up Hypotheses} \label{apx:imagenetfollowup} As for CIFAR-10, the gap between original and new accuracy is concerningly large. Hence we investigated multiple hypotheses for explaining this gap. \subsubsection{Cross Validation} \label{apx:imagenet_cross_validation} A natural question is whether cross-validation with the existing ImageNet data could have pointed towards a significant drop in accuracy. If adaptive overfitting to the images in the validation set is a cause for the accuracy drop, testing on different images from another cross-validation fold could produce lower accuracies.\footnote{Note however that the training images may also be affected by adaptive overfitting since the model hyperparameters are often tuned for fast training speed and high training accuracy.} Moreover, recall that the ImageNet validation set is not a strictly i.i.d.\ sample from the same distribution as the training set (see the beginning of Section \ref{sec:imagenet_details}). This also raises the question of how well a model would perform on a cross-validation fold from the training data. To investigate these two effects, we conducted a cross-validation experiment with the ImageNet training and validation sets. In order to ensure that the new cross-validation folds contain only training images, we treated the existing validation set as one fold and created five additional folds with 50,000 images each. 
To this end, we drew a class-balanced sample of 250,000 images from the training set and then randomly partitioned this sample into five cross-validation folds (again class-balanced). For each of these five folds, we added the validation set (and the other training folds) to the training data so that the size of the training set was unchanged. We then trained one \model{resnet50} model\footnote{To save computational resources, we used the optimized training code from \url{https://github.com/fastai/imagenet-fast}. Hence the top-5 accuracy on the original validation set is 0.4\% lower than in Table \ref{tab:imagenetv2-b-33_top5_full_results}.} \cite{resnet} for each of the five training sets and evaluated them on the corresponding held-out data. Table \ref{tab:imagenet_cross_validation} shows the resulting accuracies for each split. \begin{table*}[h!] \centering \rowcolors{3}{gray!15}{white} \begin{tabular}{c c} \toprule Dataset & \model{resnet50} Top-5 Accuracy ($\%$) \\ \midrule Original validation set & 92.5 {\footnotesize \textcolor{gray}{[92.3, 92.8]}} \\ \midrule Split 1 & 92.60 {\footnotesize \textcolor{gray}{[92.4, 92.8]}} \\ Split 2 & 92.59 {\footnotesize \textcolor{gray}{[92.4, 92.8]}} \\ Split 3 & 92.61 {\footnotesize \textcolor{gray}{[92.4, 92.8]}} \\ Split 4 & 92.55 {\footnotesize \textcolor{gray}{[92.3, 92.8]}} \\ Split 5 & 92.62 {\footnotesize \textcolor{gray}{[92.4, 92.9]}} \\ \midrule New test set (\textsf{MatchedFrequency}{}) & 84.7 {\footnotesize \textcolor{gray}{[83.9, 85.4]}} \\ \bottomrule \end{tabular} \caption{\model{resnet50} accuracy on cross-validation splits created from the original ImageNet train and validation sets. 
The accuracy increase is likely caused by a small shift in distribution between the ImageNet training and validation sets.} \label{tab:imagenet_cross_validation} \end{table*} Overall, we do not see a large difference in accuracy on the new cross validation splits: all differences fall within the 95\% confidence intervals around the accuracy scores. This is in contrast to the significantly larger accuracy drops on our new test sets. \subsubsection{Impact of Dataset Revisions} \label{apx:impact_of_dataset_revisions} As mentioned in Section \ref{sec:imagenetsampling}, our final reviewing pass may have led to a distribution shift compared to the original ImageNet validation set. In general, our reviewing criterion was to blacklist images that were \begin{itemize} \item not representative of the target class, \item cartoons, paintings, drawings, or renderings, \item significantly different in distribution from the original ImageNet validation set, \item unclear, blurry, severely occluded, overly edited, or including only a small target object. \end{itemize} For each class, our reviewing UI (screenshot in Appendix \ref{apx:reviewing_ui}) displayed a random sample of ten original validation images directly next to the ten new candidate images currently chosen. At least to some extent, this allowed us to detect and correct distribution shifts between the original validation set and our candidate pool. As a concrete example, we noted in one revision of our dataset that approximately half of the images for ``great white shark'' were not live sharks in the water but models in museums or statues outside. In contrast, the ImageNet validation set had fewer examples of such artificial sharks. Hence we decided to remove some non-live sharks from our candidate pool and sampled new shark images as a replacement in the dataset. Unfortunately, some of these reviewing choices are subjective. 
However, such choices are often an inherent part of creating a dataset and it is unclear whether a more ``hands-off'' approach would lead to more meaningful conclusions. For instance, if the drop in accuracy was mainly caused by a distribution shift that is easy to identify and correct (e.g., an increase in black-and-white images), the resulting drop may not be an interesting phenomenon (beyond counting black-and-white images). Hence we decided to \emph{both} remove distribution shifts that we found easy to identify visually, and also to measure the effect of these interventions. Our reviewing process was iterative, i.e., we made a full pass over every incomplete class in a given dataset revision before sampling new images to fill the resulting gaps. Each time we re-sampled our dataset, we saved the current list of images in our version control system. This allowed us to track the datasets over time and later measure the model accuracy for each dataset revision. We remark that we only computed model accuracies on intermediate revisions after we had arrived at the final revision of the corresponding dataset. Figure \ref{fig:imagenetv2-b-reviewing} plots the top-1 accuracy of a \model{resnet50} model versus the dataset revision for our new \textsf{MatchedFrequency}{} test set. Overall, reviewing improved model accuracy by about 4\% for this dataset. This is evidence that our manual reviewing did not cause the drop in accuracy between the original and new dataset. \begin{figure*}[h!] \centering \includegraphics[height=0.3\textheight]{figures/imagenet/version_top_1_b33.pdf} \includegraphics[height=0.3\textheight]{figures/imagenet/version_top_5_b33.pdf} \includegraphics[height=0.3\textheight]{figures/imagenet/version_images_changed_b33.pdf} \caption{Impact of the reviewing passes on the accuracy of a \model{resnet50} on our new \textsf{MatchedFrequency}{} test set. 
The revision numbers correspond to the chronological ordering in which we created the dataset revisions. \label{fig:imagenetv2-b-reviewing}} \end{figure*} In addition, we also investigated whether the linear relationship between original and new test accuracy was affected by our final reviewing passes. To this end, we evaluated our model testbed on the first revision of our \textsf{MatchedFrequency}{} test set. As can be seen in Figure \ref{fig:imagenet_v1}, the resulting accuracies still show a good linear fit that is of similar quality as in Figure \ref{fig:imagenet_plotpage}. This shows that the linear relationship between the test accuracies is not a result of our reviewing intervention. \begin{figure*}[ht!] \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{figures/imagenet_plot_page_v1/b-top-1_without_legend.pdf} \end{subfigure} \hfill \begin{subfigure}{0.48\textwidth} \includegraphics[width=\linewidth]{figures/imagenet_plot_page_v1/b-top-5_without_legend.pdf} \end{subfigure} \begin{subfigure}{\textwidth} \vspace{-.15cm} \centering \includegraphics[width=.75\linewidth]{figures/imagenet_plot_page_v1/separate_legend_horizontal.pdf} \end{subfigure} \vspace{-.6cm} \caption{Model accuracy on the original ImageNet validation set vs.\ accuracy on \emph{the first revision} of our \textsf{MatchedFrequency}{} test set. Each data point corresponds to one model in our testbed (shown with 95\% Clopper-Pearson confidence intervals). The red shaded region is a 95\% confidence region for the linear fit from 100,000 bootstrap samples. The plots show that the linear relationship between original and new test accuracy also occurs without our final dataset reviewing step. 
The accuracy plots for the final revision of \textsf{MatchedFrequency}{} can be found in Figure \ref{fig:imagenet_plotpage}.} \label{fig:imagenet_v1} \end{figure*} \subsubsection{Training a Discriminator for Original vs.\ New Test Set} We investigated whether a convolutional network could distinguish between the original ImageNet validation set and our new test set. To conduct this experiment, we created a training set of 10,000 images with 5,000 images from the original validation set and 5,000 images from our new test set (we repeated this experiment for each of our three test sets \textsf{MatchedFrequency}{}, \textsf{Threshold0.7}{}, and \textsf{TopImages}{}). We similarly created a balanced test set (in addition, all splits were class-balanced). We then trained a \model{resnet152} (both pre-trained and from scratch) for 50 epochs using a standard SGD optimizer to learn a binary classifier between the two datasets. Our results are summarized in Table~\ref{tab:imagenet_discrim}. Overall we found that the resulting models could not discriminate well between the original and our new test set: the best accuracy we obtained is 50.7\% (negating the classifier with accuracy 49.3\%). \begin{table}[ht!] \rowcolors{3}{white}{gray!15} \centering \begin{tabular}{c c c} \toprule Test Set & Discriminator Accuracy ($\%$) & Discriminator Accuracy ($\%$) \\ & random initialization & pre-trained \\ \midrule \textsf{MatchedFrequency} & 49.8 {\footnotesize \textcolor{gray}{[48.8, 50.8]}} & 50.4 {\footnotesize \textcolor{gray}{[49.4, 51.4]}} \\ \textsf{Threshold0.7} & 49.4 {\footnotesize \textcolor{gray}{[48.4, 50.4]}} & 50.1 {\footnotesize \textcolor{gray}{[49.1, 51.1]}} \\ \textsf{TopImages} & 50.4 {\footnotesize \textcolor{gray}{[49.4, 51.4]}} & 49.3 {\footnotesize \textcolor{gray}{[48.3, 50.3]}} \\ \bottomrule \end{tabular} \caption{Results for discriminator experiments on each of our three new test sets. 
The discriminator accuracies are shown with 95\% Clopper-Pearson confidence intervals. All confidence intervals overlap with an accuracy of 50\%, which is trivially achieved by random chance. Hence the discriminators are effectively unable to distinguish between the original ImageNet validation set and our new test sets.} \label{tab:imagenet_discrim} \end{table} \subsection{Additional Figures, Tables, and Lists} In this appendix we provide large figures etc.\ that did not fit into the preceding sections about our ImageNet experiments. \subsubsection{MTurk User Interfaces} \label{apx:mturk_ui} For comparison, we include the original ImageNet MTurk user interface (UI) in Figure \ref{fig:imagenet_mturk_ui} and the MTurk UI we used in our experiments in Figure \ref{fig:imagenetv2_mturk_ui}. Each UI corresponds to one task for the MTurk workers, which consists of 48 images in both cases. In contrast to the original ImageNet UI, our UI takes up more than one screen. This requires the MTurk workers to scroll but also provides more details in the images. While the task descriptions are not exactly the same, they are very similar and contain the same directions (e.g., both descriptions ask the MTurk workers to exclude drawings or paintings). \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{figures/imagenet/imagenet_mturk_ui.jpg} \caption{The user interface employed in the original ImageNet collection process for the labeling tasks on Amazon Mechanical Turk.} \label{fig:imagenet_mturk_ui} \end{figure*} \begin{figure*}[h!] \centering \frame{\includegraphics[width=.9\textwidth]{figures/imagenet/imagenetv2_mturk_ui.jpg}} \caption{Our user interface for labeling tasks on Amazon Mechanical Turk. The screenshot here omits the scroll bar and shows only a subset of the entire MTurk task. 
As in the ImageNet UI, the full task involves a grid of 48 images.} \label{fig:imagenetv2_mturk_ui} \end{figure*} \subsubsection{User Interface for our Final Reviewing Process} \label{apx:reviewing_ui} Figure \ref{fig:imagenetv2_review_ui} shows a screenshot of the reviewing UI that the paper authors used to manually review the new ImageNet datasets. At the top, the UI displays the wnid (``n01667114''), the synset (\textbf{mud turtle}), and the gloss. Next, a grid of 20 images is shown in 4 rows. The top two rows correspond to the candidate images currently sampled for the new dataset. Below each image, our UI shows a unique identifier for the image and the date the image was taken. In addition, there is a check box for each image to add it to the blacklist of incorrect images. If an image is added to the blacklist, it will be removed in the next revision of the dataset and replaced with a new image from the candidate pools. The candidate images are sorted by the date they were taken, which makes it easier to spot and remove near-duplicates. Images are marked as near-duplicates by adding their identifier to the ``Near-duplicate set'' text field. The bottom two rows correspond to a random sample of images from the validation set that belong to the target class. We display these images to make it easier to detect and correct for distribution shifts between our new test sets and the original ImageNet validation dataset. \begin{figure*}[ht!] \centering \frame{\includegraphics[width=\textwidth]{figures/imagenet/imagenetv2_review_ui.jpg}} \caption{The user interface we built to review dataset revisions and remove incorrect or near-duplicate images. 
This user interface was not used for MTurk but only in the final dataset review step conducted by the authors of this paper.} \label{fig:imagenetv2_review_ui} \end{figure*} \subsubsection{Full List of Models Evaluated on ImageNet} \label{apx:imagenet_model_descriptions} The following list contains all models we evaluated on ImageNet with references and links to the corresponding source code. \begin{enumerate} \item \model{alexnet} \cite{alexnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{bninception} \cite{inceptionv2} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{cafferesnet101} \cite{resnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{densenet121} \cite{densenet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{densenet161} \cite{densenet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{densenet169} \cite{densenet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{densenet201} \cite{densenet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{dpn107} \cite{dpn} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{dpn131} \cite{dpn} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{dpn68b} \cite{dpn} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{dpn68} \cite{dpn} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{dpn92} \cite{dpn} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{dpn98} \cite{dpn} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{fbresnet152} \cite{resnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{fv\_4k} \cite{fishervectors,xrce} \url{https://github.com/modestyachts/nondeep} FisherVector model using SIFT, local color statistic features, and 16 GMM centers. 
\item \model{fv\_16k} \cite{fishervectors,xrce} \url{https://github.com/modestyachts/nondeep} FisherVector model using SIFT, local color statistic features, and 64 GMM centers. \item \model{fv\_64k} \cite{fishervectors,xrce} \url{https://github.com/modestyachts/nondeep} FisherVector model using SIFT, local color statistic features, and 256 GMM centers. \item \model{inception\_resnet\_v2\_tf} \cite{inceptionv4} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{inception\_v1\_tf} \cite{inceptionv1} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{inception\_v2\_tf} \cite{inceptionv2} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{inception\_v3\_tf} \cite{inceptionv3} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{inception\_v3} \cite{inceptionv3} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{inception\_v4\_tf} \cite{inceptionv4} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{inceptionresnetv2} \cite{inceptionv4} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{inceptionv3} \cite{inceptionv3} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{inceptionv4} \cite{inceptionv4} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{mobilenet\_v1\_tf} \cite{mobilenet} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{nasnet\_large\_tf} \cite{nas} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{nasnet\_mobile\_tf} \cite{nas} \mbox{\url{https://github.com/tensorflow/models/tree/master/research/slim/}} \item \model{nasnetalarge} \cite{nas} \mbox{\url{https://github.com/Cadene/pretrained-models.pytorch}} \item \model{nasnetamobile} \cite{nas} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{pnasnet5large} \cite{pnasnet}
\url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{pnasnet\_large\_tf} \cite{pnasnet} \mbox{\url{https://github.com/tensorflow/models/tree/master/research/slim/}} \item \model{pnasnet\_mobile\_tf} \cite{pnasnet} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{polynet} \cite{polynet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{resnet101} \cite{resnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{resnet152} \cite{resnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{resnet18} \cite{resnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{resnet34} \cite{resnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{resnet50} \cite{resnet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{resnet\_v1\_101\_tf} \cite{resnet} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{resnet\_v1\_152\_tf} \cite{resnet} \mbox{\url{https://github.com/tensorflow/models/tree/master/research/slim/}} \item \model{resnet\_v1\_50\_tf} \cite{resnet} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{resnet\_v2\_101\_tf} \cite{resnet_preact} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{resnet\_v2\_152\_tf} \cite{resnet_preact} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{resnet\_v2\_50\_tf} \cite{resnet_preact} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{resnext101\_32x4d} \cite{resnext} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{resnext101\_64x4d} \cite{resnext} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{se\_resnet101} \cite{senet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{se\_resnet152} \cite{senet} 
\url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{se\_resnet50} \cite{senet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{se\_resnext101\_32x4d} \cite{senet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{se\_resnext50\_32x4d} \cite{senet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{senet154} \cite{senet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{squeezenet1\_0} \cite{squeezenet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{squeezenet1\_1} \cite{squeezenet} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg11\_bn} \cite{inceptionv2} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg11} \cite{vgg} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg13\_bn} \cite{inceptionv2} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg13} \cite{vgg} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg16\_bn} \cite{inceptionv2} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg16} \cite{vgg} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg19\_bn} \cite{inceptionv2} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg19} \cite{vgg} \url{https://github.com/Cadene/pretrained-models.pytorch} \item \model{vgg\_16\_tf} \cite{vgg} \mbox{\url{https://github.com/tensorflow/models/tree/master/research/slim/}} \item \model{vgg\_19\_tf} \cite{vgg} \url{https://github.com/tensorflow/models/tree/master/research/slim/} \item \model{xception} \cite{xception} \url{https://github.com/Cadene/pretrained-models.pytorch} \end{enumerate} \subsubsection{Full Results Tables} \label{sec:imagenettable} Tables \ref{tab:imagenetv2-b-33_top1_full_results} and \ref{tab:imagenetv2-b-33_top5_full_results} contain the detailed accuracy scores (top-1 and top-5, respectively) 
for the original ImageNet validation set and our main new test set \textsf{MatchedFrequency}{}. Tables \ref{tab:imagenetv2-a-44_top1_full_results} -- \ref{tab:imagenetv2-c-12_top5_full_results} contain the accuracy scores for our \textsf{Threshold0.7}{} and \textsf{TopImages}{} test sets. \newpage \clearpage \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \caption{Top-1 model accuracy on the original ImageNet validation set and our new test set \textsf{MatchedFrequency}{}. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:imagenet_model_descriptions}. The second part of the table can be found on the following page. } \label{tab:imagenetv2-b-33_top1_full_results} \input{tables/imagenetv2-b-33_top1_model_results_table.1} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \input{tables/imagenetv2-b-33_top1_model_results_table.2} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \caption{Top-5 model accuracy on the original ImageNet validation set and our new test set \textsf{MatchedFrequency}{}. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:imagenet_model_descriptions}. The second part of the table can be found on the following page. 
} \input{tables/imagenetv2-b-33_top5_model_results_table.1} \label{tab:imagenetv2-b-33_top5_full_results} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \input{tables/imagenetv2-b-33_top5_model_results_table.2} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \caption{Top-1 model accuracy on the original ImageNet validation set and our new test set \textsf{Threshold0.7}{}. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:imagenet_model_descriptions}. The second part of the table can be found on the following page. } \label{tab:imagenetv2-a-44_top1_full_results} \input{tables/imagenetv2-a-44_top1_model_results_table.1} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \input{tables/imagenetv2-a-44_top1_model_results_table.2} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \caption{Top-5 model accuracy on the original ImageNet validation set and our new test set \textsf{Threshold0.7}{}. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:imagenet_model_descriptions}. The second part of the table can be found on the following page. } \input{tables/imagenetv2-a-44_top5_model_results_table.1} \label{tab:imagenetv2-a-44_top5_full_results} \centering \end{table*} \begin{table*}[ht!] 
\rowcolors{3}{white}{gray!15} \input{tables/imagenetv2-a-44_top5_model_results_table.2} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \caption{Top-1 model accuracy on the original ImageNet validation set and our new test set \textsf{TopImages}{}. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:imagenet_model_descriptions}. The second part of the table can be found on the following page. } \label{tab:imagenetv2-c-12_top1_full_results} \input{tables/imagenetv2-c-12_top1_model_results_table.1} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \input{tables/imagenetv2-c-12_top1_model_results_table.2} \centering \end{table*} \begin{table*}[ht!] \rowcolors{3}{white}{gray!15} \caption{Top-5 model accuracy on the original ImageNet validation set and our new test set \textsf{TopImages}{}. $\Delta$ Rank is the relative difference in the ranking from the original test set to the new test set. For example, $\Delta \text{Rank} = -2$ means that a model dropped by two places on the new test set compared to the original test set. The confidence intervals are 95\% Clopper-Pearson intervals. Due to space constraints, references for the models can be found in Appendix \ref{apx:imagenet_model_descriptions}. The second part of the table can be found on the following page. } \input{tables/imagenetv2-c-12_top5_model_results_table.1} \label{tab:imagenetv2-c-12_top5_full_results} \centering \end{table*} \begin{table*}[ht!] 
\rowcolors{3}{white}{gray!15} \input{tables/imagenetv2-c-12_top5_model_results_table.2} \centering \end{table*} \newpage \clearpage \subsubsection{Accuracy Plots for All ImageNet Test Sets} Figure \ref{fig:imagenet_plotpage} shows the top-1 and top-5 accuracies for our three test sets and all convolutional networks in our model testbed. Figure \ref{fig:imagenet_probit_plotpage} shows the accuracies for all models (including Fisher Vector models) with a probit scale on the axes. \input{imagenet_plot_page} \input{imagenet_probit_plot_page} \subsubsection{Example Images} Figure \ref{fig:testexamples_imagenet} shows randomly selected images for three randomly selected classes for both the original ImageNet validation set and our three new test sets. \begin{figure*} \centering \setlength{\imagedim}{2cm} \setlength{\imagexspacing}{0.3cm} \setlength{\imageyspacing}{0.3cm} \begin{subfigure}[t]{\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_0/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_0/1.jpeg}}; \node [img,anchor=north,at=(image0.south), yshift=-\imageyspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_0/2.jpeg}}; \node [img,anchor=west,at=(image2.east),xshift=\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_0/3.jpeg}}; \node [img,anchor=east,at=(image0.west),xshift=-4.0\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_1/0.jpeg}}; \node [img,anchor=east,at=(image4.west),xshift=-\imagexspacing] (image5) {\includegraphics[width=\imagedim, 
height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_1/1.jpeg}}; \node [img,anchor=north,at=(image4.south), yshift=-\imageyspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_1/2.jpeg}}; \node [img,anchor=east,at=(image6.west),xshift=-\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_1/3.jpeg}}; \node [img,anchor=west,at=(image1.east),xshift=4.0\imagexspacing] (image8) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_2/0.jpeg}}; \node [img,anchor=west,at=(image8.east),xshift=\imagexspacing] (image9) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_2/1.jpeg}}; \node [img,anchor=north,at=(image8.south), yshift=-\imageyspacing] (image10) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_2/2.jpeg}}; \node [img,anchor=west,at=(image10.east),xshift=\imagexspacing] (image11) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/imagenet-validation-original/wnid_2/3.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{Test Set A} \end{subfigure} \vspace{.2cm} \begin{subfigure}[t]{\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_0/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_0/1.jpeg}}; \node [img,anchor=north,at=(image0.south), yshift=-\imageyspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_0/2.jpeg}}; \node 
[img,anchor=west,at=(image2.east),xshift=\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_0/3.jpeg}}; \node [img,anchor=east,at=(image0.west),xshift=-4.0\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_1/0.jpeg}}; \node [img,anchor=east,at=(image4.west),xshift=-\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_1/1.jpeg}}; \node [img,anchor=north,at=(image4.south), yshift=-\imageyspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_1/2.jpeg}}; \node [img,anchor=east,at=(image6.west),xshift=-\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_1/3.jpeg}}; \node [img,anchor=west,at=(image1.east),xshift=4.0\imagexspacing] (image8) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_2/0.jpeg}}; \node [img,anchor=west,at=(image8.east),xshift=\imagexspacing] (image9) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_2/1.jpeg}}; \node [img,anchor=north,at=(image8.south), yshift=-\imageyspacing] (image10) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_2/2.jpeg}}; \node [img,anchor=west,at=(image10.east),xshift=\imagexspacing] (image11) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-matched-frequency/wnid_2/3.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{Test Set B} \end{subfigure} \vspace{.2cm} \begin{subfigure}[t]{\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, 
height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_0/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_0/1.jpeg}}; \node [img,anchor=north,at=(image0.south), yshift=-\imageyspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_0/2.jpeg}}; \node [img,anchor=west,at=(image2.east),xshift=\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_0/3.jpeg}}; \node [img,anchor=east,at=(image0.west),xshift=-4.0\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_1/0.jpeg}}; \node [img,anchor=east,at=(image4.west),xshift=-\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_1/1.jpeg}}; \node [img,anchor=north,at=(image4.south), yshift=-\imageyspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_1/2.jpeg}}; \node [img,anchor=east,at=(image6.west),xshift=-\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_1/3.jpeg}}; \node [img,anchor=west,at=(image1.east),xshift=4.0\imagexspacing] (image8) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_2/0.jpeg}}; \node [img,anchor=west,at=(image8.east),xshift=\imagexspacing] (image9) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_2/1.jpeg}}; \node [img,anchor=north,at=(image8.south), yshift=-\imageyspacing] (image10) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_2/2.jpeg}}; \node 
[img,anchor=west,at=(image10.east),xshift=\imagexspacing] (image11) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-threshold-0.7/wnid_2/3.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{Test Set C} \end{subfigure} \vspace{.2cm} \begin{subfigure}[t]{\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_0/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_0/1.jpeg}}; \node [img,anchor=north,at=(image0.south), yshift=-\imageyspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_0/2.jpeg}}; \node [img,anchor=west,at=(image2.east),xshift=\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_0/3.jpeg}}; \node [img,anchor=east,at=(image0.west),xshift=-4.0\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_1/0.jpeg}}; \node [img,anchor=east,at=(image4.west),xshift=-\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_1/1.jpeg}}; \node [img,anchor=north,at=(image4.south), yshift=-\imageyspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_1/2.jpeg}}; \node [img,anchor=east,at=(image6.west),xshift=-\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_1/3.jpeg}}; \node [img,anchor=west,at=(image1.east),xshift=4.0\imagexspacing] (image8) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_2/0.jpeg}}; \node 
[img,anchor=west,at=(image8.east),xshift=\imagexspacing] (image9) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_2/1.jpeg}}; \node [img,anchor=north,at=(image8.south), yshift=-\imageyspacing] (image10) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_2/2.jpeg}}; \node [img,anchor=west,at=(image10.east),xshift=\imagexspacing] (image11) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/new-imagenet-top-images/wnid_2/3.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{Test Set D} \end{subfigure} \caption{\small Randomly selected images from the original ImageNet validation set and our new ImageNet test sets. We display four images from three randomly selected classes for each of the four datasets (the original validation set and our three test sets described in Section \ref{sec:imagenet_details}). The displayed classes are ``Cypripedium calceolus'', ``gyromitra'', and ``mongoose''. The following footnote reveals which datasets correspond to the original and new ImageNet test sets. \protect \footnotemark} \label{fig:testexamples_imagenet} \end{figure*} \footnotetext{Test Set A is the original validation set, Test Set B is the \textsf{MatchedFrequency} dataset, Test Set C is the \textsf{Threshold0.7} dataset, and Test Set D is the \textsf{TopImages} dataset.} \subsubsection{Effect of Selection Frequency on Model Accuracy} \label{sec:rainbow} To better understand how the selection frequency of an image impacts the model accuracies, Figures \ref{fig:rainbow_plot}, \ref{fig:rainbow_plot_original}, and \ref{fig:rainbow_plot_both} show model accuracies stratified into five selection frequency bins. \begin{figure*}[ht!]
\centering \begin{subfigure}{.74\textwidth} \includegraphics[width=\linewidth]{figures/rainbow_plot/rainbow_imagenetv2-b-33_bin_new_vs_original_plot.pdf} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{figures/rainbow_plot/rainbow_imagenetv2-b-33_bin_new_legend_cropped.pdf} \end{subfigure} \\ \caption{Model accuracy on the original ImageNet validation set vs.\ accuracy on our new test set \textsf{MatchedFrequency}{}, stratified into five selection frequency bins. Every bin contains the images with MTurk selection frequency falling into the corresponding range. Each data point corresponds to one model and one of the five frequency bins (indicated by the different colors). The x-value of each data point is given by the model's accuracy on the entire original validation set. The y-value is given by the model's accuracy on our new test images falling into the respective selection frequency bin. The plot shows that the selection frequency has a strong influence on the model accuracy. For instance, images with selection frequencies in the $[0.4, 0.6)$ bin lead to an average model accuracy about 20\% lower than for the entire test set \textsf{MatchedFrequency}{}, and 30\% lower than for the original validation set. We remark that we manually reviewed all images in \textsf{MatchedFrequency}{} to ensure that (almost) all images have the correct class label, regardless of selection frequency bin.} \label{fig:rainbow_plot} \end{figure*} \begin{figure*}[ht!] \centering \begin{subfigure}{.74\textwidth} \includegraphics[width=\linewidth]{figures/rainbow_plot/rainbow_imagenetv2-b-33_bin_original_vs_original_plot.pdf} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{figures/rainbow_plot/rainbow_imagenetv2-b-33_bin_new_legend_cropped.pdf} \end{subfigure} \\ \caption{Model accuracy on the original ImageNet validation set stratified into five selection frequency bins.
This plot has a similar structure to Figure \ref{fig:rainbow_plot} above, but contains the original validation set accuracy on both axes (as before, the images are binned on the y-axis and not binned on the x-axis, i.e., the x-value is the accuracy on the entire validation set). The plot shows that the selection frequency has a strong influence on the model accuracy on the original ImageNet validation set as well. For instance, images with selection frequencies in the $[0.4, 0.6)$ bin lead to an average model accuracy about 10 -- 15\% lower than for the entire validation set. } \label{fig:rainbow_plot_original} \end{figure*} \begin{figure*}[ht!] \centering \begin{subfigure}{.74\textwidth} \includegraphics[width=\linewidth]{figures/rainbow_plot/rainbow_imagenetv2-b-33_bin_both_plot.pdf} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\linewidth]{figures/rainbow_plot/rainbow_imagenetv2-b-33_bin_new_legend_cropped.pdf} \end{subfigure} \\ \caption{Model accuracy on the original ImageNet validation set vs.\ accuracy on our new test set \textsf{MatchedFrequency}{}. In contrast to the preceding Figures \ref{fig:rainbow_plot} and \ref{fig:rainbow_plot_original}, both the original and new test accuracies are now stratified into five selection frequency bins. Each data point corresponds to the accuracy achieved by one model on the images from one of the five frequency bins (indicated by the different colors). The plot shows that the model accuracies in the various bins are strongly correlated, but the accuracy on images in our new test set is consistently lower. The gap is largest for images in the middle frequency bins (about 20\% accuracy difference) and smallest for images in the lowest and highest frequency bins (5 -- 10\% difference).
} \label{fig:rainbow_plot_both} \end{figure*} \subsubsection{Ambiguous Class Examples} \label{app:ambiguous_imagenet} Figure \ref{fig:ambiguous_examples_imagenet} shows randomly selected images from the original ImageNet validation set for three pairs of classes with ambiguous class boundaries. We remark that several more classes in ImageNet have ill-defined boundaries. The three pairs of classes here were chosen only as illustrative examples. The following list shows names and definitions for the three class pairs: \begin{itemize} \item Pair 1 \begin{enumerate} \item[a.] \class{projectile, missile}: ``a weapon that is forcibly thrown or projected at a target but is not self-propelled'' \item[b.] \class{missile}: ``a rocket carrying a warhead of conventional or nuclear explosives; may be ballistic or directed by remote control'' \end{enumerate} \item Pair 2 \begin{enumerate} \item[c.] \class{tusker}: ``any mammal with prominent tusks (especially an elephant or wild boar)'' \item[d.] \class{Indian elephant, Elephas maximus}: ``Asian elephant having smaller ears and tusks primarily in the male'' \end{enumerate} \item Pair 3 \begin{enumerate} \item[e.] \class{screen, CRT screen}: ``the display that is electronically created on the surface of the large end of a cathode-ray tube'' \item[f.]
\class{monitor}: ``electronic equipment that is used to check the quality or content of electronic transmissions'' \end{enumerate} \end{itemize} \begin{figure*} \centering \setlength{\imagedim}{2cm} \setlength{\imagexspacing}{0.3cm} \setlength{\imageyspacing}{0.3cm} \vspace{-.5cm} \begin{subfigure}[t]{0.49\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/1.jpeg}}; \node [img,anchor=west,at=(image1.east), xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/2.jpeg}}; \node [img,anchor=north,at=(image0.south),yshift=-\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/3.jpeg}}; \node [img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/4.jpeg}}; \node [img,anchor=west,at=(image4.east),xshift=\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/5.jpeg}}; \node [img,anchor=north,at=(image3.south),yshift=-\imagexspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/6.jpeg}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/7.jpeg}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) 
{\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04008634/8.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{\class{projectile, missile}} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/1.jpeg}}; \node [img,anchor=west,at=(image1.east), xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/2.jpeg}}; \node [img,anchor=north,at=(image0.south),yshift=-\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/3.jpeg}}; \node [img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/4.jpeg}}; \node [img,anchor=west,at=(image4.east),xshift=\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/5.jpeg}}; \node [img,anchor=north,at=(image3.south),yshift=-\imagexspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/6.jpeg}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/7.jpeg}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) {\includegraphics[width=\imagedim, 
height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03773504/8.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{\class{missile}} \end{subfigure} \vspace{.3cm} \begin{subfigure}[t]{0.49\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/1.jpeg}}; \node [img,anchor=west,at=(image1.east), xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/2.jpeg}}; \node [img,anchor=north,at=(image0.south),yshift=-\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/3.jpeg}}; \node [img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/4.jpeg}}; \node [img,anchor=west,at=(image4.east),xshift=\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/5.jpeg}}; \node [img,anchor=north,at=(image3.south),yshift=-\imagexspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/6.jpeg}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/7.jpeg}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) {\includegraphics[width=\imagedim,
height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n01871265/8.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{\class{tusker}} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/1.jpeg}}; \node [img,anchor=west,at=(image1.east), xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/2.jpeg}}; \node [img,anchor=north,at=(image0.south),yshift=-\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/3.jpeg}}; \node [img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/4.jpeg}}; \node [img,anchor=west,at=(image4.east),xshift=\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/5.jpeg}}; \node [img,anchor=north,at=(image3.south),yshift=-\imagexspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/6.jpeg}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/7.jpeg}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) {\includegraphics[width=\imagedim,
height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n02504013/8.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{\class{Indian elephant, Elephas maximus}} \end{subfigure} \vspace{.3cm} \begin{subfigure}[t]{0.49\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/1.jpeg}}; \node [img,anchor=west,at=(image1.east), xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/2.jpeg}}; \node [img,anchor=north,at=(image0.south),yshift=-\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/3.jpeg}}; \node [img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/4.jpeg}}; \node [img,anchor=west,at=(image4.east),xshift=\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/5.jpeg}}; \node [img,anchor=north,at=(image3.south),yshift=-\imagexspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/6.jpeg}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/7.jpeg}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) {\includegraphics[width=\imagedim,
height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n04152593/8.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{\class{screen, CRT screen}} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \begin{tikzpicture} \tikzstyle{img}=[inner sep=0pt,outer sep=0pt]; \node [img] (image0) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/0.jpeg}}; \node [img,anchor=west,at=(image0.east),xshift=\imagexspacing] (image1) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/1.jpeg}}; \node [img,anchor=west,at=(image1.east), xshift=\imagexspacing] (image2) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/2.jpeg}}; \node [img,anchor=north,at=(image0.south),yshift=-\imagexspacing] (image3) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/3.jpeg}}; \node [img,anchor=west,at=(image3.east),xshift=\imagexspacing] (image4) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/4.jpeg}}; \node [img,anchor=west,at=(image4.east),xshift=\imagexspacing] (image5) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/5.jpeg}}; \node [img,anchor=north,at=(image3.south),yshift=-\imagexspacing] (image6) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/6.jpeg}}; \node [img,anchor=west,at=(image6.east),xshift=\imagexspacing] (image7) {\includegraphics[width=\imagedim, height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/7.jpeg}}; \node [img,anchor=west,at=(image7.east),xshift=\imagexspacing] (image8) {\includegraphics[width=\imagedim, 
height=\imagedim]{figures/imagenet/ambiguous_class_example_images/wnid_n03782006/8.jpeg}}; \end{tikzpicture} \centering \vspace{-.1cm} \caption{\class{monitor}} \end{subfigure} \vspace{-.2cm} \caption{\small Random images from the original ImageNet validation set for three pairs of classes with ambiguous class boundaries.} \label{fig:ambiguous_examples_imagenet} \end{figure*} \section{Potential Causes of Accuracy Drops} \label{sec:formal} We adopt the standard classification setup and posit the existence of a ``true'' underlying data distribution $\ensuremath{\mathcal{D}}$ over labeled examples $(x, y)$. The overall goal in classification is to find a model $\ensuremath{\hat{f}}$ that minimizes the population loss \begin{equation} \label{eq:pop_loss} \ensuremath{L}_\ensuremath{\mathcal{D}}(\ensuremath{\hat{f}}) \; = \; \ensuremath{\mathop{\mathbb{E}}}_{(x, y) \sim \ensuremath{\mathcal{D}}} \brackets*{\ensuremath{\mathbb{I}}\brackets{\ensuremath{\hat{f}}(x) \neq y}} \; . \end{equation} Since we usually do not know the distribution $\ensuremath{\mathcal{D}}$, we instead measure the performance of a trained classifier via a \emph{test set} $\ensuremath{S}$ drawn from the distribution $\ensuremath{\mathcal{D}}$: \begin{equation} \label{eq:emp_loss} \ensuremath{L}_{\ensuremath{S}}(\ensuremath{\hat{f}}) \; = \; \frac{1}{\abs{\ensuremath{S}}} \sum_{(x, y) \in \ensuremath{S}} \ensuremath{\mathbb{I}}\brackets{\ensuremath{\hat{f}}(x) \neq y} \; . \end{equation} We then use this test error $\ensuremath{L}_{\ensuremath{S}}(\ensuremath{\hat{f}})$ as a proxy for the population loss $\ensuremath{L}_\ensuremath{\mathcal{D}}(\ensuremath{\hat{f}})$. If a model $\ensuremath{\hat{f}}$ achieves a low test error, we assume that it will perform similarly well on future examples from the distribution $\ensuremath{\mathcal{D}}$. This assumption underlies essentially all empirical evaluations in machine learning since it allows us to argue that the model $\ensuremath{\hat{f}}$ generalizes. 
In our experiments, we test this assumption by collecting a new test set $\ensuremath{S}'$ from a data distribution $\ensuremath{\mathcal{D}}'$ that we carefully control to resemble the original distribution $\ensuremath{\mathcal{D}}$. Ideally, the original test accuracy $\ensuremath{L}_{\ensuremath{S}}(\ensuremath{\hat{f}})$ and new test accuracy $\ensuremath{L}_{\ensuremath{S}'}(\ensuremath{\hat{f}})$ would then match up to the random sampling error. In contrast to this idealized view, our results in Figure \ref{fig:intro_plot} show a large drop in accuracy from the original test set $\ensuremath{S}$ to our new test set $\ensuremath{S}'$. To understand this accuracy drop in more detail, we decompose the difference between $\ensuremath{L}_{\ensuremath{S}}(\ensuremath{\hat{f}})$ and $\ensuremath{L}_{\ensuremath{S}'}(\ensuremath{\hat{f}})$ into three parts (dropping the dependence on $\ensuremath{\hat{f}}$ to simplify notation): \begin{align*} \ensuremath{L}_{\ensuremath{S}} - \ensuremath{L}_{\ensuremath{S}'} \, = \, \underbrace{(\ensuremath{L}_{\ensuremath{S}} - \ensuremath{L}_{\ensuremath{\mathcal{D}}})}_{\text{Adaptivity gap}} \; + \; \underbrace{( \ensuremath{L}_{\ensuremath{\mathcal{D}}} - \ensuremath{L}_{\ensuremath{\mathcal{D}}'})}_{\text{Distribution gap}} \; + \; \underbrace{(\ensuremath{L}_{\ensuremath{\mathcal{D}}'} - \ensuremath{L}_{\ensuremath{S}'})}_{\text{Generalization gap}} \, . \end{align*} We now discuss to what extent each of the three terms can lead to accuracy drops. \vspace{\negspaceint} \paragraph{Generalization Gap.} By construction, our new test set $\ensuremath{S}'$ is independent of the existing classifier $\ensuremath{\hat{f}}$. Hence the third term $\ensuremath{L}_{\ensuremath{\mathcal{D}}'} - \ensuremath{L}_{\ensuremath{S}'}$ is the standard \emph{generalization gap} commonly studied in machine learning. It is determined solely by the random sampling error.
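The three-way decomposition above is a telescoping identity and can be sanity-checked numerically (all error values below are hypothetical numbers of our own choosing):

```python
# Telescoping decomposition of the accuracy drop (hypothetical error values).
L_S, L_D = 0.20, 0.22    # original test error and original population error
L_Dp, L_Sp = 0.31, 0.32  # new-distribution population error and new test error

adaptivity_gap = L_S - L_D
distribution_gap = L_D - L_Dp
generalization_gap = L_Dp - L_Sp

# the three gaps sum exactly to the overall difference L_S - L_S'
total = adaptivity_gap + distribution_gap + generalization_gap
assert abs(total - (L_S - L_Sp)) < 1e-12
```

Any observed drop must therefore be attributable to one (or a mix) of the three gaps, which is what the following paragraphs examine.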
\iftoggle{isicml}{The size of our test sets makes the generalization gap small enough so we can ignore it here (see Appendix \ref{app:randomness} for details).}{ A first guess is that this inherent sampling error suffices to explain the accuracy drops in Figure \ref{fig:intro_plot} (e.g., the new test set $\ensuremath{S}'$ could have sampled certain ``harder'' modes of the distribution $\ensuremath{\mathcal{D}}$ more often). However, random fluctuations of this magnitude are unlikely for the size of our test sets. With 10,000 data points (as in our new ImageNet test set), a Clopper-Pearson 95\% confidence interval for the test accuracy has size of at most $\pm \;\! 1\%$. Increasing the confidence level to 99.99\% yields a confidence interval of size at most $\pm \;\! 2\%$. Moreover, these confidence intervals become smaller for higher accuracies, which is the relevant regime for the best-performing models. Hence random chance alone cannot explain the accuracy drops observed in our experiments.\footnote{We remark that the sampling process for the new test set $\ensuremath{S}'$ could indeed \emph{systematically} sample harder modes more often than under the original data distribution $\ensuremath{\mathcal{D}}$. Such a systematic change in the sampling process would not be an effect of random chance but captured by the distribution gap described below.} } \vspace{\negspaceint} \paragraph{Adaptivity Gap.} We call the term $\ensuremath{L}_{\ensuremath{S}} - \ensuremath{L}_{\ensuremath{\mathcal{D}}}$ the \emph{adaptivity gap}. It measures how much adapting the model $\ensuremath{\hat{f}}$ to the test set $\ensuremath{S}$ causes the test error $\ensuremath{L}_{\ensuremath{S}}$ to underestimate the population loss $\ensuremath{L}_{\ensuremath{\mathcal{D}}}$. 
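The interval sizes quoted in the generalization-gap discussion above can be reproduced from scratch. The sketch below computes an exact Clopper-Pearson interval by bisecting binomial tail probabilities in log-space (these are our own helper functions, not a library API); for 7{,}500 correct predictions out of 10{,}000, the 95\% interval is indeed roughly $\pm\, 1\%$:

```python
import math

def log_binom_pmf(n, i, p):
    # log of C(n, i) * p^i * (1-p)^(n-i), computed in log-space to avoid underflow
    return (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
            + i * math.log(p) + (n - i) * math.log1p(-p))

def binom_sf(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p), via a log-sum-exp over the upper tail
    logs = [log_binom_pmf(n, i, p) for i in range(k, n + 1)]
    m = max(logs)
    return math.exp(m) * sum(math.exp(l - m) for l in logs)

def clopper_pearson(k, n, alpha=0.05):
    # exact binomial confidence interval, found by bisecting the tail equations
    def bisect(pred, lo, hi, iters=50):
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if pred(mid) else (lo, mid)
        return 0.5 * (lo + hi)
    # lower limit solves P(X >= k | p) = alpha/2
    lower = 0.0 if k == 0 else bisect(
        lambda p: binom_sf(n, k, p) <= alpha / 2, 1e-12, k / n)
    # upper limit solves P(X <= k | p) = alpha/2, i.e. P(X >= k+1 | p) = 1 - alpha/2
    upper = 1.0 if k == n else bisect(
        lambda p: binom_sf(n, k + 1, p) < 1 - alpha / 2, k / n, 1 - 1e-12)
    return lower, upper

lo, hi = clopper_pearson(7500, 10000)  # a model at 75% test accuracy
assert hi - lo < 0.02                  # interval of size at most +/- 1%
```

Since the interval shrinks further at higher accuracies, sampling error alone is too small to explain drops of several percent.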
If we assumed that our model $\ensuremath{\hat{f}}$ were independent of the test set $\ensuremath{S}$, this term would follow the same concentration laws as the generalization gap $\ensuremath{L}_{\ensuremath{\mathcal{D}}'} - \ensuremath{L}_{\ensuremath{S}'}$ above\iftoggle{isicml}{ (see Appendix \ref{app:randomness})}{}. But this assumption is undermined by the common practice of tuning model hyperparameters directly on the test set, which introduces dependencies between the model $\ensuremath{\hat{f}}$ and the test set $\ensuremath{S}$. In the extreme case, this can be seen as training directly on the test set. But milder forms of adaptivity may also artificially inflate accuracy scores by increasing the gap between $\ensuremath{L}_{\ensuremath{S}}$ and $\ensuremath{L}_{\ensuremath{\mathcal{D}}}$ beyond the purely random error. \vspace{\negspaceint} \paragraph{Distribution Gap.} We call the term $\ensuremath{L}_{\ensuremath{\mathcal{D}}} - \ensuremath{L}_{\ensuremath{\mathcal{D}}'}$ the \emph{distribution gap}. It quantifies how much the change from the original distribution $\ensuremath{\mathcal{D}}$ to our new distribution $\ensuremath{\mathcal{D}}'$ affects the model $\ensuremath{\hat{f}}$. Note that this term is not influenced by random effects but quantifies the systematic difference between sampling the original and new test sets. While we went to great lengths to minimize such systematic differences, in practice it is hard to argue whether two high-dimensional distributions are exactly the same. We typically lack a precise definition of either distribution, and collecting a real dataset involves a plethora of design choices. \subsection{Distinguishing Between the Two Mechanisms} \label{sec:formal_multiple} For a single model $\ensuremath{\hat{f}}$, it is unclear how to disentangle the adaptivity and distribution gaps.
To gain a more nuanced understanding, we measure accuracies for \emph{multiple} models $\ensuremath{\hat{f}}_1, \ldots, \ensuremath{\hat{f}}_k$. This provides additional insights because it allows us to determine how the two gaps have evolved over time. For both CIFAR-10 and ImageNet, the classification models come from a long line of papers that incrementally improved accuracy scores over the past decade. A natural assumption is that later models have experienced more adaptive overfitting since they are the result of more successive hyperparameter tuning on the same test set. Their higher accuracy scores would then come from an increasing adaptivity gap and reflect progress only on the specific examples in the test set $\ensuremath{S}$ but not on the actual distribution $\ensuremath{\mathcal{D}}$. In an extreme case, the population accuracies $\ensuremath{L}_{\ensuremath{\mathcal{D}}}(\ensuremath{\hat{f}}_i)$ would plateau (or even decrease) while the test accuracies $\ensuremath{L}_{\ensuremath{S}}(\ensuremath{\hat{f}}_i)$ would continue to grow for successive models $\ensuremath{\hat{f}}_i$. However, this idealized scenario is in stark contrast to our results in Figure \ref{fig:intro_plot}. Later models do not see diminishing returns but an \emph{increased} advantage over earlier models. Hence we view our results as evidence that the accuracy drops mainly stem from a large distribution gap. After presenting our results in more detail in the next section, we will further discuss this point in Section \ref{sec:discussion}. \section{Introduction} \label{sec:intro} \input{intro_new} \section{Summary of Our Experiments} \label{sec:overview} \input{experiment_overview} \section{Discussion} \label{sec:discussion} \input{discussion} \section{Conclusion \& Future Work} \label{sec:conclusion} \input{future_work} \section*{Acknowledgements} \input{acknowledgements} \bibliographystyle{plainnat}
\section*{ABSTRACT} \emph{Experience-based planning domains} (EBPDs) have been recently proposed to improve problem solving by learning from experience. EBPDs provide important concepts for long-term learning and planning in robotics. They rely on acquiring and using task knowledge, i.e., activity schemata, for generating concrete solutions to problem instances in a class of tasks. Using Three-Valued Logic Analysis (TVLA), we extend previous work to generate a set of conditions as the scope of applicability for an activity schema. The inferred scope is a bounded representation of a set of problems of potentially unbounded size, in the form of a 3-valued logical structure, which allows an EBPD system to automatically find an applicable activity schema for solving task problems. We demonstrate the utility of our approach on a set of classes of problems in a simulated domain and on a class of real-world tasks on a fully physically simulated PR2 robot in Gazebo. \section{Contiguous Non-overlapping Longest Common Prefix Array} \label{sec:app} Here we present an updated version of the $CNLCP$ algorithm \cite{vahid2017iros,vahid2017prletter} for computing potential patterns in a string. Given a string (representing the abstract plan $\Omega$ of a generalized and abstracted experience), Algorithm~\ref{alg:cnlcp} builds the suffix array $SA$, the $NLCP$ array, and the $CNLCP$ array for the string. Let \texttt{'abacacacdedfdfgh'} be the string representing the abstract plan of the activity schema in Listing~\ref{lst:schema}. Table~\ref{tbl:nlcp} shows the computed $SA$ and $NLCP$ arrays, and Table~\ref{tbl:cnlcp} shows the computed $CNLCP$ array for this string. $CNLCP$ gives the patterns (iterations of loops) \texttt{ac} and \texttt{df} occurring at positions ($2$, $4$, $6$) and ($10$, $12$) respectively. The resulting looping string is \ttf{ab(ac)$^*$de(df)$^*$gh}.
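As a cross-check of this running example, a naive left-to-right tandem-repeat scan (a simplified stand-in for the suffix-array-based $CNLCP$ algorithm given below, not the algorithm itself) recovers the same loop patterns:

```python
def tandem_repeats(s):
    # naive scan for contiguous non-overlapping repeats: for each unit length L,
    # greedily extend runs of identical length-L blocks and record the start
    # position of every copy in the run (at least two copies form a loop body)
    found = {}
    n = len(s)
    for L in range(1, n // 2 + 1):
        p = 0
        while p + 2 * L <= n:
            k = 1
            while p + (k + 1) * L <= n and s[p + k*L : p + (k+1)*L] == s[p:p+L]:
                k += 1
            if k >= 2:
                found.setdefault(s[p:p+L], []).extend(p + i * L for i in range(k))
                p += k * L  # skip past the whole run
            else:
                p += 1
    return found

# the running example: ab(ac)*de(df)*gh
assert tandem_repeats("abacacacdedfdfgh") == {"ac": [2, 4, 6], "df": [10, 12]}
```

The quadratic scan above is only for verification; the point of the suffix-array-based algorithm is to find the same patterns efficiently.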
\renewcommand{\algorithmicrequire}{\textbf{input:}} \renewcommand{\algorithmicensure}{\textbf{output:}} \begin{algorithm}[t] \caption{Contiguous Non-overlapping Longest Common Prefixes ($CNLCP$)} \label{alg:cnlcp} \begin{algorithmic}[1] { \Require $\Omega$ \Comment{{\scriptsize a string (representing the enriched abstract plan of an activity schema)}} \Ensure $CNLCP$ \Comment{{\scriptsize a contiguous non-overlapping longest common prefixes array (Def.~\ref{def:cnlcp})}} \vspace{10pt} \State $SA \gets \ttf{sorted}(\ttf{range}(\ttf{len}(\Omega)),\ttf{key}=lambda~i: \Omega[i:])$ \label{alg:cnlcs:sa} \Comment{{\scriptsize build a suffix array from $\Omega$ (in python)}} \State $NLCP[0] \gets 0$ \For {$i$ in $\ttf{range}(\ttf{len}(SA)-1)$} \Comment{{\scriptsize build a non-overlapping longest common prefixes array, NLCP (Def.~\ref{def:nlcpa})}} \State $NLCP[i+1] \gets \Call{\ttf{nlcp}} {\Omega[SA[i]:],\Omega[SA[i+1]:]}$\label{alg:cnlcs:nlcs} \Comment{{\scriptsize see line \ref{alg:nlcp:function}}} \EndFor \State $CNLCP~\gets \revemptyset$\label{alg:cnlcs:cnlcp:st} \Comment{{\scriptsize an empty dictionary}} \For {$maxlen$ in $\ttf{sorted}($NLCP$)$} \Comment{{\scriptsize build a CNLCP array (Def.~\ref{def:cnlcp})}} \For {$i$ in $\ttf{range}(\ttf{len}(\Omega))$} \If {$NLCP[i] == maxlen$ } \If {$\ttf{abs}(SA[i]-SA[i-1]) == maxlen$} \label{alg:cnlcp:if} \Comment{{\scriptsize keep only consecutive occurrences in NLCP}} \State $k~\gets \Omega[SA[i]:SA[i]+maxlen]$ \State $CNLCP[k]~\gets$ $CNLCP[k] \cup \{SA[i-1]\}$ \label{alg:cnlcs:cnlcp:en} \Comment{{\scriptsize starting position of pattern $k$}} \EndIf \EndIf\vspace{-1.6pt} \EndFor \EndFor \State \Return $CNLCP$ \vspace{20pt} \Function{$\ttf{nlcp}$}{$suf_1,suf_2$}\label{alg:nlcp:function} \Comment{{\scriptsize find the non-overlapping longest common prefix between two given suffixes (Def.~\ref{def:nlcp})}} \State $maxlen \gets \ttf{min}(\ttf{abs}(\ttf{len}(suf_1)-\ttf{len}(suf_2)),\ttf{len}(suf_1),\ttf{len}(suf_2))$ 
\label{alg:lcp:max} \For {$i$ in $\ttf{range}(maxlen)$} \If {$suf_1[i] \neq suf_2[i]$ } \State \Return $\ttf{len}(suf_1[0:i])$ \EndIf \EndFor \vspace{-2pt} \State \Return $\ttf{len}(suf_1[0:maxlen])$ \EndFunction \end{algorithmic} \end{algorithm} \begin{table}[t] \centering \caption{The computed $SA$ and $NLCP$ arrays for the string \texttt{'abacacacdedfdfgh'}.} { \begin{tabular}{clcc} $i$ & suffix & $SA[i]$ & $NLCP[i]^*$ \\ \hline 0 & \ttf{abacacacdedfdfgh} & 0 & 0 \\ 1 & \ttf{acacacdedfdfgh} & 2 & 1 \\ 2 & \ttf{acacdedfdfgh} & 4 & 2 \\ 3 & \ttf{acdedfdfgh} & 6 & 2 \\ 4 & \ttf{bacacacdedfdfgh} & 1 & 0 \\ 5 & \ttf{cacacdedfdfgh} & 3 & 0 \\ 6 & \ttf{cacdedfdfgh} & 5 & 2 \\ 7 & \ttf{cdedfdfgh} & 7 & 1 \\ 8 & \ttf{dedfdfgh} & 8 & 0 \\ 9 & \ttf{dfdfgh} & 10 & 1 \\ 10 & \ttf{dfgh} & 12 & 2 \\ 11 & \ttf{edfdfgh} & 9 & 0 \\ 12 & \ttf{fdfgh} & 11 & 0 \\ 13 & \ttf{fgh} & 13 & 1 \\ 14 & \ttf{gh} & 14 & 0 \\ 15 & \ttf{h} & 15 & 0 \\\hline \end{tabular} } {\begin{flushleft} \scriptsize \ \ $^{\rm *}$ {Each number in $i^{\text{th}}$ row specifies the length of the non-overlapping longest common prefix between two suffixes in rows $i$ and $(i-1)$ for $i\geq1$. For example, in rows $1$ and $2$, \texttt{'ac'} is the non-overlapping longest common prefix between two consecutive suffixes \texttt{'acacacdedfdfgh'} and \texttt{'acacdedfdfgh'}.} \end{flushleft}} \label{tbl:nlcp} \end{table} \begin{table}[t] \centering \caption{The computed $CNLCP$ array for the same string \texttt{'abacacacdedfdfgh'}.} { \begin{tabular}{ll} $k$ & $CNLCP[k]$ \\ \hline \ttf{ac} & 2, 4, 6 \\ \ttf{df} & 10, 12 \\ \hline \end{tabular} } \begin{flushleft} \scriptsize \ \ Each substring in the CNLCP array is a pattern (an iteration of a loop) with its starting positions in a given string. 
\end{flushleft} \label{tbl:cnlcp} \end{table} \section{INTRODUCTION}\label{sec:introduction} Planning is a key ability for intelligent robots, increasing their autonomy and flexibility through the construction of sequences of actions to achieve their goals \cite{ghallab2004automated}. Planning is a hard problem, and even what is historically known as classical planning is PSPACE-complete over propositional state variables \cite{bylander1994computational}. To carry out increasingly complex tasks, the robotics community makes strong efforts to develop robust and sophisticated high-level decision-making models and to implement them as planning systems. One of the most challenging issues is to find a good trade-off between computational efficiency and the domain-expert engineering work needed to build a reasoning system. In recent work \cite{mokhtari2016jint,mokhtari2016icaps,vahid2017prletter,vahid2017iros}, we have proposed and integrated the notion of \emph{Experience-Based Planning Domain} (EBPD)---a framework that integrates important concepts for long-term learning and planning---into robotics. An EBPD is an extension of a standard \emph{planning domain} which, in addition to planning operators, includes experiences and methods (called \emph{activity schemata}) for solving classes of problems. Figure~\ref{fig:ebpd} illustrates the experience extraction, learning and planning pipeline for building an EBPD system. \emph{Experience extraction} provides a human-robot interaction for teaching tasks and an approach to recording the robot's past observations and activities as experiences. Experiences are used to learn activity schemata, i.e., methods that guide a search-based planner in finding solutions to related problems.
\emph{Conceptualization} combines several techniques, including deductive generalization, different forms of abstraction, feature extraction, loop detection and inferring the scope of applicability, to generate activity schemata from experiences. \emph{Planning} is a hierarchical problem solver consisting of an abstract and a concrete planner which applies learned activity schemata for problem solving. In previous work, algorithms have been developed for experience extraction \cite{vahid2014experience,mokhtari2016jint}, activity schema learning and planning \cite{mokhtari2016jint,vahid2017prletter,vahid2017iros}. \tikzstyle{box}=[rectangle, draw=black, rectangle split, rectangle split parts=2, rounded corners, minimum width=5.5cm, minimum height=5.6cm] \begin{figure}[t] \fontfamily{cmss}\selectfont \begin{center} \resizebox{\textwidth}{!}{% \begin{tikzpicture}[node distance=1.6cm] \node[align=center] (teacher) [box] { {Experience extraction} \nodepart[align=left]{second} Instruction-based task teaching\\ Recording experience }; \node[align=center,right of=teacher, xshift=5.5cm] (learning) [box] { {Conceptualization} \nodepart[align=left]{second} Generalization\\ \textbf{Abstraction}\\ Feature extraction\\ Loop detection\\ \textbf{Inferring scope of applicability} }; \node[align=center,right of=learning, xshift=5.5cm] (world) [box] { {Planning} \nodepart[align=left]{second} \textbf{Retrieving activity schemata}\\ Abstract planner\\ Concrete planner }; \node (fix1) [below of=teacher, yshift=-.2cm] {}; \node (fix2) [below of=world, yshift=-.2cm]{}; \path[->,>=stealth] (teacher) edge node[sloped,above] {\small experience} (learning) (learning) edge node[sloped,above] {\small activity} (world); \path[] (learning) edge node[sloped,below] {\small schema} (world); \path[] (world.south) edge [] (fix2.center) (fix2.center) edge [] (fix1.center); \path[->,>=stealth] (fix1.center) edge [->,>=stealth] (teacher.south); \end{tikzpicture}} \caption{An abstract illustration of the 
experience extraction, learning and planning pipeline in EBPDs.} \label{fig:ebpd} \end{center} \end{figure} In this paper, we present several recent improvements and extensions of EBPDs. The procedures highlighted in bold in Figure~\ref{fig:ebpd} outline the contributions of this paper. As the main contribution, we extend and improve the EBPD framework to automatically retrieve an applicable activity schema for solving a task problem. We propose an approach to infer a set of conditions from an experience that determines the \emph{scope of applicability} of an activity schema for solving a set of task problems. The inferred scope is a $3$-valued logical structure \cite{kleene1952introduction} (i.e., a structure that extends Boolean logic by introducing an indefinite value $\frac{1}{2}$ to denote either $0$ or $1$) which associates a bounded representation for a set of problems in the form of $2$-valued logical structures of potentially unbounded size. We employ Three-Valued Logic Analysis (TVLA) \cite{SRW:TOPLAS02} both to infer the scope of applicability of activity schemata (Section~\ref{sec:tvla_learning}) and to test whether existing activity schemata can be used to solve given task problems (Section~\ref{sec:tvla_execution}). We also extend and improve the abstraction methodology used in EBPDs (Section~\ref{sub:abstraction}). We propose to apply two independent abstraction hierarchies for reducing the level of detail during both learning an activity schema and planning, which yields an abstract solution that helps reduce the search at the more concrete planning level. In the rest of the paper, we recapitulate the previous work and present an integrated and up-to-date formal model of EBPDs (Section~\ref{sec:formalization}), and the approaches to learning activity schemata from robot experiences and task planning (Sections~\ref{sec:schema}--\ref{sec:planner}) (note that Sections~\ref{sec:schema} and \ref{sec:planner} describe prior work).
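For reference, the connectives of Kleene's strong 3-valued logic underlying such structures can be sketched as follows (a minimal illustration of the logic itself, not of TVLA):

```python
from fractions import Fraction

# Kleene's strong 3-valued logic: 0 (false), 1 (true), and the indefinite
# value 1/2, which stands for "either 0 or 1".
HALF = Fraction(1, 2)

def k_and(a, b): return min(a, b)  # conjunction: the less-true value wins
def k_or(a, b):  return max(a, b)  # disjunction: the more-true value wins
def k_not(a):    return 1 - a      # negation flips 0 and 1, fixes 1/2

assert k_and(1, HALF) == HALF      # true AND unknown is unknown
assert k_or(1, HALF) == 1          # true OR anything is true
assert k_and(0, HALF) == 0         # false AND anything is false
assert k_not(HALF) == HALF         # NOT unknown is still unknown
```

The indefinite value is what lets one 3-valued structure stand for a whole family of ordinary 2-valued structures.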
Special attention is given to abstracting an experience and inferring the scope of applicability of an activity schema using TVLA (Sections~\ref{sec:tvla_learning} and \ref{sec:tvla_execution}). We validate our system on a set of classes of problems in a simulated domain and on a class of real-world tasks on a fully physically simulated PR2 robot in Gazebo (Section~\ref{sec:experiments}). \section{RELATED WORK}\label{sec:literature} The objective of EBPDs is to perform tasks. Learning of Hierarchical Task Networks (HTNs) is among the work most closely related to EBPDs. In HTN planning, a plan is generated by decomposing a method for a given task into simpler tasks until primitive tasks are reached that can be directly achieved by planning operators. CaMeL \cite{ilghami2002camel,ilghami2005learning} is an HTN learner which receives as input plan traces and the structure of an HTN method and tries to identify under which conditions the HTN is applicable. CaMeL requires all information about methods except for the preconditions. The same group overcomes this limitation in later work \cite{ilghami2006hdl} and presents the HDL algorithm, which starts with no prior information about the methods but requires hierarchical plan traces produced by an expert problem-solver. HTN-Maker \cite{hogg2008htn} generates an HTN domain model from a STRIPS domain model, a set of STRIPS plans, and a set of annotated tasks. HTN-Maker generates and traverses a list of states by applying the actions in a plan, and looks for an annotated task whose effects and preconditions match some states. Then it regresses the effects of the annotated task through a previously learned method or a new primitive task. Overall, identifying a good hierarchical structure is an issue, and most of the techniques in HTN learning rely on the hierarchical structure of the HTN methods specified by a human expert.
Unlike these approaches, the EBPD framework presents a fully autonomous approach to learning activity schemata with loops from single experiences. The inclusion of loops in activity schemata is an alternative to recursive HTN methods. Aranda~\cite{Srivastava2011615} takes a planning problem and finds a plan that includes loops. Using TVLA~\cite{LAmiS:SAS00} and back-propagation, Aranda finds an abstract state space from a set of concrete states of problem instances with varying numbers of objects that guarantees completeness, i.e., the plan works for all inputs that map onto the abstract state. These strong guarantees come at a cost: (i) restrictions on the language of actions; and (ii) high running times. Indeed, computing the abstract state is in the worst case doubly exponential in the number of predicates. In contrast, the EBPDs system assumes standard PDDL actions. We also use TVLA to compute an abstract structure that determines the scope of applicability of an activity schema; however, we trade completeness for a polynomial-time algorithm, which results in dramatically better performance. Loop\textsc{Distill} \cite{Winner07loopdistill} also learns plans with loops from example plans. It identifies the largest matching sub-plan in a given example and converts the repeating occurrences of the sub-plans into a loop. The result is a domain-specific planning program (dsPlanner), i.e., a plan with if-statements and while-loops that can solve similar problems of the same class. Loop\textsc{Distill}, however, does not address testing the applicability of plans. Other approaches in AI planning, including case-based planning \cite{hammond1986chef,borrajo2015acm} and macro operators \cite{fikes1972strips2,chrpa2010generation}, are also related to our work.
These methods tend to suffer from the utility problem, in which learning more information can be counterproductive due to the difficulty of storing and managing the information and of determining which information should be used to solve a particular problem. In EBPDs, by combining generalization with abstraction in task learning, it is possible to avoid saving large sets of concrete cases. Additionally, since task learning in EBPDs is supervised, solving the utility problem can to some extent be delegated to the user, who chooses which tasks and associated procedures to teach. Other related work includes Learning from Demonstration (LfD), which aims to learn robot control programs simply by showing robots how to achieve tasks \cite{Argall2009469,billard2008robot}. This has the immediate advantage of requiring no specialized skill or training, and makes use of a human demonstrator's knowledge to identify which control program to acquire, typically by regression-based methods. Although LfD is useful for learning primitive action-control policies (such as for object manipulation), it is unsuitable for learning complex tasks. LfD usually requires many examples in order to induce the intended control structure \cite{allen2007plow}. Moreover, the representations are task-specific and are not likely to transfer to structurally similar tasks \cite{chao2011towards}. \section{RUNNING EXAMPLE} \label{sec:running_example} We develop a \textsc{stacking-blocks}\xspace planning domain, based on the blocks world domain, to illustrate the concepts and definitions provided in this paper. Assume a set of blocks of red and blue colors sitting on a table. The goal is to build a vertical stack of red and blue blocks. The state of a problem in this domain consists of predicates with the following meanings. \texttt{pile(x)}, \texttt{table(x)}, \texttt{red(x)}, \texttt{blue(x)}, \texttt{pallet(x)}: \texttt{x} is a pile, table, red block, blue block, or pallet, respectively.
\texttt{attached(p,l)}: pile \texttt{p} is attached to location \texttt{l}. \texttt{belong(h,l)}: hoist \texttt{h} belongs to location \texttt{l}. \texttt{at(h,p)}: hoist \texttt{h} is at place \texttt{p}. \texttt{holding(h,x)}: hoist \texttt{h} is holding block \texttt{x}. \texttt{empty(h)}: hoist \texttt{h} is empty. \texttt{on(x,y)}: block \texttt{x} is on block \texttt{y}. \texttt{ontable(x,t)}: block \texttt{x} is on table \texttt{t}. \texttt{top(x,p)}: block \texttt{x} is the top of pile \texttt{p}. The \textsc{stacking-blocks}\xspace domain has actions with the following meanings. \texttt{move(h,x,y,l)}: hoist \texttt{h} moves from place \texttt{x} to place \texttt{y} at location \texttt{l}. \texttt{unstack(h,x,y,p,l)}: hoist \texttt{h} unstacks block \texttt{x} from block \texttt{y} on pile \texttt{p} at location \texttt{l}. \texttt{stack(h,x,y,p,l)}: hoist \texttt{h} puts block \texttt{x} on block \texttt{y} on pile \texttt{p} at location \texttt{l}. \texttt{pickup(h,x,t,l)}: hoist \texttt{h} picks up block \texttt{x} from table \texttt{t} at location \texttt{l}. \texttt{putdown(h,x,t,l)}: hoist \texttt{h} puts down block \texttt{x} on table \texttt{t} at location \texttt{l}. We define a specific class of `stack' problems in which an equal number of red and blue blocks are initially on a table and need to be stacked on a pile with blue blocks at the bottom and red blocks on top, using a hoist that can hold only one block at a time. Generalizing from this example, we formally present and define the concepts used for creating this domain and for problem solving in EBPDs. \section{A FORMAL MODEL OF EXPERIENCE-BASED PLANNING DOMAINS} \label{sec:formalization} An EBPD is a unified framework that provides intelligent robots with the capability of problem solving by learning from experience \cite{mokhtari2016jint,vahid2017prletter}.
Problem solving in this framework is achieved using a hierarchical problem solver, consisting of an abstract and a concrete planning domain, which employs a set of learned activity schemata for guiding search-based planning. Formally, an EBPD is described as a tuple $\Delta=(\mc{D}_a,\mc{D}_c,\mc{R},\mc{E},\mc{M})$ where $\mc{D}_a$ is an abstract planning domain, $\mc{D}_c$ is a concrete planning domain, $\mc{R}$ is a set of abstraction hierarchies (i.e., inference rules) $f:\mc{D}_c\to\mc{D}_a$ to translate the concrete space in $\mc{D}_c$ into the abstract space in $\mc{D}_a$, $\mc E$ is a set of experiences, and $\mc M$ is a set of methods in the form of activity schemata for solving problems. In general, a planning domain for problem solving $\mc{D}=(\mc{L},\Sigma,\mc{S},\mc{O})$ is described by a first-order logic language $\mc{L}$, a finite subset of ground atoms $\Sigma$ of $\mc{L}$ representing the static or invariant properties of the world (properties of the world that are always true), a set of all possible states $\mc{S}$, in which every state $s\in\mc{S}$ is a set of ground atoms of $\mc{L}$ representing dynamic or transient properties of the world (i.e., $s\cap\Sigma=\emptyset$), and a set of planning operators $\mc O$. { In EBPDs, the abstract and concrete planning domains are denoted by $\mc{D}_a=(\mc{L}_a,\Sigma_a,\mc{S}_a,\mc{O}_a)$ and $\mc{D}_c=(\mc{L}_c,\Sigma_c,\mc{S}_c,\mc{O}_c)$ respectively.} A \emph{planning operator} $o \in\mc{O}$ is described as a tuple $( h,S,P,E )$ where $h$ is the planning operator head, $S$ is the static precondition of $o$, a set of predicates that must be proved in $\Sigma$, $P$ is the precondition of $o$, a set of literals that must be proved in a state $s \in \mc{S}$ in order to apply $o$ in $s$, and $E$ is the effect of $o$, a set of literals specifying the changes on $s$ effected by $o$.
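To make the operator tuple $( h,S,P,E )$ concrete, the following Python sketch (our illustration, not part of the EBPD implementation; the dictionary encoding and all names are assumptions) encodes a ground instance of \texttt{pickup} and applies it STRIPS-style to a state represented as a set of ground atoms:

```python
# Minimal sketch of applying a ground planning operator o = (h, S, P, E).
# Atoms are tuples like ("on", "b1", "b2"); negative literals are
# encoded as ("not", atom). Illustrative encoding, not the authors' code.

def applicable(static, state, op):
    """o is applicable if its static precondition holds in Sigma and
    its precondition literals hold in the current state s."""
    for lit in op["S"]:
        if lit not in static:
            return False
    for lit in op["P"]:
        if lit[0] == "not":
            if lit[1] in state:
                return False
        elif lit not in state:
            return False
    return True

def apply_op(state, op):
    """Return the successor state: remove negated effects, add positives."""
    new_state = set(state)
    for lit in op["E"]:
        if lit[0] == "not":
            new_state.discard(lit[1])
        else:
            new_state.add(lit)
    return new_state

# A ground instance (action) of pickup(h, x, t, l):
pickup = {
    "head": ("pickup", "hoist1", "b1", "table1", "loc1"),
    "S": {("hoist", "hoist1"), ("table", "table1")},
    "P": [("empty", "hoist1"), ("ontable", "b1", "table1")],
    "E": [("not", ("empty", "hoist1")),
          ("not", ("ontable", "b1", "table1")),
          ("holding", "hoist1", "b1")],
}
static = {("hoist", "hoist1"), ("table", "table1")}
s0 = {("empty", "hoist1"), ("ontable", "b1", "table1")}
s1 = apply_op(s0, pickup) if applicable(static, s0, pickup) else s0
```

The same add/delete discipline underlies both the concrete and the abstract operators used throughout this section.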
A head takes the form \(n(x_1,...,x_k)\) in which $n$ is the name and \(x_1,...,x_k\) are the arguments, e.g., {\texttt{(pick ?block ?table)}} \footnote{ The notation of the Planning Domain Definition Language (PDDL) is used to represent EBPDs. All terms starting with a question mark (?) are variables, and the rest are constants or function symbols.}. Any ground instance of a planning operator is called an \emph{action}. \lstref{absoperator} shows a planning operator in EBPDs. \lstinputlisting[style=customlst, label=lst:absoperator, float, morekeywords={action, static, parameters, precondition, effect}, caption={Representation of a planning operator in EBPDs.}] {listings/abs_operator} Abstraction in EBPDs is achieved by dropping or transforming predicates and operators of the concrete planning domain $\mc{D}_c$ into the abstract planning domain $\mc{D}_a$. This transformation involves two independent abstraction hierarchies: a \emph{predicate abstraction hierarchy} and an \emph{operator abstraction hierarchy}, which are expressed in $\mc{R}$. The predicate abstraction hierarchy is a set of abstraction relations, each one relating a concrete predicate $p_c(u_1,\dots,\allowbreak u_n)\allowbreak \in \mc{L}_c$ to an abstract predicate $p_a(v_1,\dots,v_m) \in \mc{L}_a$, such that $m \leq n$ and $(v_1,\dots,\allowbreak v_m) \allowbreak \subseteq (u_1,\dots,u_n)$; or to $\varnothing$ ($nil$). That is, a concrete predicate in $\mc{L}_c$ might: map onto an abstract predicate in $\mc{L}_a$ by replacing predicate symbols and excluding some arguments of the concrete predicate from the arguments of the abstract predicate, e.g., {\texttt{(holding ?hoist ?block) $\to$ (holding ?block)}}; or map onto $\varnothing$ ($nil$), that is, it is excluded from the abstract predicates, e.g., {\texttt{(attached ?pile ?location) $\to \varnothing$}}. Similarly, the operator abstraction hierarchy translates concrete operators in $\mc{O}_c$ into abstract operators in $\mc{O}_a$.
In this abstraction, a concrete operator in $\mc{O}_c$ might: map onto an abstract operator in $\mc{O}_a$ by replacing operator symbols and excluding some arguments of the concrete operator from the arguments of the abstract operator, e.g., {\texttt{(pickup ?hoist ?block ?table ?loc) $\to$ (pick ?block ?table)}}; or map onto $\varnothing$ ($nil$), that is, it is excluded from the abstract operators, e.g., {\texttt{(move ?hoist ?from ?to ?loc) $\to \varnothing$}}. In this paper, given a concrete predicate/operator $x$, the functional expression \texttt{parent}$(x)$ returns the parent of $x$, i.e., the abstract predicate/operator corresponding to $x$. Tables~\ref{tbl:predicate_hierarchies} and \ref{tbl:operator_hierarchies} present the predicate and operator abstraction hierarchies in the \textsc{stacking-blocks}\xspace EBPD. \footnote{ As a prerequisite of EBPDs, it is assumed that descriptions of the abstract and concrete planning domains $(\mc{D}_a,\mc{D}_c)$ with the predicate and operator abstraction hierarchies $\mc{R}$ are given by a domain expert. Although it may require more effort to specify the abstract language, we believe this is a price worth paying to make planning more tractable in certain situations.
Moreover, automatic definition of abstract and concrete planning domains is beyond the scope of this work.} \begin{table} \centering \caption{Predicate abstraction hierarchy in the \textsc{stacking-blocks}\xspace EBPD.} \label{tbl:predicate_hierarchies} \setlength\extrarowheight{-1pt} {\footnotesize \begin{tabular}{rl} \textbf{\normalsize Abstract predicate} & \textbf{\normalsize Concrete predicate} \smallskip\\ \hline \texttt{(table ?table)} & \texttt{(table ?table)} \\ \texttt{(pile ?pile)} & \texttt{(pile ?pile)} \\ \texttt{(block ?block)} & \texttt{(block ?block)} \\ \texttt{(blue ?block)} & \texttt{(blue ?block)} \\ \texttt{(red ?block)} & \texttt{(red ?block)} \\ \texttt{(pallet ?pallet)} & \texttt{(pallet ?pallet)} \\ \texttt{(on ?block1 ?block2)} & \texttt{(on ?block1 ?block2)} \\ \texttt{(ontable ?block ?table)} & \texttt{(ontable ?block ?table)} \\ \texttt{(top ?block ?pile)} & \texttt{(top ?block ?pile)} \\ \texttt{(holding ?block)} & \texttt{(holding ?hoist ?block)} \\ $\varnothing$\; & \texttt{(location ?location)} \\ $\varnothing$\; & \texttt{(hoist ?hoist)} \\ $\varnothing$\; & \texttt{(attached ?pile ?location)} \\ $\varnothing$\; & \texttt{(belong ?hoist ?location)} \\ $\varnothing$\; & \texttt{(at ?hoist ?pile)} \\ $\varnothing$\; & \texttt{(empty ?hoist)} \\ \end{tabular}} \end{table} \begin{table}[t] \centering \caption{Operator abstraction hierarchy in the \textsc{stacking-blocks}\xspace EBPD.} \label{tbl:operator_hierarchies} {\footnotesize \begin{tabular}{@{}rl@{}} \textbf{\normalsize Abstract operator} & \textbf{\normalsize Concrete operator} \smallskip\\ \hline \texttt{(unstack ?block1 ?block2 ?pile)} & \texttt{(unstack ?hoist ?block1 ?block2 ?pile ?loc)} \\ \texttt{(stack ?block2 ?block1 ?pile)} & \texttt{(stack ?hoist ?block2 ?block1 ?pile ?loc)} \\ \texttt{(pick ?block ?table)} & \texttt{(pickup ?hoist ?block ?table ?loc)} \\ \texttt{(put ?block ?table)} & \texttt{(putdown ?hoist ?block ?table ?loc)} \\ $\varnothing$\; & \texttt{(move 
?hoist ?from ?to ?loc)} \\ \end{tabular}} \end{table} We propose to use an experience, given in the form of a concrete, previously solved problem, and to abstract this experience for reuse in new situations. An \emph{experience} $e \in\mc{E}$ is a triple of ground structures $( t,K,\pi )$ where $t$ is a task achieved in the experience, i.e., a functional expression of the form $n(c_1,...,c_k)$ with $n$ being the task name and each $c_i$ a constant, e.g., {\ttf{(stack table1 pile1)}}, $K$ is a set of key-properties describing properties of the world in the experience, and $\pi$ is a solution plan to achieve $t$. Every \emph{key-property} is of the form $\tau(p)$ where $\tau$ is a temporal symbol and $p$ is a predicate. Temporal symbols specify the temporal extent of predicates in the experience. Three types of temporal symbols are used in key-properties, namely \texttt{init}---true at the initial state, \texttt{static}---always true during an experience, and \texttt{end}---true at the final state, e.g., {\ttf{(end(top block8 pile1))}}. \lstref{experience} shows part of an experience in EBPDs. Experiences are collected through human-robot interaction and instruction-based teaching. We previously presented methods and approaches for instructing and teaching a robot how to achieve a task, as well as for extracting and recording experiences \cite{vahid2014experience,mokhtari2016jint}. Experience extraction is beyond the scope of this paper. \begin{figure}[!t] \lstinputlisting[style=customlst, label=lst:experience, morekeywords={define, experience, domain, episode_id, task, parameters, key, properties, plan, objects}, caption={Part of the `stack' experience in the \textsc{stacking-blocks}\xspace EBPD. There are 8 (4 blue and 4 red) blocks in this experience. The goal of the task in this experience is to stack the blocks from a table on a pile.
The key-properties describe the initial, final and static world information of the experience (some key-properties are omitted due to limited space). The solution plan for this problem contains 31 primitive actions. }] {listings/robotic_arm_exp.ebpd} \vspace{-30pt} \end{figure} Extracted experiences are the main inputs for acquiring activity schemata. Activity schemata are task planning knowledge obtained from experiences and contain generic solutions to classes of task problems. An \emph{activity schema} $m \in\mc{M}$ is a triple of ungrounded structures $m=(t,\Scope,\Omega)$, where $t$ is the target task to be performed by a robot, e.g., {\texttt{(stack ?table ?pile)}}, $\mz{S}$ is the scope of applicability of $m$, and $\Omega$ is a sequence of enriched abstract operators (also called an enriched abstract plan). Each \emph{enriched abstract operator}, denoted by $\omega$, is a pair $(o, F)$, where $o$ is an abstract operator head, and $F$ is a set of features of $o$, i.e., ungrounded key-properties, obtained from an experience, that characterize $o$. In Section~\ref{sec:schema}, we present the learning method and a concrete example of an activity schema. In Section~\ref{sec:tvla_learning}, we further develop the definition of the scope of applicability and present a method for inferring the scope of applicability of an activity schema from an experience. Finally, a \emph{task planning problem} in EBPDs is described as a tuple of ground structures $\mc P=( t,\sigma,s_0,g )$ where $t$ is the target task, $\sigma\subseteq\Sigma$ is a subset of the static world information, $s_0$ is the initial world state, and $g$ is the goal.
\footnote{ A full representation and implementation of the \textsc{stacking-blocks}\xspace EBPD, and a set of all concepts required for problem-solving in this EBPD are available at: \url{https://github.com/mokhtarivahid/ebpd/tree/master/domains/}.} \section{LEARNING ACTIVITY SCHEMATA}\label{sec:schema} In this section, we recapitulate the procedure for learning an activity schema in EBPDs. We also improve the abstraction methodology in EBPDs to obtain more compact and more widely applicable concepts. \subsection{Generalization} \label{sub:genralization} The first stage, applied to an experience in order to extract its basic principles, is a deductive generalization method based on the tradition of PLANEX \citep{fikes1972strips2} and Explanation-Based Generalization (EBG) \citep{mitchell1986explanation}. Through generalization, a general concept is formulated from a single experience and domain knowledge. The proposed EBG method is carried out over the plan of the experience. In this transformation, constants appearing in the plan are replaced with variables; hence the plan becomes free from the specific constants and can be used in situations involving arbitrary constants. The EBG method consistently variablizes all constants appearing in the actions of the plan in the experience and, when it reaches the last action in the plan, propagates the variables for the constants throughout the whole experience, i.e., the constants in the key-properties of the experience are also replaced with the variables obtained by the EBG. EBG then generates a generalized experience, i.e., new planning control knowledge, which forms the basis of a learned activity schema. \subsection{Abstraction} \label{sub:abstraction} We propose an abstraction methodology for translating the obtained generalized experience into an abstracted generalized experience. An abstract representation makes it possible, during problem solving, to solve given problems with reduced computational effort.
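The variablization step of EBG can be pictured with a small Python sketch (our illustration, not the authors' code; data structures and names are assumptions): every distinct constant in the plan's actions receives a fresh variable, and the same substitution is then propagated to the key-properties.

```python
# Illustrative sketch of EBG-style variablization: consistently replace
# every constant appearing in the plan's actions with a fresh variable,
# then propagate the same substitution to the key-properties.

def variablize(plan, key_properties):
    subst = {}
    def var_of(c):
        # one fresh variable per distinct constant, reused consistently
        if c not in subst:
            subst[c] = "?x%d" % (len(subst) + 1)
        return subst[c]
    gen_plan = [(a[0],) + tuple(var_of(c) for c in a[1:]) for a in plan]
    # constants not occurring in the plan are left unchanged
    gen_keys = {(tau, (p[0],) + tuple(subst.get(c, c) for c in p[1:]))
                for (tau, p) in key_properties}
    return gen_plan, gen_keys, subst

plan = [("pickup", "hoist1", "block1", "table1", "loc1")]
keys = {("init", ("ontable", "block1", "table1"))}
gen_plan, gen_keys, subst = variablize(plan, keys)
```

After this step the plan is free of specific constants and can be instantiated with arbitrary objects, exactly as described above.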
Abstraction also makes the learned concepts broader, more compact and more widely applicable. Given the predicate and operator abstraction hierarchies $\mc{R}$, the abstraction of an experience is achieved by transforming the concrete predicates and operators into abstract predicates and operators, which reduces the level of detail in the generalized experience. The predicate and operator abstraction hierarchies in $\mc{R}$ specify which of the concrete predicates/operators are mapped and which are skipped. A concrete (generalized) experience $e=(t,K,\pi)$ is translated into an abstracted (generalized) experience $e_a=(t,K_a,\pi_a)$, denoted by $\ttf{Abs}(e)$, as follows: \[ K_a = \{\tau(\ttf{parent}(p)) \mid \tau(p) \in K,\ \ttf{parent}(p)\neq\varnothing\}, \quad \pi_a = \langle \ttf{parent}(o) \mid o \in \pi,\ \ttf{parent}(o)\neq\varnothing \rangle. \] Listing~\ref{lst:experience_gen} partially shows an experience after generalization and abstraction. In this example, the abstraction is based on the predicate and operator abstraction hierarchies presented in Tables~\ref{tbl:predicate_hierarchies} and \ref{tbl:operator_hierarchies}. \lstinputlisting[style=customlst, float=t, label=lst:experience_gen, belowskip=0pt, morekeywords={define, experience, domain, episode_id, task, parameters, key, properties, plan, objects}, caption={After generalization and abstraction, the constants are replaced with variables (Generalization), and some key-properties and actions are excluded from the generalized experience (Abstraction) as specified in the predicate and operator abstraction hierarchies.}] {listings/robotic_arm_exp_generalized.ebpd} \subsection{Feature extraction} \label{sub:features} The discovery of meaningful features can contribute to the creation of a more concise and accurate learned concept \citep{fawcett1992automatic}. While abstraction reduces the level of detail in an experience, extracting other features helps to capture the essence of the experience. Features are properties of abstract operators in learned planning knowledge.
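The criterion used for such features can be previewed concretely: a key-property qualifies as a feature of an abstract operator when its predicate shares at least one argument with the operator and at least one with the task. The Python sketch below is our own illustration (names and data are assumptions, not the paper's code):

```python
# Sketch of feature extraction: a key-property tau(p) is a feature of an
# abstract operator if p shares at least one argument with the operator
# AND at least one argument with the task, linking the two together.

def features_of(op, task, key_props):
    op_args, task_args = set(op[1:]), set(task[1:])
    result = set()
    for (tau, p) in key_props:
        p_args = set(p[1:])
        if p_args & op_args and p_args & task_args:
            result.add((tau, p))
    return result

task = ("stack", "?table", "?pile")
op = ("pick", "?block1", "?table")
key_props = {
    ("init", ("ontable", "?block1", "?table")),  # links op and task
    ("static", ("blue", "?block1")),             # shares no task argument
}
feats = features_of(op, task, key_props)
```

Here only \texttt{(init(ontable ?block1 ?table))} qualifies, since \texttt{(blue ?block1)} mentions no argument of the task.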
In an experience, a \emph{feature} of an abstract operator is a key-property $\tau(p)$ such that $p$ contains at least one argument of the abstract operator and at least one argument of the task in the experience, that is, the feature links the abstract operator with the task in the experience. For example, in Listing~\ref{lst:experience_gen}, the key-property \texttt{(init(ontable ?block1 ?table))} is a feature connecting \texttt{?block1}, an argument of the abstract operator \texttt{pick}, to \texttt{?table}, an argument of the task `stack'. For each abstract operator in a generalized and abstracted experience, all possible relations between the arguments of the abstract operator and the task arguments are automatically extracted and associated with the abstract operator. Features are intended to improve the performance of problem solving by guiding a planner toward a goal state and reducing the probability of backtracking; that is, during problem solving, objects that satisfy the features are preferred for instantiating actions. Listing~\ref{lst:schema} shows the activity schema learned thus far from the `stack' experience, after generalization, abstraction and feature extraction. \lstinputlisting[style=customlst, label=lst:schema, float=t, morekeywords={parameters, domain, define, method, activity, schema, abstract, plan, objects}, caption={The activity schema learned thus far for the `stack' task after generalization, abstraction and feature extraction. Each abstract operator is associated with a set of features (some are omitted due to limited space) that, during problem solving, determine which objects can be used to instantiate abstract actions. }] {listings/robotic_arm_schema.ebpd} \subsection{Loop detection} \label{sub:loop} Detecting and representing possible loops of enriched abstract operators in an activity schema improves the compactness and increases the applicability of the activity schema.
In previous work \cite{vahid2017iros,vahid2017prletter}, we proposed a loop detection approach based on the standard methods of computing the \emph{Suffix Array} ($SA$) of a string --- an array of integers providing the starting positions of all suffixes of a string, sorted in lexicographical order --- and the \emph{Longest Common Prefix} ($LCP$) array --- an array of integers storing the lengths of the longest common prefixes between all pairs of consecutive suffixes in a suffix array \cite{manber1993suffix}. \footnote{ The suffix array and the longest common prefix array allow efficient implementations of many important string operations.} Since the $LCP$ algorithm also selects overlapping longest repeated substrings in a string, it cannot be used on its own to detect potential loops in the string. We extend the definition of the $LCP$ to the Non-overlapping Longest Common Prefix ($NLCP$) and build an $NLCP$ array from a string: \begin{definition}\label{def:nlcp} Let $A$ and $B$ be two strings, and $A[i:j]$ and $B[i:j]$ denote the substrings of $A$ and $B$ ranging from $i$ to $j-1$, respectively. The length of the \emph{Non-overlapping Longest Common Prefix} ($NLCP$) of $A$ and $B$, denoted by \ttf{nlcp}$(A,B)$, is the largest integer $l \leq \ttf{min}(\Len{A},\Len{B},\allowbreak\ttf{abs}(\Len{A}-\Len{B}))$ such that $A[0:l] = B[0:l]$. \end{definition} \begin{definition}\label{def:nlcpa} Let $S$ be a string and $SA$ the suffix array of $S$. An $NLCP$ array, built from $S$ and $SA$, is an array of integers of size $n=\ttf{len}(S)$ such that $NLCP[0]$ is undefined and $NLCP[i]=\ttf{nlcp}(S[SA[i-1]:n],S[SA[i]:n])$, for $1\leq i<n$. \end{definition} The $NLCP$ array gives a list of potential patterns in a string; however, it does not guarantee that the obtained patterns are consecutive.
We propose the Contiguous Non-overlapping Longest Common Prefix array, obtained from the $NLCP$ array: \begin{definition}\label{def:cnlcp} A \emph{Contiguous Non-overlapping Longest Common Prefix} ($CNLCP$) array is an array of structures, constructed from the $SA$ and $NLCP$ arrays of a string, such that each $CNLCP[i]$, for $i\geq 0$, contains a substring, representing a pattern that consecutively occurs in the string, and a list of starting positions of the pattern in the string. A non-overlapping longest common prefix between $NLCP[i]$ and $NLCP[i-1]$ is consecutive if $NLCP[i]=\ttf{abs}(SA[i]-SA[i-1])$ for $1\leq i<n$. \end{definition} Once the $CNLCP$ array is constructed for the abstract plan of a generalized and abstracted experience (represented as a string), we select the pattern with the largest length in the $CNLCP$ array and construct a loop by merging all of its iterations: an intersection of the features of the corresponding abstract operators is computed, and a new variable is introduced to represent the different variables playing the same role in the corresponding abstract operators and in their features in each iteration. We continue this process for the remaining patterns in the $CNLCP$ array until no more loops are formed. In Appendix~\ref{sec:app}, we present an updated version of the $CNLCP$ algorithm and a concrete example of computing the $CNLCP$ array. \lstref{schema_loop} shows a learned activity schema of the `stack' task in the \textsc{stacking-blocks}\xspace EBPD with two potential loops of actions. The specific algorithms for learning activity schemata have been described in \cite{mokhtari2016jint,vahid2017prletter,vahid2017iros}.
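The suffix-array machinery above can be sketched on a plain string (a simplified illustration, not the paper's implementation): the non-overlap bound $\ttf{abs}(SA[i]-SA[i-1])$ caps a common prefix so that the two occurrences cannot overlap, and equality with that bound makes them consecutive.

```python
# Sketch of SA/NLCP computation and consecutive-pattern selection on a
# small string. Naive O(n^2 log n) construction for clarity only.

def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])

def nlcp_array(s):
    sa = suffix_array(s)
    n = len(s)
    out = [None]  # NLCP[0] is undefined
    for i in range(1, n):
        a, b = s[sa[i - 1]:], s[sa[i]:]
        # cap the prefix length so the two occurrences cannot overlap
        bound = min(len(a), len(b), abs(len(a) - len(b)))
        l = 0
        while l < bound and a[l] == b[l]:
            l += 1
        out.append(l)
    return sa, out

# 'ab' repeats consecutively in 'ababc':
sa, nl = nlcp_array("ababc")
# consecutive repeats satisfy NLCP[i] == abs(SA[i] - SA[i-1])
patterns = [("ababc"[sa[i]:sa[i] + nl[i]], sorted([sa[i - 1], sa[i]]))
            for i in range(1, len(sa))
            if nl[i] and nl[i] == abs(sa[i] - sa[i - 1])]
```

For \texttt{"ababc"} the only consecutive pattern found is \texttt{"ab"} at positions 0 and 2; the repeated \texttt{"b"} at positions 1 and 3 is rejected because its occurrences are not adjacent.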
\lstinputlisting[style=customlst, label=lst:schema_loop, float=t, captionpos=b, morekeywords={parameters, domain, objects, define, activity, schema, method, abstract, plan, precondition}, caption={A learned activity schema for the `stack' task with loops (note that only key-properties that make a distinction between abstract operators are shown; the rest are omitted due to limited space). There are two loops in this activity schema. During problem solving, iterations of the loops are generated for blue and red blocks on a table, respectively. }] {listings/robotic_arm_schema_loop.ebpd} \section{INFERRING THE SCOPE OF AN ACTIVITY SCHEMA} \label{sec:tvla_learning} To extend the EBPDs framework, we propose to infer the \emph{scope of applicability} of a learned activity schema. The scope allows for testing the applicability of the activity schema to solve a given problem. We develop an approach based on Canonical Abstraction \citep{SRW:TOPLAS02}, which creates a finite representation of a (possibly infinite) set of logical structures. The approach is based on Kleene's $3$-valued logic \citep{kleene1952introduction}, which extends Boolean logic by introducing an indefinite value $\frac{1}{2}$ to denote either $0$ or $1$. We infer the scope of an activity schema from the key-properties of a generalized and abstracted experience in the form of a $3$-valued logical structure, which can be used as an abstraction of a larger $2$-valued logical structure. We first represent the key-properties of a generalized and abstracted experience using a $2$-valued logical structure: \begin{definition}\label{def:2_valued} A \emph{$2$-valued logical structure}, also called a \emph{concrete structure}, over a set of predicate symbols $P$ and a set of temporal symbols $T$ is a pair, \[ C=( U, \iota), \] where $U$ is a set of individuals called the universe of $C$ and $\iota$ is an interpretation for $P$ and $T$ over $U$.
The interpretation of a predicate symbol $p\in P$ with a temporal symbol $\tau\in T$, denoted by $\iota(\tau(p))$, is a function mapping $\tau(p)$ over the universe $U$ to its truth-value in $C$: for every predicate symbol $p^{(k)}$ of arity $k$ and temporal symbol $\tau$, $\iota(\tau(p)):U^k \to \{0,1\}$. \end{definition} A set of key-properties $K$ is converted into a $2$-valued logical structure, denoted by $\texttt{Struct}(K) =(U, \iota)$, as follows: \[ \begin{array}{rcl}~ P &=& \bigcup\limits_{\tau(p(t_1,\dots,t_k)) \in K}\{p\}\enspace, \\ T &=& \bigcup\limits_{\tau(p(t_1,\dots,t_k)) \in K}\{\tau\}\enspace, \\ U &=& \bigcup\limits_{\tau(p(t_1,\dots,t_k)) \in K}\{t_1,\dots,t_k\}\enspace, \\ \iota &=& \lambda\tau\in T, p^{(k)}\in P~.~\lambda(t_1,\dots,t_k)\in U^k. \left\{\begin{array}{ll} 1, & \text{if}\quad \tau(p(t_1,\dots,t_k)) \in K\hbox{;} \\ 0, & \hbox{otherwise.} \end{array}\right. \end{array} \] That is, the universe of $\texttt{Struct}(K)$ consists of the objects appearing in the key-properties of $K$, and the interpretation is defined over the key-properties of $K$. The interpretation of a temporal symbol $\tau\in T$, where $T=\{\texttt{static},\texttt{init},\texttt{end}\}$, and a predicate symbol $p^{(k)}\in P$ of arity $k$, for a tuple of objects $(t_1,\dots,t_k)\in U^k$ is $1$, i.e., $\iota(\tau(p))(t_1,\dots,t_k)=1$, if a corresponding key-property $\tau(p(t_1,\dots,t_k))$ appears in $K$. $2$-valued logical structures are drawn as directed graphs. The individuals of the universe are drawn as nodes, and the key-properties with definite values ($1$) are drawn as directed edges. For example, \figref{tvla_abstraction}(a) shows a $2$-valued logical (concrete) structure $C$ representing the generalized and abstracted experience in Listing~\ref{lst:experience_gen}.
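The construction of $\texttt{Struct}(K)$ can be sketched directly (an illustrative Python rendering of our own; the tuple encoding is an assumption):

```python
# Sketch of Struct(K): the universe collects the objects appearing in the
# key-properties, and iota maps tau(p) over a tuple of objects to 1
# exactly when the corresponding key-property is in K (0 otherwise).

def struct_of(key_props):
    universe, true_facts = set(), set()
    for (tau, p) in key_props:            # p is a tuple: (name, arg1, ...)
        universe |= set(p[1:])
        true_facts.add((tau,) + p)
    def iota(tau, name, *args):
        return 1 if (tau, name) + args in true_facts else 0
    return universe, iota

K = {
    ("static", ("table", "?table")),
    ("init", ("ontable", "?block1", "?table")),
}
U, iota = struct_of(K)
```

Any key-property absent from $K$ evaluates to $0$, matching the "otherwise" branch of the definition above.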
In this example, the universe and the truth-values (interpretations) of the key-properties over the universe of $C$ are as follows: \footnote{ The truth-value of a predicate is $0$ if it is not present in $\iota$.} \[ \begin{array}{rcl} P &=& \{ \ttfs{pile}, \ttfs{table}, \ttfs{pallet}, \ttfs{block}, \ttfs{blue}, \ttfs{red} \}, \\ T &=& \{ \ttfs{static}, \ttfs{init}, \ttfs{end} \}, \\ U &=& \{ \ttfs{?pile}, \ttfs{?table}, \ttfs{?pallet}, \ttfs{?block1}, \ttfs{?block2}, \ttfs{?block3}, \ttfs{?block4}, \ttfs{?block5}, \\ & & \;\ttfs{?block6}, \ttfs{?block7}, \ttfs{?block8} \}\enspace, \\ \iota &=& \{ \ttfs{(static(pile ?pile))} = \; 1, \\ & & \;\ttfs{(static(table ?table))} = \; 1, \\ & & \;\ttfs{(static(pallet ?pallet))} = \; 1, \\ & & \;\ttfs{(static(block ?block1))} = \; 1, \\ & & \;\ttfs{(static(block ?block2))} = \; 1, \\ & & \;\ttfs{(static(block ?block3))} = \; 1, \\ & & \;\ttfs{(static(block ?block4))} = \; 1, \\ & & \;\;\vdots \qquad\}\enspace. \end{array} \] The scope inference procedure converts a $2$-valued logical structure into a $3$-valued logical structure \citep{SRW:TOPLAS02}: \begin{definition}\label{def:3_valued} A \emph{$3$-valued logical structure}, also called an \emph{abstract structure}, over a set of predicate symbols $P$ and a set of temporal symbols $T$ is a pair, \[ \mz{S}=( U, \iota), \] where $U$ is a set of individuals called the universe of $\mz{S}$ and $\iota$ is an interpretation $\iota(\tau(p))$ for every predicate symbol $p\in P$ and temporal symbol $\tau\in T$. For every predicate symbol $p^{(k)}$ of arity $k$ with a temporal symbol $\tau$, $\iota(\tau(p)):U^k \to \{0,1,\frac{1}{2}\}$, where $\frac{1}{2}$ denotes an unknown value. \end{definition}
This transformation is based on canonical names, Kleene's join operation \citep{lev2000tvla} and a canonical abstraction function: \begin{definition}\label{def:canonical_name} Let $(U,\iota)$ be a ($2$-valued logical/$3$-valued logical) structure over a set of temporal symbols $T$ and a set of predicate symbols $P$. The \emph{canonical name} of an object $u \in U$, also called an \emph{abstraction predicate}, denoted by $\ttf{canon}(u)$, is the set of unary predicate symbols with temporal symbols that hold for $u$ in the structure: \[ \ttf{canon}(u) = \{ \tau(p) \mid \tau\in T, p\in P,\iota(\tau(p))(u)=1\}\enspace. \] \end{definition} For example, the canonical names of the objects in $U$ in the above example are the following: \[ \begin{array}{rcl} \ttfs{canon(?table}) &=& \{\ttfs{static(table)}\}\\ \ttfs{canon(?pile}) &=& \{\ttfs{static(pile)}\}\\ \ttfs{canon(?pallet}) &=& \{\ttfs{static(pallet)}\}\\ \ttfs{canon(?block1}) &=& \{\ttfs{static(block),static(blue)}\}\\ \ttfs{canon(?block2}) &=& \{\ttfs{static(block),static(blue)}\}\\ \ttfs{canon(?block3}) &=& \{\ttfs{static(block),static(blue)}\}\\ \ttfs{canon(?block4}) &=& \{\ttfs{static(block),static(blue)}\}\\ \ttfs{canon(?block5}) &=& \{\ttfs{static(block),static(red)}\}\\ \ttfs{canon(?block6}) &=& \{\ttfs{static(block),static(red)}\}\\ \ttfs{canon(?block7}) &=& \{\ttfs{static(block),static(red)}\}\\ \ttfs{canon(?block8}) &=& \{\ttfs{static(block),static(red)}\}\enspace. \end{array} \] \begin{definition}\label{def:kleene_join} In Kleene's $3$-valued logic, let us say that the values $0$ and $1$ are definite values and $\frac{1}{2}$ is an indefinite value. For $l_1,l_2\in\{0, 1, \frac{1}{2}\}$, $l_1$ has more definite information than $l_2$, denoted by $l_1\preceq l_2$, if $l_1=l_2$ or $l_2=\frac{1}{2}$.
Kleene's \emph{join operation} of $l_1$ and $l_2$, denoted by $l_1\sqcup l_2$, is the least-upper-bound operation with respect to $\preceq$, defined as follows: \[l_1\sqcup l_2 = \left\{\begin{array}{ll} l_1, & \text{if}\quad l_1=l_2\hbox{;} \\ \frac{1}{2}, & \hbox{otherwise.} \end{array} \right. \] \end{definition} \begin{definition}\label{def:CanonicalAbstraction} Let $C=( U, \iota)$ be a $2$-valued logical structure. The \emph{canonical abstraction} of $C$, denoted by $\beta(C)$, is a $3$-valued logical structure $\mz{S}=( U', \iota')$ defined as follows: \begin{align*} &U' = \{ \ttf{canon}(u) \mid u \in U \}\enspace,\\ &\iota'(\tau(p^{(k)}))(t_1',\dots,t_k') = \bigsqcup\limits_{t_1,\dots,t_k} \{ \iota(\tau(p^{(k)}))(t_1,\dots,t_k) \mid \forall i=1..k.\ t_i' = \ttf{canon}(t_i) \} \enspace. \end{align*} $\mz{S}$ may contain \emph{summary objects}, that is, a set of objects in $U$ with a common canonical name $c$ is merged \footnote{Note that we avoid merging objects appearing in the task (parameters) of an experience into a summary object.} into a summary object in $U'$, denoted by \ttf{summary$(c)$}: \[ \ttf{summary}(c)=\{ u\in U \mid \ttf{canon}(u)=c\}\enspace. \] \end{definition} Kleene's join operation determines the truth-value (interpretation) of key-properties in a $3$-valued logical structure. That is, the interpretation of a key-property in the $3$-valued logical structure is $1$ (solid arrows) if the key-property holds for all objects of the same canonical name in the $2$-valued logical structure, $\frac{1}{2}$ (dashed arrows) if it holds for some but not all such objects, and $0$ if it holds for none. \footnote{ In a planning domain description, the set of unary predicates is used to build the set of abstraction predicates.
Canonical abstraction requires sufficiently many unary predicates to yield a meaningful abstract structure for a given concrete structure. In all example domains used in this work, we provided sufficient unary predicates. However, the types of objects (in typed planning domain descriptions) can also be used as unary predicates when the given unary predicates are not sufficient. } Computing $\beta(\texttt{Struct}(K))$ for a set of key-properties $K$ of the (generalized and abstracted) experience takes time polynomial in $|K|+|U|$. $3$-valued logical structures are also drawn as directed graphs. Summary objects are drawn as double circles. Definite values are drawn as in $2$-valued logical structures, and indefinite values ($\frac{1}{2}$) are drawn as dashed directed edges. For example, \figref{tvla_abstraction}(b) shows a $3$-valued logical structure $\mz{S}$ of the concrete structure $C$ in \figref{tvla_abstraction}(a). The double circles stand for summary objects, e.g., $\ttf{summary}(\allowbreak\{\ttfs{static(block),static(blue)}\})$ is a summary object in $\mz{S}$ corresponding to the objects (\texttt{?block1..?block4}) in $C$ with the same canonical name. Solid (dashed) arrows represent truth-values of $1$ ($\frac{1}{2}$). Intuitively, because of the summary objects, the abstract structure $\mz{S}$ represents the concrete structure $C$ and all other `Stack' problems that have exactly one \ttf{table}, one \ttf{pile}, one \ttf{pallet}, and at least one \ttf{blue block} and one \ttf{red block}, such that the blocks are initially on the table and finally red blocks are on top of blue blocks in the pile.
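To make the definitions above concrete, the following sketch computes canonical names and a canonical abstraction for a small fragment of the `stack' structure. The dictionary-based encoding of logical structures and the object names are our own illustrative choices, not the representation used by TVLA or the EBPD implementation.

```python
from itertools import product
from fractions import Fraction

HALF = Fraction(1, 2)  # Kleene's indefinite truth-value

def canonical_name(u, unary_preds, iota):
    """Canonical name of object u: the unary (temporal, predicate) pairs holding for u."""
    return frozenset((tau, p) for (tau, p) in unary_preds
                     if iota.get((tau, p), {}).get((u,), 0) == 1)

def kleene_join(values):
    """Kleene's join: the common definite value if all agree, otherwise 1/2."""
    vals = set(values)
    return vals.pop() if len(vals) == 1 else HALF

def canonical_abstraction(U, preds, iota, canon):
    """beta(C): merge objects with equal canonical names and join the truth-values."""
    U_abs = {canon[u] for u in U}
    iota_abs = {}
    for (tau, p, k) in preds:                      # k = arity of predicate p
        for abs_tup in product(U_abs, repeat=k):
            concrete = [iota.get((tau, p), {}).get(tup, 0)
                        for tup in product(U, repeat=k)
                        if tuple(canon[t] for t in tup) == abs_tup]
            iota_abs[(tau, p) + abs_tup] = kleene_join(concrete)
    return U_abs, iota_abs

# A fragment of the 'stack' structure: two blue blocks initially on the table.
unary = [("static", "block"), ("static", "blue"), ("static", "table")]
iota = {
    ("static", "block"): {("?block1",): 1, ("?block2",): 1},
    ("static", "blue"):  {("?block1",): 1, ("?block2",): 1},
    ("static", "table"): {("?table",): 1},
    ("init", "on"):      {("?block1", "?table"): 1, ("?block2", "?table"): 1},
}
U = ["?block1", "?block2", "?table"]
canon = {u: canonical_name(u, unary, iota) for u in U}
assert canon["?block1"] == canon["?block2"]        # the blue blocks share one canonical name

blue, table = canon["?block1"], canon["?table"]
_, iota_abs = canonical_abstraction(U, [("init", "on", 2)], iota, canon)
assert iota_abs[("init", "on", blue, table)] == 1  # all blue blocks on the table: definite 1

iota[("init", "on")] = {("?block1", "?table"): 1}  # now only one blue block is on the table
_, iota_abs2 = canonical_abstraction(U, [("init", "on", 2)], iota, canon)
assert iota_abs2[("init", "on", blue, table)] == HALF  # holds for some, not all: indefinite
```

The join of $\{1,1\}$ stays definite while $\{0,1\}$ collapses to $\frac{1}{2}$, which is how the solid and dashed arrows in the abstract structure arise.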
\begin{figure*}[!t] \centering \begin{subfigure}[b]{\textwidth} \includegraphics[width=.97\textwidth]{figs/stack_prob_abs.pdf} \caption{A concrete structure $C$.} \end{subfigure} \\\vspace{5pt} \begin{subfigure}[b]{.58\textwidth} \includegraphics[width=\textwidth]{figs/stack_1.pdf} \caption{An abstract structure $\mz{S}=\beta(C)$.} \end{subfigure} \caption{Canonical abstraction of the (generalized and abstracted) `stack' experience (in Listing~\ref{lst:experience_gen}) in the \textsc{stacking-blocks}\xspace EBPD. Nodes constitute the universe of a structure and edges represent the truth-values of the key-properties over the universe.} \label{fig:tvla_abstraction} \vspace{-25pt} \end{figure*} The inferred scope is finally represented in a learned activity schema as a set of key-properties. Summary objects are represented as \ttf{(summary ?c)} where \ttf{?c} is a canonical name. Indefinite values ($\frac{1}{2}$) appear as \texttt{(maybe($p$))} where $p$ represents a key-property. \lstref{schema_loop_prec} shows the inferred scope of applicability for the activity schema of the `stack' task. \begin{figure}[!t] \lstinputlisting[style=customlst, label=lst:schema_loop_prec, captionpos=b, morekeywords={parameters, domain, objects, define, activity, schema, method, abstract, plan, scope}, caption={The inferred scope of applicability for the learned activity schema of the `stack' task. `Summary' objects represent arbitrary numbers of objects of the same canonical name, and `maybe' key-properties represent key-properties with truth-values of either $0$ or $1$ in a task planning problem. }] {listings/scope} \end{figure} \section{SELECTING AN APPLICABLE ACTIVITY SCHEMA FOR PROBLEM SOLVING} \label{sec:tvla_execution} When an activity schema is learned for a class of problems, it can be used to generate a solution plan for a given task problem. 
In previous work, the EBPDs framework lacked an automatic strategy for finding an applicable activity schema, among several learned activity schemata, for solving a task problem. Here, we extend that work: an activity schema is selected as applicable to a given task problem if the task problem is \emph{embedded} in the scope of the activity schema, i.e., if the task problem maps onto the scope of the activity schema. Selecting an activity schema involves \emph{problem abstraction} and \emph{testing the scope of applicability} (i.e., embedding). Given the predicate abstraction hierarchy in $\mc{R}$, the abstraction of a problem is achieved by transforming the concrete predicates into abstract predicates, which results in an abstracted task problem. A concrete task problem $\mc{P}=( t,\sigma,s_0,g )$ is translated into an abstracted task problem $\mc{P}_a=( t,\sigma_a,{s_0}_a,g_a )$, denoted by $\ttf{Abs}(\mc{P})$, as follows: \[ \sigma_a = \{\ttf{parent}(p) \mid p\in\sigma\}, \quad {s_0}_a = \{\ttf{parent}(p) \mid p\in s_0\}, \quad g_a = \{\ttf{parent}(p) \mid p\in g\}. \] To test whether the abstracted task problem $\mc{P}_a$ is embedded in the scope of an activity schema, we first convert $\mc{P}_a$ into a $2$-valued structure \footnote{ More precisely, to represent an abstracted task problem $\mc{P}=(t,\sigma,{s_0},g)$ as a $2$-valued structure, we generate a set of key-properties $K$ for $\mc{P}$ by wrapping the predicates of $(\sigma,{s_0},g)$ with the temporal symbols \ttf{static}, \ttf{init} and \ttf{end}, and then convert $K$ into a $2$-valued structure.
} (as described in Section~\ref{sec:tvla_learning}), and then test whether the obtained $2$-valued structure is embedded in the scope of an activity schema: \begin{definition} \label{def:embedding} We say that a concrete structure (i.e., an abstracted task problem represented as a $2$-valued logical structure) $C=(U, \iota)$ is \emph{embedded} in an abstract structure (i.e., the scope of an activity schema) $\mz{S}=(U', \iota')$, denoted by $C \sqsubseteq \mz{S}$, if there exists a surjective function $f:U \to U'$ such that for every predicate symbol $p^{(k)}$ of arity $k$ with a temporal symbol $\tau$, and every tuple of objects $u_1,...,u_k \in U$, one of the following conditions holds: \begin{equation} \label{eq:embedding} \begin{array}{c} \iota(\tau(p))(u_1,...,u_k) = \iota'(\tau(p))(f(u_1),...,f(u_k)) \quad\text{or}\quad \iota'(\tau(p))(f(u_1),...,f(u_k)) = \frac{1}{2} \enspace. \end{array} \end{equation} Further, $\mz{S}$ represents the set of concrete structures embedded in it: $\{C \mid C \sqsubseteq \mz{S}\}$. \end{definition} \begin{proposition} Canonical abstraction is sound with respect to the embedding relation. That is, $C \sqsubseteq \beta(C)$ holds for every concrete structure $C$. \end{proposition} \begin{proposition} If an abstract structure $\mz{S}=(U', \iota')$ is in the image of canonical abstraction, then checking whether a concrete structure $C=(U, \iota)$ is embedded in $\mz{S}$ can be done in time polynomial in $|U|+|U'|+|K|$. \end{proposition} \noindent\textbf{\textit{Proof sketch.}} Observe that \eqref{eq:embedding} implies that if $C$ is embedded in $\mz{S}$, then $\mz{S}$ and $C$ have equal sets of canonical names (checkable in polynomial time). Therefore, the embedding function must be $f = \{ u \mapsto u' \mid \ttf{canon}_C(u)=\ttf{canon}_{\mz{S}}(u')\}$. Checking that \eqref{eq:embedding} holds for $f$ takes polynomial time.
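Problem abstraction and the embedding test can likewise be sketched in a few lines. The dictionary encoding of structures, the predicate-hierarchy entry, and all object names below are hypothetical illustrations of the embedding definition and its proof sketch, not the actual TVLA-backed implementation.

```python
from itertools import product
from fractions import Fraction

HALF = Fraction(1, 2)

def abstract_problem(problem, parent):
    """Abs(P): replace every concrete predicate name by its parent in the hierarchy R."""
    t, sigma, s0, g = problem
    lift = lambda preds: {(parent.get(p[0], p[0]),) + p[1:] for p in preds}
    return (t, lift(sigma), lift(s0), lift(g))

def embedded(U, iota, U_abs, iota_abs, preds, canon, canon_abs):
    """Test C 'embedded in' S for an abstract structure S produced by canonical abstraction."""
    if {canon[u] for u in U} != {canon_abs[a] for a in U_abs}:
        return False                       # unequal sets of canonical names: no embedding
    # By the proof sketch, the only candidate is the map that matches canonical names.
    f = {u: next(a for a in U_abs if canon_abs[a] == canon[u]) for u in U}
    for (tau, p, k) in preds:              # k = arity of predicate p
        for tup in product(U, repeat=k):
            v = iota.get((tau, p), {}).get(tup, 0)
            v_abs = iota_abs.get((tau, p), {}).get(tuple(f[u] for u in tup), 0)
            if v != v_abs and v_abs != HALF:
                return False               # the embedding condition is violated
    return True

# Hypothetical predicate-hierarchy entry: 'ontable' abstracts to 'on'.
parent = {"ontable": "on"}
P = ("stack", set(), {("ontable", "b1")}, {("on", "b1", "b2")})
assert abstract_problem(P, parent)[2] == {("on", "b1")}

# Two concrete blue blocks on the table embed into one summary object 'B'.
U, U_abs = ["b1", "b2", "t"], ["B", "T"]
canon, canon_abs = {"b1": "B", "b2": "B", "t": "T"}, {"B": "B", "T": "T"}
iota = {("init", "on"): {("b1", "t"): 1, ("b2", "t"): 1}}
iota_abs = {("init", "on"): {("B", "T"): 1}}
assert embedded(U, iota, U_abs, iota_abs, [("init", "on", 2)], canon, canon_abs)

iota_bad = {("init", "on"): {("b1", "t"): 1}}  # b2 is not on the table, but S says all are
assert not embedded(U, iota_bad, U_abs, iota_abs, [("init", "on", 2)], canon, canon_abs)
```

The failing case shows why a definite $1$ in the abstract structure is a hard constraint, while an indefinite $\frac{1}{2}$ tolerates either concrete value.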
\hfill\begin{scriptsize}$\square$\end{scriptsize} Based on the above definition, we implemented and integrated an \textsc{Embedding} function into the EBPDs system that finds an activity schema $m=( t,\mz{S},\Omega )$ applicable to a task planning problem $\mc{P}=(t,\sigma,s_0,g)$ by checking whether $\texttt{Struct}(\ttf{Abs}(\mc{P})) \sqsubseteq \mz{S}$ holds. \section{PLANNING USING THE LEARNED ACTIVITY SCHEMATA} \label{sec:planner} We have previously proposed a planning system for generating a solution plan to a given task problem using a learned activity schema \cite{vahid2017iros,vahid2017prletter}. Problem solving in EBPDs is achieved using a hierarchical problem solver which includes an abstract planner---\textit{Abstract Schema-Based Planner} (ASBP)---and a concrete planner---\textit{Schema-Based Planner} (SBP). Given an experience-based planning domain $\Delta=(\mc{D}_a,\mc{D}_c,\mc{R},\mc{E},\mc{M})$ and a task planning problem $\mc{P}$, the EBPDs' planning system retrieves an applicable activity schema $m=(t,\mz{S},\Omega)$, i.e., checks $\texttt{Struct}(\ttf{Abs}(\mc{P})) \sqsubseteq \mz{S}$ (see Section~\ref{sec:tvla_execution}), and attempts to generate a solution plan to $\mc{P}$. Using the abstract planning domain $\mc{D}_a$, ASBP first derives an abstract solution by instantiating the enriched abstract plan $\Omega$ for $\ttf{Abs}(\mc{P})$. This also involves extending possible loops in $\Omega$ for the applicable objects in $\ttf{Abs}(\mc{P})$. To extend a loop, ASBP simultaneously generates all successors for an iteration of the loop and for the enriched abstract operator that follows the loop. ASBP computes a cost for every generated successor, based on how many features of the abstract operators are verified against the features extracted for the instantiated abstract actions, and selects the action with the lowest cost during the search.
Finally, a ground abstract plan, $\Pi$, is generated when ASBP reaches the end of the enriched abstract plan $\Omega$ (at which point the goal must also be achieved). The ground abstract plan $\Pi$ produced by ASBP becomes the main skeleton of the final solution, from which SBP, using the concrete planning domain $\mc{D}_c$, produces a final solution plan to $\mc{P}$ by generating and substituting concrete actions for the abstract actions in $\Pi$ (as specified in the operator abstraction hierarchy in $\mc{R}$). This might also involve generating and inserting actions from the $\varnothing$ ($nil$) class (see Table~\ref{tbl:operator_hierarchies}). The specific planning algorithm and its implementation have been described in \cite{vahid2017iros,vahid2017prletter}. \section{EMPIRICAL EVALUATION}\label{sec:experiments} We present the results of our experiments in different classes of problems. \subsection{Prototyping and implementation} \label{sec:implementation} We implemented a prototype of our system in \textsc{SWI-Prolog}, a general-purpose logic programming language suited to rapid prototyping of artificial intelligence techniques, and used TVLA as an engine for computing the scope of applicability of activity schemata. We performed all experiments in simulated domains with simulated robot platforms, e.g., a PR2, on a $2.20$\,GHz Intel Core i7 machine with $12$\,GB of memory. \subsection{\textsc{STACKING-BLOCKS}}\label{sec:experiments:sim} In our first experiment, we use the \textsc{stacking-blocks}\xspace EBPD, as described in Section~\ref{sec:running_example} (see also Tables~\ref{tbl:predicate_hierarchies} and~\ref{tbl:operator_hierarchies}). The main objective of this experiment is to learn a set of activity schemata (tasks) with the same goal but different scopes of applicability, and to evaluate how the scope testing (embedding) function allows the system to automatically find an applicable activity schema for a given task problem.
In this paper, we described a class of `stack' problems (Section~\ref{sec:running_example}) with an experience (Listing~\ref{lst:experience}), a learned activity schema (Listing~\ref{lst:schema_loop}), and its scope of applicability (Listing~\ref{lst:schema_loop_prec} and Figure~\ref{fig:tvla_abstraction}(b)). Additionally, we define three other classes of `stack' problems with the same goal but different initial configurations, as follows: (\emph{i}) a pile of red and blue blocks, with red blocks at the bottom and blue blocks on top; (\emph{ii}) a pile of alternating red and blue blocks, with a blue block at the bottom and a red block on top; and (\emph{iii}) a pile of alternating red and blue blocks, with a red block at the bottom and a blue block on top. In all classes of problems, the goal is to make a new pile of red and blue blocks with blue blocks at the bottom and red blocks on top (the same goal as in Section~\ref{sec:running_example}). To show the effectiveness of the proposed scope-of-applicability inference, we simulated an experience (containing $20$ blocks, with equal numbers of red and blue) in each of the above classes, from which the system generated three activity schemata with distinct scopes of applicability (see Figure~\ref{fig:stack_scopes}). To evaluate the system over the learned activity schemata, we randomly generated $60$ task problems in all four classes of the `stack' task ($15$ in each class), with $20$ to $50$ blocks (equal numbers of red and blue) in each problem. In this experiment, the system found an applicable activity schema (among four) for each given task problem, taking under $60\,\text{ms}$ to test the scopes of applicability (see Figure~\ref{fig:stack_retrieval}), and then successfully solved all problems.
To show the efficiency of the system, we also evaluated and compared the performance of SBP with a state-of-the-art planner, \textsc{Madagascar} \cite{RINTANEN201245}, based on four measures: time, memory, number of evaluated nodes and plan length (see Table~\ref{tbl:stack_exp}). In this experiment, SBP was extremely efficient in terms of memory and of nodes evaluated in the search tree. SBP was also fairly fast at solving some problems compared to \textsc{Madagascar}. Note that the time comparison is not exact in this evaluation, since SBP is implemented in \textsc{Prolog} whereas \textsc{Madagascar} is implemented in \textsc{C++}. Figure~\ref{fig:stack_chart} summarizes the performance of the two planners graphically. \begin{figure*}[!t] \centering \begin{subfigure}[b]{\textwidth} \centering \captionsetup{width=\textwidth}% \includegraphics[width=.72\textwidth]{figs/stack_3.pdf} \caption{This scope of applicability (abstract structure) represents all `stack' problems that have exactly one \ttf{table} and at least one \ttf{pile}, one \ttf{pallet}, one \ttf{blue block} and one \ttf{red block} such that blue blocks are initially on top of red blocks and finally red blocks are on top of blue blocks (on a pallet) on a pile.} \label{fig:abstract_stack3} \end{subfigure} \\\vspace{2pt} \begin{subfigure}[b]{\textwidth} \centering \captionsetup{width=\textwidth}% \includegraphics[width=.72\textwidth]{figs/stack_4.pdf} \caption{This scope of applicability represents all `stack' problems that have exactly one \ttf{table} and at least one \ttf{pile}, one \ttf{pallet}, one \ttf{blue block} and one \ttf{red block} such that alternating red and blue blocks are initially on a pile with a blue block at the bottom (on a pallet) and a red block on top and finally red blocks are on top of blue blocks.} \label{fig:abstract_stack4} \end{subfigure} \\\vspace{2pt} \begin{subfigure}[b]{\textwidth} \centering \captionsetup{width=\textwidth}%
\includegraphics[width=.72\textwidth]{figs/stack_5.pdf} \caption{This scope of applicability represents all `stack' problems that have exactly one \ttf{table} and at least one \ttf{pile}, one \ttf{pallet}, one \ttf{blue block} and one \ttf{red block} such that alternating red and blue blocks are initially on a pile with a red block at the bottom (on a pallet) and a blue block on top and finally red blocks are on top of blue blocks.} \label{fig:abstract_stack5} \end{subfigure} \caption{The scope of applicability, i.e., canonical abstraction, of the additional three classes of the `stack' task in the \textsc{stacking-blocks}\xspace EBPD.} \label{fig:stack_scopes} \vspace{-25pt} \end{figure*} \begin{table}[t!] \setlength{\tabcolsep}{8pt} \centering \caption{Performance of the SBP and \textsc{Madagascar} (M) planners in terms of applicability test (retrieval) time, search time, memory, evaluated states and plan length in the different classes of `stack' problems in the \textsc{stacking-blocks}\xspace EBPD.} \resizebox*{!}{\dimexpr\textheight-5.5\baselineskip\relax}{ \begin{tabular}{cccccccccc} \hline \multirow{1}{*}{Problem/} & \multicolumn{1}{c}{Retrieval$^*$ time (s)} & \multicolumn{2}{c}{Search time (s)} & \multicolumn{2}{c}{Memory (MB)} & \multicolumn{2}{c}{Evaluated states} & \multicolumn{2}{c}{Plan length} \\ (\#blocks) & SBP & SBP & M & SBP & M & SBP & M & SBP & M \\ \hline p1\ \ \ \ (22) & 0.011 & 0.29 & 0.550 & 10.6 & 57.2 & 131 & 813 & 87 & 87 \\ \hline p2\ \ \ \ (22) & 0.022 & 0.90 & 0.820 & 8.1 & 92.5 & 133 & 597 & 88 & 88 \\ \hline p3\ \ \ \ (24) & 0.010 & 0.37 & 0.820 & 12.4 & 76.9 & 143 & 1011 & 95 & 95 \\ \hline p4\ \ \ \ (24) & 0.021 & 1.44 & 1.250 & 8.9 & 124.9 & 145 & 985 & 96 & 96 \\ \hline p5\ \ \ \ (26) & 0.010 & 0.49 & 1.290 & 13.9 & 100.2 & 155 & 1k & 103 & 103 \\ \hline p6\ \ \ \ (26) & 0.023 & 2.16 & 1.780 & 9.8 & 162.4 & 157 & 1k & 104 & 104 \\ \hline p7\ \ \ \ (28) & 0.010 & 0.61 & 1.780 & 15.9 & 130.2 & 167 & 1k & 111 & 111 \\ \hline p8\ \ \ \ 
(30) & 0.010 & 0.77 & 2.360 & 17.3 & 148.8 & 179 & 1k & 119 & 119 \\ \hline p9\ \ \ \ (28) & 0.026 & 3.22 & 2.750 & 10.3 & 212.9 & 169 & 1k & 112 & 112 \\ \hline p10 \ (30) & 0.029 & 4.67 & 3.170 & 11.4 & 271.2 & 181 & 1k & 120 & 127 \\ \hline p11 \ (32) & 0.010 & 0.95 & 3.330 & 19.7 & 187.10 & 191 & 1k & 127 & 127 \\ \hline p12 \ (22) & 0.035 & 1.48 & 4.220 & 16.9 & 369.7 & 271 & 4k & 172 & 172 \\ \hline p13 \ (32) & 0.023 & 6.84 & 4.460 & 12.5 & 307.4 & 193 & 1k & 128 & 128 \\ \hline p14 \ (34) & 0.010 & 1.16 & 4.570 & 22.1 & 238.5 & 203 & 2k & 135 & 135 \\ \hline p15 \ (34) & 0.022 & 9.14 & 5.270 & 13.5 & 382.8 & 205 & 2k & 136 & 136 \\ \hline p16 \ (36) & 0.010 & 1.43 & 6.390 & 24.1 & 296.7 & 215 & 2k & 143 & 143 \\ \hline p17 \ (36) & 0.026 & 12.31 & 6.900 & 13.9 & 478.5 & 217 & 2k & 144 & 144 \\ \hline p18 \ (38) & 0.014 & 1.73 & 8.770 & 26.9 & 366.7 & 227 & 3k & 151 & 151 \\ \hline p19 \ (38) & 0.028 & 16.87 & 9.200 & 15.0 & 592.2 & 229 & 3k & 152 & 152 \\ \hline p20 \ (24) & 0.055 & 2.05 & 9.950 & 17.7 & 577.7 & 288 & 7k & 191 & 184 \\ \hline p21 \ (22) & 0.048 & 1.33 & 10.930 & 16.1 & 642.6 & 264 & 7k & 175 & 168 \\ \hline p22 \ (40) & 0.024 & 22.23 & 11.100 & 16.1 & 714.4 & 241 & 3k & 160 & 160 \\ \hline p23 \ (24) & 0.032 & 2.30 & 11.190 & 19.4 & 698.8 & 296 & 10k & 188 & 188 \\ \hline p24 \ (26) & 0.046 & 3.04 & 11.630 & 20.1 & 709.5 & 312 & 7k & 207 & 200 \\ \hline p25 \ (28) & 0.050 & 4.50 & 12.560 & 22.5 & 929.2 & 336 & 8k & 223 & 216 \\ \hline p26 \ (40) & 0.010 & 2.07 & 12.900 & 29.7 & 401.1 & 239 & 2k & 159 & 159 \\ \hline p27 \ (26) & 0.039 & 3.78 & 13.530 & 21.9 & 802.6 & 321 & 7k & 204 & 204 \\ \hline p28 \ (42) & 0.011 & 2.48 & 16.120 & 31.5 & 491.7 & 251 & 3k & 167 & 167 \\ \hline p29 \ (42) & 0.023 & 29.52 & 16.930 & 17.3 & 799.7 & 253 & 3k & 168 & 168 \\ \hline p30 \ (28) & 0.040 & 5.27 & 18.330 & 23.7 & 1098.0 & 346 & 10k & 220 & 220 \\ \hline p31 \ (30) & 0.037 & 7.50 & 21.680 & 26.6 & 1388.7 & 371 & 11k & 236 & 236 \\ \hline p32 \ (44) & 
0.022 & 38.26 & 23.090 & 17.9 & 947.1 & 265 & 4k & 176 & 176 \\ \hline p33 \ (44) & 0.014 & 2.96 & 24.690 & 35.0 & 593.8 & 263 & 4k & 175 & 175 \\ \hline p34 \ (30) & 0.046 & 6.50 & 24.980 & 24.1 & 1537.7 & 360 & 13k & 239 & 232 \\ \hline p35 \ (32) & 0.039 & 10.69 & 25.570 & 28.5 & 1645.8 & 396 & 12k & 252 & 252 \\ \hline p36 \ (32) & 0.046 & 9.71 & 26.170 & 26.8 & 1645.7 & 384 & 11k & 255 & 248 \\ \hline p37 \ (46) & 0.010 & 3.48 & 27.970 & 38.4 & 709.6 & 275 & 4k & 183 & 183 \\ \hline p38 \ (46) & 0.022 & 63.59 & 31.420 & 19.1 & 1128.5 & 277 & 4k & 184 & 184 \\ \hline p39 \ (34) & 0.058 & 12.68 & 35.910 & 29.5 & 2140.1 & 408 & 15k & 271 & 264 \\ \hline p40 \ (34) & 0.040 & 15.41 & 36.030 & 31.7 & 2126.8 & 421 & 13k & 268 & 268 \\ \hline p41 \ (48) & 0.022 & 103.44 & 36.120 & 20.3 & 1359.2 & 289 & 5k & 192 & 192 \\ \hline p42 \ (36) & 0.054 & 17.65 & 37.100 & 31.5 & 2410.5 & 432 & 18k & 287 & 280 \\ \hline p43 \ (48) & 0.010 & 4.06 & 37.820 & 41.9 & 841.0 & 287 & 6k & 191 & 191 \\ \hline p44 \ (50) & 0.014 & 4.72 & 41.650 & 44.7 & 904.3 & 299 & 5k & 199 & 199 \\ \hline p45 \ (36) & 0.040 & 20.64 & 42.470 & 34.9 & 2397.2 & 446 & 18k & 284 & 284 \\ \hline p46 \ (50) & 0.022 & 131.65 & 45.770 & 21.6 & 1575.5 & 301 & 6k & 200 & 207 \\ \hline p47 \ (38) & 0.048 & 25.80 & 48.710 & 34.5 & 2553.2 & 456 & 21k & 303 & 296 \\ \hline p48 \ (40) & 0.034 & 60.81 & 55.480 & 40.0 & 2658.5 & 496 & 19k & 316 & 316 \\ \hline p49 \ (38) & 0.042 & 35.48 & 57.540 & 38.1 & 2586.7 & 471 & 28k & 300 & 300 \\ \hline p50 \ (42) & 0.053 & 47.82 & 72.320 & 40.6 & 2665.7 & 504 & 26k & 335 & 328 \\ \hline p51 \ (40) & 0.061 & 47.58 & 84.570 & 37.5 & 2727.0 & 480 & 35k & 319 & 312 \\ \hline p52 \ (42) & 0.039 & 53.22 & 101.080 & 43.7 & 2724.10 & 521 & 36k & 332 & 332 \\ \hline p53 \ (44) & 0.037 & 75.32 & 105.530 & 47.4 & 2817.1 & 546 & 36k & 348 & 348 \\ \hline p54 \ (44) & 0.049 & 62.14 & 114.780 & 42.6 & 2793.1 & 528 & 43k & 351 & 344 \\ \hline p55 \ (46) & 0.048 & 84.28 & -- & 46.0 & -- & 
552 & -- & 367 & -- \\ \hline p56 \ (46) & 0.034 & 95.31 & -- & 51.1 & -- & 571 & -- & 364 & -- \\ \hline p57 \ (48) & 0.056 & 108.66 & -- & 49.5 & -- & 576 & -- & 383 & -- \\ \hline p58 \ (48) & 0.038 & 120.24 & -- & 53.8 & -- & 596 & -- & 380 & -- \\ \hline p59 \ (50) & 0.050 & 132.46 & -- & 51.6 & -- & 600 & -- & 399 & -- \\ \hline p60 \ (50) & 0.040 & 147.93 & -- & 57.9 & -- & 621 & -- & 396 & -- \\ \hline \end{tabular}} \begin{flushleft} \footnotesize \ \ $^{\rm *}$ An average time of retrieving an activity schema (i.e., in this experiment among four learned activity schemata) for each task problem. The retrieval time increases linearly with the number of learned activity schemata for a specific task. \end{flushleft} \label{tbl:stack_exp} \vspace{-20pt} \end{table} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/stack/retrieval.pdf} \caption{CPU time used by SBP to find an applicable activity schema (among 4) for solving problems in the \textsc{stacking-blocks}\xspace domain.} \label{fig:stack_retrieval} \vspace{-5pt} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}[b]{.496\linewidth} \includegraphics[width=\linewidth]{figs/stack/time.pdf} \end{subfigure}\hspace{-1.4pt} \begin{subfigure}[b]{.496\linewidth} \includegraphics[width=\linewidth]{figs/stack/plan.pdf} \end{subfigure}\\\vspace{5pt} \begin{subfigure}[b]{.496\linewidth} \includegraphics[width=\linewidth]{figs/stack/xnodes.pdf} \end{subfigure}\hspace{-1.4pt} \begin{subfigure}[b]{.496\linewidth} \includegraphics[width=\linewidth]{figs/stack/memory.pdf} \end{subfigure}\\ \caption{Performance of the SBP and \textsc{Madagascar} (M) in the \textsc{stacking-blocks}\xspace domain.} \label{fig:stack_chart} \vspace{-5pt} \end{figure} \subsection{\textsc{ROVER}}\label{sec:experiments:rover} In the second experiment, we used the \textsc{rover}\xspace domain from the 3rd International Planning Competition (IPC-3). 
In this experiment, we adopt a different approach for evaluating the proposed scope inference technique. We randomly generated $50$ problems containing exactly $1$ rover and ranging from $1$ to $3$ waypoints, $5$ to $30$ objectives, $5$ to $10$ cameras and $5$ to $20$ goals in each problem. Using the scope inference procedure, the problems are classified into $9$ sets of problems. That is, problems that converge to the same $3$-valued structure are put together in the same set. Hence, each set of problems is identified with a distinct scope of applicability. Figure~\ref{fig:rover_scope} shows the time required to classify the problems into different sets, i.e., the time required by TVLA to generate $3$-valued structures for the problems and test which problems converge to the same $3$-valued structure. Figure~\ref{fig:rover_portion} shows the distribution of the problems in the obtained sets of problems. In each set of problems, we simulated an experience and generated an activity schema for problem solving. Figure~\ref{fig:rover_retrieval} shows the time required to retrieve an applicable activity schema (among 9 activity schemata in this experiment) for solving given problems, i.e., the time required to check whether a given problem is embedded in the scope of an activity schema. SBP successfully solved all problems in each class. 
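The classification step described above, grouping problems whose $3$-valued structures coincide, amounts to a simple partition by canonical abstraction. In the sketch below, `beta` stands in for the TVLA computation of a problem's $3$-valued structure and must be supplied by the caller; the toy `beta` and the problem encoding are purely illustrative.

```python
from collections import defaultdict

def classify_problems(problems, beta):
    """Partition problems into sets whose canonical abstractions are identical."""
    classes = defaultdict(list)
    for prob in problems:
        classes[beta(prob)].append(prob)   # beta(prob) must be hashable, e.g. a frozenset
    return list(classes.values())

# Toy stand-in for beta: abstract a problem to the set of object kinds it mentions,
# mimicking how summary objects hide the exact object counts.
toy_beta = lambda prob: frozenset(kind for (kind, _count) in prob)
p1 = (("rover", 1), ("waypoint", 2))
p2 = (("rover", 1), ("waypoint", 3))       # same kinds, different counts: same class
p3 = (("rover", 1), ("camera", 5))
sets = classify_problems([p1, p2, p3], toy_beta)
assert len(sets) == 2 and [p1, p2] in sets
```

With the real `beta`, each resulting set corresponds to one distinct scope of applicability, for which a single simulated experience suffices to learn an activity schema.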
\begin{figure}[!t] \centering \begin{subfigure}[b]{.58\linewidth} \centering \includegraphics[width=\linewidth]{figs/rover/scope_inference3.pdf} \caption{} \label{fig:rover_scope} \end{subfigure}\hspace{-2.4pt} \begin{subfigure}[b]{.41\linewidth} \centering \includegraphics[width=\linewidth]{figs/rover/portion3.pdf} \caption{} \label{fig:rover_portion} \end{subfigure}\\ \caption{CPU time used by TVLA to classify the problems (a), and the distribution of the problems over the obtained problem sets (b), in the \textsc{rover}\xspace domain.} \label{fig:rover} \vspace{-5pt} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/rover/retrieval2.pdf} \caption{CPU time used by SBP to find an applicable activity schema (among 9) for solving problems in the \textsc{rover}\xspace domain.} \label{fig:rover_retrieval} \vspace{-7pt} \end{figure} \subsection{\textsc{CAFE}}\label{sec:experiments:real} In order to validate the practical utility of our approach, we also applied it to a real-world task using a fully physically simulated PR2 in Gazebo, the standard simulator in ROS. We developed a \textsc{cafe}\xspace EBPD including $14$ concrete and $4$ abstract planning operators (see Table~\ref{tbl:cafe_domain}). We use a coffee-serving demonstration comprising two scenarios, A and B, with different sets of instructions to teach a PR2 to serve a guest in a cafe environment (see Figure~\ref{fig:cafe_scenarios}).
The instructions for Scenario A are: ``Move to counter1, grasp mug1, move to south of table1, place mug1 at the right placement area of guest1 -- this is a \texttt{ServeACoffee} task.'' The instructions for Scenario B are: ``Move to counter1, grasp mug1 and mug2, move to south of table1, place mug1 at the right placement area of guest1, move to north of table1, place mug2 at the left placement area of guest1 -- this is also a \texttt{ServeACoffee} task.'' In both scenarios, it is assumed that the robot knows the location of the guest and of the placement areas on the table. However, it does not know which placement area to approach for a guest. We used the infrastructure and simulation environment of the RACE project \footnote{ \texttt{{http://project-race.eu/}}} \cite{race2014} for instruction-based teaching of the robot to achieve the tasks. Figure~\ref{fig:sim_cafe} shows snapshots of teaching the PR2 a \texttt{ServeACoffee} task in Scenario A in Gazebo. \begin{table}[t] \centering \caption{Abstract and concrete planning operators in the \textsc{cafe}\xspace EBPD.
} {\footnotesize \begin{tabular}[l]{rl} \textbf{\normalsize Abstract operators} & \textbf{\normalsize Concrete operators}\\ \hline \texttt{move/3} & \texttt{move-base/3} \\ \texttt{move/3} & \texttt{move-base-blind/3} \\ \texttt{pick/4} & \texttt{pick-up-object/8} \\ \texttt{place/4} & \texttt{place-object/8} \\ $\varnothing$ & \texttt{tuck-arm/5} \\ $\varnothing$ & \texttt{move-arm-to-carry/5} \\ $\varnothing$ & \texttt{move-arm-to-side/5} \\ $\varnothing$ & \texttt{move-torso-down/5} \\ $\varnothing$ & \texttt{move-torso-middle/5} \\ $\varnothing$ & \texttt{move-torso-up/5} \\ $\varnothing$ & \texttt{ready-to-safe-move-with-no-object/8} \\ $\varnothing$ & \texttt{ready-to-safe-move-with-one-object/9} \\ $\varnothing$ & \texttt{ready-to-safe-move-with-two-object/10} \\ $\varnothing$ & \texttt{observe-object-on-area/4} \\ \end{tabular}} \label{tbl:cafe_domain} \end{table} \begin{figure*}[!t] \centering \begin{subfigure}[b]{.49\textwidth} \includegraphics[width=\textwidth]{figs/_Y1D_Scenario_1A} \caption{Scenario A} \end{subfigure} \ \ \begin{subfigure}[b]{.49\textwidth} \includegraphics[width=\textwidth]{figs/_Y1D_Scenario_1B} \caption{Scenario B} \end{subfigure} \caption{Initial states of the restaurant floor for the \texttt{ServeACoffee} demonstration in Scenarios A and B with a PR2. (a) In Scenario A, the PR2 is taught to take \texttt{mug1} from \texttt{counter1}, approach the south of \texttt{table1}, and place \texttt{mug1} on the right side of \texttt{guest1}.
(b) In Scenario B, the PR2 is taught to take \texttt{mug1} and \texttt{mug2} from \texttt{counter1}, approach the south of \texttt{table1}, and place \texttt{mug1} on the right side of \texttt{guest1}, and then to approach the north of \texttt{table1} and place \texttt{mug2} on the left side of \texttt{guest1}.} \label{fig:cafe_scenarios} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}[b]{.246\textwidth} \includegraphics[width=\textwidth]{figs/scene1.png} \end{subfigure}\hspace{-1pt} \begin{subfigure}[b]{.246\textwidth} \includegraphics[width=\textwidth]{figs/scene2.png} \end{subfigure}\hspace{-1pt} \begin{subfigure}[b]{.246\textwidth} \includegraphics[width=\textwidth]{figs/scene3.png} \end{subfigure}\hspace{-1pt} \begin{subfigure}[b]{.246\textwidth} \includegraphics[width=\textwidth]{figs/scene4.png} \end{subfigure}\hspace{-1pt} \caption{An example of the execution of a \texttt{ServeACoffee} task with a PR2 in a Gazebo-simulated environment. In this scenario, (from left to right) the robot moves to a \textit{counter}, picks up a \textit{mug} from the counter, moves to a \textit{table}, and puts the mug on the table in front of a \textit{guest}.} \label{fig:sim_cafe} \end{figure*} Our system learned two activity schemata for the \texttt{ServeACoffee} task with distinct abstract plans (i.e., different instruction sets) and distinct scopes of applicability. To validate the utility of the learned activity schemata, we set up two test scenarios, one in each class of the \texttt{ServeACoffee} task, in which the robot is asked to serve a guest sitting at \texttt{table2}. Our system computes the solution plans for each task problem using the learned activity schemata in less than $1$ second. Videos of the PR2 performing \texttt{ServeACoffee} tasks in this experiment are available at \url{https://goo.gl/HJ6g2R} and \url{https://github.com/mokhtarivahid/ebpd/tree/master/demos}.
The domains, experiences, learned activity schemata and task problems used in our experiments are available online at \url{https://github.com/mokhtarivahid/ebpd/}. \section{CONCLUSION AND FUTURE WORK}\label{sec:conclusion} We proposed an approach to generate a set of conditions that determines the scope of applicability of an activity schema in experience-based planning domains (EBPDs). The inferred scope allows an EBPD system to automatically find an applicable activity schema for solving a task problem among several activity schemata learned for a specific task. We validated the utility of this work in simulated domains and with a fully physically simulated PR2 in Gazebo. Through our experiments, we demonstrated the effectiveness of the system, including the loop detection and scope inference procedures, and reported timing results for the test problems. The time required for learning activity schemata, and for computing and testing their scopes, was negligible: the system learned activity schemata from single examples within seconds. This contrasts with other machine learning techniques, discussed in the related work, which usually require large sets of plan traces to learn planning domain knowledge (e.g., HTN-Maker \citep{hogg2008htn} uses 75 out of 100 input problems to train the system). While the results show good scalability, many engineering optimizations are possible in our prototype implementation of the proposed algorithms; faster results could be obtained from an implementation in a compiled language. Extensive evaluation of the proposed system on a large set of domains is also part of future work.
\section{Introduction} In the modern theory of cosmological structure formation, it is supposed that galaxies and clusters of galaxies formed from peak patches of the density field of matter in the Universe (Bardeen et al. 1986). In cosmological simulations the primary reference objects which are populated by galaxies and galaxy clusters are dark matter halos, and their abundance is described by the dark matter halo mass function (e.g. Jenkins et al. 2001, Tinker et al. 2008). Observationally, galaxies and galaxy clusters have very different appearances. Galaxies just mark the central region of the dark matter halo, and the extent of their embedding dark matter halo can only be traced by weak gravitational lensing. On the contrary, the dark matter halos of clusters of galaxies are filled by a very hot intracluster plasma, which can be observed in X-rays (e.g. Sarazin, 1986) and through the Sunyaev-Zeldovich effect in the cosmic microwave background (Sunyaev \& Zeldovich, 1972). In this way the gravitational potential of the dark and baryonic matter halo can be visualised more directly. In this note we explore whether the observational data on galaxies and on groups and clusters of galaxies can be described consistently in the form of a continuous halo mass function, even though the observational signatures of these objects are very different. We test in this way the validity of structure formation theory and the correctness of the interpretation of the observational data. In the present study we show, as a representation of the object mass distribution, mostly the fraction of the cosmic matter density made up by galaxies and clusters, which is a direct reflection of the cumulative mass function. This provides us in addition with the interesting information of where the major parts of matter are located in our Universe.
For all calculations depending on the cosmological model, we use a flat cosmic geometry and the parameters $\Omega_m = 0.282$ (B\"ohringer et al. 2017) and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. We retain $h = h_{100}$ for some values quoted from the literature. This matter density can be compared to $0.308\pm0.012$ from the 2015 result of Planck (Planck Collaboration 2016) and to $0.279\pm0.025$ from the WMAP 9-year result (Bennett et al. 2013). The specific value for $\Omega_m$ is chosen because it provides the best fit to our data on the galaxy cluster abundance, and we thus apply it in the following for consistency reasons. The deviation from the Planck result agrees with the well-recognised tension seen in the $\sigma_8-\Omega_m$ plane between the Planck result and that from weak lensing; see e.g. Hildebrandt et al. (2017). The cluster fit gives a value consistent with the weak lensing result. \section{Galaxy and cluster data} To assign a definite mass to galaxies and their dark matter halos and to galaxy clusters, we need to define an outer radius up to which the mass distribution in the systems is integrated. 
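The fiducial radius used below is $r_{200}$. For completeness (this is the standard overdensity convention, not a result of this note), the corresponding mass and radius are related through the critical density of the Universe:

```latex
% Standard overdensity convention: the mean density inside r_200
% equals 200 times the critical density rho_c.
\[
  M_{200} \;=\; \frac{4}{3}\pi\, r_{200}^{3}\,\times\,200\,\rho_{c},
  \qquad
  \rho_{c} \;=\; \frac{3H_{0}^{2}}{8\pi G}.
\]
```

With $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ as adopted above, $\rho_c \approx 9.2\times10^{-30}$ g cm$^{-3}$.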
An analysis of gravitational lensing around galaxies indicates that the mass of galaxies is distributed beyond their pseudovirial radius $r_{200}$, operationally defined as the radius of the sphere within which the mean density is 200 times the critical density (Masaki et al. 2012; hereafter MFY). The analysis indicates that the distribution of mass around galaxies extends to a few Mpc, roughly halfway to neighbouring galaxies: there seems to be no boundary in the mass distribution. Also for galaxy clusters the mass profile continues to increase well beyond a radius of $r_{200}$; see e.g. Ettori et al. (2019). Therefore a common fiducial outer radius has to be adopted for the comparison of the galaxy and cluster matter density content. Here we use $r_{200}$ in our further analysis, which approximately describes the boundary between the partly virialised material inside and the mostly infalling matter outside. \begin{figure} \includegraphics[width=\columnwidth]{Fukugita_1.ps} \caption{Fraction of the matter density of the Universe contained in collapsed objects (inside $M_{200}$) with masses above the lower mass limit given on the x-axis. The curve on the right gives the mass fraction in groups and clusters of galaxies, where the solid line marks the function derived from observations with uncertainties given by the grey area. The extrapolation to lower masses, by means of the halo mass function of Tinker et al. (2008), is indicated by a dashed line. The curve to the left is the fraction of the matter density contained in galaxy halos deduced from the galaxy luminosity function and the gravitational lensing effect of galaxy halos. The region of the curve in which the data are extrapolated beyond the interval covered by observations is shown as a dashed line. } \label{fig1} \end{figure} The mass function of galaxy halos for our study was obtained in the following way. The luminosity function of galaxies is now accurately known (Blanton et al. 
2001; 2003; Folkes 1999) to $L>10^{8.3}L_\odot$. McKay et al. (2001; 2002) measured the mass of galaxies within haloes out to $260h^{-1}$kpc by measuring the weak lensing shear around galaxies for the Sloan Digital Sky Survey (SDSS) spectroscopic sample. Their measurement gives $\langle M/L_r\rangle \simeq 170\pm21h^{-1}$ for the $r$-band at this radius, which is thought to be well beyond the virial radius of galaxies and thus to stand for the mass associated with galaxies. Their data show that the mass-to-light ratio does not depend on galaxy luminosity over an interval of a decade, $5 \times 10^9 - 8 \times 10^{10}$ L$_{\odot}$. They also find the dynamical mass from the virial velocity for the same sample to be $\langle M/L_r\rangle \simeq 145\pm34h^{-1}$, in reasonable agreement with their lensing estimate. For our analysis we adopted $160\pm30h^{-1}$ at the radius of $260h^{-1}$kpc, scaled to the pseudovirial radius as follows. With the aid of the N-body simulation result for haloes of galaxies, the average pseudovirial radius ($r_{200}$) of galaxies that match the SDSS sample, which is estimated to have a lower mass cutoff $M_{\rm low}\approx 2\times 10^{11}h^{-1}M_\odot$, is approximately $120h^{-1}$kpc, and so the radius McKay et al. measured corresponds to $\approx 2.2r_{200}$ (MFY). As $260h^{-1}$kpc is significantly larger than $r_{200}$, this is taken as evidence that the mass distribution extends much beyond $r_{200}$; $r_{200}$ comprises only a fraction of the mass associated with galaxies. For the comparison with clusters, we scale the average mass measured at $260h^{-1}$kpc to that at $r_{200}$, using the weak lensing scaling result, which approximately reads $M\propto r^{0.6}$ beyond the pseudovirial radius (MFY). This yields $\langle M/L_r\rangle|_{r_{200}} \simeq 90\pm 20h^{-1}$. This is the value we have adopted to estimate the mass of galaxies. 
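As a quick arithmetic check of this scaling (a sketch under the stated assumptions only: $M \propto r^{0.6}$ outside the pseudovirial radius, and $r_{200} \approx 120h^{-1}$ kpc for the SDSS sample), rescaling the adopted mass-to-light ratio from $260h^{-1}$ kpc down to $r_{200}$ indeed lands within the quoted $90\pm20h^{-1}$:

```python
# Rescale the weak-lensing mass-to-light ratio from 260/h kpc to r_200,
# assuming M \propto r^0.6 beyond the pseudovirial radius (MFY).
ml_260 = 160.0   # adopted <M/L_r> at 260/h kpc (in h M_sun/L_sun)
r_200 = 120.0    # average pseudovirial radius of the SDSS sample, in /h kpc
ml_r200 = ml_260 * (r_200 / 260.0) ** 0.6
print(round(ml_r200))  # ~101, within the adopted 90 +/- 20
```

The naive rescale gives $\approx 100h^{-1}$, consistent with the central value $90h^{-1}$ within its uncertainty.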
We remark that this radius dependence of the mass profile is consistent with that expected for the Navarro-Frenk-White (NFW, Navarro et al. 1995, 1997) profile with the core radius $r_s$ such that the concentration $c=r_{200}/r_s=5-10$, which is compatible with the values derived from inner profiles (typically for $r<r_{500}$): $c\approx 5$ for clusters and $c\approx 10-15$ for haloes of galaxies. This means that the NFW profile also provides a good description of galaxy haloes extended beyond the virial radius. Combining the galaxy luminosity function with the mass-to-light ratio from weak lensing, we construct the galaxy halo mass function. In our preceding work (B\"ohringer et al. 2017) we have computed the mass function of clusters and groups of galaxies down to $3\times10^{12}h^{-1}_{70}M_\odot$, using an X-ray selected cluster-group sample. We find that this mass function agrees well with that obtained from optical cluster samples (Bahcall \& Cen 1993), when the cluster mass is standardised to a universal definition, say by adopting $r_{200}$. In more detail, the mass function was determined from the cluster catalogue compiled in the {\sf REFLEX II} survey, which was based on X-ray detections of clusters in the ROSAT All Sky Survey in the southern sky, statistically complete down to a flux limit of $1.8 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ in the 0.1 - 2.4 keV energy band with an estimated completeness of about 95\% (B\"ohringer et al. 2004, 2013). Cluster masses were estimated by means of the X-ray luminosity mass relation determined for smaller subsamples (Vikhlinin et al. 2009, Pratt et al. 2009). The mass function was determined by combining the theoretical model of structure formation with the observational data. In practice it was derived from the best fitting theoretical prediction for the mass function based on a $\Lambda$CDM cosmological model with flat geometry, a matter density distribution power spectrum determined with CAMB (Lewis et al. 
2000)\footnote{ CAMB is publicly available from http://www.camb.info/CAMBsubmit.html}, and a parametrized halo mass function derived from N-body simulations by Tinker et al. (2008) compared to the observational data. The statistical uncertainty of the most critical cosmological parameters, $\Omega_m$ and $\sigma_8$, and of the $L_x$ - $M$ scaling relation in the fit determines the uncertainty range of the mass function. For the present work we also added the estimated uncertainty of the numerically derived mass function, which is in the range of 5 - 10\%, as an additional uncertainty of conservatively 10\% to the resulting mass function. We have also compared these results to the mass function obtained if we use other parametrisations for the halo mass function from the literature (e.g. Watson et al. 2013, Despali et al. 2016), finding differences which are well within the uncertainty of our first results. This mass function is used in the following study. The observational data of the cluster sample cover the mass range $M_{200} = 7\times10^{12}$ to $3\times10^{15} h^{-1}_{70}M_\odot$. However, the underlying numerically determined form of the mass function was obtained from simulations over a much wider mass range, so it is to some degree justified to use the resulting mass function for an extrapolation to masses lower than our observational limits. \section{The combined matter density fraction} From the galaxy and cluster mass function determined as described above, we derive the matter density fraction contained in all collapsed objects inside $M_{200}$ above a certain limiting mass. For these calculations we have taken $\Omega_m = 0.282$, consistent with the best fit to the cluster abundance. The matter density fraction was calculated from $\rho_m^{-1}~ \int {dn/dm} ~m ~dm$, where ${dn/dm}$ is the differential cluster mass function. Fig. 1 shows the mass fraction in collapsed objects from galaxies to groups and clusters of galaxies. 
The dashed part of the cluster mass function shows the regime where the mass function is extrapolated to masses lower than covered by the observational data. The galaxy halo mass fraction was estimated from the luminosity function of galaxies (Blanton et al. 2001) multiplied with the mass-to-light ratio, $\rho_m={\cal L}_r\times \langle M/L_r\rangle$, where $\langle M/L_r\rangle \simeq 90\pm 20h^{-1}$ and $\cal L_{\rm r}$ is the galaxy luminosity density in the $r$ band. The galaxy halo mass function is observationally constrained to $M>10^{11.2}M_\odot$. We note that at the low masses the two functions match perfectly, even though they have been derived from very different observational data sets. In Fig. 2 we show the differential form of the matter density fraction derived from galaxy group and cluster observations. It is derived from the mass function through $\rho_m^{-1} {dn \over d\ln ~m} ~m $. This curve illustrates which object population contributes most to the matter density. We see a broad maximum for the mass range $M_{200} \sim 10^{12} - 10^{14}~ h^{-1}_{70} M_\odot$. \begin{figure} \includegraphics[width=\columnwidth]{Fukugita_2.ps} \caption{Differential matter density fraction for groups and clusters of galaxies } \label{fig2} \end{figure} Fig. 3 shows the local power law index (logarithmic slope) of the cumulative mass function and of the function of the matter fraction of groups and clusters of galaxies. We find that the matter fraction saturates at masses lower than about $10^{11} M_\odot$, with a further increase of not more than 1\%. This originates from a flattening of the cumulative mass function. In our previous study we have fitted a Schechter function as an approximation to the observed cumulative mass function of groups and clusters and found a low mass slope of about -1 (B\"ohringer et al. 2017) for the mass range covered by observations, $ \ge 3\times10^{12}h^{-1}_{70} M_\odot$. Fig. 
3 shows that the slope of the numerical function flattens further below this limit to an asymptotic value of about $-0.35$ (i.e. $\alpha = - 1.35$ with ${dn\over dM} \propto M^{\alpha}$). This corresponds to the insignificant increase seen in the mass fraction. \section{Discussion and Conclusion} We see in Fig. 1 that the matter density fractions in galaxy halos and in clusters match well at the low-mass end, as do the underlying cumulative mass functions. The mass fraction of the galaxy group and cluster function reaches a saturation value of $\Omega_{\rm cluster virial}/\Omega_m=0.28(1\pm0.02)$ and the galaxy luminosity function leads to $\Omega_{\rm galaxy virial}/\Omega_m=0.28(\pm0.08)$. This provides a convergent answer for the mass contribution of collapsed objects if the region considered is restricted to the pseudovirial radii. This means that the bulk of the mass is in intergalactic space. We note that the result for galaxy halos does not change when we use the values relevant to other colour bands. With other colour band results (Blanton et al. 2001; McKay et al. 2002), we obtain the mass density ratios $u:g:r:i:z=0.64: 1.14: 1: 0.99: 1.03$, where we have normalised the values to the $r$-band result. With the exception of the $u$-band, we have a convergent answer with variations well within the uncertainties, and we can take the value from the $r$-band as the representative mass of haloes within the pseudovirial radius of galaxies. \begin{figure} \includegraphics[width=\columnwidth]{Fukugita_3.ps} \caption{Logarithmic slope of the cumulative mass function (solid line) and the matter density fraction (dashed line) of groups and clusters. }\label{fig3} \end{figure} It is interesting to see that the cluster-group mass fraction function departs from the galaxy mass fraction for $M > 3 \times 10^{11}h_{70}^{-1}$, reflecting the cooling processes at work in galaxies. 
This leads to the observed high-mass cutoff of the mass function of galaxies, while the high-mass cutoff for clusters and groups is purely set by the initial conditions and the gravitational physics. We see in Fig. 2 that most of the mass is contained in objects in the mass range $M_{200} \sim 10^{12} - 10^{14} h^{-1}_{70} M_\odot$. It is worth noting that this is the range of structures where the variance of the density fluctuations in the linearly extrapolated density fluctuation field, usually designated by $\sigma(M)$, is close to unity. For the quoted mass range we find $\sigma(M) = 0.8 - 1.9$ for the structure formation model fitting the cluster data best. In the model, $\sigma(M) = 1$ is reached at $M_{200} \sim 5\times10^{13} h^{-1}_{70} M_\odot$. This is the mass scale where most object formation takes place at the present epoch, and it is thus not surprising to find most matter in collapsed objects in this mass range. The observations imply that substantially more mass is distributed beyond the pseudovirial radius $r_{200}$, for both galaxies and clusters, while it is customary to adopt $r_{200}$ to define the cluster. The pseudovirial sphere contains only 28\% of the matter density in the Universe. This is in good agreement with the N-body result, which gives 26\% for the mass fraction contained within $r_{200}$. This increases to 45\% within $2.2r_{200}$ and to 70\% if the radius of the sphere is taken to be 10 times $r_{200}$ (MFY). Our results show that galaxies and clusters live at the peak patches of the density field, and that most of the mass is present in intergalactic space. We stress that this differs from the distribution of the luminous component, which should have an edge of its distribution, corresponding to the cooling radius of the baryons. We expect that gas behaves similarly to dark matter at cosmological scales, where we see, at large radii, no reason to segregate gas from dark matter. 
So the fractions we discussed here are likely to apply similarly to the distribution of baryons. \section*{Acknowledgements} MF thanks the late Yasuo Tanaka for the hospitality at the Max-Planck-Institut f\"ur Extraterrestrische Physik and also Eiichiro Komatsu at the Max-Planck-Institut f\"ur Astrophysik in Garching, where the bulk of this work was done. He also expresses his thanks to the Alexander von Humboldt Stiftung for the support during his stay in Garching. He received in Tokyo a Grant-in-Aid (No. 154300000110) from the Ministry of Education in Japan. H.B. would like to thank Gyoung Chon for her role in the compilation and construction of the REFLEX data and for discussions.
\section{Introduction} Let ${\bf A}^n$ be the affine $n$-space with coordinates $x=(x^1,\dots,x^n)$. It is called a unimodular affine space if it is equipped with a parallel volume element, namely a determinant function. The unimodular affine group ${\rm SA}(n)={\rm SL}(n,{\bf R})\ltimes {\bf R}^n$ acts as \[ x=(x^i) \longrightarrow gx+a={\Big(}\sum_{j} g_j^ix^j + a^i{\Big)}, \quad g=(g_j^i)\in {\rm SL}(n,{\bf R}),\ a=(a^i)\in {\bf R}^n, \] which preserves the volume element. The study of geometric properties of submanifolds in ${\bf A}^n$ invariant under this group is called equiaffine differential geometry, while the study of properties invariant under the general affine group ${\rm GA}(n)={\rm GL}(n,{\bf R})\ltimes {\bf R}^n$ is called general-affine differential geometry. Furthermore, the study of the geometric properties of submanifolds in the projective space ${\bf P}^n$ invariant under the projective linear group ${\rm PGL}(n)={\rm GL}(n+1,{\bf R})/{\rm center}$ is called projective differential geometry. Equiaffine differential geometry, as well as projective differential geometry, has long been studied and has yielded a wealth of results, especially for curves and hypersurfaces; we refer to \cite{Bl,Sc,NS} and \cite{Wi, La, Bol}, to name a few references. However, general-affine differential geometry has been little studied, even for curves. The purpose of this paper is to present a basic study of plane curves and space curves in general-affine differential geometry by recalling old results and by adding some new results. In addition, we relate them to the curve theory in equiaffine and projective differential geometry. Although a study of invariants of curves of higher codimension could possibly be given by a formulation similar to that used in this paper, it would probably require a more complicated presentation and is not attempted here. 
For the studies of submanifolds in the affine space which correspond to other types of subgroups of PGL$(n)$, we refer to, {\it e.g.}, \cite{Sc}. Let us begin with plane curves relative to SA$(2)$. Let ${\bf A}^2$ be the unimodular affine plane with the determinant function $|x\ y|=x^1y^2-x^2y^1$ for two vectors $x=(x^1,x^2)$ and $y=(y^1,y^2)$, and let $x(t)$ be a curve with parameter $t$ into ${\bf A}^2$, which is nondegenerate in the sense that $|x'\ x''|\neq 0$. When, furthermore, $|x'\ x''|=1$, the parameter $t$ is called an equiaffine length parameter. In this case, it holds that $|x'\ x'''|=0$, namely $x'''$ is linearly dependent on $x'$, and we can write this dependence as \[x'''=- k_ax',\] where $ k_a$ is a scalar-valued function called the equiaffine curvature. Conversely, given a differential equation of this form, two linearly independent non-constant solutions define a curve whose equiaffine curvature is $k_a$. With reference to this presentation of equiaffine notions, we first define the general-affine length parameter and the general-affine curvature relative to the full affine group GA$(2)$ for a nondegenerate curve in Section \ref{sec:gacurve}. In contrast to the differential equation above, we have \[x'''=- \frac{3}{2} kx'' - \left(\epsilon + \frac{1}{2}k'+\frac{1}{2} k^2\right)x', \] where $k$ is the general-affine curvature and $\epsilon=\pm 1$ encodes additional information about the curve; see Section $\ref{subsectionga}$. Conversely, for a given function $k$ and $\epsilon = \pm1$, there exists a nondegenerate curve $x$ which satisfies the above equation and whose curvature is $k$, uniquely up to a general-affine motion (Theorem \ref{planenatural}). We then give remarks on the total curvature of closed curves and on the sextactic points (Corollary \ref{totalcurv}), and we study how to compute the curvature. 
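As a small numerical illustration (ours, not from the paper): for constant curvature $k\equiv 0$ and $\epsilon=1$ the general-affine equation reduces to $x'''=-x'$, so each coordinate of $x$ is a linear combination of $1$, $\cos s$ and $\sin s$ and the curve is $2\pi$-periodic (an ellipse, up to a general-affine motion). A classical fourth-order Runge-Kutta sketch confirms that a coordinate returns to its initial data after one period:

```python
import math

def rhs(k, dk, eps):
    """Right-hand side of x''' = -(3/2) k x'' - (eps + k'/2 + k^2/2) x',
    written per coordinate as a first-order system y = (x, x', x'')."""
    def f(s, y):
        x, xp, xpp = y
        return (xp, xpp,
                -1.5 * k(s) * xpp - (eps + 0.5 * dk(s) + 0.5 * k(s) ** 2) * xp)
    return f

def rk4(f, y0, s0, s1, n):
    """Classical fourth-order Runge-Kutta integration from s0 to s1."""
    h = (s1 - s0) / n
    s, y = s0, tuple(y0)
    for _ in range(n):
        k1 = f(s, y)
        k2 = f(s + h / 2, tuple(v + h / 2 * w for v, w in zip(y, k1)))
        k3 = f(s + h / 2, tuple(v + h / 2 * w for v, w in zip(y, k2)))
        k4 = f(s + h, tuple(v + h * w for v, w in zip(y, k3)))
        y = tuple(v + h / 6 * (a + 2 * b + 2 * c + d)
                  for v, a, b, c, d in zip(y, k1, k2, k3, k4))
        s += h
    return y

# k = 0, eps = +1: with x(0) = 0, x'(0) = 1, x''(0) = 0 the exact
# coordinate solution is x(s) = sin(s), which closes after s = 2*pi.
f = rhs(lambda s: 0.0, lambda s: 0.0, 1.0)
x_end, xp_end, xpp_end = rk4(f, (0.0, 1.0, 0.0), 0.0, 2 * math.pi, 2000)
```

The same integrator accepts any smooth curvature function $k(s)$ and its derivative, matching the existence statement of Theorem \ref{planenatural}.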
In particular, we give an expression of the curvature of graph immersions and classify plane curves with constant general-affine curvature (Proposition \ref{gaconst}), and discuss some relations with equiaffine treatment and projective treatment of plane curves. We next consider an extremal problem of the general-affine length functional and derive a variational formula, a nonlinear ordinary differential equation that characterizes an extremal plane curve, as \[ k''' + \frac{3}{2} k k'' + \frac{1}{2} {k'}^2 + \frac{1}{2} k^2 k' + \epsilon k' =0 \] (Proposition \ref{planeextremalprop}). Here we give remarks on the preceding studies: the formulation used to define general-affine curvature of plane curves in this paper is very similar to that in \cite{Mihai1}. For example, the ordinary differential equation above for $x$ and Theorem \ref{planenatural} were already given in \cite{Mihai1}. The formula of the general-affine plane curvature was given also in \cite{Sc, OST} in a different context. The variational formula above for $k$ was first given in \cite[(33)]{Mihai2}, though some modifications are necessary. The same formula for $\epsilon=1$ was then rediscovered by S. Verpoort \cite[p.432]{Ve} in the equiaffine setting. Furthermore, the author of \cite{Ve} gave a relation of solutions of this nonlinear equation with the coordinate functions of the immersion; we reprove this relation in Corollary \ref{planeextremal}. We then remark that the differential equation above is very similar to some of the nonlinear differential equations of Chazy type. Furthermore we derive the variational formula for a certain generalized curvature functional (Theorem \ref{genvarform}). When the curve is given as a graph immersion, the general-affine curvature is written as a nonlinear form of a certain intermediate function. 
It is interesting to obtain a graph immersion from a given curvature function, which is treated in Section \ref{sec:graphimmersion}: We see that its integration can be reduced to solving the Abel equations of the first kind and the second kind (Theorem \ref{abeleq}). \bigskip The second aim of this paper is to study general-affine invariants of space curves. The procedure is similar to that for the plane curves. We give the definition of general-affine space curvatures of two kinds $k$ and $M$, and the ordinary differential equation of rank four \[ x''''=-3kx'''-\left(2k'+\frac{11}{4}k^2+\epsilon\right)x'' - \left(M+\frac{1}{2}\epsilon k + \frac{1}{2}k'' +\frac{7}{4}kk' + \frac{3}{4}k^3\right)x' \] (Lemma \ref{gaspclemma}), which defines the immersion. Then we show how to obtain curvatures, and discuss relations with the equiaffine and projective treatment of space curves. In particular, we give a list of nondegenerate space curves of constant general-affine curvature and a new proof of the theorem that a nondegenerate curve in the affine $3$-space is extremal relative to the equiaffine length functional if and only if the two kinds of equiaffine curvature vanish, due to \cite{Bl} (Theorem \ref{equi3extremal}). Finally, we solve the extremal problem of the general-affine length functional: a {nondegenerate} space curve without affine inflection point is general-affine extremal if and only if the pair of ordinary differential equations \begin{align*} &k''' + \frac{3}{2} k k'' + \frac{1}{2}k^2 k' +\frac{1}{2} {k'}^2 - \frac{1}{5} \epsilon k' + \frac{6}{5} M'=0, \\ &k'' + \frac{2}{3} k k' + \frac{5 }{6}\epsilon k' M -\frac{3 }{2} \epsilon k M'- \epsilon {M''}=0 \end{align*} are satisfied (Theorem \ref{spaceextremal}). 
Then, we discuss a similarity between the nonlinear differential equations for the curvature functions, one for plane curves and the other for space curves belonging to a linear complex, {\it i.e.}, $M=\epsilon k$ (Corollaries \ref{lincomplex}, \ref{planetospace}). In the Appendix, we discuss the projective treatment of plane curves and space curves and present the variational formula of the projective length functional by use of the method in this paper. Theorem \ref{cartanplane} reproduces the variational formula for projective plane curves due to E.~Cartan, and Theorem \ref{projspcextremal} gives the variational formula for projective space curves, which is essentially due to \cite{Ki}. Furthermore, we treat nondegenerate projective homogeneous space curves, called ``W-Kurve''. The list of such curves may be found elsewhere, but, nonetheless, we give a list in Appendix C for later reference. In this paper, we use the classical moving frame method; refer to \cite{Ca2, ST}. For equiaffine differential geometry and its terminology, we refer to the books \cite{NS, Bl, Sc}, and, for the projective treatment of curves, to E. J. Wilczynski \cite{Wi} and E. P. Lane \cite{La}. \section{General-affine curvature of plane curves} \label{sec:gacurve} Let $x:M\longrightarrow {\bf A}^{2}$ be a curve into the $2$-dimensional affine space, where $M$ is a $1$-dimensional parameter space. Let $e=\{e_1,e_{2}\}$ be a frame along $x$; at each point of $x(M)$ it is a set of independent vectors of ${\bf A}^{2}$ that depends smoothly on the parameter. 
The vector-valued $1$-form $dx$ is written as \begin{equation} \label{dx} dx = \omega^1 e_1 + \omega^2 e_2, \end{equation} and the dependence of $e_i$ on the parameter is described by the equation \begin{equation} \label{dei} de_i = \sum_j \omega_i^j e_j, \end{equation} where $\omega^j$ and $\omega_i^j$ are $1$-forms, and the matrix of $1$-forms \[ \Omega = \left(\begin{array}{cc} \omega^1 & \omega^2 \\ \omega_1^1 & \omega_1^2 \\ \omega_2^1 & \omega_2^2 \end{array} \right) \] is called the coframe. \subsection{Choice of frames for plane curves and general-affine curvature}\label{frames} We now reduce the choice of frames in order to define certain invariants. First, we assume $\omega^2=0$, which means that $e_1$ is tangent to the curve, and we set $\omega^1=\omega$ for simplicity. The vector $e_2$ is arbitrary at present, as long as it is independent of $e_1$. Let $\tilde{e}=\{\tilde{e}_1, \tilde{e}_2\}$ be another choice of such a frame. Then, it is written as \[\tilde{e}_1 = \lam e_1,\qquad \tilde{e}_2 = \mu e_1 + \nu e_2,\] where $\lam\nu \neq 0$. The coframe is written as $\tilde{\omega}$ and $\tilde{\omega}_i^j$, which satisfy \[ dx=\tilde{\omega}\tilde{e}_1, \qquad d\tilde{e}_i = \sum_j \tilde{\omega}_i^j\tilde{e}_j. \] Then we certainly have \begin{equation}\label{tildeom} \tilde{\omega} = \lam^{-1}\omega. \end{equation} Since $d\tilde{e}_1$ can be represented in two ways, one being \[ d\tilde{e}_1 = (d\lam) e_1+ \lam (\omega_1^1e_1+\omega_1^2e_2)\] and the other being \[ d\tilde{e}_1 = \tilde{\omega}_1^1(\lam e_1)+\tilde{\omega}_1^2(\mu e_1+\nu e_2),\] by comparing the coefficients of $e_1$ and $e_2$ in these expressions, we get \begin{equation}\label{chg1} \begin{array}{rcl} & \lam \omega_1^2 = \nu \tilde{\omega}_1^2,& \\ & d\lam + \lam \omega_1^1 = \lam \tilde{\omega}_1^1 + \mu \tilde{\omega}_1^2. 
& \end{array} \end{equation} Similarly, by considering $d\tilde{e}_2$, we have \begin{equation}\label{chg2} \begin{array}{rcl} & \mu \omega_1^2+d\nu +\nu \omega_2^2 = \nu \tilde{\omega}_2^2,& \\ & d\mu + \mu \omega_1^1 +\nu \omega_2^1 = \lam \tilde{\omega}_2^1 + \mu\tilde{\omega}_2^2. & \end{array} \end{equation} Since the immersion is $1$-dimensional, we can set $\omega_1^2=h\omega$ and $\tilde{\omega}_1^2=\tilde{h}\tilde{\omega}$, and then the first identity of $(\ref{chg1})$ implies that \[ \tilde{h}=\nu^{-1} \lam^2 h.\] Hence the property that $h$ is nonvanishing is independent of the frame and we assume in the following that it is nonvanishing. Such a curve is said to be \textsl{nondegenerate}. Geometrically, this property means that the curve is locally strictly convex at each point. By the identity above, provided that $h$ is nonzero, we can choose a frame $\tilde{e}$ so that $\tilde{h}=1$ and we treat such frames with $h=1$ in the following. Then $\nu=\lam^2$ immediately follows. Next, we see from $(\ref{chg1})$ and $(\ref{chg2})$ that \begin{eqnarray*} & \tilde{\omega}_1^1 = \omega_1^1 + \lam^{-1}d\lam - \mu\lam^{-2}\omega, & \\ & \tilde{\omega}_2^2 = \omega_2^2 + \nu^{-1}d\nu + \mu\nu^{-1}\omega. & \end{eqnarray*} Hence, \[ 2\tilde{\omega}_1^1-\tilde{\omega}_2^2 = 2\omega_1^1-\omega_2^2 -3\mu\lam^{-2}\omega,\] which means that we can choose $\mu$ so that $2\tilde{\omega}_1^1-\tilde{\omega}_2^2 = 0$, and, by considering only such frames in the following, we must have $\mu=0$. Thus, we have determined the frame $e$ up to a change of the form \[ \tilde{e}_1 = \lam e_1, \qquad \tilde{e}_2 = \lam^2 e_2.\] We call the direction determined by $e_2$ the \textsl{general-affine normal direction}. Furthermore, we have \[\tilde{\omega}_1^1 = d\log \lam + \omega_1^1, \qquad \tilde{\omega}_2^1=\lam\omega_2^1.\] From the first identity, we can choose $\lam$ so that $\tilde{\omega}_1^1=0$. 
Hence, we consider the frame with $\omega_1^1=0$ and $\lam$ is assumed to be constant. From the second identity, by setting \[\omega_2^1=-\ell \omega,\] and, similarly, $\tilde{\omega}_2^1= - \tilde{\ell}\tilde{\omega}$, we get \begin{equation} \label{elltrans} \tilde{\ell} = \lam^2\ell. \end{equation} We call this scalar function $\ell$ the \textsl{equiaffine curvature}, see Section \ref{equigen}, or \textsl{affine mean curvature}, in analogy with equiaffine theory of hypersurfaces, though it still depends on the frame chosen. A point where $\ell=0$ is called an \textsl{affine inflection point}. For its geometrical meaning, we refer to Section $\ref{subsc:graph}$ and \cite{IS}. Thus, we have seen that, given a nondegenerate curve $x$, there exists a frame $e$ with coframe of the form \begin{equation}\label{eq:coframeforequi} \left( \begin{array}{cc} \omega & 0 \\ 0 & \omega \\ -\ell \omega & 0 \end{array}\right) \end{equation} and that such frames are related by $\tilde{e}_1=\lam e_1$ and $\tilde{e}_2=\lam^2 e_2$ for a nonzero constant $\lam$. In the following, given a curve $x=x(t)$ with parameter $t$, we assume that the vector $e_1$ is a positive multiple of the tangent vector $dx/dt$. Then, the choice of $\lam$ is limited to be positive and the form $\omega$ is a positive multiple of $dt$. We now assume $\ell\neq 0$ and let $\epsilon$ denote the sign of $\ell$: \[\epsilon={\rm sign}(\ell).\] It is a locally defined invariant of the curve called the \textsl{sign} of the curve. Then we define a form \begin{equation} \omega_s = \sqrt{\epsilon \ell}\omega, \end{equation} which is uniquely defined, independent of the frame, in view of $(\ref{tildeom})$ and $(\ref{elltrans})$. We call this form the \textsl{general-affine length element} and call the parameter $s$ such that $ds=\omega_s$ the \textsl{general-affine length parameter}, determined up to an additional constant. 
\begin{definition} \label{planecurv} We call the scalar function $k$ defined as \[ \frac{d\ell}{\ell} = k\omega_s,\] the {\rm general-affine curvature}. In other words, \begin{equation} k={d\log\ell\over ds}. \end{equation} \end{definition} We define a new frame $\{E_1, E_2\}$ by setting \begin{equation}\label{eq:E1E2} E_1 = \frac{1}{\sqrt{\epsilon\ell}}e_1,\qquad E_2=\frac{1}{\epsilon \ell}e_2. \end{equation} Then, \[dx = \omega_s E_1.\] For another frame $\{\tilde{e}_1,\tilde{e}_2\}$ where $\tilde{e}_1=\lambda e_1$ and $\tilde{e}_2 = \lambda^2 e_2$, we similarly define $\tilde{E}_1$ and $\tilde{E}_2$. Then we can see that \[ \tilde{E}_1={1\over \sqrt{\epsilon\tilde{\ell}}}\tilde{e_1} ={1\over \sqrt{\epsilon \lambda^2\ell}}\lambda e_1 = E_1\] and \[\tilde{E}_2={1\over \epsilon\tilde{\ell}}\tilde{e_2} ={1\over \epsilon \lambda^2\ell}\lambda^2 e_2 = E_2. \quad \;\;\] Thus, we have proved the following. \begin{proposition} Assume $\ell\neq 0$. Then, the frame $\{E_1, E_2\}$ is uniquely defined from the immersion and it satisfies a Pfaffian equation \begin{equation} \label{pfaff} d \left(\begin{array}{c} x \\ E_1 \\ E_2\end{array}\right) = \Omega \left(\begin{array}{c} E_1 \\ E_2 \end{array}\right); \qquad \Omega = \left( \begin{array}{cc} \omega_s & 0 \\ -{1\over 2}k\omega_s & \omega_s \\ -\epsilon \omega_s & -k\omega_s \end{array}\right), \end{equation} where $\omega_s$ is the general-affine length form, $k$ is the general-affine curvature and $\epsilon$ is ${\rm sign}(\ell)$. \end{proposition} By use of this choice of frame, we have the following lemma. \begin{lemma}\label{lem:kode} The immersion $x$ satisfies the ordinary differential equation \begin{equation} \label{kode} x'''+{3\over 2}kx'' + \left(\epsilon + {1\over 2}k'+{1\over 2}k^2\right)x'=0, \end{equation} relative to a general-affine length parameter. \end{lemma} \noindent Proof. 
The equation $(\ref{pfaff})$ shows that $x'=E_1$, $E_1'=-{1\over 2}kE_1 + E_2$, and $E_2'=-\epsilon E_1-kE_2$, where the derivation $\{{}'\}$ is taken relative to the length parameter. Then, combining these derivations, we easily obtain the differential equation above. \begin{remark} \label{paramsign} {\rm The definition of the curvature depends on the orientation of the parameter $t$. If we let the parameter be $u=-t$ and denote by an overhead dot the derivation relative to $u$, then we have \[ \threedot{x}-{3\over 2}k \ddot{x} + \left(\epsilon - {1\over 2}\dot{k} +{1\over 2}k^2\right)\dot{x}=0. \] Namely, the curvature changes sign and its absolute value is a true invariant independent of the orientation of the parameter.} \end{remark} With this remark in mind, we have the following theorem. \begin{theorem}[\cite{Mihai1}] \label{planenatural} Given a function $ k(t)$ of a parameter $t$ and $\epsilon=\pm 1$, there exists a nondegenerate curve $x(t)$ for which $t$ is a length parameter, $k$ the curvature function and $\epsilon$ the sign of $\ell$, uniquely up to a general-affine transformation. \end{theorem} \noindent Proof. Given $k$ and $\epsilon$, we solve the ordinary differential equation in \eqref{kode} to get the vector $x'(t)$, which is determined up to a general linear transformation. Then, we get $x(t)$ up to an additional translation by a constant vector; that is, the curve $x(t)$ is determined up to a transformation in GA$(2)$. Theorem \ref{planenatural} and the ordinary differential equation \eqref{kode} were first given by T. Mih$\breve{\rm a}$ilescu in \cite{Mihai1}, to the authors' knowledge; refer also to \cite{Mihai2} and \cite{CG}. \begin{example} \label{gacircle} Ellipse and Hyperbola. {\rm Let $x$ denote an ellipse $(a\cos\theta,b\sin\theta)$ or a hyperbola $(a\cosh\theta, b\sinh\theta)$. Then, $x'''=-\epsilon x'$, where $\epsilon=1$ for the ellipse and $\epsilon=-1$ for the hyperbola.
It is easy to see that $\theta$ is a general-affine length parameter, see \eqref{dstwo2}. Hence, $k=0$. } \end{example} According to this example, we say that a nondegenerate curve is of \textsl{elliptic {\rm (resp.} hyperbolic{\rm )} type} if $\epsilon=1$ (resp. $\epsilon=-1$). We call the vector $E_2$, uniquely defined when $\ell\neq 0$, the \textsl{general-affine normal} and the map $t \longmapsto E_2$ the general-affine Gauss map. Then, in analogy with affine spheres in equiaffine differential geometry, it is natural to call a curve such that the map $E_2$ passes through one fixed point a \textsl{general-affine circle}. For the ellipse or the hyperbola, $E_2= x'' $ and it holds that \[x+\epsilon E_2=0,\] and thus it is a general-affine circle. Conversely, if a curve $x$ is such a circle, there exist a scalar function $r(t)$ and a fixed vector $v$ such that \[ x+ rE_2=v.\] Differentiation then implies that $dx+ dr E_2 + rdE_2=0$, which induces, by the identity $(\ref{pfaff})$, $(1-\epsilon r)\omega_s E_1 + (dr-kr\omega_s)E_2=0$. Hence, $r=\epsilon$ is constant and $k=0$. Then, by integrating the differential equation $(\ref{kode})$ when $k=0$, we see that any general-affine circle is general-affinely congruent to (a part of) an ellipse or a hyperbola. We also refer to Example \ref{ellzero}. \subsection{Total curvature and sextactic points} The formula $(\ref{pfaff})$ implies the identity \begin{equation} \label{traceOm} d\log\left(\left|\det\left(\begin{array}{c} E_1\\E_2\end{array}\right) \right|\right) = -{3\over 2}k\omega_s, \end{equation} where $\det$ is taken relative to any unimodular structure of the space ${\bf R}^2$. This formula shows the following corollary immediately. \begin{corollary} \label{totalcurv} Assume that the curve $C$ is nondegenerate and closed, and has no affine inflection point. Then, the total curvature $\int_C k\omega_s$ vanishes. In particular, such a curve has at least two general-affine flat points.
\end{corollary} As we will see in Section \ref{gatoproj}, any general-affine flat point, where $k=0$ by definition, is nothing but a sextactic point. A classical theorem due to Mukhopadhyaya, and also to G. Herglotz and J. Rado (we refer, {\it e.g.}, to \cite{ST,TU}), states that the number of sextactic points of a strictly convex simple closed smooth curve is at least six. In other words, on such a curve there are at least six general-affine flat points. Furthermore, in analogy with the Euclidean theory of plane curves, it is natural to introduce the notion of a \textsl{general-affine vertex}, a point where $k$ is extremal. The corollary above says that any nondegenerate closed curve without affine inflection point has at least two general-affine vertices. In fact, Example \ref{rose2} shows that there exists a plane curve which has exactly two general-affine vertices. \subsection{Computation of general-affine curvature of plane curves} \label{subsectionga} In this subsection, we will see how to obtain the curvature of a curve given relative to a parameter that is not necessarily a length parameter. Let $t\longrightarrow x=x(t)\in {\bf A}^2$ be a nondegenerate curve so that the vectors $x'$ and $x''$ are linearly independent. Then the derivative $x'''$ is written as a linear combination of $x'$ and $x''$: there are scalar functions $a=a(t)$ and $b=b(t)$ such that \begin{equation} \label{xode} x''' = a x'' + b x'. \end{equation} Since $dx=x'\, dt$, the frame vector $e_1$ is a scalar multiple of $x'$: \begin{equation}\label{eq:frame} dx = \omega \, e_1;\qquad e_1=\lambda x',\quad \omega = \lambda^{-1}dt. \end{equation} Then, the derivation \[ de_1=\lambda ( \lambda x'' + \lambda' x')\, \omega\] implies that the second frame vector is \[ e_2=\lambda ( \lambda x'' + \lambda' x').\] In order for the frame $\{e_1, e_2\}$ to be chosen as in Section \ref{frames}, the vector $de_2$ must be a multiple of $e_1$.
Since \begin{eqnarray*} de_2 &=& \lambda(\lambda^2x'''+3\lambda\lambda'x''+ (\lambda\lambda''+{\lambda'}^2)x')\, \omega \\ &=& \{\lambda^2(\lambda a + 3\lambda')x'' + \lambda(\lambda^2b+\lambda\lambda''+{\lambda'}^2)x'\}\, \omega, \end{eqnarray*} we have \begin{equation}\label{eq:lambda} \lambda a + 3\lambda'=0, \quad {\it i.e.}\quad \lambda = e^{-{1\over 3}\int a(t) dt} \end{equation} up to a positive constant multiple, and by definition, \begin{equation}\label{eq:ell} \ell = - (\lambda^2b+\lambda\lambda''+{\lambda'}^2). \end{equation} We now assume that $\ell\neq 0$ and recall that $\epsilon = {\rm sign} (\ell)$. Then, we have \[ ds^2 = -\epsilon\left( b+{\lambda\lambda''+\lambda'^2\over \lambda^2}\right) dt^2. \] In terms of $a$ and $b$, \begin{equation}\label{dstwo2} ds^2 =-\epsilon \left(b+{2\over 9}a^2 - {1\over 3}a' \right) dt^2. \end{equation} Hence, a length parameter $s$ which is a function of $t$ is obtained by solving the equation \[ \left({ds \over dt}\right)^2 =-\epsilon \left(b+{2\over 9}a^2 - {1\over 3}a'\right).\] If in particular $t$ itself is a length parameter, then we must have \begin{equation}\label{length} \ell=\lambda^2\epsilon, \qquad -b - {2\over 9}a^2 + {1\over 3}a'=\epsilon. \end{equation} Assume now that the curve $x(t)$ is given relative to a length parameter $t$. Then the differential equation $(\ref{xode})$ is written as \begin{equation} \label{gaode} x''' = a x'' + \left({1\over 3}a'-{2\over 9}a^2 -\epsilon \right) x', \end{equation} and the curvature is \begin{equation} \label{odecurv} k = {d\log (\epsilon\lambda^2 ) \over dt} = -{2\over 3}a, \end{equation} which agrees with the expression of the coefficient in the equation $(\ref{kode})$. 
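The step from $(\ref{eq:ell})$ to $(\ref{dstwo2})$ can be verified symbolically. The following sketch (ours, using sympy; the variable names are ad hoc) checks the identity $\lambda\lambda''+{\lambda'}^2=\lambda^2\left({2\over 9}a^2-{1\over 3}a'\right)$ for $\lambda=e^{-{1\over 3}\int a\,dt}$:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)

# lambda = exp(-(1/3) * Integral(a, t)), as in (eq:lambda)
lam = sp.exp(-sp.Rational(1, 3) * sp.Integral(a, t))

# The identity behind (dstwo2): with this lambda,
#   lam*lam'' + lam'^2 = lam^2 * (2*a^2/9 - a'/3),
# so that ell = -(lam^2*b + lam*lam'' + lam'^2) yields
#   ds^2 = -eps*(b + 2*a^2/9 - a'/3) dt^2.
lhs = lam * sp.diff(lam, t, 2) + sp.diff(lam, t)**2
rhs = lam**2 * (sp.Rational(2, 9) * a**2 - sp.diff(a, t) / 3)
residual = sp.simplify(lhs - rhs)  # identically zero
```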
\medskip For another parameter $\sigma=\sigma(t)$, we write \[ y(\sigma) = x(t).\] Then, a calculation shows that \[ \threedot{y}(\sigma) = A(\sigma)\ddot{y}(\sigma)+B(\sigma)\dot{y}(\sigma),\] where \begin{eqnarray} &A(\sigma) = \displaystyle \left(a-{3\sigma''\over \sigma'}\right){1\over \sigma'},& \label{newa}\\ &B(\sigma) = \displaystyle \left(b + a{\sigma''\over \sigma'} - {\sigma'''\over \sigma'}\right){1\over \sigma'^2}.& \label{newb} \end{eqnarray} We note that there holds a covariance relation: \[ B+{2\over 9}A^2-{1\over 3}\dot{A} =\left(b+{2\over 9}a^2-{1\over 3}a'\right){1\over \sigma'^2}.\] Thus, we have the following procedure to obtain the curvature: \bigskip \noindent {\sf Procedure for computing curvature \begin{enumerate} \item Given a curve $x(t)$, derive the differential equation $(\ref{xode})$. \item Compute $L=\left(-b-{2\over 9}a^2 + {1\over 3}a' \right)$ and define $\epsilon$ by $\epsilon={\rm sign}(L)$. \item Compute the length parameter $\sigma$ by solving $d\sigma = \sqrt{\epsilon L}dt$. \item Compute $A$ by $(\ref{newa})$; then, $-{2\over 3}A$ is the curvature. \end{enumerate} } \begin{example} \label{logsp} {\rm A logarithmic spiral is the curve $x(t)=e^{\gamma t}(\cos \alpha t, \sin \alpha t)$. It is easy to see that $x''' = 2\gamma x'' - (\gamma^2+\alpha^2)x'$. Hence, $a=2\gamma$, $b=-\gamma^2 -\alpha^2$, and $L=\gamma^2/9+\alpha^2$; hence, $\epsilon=1$. The length parameter is $s=\sqrt{\gamma^2/9+\alpha^2}\,t$ and, by rewriting the equation with this $s$, the coefficient $a$ is multiplied by $1/\sqrt{\gamma^2/9+\alpha^2}$. Hence, the curvature is the constant $-4\gamma/\sqrt{\gamma^2+9\alpha^2}$. We require here that $\gamma\neq 0$, since the curve when $\gamma=0$ is a circle, which we will consider in Example $\ref{ellzero}$, and also that $\alpha\neq 0$, so that the curve is nondegenerate. Then, it is easy to see that the possible values of $k$ range over $0<|k|<4$.
} \end{example} \begin{example} \label{catenary} {\rm The catenary curve is defined as $x(t)=(t,\cosh(t))$. The equation is \[x'''=\tanh(t)x''.\] Then $L=-(2\cosh(t)^2-5)/(9\cosh(t)^2)$, which vanishes at $2\cosh(t)^2-5=0$ ($t=\pm 1.031\dots$), where the curvature is undefined. For the values of $t$ where $L<0$, on the dotted part of the curve in the left-hand picture in Figure 1, $\epsilon=-1$, and elsewhere $\epsilon=1$. As the picture shows, it is difficult to ``see'' where $L$ vanishes and how $\epsilon$ changes its value.} \end{example} \begin{example} \label{rose2} {\rm The curve $x(t)=(\cos(nt)\cos(t), \cos(nt)\sin(t))$ is called a rose curve. Now choose $n=1/3$ with the range $t\in [0,3\pi]$; it is shown in Figure 1 (right). It satisfies the equation \[ x''' = -{8\sin(t/3)T\over 3(1+4T^2)}x'' -{4(7+8T^2)\over 9(1+4T^2)}x',\] where $T=\cos(t/3)$. A computation shows that $L>0$, $\epsilon=1$ and the length parameter $\sigma$ is defined by $d\sigma = {2\over 9}{\sqrt{256T^2+320T^4+69}\over 1+4T^2}dt$, and the curvature $k(t)$ is defined for all values of $t$. It vanishes at $t=0, 3\pi/2$. Moreover, it is easy to see that the number of general-affine vertices is two. Thus the rose curve with $n=1/3$ attains the minimum number of general-affine vertices amongst nondegenerate closed plane curves without affine inflection point. } \end{example} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=6cm]{catenary-k.eps} \qquad & \qquad \includegraphics[width=6cm]{planerose2-k.eps} \\ \multicolumn{2}{c}{Figure 1. {\rm Catenary (left) and rose curve (right)}} \end{tabular} \end{figure} \comment{ \begin{example} \label{rose3} {\rm The curve $x(t)=(\sin(3t)\cos(t), \sin(3t)\sin(t))$ is one of the rose curves.
It satisfies the equation \[x''' = -{24 \cos(3t)\sin(3t)\over 4\cos(3t)^2+5}x'' -{4(35-8\cos(3t)^2)\over 4\cos(3t)^2+5}x'.\] A computation shows that $\epsilon=1$ and the length parameter $\sigma$ is defined by $d\sigma = {2\sqrt{205-16\cos(3t)^2}\over 4\cos(3t)^2+5}dt$, and that the curvature $k(t)$ is defined for all values of $t$, and has the symmetry $k(\pi/3-t)=-k(t)$. } \end{example} } \subsection{General-affine curvature of a graph immersion}\label{subsc:graph} Let us consider the nondegenerate curve given by a graph immersion $x(t)=(t, f(t))$. We will find the formula for the curvature in terms of the function $f$ and show fundamental examples of graph immersions. Note that, since $x$ is nondegenerate, $x''=(0, f'')\neq 0$; hence we may assume $f''>0$ in the following. Since $x'''=(0,f''')$, the coefficients of the differential equation $(\ref{xode})$ are \[ a={f'''\over f''}, \qquad b=0.\] Hence, \[\lambda = e^{-\frac{1}{3} \int a(t) dt} = (f'')^{-1/3}\] up to a constant multiple. With this $\lambda$, we have \[ \ell = -(\lambda\lambda'' + (\lambda')^2)\] and the length element is \[ ds^2 = -\epsilon \lambda^{-2}(\lambda\lambda'' + (\lambda')^2) dt^2.\] If we set $\mu=\lambda^2= (f'')^{-2/3}$, then \begin{equation} \label{muell} \ell=-\frac{\mu''}{2},\qquad ds^2 = -{\epsilon \mu''\over 2\mu}dt^2. \end{equation} Hence, we have the formula \begin{equation} \label{curv} k = {d\log \ell\over ds} = \sqrt{-2\epsilon\mu\over \mu''}{\mu'''\over \mu''}. \end{equation} \begin{lemma} The quantities $ds^2$ and $k^2$ are expressed by use of the function $f$ as follows$:$ \begin{equation} \label{olver} \begin{array}{c} \displaystyle ds^2 = {\left|3f''f'''' - 5(f''')^2 \right|\over 9(f'')^2} dt^2,\\ \noalign{\medskip} \displaystyle k^2 = { \left|9(f'')^2f''''' - 45f''f'''f'''' + 40(f''')^3\right|^2 \over \left|3f''f'''' - 5(f''')^2\right|^3}. \end{array} \end{equation} \end{lemma} This is seen by expressing $\mu''$ and $\mu'''$ explicitly in terms of derivations of $f$.
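Formulas $(\ref{muell})$ and $(\ref{curv})$ are easy to automate. The following sketch (ours; the function name is ad hoc) computes $\epsilon$ and $k$ of a graph immersion with sympy; for instance, it reproduces $k=-\sqrt{2}$ for $f=e^t$ and $k=-4$ for $f=t\log t$, in agreement with Example \ref{exp} below:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def graph_curvature(f):
    """epsilon and general-affine curvature k of the graph immersion
    x(t) = (t, f(t)), via mu = (f'')^(-2/3), ell = -mu''/2 (eq. (muell))
    and k = sqrt(-2*eps*mu/mu'') * mu'''/mu'' (eq. (curv))."""
    mu = sp.simplify(sp.diff(f, t, 2) ** sp.Rational(-2, 3))
    mu2 = sp.simplify(sp.diff(mu, t, 2))
    mu3 = sp.simplify(sp.diff(mu, t, 3))
    eps = sp.sign(sp.simplify(-mu2 / 2))   # eps = sign(ell)
    k = sp.simplify(sp.sqrt(-2 * eps * mu / mu2) * mu3 / mu2)
    return eps, k

eps_exp, k_exp = graph_curvature(sp.exp(t))      # expect eps = -1, k = -sqrt(2)
eps_log, k_log = graph_curvature(t * sp.log(t))  # expect eps = +1, k = -4
```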
As we will see in Section $\ref{equigen}$, $-\mu''(= 2 \ell)$ equals the equiaffine curvature up to a multiplicative constant. The factor in the right-hand side of the first expression is known, see \cite[p. 14]{Bl}. The second expression of $k^2$ was already presented in \cite[p. 54]{Sc}, which was proved from a different point of view. We refer also to \cite[p. 343]{OST} and \cite{CQ}. The differential polynomials in the numerator and denominator in the right-hand side of $k^2$ are known classically; Berzolari \cite{Be} stated that those go back to G. Monge in 1810: Sur les \'Equations diff\'erentielles des courbes du second degr\'e, Corresp. sur l'\'Ecole imp. Polytechn., Paris, N$^{\rm o}$ II, 1810, pp. 51--54. \vskip1pc We consider a nondegenerate curve around the point $t=0$. For appropriate affine coordinates, we have an expression \[ {f} = {1\over 2}t^2 + {p\over 4!}t^4 + {q\over 5!}t^5 + \cdots.\] Then, we see \[ \mu = 1- {p\over 3} t^2 - {q\over 9}t^3 + \cdots\] and \begin{equation} \label{muexp} \mu'' = -{2p\over 3} - {2q\over 3}t + \cdots. \end{equation} Hence, the property $\ell=0$ at $t=0$ means that $p=0$ and, hence, that the osculating parabola touches the curve to at least $5$-th order. In particular, $\ell\equiv 0$ for any parabola. Conversely, we have the following example. \begin{example} \label{ellzero} \textit{Curves with constant $\ell$}. {\rm We first let $\ell\equiv 0$. Then, $\mu=at+b$ or $\mu =a$, and it follows that $f=c_1(at+b)^{1/2}+c_2t+c_3$ or $f=c_1t^2+c_2t+c_3$ for certain constants $a, b, c_1, c_2, c_3$. Namely, the curve $x(t)= (t, f(t))$ is ({general-}affinely equivalent to) a parabola. If $\ell$ is a nonzero constant: $\ell=-c \neq 0$, then $\mu=ct^2+at+b$, and $f(t)=c_1(ct^2+at+b)^{1/2}+c_2t + c_3$ for constants $a,b,c,c_1,c_2,c_3$. This implies that the curve is an ellipse or a hyperbola. Thus, we have seen that the curve with constant $\ell$ is a quadric and hence the curvature satisfies $k=0$. 
} \end{example} For the ellipse ${f}=-(\alpha^2 - t^2)^{1/2}$ and the hyperbola ${f}=(\alpha^2+ t^2)^{1/2}$, we can see that $\mu=\alpha^{-4/3}(\alpha^2\pm t^2)$ and $\displaystyle ds^2 = {1\over \alpha^2\pm t^2}dt^2$. Relative to the reparametrization by use of angular variable $t=\alpha\sin\theta$ or $\alpha\sinh \theta$, we have $ds^2=d\theta^2$; namely, $ds$ is independent of $\alpha$, the ``size'' of the curve in euclidean sense. \comment{ The vector $E_2$ in \eqref{eq:E1E2} is uniquely defined when $\ell\neq 0$, which we may call the \textsl{general-affine normal}, and the map $t\rightarrow E_2$ is the general-affine Gauss map. Then, it is natural to call a curve such that the map $E_2$ passes through one fixed point a \textsl{general-affine circle}. For the ellipse or the hyperbola ${f}= -\epsilon(\alpha^2 - \epsilon t^2)^{1/2}$, it holds that \[x+\epsilon E_2=0\] and thus it is a general-affine circle. Conversely for a curve $x$ to be such a general-affine circle, there exists a scalar function $r(t)$ and a fixed vector $v$ such that \[ x+ rE_2=v.\] However, this implies that $dx+ dr E_2 + rdE_2=0$, which induces, by the identity $(\ref{pfaff})$, $(1-\epsilon r)\omega E_1 + (dr-kr\omega)E_2=0$. Hence, $r=\epsilon$ is constant and $k=0$, which implies that any general-affine circle is general-affinely congruent to (a part of) an ellipse or a hyperbola. } \begin{example} \label{exp} {\rm For the curve ${f}=e^t$, we see that $\epsilon = -1$ and $k=-\sqrt{2}$. For the curve $f = t\log t$ ($t>0$), we have $\epsilon=1$ and $k=-4$.} \end{example} \begin{example} \label{power} {\rm For the curve ${f}=t^\alpha$, we see that the curvature is \[ k(\alpha) = {-2(\alpha +1)\over \sqrt{|(2\alpha-1)(\alpha -2)|}}.\] Here we assume $\alpha \neq 0,\ \pm 1,\ 1/2,\ 2$ so that the curve is neither trivial nor quadratic. Since $\mu''=(1/9)(2\alpha-4)(2\alpha-1)t^{-2(\alpha+1)/3}$, we have $\epsilon =1$ when $1/2<\alpha<2$, and $\epsilon=-1$ when $\alpha>2$ or $\alpha<1/2$. 
Note that $k(1/\alpha)=k(\alpha)$ when $\alpha>0$, and $k(1/\alpha)=-k(\alpha)$ when $\alpha<0$. By this symmetry, in order to know the possible values of $k=k(\alpha)$, it is sufficient to consider the case $-1<\alpha<1$. Then, $\epsilon=-1$ and $k \in (-\infty,-\sqrt{2})$ for $\alpha\in (0,1/2)$; $\epsilon=-1$ and $k \in (-\sqrt{2},0)$ for $\alpha\in (-1,0)$; $\epsilon=1$ and $k \in (-\infty,-4)$ for $\alpha\in (1/2,1)$. } \end{example} Due to Theorem $\ref{planenatural}$, the curves with constant curvature are obtained by solving the equation $(\ref{gaode})$ for constant $a$: \[ x''' = a x'' + \left(-{2\over 9}a^2 -\epsilon \right) x'. \] The case where $a=0$ is the case $k=0$, treated in Example $\ref{ellzero}$. When $a\neq 0$ we have the following: \begin{proposition} \label{gaconst} When the curvature $k$ is a nonzero constant, the curve is general-affinely congruent to one of the curves in Examples $\ref{logsp}$, $\ref{exp}$ and $\ref{power}$: $e^{\gamma t}(\cos t, \sin t)$, $(t, e^t)$, $(t, t\log t)$, $(t, t^\alpha)\; (\alpha \neq 0,\ \pm 1,\ 2,\ 1/2)$. \end{proposition} \noindent Proof. We set $u=e^{-at/2}x'$. Then $u$ satisfies the equation $u'' + p u=0$, $p=\epsilon - a^2/36$. According as $p=0$, $p<0$ or $p>0$, a set of independent solutions gives a map $x$ congruent to the curve $(t, t\log t)$, to the curves $(t, t^\alpha)$ and $(t, e^t)$, or to $e^{\gamma t}(\cos t, \sin t)$, respectively, with the parameter renewed appropriately. More precisely, when $\epsilon=1$, we have the curve $(t, t^\alpha)$ ($1/2<\alpha<1$), $(t, t\log t)$, and $e^{\gamma t}(\cos t, \sin t)$ according to $k\in (-\infty,-4)$, $k=-4$, and $k \in (-4,0)$, respectively. When $\epsilon=-1$, we have $(t, t^\alpha)$ ($0<\alpha<1/2$), $(t, e^t)$, and $(t, t^\alpha)$ ($-1<\alpha<0$) according to $k\in (-\infty,-\sqrt{2})$, $k=-\sqrt{2}$, and $k \in (-\sqrt{2},0)$, respectively.
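The entries of Example \ref{power} can be spot-checked against the graph-immersion formulas of Section \ref{subsc:graph}. This sympy sketch (ours; the helper names are ad hoc) recomputes $k$ for $(t,t^{3})$ and $(t,t^{-2})$, where $f''>0$, and compares it with the closed formula $k(\alpha)$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def k_alpha(al):
    """Closed formula of Example (power): k = -2(al+1)/sqrt(|(2al-1)(al-2)|)."""
    return -2 * (al + 1) / sp.sqrt(sp.Abs((2 * al - 1) * (al - 2)))

def graph_k(f):
    """k of the graph (t, f(t)) via mu = (f'')^(-2/3) and formula (curv)."""
    mu = sp.simplify(sp.diff(f, t, 2) ** sp.Rational(-2, 3))
    mu2 = sp.simplify(sp.diff(mu, t, 2))
    mu3 = sp.simplify(sp.diff(mu, t, 3))
    eps = sp.sign(sp.simplify(-mu2 / 2))
    return sp.simplify(sp.sqrt(-2 * eps * mu / mu2) * mu3 / mu2)

k3 = graph_k(t**3)    # constant; should equal k_alpha(3) = -8/sqrt(5)
km2 = graph_k(t**-2)  # constant; should equal k_alpha(-2) = 1/sqrt(5)
```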
We remark that any curve with constant curvature is an orbit of a $1$-parameter subgroup of GA$(2)$, because of the existence and uniqueness statement in Theorem $\ref{planenatural}$. Table \ref{table:gap} gives the classification of plane curves with constant general-affine curvature $k$. We let $k\leq 0$, see Remark \ref{paramsign}, and note that the case $k = -\infty$ corresponds to the parabola defined in Example \ref{ellzero}. \bigskip \begin{table} \centering \begin{tabular}{lll } \hline curvatures & curves with $\epsilon =+1$& Examples \\ \hline $ k=0$ & $(t, - (\alpha^2 -t^2)^{1/2})$ & \ref{ellzero}\\ $-4 < k <0$ & $e^{\gamma t} ( \cos \alpha t, \sin \alpha t)\;\; (\gamma \neq 0, \alpha \neq 0)$ & \ref{logsp} \\ $k=-4$ & $(t, t \log t)\; (t>0)$ & \ref{exp} \\ $-\infty < k<-4$ & $(t, t^{\alpha})$ $(\alpha \in (1/2, 1))$& \ref{power}\\ \hline \end{tabular} \medskip \begin{tabular}{lll} \hline curvatures & curves with $\epsilon =-1$ & Examples \\ \hline $k=0$ &$(t, (\alpha^2 + t^2)^{1/2})$ & \ref{ellzero} \\ $k = - \sqrt{2}$ & $(t, e^t)$& \ref{exp} \\ $-\infty <k < 0, \; k \neq -\sqrt{2}$ & $(t, t^{\alpha})$ $(\alpha \in (0, 1/2)$ or $\alpha \in (-1, 0))$ & \ref{power}\\ \hline \end{tabular} \caption{Plane curves with constant general-affine curvature} \label{table:gap} \end{table} \subsection{From equiaffine to general-affine} \label{equigen} Since the group of equiaffine motions ${\rm SA}(2)={\rm SL}(2,{\bf R})\ltimes {\bf R}^2$ is a subgroup of the general-affine group ${\rm GA}(2)={\rm GL}(2,{\bf R})\ltimes {\bf R}^2$, any general-affine invariant is obviously an equiaffine invariant. In this subsection, we give the expressions of the general-affine length parameter and the general-affine curvature in terms of the equiaffine length parameter and the equiaffine curvature.
Let us consider the coframe \eqref{eq:coframeforequi}: \begin{equation}\label{eq:coframeforequi2} d \left(\begin{array}{c} x \\ e_1 \\ e_2\end{array}\right) = \left( \begin{array}{cc} \omega & 0 \\ 0 & \omega \\ -\ell \omega & 0 \end{array}\right) \left(\begin{array}{c} e_1 \\ e_2 \end{array}\right) \end{equation} for the frame $e = \{e_1, e_2\}$. In the equiaffine treatment, it is enough to consider only the unimodular changes of frame, {\it i.e.} $\lam \nu =1$, where $\tilde{e}_1=\lam e_1$ and $\tilde{e}_2=\nu e_2$. Because $\nu = \lam^2$, we have $\lam^{3} =1$, hence $\lam=1$. Thus $\ell$ is an absolute invariant. The scalar $\ell$ is usually denoted by $k_a$ and called the \textsl{equiaffine curvature} of a plane curve. The parameter $t$ for which $\omega = dt$ holds is called the \textsl{equiaffine length parameter}. Let $x(t)$ be a curve with equiaffine length parameter $t$. Then, it is easy to see by \eqref{eq:coframeforequi2} that $x$ satisfies \begin{equation} \label{equiaffeq} x''' = -k_a x'. \end{equation} On the other hand, when the curve is given by a graph immersion $x(t)=(t,f(t))$, where $t$ is a parameter that is not necessarily equiaffine, the equiaffine length element is $(f'')^{1/3} dt$ and the equiaffine curvature is $k_a = -{1\over 2}\mu''$ for $\mu = (f'')^{-2/3}$. As we have seen in $(\ref{muell})$, the equiaffine curvature is nothing but $\ell$ up to a constant multiple. We refer to \cite[p. 13--14]{Bl}. Now we consider the curve in view of the group GA$(2)$. Then, the differential equation $(\ref{equiaffeq})$ above shows that the general-affine length element is \[ ds = |k_a|^{1/2} dt. \] We rewrite the differential equation using the parameter $s=s(t)$: we set $y(s)=x(t)$ and let $\{\dot{}\}$ denote the derivation relative to $s$. Then, we get the equation: \[ \threedot{y} = A(s) \ddot{y} + B(s)\dot{y},\] where $A(s)$ and $B(s)$ are given by use of \eqref{newa} and \eqref{newb}. For simplicity, we set $K=|k_a|$. Since $s'=ds/dt = K^{1/2}$ and $s''={1\over 2}K' K^{-1/2}$, we see that $A(s) = -{3\over 2}K'K^{-3/2}$.
Therefore, the general-affine curvature of $x(t)$ is \[ k= K'{K^{-3/2}}.\] The quantity of this form was already treated in \cite[p. 24]{Bl} by dimension considerations to get an invariant relative to similarity transformation. There a remark was given that the curves with constant ${K'K^{-3/2}}$ consist of parts of curves called ``$W$-Kurve'', discussed by Klein and Lie \cite{KL}. \bigskip \subsection{From general-affine to projective} \label{gatoproj} It was G. H. Halphen \cite{Ha} who began a systematic study of projective curves in view of ordinary differential equations. Later, E. J. Wilczynski gave a classical treatment of projective curves in the book \cite{Wi}. Also, the books by E. P. Lane \cite{La} are standard references for this subject. In this subsection, we recall the definition of the projective length element and the projective curvature, and give the expressions of such invariants in terms of general-affine invariants. A nondegenerate curve in ${\bf P}^2$ with parameter $t$ is described by an ordinary differential equation of the form \begin{equation} \label{projone} y''' + P_2 y' + P_3 y=0, \end{equation} such that a set of three independent solutions, say, $x^1$, $x^2$, $x^3$ defines a map $t\rightarrow [x^1,x^2,x^3]\in {\bf P}^2$, where $[\quad ]$ denotes homogeneous coordinates. For this equation, the form \begin{equation} \label{projlen} P^{1/3}dt,\quad {\rm where} \quad P=P_3 -{1\over 2}P_2', \end{equation} is called the projective length element. Furthermore, when $t$ itself is a projective length parameter, the equation can be transformed by a certain change of variables from $y$ to $z=\lambda y$ into the equation of the form \begin{equation} \label{halphen} z''' + 2k_p z' + (1+k_p')z=0, \end{equation} which is called the Halphen canonical form. 
Then, the coefficient $k_p$ is called the \textsl{projective curvature} and is given by the formula \begin{equation} \label{p2curv} k_p=P^{-2/3}\left({1\over 2}P_2 - {1\over 3}{P''\over P} + {7\over 18}\left({P'\over P}\right)^2\right); \end{equation} we refer to \cite[p. 71]{Ca2}. In particular, when $k_p$ is constant, the curve is called an anharmonic curve and is obtained by integrating the differential equation $z''' + 2k_p z' + z =0$; we refer to \cite[p. 86--91]{Wi}. The list of anharmonic curves is the same as the list of plane curves with constant general-affine curvature in Section \ref{subsc:graph}, up to projective equivalence; we note that the sign $\epsilon$ plays no role in the projective classification. In the general-affine setting we had a differential equation $(\ref{gaode})$, which can be transformed into the equation of the form $(\ref{projone})$ by changing $x$ into $y=e^{-{1\over 3}\int a dt}x$. The result is \[ y''' = \left({1\over 9}a^2 -{2\over 3}a' - \epsilon \right) y' +\left({1\over 9}aa' - {1\over 3}a'' - {1\over 3}a\epsilon\right)y.\] Hence, we can see \[ P= {1\over 3}a\epsilon;\] this implies that the projective length element is $a^{1/3}dt$ up to a scalar multiple, while $-{2\over 3}a$ is the general-affine curvature. In particular, the point where the general-affine curvature vanishes is the point where the invariant $P$ vanishes, which is classically called a \textsl{sextactic point}. We remark that, in \cite{SS}, S. Sasaki showed how to obtain the projective length parameter and the projective curvature directly from the equiaffine curvature. \section{General-affine extremal plane curves and the associated differential equation} In Section $\ref{frames}$, we have defined the general-affine length element $\omega_s =\sqrt{\epsilon \ell} \omega$ with $\epsilon = {\rm sign} (\ell)$.
It defines the length functional for general-affine curves, and in this section we consider the curves that are extremal with respect to this functional. First, we prove the variational formula, which is a nonlinear differential equation relative to the general-affine curvature, then discuss some special solutions with reference to Chazy equations. Second, we compute the variational formula for a more generally defined curvature functional. \subsection{Extremal plane curves relative to the length functional} We have shown that there exists a unique frame $e = \{x, E_1, E_2\}$ such that \eqref{pfaff} holds. Recall that $\Omega$ denotes the $3 \times 2$ matrix in \eqref{pfaff}. Let $x_{\eta} (t)$ be a family of curves parametrized by $\eta$ around $\eta =0$ and $x_{0} = x$. We assume that $x_{\eta} (t) = x(t)$ outside a compact set $C$ and $x_0(t)$ is parametrized by general-affine arc length. For simplicity, we further assume that $x_{\eta}$ does not have an affine inflection point, {\it i.e.}, the invariant $\omega_s$ does not vanish anywhere for all $\eta$. The length functional $L$ is given by \[ L (\eta) = \int_C \omega_s (\eta). \] For the sake of brevity, we use the notation $\delta$ to denote the derivative with respect to $\eta$ evaluated at $\eta =0$: \[ \delta a = \frac{d a (\eta)}{d \eta}\Big|_{\eta =0}. \] Then the curve $x$ is called \textsl{general-affine extremal} if \[ \delta L =0 \] for any compactly supported deformation of $x$. We want to derive a differential equation for affine extremality. Since $\{E_1, E_2\}$ are linearly independent, there exists a $3 \times 2$-matrix $\tau$ such that \[ \delta \begin{pmatrix}x \\ E_1 \\E_2 \end{pmatrix} = \tau \begin{pmatrix}E_1 \\E_2 \end{pmatrix}, \qquad \tau = \begin{pmatrix} \tau_0^1 & \tau_0^2 \\ \tau_1^1 & \tau_1^2 \\ \tau_2^1 & \tau_2^2 \end{pmatrix} \] holds. 
Components of $\Omega$ and $\tau$ are denoted by $\omega_{\alpha}^{\beta}$ and $\tau_{\alpha}^{\beta}$, where $\alpha=0, 1, 2$ and $\beta =1,2$. Since $\delta d e = d \delta e$ with $e = \{x, E_1, E_2\}$, we have \[ \delta \omega_{\alpha}^{\beta} -d \tau_{\alpha}^\beta =\sum_{\gamma=1,2}\tau_{\alpha}^{\gamma} \omega_{\gamma}^{\beta} - \omega_{\alpha}^{\gamma} \tau_{\gamma}^{\beta}. \] In terms of entries of $\Omega$ and $\tau$, we have \begin{align} \delta \omega_s - d \tau_{0}^1 &= - \left( \frac{1}{2} k \tau_0^1 + \epsilon \tau_0^2 + \tau_1^1 \right) \omega_s, \label{eq:0-1}\\ - d \tau_{0}^2 &= \left( \tau_0^1 - \tau_1^2 -k \tau_0^2 \right) \omega_s, \label{eq:0-2}\\ - \frac{1}{2}\delta (k\omega_s) - d \tau_{1}^1 &= - \left( \epsilon \tau_1^2 +\tau_2^1 \right) \omega_s, \label{eq:1-1}\\ \delta \omega_s - d \tau_{1}^2 &= \left( \tau_1^1 -\frac{1}{2} k \tau_1^2 -\tau_2^2\right) \omega_s, \label{eq:1-2}\\ - \epsilon \delta \omega_s - d \tau_{2}^1 &= \epsilon \left(\tau_1^1 + \frac{\epsilon}{2} k \tau_2^1 -\tau_2^2\right) \omega_s, \label{eq:2-1}\\ - \delta (k \omega_s) - d \tau_{2}^2 &= \left( \tau_2^1 + \epsilon \tau_1^2 \right) \omega_s.\label{eq:2-2} \end{align} Here we use $\omega_0^1 = \omega_1^2 =-\epsilon \omega_2^1= \omega_s$, $\omega_0^2 =0$, $\omega_1^1 = - 1/2 k \omega_s$, and $\omega_2^2 = - k \omega_s$. Then adding \eqref{eq:1-2} and $- \epsilon$\eqref{eq:2-1}, \[ 2 \delta \omega_s -d \tau_1^2 + \epsilon d \tau_2^1 = - \frac{1}{2} k (\tau_1^2 + \epsilon \tau_2^1) \omega_s \] holds. Then adding \eqref{eq:2-2} and $-2$\eqref{eq:1-1}, we have \begin{equation}\label{eq:tau1221} 2 d \tau_1^1 - d \tau_2^2 = 3\epsilon ( \tau_1^2 + \epsilon \tau_2^1 ) \omega_s. \end{equation} Combining these equations, we have \begin{equation}\label{eq:deltaomegas} 2 \delta \omega_s = d \tau_1^2 - \epsilon d \tau_2^1 + \frac{\epsilon}{6} k (-2 d \tau_1^1 + d \tau_2^2). \end{equation} Recall that the deformation is compactly supported. 
Then by using Stokes' theorem and integration by parts, we have \begin{equation*} \delta L = -\frac{\epsilon}{12} \int_{C} \left(-2\tau_1^1 + \tau_2^2\right) d k. \end{equation*} We now compute $-2 \tau_1^1+\tau_2^2$ as follows. From \eqref{eq:0-1} and \eqref{eq:1-2}, we have \begin{equation*} d \tau_0^1- d \tau_1^2 = \left(2 \tau_1^1 - \tau_2^2 + \epsilon \tau_0^2\right) \omega_s + \frac{1}{2}k \left(\tau_0^1 - \tau_1^2 \right) \omega_s. \end{equation*} Inserting \eqref{eq:0-2} into the above equation, we have \begin{equation} \label{tau11tau22} -2 \tau_1^1 + \tau_2^2 = {\tau_0^2}'' - \frac{3}{2}k {\tau_0^2}' + \left(\epsilon +\frac{1}{2}k^2 - k'\right)\tau_0^2. \end{equation} Here $\{'\}$ denotes the derivative with respect to the general-affine arc length $\omega_s$, {\it i.e.} $a'= d a/\omega_s$ for a function $a$. Finally, using integration by parts again, \begin{equation} \delta L = -\frac{\epsilon}{12} \int_{C} \left(k''' + \frac{3}{2}k k'' + \frac{1}{2} {k'}^2 + \frac{1}{2} k^2 k' + \epsilon k'\right) \tau_0^2 \omega_s \end{equation} holds. If we now take $x(t)+\eta \left\{v^1(t,\eta)E_1(t)+v^2(t,\eta)E_2(t)\right\}$ for the family of curves $x_\eta(t)$, where $v^1$ and $v^2$ are arbitrary smooth functions with compact support relative to $t$, then $\delta x =v^1(t,0)E_1 + v^2(t,0)E_2$. This means that $\tau_0^2=v^2(t,0)$ is arbitrary for this family and the vanishing of the integral implies the following proposition. \begin{proposition}[\cite{Mihai2}] \label{planeextremalprop} A {nondegenerate} plane curve without affine inflection point is general-affine extremal relative to the length functional if and only if \begin{equation}\label{eq:variationformula} k''' + \frac{3}{2} k k'' + \frac{1}{2} {k'}^2 + \frac{1}{2} k^2 k' + \epsilon k' =0 \end{equation} holds. In particular, any curves of constant general-affine curvature are extremal. 
\end{proposition} We remark here that the differential equation $(\ref{eq:variationformula})$ was first given in \cite[(33)]{Mihai2}, though some modifications are necessary. The formula for $\epsilon=1$ was then rediscovered by S. Verpoort in \cite[p.432]{Ve} by making use of his general variational formula for equiaffine invariants: the differential equation there is written in terms of both the equiaffine curvature and the general-affine curvature, and looks much simpler than $(\ref{eq:variationformula})$. As a result, he proved the following corollary, which we now state in our setting. \begin{corollary}[\cite{Ve}]\label{planeextremal} Let $x(t)=(x_1(t), x_2(t))$ be a curve parametrized by a general-affine length parameter $t$. Assume that it is general-affine extremal. Then, there exist constants $c_1$, $c_2$ and $c_3$ such that the general-affine curvature $k$ can be written as \[k= c_1x_1+c_2x_2 +c_3.\] \end{corollary} \noindent Proof. Let us consider the ordinary linear differential equation \[z'''+{3\over 2}kz'' + \left(\epsilon + {1\over 2}k'+{1\over 2}k^2\right)z'=0, \] with unknown function $z$. We note that this is of the same form as the equation $(\ref{kode})$; therefore, $x_1$ and $x_2$ are solutions. Also any constant is obviously a solution. On the other hand, if $x$ is extremal, then the equation $(\ref{eq:variationformula})$ shows that $k(t)$ itself is a solution. Therefore, $k$ can be expressed as claimed. \medskip We also have the following property on the curvature integral. \begin{proposition} The variation of the total curvature on any compact interval always vanishes, {\it i.e.} \[ \delta \int_{C} k\omega_s =0 \] holds. \end{proposition} \noindent Proof. Adding \eqref{eq:1-1} and \eqref{eq:2-2}, \[ - \frac{3}{2} \delta (k \omega_s)- d \tau_1^1 - d \tau_2^2 = 0 \] holds. Then Stokes' theorem implies the proposition.
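The Euler--Lagrange equation $(\ref{eq:variationformula})$ can be tested symbolically on explicit solutions. The sketch below (ours, using sympy) verifies, for instance, that $k(t)=3\sqrt{2}\tanh(\sqrt{2}\,t)$ solves it with $\epsilon=1$ and that $k(t)=\sqrt{2}+3/t$ solves it with $\epsilon=-1$:

```python
import sympy as sp

t = sp.symbols('t')

def extremal_residual(k, eps):
    """Left-hand side of (eq:variationformula):
    k''' + (3/2) k k'' + (1/2) k'^2 + (1/2) k^2 k' + eps k'."""
    k1, k2, k3 = sp.diff(k, t), sp.diff(k, t, 2), sp.diff(k, t, 3)
    expr = (k3 + sp.Rational(3, 2) * k * k2
            + sp.Rational(1, 2) * k1**2
            + sp.Rational(1, 2) * k**2 * k1 + eps * k1)
    return sp.simplify(sp.expand(expr))

# Both residuals vanish identically.
r_tanh = extremal_residual(3 * sp.sqrt(2) * sp.tanh(sp.sqrt(2) * t), 1)
r_rat = extremal_residual(sp.sqrt(2) + 3 / t, -1)
```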
\begin{remark} {\rm The differential equation \eqref{eq:variationformula} associated to general-affine extremal plane curves is a third-order nonlinear differential equation of Chazy type. J. Chazy \cite{Ch} classified third-order nonlinear ordinary differential equations of Painlev\'e type, {\it i.e.} those whose solutions admit only poles as movable singularities. The Chazy equations fall into thirteen classes, \textrm{I} to \textrm{X\!I\!I\!I}, and the full list of equations can be found in \cite{Ba}. We here cite the Chazy equations \textrm{I\!V}, \textrm{V} and \textrm{V\!I}, which are respectively given by \begin{align} & k''' +3k k''+ 3 {k'}^2 + 3k^2 k' - Sk' - S' k - T=0, \\ & k''' +2k k''+ 4 {k'}^2 + 2k^2 k' - 2R k' - R' k =0, \\ & k''' +k k''+ 5 {k'}^2 + k^2 k' - 3Q k' - Q' k + Q''=0, \end{align} where $S$, $T$, $R$ and $Q$ are certain analytic functions of $t$ \cite{Ba}. Then \eqref{eq:variationformula} is clearly of the form of the above Chazy equations, with the coefficients of $k k''$, ${k'}^2$ and $k^2 k'$ replaced by the half-integers $3/2$, $1/2$ and $1/2$, respectively, and with $S$, $T$, $R$ or $Q$ chosen properly. } \end{remark} \begin{example} \label{planeextremalexample} {\rm We can see that the following $k(t)$ are solutions of \eqref{eq:variationformula}: \[ k (t) = 3 \sqrt{2}\tanh (\sqrt{2}(t-c)) \quad\mbox{and}\quad k(t) = 3 \sqrt{2}\coth (\sqrt{2}(t-c)) \] for $\epsilon = 1$ (\cite[Example 14]{Ve}), and \[ k(t)=-3\sqrt{2}\tan(\sqrt{2}(t-c)),\quad k(t)=3\sqrt{2}\cot(\sqrt{2}(t-c))\quad \mbox{and}\quad k(t)=\pm\sqrt{2}+\frac{3}{t-c} \] for $\epsilon =-1$. For each of these solutions, we can compute the associated plane curve by integrating the differential equation, using computer software.
Since the expression is not simple, we give here one example for the case $\epsilon=-1$ and $k(t)=\sqrt{2}+3/t$, and the curve is written as $(x_1,x_2)$ for $t>0$: \begin{align*} x_1 &= 3\sqrt{2} -{1\over t}, \\ x_2 &= {\sqrt{3\pi}(\sqrt{2}-6t)\ {\rm erfi}\left({\sqrt{3t}\over 2^{1/4}}\right)\over t} + {6\cdot 2^{1/4}\exp\left({3 t \over \sqrt{2}}\right)\over \sqrt{t}}, \end{align*} where ${\rm erfi}$ is the imaginary error function defined by \[ {\rm erfi}(x) = {2 \over \sqrt{\pi} } \int_0^{x} e^{t^2} \ dt. \] } \end{example} \subsection{Extremal problem for a generalized curvature functional} More generally, one can consider a variational problem for the following curvature functional: \begin{equation}\label{eq:generalfunc} F(\eta) = \int_C f(k) \omega_s, \end{equation} where $f$ is a smooth function of one variable and $\eta$ is the parameter for the variation of curves. Then we have \begin{equation}\label{eq:generalfunc2} \delta F = \int_C f^{\prime}(k) (\delta k) \omega_s + f(k) \delta \omega_s. \end{equation} The computation of $\delta k $ is done as follows: Adding \eqref{eq:1-2} and $\epsilon$\eqref{eq:2-1}, we have \[ - d \tau_1^2 - \epsilon d \tau_2^1 = 2 (\tau_1^1 - \tau_2^2) \omega_s - \frac{1}{2} k (\tau_1^2 - \epsilon \tau_2^1) \omega_s. \] Taking the derivative of this equation and using \eqref{eq:tau1221}, we have \begin{equation}\label{eq:ddot} \frac{\epsilon}{3}\left(-2 {\tau_1^1} + {\tau_2^2}\right)'''= 2 ({\tau_1^1} - {\tau_2^2})' - \frac{1}{2} k' ( \tau_1^2 - \epsilon \tau_2^1) - \frac{1}{2}k ({\tau_1^2} - \epsilon {\tau_2^1})'. \end{equation} Subtracting \eqref{eq:2-2} from \eqref{eq:1-1}, we have \[ \frac{1}{2} (\delta k) \omega_s + \frac{1}{2} k \delta \omega_s - d \tau_1^1 + d \tau_2^2 = -2 (\tau_2^1 + \epsilon \tau_1^2) \omega_s.
\] Then, using \eqref{eq:deltaomegas} we see that \begin{equation*} \delta k = -k \left( \frac{1}{2} \left(\tau_1^2 -\epsilon \tau_2^1\right)' + \frac{\epsilon}{12} k (-2 \tau_1^1 + \tau_2^2)'\right) + 2 (\tau_1^1 - \tau_2^2)' -4 (\tau_2^1 + \epsilon \tau_1^2). \end{equation*} Again, by using \eqref{eq:tau1221}, \begin{align*} \delta k = - \frac{1}{2} k \left( \tau_1^2 -\epsilon \tau_2^1\right)' + \left(\frac{4}{3} - \frac{\epsilon}{12}k^2\right) \left( -2 {\tau_1^1} + {\tau_2^2}\right)' + 2 ( \tau_1^1 - \tau_2^2)', \end{align*} and finally, by using \eqref{eq:ddot}, we have \begin{equation}\label{eq:deltak} \delta k = \frac{1}{2}k' \left(\tau_1^2 - \epsilon \tau_2^1\right)+ \frac{\epsilon}{3}\left(-2 {\tau_1^1} + {\tau_2^2} \right)''' + \left(\frac{4}{3}- \frac{\epsilon }{12}k^2\right) \left( -2 {\tau_1^1} + {\tau_2^2}\right)'. \end{equation} Then, by inserting the expressions $\delta\omega_s$ \eqref{eq:deltaomegas} and $\delta k$ \eqref{eq:deltak} into \eqref{eq:generalfunc2}, and by using integration by parts, we get \begin{align*} \delta F = \int_C \dot f (k)\left\{\frac{\epsilon}{3}\left(-2{\tau_1^1} + {\tau_2^2} \right)'''\right. + \left(- \frac{\epsilon k^2}{12} + \frac{4}{3}\right) &\left. \left( -2 {\tau_1^1} + {\tau_2^2}\right)'\right\} \omega_s \\ & + f(k) \left\{\frac{\epsilon}{12} k \left(-2 {\tau_1^1} + {\tau_2^2}\right)'\right\} \omega_s. \end{align*} Again applying integration by parts, we have \[ \delta F = -\frac{\epsilon}{12}\int_C G (-2 \tau_1^1 + \tau_2^2) \omega_s, \] where \begin{equation}\label{eq:G} G = 4 \ddddot f(k){k'}^3 + 12 \dddot f(k) k' k'' + \ddot f(k) (4 k''' - k' k^2 + 16\epsilon k' ) - \dot f (k) k k' + f(k) k'. \end{equation} Thus, by use of $(\ref{tau11tau22})$, we have the following theorem. 
\begin{theorem} \label{genvarform} A plane curve without affine inflection points is general-affine extremal with respect to the curvature functional \eqref{eq:generalfunc} if and only if \begin{equation} \label{Gextremal} G'' + \frac{3}{2}{G'} k + \frac{1}{2}G k' + \frac{1}{2}G k^2+ \epsilon G =0 \end{equation} holds, where $G$ is the function defined in \eqref{eq:G}. \end{theorem} \begin{remark} Variation of the energy integral. {\rm When $f={1\over 2}k^2$, the integral $F$ may be called the energy integral. For this $f$, we see that \[ G=4k'''-{3\over 2}k^2k' + 16\epsilon k'\] and the equation $(\ref{Gextremal})$ gives an extremal curve relative to the energy functional. } \end{remark} \section{How to find plane curves with given general-affine curvature} \label{sec:graphimmersion} In Section \ref{subsc:graph}, we have derived the expression \eqref{curv} of the general-affine curvature for a graph immersion $x(t) = (t, f(t))$ with $\mu = (f^{\prime \prime})^{-2/3}>0$. Making use of this expression, we study how to find a graph immersion of plane curves with given general-affine curvature, by considering the following nonlinear differential equation directly, \begin{equation} \label{muconst} \mu (\mu''')^2 = -\epsilon {k^2\over 2}(\mu'')^3. \end{equation} We regard the function $\mu'$ of $t$ as a function of $\mu$ and set \[ w(\mu) = \mu'(t) ={d\mu\over dt}.\] Then, by the chain rule, we have \[ \mu'' = w\dot{w},\quad \mu'''=w\dot{w}^2+ w^2\ddot{w}.\] Hence, the equation $(\ref{muconst})$ is written as \begin{equation} \label{wode} \mu w^2(\dot{w}^2+w\ddot{w})^2 + \epsilon {k^2\over 2}w^3\dot{w}^3=0, \end{equation} which can be reduced to an Abel equation as follows: \noindent {(i) First reduction}: Writing $x$ for the independent variable $\mu$ from now on, we introduce $s$ by setting \[ w(x) = \pm \exp \left( -\epsilon \int s^2 dx\right).\] Here we choose the sign properly, depending on the function $w$. Then, we get the equation \begin{equation} \label{mug} 8x(-\epsilon \dot{s} + s^3)^2 - k^2s^4 =0, \quad \quad (x>0).
\end{equation} Therefore, the original differential equation $(\ref{muconst})$ is equivalent to \begin{equation}\label{eq:abel1} \epsilon \dot s={k\over 2\sqrt{2x }}s^{2} + s^3, \end{equation} which is an \textsl{Abel equation of the first kind}. It is easy to see that for constant $k<0$ with $\epsilon =- 1$ or $k\leq -4$ with $\epsilon = 1$, the solution $s$ of \eqref{eq:abel1} can be explicitly obtained as \[ s(x) = \frac{a}{\sqrt{2 x}} \quad \mbox{with}\quad a = \frac{-k \pm\sqrt{- 16\epsilon +k^2}}{4}. \] The corresponding curves are given in Examples \ref{exp} and \ref{power}. Moreover, in the case of $k =0$ (for both $\epsilon = \pm 1$), the solution $s$ can be obtained as \[ s(x) = \frac{1}{\sqrt{\epsilon(a -2 x)}}, \] where $a$ is some constant. The corresponding curves are given in Example \ref{ellzero}. By contrast, in the case $-4 < k < 0$ with $\epsilon =1$, the solution $s$ of \eqref{eq:abel1} is not easy to write down explicitly. The corresponding curves are given in Example \ref{logsp}. \medskip \noindent {(ii) Second reduction}: We define $s$ by \[w(x)= \pm \exp\left(-\epsilon \int s^{-2}dx\right),\] by choosing the sign properly. Then a straightforward computation shows that the equation $(\ref{wode})$ is transformed to \[ -k^2 s^2 + 8x(\epsilon +s\dot s)^2=0,\] which is equivalent to \begin{equation}\label{eq:abel2} s\dot s = \frac{k}{2 \sqrt{2 x}} s - \epsilon. \end{equation} This is a particular case of the \textsl{Abel equation of the second kind}. We refer to \cite[Section 1.3.2]{PZ} for integrable Abel equations. \begin{theorem} \label{abeleq} For any general-affine plane curve with graph immersion $(t, f(t))$, there exists a function $s$ given as above such that $s$ satisfies the Abel equation of the first kind or second kind, \eqref{eq:abel1} or \eqref{eq:abel2}, respectively.
Conversely, for any given function $k$, a solution $s$ of \eqref{eq:abel1} or \eqref{eq:abel2} gives rise to a plane curve with graph immersion $(t, f(t))$ and general-affine curvature $k$. \end{theorem} \section{General-affine curvature of space curves} The equiaffine and projective treatments of space curves are classically known. However, it seems that a general-affine treatment of space curves has not been fully developed. In this section, we introduce several notions associated with space curves, such as the curvature, the length parameter and the associated ordinary differential equation, from a general-affine point of view. \subsection{Choice of frames for space curves and general-affine curvatures} \label{subsc:3frames} Let $x:t \longrightarrow x(t)\in {\bf A}^{3}$ be a curve in a $3$-dimensional affine space with parameter $t$ and let $e=\{e_1,e_2,e_3\}$ be a frame along $x$; it is a set of linearly independent vectors of ${\bf A}^{3}$. The vector-valued $1$-form $dx$ is written as \begin{equation} \label{dx3} dx = \omega^1 e_1 + \omega^2 e_2 + \omega^3e_3, \end{equation} and the dependence of $e_i$ on the parameter is described by the equation \begin{equation} \label{dei3} de_i = \sum_j \omega_i^j e_j, \end{equation} where $\omega^j$ and $\omega_i^j$ are $1$-forms as before in the 2-dimensional case and $1\le i,j \le 3$. We call $\{\omega^i, \omega_i^j\}$ the coframe. We assume in the following that the curve is nondegenerate in the sense that the vectors $x'$, $x''$ and $x'''$ are linearly independent and that $\omega^2=\omega^3=0$ and $\omega_1^3=0$, so that $e_1$ is tangent to the curve and $\{e_1, e_2\}$ spans the first osculating space of the curve. We write $\omega^1 = \omega$ for simplicity. Let $\tilde{e}=\{\tilde{e}_1, \tilde{e}_2, \tilde{e}_3\}$ be another choice of such a frame.
Then, it can be written as \[\tilde{e}_1 = \lam e_1,\qquad \tilde{e}_2 = \mu e_1 + \nu e_2,\qquad \tilde{e}_3=\alpha e_1+ \beta e_2+\gamma e_3,\] where $\lam\nu\gamma \neq 0$. The associated coframe is written as $\tilde{\omega}$ and $\tilde{\omega}_i^j$, which satisfy \[ dx=\tilde{\omega}\tilde{e}_1, \qquad d\tilde{e}_i = \sum_j \tilde{\omega}_i^j\tilde{e}_j. \] Then we have \begin{equation} \label{shiki0} \tilde{\omega} = \lam^{-1}\omega. \end{equation} Since $d\tilde{e}_1$ is represented in two ways, one being \[ d\tilde{e}_1 = (d\lam) e_1+ \lam (\omega_1^1e_1+\omega_1^2e_2)\] and the other being \[ d\tilde{e}_1 = \tilde{\omega}_1^1(\lam e_1)+\tilde{\omega}_1^2(\mu e_1+\nu e_2),\] by comparing the coefficients of $e_1$ and $e_2$ in these expressions, we get \begin{eqnarray} & \nu \tilde{\omega}_1^2= \lam \omega_1^2, & \label{shiki1} \\ & \lam \tilde{\omega}_1^1 + \mu \tilde{\omega}_1^2 = d\lam + \lam \omega_1^1. & \label{shiki2} \end{eqnarray} Similarly, by considering $d\tilde{e}_2$, we have \begin{eqnarray} &\gamma\tilde{\omega}_2^3 = \nu \omega_2^3,& \label{shiki3} \\ & \nu \tilde{\omega}_2^2+\beta \tilde{\omega}_2^3 = d\nu + \mu\omega_1^2 + \nu \omega_2^2,& \label{shiki4} \\ & \lam \tilde{\omega}_2^1 + \mu\tilde{\omega}_2^2 + \alpha \tilde{\omega}_2^3 = d\mu + \mu\omega_1^1 + \nu\omega_2^1,& \label{shiki5} \end{eqnarray} and by $d\tilde{e}_3$ we have \begin{eqnarray} & \gamma\tilde{\omega}_3^3 = d\gamma + \beta\omega_2^3+\gamma\omega_3^3, & \label{shiki6} \\ &\nu\tilde{\omega}_3^2+\beta\tilde{\omega}_3^3=d\beta+\alpha\omega_1^2+\beta\omega_2^2+\gamma\omega_3^2, & \label{shiki7} \\ &\lambda\tilde{\omega}_3^1+\mu\tilde{\omega}_3^2+\alpha\tilde{\omega}_3^3 =d\alpha + \alpha\omega_1^1 + \beta\omega_2^1 + \gamma\omega_3^1.& \label{shiki8} \end{eqnarray} First note that, by the nondegeneracy assumption, we have $\omega_1^2\neq 0$ and $\omega_2^3\neq 0$.
Then, by an appropriate choice of $\nu$ and $\gamma$, in view of $(\ref{shiki1})$ and $(\ref{shiki3})$, we can assume that $\tilde{\omega}_1^2=\tilde{\omega}$ and $\tilde{\omega}_2^3=\tilde{\omega}$. Hence, we can restrict our consideration to the case \[\omega_1^2=\omega\quad {\rm and} \quad \omega_2^3=\omega\] in the following. In particular, \begin{equation} \label{unimo} \nu=\lambda^2\quad {\rm and}\quad \gamma=\lambda^3 \end{equation} are necessary. We next see that, from $(\ref{shiki2})$, $(\ref{shiki4})$ and $(\ref{shiki6})$, we have \begin{align*} 2 \tilde{\omega}_1^1 - \tilde{\omega}_2^2 &= 2 \omega_1^1 - \omega_2^2 - 3 \lam^{-2} \mu \omega + \lam^{-3} \beta \omega, \\ 3\tilde{\omega}_1^1 - \tilde{\omega}_3^3 &= 3 \omega_1^1 - \omega_3^3 - 3 \lam^{-2} \mu \omega - \lam^{-3} \beta \omega. \end{align*} Thus an appropriate choice of the parameters $\mu$ and $\beta$ makes the identities $\tilde{\omega}_2^2=2 \tilde{\omega}_1^1$ and $\tilde{\omega}_3^3=3 \tilde{\omega}_1^1$ hold. To keep this condition it is necessary to have $\mu=\beta=0$. Now \eqref{shiki5} can be rephrased as \[ \lam \tilde{\omega}_2^1 + \alpha \tilde{\omega}_2^3 = \lam^2 \omega_2^1, \] and we choose $\alpha$ so that $\tilde{\omega}_2^1 =0$. Thus, we can assume that $\omega_2^1=0$ and $\alpha=0$ in the following. Moreover \eqref{shiki2} is \[ \tilde{\omega}_1^1 = \lam^{-1} d \lam + \omega_1^1, \] and we can choose $\lam$ so that $\tilde{\omega}_1^1=0$. Therefore $\omega_1^1=0$, and to keep this condition, $\lam$ must be a non-zero constant. With these considerations, the last identities $(\ref{shiki7})$ and $(\ref{shiki8})$ turn out to be \[\tilde{\omega}_3^2 = \lambda\omega_3^2 \quad {\rm and}\quad \tilde{\omega}_3^1=\lambda^2\omega_3^1,\] respectively. We set \begin{equation} \label{spcinv} \omega_3^2 = -\ell \omega, \quad \omega_3^1=-m\omega, \end{equation} and similarly for $\tilde{\omega}_3^2$ and $\tilde{\omega}_3^1$.
Then, we have the covariance \begin{equation} \label{spccov} \tilde{\ell}=\lambda^2\ell, \quad {\rm and}\quad \tilde{m} = \lambda^3 m. \end{equation} Thus, we have seen that, given a nondegenerate curve $x$, there exists a frame $e$ with the coframe of the form \begin{equation} \label{eq:coframeeqaff} \begin{pmatrix} \omega & 0 &0 \\ 0 & \omega & 0 \\ 0& 0 & \omega\\ -m \omega & - \ell \omega & 0 \end{pmatrix}. \end{equation} We remark here that, in the equiaffine treatment of space curves, the scalars $\ell$ and $m$ above are known to be absolute invariants, called the \textsl{equiaffine curvature} and the \textsl{equiaffine torsion}, respectively; we refer to Section \ref{3equiaffine}. In this paper, we call the point where $\ell=0$ an \textsl{affine inflection point}. In the following we assume $\ell\neq 0$ and let $\epsilon$ denote the sign of $\ell$: \[\epsilon={\rm sign}(\ell).\] It is an invariant of the curve. Then we define the \textsl{general-affine length element} by \begin{equation} \label{galength} \omega_s = \sqrt{\epsilon \ell}\omega, \end{equation} which is well-defined independent of the frame in view of $(\ref{spccov})$, and a parameter $s$ for which $ds = \omega_s$ holds is the \textsl{general-affine length parameter} determined up to an additive constant. \begin{definition} \label{spacecurv} We call the scalar function $k$ defined by \[ {d\ell\over \ell} = k\omega_s\] the \textsl{first general-affine curvature}. In other words, \begin{equation}\label{kforspacecurve} k={d\log\ell\over ds}. \end{equation} We call the scalar function $M$ defined by \begin{equation} \label{gasecond} M={m\over (\epsilon\ell)^{3/2}} \end{equation} the \textsl{second general-affine curvature} of the space curve. \end{definition} Both curvatures defined above are absolute invariants. 
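For an equiaffine length parameter, {\it i.e.} when the frame is normalized so that $\omega=dt$, both curvatures can be evaluated directly from $\ell$ and $m$. A minimal sympy sketch (the particular functions $\ell$, $m$ below are illustrative choices of ours, not taken from the text):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Illustrative equiaffine data with omega = dt; here ell > 0, so eps = +1.
ell = t**2
m = t**3
eps = 1

ds_dt = sp.sqrt(eps*ell)              # general-affine length element: omega_s = sqrt(eps*ell) dt
k = sp.diff(sp.log(ell), t) / ds_dt   # first curvature  k = d(log ell)/ds
M = m / (eps*ell)**sp.Rational(3, 2)  # second curvature M = m/(eps*ell)^(3/2)

print(k, M)  # 2/t**2 1
```

With these choices $k=2/t^2$ and $M=1$, in agreement with the definitions above.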
We next define a new frame $\{E_1, E_2, E_3\}$ by setting \[E_1 = {1\over (\epsilon\ell)^{1/2}}e_1,\qquad E_2={1\over \epsilon \ell}e_2, \qquad E_3={1\over (\epsilon\ell)^{3/2}}e_3.\] It is easy to see that this frame does not depend on the choice of $\lambda$; hence, it is determined uniquely. Thus we have proved the following: \begin{proposition} Assume $\ell\neq 0$. Then, the frame $\{E_1, E_2, E_3\}$ is uniquely defined from the immersion and it satisfies the Pfaffian equation \begin{equation} \label{pfaff3} d \left(\begin{array}{c} x \\ E_1 \\ E_2 \\ E_3\end{array}\right) =\Omega \left(\begin{array}{c} E_1 \\ E_2 \\E_3 \end{array}\right), \qquad \Omega =\left( \begin{array}{ccc} \omega_s & 0 & 0 \\ -{1\over 2}k\omega_s & \omega_s & 0 \\ 0 & -k\omega_s & \omega_s \\ - M \omega_s & -\epsilon \omega_s & -{3\over 2}k\omega_s \end{array}\right), \end{equation} where $\omega_s$ is the general-affine length form, $k$ and $M$ are the first and second general-affine curvatures, respectively, and $\epsilon={\rm sign}(\ell)$. \end{proposition} By use of this choice of frame, we can see the following lemma, by a similar reasoning to that for Lemma \ref{lem:kode}. \begin{lemma} \label{gaspclemma} The immersion $x$ satisfies the ordinary differential equation \begin{equation} \label{gaspcode} x''''+3kx''' +\left(2k'+{11\over 4}k^2+\epsilon\right)x'' + \left(M+{1\over 2}\epsilon k + {1\over 2}k'' +{7\over 4}kk' + {3\over 4}k^3\right)x'=0, \end{equation} relative to a general-affine length parameter. \end{lemma} In the definition of the curvature, we had an ambiguity of orientation of the chosen parameter: by the change of the parameter from $t$ to $-t$, the equation transforms to \[ x''''-3kx''' +\left(-2k'+{11\over 4}k^2+\epsilon\right)x'' + \left(-M-{1\over 2}\epsilon k - {1\over 2}k'' +{7\over 4}kk' - {3\over 4}k^3\right)x'=0. \] Namely, the transform $(k,M)\rightarrow (-k,-M)$ keeps the form of the equation. Thus, up to this ambiguity, we have the following theorem. 
\begin{theorem} \label{spacenatural} Given functions $ k(t)$ and $M(t)$ of a parameter $t$, and $\epsilon=\pm 1$, there exists a nondegenerate space curve $x(t)$ for which $t$ is a general-affine length parameter, $k$ is the first general-affine curvature, $M$ is the second general-affine curvature, and $\epsilon$ is the sign of $\ell$, uniquely up to a general-affine transformation. \end{theorem} Analogously to the case of plane curves, we have the following property on the total general-affine curvature: \begin{corollary} \label{spacetotalcurv} Assume that the curve $C$ is nondegenerate and closed, and has no affine inflection point. Then, the total curvature $\int_C k\omega_s$ vanishes. In particular, such a curve has at least two general-affine flat points. \end{corollary} \comment{ \begin{remark} {\rm We have chosen a certain frame and determined the coframe to be $(\ref{pfaff3})$ under the condition $\ell\neq 0$; we call this process a normalization of frame. As is seen from the above, however, the normalization is not unique. In fact, we can show that there is another normalization by which the coframe has the form \[ \left( \begin{array}{ccc} \omega_s & 0 & 0 \\ -{1\over 2}k\omega_s & \omega_s & 0 \\ -\epsilon p \omega_s & -k\omega_s & \omega_s \\ - M \omega_s & -\epsilon \omega_s & -{3\over 2}k\omega_s \end{array}\right), \] where $p$ is not equal to $-2$, which we do not represent here. Moreover, it seems that Mih$\breve{\rm a}$ilescu gave another normalization in \cite{Mihai3} under the condition that the curve does not belong to a linear complex instead of assuming $\ell\neq 0$. } \end{remark} } \subsection{Computation of general-affine curvatures of space curves} Let $t\longrightarrow x=x(t)\in {\bf A}^3$ be a nondegenerate curve such that the vectors $x'$, $x''$ and $x'''$ are linearly independent. 
Since $x''''$ is written as a linear combination of $x'$, $x''$ and $x'''$, there are scalar functions $a=a(t)$, $b=b(t)$ and $c=c(t)$ such that \begin{equation} \label{x3ode} x'''' = a x''' + b x'' + cx'. \end{equation} We give formulas relating these coefficients to the general-affine curvatures of such a curve. The method is similar to that used for plane curves. Since $dx=x'\, dt$, the frame vector $e_1$ is a scalar multiple of $x'$: \begin{equation}\label{eq:3frame} dx = \omega\, e_1;\qquad e_1=\lambda x',\quad \omega = \lambda^{-1}dt. \end{equation} Then, the differential \[ de_1=(\lambda^2 x'' + \lam\lambda' x')\, \omega\] implies that the second frame vector is \[ e_2=\lambda^2x'' + \lam\lambda' x'.\] The derivative of $e_2$ is \[ de_2=(\lambda^3x'''+3\lam^2\lam'x''+(\lam^2\lam''+\lam{\lam'}^2)x')\omega,\] which is equal to $\omega e_3$: \[e_3 =(\lambda^3x'''+3\lam^2\lam'x''+(\lam^2\lam''+\lam{\lam'}^2)x').\] Its derivative is \[de_3=\left((\lambda^3a+6\lam^2\lam')x'''+(\lam^3b+7\lam{\lam'}^2 +4\lam^2\lam'')x'' +(\lam^3c+4\lam\lam'\lam''+\lam^2\lam'''+{\lam'}^3)x' \right)dt\] by use of $(\ref{x3ode})$. Since $de_3$ has no $e_3$-component, we have \begin{equation} \label{eq:3lambda} \lambda a + 6\lambda'=0, \quad {\it i.e.}\quad \lambda = e^{-{1\over 6}\int a(t) dt} \end{equation} up to a multiplicative constant. Then, $de_3$ is written as \[ de_3 = (\lam^2b+7{\lam'}^2+4\lam\lam'')\omega e_2 + (\lam^3c - \lam^2\lam'b-6{\lam'}^3+\lam^2\lam''')\omega e_1. \] By the definition in \eqref{spcinv}, we have \begin{equation} \label{3ell} \ell = -(\lam^2b + 7{\lam'}^2 + 4\lam \lam''). \end{equation} Also, by the definition of $m$, we have \begin{equation} \label{3m} m = - \lam^3 c+\lam^2\lam'b+6{\lam'}^3 - \lam^2\lam'''. \end{equation} We now assume that $\ell\neq 0$ and recall that $\epsilon={\rm sign}(\ell)$. Then, we have \[ ds^2 = \epsilon\ell\omega \omega = -\epsilon \left(b+7{\lambda'^2\over \lambda^2}+4\frac{\lam''}{\lam}\right) dt^2.
\] In terms of $a$ and $b$, \begin{equation}\label{dstwo3} ds^2 = -\epsilon \left(b+{11\over 36}a^2 - {2\over 3}a' \right) dt^2. \end{equation} Hence, a length parameter $s$ which is a function of $t$ is obtained by solving the equation \[ \left({ds \over dt}\right)^2 = -\epsilon \left(b+{11\over 36}a^2 - {2\over 3}a'\right).\] If, in particular, $t$ itself is a length parameter, then we must have \begin{equation}\label{lengthspace} \ell=\epsilon \lambda^2, \qquad b=-\epsilon - {11\over 36}a^2 + {2\over 3}a'. \end{equation} By definition, the first curvature $k$ is \begin{equation}\label{3curv} k=-{1\over 3}a. \end{equation} We next treat the second curvature $M$ defined in $(\ref{gasecond})$: from the formula $(\ref{3m})$ above, \begin{equation} \label{Mvalue} M = -c +{\lam'\over \lam}b+6\left({\lam'\over \lam}\right)^3 - \frac{\lam'''}{\lam}. \end{equation} Hence, by $(\ref{eq:3lambda})$, we can see that \begin{equation} \label{cvalue} c= -M + {1 \over 6}a\epsilon + {1\over 6}a''-{7\over 36}aa'+{1\over 36}a^3. \end{equation} Thus, we have seen that the differential equation $(\ref{x3ode})$ agrees with the equation $(\ref{gaspcode})$. For another parameter $\sigma=\sigma(t)$, we write \[ y(\sigma) = x(t).\] Then, using the notation $\{\,\dot{}\,\}$ for differentiation with respect to $\sigma$ and $\{\,{}'\,\}$ for differentiation with respect to $t$, we see that \begin{eqnarray*} x' &=& \dot{y}\sigma', \\ x'' &=& \ddot{y}\sigma'^2 + \dot{y}\sigma'',\\ x''' &=& \threedot{y}\sigma'^3+3\ddot{y}\sigma'\sigma''+\dot{y}\sigma''',\\ x'''' &=& \fourdot{y}\sigma'^4 + 6\threedot{y}\sigma'^2\sigma'' +\ddot{y}(3{\sigma''}^2+4\sigma'\sigma''')+\dot{y}\sigma''''.
\end{eqnarray*} Making use of these formulas, we can show that \begin{equation} \label{yode} \fourdot{y} = A(\sigma)\threedot{y}+B(\sigma)\ddot{y}+C(\sigma)\dot{y}, \end{equation} where \begin{eqnarray} A(\sigma) &=& \left(a-6{\sigma''\over \sigma'}\right){1\over \sigma'}, \label{newspca}\\ B(\sigma) &=& \left( b+ 3a{\sigma''\over \sigma'}-3\left({\sigma''\over \sigma'}\right)^2-4{\sigma'''\over \sigma'}\right){1\over \sigma'^2}, \label{newspcb} \\ C(\sigma) &=& \left( c+b{\sigma''\over \sigma'} + a{\sigma'''\over \sigma'} - {\sigma''''\over \sigma'}\right){1\over \sigma'^3}. \label{newspcc} \end{eqnarray} The differential polynomials that appeared in the representation of $b$ and $c$ in \eqref{lengthspace} and \eqref{cvalue} have a covariant property with respect to this change of parameters: \begin{lemma} By the change of parameter, the following covariant relations hold. \begin{equation} \label{ABCformula} \begin{array}{rcl} \displaystyle B-{2\over 3}\dot{A}+{11\over 36}A^2 &=& \displaystyle \left(b-{2\over 3}a' + {11\over 36}a^2\right){1\over \sigma'^2}, \\ \displaystyle C-{1\over 6}\ddot{A}+{7\over 36}A\dot{A}-{1\over 36}A^3 &=& \displaystyle \left(c-{1\over 6}a''+{7\over 36}aa'-{1\over 36}a^3\right){1\over \sigma'^3} \\ &&\qquad \displaystyle + \left(b-{2\over 3}a' + {11\over 36}a^2\right){\sigma''\over \sigma'^4}. \end{array} \end{equation} \end{lemma} Thanks to the formulas above, we can compute curvatures according to a procedure similar to that in Section \ref{subsectionga}. \medskip \begin{example} Viviani's curve. {\rm This curve is given by the mapping \[(1+\cos(2t),\sin(2t),2\sin(t)).\] The associated differential equation is \[ x'''' = - \tan(t)x''' - 4x'' - 4\tan(t)x',\] which is singular at $t$ with $\cos(t)=0$; in the left figure, this corresponds to $z = \pm 2$. 
A simple calculation shows the identity \[-b-{11\over 36}a^2+{2\over 3}a'= {5(31 \cos(t)^2-7)\over 36\cos(t)^2};\] hence, at the values $t$ with $\cos(t)^2=7/31$, the general-affine length parameter cannot be defined, namely, $\ell=0$ at these values; in the left figure, these correspond to the four points with $z=\pm 1.75...$. Except for these six values of $t$ (we marked these points as dots in the figure), $\epsilon$ is determined and the curvatures are computable. The first curvature $k$ has the absolute value \[ \frac{2 |\sin t| (49- 31 \cos^2 t)}{\sqrt{5}|31 \cos^2 t -7|^{3/2}}. \] } \end{example} \begin{example} Torus knot. {\rm The mapping \[x=((4+\cos(3t))\cos(t), (4+\cos(3t))\sin(t), \sin(3t))\] defines one of the torus knots. The equation is computed as \begin{eqnarray*} x'''' &=& {-3\sin(3t)(12T^2-152T-35)\over P}x''' +{2(52T^3-178T^2+562T-891)\over P}x'' \\ && +{12\sin(3t)(8T^2+82T+281)\over P}x', \end{eqnarray*} where \[T=\cos(3t)\quad {\rm and}\quad P= 4T^3-76T^2 - 35T +198.\] Since $P>0$ for all values of $t$, the equation is non-singular. With computer assistance, we can see that the length parameter is well-defined and $\epsilon=1$, and that $k$ has period $2\pi/3$ and symmetry $k(t)=-k(2\pi/3-t)=-k(-t)$, with values $-4<k<4$. }\end{example} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=3.8cm]{viviani-k-2.eps} & \includegraphics[width=5cm]{torusknot-k.eps} \\ {\rm Viviani's curve} & {\rm Torus knot} \end{tabular} \end{figure} \subsection{Space curves with constant curvatures} \label{spcconst} The curves with constant $k$ and $M$ are of special interest, because such a curve is an orbit of a $1$-parameter subgroup of general-affine motions. In \cite[p. 36--39]{Sc} a classification of such groups is given. We here give a list of such curves by use of the differential equation treated in the previous subsection.
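The passage from constant coefficients $(a,b,c)$ to the invariants $(\epsilon, k, M)$, carried out below, can also be sketched numerically. The following Python helper (the function name is ours) is tested against the circular helix of Example \ref{helix} and the curve $(t, e^{t}, te^{t})$ of Example \ref{mk}:

```python
import math

def ga_curvatures(a, b, c):
    """General-affine invariants (eps, k, M) of a space curve solving
    x'''' = a x''' + b x'' + c x' with constant coefficients."""
    disc = -(b + 11*a**2/36)             # ds^2 = -eps*(b + 11 a^2/36) dt^2
    if disc == 0:
        raise ValueError("ell = 0: general-affine length parameter undefined")
    eps = 1 if disc > 0 else -1
    q = math.sqrt(abs(disc))             # ds = q dt
    k = -a/(3*q)                         # first curvature
    M = -c/q**3 + (eps/6)*(a/q) + (a/q)**3/36   # second curvature
    return eps, k, M

# Circular helix (t, cos t, sin t):  x'''' = -x''  =>  (a, b, c) = (0, -1, 0)
print(ga_curvatures(0, -1, 0))           # (1, 0.0, 0.0)

# Curve (t, e^t, t e^t):  x'''' = 2 x''' - x''  =>  (a, b, c) = (2, -1, 0)
eps, k, M = ga_curvatures(2, -1, 0)
print(eps, round(k, 6), round(M, 6))     # -1 -1.414214 1.414214
```

The second output reproduces $\epsilon=-1$, $k=-\sqrt{2}$, $M=\sqrt{2}$ of Example \ref{mk} (with $\lam=1$).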
If the curvatures are constant, then the differential equation \[ x'''' = ax''' + bx'' + cx'\] has constant coefficients $a$, $b$ and $c$. Conversely, assume that a curve satisfies a differential equation with constant coefficients. Then, the length parameter $s$ is obtained by the identity \[ ds^2 = -\epsilon\left(b+{11\over 36}a^2\right) dt^2,\] where \[\epsilon ={\rm sign}\left(-b-{11\over 36}a^2\right).\] We set \[ q = \sqrt{-\epsilon \left(b+{11\over 36}a^2\right)}.\] Then, the first curvature is \[ k = -{1\over 3}\left( \frac{a}{q}\right),\] and the second curvature is \[ M=-{c\over q^3} + {\epsilon \over 6}\left({a\over q} \right) +{1\over 36}\left({a\over q}\right)^3. \] Therefore, any differential equation of the form above with constant coefficients defines a curve with constant general-affine curvatures. It is sufficient to solve \[ y'''=ay''+by'+cy\] for $y=x'$ and integrate it to get $x$. If the function $y=e^{\lam t}$ is a solution of the equation, then $\lam$ is a root of the algebraic equation \[\lam^3-a\lam^2-b\lam-c=0.\] The form of the solution varies depending on whether $\lam$ is a simple or a multiple root, and whether it is real or complex. Without showing detailed computations, we list the result of the classification as follows, which agrees with the classification of the $1$-parameter subgroups of general-affine motions. First, we start with some examples. \begin{example} \label{helix} {\rm Curves with $k=M=0$. The equation is \[x'''' = - \epsilon x''.\] When $\epsilon=1$, the curve is $x=(t, \cos t, \sin t)$ called a \textsl{circular helix} and, when $\epsilon=-1$, the curve is $x=(t,\cosh t, \sinh t)$, called a \textsl{hyperbolic helix}. } \end{example} \begin{example} \label{logspiral} Logarithmic spiral. {\rm This curve is given by $x = (e^{-2\lam t}, e^{\lam t}\cos pt, e^{\lam t}\sin pt)$. The equation is \[ x'''' = (3\lam^2-p^2)x'' - 2\lam(p^2+\lam^2)x'.\] We see that $\epsilon=1$ (resp. $\epsilon=-1)$ when $3\lam^2< p^2$ (resp.
$3\lam^2> p^2$), $k=0$, and $M=2\lam(p^2+\lam^2)|3\lam^2-p^2|^{-3/2}$. } \end{example} \begin{example} {\rm Curves given by $x=(e^{\lambda t}, e^{\mu t}, e^{\nu t})$, where the values of $\lam$, $\mu$, $\nu$ are distinct, satisfy \[ x''''=(\lambda+\mu+\nu)x'''-(\lambda\mu+\lambda\nu+\mu\nu)x''+ \lambda\mu\nu x', \] where $\epsilon$ can take either of the values $\pm 1$. When, in addition, $\lambda+\mu+\nu=0$, we have $k=0$, $\epsilon=-1$, $q=\sqrt{\lam^2+\lam\mu+\mu^2}$, and $M=\lam\mu(\lam+\mu)/q^3$. } \end{example} \begin{example} \label{mk} {\rm For curves given by $x = (t, e^{\lam t}, te^{\lam t})$, the equation is \[ x''''=2\lam x''' - \lam^2 x'',\] and $\epsilon=-1$, $k=-\sqrt{2}{\rm sign}(\lam)$, $M=\sqrt{2}{\rm sign}(\lam)$. In particular, the identity $M-k\epsilon=0$ holds.} \end{example} Together with these examples, the curves in the next table exhaust the list of nondegenerate curves with constant general-affine curvatures. Note that the hyperbolic helix in Example \ref{helix} is also listed in the class of curves numbered $1$, and Example \ref{logspiral} is in the class numbered $7$ below. We also list the associated differential equations.
\bigskip { \centering \begin{tabular}{lll} \hline & curves & differential equations \\ \hline 1& $(t,e^{\lambda t}, e^{\mu t})$ & $x''''=(\lambda +\mu)x'''-\lambda\mu x''$ \\ 2& $(e^t, te^t, e^{\lam t})$ & $x''''=(\lam +2)x'''-(2\lam+1)x''+\lam x'$ \\ 3& $(t,{1\over 2}t^2, e^{\lambda t})$ & $x''''=\lambda x'''$ \\ 4& $(e^t, te^t, t^2e^t)$ & $x''''=3x'''-3x''+x'$ \\ 5&$(t, e^t\cos(pt),e^t\sin(pt))$ & $x''''=2x'''-(p^2+1)x''$ \\ 6&$(e^{\lam t},\cosh(pt), \sinh(pt))$ & $x''''=\lam x'''+p^2x''-\lam p^2 x'$ \\ 7&$(e^{\lam t},e^{\mu t}\cos(pt),e^{\mu t}\sin(pt))$ & $x''''=(\lam + 2 {\mu})x'''-(p^2+{\mu (2\lam+\mu)})x''+\lam(p^2+{\mu}^2)x'$ \\ 8 & $(t, {1\over 2}t^2, {1\over 6}t^3)$ & $x''''=0$ \\ \hline \end{tabular} } \subsection{From equiaffine to general-affine for space curves} \label{3equiaffine} Let us pay some attention to the equiaffine theory of space curves in comparison with the general-affine treatment. Recall the choice of the frame $e=\{e_1,e_2,e_3\}$ and the scalars $\ell$ and $m$ in {Section} \ref{subsc:3frames}, \eqref{spccov}. \begin{equation} \label{pfaffequiaffine} d \left(\begin{array}{c} x \\ e_1 \\ e_2 \\ e_3\end{array}\right) = \left( \begin{array}{ccc} \omega & 0 & 0 \\ 0 & \omega & 0 \\ 0 & 0 & \omega \\ -m\omega & -\ell\omega & 0 \end{array}\right) \left(\begin{array}{c} e_1 \\ e_2 \\e_3 \end{array}\right). \end{equation} In the equiaffine treatment, it is enough to consider only the unimodular change of frames: {\it i.e.} $\lambda\nu\gamma=1$ and $e$ takes values in SL$(3,{\bf R})$. By $(\ref{unimo})$, we have $\lambda^6=1$. This means that the scalar $\ell$ is an absolute invariant and the scalar $m$ is an invariant determined up to $\pm 1$ by $(\ref{spccov})$. As was remarked in Section \ref{subsc:3frames}, the scalar $\ell$ is usually called the \textsl{equiaffine curvature} of the space curve and the scalar $m$ is called the \textsl{equiaffine torsion}; we refer to the books \cite{Bl,Sc}. 
The invariant $\ell$ measures how the space curve differs from the osculating cubic parabola, which is defined to be the curve $(t,t^2/2, t^3/6)$ relative to certain affine coordinates. The parameter $t$ for which $\omega=dt$ holds is called an \textsl{equiaffine length parameter.} Then the equation $(\ref{pfaffequiaffine})$ implies that the immersion $x(t)$ satisfies the differential equation \begin{equation} \label{easpcode} x''''+ \ell x'' + mx' =0, \end{equation} which is written in the form of equation $(\ref{x3ode})$ where $a=0$, $b=-\ell$, $c=-m$. By $(\ref{galength})$, the general-affine length parameter $\sigma$ is determined by use of the equiaffine curvature $\ell$ as \[ d\sigma^2 = Ldt^2,\qquad {\rm where}\quad L=\epsilon\ell,\quad \epsilon={\rm sign}(\ell)\] and, by $(\ref{kforspacecurve})$ and $(\ref{gasecond})$, the first and second general-affine curvatures are given as \[ k=L'L^{-3/2}, \qquad M= mL^{-3/2}\] in terms of equiaffine curvature and equiaffine torsion. Relative to the parameter $\sigma$, the map $y(\sigma)=x(t)$ is seen to satisfy the equation $(\ref{yode})$ whose coefficients are given by \begin{eqnarray*} A(\sigma) &=& -{3L'\over L^{3/2}}, \\ B(\sigma) &=& -\epsilon + {L'^2\over 4L^3} - {2L''\over L^2}, \\ C(\sigma) &=& L^{-3/2}\left(-m -{\epsilon L'\over 2} - {L'''\over 2L}+{3L'L''\over 4L^2}-{3L'^3\over 8L^3}\right), \end{eqnarray*} by use of $(\ref{newspca})-(\ref{newspcc})$. The curves for which $\ell$ and $m$ are constant can be listed, by solving the equation $(\ref{easpcode})$, as follows; see \cite[p.75]{Sc}. \medskip \begin{tabular}{lll} 1. $(e^{\lambda t}, e^{\mu t}, e^{-(\lambda+\mu)t})$, & 2. $(te^{\lambda t}, e^{\lambda t}, e^{-2\lambda t})$, & 3. $(e^{-2\alpha t}, e^{\alpha t}\cos(\beta t), e^{\alpha t}\sin(\beta t))$,\\ 4. $(t, \cosh t, \sinh t)$, & 5. $(t, \cos t, \sin t)$, & 6. $(t,{1\over 2}t^2, {1\over 6}t^3)$, \end{tabular} \medskip \noindent where $\lambda$, $\mu$, $\alpha$, $\beta$ are nonzero constants. 
They are homogeneous under equiaffine transformations. The value $m$ is nonzero for the first three and is zero for the last three. The value $\ell$ is $-1$, $1$ and $0$ for the last three, in this order. Except for the last example, the general-affine curvature $k$ is defined, and it is vanishing because $\ell$ is constant. The listed curves are general-affinely equivalent to some of the examples in the previous section. \subsubsection{Extremal equiaffine space curves} W. Blaschke \cite{Bl} gave a variational formula of the equiaffine length and showed that extremal curves of this variation are the curves with $\ell=m=0$; hence, the cubic parabola. This will be seen as follows in the present setting. \begin{theorem}[\cite{Bl}]\label{equi3extremal} A nondegenerate curve in the affine $3$-space is extremal relative to the equiaffine length functional if and only if the equiaffine curvatures $\ell$ and $m$ are vanishing. \end{theorem} \noindent {Proof.} Let $x_{\eta} (t)$ denote a family of curves parametrized by $\eta$ around $\eta=0$ and $x_0 = x$ as before. We assume that $x_{\eta} (t)= x(t)$ outside of a compact set $C$ and $x_0(t)$ is parametrized by equiaffine arc length, and that $\omega$ does not vanish anywhere for all $\eta$. The equiaffine length functional is given by \[ L(\eta) = \int_C \omega(\eta). \] Then the curve $x$ is \textsl{equiaffine extremal} if $ \delta L =0$. Let $e= \{x, e_1, e_2, e_3\}$ be the frame defined as in \eqref{pfaffequiaffine}, and set $\Omega$ to be the $4 \times 3$ coefficient matrix. Since $\{e_1, e_2, e_3\}$ are linearly independent, there exists a $4 \times 3$-matrix $\tau$ such that \[ \delta \begin{pmatrix} x \\ e_1 \\ e_2 \\ e_3 \end{pmatrix} = \tau \begin{pmatrix} e_1 \\ e_2 \\ e_3 \end{pmatrix}. \] We denote the components of $\Omega$ and $\tau$ by $\omega_{\alpha}^{\beta}$ and $\tau_{\alpha}^{\beta}$, respectively, where $0\le \alpha\le 3$ and $1\le \beta\le 3$. 
Since $\delta d e = d \delta e$, we have $ \delta \omega_{\alpha}^{\beta} -d \tau_{\alpha}^\beta = \tau_{\alpha}^{\gamma} \omega_{\gamma}^{\beta} - \omega_{\alpha}^{\gamma} \tau_{\gamma}^{\beta} $; in terms of entries of $\Omega$ and $\tau$, we have \begin{align} \delta \omega - d \tau_{0}^1 &= - (m \tau_0^3 + \tau_1^1) \omega, \label{eqaff0-1}\\ - d \tau_{0}^3 &= (\tau_0^2 - \tau_1^3) \omega, \label{eqaff0-3}\\ \delta \omega - d \tau_{1}^2 &= (\tau_1^1 - \ell \tau_1^3 - \tau_2^2) \omega, \label{eqaff1-2}\\ \delta \omega - d \tau_{2}^3 &= (\tau_2^2 - \tau_3^3) \omega. \label{eqaff2-3} \end{align} First, note that since $\{e_1, e_2, e_3\}$ takes values in ${\rm SL}(3,{\bf R})$, we have \begin{equation}\label{eq:tausum} \tau_1^1 + \tau_2^2 + \tau_3^3 =0. \end{equation} Adding \eqref{eqaff0-1}, \eqref{eqaff1-2} and \eqref{eqaff2-3}, we have \[ 3 \delta \omega - d \tau_0^1 - d \tau_1^2 - d \tau_2^3 = -(m \tau_0^3 + \ell \tau_1^3 + \tau_3^3)\omega. \] On the other hand, subtracting \eqref{eqaff0-1} from \eqref{eqaff2-3}, we have \[ - d \tau_2^3 + d \tau_0^1 = (m \tau_0^3 -2 \tau_3^3) \omega, \] where we use the relation \eqref{eq:tausum}. Thus we have \[ 3 \delta \omega = d \tau_0^1 + d \tau_1^2 + d \tau_2^3 - (m \tau_0^3 + \ell \tau_1^3) \omega - \frac{1}{2} (d \tau_2^3 - d \tau_0^1 + m \tau_0^3 \omega), \] and therefore \[ 3 \delta L = \int_C \left( - \frac{3}{2} m \tau_0^3 - \ell \tau_1^3\right) \omega \] holds. Finally, using \eqref{eqaff0-3} and integration by parts, we obtain \begin{align*} 3 \delta L &= \int_C \left\{ - \frac{3}{2} m \tau_0^3 - \ell \left({ \tau_0^3}' + \tau_0^2\right) \right\}\omega \\ & = \int_C \left\{ \left(- \frac{3}{2} m + \ell' \right) \tau_0^3 - \ell \tau_0^2 \right\}\omega, \end{align*} where $'$ denotes the derivative with respect to the arc length. Since $\tau_0^3$ and $\tau_0^2$ are independent variation vector fields, we have completed the proof. 
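The extremal curves of Theorem \ref{equi3extremal} are, up to equiaffine motions, the cubic parabola $(t,t^2/2,t^3/6)$, which satisfies $x''''=0$. A fourth central difference, which vanishes identically on polynomials of degree at most $3$ (up to floating-point error), confirms this componentwise; the sample point and step size below are arbitrary:

```python
def x(t):
    # the cubic parabola (t, t^2/2, t^3/6)
    return (t, t*t/2.0, t**3/6.0)

def fourth_diff(f, t, h):
    # fourth central difference, approximating h^4 * f''''(t)
    a, b, c, d, e = f(t - 2*h), f(t - h), f(t), f(t + h), f(t + 2*h)
    return tuple(a[i] - 4*b[i] + 6*c[i] - 4*d[i] + e[i] for i in range(3))

res = fourth_diff(x, 0.7, 0.1)   # should vanish, since x'''' = 0
```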
\subsection{From general-affine to projective for space curves} \label{remtoproj} \comment{ Study of projective curves has an old history: a first systematic study was done by Halphen \cite{Ha}, followed by Laguerre and Forsyth, and then, from the viewpoint of projective differential geometry, by Wilczynski \cite{Wi}. Here, we present some invariants of projective space curves due to \cite[p.83-84]{La} and show a relation of the projective length parameter with the $\theta$ in \eqref{theta}; in Appendix, we will make some complementary remarks on this matter. } A space curve in ${\bf P}^3$ is given by the immersion $t \longmapsto x(t)\in {\bf A}^4$ in homogeneous coordinates satisfying an ordinary differential equation of the form \[ x''''+4p_1x'''+6p_2x''+4p_3x'+p_4x=0.\] By multiplying a nonzero factor to the indeterminate $x$, the equation is transformed to the equation \[ x''''+6P_2x''+4P_3x'+P_4x=0,\] where \[ \begin{array}{rcl} P_2 &=& p_2 - p_1^2 -p_1',\\ P_3 &=& p_3 - 3p_1p_2+2p_1^3-p_1'', \\ P_4 &=& p_4-4p_1p_3+6p_1^2p_2-6p_1'p_2-3p_1^4+6p_1^2p_1'+3(p_1')^2-p_1'''. \end{array} \] Then, the two forms $\theta_3dt^3$ and $\theta_4dt^4$, where \begin{equation} \label{thetainv} \begin{array} {rcl} \theta_3 &=& P_3-{3\over 2}P_2',\\ \theta_4 &=& P_4 -{9\over 5}P_2''-{81\over 25}P_2^2-2\theta_3', \end{array} \end{equation} are fundamental invariant forms; see \cite{La}. Provided that $\theta_3\neq0$, the parameter $s$ defined as \[ ds = \theta_3^{1/3} dt\] is called the \textsl{projective length parameter}. Relative to this parameter, we can define projective curvatures; we refer to Appendix \ref{subs-projinv}. When $\theta_3\equiv 0$, the curve $x$ has the special property that the curve formed by the tangent vectors to the curve $x$, which lies in the $5$-dimensional projective space consisting of lines in ${\bf P}^3$, is degenerate in the sense that it belongs to a $4$-dimensional hyperplane. 
Such a curve is said to belong to a \textsl{linear complex} and is called a \textsl{Gewindekurve} in \cite{Bl}. Given a nondegenerate curve $x(t)$ in the affine space ${\bf A}^3$, which is described by the differential equation $(\ref{gaspcode})$, we associate a curve in ${\bf P}^3$ via the mapping $t \longmapsto (1,x(t))\in {\bf A}^4$, where $1$ is a constant function. The projective invariants can then be computed from the definitions above. In fact, a straightforward computation shows that \begin{align} & \theta_3 = {1\over 4}(M-\epsilon k), \label{theta3} \\ & \theta_4 = -{3\over 4}kM-{1\over 2}M'+ {1\over 5}\epsilon k' + {3\over 10}\epsilon k^2 -{9\over 100}. \label{theta4} \end{align} In particular, when $\theta_3\neq 0$, the projective length parameter $s$ is given as above by use of the general-affine curvatures $k$ and $M$. When $M=\epsilon k$, the curve belongs to a linear complex. Example \ref{mk} in the previous subsection provides such a curve. \section{General-affine extremal space curves and the associated differential equations} In Section \ref{subsc:3frames}, we defined a frame for a general-affine space curve under the condition that the curve has no affine inflection point. In this section, we obtain the condition under which a space curve is extremal relative to the length functional and, in particular, show that any curve with constant general-affine curvatures is extremal. Let $x_{\eta}(t)$ be a family of curves parametrized by $\eta$ around $\eta = 0$ and $x_0 = x$. We assume that $x_{\eta} (t) = x(t)$ outside a compact set $C$, and that the invariant $\omega_s$ does not vanish anywhere for all $\eta$. Then $x_{\eta}$ and the corresponding frame $\{E_1, E_2, E_3\}$ satisfy the equation in \eqref{pfaff3}. 
\comment{ \begin{equation} \label{pfaff3-2} d \left(\begin{array}{c} x_\eta \\ E_1 \\ E_2 \\ E_3\end{array}\right) =\Omega \left(\begin{array}{c} E_1 \\ E_2 \\E_3 \end{array}\right), \qquad \Omega =\left( \begin{array}{ccc} \omega_s & 0 & 0 \\ -{1\over 2}k\omega_s & \omega_s & 0 \\ 0 & -k\omega_s & \omega_s \\ - M \omega_s & -\epsilon \omega_s & -{3\over 2}k\omega_s \end{array}\right). \end{equation} } Then the length functional $L$ is given by \[ L(\eta) = \int_C \omega_s (\eta), \] and the curve $x=x_0$ is said to be \textsl{general-affine extremal} if \[ \delta L = \frac{d L}{d \eta}\Big|_{\eta =0}=0 \] holds for any compactly supported deformation of $x$. We now consider the variation \[ \delta \begin{pmatrix} x_{\eta} \\ E_1 \\ E_2 \\ E_3 \end{pmatrix} = \tau \begin{pmatrix} E_1 \\ E_2 \\ E_3 \end{pmatrix}, \quad \quad \tau = (\tau_\alpha^\beta)_{0 \leq \alpha\leq 3, 1\leq \beta \leq 3}. \] Then the compatibility condition $d \delta = \delta d$ implies that \[ \delta \omega_\alpha^\beta- d \tau_\alpha^\beta = \tau_\alpha^\gamma \omega_\gamma^\beta - \omega_\alpha^\gamma \tau_\gamma^\beta, \] where we set the entries of $\Omega$ in \eqref{pfaff3} as $(\omega_\alpha^\beta)_{0 \leq \alpha\leq 3, 1\leq \beta \leq 3}$. 
Then they are explicitly given by \begin{align} \delta \omega_s - d \tau_0^1 &= \left( - \frac{1}{2} k \tau_0^1 - M \tau_0^3 - \tau_1^1 \right) \omega_s, \label{eq:01} \\ - d \tau_0^2 &= (\tau_0^1 -k \tau_0^2 - \epsilon \tau_0^3 -\tau_1^2) \omega_s, \label{eq:02}\\ - d \tau_0^3 &= \left(\tau_0^2 -\frac{3}{2}k \tau_0^3 -\tau_1^3 \right) \omega_s, \label{eq:03}\\ \delta \omega_s - d \tau_1^2 &= \left( \tau_1^1 - \frac{1}{2}k \tau_1^2 - \epsilon \tau_1^3 - \tau_2^2\right) \omega_s, \label{eq:12}\\ - d\tau_1^3 &= \left(\tau_1^2 - k \tau_1^3 -\tau_2^3\right) \omega_s, \label{eq:13}\\ - d \tau_2^1 &= \left(\frac{1}{2} k \tau_2^1 - M \tau_2^3 - \tau_3^1 \right) \omega_s, \label{eq:21}\\ \delta \omega_s - d \tau_2^3 &= \left( \tau_2^2 - \frac{1}{2} k \tau_2^3 - \tau_3^3 \right) \omega_s, \label{eq:23}\\ - \epsilon \delta \omega_s - d \tau_3^2 &= \left( -\epsilon (\tau_3^3 - \tau_2^2 )+ \tau_3^1 + \frac{1}{2} k \tau_3^2 + M \tau_1^2 \right) \omega_s, \label{eq:32}\\ - \frac{1}{2}\delta (k \omega_s) - d \tau_1^1 &= (- M \tau_1^3 -\tau_2^1) \omega_s, \label{eq:11}\\ - \delta (k \omega_s) - d \tau_2^2 &= (\tau_2^1 - \epsilon \tau_2^3 - \tau_3^2) \omega_s, \label{eq:22}\\ - \frac{3}{2}\delta (k \omega_s) - d \tau_3^3 &= (\tau_3^2 + M \tau_1^3 + \epsilon \tau_2^3) \omega_s, \label{eq:33} \\ - \delta (M \omega_s) - d \tau_3^1 &= (k \tau_3^1 + M (\tau_1^1 - \tau_3^3) + \epsilon \tau_2^1) \omega_s. \label{eq:31} \end{align} Adding \eqref{eq:23} and $-\epsilon$\eqref{eq:32}, we have \begin{equation*} 2 \delta \omega_s - d \tau_2^3 + \epsilon d \tau_3^2 = \left(- \frac{1}{2} k (\tau_2^3 + \epsilon \tau_3^2) - \epsilon \tau_3^1 - \epsilon M \tau_1^2\right) \omega_s. \end{equation*} Then, by Stokes' theorem, we have \begin{equation}\label{eq:omega} 2 \delta \int_C \omega_s = \int_C \left(-\frac{1}{2}k( \tau_2^3 + \epsilon \tau_3^2) - \epsilon \tau_3^1 - \epsilon M \tau_1^2 \right) \omega_s. 
\end{equation} Next, from \eqref{eq:11} $+$ \eqref{eq:22} $-$\eqref{eq:33}, we get \begin{equation*} (- {\tau_2^2} -{\tau_1^1} +{\tau_3^3})' = -2 (\epsilon \tau_2^3 + \tau_3^2) - 2 M \tau_1^3, \end{equation*} which is written as \begin{equation}\label{eq:tau2332} -\frac{1}{2}k( \tau_3^2 + \epsilon \tau_2^3) = - \frac{1 }{4}\epsilon k ({\tau_1^1} + {\tau_2^2} - {\tau_3^3})' + \frac{1}{2} \epsilon k M \tau_1^3. \end{equation} Here $'$ denotes the derivative $d/\omega_s$, that is, $df=f'\,\omega_s$. On the other hand, by \eqref{eq:21}, \begin{equation}\label{eq:21-2} \tau_3^1 = {\tau_2^1}' + \frac{1}{2} k \tau_2^1 - M \tau_2^3 \end{equation} and \eqref{eq:22}$+$\eqref{eq:33}$-5$\eqref{eq:11} implies that \begin{equation}\label{eq:223311} (-{\tau_2^2} -{\tau_3^3} + 5{\tau_1^1})' = 6 (\tau_2^1 + M \tau_1^3). \end{equation} Then by use of \eqref{eq:13} and \eqref{eq:223311}, \eqref{eq:21-2} can be rephrased as \begin{equation}\label{eq:tau31} \tau_3^1 = {\tau_2^1}' + \frac{1}{12} k ( 5 \tau_1^1 - \tau_2^2 - \tau_3^3)' - M\left({\tau_1^3}' + \tau_1^2 -\frac{1}{2}k \tau_1^3\right). \end{equation} Finally, \eqref{eq:tau2332} and \eqref{eq:tau31} imply that \begin{align} 2 \delta \int_C \omega_s &= \frac{\epsilon}{12} \int_C \left(- k \left(8{\tau_1^1} + 2 \tau_2^2 - 4 \tau_3^3\right)' + 12 M {\tau_1^3}'\right) \omega_s \nonumber \\ & = \frac{\epsilon}{12} \int_C \left(k' \left(8\tau_1^1 + 2 \tau_2^2 - 4 \tau_3^3\right) + 12 M {\tau_1^3}'\right) \omega_s. \label{eq:domegas} \end{align} Here we use integration by parts for the second equality. We now compute $-6$\eqref{eq:01}$+2$\eqref{eq:12}$+4$\eqref{eq:23}. A straightforward computation shows that \begin{align} 8 \tau_1^1 + 2 \tau_2^2 -4 \tau_3^3 &= (6 \tau_0^1- 2\tau_1^2 -4\tau_2^3)' -3 k \tau_0^1 -6 M \tau_0^3 + k \tau_1^2 +2 \epsilon \tau_1^3 + 2 k \tau_2^3 \nonumber \\ & = 6 X' -4 Y' -3 k X + 2 k Y -6 M \tau_0^3 + 2 \epsilon \tau_1^3. \nonumber \end{align} Here $X = \tau_0^1 - \tau_1^2$ and $Y =\tau_2^3 - \tau_1^2$. 
Thus \eqref{eq:domegas} can again be rephrased, using integration by parts, as \begin{align} 24 \epsilon \delta \int_C \omega_s = \int_C \left\{ (-6 k''-3 k' k) X + (4 k'' + 2 k' k) Y \right. & -6 k' M \tau_0^3 \nonumber \\ &\left.+ (2 \epsilon k'- 12 M') \tau_1^3\right\} \omega_s. \label{eq:domegas-f} \end{align} Then by \eqref{eq:13} and \eqref{eq:02} we have \[ X = \tau_0^1 - \tau_1^2 = -{\tau_0^2}' + k \tau_0^2 + \epsilon \tau_0^3, \quad Y = \tau_2^3 - \tau_1^2 = {\tau_1^3}' - k \tau_1^3. \] Finally, making use of \eqref{eq:03} to eliminate the $\tau_1^3$-term, we can see that the $\tau_0^2$ part of the integrand of \eqref{eq:domegas-f} is computed as \begin{equation}\label{eq:tau02part} -10 k''' -15 k'' k -5 k' k^2 -5 {k'}^2 +2 \epsilon k' -12 M'. \end{equation} Similarly, the $\tau_0^3$ part of the integrand of \eqref{eq:domegas-f} can be computed as \begin{equation}\label{eq:tau03part} 4 k'''' + 12 k''' k + (11 k^2 + 10 k' -8 \epsilon ) k'' + 7 {k'}^2 k -6 \epsilon k' k + 3 k' k^3 -6 k' M + 12 M'' + 18 M' k. \end{equation} \begin{theorem} \label{spaceextremal} A {nondegenerate} space curve without affine inflection point is general-affine extremal if and only if the following pair of ordinary differential equations is satisfied$:$ \begin{align} &k''' + \frac{3}{2}k k'' +\frac{1}{2} {k'}^2 + \frac{1}{2}k^2 k' - \frac{1}{5} \epsilon k' + \frac{6}{5} M'=0 \label{eq:tau02part2}\\ \intertext{and} &{k}'' + \frac{2}{3} k' k + \frac{5 }{6}\epsilon k' M -\frac{3 }{2} \epsilon k M' - \epsilon {M}''=0. \label{eq:tau03part2} \end{align} In particular, all space curves which have constant general-affine curvatures are general-affine extremal. \end{theorem} \noindent Proof. Inserting \eqref{eq:tau02part}$=0$ into \eqref{eq:tau03part}$=0$, we have the differential equation \eqref{eq:tau03part2}. \begin{example} \label{mconst} Extremal curves with constant $M$. {\rm First, assume $M=0$. 
Then \eqref{eq:tau03part2} can easily be integrated, giving \[ k(t)= -3a\tan(at)\quad {\rm and}\quad 3a\tanh(at),\] where $a$ is a constant. Inserting this expression into \eqref{eq:tau02part2}, we get solutions \[ k(t)= -3a\tan(at),\quad a=\sqrt{2/5} \quad {\rm when} \quad \epsilon=1\] and \[ k(t)= 3a\tanh(at),\quad a=\sqrt{2/5} \quad {\rm when} \quad \epsilon=-1.\] Second, assume $M$ is a nonzero constant. Then \[k(t)=-\frac{5}{4}\epsilon M + 3a \tanh(at)\] is a solution of \eqref{eq:tau03part2} and it satisfies \eqref{eq:tau02part2} if and only if \[ a^2(80 a^2 - 125 \epsilon^2 M^2 + 32\epsilon)=0.\] Thus, except for a constant solution, we have the above $k(t)$, where $a=\sqrt{125 M^2-32}/(4 \sqrt{5})$ when $\epsilon=1$ and $a=\sqrt{125M^2+32}/(4\sqrt{5})$ when $\epsilon=-1$. If we start instead with $-\frac{5}{4}\epsilon M-3a\tan(at)$, another solution of \eqref{eq:tau03part2}, then $a$ turns out to be purely imaginary and we get the same curvature function. } \end{example} We here recall the invariant $\theta_3$ given by the equation $(\ref{theta3})$: \begin{equation*} \theta_3 = {1\over 4}(M - \epsilon k). \end{equation*} Then, the differential equations $(\ref{eq:tau02part2})$ and $(\ref{eq:tau03part2})$ are written as \begin{align} &k'''+{3\over 2}kk''+{1\over 2}k'^2+{1\over 2}k^2k'+\epsilon k' + {24\over 5}\theta_3' =0, \\ \noalign{\smallskip} & \theta_3'' + {3\over 2}k\theta_3' - {5\over 6}k'\theta_3 =0. \end{align} Since $\theta_3=0$ characterizes curves belonging to a linear complex (see Section \ref{remtoproj}), in view of the equation \eqref{eq:variationformula} we have the following corollary. \begin{corollary} \label{lincomplex} A general-affine extremal space curve belongs to a linear complex if and only if $M = \epsilon k$ and $k$ satisfies \eqref{eq:variationformula}. 
\end{corollary} Since the differential equation \eqref{eq:variationformula} is the equation for plane extremality, we have the following method of constructing an extremal space curve belonging to a linear complex: \begin{corollary} \label{planetospace} Let $k$ be the general-affine curvature of an extremal plane curve without affine inflection point. Let $\epsilon$ denote the sign of this curve. Then, the set $\{k, M, \epsilon\}$, where $M=\epsilon k$, defines a space curve that is general-affine extremal and belongs to a linear complex. \end{corollary} \def{\rm erf}{{\rm erf}} Thanks to Example \ref{planeextremalexample}, we can give concrete examples of such curves in Corollary \ref{planetospace}. The explicit integration of the associated differential equation can be carried out with computer assistance. For example, when $\epsilon=-1$ and $k(t)=\sqrt{2}+3/t$, we get the curve $(x_1, x_2, x_3)$ for $t>0$, where \begin{align*} x_1 &= {1\over t}, \\ x_2 &= 2^{1/4}\sqrt{\pi}\ {\rm erf}\left({\sqrt{3t} \over2^{1/4} }\right) - {1- \sqrt{2}\ 3t \over (3t)^{3/2}} \exp\left(- {3\over \sqrt{2}} t\right), \\ x_3 &= \int \left\{\frac{6}{t^{2}} \int H(t) \ dt +{1 \over t^{5/2}(\sqrt{2}+6 t)} \exp\left(- {3\over \sqrt{2}}t\right) \right\} \ dt, \end{align*} with ${\rm erf}(x) = \displaystyle {2 \over \sqrt{\pi}} \int_0^{x} e^{-t^2} dt$ and $H(t)=\displaystyle \frac{1}{\sqrt{t}(\sqrt{2}+6t)^{2}} \exp\left(-{3\over \sqrt{2}} t\right) $. \newpage
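The curvature function in Example \ref{mconst} with $M=0$ and $\epsilon=1$ can be verified against the extremal equations \eqref{eq:tau02part2} and \eqref{eq:tau03part2} using the exact derivatives of $k(t)=-3a\tan(at)$. A minimal numerical sketch (the sample points are arbitrary within the domain $|at|<\pi/2$):

```python
import math

a = math.sqrt(2.0/5.0)      # the value forced by the first extremal equation when eps = 1
eps = 1.0                   # M = 0 throughout this check

def derivs(t):
    # exact derivatives of k(t) = -3a tan(at)
    s2 = 1.0/math.cos(a*t)**2                 # sec^2(at)
    tn = math.tan(a*t)
    k = -3*a*tn
    k1 = -3*a*a*s2
    k2 = -6*a**3*s2*tn
    k3 = -6*a**4*s2*(s2 + 2*tn*tn)
    return k, k1, k2, k3

residuals = []
for t in (0.1, 0.4, 0.9):
    k, k1, k2, k3 = derivs(t)
    r1 = k3 + 1.5*k*k2 + 0.5*k1*k1 + 0.5*k*k*k1 - 0.2*eps*k1   # first equation, M' = 0
    r2 = k2 + (2.0/3.0)*k1*k                                    # second equation, M = 0
    residuals += [abs(r1), abs(r2)]
```

The second residual vanishes for any constant $a$; the first vanishes precisely because $a^2=2/5$.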
\section{Introduction}\label{sec:Intro} The main goal of this paper is to develop a Wiener-Hopf type factorization for finite-state time-inhomogeneous Markov chains. In order to motivate this goal, we first provide a brief account of the Wiener-Hopf factorization for time-homogeneous Markov chains based on \cite{BarlowRogersWilliams1980}. Towards this end, consider a finite state space $\mathbf{E}$ with cardinality $m$, and let $\mathsf{\Lambda}$ be a sub-Markovian generator matrix of dimension $m\times m$, that is, $\mathsf{\Lambda}(i,j)\geq 0$, $i\neq j$, and $\sum_{j\in\mathbf{E}}\mathsf{\Lambda}(i,j)\leq 0$. Next, let $v$ be a real valued function on $\mathbf{E}$, such that $v(i)\neq 0$ for all $i\in\mathbf{E}$, and define \begin{align*} \mathbf{E}_{\pm}:=\left\{i\in\mathbf{E}:\,\pm\,v(i)>0\right\}. \end{align*} We also denote by $m_{\pm}$ the cardinality of $\mathbf{E}_{\pm}$, and we let $\mathsf{V}:=\textrm{diag}\{v(i)\,:\,i\in\mathbf{E}\}$ be the $m\times m$ diagonal matrix. Finally, let $\mathsf{I}$ and $\mathsf{I}^{\pm}$ denote the identity matrices of dimensions $m\times m$ and $m_{\pm}\times m_{\pm}$, respectively. Using probabilistic methods, the following result was proved in \cite{BarlowRogersWilliams1980}. 
\begin{theorem}[{\cite[Theorem I]{BarlowRogersWilliams1980}}]\label{thm:BRW80WH1} For any $c>0$, there exists a unique pair of matrices $(\mathsf{\Pi}^{+}_{c},\mathsf{\Pi}^{-}_{c})$ of dimensions $m_{-}\times m_{+}$ and $m_{+}\times m_{-}$, respectively, such that the matrix \begin{align*} \mathsf{S}=\begin{pmatrix} \mathsf{I}^{+} & \mathsf{\Pi}^{-}_{c} \\ \mathsf{\Pi}^{+}_{c} & \mathsf{I}^{-} \end{pmatrix} \end{align*} is invertible and the following factorization holds true \begin{align}\label{eq:TimeHomoWH} \mathsf{V}^{-1}(\mathsf{\Lambda}-c\,\mathsf{I})=\mathsf{S}\begin{pmatrix} \mathsf{Q}^{+}_{c} & 0 \\ 0 & -\mathsf{Q}^{-}_{c} \end{pmatrix}\mathsf{S}^{-1}, \end{align} where $\mathsf{Q}^{\pm}_{c}$ are $m_{\pm}\times m_{\pm}$ sub-Markovian generator matrices. Moreover, $\mathsf{\Pi}^{\pm}_{c}$ are strictly substochastic. \end{theorem} The right-hand side of \eqref{eq:TimeHomoWH} is said to constitute the Wiener-Hopf factorization of the matrix $\mathsf{V}^{-1}(\mathsf{\Lambda}-c\,\mathsf{I})$. While the factorization \eqref{eq:TimeHomoWH} is algebraic in its nature, it admits an important probabilistic interpretation, which leads to efficient computation of certain useful expectations. More precisely, let $X$ be a time-homogeneous Markov chain taking values in $\mathbf{E}\cup\{\partial\}$, where $\partial$ is a coffin state, with generator $\mathsf{\Lambda}$. For $t\geq 0$, we define the additive functional \begin{align*} \phi(t):=\int_{0}^{t}v(X_{u})\,du, \end{align*} and two stopping times \begin{align*} \tau^{\pm}_{t}:=\inf\left\{u\geq 0:\,\pm\,\phi(u)>t\right\}. \end{align*} \begin{theorem}[{\cite[Theorem II]{BarlowRogersWilliams1980}}]\label{thm:BRW80WH2} For any $i\in\mathbf{E}_{\mp}$ and $j\in\mathbf{E}_{\pm}$, \begin{align}\label{eq:PicPlusMinus} \mathbb{E}\bigg(e^{-c\,\tau_{0}^{\pm}}\1_{\big\{X_{\tau_{0}^{\pm}}=j\big\}}\,\Big|\,X_{0}=i\bigg)=\mathsf{\Pi}_{c}^{\pm}(i,j). 
\end{align} For any $i,j\in\mathbf{E}_{\pm}$ and $t\geq 0$, \begin{align}\label{eq:QcPlusMinus} \mathbb{E}\bigg(e^{-c\,\tau_{t}^{\pm}}\1_{\big\{X_{\tau_{t}^{\pm}}=j\big\}}\,\Big|\,X_{0}=i\bigg)=e^{t\,\mathsf{Q}_{c}^{\pm}}(i,j). \end{align} \end{theorem} Both Theorems \ref{thm:BRW80WH1} and \ref{thm:BRW80WH2} have been studied for more general classes of Markov processes, as well as for various types of stopping times that naturally occur in applications {(cf. \cite{KennedyWilliams1990}, \cite{AvramPistoriusUsabel2003}, \cite{Williams2008}, \cite{MijatovicPistorius2011}, and references therein)}. However, in all these studies the Markov processes have been assumed to be time-homogeneous. As it turns out, the time-inhomogeneous case is more intricate, and direct (naive) generalizations or applications of the time-homogeneous theory to the time-inhomogeneous case cannot work in principle. Specifically, let now $X$ be a finite-state time-inhomogeneous Markov chain taking values in $\mathbf{E}\cup\{\partial\}$, with generator function $\mathsf{\Lambda}_{s}$, $s\geq 0$. The first observation that one needs to make is that the Wiener-Hopf factorization of the matrix $\mathsf{V}^{-1}(\mathsf{\Lambda}_{s}-c\mathsf{I})$ can be done for each $s\geq 0$ separately, exactly as described in Theorem \ref{thm:BRW80WH1}. However, the resulting matrices $\mathsf{\Pi}^{\pm}_{c}(s)$ and $\mathsf{Q}^{\pm}_{c}(s)$, $s\geq 0$, are not useful for computing the expectations of the form \begin{align*} \mathbb{E}\bigg(e^{-c\,\tau^{\pm}_{t}(s)}\1_{\big\{X_{\tau^{\pm}_{t}(s)}=j\big\}}\,\Big|\,X_{s}=i\bigg), \end{align*} where \begin{align*} \tau^{\pm}_{t}(s):=\inf\left\{u\geq s:\,\pm\int_{s}^{s+u}v(X_{r})\,dr>t\right\}. \end{align*} This makes the study of the time-inhomogeneous case a highly nontrivial and novel enterprise. 
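In the time-homogeneous two-state case ($m_{+}=m_{-}=1$) the factorization of Theorem \ref{thm:BRW80WH1} can be computed directly from the eigenstructure of $\mathsf{V}^{-1}(\mathsf{\Lambda}-c\,\mathsf{I})$: this matrix has one negative and one positive eigenvalue; normalizing the corresponding eigenvectors as $(1,\mathsf{\Pi}^{+}_{c})^{T}$ and $(\mathsf{\Pi}^{-}_{c},1)^{T}$ yields the columns of $\mathsf{S}$, and taking $\mathsf{Q}^{\pm}_{c}$ to be, respectively, the negative eigenvalue and the negative of the positive eigenvalue makes both sub-Markovian, as the theorem requires. A minimal numerical sketch (the generator rates $a$, $b$, the killing rate $c$, and $v=(1,-1)$ are arbitrary illustrative choices):

```python
import math

# two-state chain: E = {1, 2}, v(1) = 1 > 0, v(2) = -1 < 0
a, b, c = 1.0, 1.0, 1.0                  # jump rates and discount rate (illustrative)
H = [[-(a + c), a],                      # H = V^{-1} (Lambda - c I)
     [-b, b + c]]

tr = H[0][0] + H[1][1]
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
disc = math.sqrt(tr*tr - 4*det)
lam_neg = (tr - disc)/2                  # the negative eigenvalue
lam_pos = (tr + disc)/2                  # the positive eigenvalue

# eigenvector (1, Pi_plus) for lam_neg and (Pi_minus, 1) for lam_pos
Pi_plus = (lam_neg - H[0][0])/H[0][1]
Pi_minus = (lam_pos - H[1][1])/H[1][0]

Q_plus = lam_neg                         # sub-Markovian generator on E_+
Q_minus = -lam_pos                       # sub-Markovian generator on E_-

# reconstruct H = S diag(Q_plus, -Q_minus) S^{-1}
S = [[1.0, Pi_minus], [Pi_plus, 1.0]]
D = [[Q_plus, 0.0], [0.0, -Q_minus]]
detS = 1.0 - Pi_plus*Pi_minus
Sinv = [[1.0/detS, -Pi_minus/detS], [-Pi_plus/detS, 1.0/detS]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

H_rebuilt = matmul(matmul(S, D), Sinv)
```

For $a=b=c=1$ this gives $\mathsf{\Pi}^{\pm}_{c}=2-\sqrt{3}$ and $\mathsf{Q}^{\pm}_{c}=-\sqrt{3}$, and the substochasticity $0<\mathsf{\Pi}^{\pm}_{c}<1$ asserted in the theorem is visible directly.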
The research effort in this direction originated in \cite{BieCiaGonHua2018}; the present work continues that endeavor. \section{Setup and the main goal of the paper}\label{sec:Setup} \subsection{Preliminaries}\label{subsec:Notations} Throughout this paper we let $\mathbf{E}$ be a finite set, with $|\mathbf{E}|=m>1$. We define $\overline{\mathbf{E}}:=\mathbf{E}\cup\{\partial\}$, where $\partial$ denotes the coffin state isolated from $\mathbf{E}$. Let $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$, where $\mathbb{R}_{+}:=[0,\infty)$, be a family of $m\times m$ generator matrices, i.e., their off-diagonal elements are non-negative, and the entries in their rows sum to zero. We additionally define $\mathsf{\Lambda}_{\infty}:=\mathsf{0}$, the $m\times m$ matrix with all entries equal to zero. \medskip We make the following standing assumption: \begin{assumption}\label{assump:GenLambda}\mbox{} \vspace{-0.5em} \begin{itemize} \item [(i)] There exists a universal constant $K\in(0,\infty)$, such that $|\mathsf{\Lambda}_{s}(i,j)|\leq K$, for all $i,j\in\mathbf{E}$ and $s\in\mathbb{R}_{+}$. \item [(ii)] $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$, considered as a mapping from $\mathbb{R}_{+}$ to the set of $m\times m$ generator matrices, is continuous with respect to $s$. \end{itemize} \end{assumption} Let $v:\overline{\mathbf{E}}\rightarrow\mathbb{R}$ with $v(i)\neq 0$ for any $i\in\mathbf{E}$ and $v(\partial)=0$, $\mathsf{V}:=\text{diag}\{v(i):i\in\mathbf{E}\}$, $\overline{v}:=\max_{i\in\mathbf{E}}|v(i)|$, and $\underline{v}:=\min_{i\in\mathbf{E}}|v(i)|$. We will use the following partition of the set $\mathbf{E}$ \begin{align*} \mathbf{E}_{+}:=\left\{i\in\mathbf{E}:\,v(i)>0\right\}\quad\text{and}\quad\mathbf{E}_{-}:=\left\{i\in\mathbf{E}:\,v(i)<0\right\}. 
\end{align*} We assume that both $\mathbf{E}_{+}$ and $\mathbf{E}_{-}$ are non-empty, and that the indices of the first $m_+=|\mathbf{E}_{+}|$ (respectively, last $m_-=|\mathbf{E}_{-}|$) rows and columns of any $m\times m$ matrix correspond to the elements in $\mathbf{E}_{+}$ (respectively, $\mathbf{E}_{-}$). Accordingly, we write $\mathsf{\Lambda}_{s}$ and $\mathsf{V}$ in the block form \begin{align}\label{eq:MatrixBlocks} \mathsf{\Lambda}_{s}=\bordermatrix{~ & \mathbf{E}_{+} & \mathbf{E}_{-} \cr \mathbf{E}_{+} & \mathsf{A}_{s} & \mathsf{B}_{s} \cr \mathbf{E}_{-} & \mathsf{C}_{s} & \mathsf{D}_{s} \cr},\quad\mathsf{\mathsf{V}}=\bordermatrix{~ & \mathbf{E}_{+} & \mathbf{E}_{-} \cr \mathbf{E}_{+} & \mathsf{V}_{+} & \mathsf{0} \cr \mathbf{E}_{-} & \mathsf{0} & \mathsf{V}_{-} \cr}. \end{align} In what follows we let $\mathscr{X}:=\mathbb{R}_{+}\times\mathbf{E}$ and $\mathscr{X}_{\pm}:=\mathbb{R}_{+}\times\mathbf{E}_{\pm}$. The Borel $\sigma$-field on $\mathscr{X}$ (respectively, $\mathscr{X}_{\pm}$) is denoted by $\mathcal{B}(\mathscr{X}):=\mathcal{B}(\mathbb{R}_{+})\otimes 2^{\mathbf{E}}$ (respectively, $\mathcal{B}(\mathscr{X}_{\pm}):=\mathcal{B}(\mathbb{R}_{+})\otimes 2^{\mathbf{E}_{\pm}}$). Accordingly, we let $\overline{\mathscr{X}}:=\mathscr{X}\cup(+\infty,\partial)$ (respectively, $\overline{\mathscr{X}_{\pm}}:=\mathscr{X}_{\pm}\cup(+\infty,\partial)$) be the one-point completion of $\mathscr{X}$ (respectively, $\mathscr{X}_{\pm}$), and let $\mathcal{B}(\overline{\mathscr{X}}):=\sigma(\mathcal{B}(\mathscr{X})\cup\{(\infty,\partial)\})$ (respectively, $\mathcal{B}(\overline{\mathscr{X}_{\pm}}):=\sigma(\mathcal{B}(\mathscr{X}_{\pm})\cup\{(\infty,\partial)\})$). A pair $(s,i)\in \mathscr{X}$ consists of the time variable $s$ and the space variable $i$. 
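The block decomposition \eqref{eq:MatrixBlocks} is purely index bookkeeping once the states are ordered with $\mathbf{E}_{+}$ first. For a three-state illustration with $\mathbf{E}_{+}=\{1,2\}$ and $\mathbf{E}_{-}=\{3\}$ it reads as follows (the generator entries are arbitrary illustrative values):

```python
# states ordered so that E_+ = {1, 2} comes first and E_- = {3} last
v = [1.0, 2.0, -1.0]                        # v > 0 on E_+, v < 0 on E_-
Lam = [[-3.0, 1.0, 2.0],                    # a generator matrix: rows sum to zero,
       [0.5, -1.5, 1.0],                    # off-diagonal entries are nonnegative
       [2.0, 1.0, -3.0]]

m_plus = sum(1 for x in v if x > 0)         # = |E_+|
A = [row[:m_plus] for row in Lam[:m_plus]]  # E_+ x E_+ block
B = [row[m_plus:] for row in Lam[:m_plus]]  # E_+ x E_- block
C = [row[:m_plus] for row in Lam[m_plus:]]  # E_- x E_+ block
D = [row[m_plus:] for row in Lam[m_plus:]]  # E_- x E_- block
```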
\medskip We will also use the following notations for various spaces of real-valued functions: \begin{itemize} \item $L^{\infty}(\overline{\mathscr{X}})$ is the space of $\mathcal{B}(\overline{\mathscr{X}})$-measurable, and bounded functions $f$ on $\overline{\mathscr{X}}$, with $f(+\infty,\partial)=0$. \item $C_{0}(\overline{\mathscr{X}})$ is the space of functions $f\in L^{\infty}(\overline{\mathscr{X}})$ such that $f(\cdot,i)\in C_{0}(\mathbb{R}_{+})$ for all $i\in\mathbf{E}$, where $C_{0}(\mathbb{R}_{+})$ is the space of continuous functions vanishing at infinity. \item $C_{c}(\overline{\mathscr{X}})$ is the space of functions $f\in L^{\infty}(\overline{\mathscr{X}})$ such that $f(\cdot,i)\in C_{c}(\mathbb{R}_{+})$ for all $i\in\mathbf{E}$, where $C_{c}(\mathbb{R}_{+})$ is the space of continuous functions with compact support. \item $C_{0}^{1}(\overline{\mathscr{X}})$ is the space of functions $f\in C_{0}(\overline{\mathscr{X}})$ such that, for any $i\in\mathbf{E}$, $\partial f(\cdot,i)/\partial s$ exists and belongs to $C_{0}(\mathbb{R}_{+})$ ({for convenience,} we stipulate that $\partial f(\infty,\partial)/\partial s=0$). \item $C_{c}^{1}(\overline{\mathscr{X}})$ is the space of functions $f\in C_{c}(\overline{\mathscr{X}})$ such that, for any $i\in\mathbf{E}$, $\partial f(\cdot,i)/\partial s$ exists ({for convenience,} we stipulate that $\partial f(\infty,\partial)/\partial s=0$). \end{itemize} Sometimes $\overline{\mathscr{X}}$ will be replaced by $\overline{\mathscr{X}_{+}}$ or $\overline{\mathscr{X}_{-}}$ when the functions are defined on these spaces, in which case the set $\mathbf{E}$ will be replaced by $\mathbf{E}_{+}$ or $\mathbf{E}_{-}$, respectively, in the above definitions. Note that each function on $\overline{\mathscr{X}}$ can be viewed as a time-dependent vector of size $m$, which can be split into a time-dependent vector of size $m_{+}$ (a function on $\mathscr{X}_{+}$) and a time-dependent vector of size $m_{-}$ (a function on $\mathscr{X}_{-}$). 
\medskip We conclude this section by introducing some more notations, this time for operators: \begin{itemize} \item $\widetilde{\mathsf{\Lambda}}:L^{\infty}(\overline{\mathscr{X}})\rightarrow L^{\infty}(\overline{\mathscr{X}})$ is the multiplication operator associated with $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$, defined by \begin{align}\label{eq:DefTildeLambda} (\widetilde{\mathsf{\Lambda}}\,g)(s,i):=(\mathsf{\Lambda}_{s}\,g(s,\cdot))(i),\quad (s,i)\in\overline{\mathscr{X}}. \end{align} \item Similarly, we define multiplication operators $\widetilde{\mathsf{A}}:L^{\infty}(\overline{\mathscr{X}_{+}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{+}})$, $\widetilde{\mathsf{B}}:L^{\infty}(\overline{\mathscr{X}_{-}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{+}})$, $\widetilde{\mathsf{C}}:L^{\infty}(\overline{\mathscr{X}_{+}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{-}})$, and $\widetilde{\mathsf{D}}:L^{\infty}(\overline{\mathscr{X}_{-}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{-}})$, associated with the blocks $(\mathsf{A}_{s})_{s\in\mathbb{R}_{+}}$, $(\mathsf{B}_{s})_{s\in\mathbb{R}_{+}}$, $(\mathsf{C}_{s})_{s\in\mathbb{R}_{+}}$, and $(\mathsf{D}_{s})_{s\in\mathbb{R}_{+}}$ given in \eqref{eq:MatrixBlocks}, respectively. \end{itemize} Given the above, for any\footnote{The superscript $T$ will be used to denote the transpose of a vector or matrix.} $g=(g^{+},g^{-})^{T}\in L^{\infty}(\overline{\mathscr{X}})$, where $g^{\pm}\in L^{\infty}(\overline{\mathscr{X}_{\pm}})$, we have \begin{align}\label{eq:TildeLambdaBlocks} \widetilde{\mathsf{\Lambda}}\,g=\begin{pmatrix} \widetilde{\mathsf{A}}\,g^{+}+\widetilde{\mathsf{B}}\,g^{-} \\ \widetilde{\mathsf{C}}\,g^{+}+\widetilde{\mathsf{D}}\,g^{-} \end{pmatrix}. 
\end{align} \subsection{A time-inhomogeneous Markov family corresponding to $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$ and related passage times}\label{subsec:Markov} We start by introducing a time-inhomogeneous Markov family corresponding to $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$. Then, we proceed with a study of some passage times related to this family. \subsubsection{A time-inhomogeneous Markov family $\mathcal{M}^{*}$ corresponding to $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$}\label{subsubsec:Markov} We take $\Omega^{*}$ as the collection of $\mathbf{E}$-valued functions $\omega^{*}$ on $\mathbb{R}_{+}$, and $\mathscr{F}^{*}:=\sigma\{X^{*}_{t},\,t\in\mathbb{R}_{+}\}$, where $X^{*}$ is the coordinate mapping $X^{*}_\cdot(\omega^{*}):=\omega^{*}(\cdot)$. Sometimes we may need the value of $\omega^{*}\in\Omega^{*}$ at infinity, and in such a case we set $X_{\infty}^{*}(\omega^{*})=\omega^{*}(\infty)=\partial$, for any $\omega^{*}\in\Omega^{*}$. We endow the space $(\Omega^{*},\mathscr{F}^{*})$ with a family of filtrations $\mathbb{F}_{s}^{*}:=\{\mathscr{F}^{s,*}_{t},\,t\in[s,\infty]\}$, $s\in\overline{\mathbb{R}}_{+}$, where, for $s\in\mathbb{R}_{+}$, \begin{align*} \mathscr{F}^{s,*}_{t}:=\bigcap_{r>t}\sigma\left(X^{*}_{u},\,\,u\in[s,r]\right),\,\,\,t\in[s,\infty);\quad\mathscr{F}^{s,*}_{\infty}:=\sigma\bigg(\bigcup_{t\geq s}\mathscr{F}^{s,*}_{t}\bigg), \end{align*} and $\mathscr{F}^{\infty,*}_{\infty}:=\{\emptyset,\Omega^{*}\}$. We denote by \begin{align*} \mathcal{M}^{*}:=\big\{\big(\Omega^{*},\mathscr{F}^{*},\mathbb{F}_{s}^{*},(X^{*}_t)_{t\in[s,\infty]},\mathbb{P}_{s,i}^{*}\big),\,(s,i)\in\overline{\mathscr{X}}\big\} \end{align*} a canonical {\it time-inhomogeneous} Markov family.
That is, \begin{itemize} \item $\mathbb{P}^{*}_{s,i}$ is a probability measure on $(\Omega^{*},\mathscr{F}^{s,*}_{\infty})$ for $(s,i)\in\overline{\mathscr{X}}$; \item the function $P^{*}:\overline{\mathscr{X}}\times\overline{\mathbb{R}}_{+}\times 2^{\overline{\mathbf{E}}}\rightarrow [0,1]$ defined for $0\leq s\leq t\leq\infty$ as \begin{align*} P^{*}(s,i,t,B):=\mathbb{P}^{*}_{s,i}\!\left(X^{*}_t\in B\right) \end{align*} is measurable with respect to $i$ for any fixed $s\leq t$ and $B\in 2^{\overline{\mathbf{E}}}$; \item $\mathbb{P}^{*}_{s,i}(X^{*}_{s}=i)=1$ for any $(s,i)\in\overline{\mathscr{X}}$; \item for any $(s,i)\in\overline{\mathscr{X}}$, $s\leq t\leq r\leq\infty$, and $B\in 2^{\overline{\mathbf{E}}}$, it holds that \begin{align*} \mathbb{P}^{*}_{s,i}\!\left(X^{*}_r\in B\,|\,\mathscr{F}^{s,*}_{t}\right)=\mathbb{P}^{*}_{t,X^{*}_{t}}\!\left(X^{*}_r\in B\right),\quad\mathbb{P}^{*}_{s,i}-\text{a.s.}\,. \end{align*} \end{itemize} Let $\mathsf{U}^{*}:=(\mathsf{U}_{s,t}^{*})_{0\leq s\leq t<\infty}$ be the evolution system (cf. \cite{Bottcher2014}) corresponding to $\mathcal{M}^{*}$ defined by \begin{align}\label{eq:DefEvolSytXStar} \mathsf{U}_{s,t}^{*}f(i):=\mathbb{E}_{s,i}^{*}\left(f(X_{t}^{*})\right),\quad 0\leq s\leq t<\infty,\quad i\in\mathbf{E}, \end{align} for all functions (column vectors) $f:\mathbf{E}\rightarrow\mathbb{R}$.\footnote{Note that for $t\in\mathbb{R}_{+}$, $X_{t}^{*}$ takes values in $\mathbf{E}$.} We assume that \begin{align}\label{eq:DefGenXStar} \lim_{h\downarrow 0}\frac{1}{h}\left(\mathsf{U}_{s,s+h}^{*}f(i)-f(i)\right)=\mathsf{\Lambda}_{s}f(i),\quad\text{for any }\,(s,i)\in\mathscr{X}, \end{align} for all $f:\mathbf{E}\rightarrow\mathbb{R}$. It is well known that a standard version of the Markov family $\mathcal{M}^{*}$ (cf. \cite[Definition I.6.6]{GikhmanSkorokhod2004}) can be constructed. 
This is done by first constructing, via the Peano-Baker series, the evolution system $\mathsf{U}^{*}=(\mathsf{U}^{*}_{s,t})_{0\leq s\leq t<\infty}$ that solves \begin{align}\label{eq:EvolDE} \frac{\dif\mathsf{U}^{*}_{s,t}}{\dif t}=\mathsf{U}^{*}_{s,t}\,\mathsf{\Lambda}_{t},\quad\mathsf{U}^{*}_{s,s}=\mathsf{I},\quad 0\leq s\leq t<\infty. \end{align} Since $\mathsf{\Lambda}_{t}$ is a generator matrix, $\mathsf{U}^{*}_{s,t}$ is positivity preserving and contracting with $\mathsf{U}^{*}_{s,t}\mathsf{1}_{m}=\mathsf{1}_{m}$. In addition, due to Assumption \ref{assump:GenLambda}-(i) and \eqref{eq:EvolDE}, it holds for any $0\leq s<t$ and $r\in(0,t-s)$ that \begin{align*} \left\|\mathsf{U}^{*}_{s,t+r}-\mathsf{U}^{*}_{s,t}\right\|_{\infty}=\left\|\mathsf{U}^{*}_{s,t}\left(\mathsf{U}^{*}_{t,t+r}-\mathsf{I}\right)\right\|_{\infty}\leq Cr, \end{align*} and \begin{align*} \left\|\mathsf{U}^{*}_{s+r,t}-\mathsf{U}^{*}_{s,t}\right\|_{\infty}=\left\|\left(\mathsf{I}-\mathsf{U}^{*}_{s,s+r}\right)\mathsf{U}^{*}_{s+r,t}\right\|_{\infty}\leq Cr, \end{align*} for some positive constant $C$, so that $\mathsf{U}^{*}_{s,t}$ is strongly continuous in $s$ and $t$. The above, together with the finiteness of the state space, implies that $\mathsf{U}^*$ is a Feller evolution system. The corresponding standard version can then be constructed (cf. \cite[Theorem I.6.3]{GikhmanSkorokhod2004}). In view of the above, we will consider the standard version of $\mathcal{M}^{*}$ in what follows, and, for simplicity, we will preserve the notation $\mathcal{M}^{*}=\big\{\big(\Omega^{*},\mathscr{F}^{*},\mathbb{F}_{s}^{*},(X^{*}_t)_{t\geq s},\mathbb{P}_{s,i}^{*}\big),\,(s,i)\in\overline{\mathscr{X}}\big\}$, in which $\Omega^{*}$ is restricted to the collection of $\mathbf{E}$-valued c\`{a}dl\`{a}g functions $\omega^{*}$ on $\mathbb{R}_{+}$ with $\omega^{*}(\infty)=\partial$.
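As a purely illustrative aside (not part of the formal development), the evolution system $\mathsf{U}^{*}$ can be approximated numerically by first-order products $\prod_{k}(\mathsf{I}+\mathsf{\Lambda}_{t_{k}}\,\Delta t)$, which preserve nonnegativity and unit row sums for small $\Delta t$. The sketch below, for a hypothetical two-state family $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$ chosen only for this example, checks that $\mathsf{U}^{*}_{s,t}$ is (approximately) stochastic and satisfies the flow property $\mathsf{U}^{*}_{s,t}=\mathsf{U}^{*}_{s,u}\,\mathsf{U}^{*}_{u,t}$ up to discretization error; all concrete numbers are assumptions made for the illustration.

```python
import numpy as np

def generator(s):
    # Hypothetical time-dependent 2x2 generator matrix (an assumption for
    # this illustration): off-diagonal entries nonnegative, rows sum to zero.
    a = 1.0 + 0.5 * np.sin(s)
    return np.array([[-a, a], [2.0, -2.0]])

def evolution(s, t, n=10000):
    # First-order product approximation of the evolution system:
    # U_{s,t} ~ (I + Lambda_{t_0} dt)(I + Lambda_{t_1} dt)...,
    # accumulated on the right so that U_{s,t} = U_{s,u} U_{u,t} in the limit.
    dt = (t - s) / n
    U = np.eye(2)
    for k in range(n):
        U = U @ (np.eye(2) + generator(s + k * dt) * dt)
    return U

U01 = evolution(0.0, 1.0)
U12 = evolution(1.0, 2.0)
U02 = evolution(0.0, 2.0)

# U_{s,t} is (approximately) a stochastic matrix: nonnegative, unit row sums.
assert np.all(U02 >= 0) and np.allclose(U02.sum(axis=1), 1.0)
# Flow (evolution) property, up to first-order discretization error.
assert np.allclose(U02, U01 @ U12, atol=1e-2)
```

Replacing the Euler factors by matrix exponentials of $\mathsf{\Lambda}_{t_{k}}\Delta t$ would give a second-order scheme; the first-order products are used here only because they make the positivity and row-sum preservation transparent.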
\subsubsection{Passage times related to $\mathcal{M}^{*}$}\label{subsubsec:PassageTime} For any $s\in\overline{\mathbb{R}}_{+}$, we define an additive functional $\phi_{\cdot}^{*}(s)$ as \begin{align*} \phi_{t}^{*}(s):=\int_{s}^{t}v(X_{u}^{*})\,du,\quad t\in[s,\infty], \end{align*} and we stipulate $\phi_{\infty}^{*}(s,\omega^{*})=\infty$ for every $\omega^{*}\in\Omega^{*}$. In addition, for any $s\in\overline{\mathbb{R}}_{+}$ and $\ell\in\mathbb{R}_{+}$, we define associated passage times \begin{align*} \tau_{\ell}^{+,*}(s):=\inf\left\{t\in[s,\infty]:\,\phi_{t}^{*}(s)>\ell\right\}\quad\text{and} \quad\tau_{\ell}^{-,*}(s):=\inf\left\{t\in[s,\infty]:\,\phi_{t}^{*}(s)<-\ell\right\}. \end{align*} Both $\tau_{\ell}^{+,*}(s)$ and $\tau_{\ell}^{-,*}(s)$ are $\mathbb{F}_{s}^{*}$-stopping times since $\phi_{\cdot}^{*}(s)$ is $\mathbb{F}^{*}_{s}$-adapted, has continuous sample paths, and $\mathbb{F}_{s}^{*}$ is right-continuous (cf. \cite[Proposition 1.28]{JacodShiryaev2003}). For notational convenience, if no confusion arises, we will omit the parameter $s$ in $\phi_{t}^{*}(s)$ and $\tau_{\ell}^{\pm,*}(s)$. The following result will be used later in the paper. \begin{lemma}\label{lem:RangeXTaupm} For any $s\in\overline{\mathbb{R}}_{+}$, $\ell\in\mathbb{R}_{+}$, and $\omega^{*}\in\Omega^{*}$, $X_{\tau_{\ell}^{\pm,*}(s)}^{*}(\omega^{*})\in\mathbf{E}_{\pm}\cup\{\partial\}$. In particular, if $\tau_{\ell}^{\pm,*}(s,\omega^{*})<\infty$, then $X_{\tau_{\ell}^{\pm,*}(s)}^{*}(\omega^{*})\in\mathbf{E}_{\pm}$. \end{lemma} \begin{proof} We will only prove the ``+" version of the lemma; a proof of the ``$-$" version proceeds in an analogous way. To begin with, for any $\omega^{*}\in\Omega^{*}$ such that $\tau_{\ell}^{\pm,*}(s,\omega^{*})=\infty$, clearly we have $X_{\tau_{\ell}^{\pm,*}(s)}^{*}(\omega^{*})=X_{\infty}^{*}(\omega^{*})=\omega^{*}(\infty)=\partial$.
Next, suppose that for some $\omega_{0}^{*}\in\Omega^{*}$ and $s_{0},\ell_{0}\in\mathbb{R}_{+}$, $\tau_{\ell_{0}}^{+,*}(s_{0},\omega_{0}^{*})<\infty$ and $X_{\tau_{\ell_{0}}^{+,*}(s_{0})}^{*}(\omega_{0}^{*})\in\mathbf{E}_{-}$. By the definition of $\phi^{*}$, we have $\phi^{*}_{\tau^{+,*}_{\ell_{0}}(s_{0})}(\omega_{0}^{*})=\ell_{0}$. Moreover, since $X^{*}_{\cdot}(\omega_{0}^{*})$ is right-continuous, there exists $\varepsilon_{0}>0$ such that, for any $t\in[\tau_{\ell_{0}}^{+,*}(s_{0},\omega_{0}^{*}),\tau_{\ell_{0}}^{+,*}(s_{0},\omega_{0}^{*})+\varepsilon_{0}]$, $X_{t}^{*}(\omega_{0}^{*})\in\mathbf{E}_{-}$. Hence, for any $t\in[\tau_{\ell_{0}}^{+,*}(s_{0},\omega_{0}^{*}),\tau_{\ell_{0}}^{+,*}(s_{0},\omega_{0}^{*})+\varepsilon_{0}]$, $\phi^{*}_{t}(\omega_{0}^{*})\leq\ell_{0}$, which contradicts the definition of $\tau_{\ell_{0}}^{+,*}$. \end{proof} \begin{remark} Here is an example where $\{\tau^{+,*}_{\ell}=\infty\}$ has positive probability. Consider $\mathbf{E}=\{1,-1\}$, $v(\pm 1)=\pm 1$, and \begin{align*} \mathsf{\Lambda}_{s}=\bordermatrix{~ & +1 & -1 \cr +1 & -1 & 1 \cr -1 & 0 & 0 \cr},\quad s\in\mathbb{R}_{+}. \end{align*} Then, for any $s\in\mathbb{R}_{+}$ and $\ell>0$, \begin{align*} \mathbb{P}^{*}_{s,1}\left(\tau^{+,*}_{\ell}=\infty\right)=\mathbb{P}^{*}_{s,1}\left(\left\{\text{the first jump of $X^{*}$ occurs before time $\ell$}\right\}\right)=1-e^{-\ell}>0. \end{align*} \end{remark} \subsection{The main goal of the paper}\label{subsec:MainGoal} Our main interest is to derive a Wiener-Hopf type method for computing expectations of the form \begin{align}\label{eq:ExpTauXTau} \mathbb{E}^{*}_{s,i}\Big(g^{\pm}\Big(\tau_{\ell}^{\pm,*},X_{\tau_{\ell}^{\pm,*}}^{*}\Big)\Big) \end{align} for $g^{\pm}\in L^{\infty}(\overline{\mathscr{X}_{\pm}})$, $\ell\in\mathbb{R}_{+}$, and $(s,i)\in\mathscr{X}$.
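As an aside, the example in the preceding remark, and more generally passage times of the type entering \eqref{eq:ExpTauXTau}, can be explored by direct simulation. The following Monte Carlo sketch (illustrative only; the sample size, seed, and tolerance are assumptions of the example) simulates the additive functional $\phi^{*}$ path by path for the remark's two-state chain, detects the upcrossing time $\tau_{\ell}^{+,*}$, and compares the frequency of $\{\tau_{\ell}^{+,*}=\infty\}$ with the closed-form value $1-e^{-\ell}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# The remark's example: states (+1, -1) are indexed 0, 1; state -1 is
# absorbing, and v(+1) = 1, v(-1) = -1 (all values fixed by the remark).
LAM = np.array([[-1.0, 1.0], [0.0, 0.0]])
v = np.array([1.0, -1.0])

def tau_plus(ell, horizon=50.0):
    # Simulate X* and phi*_t = int_0^t v(X*_u) du from state +1 at time 0;
    # return the first time phi* exceeds ell (np.inf if this never happens).
    state, t, phi = 0, 0.0, 0.0
    while t < horizon:
        rate = -LAM[state, state]
        hold = rng.exponential(1.0 / rate) if rate > 0 else horizon - t
        # phi* is piecewise linear; detect an upcrossing of ell mid-sojourn.
        if v[state] > 0 and phi + v[state] * hold > ell:
            return t + (ell - phi) / v[state]
        t, phi = t + hold, phi + v[state] * hold
        state = 1 - state
        if state == 1 and phi <= ell:
            return np.inf  # absorbed with negative drift: phi* never reaches ell
    return np.inf

ell = 1.0
est = np.mean([np.isinf(tau_plus(ell)) for _ in range(20000)])
# The remark gives P(tau = infinity) = 1 - exp(-ell).
assert abs(est - (1.0 - np.exp(-ell))) < 0.02
```

For this particular chain the event $\{\tau_{\ell}^{+,*}=\infty\}$ reduces to the first jump occurring before $\phi^{*}$ reaches $\ell$, so the simulation is merely a sanity check of the generic path-by-path crossing detection.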
In view of Lemma \ref{lem:RangeXTaupm}, it is enough to compute the expectation in \eqref{eq:ExpTauXTau} for $g^{\pm}\in L^{\infty}(\overline{\mathscr{X}_{\pm}})$ in order to compute the analogous expectation for $g\in L^{\infty}(\overline{\mathscr{X}})$. The Wiener-Hopf type method derived in this paper generalizes the Wiener-Hopf type method of \cite{BarlowRogersWilliams1980}, which was developed for time-homogeneous Markov chains. \begin{remark} The time-homogeneous version of the problem of computing expectations of the type given in \eqref{eq:ExpTauXTau} appears frequently in time-homogeneous fluid models (see e.g. \cite{Rogers1994} and the references therein). Time-inhomogeneous extensions of such models are important and natural due, for example, to temporal (seasonal) effects. This is one practical motivation for the study presented in this paper. \end{remark} In order to proceed, we introduce the following operators: \begin{itemize} \item $J^{+}:L^{\infty}(\overline{\mathscr{X}_{+}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{-}})$ is defined as \begin{align}\label{eq:DefJPlus} \big(J^{+}g^{+}\big)(s,i):=\mathbb{E}_{s,i}^{*}\Big(g^{+}\Big(\tau_{0}^{+,*},X_{\tau_{0}^{+,*}}^{*}\Big)\Big),\quad (s,i)\in\overline{\mathscr{X}_{-}}. \end{align} Clearly, for any $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$, $|(J^{+}g^{+})(s,i)|\leq\|g^{+}\|_{L^{\infty}(\overline{\mathscr{X}_{+}})}<\infty$ for any $(s,i)\in\mathscr{X}_{-}$, and $(J^{+}g^{+})(\infty,\partial)=0$, so that $J^{+}g^{+}\in L^{\infty}(\overline{\mathscr{X}_{-}})$. \item $J^{-}:L^{\infty}(\overline{\mathscr{X}_{-}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{+}})$ is defined as \begin{align}\label{eq:DefJMinus} \big(J^{-}g^{-}\big)(s,i):=\mathbb{E}_{s,i}^{*}\Big(g^{-}\Big(\tau_{0}^{-,*},X_{\tau_{0}^{-,*}}^{*}\Big)\Big),\quad (s,i)\in\overline{\mathscr{X}_{+}}.
\end{align} \item For any $\ell\in\mathbb{R}_{+}$, $\mathcal{P}_{\ell}^{+}:L^{\infty}(\overline{\mathscr{X}_{+}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{+}})$ is defined as \begin{align}\label{eq:DefPPlus} \big(\mathcal{P}^{+}_{\ell}g^{+}\big)(s,i):=\mathbb{E}_{s,i}^{*}\Big(g^{+}\Big(\tau_{\ell}^{+,*},X_{\tau_{\ell}^{+,*}}^{*}\Big)\Big),\quad (s,i)\in\overline{\mathscr{X}_{+}}. \end{align} \item For any $\ell\in\mathbb{R}_{+}$, $\mathcal{P}_{\ell}^{-}:L^{\infty}(\overline{\mathscr{X}_{-}})\rightarrow L^{\infty}(\overline{\mathscr{X}_{-}})$ is defined as \begin{align}\label{eq:DefPMinus} \big(\mathcal{P}^{-}_{\ell}g^{-}\big)(s,i):=\mathbb{E}_{s,i}^{*}\Big(g^{-}\Big(\tau_{\ell}^{-,*},X_{\tau_{\ell}^{-,*}}^{*}\Big)\Big),\quad (s,i)\in\overline{\mathscr{X}_{-}}. \end{align} \item For any $(s,i)\in\overline{\mathscr{X}_{+}}$, we define \begin{align}\label{eq:DefGenGPlus} \big(G^{+}g^{+}\big)(s,i):=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\big(\mathcal{P}^{+}_{\ell}g^{+}(s,i)-g^{+}(s,i)\big), \end{align} for any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$ such that the limit in \eqref{eq:DefGenGPlus} exists and is finite. \item For any $(s,i)\in\overline{\mathscr{X}_{-}}$, we define \begin{align}\label{eq:DefGenGMinus} \big(G^{-}g^{-}\big)(s,i):=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\big(\mathcal{P}^{-}_{\ell}g^{-}(s,i)-g^{-}(s,i)\big), \end{align} for all $g^{-}\in C_{0}(\overline{\mathscr{X}_{-}})$ such that the limit in \eqref{eq:DefGenGMinus} exists and is finite. \end{itemize} \begin{remark}\label{rem:Expgpmsimp} For $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$, $\ell\in(0,\infty)$, and $(s,i)\in\mathscr{X}_{-}$, it can be shown that \begin{align}\label{eq:ExpgpsimJPPlus} \mathbb{E}^{*}_{s,i}\Big(g^{+}\Big(\tau_{\ell}^{+,*},X_{\tau_{\ell}^{+,*}}^{*}\Big)\Big)=\big(J^{+}\mathcal{P}_{\ell}^{+}g^{+}\big)(s,i).
\end{align} Similarly, for $g^{-}\in L^{\infty}(\overline{\mathscr{X}_{-}})$, $\ell\in(0,\infty)$, and $(s,i)\in\mathscr{X}_{+}$, we have \begin{align}\label{eq:ExpgmsipJPMinus} \mathbb{E}^{*}_{s,i}\Big(g^{-}\Big(\tau_{\ell}^{-,*},X_{\tau_{\ell}^{-,*}}^{*}\Big)\Big)=\big(J^{-}\mathcal{P}_{\ell}^{-}g^{-}\big)(s,i). \end{align} The identity \eqref{eq:ExpgpsimJPPlus} will be verified in Remark \ref{rem:ProofExpgpsimJPMinus} below, while \eqref{eq:ExpgmsipJPMinus} can be proved in an analogous way with $v$ replaced by $-v$. In view of \eqref{eq:DefJPlus}$-$\eqref{eq:DefPMinus} and \eqref{eq:ExpgpsimJPPlus}$-$\eqref{eq:ExpgmsipJPMinus}, the expectation of the form \eqref{eq:ExpTauXTau}, for any $g^{\pm}\in L^{\infty}(\overline{\mathscr{X}_{\pm}})$, $\ell\in\mathbb{R}_{+}$, and $(s,i)\in\mathscr{X}$, can be represented in terms of the operators $J^{\pm}$ and $\mathcal{P}_{\ell}^{\pm}$. \end{remark} \section{Main Results}\label{sec:MainResults} We now state the main results of this paper, Theorem \ref{thm:WHExistUniq} and Theorem \ref{thm:WHProbInterpr}. Theorem \ref{thm:WHExistUniq} is analytical in nature, and it provides the {\it Wiener-Hopf factorization} for the generator $\mathsf{V}^{-1}(\partial/\partial s+\widetilde{\mathsf{\Lambda}})$. This factorization is given in terms of the operators $(S^{+},H^{+},S^{-},H^{-})$ appearing in the statement of the theorem. Theorem \ref{thm:WHProbInterpr} is probabilistic in nature, and provides the probabilistic interpretation of the operators $(S^{+},H^{+},S^{-},H^{-})$, which is key for various applications of our Wiener-Hopf factorization. \begin{theorem}\label{thm:WHExistUniq} Let $(\mathsf{\Lambda}_{s})_{s\in\mathbb{R}_{+}}$ be a family of $m\times m$ generator matrices satisfying Assumption \ref{assump:GenLambda}, and let $\widetilde{\mathsf{\Lambda}}$ be the associated multiplication operator defined as in \eqref{eq:DefTildeLambda}.
Let $v:\overline{\mathbf{E}}\rightarrow\mathbb{R}$ with $v(i)\neq 0$ for any $i\in\mathbf{E}$, $v(\partial)=0$, and $\mathsf{V}=\textnormal{diag}\,\{v(i):i\in\mathbf{E}\}$. Then, there exists a unique quadruple of operators $(S^{+},H^{+},S^{-},H^{-})$ which solves the following operator equation \begin{align}\label{eq:WH} \mathsf{V}^{-1}\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{\Lambda}}\bigg) \begin{pmatrix} I^{+} & S^{-} \\ S^{+} & I^{-} \end{pmatrix} \begin{pmatrix} g^{+} \\ g^{-} \end{pmatrix} = \begin{pmatrix} I^{+} & S^{-} \\ S^{+} & I^{-} \end{pmatrix} \begin{pmatrix} H^{+} & \mathsf{0} \\ \mathsf{0} & -H^{-} \end{pmatrix} \begin{pmatrix} g^{+} \\ g^{-} \end{pmatrix},\quad g^{\pm}\in C_{0}^{1}(\overline{\mathscr{X}_{\pm}}), \end{align} subject to the conditions below: \begin{itemize} \item [$(\textnormal{a}^{\pm})$] $S^{\pm}:C_{0}(\overline{\mathscr{X}_{\pm}})\rightarrow C_{0}(\overline{\mathscr{X}_{\mp}})$ is a bounded operator such that \begin{itemize} \item[(i)] for any $g^{\pm}\in C_{c}(\overline{\mathscr{X}_{\pm}})$ with $\supp g^{\pm}\subset[0,\eta_{g^{\pm}}]\times\mathbf{E}_{\pm}$ for some constant $\eta_{g^{\pm}}\in(0,\infty)$, we have $\supp S^{\pm}g^{\pm}\subset[0,\eta_{g^{\pm}}]\times\mathbf{E}_{\mp}$; \item[(ii)] for any $g^{\pm}\in C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$, we have $S^{\pm}g^{\pm}\in C_{0}^{1}(\overline{\mathscr{X}_{\mp}})$. \end{itemize} \item [$(\textnormal{b}^{\pm})$] $H^{\pm}$ is the strong generator of a strongly continuous positive contraction semigroup $(\mathcal{Q}_{\ell}^{\pm})_{\ell\in\mathbb{R}_{+}}$ on $C_{0}(\overline{\mathscr{X}_{\pm}})$ with domain $\mathscr{D}(H^{\pm})=C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$. 
\end{itemize} \end{theorem} \begin{theorem}\label{thm:WHProbInterpr} For any $g^{\pm}\in C_{0}(\overline{\mathscr{X}_{\pm}})$, we have \begin{align*} S^{\pm}g^{\pm}=J^{\pm}g^{\pm}\quad\text{and}\quad\mathcal{Q}_{\ell}^{\pm}g^{\pm}=\mathcal{P}_{\ell}^{\pm}g^{\pm},\quad\text{for any }\,\ell\in\mathbb{R}_{+}, \end{align*} where $J^{\pm}$ and $(\mathcal{P}_{\ell}^{\pm})_{\ell\in\mathbb{R}_{+}}$ are defined in \eqref{eq:DefJPlus}$-$\eqref{eq:DefPMinus}. Moreover, $G^+$ given in \eqref{eq:DefGenGPlus} is the (strong) generator of $(\mathcal{P}^{+}_{\ell})_{\ell\geq 0}$ with $\mathcal{D}(G^+) = C_0^1(\overline{\mathscr{X}}_{+})$, and $G^-$ given in \eqref{eq:DefGenGMinus} is the (strong) generator of $(\mathcal{P}^{-}_{\ell})_{\ell\geq 0}$ with $\mathcal{D}(G^-) = C_0^1(\overline{\mathscr{X}}_{-})$. \end{theorem} The proofs of these two theorems are deferred to Section~\ref{sec:Proofs}. By Theorems \ref{thm:WHExistUniq} and \ref{thm:WHProbInterpr}, we are able to compute $J^{\pm}g^{\pm}$ and $\mathcal{P}_{\ell}^{\pm}g^{\pm}$, for any $g^{\pm}\in C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$ and $\ell\in\mathbb{R}_{+}$, by solving equation~\eqref{eq:WH} subject to the conditions $(a^{\pm})$ and $(b^{\pm})$. In view of Remark~\ref{rem:Expgpmsimp}, these functions lead to the expectation of the form \eqref{eq:ExpTauXTau} for any $g^{\pm}\in C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$. In particular, for any $c>0$ and $j\in\mathbf{E}_{\pm}$, by taking $g^{\pm}_{j}\in C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$ with \begin{align}\label{eq:Funtgjpm} g^{\pm}_{j}(s,i):=e^{-cs}\,\1_{\{j\}}(i),\quad (s,i)\in\mathscr{X}_{\pm}, \end{align} we obtain the following Laplace transform for $(\tau_{\ell}^{\pm,*},X_{\tau_{\ell}^{\pm,*}}^{*})$ \begin{align*} \mathbb{E}^{*}_{s,i}\bigg(e^{-c\tau_{\ell}^{\pm,*}}\1_{\big\{X_{\tau_{\ell}^{\pm,*}}^{*}=j\big\}}\bigg), \end{align*} for any $c\in(0,\infty)$, $\ell\in\mathbb{R}_{+}$, and $(s,i)\in\mathscr{X}$.
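In the time-homogeneous case, these Laplace transforms are precisely the entries of the matrices $\mathsf{\Pi}_{c}^{+}$ and $\mathsf{Q}_{c}^{+}$ (see Remark \ref{rem:RealityCheck} below). As an illustrative numerical aside, under the standard fluid-flow assumption that, for $c>0$, the matrix $\mathsf{V}^{-1}(\mathsf{\Lambda}-c\,\mathsf{I})$ has exactly $m_{+}$ eigenvalues with negative real part, the pair $(\mathsf{\Pi}_{c}^{+},\mathsf{Q}_{c}^{+})$ can be read off the associated invariant subspace. The toy two-state model below, and the closed-form value it is checked against, are assumptions made only for this sketch.

```python
import numpy as np

# Toy time-homogeneous model (an assumption for this illustration):
# E+ = {1}, E- = {2}, v = (1, -1), and a fixed Laplace parameter c = 1/2.
LAM = np.array([[-1.0, 1.0], [2.0, -2.0]])
V = np.diag([1.0, -1.0])
c, m_plus = 0.5, 1

H = np.linalg.inv(V) @ (LAM - c * np.eye(2))

# Assumed spectral structure (standard in fluid-flow models): for c > 0,
# H has m_+ eigenvalues with negative real part; the columns of [I^+; Pi]
# span the associated invariant subspace and Q is the restriction of H.
w, E = np.linalg.eig(H)
stable = np.argsort(w.real)[:m_plus]
E1, E2 = E[:m_plus, stable], E[m_plus:, stable]
Pi = (E2 @ np.linalg.inv(E1)).real                       # Pi_c^+
Q = (E1 @ np.diag(w[stable]) @ np.linalg.inv(E1)).real   # Q_c^+

# Check the "+" part of the factorization: V^{-1}(Lambda - cI)[I; Pi] = [I; Pi]Q.
W = np.vstack([np.eye(m_plus), Pi])
assert np.allclose(H @ W, W @ Q, atol=1e-10)
# For this toy model the matrix Riccati equation gives Pi = 2 - sqrt(2) exactly.
assert np.allclose(Pi, 2.0 - np.sqrt(2.0))
```

Here $\mathsf{\Pi}_{c}^{+}\approx 0.5858$ is the Laplace transform $\mathbb{E}^{*}_{0,2}\big(e^{-c\tau_{0}^{+,*}}\big)$ of the first upcrossing time started from the down state, and $\mathsf{Q}_{c}^{+}$ is the (here scalar) generator of the semigroup in the level variable $\ell$.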
We then perform the inverse Laplace transform with respect to $c$ to obtain the joint distribution of $(\tau_{\ell}^{\pm,*},X_{\tau_{\ell}^{\pm,*}}^{*})$ under $\mathbb{P}_{s,i}^{*}$, which enables us to compute the expectations \eqref{eq:ExpTauXTau} for any $g^{\pm}\in L^{\infty}(\overline{\mathscr{X}_{\pm}})$. Note that the equation \eqref{eq:WH} can be decomposed into the following two uncoupled equations \begin{align}\label{eq:WHPlus} \mathsf{V}^{-1}\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{\Lambda}}\bigg)\begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix} g^{+}&=\begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix} H^{+}g^{+},\quad g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}}),\\ \label{eq:WHMinus} \mathsf{V}^{-1}\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{\Lambda}}\bigg)\begin{pmatrix} S^{-} \\ I^{-} \end{pmatrix} g^{-}&= -\begin{pmatrix} S^{-} \\ I^{-} \end{pmatrix} H^{-}g^{-},\quad g^{-}\in C_{0}^{1}(\overline{\mathscr{X}_{-}}). \end{align} Hence, one can compute $J^{+}g^{+}$ and $G^{+}g^{+}$ (and thus $\mathcal{P}_{\ell}^{+}g^{+}$) separately from $J^{-}g^{-}$ and $G^{-}g^{-}$ (and thus $\mathcal{P}_{\ell}^{-}g^{-}$) by solving \eqref{eq:WHPlus} and \eqref{eq:WHMinus} subject to $(a^{+})$ and $(b^{+})$, and $(a^{-})$ and $(b^{-})$, respectively. \begin{remark}\label{rem:Riccati} By \eqref{eq:MatrixBlocks}, \eqref{eq:TildeLambdaBlocks}, and Theorems \ref{thm:WHExistUniq} and \ref{thm:WHProbInterpr}, we see that $(J^{+},G^{+})$ is the unique solution, subject to $(a^{+})$ and $(b^{+})$, to the following two operator equations, \begin{align}\label{eq:WHPlus1} \big(\mathsf{V}^{+}\big)^{-1}\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{A}}+\widetilde{\mathsf{B}}\,S^{+}\bigg)g^{+}&=H^{+}g^{+},\\ \label{eq:WHPlus2} \big(\mathsf{V}^{-}\big)^{-1}\bigg(\frac{\partial}{\partial s}\,S^{+}+\widetilde{\mathsf{C}}+\widetilde{\mathsf{D}}\,S^{+}\bigg)g^{+}&=S^{+}H^{+}g^{+}, \end{align} where $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$.
By plugging \eqref{eq:WHPlus1} into \eqref{eq:WHPlus2}, we obtain the operator Riccati equation of the form \begin{align*} \bigg(S^{+}\big(\mathsf{V}^{+}\big)^{-1}\widetilde{\mathsf{B}}\,S^{+}+S^{+}\big(\mathsf{V}^{+}\big)^{-1}\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{A}}\bigg)-\big(\mathsf{V}^{-}\big)^{-1}\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{D}}\bigg)S^{+}-\big(\mathsf{V}^{-}\big)^{-1}\widetilde{\mathsf{C}}\bigg)\,g^{+}=0. \end{align*} Hence, in order to compute $(J^{+},G^{+})$ from \eqref{eq:WHPlus}, one first needs to compute $J^{+}$ by solving the above operator equation subject to $(a^{+})$, and then $G^{+}$ is given in terms of $J^{+}$ by \eqref{eq:WHPlus1}. Similarly, one can compute $(J^{-},G^{-})$ from \eqref{eq:WHMinus} in an analogous way. \end{remark} \begin{remark}\label{rem:OperPsiInv} The operator \begin{align*} \Psi:=\begin{pmatrix} I^{+} & J^{-} \\ J^{+} & I^{-} \end{pmatrix}:\,C_{0}(\overline{\mathscr{X}})\rightarrow C_{0}(\overline{\mathscr{X}}) \end{align*} is the counterpart of the matrix $\mathsf{S}$ given in Theorem \ref{thm:BRW80WH1}. It can be shown that the operator $\Psi$ is injective. However, unlike the matrix $\mathsf{S}$, which is invertible, the operator $\Psi$ is not invertible in general. In fact, the surjectivity of $\Psi$ may fail, even when restricted to $C_{0}^{1}(\overline{\mathscr{X}})$ (recall the condition $(a^{+})$(ii)). Nevertheless, the potential lack of invertibility of $\Psi$ does not affect the existence and uniqueness of our Wiener-Hopf factorization. It only affects the form of the equality \eqref{eq:WH}, with $S^{\pm}$ replaced by $J^{\pm}$.
\end{remark} \begin{remark}\label{rem:RealityCheck} When the Markov family $\mathcal{M}^{*}$ is time-homogeneous, namely, $\mathsf{\Lambda}_{s}=\mathsf{\Lambda}$ for all $s\in\mathbb{R}_{+}$, where $\mathsf{\Lambda}$ is an $m\times m$ generator matrix, the equation \eqref{eq:WH} reduces to the time-homogeneous Wiener-Hopf factorization \eqref{eq:TimeHomoWH}, which, in light of the invertibility of $\mathsf{S}$, can be rewritten as \begin{align}\label{eq:TimeHomoWHReform} \mathsf{V}^{-1}(\mathsf{\Lambda}-c\,\mathsf{I}) \begin{pmatrix} \mathsf{I}^{+} & \mathsf{\Pi}^{-}_{c} \\ \mathsf{\Pi}^{+}_{c} & \mathsf{I}^{-} \end{pmatrix} = \begin{pmatrix} \mathsf{I}^{+} & \mathsf{\Pi}^{-}_{c} \\ \mathsf{\Pi}^{+}_{c} & \mathsf{I}^{-} \end{pmatrix} \begin{pmatrix} \mathsf{Q}^{+}_{c} & 0 \\ 0 & \mathsf{Q}^{-}_{c} \end{pmatrix}. \end{align} In what follows, we will only check the ``+" part of the above equality. Towards this end, for any $c\in(0,\infty)$ and $j\in\mathbf{E}_{+}$, take $g^{+}_{j}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$ as in \eqref{eq:Funtgjpm}. Since $(J^{+},G^{+})$ is the unique solution to \eqref{eq:WHPlus} subject to $(\textnormal{a}^{+})$ and $(\textnormal{b}^{+})$, we have \begin{align}\label{eq:WHPlusgj} \mathsf{V}^{-1}\bigg(\frac{\partial}{\partial s}+\mathsf{\Lambda}\bigg) \begin{pmatrix} I^{+} \\ J^{+} \end{pmatrix} g^{+}_{j}= \begin{pmatrix} I^{+} \\ J^{+} \end{pmatrix} G^{+}g^{+}_{j}. \end{align} Since $\mathcal{M}^{*}$ is a time-homogeneous Markov family, for any $s,\ell\in\mathbb{R}_{+}$ and $i\in\mathbf{E}$, the distribution of $(\tau^{+,*}_{\ell}(s)-s,X^{*}_{\tau^{+,*}_{\ell}(s)})$ under $\mathbb{P}_{s,i}^{*}$ is the same as that of $(\tau^{+,*}_{\ell}(0),X^{*}_{\tau^{+,*}_{\ell}(0)})$ under $\mathbb{P}_{0,i}^{*}$.
Hence, for any $s\in\mathbb{R}_{+}$ and $i\in\mathbf{E}_{+}$, we have \begin{align} \big(G^{+}\!g^{+}_{j}\big)(s,i)&=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\Big(\big(\mathcal{P}^{+}_{\ell}g^{+}_{j}\big)(s,i)\!-\!g^{+}_{j}(s,i)\Big)\!=\!\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\bigg(\mathbb{E}^{*}_{s,i}\bigg(e^{-c\tau^{+,*}_{\ell}(s)}\1_{\big\{X^{*}_{\tau^{+,*}_{\ell}(s)}\!=j\!\big\}}\bigg)\!-\!\1_{\{j\}}(i)\bigg)\nonumber\\ \label{eq:GPlusgjPlus} &=\lim_{\ell\rightarrow 0+}\frac{e^{-cs}}{\ell}\bigg(\mathbb{E}^{*}_{0,i}\bigg(e^{-c\tau^{+,*}_{\ell}(0)}\1_{\big\{X^{*}_{\tau^{+,*}_{\ell}(0)}=j\big\}}\bigg)-\1_{\{j\}}(i)\bigg)=e^{-cs}\,\mathsf{Q}_{c}^{+}(i,j), \end{align} where we recall that the matrix $\mathsf{Q}_{c}^{+}$ is defined in \eqref{eq:QcPlusMinus}. Similarly, for any $s\in\mathbb{R}_{+}$ and $i\in\mathbf{E}_{-}$, \begin{align} \big(J^{+}g^{+}_{j}\big)(s,i)&=\mathbb{E}^{*}_{s,i}\Big(g^{+}_{j}\Big(\tau^{+,*}_{0}(s),X^{*}_{\tau^{+,*}_{0}(s)}\Big)\Big)=\mathbb{E}^{*}_{s,i}\bigg(e^{-c\tau^{+,*}_{0}(s)}\1_{\big\{X^{*}_{\tau^{+,*}_{0}(s)}=j\big\}}\bigg)\nonumber\\ \label{eq:JPlusgjPlus} &=e^{-cs}\,\mathbb{E}^{*}_{0,i}\bigg(e^{-c\tau^{+,*}_{0}(0)}\1_{\big\{X^{*}_{\tau^{+,*}_{0}(0)}=j\big\}}\bigg)=e^{-cs}\,\mathsf{\Pi}^{+}_{c}(i,j), \end{align} where the matrix $\mathsf{\Pi}^{+}_{c}$ is defined by \eqref{eq:PicPlusMinus}, and \begin{align} \big(J^{+}G^{+}g^{+}_{j}\big)(s,i)&=\mathbb{E}^{*}_{s,i}\Big(\big(G^{+}g^{+}_{j}\big)\Big(\tau^{+,*}_{0}(s),X^{*}_{\tau^{+,*}_{0}(s)}\Big)\Big)=\mathbb{E}_{s,i}^{*}\Big(e^{-c\tau^{+,*}_{0}(s)}\,\mathsf{Q}_{c}^{+}\Big(X^{*}_{\tau^{+,*}_{0}(s)},j\Big)\Big)\nonumber\\ &=e^{-cs}\,\mathbb{E}_{0,i}^{*}\Big(\!e^{-c\tau^{+,*}_{0}(0)}\mathsf{Q}_{c}^{+}\!\Big(\!X^{*}_{\tau^{+,*}_{0}(0)},j\!\Big)\!\Big)\!=\!e^{-cs}\!\!\!\sum_{k\in\mathbf{E}_{+}}\!\mathbb{E}_{0,i}^{*}\bigg(\!e^{-c\tau^{+,*}_{0}(0)}\1_{\big\{\!X^{*}_{\tau_{0}^{+,*}}\!=k\!\big\}}\!\bigg)\mathsf{Q}_{c}^{+}\!(k,j)\nonumber\\ \label{eq:JGPlusgjPlus} 
&=e^{-cs}\sum_{k\in\mathbf{E}_{+}}\mathsf{\Pi}^{+}_{c}(i,k)\,\mathsf{Q}_{c}^{+}(k,j)=e^{-cs}\big(\mathsf{\Pi}^{+}_{c}\mathsf{Q}^{+}_{c}\big)(i,j). \end{align} By plugging \eqref{eq:GPlusgjPlus}$-$\eqref{eq:JGPlusgjPlus} into \eqref{eq:WHPlusgj}, we obtain \begin{align*} \mathsf{V}^{-1}\bigg(\frac{\partial}{\partial s}+\mathsf{\Lambda}\bigg) \begin{pmatrix} \mathsf{I}^{+} \\ \mathsf{\Pi}_{c}^{+} \end{pmatrix} e^{-cs}\,\mathbf{e}_{j}^{+} = e^{-cs} \begin{pmatrix} \mathsf{I}^{+} \\ \mathsf{\Pi}_{c}^{+} \end{pmatrix} \mathsf{Q}_{c}^{+}\mathbf{e}_{j}^{+}, \end{align*} where $\mathbf{e}_{j}^{+}$ is the $j$-th $m_{+}$-dimensional unit column vector. Finally, by evaluating the derivative and then setting $s=0$ on both sides above, we deduce that \begin{align*} \mathsf{V}^{-1}\big(\mathsf{\Lambda}-c\,\mathsf{I}\big) \begin{pmatrix} \mathsf{I}^{+} \\ \mathsf{\Pi}_{c}^{+} \end{pmatrix} = \begin{pmatrix} \mathsf{I}^{+} \\ \mathsf{\Pi}_{c}^{+} \end{pmatrix} \mathsf{Q}_{c}^{+}, \end{align*} which is the ``+" part of \eqref{eq:TimeHomoWHReform}. \end{remark} \begin{remark}\label{rem:Uniqueness} From the discussion in Remark \ref{rem:RealityCheck}, for each $c>0$, solving the time-homogeneous Wiener-Hopf equation \eqref{eq:TimeHomoWH} for the matrices $(\mathsf{\Pi}_{c}^{\pm},\mathsf{Q}_{c}^{\pm})$ is equivalent to solving the time-inhomogeneous Wiener-Hopf equation \eqref{eq:WH}, subject to the conditions $(a^{\pm})$ and $(b^{\pm})$, for the operators $(J^{\pm},G^{\pm})$ with $g^{\pm}\in C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$ of the form \eqref{eq:Funtgjpm}. Therefore, for each $c\in(0,\infty)$, the uniqueness of $(\mathsf{\Pi}_{c}^{\pm},\mathsf{Q}_{c}^{\pm})$ as a solution to \eqref{eq:TimeHomoWH} corresponds to the uniqueness of $(J^{\pm},G^{\pm})$ as a solution to \eqref{eq:WH}, subject to $(a^{\pm})$ and $(b^{\pm})$, when $g^{\pm}$ is restricted to the subclasses of $C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$ of the form \eqref{eq:Funtgjpm}.
When $c=0$, the functions $g^{\pm}$ of the form \eqref{eq:Funtgjpm} no longer belong to $C_{0}^{1}(\overline{\mathscr{X}_{\pm}})$. Hence, our uniqueness result does not contradict the non-uniqueness of $(\mathsf{\Pi}_{0}^{\pm},\mathsf{Q}_{0}^{\pm})$ that was shown in \cite{BarlowRogersWilliams1980}. \end{remark} \section{Proofs of the main results}\label{sec:Proofs} In this section we prove Theorems \ref{thm:WHExistUniq} and \ref{thm:WHProbInterpr}. We will only give the proofs of the ``+" case in both theorems, as the ``$-$" case can be proved in an analogous way with $v$ replaced by $-v$. \subsection{Auxiliary Markov families}\label{subsec:TimeHomogen} In this subsection, we introduce an auxiliary time-inhomogeneous Markov family $\mathcal{M}$ and an auxiliary time-homogeneous Markov family $\widetilde{\mathcal{M}}$. We start by introducing some additional notation for spaces and $\sigma$-fields. Let $\mathscr{Y}:=\mathbf{E}\times\mathbb{R}$, and let the Borel $\sigma$-field on $\mathscr{Y}$ be denoted by $\mathcal{B}(\mathscr{Y}):=2^{\mathbf{E}}\otimes\mathcal{B}(\mathbb{R})$. Accordingly, let $\overline{\mathscr{Y}}:=\mathscr{Y}\cup\{(\partial,\infty)\}$ be the one-point completion of $\mathscr{Y}$, and $\mathcal{B}(\overline{\mathscr{Y}}):=\sigma(\mathcal{B}(\mathscr{Y})\cup\set{(\partial,\infty)})$. Moreover, we set $\mathscr{Z}:=\mathbb{R}_{+}\times\mathscr{Y}=\mathscr{X}\times\mathbb{R}$ and $\overline{\mathscr{Z}}:=\mathscr{Z}\cup\{(\infty,\partial,\infty)\}$. Let $\Omega$ be the set of c\`{a}dl\`{a}g functions $\omega$ on $\mathbb{R}_{+}$ taking values in $\mathscr{Y}$. We define $\omega(\infty):=(\partial,\infty)$ for every $\omega\in\Omega$. As shown in Appendix \ref{sec:AppendixA}, one can construct a {\it standard} canonical time-inhomogeneous Markov family (cf.
\cite[Definition I.6.6]{GikhmanSkorokhod2004}) \begin{align*} \mathcal{M}:=\big\{\big(\Omega,\mathscr{F},\mathbb{F}_{s},(X_{t},\varphi_{t})_{t\in[s,\infty]},\mathbb{P}_{s,(i,a)}\big),\,(s,i,a)\in\overline{\mathscr{Z}}\big\} \end{align*} with transition function $P$ given by \begin{align}\label{eq:DefTranProbXvarphi} P(s,(i,a),t,A):=\mathbb{P}^{*}_{s,i}\bigg(\bigg(X^{*}_{t},\,a+\int_{s}^{t}v\big(X^{*}_{u}\big)\,du\bigg)\in A\bigg), \end{align} where $(s,i,a)\in\overline{\mathscr{Z}}$, $t\in[s,\infty]$, and $A\in\mathcal{B}(\overline{\mathscr{Y}})$. Furthermore, $\mathcal{M}$ has the following properties: \begin{itemize} \item[(i)] for any $(s,i,a)\in\overline{\mathscr{Z}}$, \begin{align}\label{eq:LawXXStar} \text{the law of }\,X\,\,\text{under }\,\mathbb{P}_{s,(i,a)}\,\,=\,\,\text{the law of }\,X^{*}\,\,\text{under }\,\mathbb{P}_{s,i}^{*}; \end{align} \item[(ii)] for any $(s,i,a)\in\mathscr{Z}$, \begin{align}\label{eq:DistvarphiInt} \mathbb{P}_{s,(i,a)}\bigg(\varphi_{t}=a+\int_{s}^{t}v(X_{u})\,du,\,\,\,\,\text{for all $t\in[s,\infty)$}\bigg)=1. \end{align} \end{itemize} Considering the standard Markov family $\mathcal{M}$, for any $s,\ell\in\mathbb{R}_{+}$, we define \begin{align*} \tau_{\ell}^{+}(s):=\inf\big\{t\in[s,\infty]:\,\varphi_{t}>\ell\big\}, \end{align*} which is an $\mathbb{F}_{s}$-stopping time in light of the continuity of $\varphi$ and the right-continuity of the filtration $\mathbb{F}_{s}$. By similar arguments as in the proof of Lemma \ref{lem:RangeXTaupm}, for any $(s,i,a)\in\mathscr{Z}$ and $\ell\in[a,\infty)$, \begin{align}\label{eq:RangeXtauPlus} \mathbb{P}_{s,(i,a)}\Big(X_{\tau^{+}_{\ell}(s)}\in\mathbf{E}_{+}\cup\{\partial\}\Big)=1. \end{align} Moreover, it follows from \eqref{eq:DistvarphiInt} that, for any $(s,i,a)\in\mathscr{Z}$, \begin{align*} \tau_{\ell}^{+}(s)&=\inf\bigg\{t\geq s:\,\,a+\!\int_{s}^{t}v(X_{u})\,du>\ell\bigg\},\quad\mathbb{P}_{s,(i,a)}-\text{a.}\,\text{s.}\,. 
\end{align*} If no confusion arises, we will omit the $s$ in $\tau^{+}_{\ell}(s)$. \begin{proposition}\label{prop:ExpPStarExpP} For any $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$, $(s,i,a)\in\mathscr{Z}$, and $\ell\in[a,\infty)$, \begin{align*} \mathbb{E}_{s,(i,a)}\Big(g^{+}\Big(\tau_{\ell}^{+},X_{\tau_{\ell}^{+}}\Big)\Big)=\mathbb{E}_{s,i}^{*}\Big(g^{+}\Big(\tau_{\ell-a}^{+,*},X_{\tau_{\ell-a}^{+,*}}^{*}\Big)\Big). \end{align*} \end{proposition} \begin{proof} By \eqref{eq:DistvarphiInt} and Lemma \ref{lem:XvarphiPathst}, we have \begin{align*} \mathbb{E}_{s,(i,a)}\Big(g^{+}\Big(\tau_{\ell}^{+},X_{\tau_{\ell}^{+}}\Big)\!\Big)&=\mathbb{E}_{s,(i,a)}\bigg(g^{+}\bigg(\!\inf\bigg\{u\!\geq\!s:a+\!\!\int_{s}^{u}\!v(X_{r})dr\!>\!\ell\bigg\},X_{\inf\big\{u\geq s:\,a+\int_{s}^{u}v(X_{r})dr>\ell\big\}}\bigg)\!\bigg)\\ &=\mathbb{E}^{*}_{s,i}\bigg(g^{+}\bigg(\!\inf\bigg\{u\geq s:\!\int_{s}^{u}\!v\big(X_{r}^{*}\big)dr>\ell-a\bigg\},X_{\inf\big\{u\geq s:\int_{s}^{u}v(X_{r}^{*})dr>\ell-a\big\}}^{*}\bigg)\!\bigg)\\ &=\mathbb{E}^{*}_{s,i}\bigg(g^{+}\bigg(\tau_{\ell-a}^{+,*},\,X_{\tau_{\ell-a}^{+,*}}^{*}\bigg)\bigg), \end{align*} which completes the proof. \end{proof} Proposition \ref{prop:ExpPStarExpP} provides a useful representation of the expectation $\mathbb{E}_{s,i}^{*}\Big(g^{+}\Big(\tau_{\ell-a}^{+,*},X_{\tau_{\ell-a}^{+,*}}^{*}\Big)\Big)$. We will need yet another representation of this expectation. Towards this end, we will first transform the time-inhomogeneous Markov family $\mathcal{M}$ into a {\it time-homogeneous} Markov family \begin{align*} \widetilde{\mathcal{M}}=\big\{\big(\widetilde{\Omega},\widetilde{\mathscr{F}},\widetilde{\mathbb{F}},({Z}_t)_{t\in\overline{\mathbb{R}}_{+}},(\theta_{r})_{r\in\mathbb{R}_{+}},\widetilde{\mathbb{P}}_{z}\big),z\in\overline{\mathscr{Z}}\big\} \end{align*} following the setup in \cite{Bottcher2014}. The construction of $\widetilde{\mathcal{M}}$ proceeds as follows.
\begin{itemize} \item We let $\widetilde{\Omega}:=\overline{\mathbb{R}}_{+}\times\Omega$ be the new sample space, with elements $\widetilde{\omega}=(s,\omega)$, where $s\in\overline{\mathbb{R}}_{+}$ and $\omega\in\Omega$. On $\widetilde{\Omega}$ we consider the $\sigma$-field \begin{align*} \widetilde{\mathscr{F}}:=\Big\{\widetilde{A}\subset\widetilde{\Omega}:\,\widetilde{A}_{s}\in\mathscr{F}_{\infty}^{s}\,\,\text{for any }s\in\overline{\mathbb{R}}_{+}\Big\}, \end{align*} where $\widetilde{A}_{s}:=\{\omega\in\Omega:\,(s,\omega)\in\widetilde{A}\}$ and $\mathscr{F}_{\infty}^{s}$ is the last element in $\mathbb{F}_{s}$ (the filtration in $\mathcal{M}$). \item We let $\overline{\mathscr{Z}}=\mathscr{Z}\cup\{(\infty,\partial,\infty)\}$ be the new state space, where $\mathscr{Z}=\mathbb{R}_{+}\times\mathscr{Y}=\mathscr{X}\times\mathbb{R}$, with elements $z=(s,i,a)$. On $\mathscr{Z}$ we consider the $\sigma$-field \begin{align*} \widetilde{\mathcal{B}}(\mathscr{Z}):=\left\{\widetilde{B}\subset\mathscr{Z}:\,\widetilde{B}_{s}\in\mathcal{B}(\mathscr{Y})\,\,\,\text{for any }s\in\mathbb{R}_{+}\right\}, \end{align*} where $\widetilde{B}_{s}:=\big\{(i,a)\in\mathscr{Y}:\,(s,i,a)\in\widetilde{B}\big\}$. Let $\widetilde{\mathcal{B}}(\overline{\mathscr{Z}}):=\sigma(\widetilde{\mathcal{B}}(\mathscr{Z})\cup\{(\infty,\partial,\infty)\})$. \item We consider a family of probability measures $(\widetilde{\mathbb{P}}_{z})_{z\in\overline{\mathscr{Z}}}$, where, for $z=(s,i,a)\in\overline{\mathscr{Z}}$, \begin{align}\label{eq:Probz} \widetilde{\mathbb{P}}_{z}\big(\widetilde{A}\big)=\widetilde{\mathbb{P}}_{s,i,a}\big(\widetilde{A}\big):=\mathbb{P}_{s,(i,a)}\big(\widetilde{A}_{s}\big),\quad\widetilde{A}\in\widetilde{\mathscr{F}}.
\end{align} \item We consider the process $Z:=(Z_{t})_{t\in\overline{\mathbb{R}}_{+}}$ on $(\widetilde{\Omega},\widetilde{\mathscr{F}})$, where, for $t\in\overline{\mathbb{R}}_{+}$, \begin{align}\label{eq:ProcZ} Z_{t}(\widetilde{\omega}):=\big(s+t,X_{s+t}(\omega),\varphi_{s+t}(\omega)\big),\quad\widetilde{\omega}=(s,\omega)\in\widetilde{\Omega}. \end{align} Hereafter, we denote the three components of $Z$ by $Z^{1}$, $Z^{2}$, and $Z^{3}$, respectively. \item On $(\widetilde{\Omega},\widetilde{\mathscr{F}})$, we define $\widetilde{\mathbb{F}}:=(\widetilde{\mathscr{F}}_{t})_{t\in\overline{\mathbb{R}}_{+}}$, where $\widetilde{\mathscr{F}}_{t}:=\widetilde{\mathscr{G}}_{t+}$ (with the convention $\widetilde{\mathscr{G}}_{\infty+}=\widetilde{\mathscr{G}}_{\infty}$), and $(\widetilde{\mathscr{G}}_{t})_{t\in\overline{\mathbb{R}}_{+}}$ is the completion of the natural filtration generated by $(Z_{t})_{t\in\overline{\mathbb{R}}_{+}}$ with respect to the set of probability measures $\{\widetilde{\mathbb{P}}_{z},z\in\overline{\mathscr{Z}}\}$ (cf. \cite[Chapter I]{GikhmanSkorokhod2004}). \item Finally, for any $r\in\mathbb{R}_{+}$, we consider the shift operator ${\theta}_{r}:\widetilde{\Omega}\rightarrow\widetilde{\Omega}$ defined by \begin{align*} \theta_{r}\,\widetilde{\omega}=(u+r,\omega_{\cdot+r}),\quad\widetilde{\omega}=(u,\omega)\in\widetilde{\Omega}. \end{align*} It follows that $Z_{t}\circ{\theta}_{r}=Z_{t+r}$, for any $t,r\in\mathbb{R}_{+}$. \end{itemize} For $z=(s,i,a)\in\overline{\mathscr{Z}}$, $t\in\overline{\mathbb{R}}_{+}$, and $\widetilde{B}\in\widetilde{\mathcal{B}}(\overline{\mathscr{Z}})$, we define the transition function $\widetilde{P}$ by \begin{align*} \widetilde{P}\big(z,t,\widetilde{B}\big):=\widetilde{\mathbb{P}}_{z}\big(Z_{t}\in\widetilde{B}\big). 
\end{align*} In view of \eqref{eq:Probz}, we have \begin{align}\label{eq:TranProbTildeX} \widetilde{P}\big(z,t,\widetilde{B}\big)=\mathbb{P}_{s,(i,a)}\Big((X_{t+s},\varphi_{t+s})\in\widetilde{B}_{s+t}\Big)=P\big(s,(i,a),s+t,\widetilde{B}_{s+t}\big). \end{align} By Lemma \ref{lem:FellerTranProb}, the transition function $P$, defined in \eqref{eq:DefTranProbXvarphi}, is associated with a Feller semigroup, so that $P$ is a Feller transition function. This and \cite[Theorem 3.2]{Bottcher2014} imply that $\widetilde{P}$ is also a Feller transition function. In light of the right continuity of the sample paths, and invoking \cite[Theorem I.4.7]{GikhmanSkorokhod2004}, we conclude that $\widetilde{\mathcal{M}}$ is a {\it time-homogeneous strong} Markov family. For any $\ell\in\mathbb{R}$, we define \begin{align*} \widetilde{\tau}_{\ell}^{+}:=\inf\big\{t\in\overline{\mathbb{R}}_{+}:\,Z^{3}_{t}>\ell\big\}. \end{align*} Note that $\widetilde{\tau}_{\ell}^{+}$ is an $\widetilde{\mathbb{F}}$-stopping time since $Z^{3}$ has continuous sample paths and $\widetilde{\mathbb{F}}$ is right-continuous. In light of \eqref{eq:DistvarphiInt}, \eqref{eq:Probz}, and \eqref{eq:ProcZ}, for any $(s,i,a)\in\mathscr{Z}$, we have \begin{align}\label{eq:DistTildeZ3IntTildeZ2} \widetilde{\mathbb{P}}_{s,i,a}\Big(Z^{3}_{t}=a+\int_{0}^{t}v\big(Z^{2}_{u}\big)\,du,\,\,\text{for all }t\in\mathbb{R}_{+}\Big)=1. \end{align} Consequently, for any $(s,i)\in\mathscr{X}_{+}$ and $\ell\in\mathbb{R}$, \begin{align}\label{eq:TildeTauPlus0} \widetilde{\mathbb{P}}_{s,i,\ell}\big(\widetilde{\tau}_{\ell}^{+}=0\big)=1. \end{align} Moreover, by \eqref{eq:RangeXtauPlus} and \eqref{eq:DistTildeZ3IntTildeZ2}, for any $(s,i,a)\in\overline{\mathscr{Z}}$ and $\ell\in[a,\infty)$, we have \begin{align}\label{eq:RangeZ2TildeTauPlus} \widetilde{\mathbb{P}}_{s,i,a}\Big(Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\in\mathbf{E}_{+}\cup\{\partial\}\Big)=1. 
\end{align} By Proposition \ref{prop:ExpPStarExpP}, \eqref{eq:Probz} and \eqref{eq:ProcZ}, for any $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$, $(s,i,a)\in\mathscr{Z}$, and $\ell\in[a,\infty)$, \begin{align}\label{eq:ExpTildePPStarPlus} \mathbb{E}_{s,i}^{*}\bigg(g^{+}\bigg(\tau_{\ell-a}^{+,*},X_{\tau_{\ell-a}^{+,*}}^{*}\bigg)\bigg)=\widetilde{\mathbb{E}}_{s,i,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big), \end{align} which, in particular, implies that \begin{align}\label{eq:ExpTildePShift} \widetilde{\mathbb{E}}_{s,i,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big)=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell-a}^{+}},Z^{2}_{\widetilde{\tau}_{\ell-a}^{+}}\Big)\Big). \end{align} Consequently, the operators $J^{+}$ and $\mathcal{P}_{\ell}^{+}$, $\ell\in\mathbb{R}_{+}$, defined by \eqref{eq:DefJPlus} and \eqref{eq:DefPPlus}, can be written as \begin{align}\label{eq:DefJPlusTildeP} \big(J^{+}g^{+}\big)(s,i)&=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big),\quad g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}}),\quad (s,i)\in\mathscr{X}_{-},\\ \label{eq:DefPellPlusTildeP} \big(\mathcal{P}_{\ell}^{+}g^{+}\big)(s,i)&=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big),\quad g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}}),\quad (s,i)\in\mathscr{X}_{+}. \end{align} We conclude this section with the following key lemma, which will be crucial in the proofs of the main results. \begin{lemma}\label{lem:StrongMarkov} Let $\widetilde{\tau}$ be any $\widetilde{\mathbb{F}}$-stopping time, and $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$.
Then, for any $(s,i,a)\in\overline{\mathscr{Z}}$ and $\ell\in[a,\infty)$, we have \begin{align}\label{eq:StrongMarkovCondPlus} \1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,\widetilde{\mathbb{E}}_{s,i,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\,\Big|\,\widetilde{\mathscr{F}}_{\widetilde{\tau}}\Big)=\1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,\widetilde{\mathbb{E}}_{Z^{1}_{\widetilde{\tau}},Z^{2}_{\widetilde{\tau}},Z^{3}_{\widetilde{\tau}}}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big),\quad\widetilde{\mathbb{P}}_{s,i,a}\text{-a.s.} \end{align} \end{lemma} \begin{proof} Note that if $(s,i,a)=(\infty,\partial,\infty)$, then both sides of \eqref{eq:StrongMarkovCondPlus} are zero. Hence, without loss of generality, assume that $(s,i,a)\in\mathscr{Z}$ and $\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}\neq\emptyset$. Note that for any $\ell\in\mathbb{R}$ and $\widetilde{\omega}\in\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}$, \begin{align*} \widetilde{\tau}^{+}_{\ell}\big(\theta_{\widetilde{\tau}}(\widetilde{\omega})\big)&=\inf\big\{t\in\mathbb{R}_{+}:Z^{3}_{t}\big(\theta_{\widetilde{\tau}}(\widetilde{\omega})\big)>\ell\big\}=\inf\big\{t\in\mathbb{R}_{+}:Z^{3}_{t+\widetilde{\tau}(\widetilde{\omega})}(\widetilde{\omega})>\ell\big\}\\ &=\inf\big\{t\geq\widetilde{\tau}(\widetilde{\omega}):Z^{3}_{t}(\widetilde{\omega})>\ell\big\}-\widetilde{\tau}(\widetilde{\omega})=\widetilde{\tau}_{\ell}^{+}(\widetilde{\omega})-\widetilde{\tau}(\widetilde{\omega}), \end{align*} and thus \begin{align*}
\Big(Z_{\widetilde{\tau}_{\ell}^{+}}\circ\theta_{\widetilde{\tau}}\Big)(\widetilde{\omega})=Z_{\widetilde{\tau}^{+}_{\ell}(\theta_{\widetilde{\tau}}(\widetilde{\omega}))}\big(\theta_{\widetilde{\tau}}(\widetilde{\omega})\big)=Z_{\widetilde{\tau}^{+}_{\ell}(\widetilde{\omega})-\widetilde{\tau}(\widetilde{\omega})}\big(\theta_{\widetilde{\tau}(\widetilde{\omega})}\widetilde{\omega}\big)=Z_{\widetilde{\tau}_{\ell}^{+}}(\widetilde{\omega}). \end{align*} Therefore, for any $(s,i,a)\in\mathscr{Z}$ and $\ell\in[a,\infty)$, \begin{align*} \1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,\widetilde{\mathbb{E}}_{s,i,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big|\widetilde{\mathscr{F}}_{\widetilde{\tau}}\Big)&=\widetilde{\mathbb{E}}_{s,i,a}\Big(\1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big|\widetilde{\mathscr{F}}_{\widetilde{\tau}}\Big)\\ &=\widetilde{\mathbb{E}}_{s,i,a}\Big(\1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}}\circ\theta_{\widetilde{\tau}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\circ\theta_{\widetilde{\tau}}\Big)\Big|\widetilde{\mathscr{F}}_{\widetilde{\tau}}\Big)\\ &=\1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,\widetilde{\mathbb{E}}_{s,i,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\circ\theta_{\widetilde{\tau}}\Big|\widetilde{\mathscr{F}}_{\widetilde{\tau}}\Big)\\ &=\1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,\widetilde{\mathbb{E}}_{Z^{1}_{\widetilde{\tau}},Z^{2}_{\widetilde{\tau}},Z^{3}_{\widetilde{\tau}}}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big), \end{align*} where we used the fact that $\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}\in\widetilde{\mathscr{F}}_{\widetilde{\tau}}$ (cf. 
\cite[Lemma 1.2.16]{KaratzasShreve1998}) in the first and third equalities, and the strong Markov property of $Z$ (cf. \cite[Theorem III.9.4]{RogersWilliams1994}) in the last equality. \end{proof} \begin{corollary}\label{cor:StrongMarkovExp} Under the assumptions of Lemma \ref{lem:StrongMarkov}, \begin{align*} \widetilde{\mathbb{E}}_{s,i,a}\Big(\1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big)=\widetilde{\mathbb{E}}_{s,i,a}\Big(\1_{\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}}\,\widetilde{\mathbb{E}}_{Z^{1}_{\widetilde{\tau}},Z^{2}_{\widetilde{\tau}},Z^{3}_{\widetilde{\tau}}}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big)\Big). \end{align*} \end{corollary} \begin{proof} This is a direct consequence of \eqref{eq:StrongMarkovCondPlus} and the fact that $\{\widetilde{\tau}\leq\widetilde{\tau}^{+}_{\ell}\}\in\widetilde{\mathscr{F}}_{\widetilde{\tau}}$. \end{proof} \begin{corollary}\label{cor:StrongMarkovExpTildeTau} For any $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$, $(s,i,a)\in\overline{\mathscr{Z}}$, $\ell\in[a,\infty)$, and $h\in(0,\infty)$, \begin{align*} \widetilde{\mathbb{E}}_{s,i,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell+h}},Z^{2}_{\widetilde{\tau}^{+}_{\ell+h}}\Big)\Big)=\widetilde{\mathbb{E}}_{s,i,a}\Big(\big(\mathcal{P}_{h}^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big).
\end{align*} \end{corollary} \begin{proof} Since $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$ and $\widetilde{\tau}_{\ell+h}^{+}\geq\widetilde{\tau}_{\ell}^{+}$, $g^{+}(Z^{1}_{\widetilde{\tau}^{+}_{\ell+h}},Z^{2}_{\widetilde{\tau}^{+}_{\ell+h}})=g^{+}(\infty,\partial)=0$ on $\{\widetilde{\tau}_{\ell}^{+}=\infty\}$, so that $\widetilde{\mathbb{E}}_{Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}},Z^{3}_{\widetilde{\tau}_{\ell}^{+}}}(g^{+}(Z^{1}_{\widetilde{\tau}^{+}_{\ell+h}},Z^{2}_{\widetilde{\tau}^{+}_{\ell+h}}))=0$ on $\{\widetilde{\tau}_{\ell}^{+}=\infty\}$. Moreover, $Z^{3}_{\widetilde{\tau}^{+}_{\ell}}=\ell$ on $\{\widetilde{\tau}_{\ell}^{+}<\infty\}$. Thus, using Corollary \ref{cor:StrongMarkovExp}, \eqref{eq:ExpTildePShift}, \eqref{eq:DefPellPlusTildeP}, and \eqref{eq:RangeXtauPlus}, we obtain that \begin{align*} \widetilde{\mathbb{E}}_{s,i,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell+h}},Z^{2}_{\widetilde{\tau}^{+}_{\ell+h}}\Big)\Big)&=\widetilde{\mathbb{E}}_{s,i,a}\bigg(\widetilde{\mathbb{E}}_{Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}},Z^{3}_{\widetilde{\tau}^{+}_{\ell}}}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell+h}},Z^{2}_{\widetilde{\tau}^{+}_{\ell+h}}\Big)\Big)\bigg)\\ &=\widetilde{\mathbb{E}}_{s,i,a}\bigg(\1_{\{\widetilde{\tau}^{+}_{\ell}<\infty\}}\,\widetilde{\mathbb{E}}_{Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}},\ell}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell+h}},Z^{2}_{\widetilde{\tau}^{+}_{\ell+h}}\Big)\Big)\bigg)\\ &=\widetilde{\mathbb{E}}_{s,i,a}\bigg(\1_{\{\widetilde{\tau}^{+}_{\ell}<\infty\}}\,\widetilde{\mathbb{E}}_{Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}},0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{h}},Z^{2}_{\widetilde{\tau}^{+}_{h}}\Big)\Big)\bigg)\\ 
&=\widetilde{\mathbb{E}}_{s,i,a}\Big(\1_{\{\widetilde{\tau}^{+}_{\ell}<\infty\}}\big(\mathcal{P}_{h}^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big)\\ &=\widetilde{\mathbb{E}}_{s,i,a}\Big(\big(\mathcal{P}_{h}^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\Big), \end{align*} where the last equality is due to the fact that $(\mathcal{P}_{h}^{+}g^{+})(\infty,\partial)=0$. \end{proof} \begin{remark}\label{rem:ProofExpgpsimJPMinus} We now verify \eqref{eq:ExpgpsimJPPlus} using the strong Markov family $\widetilde{\mathcal{M}}$. Indeed, by \eqref{eq:ExpTildePPStarPlus} and Corollary \ref{cor:StrongMarkovExpTildeTau}, for any $g^{+}\in L^{\infty}(\overline{\mathscr{X}_{+}})$, $(s,i)\in\mathscr{X}_{-}$, and $\ell\in(0,\infty)$, \begin{align*} \mathbb{E}_{s,i}^{*}\Big(g^{+}\Big(\tau_{\ell}^{+,*},X_{\tau_{\ell}^{+,*}}^{*}\Big)\Big)=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big)=\widetilde{\mathbb{E}}_{s,i,0}\Big(\big(\mathcal{P}_{\ell}^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)=\big(J^{+}\mathcal{P}_{\ell}^{+}g^{+}\big)(s,i). \end{align*} \end{remark} \subsection{A regularity lemma}\label{subsec:RegLemma} Fix $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, and define $f_{+}:\mathscr{X}\times\mathbb{R}_{+}\rightarrow\mathbb{R}$ by \begin{align}\label{eq:FuntfPlus} f_{+}(s,i,\ell):=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z_{\widetilde{\tau}_{\ell}^{+}}^{1},Z_{\widetilde{\tau}_{\ell}^{+}}^{2}\Big)\Big). \end{align} In particular, in view of \eqref{eq:TildeTauPlus0}, we have \begin{align}\label{eq:Trivfplus} f_{+}(s,i,0)=g^{+}(s,i),\quad (s,i)\in\overline{\mathscr{X}_{+}}. 
\end{align} Moreover, by \eqref{eq:DefJPlusTildeP}, \eqref{eq:DefPellPlusTildeP}, and \eqref{eq:FuntfPlus}, \begin{align}\label{eq:JPlusfPlus} J^{+}g^{+}(s,i)&=f_{+}(s,i,0),\quad (s,i)\in\mathscr{X}_{-},\\ \label{eq:PellPlusfPlus} \mathcal{P}^{+}_{\ell}g^{+}(s,i)&=f_{+}(s,i,\ell),\quad (s,i)\in\mathscr{X}_{+},\quad\ell\in\mathbb{R}_{+}. \end{align} The following lemma addresses the continuity of $f_{+}$ with respect to different variables. In particular, due to \eqref{eq:JPlusfPlus} and \eqref{eq:PellPlusfPlus}, for any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, the continuity of $J^{+}g^{+}(\cdot,i)$ and of $\mathcal{P}_{\cdot}^{+}g^{+}(\cdot,i)$, with respect to each individual variable, follows as a special case of the continuity of $f_{+}$. Recall that, by Assumption~\ref{assump:GenLambda}, $K$ is a constant such that $\sup_{s\in\mathbb{R}_{+},i,j\in\mathbf{E}}|\mathsf{\Lambda}_{s}(i,j)|\leq K$. Additionally, recall that $\underline{v}=\min_{i\in\mathbf{E}}|v(i)|$ and $\overline{v}=\max_{i\in\mathbf{E}}|v(i)|$. \begin{lemma}\label{lem:UnifContf} For any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, $f_{+}(\cdot,i,\cdot)$ is uniformly continuous on $\mathbb{R}_{+}^{2}$, uniformly for all $i\in\mathbf{E}$. That is, for any $\varepsilon>0$, there exists $\delta=\delta(\varepsilon,K,\|g^{+}\|_{\infty},\underline{v},\overline{v})>0$ such that \begin{align*} \sup_{i\in\mathbf{E}}\,\sup_{\substack{(s_{1},\ell_{1}),(s_{2},\ell_{2})\in\mathbb{R}_{+}^{2}: \\ |s_{2}-s_{1}|+|\ell_{2}-\ell_{1}|<\delta}}\big|f_{+}(s_{2},i,\ell_{2})-f_{+}(s_{1},i,\ell_{1})\big|<\varepsilon. \end{align*} Moreover, for any $i\in\mathbf{E}$ and $\ell\in\mathbb{R}_{+}$, $f_{+}(\cdot,i,\ell)\in C_{0}(\mathbb{R}_{+})$. In particular, $J^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{-}})$ and $\mathcal{P}_{\ell}^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$. \end{lemma} The proof of this lemma is deferred to Appendix \ref{sec:AppendixB}.
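To illustrate the objects at play, consider the following hypothetical toy case (an assumption made here for illustration only, not used in the sequel): $\mathbf{E}=\mathbf{E}_{+}=\{1\}$, with constant velocity $v(1)=c>0$ and $\mathsf{\Lambda}_{s}(1,1)=0$ for all $s\in\mathbb{R}_{+}$, so that the chain neither jumps nor is killed. Then, under $\widetilde{\mathbb{P}}_{s,1,0}$, we have $Z^{2}_{t}=1$ and, by \eqref{eq:DistTildeZ3IntTildeZ2}, $Z^{3}_{t}=ct$ for all $t\in\mathbb{R}_{+}$, whence $\widetilde{\tau}_{\ell}^{+}=\ell/c$ and \eqref{eq:FuntfPlus} reduces to \begin{align*} f_{+}(s,1,\ell)=g^{+}\Big(s+\frac{\ell}{c},1\Big),\quad(s,\ell)\in\mathbb{R}_{+}^{2}. \end{align*} Since every $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$ is uniformly continuous, the joint uniform continuity asserted in Lemma \ref{lem:UnifContf} is immediate in this special case.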
\subsection{Existence of the Wiener-Hopf factorization}\label{sec:ExistProof} This section is devoted to the proof of the $``+"$ portion of Theorem \ref{thm:WHExistUniq}. We do this by demonstrating the existence of a solution to \eqref{eq:WHPlus} subject to conditions ($a^{+}$) and ($b^{+}$). Recall that $J^{+}$ and $(\mathcal{P}_{\ell}^{+})_{\ell\in\mathbb{R}_{+}}$ are defined as in \eqref{eq:DefJPlus} and \eqref{eq:DefPPlus}, and have the respective representations \eqref{eq:DefJPlusTildeP} and \eqref{eq:DefPellPlusTildeP} in terms of the time-homogeneous Markov family $\widetilde{\mathcal{M}}$; $G^{+}$ is defined as in \eqref{eq:DefGenGPlus} with respect to $(\mathcal{P}_{\ell}^{+})_{\ell\in\mathbb{R}_{+}}$. We will show that $(J^{+},G^{+})$ is a solution to \eqref{eq:WHPlus} (which is equivalent to \eqref{eq:WHPlus1}$-$\eqref{eq:WHPlus2}) subject to ($a^{+}$) and ($b^{+}$). The proof is divided into four steps. \medskip \noindent \textbf{Step 1.} In this step we show that $J^{+}$ satisfies condition $(a^{+})$(i). Let $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$. By Lemma \ref{lem:UnifContf}, we have $J^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{-}})$. Moreover, if $\supp g^{+}\subset[0,\eta_{g^{+}}]\times\mathbf{E}_{+}$ for some $\eta_{g^{+}}\in(0,\infty)$, we have $(J^{+}g^{+})(s,i)=\widetilde{\mathbb{E}}_{s,i,0}(g^{+}(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}}))=0$, for any $(s,i)\in[\eta_{g^{+}},\infty)\times\mathbf{E}_{-}$, which completes the proof in Step 1. \medskip \noindent \textbf{Step 2.} Here we will show that $(\mathcal{P}^{+}_{\ell})_{\ell\in\mathbb{R}_{+}}$ is a strongly continuous positive contraction semigroup on $C_{0}(\overline{\mathscr{X}_{+}})$, and thus a Feller semigroup. Let $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$ and $\ell\in\mathbb{R}_{+}$. By Lemma \ref{lem:UnifContf}, we have $\mathcal{P}^{+}_{\ell}g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$.
The positivity and contraction property of $\mathcal{P}_{\ell}^{+}$ follow immediately from its definition. Hence, it remains to show that $(\mathcal{P}^{+}_{\ell})_{\ell\in\mathbb{R}_{+}}$ is a strongly continuous semigroup. To this end, we fix any $(s,i)\in\mathscr{X}_{+}$. By \eqref{eq:Trivfplus} and \eqref{eq:PellPlusfPlus}, we first have \begin{align}\label{eq:PellPlusSemiGroup1} \big(\mathcal{P}_{0}^{+}g^{+}\big)(s,i)=f_{+}(s,i,0)=g^{+}(s,i). \end{align} Moreover, for any $\ell\in\mathbb{R}_{+}$ and $h>0$, by \eqref{eq:DefPellPlusTildeP} and Corollary \ref{cor:StrongMarkovExpTildeTau}, we have \begin{align}\label{eq:PellPlusSemiGroup2} \big(\mathcal{P}^{+}_{\ell+h}g^{+}\big)(s,i)&=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(\!Z^{1}_{\widetilde{\tau}^{+}_{\ell+h}},Z^{2}_{\widetilde{\tau}^{+}_{\ell+h}}\Big)\!\Big)=\widetilde{\mathbb{E}}_{s,i,0}\Big(\!\big(\mathcal{P}^{+}_{h}g^{+}\big)\!\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\!\Big)=\big(\mathcal{P}^{+}_{\ell}\mathcal{P}^{+}_{h}g^{+}\big)(s,i). \end{align} Hence, $(\mathcal{P}_{\ell}^{+})_{\ell\in\mathbb{R}_{+}}$ is a semigroup on $C_{0}(\overline{\mathscr{X}_{+}})$. Finally, for any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, by \eqref{eq:PellPlusfPlus} and Lemma \ref{lem:UnifContf}, we have \begin{align*} \lim_{\ell\rightarrow 0+}\sup_{(s,i)\in\overline{\mathscr{X}_{+}}}\big|\big(\mathcal{P}^{+}_{\ell}g^{+}\big)(s,i)-g^{+}(s,i)\big|&=\lim_{\ell\rightarrow 0+}\sup_{(s,i)\in\mathscr{X}_{+}}\big|\big(\mathcal{P}^{+}_{\ell}g^{+}\big)(s,i)-g^{+}(s,i)\big|\\ &=\lim_{\ell\rightarrow 0+}\sup_{(s,i)\in\mathscr{X}_{+}}\big|f_{+}(s,i,\ell)-f_{+}(s,i,0)\big|=0, \end{align*} which shows the strong continuity of $(\mathcal{P}_{\ell}^{+})_{\ell\in\mathbb{R}_{+}}$, and thus completes the proof in Step 2.
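As a sanity check of Step 2, consider the hypothetical special case (an illustrative assumption, not needed for the general argument) $\mathbf{E}=\mathbf{E}_{+}=\{1\}$ with $v(1)=c>0$ and $\mathsf{\Lambda}_{s}(1,1)=0$ for all $s\in\mathbb{R}_{+}$, i.e., a single state with no jumps or killing. Then $\widetilde{\tau}_{\ell}^{+}=\ell/c$ under $\widetilde{\mathbb{P}}_{s,1,0}$, and \eqref{eq:DefPellPlusTildeP} yields \begin{align*} \big(\mathcal{P}_{\ell}^{+}g^{+}\big)(s,1)=g^{+}\Big(s+\frac{\ell}{c},1\Big),\quad(s,\ell)\in\mathbb{R}_{+}^{2}, \end{align*} that is, $(\mathcal{P}_{\ell}^{+})_{\ell\in\mathbb{R}_{+}}$ becomes the translation semigroup at speed $1/c$, for which the semigroup property and strong continuity on $C_{0}$ are classical; its strong generator is $c^{-1}\,\partial/\partial s$ on $C_{0}^{1}$, consistent with the formula for $G^{+}$ derived in Step 3 below.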
\medskip \noindent \textbf{Step 3.} We will show here that $G^{+}$ is the strong generator of $(\mathcal{P}^{+}_{\ell})_{\ell\in\mathbb{R}_{+}}$ with domain $C_{0}^{1}(\overline{\mathscr{X}_{+}})$, and that \begin{align}\label{eq:WHplusU} G^{+}g^{+}=\big(\mathsf{V}^{+}\big)^{-1}\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{A}}+\widetilde{\mathsf{B}}J^{+}\bigg)g^{+},\quad g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}}). \end{align} The argument proceeds in two sub-steps: \textbf{(i)} and \textbf{(ii)}. \medskip \noindent \textbf{(i)} We first show that, for any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, the pointwise limit in \eqref{eq:DefGenGPlus} exists for every $(s,i)\in\overline{\mathscr{X}_{+}}$ if and only if $g^{+}(\cdot,i)$ is right-differentiable on $\mathbb{R}_{+}$ for each $i\in\mathbf{E}$. Moreover, for such $g^{+}$, we have \begin{align} &\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\big(\mathcal{P}^{+}_{\ell}g^{+}(s,i)-g^{+}(s,i)\big)\nonumber\\ \label{eq:LimitGPlus} &\quad =\frac{1}{v(i)}\bigg(\frac{\partial_+ g^{+}}{\partial s}(s,i)+\sum_{j\in\mathbf{E}_{+}}\mathsf{\Lambda}_{s}(i,j)g^{+}(s,j)+\sum_{j\in\mathbf{E}_{-}}\mathsf{\Lambda}_{s}(i,j)\big(J^{+}g^{+}\big)(s,j)\bigg),\quad (s,i)\in\overline{\mathscr{X}_{+}}. \end{align} When $(s,i)=(\infty,\partial)$, \eqref{eq:LimitGPlus} is trivial since both sides of the equality are equal to zero. In what follows, fix $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$ and $(s,i)\in\mathscr{X}_{+}$. Let $\widetilde{\gamma}_{1}$ be the first jump time of $Z^{2}$.
For any $\ell\in(0,\infty)$, by \eqref{eq:DefPellPlusTildeP} and \eqref{eq:Gamma1overh}, we have \begin{align} &\frac{1}{\ell}\big(\mathcal{P}^{+}_{\ell}g^{+}(s,i)-g^{+}(s,i)\big)=\frac{1}{\ell}\Big(\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big)-g^{+}(s,i)\Big)\nonumber\\ &\quad =\frac{1}{\ell}\bigg(\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\1_{\{\widetilde{\gamma}_{1}>\ell/v(i)\}}\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i)\}}\Big)-g^{+}(s,i)\bigg)\nonumber\\ &\quad =\frac{1}{\ell}\bigg(\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\1_{\{\widetilde{\gamma}_{1}>\ell/v(i),\widetilde{\tau}^{+}_{\ell}=\ell/v(i)\}}\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i)\}}\Big)-g^{+}(s,i)\bigg)\nonumber\\ &\quad =\frac{1}{\ell}\bigg(\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\ell/v(i)},Z^{2}_{\ell/v(i)}\Big)\1_{\{\widetilde{\gamma}_{1}>\ell/v(i)\}}\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i)\}}\Big)-g^{+}(s,i)\bigg)\nonumber\\ &\quad =\frac{1}{\ell}\,\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}>\frac{\ell}{v(i)}\bigg)\bigg(g^{+}\bigg(s+\frac{\ell}{v(i)},i\bigg)-g^{+}(s,i)\bigg)-\frac{1}{\ell}\,\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)}\bigg)g^{+}(s,i)\nonumber\\ &\qquad 
+\frac{1}{\ell}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i)\}}\Big)\nonumber\\ \label{eq:DecompGPlus} &\quad =:\mathcal{I}_{1}(\ell)-\mathcal{I}_{2}(\ell)+\mathcal{I}_{3}(\ell). \end{align} Clearly, \begin{align}\label{eq:LimitGPlus1} \lim_{\ell\rightarrow 0+}\mathcal{I}_{1}(\ell)=\frac{1}{v(i)}\frac{\partial_+ g^{+}}{\partial s}(s,i) \end{align} if and only if $g^{+}(\cdot,i)$ is right-differentiable at $s$. As for $\mathcal{I}_{2}(\ell)$, \eqref{eq:TailDistJumpTildeZ2} implies that \begin{align}\label{eq:LimitGPlus2} \lim_{\ell\rightarrow 0+}\mathcal{I}_{2}(\ell)=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\bigg(1-\exp\bigg(\int_{s}^{s+\ell/v(i)}\!\mathsf{\Lambda}_{u}(i,i)\,du\bigg)\bigg)g^{+}(s,i)=-\frac{\mathsf{\Lambda}_{s}(i,i)}{v(i)}g^{+}(s,i). \end{align} It remains to analyze the limit of $\mathcal{I}_{3}(\ell)$ as $\ell\rightarrow 0+$. By \eqref{eq:hoverGamma1} and \eqref{eq:StrongMarkovCondPlus}, \begin{align*} \mathcal{I}_{3}(\ell)&=\frac{1}{\ell}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}^{+}_{\ell}\}}\Big)\\ &=\frac{1}{\ell}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}^{+}_{\ell}\}}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big|\widetilde{\mathscr{F}}_{\widetilde{\gamma}_{1}}\Big)\Big)\\
&=\frac{1}{\ell}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\,\widetilde{\mathbb{E}}_{Z_{\widetilde{\gamma}_{1}}^{1},Z_{\widetilde{\gamma}_{1}}^{2},Z_{\widetilde{\gamma}_{1}}^{3}}\!\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big)\Big). \end{align*} Since $Z^{3}_{\widetilde{\gamma}_{1}}\leq\ell$ on $\{\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}$, by \eqref{eq:ExpTildePShift} and \eqref{eq:FuntfPlus}, we have \begin{align*} &\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\,\widetilde{\mathbb{E}}_{Z_{\widetilde{\gamma}_{1}}^{1},Z_{\widetilde{\gamma}_{1}}^{2},Z_{\widetilde{\gamma}_{1}}^{3}}\!\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big)\\ &\quad =\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\,\widetilde{\mathbb{E}}_{t,j,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell}^{+}},Z^{2}_{\widetilde{\tau}_{\ell}^{+}}\Big)\Big)\Big|_{(t,j,a)=\big(Z_{\widetilde{\gamma}_{1}}^{1},Z_{\widetilde{\gamma}_{1}}^{2},Z_{\widetilde{\gamma}_{1}}^{3}\big)}\\ &\quad =\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\,\widetilde{\mathbb{E}}_{t,j,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell-a}^{+}},Z^{2}_{\widetilde{\tau}_{\ell-a}^{+}}\Big)\Big)\Big|_{(t,j,a)=\big(Z_{\widetilde{\gamma}_{1}}^{1},Z_{\widetilde{\gamma}_{1}}^{2},Z_{\widetilde{\gamma}_{1}}^{3}\big)}\\ &\quad =\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\,f_{+}\big(Z_{\widetilde{\gamma}_{1}}^{1},Z_{\widetilde{\gamma}_{1}}^{2},\ell-Z_{\widetilde{\gamma}_{1}}^{3}\big). 
\end{align*} Thus, we can further decompose $\mathcal{I}_{3}(\ell)$ as \begin{align} \mathcal{I}_{3}(\ell)&=\frac{1}{\ell}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\,f_{+}\big(Z_{\widetilde{\gamma}_{1}}^{1},Z_{\widetilde{\gamma}_{1}}^{2},\ell-Z_{\widetilde{\gamma}_{1}}^{3}\big)\Big)\nonumber\\ &=\frac{1}{\ell}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\Big(f_{+}\big(Z^{1}_{\widetilde{\gamma}_{1}},Z^{2}_{\widetilde{\gamma}_{1}},\ell-Z^{3}_{\widetilde{\gamma}_{1}}\big)-f_{+}\big(s,Z^{2}_{\widetilde{\gamma}_{1}},\ell\big)\Big)\Big)\nonumber\\ &\quad +\frac{1}{\ell}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i)\}}\,f_{+}\big(s,Z^{2}_{\widetilde{\gamma}_{1}},\ell\big)\Big)\nonumber\\ \label{eq:DecompGPlus3} &=:\mathcal{I}_{31}(\ell)+\mathcal{I}_{32}(\ell). \end{align} For $\mathcal{I}_{31}(\ell)$, by \eqref{eq:ProcZ}, Lemma \ref{lem:UnifContf}, and \eqref{eq:DistFirstJumpTildeZ2}, we have \begin{align} \lim_{\ell\rightarrow 0+}\big|\mathcal{I}_{31}(\ell)\big|&=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\Big|\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\Big(f_{+}\big(s+\widetilde{\gamma}_{1},Z^{2}_{\widetilde{\gamma}_{1}},\ell-Z^{3}_{\widetilde{\gamma}_{1}}\big)-f_{+}\big(s,Z^{2}_{\widetilde{\gamma}_{1}},\ell\big)\Big)\Big)\Big|\nonumber\\ &\leq\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\widetilde{\mathbb{E}}_{s,i,0}\bigg(\1_{\{\widetilde{\gamma}_{1}\leq\ell/v(i),\,\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{\ell}^{+}\}}\sup_{j\in\mathbf{E}}\sup_{\ell'\in[0,\ell],\,|s'-s|\in[0,\ell/\underline{v}]}\big|f_{+}(s',j,\ell')-f_{+}(s,j,\ell)\big|\bigg)\nonumber\\ &\leq\lim_{\ell\rightarrow 
0+}\sup_{j\in\mathbf{E}}\sup_{\ell'\in[0,\ell],\,|s'-s|\in[0,\ell/\underline{v}]}\big|f_{+}(s',j,\ell')-f_{+}(s,j,\ell)\big|\cdot\frac{1}{\ell}\,\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)}\bigg)\nonumber\\ \label{eq:LimitGPlus31} &\leq\frac{K}{\underline{v}}\lim_{\ell\rightarrow 0+}\sup_{j\in\mathbf{E}}\sup_{\ell'\in[0,\ell],\,|s'-s|\in[0,\ell/\underline{v}]}\big|f_{+}(s',j,\ell')-f_{+}(s,j,\ell)\big|=0, \end{align} where we recall that $\underline{v}=\min_{i\in\mathbf{E}}|v(i)|$. To study the limit of $\mathcal{I}_{32}(\ell)$ as $\ell\rightarrow 0+$, we first rewrite $\mathcal{I}_{32}(\ell)$ as \begin{align}\label{eq:DecompGPlus32} \mathcal{I}_{32}(\ell)=\frac{1}{\ell}\sum_{j\in\mathbf{E}\setminus\{i\}}\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}=j\bigg)f_{+}(s,j,\ell). \end{align} Note that, for any $j\in\mathbf{E}\setminus\{i\}$, the probability in \eqref{eq:DecompGPlus32} can be further decomposed as \begin{align} &\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}=j\bigg)\nonumber\\ &\quad =\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}=j,\,Z_{\ell/v(i)}^{2}=j\bigg)+\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}=j,\,Z_{\ell/v(i)}^{2}\neq j\bigg)\nonumber\\ &\quad =\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\ell/v(i)}^{2}=j\bigg)-\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}\neq j,\,Z_{\ell/v(i)}^{2}=j\bigg)\nonumber\\ \label{eq:DecompGPlusProb32} &\qquad +\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}=j,\,Z_{\ell/v(i)}^{2}\neq j\bigg). 
\end{align} By \eqref{eq:TranProbTildeX}, \eqref{eq:LawXXStar}, and \eqref{eq:DefGenXStar}, for $j\neq i$, \begin{align*} \lim_{\ell\rightarrow 0+}\frac{1}{\ell}\,\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\ell/v(i)}^{2}=j\bigg)&=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\,\widetilde{\mathbb{P}}_{s,i,0}\big(Z_{\ell/v(i)}^{2}=j\big)=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\,\mathbb{P}_{s,(i,0)}\big(X_{s+\ell/v(i)}=j\big)\\ &=\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\,\mathbb{P}^{*}_{s,i}\big(X_{s+\ell/v(i)}^{*}=j\big)=\frac{\mathsf\Lambda_{s}(i,j)}{v(i)}, \end{align*} which, together with Lemma \ref{lem:UnifContf}, gives \begin{align}\label{eq:LimitGPlus321} \lim_{\ell\rightarrow 0+}\frac{1}{\ell}\sum_{j\in\mathbf{E}\setminus\{i\}}\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\ell/v(i)}^{2}=j\bigg)f_{+}(s,j,\ell)=\sum_{j\in\mathbf{E}\setminus\{i\}}\frac{\mathsf\Lambda_{s}(i,j)}{v(i)}f_{+}(s,j,0). \end{align} Moreover, denoting by $\widetilde{\gamma}_{2}$ the second jump time of $\widetilde{Z}^2$, then by \eqref{eq:FuntfPlus} and \eqref{eq:DistSecondJumpTildeZ2}, we have \begin{align} &\lim_{\ell\rightarrow 0+}\bigg|\frac{1}{\ell}\sum_{j\in\mathbf{E}\setminus\{i\}}\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}\neq j,\,Z_{\ell/v(i)}^{2}=j\bigg)f_{+}(s,j,\ell)\bigg|\nonumber\\ &\quad\leq\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\sum_{j\in\mathbf{E}\setminus\{i\}}\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{2}\leq\frac{\ell}{v(i)},\,Z_{\ell/v(i)}^{2}=j\bigg)\big\|g^{+}\big\|_{\infty}\nonumber\\ \label{eq:LimitGPlus322} &\quad\leq\lim_{\ell\rightarrow 0+}\frac{1}{\ell}\,\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{2}\leq\frac{\ell}{v(i)}\bigg)\big\|g^{+}\big\|_{\infty}\leq\lim_{\ell\rightarrow 0+}\frac{2K^{2}\ell}{\underline{v}^{2}}\big\|g^{+}\big\|_{\infty}=0, \end{align} and similarly, 
\begin{align}\label{eq:LimitGPlus323} \lim_{\ell\rightarrow 0+}\frac{1}{\ell}\bigg|\sum_{j\in\mathbf{E}\setminus\{i\}}\widetilde{\mathbb{P}}_{s,i,0}\bigg(\widetilde{\gamma}_{1}\leq\frac{\ell}{v(i)},\,Z_{\widetilde{\gamma}_{1}}^{2}=j,\,Z_{\ell/v(i)}^{2}\neq j\bigg)f_{+}(s,j,\ell)\bigg|=0. \end{align} Combining \eqref{eq:DecompGPlus32}$-$\eqref{eq:LimitGPlus323} leads to \begin{align}\label{eq:LimitGPlus32} \lim_{\ell\rightarrow 0+}\mathcal{I}_{32}(\ell)=\!\!\sum_{j\in\mathbf{E}\setminus\{i\}}\!\!\frac{\mathsf{\Lambda}_{s}(i,j)}{v(i)}f_{+}(s,j,0)=\!\!\sum_{j\in\mathbf{E}_{+}\setminus\{i\}}\!\!\frac{\mathsf{\Lambda}_{s}(i,j)}{v(i)}\,g^{+}(s,j)+\!\sum_{j\in\mathbf{E}_{-}}\!\frac{\mathsf{\Lambda}_{s}(i,j)}{v(i)}\big(J^{+}g^{+}\big)(s,j), \end{align} where the last equality is due to \eqref{eq:Trivfplus} and \eqref{eq:JPlusfPlus}. Therefore, from \eqref{eq:DecompGPlus3}, \eqref{eq:LimitGPlus31}, and \eqref{eq:LimitGPlus32}, we have \begin{align}\label{eq:LimitGPlus3} \lim_{\ell\rightarrow 0+}\mathcal{I}_{3}(\ell)=\sum_{j\in\mathbf{E}_{+}\setminus\{i\}}\frac{\mathsf{\Lambda}_{s}(i,j)}{v(i)}\,g^{+}(s,j)+\sum_{j\in\mathbf{E}_{-}}\frac{\mathsf{\Lambda}_{s}(i,j)}{v(i)}\left(J^{+}g^{+}\right)(s,j). \end{align} Combining \eqref{eq:DecompGPlus}$-$\eqref{eq:LimitGPlus2} and \eqref{eq:LimitGPlus3}, we conclude that the limit in \eqref{eq:DefGenGPlus} exists for every $(s,i)\in\overline{\mathscr{X}_{+}}$ if and only if $g^{+}(\cdot,i)$ is right-differentiable on $\mathbb{R}_{+}$ for each $i$, and that for such $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, \eqref{eq:LimitGPlus} holds true for any $(s,i)\in\overline{\mathscr{X}_{+}}$. \medskip \noindent \textbf{(ii)} We now show that $\mathscr{D}(G^{+})=C_{0}^{1}(\overline{\mathscr{X}_{+}})$. 
Toward this end we define \begin{align*} \mathscr{L}(G^{+}):=\big\{g^{+}\in C_{0}(\overline{\mathscr{X}_{+}}):\,\text{the limit in \eqref{eq:DefGenGPlus} exists for all }(s,i)\in\mathscr{X}_{+}\,\,\,\text{and}\,\,\,G^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})\big\}. \end{align*} Since $(\mathcal{P}^{+}_{\ell})_{\ell\in\mathbb{R}_{+}}$ is a Feller semigroup on $C_{0}(\overline{\mathscr{X}_{+}})$ (cf. Step 2), it follows from \cite[Theorem 1.33]{BottcherSchillingWang2013} that $G^{+}$ is the strong generator of $(\mathcal{P}^{+}_{\ell})_{\ell\in\mathbb{R}_{+}}$ with $\mathscr{D}(G^{+})=\mathscr{L}(G^{+})$. Hence, we only need to show that $\mathscr{L}(G^{+})=C_{0}^{1}(\overline{\mathscr{X}_{+}})$. We first show that $\mathscr{L}(G^{+})\subset C_{0}^{1}(\overline{\mathscr{X}_{+}})$. For any $g^{+}\in\mathscr{L}(G^{+})$, it was shown in Step~3 (i) that \begin{align}\label{eq:GPlusStep3} \big(G^{+}\!g^{+}\big)(s,i)\!=\!\frac{1}{v(i)}\bigg(\!\frac{\partial_{+}g^{+}}{\partial s}(s,i)\!+\!\!\sum_{j\in\mathbf{E}_{+}}\!\!\mathsf{\Lambda}_{s}(i,j)g^{+}\!(s,j)\!+\!\!\sum_{j\in\mathbf{E}_{-}}\!\!\mathsf{\Lambda}_{s}(i,j)\big(J^{+}\!g^{+}\big)(s,j)\!\!\bigg),\,\,(s,i)\!\in\!\overline{\mathscr{X}_{+}}, \end{align} where the right-hand side, as a function of $(s,i)$, belongs to $C_{0}(\overline{\mathscr{X}_{+}})$. By Lemma \ref{lem:UnifContf}, we have $J^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{-}})$. This, together with Assumption \ref{assump:GenLambda} (ii), ensures that \begin{align}\label{eq:GPlusStep3part} \sum_{j\in\mathbf{E}_{+}}\mathsf{\Lambda}_{s}(i,j)g^{+}(s,j)+\sum_{j\in\mathbf{E}_{-}}\mathsf{\Lambda}_{s}(i,j)\big(J^{+}g^{+}\big)(s,j), \end{align} as a function of $(s,i)\in\overline{\mathscr{X}_{+}}$, belongs to $C_{0}(\overline{\mathscr{X}_{+}})$. Thus, we must have $\partial_{+}g^{+}/\partial s$ exists at any $(s,i)\in\mathscr{X}_{+}$, $\partial_{+}g^{+}(\infty,\partial)/\partial s=0$, and $\partial_{+}g^{+}/\partial s\in C_{0}(\overline{\mathscr{X}_{+}})$. 
Therefore, $\partial g^{+}/\partial s$ exists and belongs to $C_{0}(\overline{\mathscr{X}_{+}})$, i.e., $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$. To show $C_{0}^{1}(\overline{\mathscr{X}_{+}})\subset\mathscr{L}(G^{+})$, we first note that for $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$, Step 3 (i) shows that the limit in \eqref{eq:DefGenGPlus} exists for every $(s,i)\in\overline{\mathscr{X}_{+}}$, and that \eqref{eq:GPlusStep3} holds true. Since $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, the same argument as above implies that \eqref{eq:GPlusStep3part}, as a function of $(s,i)$, belongs to $C_{0}(\overline{\mathscr{X}_{+}})$. Hence, $G^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$. The proof in Step 3 is now complete. \medskip \noindent \textbf{Step 4.} In this step we will show that $J^{+}$ satisfies the condition $(a^{+})$(ii), that is, for any $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$, we have $J^{+}g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{-}})$. We will also show that \begin{align}\label{eq:WHplusD} \frac{\partial}{\partial s}\big(J^{+}g^{+}\big)=\big(-\widetilde{\mathsf{C}}-\widetilde{\mathsf{D}}J^{+}+\mathsf{V}^{-}J^{+}G^{+}\big)\,g^{+},\quad g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}}). \end{align} We fix $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$ for the rest of the proof. To begin with, we claim that in order to prove $J^{+}g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{-}})$ and \eqref{eq:WHplusD}, it is sufficient to show that \begin{align}\label{eq:WHplusDRightDev} \frac{\partial_{+}}{\partial s}J^{+}g^{+}=\big(-\widetilde{\mathsf{C}}-\widetilde{\mathsf{D}}J^{+}+\mathsf{V}^{-}J^{+}G^{+}\big)\,g^{+}. \end{align} In fact, since $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$, Step 3 shows that $G^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$. Hence, by Lemma \ref{lem:UnifContf}, we have $J^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{-}})$ and $J^{+}G^{+}g^{+}\in C_{0}(\overline{\mathscr{X}_{-}})$. 
Given the definition of $\widetilde{\mathsf{C}}$ and $\widetilde{\mathsf{D}}$ and invoking Assumption \ref{assump:GenLambda}, we conclude that $(-\widetilde{\mathsf{C}}-\widetilde{\mathsf{D}}J^{+}+\mathsf{V}^{-}J^{+}G^{+})g^{+}\in C_{0}(\overline{\mathscr{X}_{-}})$. Thus, indeed, \eqref{eq:WHplusDRightDev} implies that $J^{+}g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{-}})$ and that \eqref{eq:WHplusD} holds. To prove \eqref{eq:WHplusDRightDev}, it is sufficient to consider $(s,i)\in\mathscr{X}_{-}$ only, since both sides of \eqref{eq:WHplusDRightDev} {are} equal to zero for $(s,i)=(\infty,\partial)$. In view of \eqref{eq:DefJPlusTildeP}, we will evaluate \begin{align*} \lim_{r\rightarrow 0+}\frac{1}{r}\bigg(\widetilde{\mathbb{E}}_{s+r,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\bigg),\quad (s,i)\in\mathscr{X}_{-}. \end{align*} For any $r>0$, by \eqref{eq:ProcZ} and \eqref{eq:FuntfPlus}, \begin{align} &\widetilde{\mathbb{E}}_{s+r,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\bigg(\Big(\widetilde{\mathbb{E}}_{s+r,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)-f_{+}\big(Z^{1}_{r},Z^{2}_{r},0\big)\Big)\bigg)+\widetilde{\mathbb{E}}_{s,i,0}\Big(f_{+}\big(Z^{1}_{r},Z^{2}_{r},0\big)-g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\Big(f_{+}(s+r,i,0\big)-f_{+}\big(s+r,Z^{2}_{r},0\big)\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(f_{+}\big(s+r,Z^{2}_{r},0\big)-g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\nonumber\\ &\quad 
=\widetilde{\mathbb{E}}_{s,i,0}\Big(f_{+}(s\!+\!r,i,0\big)-f_{+}\big(s\!+\!r,Z^{2}_{r},0\big)\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\Big(f_{+}\big(s\!+\!r,Z^{2}_{r},0\big)-g^{+}\Big(s\!+\!\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\Big)\nonumber\\ \label{eq:PreDecompPartialJPlus} &\qquad +\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\Big(f_{+}\big(s+r,Z^{2}_{r},0\big)-g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\Big). \end{align} Clearly, by \eqref{eq:RangeXtauPlus} and \eqref{eq:Trivfplus}, $g^{+}(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}})=f_{+}(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}},0)$. Hence, the second term in \eqref{eq:PreDecompPartialJPlus} can be decomposed as \begin{align} &\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\Big(f_{+}\big(s+r,Z^{2}_{r},0\big)-g^{+}\Big(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\Big)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\Big(f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\Big(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)\Big)\Big)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\Big(f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\Big(s+r,Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)\Big)\Big)\nonumber\\ \label{eq:PreDecompPartialJPlus2} &\qquad +\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\Big(f_{+}\Big(s+r,Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)-f_{+}\Big(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)\Big)\Big). 
\end{align} Moreover, by \eqref{eq:StrongMarkovCondPlus}, \eqref{eq:ExpTildePShift}, \eqref{eq:FuntfPlus}, and \eqref{eq:ProcZ}, we can decompose the second term in \eqref{eq:PreDecompPartialJPlus} as \begin{align} &\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\Big(f_{+}\big(s+r,Z^{2}_{r},0\big)-g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\Big)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\bigg(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\Big(f_{+}\big(s+r,Z^{2}_{r},0\big)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big|\mathscr{F}_{r}\Big)\Big)\bigg)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\left(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\bigg(f_{+}\big(s+r,Z^{2}_{r},0\big)-\widetilde{\mathbb{E}}_{t,j,a}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\Big|_{(t,j,a)=\big(Z^{1}_{r},Z^{2}_{r},Z^{3}_{r}\big)}\bigg)\right)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\left(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\bigg(f_{+}\big(s+r,Z^{2}_{r},0\big)-\widetilde{\mathbb{E}}_{t,j,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{-a}^{+}},Z^{2}_{\widetilde{\tau}_{-a}^{+}}\Big)\Big)\Big|_{(t,j,a)=\big(Z^{1}_{r},Z^{2}_{r},Z^{3}_{r}\big)}\bigg)\right)\nonumber\\ &\quad =\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\big(f_{+}(s+r,Z^{2}_{r},0)-f_{+}\big(Z^{1}_{r},Z^{2}_{r},-Z^{3}_{r}\big)\big)\Big)\nonumber\\ \label{eq:PreDecompPartialJPlus3} &\quad =\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\big(f_{+}(s+r,Z^{2}_{r},0)-f_{+}\big(s+r,Z^{2}_{r},-Z^{3}_{r}\big)\big)\Big). 
\end{align} Hence, by combining \eqref{eq:PreDecompPartialJPlus}$-$\eqref{eq:PreDecompPartialJPlus3}, we obtain that \begin{align} &\frac{1}{r}\bigg(\widetilde{\mathbb{E}}_{s+r,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\bigg)\nonumber\\ &\quad =\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(f_{+}(s+r,i,0)-f_{+}\big(s+r,Z^{2}_{r},0\big)\Big)\nonumber\\ &\qquad +\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\big(f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\big(s+r,Z^{2}_{\widetilde{\tau}_{0}^{+}},0\big)\big)\Big)\nonumber\\ &\qquad +\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\Big(f_{+}\Big(s+r,Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)-f_{+}\Big(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)\Big)\Big)\nonumber\\ &\qquad +\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\big(f_{+}\big(s+r,Z^{2}_{r},0)-f_{+}\big(s+r,Z^{2}_{r},-Z^{3}_{r}\big)\big)\Big)\nonumber\\ \label{eq:DecompPartialJPlus} &\quad =:\mathcal{J}_{1}(r)+\mathcal{J}_{2}(r)+\mathcal{J}_{3}(r)+\mathcal{J}_{4}(r). \end{align} Next, we will analyze the limit of $\mathcal{J}_{k}(r)$, $k=1,2,3,4$, as $r\rightarrow 0+$. We begin with evaluating the limit of $\mathcal{J}_{1}(r)$ as $r\rightarrow 0+$. 
By \eqref{eq:Probz}, \eqref{eq:ProcZ}, and \eqref{eq:LawXXStar}, and using the evolution system $\mathsf{U}^{*}=(\mathsf{U}^{*}_{s,t})_{0\leq s\leq t<\infty}$ defined as in \eqref{eq:DefEvolSytXStar}, we have \begin{align} \mathcal{J}_{1}(r)&=\frac{1}{r}\,\mathbb{E}_{s,(i,0)}\big(f_{+}(s+r,i,0)-f_{+}\big(s+r,X_{s+r},0\big)\big)\nonumber\\ &=\frac{1}{r}\,\mathbb{E}^{*}_{s,i}\big(f_{+}(s+r,i,0)-f_{+}\big(s+r,X_{s+r}^{*},0\big)\big)=-\frac{1}{r}\big(\big(\mathsf{U}_{s,s+r}^{*}-\mathsf{I}\big)f_{+}(s+r,\cdot,0)\big)(i)\nonumber\\ \label{eq:DecompJ1r} &= -\frac{1}{r}\big(\big(\mathsf{U}_{s,s+r}^{*}-\mathsf{I}\big)f_{+}(s,\cdot,0)\big)(i)+\frac{1}{r}\big(\big(\mathsf{U}_{s,s+r}^{*}-\mathsf{I}\big)\big(f_{+}(s,\cdot,0)-f_{+}(s+r,\cdot,0)\big)\big)(i). \end{align} It follows immediately from \eqref{eq:DefGenXStar} that \begin{align}\label{eq:LimitJ1r1} \lim_{r\rightarrow 0+}\frac{1}{r}\big(\big(\mathsf{U}_{s,s+r}^{*}-\mathsf{I}\big)f_{+}(s,\cdot,0)\big)(i)=\sum_{j\in\mathbf{E}}\mathsf{\Lambda}_{s}(i,j)f_{+}(s,j,0). \end{align} Moreover, by \eqref{eq:EvolDE}, Assumption \ref{assump:GenLambda} (so that $\|\mathsf{\Lambda}^{*}_{t}\|_{\infty}\leq 2K$ for all $t\in\overline{\mathbb{R}}_{+}$), Lemma \ref{lem:UnifContf}, and the fact that $\mathsf{U}^{*}_{s,t}$ is a contraction map, we have \begin{align} &\lim_{r\rightarrow 0+}\frac{1}{r}\big|\big(\big(\mathsf{U}_{s,s+r}^{*}-\mathsf{I}\big)\big(f_{+}(s,\cdot,0)-f_{+}(s+r,\cdot,0)\big)\big)(i)\big|\nonumber\\ &\quad\leq\lim_{r\rightarrow 0+}\frac{1}{r}\big\|\mathsf{U}_{s,s+r}^{*}-\mathsf{I}\big\|_{\infty}\sup_{(s,j)\in\mathscr{X}}\big|f_{+}(s,j,0)-f_{+}(s+r,j,0)\big|\nonumber\\ &\quad\leq\lim_{r\rightarrow 0+}\frac{1}{r}\int_{s}^{s+r}\big\|\mathsf{U}_{s,t}^{*}\big\|_{\infty}\big\|\mathsf{\Lambda}^{*}_{t}\big\|_{\infty}dt\cdot\sup_{(s,j)\in\mathscr{X}}\big|f_{+}(s,j,0)-f_{+}(s+r,j,0)\big|\nonumber\\ \label{eq:LimitJ1r2} &\quad\leq 2K\lim_{r\rightarrow 0+}\sup_{(s,j)\in\mathscr{X}}\big|f_{+}(s,j,0)-f_{+}(s+r,j,0)\big|=0. 
\end{align} Combining \eqref{eq:DecompJ1r}$-$\eqref{eq:LimitJ1r2} leads to \begin{align}\label{eq:LimitJ1r} \lim_{r\rightarrow 0+}\mathcal{J}_{1}(r)=-\sum_{j\in\mathbf{E}}\mathsf{\Lambda}_{s}(i,j)\,f_{+}(s,j,0). \end{align} Next, we will study the limits of $\mathcal{J}_{2}(r)$ and $\mathcal{J}_{3}(r)$ as $r\rightarrow 0+$. Since $i\in\mathbf{E}_{-}$, $Z^{2}$ must have at least one jump to $\mathbf{E}_{+}$ before $Z^{3}$ (which coincides with $\int_{0}^{\cdot}v(Z^{2}_{u})du$ in view of \eqref{eq:DistTildeZ3IntTildeZ2}) can upcross the level $0$, i.e., $\widetilde{\mathbb{P}}_{s,i,0}(\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{0}^{+})=1$, where we recall that $\widetilde{\gamma}_{1}$ denotes the first jump time of $Z^{2}$. Hence, by \eqref{eq:DistFirstJumpTildeZ2} and Lemma \ref{lem:UnifContf}, \begin{align} \lim_{r\rightarrow 0+}\big|\mathcal{J}_{3}(r)\big|&=\lim_{r\rightarrow 0+}\frac{1}{r}\Big|\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{0}^{+}\leq r\}}\Big(f_{+}\Big(s+r,Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)-f_{+}\Big(s+\widetilde{\tau}_{0}^{+},Z^{2}_{\widetilde{\tau}_{0}^{+}},0\Big)\Big)\Big)\Big|\nonumber\\ &\leq\lim_{r\rightarrow 0+}\frac{1}{r}\,\widetilde{\mathbb{P}}_{s,i,0}\big(\widetilde{\gamma}_{1}\leq r\big)\,\sup_{(r',j)\in[0,r]\times\mathbf{E}}\big|f_{+}(s+r',j,0)-f_{+}(s,j,0)\big|\nonumber\\ \label{eq:LimitJ3r} &\leq K\,\lim_{r\rightarrow 0+}\sup_{(r',j)\in[0,r]\times\mathbf{E}}\big|f_{+}(s+r',j,0)-f_{+}(s,j,0)\big|=0. \end{align} Moreover, note that $\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}(f_{+}(s+r,Z^{2}_{r},0)-f_{+}(s+r,Z^{2}_{\widetilde{\tau}_{0}^{+}},0))$ does not vanish only if $Z^{2}_{r}\neq Z^{2}_{\widetilde{\tau}_{0}^{+}}$, so $Z^{2}$ must jump at least twice before time $r$. 
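For the reader's convenience, we record the form in which the jump-time estimates enter the bounds here and below (this restates how \eqref{eq:DistFirstJumpTildeZ2} and \eqref{eq:DistSecondJumpTildeZ2} are used, consistently with the constants appearing in the displays above and below): \begin{align*} \widetilde{\mathbb{P}}_{s,i,0}\big(\widetilde{\gamma}_{1}\leq t\big)\leq Kt \qquad\text{and}\qquad \widetilde{\mathbb{P}}_{s,i,0}\big(\widetilde{\gamma}_{2}\leq t\big)\leq 2K^{2}t^{2},\quad t\in\mathbb{R}_{+}, \end{align*} so that $\frac{1}{r}\,\widetilde{\mathbb{P}}_{s,i,0}(\widetilde{\gamma}_{1}\leq r)$ remains bounded by $K$, while $\frac{1}{r}\,\widetilde{\mathbb{P}}_{s,i,0}(\widetilde{\gamma}_{2}\leq r)\leq 2K^{2}r$ vanishes as $r\rightarrow 0+$.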
Hence, by \eqref{eq:DistSecondJumpTildeZ2} and \eqref{eq:FuntfPlus}, we have \begin{align} \lim_{r\rightarrow 0+}\big|\mathcal{J}_{2}(r)\big|&=\lim_{r\rightarrow 0+}\frac{1}{r}\Big|\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r,\,\widetilde{\gamma}_{2}\leq r\}}\big(f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\big(s+r,Z^{2}_{\widetilde{\tau}_{0}^{+}},0\big)\big)\Big)\Big|\nonumber\\ \label{eq:LimitJ2r} &\leq 2\big\|g^{+}\big\|_{\infty}\cdot\lim_{r\rightarrow 0+}\frac{1}{r}\,\widetilde{\mathbb{P}}_{s,i,0}\big(\widetilde{\gamma}_{2}\leq r\big)\leq 2K^{2}\big\|g^{+}\big\|_{\infty}\cdot\lim_{r\rightarrow 0+}r=0, \end{align} where we recall that $\widetilde{\gamma}_{2}$ denotes the second jump time of $Z^{2}$. Finally, we study the limit of $\mathcal{J}_{4}(r)$, as $r\rightarrow 0+$, by further decomposing $\mathcal{J}_{4}(r)$ as \begin{align} \mathcal{J}_{4}(r)&=\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\big(f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)\big)\Big)\nonumber\\ &\quad +\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\big(f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)-f_{+}\big(s+r,Z^{2}_{r},-Z^{3}_{r}\big)\big)\Big)\nonumber\\ &=\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\big(f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)\big)\nonumber\\ &\quad -\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}\leq r\}}\big(f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)\big)\Big)\nonumber\\ &\quad +\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\}}\big(f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)-f_{+}\big(s+r,Z^{2}_{r},-Z^{3}_{r}\big)\big)\Big)\nonumber\\ \label{eq:DecompJ4r} &=:\mathcal{J}_{41}(r)+\mathcal{J}_{42}(r)+\mathcal{J}_{43}(r). 
\end{align} For $\mathcal{J}_{41}(r)$, by \eqref{eq:Probz}, \eqref{eq:ProcZ}, \eqref{eq:LawXXStar}, and \eqref{eq:DefEvolSytXStar}, we have \begin{align*} \mathcal{J}_{41}(r)&=\frac{1}{r}\,\mathbb{E}_{s,(i,0)}\big(f_{+}\big(s+r,X_{s+r},0\big)-f_{+}\big(s+r,X_{s+r},-v(i)r\big)\big)\\ &=\frac{1}{r}\,\mathbb{E}_{s,i}^{*}\big(f_{+}\big(s+r,X_{s+r}^{*},0\big)-f_{+}\big(s+r,X_{s+r}^{*},-v(i)r\big)\big)\\ &=\frac{1}{r}\,\mathsf{U}_{s,s+r}^{*}\big(f_{+}(s+r,\cdot,0)-f_{+}(s+r,\cdot,-v(i)r)\big)(i)\\ &=\frac{1}{r}\big(\mathsf{U}_{s,s+r}^{*}\!-\mathsf{I}\big)\big(f_{+}(s\!+\!r,\cdot,0)\!-\!f_{+}(s\!+\!r,\cdot,-v(i)r)\big)(i)\!+\!\frac{1}{r}\big(f_{+}(s\!+\!r,i,0)\!-\!f_{+}(s\!+\!r,i,-v(i)r)\big). \end{align*} By Assumption \ref{assump:GenLambda}, \eqref{eq:EvolDE}, and Lemma \ref{lem:UnifContf}, a similar argument leading to \eqref{eq:LimitJ1r2} shows that \begin{align*} \lim_{r\rightarrow 0+}\frac{1}{r}\big|\big(\mathsf{U}_{s,s+r}^{*}-\mathsf{I}\big)\big(f_{+}(s+r,\cdot,0)-f_{+}(s+r,\cdot,-v(i)r)\big)(i)\big|=0. 
\end{align*} Hence, noting that $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})=\mathscr{D}(G^{+})$, by \eqref{eq:FuntfPlus} and Corollary \ref{cor:StrongMarkovExpTildeTau}, we have \begin{align} \lim_{r\rightarrow 0+}\mathcal{J}_{41}(r)&= -\lim_{r\rightarrow 0+}\frac{1}{r}\big(f_{+}(s+r,i,-v(i)r)-f_{+}(s+r,i,0)\big)\nonumber\\ &= -\lim_{r\rightarrow 0+}\frac{1}{r}\bigg(\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{-v(i)r}^{+}},Z^{2}_{\widetilde{\tau}_{-v(i)r}^{+}}\Big)\Big)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\bigg)\nonumber\\ &= -\lim_{r\rightarrow 0+}\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\big(\mathcal{P}_{-v(i)r}^{+}\,g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)-g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\nonumber\\ \label{eq:LimitJ41r} &=v(i)\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\big(G^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big). \end{align} Next, since $\widetilde{\mathbb{P}}_{s,i,0}(\widetilde{\gamma}_{1}\leq\widetilde{\tau}_{0}^{+})=1$ for $i\in\mathbf{E}_{-}$, by Lemma \ref{lem:UnifContf} and \eqref{eq:DistFirstJumpTildeZ2}, \begin{align} \lim_{r\rightarrow 0+}\big|\mathcal{J}_{42}(r)\big|&\leq\lim_{r\rightarrow 0+}\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\gamma}_{1}\leq r\}}\big|f_{+}\big(s+r,Z^{2}_{r},0\big)-f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)\big|\Big)\nonumber\\ &\leq\lim_{r\rightarrow 0+}\frac{1}{r}\,\widetilde{\mathbb{P}}_{s,i,0}\big(\widetilde{\gamma}_{1}\leq r\big)\,\,\sup_{(j,\ell)\in\mathbf{E}\times[0,-v(i)r]}\big|f_{+}(s+r,j,0)-f_{+}(s+r,j,\ell)\big|\nonumber\\ \label{eq:LimitJ42r} &\leq K\,\lim_{r\rightarrow 0+}\sup_{(j,\ell)\in\mathbf{E}\times[0,-v(i)r]}\big|f_{+}(s+r,j,0)-f_{+}(s+r,j,\ell)\big|=0. 
\end{align} As for $\mathcal{J}_{43}(r)$, since $Z^{2}_{u}=Z^{2}_{0}$ for all $u\in[0,r]$ on $\{\widetilde{\gamma}_{1}>r\}$, it follows from \eqref{eq:DistTildeZ3IntTildeZ2} that $Z^{3}_{r}=\int_{0}^{r}v(Z^{2}_{u})du=v(i)r$, $\widetilde{\mathbb{P}}_{s,i,0}-$a.s. on $\{\widetilde{\gamma}_{1}>r\}$, and thus \begin{align*} \1_{\{\widetilde{\gamma}_{1}>r\}}\big(f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)-f_{+}\big(s+r,Z^{2}_{r},-Z^{3}_{r}\big)\big)=0,\quad\widetilde{\mathbb{P}}_{s,i,0}-\text{a.s.} \end{align*} Hence, by Lemma \ref{lem:UnifContf} and \eqref{eq:DistFirstJumpTildeZ2}, we have \begin{align} \lim_{r\rightarrow 0+}\big|\mathcal{J}_{43}(r)\big|&=\lim_{r\rightarrow 0+}\frac{1}{r}\Big|\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\geq\widetilde{\gamma}_{1}\}}\big(f_{+}\big(s+r,Z^{2}_{r},-v(i)r\big)-f_{+}\big(s+r,Z^{2}_{r},-Z^{3}_{r}\big)\big)\Big)\Big|\nonumber\\ &\leq\lim_{r\rightarrow 0+}\frac{1}{r}\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\1_{\{\widetilde{\tau}_{0}^{+}>r\geq\widetilde{\gamma}_{1}\}}\sup_{j\in\mathbf{E},\,\ell_{1},\ell_{2}\in[0,\overline{v}r]}\big|f_{+}(s+r,j,\ell_{1})-f_{+}(s+r,j,\ell_{2})\big|\Big)\nonumber\\ &\leq\lim_{r\rightarrow 0+}\frac{1}{r}\,\widetilde{\mathbb{P}}_{s,i,0}\big(\widetilde{\gamma}_{1}\leq r\big)\sup_{j\in\mathbf{E},\,\ell_{1},\ell_{2}\in[0,\overline{v}r]}\big|f_{+}(s+r,j,\ell_{1})-f_{+}(s+r,j,\ell_{2})\big|\nonumber\\ \label{eq:LimitJ43r} &\leq K\,\lim_{r\rightarrow 0+}\sup_{j\in\mathbf{E},\,\ell_{1},\ell_{2}\in[0,\overline{v}r]}\big|f_{+}(s+r,j,\ell_{1})-f_{+}(s+r,j,\ell_{2})\big|=0. \end{align} Combining \eqref{eq:DecompJ4r}$-$\eqref{eq:LimitJ43r}, we obtain that \begin{align}\label{eq:LimitJ4r} \lim_{r\rightarrow 0+}\mathcal{J}_{4}(r)=v(i)\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\big(G^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big). 
\end{align} Finally, in view of \eqref{eq:DefJPlusTildeP}, \eqref{eq:DecompPartialJPlus}, \eqref{eq:LimitJ1r}, \eqref{eq:LimitJ3r}, \eqref{eq:LimitJ2r}, and \eqref{eq:LimitJ4r}, for any $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$ and $(s,i)\in\mathscr{X}_{-}$, we get \begin{align} \frac{\partial_{+}}{\partial s}\big(J^{+}g^{+}\big)(s,i)&=\lim_{r\rightarrow 0+}\frac{1}{r}\bigg(\widetilde{\mathbb{E}}_{s+r,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\bigg)\nonumber\\ \label{eq:WHPlusDPoint} &=v(i)\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\big(G^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)-\sum_{j\in\mathbf{E}}\mathsf{\Lambda}_{s}(i,j)\,\widetilde{\mathbb{E}}_{s,j,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big). \end{align} Moreover, by \eqref{eq:Trivfplus}, \eqref{eq:JPlusfPlus}, and the definitions of $\widetilde{\mathsf{C}}$ and $\widetilde{\mathsf{D}}$ (cf. 
the end of Section \ref{subsec:Notations}), \begin{align}\label{eq:WHPlusDPoint2} &v(i)\,\widetilde{\mathbb{E}}_{s,i,0}\Big(\big(G^{+}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)-\sum_{j\in\mathbf{E}}\mathsf{\Lambda}_{s}(i,j)\,\widetilde{\mathbb{E}}_{s,j,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{0}^{+}},Z^{2}_{\widetilde{\tau}_{0}^{+}}\Big)\Big)\nonumber\\ &\quad =v(i)\big(J^{+}G^{+}g^{+}\big)(s,i)-\sum_{j\in\mathbf{E}_{+}}\mathsf{\Lambda}_{s}(i,j)g^{+}(s,j)-\sum_{j\in\mathbf{E}_{-}}\mathsf{\Lambda}_{s}(i,j)\big(J^{+}g^{+}\big)(s,j)\nonumber\\ &\quad =v(i)\big(J^{+}G^{+}g^{+}\big)(s,i)-\big(\widetilde{\mathsf{C}}\,g^{+}\big)(s,i)-\big(\widetilde{\mathsf{D}}J^{+}g^{+}\big)(s,i). \end{align} Putting together \eqref{eq:WHPlusDPoint} and \eqref{eq:WHPlusDPoint2}, we deduce \eqref{eq:WHplusDRightDev}, which completes the proof. \subsection{Uniqueness of the Wiener-Hopf factorization}\label{sec:UniqProof} In this section we prove the ``+" part of Theorem \ref{thm:WHProbInterpr}. Specifically, we will show that, if $(S^{+},H^{+})$ solves \eqref{eq:WHPlus} subject to $(a^{+})$ and $(b^{+})$, then, for any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, $S^{+}g^{+}=J^{+}g^{+}$ and $\mathcal{Q}_{\ell}^{+}g^{+}=\mathcal{P}_{\ell}^{+}g^{+}$, $\ell\in\mathbb{R}_{+}$. This also guarantees the uniqueness of $G^{+}$, since two strongly continuous contraction semigroups coincide if and only if their generators coincide (cf. \cite[Theorem 1.2]{Dynkin1965}). Throughout this subsection, we assume that $(S^{+},H^{+})$ satisfies \eqref{eq:WHPlus} (or equivalently, \eqref{eq:WHPlus1} and \eqref{eq:WHPlus2}) and the conditions $(a^{+})$ and $(b^{+})$. To begin with, we establish a sufficient condition for the desired identities. 
For any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, $(s,i,a)\in\mathscr{Z}$, and $\ell\in[a,\infty)$, we define \begin{align}\label{eq:FhatPlus} \widehat{F}_{+}(s,i,a,\ell;g^{+})&:=\left(\begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix}\mathcal{Q}^{+}_{\ell-a}g^{+}\right)(s,i),\\ \label{eq:FPlus} F_{+}(s,i,a,\ell;g^{+})&:=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}_{\ell-a}^{+}},Z^{2}_{\widetilde{\tau}_{\ell-a}^{+}}\Big)\Big). \end{align} When no confusion arises, we will omit $g^{+}$ in $\widehat{F}_{+}(s,i,a,\ell;g^{+})$ and $F_{+}(s,i,a,\ell;g^{+})$. \begin{proposition}\label{prop:SuffCondUniq} Suppose that \begin{align}\label{eq:FhatFPlus} \widehat{F}_{+}(s,i,0,\ell;g^{+})=F_{+}(s,i,0,\ell;g^{+}),\quad (s,i)\in\mathscr{X},\quad\ell\in\mathbb{R}_{+},\quad g^{+}\in C_{c}^{1}(\overline{\mathscr{X}_{+}}). \end{align} Then, for any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, \begin{align*} S^{+}g^{+}=J^{+}g^{+},\quad\mathcal{Q}^{+}_{\ell}g^{+}=\mathcal{P}^{+}_{\ell}g^{+},\quad\ell\in\mathbb{R}_{+}. \end{align*} \end{proposition} \begin{proof} Let $g^{+}\in C^{1}_{c}(\overline{\mathscr{X}_{+}})$. By Corollary \ref{cor:StrongMarkovExpTildeTau}, for any $(s,i)\in\mathscr{X}$ and $\ell\in\mathbb{R}_{+}$, \begin{align*} F_{+}(s,i,0,\ell;g^{+})&=\widetilde{\mathbb{E}}_{s,i,0}\Big(\big(\mathcal{P}^{+}_{\ell}g^{+}\big)\Big(Z^{1}_{\widetilde{\tau}^{+}_{0}},Z^{2}_{\widetilde{\tau}^{+}_{0}}\Big)\Big)=\left(\begin{pmatrix} I^{+} \\ J^{+} \end{pmatrix}\mathcal{P}^{+}_{\ell}g^{+}\right)(s,i). \end{align*} This, together with \eqref{eq:FhatFPlus}, implies that, for any $\ell\in\mathbb{R}_{+}$, \begin{align}\label{eq:QPellPlusCc} \mathcal{Q}^{+}_{\ell}g^{+}=\mathcal{P}^{+}_{\ell}g^{+}{,\quad S^{+}\mathcal{Q}^{+}_{\ell}g^{+}=J^{+}\mathcal{P}^{+}_{\ell}g^{+},} \end{align} and thus $S^{+}\mathcal{P}^{+}_{\ell}g^{+}=J^{+}\mathcal{P}^{+}_{\ell}g^{+}$. 
Since $(\mathcal{P}_{\ell}^{+})_{\ell\in\mathbb{R}_{+}}$ is a strongly continuous semigroup, and $S^{+}$ and $J^{+}$ are bounded operators, we have \begin{align}\label{eq:SJPlusCc} S^{+}g^{+}=\lim_{\ell\rightarrow 0+}S^{+}\mathcal{P}^{+}_{\ell}g^{+}=\lim_{\ell\rightarrow 0+}J^{+}\mathcal{P}^{+}_{\ell}g^{+}=J^{+}g^{+}, \end{align} so that $S^+g^+ = J^+g^+$. Alternatively, this equality can be obtained by letting $\ell=0$ in \eqref{eq:FhatFPlus}. Finally, since $C^{1}_{c}(\overline{\mathscr{X}_{+}})$ is dense in $C_{0}(\overline{\mathscr{X}_{+}})$, and $S^{+}$, $J^{+}$, and $\mathcal{P}_{\ell}^{+}$ are bounded operators, both \eqref{eq:QPellPlusCc} and \eqref{eq:SJPlusCc} hold true for any $g^{+}\in C_{0}(\overline{\mathscr{X}_{+}})$, which completes the proof of the proposition. \end{proof} Proposition \ref{prop:SuffCondUniq} states that if \eqref{eq:FhatFPlus} is satisfied, then the ``+" part of Theorem \ref{thm:WHProbInterpr} holds true. Thus, to conclude the proof of the ``+" part of Theorem \ref{thm:WHProbInterpr}, and therefore the proof of uniqueness of our Wiener-Hopf factorization, it remains to prove that \eqref{eq:FhatFPlus} holds. The rest of this section is devoted to this task. We need the following four technical lemmas, whose proofs are deferred to Appendix~\ref{sec:ProofLemUniq}. \begin{lemma}\label{lem:DiffQell} For any $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})$ and $(s,i)\in\mathscr{X}_{-}$, $(S^{+}\mathcal{Q}^{+}_{\cdot}g^{+})(s,i)$ is differentiable on $\mathbb{R}_{+}$, and \begin{align*} \frac{\partial}{\partial\ell}\big(\big(S^{+}\mathcal{Q}^{+}_{\ell}g^{+}\big)(s,i)\big)=\big(S^{+}H^{+}\mathcal{Q}^{+}_{\ell}g^{+}\big)(s,i). \end{align*} \end{lemma} Let $C_{0}(\overline{\mathscr{Z}})$ be the space of real-valued $\mathcal{B}(\overline{\mathscr{Z}})$-measurable functions $h$ on $\mathscr{Z}$ such that $h(\infty,\partial,\infty)=0$, and that $h(\cdot,i,\cdot)\in C_{0}(\mathbb{R}_{+}\times\mathbb{R})$ for all $i\in\mathbf{E}$. 
Let $C_{0}^{1}(\overline{\mathscr{Z}})$ be the space of functions $h\in C_{0}(\overline{\mathscr{Z}})$ such that, for all $i\in\mathbf{E}$, $\partial h(\cdot,i,\cdot)/\partial s$ and $\partial h(\cdot,i,\cdot)/\partial a$ exist and belong to $C_{0}(\mathbb{R}_{+}\times\mathbb{R})$. \begin{lemma}\label{lem:GenTildeM} Let $\mathcal{A}$ be the strong generator of the Feller semigroup associated with the Markov family $\widetilde{\mathcal{M}}$. Then, $C_{0}^{1}(\overline{\mathscr{Z}})\subset\mathscr{D}(\mathcal{A})$, and for any $h\in C_{0}^{1}(\overline{\mathscr{Z}})$, \begin{align*} (\mathcal{A}\,h)(s,i,a)=\frac{\partial h}{\partial s}(s,i,a) + \sum_{j\in\mathbf{E}}\mathsf{\Lambda}_{s}(i,j)h(s,j,a)+v(i)\frac{\partial h}{\partial a}(s,i,a),\quad (s,i,a)\in\mathscr{Z}. \end{align*} \end{lemma} \begin{lemma}\label{lem:QellCompgPlus} For any $g^{+}\in C_{c}(\overline{\mathscr{X}_{+}})$ with $\supp g^{+}\subset[0,\eta_{g^{+}}]\times\mathbf{E}_{+}$ for some $\eta_{g^{+}}\in(0,\infty)$, we have $\supp\mathcal{Q}^{+}_{\ell}g^{+}\subset[0,\eta_{g^{+}}]\times\mathbf{E}_{+}$, for any $\ell\in\mathbb{R}_{+}$. \end{lemma} For any $a\in\mathbb{R}$, let $C_{0}(\mathscr{X}\times (-\infty,a])$ be the space of real-valued $\mathcal{B}(\mathscr{X})\otimes\mathcal{B}((-\infty,a])$-measurable functions $h$ on $\mathscr{X}\times(-\infty,a]$ such that $h(\cdot,i,\cdot)\in C_{0}(\mathbb{R}_{+}\times (-\infty,a])$ for all $i\in\mathbf{E}$. Let $C_{0}^{1}(\mathscr{X}\times (-\infty,a])$ be the space of functions $h\in C_{0}(\mathscr{X}\times (-\infty,a])$ such that, for all $i\in\mathbf{E}$, $\partial h(\cdot,i,\cdot)/\partial s$ and $\partial h(\cdot,i,\cdot)/\partial a$ exist and belong to $C_{0}(\mathbb{R}_{+}\times (-\infty,a])$. \begin{lemma}\label{lem:FhatPlusC01} For any $\ell\in\mathbb{R}$ and $g^{+}\in C_{c}^{1}(\overline{\mathscr{X}_{+}})$, $\widehat{F}_{+}(\cdot,\cdot,\cdot,\ell;g^{+})\in C_{0}^{1}(\mathscr{X}\times (-\infty,\ell])$. 
\end{lemma} We are now in a position to prove \eqref{eq:FhatFPlus}. In what follows, we fix $g^{+}\in C_{c}^{1}(\overline{\mathscr{X}_{+}})$ with $\supp g^{+}\subset[0,\eta_{g^{+}}]\times\mathbf{E}_{+}$ for some $\eta_{g^{+}}\in(0,\infty)$. We first show that, for any $(s,i)\in\mathscr{X}$, $\ell\in\mathbb{R}_{+}$, and $T\in(0,\infty)$, \begin{align}\label{eq:FhatPlusMartingale} \widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}\wedge T},Z^{2}_{\widetilde{\tau}^{+}_{\ell}\wedge T},Z^{3}_{\widetilde{\tau}^{+}_{\ell}\wedge T},\ell\Big)\Big)=\widehat{F}_{+}(s,i,0,\ell). \end{align} Let $\phi\in C^{1}(\mathbb{R})$ with $\phi(a)=1$ for $a\in(-\infty,\ell]$ and $\lim_{a\rightarrow\infty}\phi(a)=0$. We extend $\widehat{F}_+(\cdot,\cdot,\cdot,\ell)$ to be a function on $\mathscr{Z}$ by defining \begin{align*} \widehat{F}_{+}(s,i,a,\ell):=\phi(a)\big(2\widehat{F}_{+}(s,i,\ell,\ell)-\widehat{F}_{+}(s,i,2\ell-a,\ell)\big),\quad (s,i)\in\mathscr{X},a\in(\ell,\infty). \end{align*} By Lemma \ref{lem:FhatPlusC01}, we now have $\widehat{F}_{+}(\cdot,\cdot,\cdot,\ell)\in C_{0}^{1}(\overline{\mathscr{Z}})$ (with the convention that $\widehat{F}_{+}(\infty,\partial,\infty,\ell)=0$). 
It follows from Lemma \ref{lem:GenTildeM}, \eqref{eq:FhatPlus}, \eqref{eq:DevQella}, and Lemma \ref{lem:DiffQell} that, for any $(s,i)\in\mathscr{X}$ and $a\in(-\infty,\ell)$, \begin{align*} \big(\mathcal{A}\widehat{F}_{+}\big)(s,i,a,\ell)&=\frac{\partial\widehat{F}_{+}}{\partial s}(s,i,a,\ell)+\sum_{j\in\mathbf{E}}\mathsf{\Lambda}_{s}(i,j)\widehat{F}_{+}(s,j,a,\ell)+v(i)\frac{\partial\widehat{F}_{+}}{\partial a}(s,i,a,\ell)\\ &=\Bigg(\!\frac{\partial}{\partial s}\!\begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix}\!\mathcal{Q}^{+}_{\ell-a}g^{+}\!\!\Bigg)(s,i)+\!\Bigg(\!\widetilde{\mathsf{\Lambda}} \begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix}\!\mathcal{Q}^{+}_{\ell-a}g^{+}\!\!\Bigg)(s,i)+\!\Bigg(\!\mathsf{V}\frac{\partial}{\partial a}\!\begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix}\!\mathcal{Q}^{+}_{\ell-a}g^{+}\!\!\Bigg)(s,i)\\ &=\left(\Bigg(\bigg(\frac{\partial}{\partial s}+\widetilde{\mathsf{\Lambda}}\bigg) \begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix} -\mathsf{V} \begin{pmatrix} I^{+} \\ S^{+} \end{pmatrix} H^{+}\Bigg)\mathcal{Q}^{+}_{\ell-a}g^{+}\right)(s,i), \end{align*} where we note that $\mathcal{Q}^{+}_{\ell-a}g^{+}\in\mathscr{D}(H^{+})$ since $g^{+}\in C_{0}^{1}(\overline{\mathscr{X}_{+}})=\mathscr{D}(H^{+})$. Hence, since $(S^{+},H^{+})$ solves \eqref{eq:WHPlus}, we have \begin{align*} \big(\mathcal{A}\widehat{F}_{+}\big)(s,i,a,\ell)=0,\quad (s,i)\in\mathscr{X},\quad a\in(-\infty,\ell). \end{align*} Therefore, by Dynkin's formula (cf. 
\cite[III.10]{RogersWilliams1994}), we obtain that \begin{align*} \widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}\wedge T},Z^{2}_{\widetilde{\tau}^{+}_{\ell}\wedge T},Z^{3}_{\widetilde{\tau}^{+}_{\ell}\wedge T},\ell\Big)\Big)-\widehat{F}_{+}(s,i,0,\ell)=\widetilde{\mathbb{E}}_{s,i,0}\bigg(\int_{0}^{\widetilde{\tau}^{+}_{\ell}\wedge T}\!\!\big(\mathcal{A}\widehat{F}_{+}\big)\big(Z^{1}_{t},Z^{2}_{t},Z^{3}_{t},\ell\big)dt\bigg)=0, \end{align*} which completes the proof of \eqref{eq:FhatPlusMartingale}. By \eqref{eq:FhatPlusMartingale}, we have \begin{align*} \widehat{F}_{+}(s,i,0,\ell)=\widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}},Z^{3}_{\widetilde{\tau}^{+}_{\ell}},\ell\Big)\1_{\{\widetilde{\tau}^{+}_{\ell}<T\}}\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\big(Z^{1}_{T},Z^{2}_{T},Z^{3}_{T},\ell\big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}\Big). \end{align*} From the definition of $\widetilde{\tau}^{+}_{\ell}$ and the right-continuity of the sample paths of $Z$, we have $Z^{3}_{\widetilde{\tau}^{+}_{\ell}}=\ell$ on $\{\widetilde{\tau}^{+}_{\ell}<T\}$. Moreover, it is clear from the construction of $\widetilde{\mathcal{M}}$ that $Z_{t}^{2}\in\mathbf{E}$ for $t\in\mathbb{R}_{+}$, and, in view of \eqref{eq:RangeZ2TildeTauPlus}, we deduce that $Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\in\mathbf{E}_{+}$ on $\{\widetilde{\tau}^{+}_{\ell}<T\}$. 
Together with \eqref{eq:FhatPlus}, \eqref{eq:FPlus}, and \eqref{eq:ProcZ}, we obtain that \begin{align} \widehat{F}_{+}(s,i,0,\ell)&=\widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}},\ell,\ell\Big)\1_{\{\widetilde{\tau}^{+}_{\ell}<T\}}\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\big(Z^{1}_{T},Z^{2}_{T},Z^{3}_{T},\ell\big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}\Big)\nonumber\\ &=\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\1_{\{\widetilde{\tau}^{+}_{\ell}<T\}}\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\big(Z^{1}_{T},Z^{2}_{T},Z^{3}_{T},\ell\big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}\Big)\nonumber\\ &=F_{+}(s,i,0,\ell)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(Z^{1}_{\widetilde{\tau}^{+}_{\ell}},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}\Big)+\widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\big(Z^{1}_{T},Z^{2}_{T},Z^{3}_{T},\ell\big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}\Big)\nonumber\\ &=F_{+}(s,i,0,\ell)-\widetilde{\mathbb{E}}_{s,i,0}\Big(g^{+}\Big(s+\widetilde{\tau}^{+}_{\ell},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}\Big)\nonumber\\ \label{eq:FhatFPlusDiff} &\quad +\widetilde{\mathbb{E}}_{s,i,0}\Big(\widehat{F}_{+}\big(s+T,Z^{2}_{T},Z^{3}_{T},\ell\big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}\Big). \end{align} Therefore, in order to prove \eqref{eq:FhatFPlus}, it remains to show that the last two terms in \eqref{eq:FhatFPlusDiff} vanish. 
Since $g^{+}\in C_{c}(\overline{\mathscr{X}_{+}})$ (and so $g^{+}(Z^{1}_{\infty},Z^{2}_{\infty})=g^{+}(\infty,\partial)=0$) with $\supp g^{+}\subset[0,\eta_{g^{+}}]\times\mathbf{E}_{+}$, and using the fact that $Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\in\mathbf{E}_{+}$ on $\{\widetilde{\tau}^{+}_{\ell}<\infty\}$, we have, for $T\in[\eta_{g^{+}}-s,\infty)$, \begin{align*} g^{+}\Big(s+\widetilde{\tau}^{+}_{\ell},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\1_{\{\widetilde{\tau}^{+}_{\ell}\geq T\}}=g^{+}\Big(s+\widetilde{\tau}^{+}_{\ell},Z^{2}_{\widetilde{\tau}^{+}_{\ell}}\Big)\1_{\{\widetilde{\tau}^{+}_{\ell}\in[T,\infty)\}}=0. \end{align*} Hence, the second term in \eqref{eq:FhatFPlusDiff} vanishes when $T\in[\eta_{g^{+}}-s,\infty)$. As for the last term in \eqref{eq:FhatFPlusDiff}, since $\supp g^{+}\subset[0,\eta_{g^{+}}]\times\mathbf{E}_{+}$, Lemma \ref{lem:QellCompgPlus} and the condition $(a^{+})(\text{i})$ ensure that $\supp\widehat{F}_{+}(\cdot,\cdot,\cdot,\ell)\subset[0,\eta_{g^{+}}]\times\mathbf{E}\times[0,\ell]$. Hence, when $T\in[\eta_{g^{+}}-s,\infty)$, $\widehat{F}_{+}(s+T,Z^{2}_{T},Z^{3}_{T},\ell)=0$, so that the last term in \eqref{eq:FhatFPlusDiff} vanishes. Therefore, by choosing $T\in[\eta_{g^{+}}-s,\infty)$, we obtain \eqref{eq:FhatFPlus} from \eqref{eq:FhatFPlusDiff}. The proof of the ``+" part of Theorem \ref{thm:WHProbInterpr} is complete. As mentioned earlier, the proof of the ``-" part of Theorem \ref{thm:WHProbInterpr} proceeds in direct analogy to the ``+" part given above.
{ "timestamp": "2019-03-01T02:06:54", "yymm": "1902", "arxiv_id": "1902.10850", "language": "en", "url": "https://arxiv.org/abs/1902.10850" }
\section{Introduction} \label{section1} The Joint Replenishment Problem (JRP) occurs when several items are ordered from the same supplier, several products share the same means of transportation, or several products are processed on the same piece of equipment \citep{salamehetal2014}. Every time an order is placed, the group fixed ordering cost is incurred regardless of the number of items replenished; in addition, item-specific fixed and variable ordering costs are charged whenever an item is included in a replenishment order. The goal of the JRP is to determine the optimal inventory replenishment plan that minimises the cost of replenishing multiple items. The literature on the JRP can be roughly categorised into deterministic and stochastic streams based on the nature of demand. In the deterministic joint replenishment inventory system, the demand for each individual item is known and constant over an infinite time horizon, and replenishments are made at equally spaced time intervals; the problem is to determine the length of replenishment cycles and the frequency of replenishing individual items, e.g., \citep{goyalandBelton1979, kaspiandRosenblatt1991,viswanathan1996,wildemanetal1997,hariga1994,goyal1993,boctoretal2004,nilssonetal2007}. In the stochastic joint replenishment inventory system, the demand for individual items is unknown but follows a known distribution; the problem is to decide the optimal parameters of a given inventory policy, e.g., \citep{balintfy1964, atkinsandIyogun1988, renbergandPlanche1967, kalpakamandArivarignan1993, viswanathan1997, nielsenandLarsen2005, ozkayaetal2006}. Most of the literature still addresses constant and dynamic deterministic demands; however, the study of stochastic demand has received increasing attention due to its practical relevance \citep{bastosetal2017}. This work belongs to the growing literature on stochastic joint replenishment. 
This paper applies the static-dynamic strategy, proposed by \cite{bt1988} for tackling single-item lot-sizing problems, in the context of a JRP system. The static-dynamic strategy, known as $(R, S)$, features two control parameters: $R$, the timing of replenishments, and $S$, the order-up-to-position. At each review period, the decision maker places an order so as to increase the inventory position (net inventory level + outstanding orders) to a given order-up-to-position. In the context of the JRP system, a periodic-review $(R, S)$ policy is adopted for each item. The $(R, S)$ policy is an appealing strategy since it eases the coordination between supply chain players \citep{kilicandtarim2011} and facilitates managing joint replenishments \citep{silveretal1998}. Our goal is to tackle the periodic-review stochastic JRP under the $(R, S)$ policy. We first present a mixed-integer linear programming (MILP) model for computing optimal policy parameters that minimise the expected total cost, comprising group fixed ordering costs, item-specific fixed ordering costs, holding costs, and penalty costs, over the planning horizon. Our model generalises \cite{rossietal2015}, which discussed an MILP model for approximating optimal $(R, S)$ policy parameters for single-item lot-sizing problems. We further show that our MILP model can be used to approximate the optimal $(\sigma, \vec{S})$ policies, which are known to be optimal for this class of problems \citep{liuandesogbue2012}. Under this policy, decision makers order up to $\vec{S}$ if the opening inventory positions fall in $\sigma$ ($\sigma \subset \mathcal{R}^N$, $\vec{S} \in \mathcal{R}^N$, where $N$ represents the number of items) at the beginning of each time period. Numerical experiments illustrate the effectiveness of our models. We contribute to the literature on the stochastic JRP as follows. \begin{itemize} \item We present an MILP model for tackling the nonstationary stochastic JRP under the $(R, S)$ policy. 
\item We demonstrate that the MILP model can be used to approximate $(\sigma, \vec{S})$ policies. \item In an extensive computational study based on existing test beds drawn from the literature, we demonstrate the effectiveness of our models when compared with other competing approaches in the literature. \end{itemize} The rest of this paper is organised as follows. Section \ref{literaturereview} surveys relevant literature. Section \ref{problemdescription} describes the problem setting. Section \ref{milp} presents an MILP model for computing $(R, S)$ policy parameters. Section \ref{kconvexity} extends the MILP model for approximating the optimal $(\sigma, \vec{S})$ policy parameters. An extensive computational study is conducted in Section \ref{computationalstudy}. We draw conclusions in Section \ref{conclusion}. \section{Literature review}\label{literaturereview} The problem of controlling the inventory of a multi-item system under joint replenishment has received increasing attention over the past several decades. For a thorough review of the literature, readers may refer to \citep{silverandPeterson1985,goyalandSatir1989,vanetal1992, khoujaandGoyal2008, bastosetal2017}. In this section, we focus our attention on existing policies for tackling stochastic JRPs. In particular, we survey control policies that have been considered in the literature. {\bf $(\sigma, \vec{S})$ policy.} Since the landmark study of \cite{Scarf1960}, which proved optimality for the single-item inventory problem, there have been several attempts to establish optimal policies for multi-item inventory systems. \cite{johnson1967} proved that the optimal policy in the stationary case is a $(\sigma, \vec{S})$ policy, where $\sigma \subset \mathcal{R}^N$ and $\vec{S} \in \mathcal{R}^N$: one orders up to $\vec{S}$ if the inventory levels satisfy $\vec{I} \in \sigma$ and $\vec{I} \leq \vec{S}$, and one does not order if $\vec{I} \notin \sigma$. 
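For illustration, the ordering rule of a $(\sigma, \vec{S})$ policy can be sketched as follows (a minimal Python fragment; the function name and the representation of the ordering region $\sigma$ as a predicate on the inventory vector are our own illustrative choices, not part of the cited works):

```python
def sigma_S_decision(inv, in_sigma, S):
    """(sigma, S) policy: if the opening inventory vector lies in the
    ordering region sigma, order up to S componentwise; otherwise
    place no order."""
    if in_sigma(inv):
        return [max(s - x, 0) for s, x in zip(S, inv)]
    return [0] * len(inv)
```

For example, with the (hypothetical) region $\sigma = \{\vec{I} : \min_n I^n \leq 2\}$, `sigma_S_decision([1, 5], lambda v: any(x <= 2 for x in v), [10, 8])` raises both items up to their targets, while `[4, 5]` triggers no order.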
\cite{kalin1980} showed that when $\vec{I} \in \sigma$ and $\vec{I} \not\leq \vec{S}$, there exists $\vec{S}(\vec{I}) \geq \vec{I}$ such that the optimal policy is to order up to $\vec{S}(\vec{I})$; this policy is named the $(\sigma, \vec{S}(\cdot))$ policy. \cite{ohnoandishigaki2001} proved the optimality of the $(\sigma, \vec{S}(\cdot))$ policy for continuous-time inventory problems with compound Poisson demands. \cite{gallegoandsethi2005} gave a general definition of $K$-convexity in $\mathcal{R}^N$, which encompasses both the joint ordering and the individual ordering case. {\bf $(s, c, S)$ policy.} Several works on stochastic JRPs have focused on computing $(s, c, S)$ policies, introduced by \cite{balintfy1964}. This policy features three control parameters: $s$, the reorder point; $c$, the can-order level; and $S$, the order-up-to-position. Under this policy, decision makers order up to $S$ either when, at a demand epoch, the inventory position drops to or below $s$, or when, at a special replenishment opportunity, the inventory position is at or below $c$. Under the assumption of Poisson-distributed demands, \cite{ignall1969} proved that the $(s, c, S)$ policy is not optimal even for two-item problems. \cite{silver1974} proposed a decomposition method to compute $(s, c, S)$ policy parameters, in which the multi-item problem is decomposed into several single-item problems. This approximation technique was followed by \citep{melchiors2002, johansenandMelchiors2003}. \cite{kayisetal2008} modelled the two-item JRP as a semi-Markov decision model, and proposed an enumerative approach to approximate $(s, c, S)$ policies. In addition, \citep{schaackandSilver1972, thompstoneandSilver1975, silver1981, federgruenetal1984} studied JRPs with compound Poisson-distributed demands. 
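The can-order rule described above can be sketched as follows (an illustrative Python fragment; the function name and argument names are our own, and the `joint_opportunity` flag stands for a replenishment opportunity triggered by another item):

```python
def s_c_S_decision(inv_pos, s, c, S, joint_opportunity):
    """(s, c, S) can-order policy: order up to S when the inventory
    position drops to or below the reorder point s, or when a joint
    replenishment opportunity arises and the position is at or below
    the can-order level c (with s <= c <= S)."""
    if inv_pos <= s or (joint_opportunity and inv_pos <= c):
        return S - inv_pos
    return 0
```

With $s=5$, $c=8$, $S=20$, an item at position $7$ is not reordered on its own, but joins an order placed for another item.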
{\bf $(R, T)$ policy.} \cite{atkinsandIyogun1988} proposed two periodic-review $(R, T)$-type policies, namely the periodic policy $P$ and the modified periodic policy $MP$, which differ only in the way the ordering periods $T_i$ are determined. Under this policy, every $T_i$ periods, the inventory position of item $i$ is raised to $R_i$. Numerical experiments demonstrate that the $MP$ policy performs consistently better than the $(s, c, S)$ policy, and the $P$ policy generally outperforms the $(s, c, S)$ policy except for problems involving small values of the group fixed ordering cost. {\bf $(Q, S)$ policy.} This policy was first proposed by \cite{renbergandPlanche1967}. Under this policy, whenever the total inventory position drops to the group reorder point, an order is placed to raise the inventory position of each item to its item-specific order-up-to-position $S$. The combined order quantity is $Q$, and the group reorder point is reached when the combined usage reaches $Q$. \cite{pantumsinchai1992} evaluated the computational performance of the $(Q, S)$ policy by comparing it against the $(s, c, S)$ policy, the $P$ policy, and the $MP$ policy on the basis of long-run total average costs. Computational experiments showed that the $MP$ policy consistently outperforms the $(s, c, S)$ policy on the test instances, and both the $MP$ and $(Q, S)$ policies perform better as the group ordering cost increases. The study showed that the $(Q, S)$ policy is appropriate for items for which the stock-out costs are low and the major set-up cost is high relative to the minor set-up cost. {\bf $P(s, S)$ policy.} This policy was proposed by \cite{viswanathan1997} for periodic-review inventory systems, in which the inventory position of each item is reviewed at a fixed and constant time interval. At each review time, an $(s, S)$ policy is applied to each item, so that any item with inventory position at or below $s$ is ordered up to $S$. 
For a fixed review period, the algorithm of \cite{zhengandFedergruen1991} is adopted to compute the optimal $(s, S)$ policy parameters. Computational studies indicated that although the proposed policy requires more computational effort, it generally dominates the $MP$ policy, and dominates the $(s, c, S)$ and $(Q, S)$ policies for most test instances. {\bf $Q(s, S)$ policy.} \cite{nielsenandLarsen2005} combined features of the $(Q, S)$ policy and the $P(s, S)$ policy, and proposed the $Q(s, S)$ policy. Under this policy, the total inventory position is continuously reviewed, while the item-specific inventory positions are reviewed only when the total consumption since the last order reaches $Q$. Then every item with inventory position less than or equal to its respective reorder point $s$ is ordered up to $S$. An analytic solution is derived using Markov decision theory in \cite{nielsenandLarsen2005}. A computational study demonstrated that the $Q(s, S)$ policy outperforms the $P(s, S)$ policy, and dominates the $(Q, S)$ policy in $17$ of $18$ test instances on the data set of \cite{atkinsandIyogun1988}. {\bf $(Q, S, T)$ policy.} This continuous-review policy was proposed by \cite{ozkayaetal2006}. Decision makers raise the inventory position of each item $i$ to its order-up-to-position $S_i$ whenever a total of $Q$ demands has accumulated or $T$ time units have elapsed, whichever occurs first. This policy is a hybrid of the continuous-review $(Q, S)$ policy, proposed by \cite{renbergandPlanche1967}, and the periodic-review $(R, T)$ policy, proposed by \cite{atkinsandIyogun1988}. Thus, it features the benefits of the two separate policies. A comprehensive numerical study indicates that the proposed policy dominates the $P(s, S)$ policy, $(Q, S)$ policy, $Q(s, S)$ policy, and $(s, c, S)$ policy in $100$ of $139$ instances. {\bf $(R, S)$ policy.} This policy was proposed by \cite{bt1988} for controlling single-item inventory systems. 
The policy requires decision makers to place an order at each replenishment period to increase the inventory position to the order-up-to-position $S$. This policy has been widely studied in the stream of research on single-item lot-sizing problems. \cite{tarimandkingsman2004, tarimandkingsman2006} formulated a mixed integer programming (MIP) model for computing optimal $(R, S)$ policy parameters. \cite{tarimetal2011} relaxed the MIP model and solved it as a shortest path problem, which does not require the use of any MIP or Constraint Programming (CP) commercial solver. In addition, \cite{ozenetal2012} presented a DP-based algorithm for solving small-size problems, and an approximation heuristic and a relaxation heuristic for tackling larger-size problems; \cite{tuncetal2014} suggested a deterministic equivalent MIP model. Recently, \cite{rossietal2015} generalised the discussions above and developed a unified MILP model for approximating $(R, S)$ policies by adopting the piecewise linear approximation technique of \cite{rossietal2014}. Although various efficient modelling methods for computing $(R, S)$ policy parameters have been proposed, they generally control single-item inventory systems. The main purpose of this work is to apply the $(R, S)$ policy to the multi-item inventory system. The stochastic JRP is an open research area for the development of more efficient computational methods and control policies. In this study, we apply the periodic-review $(R, S)$ policy, originally proposed by \cite{bt1988} for tackling single-item lot-sizing problems, to JRPs with stochastic demand and fixed lead times. In the context of the JRP system, a periodic-review $(R, S)$ policy is adopted for each item. 
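The per-item ordering rule of the static-dynamic $(R, S)$ policy can be sketched as follows (an illustrative Python fragment; the function and argument names are our own, with `reviews` holding the pre-fixed replenishment periods $R$ and `order_up_to` the planned order-up-to-positions $S$):

```python
def rs_order_quantity(period, inv_position, reviews, order_up_to):
    """(R, S) static-dynamic policy: replenishment periods and
    order-up-to-positions are fixed at the start of the horizon; the
    actual order quantity is set at each review so as to raise the
    inventory position to the planned order-up-to-position."""
    if period in reviews:
        return max(order_up_to[period] - inv_position, 0)
    return 0
```

Note that the timing of orders is static (fixed in advance), while the quantities are dynamic: they depend on the inventory position observed at each review.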
Note that when the demand is stationary stochastic, the $(R, S)$ policy coincides with the $MP$ policy proposed by \cite{atkinsandIyogun1988}, under which, every $T_n$ periods, the inventory position of item $n$ is raised to the order-up-to-position $R_n$. However, the $(R, S)$ policy also deals with non-stationary stochastic demands, a case that was not addressed in \cite{atkinsandIyogun1988}. In this paper, we present an MILP approach for approximating $(R, S)$ policies under non-stationary stochastic demands. Nonlinear costs are approximated by leveraging the technique introduced in \citep{rossietal2014}. Numerical experiments investigate the effectiveness of our approach against competing policies from the literature. \section{Problem description}\label{problemdescription} Consider a periodic-review $N$-item inventory management system over a $T$-period planning horizon. We assume that the demands $d_t^n$ of item $n$, $n=1, \ldots, N$, in period $t$, $t=1, \ldots, T$, are independent random variables with known probability density functions $g_{t}^n(\cdot)$ and cumulative distribution functions $G_t^n(\cdot)$. We assume that ordering decisions are made at the beginning of each time period. There is a group fixed ordering cost $K$ and an item-specific fixed ordering cost $k^n$. The group fixed ordering cost is incurred whenever an order is placed in a given time period, no matter which and how many items are included in this order. The item-specific fixed ordering cost is incurred whenever an order for item $n$ is placed in a given time period, regardless of the order quantity. We define $Q_t^n$ as the quantity of item $n$ ordered in period $t$, which will be received after a lead time of $L^n$ periods. 
Then, the ordering cost of item $n$ in period $t$ with ordering quantity $Q_t^n$ can be written as \begin{align} &c_t^n(Q_t^n)=\begin{cases} k^n, &Q_t^n > 0,\\ 0, &Q_t^n=0.\end{cases} \end{align} Let $c_t(\vec{Q}_t)$ denote the ordering cost of period $t$ with ordering quantity vector $\vec{Q}_t = (Q_t^1, \ldots, Q_t^N)$. $c_t(\vec{Q}_t)$ has the following structure \begin{align} &c_t(\vec{Q}_t)=\begin{cases}K+ \sum_{n=1}^N c_t^n(Q_t^n), &\exists\, n: Q_t^n >0,\\ 0,& \mbox{otherwise}. \end{cases} \end{align} A penalty cost $b^n$ is incurred for each unit of item $n$ backordered per period, and a holding cost $h^n$ is charged for each unit of item $n$ carried from one period to the next. The immediate penalty and holding cost of period $t$ can be expressed as \begin{align} &L_t(\vec{y})=\sum_{n=1}^N\Big(b^n\cdot \text{E}[\max(d_t^n-y^n, 0)]+h^n\cdot \text{E}[\max(y^n-d_t^n, 0)]\Big), \end{align} where the vector $\vec{y}=(y^1, \ldots, y^N)$ is the inventory level immediately after orders are received at the beginning of period $t$, and ``$\text{E}$" denotes the expectation taken with respect to the random demand. Let $I_t^n$ denote the net inventory level of item $n$ at the end of period $t$, which is also the opening inventory level of period $t+1$, and let $C_t(\vec{I}_{t-1})$ denote the expected total cost of an optimal policy over periods $t, \ldots, T$, given the opening inventory level $\vec{I}_{t-1}=(I_{t-1}^1, \ldots, I_{t-1}^N)$ at the beginning of period $t$. Note that there is no outstanding order at the beginning of the planning horizon. 
Then, $C_t(\vec{I}_{t-1})$ can be written as \begin{align} \small C_t(\vec{I}_{t-1})=\begin{cases} \min_{\vec{Q}_t}\big\{ c_t(\vec{Q}_t)+ L_t(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}})+E[C_{t+1}(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}}-\vec{D}_t)]\big\}, & t \geq \vec{L}+1,\\ \min_{\vec{Q}_t}\big\{ c_t(\vec{Q}_t)+ L_t(\vec{I}_{t-1})+E[C_{t+1}(\vec{I}_{t-1}-\vec{D}_t)]\big\}, & \text{otherwise;} \end{cases} \end{align} where $\vec{D}_t=(d_t^1, \cdots, d_t^N)$, $\vec{L}=(L^1, \cdots, L^N)$, and \begin{align} C_T(\vec{I}_{T-1})=\begin{cases} \min_{\vec{Q}_T} \big\{c_T(\vec{Q}_T)+ L_T(\vec{I}_{T-1}+\vec{Q}_{T-\vec{L}})\big\},& T\geq \vec{L}+1,\\ \min_{\vec{Q}_T} \big\{c_T(\vec{Q}_T)+ L_T(\vec{I}_{T-1})\big\}, & \text{otherwise;} \end{cases} \end{align} represents the boundary condition. Moreover, for $t=L^n+1, \ldots, T$, let us define \begin{align} G_t(\vec{I}_{t-1})=L_t(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}})+E[C_{t+1}(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}}-\vec{D}_t)]. \end{align} {\bf Example.} We consider an instance in which the group fixed ordering cost is $K=10$, the item-specific ordering cost is $k=0$, the holding cost is $h=1$, and the stock-out penalty cost is $b=5$. We control the inventory of two items over a planning horizon of $T=4$ periods. We assume that the demand of item $n$ in period $t$ follows a Poisson distribution with rate $\lambda^n_t$, where $\lambda^1_t=\lambda^2_t=\{3,6,9,6\}$. For simplicity, we assume that the lead time is $0$ for every item. The expected total cost, i.e. $C_1(\vec{I}_0)$, of an optimal policy, given initial inventory levels $I_0^1=I_0^2=0$, can be obtained via stochastic dynamic programming (SDP) and is equal to $65.4$. In Fig. \ref{fig:example_countour_plot} we plot $G_1(\vec{I}_0)$ for $I_0^1 \in [0, 14]$ and $I_0^2 \in [0, 14]$. \begin{figure}[!htbp] \begin{center} \includegraphics[width=8.4cm]{contour_plot.pdf} \caption{Expected total cost, i.e. 
$G_1(\vec{I}_0)$, contour plot for the two-item joint replenishment numerical example} \label{fig:example_countour_plot} \end{center} \end{figure} \section{An MILP model for approximating non-stationary stochastic $(R, S)$ policies}\label{milp} In this section, we formulate the stochastic JRP under the $(R, S)$ policy as an MILP model. Under the $(R, S)$ policy, the replenishment periods and the associated order-up-to-positions are fixed at the beginning of the planning horizon, while the actual order quantities are decided at the beginning of each replenishment period. Note that in the context of the JRP, a periodic-review $(R, S)$ policy is adopted for each item. We first introduce a stochastic programming formulation in Section \ref{stochasticprogramming} and then reformulate it as an MILP model in Section \ref{MINLPformulation}. \subsection{A stochastic program} \label{stochasticprogramming} Consider the periodic-review $N$-item $T$-period JRP described in Section \ref{problemdescription}. We introduce binary variables $\delta_t$ and $y_t^n$, $t=1, \ldots, T$, $n=1, \ldots, N$: $\delta_t$ takes value $1$ if a group order is placed in period $t$, no matter how many items are involved, and $0$ otherwise; $y_t^n$ is set to $1$ if item $n$ is replenished in period $t$. We further assume that the system is forced to place an order in period $1$, and that all orders should be received by the end of the planning horizon. We reformulate the stochastic dynamic programming model of Section \ref{problemdescription} as the stochastic program in Fig. \ref{spformulation}. 
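The cost structure that the binary variables $\delta_t$ and $y_t^n$ encode mirrors the definition of $c_t(\vec{Q}_t)$ in Section \ref{problemdescription}, and can be sketched as follows (an illustrative Python fragment; function and argument names are our own):

```python
def period_ordering_cost(quantities, K, k):
    """Ordering cost c_t(Q_t): the group fixed cost K is charged once
    if any item is ordered in the period (delta_t = 1); the
    item-specific fixed cost k[n] is charged for each item n actually
    ordered (y_t^n = 1)."""
    if all(q == 0 for q in quantities):
        return 0.0
    return K + sum(k[n] for n, q in enumerate(quantities) if q > 0)
```

For instance, with $K=10$ and $k=(1,2)$, ordering only the first item costs $11$, while ordering both costs $13$.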
\begin{figure}[!htbp] \tiny \begin{align} &\min \sum_{t=1}^T\Big(K\cdot\delta_t+\sum_{n=1}^N(k^n\cdot y_t^n+b^n\text{E}[\max(-I_t^n, 0)]+h^n\text{E}[\max(I_t^n, 0)])\Big)\label{sp} \end{align} Subject to, $n=1, \ldots, N$, \begin{align} &\delta_t \geq y_t^n & t=1, \ldots, T \label{sp-1}\\ &y_1^n=1 & \label{sp-1-1}\\ &I_t^n =I_0^n -\sum_{j=1}^td_j^n & t=1, \ldots, L^n \label{sp-1-2}\\ &y_t^n=0 & t=T-L^n, \ldots, T \label{sp-2-1}\\ &I_t^n = I_0^n +\sum_{i=1}^{t-L^n}Q_i^n-\sum_{j=1}^td_j^n & t= L^n+1, \ldots, T \label{sp-2-2}\\ &y_t^n =\begin{cases} 1, &Q_t^n > 0,\\ 0, &Q_t^n = 0. \end{cases}\label{sp-5}\\ &Q_t^n \geq 0 \label{sp-3}\\ &\delta_t \in \{0, 1\} \label{sp-4}\\ &I_t^n \in \mathcal{R} \label{sp-6} \end{align} \caption{Stochastic programming formulation of the JRP.} \label{spformulation} \end{figure} The objective is to find the optimal replenishment plan so as to minimise the expected ordering costs, penalty costs, and holding costs of the $N$ items over the $T$-period planning horizon. Constraints (\ref{sp-1}) imply that if at least one item is ordered, then a group replenishment is issued. Constraints (\ref{sp-1-1}) force the system to replenish every item in period $1$. Constraints (\ref{sp-1-2}) are the inventory conservation constraints for periods $1, \ldots, L^n$: the inventory level at the end of period $t$ is equal to the initial inventory level minus the total demand up to period $t$. Constraints (\ref{sp-2-1}) ensure all replenishments are received by the end of the planning horizon. Constraints (\ref{sp-2-2}) are the inventory conservation constraints for periods $1+L^n, \ldots, T$: the inventory level at the end of period $t$ is equal to the initial inventory level, plus all orders received by the end of period $t$, minus the total demand up to period $t$. Constraints (\ref{sp-5})--(\ref{sp-6}) state the domains of $y_t^n$, $Q_t^n$, $\delta_t$, and $I_t^n$. \subsection{An MILP model}\label{MINLPformulation} The stochastic programming formulation in Fig. 
\ref{spformulation} can be reformulated into an MILP model via the piecewise linear approximation approach of \citep{rossietal2014}. In the rest of this paper, let ``$\sim$" denote the expectation operator. We introduce the first order loss function \[\mathcal{L}(x,\omega)=\int_{-\infty}^{\infty}\max(t-x,0)g_{\omega}(t)\,\mathrm{d}t\] and its complementary function \[\mathcal{\hat{L}}(x,\omega)=\int_{-\infty}^{\infty}\max (x-t,0)g_{\omega}(t)\,\mathrm{d}t,\] where $\omega$ is a random variable with probability density function $g_{\omega}(\cdot)$, and $x$ is a scalar variable. Consider a partition of the support $\Omega$ of $\omega$ into $W$ disjoint subregions $\Omega_1, \ldots, \Omega_W$, the probability masses $p_i=\Pr\{\omega \in \Omega_i\}=\int_{\Omega_i}g_{\omega}(t)\,\mathrm{d}t$, and the conditional expectations $\text{E}[\omega|\Omega_i]=\frac{1}{p_i}\int_{\Omega_i}tg_{\omega}(t)\,\mathrm{d}t$, $i = 1, \ldots, W$. By applying Jensen's lower bound \footnote{Similarly, the Edmundson-Madansky upper bound can be applied for approximating the expected excess inventory and back-orders; for further details refer to \citep{rossietal2014}.}, $\mathcal{L}(x,\omega)$ and $\hat{\mathcal{L}}(x,\omega)$ can be approximated by piecewise linear functions, as presented in the following lemma. \begin{lemma}\label{lemma1} For the first order loss function and its complementary function, the lower bounds $\mathcal{L}_{lb}$ and $\hat{\mathcal{L}}_{lb}$, given for $E[\omega|\Omega_i] \leq x \leq E[\omega|\Omega_{i+1}]$, $i=1, \ldots, W$, by \begin{align} &\mathcal{L}(x, \omega) \geq \mathcal{L}_{lb}(x, \omega) = x\sum_{k=1}^ip_k-\sum_{k=1}^ip_k E[\omega|\Omega_k]+(\tilde{\omega}-x),\\ &\hat{\mathcal{L}}(x, \omega) \geq \hat{\mathcal{L}}_{lb}(x, \omega) = x\sum_{k=1}^ip_k-\sum_{k=1}^ip_k E[\omega|\Omega_k], \end{align} are piecewise linear functions with $W+1$ segments. 
\end{lemma} We introduce two sets of variables, $\tilde{B}_t^n \geq 0$ and $\tilde{H}_t^n \geq 0$, representing lower bounds of $\text{E}[\max(-I_t^n, 0)]$ and $\text{E}[\max(I_t^n, 0)]$, $t=1, \ldots, T$, $n = 1, \ldots, N$. Then, the objective function (\ref{sp}) in Fig. \ref{spformulation} can be rewritten as \begin{align} \min \sum_{t=1}^T\Big(K\cdot \delta_t + \sum_{n=1}^N\big(k^n\cdot y_t^n +b^n\tilde{B}_t^n + h^n\tilde{H}_t^n\big)\Big). \end{align} We next construct the constraints by separating the discussion into two parts. The first part involves periods $1, \dots, L^n$, $n=1, \ldots, N$, in which no order is received. Recall that there is no outstanding order at the beginning of the planning horizon, and the system is forced to issue an order in period $1$; then the inventory level $I_t^n$ must equal the initial inventory level of item $n$ at the beginning of the planning horizon minus the demand convolution over periods $1, \ldots, t$, i.e., $I_t^n=I_0^n -d_{1,t}^n$, where $d_{1,t}^n$ is the demand convolution of item $n$ over periods $1, \ldots, t$, i.e., $d_{1,t}^n=d_1^n+\ldots +d_t^n$. We rewrite the expected back-orders and excess on-hand stocks using the first order loss function and its complementary function, $\mathcal{L}(I_0^n, d_{1,t}^n)$ and $\hat{\mathcal{L}}(I_0^n, d_{1,t}^n)$. By applying Lemma \ref{lemma1}, $\tilde{B}_t^n$ and $\tilde{H}_t^n$ can be constrained as follows, $t=1,\ldots,L^n$, $n=1, \ldots, N$, $i=1, \ldots, W$, \begin{align} &\tilde{B}_t^n \geq -\tilde{I}_t^n+\sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_kE[d_{1,t}^n|\Omega_k], \\ &\tilde{H}_t^n \geq \sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_kE[d_{1,t}^n|\Omega_k]. \end{align} Additionally, constraints (\ref{sp-1-2}) in Fig. \ref{spformulation} can be rewritten as \begin{align} &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n=0, &t={1, \ldots, L^n}. \end{align} The second part involves periods $1+L^n, \ldots, T$, $n=1, \ldots, N$. 
Consider a single cycle of item $n$ over periods $i,\ldots, j$, in which a single order is received at the beginning of period $i$ and the next order will be received at the beginning of period $j+1$. Since the lead time of item $n$ is $L^n$, the order that arrives in period $i$ must be issued in period $i-L^n$ with order-up-to-position $S_{i-L^n}^n$. Thus, $I_t^n$, $t=i, \ldots, j$, must equal the order-up-to-position $S_{i-L^n}^n$ minus the demand convolution over periods $i-L^n, \ldots, t$, i.e. $I_t^n=S_{i-L^n}^n-d_{i-L^n,t}^n$. We introduce a binary variable $P_{jt}^n$ which is set to one if the most recent order received by period $t$ arrived in period $j$, where $j\leq t$, $j=1+L^n, \ldots, t$, $t=1+L^n, \ldots, T$, and $n=1, \ldots, N$; and we introduce the following constraints, $t={1+L^n, \ldots, T}$, $n=1, \ldots, N$, \begin{align} &\sum_{j=1+L^n}^tP_{jt}^n=1, & \label{c-1}\\ &P_{jt}^n \geq y_{j-L^n}^n - \sum_{k=j-L^n+1}^{t-L^n}y_k^n, & j={1+L^n, \ldots, t} \label{c-2}. \end{align} Constraints (\ref{c-1}) ensure that, for each period $t$, exactly one period $j$ is selected as the arrival period of the most recent order received by period $t$; constraints (\ref{c-2}) uniquely identify that period. Therefore, the inventory level $I_t^n=\sum_{j=1+L^n}^t(S_{j-L^n}^n - d_{j-L^n,t}^n)P_{jt}^n$, where $t=1+L^n, \ldots, T$, and $S_{j-L^n}^n$ represents the order-up-to-position of item $n$ in period $j-L^n$. We write the expected back-orders and excess inventory in terms of the first order loss function and its complementary function, $\sum_{j=1+L^n}^t\mathcal{L}(S_{j-L^n}^n, d_{j-L^n,t}^n)P_{jt}^n$ and $\sum_{j=1+L^n}^t\hat{\mathcal{L}}(S_{j-L^n}^n, d_{j-L^n,t}^n)P_{jt}^n$.
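The selection logic enforced by constraints (\ref{c-1})-(\ref{c-2}) can be mimicked procedurally. The following sketch (hypothetical helper names; it assumes, as in the model, that an order placed in period $k$ arrives at the beginning of period $k+L$) recovers, for a given period $t$, the arrival period of the most recent order:

```python
# Illustrative sketch: given binary order decisions y[k] over 1-indexed
# periods and a lead time L, find for each period t the period j in which
# the most recent order arrived (an order placed in period k arrives at the
# beginning of period k + L), mirroring constraints (c-1)-(c-2).

def most_recent_arrival(y, L, t):
    # Candidate arrival periods are j = 1 + L, ..., t; pick the latest j
    # such that an order was placed in period j - L.
    arrivals = [k + L for k in range(1, len(y) + 1) if y[k - 1] == 1]
    eligible = [j for j in arrivals if 1 + L <= j <= t]
    return max(eligible) if eligible else None
```

For instance, with orders placed in periods $1$, $3$, $5$ and $8$ and $L=1$, the most recent arrival by period $10$ is period $9$; with $L=3$ it is period $8$.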
By applying Lemma \ref{lemma1}, $\tilde{B}_t^n$ and $\tilde{H}_t^n$ can be written as, $t=1+L^n,\ldots,T$, $n=1, \ldots, N$, $i=1, \ldots, W$, \begin{align} &\tilde{B}_t^n \geq -\tilde{I}_t^n+(\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_k\text{E}[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n,\\ &\tilde{H}_t^n \geq (\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_k\text{E}[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n. \end{align} Note that $S_{j-L^n}^n = \tilde{I}_t^n+\tilde{d}_{j-L^n, t}^n$ whenever $P_{jt}^n=1$. In addition, constraints (\ref{sp-2-2})-(\ref{sp-3}) in Fig. \ref{spformulation} can be reformulated as follows, \begin{align} &y^n_{t-L^n}=0 \rightarrow \tilde{I}_{t}^n+\tilde{d}_{t}^n-\tilde{I}_{t-1}^n=0, &t={1+L^n, \ldots, T}, \\ &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n\geq 0, &t={1+L^n, \ldots, T}. \end{align} We now present the overall model in Fig. \ref{MILP}. The objective function (\ref{MINLP-0}) minimises the expected group fixed ordering costs, item-specific fixed ordering costs, penalty costs, and holding costs of the $N$ items over the $T$-period planning horizon. Constraints (\ref{MINLP-1}) imply that an individual item can only be included in a group replenishment if that replenishment is made. Constraints (\ref{MINLP-2}) - (\ref{MINLP-3}) reflect the assumptions that the first order is issued at the beginning of period $1$ and that there is no outstanding replenishment at the beginning of the planning horizon. Constraints (\ref{MINLP-6}) - (\ref{MINLP-7}) represent the expected back-orders and on-hand stocks of item $n$ over periods $1, \ldots, L^n$. Constraints (\ref{MINLP4-1}) state that all orders are received by the end of the planning horizon. Constraints (\ref{MINLP-4}) - (\ref{MINLP-5}) are inventory balance constraints. Constraints (\ref{MINLP-8}) - (\ref{MINLP-9}) identify the period $j$ in which the most recent replenishment received by period $t$ arrived.
Constraints (\ref{MINLP-10}) - (\ref{MINLP-11}) represent the expected back-orders and on-hand stocks of item $n$ over periods $1+L^n, \ldots, T$. Constraints (\ref{MINLP-12}) - (\ref{MINLP-14}) define the domains of the binary variables $\delta_t$, $y_t^n$, and $P_{jt}^n$. \begin{figure}[!ht] \tiny \begin{equation} \min \sum_{t=1}^T\Big(K\cdot \delta_{t}+\sum_{n=1}^N\big(k^n\cdot y_t^n+h^n\tilde{H}_{t}^n+b^n\tilde{B}_{t}^n\big)\Big) \label{MINLP-0} \end{equation} Subject to, $n = 1, \ldots, N$ \begin{align} &\delta_t\geq y_t^n & t=1, \ldots, T \label{MINLP-1}\\ &y_1^n=1 & \label{MINLP-2}\\ &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n=0 &t={1, \ldots, L^n} \label{MINLP-3}\\ &\tilde{B}_t^n \geq -\tilde{I}_t^n+\sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_k\text{E}[d_{1,t}^n|\Omega_k] & t=1,\ldots, L^n, i=1, \ldots, W \label{MINLP-6}\\ &\tilde{H}_t^n \geq \sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_k\text{E}[d_{1,t}^n|\Omega_k] & t=1,\ldots, L^n, i=1, \ldots, W \label{MINLP-7}\\ &y_t^n=0 & t=T-L^n+1, \ldots, T \label{MINLP4-1}\\ &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n\geq 0 &t={1+L^n, \ldots, T} \label{MINLP-4}\\ &y^n_{t-L^n}=0 \rightarrow \tilde{I}_{t}^n+\tilde{d}_{t}^n-\tilde{I}_{t-1}^n=0 &t={1+L^n, \ldots, T} \label{MINLP-5}\\ &\sum_{j=1+L^n}^tP_{jt}^n=1 & t={1+L^n, \ldots, T} \label{MINLP-8}\\ &P_{jt}^n \geq y_{j-L^n}^n - \sum_{k=j-L^n+1}^{t-L^n}y_k^n & t={1+L^n, \ldots, T}, j={1+L^n, \ldots, t} \label{MINLP-9}\\ &\tilde{B}_t^n \geq -\tilde{I}_t^n+(\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_k\text{E}[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n & t=1+L^n, \ldots, T, i=1, \ldots, W \label{MINLP-10}\\ &\tilde{H}_t^n \geq (\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_k\text{E}[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n & t=1+L^n, \ldots, T, i=1, \ldots, W \label{MINLP-11}\\ &\delta_t\in\{0, 1\} & t={1, \ldots, T} \label{MINLP-12}\\ &y_t^n \in\{0, 1\} &t={1, \ldots, T} \label{MINLP-13}\\ &P_{jt}^n\in\{0, 1\} & t={1+L^n, \ldots, T}, j={1+L^n, \ldots, t}
\label{MINLP-14} \end{align} \caption{MILP model for approximating $(R, S)$ policies} \label{MILP} \end{figure} By solving the model in Fig. \ref{MILP}, we obtain the optimal replenishment plan, including the group replenishment periods $\delta_t$, the item-specific replenishment periods $y_t^n$, and the item-specific order-up-to-positions $S_{t}^n=\tilde{I}_{t+L^n}^n+\tilde{d}_{t, t+L^n}^n$, for $t=1, \ldots, T$ and $n=1, \ldots, N$. {\bf Example.} We demonstrate the modelling strategy behind the MILP model on a $5$-item $10$-period example. It is assumed that demand is Poisson-distributed with the rates $\lambda_t^n$ presented in Table \ref{demandrate}. Initial inventory levels are taken as zero. The other parameters are $K=500$, $b=10$, $h=2$, $k^n=[120, 100, 80, 120, 150]$, and $L^n=[1, 2, 3, 1, 3]$. We employ eleven segments in the piecewise-linear approximations of $\tilde{B}_{t}^n$ and $\tilde{H}_{t}^n$ (for $n=1,\ldots, 5$ and $t=1,\ldots, 10$). \begin{table}[!ht] \centering \begin{tabular}{|c|cccccccccc|} \hline \diagbox{item}{$\lambda_t^n$}{period}& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline 1 & 40 & 40 & 40 & 40 & 40 & 40 & 40 & 40 & 40 & 40 \\ 2 & 5 & 64 & 29 & 54 & 70 & 50 & 54 & 45 & 13 & 50 \\ 3 & 40 & 55 & 72 & 86 & 78 & 51 & 42 & 38 & 30 & 26 \\ 4 & 41 & 58 & 75 & 63 & 40 & 35 & 33 & 18 & 29 & 39 \\ 5 & 45 & 40 & 22 & 31 & 38 & 46 & 59 & 62 & 46 & 40 \\ \hline \end{tabular} \caption{Demand rates $\lambda_t^n$ of the $5$-item $10$-period example} \label{demandrate} \end{table} The resulting expected total cost is $14236.24$. The replenishment plan of each item is presented in Fig. \ref{replenishplans}. Items $1$, $2$ and $4$ are replenished in periods $1$, $3$, $5$, and $8$, while items $3$ and $5$ are replenished only in periods $1$, $3$, and $5$, since orders placed in period $8$ could not be received by the end of the planning horizon.
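The exclusion of period-$8$ orders for items $3$ and $5$ follows directly from the lead times: an order of item $n$ placed in period $t$ arrives in period $t+L^n$ and is of use only if it arrives within the horizon. A minimal sketch of this check (hypothetical names, using the example's data):

```python
# Minimal sketch: which items of the 5-item, 10-period example can still
# receive an order placed in period 8? An order of item n placed in period t
# is assumed to arrive in period t + L[n], and must arrive by period T.
T = 10
L = {1: 1, 2: 2, 3: 3, 4: 1, 5: 3}  # lead times L^n from the example

def arrives_in_horizon(item, t):
    return t + L[item] <= T

orderable_in_period_8 = [n for n in L if arrives_in_horizon(n, 8)]
```

This yields items $1$, $2$ and $4$, matching the replenishment plans in Fig. \ref{replenishplans}.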
Additionally, item $1$ is expected to be ordered every two periods with the same order-up-to-position $123$ by the nature of stationary demand, while it is ordered up to a higher position $164$ in period $5$ to cover demands in the next $3$ periods in order to coordinate with other items. \begin{figure}[!htbp] \centering \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=22mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,61.5) circle (1.5pt); \filldraw[black] (20,41.5) circle (1.5pt); \filldraw[black] (40,61.5) circle (1.5pt); \filldraw[black] (60,41.5) circle (1.5pt); \filldraw[black] (80,21.5) circle (1.5pt); \filldraw[black] (80,82) circle (1.5pt); \filldraw[black] (100,62) circle (1.5pt); \filldraw[black] (120,42) circle (1.5pt); \filldraw[black] (140,22) circle (1.5pt); \filldraw[black] (140,61.5) circle (1.5pt); \filldraw[black] (160,41.5) circle (1.5pt); \filldraw[black] (180,21.5) circle (1.5pt); \filldraw[black] (200,1.5) circle (1.5pt); \draw (0,0)--(0,61.5); \draw (0,61.5)--(20,41.5); 
\draw (20,41.5)--(40,21.5); \draw (40,21.5)--(40,61.5); \draw (40,61.5)--(60,41.5); \draw (60,41.5)--(80,21.5); \draw (80,21.5)--(80,82); \draw (80,82)--(100,62); \draw (100,62)--(120,42); \draw (120,42)--(140,22); \draw (140,22)--(140,61.5); \draw (140,61.5)--(160,41.5); \draw (160,41.5)--(180,21.5); \draw (180,21.5)--(200,1.5); \node at (50mm,22mm) {Item 1}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=24mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \filldraw (0,0) circle (1.5pt); \filldraw(0,78) circle (1.5pt); \filldraw (20,75.5) circle (1.5pt); \filldraw (40,43.5) circle (1.5pt); \filldraw (40,103.5) circle (1.5pt); \filldraw (60,89) circle (1.5pt); \filldraw (80,62) circle (1.5pt); \filldraw (80,118) circle (1.5pt); \filldraw (100,83) circle (1.5pt); \filldraw (120,58) circle (1.5pt); \filldraw (140,31) circle (1.5pt); \filldraw (140,105.5) circle (1.5pt); \filldraw (160,33) circle (1.5pt); \filldraw (180,26.5) circle (1.5pt); \filldraw (200,1.5) circle (1.5pt); \draw (0,0)--(0,78); \draw (0,78)--(20,75.5); \draw 
(20,75.5)--(40,43.5); \draw (40,43.5)--(40,103.5); \draw (40,103.5)--(60,89); \draw (60,89)--(80,62); \draw (80,62)--(80,118); \draw (80,118)--(100,83); \draw (100,83)--(120,58); \draw (120,58)--(140,31); \draw (140,31)--(140,105.5); \draw (140,105.5)--(160,33); \draw (160,33)--(180,26.5); \draw (180,26.5)--(200,1.5); \node at (50mm,24mm) {Item 2}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=33mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \draw (0,150) -- (2,150) node[left] {300}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,168) circle (1.5pt); \filldraw[black] (20,148) circle (1.5pt); \filldraw[black] (40,120.5) circle (1.5pt); \filldraw[black] (40,167) circle (1.5pt); \filldraw[black] (60,131) circle (1.5pt); \filldraw[black] (80,88) circle (1.5pt); \filldraw[black] (80,135) circle (1.5pt); \filldraw[black] (100,96) circle (1.5pt); \filldraw[black] (120,70.5) circle (1.5pt); \filldraw[black] (140,49.5) circle (1.5pt); \filldraw[black] (160,30.5) circle (1.5pt); \filldraw[black] (180,15.5) circle (1.5pt); 
\filldraw[black] (200,2.5) circle (1.5pt); \draw (0,0)--(0,168); \draw (0,168)--(20,148); \draw (20,148)--(40,120.5); \draw (40,120.5)--(40,167); \draw (40,167)--(60,131); \draw (60,131)--(80,88); \draw (80,88)--(80,135); \draw (80,135)--(100,96); \draw (100,96)--(120,70.5); \draw (120,70.5)--(140,49.5); \draw (140,49.5)--(160,30.5); \draw (160,30.5)--(180,15.5); \draw (180,15.5)--(200,2.5); \node at (50mm,33mm) {Item 3}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=22mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,89) circle (1.5pt); \filldraw[black] (20,68.5) circle (1.5pt); \filldraw[black] (40,39.5) circle (1.5pt); \filldraw[black] (40,91) circle (1.5pt); \filldraw[black] (60,53.5) circle (1.5pt); \filldraw[black] (80,22) circle (1.5pt); \filldraw[black] (80,64.5) circle (1.5pt); \filldraw[black] (100,44.5) circle (1.5pt); \filldraw[black] (120,27) circle (1.5pt); \filldraw[black] (140,10.5) circle (1.5pt); \filldraw[black] (140,44.5) circle (1.5pt); \filldraw[black] 
(160,35.5) circle (1.5pt); \filldraw[black] (180,21) circle (1.5pt); \filldraw[black] (200,1.5) circle (1.5pt); \draw (0,0)--(0,89); \draw (0,89)--(20,68.5); \draw (20,68.5)--(40,39.5); \draw (40,39.5)--(40,91); \draw (40,91)--(60,53.5); \draw (60,53.5)--(80,22); \draw (80,22)--(80,64.5); \draw (80,64.5)--(100,44.5); \draw (100,44.5)--(120,27); \draw (120,27)--(140,10.5); \draw (140,10.5)--(140,44.5); \draw (140,44.5)--(160,35.5); \draw (160,35.5)--(180,21); \draw (180,21)--(200,1.5); \node at (50mm,22mm) {Item 4}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=32mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \draw (0,150) -- (2,150) node[left] {300}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,90) circle (1.5pt); \filldraw[black] (20,67.5) circle (1.5pt); \filldraw[black] (40,47.5) circle (1.5pt); \filldraw[black] (40,100) circle (1.5pt); \filldraw[black] (60,89) circle (1.5pt); \filldraw[black] (80,73.5) circle (1.5pt); \filldraw[black] (80,148) circle (1.5pt); \filldraw[black] (100,129) circle (1.5pt); 
\filldraw[black] (120,106) circle (1.5pt); \filldraw[black] (140,76.5) circle (1.5pt); \filldraw[black] (160,45.5) circle (1.5pt); \filldraw[black] (180,22.5) circle (1.5pt); \filldraw[black] (200,1.5) circle (1.5pt); \draw (0,0)--(0,90); \draw (0,90)--(20,67.5); \draw (20,67.5)--(40,47.5); \draw (40,47.5)--(40,100); \draw (40,100)--(60,89); \draw (60,89)--(80,73.5); \draw (80,73.5)--(80,148); \draw (80,148)--(100,129); \draw (100,129)--(120,106); \draw (120,106)--(140,76.5); \draw (140,76.5)--(160,45.5); \draw (160,45.5)--(180,22.5); \draw (180,22.5)--(200,1.5); \node at (50mm,32mm) {Item 5}; \end{tikzpicture} \\ \caption{Replenishment plans of the $5$-item $10$-period example} \label{replenishplans} \end{figure} \section{MILP model for approximating the optimal ($\sigma, \vec{S}$) policies} \label{kconvexity} Since the landmark study of \cite{Scarf1960}, which proved the optimality of the $(s, S)$ policy for the single-item inventory system, there have been few attempts to prove optimality results for multi-item inventory systems, e.g. \citep{johnson1967, kalin1980, ohnoandishigaki2001, gallegoandsethi2005}. In this section we show how the MILP model proposed in Section \ref{MINLPformulation} can be used to approximate the optimal replenishment plan under the $(\sigma, \vec{S})$ policy for the JRP. \begin{definition} Function $G(\cdot): \mathcal{R}^N\rightarrow \mathcal{R}$ is $K$-convex if \[G(ax+(1-a)z) \leq a G(x)+(1-a)[G(z)+K\delta(z-x)],\] where $\delta(0)=0$, $\delta(i)=1$ for $i>0$, $x \leq z$, and $a \in [0,1]$. \end{definition} \vspace{1em} \cite{gallegoandsethi2005} characterised the optimal policy for the joint setup cost case by studying the function \begin{align} G_t(\vec{y}) =L_t(\vec{y})+ C_{t+1}(\vec{y}-\vec{d}_t). \end{align} If $G_t(\cdot)$ is a continuous, coercive, $K$-convex function, then it attains a global minimum at some $\vec{S}_t$. Define set $\Sigma = \{\vec{I}_{t-1}\leq \vec{S}_t| G_t(\vec{I}_{t-1}) \leq G_t(\vec{S}_t)+K\}$, and set $\sigma=\{\vec{I}_{t-1}\leq \vec{S}_t|\vec{I}_{t-1} \notin \Sigma\}$.
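The $K$-convexity definition can be verified numerically for simple functions. The following one-dimensional sketch is an illustration only (the policy result above concerns the $N$-dimensional case); it checks the defining inequality over a grid of points and convex-combination weights:

```python
# Illustrative one-dimensional check of the K-convexity definition:
# G is K-convex if G(a*x + (1-a)*z) <= a*G(x) + (1-a)*(G(z) + K*delta(z - x))
# for all x <= z and a in [0, 1], where delta(u) = 1 if u > 0, else 0.
# Any convex function is K-convex for every K >= 0.

def is_k_convex(G, K, grid, alphas):
    for x in grid:
        for z in grid:
            if x > z:
                continue
            for a in alphas:
                lhs = G(a * x + (1 - a) * z)
                delta = 1.0 if z > x else 0.0
                rhs = a * G(x) + (1 - a) * (G(z) + K * delta)
                if lhs > rhs + 1e-9:  # tolerance for floating point
                    return False
    return True
```

For example, a fixed-cost-style step function that drops by $5$ is not convex ($K=0$) but is $K$-convex for $K=5$.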
Lemma \ref{lemma_K-convexity} shows that the optimal replenishment plan is to order up to $\vec{S}_t$ if the opening inventory levels satisfy $\vec{I}_{t-1} \in \sigma$ and $\vec{I}_{t-1} \leq \vec{S}_t$, and not to order otherwise. \vspace{1em} \begin{lemma}[\cite{gallegoandsethi2005}]\label{lemma_K-convexity} If $G$ is $K$-convex, continuous, and coercive, then \begin{itemize} \item $\vec{I} \in \Sigma \Rightarrow G(\vec{I}) \leq K+G(\vec{S})$, \item $\vec{I} \in \sigma \Rightarrow G(\vec{I}) > K+G(\vec{S})$. \end{itemize} \end{lemma} We next show that the MILP model in Fig. \ref{MILP} can be adjusted to approximate set $\sigma$ and $\vec{S}$. Since the $(\sigma, \vec{S})$ policy optimises only the group fixed ordering costs, holding costs and penalty costs, we first drop the item-specific fixed ordering cost term, i.e., $k^n\cdot y_t^n$, from the objective function (\ref{MINLP-0}). We then set the lead time of all items to $0$, i.e., $L^n=0$, $n=1, \ldots, N$. Since orders are delivered immediately, we drop constraints (\ref{MINLP-2}) - (\ref{MINLP4-1}). Due to the complexity of $\sigma$, it is impractical to derive a closed form expression for it. Alternatively, one may adopt the following strategy to determine whether given initial inventory levels satisfy $\vec{I}_0 \in \sigma$. By solving our modified MILP model over the planning horizon $k, \ldots, T$, we obtain the minimised expected total cost $G_k(\vec{S}_k)$, the order-up-to-levels $\vec{S}_k$, and the first-period order decision $\delta_k$. If $\delta_k =1$, then $\vec{I}_{k-1} \in \sigma$; otherwise, $\vec{I}_{k-1} \in \Sigma$. Therefore our MILP model can be used to determine whether given initial inventory levels satisfy $\vec{I}_0 \in \sigma$. Moreover, by repeating this procedure for every period $k=1, \ldots, T$, one can approximate the optimal replenishment strategy over the whole horizon. {\bf Example.} We illustrate the concept introduced above on the 2-item 4-period example presented in Section \ref{problemdescription}.
Assuming initial inventory levels $\vec{I}_0$ with each component in $\{0, \ldots, 20\}$, we plot the expected total cost contours obtained via the modified MILP in Fig. \ref{fig:example_countour_plot_milp}. Note that there are two minima with similar costs, which is expected since the ordering cost is relatively small and the demand variance is large. We plot set $\sigma$ and $\vec{S}$ obtained via the modified MILP model, and compare them with those obtained via stochastic dynamic programming, in Fig. \ref{fig:example_optimality}. The optimal policy is to place an order whenever the inventory levels $\vec{I}_0=(I_0^1, I_0^2)$ fall into set $\sigma$, and not to place an order if $\vec{I}_0$ falls into $\Sigma$. We observe that set $\sigma$ and $\vec{S}$ obtained via the modified MILP model closely approximate those obtained via stochastic dynamic programming. \begin{figure}[!ht] \centering \subfigure[Expected total cost contour plot obtained via MILP approximation] { \label{fig:example_countour_plot_milp} \includegraphics[width=0.45\textwidth]{contour_plot_mip.pdf} } \subfigure[Plot of expected total costs obtained via MILP and SDP]{ \label{fig:example_optimality} \includegraphics[width=0.45\textwidth]{example_sdp_milp_comparison.pdf} } \caption{Plot of expected total costs for the two-item joint replenishment numerical example} \end{figure} \section{Computational Experiments}\label{computationalstudy} In this section we assess the cost performance of the $(R, S)$ policy by comparing it against the $(Q, S, T)$ policy \citep{ozkayaetal2006}, $Q(s, S)$ policy \citep{nielsenandLarsen2005}, $P(s, S)$ policy \citep{viswanathan1997}, $(Q, S)$ policy \citep{pantumsinchai1992}, $MP$ policy \citep{atkinsandIyogun1988}, $(s, c, S)_M$ policy \citep{melchiors2002}, and $(s, c, S)_F$ policy \citep{federgruenetal1984}, on the data sets of \cite{atkinsandIyogun1988} and \cite{viswanathan1997}. These data sets consider stationary demand over an infinite horizon.
Unfortunately, computing $(R, S)$ policy parameters for infinite-horizon JRPs via our MILP model is computationally expensive; however, since demand is stationary, it is possible to derive an efficient shortest path reformulation, which we present in \ref{shortespathapproximation} and use in our computational study. Computational experiments are conducted using IBM ILOG CPLEX Optimization Studio 12.7 and Matlab R2016a on a 64-bit machine with a 3.20 GHz Intel Core i5-6500 CPU and 16.0 GB of RAM. Since the shortest path reformulation operates over a finite horizon, in order to compare the cost performance of the $(R, S)$ policy with the continuous-review $(s, c, S)$, $(Q, S)$, and $(Q, S, T)$ policies, we discretize each time period into $20$ small periods. We consider a planning horizon length of $6.6$ periods, for a total of $132$ small periods. For each test instance, we first obtain the optimal replenishment plan by solving the shortest path reformulation presented in \ref{shortespathapproximation}. The computational time is limited to $5$ minutes; if a timeout occurs, the best solution available is adopted. Next, we simulate the expected average cost of each test instance via Monte Carlo simulation ($100{,}000$ replications). Finally, we compare the average cost per small period against the average cost under existing policies. The data set of \cite{atkinsandIyogun1988} assumes that the demand of each item follows a stationary Poisson distribution with rate $\lambda^n$, $n=1, \ldots, 12$. The item-specific fixed ordering cost $K^n$, expected demand $\lambda^n$, and lead time $L^n$ are displayed in Table \ref{set1parameters}. Items share the same penalty cost $b=30$, holding cost $h \in \{2, 6, 20\}$, and group fixed ordering cost $K \in \{20, 50, 100, 150, 500\}$.
\begin{table}[!htbp] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline items & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline $K^n$ & 10 & 10 & 20 & 20 & 40 & 20 & 40 & 40 & 60 & 60 & 80 & 80 \\ $\lambda^n$ & 40 & 35 & 40 & 40 & 40 & 20 & 20 & 20 & 28 & 20 & 20 & 20\\ $L^n$ & 0.2 & 0.5 & 0.2 & 0.1 & 0.2 & 1.5 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0\\ \hline \end{tabular} \caption{$K^n$, $\lambda^n$, and $L^n$ of the data set of \cite{atkinsandIyogun1988}} \label{set1parameters} \end{table} The data set of \cite{atkinsandIyogun1988} contains some unusual lot sizing instances; more specifically, instances for which the group as well as the item fixed ordering costs become negligible in comparison to holding costs. In the lot-sizing literature the fixed ordering cost is commonly assumed to be greater than the holding cost \citep[see][p. 62, Property 2]{citeulike:8526547}; moreover, the penalty cost should not be smaller than the holding cost. Additionally, we observe that instances in which the fixed ordering cost is smaller than the penalty cost are of little interest, since in such cases the inventory system tends to place orders in every period rather than penalising back-orders. To focus on meaningful lot sizing instances --- instances in which a trade-off between fixed ordering and holding/penalty costs is sought --- we filter the test instances of the data set of \cite{atkinsandIyogun1988} by using the following conditions: $K > b \geq h$. We also check the order frequency in each period and discard instances in which orders are issued too frequently --- i.e. instances in which a replenishment is issued more than twice per time period --- as it turns out that for these instances order coordination is straightforward due to negligible item fixed ordering costs: if a group order is placed, all items are ordered. We present computational results in Table \ref{dataset1}.
\begin{table*}[!htbp] \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{lll|l|rrrrrrr} \hline \multicolumn{1}{r}{\multirow{2}[4]{*}{$K$}} & \multicolumn{1}{r}{\multirow{2}[4]{*}{$b$}} & \multicolumn{1}{r|}{\multirow{2}[4]{*}{$h$}} & \multicolumn{1}{r|}{\multirow{2}[4]{*}{$(R, S)$}} & \multicolumn{7}{c}{Average cost improvement $\Delta\%$} \\ \cline{5-11} & & \multicolumn{1}{r|}{} & & $(Q, S, T)$ & $Q(s, S)$ & $P(s, S)$ & $(Q, S)$ & $MP$ & $(s, c, S)_M$ & $(s, c, S)_F$ \\ \hline \multicolumn{1}{r}{50} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{936.94} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.91}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.84}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.33}} & 4.38 & 0.68 & 0.79 & 2.14 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{990.50} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.05}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.45}} & 0.75 & 2.57 & 1.77 & 4.39 & 6.81 \\ \multicolumn{1}{r}{150} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1046.56} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.24}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.01}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.35}} & 0.52 & 0.65 & 5.68 & 8.36 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1072.97} & 1.32 & 0.47 & 1.11 & 1.34 & 2.12 & 8.34 & 12.31 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1639.75} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.23}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.52}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.02}} & 2.15 & 0.00 & 1.24 & 3.31 \\ \multicolumn{1}{r}{150} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1707.05} & 0.64 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.60}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.07}} & 1.46 & 0.95 & 2.34 & 6.68 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{6} &
\multicolumn{1}{r|}{1766.38} & 1.16 & 0.08 & 0.65 & 1.17 & 1.67 & 3.08 & 9.04 \\ \multicolumn{1}{r}{150} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{20} & \multicolumn{1}{r|}{2718.47} & 0.77 & 4.32 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.26}} & 1.27 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.21}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.59}} & 6.20 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{20} & \multicolumn{1}{r|}{2812.52} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-3.23}} & 0.14 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.72}} & 0.77 & 0.34 & 0.25 & 8.34 \\ \hline \multicolumn{4}{l|}{Average cost improvement $\Delta\%$} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.09}} & 0.07 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.14}} & 1.74 & 0.89 & 2.84 & 7.02 \\ \hline \end{tabular}% } \caption{Computational results on the data set of \cite{atkinsandIyogun1988}} \label{dataset1}% \end{table*} Let $\Delta \%$ denote the percentage gap between the expected average cost of an existing policy and that of the proposed $(R, S)$ policy, expressed relative to the expected average cost of the $(R, S)$ policy. By definition, a positive $\Delta\%$ indicates that the $(R, S)$ policy outperforms the existing policy. Note that the expected average costs under the $(Q, S, T)$, $Q(s, S)$, $P(s, S)$, $(Q, S)$, and $(s,c, S)_M$ policies are obtained from \cite{ozkayaetal2006}, that of the $(s, c, S)_F$ policy is obtained from \cite{melchiors2002}, and that of the $MP$ policy is obtained from \cite{viswanathan1997}. We observe that the $(R, S)$ policy dominates all other policies in $2$ of the $9$ test instances; $(Q, S, T)$ is the best policy in $2$ instances; $Q(s, S)$ is the best policy in $4$ instances; and $P(s, S)$ is the best policy in $1$ instance. Moreover, the $(R, S)$ policy outperforms the $(Q, S)$ and $(s, c, S)_F$ policies on every instance, and no policy dominates on all test instances.
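The gap metric reported in the tables reduces to a one-line computation; a minimal sketch (illustrative values, not taken from the tables):

```python
# Minimal sketch of the gap metric: Delta% is the cost of an existing policy
# minus the (R, S) policy's expected average cost, relative to the (R, S)
# cost. Positive values indicate that the (R, S) policy performs better.

def delta_percent(cost_existing, cost_rs):
    return 100.0 * (cost_existing - cost_rs) / cost_rs
```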
Compared with the $(s, c, S)_M$ and $(s, c, S)_F$ policies, the average cost improvement $\Delta\%$ increases with the group fixed ordering cost and decreases with the holding cost. That is, an increase in the group fixed ordering cost or a decrease in the holding cost improves the relative cost performance of the $(R, S)$ policy. It is difficult to make a general remark with respect to the group fixed ordering cost and the holding cost in comparison with the $(Q, S, T)$, $Q(s, S)$, $P(s, S)$, $(Q, S)$, and $MP$ policies. On average, the $(R, S)$ policy performs better than the $Q(s, S)$, $(Q, S)$, $MP$, $(s, c, S)_M$, and $(s, c, S)_F$ policies, with average improvements of $0.07\%$, $1.74\%$, $0.89\%$, $2.84\%$, and $7.02\%$, respectively; however, the $(Q, S, T)$ and $P(s, S)$ policies perform slightly better than the $(R, S)$ policy, with average improvements of $0.09\%$ and $0.14\%$, respectively. The data set of \cite{viswanathan1997} adopts the same parameters as the data set of \cite{atkinsandIyogun1988}, except that $b \in \{10, 50, 100, 200, 1000, 5000, 10000, 20000\}$, $h \in \{2, 6, 10, 200, 600, 1000\}$, and $K \in \{20, 50, 100, 200, 500\}$. We filter the computational results by using the same conditions previously adopted. We present the computational results of the $(R, S)$ policy on the data set of \cite{viswanathan1997} in Table \ref{dataset2}. We observe that the $(R, S)$ policy is the best policy in $13$ of the $31$ test instances; $(Q, S, T)$ is the best policy in $13$ instances; $Q(s, S)$ is the best policy in $9$ instances; and $P(s, S)$ is the best policy in $1$ instance. There is no policy that dominates on all test instances. Regarding the comparison with other policies, the average cost improvement $\Delta\%$ decreases as the penalty cost increases, while there is no obvious trend with respect to the group fixed ordering cost and the holding cost.
On average, the $(R, S)$ policy performs better than the $Q(s, S)$, $P(s, S)$, $(Q, S)$, $MP$, and $(s, c, S)_F$ policies, with average cost improvements of $0.37\%$, $0.37\%$, $1.81\%$, $1.41\%$, and $1.67\%$; while the $(Q, S, T)$ policy performs slightly better than the $(R, S)$ policy, with an average cost improvement of $0.19\%$. \begin{table*}[!ht] \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{rrr|r|rrrrrr} \hline \multicolumn{1}{r}{\multirow{2}[4]{*}{$K$}} & \multicolumn{1}{r}{\multirow{2}[4]{*}{$b$}} & \multicolumn{1}{r|}{\multirow{2}[4]{*}{$h$}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{$(R, S)$}} & \multicolumn{6}{c}{Average cost improvement $\Delta\%$} \\ \cline{5-10} & & \multicolumn{1}{r|}{} & & $(Q, S, T)$ & $Q(s, S)$ & $P(s, S)$ & $(Q, S)$ & $MP$ & $(s, c, S)_F$ \\ \hline \multicolumn{1}{r}{20} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{772.25} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.03}} & 0.48 & 0.76 & 8.30 & 1.79 & 1.80 \\ \multicolumn{1}{r}{50} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{813.94} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.48}} & 0.12 & 0.62 & 0.47 & 1.64 & 1.74 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{861.05} & 0.23 & 0.70 & 1.17 & 3.68 & 2.20 & 2.38 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{932.86} & 1.62 & 1.83 & 2.38 & 2.88 & 3.42 & 3.73 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1131.42} & 0.14 & 0.14 & 0.59 & 0.18 & 1.60 & 2.12 \\ \multicolumn{1}{r}{20} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1166.06} & 0.85 & 2.84 & 0.01 & 7.99 & 1.08 & 1.04 \\ \multicolumn{1}{r}{50} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1222.82} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.15}} & 1.83 & 0.62 & 5.53 & 1.68 & 1.73 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{10} &
\multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1283.92} & 1.33 & 2.50 & 1.26 & 4.49 & 2.34 & 2.46 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1413.72} & 0.30 & 1.23 & 1.02 & 1.82 & 2.10 & 2.33 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1658.48} & 2.26 & 2.20 & 2.52 & 2.30 & 3.59 & 4.03 \\ \multicolumn{1}{r}{50} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1420.63} & 1.57 & 5.30 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.03}} & 5.88 & 1.07 & 1.07 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1497.96} & 1.67 & 4.28 & 0.75 & 4.37 & 1.87 & 1.93 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1637.27} & 0.66 & 2.18 & 1.15 & 2.16 & 2.28 & 2.44 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1935.07} & 1.60 & 1.60 & 1.79 & 1.60 & 2.90 & 3.27 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1043.31} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.95}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.79}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.23}} & 1.98 & 0.78 & 0.92 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1132.61} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.29}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.48}} & 0.30 & 0.50 & 1.31 & 1.97 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1327.95} & 0.08 & 0.08 & 0.82 & 0.13 & 1.83 & 2.30 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1794.60} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.37}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-2.65}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-2.09}} & 0.94 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.09}} & \textcolor[rgb]{ 1, 0, 
0}{\textbf{-0.97}} \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1938.25} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.56}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.89}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.05}} & 0.13 & 0.34 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2244.01} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & 0.43 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.26}} & 1.44 & 1.87 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{2448.79} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-3.83}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-2.11}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.55}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.75}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.53}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.34}} \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{2796.29} & 0.35 & 0.35 & 0.97 & 0.35 & 2.00 & 2.40 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1200.38} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.61}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.94}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.11}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.01}} & 0.90 & 1.13 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1406.67} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.76}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.83}} & 0.16 & 0.16 & 1.17 & 1.60 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2106.78} & 0.44 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.23}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.48}} & 0.94 & 0.54 & 0.73 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2449.51} & \textcolor[rgb]{ 1, 0, 
0}{\textbf{-0.88}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.88}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.07}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.07}} & 0.94 & 1.33 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{2728.08} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-3.41}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.90}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.29}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.49}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.10}} \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{3108.05} & 0.22 & 0.22 & 0.94 & 0.94 & 1.96 & 2.33 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{200} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1470.29} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.90}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.90}} & 0.05 & 0.05 & 1.05 & 1.45 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{200} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2620.77} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.91}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.91}} & 0.08 & 0.08 & 1.09 & 1.45 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{200} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{3421.28} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.94}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.94}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.04}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.04}} & 0.97 & 1.30 \\ \hline \multicolumn{4}{l|}{Average cost improvement $\Delta\%$} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.19}} & 0.37 & 0.37 & 1.81 & 1.41 & 1.67 \\ \hline \end{tabular}% } \caption{Computational results on the data set of \cite{viswanathan1997}} \label{dataset2}% \end{table*}% Even though the $(R, S)$ policy does not fully dominate other competing policies, it presents a key advantage: {\em in contrast to all other policies in the literature, it is able to tackle stationary as well as nonstationary demand}.
\section{Conclusion}\label{conclusion} In this paper, we presented a mathematical programming approach for controlling a multi-item inventory system with joint replenishment under the $(R, S)$ policy. We first presented an MILP model for approximating optimal $(R, S)$ policies, built upon the piecewise-linear approximation technique proposed by \citep{rossietal2014}. We further demonstrated that the MILP model can be used to approximate the $(\sigma, \vec{S})$ policy. We conducted an extensive computational study comprising $55$ instances. We first evaluated our approach on the data set of \citep{atkinsandIyogun1988}. This evaluation demonstrates that the $(R, S)$ policy fully dominates other competing policies in the literature in $2$ out of $10$ test instances considered. The $(R, S)$ policy performs better than the $Q(s, S)$, $(Q, S)$, $MP$, $(s, c, S)_M$, and $(s, c, S)_F$ policies, with average improvements of $0.17\%$, $2.61\%$, $0.33\%$, $1.24\%$, and $4.60\%$, respectively; however, the $(Q, S, T)$ and $P(s, S)$ policies perform slightly better than the $(R, S)$ policy, with average improvements of $1.01\%$ and $0.71\%$. Computational experiments on the data set of \citep{viswanathan1997} indicate that $(R, S)$ is the best policy in $13$ out of $45$ test instances. The $(R, S)$ policy performs better than the $(Q, S)$, $MP$, and $(s, c, S)_F$ policies, with average cost improvements of $1.81\%$, $0.62\%$, and $0.82\%$; while the $(Q, S, T)$, $Q(s, S)$, and $P(s, S)$ policies perform slightly better than it, with average cost improvements of $1.11\%$, $0.64\%$, and $0.40\%$. Most importantly, the $(R, S)$ policy comes with the additional advantage of being able to tackle both stationary and nonstationary demand. Future research may focus on investigating the cost performance of the $(R, S)$ policy in a rolling-horizon setting.
\section{Introduction} \label{section1} The Joint Replenishment Problem (JRP) arises when several items are ordered from the same supplier, several products share the same means of transportation, or several products are processed on the same piece of equipment \citep{salamehetal2014}. Every time an order is placed, a group fixed ordering cost is incurred regardless of the number of items replenished; in addition, there are item-specific fixed and variable ordering costs that are charged whenever an item is included in a replenishment order. The goal of the JRP is to determine the optimal inventory replenishment plan that minimises the cost of replenishing multiple items. The literature on the JRP can be roughly categorised into deterministic and stochastic variants based on the nature of demand. In the deterministic joint replenishment inventory system, the demand for each individual item is known and constant over an infinite time horizon, and replenishments are made at equally spaced time intervals; the problem is to determine the length of replenishment cycles and the frequency of replenishing individual items, e.g., \citep{goyalandBelton1979, kaspiandRosenblatt1991,viswanathan1996,wildemanetal1997,hariga1994,goyal1993,boctoretal2004,nilssonetal2007}. In the stochastic joint replenishment inventory system, the demand for individual items is unknown but follows a known distribution; the problem is to decide the optimal parameters of a given inventory policy, e.g., \citep{balintfy1964, atkinsandIyogun1988, renbergandPlanche1967, kalpakamandArivarignan1993, viswanathan1997, nielsenandLarsen2005, ozkayaetal2006}. Most of the literature still addresses constant and dynamic deterministic demands; however, the study of stochastic demand has received increasing attention due to its practical relevance \citep{bastosetal2017}. This work belongs to the growing literature on the stochastic joint replenishment problem.
This paper applies the static-dynamic strategy, proposed by \cite{bt1988} for tackling single-item lot-sizing problems, in the context of a JRP system. The static-dynamic strategy, known as $(R, S)$, features two control parameters: $R$, the timing of replenishment, and $S$, the order-up-to-position. At each review period, the decision maker places an order so as to raise the inventory position (net inventory level + outstanding orders) to a given order-up-to-position. In the context of the JRP system, a periodic-review $(R, S)$ policy is adopted for each item. The $(R, S)$ policy is an appealing strategy since it eases the coordination between supply chain players \citep{kilicandtarim2011} and facilitates managing joint replenishment \citep{silveretal1998}. Our goal is to tackle the periodic-review stochastic JRP under the $(R, S)$ policy. We first present a mixed-integer linear programming (MILP) model for computing optimal policy parameters that minimise the expected total cost, comprising group fixed ordering costs, item-specific fixed ordering costs, holding costs, and penalty costs, over the planning horizon. Our model generalises that of \cite{rossietal2015}, which discussed an MILP model for approximating optimal $(R, S)$ policy parameters for single-item lot-sizing problems. We further show that our MILP model can be used to approximate the optimal $(\sigma, \vec{S})$ policies, which are known to be optimal for this class of problems \citep{liuandesogbue2012}. Under this policy, decision makers order up to $\vec{S}$ if the opening inventory positions fall in $\sigma$ ($\sigma \subset \mathcal{R}^N$, $\vec{S} \in \mathcal{R}^N$, where $N$ is the number of items) at the beginning of each time period. Numerical experiments illustrate the effectiveness of our models. We contribute to the literature on the stochastic JRP as follows. \begin{itemize} \item We present an MILP model for tackling the nonstationary stochastic JRP under the $(R, S)$ policy.
\item We demonstrate that the MILP model can be used to approximate $(\sigma, \vec{S})$ policies. \item In an extensive computational study based on existing test beds drawn from the literature, we demonstrate the effectiveness of our models when compared to other competing approaches in the literature. \end{itemize} The rest of this paper is organised as follows. Section \ref{literaturereview} surveys relevant literature. Section \ref{problemdescription} describes the problem setting. Section \ref{milp} presents an MILP model for computing $(R, S)$ policy parameters. Section \ref{kconvexity} extends the MILP model to approximate the optimal $(\sigma, \vec{S})$ policy parameters. An extensive computational study is conducted in Section \ref{computationalstudy}. We draw conclusions in Section \ref{conclusion}. \section{Literature review}\label{literaturereview} The problem of controlling the inventory of a multi-item system under joint replenishment has received increasing attention over the past several decades. For a thorough review of the literature, readers may refer to \citep{silverandPeterson1985,goyalandSatir1989,vanetal1992, khoujaandGoyal2008, bastosetal2017}. In this section, we focus our attention on existing policies for tackling stochastic JRPs. In particular, we survey control policies that have been considered in the literature. {\bf $(\sigma, \vec{S})$ policy.} Since the landmark study of \cite{Scarf1960}, which proved the optimality of the $(s, S)$ policy for the single-item inventory problem, there have been few attempts to prove optimality results for multi-item inventory systems. \cite{johnson1967} proved that the optimal policy in the stationary case is a $(\sigma, \vec{S})$ policy, where $\sigma \subset \mathcal{R}^N$ and $\vec{S} \in \mathcal{R}^N$: one orders up to $\vec{S}$ if the inventory levels satisfy $\vec{I} \in \sigma$ and $\vec{I} \leq \vec{S}$, and one does not order if $\vec{I} \notin \sigma$.
\cite{kalin1980} showed that when $\vec{I} \in \sigma$ and $\vec{I} \not\leq \vec{S}$, there exists $\vec{S}(\vec{I}) \geq \vec{I}$ such that the optimal policy is to order up to $\vec{S}(\vec{I})$; this policy is named the $(\sigma, \vec{S}(\cdot))$ policy. \cite{ohnoandishigaki2001} proved the optimality of the $(\sigma, \vec{S}(\cdot))$ policy for continuous-time inventory problems with compound Poisson demands. \cite{gallegoandsethi2005} gave a general definition of $K$-convexity in $\mathcal{R}^N$, which encompasses both the joint ordering and the individual ordering case. {\bf $(s, c, S)$ policy.} Several works on stochastic JRPs have focused on computing $(s, c, S)$ policies, introduced by \cite{balintfy1964}. This policy features three control parameters: $s$, the reorder point; $c$, the can-order level; and $S$, the order-up-to-position. Under this policy, an item is ordered up to $S$ either when, at a demand epoch, its inventory position drops to or below $s$, or when, at a special replenishment opportunity, its inventory position is at or below $c$. Under the assumption of Poisson-distributed demands, \cite{ignall1969} proved that the $(s, c, S)$ policy is not optimal even for two-item problems. \cite{silver1974} proposed a decomposition method to compute $(s, c, S)$ policy parameters, in which the multi-item problem is decomposed into several single-item problems. This approximation technique was followed by \citep{melchiors2002, johansenandMelchiors2003}. \cite{kayisetal2008} modelled the two-item JRP as a semi-Markov decision model, and proposed an enumerative approach to approximate $(s, c, S)$ policies. In addition, \citep{schaackandSilver1972, thompstoneandSilver1975, silver1981, federgruenetal1984} studied JRPs with compound Poisson-distributed demands.
{\bf $(R, T)$ policy.} \cite{atkinsandIyogun1988} proposed two periodic-review $(R, T)$-type policies, namely the periodic policy $P$ and the modified periodic policy $MP$, which differ only in the way the ordering periods $T_i$ are determined. Under this policy, every $T_i$ periods, the inventory position of item $i$ is raised to $R_i$. Numerical experiments demonstrate that the $MP$ policy performs consistently better than the $(s, c, S)$ policy, and that the $P$ policy generally outperforms the $(s, c, S)$ policy except for problems involving small values of the group fixed ordering cost. {\bf $(Q, S)$ policy.} This policy was first proposed by \cite{renbergandPlanche1967}. Under this policy, whenever the total inventory position drops to the group reorder point, an order is placed to raise the inventory position of each item to its item-specific order-up-to-position $S$. The combined order quantity is $Q$, and the group reorder point is reached when the combined usage reaches $Q$. \cite{pantumsinchai1992} evaluated the computational performance of the $(Q, S)$ policy by comparing it against the $(s, c, S)$ policy, the $P$ policy, and the $MP$ policy on the basis of long-run total average costs. Computational experiments showed that the $MP$ policy consistently outperforms the $(s, c, S)$ policy on the test instances, and that both the $MP$ and $(Q, S)$ policies perform better as the group ordering cost increases. The study showed that the $(Q, S)$ policy is appropriate for items for which the stock-out costs are low and the major set-up cost is high relative to the minor set-up cost. {\bf $P(s, S)$ policy.} This policy was proposed by \cite{viswanathan1997} for periodic-review inventory systems, in which the inventory position of each item is reviewed at a fixed and constant time interval. At each review time, the $(s, S)$ policy is applied to each item, so that any item with inventory position at or below $s$ is ordered up to $S$.
For a fixed review period, the algorithm of \cite{zhengandFedergruen1991} is adopted to compute the optimal $(s, S)$ policy parameters. Computational studies indicated that, although the proposed policy requires more computational effort, it generally dominates the $MP$ policy, and dominates the $(s, c, S)$ and $(Q, S)$ policies for most test instances. {\bf $Q(s, S)$ policy.} \cite{nielsenandLarsen2005} combined features of the $(Q, S)$ policy and the $P(s, S)$ policy, and proposed the $Q(s, S)$ policy. Under this policy, the total inventory position is continuously reviewed, while the item-specific inventory positions are reviewed only when the total consumption since the last order reaches $Q$. Then every item with inventory position less than or equal to its respective reorder point $s$ is ordered up to $S$. An analytic solution is derived using Markov decision theory in \cite{nielsenandLarsen2005}. A computational study demonstrated that the $Q(s, S)$ policy outperforms the $P(s, S)$ policy, and dominates the $(Q, S)$ policy in $17$ of $18$ test instances on the data set of \cite{atkinsandIyogun1988}. {\bf $(Q, S, T)$ policy.} This continuous-review policy was proposed by \cite{ozkayaetal2006}. Decision makers raise the inventory position of each item $i$ to its order-up-to-position $S_i$ whenever a total of $Q$ demands has accumulated or $T$ time units have elapsed, whichever occurs first. This policy is a hybrid of the continuous-review $(Q, S)$ policy, proposed by \cite{renbergandPlanche1967}, and the periodic-review $(R, T)$ policy, proposed by \cite{atkinsandIyogun1988}; thus, it features the benefits of the two separate policies. A comprehensive numerical study indicates that the proposed policy dominates the $P(s, S)$, $(Q, S)$, $Q(s, S)$, and $(s, c, S)$ policies in $100$ of $139$ instances. {\bf $(R, S)$ policy.} This policy was proposed by \cite{bt1988} for controlling single-item inventory systems.
The policy requires decision makers to place an order at each replenishment period to raise the inventory position to the order-up-to-position $S$. This policy has been widely studied in the stream of single-item lot-sizing problems. \citep{tarimandkingsman2004, tarimandkingsman2006} formulated a mixed integer programming (MIP) model for computing optimal $(R, S)$ policy parameters. \cite{tarimetal2011} relaxed the MIP model and solved it as a shortest path problem, which does not require the use of any MIP or Constraint Programming (CP) commercial solver. In addition, \cite{ozenetal2012} presented a DP-based algorithm for solving small-size problems, and an approximation heuristic and a relaxation heuristic for tackling larger-size problems; \cite{tuncetal2014} suggested a deterministic equivalent MIP model. Recently, \cite{rossietal2015} generalised the discussions above and developed a unified MILP model for approximating $(R, S)$ policies by adopting the piecewise-linear approximation technique of \cite{rossietal2014}. Although various efficient modelling methods for computing $(R, S)$ policy parameters have been proposed, they generally control single-item inventory systems, and the stochastic JRP remains an open research area for the development of more efficient computational methods and control policies. The main purpose of this work is therefore to apply the periodic-review $(R, S)$ policy, originally proposed by \cite{bt1988} for tackling single-item lot-sizing problems, to JRPs with stochastic demand and fixed lead times; in the context of the JRP system, a periodic-review $(R, S)$ policy is adopted for each item.
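The replenishment rule of the $(R, S)$ policy adopted here for each item can be stated compactly; a minimal sketch of one review step for a single item (identifiers are illustrative, not part of the model):

```python
def rs_order_quantity(inventory_position: float,
                      is_review_period: bool,
                      order_up_to: float) -> float:
    """One step of a periodic-review (R, S) policy for a single item:
    at a review period, raise the inventory position to S; otherwise order nothing."""
    if is_review_period and inventory_position < order_up_to:
        return order_up_to - inventory_position
    return 0.0

# At a review period with inventory position 12 and S = 30, order 18 units
q = rs_order_quantity(12.0, True, 30.0)  # -> 18.0
```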
Note that when demand is stationary stochastic, the $(R, S)$ policy coincides with the $MP$ policy proposed by \cite{atkinsandIyogun1988}, under which, every $T_n$ periods, the inventory position of item $n$ is raised to the order-up-to-position $R_n$. However, the $(R, S)$ policy also deals with non-stationary stochastic demand, which was not addressed in \cite{atkinsandIyogun1988}. In this paper, we present an MILP approach for approximating $(R, S)$ policies under non-stationary stochastic demands. Nonlinear costs are approximated by leveraging the technique introduced in \citep{rossietal2014}. Numerical experiments investigate the effectiveness of our approach against competing policies from the literature. \section{Problem description}\label{problemdescription} Consider a periodic-review $N$-item inventory management system over a $T$-period planning horizon. We assume that the demands $d_t^n$ of item $n$, $n=1, \ldots, N$, in period $t$, $t=1, \ldots, T$, are independently distributed random variables with known probability density function $g_{t}^n(\cdot)$ and cumulative distribution function $G_t^n(\cdot)$. We assume that ordering decisions are made at the beginning of each time period. There is a group fixed ordering cost $K$ and an item-specific fixed ordering cost $k^n$. The group fixed ordering cost is incurred whenever an order is placed at a given time period, no matter which and how many items are included in this order. The item-specific fixed ordering cost is incurred whenever an order for item $n$ is placed at a given time period, no matter how many units of item $n$ are ordered. We define $Q_t^n$ as the quantity of item $n$ ordered in period $t$, which will be received after lead time $L^n$.
Then, the ordering cost of item $n$ in period $t$ with ordering quantity $Q_t^n$ can be written as \begin{align} &c_t^n(Q_t^n)=\begin{cases} k^n, &Q_t^n > 0,\\ 0, &Q_t^n=0.\end{cases} \end{align} Let $c_t(\vec{Q}_t)$ denote the ordering cost of period $t$ with ordering quantity vector $\vec{Q}_t = (Q_t^1, \ldots, Q_t^N)$. $c_t(\vec{Q}_t)$ has the following structure \begin{align} &c_t(\vec{Q}_t)=\begin{cases}K+ \sum_{n=1}^N c_t^n(Q_t^n), &\exists Q_t^n|Q_t^n >0,\\ 0,& \mbox{otherwise}. \end{cases} \end{align} A penalty cost $b^n$ is incurred for each unit of item $n$ of backordered demand per period, and a holding cost $h^n$ is charged for each unit of item $n$ carried from one period to the next. The immediate penalty and holding cost of period $t$ can be expressed as \begin{align} &L_t(\vec{y})=\sum_{n=1}^N\Big(b^n\cdot \text{E}[\max(d_t^n-y^n, 0)]+h^n\cdot \text{E}[\max(y^n-d_t^n, 0)]\Big), \end{align} where the vector $\vec{y}=(y^1, \ldots, y^N)$ is the inventory level immediately after orders are received at the beginning of period $t$, and ``$\text{E}$'' denotes the expectation taken with respect to the random demand. Let $I_t^n$ denote the net inventory level of item $n$ at the end of period $t$, which is also the opening inventory level of period $t+1$, and let $C_t(\vec{I}_{t-1})$ denote the expected total cost of an optimal policy over periods $t, \ldots, T$, given opening inventory level $\vec{I}_{t-1}=(I_{t-1}^1, \ldots, I_{t-1}^N)$ at the beginning of period $t$. Note that there is no outstanding order at the beginning of the planning horizon.
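The two-level fixed-cost structure of $c_t(\vec{Q}_t)$ can be sketched as follows; a hypothetical helper following the notation above ($K$ is paid once per group order, $k^n$ once per item ordered):

```python
def joint_ordering_cost(quantities, group_fixed_cost, item_fixed_costs):
    """Ordering cost c_t(Q_t): the group fixed cost K is paid once if any item
    is ordered, plus the item-specific fixed cost k^n for every item with a
    strictly positive order quantity."""
    if not any(q > 0 for q in quantities):
        return 0.0
    return group_fixed_cost + sum(
        k for q, k in zip(quantities, item_fixed_costs) if q > 0)

# Two of three items ordered: K = 10 plus the fixed costs of items 1 and 3
cost = joint_ordering_cost([5, 0, 2], 10.0, [1.0, 2.0, 3.0])  # -> 14.0
```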
Then, $C_t(\vec{I}_{t-1})$ can be written as \begin{align} \small C_t(\vec{I}_{t-1})=\begin{cases} \min_{\vec{Q}_t}\big\{ c_t(\vec{Q}_t)+ L_t(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}})+E[C_{t+1}(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}}-\vec{D}_t)]\big\}, & t \geq \vec{L}+1,\\ \min_{\vec{Q}_t}\big\{ c_t(\vec{Q}_t)+ L_t(\vec{I}_{t-1})+E[C_{t+1}(\vec{I}_{t-1}-\vec{D}_t)]\big\}, & \text{otherwise;} \end{cases} \end{align} where $\vec{D}_t=(d_t^1, \cdots, d_t^N)$ and $\vec{L}=(L^1, \cdots, L^N)$, and \begin{align} C_T(\vec{I}_{T-1})=\begin{cases} \min_{\vec{Q}_T} \big\{c_T(\vec{Q}_T)+ L_T(\vec{I}_{T-1}+\vec{Q}_{T-\vec{L}})\big\},& T\geq \vec{L}+1,\\ \min_{\vec{Q}_T} \big\{c_T(\vec{Q}_T)+ L_T(\vec{I}_{T-1})\big\}, & \text{otherwise;} \end{cases} \end{align} represents the boundary condition. Moreover, for $t=L^n+1, \ldots, T$, let us define \begin{align} G_t(\vec{I}_{t-1})=L_t(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}})+E[C_{t+1}(\vec{I}_{t-1}+\vec{Q}_{t-\vec{L}}-\vec{D}_t)]. \end{align} {\bf Example.} We consider an instance in which the group fixed ordering cost is $K=10$, the item-specific ordering cost is $k=0$, the holding cost is $h=1$, and the stock-out penalty cost is $b=5$. We control the inventory of two items over a planning horizon of $T=4$ periods. We assume that the demand of item $n$ in period $t$ follows a Poisson distribution with rate $\lambda^n_t$, where $\lambda^1_t=\lambda^2_t$ takes the values $\{3,6,9,6\}$ over the four periods. For simplicity, we assume that the lead time is $0$ for every item. The expected total cost, i.e. $C_1(\vec{I}_0)$, of an optimal policy, given initial inventory levels $I_0^1=I_0^2=0$, can be obtained via stochastic dynamic programming (SDP) and is equal to $65.4$. In Fig. \ref{fig:example_countour_plot} we plot $G_1(\vec{I}_0)$ for $I_0^1 \in [0, 14]$ and $I_0^2 \in [0, 14]$. \begin{figure}[!htbp] \begin{center} \includegraphics[width=8.4cm]{contour_plot.pdf} \caption{Expected total cost, i.e.
$G_1(\vec{I}_0)$, contour plot for the two-item joint replenishment numerical example} \label{fig:example_countour_plot} \end{center} \end{figure} \section{An MILP model for approximating non-stationary stochastic $(R, S)$ policies}\label{milp} In this section, we formulate the stochastic JRP under the $(R, S)$ policy as an MILP model. Under the $(R, S)$ policy, the replenishment periods and the associated order-up-to-positions are fixed at the beginning of the planning horizon, while the actual order quantities are decided at the beginning of each replenishment period. Note that in the context of the JRP, a periodic-review $(R, S)$ policy is adopted for each item. We first introduce a stochastic programming formulation in Section \ref{stochasticprogramming}, and then reformulate it as an MILP model in Section \ref{MINLPformulation}. \subsection{A stochastic program} \label{stochasticprogramming} Consider the periodic-review $N$-item, $T$-period JRP described in Section \ref{problemdescription}. We introduce binary variables $\delta_t$ and $y_t^n$, $t=1, \ldots, T$, $n=1, \ldots, N$: $\delta_t$ takes value $1$ if a group order is placed in period $t$, regardless of which and how many items are involved, and $0$ otherwise; $y_t^n$ is set to $1$ if item $n$ is replenished in period $t$, and $0$ otherwise. We further assume that the system is forced to place an order in period $1$, and that all orders should be received by the end of the planning horizon. We reformulate the stochastic dynamic programming model of Section \ref{problemdescription} as the stochastic program in Fig. \ref{spformulation}.
\begin{figure}[!htbp] \tiny \begin{align} &\min \sum_{t=1}^T\Big(K\cdot\delta_t+\sum_{n=1}^N(k^n\cdot y_t^n+b^n\text{E}[\max(-I_t^n, 0)]+h^n\text{E}[\max(I_t^n, 0)])\Big)\label{sp} \end{align} Subject to, $n=1, \ldots, N$, \begin{align} &\delta_t \geq y_t^n & t=1, \ldots, T \label{sp-1}\\ &y_1^n=1 & \label{sp-1-1}\\ &I_t^n =I_0^n -\sum_{j=1}^td_j^n & t=1, \ldots, L^n \label{sp-1-2}\\ &y_t^n=0 & t=T-L^n, \ldots, T \label{sp-2-1}\\ &I_t^n = I_0^n +\sum_{i=1}^{t-L^n}Q_i^n-\sum_{j=1}^td_j^n & t= L^n+1, \ldots, T \label{sp-2-2}\\ &y_t^n =\begin{cases} 1, &Q_t^n > 0,\\ 0, &Q_t^n = 0. \end{cases}\label{sp-5}\\ &Q_t^n \geq 0 \label{sp-3}\\ &\delta_t \in \{0, 1\} \label{sp-4}\\ &I_t^n \in \mathcal{R} \label{sp-6} \end{align} \caption{Stochastic programming formulation of the JRP.} \label{spformulation} \end{figure} The objective is to find the optimal replenishment plan so as to minimise the expected ordering costs, penalty costs, and holding costs of the $N$ items over the $T$-period planning horizon. Constraints (\ref{sp-1}) imply that if at least one item is ordered, then a group replenishment is issued. Constraints (\ref{sp-1-1}) force the system to replenish every item in period $1$. Constraints (\ref{sp-1-2}) are the inventory conservation constraints in periods $1, \ldots, L^n$: the inventory level at the end of period $t$ is equal to the initial inventory level minus the demands raised up to period $t$. Constraints (\ref{sp-2-1}) ensure all replenishments are received by the end of the planning horizon. Constraints (\ref{sp-2-2}) are the inventory conservation constraints in periods $1+L^n, \ldots, T$: the inventory level at the end of period $t$ is equal to the initial inventory level, plus all orders received before the end of period $t$, minus the demands raised up to period $t$. Constraints (\ref{sp-5})--(\ref{sp-6}) state the domains of $y_t^n$, $Q_t^n$, $\delta_t$, and $I_t^n$. \subsection{An MILP model}\label{MINLPformulation} The stochastic programming formulation in Fig.
\ref{spformulation} can be reformulated into an MILP model via the piecewise approximation approach of \citep{rossietal2014}. In the rest of this paper, let ``$\sim$'' denote the expectation operator. We introduce the first order loss function \[\mathcal{L}(x,\omega)=\int_{-\infty}^{\infty}\max(t-x,0)g_{\omega}(t)\,\text{d}t\] and its complementary function \[\hat{\mathcal{L}}(x,\omega)=\int_{-\infty}^{\infty}\max (x-t,0)g_{\omega}(t)\,\text{d}t,\] where $\omega$ is a random variable with probability density function $g_{\omega}(\cdot)$, and $x$ is a scalar variable. Consider a partition of the support $\Omega$ of $\omega$ into $W$ disjoint subregions $\Omega_1, \ldots, \Omega_W$, the probability masses $p_i=\text{Pr}\{\omega \in \Omega_i\}=\int_{\Omega_i}g_{\omega}(t)\,\text{d}t$, and the conditional expectations $\text{E}[\omega|\Omega_i]=\frac{1}{p_i}\int_{\Omega_i}tg_{\omega}(t)\,\text{d}t$, $i = 1, \ldots, W$. By applying Jensen's lower bound \footnote{Similarly, the Edmundson-Madansky upper bound can be applied for approximating the expected excess inventory and back-orders as well; for further details refer to \citep{rossietal2014}.}, $\mathcal{L}(x,\omega)$ and $\hat{\mathcal{L}}(x,\omega)$ can be approximated from below by piecewise linear functions, as presented in the following lemma. \begin{lemma}\label{lemma1} For the first order loss function and its complementary function, the lower bounds $\mathcal{L}_{lb}$ and $\hat{\mathcal{L}}_{lb}$, where $E[\omega|\Omega_i] \leq x \leq E[\omega|\Omega_{i+1}]$, $i=1, \ldots, W$, \begin{align} &\mathcal{L}(x, \omega) \geq \mathcal{L}_{lb}(x, \omega) = x\sum_{k=1}^ip_k-\sum_{k=1}^ip_k E[\omega|\Omega_k]-(x-\tilde{\omega}),\\ &\hat{\mathcal{L}}(x, \omega) \geq \hat{\mathcal{L}}_{lb}(x, \omega) = x\sum_{k=1}^ip_k-\sum_{k=1}^ip_k E[\omega|\Omega_k] \end{align} are piecewise linear functions with $W+1$ segments.
\end{lemma} We introduce two sets of variables $\tilde{B}_t^n \geq 0$ and $\tilde{H}_t^n \geq 0$ to represent lower bounds on $\text{E}[\max(-I_t^n, 0)]$ and $\text{E}[\max(I_t^n, 0)]$, $t=1, \ldots, T$, $n = 1, \ldots, N$. Then, the objective function (\ref{sp}) in Fig. \ref{spformulation} can be rewritten as \begin{align} \min \sum_{t=1}^T\Big(K\cdot \delta_t + \sum_{n=1}^N\big(k^n\cdot y_t^n +b^n\tilde{B}_t^n + h^n\tilde{H}_t^n\big)\Big). \end{align} We next construct constraints by separating the discussion into two parts. The first part involves periods $1, \dots, L^n$, $n=1, \ldots, N$, in which no order is received. Recall that there is no outstanding order at the beginning of the planning horizon and that the system is forced to issue an order in period $1$; the inventory level $I_t^n$ must therefore equal the initial inventory level of item $n$ at the beginning of the planning horizon minus the demand convolution over periods $1, \ldots, t$, i.e., $I_t^n=I_0^n -d_{1,t}^n$, where $d_{1,t}^n=d_1^n+\ldots +d_t^n$. We rewrite the expected back-orders and excess on-hand stocks using the first order loss function and its complementary function, $\mathcal{L}(I_0^n, d_{1,t}^n)$ and $\hat{\mathcal{L}}(I_0^n, d_{1,t}^n)$. By applying Lemma \ref{lemma1}, $\tilde{B}_t^n$ and $\tilde{H}_t^n$ can be written as follows, $t=1,\ldots,L^n$, $n=1, \ldots, N$, $i=1, \ldots, W$, \begin{align} &\tilde{B}_t^n \geq -\tilde{I}_t^n+\sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_kE[d_{1,t}^n|\Omega_k], \\ &\tilde{H}_t^n \geq \sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_kE[d_{1,t}^n|\Omega_k]. \end{align} Additionally, constraints (\ref{sp-1-2}) in Fig. \ref{spformulation} can be rewritten as, \begin{align} &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n=0, &t={1, \ldots, L^n}. \end{align} The second part involves periods $1+L^n, \ldots, T$, $n=1, \ldots, N$.
Consider a single cycle of item $n$ over periods $i,\ldots, j$, in which a single order is received at the beginning of period $i$, and the next order will be received at the beginning of period $j+1$. Since the lead time of item $n$ is $L^n$, the order that arrives in period $i$ must be issued in period $i-L^n$ with order-up-to-position $S_{i-L^n}^n$. Thus, $I_t^n$, $t=i, \ldots, j$, must equal the order-up-to-position $S_{i-L^n}^n$, minus the demand convolution over periods $i-L^n, \ldots, t$, i.e. $I_t^n=S_{i-L^n}^n-d_{i-L^n,t}^n$. We introduce a binary variable $P_{jt}^n$ which is set to one if the most recent order received before period $t$ arrived in period $j$, where $j\leq t$, $j=1+L^n, \ldots, t$, $t=1+L^n, \ldots, T$, and $n=1, \ldots, N$; and we introduce the following constraints, $t={1+L^n, \ldots, T}$, $n=1, \ldots, N$, \begin{align} &\sum_{j=1+L^n}^tP_{jt}^n=1, & \label{c-1}\\ &P_{j,t}^n \geq y_{j-L^n}^n - \sum_{k=j-L^n+1}^{t-L^n}y_k^n, & j={1+L^n, \ldots, t} \label{c-2}. \end{align} Constraints (\ref{c-1}) ensure that exactly one period $j$ is selected as the arrival period of the most recent order received before period $t$. Constraints (\ref{c-2}) uniquely identify that period. Therefore, the inventory level $I_t^n=\sum_{j=1+L^n}^t(S_{j-L^n}^n - d_{j-L^n,t}^n)P_{jt}^n$, where $t=1+L^n, \ldots, T$, and $S_{j-L^n}^n$ represents the order-up-to-position of item $n$ in period $j-L^n$. We write the back-orders and excess inventory as the first order loss function and its complementary function, $\sum_{j=1+L^n}^t\mathcal{L}(S_{j-L^n}^n, d_{j-L^n,t}^n)P_{jt}^n$ and $\sum_{j=1+L^n}^t\hat{\mathcal{L}}(S_{j-L^n}^n, d_{j-L^n,t}^n)P_{jt}^n$.
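The Jensen-type bound of Lemma \ref{lemma1} is easy to validate numerically. The sketch below is illustrative only: it uses normally distributed demand, builds the $W$ equal-probability subregions from an empirical sample rather than analytically, and all parameters (mean 50, standard deviation 10, $W=11$, sample size, seed) are arbitrary.

```python
import random

random.seed(42)

def complementary_loss(x, samples):
    """Empirical complementary first order loss function E[max(x - w, 0)]."""
    return sum(max(x - w, 0.0) for w in samples) / len(samples)

def piecewise_lower_bound(x, samples, W):
    """Jensen lower bound built from W equal-probability subregions
    Omega_1, ..., Omega_W of the empirical support:
    max over i of sum_{k<=i} p_k * (x - E[w | Omega_k])."""
    s = sorted(samples)
    m = len(s)
    chunks = [s[i * m // W:(i + 1) * m // W] for i in range(W)]
    p = [len(c) / m for c in chunks]
    cond_mean = [sum(c) / len(c) for c in chunks]
    best = partial = 0.0            # the i = 0 segment contributes the value 0
    for pk, ek in zip(p, cond_mean):
        partial += pk * (x - ek)
        best = max(best, partial)
    return best

demand = [random.gauss(50, 10) for _ in range(100_000)]
for x in (30, 50, 70):
    lb, exact = piecewise_lower_bound(x, demand, 11), complementary_loss(x, demand)
    assert lb <= exact              # Jensen's inequality holds segment by segment
    print(f"x={x}: lower bound {lb:.2f} <= exact {exact:.2f}")
```

The same construction with $\mathcal{L}_{lb}(x,\omega)=\hat{\mathcal{L}}_{lb}(x,\omega)-x+\tilde{\omega}$ covers the back-order term.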
By applying Lemma \ref{lemma1}, $\tilde{B}_t^n$ and $\tilde{H}_t^n$ can be written as, $t=1+L^n,\ldots,T$, $n=1, \ldots, N$, $i=1, \ldots, W$, \begin{align} &\tilde{B}_t^n \geq -\tilde{I}_t^n+(\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_kE[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n,\\ &\tilde{H}_t^n \geq (\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_kE[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n. \end{align} Note that $S_{j-L^n}^n = \tilde{I}_t^n+\tilde{d}_{j-L^n, t}^n$ whenever $P_{jt}^n=1$. In addition, constraints (\ref{sp-2-2})-(\ref{sp-3}) in Fig. \ref{spformulation} can be reformulated as follows, \begin{align} &y^n_{t-L^n}=0 \rightarrow \tilde{I}_{t}^n+\tilde{d}_{t}^n-\tilde{I}_{t-1}^n=0, &t={1+L^n, \ldots, T}, \\ &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n\geq 0, &t={1+L^n, \ldots, T}. \end{align} We now present the overall model in Fig. \ref{MILP}. The objective function (\ref{MINLP-0}) minimises the expected group fixed ordering costs, item-specific fixed ordering costs, penalty costs, and holding costs of the $N$ items over the $T$-period planning horizon. Constraints (\ref{MINLP-1}) imply that an individual item can only be included in a group replenishment if that replenishment is made. Constraints (\ref{MINLP-2}) - (\ref{MINLP-3}) assume that the first order is issued at the beginning of period $1$, and that there is no outstanding replenishment at the beginning of the planning horizon. Constraints (\ref{MINLP-6}) - (\ref{MINLP-7}) represent the expected back-orders and on-hand stocks of item $n$ over periods $1, \ldots, L^n$. Constraints (\ref{MINLP4-1}) state that all orders are received by the end of the planning horizon. Constraints (\ref{MINLP-4}) - (\ref{MINLP-5}) are inventory balance constraints. Constraints (\ref{MINLP-8}) - (\ref{MINLP-9}) identify the period $j$ in which the most recent replenishment received before period $t$ arrived.
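The linking logic of constraints (\ref{c-1})-(\ref{c-2}) can be unit-tested in isolation. The sketch below uses a hypothetical single-item scenario (lead time $L=2$, orders issued in periods $1$ and $3$ of a $6$-period horizon) and checks that exactly one $P_{jt}$ is activated per period, and that it points at the latest arrival.

```python
# Single-item check of the P_{jt} linking logic (hypothetical data:
# lead time L = 2, orders issued in periods 1 and 3 of a T = 6 horizon).
L, T = 2, 6
y = {1: 1, 2: 0, 3: 1, 4: 0, 5: 0, 6: 0}

def most_recent_arrival(t):
    """Period in which the most recent order received by period t arrived."""
    return max(j for j in range(1 + L, t + 1) if y.get(j - L, 0) == 1)

def P(j, t):
    """Closed form of the linking constraints: P_{jt} = 1 iff an order was
    issued in period j - L and none was issued in periods j - L + 1, ..., t - L."""
    later = sum(y.get(k, 0) for k in range(j - L + 1, t - L + 1))
    return 1 if y.get(j - L, 0) == 1 and later == 0 else 0

for t in range(1 + L, T + 1):
    active = [j for j in range(1 + L, t + 1) if P(j, t) == 1]
    assert active == [most_recent_arrival(t)]  # exactly one P_{jt} is set
```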
Constraints (\ref{MINLP-10}) - (\ref{MINLP-11}) represent the expected back-orders and on-hand stocks of item $n$ over periods $1+L^n, \ldots, T$. Constraints (\ref{MINLP-12}) - (\ref{MINLP-14}) state the domains of the binary variables $\delta_t$, $y_t^n$, and $P_{jt}^n$. \begin{figure}[!ht] \tiny \begin{equation} \min \sum_{t=1}^T\Big(K\cdot \delta_{t}+\sum_{n=1}^N\big(k^n\cdot y_t^n+h^n\tilde{H}_{t}^n+b^n\tilde{B}_{t}^n\big)\Big) \label{MINLP-0} \end{equation} Subject to, $n = 1, \ldots, N$ \begin{align} &\delta_t\geq y_t^n & t=1, \ldots, T \label{MINLP-1}\\ &y_1^n=1 & \label{MINLP-2}\\ &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n=0 &t={1, \ldots, L^n} \label{MINLP-3}\\ &\tilde{B}_t^n \geq -\tilde{I}_t^n+\sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_kE[d_{1,t}^n|\Omega_k], & t=1,\ldots, L^n, i=1, \ldots, W \label{MINLP-6}\\ &\tilde{H}_t^n \geq \sum_{k=1}^ip_kI_0^n-\sum_{k=1}^ip_kE[d_{1,t}^n|\Omega_k], & t=1,\ldots, L^n, i=1, \ldots, W \label{MINLP-7}\\ &y_t^n=0 & t=T-L^n, \ldots, T \label{MINLP4-1}\\ &\tilde{I}_t^n+\tilde{d}_t^n-\tilde{I}_{t-1}^n\geq 0 &t={1+L^n, \ldots, T} \label{MINLP-4}\\ &y^n_{t-L^n}=0 \rightarrow \tilde{I}_{t}^n+\tilde{d}_{t}^n-\tilde{I}_{t-1}^n=0 &t={1+L^n, \ldots, T} \label{MINLP-5}\\ &\sum_{j=1+L^n}^tP_{jt}^n=1 & t={1+L^n, \ldots, T} \label{MINLP-8}\\ &P_{j,t}^n \geq y_{j-L^n}^n - \sum_{k=j-L^n+1}^{t-L^n}y_k^n & t={1+L^n, \ldots, T}, j={1+L^n, \ldots, t} \label{MINLP-9}\\ &\tilde{B}_t^n \geq -\tilde{I}_t^n+(\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_kE[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n & t=1+L^n, \ldots, T, i=1, \ldots, W \label{MINLP-10}\\ &\tilde{H}_t^n \geq (\tilde{I}_t^n+\sum_{j=1+L^n}^t\tilde{d}_{j-L^n,t}^nP_{jt}^n)\sum_{k=1}^ip_k-\sum_{j=1+L^n}^t\sum_{k=1}^ip_kE[d_{j-L^n,t}^n|\Omega_k]P_{jt}^n & t=1+L^n, \ldots, T, i=1, \ldots, W \label{MINLP-11}\\ &\delta_t\in\{0, 1\} & t={1, \ldots, T} \label{MINLP-12}\\ &y_t^n \in\{0, 1\} &t={1, \ldots, T} \label{MINLP-13}\\ &P_{jt}^n\in\{0, 1\} & t={1+L^n, \ldots, T}, j={1+L^n, \ldots, t}
\label{MINLP-14} \end{align} \caption{MILP model for approximating $(R, S)$ policies} \label{MILP} \end{figure} By solving the model in Fig. \ref{MILP}, the optimal replenishment plan, including the group replenishment periods $\delta_t$, the item-specific replenishment periods $y_t^n$, and the item-specific order-up-to-positions $S_{t}^n=\tilde{I}_{t+L^n}^n+\tilde{d}_{t, t+L^n}^n$, is obtained for $t=1, \ldots, T$ and $n=1, \ldots, N$. {\bf Example.} We demonstrate the modelling strategy behind the MILP model on a $5$-item $10$-period example. It is assumed that the demand is Poisson-distributed with rate $\lambda_t^n$ presented in Table \ref{demandrate}. The initial inventory level is taken as zero. Other parameters are: $K=500$, $b=10$, $h=2$, $k^n=[120, 100, 80, 120, 150]$, and $L^n=[1, 2, 3, 1, 3]$. We employ eleven segments in the piecewise-linear approximations of $\tilde{B}_{t}^n$ and $\tilde{H}_{t}^n$ (for $n=1,\ldots, 5$, and $t=1,\ldots, 10$). \begin{table}[!ht] \centering \begin{tabular}{|c|cccccccccc|} \hline \diagbox{item}{$\lambda_t^n$}{period}& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline 1 & 40 & 40 & 40 & 40 & 40 & 40 & 40 & 40 & 40 & 40 \\ 2 & 5 & 64 & 29 & 54 & 70 & 50 & 54 & 45 & 13 & 50 \\ 3 & 40 & 55 & 72 & 86 & 78 & 51 & 42 & 38 & 30 & 26 \\ 4 & 41 & 58 & 75 & 63 & 40 & 35 & 33 & 18 & 29 & 39 \\ 5 & 45 & 40 & 22 & 31 & 38 & 46 & 59 & 62 & 46 & 40 \\ \hline \end{tabular} \caption{Demand rates $\lambda_t^n$ of the $5$-item $10$-period example} \label{demandrate} \end{table} The resulting expected total cost is $14236.24$. Replenishment plans of each item are presented in Fig. \ref{replenishplans}. Items $1$, $2$ and $4$ are replenished in periods $1$, $3$, $5$, and $8$; while items $3$ and $5$ are replenished only in periods $1$, $3$, and $5$, since orders in period $8$ could not be received by the end of the planning horizon.
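The $W=11$ probability masses $p_i$ and conditional expectations used in the piecewise approximation can be estimated by sampling when no closed form is at hand. A minimal sketch for the demand convolution of item $1$ over periods $1$--$2$, i.e. Poisson with rate $40+40=80$ (cf. Table \ref{demandrate}); the sample size and seed are arbitrary:

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the moderate rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Demand convolution d_{1,2} of item 1: Poisson(40) + Poisson(40).
samples = sorted(poisson(40) + poisson(40) for _ in range(50_000))
W, m = 11, len(samples)
chunks = [samples[i * m // W:(i + 1) * m // W] for i in range(W)]
p = [len(c) / m for c in chunks]             # probability masses p_i
cond = [sum(c) / len(c) for c in chunks]     # conditional expectations
print([round(e, 1) for e in cond])           # increasing, centred near 80
```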
Additionally, item $1$ is expected to be ordered every two periods with the same order-up-to-position $123$ by the nature of stationary demand, while it is ordered up to a higher position $164$ in period $5$ to cover demands in the next $3$ periods in order to coordinate with other items. \begin{figure}[!htbp] \centering \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=22mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,61.5) circle (1.5pt); \filldraw[black] (20,41.5) circle (1.5pt); \filldraw[black] (40,61.5) circle (1.5pt); \filldraw[black] (60,41.5) circle (1.5pt); \filldraw[black] (80,21.5) circle (1.5pt); \filldraw[black] (80,82) circle (1.5pt); \filldraw[black] (100,62) circle (1.5pt); \filldraw[black] (120,42) circle (1.5pt); \filldraw[black] (140,22) circle (1.5pt); \filldraw[black] (140,61.5) circle (1.5pt); \filldraw[black] (160,41.5) circle (1.5pt); \filldraw[black] (180,21.5) circle (1.5pt); \filldraw[black] (200,1.5) circle (1.5pt); \draw (0,0)--(0,61.5); \draw (0,61.5)--(20,41.5); 
\draw (20,41.5)--(40,21.5); \draw (40,21.5)--(40,61.5); \draw (40,61.5)--(60,41.5); \draw (60,41.5)--(80,21.5); \draw (80,21.5)--(80,82); \draw (80,82)--(100,62); \draw (100,62)--(120,42); \draw (120,42)--(140,22); \draw (140,22)--(140,61.5); \draw (140,61.5)--(160,41.5); \draw (160,41.5)--(180,21.5); \draw (180,21.5)--(200,1.5); \node at (50mm,22mm) {Item 1}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=24mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \filldraw (0,0) circle (1.5pt); \filldraw(0,78) circle (1.5pt); \filldraw (20,75.5) circle (1.5pt); \filldraw (40,43.5) circle (1.5pt); \filldraw (40,103.5) circle (1.5pt); \filldraw (60,89) circle (1.5pt); \filldraw (80,62) circle (1.5pt); \filldraw (80,118) circle (1.5pt); \filldraw (100,83) circle (1.5pt); \filldraw (120,58) circle (1.5pt); \filldraw (140,31) circle (1.5pt); \filldraw (140,105.5) circle (1.5pt); \filldraw (160,33) circle (1.5pt); \filldraw (180,26.5) circle (1.5pt); \filldraw (200,1.5) circle (1.5pt); \draw (0,0)--(0,78); \draw (0,78)--(20,75.5); \draw 
(20,75.5)--(40,43.5); \draw (40,43.5)--(40,103.5); \draw (40,103.5)--(60,89); \draw (60,89)--(80,62); \draw (80,62)--(80,118); \draw (80,118)--(100,83); \draw (100,83)--(120,58); \draw (120,58)--(140,31); \draw (140,31)--(140,105.5); \draw (140,105.5)--(160,33); \draw (160,33)--(180,26.5); \draw (180,26.5)--(200,1.5); \node at (50mm,24mm) {Item 2}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=33mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \draw (0,150) -- (2,150) node[left] {300}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,168) circle (1.5pt); \filldraw[black] (20,148) circle (1.5pt); \filldraw[black] (40,120.5) circle (1.5pt); \filldraw[black] (40,167) circle (1.5pt); \filldraw[black] (60,131) circle (1.5pt); \filldraw[black] (80,88) circle (1.5pt); \filldraw[black] (80,135) circle (1.5pt); \filldraw[black] (100,96) circle (1.5pt); \filldraw[black] (120,70.5) circle (1.5pt); \filldraw[black] (140,49.5) circle (1.5pt); \filldraw[black] (160,30.5) circle (1.5pt); \filldraw[black] (180,15.5) circle (1.5pt); 
\filldraw[black] (200,2.5) circle (1.5pt); \draw (0,0)--(0,168); \draw (0,168)--(20,148); \draw (20,148)--(40,120.5); \draw (40,120.5)--(40,167); \draw (40,167)--(60,131); \draw (60,131)--(80,88); \draw (80,88)--(80,135); \draw (80,135)--(100,96); \draw (100,96)--(120,70.5); \draw (120,70.5)--(140,49.5); \draw (140,49.5)--(160,30.5); \draw (160,30.5)--(180,15.5); \draw (180,15.5)--(200,2.5); \node at (50mm,33mm) {Item 3}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=22mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,89) circle (1.5pt); \filldraw[black] (20,68.5) circle (1.5pt); \filldraw[black] (40,39.5) circle (1.5pt); \filldraw[black] (40,91) circle (1.5pt); \filldraw[black] (60,53.5) circle (1.5pt); \filldraw[black] (80,22) circle (1.5pt); \filldraw[black] (80,64.5) circle (1.5pt); \filldraw[black] (100,44.5) circle (1.5pt); \filldraw[black] (120,27) circle (1.5pt); \filldraw[black] (140,10.5) circle (1.5pt); \filldraw[black] (140,44.5) circle (1.5pt); \filldraw[black] 
(160,35.5) circle (1.5pt); \filldraw[black] (180,21) circle (1.5pt); \filldraw[black] (200,1.5) circle (1.5pt); \draw (0,0)--(0,89); \draw (0,89)--(20,68.5); \draw (20,68.5)--(40,39.5); \draw (40,39.5)--(40,91); \draw (40,91)--(60,53.5); \draw (60,53.5)--(80,22); \draw (80,22)--(80,64.5); \draw (80,64.5)--(100,44.5); \draw (100,44.5)--(120,27); \draw (120,27)--(140,10.5); \draw (140,10.5)--(140,44.5); \draw (140,44.5)--(160,35.5); \draw (160,35.5)--(180,21); \draw (180,21)--(200,1.5); \node at (50mm,22mm) {Item 4}; \end{tikzpicture} \\ \begin{tikzpicture}[x=0.0480952380952381cm, y=0.018518518518518517cm] \draw [-latex] ([xshift=-2mm] 0.0,0) -- ([xshift=100mm] 3.5,0) node[right] {Time}; \draw (0.0,0) -- +(0mm,1mm) -- +(0mm,-1.5mm) node[below] {0}; \draw (20,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {1}; \draw (40,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {2}; \draw (60,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {3}; \draw (80,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {4}; \draw (100,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {5}; \draw (120,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {6}; \draw (140,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {7}; \draw (160,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {8}; \draw (180,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {9}; \draw (200,0) -- +(0mm,1.5mm) -- +(0mm,-1.5mm) node[below] {10}; \draw [-latex] ([yshift=-0mm] 0,-10.0) -- ([yshift=32mm] 0, 10.0) node[left] {$\tilde{IP}$}; \draw (0,50) -- (2,50) node[left] {100}; \draw (0,100) -- (2,100) node[left] {200}; \draw (0,150) -- (2,150) node[left] {300}; \filldraw[black] (0,0) circle (1.5pt); \filldraw[black] (0,90) circle (1.5pt); \filldraw[black] (20,67.5) circle (1.5pt); \filldraw[black] (40,47.5) circle (1.5pt); \filldraw[black] (40,100) circle (1.5pt); \filldraw[black] (60,89) circle (1.5pt); \filldraw[black] (80,73.5) circle (1.5pt); \filldraw[black] (80,148) circle (1.5pt); \filldraw[black] (100,129) circle (1.5pt); 
\filldraw[black] (120,106) circle (1.5pt); \filldraw[black] (140,76.5) circle (1.5pt); \filldraw[black] (160,45.5) circle (1.5pt); \filldraw[black] (180,22.5) circle (1.5pt); \filldraw[black] (200,1.5) circle (1.5pt); \draw (0,0)--(0,90); \draw (0,90)--(20,67.5); \draw (20,67.5)--(40,47.5); \draw (40,47.5)--(40,100); \draw (40,100)--(60,89); \draw (60,89)--(80,73.5); \draw (80,73.5)--(80,148); \draw (80,148)--(100,129); \draw (100,129)--(120,106); \draw (120,106)--(140,76.5); \draw (140,76.5)--(160,45.5); \draw (160,45.5)--(180,22.5); \draw (180,22.5)--(200,1.5); \node at (50mm,32mm) {Item 5}; \end{tikzpicture} \\ \caption{Replenishment plans of the $5$-item $10$-period example} \label{replenishplans} \end{figure} \section{MILP model for approximating the optimal ($\sigma, \vec{S}$) policies} \label{kconvexity} Since the landmark study of \cite{Scarf1960}, which proved the optimality of the $(s, S)$ policy for the single-item inventory system, there have been few attempts to prove optimality for multi-item inventory systems, e.g., \citep{johnson1967, kalin1980, ohnoandishigaki2001, gallegoandsethi2005}. In this section we show how the MILP model proposed in Section \ref{MINLPformulation} can be used to approximate the optimal replenishment plan under the $(\sigma, \vec{S})$ policy for the JRP. \begin{definition} Function $G(\cdot): \mathcal{R}^N\rightarrow \mathcal{R}$ is $K$-convex if \[G(ax+(1-a)z) \leq a G(x)+(1-a)[G(z)+K\delta(z-x)],\] where $\delta(0)=0$, $\delta(i)=1$ for $i>0$, $x \leq z$, and $a \in [0,1]$. \end{definition} \vspace{1em} \cite{gallegoandsethi2005} characterised the optimal policy for the joint setup cost case by studying the function \begin{align} G_t(\vec{y}) =L_t(\vec{y})+ C_{t+1}(\vec{y}-\vec{d}_t). \end{align} If $G_t(\cdot)$ is continuous, coercive, and $K$-convex, it attains a global minimum at $\vec{S}_t$. Define set $\Sigma = \{\vec{I}_{t-1}\leq \vec{S}_t| G_t(\vec{I}_{t-1}) \leq G_t(\vec{S}_t)+K\}$, and set $\sigma=\{\vec{I}_{t-1}\leq \vec{S}_t|\vec{I}_{t-1} \notin \Sigma\}$.
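The $K$-convexity condition above can be checked numerically on a grid. A minimal sketch, restricted to the scalar case for brevity and using an arbitrary convex test function (any convex function is $K$-convex for $K \geq 0$):

```python
# Numerical check of the K-convexity definition (scalar case for brevity):
# G(a*x + (1-a)*z) <= a*G(x) + (1-a)*(G(z) + K*delta(z - x)), for x <= z.
K = 5.0

def G(v):
    return (v - 3.0) ** 2          # convex, hence K-convex for any K >= 0

def is_K_convex(G, K, grid, alphas):
    for x in grid:
        for z in grid:
            if x > z:
                continue
            for a in alphas:
                delta = 1.0 if z > x else 0.0
                lhs = G(a * x + (1 - a) * z)
                rhs = a * G(x) + (1 - a) * (G(z) + K * delta)
                if lhs > rhs + 1e-9:
                    return False
    return True

grid = [i * 0.5 for i in range(-10, 11)]
alphas = [i / 10 for i in range(11)]
assert is_K_convex(G, K, grid, alphas)
print("K-convexity verified on the grid")
```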
Lemma \ref{lemma_K-convexity} shows that the optimal replenishment plan is to order up to $\vec{S}_t$ if the opening inventory levels satisfy $\vec{I}_{t-1} \in \sigma$ and $\vec{I}_{t-1} \leq \vec{S}_t$; and not to order otherwise. \vspace{1em} \begin{lemma}[\cite{gallegoandsethi2005}]\label{lemma_K-convexity} If $G$ is $K$-convex, continuous, and coercive, then \begin{itemize} \item $\vec{I} \in \Sigma \Rightarrow G(\vec{I}) \leq K+G(\vec{S})$, \item $\vec{I} \in \sigma \Rightarrow G(\vec{I}) > K+G(\vec{S})$. \end{itemize} \end{lemma} We next show that the MILP model in Fig. \ref{MILP} can be adjusted to approximate set $\sigma$ and $\vec{S}$. Since the $(\sigma, \vec{S})$ policy optimises group fixed ordering costs, holding costs, and penalty costs, we first drop the item-specific fixed ordering cost, i.e., $k^n\cdot y_t^n$, in the objective function (\ref{MINLP-0}). We then set the lead time of all items to $0$, i.e.: $L^n=0$, $n=1, \ldots, N$. Since orders are delivered immediately, we drop constraints (\ref{MINLP-2}) - (\ref{MINLP4-1}). Due to the complexity of $\sigma$, it is impractical to derive a closed-form expression for it. Alternatively, one may propose a strategy to determine whether given initial inventory levels satisfy $\vec{I}_0 \in \sigma$. By solving our modified MILP model over the planning horizon $k, \ldots, T$, we observe the minimised expected total cost $G_k(\vec{S}_k)$, the order-up-to-levels $\vec{S}_k$, and the first-period order decision $\delta_k$. If $\delta_k =1$, then $\vec{I}_{k-1} \in \sigma$; otherwise, $\vec{I}_{k-1} \in \Sigma$. Therefore our MILP model can be used to determine whether given initial inventory levels satisfy $\vec{I}_0 \in \sigma$. Moreover, by repeating this procedure, one can approximate the optimal replenishment strategy for every period $k=1, \ldots, T$. {\bf Example.} We illustrate this procedure on the 2-item 4-period example presented in Section \ref{problemdescription}.
Assuming the initial inventory level $\vec{I}_0 \in [0, \ldots, 20]$, we plot the expected total cost contours obtained via the modified MILP in Fig. \ref{fig:example_countour_plot_milp}. Note that there are two similar minima, which is expected since the ordering cost is relatively small and the demand variance is large. We plot set $\sigma$ and $\vec{S}$ obtained via the modified MILP model, and compare them with those obtained via stochastic dynamic programming in Fig. \ref{fig:example_optimality}. The optimal policy is to place an order whenever the inventory levels $\vec{I}_0=(I_0^1, I_0^2)$ fall within set $\sigma$, and not to place an order if $\vec{I}_0$ falls within $\Sigma$. We observe that set $\sigma$ and $\vec{S}$ obtained via the modified MILP model neatly approximate those obtained via stochastic dynamic programming. \begin{figure}[!ht] \centering \subfigure[Expected total cost contour plot obtained via MILP approximation] { \label{fig:example_countour_plot_milp} \includegraphics[width=0.45\textwidth]{contour_plot_mip.pdf} } \subfigure[Plot of expected total costs obtained via MILP and SDP]{ \label{fig:example_optimality} \includegraphics[width=0.45\textwidth]{example_sdp_milp_comparison.pdf} } \caption{Plot of expected total costs for the two-item joint replenishment numerical example} \end{figure} \section{Computational Experiments}\label{computationalstudy} In this section we assess the cost performance of the $(R, S)$ policy by comparing it against the $(Q, S, T)$ policy \citep{ozkayaetal2006}, the $Q(s, S)$ policy \citep{nielsenandLarsen2005}, the $P(s, S)$ policy \citep{viswanathan1997}, the $(Q, S)$ policy \citep{pantumsinchai1992}, the $MP$ policy \citep{atkinsandIyogun1988}, the $(s, c, S)_M$ policy \citep{melchiors2002}, and the $(s, c, S)_F$ policy \citep{federgruenetal1984}, on the data sets of \cite{atkinsandIyogun1988} and \cite{viswanathan1997}. These data sets consider stationary demand over an infinite horizon.
Unfortunately, computing $(R, S)$ policy parameters for infinite horizon JRPs via our MILP model is computationally expensive; however, since demand is stationary, it is possible to derive an efficient shortest path reformulation, which we present in \ref{shortespathapproximation} and use in our computational study. Computational experiments are conducted using IBM ILOG CPLEX Optimization Studio 12.7 and Matlab R2016a on a 64-bit machine with a 3.20 GHz Intel Core i5-6500 CPU and 16.0 GB RAM. Since the shortest path reformulation operates over a finite horizon, in order to compare the cost performance of the $(R, S)$ policy with the continuous-review $(s, c, S)$, $(Q, S)$, and $(Q, S, T)$ policies, we discretize each time period into $20$ small periods. We consider a planning horizon length of $6.6$ periods for a total of $132$ small periods. For each test instance, we first obtain the optimal replenishment plan by solving the shortest path reformulation presented in \ref{shortespathapproximation}. The computational time is limited to $5$ minutes; if a timeout occurs, the best available solution is adopted. Next, we simulate the expected average cost of each test instance via Monte Carlo simulation (100,000 replications). Finally, we compare the average cost per small period against the average cost under existing policies. The data set of \cite{atkinsandIyogun1988} assumes that the demand of each item follows a stationary Poisson distribution with rate $\lambda^n$, $n=1, \ldots, 12$. The item-specific fixed ordering cost $K^n$, expected demand $\lambda^n$, and lead time $L^n$ are displayed in Table \ref{set1parameters}. Items share the same penalty cost $b=30$, holding cost $h \in \{2, 6, 20\}$, and group fixed ordering cost $K \in \{20, 50, 100, 150, 500\}$.
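The simulation step can be sketched as follows. This is illustrative only, not the actual evaluation harness: it covers a single item with zero lead time, uses a fixed hypothetical plan loosely based on item $1$ of the earlier finite-horizon example, and a smaller replication count for brevity.

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Knuth's Poisson sampler."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate(plan, lam, K, h, b, T, reps=2_000):
    """Monte Carlo estimate of the expected total cost of a fixed (R, S)
    plan for one item; zero lead time is assumed for brevity."""
    total = 0.0
    for _ in range(reps):
        inventory, cost = 0, 0.0
        for t in range(1, T + 1):
            if t in plan:              # review period: order up to S_t
                cost += K
                inventory = plan[t]
            inventory -= poisson(lam)  # demand realisation of period t
            cost += h * max(inventory, 0) + b * max(-inventory, 0)
        total += cost
    return total / reps

plan = {1: 123, 3: 123, 5: 164, 8: 123}   # hypothetical review periods/levels
print(round(simulate(plan, 40, 500, 2, 10, 10), 2))
```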
\begin{table}[!htbp] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c} \hline items& 1 & 2& 3& 4& 5& 6& 7& 8& 9& 10& 11& 12 \\ \hline $K^n$& 10& 10& 20& 20& 40& 20& 40& 40& 60& 60& 80& 80 \\ $\lambda^n$& 40& 35& 40& 40& 40& 20& 20& 20& 28& 20& 20& 20\\ $L^n$&0.2& 0.5& 0.2& 0.1& 0.2& 1.5& 1.0& 1.0& 1.0& 1.0& 1.0& 1.0\\ \hline \end{tabular} \caption{$K^n$, $\lambda^n$, and $L^n$ of the data set of \cite{atkinsandIyogun1988}} \label{set1parameters} \end{table} The data set of \cite{atkinsandIyogun1988} contains some unusual lot-sizing instances; more specifically, instances for which the group as well as item fixed ordering costs become negligible in comparison to holding costs. In the lot-sizing literature the fixed ordering cost is commonly assumed to be greater than the holding cost \citep[see][p. 62, Property 2]{citeulike:8526547}; moreover, the penalty cost should not be smaller than the holding cost. Additionally, we observe that instances in which the penalty cost exceeds the fixed ordering cost are of limited interest, since in such cases the inventory system tends to place an order in every period rather than incur back-orders. To focus on meaningful lot-sizing instances --- instances in which a trade-off between fixed ordering and holding/penalty costs is sought --- we filter the test instances of the data set of \cite{atkinsandIyogun1988} by using the following condition: $K > b \geq h$. We also check the order frequency in each period and discard instances in which orders are issued too frequently --- i.e., instances in which a replenishment is issued more than twice per time period --- as it turns out that for these instances order coordination is straightforward due to negligible item fixed ordering costs: if a group order is placed, all items are ordered. We present computational results in Table \ref{dataset1}.
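The filtering rule can be stated compactly. The sketch below applies it to hypothetical parameter grids matching the values listed above (the order-frequency check is omitted):

```python
from itertools import product

# Hypothetical parameter grids taken from the data-set description above.
K_values = [20, 50, 100, 150, 500]   # group fixed ordering cost
b_values = [30]                      # penalty cost
h_values = [2, 6, 20]                # holding cost

# Keep only meaningful lot-sizing instances: K > b >= h.
kept = [(K, b, h) for K, b, h in product(K_values, b_values, h_values)
        if K > b >= h]
print(len(kept), "of", len(K_values) * len(b_values) * len(h_values),
      "parameter combinations kept")
```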
\begin{table*}[!htbp] \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{lll|l|rrrrrrr} \hline \multicolumn{1}{r}{\multirow{2}[4]{*}{$K$}} & \multicolumn{1}{r}{\multirow{2}[4]{*}{$b$}} & \multicolumn{1}{r|}{\multirow{2}[4]{*}{$h$}} & \multicolumn{1}{r|}{\multirow{2}[4]{*}{$(R, S)$}} & \multicolumn{7}{c}{Average cost improvement $\Delta\%$} \\ \cline{5-11} & & \multicolumn{1}{r|}{} & & $(Q, S, T)$ & $Q(s, S)$ & $P(s, S)$ & $(Q, S)$ & $MP$ & $(s, c, S)\_M$ & $(s, c, S)\_F$ \\ \hline \multicolumn{1}{r}{50} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{936.94} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.91}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.84}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.33}} & 4.38 & 0.68 & 0.79 & 2.14 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{990.50} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.05}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.45}} & 0.75 & 2.57 & 1.77 & 4.39 & 6.81 \\ \multicolumn{1}{r}{150} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1046.56} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.24}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.01}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.35}} & 0.52 & 0.65 & 5.68 & 8.36 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1072.97} & 1.32 & 0.47 & 1.11 & 1.34 & 2.12 & 8.34 & 12.31 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1639.75} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.23}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.52}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.02}} & 2.15 & 0.00 & 1.24 & 3.31 \\ \multicolumn{1}{r}{150} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1707.05} & 0.64 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.60}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.07}} & 1.46 & 0.95 & 2.34 & 6.68 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{6} & 
\multicolumn{1}{r|}{1766.38} & 1.16 & 0.08 & 0.65 & 1.17 & 1.67 & 3.08 & 9.04 \\ \multicolumn{1}{r}{150} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{20} & \multicolumn{1}{r|}{2718.47} & 0.77 & 4.32 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.26}} & 1.27 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.21}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.59}} & 6.20 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{30} & \multicolumn{1}{r|}{20} & \multicolumn{1}{r|}{2812.52} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-3.23}} & 0.14 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.72}} & 0.77 & 0.34 & 0.25 & 8.34 \\ \hline \multicolumn{4}{l|}{Average cost improvement $\Delta\%$} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.09}} & 0.07 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.14}} & 1.74 & 0.89 & 2.84 & 7.02 \\ \hline \end{tabular}% } \caption{Computational results on the data set of \cite{atkinsandIyogun1988}} \label{dataset1}% \end{table*} Let $\Delta \%$ denote the percentage gap between the expected average cost of an existing policy and that of the proposed $(R, S)$ policy, over the expected average cost of the $(R, S)$ policy. By definition, a positive $\Delta\%$ indicates that the $(R, S)$ policy outperforms the existing policy. Note that expected average costs under the $(Q, S, T)$, $Q(s, S)$, $P(s, S)$, $(Q, S)$, and $(s,c, S)_M$ policies are obtained from \cite{ozkayaetal2006}, that of the $(s, c, S)_F$ policy from \cite{melchiors2002}, and that of the $MP$ policy from \cite{viswanathan1997}. We observe that the $(R, S)$ policy fully dominates all policies in $2$ of $9$ test instances; $(Q, S, T)$ is the best policy in $2$ instances; $Q(s, S)$ is the best policy in $4$ instances; $P(s, S)$ is the best policy in $1$ instance. Moreover, the $(R, S)$ policy outperforms the $(Q, S)$ and $(s, c, S)_F$ policies on all test instances, and no single policy dominates across all test instances.
Compared with the $(s, c, S)_M$ and $(s, c, S)_F$ policies, the average cost improvement $\Delta\%$ increases with the group fixed ordering cost and decreases with the holding cost. That is, an increase in the group fixed ordering cost or a decrease in the holding cost improves the cost performance of the $(R, S)$ policy. It is difficult to make a general remark with respect to the group fixed ordering cost and the holding cost in the comparison with the $(Q, S, T)$, $Q(s, S)$, $P(s, S)$, $(Q, S)$, and $MP$ policies. On average, the $(R, S)$ policy performs better than the $Q(s, S)$, $(Q, S)$, $MP$, $(s, c, S)_M$, and $(s, c, S)_F$ policies with average improvements of $0.07\%$, $1.74\%$, $0.89\%$, $2.84\%$, and $7.02\%$, respectively; however, the $(Q, S, T)$ and $P(s, S)$ policies perform slightly better than the $(R, S)$ policy with average improvements of $0.09\%$ and $0.14\%$, respectively. The data set of \cite{viswanathan1997} adopts the same parameters as the data set of \cite{atkinsandIyogun1988}, except that $b \in \{10, 50, 100, 200, 1000, 5000, 10000, 20000\}$, $h \in \{2, 6, 10, 200, 600, 1000\}$, and $K \in \{20, 50, 100, 200, 500\}$. We filter the computational results by using the same conditions as before. We present the computational results of the $(R, S)$ policy on the data set of \cite{viswanathan1997} in Table \ref{dataset2}. We observe that the $(R, S)$ policy dominates all other policies in $13$ of the $31$ test instances; $(Q, S, T)$ is the best policy in $13$ instances; $Q(s, S)$ is the best policy in $9$ instances; $P(s, S)$ is the best policy in $1$ instance. There is no dominant policy across all test instances. Regarding the comparison with other policies, the average cost improvement $\Delta\%$ decreases as the penalty cost increases, while there is no obvious trend with respect to the group fixed ordering cost and the holding cost. 
On average, the $(R, S)$ policy performs better than $Q(s, S)$, $P(s, S)$, $(Q, S)$, $MP$, and $(s, c, S)_F$ policy with average cost improvements of $0.37\%$, $0.37\%$, $1.81\%$, $1.41\%$, and $1.67\%$; while the $(Q,S,T)$ policy performs slightly better than the $(R, S)$ policy with average cost improvement $0.19\%$. \begin{table*}[!ht] \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{rrr|r|rrrrrr} \hline \multicolumn{1}{r}{\multirow{2}[4]{*}{$K$}} & \multicolumn{1}{r}{\multirow{2}[4]{*}{$b$}} & \multicolumn{1}{r|}{\multirow{2}[4]{*}{$h$}} & \multicolumn{1}{c|}{\multirow{2}[4]{*}{$(R, S)$}} & \multicolumn{6}{c}{Average cost improvement $\Delta\%$} \\ \cline{5-10} & & \multicolumn{1}{r|}{} & & $(Q, S, T)$ & $Q(s, S)$ & $P(s, S)$ & $(Q, S)$ & $MP$ & $(s, c, S)_F$ \\ \hline \multicolumn{1}{r}{20} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{772.25} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.03}} & 0.48 & 0.76 & 8.30 & 1.79 & 1.80 \\ \multicolumn{1}{r}{50} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{813.94} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.48}} & 0.12 & 0.62 & 0.47 & 1.64 & 1.74 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{861.05} & 0.23 & 0.70 & 1.17 & 3.68 & 2.20 & 2.38 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{932.86} & 1.62 & 1.83 & 2.38 & 2.88 & 3.42 & 3.73 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1131.42} & 0.14 & 0.14 & 0.59 & 0.18 & 1.60 & 2.12 \\ \multicolumn{1}{r}{20} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1166.06} & 0.85 & 2.84 & 0.01 & 7.99 & 1.08 & 1.04 \\ \multicolumn{1}{r}{50} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1222.82} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.15}} & 1.83 & 0.62 & 5.53 & 1.68 & 1.73 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{10} & 
\multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1283.92} & 1.33 & 2.50 & 1.26 & 4.49 & 2.34 & 2.46 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1413.72} & 0.30 & 1.23 & 1.02 & 1.82 & 2.10 & 2.33 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1658.48} & 2.26 & 2.20 & 2.52 & 2.30 & 3.59 & 4.03 \\ \multicolumn{1}{r}{50} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1420.63} & 1.57 & 5.30 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.03}} & 5.88 & 1.07 & 1.07 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1497.96} & 1.67 & 4.28 & 0.75 & 4.37 & 1.87 & 1.93 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1637.27} & 0.66 & 2.18 & 1.15 & 2.16 & 2.28 & 2.44 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{10} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{1935.07} & 1.60 & 1.60 & 1.79 & 1.60 & 2.90 & 3.27 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1043.31} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.95}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.79}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.23}} & 1.98 & 0.78 & 0.92 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1132.61} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.29}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.48}} & 0.30 & 0.50 & 1.31 & 1.97 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1327.95} & 0.08 & 0.08 & 0.82 & 0.13 & 1.83 & 2.30 \\ \multicolumn{1}{r}{100} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1794.60} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.37}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-2.65}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-2.09}} & 0.94 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.09}} & \textcolor[rgb]{ 1, 0, 
0}{\textbf{-0.97}} \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{1938.25} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.56}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.89}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.05}} & 0.13 & 0.34 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2244.01} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & 0.43 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.26}} & 1.44 & 1.87 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{2448.79} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-3.83}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-2.11}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.55}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.75}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.53}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.34}} \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{50} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{2796.29} & 0.35 & 0.35 & 0.97 & 0.35 & 2.00 & 2.40 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1200.38} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.61}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.94}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.11}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.01}} & 0.90 & 1.13 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1406.67} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.76}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.83}} & 0.16 & 0.16 & 1.17 & 1.60 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2106.78} & 0.44 & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.23}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.48}} & 0.94 & 0.54 & 0.73 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2449.51} & \textcolor[rgb]{ 1, 0, 
0}{\textbf{-0.88}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.88}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.07}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.07}} & 0.94 & 1.33 \\ \multicolumn{1}{r}{200} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{2728.08} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-3.41}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.90}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-1.29}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.49}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.27}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.10}} \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{100} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{3108.05} & 0.22 & 0.22 & 0.94 & 0.94 & 1.96 & 2.33 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{200} & \multicolumn{1}{r|}{2} & \multicolumn{1}{r|}{1470.29} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.90}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.90}} & 0.05 & 0.05 & 1.05 & 1.45 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{200} & \multicolumn{1}{r|}{6} & \multicolumn{1}{r|}{2620.77} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.91}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.91}} & 0.08 & 0.08 & 1.09 & 1.45 \\ \multicolumn{1}{r}{500} & \multicolumn{1}{r}{200} & \multicolumn{1}{r|}{10} & \multicolumn{1}{r|}{3421.28} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.94}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.94}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.04}} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.04}} & 0.97 & 1.30 \\ \hline \multicolumn{4}{l|}{Average cost improvement $\Delta\%$} & \textcolor[rgb]{ 1, 0, 0}{\textbf{-0.19}} & 0.37 & 0.37 & 1.81 & 1.41 & 1.67 \\ \hline \end{tabular}% } \caption{Computational results on the data set of \cite{viswanathan1997}} \label{dataset2}% \end{table*}% Even though the $(R, S)$ policy does not fully dominate the other competing policies, it presents a key advantage: {\em in contrast to all other policies in the literature, it is able to tackle stationary as well as nonstationary demand}. 
\section{Conclusion}\label{conclusion} In this paper, we presented a mathematical programming approach for controlling a multi-item inventory system with joint replenishment under the $(R, S)$ policy. We first presented an MILP-based model for approximating optimal $(R, S)$ policies, built upon the piecewise-linear approximation technique proposed by \citet{rossietal2014}. We further demonstrated that the MILP model can be used to approximate the $(\sigma, \vec{S})$ policy. We conducted an extensive computational study comprising $55$ instances. We first evaluated our approach on the data set of \citet{atkinsandIyogun1988}. This evaluation demonstrates that the $(R, S)$ policy fully dominates the other competing policies in the literature in $2$ out of the $10$ test instances considered. The $(R, S)$ policy performs better than the $Q(s, S)$, $(Q, S)$, $MP$, $(s, c, S)_M$, and $(s, c, S)_F$ policies with average improvements of $0.17\%$, $2.61\%$, $0.33\%$, $1.24\%$, and $4.60\%$, respectively; however, the $(Q, S, T)$ and $P(s, S)$ policies perform slightly better than the $(R, S)$ policy with average improvements of $1.01\%$ and $0.71\%$, respectively. Computational experiments on the data set of \citet{viswanathan1997} indicate that $(R, S)$ is the best policy in $13$ out of the $45$ test instances. The $(R, S)$ policy performs better than the $(Q, S)$, $MP$, and $(s, c, S)_F$ policies with average cost improvements of $1.81\%$, $0.62\%$, and $0.82\%$, respectively; while the $(Q, S, T)$, $Q(s, S)$, and $P(s, S)$ policies perform slightly better with average cost improvements of $1.11\%$, $0.64\%$, and $0.40\%$, respectively. Most importantly, the $(R, S)$ policy comes with the additional advantage of being able to tackle stationary as well as nonstationary demand. Future research may focus on investigating the cost performance of the $(R, S)$ policy in a rolling horizon setting.
\section{Introduction} \label{Sec1} Several decision-making methods involve comparing the criteria and the alternatives in pairs, making judgements, and compiling the results into multiplicative positive reciprocal pairwise comparison matrices. For instance, this is a crucial element of the popular Analytic Hierarchy Process (AHP), introduced by Thomas L. Saaty \citep{Saaty1977, Saaty1980}. He suggested deriving the priorities from such a matrix by its principal right eigenvector, a procedure called the \emph{eigenvector method}. Since AHP has a wide variety of applications \citep{BarkerZabinsky2011, Ho2008, SaatyVargas2012, SubramanianRamanathan2012, TamTummala2001, VaidyaKumar2006}, a better understanding of this procedure seems to be an important research question. We will focus on some shortcomings caused by its mathematical properties. \citet{JohnsonBeineWang1979} note that the use of the left eigenvectors is equally justified as long as the order is reversed; furthermore, the results from the two eigenvectors may disagree even when the matrix is nearly consistent. According to \citet{GenestLapointeDrury1993}, in the case of numerically coded ordinal preferences, the ranking obtained from the principal right eigenvector depends on the choice of parameter for the preferences. \citet{BanaeCostaVansnick2008} prove that the right eigenvector can violate a condition of order preservation, which, in the authors' opinion, is fundamental in decision aiding. \citet{Kulakowski2015} examines the relationship between this property and the inconsistency index proposed by Saaty. \citet{PerezMokotoff2016} present an example where the alternative with the highest priority according to every decision-maker is not the best on the basis of their aggregated preferences. \citet{Csato2017b} traces back the origin of this problem to the right-left asymmetry \citep{JohnsonBeineWang1979}, and provides a minimal counterexample with four alternatives. 
According to \citet{BlanqueroCarrizosaConde2006}, the eigenvector solution is not necessarily Pareto efficient, in other words, there may exist a weight vector which is at least as good in approximating all elements of the pairwise comparison matrix, and strictly better in at least one position. However, \citet{Abele-NagyBozoki2016} prove that this cannot occur if the pairwise comparison matrix differs from a consistent one only in one entry (and its reciprocal), while \citet{Abele-NagyBozokiRebak2018} extend this result to double perturbed matrices, which can be made consistent by altering two elements and their reciprocals. On the other hand, the eigenvector method may lead to an inefficient weight vector for matrices with an arbitrarily small inconsistency \citep{Bozoki2014}. Finally, \citet{BozokiFulop2018} propose linear programs to test whether a given weight vector is efficient or not, and \citet{DulebaMoslem2019} give the first examination of this property on real data. The starting point of our paper is a remark by Saaty, who says that the priority vector has two meanings: ``\emph{The first is a numerical ranking of the alternatives that indicates an order of preference among them. The other is that the ordering should also reflect intensity or cardinal preference as indicated by the ratios of the numerical values [\dots]}'' \citep[p.~86]{Saaty2003}. In our view, this second interpretation suggests that the priority of a given alternative should be a \emph{monotonic} function of its numerical comparisons with respect to any other alternatives. In other words, increasing an arbitrary entry of a pairwise comparison matrix should increase the weight of the favoured alternative (which is in the corresponding row) by the greatest factor, and should decrease the weight of the disfavoured alternative (which is in the corresponding column) by the greatest factor. 
We will prove that -- while the row geometric mean (logarithmic least squares) method trivially satisfies monotonicity -- the eigenvector method violates it for certain pairwise comparison matrices. We also investigate the probability that this problem emerges for a randomly generated matrix, as a function of its consistency ratio, the inconsistency measure suggested by \citet{Saaty1977}. The violation of monotonicity turns out to cause no problems in the case of nearly consistent matrices. On the other hand, the eigenvector method remains a dubious choice for inherently inconsistent large matrices such as the ones that emerge in sports applications. The paper has the following structure. Section~\ref{Sec2} outlines the topic of pairwise comparison matrices and defines the axiom of monotonicity. The eigenvector method is analysed in view of this property in Section~\ref{Sec3}. Finally, Section~\ref{Sec4} concludes. \section{The problem} \label{Sec2} In this section, the main notions around pairwise comparison matrices are briefly recalled, and a natural property is introduced. \subsection{Preliminaries: multiplicative pairwise comparison matrices} \label{Sec21} Let $N = \{ 1,2, \dots ,n \}$ be a set of alternatives to be evaluated. Assume that their pairwise comparisons are known: $a_{ij}$ is a numerical answer to the question ``\emph{How many times is alternative $i$ better than alternative $j$?}'', that is, $a_{ij}$ measures the relative importance of alternative $i$ with respect to alternative $j$. Let $\mathbb{R}^{n}_+$ and $\mathbb{R}^{n \times n}_+$ denote the set of positive (with all elements greater than zero) vectors of size $n$ and matrices of size $n \times n$, respectively. The results of the comparisons are collected into a matrix whose elements below the diagonal are reciprocal to the corresponding elements above the diagonal. 
\begin{definition} \label{Def21} \emph{Multiplicative pairwise comparison matrix}: Matrix $\mathbf{A} = \left[ a_{ij} \right] \in \mathbb{R}^{n \times n}_+$ is a \emph{multiplicative pairwise comparison matrix} if $a_{ji} = 1/a_{ij}$ for all $1 \leq i,j \leq n$. \end{definition} In the following, the word ``multiplicative'' will be omitted for the sake of simplicity. The set of all pairwise comparison matrices with $n$ alternatives is denoted by $\mathcal{A}^{n \times n}$. Pairwise comparisons are carried out in order to get a priority vector $\mathbf{w}$ such that the proportion of the weights $w_i$ and $w_j$ of the alternatives $i$ and $j$, respectively, approximates the value of their pairwise comparison $a_{ij}$. Thus the weights can be normalised arbitrarily. \begin{definition} \label{Def22} \emph{Weight vector}: Vector $\mathbf{w} = \left[ w_{i} \right] \in \mathbb{R}^n_+$ is a \emph{weight vector} if $\sum_{i=1}^n w_{i} = 1$. \end{definition} The set of weight vectors of size $n$ is denoted by $\mathcal{R}^{n}$. \begin{definition} \label{Def23} \emph{Weighting method}: Function $f: \mathcal{A}^{n \times n} \to \mathcal{R}^{n}$ is a \emph{weighting method}. \end{definition} The weight of alternative $i$ from the pairwise comparison matrix $\mathbf{A}$ according to the weighting method $f$ is denoted by $f_i(\mathbf{A})$. There exist many methods to estimate a suitable weight vector from a pairwise comparison matrix. Probably the most popular procedures are the row geometric mean (logarithmic least squares) method \citep{CrawfordWilliams1980, CrawfordWilliams1985, DeGraan1980, deJong1984, Rabinowitz1976}, and the eigenvector method \citep{Saaty1977}. 
Although the latter suffers from a number of theoretical problems as mentioned in the Introduction, and there are sound axiomatic arguments in favour of the geometric mean \citep{Fichtner1984, BarzilaiCookGolany1987, Barzilai1997, LundySirajGreco2017, Csato2018c, BozokiTsyganok2019, Csato2019a}, the AHP methodology has mainly used the eigenvector method since the pioneering work of Saaty. This particular procedure will be our focus in the following. \begin{definition} \label{Def24} \emph{Eigenvector method} \citep{Saaty1977}: The \emph{eigenvector method} associates the weight vector $\mathbf{w}^{EM} (\mathbf{A}) \in \mathcal{R}^n$ for a given pairwise comparison matrix $\mathbf{A} \in \mathcal{A}^{n \times n}$ such that \[ \mathbf{A} \mathbf{w}^{EM}(\mathbf{A}) = \lambda_{\max} \mathbf{w}^{EM}(\mathbf{A}), \] where $\lambda_{\max}$ denotes the maximal eigenvalue, also known as the principal or Perron eigenvalue, of the (positive) matrix $\mathbf{A}$. \end{definition} There is a special case when all reasonable weighting methods give the same result. \begin{definition} \label{Def25} \emph{Consistency}: Let $\mathbf{A} = \left[ a_{ij} \right] \in \mathbb{R}^{n \times n}_+$ be a pairwise comparison matrix. It is called \emph{consistent} if the condition $a_{ik} = a_{ij} a_{jk}$ holds for all $1 \leq i,j,k \leq n$. \end{definition} However, consistency is seldom observed in practice; pairwise comparison matrices are usually \emph{inconsistent}. A variety of indices has been proposed to measure the level of inconsistency (see \citet{Brunelli2018} for a survey), and recently some formal studies have appeared in the literature \citep{BrunelliFedrizzi2015, Brunelli2017, BrunelliFedrizzi2019, Csato2018e, Csato2018a, KoczkodajSzwarc2014, KoczkodajUrban2018}. We will consider the oldest and by far the most popular Saaty inconsistency index \citep{Saaty1977}, which is closely related to the eigenvector method. 
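For illustration, the weight vector of Definition~\ref{Def24} and $\lambda_{\max}$ can be approximated by simple power iteration, which converges for any positive matrix by the Perron-Frobenius theorem. A minimal sketch (the function name is ours; \texttt{numpy} is assumed):

```python
import numpy as np

def eigenvector_method(A, tol=1e-12, max_iter=10000):
    """Principal right eigenvector of the positive matrix A, normalised
    to sum to one, obtained by power iteration, together with an
    estimate of the Perron eigenvalue lambda_max."""
    n = A.shape[0]
    w = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        w_next = A @ w
        w_next /= w_next.sum()
        if np.max(np.abs(w_next - w)) < tol:
            w = w_next
            break
        w = w_next
    lam_max = float(np.mean((A @ w) / w))  # componentwise Rayleigh estimate
    return w, lam_max

# For a consistent matrix a_ij = v_i / v_j, the weights reproduce the
# ratios exactly and lambda_max equals n.
v = np.array([1.0, 2.0, 4.0])
A = np.outer(v, 1.0 / v)
w, lam_max = eigenvector_method(A)
```

For the consistent example above, the computed weights satisfy $w_3 / w_1 = a_{31} = 4$ and $\lambda_{\max} = 3$.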
\begin{definition} \label{Def26} \emph{Consistency index} ($CI$): Let $\mathbf{A} = \left[ a_{ij} \right] \in \mathbb{R}^{n \times n}_+$ be a pairwise comparison matrix. Its \emph{consistency index} is: \[ CI(\mathbf{A}) = \frac{\lambda_{\max}-n}{n-1}. \] \end{definition} \citet{Saaty1977} introduced the so-called \emph{random index} $RI_n$, that is, the average $CI$ of a large number of $n \times n$ pairwise comparison matrices with entries randomly generated from the scale $\{ 1/9, 1/8, \dots ,8,9 \}$. The ratio of $CI$ to $RI_n$ is called the \emph{consistency ratio} $CR$; it will be referred to as the Saaty inconsistency index in the following. \citet{Saaty1977} considered a pairwise comparison matrix to be acceptable if $CR$ does not exceed the threshold $0.1$. \subsection{Monotonicity of the weights on single comparisons} \label{Sec22} The entry $a_{ij}$ measures the dominance of alternative $i$ over alternative $j$. Thus it is expected that increasing $a_{ij}$ is favourable for the weight of alternative $i$ with respect to the weight of any third alternative $k$. Otherwise, the counter-intuitive behaviour of the weights may lead to rank reversal. The following axiom formalises this requirement. \begin{axiom} \label{Axiom1} \emph{Monotonicity}: Let $\mathbf{A} \in \mathcal{A}^{n \times n}$ be any pairwise comparison matrix and $1 \leq i,j \leq n$ be any two different alternatives. Let $\mathbf{A}' \in \mathcal{A}^{n \times n}$ be identical to $\mathbf{A}$ but $a'_{ij} > a_{ij}$ (and $a_{ji}' < a_{ji}$ because of the reciprocity property). Then weighting method $f: \mathcal{A}^{n \times n} \to \mathcal{R}^n$ is called \emph{monotonic} if $f_i \left( \mathbf{A}' \right) / f_k \left( \mathbf{A}' \right) \geq f_i \left( \mathbf{A} \right) / f_k \left( \mathbf{A} \right)$ for all $1 \leq k \leq n$. 
\end{axiom} Monotonicity is a reformulation of analogous conditions that are widely used in social choice theory, for example, \emph{positive responsiveness} \citep{vandenBrinkGilles2009} and \emph{positive responsiveness to the beating relation} \citep{Gonzalez-DiazHendrickxLohmann2014}: if alternative $i$ is ranked at least as high as alternative $k$, then it should be ranked strictly higher when a comparison $a_{ij}$ changes in favour of alternative $i$. A weaker version has been used in the axiomatic characterization of the row geometric mean (logarithmic least squares) ranking \citep{Csato2018c} in order to get a stronger result: $i \succeq j$ implies $i \succ j$ whenever $a_{ij}$ increases. \citet[p.~201]{Landau1914} considers another principle for nonnegative tournament matrices, which also follows from Axiom~\ref{Axiom1}: \begin{equation} \label{eq1} a'_{ij} > a_{ij} \Rightarrow \frac{f_i \left( \mathbf{A}' \right)}{\sum_{k=1}^n f_k \left( \mathbf{A}' \right)} \geq \frac{f_i \left( \mathbf{A} \right)}{\sum_{k=1}^n f_k \left( \mathbf{A} \right)}. \end{equation} \citet{BrunelliFedrizzi2015} suggested an axiom with the same flavour called \emph{monotonicity on single comparisons} in the context of inconsistency indices, which is satisfied by the inconsistency index $CI$ \citep{AupetitGenest1993}. \citet{BrunelliFedrizzi2015} also provide a short overview of the origin of this property. \section{Results} \label{Sec3} The row geometric mean (logarithmic least squares) method trivially meets monotonicity: a greater value of $a_{ij}$ increases the weight of alternative $i$ and decreases the weight of alternative $j$, while preserving the weights of all other alternatives, at least before normalization. The case of the eigenvector method will turn out to be more complicated. \subsection{The eigenvector method can be non-monotonic} \label{Sec31} Our point of departure is the following negative observation. 
\begin{proposition} \label{Prop31} The eigenvector method does not satisfy monotonicity. \end{proposition} \begin{proof} It is sufficient to provide a counterexample. \begin{example} \label{Examp31} Consider the following parametric pairwise comparison matrix: \[ \mathbf{A}^{\alpha} = \left[ \begin{array}{K{2.5em} K{2.5em} K{2.5em} K{2.5em}} 1 & 8 & \alpha & 5 \\ 1/8 & 1 & 3 & 7 \\ 1/\alpha & 1/3 & 1 & 9 \\ 1/5 & 1/7 & 1/9 & 1 \\ \end{array} \right]. \] \input{Figure1_EM_non-monotonicity} The ratio of the weights of the first and the fourth alternatives is plotted in Figure~\ref{Fig1} as a function of parameter $\alpha$. \end{example} Note that $w_1^{EM}(\mathbf{A}^{\alpha}) / w_4^{EM}(\mathbf{A}^{\alpha})$ is not monotonically increasing around $\alpha = 1$. For instance, $w_1^{EM}(\mathbf{A}^{1}) / w_4^{EM}(\mathbf{A}^{1}) > w_1^{EM}(\mathbf{A}^{1.01}) / w_4^{EM}(\mathbf{A}^{1.01})$, which shows the violation of Axiom~\ref{Axiom1} by the eigenvector method: increasing $a_{13}$ is disadvantageous for the weight of the first alternative with respect to the weight of the fourth alternative. However, it can be checked that both $w_1^{EM}(\mathbf{A}^{\alpha}) / w_2^{EM}(\mathbf{A}^{\alpha})$ and $w_1^{EM}(\mathbf{A}^{\alpha}) / w_3^{EM}(\mathbf{A}^{\alpha})$ are monotonically increasing around $\alpha = 1$. Furthermore, Example~\ref{Examp31} does not violate condition~\eqref{eq1} as $w_1^{EM}(\mathbf{A}^{\alpha}) / \sum_{k=1}^4 w_k^{EM}(\mathbf{A}^{\alpha})$ is monotonically increasing around $\alpha = 1$. \end{proof} Example~\ref{Examp31} is minimal in the number of alternatives because the eigenvector method is equivalent to the row geometric mean (logarithmic least squares) method for $n = 3$ \citep{CrawfordWilliams1985}, and the latter satisfies monotonicity. 
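The non-monotonicity in Example~\ref{Examp31} can be reproduced numerically; the following sketch (power iteration for the principal eigenvector; \texttt{numpy} is assumed, and the function names are ours) verifies that increasing $a_{13}$ from $1$ to $1.01$ decreases $w_1^{EM} / w_4^{EM}$:

```python
import numpy as np

def em_weights(A, iters=5000):
    """Principal right eigenvector of the positive matrix A by power
    iteration, normalised to sum to one."""
    w = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

def A_alpha(alpha):
    """The parametric pairwise comparison matrix of Example 3.1."""
    return np.array([
        [1.0,         8.0,       alpha,     5.0],
        [1.0 / 8.0,   1.0,       3.0,       7.0],
        [1.0 / alpha, 1.0 / 3.0, 1.0,       9.0],
        [1.0 / 5.0,   1.0 / 7.0, 1.0 / 9.0, 1.0],
    ])

def ratio_1_4(alpha):
    """w_1 / w_4 under the eigenvector method, as a function of alpha."""
    w = em_weights(A_alpha(alpha))
    return w[0] / w[3]

# ratio_1_4(1.0) > ratio_1_4(1.01): increasing a_13 is disadvantageous
# for alternative 1 relative to alternative 4, so Axiom 1 is violated.
```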
On the domain of nonnegative tournament matrices of order $n = 3$, the principal right eigenvector violates even condition~\eqref{eq1} \citep{Landau1914}, which is substantially weaker than Axiom~\ref{Axiom1}. We have not found any such instances for reciprocal $n \times n$ matrices. \subsection{A framework for analysing monotonicity} \label{Sec32} Although the eigenvector method violates monotonicity in certain cases, this in itself does not make the problem relevant in practice. In order to reveal this issue in depth, a computational technique will be used: a large number of pairwise comparison matrices will be considered and checked with respect to the monotonicity of the eigenvector. The entries of the random pairwise comparison matrices are generated in two ways: \begin{itemize} \item \emph{Discrete} scale: all entries $a_{ij}$ above the diagonal ($i < j$) are randomly chosen from the set \[ \left\{ \frac{1}{9};\, \frac{1}{8};\, \frac{1}{7};\, \frac{1}{6};\, \frac{1}{5};\, \frac{1}{4};\, \frac{1}{3};\, \frac{1}{2};\, 1;\, 2;\, 3;\, 4;\, 5;\, 6;\, 7;\, 8;\, 9 \right\} \] with equal probability $1/17$, and by setting $a_{ji} = 1 / a_{ij}$, as well as $a_{ii} = 1$. \item \emph{Continuous} scale: a value is chosen randomly from the interval $\left[ 1, 10 \right]$ according to a uniform distribution, and it is decided by an equal odds coin-toss whether this value or its reciprocal is written into the entry $a_{ij}$ above the diagonal ($i < j$). Other elements of the pairwise comparison matrix are determined as above. \end{itemize} The discrete scale above is the standard proposed by Saaty. The continuous scale is examined because focusing on integers may hide some crucial features of the eigenvector method. Both of them have already been used in the literature, see, for example, \citet{AlonsoLamata2006, BozokiRapcsak2008}. \begin{table}[ht!] 
\caption{Values of the random index used in the computations} \centering \label{Table1} \rowcolors{3}{gray!20}{} \begin{tabularx}{0.7\textwidth}{c CC} \toprule \hiderowcolors Matrix size & \multicolumn{2}{c}{Random index $RI_n$} \\ & Discrete scale & Continuous scale \\ \hline \showrowcolors 4 & 0.884 & 0.946 \\ 5 & 1.109 & 1.188 \\ 6 & 1.249 & 1.340 \\ 7 & 1.341 & 1.438 \\ 8 & 1.404 & 1.505 \\ 9 & 1.451 & 1.555 \\ \bottomrule \end{tabularx} \end{table} Since we want to investigate the connection between monotonicity and the consistency ratio $CR$, random indices $RI_n$ should also be determined. They are reported in Table~\ref{Table1} for $4 \leq n \leq 9$: for the discrete scale, the values given in \citet[Table~3]{BozokiRapcsak2008} are used -- they have been validated by our calculations, too --, while the random indices for the continuous scale have been computed from four million randomly generated matrices in each case. Monotonicity of the principal right eigenvector is tested by perturbing a matrix element, and checking whether the condition required by Axiom~\ref{Axiom1} holds or not. Thus, one iteration of the computational process consists of the following steps: \begin{enumerate} \item \label{step1} A random pairwise comparison matrix $\mathbf{A}$ of order $n$ is generated on the discrete/continuous scale. \item Its consistency ratio $CR(\mathbf{A})$ and eigenvector $\mathbf{w}^{EM}(\mathbf{A})$ are calculated, and the number of matrices in the $m$th interval of consistency ratios, for which $\beta (m-1) \leq CR(\mathbf{A}) < \beta m$, is increased by one (the actual value of $\beta$ will depend on the aim of the computation, see the discussion of figures later). 
\item All entries above the diagonal are considered separately: $n(n-1)/2$ perturbed pairwise comparison matrices $\mathbf{A}^{ij}$ are defined such that $\mathbf{A}^{ij} = \mathbf{A}$ except for its element in the $i$th row and $j$th column, $i < j$, which is given by $a_{ij}^{ij} = 1.01 a_{ij}$, while reciprocity is preserved, so $a_{ji}^{ij} = (1 / 1.01) a_{ji}$. \item Eigenvectors $\mathbf{w}^{EM} \left( \mathbf{A}^{ij} \right)$ are computed. \item Fractions $w^{EM}_i(\mathbf{A}) / w^{EM}_k(\mathbf{A})$ and $w^{EM}_i \left( \mathbf{A}^{ij} \right) / w^{EM}_k \left( \mathbf{A}^{ij} \right)$ are compared for all $i < j$ and $1 \leq k \leq n$. \item The counter of non-monotonic matrices in the $m$th interval of consistency ratios, in which $CR(\mathbf{A})$ falls, is increased by one if $w^{EM}_i(\mathbf{A}) / w^{EM}_k(\mathbf{A}) > w^{EM}_i \left( \mathbf{A}^{ij} \right) / w^{EM}_k \left( \mathbf{A}^{ij} \right)$, that is, Axiom~\ref{Axiom1} is violated after increasing $a_{ij}$ by 1\%. \item \label{step7} The pairwise comparison matrix $\mathbf{A}$, its consistency ratio $CR(\mathbf{A})$ and $i$, $j$, $k$ are saved as an example that violates monotonicity if $CR(\mathbf{A})$ is smaller than the consistency ratio of all previously generated pairwise comparison matrices with a non-monotonic eigenvector. \end{enumerate} Steps~\ref{step1}-\ref{step7} are repeated until the number of randomly generated pairwise comparison matrices reaches a predetermined limit. \subsection{The relationship between monotonicity and inconsistency} \label{Sec33} \input{Figure2_monotonicity_hist} First, four million iterations of the process consisting of steps~\ref{step1}-\ref{step7} have been carried out for all values of $n$ between $4$ and $9$. Figure~\ref{Fig2} plots the proportion of pairwise comparison matrices for which the eigenvector method does not satisfy the axiom of monotonicity as a function of the consistency ratio $CR$. 
It clearly does not depend on whether the entries are drawn from a discrete or continuous scale, so only the former will be analysed in the following. Note also that $CR$ cannot be arbitrarily large: \citet{AupetitGenest1993} derive a sharp upper bound on $\lambda_{\max}$ when the responses are coded on a bounded scale such as the one applied here. According to Figure~\ref{Fig2}, the probability of a non-monotonic right eigenvector gradually increases if the pairwise comparison matrix becomes more inconsistent. There is no violation of Axiom~\ref{Axiom1} for nearly consistent matrices ($CR < 0.15$); however, this problem is almost guaranteed to emerge for heavily inconsistent matrices in the case of at least six alternatives. \input{Figure3_size4_all} For $n = 4$, there are only six elements above the diagonal, and the total number of different matrices using the discrete scale for these entries is $17^6 = 24{,}137{,}569$. Although some of them can be transformed into one another by row/column permutations, we have decided to preserve all of them because of the nature of random matrix generation. Note that the random index $RI_4$ is determined without taking permutation filtering into account. These matrices are plotted in Figure~\ref{Fig3} such that the pairwise comparison matrices are grouped according to their consistency ratios: matrix $\mathbf{A}$ is registered in the $m$th interval if $CR(\mathbf{A})$ falls between $0.1(m-1)$ and $0.1m$, while the last, $35$th category contains all matrices with $CR \geq 3.5$. A matrix with the smallest consistency ratio that presents the non-monotonic behaviour of the eigenvector is the one presented in Example~\ref{Examp31}, where $CR \approx 0.4869$. \input{Figure4_small_CI_hist} Since Saaty suggested a threshold of $0.1$ for the acceptable level of the consistency ratio $CR$, nearly consistent matrices prevail in certain applications, hence they are examined more thoroughly.
This is highlighted in Figure~\ref{Fig4}, where the pairwise comparison matrices are classified more finely than before: matrix $\mathbf{A}$ is registered in the $m$th interval if $0.02(m-1) \leq CR(\mathbf{A}) < 0.02m$. Note that the running time of our simulations is significantly reduced by checking Axiom~\ref{Axiom1} only for matrices with a modest inconsistency of $CR < 0.4$, so the sample size of random matrices can be substantially increased. The eigenvector always remains monotonic on this domain if there are only four alternatives. Non-monotonicity may still emerge around $CR \approx 0.2$ for $n \geq 7$, and its probability gradually increases as the pairwise comparison matrix becomes more inconsistent. Until now the axiom of monotonicity has been checked with an increase of 1\% in all entries (see Section~\ref{Sec32}). However, the property is defined on a continuous scale, so this choice may miss identifying some cases of non-monotonicity. Therefore, a sensitivity analysis has been carried out on the whole domain of $4 \times 4$ pairwise comparison matrices from the discrete scale such that entries have been increased by 0.1\% and 10\%, too. \input{Figure5_size4_sens_analysis} Figure~\ref{Fig5} summarises these results, where matrix $\mathbf{A}$ is registered in the $m$th interval if $0.01(m-1) \leq CR(\mathbf{A}) < 0.01m$; furthermore, the diagram is truncated at $3.49$ because the remaining 600 pairwise comparison matrices are recognised similarly with 0.1\%, 1\%, and 10\% change. On the right axis, the number of pairwise comparison matrices in the $m$th interval with a non-monotonic eigenvector is measured (normal blue line). On the left axis, the difference in the number of matrices presenting this problem with a 0.1\% and 1\% (dotted black line), as well as with a 1\% and 10\% change (dashed red line) is plotted.
For example, if $0.48 \leq CR < 0.49$, then $240$ matrices are found to have a non-monotonic eigenvector with a 0.1\% change, $192$ matrices are found to have a non-monotonic eigenvector with a 1\% change, and no matrix is found to have a non-monotonic eigenvector with a 10\% change. The difference is $48$ between the first and the second, while it is $192$ between the second and the third. Looking at Figure~\ref{Fig1} reveals that the pairwise comparison matrix of Example~\ref{Examp31} also appears in Figure~\ref{Fig5}: it is registered as problematic with a 0.1\% and 1\% change, but not with 10\% because in the latter case, the decreasing part of the function $w_1^{EM}(\mathbf{A}^{\alpha}) / w_4^{EM}(\mathbf{A}^{\alpha})$ is ``jumped over''. It is worth mentioning that some types of non-monotonicity are recognised only with a rougher change (1\% instead of 0.1\%, or 10\% instead of 1\%), as both the red and the black lines sometimes go below zero. Furthermore, these interesting cases cluster around the smaller values of the Saaty inconsistency index, and the probability of their occurrence does not depend only on the number of matrices having a non-monotonic eigenvector. A deeper analysis of this anomaly remains the topic of future research. \section{Discussion} \label{Sec4} \begin{table}[ht!]
\centering \caption{The probability that the eigenvector does not satisfy monotonicity} \label{Table2} \rowcolors{3}{}{gray!20} \begin{threeparttable} \begin{tabularx}{0.7\textwidth}{cCC} \toprule Matrix size & All values of $CR$ & $CR < 0.4$ \\ \hline 4 & 0.3151 & 0.0000 \\ 5 & 0.6789 & 0.0313 \\ 6 & 0.8867 & 0.1665 \\ 7 & 0.9679 & 0.3388 \\ 8 & 0.9922 & 0.4913 \\ 9 & 0.9983 & 0.6168 \\ \bottomrule \end{tabularx} \vspace{0.25cm} \begin{tablenotes}[para,flushleft] \item \noindent \footnotesize{Discrete scale; sample sizes: all possible matrices for $n=4$, 4 million randomly generated matrices for $n \geq 5$ without restrictions on $CR$, see Figure~\ref{Fig4} for $n \geq 5$ with $CR < 0.4$} \end{tablenotes} \end{threeparttable} \end{table} In the current paper, we have argued that monotonicity on the numerical comparisons is a key requirement for any weight vector derived from a pairwise comparison matrix. The eigenvector method has been proved to violate this axiom. However, contrary to the right-left asymmetry \citep{JohnsonBeineWang1979} and the Pareto inefficiency \citep{Bozoki2014} of the eigenvector, as well as to its violation of a condition of order preservation \citep{BanaeCostaVansnick2008}, the emergence of non-monotonicity seems to be avoidable for small values of the Saaty inconsistency index. On the other hand, an arbitrary pairwise comparison matrix has a non-monotonic principal right eigenvector with high probability, especially in the case of more than five alternatives, as Table~\ref{Table2} shows. Our results have useful implications for practitioners. First, the probability of non-monotonicity strongly depends on the inconsistency level of the pairwise comparison matrix, which emphasizes the need for inconsistency reduction methods \citep{AbelMikhailovKeane2018, BozokiFulopKoczkodaj2011, BozokiFulopPoesz2015, ErguKouPengShi2011, KoczkodajSzarek2010, KoczkodajSzybowski2016, Szybowski2018}.
Second, the possibly strange behaviour of the right eigenvector makes the use of this method questionable for inherently inconsistent large matrices such as the ones that emerge in sports applications \citep{Csato2013a, BozokiCsatoTemesi2016, Csato2017c, ChaoKouLiPeng2018}, because in this field, rewarding players or teams for poor performance is unfair \citep{Csato2018b, DagaevSonin2018, KendallLenten2017, VaziriDabadghaoYihMorin2018}. Finally, the analysis of monotonicity provides further arguments for the use of the row geometric mean (logarithmic least squares) method instead of the eigenvector method. \section*{Acknowledgements} \addcontentsline{toc}{section}{Acknowledgements} \noindent \emph{L\'aszl\'o Csat\'o}, the father of the first author, has made a substantial contribution to the paper by helping to code the computations in Python. \\ We are grateful to \emph{S\'andor Boz\'oki}, \emph{J\'anos F\"ul\"op}, \emph{Tam\'as Halm}, \emph{Kees Pippel}, and \emph{Bal\'azs Vida} for useful advice. \\ The research was supported by the OTKA grant K 111797, the MTA Premium Post Doctorate Research Program, and the \'UNKP-18-3 New National Excellence Program of the Ministry of Human Capacities. \bibliographystyle{apalike}
\section{Introduction} Petrie polygons are well-known objects described by Coxeter \cite{Coxeter} (see also \cite{McMSch}). These are skew polygons in regular polyhedra such that any two consecutive edges, but not three, are on the same face. Analogs of Petrie polygons for graphs embedded in surfaces are called {\it zigzags} \cite{DDS-book,Lins1} or {\it closed left-right paths} \cite{GR-book,Shank}. Zigzags have many applications; for example, they are successfully exploited to enumerate all combinatorial possibilities for fullerenes \cite{BD}. The case when a map, i.e. an embedding of a graph in a surface, has a single zigzag is very important \cite{DDS-book,GR-book}. Following \cite{DDS-book} we call such maps $z$-{\it knotted}. They have nice homological properties and are closely connected to the Gauss code problem \cite{CrRos, GR-book,Lins2}. The study of zigzags in $3$-regular plane graphs, in particular fullerenes, is one of the main directions of \cite{DDS-book}. A large class of $z$-knotted $3$-regular plane graphs has been obtained by computer search. The dual objects, i.e. spherical triangulations, have the same zigzag structure. Zigzags in triangulations of surfaces (not necessarily orientable) are investigated in \cite{PT1,PT2,PT3}. By \cite{PT2}, every such triangulation admits a $z$-knotted shredding. In this paper, we describe a class of $z$-knotted triangulations which cannot be obtained by shredding. A $z$-{\it orientation} of a map is a minimal collection of zigzags which double covers the set of edges \cite{DDS-book}. In the $z$-knotted case, this collection contains only one zigzag and is unique up to reversing. For every $z$-orientation we have the following two types of edges: an edge is of type I if the distinguished zigzags pass through this edge in different directions and an edge is of type II if they pass through the edge in the same direction.
It is not difficult to prove that for every face in a triangulation with a fixed $z$-orientation one of the following possibilities is realized: the face contains precisely two edges of type I and the third edge is of type II, or all edges are of type II. In the case when all faces are of the first type, the number of edges of type I is twice the number of edges of type II, and we say that a zigzag is {\it homogeneous} if it contains precisely two edges of type I after each edge of type II. We describe a one-to-one correspondence between triangulations with homogeneous zigzags and embeddings of directed Eulerian graphs in surfaces such that all edges of every face form a directed cycle (Theorem 1). Note that directed Eulerian spherical embeddings are also known as {\it plane alternating dimaps}; they are investigated, for example, in \cite{BHS, Farr, McCourt}. Directed Eulerian embeddings in arbitrary surfaces are considered in \cite{BCMMcK, CGH}. In the second part of the paper, we construct a class of $z$-knotted spherical triangulations whose zigzags are homogeneous. These triangulations are tree-structured and we show how they can be obtained from rooted trees. \section{Zigzags and $z$-orientations of triangulations of surfaces} Let $M$ be a connected closed $2$-dimensional surface (not necessarily orientable). A {\it triangulation} of $M$ is a $2$-cell embedding of a connected simple finite graph in $M$ such that all faces are triangles \cite[Section 3.1]{MT-book}. Then the following assertions are fulfilled: (1) every edge is contained in precisely two distinct faces, (2) the intersection of two distinct faces is an edge or a vertex or empty. Let $\Gamma$ be a triangulation of $M$.
A {\it zigzag} in $\Gamma$ is a sequence of edges $\{e_{i}\}_{i\in {\mathbb N}}$ satisfying the following conditions for every natural $i$: \begin{enumerate} \item[$\bullet$] $e_{i}$ and $e_{i+1}$ are distinct edges of a certain face (then they have a common vertex, since every face is a triangle), \item[$\bullet$] the faces containing $e_{i},e_{i+1}$ and $e_{i+1},e_{i+2}$ are distinct and the edges $e_{i}$ and $e_{i+2}$ are non-intersecting. \end{enumerate} Since $\Gamma$ is finite, there is a natural number $n>0$ such that $e_{i+n}=e_{i}$ for every natural $i$. In what follows, every zigzag will be presented as a cyclic sequence $e_{1},\dots,e_{n}$, where $n$ is the smallest number satisfying the above condition. Every zigzag is completely determined by any pair of consecutive edges belonging to this zigzag and for any distinct edges $e$ and $e'$ on a face there is a unique zigzag containing the sequence $e,e'$. If $Z=\{e_{1},\dots,e_{n}\}$ is a zigzag, then the reversed sequence $Z^{-1}=\{e_{n},\dots,e_{1}\}$ also is a zigzag. A zigzag cannot contain a sequence $e,e',\dots,e',e$ which implies that $Z\ne Z^{-1}$ for any zigzag $Z$, i.e. a zigzag cannot be self-reversed (see, for example, \cite{PT2}). We say that $\Gamma$ is $z$-{\it knotted} if it contains precisely two zigzags $Z$ and $Z^{-1}$, in other words, there is a single zigzag up to reversing. \begin{exmp}{\rm See Fig.1 for zigzags in the three Platonic solids which are triangulations of the sphere ${\mathbb S}^2$ (the zigzags are drawn by the bold line). 
}\end{exmp} \begin{center} \begin{tikzpicture}[scale=0.5] \draw[xshift=4.375cm, fill=black] (0:1.75cm) circle (3pt); \draw[xshift=4.375cm, fill=black] (90:1.75cm) circle (3pt); \draw[xshift=4.375cm, fill=black] (180:1.75cm) circle (3pt); \draw[xshift=4.375cm, fill=black] (270:1.75cm) circle (3pt); \draw[xshift=4.375cm,thick,line width=2pt] (0:1.75cm) \foreach \x in {90,180,...,359} { -- (\x:1.75cm) } -- cycle (90:1.75cm); \draw[xshift=4.375cm, dashed] (0:1.75cm) \foreach \x in {90,270} {-- (\x:1.75cm)}; \draw[xshift=4.375cm] (0:1.75cm) \foreach \x in {0,180} {-- (\x:1.75cm)}; \draw[xshift=8.75cm, fill=black] (0:1.75cm) circle (3pt); \draw[xshift=8.75cm, fill=black] (60:1.75cm) circle (3pt); \draw[xshift=8.75cm, fill=black] (120:1.75cm) circle (3pt); \draw[xshift=8.75cm, fill=black] (180:1.75cm) circle (3pt); \draw[xshift=8.75cm, fill=black] (240:1.75cm) circle (3pt); \draw[xshift=8.75cm, fill=black] (300:1.75cm) circle (3pt); \draw[xshift=8.75cm,thick,line width=2pt] (0:1.75cm) \foreach \x in {60, 120,...,359} { -- (\x:1.75cm) } -- cycle (60:1.75cm); \draw[xshift=8.75cm, dashed] (0:1.75cm)--(120:1.75cm)--(240:1.75cm)--cycle; \draw[xshift=8.75cm] (60:1.75cm)--(180:1.75cm)--(300:1.75cm)--cycle; \draw[xshift=13.125cm, fill=black] (0:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (36:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (72:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (108:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (144:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (180:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (216:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (252:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (288:1.75cm) circle (3pt); \draw[xshift=13.125cm, fill=black] (324:1.75cm) circle (3pt); \draw[xshift=13.125cm,thick,line width=2pt] (0:1.75cm) \foreach \x in {36, 72,...,359} { -- (\x:1.75cm) } -- cycle (36:1.75cm); \draw[xshift=13.125cm, fill=black] (0,0) 
circle (3pt); \draw[xshift=13.125cm] (0:1.75cm)--(0,0) (72:1.75cm)--(0,0) (144:1.75cm)--(0,0) (216:1.75cm)--(0,0) (288:1.75cm)--(0,0); \draw[xshift=13.125cm] (0:1.75cm)--(72:1.75cm)--(144:1.75cm)--(216:1.75cm)--(288:1.75cm)--cycle; \draw[xshift=13.125cm, dashed] (36:1.75cm)--(0,0) (108:1.75cm)--(0,0) (180:1.75cm)--(0,0) (252:1.75cm)--(0,0) (324:1.75cm)--(0,0); \draw[xshift=13.125cm, dashed] (36:1.75cm)--(108:1.75cm)--(180:1.75cm)--(252:1.75cm)--(324:1.75cm)--cycle; \end{tikzpicture} \captionof{figure}{ } \end{center} \begin{exmp}{\rm Let $BP_n$ be the $n$-gonal bipyramid, where $1,\dots,n$ denote the consecutive vertices of the base and the remaining two vertices are denoted by $a,b$ (see Fig.2 for $n=3$). \begin{center} \begin{tikzpicture}[scale=0.3] \coordinate (A) at (0.5,4); \coordinate (B) at (0.5,-5); \coordinate (1) at (0,-1); \coordinate (2) at (-2,0); \coordinate (3) at (3,0); \draw[fill=black] (A) circle (3.5pt); \draw[fill=black] (B) circle (3.5pt); \draw[fill=black] (1) circle (3.5pt); \draw[fill=black] (2) circle (3.5pt); \draw[fill=black] (3) circle (3.5pt); \draw[thick] (3)--(1)--(2); \draw[thick, dashed] (2)--(3); \draw[thick] (A)--(1); \draw[thick] (A)--(2); \draw[thick] (A)--(3); \draw[thick] (B)--(1); \draw[thick] (B)--(2); \draw[thick] (B)--(3); \node at (3.6,0) {$3$}; \node at (-0.45,-1.525) {$2$}; \node at (-2.5,0) {$1$}; \node at (0.5,4.6) {$a$}; \node at (0.5,-5.8) {$b$}; \node[color=white] at (4,0) {$.$}; \end{tikzpicture} \captionof{figure}{ } \end{center} (a). In the case when $n=2k+1$, the bipyramid $BP_n$ is $z$-knotted. 
If $k$ is odd, then the unique (up to reversing) zigzag is $$a1,12,2b,b3,\dots,a(n-2),(n-2)(n-1),(n-1)b,bn,n1,$$ $$1a,a2,23,3b,\dots, a(n-1),(n-1)n,nb,$$ $$b1,12,2a,a3,\dots,b(n-2),(n-2)(n-1),(n-1)a,an,n1,$$ $$1b,b2,23,3a,\dots,b(n-1),(n-1)n, na.$$ If $k$ is even, then this zigzag is $$a1,12,2b,b3,\dots,b(n-2),(n-2)(n-1), (n-1)a,an,n1,$$ $$1b,b2,23,3a,\dots,a(n-1),(n-1)n,nb,$$ $$b1,12,2a,a3,\dots,a(n-2),(n-2)(n-1),(n-1)b,bn,n1,$$ $$1a,a2,23,3b,\dots,b(n-1),(n-1)n, na.$$ (b). If $n=2k$ and $k$ is odd, then the bipyramid contains precisely two zigzags (up to reversing): $$a1,12,2b,b3,34,\dots,a(n-1),(n-1)n, nb,$$ $$b1,12,2a,a3,34,\dots,b(n-1),(n-1)n, na$$ and $$a2,23,3b,b4,45,\dots,an,n1,1b,$$ $$b2,23,3a,a4,45,\dots,bn,n1,1a.$$ (c). In the case when $n=2k$ and $k$ is even, there are precisely four zigzags (up to reversing): $$a1,12, 2b,\dots,b(n-1),(n-1)n, na;$$ $$b1,12, 2a,\dots,a(n-1),(n-1)n, nb;$$ $$a2,23,3b,\dots,bn,n1,1a;$$ $$b2,23,3a,\dots,an,n1,1b.$$ }\end{exmp} See \cite{PT1,PT2} for more examples of $z$-knotted triangulations. Examples of $z$-knotted fullerenes can be found in \cite{DDS-book}. Suppose that $\Gamma$ contains precisely $k$ distinct zigzags up to reversing. A $z$-{\it orien\-tation} of $\Gamma$ is a collection $\tau$ consisting of $k$ distinct zigzags such that for each zigzag $Z$ we have $Z\in \tau$ or $Z^{-1}\in \tau$. There are precisely $2^k$ distinct $z$-orientations of $\Gamma$. For every $z$-orientation $\tau=\{Z_{1},\dots,Z_{k}\}$ the $z$-orientation $\tau^{-1}=\{Z^{-1}_{1},\dots, Z^{-1}_{k}\}$ will be called {\it reversed} to $\tau$. Let $\tau$ be a $z$-orientation of $\Gamma$.
For every edge $e$ of $\Gamma$ one of the following possibilities is realized: \begin{enumerate} \item[$\bullet$] there is a zigzag $Z\in \tau$ such that $e$ occurs in this zigzag twice and other zigzags from $\tau$ do not contain $e$, \item[$\bullet$] there are two distinct zigzags $Z,Z'\in\tau$ such that $e$ occurs in each of these zigzags only once and other zigzags from $\tau$ do not contain $e$. \end{enumerate} In the first case, we say that $e$ is an {\it edge of type} I if $Z$ passes through $e$ twice in different directions; otherwise, $e$ is said to be an {\it edge of type} II. Similarly, in the second case: $e$ is an {\it edge of type} I if $Z$ and $Z'$ pass through $e$ in different directions or $e$ is an {\it edge of type} II if $Z$ and $Z'$ pass through $e$ in the same direction. In what follows, edges of type II will be considered together with the direction defined by $\tau$. A vertex of $\Gamma$ is called {\it of type} I if it belongs only to edges of type I; otherwise, we say that this is a {\it vertex of type} II. The following statements hold for any $z$-orientation $\tau$ of $\Gamma$. \begin{lemma} \label{lemma1} For each vertex of type {\rm II} the number of edges of type {\rm II} which enter this vertex is equal to the number of edges of type {\rm II} which leave it. \end{lemma} \begin{proof} The number of times that the zigzags from $\tau$ enter a vertex is equal to the number of times that these zigzags leave this vertex. \end{proof} \begin{prop}\label{prop-or} For every face of $\Gamma$ one of the following possibilities is realized: \begin{enumerate} \item[{\rm (I)}] the face contains two edges of type {\rm I} and the third edge is of type {\rm II}, see {\rm Fig.3(a)}; \item[{\rm (II)}] all edges of the face are of type {\rm II} and form a directed cycle, see {\rm Fig.3(b)}. \end{enumerate} \end{prop} A face in a triangulation is said to be {\it of type} I or {\it of type} II if the corresponding possibility is realized.
\begin{center} \begin{tikzpicture}[scale=0.6] \draw[fill=black] (0,2) circle (3pt); \draw[fill=black] (-1.7320508076,-1) circle (3pt); \draw[fill=black] (1.7320508076,-1) circle (3pt); \draw [thick, decoration={markings, mark=at position 0.62 with {\arrow[scale=1.5,>=stealth]{>>}}}, postaction={decorate}] (0,2) -- (-1.7320508076,-1); \draw [thick, decoration={markings, mark=at position 0.62 with {\arrow[scale=1.5,>=stealth]{>>}}}, postaction={decorate}] (-1.7320508076,-1) -- (1.7320508076,-1); \draw [thick, decoration={markings, mark=at position 0.62 with {\arrow[scale=1.5,>=stealth]{<<}}}, postaction={decorate}] (0,2) -- (1.7320508076,-1); \node at (0,-1.65) {(a)}; \draw[fill=black] (5.1961524228,2) circle (3pt); \draw[fill=black] (3.4641016152,-1) circle (3pt); \draw[fill=black] (6.9282032304,-1) circle (3pt); \draw [thick, decoration={markings, mark=at position 0.62 with {\arrow[scale=1.5,>=stealth]{><}}}, postaction={decorate}] (5.1961524228,2) -- (3.4641016152,-1); \draw [thick, decoration={markings, mark=at position 0.62 with {\arrow[scale=1.5,>=stealth]{>>}}}, postaction={decorate}] (3.4641016152,-1) -- (6.9282032304,-1); \draw [thick, decoration={markings, mark=at position 0.62 with {\arrow[scale=1.5,>=stealth]{><}}}, postaction={decorate}] (5.1961524228,2) -- (6.9282032304,-1); \node at (5.1961524228,-1.65) {(b)}; \end{tikzpicture} \captionof{figure}{ } \end{center} \begin{proof}[Proof of Proposition \ref{prop-or}] Consider a face whose edges are denoted by $e_{1},e_{2},e_{3}$. Without loss of generality we can assume that the zigzag containing the sequence $e_{1},e_{2}$ belongs to $\tau$. Let $Z$ and $Z'$ be the zigzags containing the sequences $e_{2},e_{3}$ and $e_{3},e_{1}$, respectively. Then $Z\in \tau$ or $Z^{-1}\in \tau$ and $Z'\in \tau$ or $Z'^{-1}\in \tau$. An easy verification shows that for each of these four cases we obtain (I) or (II). 
\end{proof} \begin{exmp}{\rm If $n$ is odd, then the bipyramid $BP_n$ has a unique $z$-orientation (up to reversing), see Example 2(a). The edges $ai$ and $bi$, $i\in \{1,\dots,n\}$ are of type I and the edges on the base of the bipyramid are of type II. The vertices $a,b$ are of type I and the vertices on the base are of type II. All faces are of type I. The same happens for the case when $n=2k$ and $k$ is odd if the $z$-orientation is defined by the two zigzags presented in Example 2(b); however, all faces are of type II if we replace one of these zigzags by the reversed one. }\end{exmp} \begin{exmp}{\rm Suppose that $n=2k$ and $k$ is even. Let $Z_{1},Z_{2},Z_{3},Z_{4}$ be the zigzags from Example 2(c). For the $z$-orientation defined by these zigzags all faces are of type I. If the $z$-orientation is defined by $Z_{1},Z_{2}$ and $Z^{-1}_{3}, Z^{-1}_{4}$, then all faces are of type II. In the case when the $z$-orientation is defined by $Z_{1},Z_{2},Z_{3}$ and $Z^{-1}_{4}$, there exist faces of both types. }\end{exmp} \begin{rem}{\rm If we replace a $z$-orientation by the reversed $z$-orientation, then the type of every edge does not change (but all edges of type II reverse their directions), consequently, the types of vertices and faces also do not change. For $z$-knotted triangulations we can speak of the types of edges, vertices and faces without referring to a particular $z$-orientation \cite{PT1}. }\end{rem} \section{Homogeneous zigzags in triangulations with faces of type I} In this section, we will always suppose that $\Gamma$ is a triangulation with a fixed $z$-orientation $\tau$ such that all faces in $\Gamma$ are of type I, i.e. each face contains precisely two edges of type I and the third edge is of type II. If $m$ is the number of faces, then there are precisely $m$ edges of type I and $m/2$ edges of type II. In other words, the number of edges of type I is twice the number of edges of type II.
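The zigzag counts in Example 2 can be verified mechanically. The sketch below is an illustration, not code from the paper; it encodes a triangulation as a list of vertex triples and uses the fact that the successor $e_{i+2}$ of a pair $e_{i},e_{i+1}$ is the edge of the second face containing $e_{i+1}$ which passes through the vertex of $e_{i+1}$ not lying on $e_{i}$.

```python
from itertools import combinations

def zigzag_count(faces):
    """Number of zigzags, up to reversal, of a triangulation given
    as a list of 3-element vertex collections (its faces)."""
    faces = [frozenset(f) for f in faces]
    # A state is an ordered pair of distinct edges lying on a common face.
    states = set()
    for F in faces:
        edges = [frozenset(p) for p in combinations(F, 2)]
        for e1 in edges:
            for e2 in edges:
                if e1 != e2:
                    states.add((e1, e2))
    def step(state):
        # The successor edge avoids e1: it joins the vertex of e2 not on e1
        # with the third vertex of the *other* face containing e2.
        e1, e2 = state
        F1 = next(f for f in faces if (e1 | e2) <= f)
        F2 = next(f for f in faces if e2 <= f and f != F1)
        w = next(iter(e2 - e1))
        t = next(iter(F2 - e2))
        return (e2, frozenset({w, t}))
    # The step map is a permutation of the states; a zigzag and its
    # reverse give two distinct orbits, hence the division by two.
    seen, orbits = set(), 0
    for s in states:
        if s in seen:
            continue
        orbits += 1
        cur = s
        while cur not in seen:
            seen.add(cur)
            cur = step(cur)
    return orbits // 2

def bipyramid(n):
    """Faces of the n-gonal bipyramid BP_n with apexes 'a' and 'b'."""
    return [(x, i, i % n + 1) for x in 'ab' for i in range(1, n + 1)]
```

In agreement with Example 2, the sketch returns one zigzag (up to reversal) for odd $n$, two for $n = 2k$ with $k$ odd, and four for $n = 2k$ with $k$ even.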
We say that a zigzag of $\Gamma$ is {\it homogeneous} if it is a cyclic sequence $\{e_{i},e'_{i},e''_{i}\}^{n}_{i=1}$, where each $e_{i}$ is an edge of type II and all $e'_{i},e''_{i}$ are edges of type I. If a zigzag is homogeneous, then the reversed zigzag also is homogeneous. \begin{exmp}{\rm The zigzags of $BP_n$ are homogeneous if $n$ is odd (the $z$-knotted case) or $n$ is even and the $z$-orientation is defined by the two zigzags from Example 2(b) or by the four zigzags from Example 2(c). }\end{exmp} Let $\Gamma'$ be a closed $2$-cell embedding of a connected finite simple graph in the surface $M$. Then all faces of $\Gamma'$ are homeomorphic to a closed $2$-dimensional disc. For each face $F$ we take a point $v_{F}$ belonging to the interior of $F$. We add all $v_{F}$ to the vertex set of $\Gamma'$ and connect each $v_{F}$ with every vertex of $F$ by an edge. We obtain a triangulation of $M$ which will be denoted by ${\rm T}(\Gamma')$. The assumption that our $2$-cell embedding is closed cannot be omitted. Indeed, if a certain face of $\Gamma'$ is not homeomorphic to a closed $2$-dimensional disc, then ${\rm T}(\Gamma')$ is not a triangulation. \begin{exmp}{\rm If $\Gamma'$ is a triangulation of $M$, then every zigzag $e_{1},e_{2},e_{3},\dots$ in $\Gamma'$ can be extended to a zigzag $$e_{1},e'_{1},e''_{1},e_{2},e'_{2},e''_{2},e_{3},\dots$$ in ${\rm T}(\Gamma')$ which passes through edges of $\Gamma'$ in the opposite directions, see Fig.4. Therefore, every $z$-orientation $\tau'$ of $\Gamma'$ induces a $z$-orientation $t(\tau')$ of ${\rm T}(\Gamma')$. It is clear that $t(\tau'^{-1})=t(\tau')^{-1}$ and ${\rm T}(\Gamma')$ is $z$-knotted if and only if $\Gamma'$ is $z$-knotted. 
}\end{exmp} \begin{center} \begin{tikzpicture}[scale=1.8] \draw[fill=black] (0,0) circle (1.5pt); \draw[fill=black] (0:2cm) circle (1.5pt); \draw[fill=black] (-60:2cm) circle (1.5pt); \draw[fill=black] (-120:2cm) circle (1.5pt); \draw[fill=black] (-180:2cm) circle (1.5pt); \draw[thick, line width=2pt] (-180:2cm) -- (0:2cm) -- (-60:2cm) -- (-120:2cm) -- (-180:2cm); \draw[thick, line width=2pt] (-60:2cm) -- (0,0) -- (-120:2cm); \draw [thick, decoration={markings, mark=at position 0.59 with {\arrow[scale=2]{>}}, mark=at position 0.51 with {\arrow[scale=2,>=stealth]{<}}}, postaction={decorate}] (0:2cm) -- (-60:2cm); \draw [thick, decoration={markings, mark=at position 0.51 with {\arrow[scale=2,>=stealth]{<}}, mark=at position 0.59 with {\arrow[scale=2]{>}}}, postaction={decorate}] (-60:2cm) -- (0,0); \draw [thick, decoration={markings, mark=at position 0.51 with {\arrow[scale=2,>=stealth]{<}}, mark=at position 0.59 with {\arrow[scale=2]{>}}}, postaction={decorate}] (0,0) -- (-120:2cm); \draw [thick] (-120:2cm) -- (-180:2cm); \draw [thick, line width=1pt, dashed, decoration={markings, mark=at position 0.5 with {\arrow[scale=2]{<}}}, postaction={decorate}] (-30:1.1547cm) -- (0,0); \draw [thick, line width=1pt, dashed, decoration={markings, mark=at position 0.55 with {\arrow[scale=2]{>}}}, postaction={decorate}] (-30:1.1547cm) -- (0:2cm); \draw[thick, line width=1pt, dashed] (-30:1.1547cm) -- (-60:2cm); \draw[thick, line width=1pt, dashed] (-90:1.1547cm) -- (0,0); \draw [thick, line width=1pt, dashed, decoration={markings, mark=at position 0.5 with {\arrow[scale=2]{<}}}, postaction={decorate}] (-90:1.1547cm) -- (-120:2cm); \draw [thick, line width=1pt, dashed, decoration={markings, mark=at position 0.55 with {\arrow[scale=2]{>}}}, postaction={decorate}] (-90:1.1547cm) -- (-60:2cm); \draw [thick, line width=1pt, dashed, decoration={markings, mark=at position 0.55 with {\arrow[scale=2]{>}}}, postaction={decorate}] (-150:1.1547cm) -- (0,0); \draw[thick, line width=1pt, 
dashed] (-150:1.1547cm) -- (-120:2cm); \draw [thick, line width=1pt, dashed, decoration={markings, mark=at position 0.5 with {\arrow[scale=2]{<}}}, postaction={decorate}] (-150:1.1547cm) -- (-180:2cm); \draw[fill=white] (-30:1.1547cm) circle (1.5pt); \draw[fill=white] (-90:1.1547cm) circle (1.5pt); \draw[fill=white] (-150:1.1547cm) circle (1.5pt); \node at (-130:1cm) {$e_1$}; \node at (-50:1cm) {$e_2$}; \node at (-70:1.3cm) {$e''_1$}; \node at (-110:1.3cm) {$e'_1$}; \node at (-20:0.75cm) {$e'_2$}; \node at (-11.8:1.28cm) {$e''_2$}; \node at (-29:1.93cm) {$e_3$}; \end{tikzpicture} \captionof{figure}{ } \end{center} Denote by $\Gamma_{II}$ the subgraph of $\Gamma$ formed by all vertices and edges of type II. \begin{theorem} If all zigzags of $\Gamma$ are homogeneous, then $\Gamma_{II}$ is a closed $2$-cell embedding of a simple Eulerian digraph such that every face is a directed cycle and $\Gamma={\rm T}(\Gamma_{II})$. Conversely, if $\Gamma'$ is a closed $2$-cell embedding of a simple Eulerian digraph and every face is a directed cycle, then the triangulation ${\rm T}(\Gamma')$ admits a unique $z$-orien\-tation such that all zigzags of ${\rm T}(\Gamma')$ are homogeneous and $\Gamma'$ is a subgraph of ${\rm T}(\Gamma')$ formed by all vertices and edges of type II. \end{theorem} \begin{proof} (I). Let $v$ be a vertex of $\Gamma$. Consider all faces containing $v$ and take the edge on each of these faces which does not contain $v$. All such edges form a cycle which will be denoted by $C(v)$. Suppose that all zigzags of $\Gamma$ are homogeneous and consider any edge $e_{1}$ of type II. Let $v_{1}$ and $v_{2}$ be the vertices of this edge such that $e_{1}$ is directed from $v_{1}$ to $v_{2}$. We choose one of the two faces containing $e_1$ and take in this face the vertex $v$ which does not belong to $e_1$. Let $e'_{1}$ and $e''_{1}$ be the edges which contain $v$ and occur in a certain zigzag $Z\in \tau$ immediately after $e_{1}$, see Fig.5. 
Denote by $e_2$ the third edge of the face containing $e'_{1}$ and $e''_{1}$. This edge contains $v_{2}$ and one further vertex, say $v_{3}$. Since $Z$ is homogeneous, the edges $e'_{1}$ and $e''_{1}$ are of type I, and consequently, $e_2$ is of type II. The zigzag which goes through $e'_1$ from $v$ to $v_{2}$ belongs to $\tau$ (this follows easily from the fact that $Z$ goes through $e'_{1}$ in the opposite direction and $e'_1$ is an edge of type I). The latter guarantees that the edge $e_{2}$ is directed from $v_{2}$ to $v_{3}$. By our assumption, the edge $e_{3}$ which occurs in $Z$ immediately after $e'_{1}$ and $e''_{1}$ is of type II. This edge is directed from $v_{3}$ to a certain vertex $v_{4}$. So, $e_{1},e_{2},e_{3}$ are consecutive edges of the cycle $C(v)$ and each $e_i$ is directed from $v_i$ to $v_{i+1}$. Consider the zigzag from $\tau$ which contains the sequence $e_2, e''_1$. The next edge in this zigzag connects $v$ and $v_{4}$ (the zigzag goes from $v$ to $v_{4}$). Let $e_{4}$ be the edge which occurs in the zigzag immediately after it. Then $e_{4}$ is an edge of type II (by our assumption); it belongs to $C(v)$ and leaves $v_{4}$. Recursively, we establish that $C(v)$ is a directed cycle formed by edges of type II and every edge containing $v$ is of type I, i.e. $v$ is a vertex of type I. Now, we consider the other face containing $e_1$ and take the vertex $v'$ of this face which does not belong to $e_{1}$. Using the same arguments, we establish that $v'$ is a vertex of type I and $C(v')$ is a directed cycle formed by edges of type II.
\begin{center} \begin{tikzpicture}[scale=1.4, xshift=0.3cm] \draw[fill=black] (0,0) circle (1.5pt); \draw[fill=black] (0:2cm) circle (1.5pt); \draw[fill=black] (-45:2cm) circle (1.5pt); \draw[fill=black] (-90:2cm) circle (1.5pt); \draw[fill=black] (-135:2cm) circle (1.5pt); \draw[fill=black] (-180:2cm) circle (1.5pt); \draw[thick, line width=2pt] (0:2cm) -- (-45:2cm) -- (-90:2cm) -- (-135:2cm) -- (-180:2cm); \draw [thick, decoration={markings, mark=at position 0.49 with {\arrow[scale=2,>=stealth]{>}}, mark=at position 0.61 with {\arrow[scale=2,>=stealth]{>}}}, postaction={decorate}] (-180:2cm) -- (-135:2cm); \draw [thick, decoration={markings, mark=at position 0.49 with {\arrow[scale=2,>=stealth]{>}}, mark=at position 0.61 with {\arrow[scale=2,>=stealth]{>}}}, postaction={decorate}] (-135:2cm) -- (-90:2cm); \draw [thick, decoration={markings, mark=at position 0.49 with {\arrow[scale=2,>=stealth]{>}}, mark=at position 0.61 with {\arrow[scale=2,>=stealth]{>}}}, postaction={decorate}] (-90:2cm) -- (-45:2cm); \draw [thick, decoration={markings, mark=at position 0.49 with {\arrow[scale=2,>=stealth]{>}}, mark=at position 0.61 with {\arrow[scale=2,>=stealth]{>}}}, postaction={decorate}] (-45:2cm) -- (0:2cm); \draw[thick, line width=1pt, dashed] (-180:2cm) -- (0,0) -- (0:2cm); \draw[thick, line width=1pt, dashed] (-45:2cm) -- (0,0); \draw[thick, line width=1pt, dashed] (-90:2cm) -- (0,0); \draw[thick, line width=1pt, dashed] (-135:2cm) -- (0,0); \node at (0.15cm,0.15cm) {$v$}; \node at (-45:2.25cm) {$v_4$}; \node at (-90:2.25cm) {$v_3$}; \node at (-135:2.25cm) {$v_2$}; \node at (-180:2.25cm) {$v_1$}; \node at (-157.5:2.08cm) {$e_1$}; \node at (-112.5:2.08cm) {$e_2$}; \node at (-67.5:2.08cm) {$e_3$}; \node at (-22.5:2.08cm) {$e_4$}; \node at (-145:1.1cm) {$e'_1$}; \node at (-100:1.1cm) {$e''_1$}; \node[color=white] at (0:2.25cm) {s}; \end{tikzpicture} \captionof{figure}{ } \end{center} For every vertex $v$ of type I we can take a face containing $v$ and the edge of this 
face which does not contain $v$. This edge is of type II (since the remaining two edges of the face are of type I). The above arguments show that the following assertions are fulfilled: \begin{enumerate} \item[(1)] vertices of type {\rm I} exist and for every such vertex $v$ the cycle $C(v)$ is a directed cycle formed by edges of type {\rm II}; \item[(2)] for every edge of type {\rm II} there are precisely two vertices $v$ and $v'$ of type {\rm I} such that this edge is contained in the cycles $C(v)$ and $C(v')$. \end{enumerate} Similarly, for every edge $e$ of type I we take a face containing $e$; this face contains an edge of type II, which implies that $e$ connects vertices of different types. Consider $\Gamma_{II}$. Observe that any two vertices of type II in $\Gamma$ can be connected by a path formed by edges of type II which means that $\Gamma_{II}$ is connected. It is easy to see that $\Gamma_{II}$ is a $2$-cell embedding of a simple digraph such that every face is the directed cycle $C(v)$ for a certain vertex $v$ of type I; in particular, this $2$-cell embedding is closed. Lemma 1 implies that $\Gamma_{II}$ is an Eulerian digraph. The equality $\Gamma={\rm T}(\Gamma_{II})$ is obvious. The following remark will be used to prove the second part of the theorem. The conditions (1) and (2) guarantee that every zigzag of $\Gamma$ containing an edge of type II is homogeneous. Recall that the number of edges of type I is twice the number of edges of type II. This implies that there is no zigzag containing edges of type I only (since every edge occurs twice in a unique zigzag from $\tau$ or it occurs once in precisely two distinct zigzags from $\tau$). Therefore, every zigzag of $\Gamma$ is homogeneous if (1) and (2) hold. (II). Suppose that $\Gamma'$ is a closed $2$-cell embedding of a simple Eulerian digraph such that every face is a directed cycle. Let $e_1,\dots,e_n$ be the directed cycle formed by all edges of a certain face of $\Gamma'$. 
For every $i\in \{1,\dots,n\}$ we define $j(i)=i+2({\rm mod}\, n)$ and denote by $e'_i$ and $e''_{i}$ the edges containing the vertex $v_{F}$ and intersecting $e_{i}$ and $e_{j(i)}$, respectively. Consider the zigzag of ${\rm T}(\Gamma')$ which contains the sequence $e_{i},e'_{i},e''_{i}, e_{j(i)}$. It passes through $e_{i}$ and $e_{j(i)}$ according to the directions of these edges; and the same holds for every edge of $\Gamma'$ which occurs in this zigzag. Such a zigzag exists for any pair formed by a face of $\Gamma'$ and an edge on this face. The collection of all such zigzags is a $z$-orientation of ${\rm T}(\Gamma')$ with the following properties: all edges of $\Gamma'$ are of type II and every $v_F$ is a vertex of type I. This implies that ${\rm T}(\Gamma')$ satisfies the conditions (1) and (2) which gives the claim. \end{proof} \begin{exmp}{\rm If $BP_n$ is as in Example 5, then only $a$ and $b$ are vertices of type I and $C(a)=C(b)$ is the directed cycle formed by the edges of the base of the bipyramid. Conversely, if all zigzags of $\Gamma$ are homogeneous and there are precisely two vertices of type I, then $\Gamma$ is a bipyramid (easy verification). }\end{exmp} \begin{exmp}{\rm As in Example 6, we assume that $\Gamma'$ is a triangulation of $M$. If $\tau'$ is a $z$-orientation of $\Gamma'$ such that all faces are of type II (see \cite[Example 4]{PT3} for a $z$-knotted triangulation of ${\mathbb S}^2$ whose faces are of type II), then $t(\tau')$ is the $z$-orientation of ${\rm T}(\Gamma')$ described in the second part of Theorem 1. Conversely, if all zigzags of $\Gamma$ are homogeneous and for every vertex $v$ of type I the cycle $C(v)$ is a triangle, then $\Gamma_{II}$ is a triangulation of $M$ and all faces in this triangulation are of type II for the $z$-orientation $\tau'$ satisfying $\tau=t(\tau')$ (recall that the triangulation $\Gamma$ is considered together with fixed $z$-orientation $\tau$). 
}\end{exmp} \section{Gluing of triangulations} In this section, we describe how two triangulations of special types can be glued together to get a $z$-knotted triangulation with homogeneous zigzags. Let $\Gamma$ be a triangulation with a fixed $z$-orientation. The {\it face shadow} of a zigzag $e_{1},\dots,e_{n}$ is the cyclic sequence $F_{1},\dots,F_{n}$, where $F_{i}$ is the face containing $e_{i}$ and $e_{i+1}$ if $i<n$ and the face $F_{n}$ contains $e_{n}$ and $e_{1}$. We will use the following fact (whose proof is a simple verification): if $e$ is an edge of type II which occurs in a certain zigzag of $\Gamma$ twice and $F,F'$ are the faces containing $e$, then the face shadow of this zigzag contains each of the sequences $F,F'$ and $F',F$ once. From now on, we suppose that $\Gamma$ is a $z$-knotted triangulation of a surface $M$ such that all faces are of type I and the zigzag is homogeneous. Let $e_{1}$ and $e_{2}$ be edges of type II in $\Gamma$ with a common vertex $v$ and such that the zigzag of $\Gamma$ is a cyclic sequence of type $$e_{1},\dots,e_{2},\dots,e_{1}, \dots,e_{2}, \dots;$$ in what follows, every such pair of edges will be called {\it special}. Let also $F^{-}_{i}$ and $F^{+}_{i}$ be the faces containing $e_i$. These four faces are mutually distinct (since each face contains only one edge of type II). Every zigzag is a cyclic sequence and we can start from any of the four times when $e_{i}$ occurs in the zigzag. Therefore, we can assume that the face shadow of the zigzag of $\Gamma$ is $$F^{-}_{1},F^{+}_{1},\dots, F^{-}_{2},F^{+}_{2},\dots, F^{+}_{1},F^{-}_{1},\dots, F^{+}_{2},F^{-}_{2},\dots$$ and the faces $F^{-}_{1},F^{-}_{2}$ are on the same side of $e_{1}\cup e_{2}$ and the faces $F^{+}_{1},F^{+}_{2}$ are on the other side, see Fig.6(a). Denote by $v_{i}$ the vertex of $e_{i}$ distinct from $v$. 
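Whether two edges form a special pair is read off from the single zigzag of $\Gamma$, so it is convenient to be able to list the zigzags of a concrete triangulation. The following sketch is purely illustrative and not part of the construction: it implements the standard zigzag walk, in which any two consecutive edges, but no three, lie in a common face, for a closed triangulation given by its vertex triples; all function names are our own.

```python
from itertools import combinations

def zigzags(faces):
    """Enumerate the zigzags of a closed triangulation, given as a list of
    vertex triples.  Returns the sorted list of zigzag lengths (numbers of
    edge passages), counting each zigzag once (a zigzag and its reverse
    are identified)."""
    faces = [frozenset(f) for f in faces]
    edge_faces = {}
    for F in faces:
        for e in combinations(sorted(F), 2):
            edge_faces.setdefault(frozenset(e), []).append(F)

    def step(state):
        # from the directed edge (u, v) inside face F = {u, v, w}, the
        # walk continues along (v, w) into the other face on {v, w}
        (u, v), F = state
        (w,) = tuple(F - {u, v})
        F2 = next(G for G in edge_faces[frozenset((v, w))] if G != F)
        return ((v, w), F2)

    def rev(state):
        # the state of the reversed zigzag corresponding to `state`
        (u, v), F = state
        F2 = next(G for G in edge_faces[frozenset((u, v))] if G != F)
        return ((v, u), F2)

    states = set()
    for e, Fs in edge_faces.items():
        u, v = tuple(e)
        for F in Fs:
            states.update({((u, v), F), ((v, u), F)})

    seen, lengths = set(), []
    for s0 in states:
        if s0 in seen:
            continue
        orbit, s = [], s0
        while s not in seen:         # follow the walk until it closes up
            seen.add(s)
            orbit.append(s)
            s = step(s)
        for t in orbit:              # mark the reversed zigzag as visited
            seen.add(rev(t))
        lengths.append(len(orbit))
    return sorted(lengths)
```

On the tetrahedron this walk yields three zigzags of length four; on the bipyramid $BP_{3}$ it yields a single zigzag passing through each of the nine edges twice, i.e. $BP_{3}$ is $z$-knotted, in agreement with the observations on bipyramids used later.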
Let $\Gamma'$ be a triangulation of a surface $M'$ which contains precisely two zigzags (up to reversing) and admits a $z$-orientation such that all faces are of type I and the zigzags are homogeneous. Consider edges $e'_{1}$ and $e'_{2}$ of type II in $\Gamma'$ with a common vertex $v'$ and such that $e'_{1}$ occurs twice in one of the zigzags of $\Gamma'$ and $e'_{2}$ occurs twice in the other. Denote by $F'^{-}_{i}$ and $F'^{+}_{i}$ the faces containing $e'_i$. These four faces are mutually distinct. The shadows of the zigzags are cyclic sequences $$F'^{-}_{1},F'^{+}_{1},\dots, F'^{+}_{1},F'^{-}_{1},\dots\;\mbox{ and }\;F'^{-}_{2},F'^{+}_{2},\dots, F'^{+}_{2},F'^{-}_{2},\dots;$$ as above, without loss of generality we can assume that $F'^{-}_{1},F'^{-}_{2}$ are on the same side of $e'_{1}\cup e'_{2}$ and the faces $F'^{+}_{1},F'^{+}_{2}$ are on the other side. Let $v'_{i}$ be the vertex of $e'_{i}$ distinct from $v'$. Our last assumption is the following: \begin{enumerate} \item[(*)] for every $i\in \{1,2\}$ the edge $e'_{i}$ enters or leaves the vertex $v'$ if and only if the edge $e_{i}$ enters or leaves the vertex $v$, respectively. \end{enumerate} Now, we construct a triangulation of the connected sum $M\# M'$ by rearranging the triangulations $\Gamma,\Gamma'$ and gluing them together. In the triangulation $\Gamma$, we split each $e_{i}$ into two edges $e^{+}_{i}$ and $e^{-}_{i}$ such that $v_{1}$ belongs to $e^{+}_{1}, e^{-}_{1}$ and $v_{2}$ belongs to $e^{+}_{2},e^{-}_{2}$. Also, the vertex $v$ is split into two vertices $v^{+}$ and $v^{-}$ such that $v^{\delta}$, $\delta\in \{+,-\}$ is a common vertex of the edges $e^{\delta}_{1}$ and $e^{\delta}_{2}$, see Fig.6(b). The vertex of the face $F^{\delta}_{i}$ which does not belong to $e_{i}$ will be connected with $v^{\delta}$. 
Similarly, we take a vertex of $\Gamma$ connected with $v$ by a certain edge $e$ and join it with $v^{\delta}$ if the edge $e$ and the faces $F^{\delta}_{i}$ are on the same side of $e_{1}\cup e_{2}$, see Fig.6(b). In this way, we get a graph $\Gamma_{new}$ embedded in $M$. The face of $\Gamma_{new}$ bounded by the edges $e^{+}_{1},e^{-}_{1},e^{+}_{2},e^{-}_{2}$ will be denoted by $F$. We repeat the above construction for the triangulation $\Gamma'$ and obtain a graph $\Gamma'_{new}$ embedded in $M'$. As above, every $e'_{i}$ is split into two edges $e'^{+}_{i},e'^{-}_{i}$ and the face bounded by $e'^{+}_{1},e'^{-}_{1},e'^{+}_{2},e'^{-}_{2}$ is denoted by $F'$. We remove the interiors of $F$ and $F'$ from $M$ and $M'$ (respectively) and glue together $e^{\delta}_{i}$ and $e'^{\delta}_{i}$ for $i\in \{1,2\}$ such that $v_{1}$ is identified with $v'_{1}$ and $v_{2}$ with $v'_{2}$. We get a triangulation of $M\# M'$ which will be denoted by ${\rm G}(\Gamma, \Gamma')$. \begin{center} \begin{tikzpicture}[scale=1] \begin{scope}[xshift=0.5cm] \draw[fill=black] (0,0) circle (2pt); \draw[fill=black] (2,0) circle (2pt); \draw[fill=black] (-2,0) circle (2pt); \draw[fill=black] (0,2) circle (2pt); \draw[fill=black] (0,-2) circle (2pt); \draw[fill=black] (-1,1.7) circle (2pt); \draw[fill=black] (1,1.7) circle (2pt); \draw[fill=black] (-1,-1.7) circle (2pt); \draw[fill=black] (1,-1.7) circle (2pt); \draw[thick] (0,0) -- (0,2); \draw[thick] (0,0) -- (0,-2); \draw[thick] (0,0) -- (2,0); \draw[thick] (0,0) -- (-2,0); \draw[thick] (-1,1.7) -- (-2,0); \draw[thick] (-1,1.7) -- (0,0); \draw[thick] (1,1.7) -- (2,0); \draw[thick] (1,1.7) -- (0,0); \draw[thick] (-1,-1.7) -- (-2,0); \draw[thick] (-1,-1.7) -- (0,0); \draw[thick] (1,-1.7) -- (2,0); \draw[thick] (1,-1.7) -- (0,0); \node at (0.3,0.13) {$v$}; \node at (-1,-0.2) {$e_1$}; \node at (1,-0.2) {$e_2$}; \node at (-1,0.66) {$F^-_1$}; \node at (1,0.66) {$F^-_2$}; \node at (-1,-0.7) {$F^+_1$}; \node at (1,-0.7) {$F^+_2$}; \end{scope} 
\begin{scope}[xshift=-0.5cm] \draw[fill=black] (8,0) circle (2pt); \draw[fill=black] (4,0) circle (2pt); \draw[fill=black] (6,2) circle (2pt); \draw[fill=black] (6,-2) circle (2pt); \draw[fill=black] (5,1.7) circle (2pt); \draw[fill=black] (7,1.7) circle (2pt); \draw[fill=black] (5,-1.7) circle (2pt); \draw[fill=black] (7,-1.7) circle (2pt); \draw[fill=black] (6,0.4) circle (2pt); \draw[fill=black] (6,-0.4) circle (2pt); \draw[thick] (6,0.4) -- (6,2); \draw[thick] (6,-0.4) -- (6,-2); \draw[thick] (5,1.7) -- (4,0); \draw[thick] (5,1.7) -- (6,0.4); \draw[thick] (7,1.7) -- (8,0); \draw[thick] (7,1.7) -- (6,0.4); \draw[thick] (5,-1.7) -- (4,0); \draw[thick] (5,-1.7) -- (6,-0.4); \draw[thick] (7,-1.7) -- (8,0); \draw[thick] (7,-1.7) -- (6,-0.4); \draw [xshift=4cm, thick] plot [smooth, tension=2] coordinates { (0.05,0) (2,0.4) (3.95,0)}; \draw [xshift=4cm, thick] plot [smooth, tension=2] coordinates { (0.05,0) (2,-0.4) (3.95,0)}; \node at (1,-2.4) {$(a)$}; \node at (6,-2.4) {$(b)$}; \node at (6,-0.15) {$v^+$}; \node at (6,0.2) {$v^-$}; \node at (5.2,-0.14) {$e^+_1$}; \node at (6.75,-0.14) {$e^+_2$}; \node at (4.8,0.12) {$e^-_1$}; \node at (7.2,0.12) {$e^-_2$}; \node at (5,0.8) {$F^-_1$}; \node at (7,0.8) {$F^-_2$}; \node at (5,-0.85) {$F^+_1$}; \node at (7,-0.85) {$F^+_2$}; \end{scope} \end{tikzpicture} \captionof{figure}{ } \end{center} \begin{lemma}\label{lemma-gluing} ${\rm G}(\Gamma, \Gamma')$ is a $z$-knotted triangulation, where all faces are of type {\rm I} and the zigzag is homogeneous. It contains precisely $m+m'$ vertices of type {\rm I}, where $m$ and $m'$ are the numbers of vertices of type {\rm I} in $\Gamma$ and $\Gamma'$, respectively. \end{lemma} We will use the following simple lemma to prove Lemma \ref{lemma-gluing}. 
\begin{lemma}\label{gl-0} If sequences $p_{1},\dots,p_{t},e$ and $e,q_{1},\dots,q_{s}$ are parts of zigzags in a certain triangulation and the edges $p_{t}$ and $q_{1}$ belong to distinct faces, then the sequence $$p_{1},\dots,p_{t},e,q_{1},\dots,q_{s}$$ is a part of a zigzag in this triangulation. \end{lemma} \begin{proof} Easy verification. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma-gluing}] The zigzag of $\Gamma$ can be considered as a cyclic sequence $$e_{1},A^{+}_{1},e_{2},A^{+}_{2},e_{1},A^{-}_{1},e_{2},A^{-}_{2},$$ where each $A^{\delta}_{i}$, $\delta\in \{+,-\}$ is a sequence of edges. The face shadow of this zigzag is a cyclic sequence $$F^{-}_{1},F^{+}_{1},\dots,F^{-}_{2},F^{+}_{2},\dots, F^{+}_{1},F^{-}_{1},\dots,F^{+}_{2},F^{-}_{2},\dots$$ and we can assume that the first edge from $A^{+}_{1}$ belongs to $F^{+}_{1}$. Then the last edge from $A^{+}_{1}$ is contained in $F^{-}_{2}$. The first and the last edges from $A^{+}_{2}$ belong to $F^{+}_{2}$ and $F^{+}_{1}$ (respectively). The first edge from $A^{-}_{1}$ is contained in $F^{-}_{1}$ and the last edge is in $F^{+}_{2}$. The first and the last edges from $A^{-}_{2}$ belong to $F^{-}_{2}$ and $F^{-}_{1}$ (respectively). Similarly, the zigzags of $\Gamma'$ are cyclic sequences $$e'_{1},B^{+}_{1},e'_{1},B^{-}_{1}\;\mbox{ and }\;e'_{2},B^{+}_{2},e'_{2},B^{-}_{2},$$ where all $B^{\delta}_{i}$ are sequences of edges. The corresponding face shadows are cyclic sequences $$F'^{-}_{1},F'^{+}_{1},\dots,F'^{+}_{1},F'^{-}_{1},\dots\;\mbox{ and }\;F'^{-}_{2},F'^{+}_{2},\dots,F'^{+}_{2},F'^{-}_{2},\dots,$$ and we suppose that the first edge from $B^{+}_{i}$ belongs to $F'^{+}_{i}$. Then the last edge from $B^{+}_{i}$ also belongs to $F'^{+}_{i}$. The first and the last edges from $B^{-}_{i}$ belong to $F'^{-}_{i}$. 
The above observations show that the following sequences are parts of zigzags in the triangulation ${\rm G}(\Gamma, \Gamma')$: \begin{enumerate} \item[$\bullet$] $e^{\delta}_{1},A^{\delta}_{1},e^{-\delta}_{2}$ with $\delta\in \{+,-\}$ ($-+=-$ and $--=+$), \item[$\bullet$] $e^{\delta}_{2},A^{\delta}_{2},e^{\delta}_{1}$, \item[$\bullet$] $e^{\delta}_{i},B^{\delta}_{i},e^{\delta}_{i}$ with $\delta\in \{+,-\}$. \end{enumerate} Lemma \ref{gl-0} implies that the cyclic sequence \begin{equation}\label{eq1} e^{+}_{1},A^{+}_{1},e^{-}_{2},B^{-}_{2},e^{-}_{2},A^{-}_{2}, e^{-}_{1},B^{-}_{1},e^{-}_{1},A^{-}_{1},e^{+}_{2},B^{+}_{2},e^{+}_{2},A^{+}_{2},e^{+}_{1},B^{+}_{1} \end{equation} is a zigzag of ${\rm G}(\Gamma, \Gamma')$. This zigzag contains all $A^{\delta}_{i}$ and $B^{\delta}_i$. Also, each $e^{\delta}_{i}$ occurs in the zigzag twice. So, our zigzag passes through every edge of ${\rm G}(\Gamma, \Gamma')$ twice, which means that this triangulation is $z$-knotted. Since the zigzag of ${\rm G}(\Gamma, \Gamma')$ is the cyclic sequence \eqref{eq1}, every edge of type I from $\Gamma$ or $\Gamma'$ corresponds to an edge of type I in ${\rm G}(\Gamma, \Gamma')$. Therefore, every face of ${\rm G}(\Gamma, \Gamma')$ contains two edges of type I, i.e. it is a face of type I. The remaining edges of ${\rm G}(\Gamma, \Gamma')$ are of type II, which implies that the zigzag of ${\rm G}(\Gamma, \Gamma')$ is homogeneous. Also, a vertex of ${\rm G}(\Gamma, \Gamma')$ is of type I if and only if it corresponds to a vertex of type I in $\Gamma$ or $\Gamma'$. \end{proof} \begin{rem}\label{rem2}{\rm Keeping the above notations, we consider the case when $\Gamma'$ contains precisely four zigzags $Z^{+}_{1},Z^{-}_{1},Z^{+}_{2},Z^{-}_{2}$ (up to reversing) and every $e'_{i}$ occurs in each of the zigzags $Z^{+}_{i},Z^{-}_{i}$ once. 
We suppose that the face shadow of $Z^{+}_{i}$ contains the sequence $F'^{-}_{i},F'^{+}_{i}$ and the face shadow of $Z^{-}_{i}$ contains the reversed sequence $F'^{+}_{i},F'^{-}_{i}$. We remove from these zigzags the edge $e'_{i}$ and obtain two sequences of edges which will be denoted by $C^{+}_{i}$ and $C^{-}_{i}$, respectively. For every $\delta\in \{+,-\}$ the first edge from $C^{\delta}_{i}$ belongs to $F'^{\delta}_{i}$ and the last edge is contained in $F'^{-\delta}_{i}$ (as above, $-+=-$ and $--=+$). In other words, we have the sequence $e^{\delta}_{i},C^{\delta}_{i},e^{-\delta}_{i}$ instead of $e^{\delta}_{i},B^{\delta}_{i},e^{\delta}_{i}$ and the cyclic sequence \begin{equation}\label{eq2} e^{+}_{1},A^{+}_{1},e^{-}_{2},C^{-}_{2},e^{+}_{2},A^{+}_{2}, e^{+}_{1},C^{+}_{1},e^{-}_{1},A^{-}_{1},e^{+}_{2},C^{+}_{2},e^{-}_{2},A^{-}_{2},e^{-}_{1},C^{-}_{1} \end{equation} is a zigzag of ${\rm G}(\Gamma, \Gamma')$. It is easy to check that all statements from Lemma \ref{lemma-gluing} hold. }\end{rem} In what follows, we will consider the triangulation ${\rm G}(\Gamma, \Gamma')$ for both cases, when $\Gamma'$ contains precisely two or four zigzags (up to reversing). \section{Concordant special pairs} Let $\Gamma,\Gamma'$ and $e_{1},e_{2},e'_{1},e'_{2}$ be as in the previous section. The zigzag of $\Gamma$ is the cyclic sequence $$e_{1},A^{+}_{1},e_{2},A^{+}_{2},e_{1},A^{-}_{1},e_{2},A^{-}_{2}.$$ If $\Gamma'$ contains precisely two zigzags (up to reversing), then these zigzags are the cyclic sequences $$e'_{1},B^{+}_{1},e'_{1},B^{-}_{1}\;\mbox{ and }\;e'_{2},B^{+}_{2},e'_{2},B^{-}_{2}.$$ In the case when $\Gamma'$ contains precisely four zigzags up to reversing (Remark \ref{rem2}), $C^{+}_{i}$ and $C^{-}_{i}$ with $i\in\{1,2\}$ are sequences of edges obtained from these zigzags by removing $e'_{i}$. 
Recall that two distinct edges $c_{1}$ and $c_{2}$ of type II in $\Gamma$ form a {\it special pair} if the zigzag of $\Gamma$ is a cyclic sequence of type $$c_{1},\dots,c_{2},\dots,c_{1}, \dots,c_{2}, \dots$$ and the edges have a common vertex. We say that special pairs $c_{1},c_{2}$ and $t_{1},t_{2}$ in $\Gamma$ are {\it concordant} if $c_{i}\ne t_{j}$ for any $i,j\in \{1,2\}$ and the zigzag of $\Gamma$ is a cyclic sequence of type $$c_{1},\dots,t_{1},\dots,c_{2},\dots, t_{2},\dots,c_{1},\dots,t_{1},\dots,c_{2},\dots, t_{2},\dots;$$ in particular, if $t_{1},t_{2}$ is a special pair concordant to the special pair $e_{1},e_{2}$, then $t_i$ belongs to both $A^{+}_{i}$ and $A^{-}_{i}$ for every $i\in \{1,2\}$. It is clear that this relation is symmetric. \begin{lemma}\label{lemma-c1} The following assertions are fulfilled: \begin{enumerate} \item[(A)] Every special pair $t_{1},t_{2}$ in $\Gamma$ concordant to the special pair $e_{1},e_{2}$ is a special pair in ${\rm G}(\Gamma, \Gamma')$. \item[(B)] If two concordant special pairs $c_{1},c_{2}$ and $t_{1},t_{2}$ in $\Gamma$ both are concordant to the special pair $e_{1},e_{2}$, then they are concordant special pairs in ${\rm G}(\Gamma, \Gamma')$. \end{enumerate} \end{lemma} \begin{proof} (A). This follows from the fact that the zigzag of ${\rm G}(\Gamma, \Gamma')$ is the cyclic sequence \eqref{eq1} or \eqref{eq2} and every $t_i$ belongs to both $A^{+}_{i}$ and $A^{-}_{i}$. (B). Without loss of generality we can assume that the zigzag of $\Gamma$ is a cyclic sequence of type $$e_{1},\underbrace{\dots,c_{1},\dots,t_{1},\dots}_{A^{+}_{1}},e_{2}, \underbrace{\dots,c_{2},\dots, t_{2},\dots}_{A^{+}_{2}},e_{1}, \underbrace{\dots, c_{1},\dots,t_{1},\dots}_{A^{-}_{1}},e_{2}, \underbrace{\dots,c_{2},\dots, t_{2},\dots}_{A^{-}_{2}}$$ which gives the claim, since the zigzag of ${\rm G}(\Gamma, \Gamma')$ is the cyclic sequence \eqref{eq1} or \eqref{eq2}. 
\end{proof} \begin{lemma}\label{lemma-c2} Let $c'_{1}$ and $c'_{2}$ be edges of type II in $\Gamma'$ with a common vertex and such that $c'_i$ belongs to both $B^{+}_{i},B^{-}_{i}$ or both $C^{+}_{i},C^{-}_{i}$ for every $i\in \{1,2\}$. Then $c'_{1},c'_{2}$ is a special pair in ${\rm G}(\Gamma, \Gamma')$. If $c_{1},c_{2}$ is a special pair in $\Gamma$ concordant to the special pair $e_{1},e_{2}$, then $c_{1},c_{2}$ and $c'_{1},c'_{2}$ are concordant special pairs in ${\rm G}(\Gamma, \Gamma')$. \end{lemma} \begin{proof} As above, we use the zigzag descriptions \eqref{eq1} or \eqref{eq2}. \end{proof} \section{Tree structured $z$-knotted spherical triangulations with homogeneous zigzags} We apply the results of Sections 4 and 5 to the bipyramids with fixed $z$-orienta\-tions such that all faces are of type I and all zigzags are homogeneous. In this case, an edge of $BP_n$ is of type II if and only if it belongs to the base of the bipyramid. We will use the following observations: \begin{enumerate} \item[(1)] If $n$ is odd, then $BP_n$ is $z$-knotted and any two consecutive edges of the base form a special pair. Any two special pairs $e_{1},e_{2}$ and $c_{1},c_{2}$ in the bipyramid are concordant if $e_{i}\ne c_{j}$ for any $i,j\in \{1,2\}$. \item[(2)] In the case when $n=2k$ and $k$ is odd, $BP_n$ contains precisely two zigzags (up to reversing). If $c$ and $c'$ are consecutive edges of the base, then $c$ occurs twice in one of these zigzags and $c'$ occurs twice in the other. \item[(3)] Suppose that $n=2k$ and $k$ is even. Then $BP_n$ contains precisely four zigzags (up to reversing). If $c$ and $c'$ are consecutive edges of the base, then $c$ occurs once in two distinct zigzags and $c'$ occurs once in the two others. 
\end{enumerate} Let us take two consecutive edges $e_{1},e_{2}$ in the base of $BP_{2n+1}$ together with two consecutive edges $e'_{1},e'_{2}$ in the base of $BP_{2k}$ and apply Lemma \ref{lemma-gluing} or Remark \ref{rem2} to the corresponding gluing $\Gamma_{1}={\rm G}(BP_{2n+1},BP_{2k})$. By the lemmas from the previous section and the above observations, any two consecutive edges $c_{1},c_{2}$ in the base of $BP_{2n+1}$ and any two consecutive edges $c'_{1},c'_{2}$ in the base of $BP_{2k}$ form special pairs in $\Gamma_{1}$ if $c_{i}\ne e_{j}$ and $c'_{i}\ne e'_{j}$ for any $i,j\in \{1,2\}$. The set of all such pairs $c_{1},c_{2}$ and $c'_{1},c'_{2}$ will be denoted by ${\mathcal C}_1$. Lemmas \ref{lemma-c1} and \ref{lemma-c2} guarantee that any two pairs from ${\mathcal C}_1$ are concordant if there is no edge which occurs in both pairs. Therefore, we can repeat the gluing construction for any pair from ${\mathcal C}_1$ and any pair of consecutive edges in the base of $BP_{2m}$ and obtain the triangulation $\Gamma_{2}={\rm G}(\Gamma_{1},BP_{2m})$. Any two consecutive edges in the base of each of the bipyramids $BP_{2n+1},BP_{2k},BP_{2m}$ form a special pair in $\Gamma_2$ if they are not used for the gluing. Let ${\mathcal C}_{2}$ be the set of all such pairs. As above, any two pairs from ${\mathcal C}_2$ are concordant if there is no edge which occurs in both pairs. So, we can repeat the gluing construction again. \begin{rem}{\rm If $\Gamma$ and $\Gamma'$ are $z$-knotted bipyramids, then ${\rm G}(\Gamma, \Gamma')$ is not $z$-knotted. Similarly, the gluing of $BP_{2i}$ and $BP_{2j}$ is not $z$-knotted. }\end{rem} Consider a rooted tree whose vertices are labeled by natural numbers according to the following rules: \begin{enumerate} \item[$\bullet$] The root is labeled by $2k+1$, where $k$ is not less than the root degree. 
\item[$\bullet$] If a vertex is not the root or a leaf, then it is labeled by $2k$ such that $k$ is not less than the degree of this vertex. \item[$\bullet$] Every leaf is labeled by an arbitrary even number not less than $4$. \end{enumerate} Using this tree, we construct a $z$-knotted triangulation of the sphere ${\mathbb S}^2$ whose faces are of type I and the zigzag is homogeneous. Let $v_{1},\dots,v_{m}$ be the vertices adjacent to the root. Suppose that these vertices are labeled by $2i_{1},\dots,2i_{m}$, respectively. Since $k\ge m$, the bipyramid $BP_{2k+1}$ contains at least $m$ mutually concordant special pairs. Using these pairs we glue the bipyramids $BP_{2i_{1}},\dots,BP_{2i_{m}}$ to $BP_{2k+1}$. Next, we take one of the vertices $v_{1},\dots,v_{m}$ which is not a leaf and repeat the above construction for all its neighbors different from the root. Step by step, we construct a triangulation of ${\mathbb S}^2$ such that the gluing of $BP_{2j}$ to $BP_{2i}$ corresponds to the edge connecting the vertex labeled by $2i$ with the vertex labeled by $2j$. This triangulation is $z$-knotted and its zigzag is homogeneous. It must be pointed out that such a triangulation is not uniquely defined by a given tree. \section{Final remarks} It is an open problem to describe all cases when triangulations with homogeneous zigzags are $z$-knotted. It was observed by M. Kwiatkowski that an analogue of Lemma \ref{lemma-gluing} holds for the gluing by $6k+1$ or $6k+5$ ($k=0,1,\dots$) pairs of edges of type II with common vertices (the proof is similar to the proof of Lemma \ref{lemma-gluing}, but too many complicated details appear). Using this fact, we can obtain $z$-knotted triangulations with homogeneous zigzags for some surfaces with non-zero genus. 
Also, for every surface of even Euler characteristic (not necessarily orientable) we can construct a family of $z$-knotted maps whose faces are triangles and the zigzags are homogeneous; but the corresponding graphs contain double edges, i.e. these maps are not triangulations of surfaces.
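The special-pair and concordance conditions of Section 5 are purely cyclic-order conditions on the occurrences of the marked edges along a zigzag, so they can be checked mechanically. The following sketch is our own illustration: edges are arbitrary hashable labels, the zigzag is given as a list read cyclically, and the type II and common-vertex requirements are assumed to be verified separately.

```python
def marked_pattern(zigzag, marked):
    """Cyclic subsequence of the marked edges, in the order in which
    they occur along the zigzag."""
    return [e for e in zigzag if e in marked]

def cyclically_equal(a, b):
    """Equality of two sequences up to a cyclic shift."""
    return len(a) == len(b) and any(a[i:] + a[:i] == b for i in range(len(a)))

def is_special_pair(zigzag, c1, c2):
    """c1, c2 (assumed of type II with a common vertex) form a special
    pair iff the zigzag reads c1, ..., c2, ..., c1, ..., c2, ...
    up to a cyclic shift."""
    return cyclically_equal(marked_pattern(zigzag, {c1, c2}),
                            [c1, c2, c1, c2])

def are_concordant(zigzag, c1, c2, t1, t2):
    """Two special pairs are concordant iff the four edges are distinct
    and interleave as c1, t1, c2, t2, c1, t1, c2, t2 along the zigzag."""
    if len({c1, c2, t1, t2}) < 4:
        return False
    return cyclically_equal(marked_pattern(zigzag, {c1, c2, t1, t2}),
                            [c1, t1, c2, t2, c1, t1, c2, t2])
```

Here the edges not named `c1, c2, t1, t2` play the role of the ellipses in the patterns from Section 5.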
\section{Introduction} \label{sec:intro} According to Austin's theory \cite{Austin1962}, every utterance in a dialog has an illocutionary force, which causes an effect over the course of the conversation. Utterances can then be grouped into \ac{da} categories depending on the relationship between words and the meaning of the expression \cite{Bach1979}. A \ac{da} conveys the intention of the speaker rather than the literal meaning of words for each utterance in a dialog. Automatic \ac{da} classification is a crucial preprocessing step for language understanding and dialog systems. This task has been approached using traditional statistical algorithms, for instance \acp{hmm} \cite{Stolcke2000} and \acp{crf} \cite{Zimmermann2009}, and more recently with \ac{dl} models, such as \acp{cnn} \cite{Lee2016}, \acp{rnn} \cite{Kalchbrenner2013, Ortega2018} and \ac{am} \cite{Ortega2017, Ortega2018}, which achieve state-of-the-art results. Several works have shown that context, i.e. preceding utterances, plays an important role in automatically determining the \ac{da} of the current utterance \cite{Lee2016, Ortega2018, Ortega2017}. This fact is also supported by the detailed analysis of the influence of context on \ac{da} recognition presented in \cite{DAcontext:Ribeiro2015}, whose main conclusion is that contextual information helps the \ac{da} classification, as long as such information is distinguishable from the current utterance information. In alignment with the aforementioned approaches, we present a model that employs preceding utterances and the current one. However, the particularity of this model lies in using a linear chain \ac{crf} on top of a \ac{cnn} architecture to predict the \ac{da} sequence at utterance level. 
Using linear chain \ac{crf} layers on top of \ac{nn} models has already been introduced for sequence labeling tasks at the word level such as named entity recognition \cite{lample2016neural}, part-of-speech tagging \cite{andor2016globally} or for joint entity recognition and relation classification \cite{adel2017global}. To the best of our knowledge, all work on \ac{da} classification has been done using only \ac{mt}. Nonetheless, this type of data differs substantially from real data, i.e. \ac{at}, generated by \ac{asr} systems. In this paper, we explore the effect of training and testing the proposed model on \ac{at}. Our goal at this point is to bring the \ac{da} classification task into a more realistic scenario. In sum, we introduce a model that combines \acp{cnn} and \acp{crf} for automatic \ac{da} classification. We train and test our model on different scenarios to contrast the effect of using manual and automatically generated transcriptions from two different ASR architectures (hybrid \ac{tdnn}/\ac{hmm} and \ac{e2e} \ac{asr} systems). Our results show that the combination of \acp{cnn} and \acp{crf} consistently improves the accuracy of the model, achieving state-of-the-art performance on MRDA and SWBD. Furthermore, results on ASR outputs reveal that, although \acp{wer} are comparable, the \ac{e2e} \ac{asr} system seems to be more suitable for DA classification. \vspace{-0.35cm} \section{Dialog Act Classification} \label{sec:format} The \ac{da} classification model proposed in this paper, depicted in \autoref{fig:model}, consists of two parts: a \ac{cnn} that generates vector representations for consecutive utterances and a \ac{crf} that performs \ac{da} sequence labeling. \begin{figure}[h] \begin{minipage}[h]{\linewidth} \centering \centerline{\includegraphics[width=7cm]{DA_tagger_2.png}} \end{minipage} \caption{Model architecture. 
$\oplus$ stands for concatenation.} \label{fig:model}% \end{figure} \vspace{-1em} \subsection{Utterance representation}\label{sec:CNN} Based on \cite{Ortega2017}, the grid-like representations of the current utterance and $n$ previous ones are concatenated and used as input for a \ac{cnn} that generates a vector representation for each of the utterances. \acp{cnn} perform a discrete convolution using a set of different filters on an input matrix, where each column of the matrix is the word embedding of the corresponding word. We use 2D filters $f$ (with width $|f|$) spanning over all embedding dimensions $d$ as described by the following equation: \begin{equation} (w \ast f)(x,y) = \sum_{i=1}^{d}\sum_{j = -|f|/2}^{|f|/2}w(i,j) \cdot f(x-i,y-j) \end{equation} After convolution, an utterance-wise max pooling operation is applied in order to extract the highest activation. Then, the feature maps are concatenated, resulting in one vector per utterance, represented in Figure \ref{fig:model} as $p_{t-2}, p_{t-1}$ and $p_t$. \subsection{\ac{crf}-based \ac{da} sequence labeling}\label{sec:CRF} Given that a dialog is a sequence of utterances, we approach \ac{da} classification as a sequence labeling problem. Therefore, we employ \acp{crf} for this task. The first step is to generate the score vectors, depicted in Figure \ref{fig:model} as $s_{t-2}, s_{t-1}$ and $s_t$, by means of a linear function at each time step $t$: \begin{equation} s_t = Wp_{t}+b \end{equation} where $W$ (weight matrix) and $b$ (bias) are trainable parameters. Using score vectors as input we perform sequence labeling with a \ac{crf} layer. \acp{crf} are probabilistic models that calculate the likelihood of a possible output $y$ given an observation $s$. They are commonly represented as factor graphs, in which each factor computes the aforementioned likelihood. 
Mathematically, each factor graph is defined as: \begin{equation} p(y|s) = \frac{\prod(\phi(s,y))}{Z(s)} \end{equation} where $Z(s)$, a normalization function, is the sum over all possible outputs for each observation $s$. To perform sequence labeling, we consider a linear chain \ac{crf}. Analogous to Equation 3, the probability of an output sequence $\vec{y}$ given a sequence of observations $\vec{s}$ is: \begin{equation} p(\vec{y}|\vec{s}) = \frac{\prod(\phi(s,y), \phi'(y,y'))}{Z(\vec{s})} \end{equation} In Equation 4, not only the factors associating input and output $\phi$ are calculated, but also the likelihood between adjacent labels $\phi'$, where $y$ and $y'$ are neighbors. In this case the normalization function $Z$ takes the sequence $\vec{s}$ as input. \section{Automatic Speech Recognition} In recent times, deep learning techniques have boosted the \ac{asr} performance significantly ~\cite{conversational-speech-transcription-using-context-dependent-deep-neural-networks-2}. In this section, we introduce the two types of \ac{asr} architectures used to generate \ac{at}. \subsection{Hybrid \ac{tdnn}/\ac{hmm} architecture} In hybrid \ac{asr} systems, \acp{nn} are used to predict emission probabilities of \ac{hmm} given speech frames. Recently, various \ac{dl} models have been proposed and developed to improve \ac{asr} performance. Most of them are variations of \acp{cnn} or \acp{rnn} \cite{conversational-speech-transcription-using-context-dependent-deep-neural-networks-2, graves2013speech}. \cite{Povey2016PurelySN} presented a hybrid \ac{tdnn}/\ac{hmm} system trained with lattice-free maximum mutual information, which is fast to train and significantly outperforms other models on many \ac{asr} tasks. To the best of our knowledge, it is one of the best hybrid \ac{asr} systems available for research and thus was selected for our experiments. 
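For concreteness, the two stages of the classifier from Section 2, the convolution with utterance-wise max pooling of Equation 1 and decoding over the \ac{crf} scores of Equations 2 and 4, can be sketched in plain Python. This is a toy illustration with invented dimensions and an explicit transition matrix, not the implementation used in our experiments (which trains 100 filters per width and the \ac{crf} jointly); decoding uses the standard Viterbi recursion.

```python
def conv_max_pool(embeddings, filt):
    """One filter of the narrow 2D convolution of Eq. (1), spanning all
    embedding dimensions, followed by ReLU and utterance-wise max pooling;
    `embeddings` is a words x d matrix, `filt` a |f| x d filter."""
    d, width = len(embeddings[0]), len(filt)
    acts = []
    for x in range(len(embeddings) - width + 1):
        a = sum(filt[j][i] * embeddings[x + j][i]
                for j in range(width) for i in range(d))
        acts.append(max(a, 0.0))       # ReLU activation
    return max(acts)                   # one scalar per filter

def viterbi(scores, trans):
    """Most likely label sequence under a linear-chain CRF: `scores[t]`
    plays the role of the emission scores s_t = W p_t + b (Eq. 2) and
    `trans[y0][y]` the score of label y following label y0 (phi')."""
    n = len(trans)
    best, back = [list(scores[0])], []
    for s_t in scores[1:]:
        prev, row, ptr = best[-1], [], []
        for y in range(n):
            cand = [prev[y0] + trans[y0][y] for y0 in range(n)]
            y0 = max(range(n), key=cand.__getitem__)
            row.append(cand[y0] + s_t[y])
            ptr.append(y0)
        best.append(row)
        back.append(ptr)
    y = max(range(n), key=best[-1].__getitem__)
    path = [y]
    for ptr in reversed(back):         # follow the backpointers
        y = ptr[y]
        path.append(y)
    return path[::-1]
```

With a transition matrix that rewards label repetition, the decoder smooths the per-utterance scores instead of taking their pointwise argmax, which is exactly the benefit the \ac{crf} layer adds on top of the \ac{cnn}.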
\subsection{End-to-End architecture} More recently, an \ac{e2e} architecture was introduced, which simplifies the training process and achieves competitive results on several benchmark datasets \cite{espnet}. Many studies have proposed \ac{e2e} architectures based on either \ac{ctc} \cite{EESEN} or \ac{am} \cite{end2endattention}. ESPnet, an End-to-End speech processing toolkit, benefits from two major \ac{e2e} ASR implementations based on \ac{ctc} and an attention-based encoder-decoder network \cite{espnet}. It employs a multiobjective learning framework to improve robustness and achieve faster convergence. For decoding, ESPnet executes a joint decoding by combining both attention-based and \ac{ctc} scores in a one-pass beam search algorithm to eliminate irregular alignments. The training loss function is defined in Equation 5, where $\mathcal{L}^{ctc}$ and $\mathcal{L}^{att}$ are the \ac{ctc}-based and attention-based cross entropy, respectively. $\alpha$ is the tuning parameter to linearly interpolate both objective functions. \begin{equation} \mathcal{L}=\alpha\mathcal{L}^{ctc} + (1-\alpha)\mathcal{L}^{att} \end{equation} During beam search, the following score combination with attention $p^{att}$ and CTC $p^{ctc}$ log probabilities is performed \begin{equation} \begin{multlined} \log p^{hyb}(y_n|y_{1:n-1}, h_{1:T'}) \\ = \alpha \log p^{ctc}(y_n|y_{1:n-1}, h_{1:T'}) +\\ (1-\alpha) \log p^{att}(y_n|y_{1:n-1}, h_{1:T'}) \end{multlined} \end{equation} where $y_n$ is a hypothesized output label at position $n$ given a history $y_{1:n-1}$ and encoder output $h_{1:T'}$ \cite{espnet}. \section{Experimental setup} \label{sec:pagestyle} \subsection{Data for \ac{da} classification} We evaluate our model on two \ac{da} labeled corpora: 1) \textbf{\acs{icsi}}: \acl{icsi} \cite{Janin2003,Shriberg2004,Dhillon2004}, a dialog corpus of \textit{multiparty meetings}. 
The 5-tag set used in this work was introduced by \cite{Ang2005}, and 2) \textbf{\acs{swbd}}: NXT-format Switchboard Corpus \cite{NXT:2010}, a dialog corpus of \textit{2-speaker conversations}. Train, validation and test splits on both datasets were taken as defined in \cite{Lee2016}\footnote{Concerning \acs{swbd}, the data setup in \cite{Lee2016} was preferred over that of \cite{Stolcke2000}, because the latter does not clearly specify which conversations belong to each split.}. Table \ref{tab:datasets} presents statistics about the corpora. Both datasets have a highly unbalanced distribution of classes: the majority class accounts for $59.1$\% of utterances on \acs{icsi} and $33.7$\% on \acs{swbd}. \begin{table}[h!] \centering \begin{tabular}{|l|c|c|c|c|c|} \hline \textbf{Dataset}& \textbf{C}& \textbf{$\mid$V$\mid$}& \textbf{Train}& \textbf{Validation}&\textbf{Test}\\ \hline \acs{icsi} &5 &12k & 78k &16k &15k\\ \acs{swbd} &42 &20k &193k &23k &5k\\ \hline \end{tabular} \caption{\label{tab:datasets} Data statistics. \textbf{C}: number of classes, \textbf{$\mid$V$\mid$}: vocabulary size and \textbf{Train}/\textbf{Validation}/\textbf{Test}: number of utterances.} \end{table} \vspace{-1em} \subsubsection{Hyperparameters and training} In Table \ref{tab:hyperparams}, we present the model hyperparameters for both corpora. Most of them were taken from \cite{Ortega2017}; however, we tuned the optimizer, the learning rate and the mini-batch size. We found the most effective hyperparameters by changing one at a time while keeping the others fixed, based on the model performance on the validation split.
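The one-at-a-time tuning procedure described above amounts to a greedy coordinate search over the hyperparameter space. A minimal sketch (hypothetical helper names and search values; `evaluate` stands in for validation-split accuracy):

```python
def tune_one_at_a_time(base_config, search_space, evaluate):
    """Greedy coordinate search: vary one hyperparameter at a time,
    keeping the others fixed, and keep the best value found so far.

    base_config:  dict of starting hyperparameter values
    search_space: dict mapping hyperparameter name -> candidate values
    evaluate:     callable returning a score (higher is better)
    """
    config = dict(base_config)
    for name, values in search_space.items():
        best_val, best_score = config[name], evaluate(config)
        for v in values:
            trial = dict(config, **{name: v})
            score = evaluate(trial)
            if score > best_score:
                best_val, best_score = v, score
        config[name] = best_val  # freeze this coordinate before the next one
    return config
```

This search is cheaper than a full grid but order-dependent: it only explores one coordinate at a time, which matches the tuning protocol stated above.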
\begin{table}[h] \footnotesize \centering \begin{tabular}{|l|c c|} \hline \textbf{Hyperparameter} &\acs{icsi} & \acs{swbd}\\ \hline Activation function &\multicolumn{2}{c|}{ReLU} \\ Dropout rate &\multicolumn{2}{c|}{0.5}\\ Filter width &\multicolumn{2}{c|}{3, 4, 5} \\ Filters per width &\multicolumn{2}{c|}{100} \\ Learning rate &0.01 & 0.07 \\ Mini-batch size & 70 & 170 \\ Optimizer & \acs{gd} & AdaGrad \\ Pooling size &\multicolumn{2}{c|}{utterance-wise}\\ Word embeddings &\multicolumn{2}{c|}{word2vec (dim. 300)} \\ \hline \end{tabular} \caption{Hyperparameters.} \label{tab:hyperparams} \vspace{-5mm} \end{table} Training was done with context length $n$ ranging from 1 to 5. After tuning different stochastic learning algorithms with several learning rates, \ac{gd} \cite{Polyak1992} worked best on \acs{icsi} and \ac{adagrad} \cite{Duchi2011} on \acs{swbd}. The learning rate was initialized at 0.01 on \acs{icsi} and 0.07 on \acs{swbd}. All parameter tuning was done only on the validation split. Word vectors were initialized with the 300-dimensional pretrained word vectors from word2vec \cite{Mikolov2013a} and fine-tuned during training. \vspace{-1mm} \subsection{Data for \acl{asr}} We employed KALDI \cite{Kaldi} to build the hybrid \ac{tdnn}/\ac{hmm} \ac{asr} system. In the recipe, 40 \acp{mfcc} were computed at each time step and a 100-dimensional iVector was appended to the 40-dimensional \ac{mfcc} input of each frame. Speaker-adaptive feature transform and data augmentation techniques were applied. The \ac{gmm}/HMM system generated the alignments for \ac{nn} training \cite{Povey2016PurelySN}. For the \ac{swbd} dataset, we interpolated the 3-gram language model trained on the transcriptions with the 4-gram Fisher model \cite{fisher}. For \ac{icsi}, we employed a 3-gram language model trained on the \ac{mt}. \ac{espnet} was used to build the \ac{e2e} \ac{asr} system.
The 80-bin log-mel filterbank features with speed perturbation were used to train the VGG+BLSTM model, with a five-layer encoder and a one-layer decoder, each with 1024 units \cite{espnet}. The language model utilized 100 subword units based on the byte-pair-encoding technique, which seems to perform better than a character-level language model \cite{Xiao2018HybridCB}. Both the hybrid \ac{tdnn}/\ac{hmm} and the \ac{e2e} \ac{asr} systems were trained on the same train and validation splits and were later used to generate the \acl{at} of all splits (train, validation and test) for the \ac{da} classification model. Table~\ref{tab:asr_swbd} shows the performance of the hybrid \ac{tdnn}/\ac{hmm} and \ac{e2e} \ac{asr} systems on seen data (train and validation splits) and on unseen data (test split) for \ac{swbd} and \ac{icsi}. \begin{table}[h!] \centering \footnotesize \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Dataset}}}&\textbf{\ac{asr}}& \textbf{Train}& \textbf{Validation}&\textbf{Test}\\ \multicolumn{1}{|c|}{}&\bf System& \textbf{WER}&\textbf{WER}&\textbf{WER}\\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\ac{swbd}}}&\text{TDNN/HMM} & 13.8 & 14.29 & 18.02\\ \multicolumn{1}{|c|}{}&E2E & 2.90 & 8.90 & 18.80\\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\ac{icsi}}}&\text{TDNN/HMM} & 9.89 & 19.28 & 21.48\\ \multicolumn{1}{|c|}{}&\text{E2E} & 2.30 & 16.80 & 18.80\\ \hline \end{tabular} \caption{\label{tab:asr_swbd} \ac{asr} performance in \ac{wer}(\%) on train, validation and test splits from \ac{swbd} and \ac{icsi}.} \vspace{-1em} \end{table} \section{Experimental results} \subsection{Experiments on \acl{mt}} Table \ref{tab:nospkr} shows the results of a baseline model and our proposed model trained on \ac{mt} with context length varying from 1 to 5. The baseline model is a \ac{cnn} that receives one utterance at a time as input, followed by a max pooling operation and a softmax layer. \begin{table}[h!]
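The \ac{wer} figures reported in Table~\ref{tab:asr_swbd} follow the standard definition: the minimum number of word substitutions, insertions and deletions needed to turn the hypothesis into the reference, divided by the reference length. A minimal dynamic-programming sketch (illustrative only, not the scoring tool used by the toolkits):

```python
def wer(reference, hypothesis):
    """Word error rate in %: word-level edit distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Note that, because the denominator is the reference length, \ac{wer} can exceed 100\% for hypotheses with many insertions.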
\centering \footnotesize \begin{tabular}{|c|r|r|} \hline \textbf{Context} & \multicolumn{1}{c|}{\textbf{\ac{icsi}}} & \multicolumn{1}{c|}{\textbf{\ac{swbd}}} \\ \hline 0 (baseline) & 80.2 (80.0, 80.4) & 72.0 (71.6, 72.2) \\\hline 1 & 84.6 (84.6, 84.7) & 74.1 (73.2, 74.9) \\ \hline 2 & {\bf84.7} (84.6, 84.7) & {\bf74.6} (74.5, 74.8) \\ \hline 3 & 84.6 (84.5, 84.6) & 74.5 (74.2, 74.8) \\ \hline 4 & {\bf84.7} (84.4, 84.8) & 74.1 (73.6, 74.6) \\ \hline 5 & 84.6 (84.4, 84.8) & 74.2 (73.8, 74.5) \\ \hline \end{tabular} \caption{Baseline model and proposed model's accuracy (\%). For the latter, we report contexts from 1 to 5. Results are reported as \textit{average (minimum, maximum)} calculated over 5 runs.} \label{tab:nospkr} \end{table} On average, for \ac{icsi} the best results were obtained with contexts 2 and 4, achieving 84.7\%, whereas for \ac{swbd} the model with context 2 achieves the highest performance, i.e., 74.6\%. To the best of our knowledge and under the setup in \cite{Lee2016}, these are state-of-the-art results on \ac{icsi} and \ac{swbd}, outperforming \cite{Ortega2017}. For further experimentation in this paper, the context is fixed to 2. \subsection{Experiments on \acl{at}} We tested the pretrained models on \ac{at} from both \ac{asr} systems in order to see the impact on accuracy (see Table \ref{tab:train_MT}). As expected, the performance dropped dramatically due to the \ac{wer} and the lack of punctuation. On both datasets, the negative impact was higher when the model was tested on \ac{tdnn}/\ac{hmm} transcriptions. \begin{table}[h!]
\centering \footnotesize \begin{tabular}{|c|r|r|} \hline \textbf{Transcriptions} & \multicolumn{1}{c|}{\textbf{\ac{icsi}}} & \multicolumn{1}{c|}{\textbf{\ac{swbd}}} \\ \hline \ac{mt} & {\bf84.7} (84.6, 84.7) & {\bf74.6} (74.5, 74.8) \\ \hline \ac{tdnn}/\ac{hmm} & 59.2 (58.9, 59.7) & 65.7 (65.4, 66.0) \\ \hline \ac{e2e} & 66.1 (65.7, 66.3) & 67.4 (66.6, 67.9) \\ \hline \end{tabular} \caption{Accuracy (\%) of the model trained on \ac{mt} with context 2 and tested on \ac{mt} and \ac{at}.} \label{tab:train_MT} \end{table} Afterwards, we retrained the \ac{da} model with \ac{at}. Tables \ref{tab:train_TDNN/HMM} and \ref{tab:train_ET} show the accuracy of training with \ac{tdnn}/\ac{hmm} and \ac{e2e} transcriptions, respectively. Training on \ac{at} increases the accuracy when testing on \ac{at} and, as expected, decreases it when testing on \ac{mt}. In the case of \ac{icsi}, the accuracy is slightly worse when training on \ac{at} from one system and testing on the other. However, in the case of \ac{swbd}, the accuracy is always better when testing on the \ac{at} generated from the \ac{e2e} system. Overall, we observed the best performance when training and testing on \ac{at} generated from the \ac{e2e} system on both datasets (76.6\% on \ac{icsi} and 68.7\% on \ac{swbd}; see Table \ref{tab:train_ET}). \begin{table}[h!] \centering \footnotesize \begin{tabular}{|c|r|r|} \hline \textbf{Transcriptions} & \multicolumn{1}{c|}{\textbf{\ac{icsi}}} & \multicolumn{1}{c|}{\textbf{\ac{swbd}}} \\ \hline \ac{mt} & 64.2 (62.8, 65.7) & 66.9 (64.4, 69.5) \\ \hline \ac{tdnn}/\ac{hmm} & {\bf74.0} (73.9, 74.1) & 67.9 (67.5, 68.2) \\ \hline \ac{e2e} & 71.1 (70.8, 71.7) & {\bf68.6} (68.1, 68.8) \\ \hline \end{tabular} \caption{Accuracy (\%) of the model trained on \ac{tdnn}/\ac{hmm} transcriptions with context 2 and tested on \ac{mt} and \ac{at}.} \label{tab:train_TDNN/HMM} \end{table} \vspace{-1.1em} \begin{table}[h!]
\centering \footnotesize \begin{tabular}{|c|r|r|} \hline \textbf{Transcriptions} & \multicolumn{1}{c|}{\textbf{\ac{icsi}}} & \multicolumn{1}{c|}{\textbf{\ac{swbd}}} \\ \hline \ac{mt} & 70.9 (68.3, 72.7) & 66.6 (65.3, 70.0) \\ \hline \ac{tdnn}/\ac{hmm} & 73.2 (73.1, 73.3) & 67.1 (66.2, 67.6) \\ \hline \ac{e2e} & \textbf{76.6} (76.5, 76.7) & \textbf{68.7} (68.4, 69.0) \\ \hline \end{tabular} \caption{Accuracy (\%) of the model trained on \ac{e2e} transcriptions with context 2 and tested on \ac{mt} and \ac{at}.} \label{tab:train_ET} \end{table} One of the main differences between \ac{mt} and \ac{at} is that the latter has no punctuation. In \cite{Ortega2018}, it was shown that punctuation provides strong lexical cues. Therefore, we retrained the model on \ac{icsi}'s \ac{mt} without punctuation. \ac{swbd} was not considered because the NXT-\ac{swbd} has no punctuation. \begin{table}[h!] \centering \footnotesize \begin{tabular}{|c|c|c|} \hline \textbf{\ac{icsi}} & \bf With & \bf Without \\ \textbf{transcriptions} & \bf punctuation & \bf punctuation \\ \hline \ac{mt} & {\bf84.7} (84.6, 84.7) & {\bf81.3} (81.1, 81.5) \\ \hline \ac{tdnn}/\ac{hmm} & 59.2 (58.9, 59.7) & 69.3 (69.3, 69.4) \\ \hline \ac{e2e} & 66.1 (65.7, 66.3) & 76.2 (76.0, 76.4) \\ \hline \end{tabular} \caption{Accuracy (\%) of the model with context $2$ trained on \ac{icsi}'s \ac{mt} without punctuation and tested on \ac{mt} and \ac{at}.} \label{tab:train_MT_nopunct} \end{table} It can be seen from Table \ref{tab:train_MT_nopunct} that punctuation is a strong cue for \ac{da} classification. Nonetheless, training with punctuation has a strong negative impact when testing on \ac{at}, which lacks punctuation. If \ac{mt} are used to train a model, it is therefore advisable to remove punctuation: according to our results, doing so yields an improvement of about 10\% in accuracy on both \ac{asr} transcriptions of \ac{icsi}.
\vspace{-1.1em} \section{Conclusion} \label{sec:conclusion} We explored dialog act classification on \ac{mt} with a novel approach for context modeling that combines \acp{cnn} and \acp{crf}, reaching state-of-the-art results on two benchmark datasets (\ac{icsi} and \ac{swbd}). We also investigated the impact of \ac{at} from two different \acl{asr} systems (hybrid \ac{tdnn}/\ac{hmm} and \acl{e2e}) on the final performance. Experimental results showed that, although the \acp{wer} are comparable, the \acl{e2e} \ac{asr} system might be more suitable for \acl{da} classification. Moreover, the results confirm that punctuation provides central cues for the task, suggesting that punctuation should be integrated into the \ac{asr} output in future work. \clearpage \bibliographystyle{IEEEbib}
\section{Introduction} Four-wave mixing (FWM) is a nonlinear process associated with many applications. In hot or room-temperature atomic vapors, it has been used, for instance, to generate quantum correlated beams \cite{Ma, Kim}, to store quantum memory and transfer orbital angular momentum between light beams \cite{Chopinaud, Offer}, to reduce the paraxial diffraction of light \cite{Katzir}, to generate slow light \cite{Arsenovic} and to obtain a single-photon source \cite{Ripka}. Most of these experiments are performed with cw lasers, exploiting their high power and tunability near atomic resonances. Femtosecond lasers have also been employed, making use of the coherent temporal control technique \cite{Mukamel}. In recent years, advances in ultrafast lasers have enhanced the direct application of mode-locked femtosecond (fs) lasers, leading for instance to the development of rapid multidimensional coherent spectroscopy with high spectral resolution \cite{Cundiff2017}. In this sense, the combination of a cw laser and a femtosecond pulse train allows us not only to probe the action of each laser but also to explore their different characteristics in nonlinear processes. In this work, we extend our previous study of the coherent blue light generated in rubidium vapor, now using a 1 GHz fs pulse train along with a cw diode laser instead of a 100 MHz pulse train \cite{Lopez}. At this high repetition rate of the fs laser, the necessary condition for the coherent accumulation of population is very well fulfilled. In this context, a good description of the FWM process in the frequency domain can be achieved. Moreover, with the 1 GHz frequency separation of the optical modes, we can easily distinguish the blue signal generated by each mode of the frequency comb. As a consequence, the results appear to be very similar to those obtained using two cw diode lasers \cite{Akulshin2009}, as will be seen from the analysis of the excitation spectra.
In particular, we observed the Autler-Townes splitting \cite{Autler1955} in the FWM signal by scanning the frequency of the strong beam in a configuration of copropagating fields. \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{fig1} \caption{(Color online) Experimental setup with relevant energy levels of $^{85}$Rb. BS, L, M, PBS, PMT and SAS indicate Beam-Splitter, Lens, Mirror, Polarizing Beam-Splitter, Photomultiplier and Saturation Absorption Spectroscopy, respectively.} \label{fig1} \end{figure} The most interesting aspect of this experimental system is that the two-photon transition 5S$_{1/2} \rightarrow 5P_{3/2} \rightarrow 5$D can be driven by two routes: (i) by two modes of the frequency comb and/or (ii) by the cw laser driving the 5S$_{1/2} \rightarrow 5P_{3/2}$ (780 nm) transition while one of the frequency comb modes excites the 5P$_{3/2} \rightarrow 5$D (776 nm) transition. In the parametric FWM process investigated here, the nonlinear signal is determined by the two-photon coherence between the 5S and 5D states, and by the amplified spontaneous emission at 5.2 $\mu$m~\cite{Willis}. In this case, the generated blue light reflects not only the characteristics of the atomic system, but also carries the phase information of the excitation beams. When the two excitation pathways occur for the same atomic velocity group, an interference effect can be clearly observed. The signature of this effect is a narrow peak over the Autler-Townes doublet. \section{Experimental setup and results} Our experimental setup is schematically illustrated in Fig.~1. The fs pulse train is generated by a mode-locked Ti:sapphire laser (BR Labs Ltda) that emits 100 fs pulses at a 1 GHz repetition rate, which is phase locked with 1 Hz resolution. The cw light source is a temperature-stabilized diode laser with a linewidth of about 1 MHz.
The two beams, with parallel circular polarizations \cite{Akulshin2009}, copropagate through a 5 cm long cell containing natural Rb, heated up to $\approx 100\;^{\circ}$C. The coherent blue light (CBL) is collected in the forward direction. In the measurements, the average power of the fs laser was fixed at 250 mW at the cell entrance, while the power of the diode laser was varied between 0.07 and 4.5 mW. Both beams were focused at the center of the cell and then collimated. The 420 nm signal is selected using a blue bandpass filter and diffraction gratings, and then sent to a photomultiplier tube and recorded by an oscilloscope. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{fig2} \caption{(Color online) Coherent blue light intensity as a function of (a) diode frequency detuning (${\delta_{d}}/{2\pi}$), and (b) repetition rate variation ($\delta f_{R}$), for different powers of the diode laser as indicated in each curve. In (a) $f_{R}$ is fixed at $\sim$ 987.791 MHz while in (b) the diode frequency is fixed near the closed transition and $\delta f_{R}$ = 0 for $f_{R}$ = 987.749 886 MHz. The upper curve in (b) is a zoom of the right structure in curve (II).} \label{fig2} \end{figure} Our results focus on the isotope $^{85}$Rb, for which we analyzed the CBL behavior with respect to the detuning and power of the diode laser, and to the repetition rate ($f_{R}$) of the pulse train. Figure 2(a) shows the CBL as a function of the diode frequency detuning with respect to the 5S$_{1/2}, F=3 \rightarrow$ 5P$_{3/2}, F=4$ closed transition, with one mode of the frequency comb near the 5P$_{3/2}, F=4 \rightarrow$ 5D$_{5/2}, F=5$ transition, for a locked $f_R \sim 987.791$ MHz. The cell temperature was set at $T \approx 80\;^{\circ}$C, corresponding to an atomic density of the order of $10^{12}$ atoms/cm$^{3}$ \cite{Alcock}, and different powers of the diode laser were considered.
We easily notice a doublet structure in which the separation between the peaks depends on the power of the diode laser. This Autler-Townes (AT) splitting, observed when we scan the strong-field frequency, is a kind of self-dressing effect \cite{Xiao2010}. \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{fig3} \caption{(Color online) Comparison between experimental data ((a) and (c)) and numerical calculation ((b) and (d)) for the CBL peak intensity ((a) and (b)) and the Autler-Townes (AT) splitting ((c) and (d)) as a function of the diode laser power. $\gamma_{22}$ is the decay rate of the 5P$_{3/2}$ state.} \label{fig3} \end{figure} The dependence of the coherent blue signal on the diode power (measured at the entrance of the cell) is represented by the blue dots of Fig. 3(a). The experimental results were obtained for a fixed value of $f_R$, with a frequency mode near the closed transition 5P$_{3/2}, F=4 \rightarrow$ 5D$_{5/2}, F=5$, and for a cell temperature of $T = 85\;^{\circ}$C. Each point corresponds to an average of five scans and represents the highest peak value of one doublet. It is interesting to note that, for this experimental condition, the signal is maximum at $\approx 0.6$ mW and goes to zero for powers greater than 2 mW. The decrease of the signal for high diode powers is due to the ac Stark shift: as the diode power increases, the ac Stark shift, and therefore the detuning of the two-photon resonance, also increases, fading the signal. Another distinct feature is the large doublet separation observed as a function of the diode power, displayed in Fig. 3(c) for the same experimental conditions of Fig. 2(a). These values are in contrast with a separation of the order of the Rabi frequency, usually found when the frequency of the strong field is fixed near resonance and the AT splitting is probed by the weak beam \cite{Verkerk1986}. This apparent contradiction will be discussed in Section III.
The dependence of the coherent blue light on the repetition rate is shown in Fig. 2(b). In this case, the diode frequency was tuned near the 5S$_{1/2}, F=3 \rightarrow$ 5P$_{3/2}, F=4$ closed transition, while $f_R$ was scanned around $f_R^0 = 987.749\;886$ MHz, with $T = 74\;^{\circ}$C. We show the results for three diode powers, where the peaks, induced by the diode laser in combination with two neighboring modes of the frequency comb, are separated by $\approx 2.5$ kHz \cite{Moreno2011a}, which corresponds to $\approx 987$ MHz in the frequency of these modes. The doublet structure is also present, with characteristics similar to those observed when the diode frequency is scanned. However, an intriguing feature is a narrow peak that appears superimposed on the broader AT peaks, as indicated by the arrows in Fig. 2(b). This narrow peak is observed only for certain values of the repetition rate, and a close-up of the doublet structure at the right of curve (II) is displayed in the upper curve of Fig. 2(b). \section{Theory and discussion} In order to explain the main features observed in the experiment, namely the Autler-Townes splitting and the very narrow peak that appears when the repetition rate is scanned, we use a simple model consisting of independent four-level diamond-type systems interacting with three cw fields, as schematized in Fig. 4. Just as in the experiment, the first two transitions can be driven by different routes: (i) two modes of the frequency comb, $\omega_{n}$ and $\omega_{m}$ (\textit{fs-fs} pathway) and/or (ii) the cw laser and one of the frequency comb modes, $\omega_{d}$ and $\omega_{m}$ (\textit{cw-fs} pathway). We have also included a seed for the 5D$_{5/2}$ $\rightarrow$ 6P$_{3/2}$ transition, a necessary condition to start the four-wave mixing process and then generate the blue beam.
\begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{fig4} \caption{(Color online) Four-level theoretical model.} \label{fig4} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1\linewidth]{fig5} \caption{(Color online) (a) $\left |\rho _{14} \right |^{2}$ scanning the diode laser frequency at a fixed repetition rate, for three different values of detuning of the $\omega_{m}$ mode regarding the transition $\left|2\right\rangle \rightarrow \left|3\right\rangle$. (b) Zoom of the bottom curve in (a) showing $\left |\rho _{14} \right |^{2}$ for the co- and counter-propagating field configurations. (c) Theoretical results scanning the repetition rate for a fixed diode frequency. $\left |\rho _{14} ^{dm} \right |^{2}$ is the result for the \textit{cw-fs} pathway, while $\left |\rho _{14} ^{nm} \right |^{2}$ accounts for the \textit{fs-fs} pathway considering the same repetition rate of Fig. 2 (b). The green and dashed curves are the two possible ways of adding these coherence pathways.} \label{fig5} \end{figure} Our treatment of the problem begins with Liouville's equation with an electric-dipole interaction Hamiltonian, \begin{equation} \hat{H}_{int} = -\hslash\sum^4_{j \neq k}\left(\Omega_{l}e^{i\omega_{l}t}+c.c.\right)\left|j\right\rangle\left\langle k\right|, \end{equation} \noindent where $\Omega_{l}$ and $\omega_{l}$ ($l = d, n, m$ or $i$) are the Rabi frequency and the optical frequency associated with the fields indicated in Fig. 4. Given the high repetition rate of our frequency comb, it can be treated as several cw lasers with well-defined frequencies given by $\omega_{m}=2\pi(f_{0}+mf_{R})$ [$\omega_{n}=2\pi(f_{0}+nf_{R})$], where $f_{0}$ is the offset frequency and $m$ [$n$] is an integer of the order of $4\times10^{5}$.
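The relation $\omega_{m}=2\pi(f_{0}+mf_{R})$ makes it straightforward to find which comb mode lies closest to a given optical transition. The sketch below is a hypothetical illustration (the function name and the numerical values of $f_0$ and the target frequency are assumptions for the example, not measured quantities):

```python
def nearest_comb_mode(f_target, f0, f_rep):
    """Index m of the comb mode f0 + m*f_rep closest to the optical
    frequency f_target, and the residual detuning (all in Hz)."""
    m = round((f_target - f0) / f_rep)
    return m, f0 + m * f_rep - f_target
```

For a transition near 776 nm and a repetition rate near 987.75 MHz, the resulting mode index is of the order of $4\times10^{5}$, consistent with the value quoted above, and the residual detuning is at most half the mode spacing.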
This grants the possibility of writing the Bloch equations as \begin{equation} \frac{\partial\rho_{jk}(t)}{\partial t} = -(i\omega_{jk} + \gamma_{jk})\rho_{jk}(t) - \frac{i}{\hslash}\left\langle j \right| [ \hat{H}_{int},\hat{\rho} ] \left| k \right\rangle,\\ \end{equation} \noindent for only one of the excitation routes and then adapting the scanning parameters to account for the other pathway. In these equations, $\gamma_{jk}$ is the decay rate of the density matrix element $\rho_{jk}$ and $\omega_{jk}$ is the frequency of the $\left|j\right\rangle$ $\rightarrow$ $\left|k\right\rangle$ transition. We also apply the rotating wave approximation to our Bloch equations and look for a steady-state solution. Propagation effects are neglected. We solve the 16 coupled equations nonperturbatively by applying the cofactor expansion method to the system's matrix. Since we are looking for a signal generated between levels $\left|4\right\rangle$ and $\left|1\right\rangle$ of the theoretical model (see Fig. 4), our variable of interest is the coherence $\rho_{14}$. Once we have the full solution for this coherence, we integrate it over the Maxwell-Boltzmann distribution to account for the different velocity groups. For the first experimental configuration, i.e., scanning the diode laser frequency, we show the theoretical results in Fig. 5 (a). Each curve corresponds to a different detuning of the frequency comb mode with respect to the $\left|2\right\rangle$ $\rightarrow$ $\left|3\right\rangle$ transition. This means that for the two upper curves the laser exciting the second transition is not perfectly resonant with the atomic velocity group $v$ = 0. As a consequence, an asymmetry arises due to the Maxwell-Boltzmann distribution. We also show, in the bottom curve, the theoretical result when the diode beam propagates in the direction opposite to the fs beam (blue curve).
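The velocity-averaging step described above, integrating a velocity-dependent coherence over the one-dimensional Maxwell-Boltzmann distribution, can be sketched numerically as follows (illustrative only; the function names, grid parameters and the Lorentzian test profile are assumptions, not our actual solver):

```python
import numpy as np

def doppler_average(coherence, v_width, n=2001, span=4.0):
    """Average a velocity-dependent quantity over the 1-D
    Maxwell-Boltzmann distribution with most-probable speed v_width.

    coherence: callable mapping a velocity array (m/s) to values,
               e.g. |rho_14(v)|^2 evaluated at each velocity group.
    """
    v = np.linspace(-span * v_width, span * v_width, n)
    # normalized 1-D Maxwell-Boltzmann weight: exp(-(v/u)^2) / (u sqrt(pi))
    weights = np.exp(-(v / v_width) ** 2) / (v_width * np.sqrt(np.pi))
    dv = v[1] - v[0]
    return float(np.sum(weights * coherence(v)) * dv)
```

A resonance much narrower than the Doppler width samples only a thin slice of the distribution, which is why the Doppler average strongly suppresses narrow velocity-selective features.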
A zoom of the theoretical results for these two field configurations (co- and counter-propagating) is displayed in Fig. 5 (b). All the other parameters are the same. We can clearly see in this figure that the contribution of the Doppler effect to the AT splitting and to the linewidth of the peaks is much larger for the copropagating configuration, in agreement with the present experiment performed by scanning the strong field. This model was also used to reproduce the experimental curves presented in Fig. 2, changing only the Rabi frequency of the diode laser. Then, from each theoretical spectrum, the AT splitting and the peak intensity were extracted. For this case, $f_0$ and $f_R$ were chosen in such a way that the mode of the frequency comb was resonant with the $\left|2\right\rangle$ $\rightarrow$ $\left|3\right\rangle$ transition for the atomic velocity group $v$ = 0. To avoid considering propagation effects, the experimental data are presented in terms of the laser power measured at the entrance of the vapor cell. The theoretical data are plotted in Figs. 3(b) and (d) as a function of $|\Omega_{d}/\gamma_{22}|^2$ and scaled to compare with the experimental curves. The results for the configuration in which the repetition rate is scanned at a fixed diode laser frequency are displayed in Fig. 5 (c). The two bottommost curves are the different excitation pathways described previously. To obtain these curves we use exactly the same equations that produce the results in Fig. 5 (a), for a configuration of copropagating beams, but adapting the way the frequency is swept. To consider only the \textit{cw-fs} pathway, the field in the $\left|1\right\rangle$ $\rightarrow$ $\left|2\right\rangle$ transition is fixed. On the other hand, for the \textit{fs-fs} pathway, the first two fields are scanned, keeping the frequency difference given by the experimental repetition rate. The \textit{cw-fs} pathway gives essentially the same result as Fig.
5 (a) when the repetition rate is fixed. The only palpable difference is in the AT splitting, which is slightly smaller but shows the same behavior as in Fig. 3 (c). In contrast, the \textit{fs-fs} result is quite different. Since both fields that induce the first two transitions are scanned, a much broader spectrum arises. This is due to the AT effect caused by the frequency comb mode $\omega_{n}$, but blurred by the other scanning mode. The maximum intensity of the CBL generated in this case occurs when the system meets the double-resonance condition~\cite{Ban2013}: \begin{equation} \label{double resonance} f_R = \dfrac{\omega_{23}-\omega_{12}}{2\pi(m-n)}, \end{equation} \noindent where \textit{m} and \textit{n} are integer numbers that determine a pair of comb modes satisfying the resonance condition for both excitation steps: $\left|1\right\rangle$ $\rightarrow$ $\left|2\right\rangle$ and $\left|2\right\rangle$ $\rightarrow$ $\left|3\right\rangle$. In order to get the final signal, we must add the two possible coherence pathways. This operation can be done in two ways. Either we assume that these processes are independent, taking the square modulus of each coherence and adding them (dashed curve in Fig. 5 (c)), or we assume that these coherences may interfere. In the latter case, we must first add the coherences and then take the square modulus of this sum. This leads to the green curve in Fig. 5 (c), with the narrow peak over the AT doublet, just as in the experiment. The narrow interference peak appears shifted from the resonance of the $v=0$ group. It does so because the frequency comb mode $\omega_{n}$ and the diode laser field must be simultaneously resonant with the $\left|1\right\rangle$ $\rightarrow$ $\left|2\right\rangle$ transition for the exact same group of atoms. For the repetition rate used both in experiment and theory, this condition is satisfied for a group of atoms with a particular nonzero velocity.
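Equation \ref{double resonance} also fixes how far apart successive double-resonance repetition rates are: changing the mode-index difference $m-n$ by one shifts $f_R$ by roughly $f_R/(m-n)$. The sketch below is purely illustrative (the 2 THz step separation and the mode-index difference are round hypothetical numbers chosen only to land near the experimental $f_R$, not the actual Rb transition frequencies):

```python
import math

def double_resonance_frep(omega12, omega23, dm):
    """Repetition rate satisfying Eq. (3):
    f_R = (omega23 - omega12) / (2*pi*(m - n)), with dm = m - n."""
    return (omega23 - omega12) / (2 * math.pi * dm)
```

With a difference of about 2 THz between the two one-photon steps and $m-n\approx 2000$, consecutive integer choices of $m-n$ give repetition rates a few hundred kHz apart, the same order as the 463.998 kHz shift quoted below for the next resonant pair of modes.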
It is possible, then, to shift the narrow peak by changing the repetition rate. These changes must be kept in a small range to ensure that there is a group of atoms interacting simultaneously with both lasers (the cw laser and two modes of the frequency comb) \cite{Moreno2011b}. If the change in the repetition rate is 463.998 kHz (Eq. \ref{double resonance}), then another pair of modes will be able to produce exactly the same interference pattern, i.e., similar in position and amplitude. Likewise, it is possible to change the detuning of the diode laser, assumed to be zero in Fig. 5 (c), to shift the narrow peak. This has been tested and also confirms that the interference process only occurs for those atoms simultaneously interacting with both lasers. We remark that the linewidth of the observed narrow interference peak, of about 10 MHz, is mainly limited by experimental conditions such as the repetition rate scan. On the other hand, the theoretical model shows that the peak has a linewidth of the order of 1 MHz, a result that appears to be limited by the lifetime of the 5D$_{5/2}$ state \cite{Sheng}. Since this is an interference process, phase must play an important role \cite{Jeong}. For the theoretical results presented here, the lasers were assumed to be in phase, whereas in the experiment there was no specific phase control. By changing the phase in the theory, it is possible to control the interference process, ranging from a constructive to a destructive signal. This was not observed experimentally, for there was no phase control between the lasers. A new experiment is being developed in which the lasers will have their phases locked. By carefully changing the optical path of the diode laser, we expect to see distinct behaviors of the interference process. It is interesting to ask whether this type of interference process can also be observed using only cw lasers.
In this context, a numerical analysis indicates that we would need at least three cw lasers and a carefully engineered frequency scan that varies the frequencies of two lasers simultaneously, at a rate that simulates the role played by the mode-locked femtosecond laser. \section{Conclusions} We have observed an interference effect between two four-wave mixing excitation routes involving the combination of a cw laser and a 1 GHz frequency comb. This effect appears in the coherent blue light generated in rubidium vapor only when both lasers are resonant with the one- and two-photon transitions for the same atomic velocity group. Remarkably, the signature of the interference is a narrow peak over an Autler-Townes doublet when we scan the repetition rate of the frequency comb. In particular, this process allows us to select and probe a specific velocity group of atoms in an inhomogeneously broadened medium. We also analyzed the Autler-Townes splitting in a configuration of copropagating fields, scanning either the cw laser or the repetition rate of the frequency comb. In both situations, the cw laser induces the splitting. Furthermore, the experimental and theoretical results indicate that, in the copropagating beam configuration, the Doppler effect plays an important role both in the Autler-Townes splitting and in the linewidth of the peaks. A theoretical model in the frequency domain was proposed in order to describe these physical phenomena. For the AT splitting, our model achieves good agreement with the experimental data, while for the interference process it provides evidence of the underlying mechanism giving rise to the effect. In fact, we were able to distinguish the role of each laser in both physical processes. Most importantly, we showed that the combination of a cw laser and a 1 GHz frequency comb, with their respective characteristics, opens the possibility of inducing different pathways that may lead to non-trivial quantum interferences.
\vspace{5mm} This work was supported by CAPES (PROEX 534/2018, No. 23038.003382/2018-39) and FAPERO (No. 01.1331.00031-00.057/2017). A. A. C. de Almeida acknowledges financial support by CNPq (132833/2017-4).
\section{Introduction} The main purpose of this paper is to prove the existence of forward discretely self-similar (DSS) and self-similar (SS) weak solutions of both the MHD equations and the viscoelastic Navier-Stokes equations with damping. More precisely, we construct DSS local Leray weak solutions for DSS initial data with possibly large $L^3_w$-norm, and SS local Leray solutions for $(-1)$-homogeneous initial data in $L^3_w$. Our method follows \cite{MR3611025} and is based on the a priori bounds \eqref{eq_1.15_mhd} and \eqref{eq_1.15_vNSEd}, and on the Galerkin method. To begin with, we briefly introduce the MHD equations and the viscoelastic Navier-Stokes equations. \subsection{The incompressible MHD equations} In a magnetofluid, the interaction between the velocity field of the fluid and the magnetic field is governed by the coupling between the Navier-Stokes equations of fluid dynamics and Maxwell's equations of electromagnetism. The fundamental equations of magnetohydrodynamics (MHD) are given by \begin{equation}\label{MHD} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_tv-\nu_0\De v+(v\cdot\nabla)v-(b\cdot\nabla)b+\nabla\pi&=0\ \\ \partial_tb-\eta_0\De b+(v\cdot\nabla)b-(b\cdot\nabla)v&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot v =\nabla\cdot b&=0\ \end{array}\right\} \text{ in }\R^3\times(0,\infty), \end{equation} with initial data \[v|_{t=0}=v_0\ \text{ and }\ b|_{t=0}=b_0\ \text{ in }\R^3,\] where $v:\R^3\times(0,\infty)\to\R^3$ is the fluid velocity, $b:\R^3\times(0,\infty)\to\R^3$ is the magnetic field, and $\pi:\R^3\times(0,\infty)\to\R$ represents the fluid pressure. The constants $\nu_0>0$ and $\eta_0>0$ are the kinetic viscosity and the magnetic resistivity, respectively. For simplicity, we assume $\nu_0=\eta_0=1$ throughout this paper.
We recall that the MHD equations \eqref{MHD} are invariant under the scaling \EQ{ v^\la(x,t)&=\la\, v(\la x,\la^2t),\ v_0^\la(x)=\la\,v_0(\la x),\\ b^\la(x,t)&=\la\, b(\la x,\la^2t),\ \, b_0^\la(x)=\la\,b_0(\la x),\\ \pi^\la(x,t)&=\la^2\pi(\la x,\la^2t). } We say that a solution $(v,b,\pi)$ of \eqref{MHD} is self-similar (SS) if it satisfies the scaling invariance $v^\la=v,\,b^\la=b$ and $\pi^\la=\pi$ for all $\la>0$. The initial data $v_0$ and $b_0$ are called self-similar if $v_0^\la=v_0$ and $b_0^\la=b_0$. On the other hand, if the scaling invariance only holds for a particular $\la>0$, we say $(v,b,\pi)$ is discretely self-similar with factor $\la>1$ ($\la$-DSS). Similarly, the initial data $v_0$ and $b_0$ are said to be $\la$-DSS if $v_0^\la=v_0$ and $b_0^\la=b_0$ for this $\la>1$. On one hand, self-similar solutions of \eqref{MHD} have a stationary characteristic in that there exists an ansatz for $(v,b)$ in terms of a time-independent profile $(u,a)$. That is, \begin{equation}\label{eq_1.6_mhd} v(x,t)=\frac1{\sqrt{2t}}\,u\left(\frac{x}{\sqrt{2t}}\right),\ \ \ b(x,t)=\frac1{\sqrt{2t}}\,a\left(\frac{x}{\sqrt{2t}}\right),\ \ \ \pi(x,t)=\frac1{2t}\,p\left(\frac{x}{\sqrt{2t}}\right). \end{equation} The profile $(u,a)$ solves the stationary Leray system for the MHD equations \begin{equation}\label{eq_1.7_mhd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} -\De u-u-y\cdot\nabla u+(u\cdot\nabla)u-(a\cdot\nabla)a+\nabla p&=0\ \\ -\De a-a-y\cdot\nabla a+(u\cdot\nabla)a-(a\cdot\nabla)u&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u =\nabla\cdot a&=0 \ \end{array}\right\} \text{ in }\R^3, \end{equation} in the variable $y=x/\sqrt{2t}$. On the other hand, discretely self-similar solutions of \eqref{MHD} are determined by their behavior on time intervals of the form $1\le t\le\la^2$.
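For the reader's convenience, a direct computation verifies that the ansatz \eqref{eq_1.6_mhd} reduces \eqref{MHD} to \eqref{eq_1.7_mhd}: writing $y=x/\sqrt{2t}$, the chain rule gives
\EQN{
\partial_tv(x,t)=-\frac1{(2t)^{3/2}}\left(u(y)+y\cdot\nabla u(y)\right),\qquad \De_xv=\frac1{(2t)^{3/2}}\,\De_yu,\qquad (v\cdot\nabla_x)v=\frac1{(2t)^{3/2}}\,(u\cdot\nabla_y)u,
}
with the analogous identities for $b$ and $\pi$. Multiplying the first equation of \eqref{MHD} by $(2t)^{3/2}$ then yields the first equation of \eqref{eq_1.7_mhd}, and the second follows in the same way.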
This leads us to consider the self-similar transform \begin{equation} v(x,t)=\frac1{\sqrt{2t}}\,u(y,s),\ \ \ b(x,t)=\frac1{\sqrt{2t}}\,a(y,s),\ \ \ \pi(x,t)=\frac1{2t}\,p(y,s), \end{equation} where \begin{equation}\label{xtys} y=\frac{x}{\sqrt{2t}},\ \ \ s=\log(\sqrt{2t}). \end{equation} Then $(u,a,p)$ solves the time-dependent Leray system for the MHD equations \begin{equation}\label{eq_1.10_mhd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_su-\De u-u-y\cdot\nabla u+(u\cdot\nabla)u-(a\cdot\nabla)a+\nabla p&=0\ \\ \partial_sa-\De a-a-y\cdot\nabla a+(u\cdot\nabla)a-(a\cdot\nabla)u&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u =\nabla\cdot a&=0 \ \end{array}\right\} \text{ in }\R^3\times\R. \end{equation} Note that $(v,b,\pi)$ is $\la$-DSS if and only if $(u,a,p)$ is periodic in $s$ with period $T=\log(\la)$. Many significant contributions have been made concerning the existence of solutions to the MHD equations \eqref{MHD}; we list only some results related to our study. Duvaut and Lions \cite{MR0346289} constructed a class of global weak solutions with finite energy and a class of local strong solutions. The unique existence of mild solutions in BMO$^{-1}$ for small initial data was obtained by Miao-Yuan-Zhang \cite{MR2313731}. He-Xin \cite{MR2514362} constructed a class of unique global forward SS solutions for small $(-1)$-homogeneous initial data belonging to certain Besov, Lorentz or pseudo-measure spaces. Recently, Lin-Zhang-Zhou \cite{MR3487253} constructed a class of global smooth solutions for large initial data under certain constraints on the initial data on the Fourier side. \subsection{The incompressible viscoelastic Navier-Stokes equations with damping} The Oldroyd-type models capture the rheological phenomena of both the fluid motions and the elastic features of non-Newtonian fluids.
We study the simplest case in which the relaxation and retardation times are both infinite. More specifically, we consider the following system of equations for an incompressible, viscoelastic fluid: \begin{equation}\label{vNSE} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_tv-\nu_0\De v+(v\cdot\nabla)v-\nabla\cdot({\bf F}{\bf F}^\top)+\nabla\pi&=0\ \\ \partial_t{\bf F}+(v\cdot\nabla){\bf F}-(\nabla v){\bf F}&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot v &=0\ \end{array}\right\} \text{ in }\R^3\times(0,\infty), \end{equation} with initial data \[v|_{t=0}=v_0\ \text{ and }\ {\bf F}|_{t=0}={\bf F}_0\ \text{ in }\R^3,\] where $v:\R^3\times(0,\infty)\to\R^3$ is the velocity field, ${\bf F}:\R^3\times(0,\infty)\to\R^{3\times3}$ is the local deformation tensor of the fluid, and $\pi:\R^3\times(0,\infty)\to\R$ represents the pressure. The constant $\nu_0>0$ is the kinetic viscosity. Here $(\nabla\cdot({\bf F}{\bf F}^\top))_i=\partial_j({\bf F}_{ik}{\bf F}_{jk})$ and $(\nabla v)_{ij}=\partial_jv_i$. For convenience, we assume $\nu_0=1$ throughout this paper. Concerning the existence of solutions to the viscoelastic Navier-Stokes equations \eqref{vNSE}, short-time classical solutions, as well as global classical solutions for small initial data, were established by Lin-Liu-Zhang \cite{MR2165379}. Later on, the authors of \cite{MR2273974,MR2393434} proved the global existence of smooth solutions to \eqref{vNSE} in the case of near-equilibrium initial data. In \cite{MR2165379}, the authors added a damping term to the equation for ${\bf F}$ in the system \eqref{vNSE} to overcome the difficulty arising from the lack of a damping mechanism on ${\bf F}$.
To be more precise, they introduced the following viscoelastic Navier-Stokes equations with damping as a way to approximate solutions of \eqref{vNSE}: \begin{equation}\label{vNSEd0} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_tv-\De v+(v\cdot\nabla)v-\nabla\cdot({\bf F}{\bf F}^\top)+\nabla\pi&=0\ \\ \partial_t{\bf F}-\mu\De {\bf F}+(v\cdot\nabla){\bf F}-(\nabla v){\bf F}&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot v &=0\ \end{array}\right\} \text{ in }\R^3\times(0,\infty), \end{equation} for a damping parameter $\mu>0$. Note that if $\nabla\cdot{\bf F}=0$ at some instant of time, then $\nabla\cdot{\bf F}=0$ at all later times. In fact, by taking the divergence of $\eqref{vNSEd0}_2$ and using $\eqref{vNSEd0}_3$, one has the following equation for $\nabla\cdot{\bf F}$: \[\partial_t(\nabla\cdot{\bf F})+(v\cdot\nabla)(\nabla\cdot{\bf F})=\mu\De(\nabla\cdot{\bf F}).\] Hence it is natural to assume \EQ{ \nabla\cdot{\bf F}=0. } Because the damping parameter $\mu$ plays no role in our construction of solutions, we set throughout this paper that \[\mu=1.\] Then, written column by column, \eqref{vNSEd0} can be rewritten as \begin{equation}\label{vNSEd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_tv-\De v+(v\cdot\nabla)v-\underset{n=1}{\overset{3}\sum}(f_n\cdot\nabla)f_n+\nabla\pi&=0\ \\ \partial_tf_m-\De f_m+(v\cdot\nabla)f_m-(f_m\cdot\nabla)v&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot f_m=\nabla\cdot v &=0\ \end{array}\right\} \text{ in }\R^3\times(0,\infty),\ m=1,2,3, \end{equation} where $f_m$ is the $m$-th column vector of ${\bf F}$. Similar to the MHD equations, the viscoelastic equations with damping \eqref{vNSEd} are invariant under the scaling \EQ{ v^\la(x,t)&=\la\, v(\la x,\la^2t),\ v_0^\la(x)=\la\,v_0(\la x),\\ {\bf F}^\la(x,t)&=\la\, {\bf F}(\la x,\la^2t),\ \, {\bf F}_0^\la(x)=\la\,{\bf F}_0(\la x),\\ \pi^\la(x,t)&=\la^2\pi(\la x,\la^2t).
} We define SS and $\la$-DSS solutions to \eqref{vNSEd} in the same manner as the ones we defined for the MHD equations. Self-similar solutions of \eqref{vNSEd} are determined by a time-independent profile $(u,{\bf G})$, where \begin{equation}\label{eq_1.6_vNSEd} v(x,t)=\frac1{\sqrt{2t}}\,u\left(\frac{x}{\sqrt{2t}}\right),\ \ \ {\bf F}(x,t)=\frac1{\sqrt{2t}}\,{\bf G}\left(\frac{x}{\sqrt{2t}}\right),\ \ \ \pi(x,t)=\frac1{2t}\,p\left(\frac{x}{\sqrt{2t}}\right), \end{equation} which satisfies the stationary Leray system for the viscoelastic Navier-Stokes equations with damping \begin{equation} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} -\De u-u-y\cdot\nabla u+(u\cdot\nabla)u-\underset{n=1}{\overset{3}\sum}(g_n\cdot\nabla)g_n+\nabla p&=0\ \\ -\De g_m-g_m-y\cdot\nabla g_m+(u\cdot\nabla)g_m-(g_m\cdot\nabla)u&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u =\nabla\cdot g_m&=0 \ \end{array}\right\} \text{ in }\R^3,\ m=1,2,3, \end{equation} where $g_m$ is the $m$-th column vector of ${\bf G}$. For discretely self-similar solutions of \eqref{vNSEd}, we consider the self-similar transform \begin{equation} v(x,t)=\frac1{\sqrt{2t}}\,u(y,s),\ \ \ {\bf F}(x,t)=\frac1{\sqrt{2t}}\,{\bf G}(y,s),\ \ \ \pi(x,t)=\frac1{2t}\,p(y,s), \end{equation} where $x,t,y,s$ satisfy \eqref{xtys}. Then $(u,{\bf G},p)$ solves the time-dependent Leray system for the viscoelastic Navier-Stokes equations with damping \begin{equation}\label{eq_1.10_vNSEd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_su-\De u-u-y\cdot\nabla u+(u\cdot\nabla)u-\underset{n=1}{\overset{3}\sum}(g_n\cdot\nabla)g_n+\nabla p&=0\ \\ \partial_sg_m-\De g_m-g_m-y\cdot\nabla g_m+(u\cdot\nabla)g_m-(g_m\cdot\nabla)u&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u =\nabla\cdot g_m&=0 \ \end{array}\right\} \text{ in }\R^3\times\R,\ m=1,2,3, \end{equation} where $g_m$ is the $m$-th column vector of ${\bf G}$.
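For the reader's convenience, we record why discrete self-similarity corresponds to $s$-periodicity under this transform. Since $t=\frac12e^{2s}$ and $x=\sqrt{2t}\,y$, replacing $s$ by $s+\log\la$ at fixed $y$ replaces $(x,t)$ by $(\la x,\la^2t)$, so
\EQN{
u(y,s+\log\la)=\sqrt{2\la^2t}\;v(\la x,\la^2t)=\sqrt{2t}\;\la\,v(\la x,\la^2t)=\sqrt{2t}\;v^\la(x,t),
}
which coincides with $u(y,s)=\sqrt{2t}\,v(x,t)$ for all $(y,s)$ exactly when $v^\la=v$. The same computation applies to ${\bf G}$ and, with the factor $2t$ in place of $\sqrt{2t}$, to $p$.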
Note that $(v,{\bf F},\pi)$ is $\la$-DSS if and only if $(u,{\bf G},p)$ is periodic in $s$ with period $T=\log(\la)$. The authors of \cite{MR2165379} remarked that passing to the limit $\mu\to0^+$ in solutions of \eqref{vNSEd0} via standard weak convergence methods does not yield weak solutions of \eqref{vNSE}. Nevertheless, \eqref{vNSEd0} is an interesting system in its own right, and there are a few studies of it. For instance, Lai-Lin-Wang \cite{MR3609234} established the existence of a global forward SS classical solution to \eqref{vNSEd0} for locally H\"{o}lder continuous, $(-1)$-homogeneous initial data. For regularity issues, we refer the reader to \cite{MR3032986} and \cite{MR3626232}. \subsection{Main results and Notation} Our first goal is to extend the notion of weak solutions to ones with more general initial data. To this end, we recall the definition of local Leray weak solutions of the MHD equations \eqref{MHD}, which is consistent with the concept introduced by Lemari\'{e}-Rieusset \cite{MR1938147} for the Navier-Stokes equations.
Here, for $1\le q<\infty$, let $L^q_{\textup{uloc}}$ denote the space of functions in $\R^3$ with \[\|f\|_{L^q_{\textup{uloc}}}:=\sup_{x_0\in\R^3}\|f\|_{L^q(B_1(x_0))}<\infty.\] \begin{defn}[Local Leray solutions of the MHD equations]\thlabel{def_loc_leray_mhd} A pair of vector fields $(v,b)$, where $v,b:\R^3\times[0,\infty)\to\R^3$ and $v,b\in L^2_{\textup{loc}}(\R^3\times[0,\infty))$, is called a local Leray solution to \eqref{MHD} with divergence-free initial data $v_0,\,b_0\in L_{\textup{uloc}}^2$ if \begin{enumerate}[$(i)$] \item there exists $\pi\in L_{\textup{loc}}^{3/2}(\R^3\times[0,\infty))$ such that $(v,b,\pi)$ is a distributional solution to \eqref{MHD}, \item $($Locally finite energy$/$enstrophy$)$ for any $R>0$, $(v,b)$ satisfies \EQ{\label{lfee_mhd} &\underset{0\le t<R^2}{\textup{esssup}}\ \underset{x_0\in\R^3}{\sup}\int_{B_R(x_0)}\frac12\left(|v(x,t)|^2+|b(x,t)|^2\right)dx\\ &~~~~~~~~~~~~~~~~~~~~~~~~+\underset{x_0\in\R^3}{\sup}\int_0^{R^2}\int_{B_R(x_0)}\left(|\nabla v(x,t)|^2+|\nabla b(x,t)|^2\right)dxdt<\infty, } \item $($Decay at spatial infinity$)$ for any $R>0$, $(v,b)$ satisfies \EQ{\label{dasi_mhd} \lim_{|x_0|\to\infty}\int_0^{R^2}\int_{B_R(x_0)}\left(|v(x,t)|^2+|b(x,t)|^2\right)dxdt=0, } \item $($Convergence to initial data$)$ for all compact subsets $K$ of $\R^3$ we have $v(t)\to v_0$ and $b(t)\to b_0$ in $L^2(K)$ as $t\to0^+$, \item $($Local energy inequality$)$ for all cylinders $Q$ compactly contained in $\R^3\times(0,\infty)$ and all nonnegative $\phi\in C^\infty_0(Q)$, we have \EQ{\label{lei_mhd} 2\int\int\left(|\nabla v|^2+|\nabla b|^2\right)\phi\,dxdt\le&\int\int\left(|v|^2+|b|^2\right)\left(\partial_t\phi+\De\phi\right)dxdt\\&+\int\int\left(|v|^2+|b|^2+2\pi\right)(v\cdot\nabla\phi)dxdt\\&-2\int\int(v\cdot b)(b\cdot\nabla\phi)dxdt. } \end{enumerate} \end{defn} One of our goals in this paper is to prove the following theorem on the existence of a class of forward discretely self-similar solutions of the MHD equations \eqref{MHD}.
\begin{thm}\thlabel{thm_1.2_mhd} Let $v_0$ and $b_0$ be divergence-free, $\la$-DSS vector fields for some $\la>1$ and satisfy \EQ{\label{eq_1.12_mhd} \|v_0\|_{L^3_w(\R^3)}\le c_0,\ \ \ \|b_0\|_{L^3_w(\R^3)}\le c_0, } for some constant $c_0>0$. Then there exists a $\la$-DSS local Leray solution $(v,b)$ to \eqref{MHD}. Moreover, there exists $C_0=C_0(v_0,b_0)$ so that \[\|v(t)-e^{t\De}v_0\|_{L^2(\R^3)}\le C_0\,t^{1/4},\ \ \ \|b(t)-e^{t\De}b_0\|_{L^2(\R^3)}\le C_0\,t^{1/4}\] for any $t\in(0,\infty)$. \end{thm} Also, self-similar solutions of the MHD equations \eqref{MHD} can be constructed with $(-1)$-homogeneous initial data. Namely, we have \begin{thm}\thlabel{thm_1.3_mhd} Let $v_0$ and $b_0$ be divergence-free, $(-1)$-homogeneous and satisfy \eqref{eq_1.12_mhd} for some constant $c_0>0$. Then there exists a self-similar local Leray solution $(v,b)$ to \eqref{MHD}. In addition, there exists $C_0=C_0(v_0,b_0)$ such that \[\|v(t)-e^{t\De}v_0\|_{L^2(\R^3)}\le C_0\,t^{1/4},\ \ \ \|b(t)-e^{t\De}b_0\|_{L^2(\R^3)}\le C_0\,t^{1/4}\] for any $t\in(0,\infty)$. \end{thm} We would like to show similar results to \thref{thm_1.2_mhd} and \thref{thm_1.3_mhd} for the viscoelastic Navier-Stokes equations with damping \eqref{vNSEd}. For this purpose, we define analogous local Leray solutions to the viscoelastic Navier-Stokes equations with damping \eqref{vNSEd} as follows. 
\begin{defn}[Local Leray solutions of the viscoelastic Navier-Stokes equations with damping] A pair of a vector field and a tensor field $(v,{\bf F})$, where $v:\R^3\times(0,\infty)\to\R^3$, ${\bf F}:\R^3\times(0,\infty)\to\R^{3\times3}$ and $v,f_m\in L^2_{\textup{loc}}(\R^3\times[0,\infty))$ for $m=1,2,3$ with $f_m$ being the $m$-th column of ${\bf F}$, is called a local Leray solution to \eqref{vNSEd} with divergence-free initial data $v_0,\,{\bf F}_0\in L_{\textup{uloc}}^2$ if \begin{enumerate}[$(i)$] \item there exists $\pi\in L_{\textup{loc}}^{3/2}(\R^3\times[0,\infty))$ such that $(v,{\bf F},\pi)$ is a distributional solution to \eqref{vNSEd}, \item $($Locally finite energy$/$enstrophy$)$ for any $R>0$, $(v,{\bf F})$ satisfies \EQ{\label{lfe_vNSEd} &\underset{0\le t<R^2}{\textup{esssup}}\ \underset{x_0\in\R^3}{\sup}\int_{B_R(x_0)}\frac12\left(|v(x,t)|^2+|{\bf F}(x,t)|^2\right)dx\\ &~~~~~~~~~~~~~~~~~~~~~~+\underset{x_0\in\R^3}{\sup}\int_0^{R^2}\int_{B_R(x_0)}\left(|\nabla v(x,t)|^2+|\nabla{\bf F}(x,t)|^2\right)dxdt<\infty, } \item $($Decay at spatial infinity$)$ for any $R>0$, $(v,{\bf F})$ satisfies \EQ{\label{dasi_vNSEd} \lim_{|x_0|\to\infty}\int_0^{R^2}\int_{B_R(x_0)}\left(|v(x,t)|^2+|{\bf F}(x,t)|^2\right)dxdt=0, } \item $($Convergence to initial data$)$ for all compact subsets $K$ of $\R^3$ we have $v(t)\to v_0$ and ${\bf F}(t)\to {\bf F}_0$ in $L^2(K)$ as $t\to0^+$, \item $($Local energy inequality$)$ for all cylinders $Q$ compactly contained in $\R^3\times(0,\infty)$ and all nonnegative $\phi\in C^\infty_0(Q)$, we have \EQ{\label{lei_vNSEd} 2\int\int\left(|\nabla v|^2+|\nabla{\bf F}|^2\right)\phi\,dxdt\le&\int\int\left(|v|^2+|{\bf F}|^2\right)\left(\partial_t\phi+\De\phi\right)dxdt\\&+\int\int\left(|v|^2+|{\bf F}|^2+2\pi\right)(v\cdot\nabla\phi)dxdt\\&-2\,\underset{n=1}{\overset{3}\sum}\int\int(v\cdot f_n)(f_n\cdot\nabla\phi)dxdt.
} \end{enumerate} \end{defn} The main theorems in this paper for the viscoelastic Navier-Stokes equations with damping are stated as follows: \begin{thm}\thlabel{thm_1.2_vNSEd} Let $v_0$ and ${\bf F}_0$ be divergence-free, $\la$-DSS fields for some $\la>1$ and satisfy \EQ{\label{eq_1.12_vNSEd} \|v_0\|_{L^3_w(\R^3)}\le c_0,\ \ \ \|{\bf F}_0\|_{L^3_w(\R^3)}\le c_0, } for some constant $c_0>0$. Then there exists a local Leray solution $(v,{\bf F})$ to \eqref{vNSEd} which is $\la$-DSS. Moreover, there exists $C_0=C_0(v_0,{\bf F}_0)$ so that \[\|v(t)-e^{t\De}v_0\|_{L^2(\R^3)}\le C_0\,t^{1/4},\ \ \ \|{\bf F}(t)-e^{t\De}{\bf F}_0\|_{L^2(\R^3)}\le C_0\,t^{1/4}\] for any $t\in(0,\infty)$. \end{thm} \begin{thm}\thlabel{thm_1.3_vNSEd} Let $v_0$ and ${\bf F}_0$ be divergence-free, $(-1)$-homogeneous and satisfy \eqref{eq_1.12_vNSEd} for some constant $c_0>0$. Then there exists a self-similar local Leray solution $(v,{\bf F})$ to \eqref{vNSEd}. In addition, there exists $C_0=C_0(v_0,{\bf F}_0)$ so that \[\|v(t)-e^{t\De}v_0\|_{L^2(\R^3)}\le C_0\,t^{1/4},\ \ \ \|{\bf F}(t)-e^{t\De}{\bf F}_0\|_{L^2(\R^3)}\le C_0\,t^{1/4}\] for any $t\in(0,\infty)$. \end{thm} \begin{remark} The solutions obtained in \thref{thm_1.3_mhd} and \thref{thm_1.3_vNSEd} are actually infinitely smooth. \end{remark} The following a priori bounds are the key to the construction of our desired solutions. For the MHD equations, if $(u,a)$ is a solution of \eqref{eq_1.10_mhd}, then the differences $U=u-U_0$ and $A=a-A_0$, where $U_0$ and $A_0$ are heat solutions, formally satisfy \EQ{\label{eq_1.15_mhd} &~~~~\int_0^T\int\left(|\nabla U|^2+|\nabla A|^2+\frac12\,|U|^2+\frac12\,|A|^2\right)\\ &=\int_0^T\int\left[(U\cdot\nabla)U\cdot U_0+(U\cdot\nabla)A\cdot A_0-(A\cdot\nabla)U\cdot A_0-(A\cdot\nabla)A\cdot U_0\right]\\ &~~~~-\int_0^T\int\left[\mathcal{R}_1(U_0,A_0)\cdot U+\mathcal{R}_2(U_0,A_0)\cdot A\right], } where $\mathcal{R}_1(U_0,A_0)$ and $\mathcal{R}_2(U_0,A_0)$ will be given in \eqref{eq_R1_R2}.
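The absence of cubic terms in $(U,A)$ on the right-hand side of \eqref{eq_1.15_mhd} rests on two standard identities: for divergence-free $U$ and $A$ with sufficient decay, integration by parts gives
\EQN{
\int(U\cdot\nabla)U\cdot U=\int(U\cdot\nabla)\frac{|U|^2}2=0,\qquad \int(U\cdot\nabla)A\cdot A=0,
}
while the remaining cubic terms cancel in pairs:
\EQN{
\int\left[(A\cdot\nabla)A\cdot U+(A\cdot\nabla)U\cdot A\right]=\int(A\cdot\nabla)(A\cdot U)=0.
}
Hence only terms that are at most quadratic in $(U,A)$ survive.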
Similarly, for the viscoelastic Navier-Stokes equations with damping, if $(u,g_1,g_2,g_3)$ is a solution of \eqref{eq_1.10_vNSEd}, then the differences $U=u-U_0$ and $G_m=g_m-G_{m,0}$, $m=1,2,3$, where $U_0$ and $G_{m,0}$ are heat solutions, formally obey \EQ{\label{eq_1.15_vNSEd} &~~~~\int_0^T\int\left(|\nabla U|^2+\sum_{n=1}^3|\nabla G_n|^2+\frac12\,|U|^2+\frac12\,\sum_{n=1}^3|G_n|^2\right)\\ &=\int_0^T\int\left[(U\cdot\nabla)U\cdot U_0+\sum_{n=1}^3(U\cdot\nabla)G_n\cdot G_{n,0}-\sum_{n=1}^3(G_n\cdot\nabla)U\cdot G_{n,0}-\sum_{n=1}^3(G_n\cdot\nabla)G_n\cdot U_0\right]\\ &~~~~-\int_0^T\int\left[\mathcal{R}_3(U_0,G_{1,0},G_{2,0},G_{3,0})\cdot U+\sum_{n=1}^3\mathcal{R}_4(U_0,G_{n,0})\cdot G_n\right], } where $\mathcal{R}_3(U_0,G_{1,0},G_{2,0},G_{3,0})$ and $\mathcal{R}_4(U_0,G_{n,0})$ will be given in \eqref{eq_R3_R4}. Note that all cubic terms either vanish or cancel out in both \eqref{eq_1.15_mhd} and \eqref{eq_1.15_vNSEd}. To control the quadratic terms, we will choose a suitable cutoff to eliminate the possibly large local behavior of $U_0,\,A_0$ and $G_{m,0}$. See \thref{lem_2.5} for more details. The rest of this paper is organized as follows. In Sect. 2, we recall some results from \cite{MR3611025} and construct a time-periodic solution to the Leray system for the MHD equations and for the viscoelastic Navier-Stokes equations with damping. In Sect. 3, we recover discretely self-similar local Leray solutions for the MHD equations and the viscoelastic Navier-Stokes equations with damping from the solutions of the corresponding Leray systems obtained in Sect. 2. In Sect.
4, we prove the existence of self-similar local Leray solutions for the MHD equations and the viscoelastic Navier-Stokes equations with damping by constructing steady-state solutions to the Leray system for the MHD equations and the viscoelastic Navier-Stokes equations with damping, respectively.\\ \\ \emph{Notation.} We define the following function spaces: \EQN{\mathcal{V}=\{f\in C^\infty_0(\R^3;\R^3):\nabla\cdot f=0\},\ X=\overline{\mathcal{V}}^{H^1_0(\R^3)},\ H=\overline{\mathcal{V}}^{L^2(\R^3)}. } Let $(\cdot,\cdot)$ be the $L^2(\R^3)$ inner product, and $\left<\cdot,\cdot\right>$ be the dual pairing of $H^1$ and its dual space $H^{-1}$, or that for $X$ and $X^*$. We denote \[\mathcal{D}_T=\left\{\varphi\in C^\infty(\R^3\times\R;\R^3):\begin{array}{l}\nabla\cdot \varphi=0,\,\varphi\text{ is periodic in $s$ with period $T$},\\ \textup{spt}(\varphi(\cdot,s))\text{ is compact in }\R^3\text{ for all }s\in[0,T)\end{array}\right\}.\] We recall the Morrey space \[M^{p,\al}=\left\{f\in L^p_{\text{loc}}:\|f\|_{M^{p,\al}}:=\sup_{x\in\R^3,\,r>0}\left[r^{-\al}\int_{B_r(x)}|f|^p\right]^{1/p}<\infty\right\},\] and the weighted $L^2$ spaces \[L^2_{-k/2}=\left\{f\in L^2_{\textup{loc}}:\int_{\R^3}\frac{|f(x)|^2}{(1+|x|)^k}\,dx<\infty\right\}.\] \section{The Time-Periodic Leray System} \subsection{The time-periodic Leray system for the MHD equations} In this subsection, we study the existence of time-periodic weak solutions to the Leray system for the MHD equations \begin{equation}\label{leray_mhd} \begin{split} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_su-\De u-u-y\cdot\nabla u+(u\cdot\nabla)u-(a\cdot\nabla)a+\nabla p&=0\ \\ \partial_sa-\De a-a-y\cdot\nabla a+(u\cdot\nabla)a-(a\cdot\nabla)u&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u =\nabla\cdot a&=0 \ \end{array}\right\}\ \text{ in }\R^3\times\R,\\ \lim_{|y_0|\to\infty}\int_{B_1(y_0)}\left(|u(y,s)-U_0(y,s)|^2+|a(y,s)-A_0(y,s)|^2\right)dy=0\ \text{ for all }s\in\R,\\
u(\cdot,s)=u(\cdot,s+T),\ a(\cdot,s)=a(\cdot,s+T)\ \text{ in }\R^3\text{ for all }s\in\R, \end{split} \end{equation} for given $T$-periodic divergence-free vector fields $U_0$ and $A_0$. We first revisit the assumption for the background vector field $U_0$ and the corresponding results in \cite{MR3611025}. \begin{assum}[\cite{MR3611025} Assumption 2.1]\thlabel{assum_2.1} $U_0\in C^1(\R^4;\R^3)$ is periodic in $s$ with period $T>0$, divergence-free and satisfies \[\partial_sU_0-\De U_0-U_0-y\cdot\nabla U_0=0,\] \[U_0\in L^\infty(0,T;L^4\cap L^q(\R^3)),\] \[\nabla U_0\in L^2_{loc}(\R^4),\] \[\partial_s U_0\in L^\infty(0,T;L^{6/5}_{\textup{loc}}(\R^3)),\] and \[\sup_{s\in[0,T]}\|U_0(s)\|_{L^q(\R^3\setminus B_R)}\le\Theta(R),\] for some $q\in(3,\infty]$ and $\Theta:[0,\infty)\to[0,\infty)$ such that $\Theta(R)\to0$ as $R\to\infty$. \end{assum} For notational simplicity, we define the linear differential operator $\mathcal{L}$ by \begin{equation} \label{diff_op_L}\mathcal{L}W=\partial_sW-\De W-W-y\cdot\nabla W, \end{equation} and so \[\left<\mathcal{L}W,\zeta\right>=(\partial_sW-W-y\cdot\nabla W,\zeta)+(\nabla W,\nabla\zeta)\] for all $\zeta\in C^1_0(\R^3)$. \begin{lem}[\cite{MR3611025} Lemma 2.5]\thlabel{lem_2.5} Fix $q\in(3,\infty]$ and suppose $U_0$ satisfies \thref{assum_2.1} for this $q$. Let $Z\in C^\infty(\R^3)$ with $0\le Z\le 1,\,Z(x)=1$ for $|x|>2$ and $Z(x)=0$ for $|x|<1$.
For any $\de\in(0,1)$, there exists $R_0=R_0(U_0,\de)\ge1$ so that if we define $\xi(y)=Z\left(\frac{y}{R_0}\right)$, and \[ w(y,s)=\int_{\R^3}\nabla_y\,\frac1{4\pi|y-z|}\,\nabla_z\xi(z)\cdot U_0(z,s)dz, \] then \[ W(y,s)=\xi(y)U_0(y,s)+w(y,s) \] has the following properties: locally continuously differentiable in $y$ and $s$, $T$-periodic, divergence-free, $U_0-W\in L^\infty(0,T;L^2(\R^3))\cap L^2(0,T;H^1(\R^3))$, and \begin{equation}\label{eq_2.8} \|W\|_{L^\infty(0,T;L^q(\R^3))}\le\de, \end{equation} \begin{equation}\label{eq_2.9} \|W\|_{L^\infty(0,T;L^4(\R^3))}\le c(R_0,U_0), \end{equation} and \begin{equation}\label{eq_2.10} \|\mathcal{L}W\|_{L^\infty(0,T;H^{-1}(\R^3))}\le c(R_0,U_0), \end{equation} where $c(R_0,U_0)$ depends on $R_0$ and quantities associated with $U_0$ which are finite by \thref{assum_2.1}. \end{lem} \begin{lem}[\cite{MR3611025} Lemma 3.4]\thlabel{lem_3.4} Suppose $v_0$ satisfies the assumption of \thref{thm_1.2_mhd} and let $x,t,y,s$ satisfy \eqref{xtys}. Then \[U_0(y,s):=\sqrt{2t}(e^{t\De}v_0)(x)\] satisfies \thref{assum_2.1} with $T=\log(\la)$ and any $q\in(3,\infty]$. \end{lem} Similar to the time-periodic Leray system for the Navier-Stokes equations in \cite{MR3611025}, we define periodic weak solutions and suitable periodic weak solutions of \eqref{leray_mhd} as follows. \begin{defn}[Periodic weak solution of Leray system for the MHD equations] Let $U_0$ and $A_0$ both satisfy \thref{assum_2.1}.
A pair of vector fields $(u,a)$ is a periodic weak solution to \eqref{leray_mhd} if $\nabla\cdot u=\nabla\cdot a=0$, \[u-U_0,\,a-A_0\in L^\infty(0,T;L^2(\R^3))\cap L^2(0,T;H^1(\R^3)),\] and \begin{equation} \int_0^T\left[(u,\partial_s\varphi)-(\nabla u,\nabla \varphi)+(u+y\cdot\nabla u-u\cdot\nabla u+a\cdot\nabla a,\varphi)\right]ds=0, \end{equation} \begin{equation} \int_0^T\left[(a,\partial_s\varphi)-(\nabla a,\nabla \varphi)+(a+y\cdot\nabla a-u\cdot\nabla a+a\cdot\nabla u,\varphi)\right]ds=0, \end{equation} hold for all $\varphi\in\mathcal{D}_T$. \end{defn} \begin{defn}[Suitable periodic weak solution of Leray system for the MHD equations] Let $U_0$ and $A_0$ both satisfy \thref{assum_2.1}. A triple $(u,a,p)$ is a suitable periodic weak solution to \eqref{leray_mhd} if $u,a,p$ are periodic in $s$ with period $T$, $(u,a)$ is a periodic weak solution to \eqref{leray_mhd}, $p\in L^{3/2}_{\textup{loc}}(\R^4)$, $(u,a,p)$ solves \eqref{leray_mhd} in the sense of distributions, and the local energy inequality holds: \EQ{\label{lei_mhd_leray} \int_{\R^4}\left(\frac{|u|^2+|a|^2}2+|\nabla u|^2+|\nabla a|^2\right)\psi \,dyds\le&\int_{\R^4}\frac{|u|^2+|a|^2}2\left(\partial_s\psi+\De\psi\right)dyds\\&+\int_{\R^4}\left(\frac{|u|^2+|a|^2}2(u-y)+pu\right)\cdot\nabla\psi\,dyds\\&-\int_{\R^4}(u\cdot a)a\cdot\nabla\psi\,dyds, } for all nonnegative $\psi\in C^\infty_0(\R^4)$. \end{defn} We are now ready to prove the existence of suitable periodic weak solutions of \eqref{leray_mhd}. Namely, we have \begin{thm}[Existence of suitable periodic weak solutions to \eqref{leray_mhd}]\thlabel{thm_2.4_mhd} Assume $U_0(y,s)$ and $A_0(y,s)$ both satisfy \thref{assum_2.1} with $q=10/3$. Then \eqref{leray_mhd} has a suitable periodic weak solution $(u,a,p)$ in $\R^4$ with period $T$. \end{thm} \begin{proof} Fix $Z\in C^\infty(\R^3)$ with $0\le Z\le 1,\,Z(x)=1$ for $|x|>2$ and $Z(x)=0$ for $|x|<1$.
Applying \thref{lem_2.5} with $\de=\frac14$, one can choose $R_0=R_0(U_0,A_0)\ge1$ such that letting $\xi(y)=Z\left(\frac{y}{R_0}\right)$ and setting \begin{equation}\label{W_def} W(y,s)=\xi(y)U_0(y,s)+w(y,s) \end{equation} and \begin{equation}\label{D_def} D(y,s)=\xi(y)A_0(y,s)+d(y,s), \end{equation} where \begin{equation} w(y,s)=\int_{\R^3}\nabla_y\,\frac1{4\pi|y-z|}\,\nabla_z\xi(z)\cdot U_0(z,s)dz \end{equation} and \begin{equation} d(y,s)=\int_{\R^3}\nabla_y\,\frac1{4\pi|y-z|}\,\nabla_z\xi(z)\cdot A_0(z,s)dz, \end{equation} both $W$ and $D$ satisfy the conclusion of \thref{lem_2.5}. Using the differential operator $\mathcal{L}$ defined in \eqref{diff_op_L}, the Leray system \eqref{leray_mhd} can be written as \begin{equation} \setlength\arraycolsep{1.5pt}\def1.2{1.2} \left\{\begin{array}{ll} \mathcal{L}u+(u\cdot\nabla)u-(a\cdot\nabla)a+\nabla p&=0\\ \mathcal{L}a+(u\cdot\nabla)a-(a\cdot\nabla)u&=0\\ ~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u=\nabla\cdot a&=0. \end{array}\right.\end{equation} We are looking for a solution of the form $u=U+W$ and $a=A+D$. Then $(U,A)$ must satisfy the perturbed Leray system for the MHD equations \begin{equation}\label{ptb_leray_mhd} \setlength\arraycolsep{1.5pt}\def1.2{1.2} \left\{\begin{array}{ll} \mathcal{L}U+(W+U)\cdot\nabla U+U\cdot\nabla W-(D+A)\cdot\nabla A-A\cdot\nabla D+\nabla p&=-\mathcal{R}_1(W,D)\\ \mathcal{L}A+(W+U)\cdot\nabla A+U\cdot\nabla D-(D+A)\cdot\nabla U-A\cdot\nabla W&=-\mathcal{R}_2(W,D)\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot U=\nabla\cdot A&=0, \end{array}\right. \end{equation} where \begin{equation}\label{eq_R1_R2} \setlength\arraycolsep{1.5pt}\def1.2{1.2} \left\{\begin{array}{l} \mathcal{R}_1(W,D):=\mathcal{L}W+W\cdot\nabla W-D\cdot\nabla D\\ \mathcal{R}_2(W,D):=\mathcal{L}D+W\cdot\nabla D-D\cdot\nabla W. \end{array}\right. 
\end{equation} We first solve the following mollified perturbed Leray system for the MHD equations for $(U^\ve,A^\ve,p^\ve)$ in $\R^3\times[0,T]$: \begin{equation}\label{mdf_ptb_leray_mhd} \setlength\arraycolsep{1.5pt}\def1.2{1.2} \left\{\begin{array}{ll} &\mathcal{L}U^\ve+(W+(\eta_\ve*U^\ve))\cdot\nabla U^\ve+U^\ve\cdot\nabla W\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~-(D+(\eta_\ve*A^\ve))\cdot\nabla A^\ve-A^\ve\cdot\nabla D+\nabla p^\ve=-\mathcal{R}_1(W,D),\\ &\mathcal{L}A^\ve+(W+(\eta_\ve*U^\ve))\cdot\nabla A^\ve+U^\ve\cdot\nabla D\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-(D+(\eta_\ve*A^\ve))\cdot\nabla U^\ve-A^\ve\cdot\nabla W=-\mathcal{R}_2(W,D),\\ &\,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot U^\ve=\nabla\cdot A^\ve=0, \end{array}\right. \end{equation} where $\eta_\ve(y)=\ve^{-3}\eta(y/\ve)$ for some fixed function $\eta\in C^\infty_0(\R^3)$ satisfying $\int_{\R^3}\eta dy=1$. The weak formulation of \eqref{mdf_ptb_leray_mhd} is \begin{equation} \setlength\arraycolsep{1.5pt}\def1.2{1.2} \left\{\begin{array}{lll} \frac{d}{ds}(U^\ve,f)&=&-(\nabla U^\ve,\nabla f)+(U^\ve+y\cdot\nabla U^\ve,f)-\left((\eta_\ve*U^\ve)\cdot\nabla U^\ve-(\eta_\ve*A^\ve)\cdot\nabla A^\ve,f\right)\\ &&-(W\cdot\nabla U^\ve+U^\ve\cdot\nabla W-D\cdot\nabla A^\ve-A^\ve\cdot\nabla D,f)-\left<\mathcal{R}_1(W,D),f\right>\\ \frac{d}{ds}(A^\ve,f)&=&-(\nabla A^\ve,\nabla f)+(A^\ve+y\cdot\nabla A^\ve,f)-\left((\eta_\ve*U^\ve)\cdot\nabla A^\ve-(\eta_\ve*A^\ve)\cdot\nabla U^\ve,f\right)\\ &&-(W\cdot\nabla A^\ve+U^\ve\cdot\nabla D-D\cdot\nabla U^\ve-A^\ve\cdot\nabla W,f)-\left<\mathcal{R}_2(W,D),f\right> \end{array}\right. \end{equation} for all $f\in\mathcal{V}$ and a.e. $s\in(0,T)$. ~\\ {\bf Step 1: Construction of a solution to the mollified perturbed Leray system} We use the Galerkin method to construct a solution of \eqref{mdf_ptb_leray_mhd}. Let $\{h_k\}_{k\in\mathbb{N}}\subset\mathcal{V}$ be an orthonormal basis of $H$. 
Fixing a natural number $k$, we search for an approximate solution of the form $U^\ve_k(y,s)=\sum_{i=1}^k\mu^\ve_{ki}(s)h_i(y),\,A^\ve_k(y,s)=\sum_{i=1}^k\al^\ve_{ki}(s)h_i(y)$. We first prove the existence and an a priori estimate for $T$-periodic solutions $\mu^\ve_k=(\mu^\ve_{k1},\cdots,\mu^\ve_{kk}),\,\al^\ve_k=(\al^\ve_{k1},\cdots,\al^\ve_{kk})$ to the system of ODEs \begin{equation}\label{galerkin_ode_mhd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} \frac{d}{ds}\mu^\ve_{kj}&=\underset{i=1}{\overset{k}\sum}\mathscr{A}_{ij}\mu^\ve_{ki}+\underset{i=1}{\overset{k}\sum}\mathscr{B}_{ij}\al^\ve_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{C}^\ve_{ilj}\mu^\ve_{ki}\mu^\ve_{kl}-\underset{i,l=1}{\overset{k}\sum}\mathscr{C}^\ve_{ilj}\al^\ve_{ki}\al^\ve_{kl}+\mathscr{D}_j\\ \frac{d}{ds}\al^\ve_{kj}&=\underset{i=1}{\overset{k}\sum}\mathscr{E}_{ij}\mu^\ve_{ki}+\underset{i=1}{\overset{k}\sum}\mathscr{F}_{ij}\al^\ve_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{G}^\ve_{ilj}\mu^\ve_{ki}\al^\ve_{kl}+\mathscr{H}_j, \end{array}\right. \end{equation} for $j=1,\cdots,k$, where \EQ{\label{galerkin_ode_coeff_mhd} \mathscr{A}_{ij}&=-(\nabla h_i,\nabla h_j)+(h_i+y\cdot\nabla h_i,h_j)-(h_i\cdot\nabla W,h_j)-(W\cdot\nabla h_i,h_j),\\ \mathscr{B}_{ij}&=(h_i\cdot\nabla D,h_j)+(D\cdot\nabla h_i,h_j),\\ \mathscr{C}^\ve_{ilj}&=-((\eta_\ve*h_i)\cdot\nabla h_l,h_j),\\ \mathscr{D}_j&=-\left<\mathcal{R}_1(W,D),h_j\right>,\\ \mathscr{E}_{ij}&=-(h_i\cdot\nabla D,h_j)+(D\cdot\nabla h_i,h_j),\\ \mathscr{F}_{ij}&=-(\nabla h_i,\nabla h_j)+(h_i+y\cdot\nabla h_i,h_j)+(h_i\cdot\nabla W,h_j)-(W\cdot\nabla h_i,h_j),\\ \mathscr{G}^\ve_{ilj}&=-((\eta_\ve*h_i)\cdot\nabla h_l,h_j)+((\eta_\ve*h_l)\cdot\nabla h_i,h_j),\\ \mathscr{H}_j&=-\left<\mathcal{R}_2(W,D),h_j\right>. } Fix $k\in\mathbb{N}$.
For any $U^0,A^0\in\textup{span}(h_1,\cdots,h_k)$, there exist $\mu^\ve_{kj},\al^\ve_{kj}\in H^1(0,\tilde{T})$, $j=1,\cdots,k$, that uniquely solve \eqref{galerkin_ode_mhd} with initial data $\mu^\ve_{kj}(0)=(U^0,h_j),\,\al^\ve_{kj}(0)=(A^0,h_j)$, $j=1,\cdots,k$, for some $0<\tilde{T}\le T$. We show that $\tilde{T}=T$. To this end, we first derive \EQ{\label{eq_2.27_mhd} &~~~~\frac12\,\frac{d}{ds}\left(\|U^\ve_k\|_{L^2}^2+\|A^\ve_k\|_{L^2}^2\right)+\frac12\left(\|U^\ve_k\|_{L^2}^2+\|A^\ve_k\|_{L^2}^2\right)+\left(\|\nabla U^\ve_k\|_{L^2}^2+\|\nabla A^\ve_k\|_{L^2}^2\right)\\ &=-(U^\ve_k\cdot\nabla W-D\cdot\nabla A^\ve_k-A^\ve_k\cdot\nabla D,U^\ve_k)-(U^\ve_k\cdot\nabla D-D\cdot\nabla U^\ve_k-A^\ve_k\cdot\nabla W,A^\ve_k)\\ &~~~-\left<\mathcal{R}_1(W,D),U^\ve_k\right>-\left<\mathcal{R}_2(W,D),A^\ve_k\right> ,} by multiplying the $j$-th equation of $\eqref{galerkin_ode_mhd}_1$ by $\mu^\ve_{kj}$, multiplying the $j$-th equation of $\eqref{galerkin_ode_mhd}_2$ by $\al^\ve_{kj}$, and then summing up all $2k$ equations. In the derivation, notice that $((\eta_\ve*U^\ve)\cdot\nabla U^\ve,U^\ve),\,(W\cdot\nabla U^\ve,U^\ve),\,((\eta_\ve*U^\ve)\cdot\nabla A^\ve,A^\ve)$ and $(W\cdot\nabla A^\ve,A^\ve)$ vanish, and $((\eta_\ve*A^\ve)\cdot\nabla A^\ve,U^\ve)$ and $((\eta_\ve*A^\ve)\cdot\nabla U^\ve,A^\ve)$ cancel each other; thus these terms do not appear in \eqref{eq_2.27_mhd}.
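To see the cancellations explicitly, note that for any divergence-free field $b$ and any $f,g\in H^1(\R^3)$ for which the pairings below are finite, integration by parts gives \[ (b\cdot\nabla f,g)+(b\cdot\nabla g,f)=\int_{\R^3}b\cdot\nabla(f\cdot g)\,dy=-\int_{\R^3}(\nabla\cdot b)(f\cdot g)\,dy=0. \] Taking $b=\eta_\ve*A^\ve$, $f=A^\ve$, $g=U^\ve$ yields the cancellation of the two magnetic terms, while taking $f=g$ shows that the four transport terms vanish.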
Using \thref{lem_2.5} with $\de=\frac14$, we get \EQ{\label{etm_2.28_mhd} &~~~~\left|-(U^\ve_k\cdot\nabla W-D\cdot\nabla A^\ve_k-A^\ve_k\cdot\nabla D,U^\ve_k)-(U^\ve_k\cdot\nabla D-D\cdot\nabla U^\ve_k-A^\ve_k\cdot\nabla W,A^\ve_k)\right|\\ &\le\frac38\left(\|U^\ve_k\|_{H^1}^2+\|A^\ve_k\|_{H^1}^2\right), } and \EQ{\label{etm_2.29_mhd} \left|-\left<\mathcal{R}_1(W,D),U^\ve_k\right>-\left<\mathcal{R}_2(W,D),A^\ve_k\right>\right|\le C_2+\frac3{32}\left(\|U^\ve_k\|_{H^1}^2+\|A^\ve_k\|_{H^1}^2\right), } where $C_2=8\left(\|\mathcal{L}W\|_{H^{-1}}^2+\|\mathcal{L}D\|_{H^{-1}}^2+(\|W\|_{L^4}^2+\|D\|_{L^4}^2)^2\right)$ is independent of $s,T,k$ and $\varepsilon$. Using the estimates \eqref{etm_2.28_mhd} and \eqref{etm_2.29_mhd}, we obtain from \eqref{eq_2.27_mhd} the differential inequality \EQ{\label{eq_2.30_mhd} \frac{d}{ds}\left(\|U^\ve_k\|_{L^2}^2+\|A^\ve_k\|_{L^2}^2\right)+\frac1{16}\left(\|U^\ve_k\|_{L^2}^2+\|A^\ve_k\|_{L^2}^2\right)+\frac1{16}\left(\|\nabla U^\ve_k\|_{L^2}^2+\|\nabla A^\ve_k\|_{L^2}^2\right)\le C_2. } Applying the Gronwall inequality, we get \EQ{\label{eq_2.31_mhd} e^{s/{16}}\left(\|U^\ve_k\|_{L^2}^2+\|A^\ve_k\|_{L^2}^2\right)\le&~\left(\|U^0\|_{L^2}^2+\|A^0\|_{L^2}^2\right)+\int_0^{\tilde{T}}e^{\tau/{16}}C_2\,d\tau\\ \le&~\left(\|U^0\|_{L^2}^2+\|A^0\|_{L^2}^2\right)+e^{T/{16}}C_2T } for all $s\in[0,\tilde{T}]$. Since the right-hand side is finite, $\tilde{T}$ is not a blow-up time and we conclude that $\tilde{T}=T$. Choosing $\rho=\left(\frac{C_2T}{1-e^{-T/{16}}}\right)^{1/2}>0$ (independent of $k$), \eqref{eq_2.31_mhd} implies that \[\left(\|U^\ve_k\|_{L^2}^2+\|A^\ve_k\|_{L^2}^2\right)^\frac12\le\rho\] if $\left(\|U^0\|_{L^2}^2+\|A^0\|_{L^2}^2\right)^\frac12\le\rho$. Define $\mathcal{T}:B_\rho^{2k}\to B_\rho^{2k}$ by $\mathcal{T}(\mu^\ve_k(0),\al^\ve_k(0))=(\mu^\ve_k(T),\al^\ve_k(T))$, where $B_\rho^{2k}$ is the closed ball in $\R^{2k}$ of radius $\rho$ and centered at the origin.
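In particular, taking $s=\tilde{T}=T$ in \eqref{eq_2.31_mhd} and dividing by $e^{T/16}$ gives \[ \|U^\ve_k(T)\|_{L^2}^2+\|A^\ve_k(T)\|_{L^2}^2\le e^{-T/{16}}\left(\|U^0\|_{L^2}^2+\|A^0\|_{L^2}^2\right)+C_2T, \] which is precisely the self-mapping property of $\mathcal{T}$ on $B_\rho^{2k}$ used below.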
Note that the map $\mathcal{T}$ is continuous by the continuous dependence on initial conditions of the solution of ODEs. Thus, it has a fixed point by the Brouwer fixed point theorem, i.e., there exists $(\mu^\ve_k(0),\al^\ve_k(0))\in B_\rho^{2k}$ such that $(\mu^\ve_k(0),\al^\ve_k(0))=(\mu^\ve_k(T),\al^\ve_k(T))$. Let $U^0=\sum_{i=1}^k\mu^\ve_{ki}(0)h_i$ and $A^0=\sum_{i=1}^k\al^\ve_{ki}(0)h_i$. Then $U^0,A^0\in\textup{span}(h_1,\cdots,h_k)$ and $U^0=U^\ve_k(T),A^0=A^\ve_k(T)$. With the choice of $U^0$ and $A^0$ we have $\left(\|U^\ve_k(s)\|_{L^2}^2+\|A^\ve_k(s)\|_{L^2}^2\right)^\frac12\le\rho$ for all $s\in[0,T]$. Hence \begin{equation} \left(\|U^\ve_k\|_{L^\infty(0,T;L^2(\R^3))}^2+\|A^\ve_k\|_{L^\infty(0,T;L^2(\R^3))}^2\right)^\frac12\le\rho. \end{equation} Moreover, by integrating \eqref{eq_2.30_mhd} in $s\in[0,T]$ and using $U^\ve_k(0)=U^\ve_k(T),\,A^\ve_k(0)=A^\ve_k(T)$, we get \begin{equation} \frac1{16}\int_0^T\left(\|U^\ve_k(s)\|_{H^1}^2+\|A^\ve_k(s)\|_{H^1}^2\right)ds\le C_2T. \end{equation} Therefore, \begin{equation}\label{eq_2.26_mhd} \|U^\ve_k\|_{L^\infty(0,T;L^2(\R^3))}+\|A^\ve_k\|_{L^\infty(0,T;L^2(\R^3))}+\|U^\ve_k\|_{L^2(0,T;H^1(\R^3))}+\|A^\ve_k\|_{L^2(0,T;H^1(\R^3))}\le C, \end{equation} where $C=\sqrt{4(\rho^2+16C_2T)}$ is independent of both $\ve$ and $k$.
Using the uniformly bounded sequences $\{U^\ve_k\}_{k\in\mathbb{N}}$ and $\{A^\ve_k\}_{k\in\mathbb{N}}$, and a standard limiting process, we get, for all $\ve>0$, two $T$-periodic vector fields $U^\ve,A^\ve\in L^2(0,T;H^1_0(\R^3))$ (both have $\ve$-independent $L^\infty L^2$ and $L^2H^1$ bounds), a subsequence of $\{U^\ve_k\}_{k\in\mathbb{N}}$, and a subsequence of $\{A^\ve_k\}_{k\in\mathbb{N}}$ (still denoted by $U^\ve_k$ and $A^\ve_k$, respectively) so that \EQ{ &U^\ve_k\rightharpoonup U^\ve,\ A^\ve_k\rightharpoonup A^\ve\ \text{ weakly in $L^2(0,T;X)$},\\ &U^\ve_k\to U^\ve,\ A^\ve_k\to A^\ve\ \text{ strongly in $L^2(0,T;L^2(K))$ for all compact sets $K\subset\R^3$},\\ &U^\ve_k(s)\rightharpoonup U^\ve(s),\ A^\ve_k(s)\rightharpoonup A^\ve(s)\ \text{ weakly in $L^2$ for all $s\in[0,T]$}. } The weak convergence guarantees that $U^\ve(0)=U^\ve(T)$ and $A^\ve(0)=A^\ve(T)$. Moreover, the pair $(U^\ve,A^\ve)$ is a periodic weak solution of the mollified perturbed Leray system \eqref{mdf_ptb_leray_mhd}. ~\\ {\bf Step 2: A priori estimate of the pressure in the mollified perturbed Leray system} Note that $\nabla\cdot\mathcal{L}V=0$ if $\nabla\cdot V=0$. Therefore, by taking the divergence of $\eqref{mdf_ptb_leray_mhd}_1$, we obtain \EQ{\label{eq_2.33_mhd} -\De p^\ve=&\sum_{i,j=1}^3\partial_i\partial_j\left[(\eta_\ve*U^\ve_i)U^\ve_j+W_iU^\ve_j+U^\ve_i W_j+W_iW_j\right.\\ &~~~~~~~~~~~~~\left.-(\eta_\ve*A^\ve_i)A^\ve_j-D_iA^\ve_j-A^\ve_i D_j-D_iD_j\right]. } Let \EQ{ \tilde{p}^\ve=&\sum_{i,j=1}^3R_iR_j\left[(\eta_\ve*U^\ve_i)U^\ve_j+W_iU^\ve_j+U^\ve_i W_j+W_iW_j\right.\\&~~~~~~~~~~~~~~~\left.-(\eta_\ve*A^\ve_i)A^\ve_j-D_iA^\ve_j-A^\ve_i D_j-D_iD_j\right], } where $R_i$ denote the Riesz transforms. Note that $\tilde{p}^\ve$ also satisfies \eqref{eq_2.33_mhd}. We will show that $p^\ve=\tilde{p}^\ve$ up to an additive constant by proving $\nabla(p^\ve-\tilde{p}^\ve)=0$.
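Here we use the fact that, on the Fourier side, $\widehat{R_jf}(\xi)=-i\frac{\xi_j}{|\xi|}\hat{f}(\xi)$, so that $R_iR_j=\partial_i\partial_j(-\De)^{-1}$ and hence \[ -\De\left(\sum_{i,j=1}^3R_iR_jf_{ij}\right)=\sum_{i,j=1}^3\partial_i\partial_jf_{ij} \] for any sufficiently regular tensor field $(f_{ij})$; applied to the bracket defining $\tilde{p}^\ve$, this identity shows that $\tilde{p}^\ve$ indeed solves \eqref{eq_2.33_mhd}.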
Let $V^\ve(x,t)=(2t)^{-1/2}U^\ve(y,s),\,\pi^\ve(x,t)=(2t)^{-1}p^\ve(y,s)$ and $\mathcal{F}^\ve(x,t)=(\mathcal{F}_1+\mathcal{F}^\ve_2)(x,t)$ where \begin{equation} \mathcal{F}_1(x,t):=-\frac1{(2t)^{3/2}}(\mathcal{L}W)(y,s), \end{equation} \EQ{ \mathcal{F}^\ve_2(x,t):=-\frac1{(2t)^{3/2}}&\left[W\cdot\nabla U^\ve+U^\ve\cdot\nabla W+(\eta_\ve*U^\ve)\cdot\nabla U^\ve+W\cdot\nabla W\right.\\&\left.\,-D\cdot\nabla A^\ve-A^\ve\cdot\nabla D-(\eta_\ve*A^\ve)\cdot\nabla A^\ve-D\cdot\nabla D\right](y,s), } and $y=x/\sqrt{2t}$ and $s=\log(\sqrt{2t})$. Hence, $\mathcal{F}^\ve\in L^\infty(1,\la^2;H^{-1}(\R^3))$, $(V^\ve,\pi^\ve)$ solves the non-stationary Stokes system on $\R^3\times[1,\la^2]$ with force $\mathcal{F}^\ve$, by $\eqref{mdf_ptb_leray_mhd}_1$, and $V^\ve$ is in the energy class. According to the uniqueness of the solution to the forced, non-stationary Stokes system on $\R^3\times[1,\la^2]$, we can conclude that $\nabla\pi^\ve=\nabla\tilde{\pi}^\ve$ where $\tilde{\pi}^\ve=(2t)^{-1}\tilde{p}^\ve$. Therefore $\nabla(p^\ve-\tilde{p}^\ve)=0$. At this stage, we may replace $p^\ve$ by $\tilde{p}^\ve$. Recall that the Riesz transforms $R_i\phi(x)=\lim_{\e\to0^+}\int_{|x-y|>\e}K_i(x-y)\phi$ are Calder\'on-Zygmund operators since $K_i(x)=\frac{x_i}{|x|^{n+1}}$ are Calder\'on-Zygmund kernels. Applying the Calder\'on-Zygmund theory, we get \EQN{ \|p^\ve(s)\|_{L^{5/3}}\le&~C\left\|\left[(\eta_\ve*U^\ve_i)U^\ve_j+W_iU^\ve_j+U^\ve_i W_j+W_iW_j\right.\right.\\&~~~~~\left.\left.-(\eta_\ve*A^\ve_i)A^\ve_j-D_iA^\ve_j-A^\ve_i D_j-D_iD_j\right](s)\right\|_{L^{5/3}}\\\le&~C\left(\|U^\ve(s)\|_{L^{10/3}}^2+\|A^\ve(s)\|_{L^{10/3}}^2+\|W(s)\|_{L^{10/3}}^2+\|D(s)\|_{L^{10/3}}^2\right).
} Hence we obtain the following a priori bound for $p^\ve$: \EQ{\label{eq_2.37_mhd} \|p^\ve\|_{L^{5/3}(\R^3\times[0,T])}\le&~C\left(\|U^\ve\|_{L^{10/3}(\R^3\times[0,T])}^2+\|A^\ve\|_{L^{10/3}(\R^3\times[0,T])}^2\right.\\&~~~~~\left.+\|W\|_{L^{10/3}(\R^3\times[0,T])}^2+\|D\|_{L^{10/3}(\R^3\times[0,T])}^2\right). } Recall that the sequences $\{U^\ve\}_{\ve>0}$ and $\{A^\ve\}_{\ve>0}$ are both bounded in $L^\infty L^2$ and $L^2H^1$ norms. So \EQ{\label{bound_U^ve_mhd} \|U^\ve\|_{L^{10/3}(\R^3\times[0,T])}=\left\|\|U^\ve\|_{L_y^{10/3}}\right\|_{L_s^{10/3}}\le&~\left\|\|U^\ve\|_{L_y^2}^{\frac25}\|U^\ve\|_{L_y^6}^{\frac35}\right\|_{L_s^{10/3}}\\ \le&~\|U^\ve\|_{L^\infty(0,T;L^2(\R^3))}^{\frac25}\left\|\|U^\ve\|_{L_y^6}^{\frac35}\right\|_{L_s^{10/3}}\\ \le&~\|U^\ve\|_{L^\infty(0,T;L^2(\R^3))}^{\frac25}\|U^\ve\|_{L^2(0,T;L^6(\R^3))}^{\frac35}\\ \lesssim&~\|U^\ve\|_{L^\infty(0,T;L^2(\R^3))}^{\frac25}\|U^\ve\|_{L^2(0,T;H^1(\R^3))}^{\frac35}\\ \le&~C, } where $C$ is some constant independent of $\ve$. Similarly, we also obtain \EQ{\label{bound_A^ve_mhd} \|A^\ve\|_{L^{10/3}(\R^3\times[0,T])}\le C. } In addition, because we are applying \thref{lem_2.5} with $q=\frac{10}3$ and $\de=\frac14$, we have $\|W\|_{L^\infty(0,T;L^{10/3}(\R^3))}\le\frac14$ and $\|D\|_{L^\infty(0,T;L^{10/3}(\R^3))}\le\frac14$. Thus, we have the estimates \EQ{\label{bound_WD} \|W\|_{L^{10/3}(\R^3\times[0,T])}\le \frac14\,T^{3/10}\ \ \ \text{ and }\ \ \ \|D\|_{L^{10/3}(\R^3\times[0,T])}\le \frac14\,T^{3/10}. } Using the bounds \eqref{bound_U^ve_mhd}-\eqref{bound_WD}, \eqref{eq_2.37_mhd} implies that $\{p^\ve\}_{\ve>0}$ is a bounded sequence in $L^{5/3}(\R^3\times[0,T])$.
~\\ {\bf Step 3: Convergence to a suitable periodic weak solution to \eqref{leray_mhd}} Since the sequences $\{U^\ve\}_{\ve>0}$ and $\{A^\ve\}_{\ve>0}$ are both bounded in the $L^\infty(0,T;L^2(\R^3))$ and $L^2(0,T;H^1(\R^3))$ norms, there exist $U,A\in L^\infty(0,T;L^2(\R^3))\cap L^2(0,T;H^1_0(\R^3))$ and two sequences $\{U^{\ve_k}\}_{k\in\mathbb{N}},\{A^{\ve_k}\}_{k\in\mathbb{N}}$ such that \EQ{\label{U_A_conv} &U^{\ve_k}\rightharpoonup U,\ A^{\ve_k}\rightharpoonup A\ \ \ \text{ weakly in $L^2(0,T;X)$},\\ &U^{\ve_k}\to U,\ A^{\ve_k}\to A\ \ \ \text{ strongly in $L^2(0,T;L^2(K))$ for all compact sets $K\subset\R^3$},\\ &U^{\ve_k}(s)\rightharpoonup U(s),\ A^{\ve_k}(s)\rightharpoonup A(s)\ \ \ \text{ weakly in $L^2$ for all $s\in[0,T]$}, } as $\ve_k\to0$. On the other hand, since $\{p^{\ve_k}\}_{k\in\mathbb{N}}$ is a bounded sequence in $L^{5/3}(\R^3\times[0,T])$, we have that \EQ{ p^{\ve_k}\rightharpoonup p\ \text{ weakly in $L^{5/3}(\R^3\times[0,T])$,} } for some $p\in L^{5/3}(\R^3\times[0,T])$. Let $u=U+W$ and $a=A+D$. The above convergences are enough to ensure that the triple $(u,a,p)$ solves \eqref{leray_mhd} in the sense of distributions. It remains to check that $(u,a,p)$ satisfies the local energy inequality \eqref{lei_mhd_leray}. Note that $(u^{\ve_k},a^{\ve_k},p^{\ve_k})$, where $u^{\ve_k}=U^{\ve_k}+W$ and $a^{\ve_k}=A^{\ve_k}+D$, satisfies \begin{equation}\label{eq_u^ve_a^ve} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} &\mathcal{L}u^{\ve_k}+W\cdot\nabla u^{\ve_k}+(\eta_{\ve_k}*U^{\ve_k})\cdot\nabla U^{\ve_k}+U^{\ve_k}\cdot\nabla W\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~-D\cdot\nabla a^{\ve_k}-(\eta_{\ve_k}*A^{\ve_k})\cdot\nabla A^{\ve_k}-A^{\ve_k}\cdot\nabla D+\nabla p^{\ve_k}=0,\\ &\mathcal{L}a^{\ve_k}+W\cdot\nabla a^{\ve_k}+(\eta_{\ve_k}*U^{\ve_k})\cdot\nabla A^{\ve_k}+U^{\ve_k}\cdot\nabla D\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-D\cdot\nabla u^{\ve_k}-(\eta_{\ve_k}*A^{\ve_k})\cdot\nabla U^{\ve_k}-A^{\ve_k}\cdot\nabla W=0.
\end{array}\right. \end{equation} Testing $\eqref{eq_u^ve_a^ve}_1$ and $\eqref{eq_u^ve_a^ve}_2$ with $u^{\ve_k}\psi$ and $a^{\ve_k}\psi$, respectively, where $0\le\psi\in C^\infty_0(\R^4)$, and adding them together, we get \EQ{\label{A_51_mhd} &~~~~\int_{\R^4}\left(\frac{|u^{\ve_k}|^2+|a^{\ve_k}|^2}2+|\nabla u^{\ve_k}|^2+|\nabla a^{\ve_k}|^2\right)\psi \,dyds\\&=\int_{\R^4}\frac{|u^{\ve_k}|^2+|a^{\ve_k}|^2}2\left(\partial_s\psi+\De\psi\right)dyds+\int_{\R^4}\frac{|u^{\ve_k}|^2+|a^{\ve_k}|^2}2\,(W-y)\cdot\nabla\psi\,dyds\\ &~~~+\int_{\R^4}\left(\frac{|U^{\ve_k}|^2+2(U^{\ve_k}\cdot W)+|A^{\ve_k}|^2+2(A^{\ve_k}\cdot D)}2\,\left(\eta_{\ve_k}*U^{\ve_k}\right)\right.\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.+\frac{|W|^2+|D|^2}2\,U^{\ve_k}\right)\cdot\nabla\psi\,dyds\\ &~~~+\int_{\R^4}p^{\ve_k} u^{\ve_k}\cdot\nabla\psi\,dyds\\ &~~~-\int_{\R^4}\left((u^{\ve_k}\cdot a^{\ve_k})D+(U^{\ve_k}\cdot A^{\ve_k})(\eta_{\ve_k}*A^{\ve_k})+(U^{\ve_k}\cdot D)A^{\ve_k}+(W\cdot A^{\ve_k})A^{\ve_k}\right.\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.+(W\cdot D)A^{\ve_k}\right)\cdot\nabla\psi\,dyds\\ &~~~+\int_{\R^4}\left((\eta_{\ve_k}*U^{\ve_k})-U^{\ve_k}\right)\cdot(\nabla W\cdot U^{\ve_k}+\nabla D\cdot A^{\ve_k})\psi\,dyds\\ &~~~+\int_{\R^4}((\eta_{\ve_k}*A^{\ve_k})-A^{\ve_k})\cdot(\nabla U^{\ve_k}\cdot D+\nabla A^{\ve_k}\cdot W)\psi\,dyds. } Let $\mathcal{K}$ be a compact subset of $\R^4$. We have \EQN{ \|(\eta_{\ve_k}*U^{\ve_k})-U\|_{L^2(\mathcal{K})}\le&~\|(\eta_{\ve_k}*U^{\ve_k})-(\eta_{\ve_k} *U)\|_{L^2(\mathcal{K})}+\|(\eta_{\ve_k}*U)-U\|_{L^2(\mathcal{K})}\\\le&~\|U^{\ve_k}-U\|_{L^2(\mathcal{K})}+\|\eta_{\ve_k}*U-U\|_{L^2(\mathcal{K})}. } Since $\|(\eta_{\ve_k}*U)(s)-U(s)\|_{L_y^2}\le \|(\eta_{\ve_k}*U)(s)\|_{L_y^2}+\|U(s)\|_{L_y^2}\le2\|U(s)\|_{L_y^2}\in L_s^2(I)$ for every compact interval $I$, the dominated convergence theorem implies that $\|(\eta_{\ve_k}*U)-U\|_{L^2(\mathcal{K})}\to0$ as ${\ve_k}\to0$.
Together with the fact that $U^{\ve_k}\to U$ in $L^2(0,T;L^2(K))$ for all compact sets $K\subset\R^3$, we conclude that \EQ{\label{eta_U_conv_mhd} \|(\eta_{\ve_k}*U^{\ve_k})-U\|_{L^2(\mathcal{K})}\to0\ \text{ as }\ \ve_k\to0\ \ \ \text{ for all compact }\mathcal{K}\subset\R^4. } Similarly, we have \EQ{\label{eta_A_conv_mhd} \|(\eta_{\ve_k}*A^{\ve_k})-A\|_{L^2(\mathcal{K})}\to0\ \text{ as }\ \ve_k\to0\ \ \ \text{ for all compact }\mathcal{K}\subset\R^4. } In addition, the sequence $\{u^\ve\}_{\ve>0}$ is bounded in $L^{10/3}(\R^3\times[0,T])$ since it is bounded in $L^2(0,T;L^6(\R^3))$ and $L^\infty(0,T;L^2(\R^3))$. According to the well-known fact mentioned in the Appendix of \cite{MR673830}, \EQ{\label{u_strong_5/2_mhd} u^{\ve_k}\to u\ \ \ \text{ strongly in }L^{5/2}(\mathcal{K})\ \text{ as }\ve_k\to0. } Combining \eqref{eta_U_conv_mhd}-\eqref{u_strong_5/2_mhd} and the convergences in \eqref{U_A_conv} with the facts that $W,\,D$ are locally differentiable and that the support of $\psi$ is compact, each term on the right-hand side of \eqref{A_51_mhd} converges to the corresponding term involving $u,\,U,\,a,\,A$ and $p$. On the other hand, $\int|\nabla u^{\ve_k}|^2\psi\,dyds$ and $\int|\nabla a^{\ve_k}|^2\psi\,dyds$ are lower semicontinuous as $\ve_k\to0$. This proves \eqref{lei_mhd_leray} and completes the proof of \thref{thm_2.4_mhd}. \end{proof} \subsection{The time-periodic Leray system for the viscoelastic Navier-Stokes equations with damping} In this subsection, we follow the same approach as in Sect.
2.1 to construct a periodic weak solution to the Leray system for the viscoelastic Navier-Stokes equations with damping \begin{equation}\label{leray_vNSEd} \begin{split} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left.\begin{array}{ll} \partial_su-\De u-u-y\cdot\nabla u+(u\cdot\nabla)u-\underset{n=1}{\overset{3}\sum}(g_n\cdot\nabla)g_n+\nabla p&=0\ \\ \partial_sg_m-\De g_m-g_m-y\cdot\nabla g_m+(u\cdot\nabla)g_m-(g_m\cdot\nabla)u&=0\ \\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u =\nabla\cdot g_m&=0 \ \end{array}\right\}\text{ in }\R^3\times\R,\ m=1,2,3,\\ \lim_{|y_0|\to\infty}\int_{B_1(y_0)}\left(|u(y,s)-U_0(y,s)|^2+\underset{n=1}{\overset{3}\sum}|g_n(y,s)-G_{n,0}(y,s)|^2\right)dy=0\ \text{ for all }s\in\R,\\ u(\cdot,s)=u(\cdot,s+T),\ g_m(\cdot,s)=g_m(\cdot,s+T)\ \text{ in }\R^3\text{ for all }s\in\R,\ m=1,2,3, \end{split} \end{equation} for given $T$-periodic divergence-free vector fields $U_0$ and $G_{m,0}$, $m=1,2,3$. Periodic weak solutions and suitable periodic weak solutions of \eqref{leray_vNSEd} are defined as follows. \begin{defn}[Periodic weak solution of Leray system for the viscoelastic Navier-Stokes equations with damping] Let $U_0$ and $G_{m,0}$, $m=1,2,3$, satisfy \thref{assum_2.1}. A $4$-tuple of vector fields $(u,g_1,g_2,g_3)$ is a periodic weak solution to \eqref{leray_vNSEd} if for $m=1,2,3$ we have $\nabla\cdot u=\nabla\cdot g_m=0$, \[u-U_0,\,g_m-G_{m,0}\in L^\infty(0,T;L^2(\R^3))\cap L^2(0,T;H^1(\R^3)),\] and \begin{equation} \int_0^T\left[(u,\partial_s\varphi)-(\nabla u,\nabla \varphi)+\left(u+y\cdot\nabla u-u\cdot\nabla u+\underset{n=1}{\overset{3}\sum}(g_n\cdot\nabla)g_n,\varphi\right)\right]ds=0, \end{equation} \begin{equation} \int_0^T\left[(g_m,\partial_s\varphi)-(\nabla g_m,\nabla \varphi)+(g_m+y\cdot\nabla g_m-u\cdot\nabla g_m+g_m\cdot\nabla u,\varphi)\right]ds=0, \end{equation} holds for all $\varphi\in\mathcal{D}_T$.
\end{defn} \begin{defn}[Suitable periodic weak solution of Leray system for the viscoelastic Navier-Stokes equations with damping] Let $U_0$ and $G_{m,0}$, $m=1,2,3$, satisfy \thref{assum_2.1}. A $5$-tuple $(u,g_1,g_2, g_3,p)$ is a suitable periodic weak solution to \eqref{leray_vNSEd} if $u,g_1,g_2,g_3,p$ are periodic in $s$ with period $T$, $(u,g_1,g_2,g_3)$ is a periodic weak solution to \eqref{leray_vNSEd}, $p\in L^{3/2}_{\textup{loc}}(\R^4)$, $(u,g_1,g_2,g_3,p)$ solves \eqref{leray_vNSEd} in the sense of distributions, and the local energy inequality holds: \EQ{\label{lei_vNSEd_leray} \int_{\R^4}\left(\frac{|u|^2+|{\bf G}|^2}2+|\nabla u|^2+|\nabla{\bf G}|^2\right)\psi \,dyds\le&\int_{\R^4}\frac{|u|^2+|{\bf G}|^2}2\left(\partial_s\psi+\De\psi\right)dyds\\&+\int_{\R^4}\left(\frac{|u|^2+|{\bf G}|^2}2(u-y)+pu\right)\cdot\nabla\psi\,dyds\\&-\underset{n=1}{\overset{3}\sum}\int_{\R^4}(u\cdot g_n)g_n\cdot\nabla\psi\,dyds, } where ${\bf G}=(g_1,g_2,g_3)\in\R^{3\times3}$, for all nonnegative $\psi\in C^\infty_0(\R^4)$. \end{defn} The main result of this subsection can be stated as follows: \begin{thm}[Existence of suitable periodic weak solutions to \eqref{leray_vNSEd}]\thlabel{thm_2.4_vNSEd} Assume $U_0(y,s)$ and $G_{m,0}(y,s),\,m=1,2,3,$ all satisfy \thref{assum_2.1} with $q=10/3$. Then \eqref{leray_vNSEd} has a suitable periodic weak solution $(u,g_1,g_2,g_3,p)$ in $\R^4$ with period $T$. \end{thm} \begin{proof} The proof follows from the same argument in that of \thref{thm_2.4_mhd}. Let $Z\in C^\infty(\R^3)$ with $0\le Z\le 1,\,Z(x)=1$ for $|x|>2$ and $Z(x)=0$ for $|x|<1$.
Applying \thref{lem_2.5} with $\de=\frac18$, we are able to choose $R_0=R_0(U_0,G_{1,0},G_{2,0},G_{3,0})\ge1$ such that letting $\xi(y)=Z\left(\frac{y}{R_0}\right)$ and setting \begin{equation} W(y,s)=\xi(y)U_0(y,s)+w(y,s) \end{equation} and \begin{equation} E_m(y,s)=\xi(y)G_{m,0}(y,s)+e_m(y,s),\ m=1,2,3, \end{equation} where \begin{equation} w(y,s)=\int_{\R^3}\nabla_y\,\frac1{4\pi|y-z|}\,\nabla_z\xi(z)\cdot U_0(z,s)dz \end{equation} and \begin{equation} e_m(y,s)=\int_{\R^3}\nabla_y\,\frac1{4\pi|y-z|}\,\nabla_z\xi(z)\cdot G_{m,0}(z,s)dz,\ m=1,2,3, \end{equation} $W$ and $E_m,\,m=1,2,3,$ all satisfy the conclusion of \thref{lem_2.5}. The Leray system \eqref{leray_vNSEd} can be written as \begin{equation} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} \mathcal{L}u+(u\cdot\nabla)u-\underset{n=1}{\overset{3}\sum}(g_n\cdot\nabla)g_n+\nabla p&=0\\ \mathcal{L}g_m+(u\cdot\nabla)g_m-(g_m\cdot\nabla)u&=0,\ m=1,2,3,\\ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot u=\nabla\cdot g_m&=0,\ m=1,2,3, \end{array}\right.\end{equation} where $\mathcal{L}$ is given in \eqref{diff_op_L}. We look for a solution of the form $u=U+W$ and $g_m=G_m+E_m,\,m=1,2,3$. It follows that $(U,G_1,G_2,G_3)$ satisfies the perturbed Leray system for the viscoelastic Navier-Stokes equations with damping \begin{equation}\label{ptb_leray_vNSEd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} &\mathcal{L}U+(W+U)\cdot\nabla U+U\cdot\nabla W\\ &~~~~~~~~~~~~~-\underset{n=1}{\overset{3}\sum}(E_n+G_n)\cdot\nabla G_n-\underset{n=1}{\overset{3}\sum}G_n\cdot\nabla E_n+\nabla p=-\mathcal{R}_3(W,E_1,E_2,E_3),\\ &\mathcal{L}G_m+(W+U)\cdot\nabla G_m+U\cdot\nabla E_m\\ &\,~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-(E_m+G_m)\cdot\nabla U-G_m\cdot\nabla W=-\mathcal{R}_4(W,E_m),\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot U=\nabla\cdot G_m=0, \end{array}\right.
\end{equation} for $m=1,2,3,$ where \EQ{\label{eq_R3_R4} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{rl} \mathcal{R}_3(W,E_1,E_2,E_3)&:=\mathcal{L}W+W\cdot\nabla W-\underset{n=1}{\overset{3}\sum}E_n\cdot\nabla E_n\\ \mathcal{R}_4(W,E_m)&:=\mathcal{L}E_m+W\cdot\nabla E_m-E_m\cdot\nabla W,\ m=1,2,3. \end{array}\right. } We first solve the following mollified perturbed Leray system for the viscoelastic Navier-Stokes equations with damping for $(U^\ve,G_1^\ve,G_2^\ve,G_3^\ve,p^\ve)$ in $\R^3\times[0,T]$: \begin{equation}\label{mdf_ptb_leray_vNSEd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} &\mathcal{L}U^\ve+(W+(\eta_\ve*U^\ve))\cdot\nabla U^\ve+U^\ve\cdot\nabla W\\ &~~~~~~-\underset{n=1}{\overset{3}\sum}(E_n+(\eta_\ve*G_n^\ve))\cdot\nabla G_n^\ve-\underset{n=1}{\overset{3}\sum}G_n^\ve\cdot\nabla E_n+\nabla p^\ve=-\mathcal{R}_3(W,E_1,E_2,E_3),\\ &\mathcal{L}G_m^\ve+(W+(\eta_\ve*U^\ve))\cdot\nabla G_m^\ve+U^\ve\cdot\nabla E_m\\ &\,~~~~~~~~~~~~~~~~~~~~~-(E_m+(\eta_\ve*G_m^\ve))\cdot\nabla U^\ve-G_m^\ve\cdot\nabla W=-\mathcal{R}_4(W,E_m),\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nabla\cdot U^\ve=\nabla\cdot G_m^\ve=0, \end{array}\right. \end{equation} for $m=1,2,3$, where $\eta_\ve(y)=\ve^{-3}\eta(y/\ve)$ for some fixed function $\eta\in C^\infty_0(\R^3)$ satisfying $\int_{\R^3}\eta dy=1$.
It has the following weak formulation: \begin{equation} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{lll} \frac{d}{ds}(U^\ve,f)&=&-(\nabla U^\ve,\nabla f)+(U^\ve+y\cdot\nabla U^\ve,f)\\ &&-\left((\eta_\ve*U^\ve)\cdot\nabla U^\ve-\underset{n=1}{\overset{3}\sum}(\eta_\ve*G_n^\ve)\cdot\nabla G_n^\ve,f\right)\\ &&-\left(W\cdot\nabla U^\ve+U^\ve\cdot\nabla W-\underset{n=1}{\overset{3}\sum}E_n\cdot\nabla G_n^\ve-\underset{n=1}{\overset{3}\sum}G_n^\ve\cdot\nabla E_n,f\right)\\ &&-\left<\mathcal{R}_3(W,E_1,E_2,E_3),f\right>\\ \frac{d}{ds}(G_m^\ve,f)&=&-(\nabla G_m^\ve,\nabla f)+(G_m^\ve+y\cdot\nabla G_m^\ve,f)\\ &&-\left((\eta_\ve*U^\ve)\cdot\nabla G_m^\ve-(\eta_\ve*G_m^\ve)\cdot\nabla U^\ve,f\right)\\ &&-(W\cdot\nabla G_m^\ve+U^\ve\cdot\nabla E_m-E_m\cdot\nabla U^\ve-G_m^\ve\cdot\nabla W,f)\\ &&-\left<\mathcal{R}_4(W,E_m),f\right>,\ m=1,2,3, \end{array}\right. \end{equation} for all $f\in\mathcal{V}$ and a.e. $s\in(0,T)$. ~\\ {\bf Step 1: Construction of a solution to the mollified perturbed Leray system} We use the Galerkin method to construct a solution of \eqref{mdf_ptb_leray_vNSEd}. Let $\{h_k\}_{k\in\mathbb{N}}\subset\mathcal{V}$ be an orthonormal basis of $H$. For a fixed $k\in\mathbb{N}$, we look for an approximate solution of the form $U^\ve_k(y,s)=\sum_{i=1}^k\mu^\ve_{ki}(s)h_i(y),\,(G^\ve_m)_k(y,s)=\sum_{i=1}^k(\ga^\ve_m)_{ki}(s)h_i(y),\,m=1,2,3$.
First, we prove the existence and derive an a priori bound for $T$-periodic solutions $\mu^\ve_k=(\mu^\ve_{k1},\cdots,\mu^\ve_{kk}),\,(\ga^\ve_m)_k=((\ga^\ve_m)_{k1},\cdots,(\ga^\ve_m)_{kk}),\,m=1,2,3,$ to the system of ODEs \begin{equation}\label{galerkin_ode_vNSEd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} \frac{d}{ds}\mu^\ve_{kj}&=\underset{i=1}{\overset{k}\sum}\mathscr{A}_{ij}\mu^\ve_{ki}+\underset{i=1}{\overset{k}\sum}\,\underset{n=1}{\overset{3}\sum}\tilde{\mathscr{B}}_{ijn}(\ga^\ve_n)_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{C}^\ve_{ilj}\mu^\ve_{ki}\mu^\ve_{kl}-\underset{i,l=1}{\overset{k}\sum}\mathscr{C}^\ve_{ilj}\underset{n=1}{\overset{3}\sum}(\ga^\ve_n)_{ki}(\ga^\ve_n)_{kl}+\tilde{\mathscr{D}}_j\\ \frac{d}{ds}(\ga^\ve_m)_{kj}&=\underset{i=1}{\overset{k}\sum}\tilde{\mathscr{E}}_{ijm}\mu^\ve_{ki}+\underset{i=1}{\overset{k}\sum}\mathscr{F}_{ij}(\ga^\ve_m)_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{G}^\ve_{ilj}\mu^\ve_{ki}(\ga^\ve_m)_{kl}+\tilde{\mathscr{H}}_{jm}, \end{array}\right. \end{equation} for $j=1,\cdots,k$, where $\mathscr{A}_{ij},\,\mathscr{C}^\ve_{ilj},\,\mathscr{F}_{ij}$ and $\mathscr{G}^\ve_{ilj}$ are the same as those in \eqref{galerkin_ode_coeff_mhd}, and \EQ{ \tilde{\mathscr{B}}_{ijn}&=(h_i\cdot\nabla E_n,h_j)+(E_n\cdot\nabla h_i,h_j),\\ \tilde{\mathscr{D}}_j&=-\left<\mathcal{R}_3(W,E_1,E_2,E_3),h_j\right>,\\ \tilde{\mathscr{E}}_{ijm}&=-(h_i\cdot\nabla E_m,h_j)+(E_m\cdot\nabla h_i,h_j),\\ \tilde{\mathscr{H}}_{jm}&=-\left<\mathcal{R}_4(W,E_m),h_j\right>. } Fix any $k\in\mathbb{N}$. For any $U^0,G_m^0\in\textup{span}(h_1,\cdots,h_k),\,m=1,2,3$, there exist $\mu^\ve_{kj},(\ga^\ve_m)_{kj}\in H^1(0,\tilde{T}),\,j=1,\cdots,k$, that uniquely solve \eqref{galerkin_ode_vNSEd} with initial data $\mu^\ve_{kj}(0)=(U^0,h_j),\,(\ga^\ve_m)_{kj}(0)=(G_m^0,h_j)$, $j=1,\cdots,k$, for some $0<\tilde{T}\le T$. We prove that $\tilde{T}=T$.
Indeed, multiplying the $j$-th equation of $\eqref{galerkin_ode_vNSEd}_1$ by $\mu^\ve_{kj}$, multiplying the $j$-th equation of $\eqref{galerkin_ode_vNSEd}_2$ by $(\ga^\ve_m)_{kj}$, and summing over all $j=1,\cdots,k$ and $m=1,2,3$ yields \EQ{\label{eq_2.27_vNSEd} &~~~~\frac12\,\frac{d}{ds}\left(\|U^\ve_k\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k\|_{L^2}^2\right)+\frac12\left(\|U^\ve_k\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k\|_{L^2}^2\right)\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\left(\|\nabla U^\ve_k\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|\nabla (G^\ve_n)_k\|_{L^2}^2\right)\\ &=-\left(U^\ve_k\cdot\nabla W-\underset{n=1}{\overset{3}\sum}(E_n\cdot\nabla (G^\ve_n)_k+(G^\ve_n)_k\cdot\nabla E_n),U^\ve_k\right)\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-\underset{n=1}{\overset{3}\sum}((U^\ve_k\cdot\nabla E_n-E_n\cdot\nabla U^\ve_k)-(G^\ve_n)_k\cdot\nabla W,(G^\ve_n)_k)\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-\left<\mathcal{R}_3(W,E_1,E_2,E_3),U^\ve_k\right>-\underset{n=1}{\overset{3}\sum}\left<\mathcal{R}_4(W,E_n),(G^\ve_n)_k\right>, } thanks to the vanishing of $((\eta_\ve*U^\ve)\cdot\nabla U^\ve,U^\ve),\,(W\cdot\nabla U^\ve,U^\ve),\,((\eta_\ve*U^\ve)\cdot\nabla G^\ve_m,G^\ve_m)$ and $(W\cdot\nabla G^\ve_m,G^\ve_m)$, and the cancellation of $\sum_{n=1}^3((\eta_\ve*G^\ve_n)\cdot\nabla G^\ve_n,U^\ve)$ and $\sum_{n=1}^3((\eta_\ve*G^\ve_n)\cdot\nabla U^\ve,G^\ve_n)$.
Using \thref{lem_2.5} with $\de=\frac18$, we get \EQ{\label{etm_2.28_vNSEd} &~~~\left|-\left(U^\ve_k\cdot\nabla W-\underset{n=1}{\overset{3}\sum}(E_n\cdot\nabla (G^\ve_n)_k+(G^\ve_n)_k\cdot\nabla E_n),U^\ve_k\right)\right.\\ &~~~~\left.-\underset{n=1}{\overset{3}\sum}((U^\ve_k\cdot\nabla E_n-E_n\cdot\nabla U^\ve_k)-(G^\ve_n)_k\cdot\nabla W,(G^\ve_n)_k)\right|\\ &\le\frac1{16}\left(7\,\|U^\ve_k\|_{H^1}^2+3\,\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k\|_{H^1}^2\right), } and \EQ{\label{etm_2.29_vNSEd} &~~~\left|-\left<\mathcal{R}_3(W,E_1,E_2,E_3),U^\ve_k\right>-\underset{n=1}{\overset{3}\sum}\left<\mathcal{R}_4(W,E_n),(G^\ve_n)_k\right>\right|\\ &\le C_2+\frac1{128}\left(5\,\|U^\ve_k\|_{H^1}^2+3\,\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k\|_{H^1}^2\right), } where $C_2=32\left(\|\mathcal{L}W\|_{H^{-1}}^2+\sum_{n=1}^3\|\mathcal{L}E_n\|_{H^{-1}}^2+(\|W\|_{L^4}^2+\sum_{n=1}^3\|E_n\|_{L^4}^2)^2\right)$ is independent of $s,T,k$ and $\varepsilon$. Using the estimates \eqref{etm_2.28_vNSEd} and \eqref{etm_2.29_vNSEd}, we obtain from \eqref{eq_2.27_vNSEd} the differential inequality \EQ{\label{eq_2.30_vNSEd} &\frac{d}{ds}\left(\|U^\ve_k\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k\|_{L^2}^2\right)+\frac1{64}\left(\|U^\ve_k\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k\|_{L^2}^2\right)\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\frac1{64}\left(\|\nabla U^\ve_k\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|\nabla(G^\ve_n)_k\|_{L^2}^2\right)\le C_2. } The Gronwall inequality implies that \EQ{\label{eq_2.31_vNSEd} e^{s/{64}}\left(\|U^\ve_k\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k\|_{L^2}^2\right)\le&~\left(\|U^0\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|G_n^0\|_{L^2}^2\right)+\int_0^{\tilde{T}}e^{\tau/{64}}C_2d\tau\\ \le&~\left(\|U^0\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|G_n^0\|_{L^2}^2\right)+e^{T/{64}}C_2T } for all $s\in[0,\tilde{T}]$. 
Since the right-hand side is finite, $\tilde{T}$ is not a blow-up time and we conclude that $\tilde{T}=T$. Choosing $\rho=\left(\frac{C_2T}{1-e^{-T/{64}}}\right)^{1/2}>0$ (independent of $k$), \eqref{eq_2.31_vNSEd} implies that \[\left(\|U^\ve_k\|_{L^2}^2+\sum_{n=1}^3\|(G^\ve_n)_k\|_{L^2}^2\right)^{1/2}\le\rho\] if $\left(\|U^0\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|G_n^0\|_{L^2}^2\right)^{1/2}\le\rho$. Define $\mathcal{T}:B_\rho^{4k}\to B_\rho^{4k}$ by \[\mathcal{T}(\mu^\ve_k(0),(\ga^\ve_1)_k(0),(\ga^\ve_2)_k(0),(\ga^\ve_3)_k(0))=(\mu^\ve_k(T),(\ga^\ve_1)_k(T),(\ga^\ve_2)_k(T),(\ga^\ve_3)_k(T)),\] where $B_\rho^{4k}$ is the closed ball in $\R^{4k}$ of radius $\rho$ and centered at the origin. According to the continuous dependence on initial conditions of the solution of ODEs, the map $\mathcal{T}$ is continuous. Thus, we can find a fixed point of $\mathcal{T}$ by the Brouwer fixed point theorem. That is, there exists $(\mu^\ve_k(0),(\ga^\ve_1)_k(0),(\ga^\ve_2)_k(0),(\ga^\ve_3)_k(0))\in B_\rho^{4k}$ such that $(\mu^\ve_k(0),(\ga^\ve_1)_k(0),(\ga^\ve_2)_k(0),(\ga^\ve_3)_k(0))=(\mu^\ve_k(T),(\ga^\ve_1)_k(T),(\ga^\ve_2)_k(T),(\ga^\ve_3)_k(T))$. Let $U^0=\sum_{i=1}^k\mu^\ve_{ki}(0)h_i$ and $G_m^0=\sum_{i=1}^k(\ga^\ve_m)_{ki}(0)h_i$. Then $U^0,G_m^0\in\textup{span}(h_1,\cdots,h_k)$ and $U^0=U^\ve_k(T),G_m^0=(G^\ve_m)_k(T)$. We have $\left(\|U^\ve_k(s)\|_{L^2}^2+\underset{n=1}{\overset{3}\sum}\|(G^\ve_n)_k(s)\|_{L^2}^2\right)^\frac12\le\rho$ for all $s\in[0,T]$ by the choice of $U^0$ and $G_m^0$. Hence \begin{equation} \left(\|U^\ve_k\|_{L^\infty(0,T;L^2(\R^3))}^2+\sum_{n=1}^3\|(G^\ve_n)_k\|_{L^\infty(0,T;L^2(\R^3))}^2\right)^\frac12\le\rho. \end{equation} Moreover, by integrating \eqref{eq_2.30_vNSEd} in $s\in[0,T]$ and using $U^\ve_k(0)=U^\ve_k(T),\,(G^\ve_m)_k(0)=(G^\ve_m)_k(T)$, we get \begin{equation} \frac1{64}\int_0^T\left(\|U^\ve_k(s)\|_{H^1}^2+\sum_{n=1}^3\|(G^\ve_n)_k(s)\|_{H^1}^2\right)ds\le C_2T.
\end{equation} Therefore, \EQ{\label{eq_2.26_vNSEd} &\|U^\ve_k\|_{L^\infty(0,T;L^2(\R^3))}+\sum_{n=1}^3\|(G^\ve_n)_k\|_{L^\infty(0,T;L^2(\R^3))}\\&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+\|U^\ve_k\|_{L^2(0,T;H^1(\R^3))}+\sum_{n=1}^3\|(G^\ve_n)_k\|_{L^2(0,T;H^1(\R^3))}\le C, } where $C=\sqrt{8(\rho^2+64C_2T)}$ is independent of both $\ve$ and $k$. Since the sequences $\{U^\ve_k\}_{k\in\mathbb{N}}$ and $\{(G^\ve_m)_k\}_{k\in\mathbb{N}}$ are uniformly bounded, a standard limiting process shows that, for all $\ve>0$, we have, up to some subsequences, that \EQ{ &U^\ve_k\rightharpoonup U^\ve,\ (G^\ve_m)_k\rightharpoonup G^\ve_m\ \text{ weakly in $L^2(0,T;X)$},\\ &U^\ve_k\to U^\ve,\ (G^\ve_m)_k\to G^\ve_m\ \text{ strongly in $L^2(0,T;L^2(K))$ for all compact sets $K\subset\R^3$},\\ &U^\ve_k(s)\rightharpoonup U^\ve(s),\ (G^\ve_m)_k(s)\rightharpoonup G^\ve_m(s)\ \text{ weakly in $L^2$ for all $s\in[0,T]$} } as $k\to\infty$, for some $U^\ve,G^\ve_m\in L^2(0,T;H^1_0(\R^3)),\,m=1,2,3,$ (all have $\ve$-independent $L^\infty L^2$ and $L^2H^1$ bounds). The weak convergence ensures that $U^\ve(0)=U^\ve(T)$ and $G^\ve_m(0)=G^\ve_m(T)$. Furthermore, the $4$-tuple $(U^\ve,G^\ve_1,G^\ve_2,G^\ve_3)$ is a periodic weak solution of the mollified perturbed Leray system \eqref{mdf_ptb_leray_vNSEd}. ~\\ {\bf Step 2: A priori estimate of the pressure in the mollified perturbed Leray system} By taking the divergence of $\eqref{mdf_ptb_leray_vNSEd}_1$, we obtain \EQ{\label{eq_2.33_vNSEd} -\De p^\ve=&\sum_{i,j=1}^3\partial_i\partial_j\,\Bigg[\,(\eta_\ve*U^\ve_i)U^\ve_j+W_iU^\ve_j+U^\ve_i W_j+W_iW_j\\ &~~~~~~~~~~~~~~~\left.-\sum_{n=1}^3\left((\eta_\ve*(G^\ve_n)_i)(G^\ve_n)_j+(E_n)_i(G^\ve_n)_j+(G^\ve_n)_i(E_n)_j+(E_n)_i(E_n)_j\right)\right]. 
} Let \EQ{ \tilde{p}^\ve=&\sum_{i,j=1}^3R_iR_j\,\Bigg[\,(\eta_\ve*U^\ve_i)U^\ve_j+W_iU^\ve_j+U^\ve_i W_j+W_iW_j\\ &~~~~~~~~~~~~~~~~\left.-\sum_{n=1}^3\left((\eta_\ve*(G^\ve_n)_i)(G^\ve_n)_j+(E_n)_i(G^\ve_n)_j+(G^\ve_n)_i(E_n)_j+(E_n)_i(E_n)_j\right)\right], } where $R_i$ denote the Riesz transforms. Note that $\tilde{p}^\ve$ also satisfies \eqref{eq_2.33_vNSEd}. We will prove that $\nabla(p^\ve-\tilde{p}^\ve)=0$, so that $p^\ve=\tilde{p}^\ve$ up to an additive constant. Let $V^\ve(x,t)=(2t)^{-1/2}U^\ve(y,s),\,\pi^\ve(x,t)=(2t)^{-1}p^\ve(y,s)$ and $\tilde{\mathcal{F}}^\ve(x,t)=(\tilde{\mathcal{F}}_1+\tilde{\mathcal{F}}^\ve_2)(x,t)$ where \begin{equation} \tilde{\mathcal{F}}_1(x,t):=-\frac1{(2t)^{3/2}}(\mathcal{L}W)(y,s), \end{equation} \EQ{ \tilde{\mathcal{F}}^\ve_2(x,t):=-\frac1{(2t)^{3/2}}&\, \Bigg[\, W\cdot\nabla U^\ve+U^\ve\cdot\nabla W+(\eta_\ve*U^\ve)\cdot\nabla U^\ve+W\cdot\nabla W\\ &~~\left.-\sum_{n=1}^3\left(E_n\cdot\nabla G^\ve_n-G^\ve_n\cdot\nabla E_n-(\eta_\ve*G^\ve_n)\cdot\nabla G^\ve_n-E_n\cdot\nabla E_n\right)\right](y,s), } and $y=x/\sqrt{2t}$ and $s=\log(\sqrt{2t})$. Hence, $\tilde{\mathcal{F}}^\ve\in L^\infty(1,\la^2;H^{-1}(\R^3))$, $(V^\ve,\pi^\ve)$ solves the non-stationary Stokes system on $\R^3\times[1,\la^2]$ with force $\tilde{\mathcal{F}}^\ve$ by $\eqref{mdf_ptb_leray_vNSEd}_1$, and $V^\ve$ is in the energy class. In view of the uniqueness of the solution to the forced, non-stationary Stokes system on $\R^3\times[1,\la^2]$, we can conclude that $\nabla\pi^\ve=\nabla\tilde{\pi}^\ve$ where $\tilde{\pi}^\ve=(2t)^{-1}\tilde{p}^\ve$. Therefore $\nabla(p^\ve-\tilde{p}^\ve)=0$. At this point, we may replace $p^\ve$ by $\tilde{p}^\ve$. 
As before, the Calder\'on-Zygmund theory gives \EQN{ \|p^\ve(s)\|_{L^{5/3}}\le&~C\, \Bigg\|\,\Big[ (\eta_\ve*U^\ve_i)U^\ve_j+W_iU^\ve_j+U^\ve_i W_j+W_iW_j\\ &~~~~~-\left.\sum_{n=1}^3\left((\eta_\ve*(G^\ve_n)_i)(G^\ve_n)_j+(E_n)_i(G^\ve_n)_j+(G^\ve_n)_i(E_n)_j+(E_n)_i(E_n)_j\right)\Big](s)\right\|_{L^{5/3}} \\\le&~C\left(\|U^\ve(s)\|_{L^{10/3}}^2+\sum_{n=1}^3\|G^\ve_n(s)\|_{L^{10/3}}^2+\|W(s)\|_{L^{10/3}}^2+\sum_{n=1}^3\|E_n(s)\|_{L^{10/3}}^2\right). } So we get the following a priori bound for $p^\ve$: \EQ{\label{eq_2.37_vNSEd} \|p^\ve\|_{L^{5/3}(\R^3\times[0,T])}\le&~C\left(\|U^\ve\|_{L^{10/3}(\R^3\times[0,T])}^2+\sum_{n=1}^3\|G^\ve_n\|_{L^{10/3}(\R^3\times[0,T])}^2\right.\\ &~~~~~~+\left.\|W\|_{L^{10/3}(\R^3\times[0,T])}^2+\sum_{n=1}^3\|E_n\|_{L^{10/3}(\R^3\times[0,T])}^2\right). } Since the sequences $\{U^\ve\}_{\ve>0}$ and $\{G^\ve_m\}_{\ve>0},\,m=1,2,3,$ are both bounded in $L^\infty L^2$ and $L^2H^1$ norms, \EQ{\label{bound_U^ve_vNSEd} \|U^\ve\|_{L^{10/3}(\R^3\times[0,T])}=\left\|\|U^\ve\|_{L_y^{10/3}}\right\|_{L_s^{10/3}}\le&~\left\|\|U^\ve\|_{L_y^2}^{\frac25}\|U^\ve\|_{L_y^6}^{\frac35}\right\|_{L_s^{10/3}}\\ \le&~\|U^\ve\|_{L^\infty(0,T;L^2(\R^3))}^{\frac25}\left\|\|U^\ve\|_{L_y^6}^{\frac35}\right\|_{L_s^{10/3}}\\ \le&~\|U^\ve\|_{L^\infty(0,T;L^2(\R^3))}^{\frac25}\|U^\ve\|_{L^2(0,T;L^6(\R^3))}^{\frac35}\\ \lesssim&~\|U^\ve\|_{L^\infty(0,T;L^2(\R^3))}^{\frac25}\|U^\ve\|_{L^2(0,T;H^1(\R^3))}^{\frac35}\\ \le&~C, } where $C$ is some constant independent of $\ve$. Similarly, we have \EQ{\label{bound_A^ve_vNSEd} \|G^\ve_m\|_{L^{10/3}(\R^3\times[0,T])}\le C,\ m=1,2,3. } Moreover, since we are applying \thref{lem_2.5} with $q=\frac{10}3$ and $\de=\frac18$, $\|W\|_{L^\infty(0,T;L^{10/3}(\R^3))}\le\frac18$ and $\|E_m\|_{L^\infty(0,T;L^{10/3}(\R^3))}\le\frac18$. Thus, we have the estimates \EQ{\label{bound_WE} \|W\|_{L^{10/3}(\R^3\times[0,T])}\le \frac18\,T^{3/10}\ \ \ \text{ and }\ \ \ \|E_m\|_{L^{10/3}(\R^3\times[0,T])}\le \frac18\,T^{3/10},\ m=1,2,3. 
} Using the bounds \eqref{bound_U^ve_vNSEd}-\eqref{bound_WE}, \eqref{eq_2.37_vNSEd} implies that $\{p^\ve\}_{\ve>0}$ is a bounded sequence in $L^{5/3}(\R^3\times[0,T])$. ~\\ {\bf Step 3: Convergence to a suitable periodic weak solution to \eqref{leray_vNSEd}} On one hand, since the sequences $\{U^\ve\}_{\ve>0}$ and $\{G^\ve_m\}_{\ve>0},\,m=1,2,3,$ are all bounded in the $L^\infty(0,T;L^2(\R^3))$ and $L^2(0,T;H^1(\R^3))$ norms, there exist $U,G_m\in L^\infty(0,T;L^2(\R^3))\cap L^2(0,T;H^1_0(\R^3)),\,m=1,2,3,$ and sequences $\{U^{\ve_k}\}_{k\in\mathbb{N}},\{G^{\ve_k}_m\}_{k\in\mathbb{N}},\,m=1,2,3,$ such that for $m=1,2,3,$ \EQ{\label{U_G_conv} &U^{\ve_k}\rightharpoonup U,\ G^{\ve_k}_m\rightharpoonup G_m\ \ \ \text{ weakly in $L^2(0,T;X)$},\\ &U^{\ve_k}\to U,\ G^{\ve_k}_m\to G_m\ \ \ \text{ strongly in $L^2(0,T;L^2(K))$ for all compact sets $K\subset\R^3$},\\ &U^{\ve_k}(s)\rightharpoonup U(s),\ G^{\ve_k}_m(s)\rightharpoonup G_m(s)\ \ \ \text{ weakly in $L^2$ for all $s\in[0,T]$}, } as $\ve_k\to0$. On the other hand, since $\{p^{\ve_k}\}_{k\in\mathbb{N}}$ is a bounded sequence in $L^{5/3}(\R^3\times[0,T])$, we have that \EQ{ p^{\ve_k}\rightharpoonup p\ \text{ weakly in $L^{5/3}(\R^3\times[0,T])$,} } for some $p\in L^{5/3}(\R^3\times[0,T])$. Let $u=U+W$ and $g_m=G_m+E_m,\,m=1,2,3$. The above convergences are strong enough to guarantee that the $5$-tuple $(u,g_1,g_2,g_3,p)$ solves \eqref{leray_vNSEd} in the sense of distributions. It remains to show that $(u,g_1,g_2,g_3,p)$ satisfies the local energy inequality \eqref{lei_vNSEd_leray}. 
Note that $(u^{\ve_k},g^{\ve_k}_1,g^{\ve_k}_2,g^{\ve_k}_3,p^{\ve_k})$, where $u^{\ve_k}=U^{\ve_k}+W$ and $g^{\ve_k}_m=G^{\ve_k}_m+E_m,\,m=1,2,3$, satisfies \begin{equation}\label{eq_u^ve_g^ve} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} &\mathcal{L}u^{\ve_k}+W\cdot\nabla u^{\ve_k}+(\eta_{\ve_k}*U^{\ve_k})\cdot\nabla U^{\ve_k}+U^{\ve_k}\cdot\nabla W\\ &~~~~~~~~~~~~~-\underset{n=1}{\overset{3}\sum}E_n\cdot\nabla g^{\ve_k}_n-\underset{n=1}{\overset{3}\sum}(\eta_{\ve_k}*G^{\ve_k}_n)\cdot\nabla G^{\ve_k}_n-\underset{n=1}{\overset{3}\sum}G^{\ve_k}_n\cdot\nabla E_n+\nabla p^{\ve_k}=0\\ &\mathcal{L}g^{\ve_k}_m+W\cdot\nabla g^{\ve_k}_m+(\eta_{\ve_k}*U^{\ve_k})\cdot\nabla G^{\ve_k}_m+U^{\ve_k}\cdot\nabla E_m\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-E_m\cdot\nabla u^{\ve_k}-(\eta_{\ve_k}*G^{\ve_k}_m)\cdot\nabla U^{\ve_k}-G^{\ve_k}_m\cdot\nabla W=0. \end{array}\right. \end{equation} Testing $\eqref{eq_u^ve_g^ve}_1$ and $\eqref{eq_u^ve_g^ve}_2$ for $m=1,2,3,$ with $u^{\ve_k}\psi$ and $g^{\ve_k}_m\psi,\,m=1,2,3$, respectively, where $0\le\psi\in C^\infty_0(\R^4)$ and adding them together, we get \EQ{\label{A_51_vNSEd} &~~~~\int_{\R^4}\left[\frac12\left(|u^{\ve_k}|^2+\underset{n=1}{\overset{3}\sum}\,|g^{\ve_k}_n|^2\right)+|\nabla u^{\ve_k}|^2+\underset{n=1}{\overset{3}\sum}\,|\nabla g^{\ve_k}_n|^2\right]\psi \,dyds\\ &=\int_{\R^4}\frac12\left(|u^{\ve_k}|^2+\underset{n=1}{\overset{3}\sum}\,|g^{\ve_k}_n|^2\right)\left(\partial_s\psi+\De\psi\right)dyds\\ &~~~+\int_{\R^4}\frac12\left(|u^{\ve_k}|^2+\underset{n=1}{\overset{3}\sum}\,|g^{\ve_k}_n|^2\right)(W-y)\cdot\nabla\psi\,dyds\\ &~~~+\int_{\R^4}\left[\frac12\left(|U^{\ve_k}|^2+2(U^{\ve_k}\cdot W)+\underset{n=1}{\overset{3}\sum}\,\left(|G_n^{\ve_k}|^2+2(G_n^{\ve_k}\cdot E_n)\right)\right)\left(\eta_{\ve_k}*U^{\ve_k}\right)\right.\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.+\frac12\left(|W|^2+\underset{n=1}{\overset{3}\sum}\,|E_n|^2\right)U^{\ve_k}\right]\cdot\nabla\psi\,dyds\\ 
&~~~+\int_{\R^4}p^{\ve_k} u^{\ve_k}\cdot\nabla\psi\,dyds\\ &~~~-\underset{n=1}{\overset{3}\sum}\,\int_{\R^4}\left[(u^{\ve_k}\cdot g^{\ve_k}_n)E_n+(U^{\ve_k}\cdot G^{\ve_k}_n)(\eta_{\ve_k}*G^{\ve_k}_n)+(U^{\ve_k}\cdot E_n)G^{\ve_k}_n\right.\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.+(W\cdot G^{\ve_k}_n)G^{\ve_k}_n+(W\cdot E_n)G^{\ve_k}_n\right]\cdot\nabla\psi\,dyds\\ &~~~+\int_{\R^4}\left((\eta_{\ve_k}*U^{\ve_k})-U^{\ve_k}\right)\cdot\left(\nabla W\cdot U^{\ve_k}+\underset{n=1}{\overset{3}\sum}\,\nabla E_n\cdot G^{\ve_k}_n\right)\psi\,dyds\\ &~~~+\underset{n=1}{\overset{3}\sum}\int_{\R^4}((\eta_{\ve_k}*G^{\ve_k}_n)-G^{\ve_k}_n)\cdot(\nabla U^{\ve_k}\cdot E_n+\nabla G^{\ve_k}_n\cdot W)\psi\,dyds. } Let $\mathcal{K}$ be a compact subset of $\R^4$. Using the same argument as that used to derive \eqref{eta_U_conv_mhd} and \eqref{eta_A_conv_mhd}, we have \EQ{\label{eta_U_conv_vNSEd} \|(\eta_{\ve_k}*U^{\ve_k})-U\|_{L^2(\mathcal{K})}\to0\ \text{ as }\ \ve_k\to0\ \ \ \text{ for all compact }\mathcal{K}\subset\R^4, } and, for $m=1,2,3$, \EQ{\label{eta_G_conv_vNSEd} \|(\eta_{\ve_k}*G^{\ve_k}_m)-G_m\|_{L^2(\mathcal{K})}\to0\ \text{ as }\ \ve_k\to0\ \ \ \text{ for all compact }\mathcal{K}\subset\R^4. } In addition, the sequence $\{u^\ve\}_{\ve>0}$ is bounded in $L^{10/3}(\R^3\times[0,T])$ since it is bounded in $L^2(0,T;L^6(\R^3))$ and $L^\infty(0,T;L^2(\R^3))$. As before, we use the result in the Appendix of \cite{MR673830} to show that \EQ{\label{u_strong_5/2_vNSEd} u^{\ve_k}\to u\ \ \ \text{ strongly in }L^{5/2}(\mathcal{K})\ \text{ as }\ve_k\to0. } Combining \eqref{eta_U_conv_vNSEd}-\eqref{u_strong_5/2_vNSEd} and the convergences in \eqref{U_G_conv} with the facts that $W,\,E_m$ are locally differentiable and that $\psi$ is compactly supported, we see that each term on the right-hand side of \eqref{A_51_vNSEd} converges to the corresponding term involving $u,\,U,\,g_m,\,G_m$ and $p$. 
Passing to the limit as $\ve_k\to0$, we get the desired local energy inequality \eqref{lei_vNSEd_leray}, since the dissipation terms $\int|\nabla u^{\ve_k}|^2\psi\,dyds$ and $\int|\nabla g^{\ve_k}_m|^2\psi\,dyds$ are lower semicontinuous as $\ve_k\to0$. This proves \thref{thm_2.4_vNSEd}. \end{proof} \section{Discretely Self-Similar Solutions} In this section, we prove \thref{thm_1.2_mhd} and \thref{thm_1.2_vNSEd}. \subsection{Discretely self-similar solutions to the MHD equations}\label{sect_3.1} \begin{proof}[Proof of \thref{thm_1.2_mhd}] Let $U_0(y,s)=\sqrt{2t}(e^{t\De}v_0)(x)$ and $A_0(y,s)=\sqrt{2t}(e^{t\De}b_0)(x)$. By \thref{lem_3.4}, $U_0$ and $A_0$ both satisfy \thref{assum_2.1} with $T=\log\la$ and $q=10/3$. Let $(u,a,p)$ be the $T$-periodic weak solution derived in \thref{thm_2.4_mhd}. Let $v(x,t)=u(y,s)/\sqrt{2t},\,b(x,t)=a(y,s)/\sqrt{2t}$ and $\pi(x,t)=p(y,s)/2t$ where $x,t,y,s$ satisfy \eqref{xtys}. Then $(v,b,\pi)$ is a distributional solution to \eqref{MHD}. Note that $u-U_0$ is periodic in $s$ with period $T=\log(\la)$. So \EQN{ \|v-e^{t\De} v_0\|_{L_t^\infty(1,\la^2;L_x^2(\R^3))}^2\le&~\la\|u-U_0\|_{L_s^\infty\left(\frac12\log2,\frac12\log2+\log(\la);L_y^2(\R^3)\right)}^2\\ \le&~\la\|u-U_0\|_{L_s^\infty\left(0,T;L_y^2(\R^3)\right)}^2. } Similarly, we have \[\|v-e^{t\De} v_0\|_{L_t^2(1,\la^2;L_x^2(\R^3))}^2\le\la^3\|u-U_0\|_{L_s^2\left(0,T;L_y^2(\R^3)\right)}^2,\] and \[\|\nabla_x\left(v-e^{t\De} v_0\right)\|_{L_t^2(1,\la^2;L_x^2(\R^3))}^2\le\la\|\nabla_y(u-U_0)\|_{L_s^2\left(0,T;L_y^2(\R^3)\right)}^2.\] Therefore, \EQ{\label{eq_1_la^2} v-e^{t\De} v_0\in L^\infty(1,\la^2;L^2(\R^3))\cap L^2(1,\la^2;H^1(\R^3)). } Note that $v-e^{t\De} v_0$ is $\la$-DSS because $u-U_0$ is $T$-periodic, where $T=\log(\la)$. For $t>0$, we have $\la^{-2k}\le t< \la^{-2k+2}$ for some $k\in\mathbb{Z}$, so that $1\le\la^{2k}t<\la^2$. 
Thus \EQ{\label{eq_4.1_1} \|v(t)-e^{t\De}v_0\|_{L^2(\R^3)}^2=&~\la^{-1}\int_{\R^3}\left|(v-e^{t\De}v_0)(x,\la^2t)\right|^2dx\\ =&~\cdots\\ =&~\la^{-k}\int_{\R^3}\left|(v-e^{t\De}v_0)(x,\la^{2k}t)\right|^2dx\\ \le&~t^{1/2}\sup_{1\le\tau\le\la^2}\|v(\tau)-e^{\tau\De}v_0\|_{L^2(\R^3)}^2. } Moreover, \EQ{\label{eq_4.1_2} \int_{\la^{-2k}}^{\la^{-2k+2}}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt=&~\la^{-1}\int_{\la^{-2k+2}}^{\la^{-2k+4}}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt\\ =&~\cdots\\ =&~\la^{-k}\int_1^{\la^2}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt } implies that \EQ{\label{eq_4.1_3} \int_0^{\la^2}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt=&~\sum_{k=0}^\infty\int_{\la^{-2k}}^{\la^{-2k+2}}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt\\ =&~\left(\sum_{k=0}^\infty\la^{-k}\right)\int_1^{\la^2}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt. } Therefore, we see from \eqref{eq_4.1_1} and \eqref{eq_4.1_3} that \EQ{\label{eq_4.1_4} v-e^{t\De}v_0\in L^\infty(0,\la^2;L^2(\R^3))\cap L^2(0,\la^2;H^1(\R^3)). } We first prove that $v$ has locally finite energy and enstrophy. 
In view of Remark 3.2 in \cite{MR673830}, we have \EQN{ &~~~\sup_{x_0\in\R^3}\int_{B_R(x_0)}|e^{t\De}v_0(x)|^2dx\\&=\sup_{x_0\in\R^3}\int_{B_1(x_0)}|e^{t\De}v_0(R(\tilde{x}-x_0)+x_0)|^2R^3\,d\tilde{x}\\ &=\sup_{x_0\in\R^3}\int_{B_1(x_0)}|e^{\frac{t}{R^2}\De}\widetilde{v_0}(\tilde{x})|^2R^3\,d\tilde{x}, \text{ where }\widetilde{v_0}(\tilde{x})=v_0(R(\tilde{x}-x_0)+x_0),\\ &=R^3\,\|e^{\frac{t}{R^2}\De}\widetilde{v_0}\|_{L^2_{\textup{uloc}}}^2\\ &\lesssim R^3\,\|\widetilde{v_0}\|_{L^2_{\textup{uloc}}}^2\ \ \ (\text{by Remark 3.2 in \cite{MR673830}})\\ &=\sup_{x_0\in\R^3}\int_{B_R(x_0)}|v_0(x)|^2dx\\ &=\la^k\sup_{x_0\in\R^3}\int_{B_{\la^{-k}R}(\la^{-k}x_0)}|v_0(x)|^2dx, \text{ where $\la^{k-1}\le R<\la^k$ for some $k$, }(\text{since $v_0$ is $\la$-DSS})\\ &\le\la^k\sup_{x_0\in\R^3}\int_{B_1(\la^{-k}x_0)}|v_0(x)|^2dx\\ &=\la^k\|v_0\|_{L^2_{\textup{uloc}}}^2. } Combining this result with \eqref{eq_4.1_1}, we actually have \EQ{\label{v_energy} &~~~~\underset{0\le t<R^2}{\textup{esssup}}\,\sup_{x_0\in\R^3}\int_{B_R(x_0)}|v(x,t)|^2dx\\ &\le2\left(\underset{0\le t<R^2}{\textup{esssup}}\,\sup_{x_0\in\R^3}\int_{B_R(x_0)}|v(x,t)-e^{t\De}v_0|^2dx+\underset{0\le t<R^2}{\textup{esssup}}\,\sup_{x_0\in\R^3}\int_{B_R(x_0)}|e^{t\De}v_0|^2dx\right)\\ &\lesssim2\left(R\sup_{1\le\tau\le\la^2}\|v(\tau)-e^{\tau\De}v_0\|_{L^2(\R^3)}^2+R\la\|v_0\|_{L^2_{\textup{uloc}}}^2\right)<\infty. 
} Likewise, since \EQN{ \sup_{x_0\in\R^3}\int_{B_R(x_0)}|\nabla_x(e^{t\De}v_0(x))|^2dx=&~\sup_{x_0\in\R^3}\int_{B_1(x_0)}\left|R^{-1}\nabla_{\tilde{x}}(e^{t\De}v_0)(R(\tilde{x}-x_0)+x_0)\right|^2R^3\,d\tilde{x}\\ =&~\sup_{x_0\in\R^3}\int_{B_1(x_0)}\left|\nabla_{\tilde{x}}(e^{\frac{t}{R^2}\De}\widetilde{v_0})(\tilde{x})\right|^2R\,d\tilde{x}\\ \lesssim&~\frac{R}{\left(\frac{t}{R^2}\right)^{\frac12}}\,\|\widetilde{v_0}\|_{L^2_{\textup{uloc}}}^2\ \ \ (\text{by Remark 3.2 in \cite{MR673830}})\\ \le&~\frac{\la}{t^{\frac12}}\,\|v_0\|_{L^2_{\textup{uloc}}}^2, } it follows from \eqref{eq_4.1_2} that \EQ{\label{v_enstrophy} &~~~~\sup_{x_0\in\R^3}\int_0^{R^2}\int_{B_R(x_0)}|\nabla v(x,t)|^2dxdt\\ &\le2\left(\sup_{x_0\in\R^3}\int_0^{R^2}\int_{B_R(x_0)}|\nabla(v-e^{t\De}v_0)|^2dxdt+\sup_{x_0\in\R^3}\int_0^{R^2}\int_{B_R(x_0)}|\nabla(e^{t\De}v_0)|^2dxdt\right)\\ &\le2\left(\sum_{m=0}^\infty\la^{k-1-m}\int_1^{\la^2}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt+2R\la\|v_0\|_{L^2_{\textup{uloc}}}^2\right)\\ &\le2\left(\frac{R\la}{\la-1}\int_1^{\la^2}\int_{\R^3}\left|\nabla(v(t)-e^{t\De}v_0)\right|^2dxdt+2R\la\|v_0\|_{L^2_{\textup{uloc}}}^2\right)<\infty, } where $k$ is some integer so that $\la^{k-1}\le R<\la^k$. The same conclusions as in \eqref{v_energy} and \eqref{v_enstrophy} can be drawn for $b$. This proves \eqref{lfee_mhd}. Secondly, we prove the convergence to initial data. Let $K$ be a compact subset of $\R^3$. We split $\|v(t)-v_0\|_{L^2_{\textup{loc}}}$ into two parts: $\|v(t)-e^{t\De}v_0\|_{L^2_{\textup{loc}}}$ and $\|e^{t\De}v_0-v_0\|_{L^2_{\textup{loc}}}$. The first part is controlled by \eqref{eq_4.1_1} as \EQ{\label{conv_initial_1} \|v(t)-e^{t\De}v_0\|_{L^2(K)}\lesssim t^{1/4}\to0\ \text{ as $t\to0^+$}. } For the second part, we use the fact that $e^{t\De}v_0\to v_0$ in $L^2_{-3/2}$ as $t\to 0^+$ mentioned in Remark 2.3 of \cite{MR1179482}. 
} Moreover, we have the embeddings $L^3_w\subset M^{2,1}\subset L^2_{-3/2}\subset L^2_{\textup{loc}}$ (see the Appendix, Sect.~\ref{sec_append}). Hence $e^{t\De}v_0\to v_0$ in $L^2_{-3/2}$ as $t\to 0^+$ implies \EQ{\label{conv_initial_2} e^{t\De}v_0\to v_0\ \text{ in }L^2_{\textup{loc}}\ \text{ as $t\to0^+$}. } Therefore, combining \eqref{conv_initial_1} and \eqref{conv_initial_2}, we have \EQ{\label{conv_initial_v} v\to v_0\ \text{ in }L^2_{\textup{loc}}\ \text{ as $t\to0^+$}. } The same convergence \eqref{conv_initial_v} holds for $b$. This establishes the convergence to initial data. Next, we prove the decay at spatial infinity. Fix any $R>0$. We split $v$ into two parts: $v-e^{t\De}v_0$ and $e^{t\De}v_0$. For the first part, $v-e^{t\De}v_0\in L^2(0,R^2;L^2(\R^3))$ since \[\int_0^{R^2}\int_{\R^3}|(v-e^{t\De}v_0)(x,t)|^2dxdt\le\int_0^{R^2}t^{1/2}\underset{1\le\tau\le\la^2}{\sup}\|(v-e^{\tau\De}v_0)(x,\tau)\|_{L^2(\R^3)}^2dt<\infty\] by \eqref{eq_4.1_1}. The dominated convergence theorem then implies \[\int_0^{R^2}\int_{B_R(x_0)}|(v-e^{t\De}v_0)(x,t)|^2dxdt=\int_0^{R^2}\int_{\R^3}|(v-e^{t\De}v_0)(x,t)|^21_{B_R(x_0)}(x)\,dxdt\to0\] as $|x_0|\to\infty$. For the second part, since $v_0$ is $\la$-DSS, $e^{t\De}v_0$ is also $\la$-DSS and $U_0$ is periodic in $s$ with period $T=\log(\la)$. So the analogues of \eqref{eq_1_la^2} and \eqref{eq_4.1_1} also hold for $e^{t\De}v_0$. In the same manner as above, we can show \[\int_0^{R^2}\int_{B_R(x_0)}|e^{t\De}v_0(x)|^2dxdt\to0\] as $|x_0|\to\infty$. Since the same proof works for $b$, we can conclude that \eqref{dasi_mhd} holds. Finally, the local energy inequality \eqref{lei_mhd} for \eqref{MHD} follows from the local energy inequality \eqref{lei_mhd_leray} for \eqref{leray_mhd}. 
\end{proof} \subsection{Discretely self-similar solutions to the viscoelastic Navier-Stokes equations with damping} \begin{proof}[Proof of \thref{thm_1.2_vNSEd}] Let $U_0(y,s)=\sqrt{2t}(e^{t\De}v_0)(x)$ and $(G_m)_0=\sqrt{2t}(e^{t\De}(f_0)_m)(x),\,m=1,2,3,$ where $(f_0)_m$ is the $m$-th column of ${\bf F}_0$. By \thref{lem_3.4}, $U_0$ and $(G_m)_0,\,m=1,2,3,$ all satisfy \thref{assum_2.1} with $T=\log\la$ and $q=10/3$. Let $(u,g_1,g_2,g_3,p)$ be the $T$-periodic weak solution derived in \thref{thm_2.4_vNSEd}. Let $v(x,t)=u(y,s)/\sqrt{2t},\,{\bf F}(x,t)={\bf G}(y,s)/\sqrt{2t}$ and $\pi(x,t)=p(y,s)/2t$ where ${\bf G}=(g_1,g_2,g_3)$ and $x,t,y,s$ satisfy \eqref{xtys}. We skip the rest of the proof as it is essentially the same as that in Sect. \ref{sect_3.1}. \end{proof} \section{Self-Similar Solutions} In this section, we prove \thref{thm_1.3_mhd} and \thref{thm_1.3_vNSEd}. \subsection{Self-similar solutions to the MHD equations}\label{sect_4.1} \begin{proof}[Proof of \thref{thm_1.3_mhd}] Let $U_0$ and $A_0$ be defined as in Sect. \ref{sect_3.1}. Since $v_0$ and $b_0$ are $(-1)$-homogeneous, \[U_0(y)=2^{3/2}(4\pi)^{-3/2}\int_{\R^3}e^{-|y-z|^2/2}v_0(z)dz\text{ and }A_0(y)=2^{3/2}(4\pi)^{-3/2}\int_{\R^3}e^{-|y-z|^2/2}b_0(z)dz\] are independent of $s$. By \thref{lem_3.4}, $U_0$ and $A_0$ both satisfy \thref{assum_2.1} for any $q\in(3,\infty]$ because $v_0$ and $b_0$ are $\la$-DSS for all $\la>1$. Let $W$ and $D$ be defined as in \eqref{W_def} and \eqref{D_def}, respectively. Then $W$ and $D$ are independent of $s$. Furthermore, according to \thref{lem_2.5}, $W$ and $D$ satisfy the estimates \eqref{eq_2.8}-\eqref{eq_2.10} with $q\in(3,\infty]$. 
Our goal is to solve the following variational form of the stationary Leray system for the MHD equations \EQ{\label{eq_5.3_mhd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} -(\nabla u,\nabla f)+(u+y\cdot\nabla u-u\cdot\nabla u+a\cdot\nabla a,f)&=0\\ -(\nabla a,\nabla f)+(a+y\cdot\nabla a-u\cdot\nabla a+a\cdot\nabla u,f)&=0, \end{array} \right. } for all $f\in\mathcal{V}$. As in the proof of \thref{thm_2.4_mhd}, we look for a solution of the form $u=W+U$ and $a=D+A$, and we use the Galerkin method to construct it. Note that $(U,A)$ satisfies the perturbed stationary Leray system for the MHD equations, whose weak formulation reads \EQ{\label{eq_5.4_mhd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} &-(\nabla U,\nabla f)+(U+y\cdot\nabla U,f)-(U\cdot\nabla U-A\cdot\nabla A,f)\\ &~~~~~~~~~~~~~~~~~~~~~~=(W\cdot\nabla U+U\cdot\nabla W-D\cdot\nabla A-A\cdot\nabla D,f)+\left<\mathcal{R}_1(W,D),f\right>\\ &-(\nabla A,\nabla f)+(A+y\cdot\nabla A,f)-(U\cdot\nabla A-A\cdot\nabla U,f)\\ &~~~~~~~~~~~~~~~~~~~~~~=(W\cdot\nabla A+U\cdot\nabla D-D\cdot\nabla U-A\cdot\nabla W,f)+\left<\mathcal{R}_2(W,D),f\right>, \end{array} \right. } for all $f\in\mathcal{V}$, where $\mathcal{R}_1$ and $\mathcal{R}_2$ are the same as in \eqref{eq_R1_R2}. Let $\{h_k\}_{k\in\mathbb{N}}\subset\mathcal{V}$ be an orthonormal basis of $H$. For a fixed $k$, we look for an approximate solution of the form $U_k(y)=\sum_{i=1}^k\mu_{ki}h_i(y),\,A_k(y)=\sum_{i=1}^k\al_{ki}h_i(y)$. 
Plugging them into the weak formulation, we get the following algebraic system: \begin{equation}\label{eq_5.5_mhd} \setlength\arraycolsep{1.5pt}\def\arraystretch{1.2} \left\{\begin{array}{ll} \underset{i=1}{\overset{k}\sum}\mathscr{A}_{ij}\mu_{ki}+\underset{i=1}{\overset{k}\sum}\mathscr{B}_{ij}\al_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{C}_{ilj}\mu_{ki}\mu_{kl}-\underset{i,l=1}{\overset{k}\sum}\mathscr{C}_{ilj}\al_{ki}\al_{kl}+\mathscr{D}_j&=0\\ \underset{i=1}{\overset{k}\sum}\mathscr{E}_{ij}\mu_{ki}+\underset{i=1}{\overset{k}\sum}\mathscr{F}_{ij}\al_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{G}_{ilj}\mu_{ki}\al_{kl}+\mathscr{H}_j&=0, \end{array}\right. \end{equation} for $j=1,\cdots,k$, where $\mathscr{A}_{ij},\,\mathscr{B}_{ij},\,\mathscr{D}_j,\,\mathscr{E}_{ij},\,\mathscr{F}_{ij},\,\mathscr{H}_j$ are the same as those in \eqref{galerkin_ode_coeff_mhd}, and \EQ{ \mathscr{C}_{ilj}&=-(h_i\cdot\nabla h_l,h_j),\\ \mathscr{G}_{ilj}&=-(h_i\cdot\nabla h_l,h_j)+(h_l\cdot\nabla h_i,h_j). } Let $P:\R^{2k}\to\R^{2k}$ be defined by \EQN{ &(P(\mu_{k1},\cdots,\mu_{kk},\al_{k1},\cdots,\al_{kk}))_j\\ &~~~~=\left\{\begin{array}{ll}\underset{i=1}{\overset{k}\sum}\mathscr{A}_{ij}\mu_{ki}+\underset{i=1}{\overset{k}\sum}\mathscr{B}_{ij}\al_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{C}_{ilj}\mu_{ki}\mu_{kl}-\underset{i,l=1}{\overset{k}\sum}\mathscr{C}_{ilj}\al_{ki}\al_{kl}+\mathscr{D}_j,&\ j=1,\cdots,k,\\ \underset{i=1}{\overset{k}\sum}\mathscr{E}_{i(j-k)}\mu_{ki}+\underset{i=1}{\overset{k}\sum}\mathscr{F}_{i(j-k)}\al_{ki}+\underset{i,l=1}{\overset{k}\sum}\mathscr{G}_{il(j-k)}\mu_{ki}\al_{kl}+\mathscr{H}_{j-k},&\ j=k+1,\cdots,2k.\end{array}\right. 
} From estimates similar to those in \eqref{etm_2.28_mhd} and \eqref{etm_2.29_mhd}, we have that \EQ{\label{eq_5.6_mhd} &P(\mu_{k1},\cdots,\mu_{kk},\al_{k1},\cdots,\al_{kk})\cdot(\mu_{k1},\cdots,\mu_{kk},\al_{k1},\cdots,\al_{kk})\\ =&~-\frac12\left(\|U_k\|_{L^2}^2+\|A_k\|_{L^2}^2\right)-\left(\|\nabla U_k\|_{L^2}^2+\|\nabla A_k\|_{L^2}^2\right)\\ &~-(U_k\cdot\nabla W-D\cdot\nabla A_k-A_k\cdot\nabla D,U_k)-(U_k\cdot\nabla D-D\cdot\nabla U_k-A_k\cdot\nabla W,A_k)\\ &~-\left<\mathcal{R}_1(W,D),U_k\right>-\left<\mathcal{R}_2(W,D),A_k\right>\\ \le&~-\frac12\left(\|U_k\|_{L^2}^2+\|A_k\|_{L^2}^2\right)-\left(\|\nabla U_k\|_{L^2}^2+\|\nabla A_k\|_{L^2}^2\right)+\frac38\left(\|U_k\|_{H^1}^2+\|A_k\|_{H^1}^2\right)\\ &~+C_2+\frac3{32}\left(\|U_k\|_{H^1}^2+\|A_k\|_{H^1}^2\right)\\ =&~-\frac1{32}\left(\|U_k\|_{L^2}^2+\|A_k\|_{L^2}^2\right)-\frac{17}{32}\left(\|\nabla U_k\|_{L^2}^2+\|\nabla A_k\|_{L^2}^2\right)+C_2\\ \le&~-\frac1{32}\left|(\mu_{k1},\cdots,\mu_{kk},\al_{k1},\cdots,\al_{kk})\right|^2+C_2\\ <&~0, } if $\left|(\mu_{k1},\cdots,\mu_{kk},\al_{k1},\cdots,\al_{kk})\right|=8\sqrt{C_2}=:\rho$. Note that $C_2$ is independent of $k$. Thus, by a standard corollary of Brouwer's fixed point theorem, we obtain a point $(\mu_{k1},\cdots,\mu_{kk},\al_{k1},\cdots,\al_{kk})\in B_\rho^{2k}$ such that $P(\mu_{k1},\cdots,\mu_{kk},\al_{k1},\cdots,\al_{kk})=0$. Then $U_k(y)=\sum_{i=1}^k\mu_{ki}h_i(y),\,A_k(y)=\sum_{i=1}^k\al_{ki}h_i(y)$ is our approximate solution of \eqref{eq_5.4_mhd} with the a priori bound \[\left(\|U_k\|_{L^2}^2+\|A_k\|_{L^2}^2\right)+17\left(\|\nabla U_k\|_{L^2}^2+\|\nabla A_k\|_{L^2}^2\right)\le32\,C_2.\] Therefore, we have, up to a subsequence, the following convergences \EQ{ &U_k\rightharpoonup U,\ A_k\rightharpoonup A\ \text{ weakly in $H^1(\R^3)$},\\ &U_k\to U,\ A_k\to A\ \text{ strongly in $L^2(K)$ for all compact sets $K\subset\R^3$}. } So we derive a solution $(U,A)$ to \eqref{eq_5.4_mhd} with $U,A\in H^1(\R^3)$. Then $(u,a)$, where $u=U+W$ and $a=A+D$, is a solution to \eqref{eq_5.3_mhd}. 
Note that $u,a\in H^1_{\textup{loc}}\cap L^q$ for all $3<q\le 6$ since $U,A\in H^1\subset L^q$ for $q\le6$ and $W,D\in L^q\cap L^4\cap C^\infty_{\text{loc}}$ for $q>3$. Regarding the pressure, we define \[p=\sum_{i,j=1}^3R_iR_j(u_iu_j-a_ia_j),\] where $R_i$ denote the Riesz transforms. Then $(u,a,p)$ satisfies the stationary Leray system for the MHD equations \eqref{eq_1.7_mhd} in the sense of distributions. Moreover, Calder\'on-Zygmund estimates give the following a priori bound for $p$: for $3<q\le6$, \[\|p\|_{L^{q/2}(\R^3)}\le C\left(\|u\|_{L^q(\R^3)}^2+\|a\|_{L^q(\R^3)}^2\right).\] Recovering $(v,b,\pi)$ from $(u,a,p)$ by the relation \eqref{eq_1.6_mhd}, we obtain a self-similar weak solution of \eqref{MHD} (see \cite[pp.33-34]{MR1643650}). It remains to show that $(v,b,\pi)$ is a local Leray solution of \eqref{MHD}. Recall that $(U,p)$ is a solution of the stationary Stokes system with the force \[\mathcal{G}_1=U+y\cdot\nabla U-(U\cdot\nabla U-A\cdot\nabla A)-W\cdot\nabla U-U\cdot\nabla W+D\cdot\nabla A+A\cdot\nabla D-\mathcal{L}W-W\cdot\nabla W+D\cdot\nabla D.\] Applying the regularity result in \cite[Proposition 1.2.2]{MR0609732} on compact subsets of $\R^3$, we find that $u$ and $p$ are actually smooth. Additionally, $A$ is a solution of the Poisson equation with the right-hand side \[\mathcal{G}_2=A+y\cdot\nabla A-(U\cdot\nabla A-A\cdot\nabla U)-W\cdot\nabla A-U\cdot\nabla D+D\cdot\nabla U+A\cdot\nabla W-\mathcal{L}D-W\cdot\nabla D+D\cdot\nabla W.\] A standard elliptic regularity result leads to the smoothness of $A$ on compact subsets of $\R^3$. Thus, $u,a$ and $p$ inherit the smoothness from $U,W,A$ and $D$. Therefore, by the self-similarity of $v,b$ and $\pi$, these functions are smooth in both the spatial and time variables. Consequently, the local energy inequality \eqref{lei_mhd} can be obtained via integration by parts. 
The remaining conditions from \thref{def_loc_leray_mhd} and the estimates of the distance between the solution $(v,b)$ and the background $(e^{t\De}v_0,e^{t\De}b_0)$ can be verified using the same approach as in Sect. \ref{sect_3.1}. \end{proof} \subsection{Self-similar solutions to the viscoelastic Navier-Stokes equations with damping} \begin{proof}[Proof of \thref{thm_1.3_vNSEd}] The proof is basically the same as in Sect. \ref{sect_4.1}. It is worth noting that in \eqref{eq_5.6_mhd} we use the estimates \eqref{etm_2.28_mhd} and \eqref{etm_2.29_mhd} obtained by applying \thref{lem_2.5} with $\de=\frac14$, while here we achieve \eqref{eq_5.6_mhd} from the estimates \eqref{etm_2.28_vNSEd} and \eqref{etm_2.29_vNSEd} by applying the same lemma but with the parameter $\de=\frac18$. The details of the verification are left to the reader. \end{proof} \section{Appendix}\label{sec_append} In this appendix, we prove the three inclusions $L^3_w\subset M^{2,1}\subset L^2_{-3/2}\subset L^2_{\textup{loc}}$. To begin with, the first inclusion can be shown by the inequality \EQN{ r^{-1}\int_{B_r(x_0)}|f(x)|^2dx=&~r^{-1}\int_{B_r(x_0)}\int_0^{|f(x)|}2\al\,d\al dx\\ =&~r^{-1}\int_{B_r(x_0)}\int_0^\infty2\al\,1_{\{|f|>\al\}}(x)\,d\al dx\\ =&~r^{-1}\int_0^\infty2\al|\{|f|>\al\}\cap B_r(x_0)|d\al\\ =&~r^{-1}\int_0^{r^{-1}}2\al|\{|f|>\al\}\cap B_r(x_0)|d\al+r^{-1}\int_{r^{-1}}^\infty2\al|\{|f|>\al\}\cap B_r(x_0)|d\al\\ \le&~r^{-1}\int_0^{r^{-1}}2\al|B_r(x_0)|d\al+r^{-1}\int_{r^{-1}}^\infty2\al|\{|f|>\al\}|d\al\\ \le&~r^{-1}|B_r(x_0)|r^{-2}+r^{-1}\int_{r^{-1}}^\infty2\al\|f\|_{L^3_w}^3\al^{-3}d\al\\ \lesssim&~1+\|f\|_{L^3_w}^3. 
} Next, the second inclusion is valid as \EQN{ \int_{\R^3}\frac{|f(x)|^2}{(1+|x|)^3}\,dx=&~\int_{|x|<1}\frac{|f(x)|^2}{(1+|x|)^3}\,dx+\sum_{k=0}^\infty\int_{2^k\le|x|<2^{k+1}}\frac{|f(x)|^2}{(1+|x|)^3}\,dx\\ \le&~\int_{B_1(0)}|f(x)|^2dx+\sum_{k=0}^\infty\frac1{(1+2^k)^3}\int_{2^k\le|x|<2^{k+1}}|f(x)|^2dx\\ \le&~\|f\|_{M^{2,1}}^2+\sum_{k=0}^\infty\frac1{(1+2^k)^3}\,2^{k+1}\|f\|_{M^{2,1}}^2\\ \lesssim&~\|f\|_{M^{2,1}}^2. } Finally, the third inclusion holds since \EQN{ \int_{|x|\le M}|f(x)|^2dx\le(1+M)^3\int_{\R^3}\frac{|f(x)|^2}{(1+|x|)^3}\,dx=(1+M)^3\|f\|_{L^2_{-3/2}}^2. } \section*{Acknowledgments} The research was partially supported by the FYF (\#6456) of Graduate and Postdoctoral Studies, University of British Columbia (BC). The author would like to express his deep gratitude to Tai-Peng Tsai for kind discussions. He also thanks Anyi Bao for her proofreading.
\section{Introduction}\label{sec_intro} From data storage to data transmission, line codes are employed in many systems to achieve a variety of goals. An important early example, introduced in \cite{tang_bahl}, is the family of run-length-limited (RLL) codes used to mitigate inter-symbol interference (ISI) in magnetic recording (MR) systems by appropriately separating transitions. RLL codes are associated with bipolar non-return-to-zero inverted (NRZI) signaling, where a $0$ is represented by no transition and a $1$ is represented by a transition, with the transitions being from $-A$ to $+A$ and vice versa. RLL codes are characterized by a pair of parameters, $(d, k)$, where $d$ (resp., $k$) is the minimum (resp., maximum) number of $0$'s between adjacent $1$'s. The parameter $d$ separates transitions, and the parameter $k$ supports self-clocking by ensuring frequent transitions. A variable-length fixed-rate $(2, 7)$ RLL code appeared in the IBM 3370, 3375, and 3380 disk drives \cite{siegel_mr}, and the issue of error propagation for $(2, 7)$ RLL codes was studied in \cite{howe_prop}. For simplicity, we abbreviate a run of $r$ consecutive $0$'s (resp., $1$'s) to $\bold{0}^r$ (resp., $\bold{1}^r$). A $\mathcal{T}_x$-constrained code is a code that forbids the patterns in $\mathcal{T}_x \triangleq \{0 \bold{1}^y 0, 1 \bold{0}^y 1 \text{ } | \text{ } 1 \leq y \leq x \}$ from appearing in any codeword. $\mathcal{T}_x$-constrained codes are associated with bipolar non-return-to-zero (NRZ) signaling, where a $0$ is represented by level $-A$ and a $1$ is represented by level $+A$. The parameter $x$ separates transitions, which mitigates ISI, serving the same purpose as the parameter $d$ in RLL codes. For example, consecutive transitions can be prevented by a $\{010, 101\}$-constrained code with NRZ signaling, or a $(1, \infty)$ RLL code with NRZI signaling. We focus in this paper on $\mathcal{T}_x$-constrained codes. 
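The NRZ/NRZI correspondence noted above can be checked by brute force: the transition sequence $t_i = b_i \oplus b_{i+1}$ of a length-$n$ NRZ word $b$ avoids the patterns $1\bold{0}^{y-1}1$, $1 \leq y \leq x$, exactly when $b$ avoids every pattern in $\mathcal{T}_x$, and $b \mapsto (b_1, t)$ is a bijection, so each length-$(n-1)$ $(x, \infty)$ RLL word corresponds to exactly two constrained NRZ words. A small illustrative sketch (the function names are ours, not from the literature):

```python
from itertools import product

def avoids_Tx(word, x):
    """True if `word` contains no pattern 0 1^y 0 or 1 0^y 1 with 1 <= y <= x."""
    s = "".join(map(str, word))
    return not any("0" + "1" * y + "0" in s or "1" + "0" * y + "1" in s
                   for y in range(1, x + 1))

def is_rll(word, d):
    """True if `word` is a (d, infinity) RLL word: at least d 0's between 1's."""
    s = "".join(map(str, word))
    return not any("1" + "0" * y + "1" in s for y in range(d))

def count_Tx(n, x):
    """Number of length-n binary words avoiding all patterns in T_x."""
    return sum(avoids_Tx(w, x) for w in product((0, 1), repeat=n))

def count_rll(m, d):
    """Number of length-m (d, infinity) RLL words."""
    return sum(is_rll(w, d) for w in product((0, 1), repeat=m))
```

For instance, `count_Tx(3, 1)` is 6 (all eight words of length 3 except $010$ and $101$), which equals twice `count_rll(2, 1)`.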
Constrained codes were used to extend the life of MR systems employing peak detection, and they continue to be used in modern MR systems \cite{vasic_prc, col_detect} to improve the performance of sequence detection on partial response (PR) channels such as extended PR4 (EPR4 and E$^2$PR4) channels \cite{siegel_const, immink_1}. Moreover, constrained codes improve the performance on low resolution media by preventing short pulses, which might be missed when reading \cite{harada_resol}. As $x$ for a $\mathcal{T}_x$-constrained code or $d$ for an RLL code increases, the minimum width of a pulse in the stream to be written increases. The requirement that the power spectrum of a line code vanishes at frequency zero, i.e., the code is direct-current-free (DC-free), is important in optical recording \cite{immink_opt} and in digital communication over transmission lines. This requirement is typically accomplished by balancing signal signs in the stream of transmitted (written) codewords. The author in \cite{knuth_bal} developed a particularly elegant method of achieving balance, which requires the addition of more than $\log_2 m$ bits, where $m$ is the code length, and this method was later tailored to RLL codes \cite{knuth_mod}. The null at DC can be widened by constraining the higher order statistics of line codewords (see \cite{robert_spec1} and \cite{robert_spec2} for a frequency domain approach). Constrained codes also find application in Flash memories. Consider a single-level cell (SLC) Flash memory system. Given three adjacent cells, the pattern $101$ translates to programming the outer two cells but not the inner cell. This pattern may result in inter-cell interference (ICI) caused by an unintentional increase of the charge level in the inner cell. The pattern $010$ is typically less detrimental, but it can cause problems when erasures are not applied to the entire block and the outer cells are initially programmed. 
See \cite{qin_flash} for a study of balanced constrained codes that alleviate ICI in Flash systems by focusing on the pattern $(q-1)0(q-1)$, where $q$ is the Galois field size. Another related work is \cite{kayser_flash}. Furthermore, line codes find application in computer standards for data transmission, such as universal serial bus (USB) and peripheral component interconnect express (PCIe). Line codes for these applications are simpler than $\mathcal{T}_x$-constrained and RLL codes, since streams of codewords are only required to be balanced and to support self-clocking. Examples include the $8$b/$10$b code \cite{immink_2}, the $64$b/$66$b code \cite{walker_66}, and the $128$b/$132$b code \cite{saade_line}. We note that constrained codes preserving parity are studied in \cite{roth_pres}, and that constrained codes for deoxyribonucleic acid (DNA) storage are studied in \cite{immink_3}. We refer the reader to \cite{immink_1} for a comprehensive survey of constrained codes. The idea of lexicographic indexing can be traced back to \cite{tang_bahl} and to \cite{cover_lex}. The latter independently introduced the idea in the context of source coding. The RLL codes and balanced RLL codes constructed in \cite{immink_lex} and \cite{braun_lex}, respectively, are based on \cite{cover_lex}, and the rates achieved improve upon those of earlier RLL codes. However, these gains are only realized at relatively large code lengths, and therefore at a significant cost in terms of complexity, storage overhead, and error performance. Moreover, the technique in \cite{cover_lex} does not readily generalize to $\mathcal{T}_x$-constrained codes. While techniques based on lookup tables, e.g., \cite{immink_table}, offer a better rate-length trade-off, they incur significant encoding and decoding complexity. In this paper, we return to the presentation of lexicographic indexing in \cite{tang_bahl}, and develop the idea in the context of a new family of $\mathcal{T}_x$-constrained codes. 
We call the new codes lexicographically-ordered $\mathcal{T}_x$-constrained codes, or simply LOCO codes. Our three most significant contributions are: \begin{enumerate} \item We develop a simple rule for encoding and decoding LOCO codes based on lexicographic indexing. This rule reduces the encoding-decoding of LOCO codes to low-complexity mapping-demapping between the index of a codeword and the codeword itself. We demonstrate that LOCO codes are capacity-achieving codes, and that at moderate lengths, they provide a rate gain of up to $10\%$ compared with practical RLL codes that are used to achieve the same goals. \item We demonstrate density gains of about $20\%$ in modern MR systems by using a LOCO code to protect only the parity bits of a low-density parity-check (LDPC) code, thereby alleviating ISI. It is of course possible to protect all the bits of the LDPC code, but our method limits the rate loss. Our demonstration uses a modified version of the PR system described in \cite{ahh_bas}, and a spatially-coupled (SC) LDPC code constructed as in \cite{ahh_tit2}. \item We prove that the inherent symmetry of LOCO codes makes balancing easy. Each message in a balanced LOCO code is represented by two codewords that are the complements of each other. Moreover, we show that the rate loss incurred in balancing LOCO codes is minimal, and that this loss tends to zero in the limit, so that balanced LOCO codes achieve the same asymptotic rates as their unbalanced counterparts. \end{enumerate} We also describe how to modify LOCO codes to achieve self-clocking with NRZ signaling. The rest of the paper is organized as follows. In Section~\ref{sec_lex}, LOCO codes are formally defined and analyzed. The mapping-demapping between the index of a codeword and the codeword itself is introduced in Section~\ref{sec_pract}. Next, the rates of LOCO codes, together with practical encoding and decoding algorithms, are presented in Section~\ref{sec_ralg}. 
LOCO codes are applied to MR systems in Section~\ref{sec_mr}. Balanced LOCO codes and their rates are discussed in Section~\ref{sec_bala}. Finally, the paper is concluded in Section~\ref{sec_conc}. \section{Analysis of LOCO Codes}\label{sec_lex} We start with the formal definition of the proposed fixed-length LOCO codes. In the next two sections, we will propose simple, practical encoding and decoding schemes for these codes. \begin{definition}\label{def_loco} A LOCO code $\mathcal{C}_{m,x}$, with parameters $m$ and $x$, is defined by the following properties: \begin{enumerate} \item Each codeword $\bold{c} \in \mathcal{C}_{m,x}$ is binary and of length $m$. \item Codewords in $\mathcal{C}_{m,x}$ are ordered lexicographically. \item Each codeword $\bold{c} \in \mathcal{C}_{m,x}$ does not contain any pattern in the set $\mathcal{T}_x$, where: \begin{equation}\label{eqn_Tx} \mathcal{T}_x \triangleq \{010, 101, 0110, 1001, \dots, 0 \bold{1}^x 0, 1 \bold{0}^x 1\}; \end{equation} therefore, $\vert \mathcal{T}_x \vert = 2x$, with $x \in \{1, 2, \dots \}$. \item Codewords in $\mathcal{C}_{m,x}$ are all the codewords satisfying the previous three conditions. \end{enumerate} \end{definition} Since $\mathcal{T}_x$-constrained codes are used with NRZ signaling, the constrained set of patterns can also be written as: \begin{align} \{&-+\hspace{0.2em}-, +-+, -++\hspace{0.2em}-, +--\hspace{0.2em}+, \dots, - \boldsymbol{+}^x -, + \boldsymbol{-}^x +\}, \nonumber \end{align} where the notation $\boldsymbol{-}^r$ (resp., $\boldsymbol{+}^r$) is defined the same way as $\bold{0}^r$ (resp., $\bold{1}^r$). Throughout the paper, NRZ (resp., NRZI) signaling is adopted for LOCO (resp., RLL) codes. \begin{remark} In the case of Flash systems, the level $-A$ is replaced by the erasure level $E$. \end{remark} Observe the connection between the forbidden patterns, i.e., the patterns in $\mathcal{T}_x$, and the physics of different data storage systems. 
As $x$ increases, ISI in MR systems (resp., ICI in Flash systems) is further mitigated, and the minimum width of a pulse increases. However, increasing $x$ reduces the rate of the LOCO code. Table~\ref{table_1} presents the LOCO codes $\mathcal{C}_{m,1}$, $m \in \{1, 2, \dots, 6\}$. These LOCO codes have $x=1$ and $\mathcal{T}_1 = \{010, 101\}$. We partition the codewords in $\mathcal{C}_{m,x}$ into five distinct groups as follows: \textbf{Group~1:} Codewords in this group start with $00$ from the left, i.e., in their left-most bits (LMBs). \textbf{Group~2:} Codewords in this group start with $0\bold{1}^{x+1}$ from the left, i.e., in their LMBs. \textbf{Group~3:} Codewords in this group start with $\bold{1}^y \bold{0}^{x+1}$, $2 \leq y \leq x+1$, from the left, i.e., in their LMBs. \textbf{Group~4:} Codewords in this group start with $1 \bold{0}^{x+1}$ from the left, i.e., in their LMBs. \textbf{Group~5:} Codewords in this group start with $1\bold{1}^{x+1}$ from the left, i.e., in their LMBs. The five groups are shown in Table~\ref{table_1} for the code $\mathcal{C}_{6, 1}$. We will see that this partitioning into groups enables enumeration as well as low-complexity encoding and decoding of LOCO codewords. \begin{remark} In order to satisfy Condition~3 in Definition \ref{def_loco} for a stream of codewords of a LOCO code $\mathcal{C}_{m,x}$, a bridging pattern needs to be added between any two consecutively transmitted (written) codewords in this stream. Bridging patterns will be discussed later in this paper. \end{remark} \begin{table*} \caption{All the codewords of six LOCO codes, $\mathcal{C}_{m,1}$, $m \in \{1, 2, \dots, 6\}$. 
The five different groups of codewords are explicitly illustrated~for the code $\mathcal{C}_{6,1}$.} \vspace{-0.5em} \centering \scalebox{1.00} { \begin{tabular}{|c|c|c|c|c|c|c c|} \hline \multirow{2}{*}{Codeword index $g(\bold{c})$} & \multicolumn{7}{|c|}{\makecell{Codewords of the code $\mathcal{C}_{m,1}$}} \\ \cline{2-8} {} & \makecell{$m=1$} & \makecell{$m=2$} & \makecell{$m=3$} & \makecell{$m=4$} & \makecell{$m=5$} & \multicolumn{2}{|c|}{\makecell{$m=6$}} \\ \hline $0$ & $0$ & $00$ & $000$ & $0000$ & $00000$ & $000000$ & \multirow{8}{*}{Group~1} \\ \cline{1-1}\cline{2-2}\cline{3-3} $1$ & $1$ & $01$ & $001$ & $0001$ & $00001$ & $000001$ & \\ \cline{1-1}\cline{2-2}\cline{3-3}\cline{4-4} $2$ & & $10$ & $011$ & $0011$ & $00011$ & $000011$ & \\ \cline{1-1}\cline{3-3}\cline{4-4}\cline{5-5} $3$ & & $11$ & $100$ & $0110$ & $00110$ & $000110$ & \\ \cline{1-1}\cline{3-3}\cline{4-4} $4$ & & & $110$ & $0111$ & $00111$ & $000111$ & \\ \cline{1-1}\cline{4-4}\cline{5-5}\cline{6-6} $5$ & & & $111$ & $1000$ & $01100$ & $001100$ & \\ \cline{1-1}\cline{4-4} $6$ & & & & $1001$ & $01110$ & $001110$ & \\ \cline{1-1}\cline{5-5} $7$ & & & & $1100$ & $01111$ & $001111$ & \\ \cline{1-1}\cline{5-5}\cline{6-6}\cline{7-8} $8$ & & & & $1110$ & $10000$ & $011000$ & \multirow{5}{*}{Group~2} \\ \cline{1-1} $9$ & & & & $1111$ & $10001$ & $011001$ & \\ \cline{1-1}\cline{5-5} $10$ & & & & & $10011$ & $011100$ & \\ \cline{1-1}\cline{6-6} $11$ & & & & & $11000$ & $011110$ & \\ \cline{1-1} $12$ & & & & & $11001$ & $011111$ & \\ \cline{1-1}\cline{6-6}\cline{7-8} $13$ & & & & & $11100$ & $100000$ & \multirow{5}{*}{Group~4} \\ \cline{1-1} $14$ & & & & & $11110$ & $100001$ & \\ \cline{1-1} $15$ & & & & & $11111$ & $100011$ & \\ \cline{1-1}\cline{6-6} $16$ & & & & & & $100110$ & \\ \cline{1-1} $17$ & & & & & & $100111$ & \\ \cline{1-1}\cline{7-8} $18$ & & & & & & $110000$ & \multirow{3}{*}{Group~3} \\ \cline{1-1} $19$ & & & & & & $110001$ & \\ \cline{1-1} $20$ & & & & & & $110011$ & \\ 
\cline{1-1}\cline{7-8} $21$ & & & & & & $111000$ & \multirow{5}{*}{Group~5} \\ \cline{1-1} $22$ & & & & & & $111001$ & \\ \cline{1-1} $23$ & & & & & & $111100$ & \\ \cline{1-1} $24$ & & & & & & $111110$ & \\ \cline{1-1} $25$ & & & & & & $111111$ & \\ \hline Code cardinality & $N(1, 1) \triangleq 2$ & $N(2, 1) = 4$ & $N(3, 1) = 6$ & $N(4, 1) = 10$ & $N(5, 1) = 16$ & \multicolumn{2}{|c|}{$N(6, 1) = 26$} \\ \hline \end{tabular}} \label{table_1} \end{table*} First, we determine the cardinality of $\mathcal{C}_{m,x}$. \begin{theorem}\label{thm_loco_card} Let $N(m, x)$ be the cardinality (size) of the LOCO code $\mathcal{C}_{m,x}$, i.e., $N(m, x) = \vert \mathcal{C}_{m,x} \vert$. Define: \begin{equation}\label{eqn_Ndef} N(m, x) \triangleq 2, \textup{ } m \leq 1. \end{equation} Then, the following recursive formula gives $N(m, x)$: \begin{equation}\label{eqn_rec} N(m, x) = N(m-1, x) + N(m-x-1, x), \textup{ } m \geq 2. \end{equation} \end{theorem} \begin{IEEEproof} To prove our recursive formula, which is (\ref{eqn_rec}), we calculate the cardinalities of the five aforementioned groups in $\mathcal{C}_{m,x}$, $m \geq 2$, then add all cardinalities. Observe first that symmetry of forbidden patterns implies that $\mathcal{C}_{m,x}$ is closed under taking codeword complements. Consequently, the number of codewords starting with $0$ from the left, i.e., in their LMB, equals the number of codewords starting with $1$ from the left. \textbf{Group~1:} Each codeword in Group~1 in $\mathcal{C}_{m,x}$ corresponds to a codeword in $\mathcal{C}_{m-1,x}$ that starts with $0$ from the left and shares the remaining $m-2$ right-most bits (RMBs) with the codeword in $\mathcal{C}_{m,x}$. Thus, the cardinality of Group~1 is: \begin{equation}\label{eqn_card1} N_1(m, x) = \frac{1}{2} N(m-1, x). 
\end{equation} \textbf{Group~2:} Each codeword in Group~2 in $\mathcal{C}_{m,x}$ corresponds to a codeword in $\mathcal{C}_{m-1,x}$ that starts with $\bold{1}^{x+1}$ from the left and shares the remaining $m-x-2$ RMBs with~the codeword in $\mathcal{C}_{m,x}$. Note that having $x+1$ $1$'s after the $0$ for codewords in Group~2 is a must given the patterns in $\mathcal{T}_x$ (the forbidden patterns). Consequently, out of $\frac{1}{2} N(m-1, x)$ codewords starting with $1$ from the left in $\mathcal{C}_{m-1,x}$, codewords in the following subgroups do not correspond to codewords in Group~2 in $\mathcal{C}_{m,x}$: Subgroup~2e.1: Codewords starting with $1 \bold{0}^{x+1}$ from the left in $\mathcal{C}_{m-1,x}$. Each of these codewords corresponds to a codeword in $\mathcal{C}_{m-x-2,x}$ starting with $0$. Subgroup~2e.2: Codewords starting with $11 \bold{0}^{x+1}$ from the left in $\mathcal{C}_{m-1,x}$. Each of these codewords corresponds to a codeword in $\mathcal{C}_{m-x-3,x}$ starting with $0$. These subgroups continue until: Subgroup~2e.x: Codewords starting with $\bold{1}^x \bold{0}^{x+1}$ from the left in $\mathcal{C}_{m-1,x}$. Each of these codewords corresponds to a codeword in $\mathcal{C}_{m-2x-1,x}$ starting with $0$. This analysis means that a total of \begin{align} \frac{1}{2} \big [ &N(m-x-2, x) + N(m-x-3, x) + \dots + \nonumber \\ &N(m-2x-1, x) \big ] \nonumber \end{align} codewords are excluded from the codewords starting with $1$ from the left in $\mathcal{C}_{m-1,x}$. Consequently, the cardinality of Group~2 is: \begin{equation}\label{eqn_card2} N_2(m, x) = \frac{1}{2} \left [ N(m-1, x) - \sum_{j=2}^{x+1} N(m-x-j, x) \right ]. \end{equation} \textbf{Group~3:} Recall that codewords in Group~3 start with $\bold{1}^y \bold{0}^{x+1}$, $2 \leq y \leq x+1$, from the left. Consequently, these codewords can be divided into subgroups according to the value of $y$. These subgroups are associated with the excluded subgroups from Group~2. 
Subgroup~3.1 with $y=2$: Each codeword in Subgroup~3.1 in $\mathcal{C}_{m,x}$ corresponds to a codeword in $\mathcal{C}_{m-1,x}$ that starts with $1 \bold{0}^{x+1}$ from the left and shares the remaining $m-x-3$ RMBs with the codeword in $\mathcal{C}_{m,x}$. The codewords in $\mathcal{C}_{m-1,x}$ having this property are the codewords in Subgroup~2e.1. Subgroup~3.2 with $y=3$: Each codeword in Subgroup~3.2 in $\mathcal{C}_{m,x}$ corresponds to a codeword in $\mathcal{C}_{m-1,x}$ that starts with $11 \bold{0}^{x+1}$ from the left and shares the remaining $m-x-4$ RMBs with the codeword in $\mathcal{C}_{m,x}$. The codewords in $\mathcal{C}_{m-1,x}$ having this property are the codewords in Subgroup~2e.2. These subgroups continue until: Subgroup~3.x with $y=x+1$: Each codeword in Subgroup~3.x in $\mathcal{C}_{m,x}$ corresponds to a codeword in $\mathcal{C}_{m-1,x}$ that starts with $\bold{1}^x \bold{0}^{x+1}$ from the left and shares the remaining $m-2x-2$ RMBs with the codeword in $\mathcal{C}_{m,x}$. The codewords in $\mathcal{C}_{m-1,x}$ having this property are the codewords in Subgroup~2e.x. Accordingly, and from the analysis of Group~2, the cardinality of Group~3 is: \begin{equation}\label{eqn_card3} N_3(m, x) = \frac{1}{2} \sum_{j=2}^{x+1} N(m-x-j, x). \end{equation} \textbf{Group~4:} Each codeword in Group~4 in $\mathcal{C}_{m,x}$ corresponds to a codeword in $\mathcal{C}_{m-x-1,x}$ that starts with $0$ from the left and shares the remaining $m-x-2$ RMBs with the codeword in $\mathcal{C}_{m,x}$. Thus, the cardinality of Group~4 is: \begin{equation}\label{eqn_card4} N_4(m, x) = \frac{1}{2} N(m-x-1, x). \end{equation} \textbf{Group~5:} Each codeword in Group~5 in $\mathcal{C}_{m,x}$ corresponds to a codeword in $\mathcal{C}_{m-x-1,x}$ that starts with $1$ from the left and shares the remaining $m-x-2$ RMBs with the codeword in $\mathcal{C}_{m,x}$. Thus, the cardinality of Group~5 is: \begin{equation}\label{eqn_card5} N_5(m, x) = \frac{1}{2} N(m-x-1, x). 
\end{equation} From (\ref{eqn_card1}), (\ref{eqn_card2}), (\ref{eqn_card3}), (\ref{eqn_card4}), and (\ref{eqn_card5}), we get: \begin{align} N(m, x) &= \sum_{\ell=1}^5 N_\ell (m, x) \nonumber \\ &= N(m-1, x) + N(m-x-1, x), \nonumber \end{align} which completes the proof. \end{IEEEproof} \begin{remark} The cardinalities $N_2(m, x)$ and $N_3(m, x)$ can be further simplified by rewriting the term $\sum_{j=2}^{x+1} N(m-x-j, x)$. Observe that: \begin{equation} \sum_{\ell=1}^2 N_\ell(m, x) = \sum_{\ell=3}^5 N_\ell(m, x). \end{equation} Consequently, using (\ref{eqn_card1}), (\ref{eqn_card2}), (\ref{eqn_card3}), (\ref{eqn_card4}), and (\ref{eqn_card5}) gives: \begin{align} &N(m-1, x) - \frac{1}{2}\sum_{j=2}^{x+1} N(m-x-j, x) = \nonumber \\ &N(m-x-1, x) + \frac{1}{2}\sum_{j=2}^{x+1} N(m-x-j, x), \end{align} resulting in: \begin{equation}\label{eqn_sum} \sum_{j=2}^{x+1} N(m-x-j, x) = N(m-1, x) - N(m-x-1, x). \end{equation} Substituting (\ref{eqn_sum}) in (\ref{eqn_card2}) and (\ref{eqn_card3}) gives: \begin{equation}\label{eqn_card2f} N_2(m, x) = \frac{1}{2} N(m-x-1, x), \end{equation} \begin{equation}\label{eqn_card3f} N_3(m, x) = \frac{1}{2} \left [ N(m-1, x) - N(m-x-1, x) \right ]. \end{equation} \end{remark} The value of Theorem~\ref{thm_loco_card} is the insight it provides into the structure of $\mathcal{C}_{m,x}$. Not only does Theorem~\ref{thm_loco_card} perform enumeration via simple recursion, it also significantly contributes to the low-complexity encoding and decoding schemes, which are based on the lexicographic ordering. Note that $N(m, x)$ is always even. For $x=1$, the cardinalities form a Fibonacci sequence as (\ref{eqn_rec}) becomes: \begin{equation}\label{eqn_recf} N(m, 1) = N(m-1, 1) + N(m-2, 1). \end{equation} The cardinalities $N(m, 1)$ for $m \in \{1, 2, \dots, 6\}$ are given in the last row of Table~\ref{table_1}. \begin{example}\label{ex_1} Consider the LOCO code $\mathcal{C}_{6,1}$ illustrated in the last column of Table~\ref{table_1}. 
From Theorem~\ref{thm_loco_card}, $N(0, 1) \triangleq 2$, $N(1, 1) \triangleq 2$, $N(2, 1) = 4$, $N(3, 1) = 6$, and $N(4, 1) = 10$. Thus, and from (\ref{eqn_rec}), the cardinality of $\mathcal{C}_{6,1}$ is: \begin{equation} N(6, 1) = N(5, 1) + N(4, 1) = 16 + 10 = 26, \nonumber \end{equation} which can also be obtained from the cardinalities of the five groups that are: \begin{align} N_1(6, 1) &= \frac{1}{2}N(5, 1) = 8, \nonumber \\ N_2(6, 1) &= \frac{1}{2} \left [N(5, 1) - N(3, 1) \right ] = \frac{1}{2}N(4, 1) = 5, \nonumber \\ N_3(6, 1) &= \frac{1}{2}N(3, 1) = \frac{1}{2} \left [N(5, 1) - N(4, 1) \right ] = 3, \nonumber \\ N_4(6, 1) &= \frac{1}{2}N(4, 1) = 5, \nonumber \\ N_5(6, 1) &= \frac{1}{2}N(4, 1) = 5. \nonumber \end{align} \end{example} We now use the group structure of LOCO codes to define a lexicographic indexing of codewords. Define the index of a codeword $\bold{c} \in \mathcal{C}_{m,x}$ as $g(m, x, \bold{c}) \in \{0, 1, \dots, N(m, x)-1\}$, which we also abbreviate to $g(\bold{c})$ when the context is clear. Since the five groups can be defined for a LOCO code of any length, we define them for $\mathcal{C}_{m+1, x}$. For Groups~1, 2, and 3 in $\mathcal{C}_{m+1, x}$, let $\bold{c} \in \mathcal{C}_{m,x}$ be the corresponding codeword to $\bold{c}' \in \mathcal{C}_{m+1,x}$ according to the proof of Theorem~\ref{thm_loco_card}, i.e., the $m$ RMBs in $\bold{c}'$ are $\bold{c}$. Moreover, for Groups~4 and 5 in $\mathcal{C}_{m+1, x}$, let $\bold{c}'' \in \mathcal{C}_{m-x,x}$ be the corresponding codeword to $\bold{c}' \in \mathcal{C}_{m+1,x}$ according to the proof of Theorem~\ref{thm_loco_card}, i.e., the $m-x$ RMBs in $\bold{c}'$ are $\bold{c}''$. 
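The recursion of Theorem~\ref{thm_loco_card} is easy to check against a brute-force enumeration. The following illustrative Python sketch (the helper names are ours) reproduces the cardinalities in the last row of Table~\ref{table_1}:

```python
from itertools import product

def forbidden_patterns(x):
    """T_x = {0 1^y 0, 1 0^y 1 : 1 <= y <= x} as bit strings."""
    return ({'0' + '1' * y + '0' for y in range(1, x + 1)}
            | {'1' + '0' * y + '1' for y in range(1, x + 1)})

def loco_code(m, x):
    """All length-m binary words avoiding T_x, in lexicographic order."""
    pats = forbidden_patterns(x)
    words = (''.join(t) for t in product('01', repeat=m))
    return [w for w in words if not any(p in w for p in pats)]

def N(m, x):
    """Cardinality of C_{m,x} via N(m,x) = N(m-1,x) + N(m-x-1,x)."""
    if m <= 1:
        return 2  # N(m, x) = 2 for m <= 1, by definition
    return N(m - 1, x) + N(m - x - 1, x)

# Brute force agrees with the recursion, e.g., N(6, 1) = 26.
assert all(len(loco_code(m, 1)) == N(m, 1) for m in range(1, 7))
```

The list returned by \texttt{loco\_code(6, 1)} matches the last column of Table~\ref{table_1}; in particular, the codeword at index $20$ is $110011$.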
We define the \textbf{shift in codeword indices} for different groups in $\mathcal{C}_{m+1, x}$ as follows: \begin{align}\label{eqn_shift} \zeta_\ell \triangleq \left\{\begin{matrix}g(m+1, x, \bold{c}')-g(m, x, \bold{c}), \textup{ } &\ell \in \{1, 2, 3\}, \\ g(m+1, x, \bold{c}')-g(m-x, x, \bold{c}''), \textup{ } &\ell \in \{4, 5\}, \end{matrix}\right. \end{align} where $\ell$ is the group index. Observe that this shift is fixed for all the codewords in the same group in $\mathcal{C}_{m+1, x}$. The following lemma gives the values of the shift for all the five groups. \begin{lemma}\label{lem_shifts} The shift in codeword indices defined in (\ref{eqn_shift}) for different groups in a LOCO code $\mathcal{C}_{m+1, x}$ is given by: \begin{align}\label{eqn_svals} \zeta_\ell = \left\{\begin{matrix}0, \textup{ } &\ell = 1, \\ -\frac{1}{2} \left [ N(m, x) - N(m-x, x) \right ], \textup{ } &\ell = 2, \\ N(m-x, x), \textup{ } &\ell = 3, \\ \frac{1}{2} N(m+1, x), \textup{ } &\ell = 4, \\ N(m, x), \textup{ } &\ell = 5. \end{matrix}\right. \end{align} \end{lemma} \begin{IEEEproof} We prove (\ref{eqn_svals}) by deriving $\zeta_\ell$ for each group of codewords in $\mathcal{C}_{m+1, x}$ as follows. \textbf{Group~1:} Since corresponding codewords in $\mathcal{C}_{m+1,x}$ and in $\mathcal{C}_{m,x}$ have the same index for that group, we get: \begin{equation}\label{eqn_zeta1} \zeta_1 = g(m+1, x, \bold{c}')-g(m, x, \bold{c}) = 0. \end{equation} \textbf{Group~2:} Here, the difference in codeword indices between $\bold{c}' \in \mathcal{C}_{m+1,x}$ and $\bold{c} \in \mathcal{C}_{m,x}$ equals the total number of codewords starting with $1$ from the left in $\mathcal{C}_{m,x}$ that do not correspond to codewords in Group~2 in $\mathcal{C}_{m+1,x}$, but with a negative sign. From the proof of Theorem~\ref{thm_loco_card}, this number is exactly the cardinality of Group~3 in $\mathcal{C}_{m+1,x}$. 
Thus, from (\ref{eqn_card3f}): \begin{align}\label{eqn_zeta2} \zeta_2 &= g(m+1, x, \bold{c}')-g(m, x, \bold{c}) = - N_3(m+1, x) \nonumber \\ &= -\frac{1}{2} \left [ N(m, x) - N(m-x, x) \right ]. \end{align} \textbf{Group~3:} The subgroups that constitute Group~3 are consecutive; therefore, we only need to calculate the shift in codeword indices for Subgroup~3.1. Subgroup~3.1 in $\mathcal{C}_{m+1,x}$ comes right after Groups~1, 2, and 4 (see Table~\ref{table_1}). Note that Groups~1 and 2 consist of all codewords starting with $0$ from the left in $\mathcal{C}_{m+1,x}$. On the other hand, the codewords in $\mathcal{C}_{m,x}$ that correspond to the codewords in Subgroup~3.1 in $\mathcal{C}_{m+1,x}$ come right after all the codewords that start with $0$ from the left. Consequently, and using (\ref{eqn_card4}): \begin{align}\label{eqn_zeta3} \zeta_3 &= g(m+1, x, \bold{c}')-g(m, x, \bold{c}) \nonumber \\ &= \frac{1}{2}N(m+1, x) + N_4(m+1, x) - \frac{1}{2}N(m, x) \nonumber \\ &= \frac{1}{2} \left [N(m, x) + N(m-x, x) \right ] \nonumber \\ &+ \frac{1}{2}N(m-x, x) - \frac{1}{2}N(m, x) \nonumber \\ &= N(m-x, x). \end{align} \textbf{Group~4:} Group~4 in $\mathcal{C}_{m+1,x}$ comes right after Groups~1 and 2. On the other hand, the corresponding codewords in $\mathcal{C}_{m-x,x}$ for that group start from codeword index $0$. Thus, \begin{align}\label{eqn_zeta4} \zeta_4 &= g(m+1, x, \bold{c}')-g(m-x, x, \bold{c}'') = \frac{1}{2} N(m+1, x). \end{align} \textbf{Group~5:} Because it is the last group in order, Group~5 in $\mathcal{C}_{m+1,x}$ starts at a codeword index resulting from subtracting the cardinality of Group~5 itself from $N(m+1, x)$. On the other hand, the corresponding codewords in $\mathcal{C}_{m-x,x}$ come right after all the codewords that start with $0$ from the left. 
Consequently, and using (\ref{eqn_card5}): \begin{align}\label{eqn_zeta5} \zeta_5 &= g(m+1, x, \bold{c}')-g(m-x, x, \bold{c}'') \nonumber \\ &= N(m+1, x) - N_5(m+1, x) - \frac{1}{2} N(m-x, x) \nonumber \\ &= N(m, x) + N(m-x, x) - \frac{1}{2} N(m-x, x) \nonumber \\ &- \frac{1}{2} N(m-x, x) = N(m, x). \end{align} Noting that (\ref{eqn_zeta1}), (\ref{eqn_zeta2}), (\ref{eqn_zeta3}), (\ref{eqn_zeta4}), and (\ref{eqn_zeta5}) combined are (\ref{eqn_svals}) completes the proof. \end{IEEEproof} \begin{example}\label{ex_2} From (\ref{eqn_svals}), the values of $\zeta_\ell$, $\ell \in \{1, 2, \dots, 5\}$, for the LOCO code $\mathcal{C}_{6,1}$ given in the last column of Table~\ref{table_1} are: \begin{align} \zeta_1 &= 0, \nonumber \\ \zeta_2 &= -\frac{1}{2} \left [ N(5, 1) - N(4, 1) \right ] = -3, \nonumber \\ \zeta_3 &= N(4, 1) = 10, \nonumber \\ \zeta_4 &= \frac{1}{2} N(6, 1) = 13, \nonumber \\ \zeta_5 &= N(5, 1) = 16. \nonumber \end{align} Note that here $m+1 =6$, i.e., $m=5$, and $x=1$. \end{example} \section{Practical Encoding and Decoding \\ of LOCO Codes}\label{sec_pract} In this section, we describe how lexicographic indexing supports simple, practical encoding and decoding of LOCO codes. The following theorem is fundamental to the encoding and decoding algorithms presented in Section \ref{sec_ralg}. In the following, we define a codeword $\bold{c} \in \mathcal{C}_{m,x}$ as $\bold{c} \triangleq \left [c_{m-1} \textup{ } c_{m-2} \textup{ } \dots \textup{ } c_0 \right ]$, where $c_i \in \{0, 1\}$, for all $i$. The same applies for $\bold{c}' \in \mathcal{C}_{m+1,x}$ and $\bold{c}'' \in \mathcal{C}_{m-x,x}$. Note that codeword indexing is trivial for the case of $m=1$. 
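As a quick numerical check, the shifts of Lemma~\ref{lem_shifts} can be evaluated directly from the cardinality recursion; the illustrative Python sketch below (function names are ours) reproduces the values in Example~\ref{ex_2}:

```python
def N(m, x):
    """Cardinality of C_{m,x}: N(m,x) = 2 for m <= 1, else the recursion."""
    if m <= 1:
        return 2
    return N(m - 1, x) + N(m - x - 1, x)

def shifts(m, x):
    """The shifts zeta_1, ..., zeta_5 for the code C_{m+1,x}."""
    return {
        1: 0,
        2: -(N(m, x) - N(m - x, x)) // 2,  # exact: N(m, x) is always even
        3: N(m - x, x),
        4: N(m + 1, x) // 2,
        5: N(m, x),
    }

# For C_{6,1}, i.e., m = 5 and x = 1, as in Example 2:
# shifts(5, 1) == {1: 0, 2: -3, 3: 10, 4: 13, 5: 16}
```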
\begin{theorem}\label{thm_induct} Consider a LOCO code $\mathcal{C}_{m,x}$ with $m \geq 2$.~The index $g(\bold{c})$ of a codeword $\bold{c} \in \mathcal{C}_{m,x}$ is derived from $\bold{c}$ itself according to the following two equations: \noindent If the LMB $c_{m-1} = 0$: \begin{equation}\label{eqn_g0} g(\bold{c}) = \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1}. \end{equation} If the LMB $c_{m-1} = 1$: \begin{equation}\label{eqn_g1} g(\bold{c}) = \frac{1}{2} \left [N(m, x) + \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1} \right ]. \end{equation} Here, we use the abbreviated notation $g(\bold{c})$ for simplicity. \end{theorem} \begin{IEEEproof} We prove Theorem~\ref{thm_induct} by induction as follows. \textbf{Base:} The base case here is the case of $m=2$. Let the four available codewords in $\mathcal{C}_{2,x}$ be $\bold{c}_0$, $\bold{c}_1$, $\bold{c}_2$, and $\bold{c}_3$, with the subscript of $\bold{c}$ being its index. We need to prove that (\ref{eqn_g0}) and (\ref{eqn_g1}) yield $g(\bold{c}_u) = u$, $u \in \{0, 1, 2, 3\}$. Moreover, the bits of codeword $\bold{c}_u$ are $c_{u,i}$, $i \in \{0, 1\}$. \begin{align} g(\bold{c}_0) &= \frac{1}{2} \sum_{i=0}^{0} N(i-x+1, x) \big \vert_{c_{0,i} = 1} = 0, \nonumber \\ g(\bold{c}_1) &= \frac{1}{2} \sum_{i=0}^{0} N(i-x+1, x) \big \vert_{c_{1,i} = 1} = \frac{1}{2} N(-x+1, x) = 1, \nonumber \\ g(\bold{c}_2) &= \frac{1}{2} \left [N(2, x) + \sum_{i=0}^{0} N(i-x+1, x) \big \vert_{c_{2,i} = 1} \right ] \nonumber \\ &= \frac{1}{2} \left [ 4 + 0 \right ] = 2, \nonumber \\ g(\bold{c}_3) &= \frac{1}{2} \left [N(2, x) + \sum_{i=0}^{0} N(i-x+1, x) \big \vert_{c_{3,i} = 1} \right ] \nonumber \\ &= \frac{1}{2} \left [ 4 + N(-x+1, x) \right ] = \frac{1}{2} \left [ 4 + 2 \right ] = 3. \end{align} Recall that $N(-x+1, x) = 2$, for all $x \in \{1, 2, \dots\}$ by definition from (\ref{eqn_Ndef}). Note also that $N(2, x) = 4$, for all $x \in \{1, 2, \dots\}$. 
\textbf{Assumption:} We assume that (\ref{eqn_g0}) and (\ref{eqn_g1}) hold for the case of $\overline{m} \in \{3, 4, \dots, m\}$, i.e., for all the LOCO codes $\mathcal{C}_{\overline{m},x}$ of length $\overline{m} \in \{3, 4, \dots, m\}$. In particular, \noindent If the LMB $\overline{c}_{\overline{m}-1} = 0$: \begin{equation}\label{eqn_asum0} g(\overline{m}, x, \overline{\bold{c}}) = \frac{1}{2} \sum_{i=0}^{\overline{m}-2} N(i-x+1, x) \big \vert_{\overline{c}_i = 1}. \end{equation} If the LMB $\overline{c}_{\overline{m}-1} = 1$: \begin{equation}\label{eqn_asum1} g(\overline{m}, x, \overline{\bold{c}}) = \frac{1}{2} \left [N(\overline{m}, x) + \sum_{i=0}^{\overline{m}-2} N(i-x+1, x) \big \vert_{\overline{c}_i = 1} \right ]. \end{equation} Note that $\overline{\bold{c}}$ with bits $\overline{c}_i$, $i \in \{0, 1, \dots, \overline{m}-1\}$, is a codeword in the LOCO code $\mathcal{C}_{\overline{m},x}$. \textbf{To be proved:} We prove that (\ref{eqn_g0}) and (\ref{eqn_g1}) hold for the case of $m+1$, i.e., for the LOCO codes $\mathcal{C}_{m+1,x}$ of length $m+1$. In particular, \noindent If the LMB $c'_m = 0$: \begin{equation}\label{eqn_prove0} g(m+1, x, \bold{c}') = \frac{1}{2} \sum_{i=0}^{m-1} N(i-x+1, x) \big \vert_{c'_i = 1}. \end{equation} If the LMB $c'_m = 1$: \begin{align}\label{eqn_prove1} &g(m+1, x, \bold{c}') \nonumber \\ &= \frac{1}{2} \left [N(m+1, x) + \sum_{i=0}^{m-1} N(i-x+1, x) \big \vert_{c'_i = 1} \right ]. \end{align} We prove (\ref{eqn_prove0}) and (\ref{eqn_prove1}) for the five groups of codewords in $\mathcal{C}_{m+1,x}$ making use of the inductive assumption and Lemma~\ref{lem_shifts}. \textbf{Group~1:} From (\ref{eqn_zeta1}), we know that for Group~1: \begin{equation} g(m+1, x, \bold{c}') = g(m, x, \bold{c}). \nonumber \end{equation} Note that here both $\bold{c}'$ and $\bold{c}$ start with $0$ from the left. 
Consequently, and using the assumption in (\ref{eqn_asum0}): \begin{equation}\label{eqn_temp1} g(m+1, x, \bold{c}') = \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1}. \end{equation} Since $\bold{c}'$ and $\bold{c}$ share the $m-1$ RMBs, and since $\bold{c}'$ starts with $00$ from the left, i.e., $c'_{m-1}=0$, (\ref{eqn_temp1}) can be written as: \begin{equation}\label{eqn_res1} g(m+1, x, \bold{c}') = \frac{1}{2} \sum_{i=0}^{m-1} N(i-x+1, x) \big \vert_{c'_i = 1}. \end{equation} \textbf{Group~2:} From (\ref{eqn_zeta2}), we know that for Group~2: \begin{equation} g(m+1, x, \bold{c}') = g(m, x, \bold{c}) - \frac{1}{2} \left [ N(m, x) - N(m-x, x) \right ]. \nonumber \end{equation} Note that here $\bold{c}'$ starts with $0$ from the left while $\bold{c}$ starts with $1$ from the left. Consequently, and using the assumption in (\ref{eqn_asum1}): \begin{align}\label{eqn_temp2} g(m+1, x, \bold{c}') &= \frac{1}{2} \left [N(m, x) + \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1} \right ] \nonumber \\ &- \frac{1}{2} \left [ N(m, x) - N(m-x, x) \right ]. \end{align} Consequently, we get: \begin{align}\label{eqn_temp4} &g(m+1, x, \bold{c}') \nonumber \\ &= \frac{1}{2} \left [N(m-x, x) + \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1} \right ]. \end{align} Since $\bold{c}'$ and $\bold{c}$ share the $m-1$ RMBs, and since $\bold{c}'$ starts with $01$ from the left, i.e., $c'_{m-1}=1$, (\ref{eqn_temp4}) can be simplified as: \begin{equation}\label{eqn_res2} g(m+1, x, \bold{c}') = \frac{1}{2} \sum_{i=0}^{m-1} N(i-x+1, x) \big \vert_{c'_i = 1}. \end{equation} Equations (\ref{eqn_res1}) and (\ref{eqn_res2}) prove that (\ref{eqn_prove0}) holds when the LMB of $\bold{c}' \in \mathcal{C}_{m+1,x}$ satisfies $c'_m = 0$. \textbf{Group~3:} From (\ref{eqn_zeta3}), we know that for Group~3: \begin{equation} g(m+1, x, \bold{c}') = g(m, x, \bold{c}) + N(m-x, x). \nonumber \end{equation} Note that here both $\bold{c}'$ and $\bold{c}$ start with $1$ from the left. 
Consequently, and using the assumption in (\ref{eqn_asum1}): \begin{align}\label{eqn_temp5} g(m+1, x, \bold{c}') &= \frac{1}{2} \left [N(m, x) + \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1} \right ] \nonumber \\ &+ N(m-x, x). \end{align} Observe that using (\ref{eqn_rec}), we have: \begin{align}\label{eqn_temp6} &\frac{1}{2} N(m, x) + N(m-x, x) \nonumber \\ &= \frac{1}{2} \left [ N(m, x) + N(m-x, x) + N(m-x, x) \right ] \nonumber \\ &= \frac{1}{2} \left [ N(m+1, x) + N(m-x, x) \right ]. \end{align} Substituting (\ref{eqn_temp6}) in (\ref{eqn_temp5}) gives: \begin{align}\label{eqn_temp7} g(m+1, x, \bold{c}') &= \frac{1}{2} \Bigg [N(m+1, x) + N(m-x, x) \nonumber \\ &\hspace{+1.0em} + \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1} \Bigg ]. \end{align} Since $\bold{c}'$ and $\bold{c}$ share the $m-1$ RMBs, and since $\bold{c}'$ starts with $11$ from the left, i.e., $c'_{m-1}=1$, (\ref{eqn_temp7}) can be simplified as: \begin{align}\label{eqn_res3} &g(m+1, x, \bold{c}') \nonumber \\ &= \frac{1}{2} \left [N(m+1, x) + \sum_{i=0}^{m-1} N(i-x+1, x) \big \vert_{c'_i = 1} \right ]. \end{align} \textbf{Group~4:} From (\ref{eqn_zeta4}), we know that for Group~4: \begin{equation} g(m+1, x, \bold{c}') = g(m-x, x, \bold{c}'') + \frac{1}{2} N(m+1, x). \nonumber \end{equation} Note that here $\bold{c}'$ starts with $1$ from the left while $\bold{c}''$ starts with $0$ from the left. Consequently, and using the assumption in (\ref{eqn_asum0}): \begin{equation}\label{eqn_temp8} g(m+1, x, \bold{c}') = \frac{1}{2} \sum_{i=0}^{m-x-2} N(i-x+1, x) \big \vert_{c''_i = 1} + \frac{1}{2} N(m+1, x). 
\end{equation} Since $\bold{c}'$ and $\bold{c}''$ share the $m-x-1$ RMBs, and since $\bold{c}'$ starts with $1 \bold{0}^{x+1}$ from the left, i.e., $c'_{m-1}=0, \textup{ } c'_{m-2}=0, \dots, \textup{ } c'_{m-x-1}=0$, (\ref{eqn_temp8}) can be written as: \begin{align}\label{eqn_res4} &g(m+1, x, \bold{c}') \nonumber \\ &= \frac{1}{2} \left [N(m+1, x) + \sum_{i=0}^{m-1} N(i-x+1, x) \big \vert_{c'_i = 1} \right ]. \end{align} \textbf{Group~5:} From (\ref{eqn_zeta5}), we know that for Group~5: \begin{equation} g(m+1, x, \bold{c}') = g(m-x, x, \bold{c}'') + N(m, x). \nonumber \end{equation} Note that here both $\bold{c}'$ and $\bold{c}''$ start with $1$ from the left. Consequently, and using the assumption in (\ref{eqn_asum1}): \begin{align}\label{eqn_temp9} g(m+1, x, \bold{c}') &= \frac{1}{2} \Bigg [N(m-x, x) + 2 N(m, x) \nonumber \\ &\hspace{+1.0em} + \sum_{i=0}^{m-x-2} N(i-x+1, x) \big \vert_{c''_i = 1} \Bigg ]. \end{align} Observe that using (\ref{eqn_rec}) and (\ref{eqn_sum}), we have: \begin{align}\label{eqn_temp10} &N(m-x, x) + 2N(m, x) = N(m+1, x) + N(m, x) \nonumber \\ &= N(m+1, x) + N(m-x, x) + \sum_{j=1}^{x} N(m-x-j, x) \nonumber \\ &= N(m+1, x) + \sum_{j=0}^{x} N(m-x-j, x). \end{align} Using the change of variables $\overline{i}=m-j-1$ for the summation in (\ref{eqn_temp10}) yields: \begin{align}\label{eqn_temp11} &N(m-x, x) + 2N(m, x) \nonumber \\ &= N(m+1, x) + \hspace{-0.5em}\sum_{\overline{i}=m-x-1}^{m-1} N(\overline{i}-x+1, x). \end{align} Substituting (\ref{eqn_temp11}) in (\ref{eqn_temp9}) gives: \begin{align}\label{eqn_temp12} g(m+1, x, \bold{c}') &= \frac{1}{2} \Bigg [N(m+1, x) + \hspace{-0.5em}\sum_{\overline{i}=m-x-1}^{m-1} N(\overline{i}-x+1, x) \nonumber \\ &\hspace{+1.0em} + \sum_{i=0}^{m-x-2} N(i-x+1, x) \big \vert_{c''_i = 1} \Bigg ]. 
\end{align} Since $\bold{c}'$ and $\bold{c}''$ share the $m-x-1$ RMBs, and since $\bold{c}'$ starts with $1 \bold{1}^{x+1}$ from the left, i.e., $c'_{m-1}=1, \textup{ } c'_{m-2}=1, \dots, \textup{ } c'_{m-x-1}=1$, the two summations in (\ref{eqn_temp12}) can be combined in one summation, and (\ref{eqn_temp12}) can be written as: \begin{align}\label{eqn_res5} &g(m+1, x, \bold{c}') \nonumber \\ &= \frac{1}{2} \left [N(m+1, x) + \sum_{i=0}^{m-1} N(i-x+1, x) \big \vert_{c'_i = 1} \right ]. \end{align} Equations (\ref{eqn_res3}), (\ref{eqn_res4}), and (\ref{eqn_res5}) prove that (\ref{eqn_prove1}) holds if the LMB in $\bold{c}' \in \mathcal{C}_{m+1,x}$, $c_m = 1$. As a result of the above analysis, (\ref{eqn_prove0}) and (\ref{eqn_prove1}) are proved, i.e., the induction is proved. Therefore, Theorem~\ref{thm_induct} is proved for any LOCO code $\mathcal{C}_{m,x}$, for all $m \geq 2$ and for all $x \geq 1$. \end{IEEEproof} \begin{remark} One useful way to verify (\ref{eqn_g1}) for Groups~4 and 5 of a LOCO code $\mathcal{C}_{m,x}$ is to notice that the codewords in Group~4 are the first $N_4(m, x)$ codewords in Group~1 after replacing the $0$ in the LMB with $1$ for each. Moreover, the codewords in Group~5 are the codewords in Group~2 after replacing the $0$ in the LMB with $1$ for each. Therefore, to get the index $g(\bold{c})$ for a codeword in Group~4 (resp., 5) we need to add $\frac{1}{2}N(m, x)$ to the index of the corresponding codeword in Group~1 (resp., 2), which is obtained from (\ref{eqn_g0}). The result of this addition is (\ref{eqn_g1}). \end{remark} The value of Theorem~\ref{thm_induct} is that it provides the mathematical foundation for the practical encoding and decoding algorithms of our LOCO codes via lexicographic indexing. In particular, this theorem introduces a simple one-to-one mapping from $g(\bold{c})$ to $\bold{c}$, which is actually the encoding, and a simple one-to-one demapping from $\bold{c}$ to $g(\bold{c})$, which is actually the decoding. 
The value of this theorem is exemplified in the practical algorithms in the following section. In summary, Theorem~\ref{thm_induct} provides the encoding-decoding rule for LOCO codes. \begin{example}\label{ex_3} We illustrate Theorem~\ref{thm_induct} by applying (\ref{eqn_g0}) to one codeword and (\ref{eqn_g1}) to another codeword in $\mathcal{C}_{6,1}$ given in Table~\ref{table_1}. The first codeword is the one with the index $9$. This codeword has $c_{m-1}=0$; thus, we use (\ref{eqn_g0}): \begin{align} g(\bold{c}) &= \frac{1}{2} \sum_{i=0}^{4} N(i, x) \big \vert_{c_i = 1} \nonumber \\ &= \frac{1}{2} \left [ N(0, 1) + N(3, 1) + N(4, 1) \right ] \nonumber \\ &= \frac{1}{2} \left [ 2 + 6 + 10 \right ] = 9. \nonumber \end{align} The second codeword is the one with the index $24$. This codeword has $c_{m-1}=1$; thus, we use (\ref{eqn_g1}): \begin{align} g(\bold{c}) &= \frac{1}{2} \left [N(6, 1) + \sum_{i=0}^{4} N(i, 1) \big \vert_{c_i = 1} \right ] \nonumber \\ &= \frac{1}{2} \left [ 26 + N(1, 1) + N(2, 1) + N(3, 1) + N(4, 1) \right ] \nonumber \\ &= \frac{1}{2} \left [ 26 + 2 + 4 + 6 + 10 \right ] = 24. \nonumber \end{align} \end{example} Example~\ref{ex_3} shows how the index, which implies the original message, can be recovered from the LOCO codeword. \section{Rate Discussion and Algorithms}\label{sec_ralg} We first discuss \textbf{bridging patterns}. Consider the following scenario. The codeword at transmission (writing) instance $t$ is ending with $00$ from the right, while the codeword at instance $t+1$ is starting with $10$ from the left. The stream containing the two codewords will then have the pattern $010$, which is a forbidden pattern for any LOCO code. This is the motivation behind adding bridging patterns. In particular, bridging patterns prevent forbidden patterns from appearing across two consecutive codewords. 
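The $010$ scenario above can be reproduced mechanically. The following minimal hypothetical Python sketch (helper name is ours; for $x=1$ we take the forbidden set to be $\{010, 101\}$, the patterns with two transitions one bit duration apart) concatenates two valid codewords of $\mathcal{C}_{6,1}$ directly and then with a bridging symbol $z$:

```python
def has_forbidden(stream, x=1):
    # A forbidden pattern must be a contiguous run of *bits*;
    # the no-transmission symbol z breaks any such run.
    patterns = ["0" + "1" * j + "0" for j in range(1, x + 1)] + \
               ["1" + "0" * j + "1" for j in range(1, x + 1)]
    return any(p in seg for seg in stream.split("z") for p in patterns)

cw_t, cw_t1 = "011000", "100000"    # each avoids the forbidden patterns on its own
assert not has_forbidden(cw_t) and not has_forbidden(cw_t1)
assert has_forbidden(cw_t + cw_t1)            # "010" appears at the seam
assert not has_forbidden(cw_t + "z" + cw_t1)  # first bridging method, x = 1
```

Since $z$ is not a bit, any forbidden pattern must lie entirely within one all-bit segment, which is why inserting $\bold{z}^x$ between codewords suffices.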
If the patterns in $\mathcal{T}_x$ are prevented (Condition~3 in Definition \ref{def_loco} is satisfied), any two consecutive transitions will be separated by at least $x+1$ successive bit durations. Transitions are either from $0$ to $1$, i.e., $-A$ to $+A$, or from $1$ to $0$, i.e., $+A$ to $-A$. Define the symbol $z$ as the no transmission (no writing) symbol. For example, in magnetic recording, $z$ represents the state when the magnetic grain is unmagnetized. As done before, we also define the notation $\bold{z}^r$ to represent a run of $r$ consecutive $z$ symbols. There are two methods for adding bridging patterns that prevent forbidden patterns from appearing in streams of LOCO codewords. The first method is simply to add the bridging pattern $\bold{z}^x$ between every two consecutive LOCO codewords. The second method is to make a run-time decision on the bridging pattern of length $x$ based on the $x+1$ RMBs in the codeword at instance $t$ and the $x+1$ LMBs in the codeword at instance $t+1$. In the first method, adding $x$ $z$ symbols, i.e., not transmitting (not writing) for $x$ successive bit durations, guarantees that no pattern in $\mathcal{T}_x$ appears between consecutive LOCO codewords in $\mathcal{C}_{m, x}$. This method is quite simple and does not require any knowledge of the codewords being transmitted (written). However, it is not optimal in the sense that it does not provide the maximum achievable protection, e.g., from ISI in MR systems, for the bits at the two ends of the codeword. For example, in the scenario at the start of this section, it is best to use $1$ for bridging if $x=1$. While the second method provides better protection for the bits at the two ends of the codeword, it introduces additional complexity and latency. However, it is still feasible for small values of $x$. For example, Table \ref{table_2} provides the bridging patterns of the second method for LOCO codes with $x=1$.
\begin{table} \caption{Bridging patterns of the second method for LOCO codes with $x=1$.} \vspace{-0.5em} \centering \scalebox{1.00} { \begin{tabular}{|c|c|c|} \hline \makecell{RMB(s) \\ at instance $t$} & \makecell{Bridging \\ pattern} & \makecell{LMB(s) \\ at instance \\ $t+1$} \\ \hline $0$ & $0$ & $0$ \\ \hline $0$ & $0$ & $11$ \\ \hline $00$ & $1$ & $10$ \\ \hline $01$ & $z$ & $01$ \\ \hline $1$ & $1$ & $1$ \\ \hline $1$ & $1$ & $00$ \\ \hline $11$ & $0$ & $01$ \\ \hline $10$ & $z$ & $10$ \\ \hline \end{tabular}} \label{table_2} \vspace{-0.5em} \end{table} Whether the first or the second method is used for bridging, the number of added bits/symbols for each codeword is $x$. Moreover, bridging patterns are ignored at the decoding. \begin{remark} In the case of Flash systems, transitions are either from $0$ to $1$, i.e., $E$ to $+A$, or from $1$ to $0$, i.e., $+A$ to $E$. Moreover, the no writing symbol $z$ represents the state when the cell is programmed to a charge level at about the midpoint between $E$ and $+A$. \end{remark} \begin{remark} For LOCO codes with $x=1$, the optimal~bridging, in terms of bit protection, is a little different from the bridging in Table~\ref{table_2}. In particular, the two bridging patterns having symbol $z$ are replaced by $10$ and $01$ for the first and the second instances, respectively. In order to keep the code length fixed, bridging patterns $0$ and $1$ are also replaced by $00$ and $11$, respectively. However, such bridging is not efficient in terms of the added redundancy, especially for larger values of $x$, in addition to its higher complexity. Furthermore, our simulations demonstrate that the other two bridging methods described above already guarantee more than satisfactory performance. \end{remark} One of the important requirements not only in constrained codes, but also in all types of line codes is \textbf{self-clocking} \cite{siegel_mr, immink_1}.
In particular, the receiver should be capable of retrieving the clock of the transmitter from the signal itself. This requires avoiding long runs of $0$'s ($-A$'s) and long runs of $1$'s ($+A$'s). To achieve this goal, we construct the following code. \begin{definition}\label{def_clo} A self-clocked LOCO (C-LOCO) code $\mathcal{C}_{m,x}^{\textup{c}}$ is the code resulting from removing the all $0$'s and the all $1$'s codewords from the LOCO code $\mathcal{C}_{m,x}$. In particular: \begin{equation}\label{eqn_scl} \mathcal{C}_{m,x}^{\textup{c}} \triangleq \mathcal{C}_{m,x} \setminus \{\bold{0}^m, \bold{1}^m\}, \end{equation} where $m \geq 2$. The cardinality of $\mathcal{C}_{m,x}^{\textup{c}}$ is given by: \begin{equation}\label{eqn_scl_card} N^{\textup{c}}(m, x) = N(m, x) - 2. \end{equation} \end{definition} Now, there exists at least one transition in each codeword in $\mathcal{C}_{m,x}^{\textup{c}}$. Define $k_{\textup{eff}}^{\textup{c}}$ as the maximum number of successive bit durations without a transition in a stream of C-LOCO codewords that belong to $\mathcal{C}_{m,x}^{\textup{c}}$, with each two consecutive codewords separated by a bridging pattern. For brevity, we here use the format ``codeword at $t$ $-$ bridging pattern $-$ codeword at $t+1$''. The scenarios under which $k_{\textup{eff}}^{\textup{c}}$ is achieved, using the first bridging method, are: \begin{align} &1\bold{0}^{m-1} - \bold{z}^x - \bold{0}^{m-1}1, \textup{ and } \nonumber \\ &0\bold{1}^{m-1} - \bold{z}^x - \bold{1}^{m-1}0. \nonumber \end{align} The scenarios under which $k_{\textup{eff}}^{\textup{c}}$ is achieved, using the second bridging method, are: \begin{align} &1\bold{0}^{m-1} - \bold{0}^x - \bold{0}^{m-1}1, \textup{ and } \nonumber \\ &0\bold{1}^{m-1} - \bold{1}^x - \bold{1}^{m-1}0. \nonumber \end{align} Observe that a transition is only from $0$ to $1$ or from $1$ to $0$.
Consequently, regardless of the bridging method, we get: \begin{equation}\label{eqn_keff} k_{\textup{eff}}^{\textup{c}} = 2(m-1) + x. \end{equation} We are now ready to discuss the rate of C-LOCO codes. A C-LOCO code $\mathcal{C}_{m,x}^{\textup{c}}$, with $x$ bridging bits/symbols associated with each codeword, has rate: \begin{align}\label{eqn_rate} R_{\textup{LOCO}}^{\textup{c}} &= \frac{\left \lfloor \log_2 N^{\textup{c}}(m, x) \right \rfloor}{m+x} \nonumber \\ &= \frac{\left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor}{m+x}, \end{align} where $N(m, x)$ is obtained from the recursive relation (\ref{eqn_rec}). The numerator, which is $\left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor$, is the length of the messages $\mathcal{C}_{m,x}^{\textup{c}}$ encodes. Observe that a C-LOCO code $\mathcal{C}_{m,x}^{\textup{c}}$ consists of all codewords of length $m$, with the exception of the two codewords $\bold{0}^m$ and $\bold{1}^m$, that do not contain any of the forbidden patterns in $\mathcal{T}_x$. Moreover, the number of added bits/symbols for bridging is a function of $x$ only, and thus does not grow with $m$. Consequently, it follows that C-LOCO codes are \textbf{capacity-achieving constrained codes}. \begin{example}\label{ex_4} Consider again the LOCO code $\mathcal{C}_{6,1}$ in Table~\ref{table_1}. From (\ref{eqn_keff}), the C-LOCO code $\mathcal{C}_{6,1}^{\textup{c}}$ derived from $\mathcal{C}_{6,1}$ has: \begin{equation} k_{\textup{eff}}^{\textup{c}} = 2(6-1) + 1 = 11.
\nonumber \end{equation} \begin{table} \caption{The C-LOCO code $\mathcal{C}_{6,1}^{\textup{c}}$ for all messages.} \vspace{-0.5em} \centering \scalebox{1.00} { \begin{tabular}{|c|c|c|} \hline \makecell{Message} & \makecell{$g(\bold{c})$} & \makecell{Codeword $\bold{c}$} \\ \hline $0000$ & $1$ & $000001$ \\ \hline $0001$ & $2$ & $000011$ \\ \hline $0010$ & $3$ & $000110$ \\ \hline $0011$ & $4$ & $000111$ \\ \hline $0100$ & $5$ & $001100$ \\ \hline $0101$ & $6$ & $001110$ \\ \hline $0110$ & $7$ & $001111$ \\ \hline $0111$ & $8$ & $011000$ \\ \hline $1000$ & $9$ & $011001$ \\ \hline $1001$ & $10$ & $011100$ \\ \hline $1010$ & $11$ & $011110$ \\ \hline $1011$ & $12$ & $011111$ \\ \hline $1100$ & $13$ & $100000$ \\ \hline $1101$ & $14$ & $100001$ \\ \hline $1110$ & $15$ & $100011$ \\ \hline $1111$ & $16$ & $100110$ \\ \hline \end{tabular}} \label{table_3} \vspace{-0.5em} \end{table} The length of the messages $\mathcal{C}_{6,1}^{\textup{c}}$ encodes is: \begin{equation} \left \lfloor \log_2 \left( N(6, 1) - 2 \right ) \right \rfloor = \left \lfloor \log_2 24 \right \rfloor = 4. \nonumber \end{equation} The C-LOCO code $\mathcal{C}_{6,1}^{\textup{c}}$ is shown in Table~\ref{table_3} for all messages. From (\ref{eqn_rate}), the rate of $\mathcal{C}_{6,1}^{\textup{c}}$ is: \begin{equation} R_{\textup{LOCO}}^{\textup{c}} = \frac{\left \lfloor \log_2 24 \right \rfloor}{6+1} = \frac{4}{7} = 0.5714. \nonumber \end{equation} \end{example} Note that the rate of $\mathcal{C}_{6,1}^{\textup{c}}$ is relatively low because of the small code length, $m=6$, and because of the relatively high number of unused codewords. Table~\ref{table_4} shows the rates of C-LOCO codes $\mathcal{C}_{m,x}^{\textup{c}}$ for different values of $m$ and for $x \in \{1, 2\}$. The rates in Table~\ref{table_4} for C-LOCO codes with $x=1$ are significantly higher than $0.5714$. 
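The rate computation in (\ref{eqn_rate}) is easy to script. The following hypothetical Python sketch (function names are ours; it assumes the recursion $N(m, x) = N(m-1, x) + N(m-x-1, x)$ with the initial values $N(i, x) = 2$ for $i \leq 1$) returns the message length $s^{\textup{c}}$, which is also the adder size, and the rate for any $m$ and $x$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def N(m, x):
    # Assumed cardinality recursion for the LOCO code C_{m,x}.
    return 2 if m <= 1 else N(m - 1, x) + N(m - x - 1, x)

def cloco_rate(m, x):
    # Message length s^c = floor(log2(N(m,x) - 2)) and rate s^c / (m + x),
    # the rate of a C-LOCO code with x bridging bits/symbols per codeword.
    s = (N(m, x) - 2).bit_length() - 1   # floor(log2(.)) of a positive integer
    return s, s / (m + x)
```

For instance, `cloco_rate(6, 1)` returns a message length of $4$ and a rate of $4/7 \approx 0.5714$, matching Example~\ref{ex_4}, while `cloco_rate(8, 1)` gives the $6$-bit adder and rate $0.6667$ reported in Table~\ref{table_4}.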
Table~\ref{table_4} demonstrates that C-LOCO codes have rates up to $0.6923$ (resp., $0.5484$) for the case of $x=1$ (resp., $x=2$) with moderate code lengths. From the literature, the capacity of a $\mathcal{T}_x$-constrained code with $x=1$ (resp., $x=2$) is $0.6942$ (resp., $0.5515$) \cite{siegel_const, immink_1}. The table shows that the rate of the C-LOCO code $\mathcal{C}_{90,1}^{\textup{c}}$ (resp., $\mathcal{C}_{91,2}^{\textup{c}}$) is within only $0.3\%$ (resp., $0.6\%$) of the capacity. In fact, these rates increase further with an informed increase in the value of $m$ until they approach the capacity. For example, the rate of $\mathcal{C}_{489,1}^{\textup{c}}$ is $0.6939$, which is within only $0.04\%$ of the capacity. Additionally, the rate of $\mathcal{C}_{450,2}^{\textup{c}}$ is $0.5509$, which is within only $0.1\%$ of the capacity. \begin{table} \caption{Rates and adder sizes of C-LOCO codes $\mathcal{C}_{m,x}^{\textup{c}}$ for different values of $m$ and $x$. The capacity is $0.6942$ for $x=1$ and $0.5515$ for $x=2$.} \vspace{-0.5em} \centering \scalebox{1.00} { \begin{tabular}{|c|c|c|} \hline \makecell{Values of $m$ and $x$} & \makecell{$R_\textup{LOCO}^{\textup{c}}$} & \makecell{Adder size} \\ \hline $m=8$, $x=1$ & $0.6667$ & $6$ bits \\ \hline $m=18$, $x=1$ & $0.6842$ & $13$ bits \\ \hline $m=31$, $x=1$ & $0.6875$ & $22$ bits \\ \hline $m=44$, $x=1$ & $0.6889$ & $31$ bits \\ \hline $m=54$, $x=1$ & $0.6909$ & $38$ bits \\ \hline $m=90$, $x=1$ & $0.6923$ & $63$ bits \\ \hline $m=6$, $x=2$ & $0.5000$ & $4$ bits \\ \hline $m=13$, $x=2$ & $0.5333$ & $8$ bits \\ \hline $m=24$, $x=2$ & $0.5385$ & $14$ bits \\ \hline $m=33$, $x=2$ & $0.5429$ & $19$ bits \\ \hline $m=42$, $x=2$ & $0.5455$ & $24$ bits \\ \hline $m=91$, $x=2$ & $0.5484$ & $51$ bits \\ \hline \end{tabular}} \label{table_4} \vspace{-0.5em} \end{table} To compare with other line codes having similar performance, we briefly discuss $(d, k)$ RLL codes.
An RLL code with parameter $d$ constrains each codeword to have at least $d$ $0$'s between every two consecutive $1$'s. RLL codes are used with NRZI signaling. Thus, an RLL code with parameter $d$ has any two consecutive transitions separated by at least $d+1$ successive bit durations. Therefore, and from the definition of a LOCO code, an RLL code with parameter $d$ has similar performance to a LOCO code with parameter $x$. Because of their practicality, we focus on RLL codes generated via finite state machines (FSMs), and decoded via sliding window decoders \cite{siegel_mr, siegel_const, immink_1}. For $d=x$, there are three main advantages of LOCO codes over FSM-based RLL codes, and they are: \begin{enumerate} \item LOCO codes achieve higher rates. \item LOCO codes are immune against error propagation from one codeword into another. \item Balancing LOCO codes is not only simple, but also incurs a very limited rate loss. \end{enumerate} The second and third advantages will be discussed later in this paper. As for the rate advantage, a practical FSM-based RLL code with $d=1$ typically has a rate of $0.6667$ \cite{siegel_mr, siegel_const}, which is lower than the rates of all C-LOCO codes with $x=1$ in Table~\ref{table_4} except the code with $m=8$. Moreover, a practical FSM-based RLL code with $d=2$ typically has a rate of $0.5000$ \cite{siegel_mr, immink_1}, which is lower than the rates of all C-LOCO codes with $x=2$ in Table~\ref{table_4} except the code with $m=6$. The rate gain of moderate-length C-LOCO codes over practical FSM-based RLL codes, where $d=x$, is up to $10\%$. The observation that constrained codes based on lexicographic indexing offer significant rate gains compared with FSM-based constrained codes was presented in \cite{immink_lex} and \cite{braun_lex}. However, the techniques proposed in both papers require the code length to be significantly large ($m > 250$) in order to achieve such gains, which is not needed for LOCO codes.
This observation will be demonstrated even more upon introducing balanced LOCO codes. \begin{remark} While lexicographically-ordered RLL codes constructed via the ideas in \cite{tang_bahl} achieve similar rates to the rates of LOCO codes asymptotically, LOCO codes offer higher rates in the finite-length regime. The reason is that for $d=x$ and at the same length, the RLL constraint results in forbidding more prospective codewords compared with the $\mathcal{T}_x$ constraint. \end{remark} \begin{algorithm}[H] \caption{Encoding C-LOCO Codes} \begin{algorithmic}[1] \State \textbf{Input:} Incoming stream of binary messages. \State Decide the values of $m$, $x$, and the bridging method based on system requirements. \State Use (\ref{eqn_rec}) to compute $N(i, x)$, $i \in \{2, 3, \dots, m\}$. \State Calculate the message length, $s^{\textup{c}} = \left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor$. \State \textbf{for} each incoming message $\bold{b}$ of length $s^{\textup{c}}$ \textbf{do} \State \hspace{2ex} Compute $g(\bold{c})=\textup{decimal}(\bold{b})+1$. \textit{(binary to decimal)} \State \hspace{2ex} Initialize $\textup{residual}$ with $g(\bold{c})$. \State \hspace{2ex} \textbf{if} $\textup{residual} < \frac{1}{2} N(m, x)$ \textbf{then} \State \hspace{4ex} Encode $c_{m-1} = 0$. \State \hspace{2ex} \textbf{else} \State \hspace{4ex} Encode $c_{m-1} = 1$. \State \hspace{4ex} $\textup{residual} \leftarrow \textup{residual} - \frac{1}{2} N(m, x)$. \State \hspace{2ex} \textbf{end if} \State \hspace{2ex} \textbf{for} $i \in \{m-2, m-3, \dots, 0\}$ \textbf{do} \State \hspace{4ex} \textbf{if} $\textup{residual} < \frac{1}{2} N(i-x+1, x)$ \textbf{then} \State \hspace{6ex} Encode $c_i = 0$. \State \hspace{4ex} \textbf{else} \State \hspace{6ex} Initialize $\bold{f}$, which is a vector of $x$ entries with $\bold{0}$. 
\textit{(the forbidden patterns indicators)} \State \hspace{6ex} \textbf{if} $c_{i+1} = 0$ \textbf{then} \State \hspace{8ex} $\beta_0 = \frac{1}{2} N(i-x+1, x)$. \State \hspace{8ex} \textbf{for} $j \in \{1, 2, \dots, x\}$ \State \hspace{10ex} \textbf{if} $i-j < 0$ \textbf{then} \textit{(no forbidden patterns)} \State \hspace{12ex} \textbf{break}. \textit{(exit the current loop)} \State \hspace{10ex} \textbf{end if} \State \hspace{10ex} $\beta_j = \beta_{j-1} + \frac{1}{2} N(i-x+1-j, x)$. \State \hspace{10ex} \textbf{if} $\beta_{j-1} \leq \textup{residual} < \beta_j$ \textbf{then} \State \hspace{12ex} $f_j = 1$. \textit{(a forbidden pattern of the form $0\bold{1}^j 0$ is spotted and has to be avoided)} \State \hspace{12ex} \textbf{break}. \textit{(exit the current loop)} \State \hspace{10ex} \textbf{end if} \State \hspace{8ex} \textbf{end for} \State \hspace{6ex} \textbf{end if} \State \hspace{6ex} \textbf{if} $\bold{f} = \bold{0}$ \textbf{then} \textit{(no forbidden patterns)} \State \hspace{8ex} Encode $c_i = 1$. \State \hspace{8ex} $\textup{residual} \leftarrow \textup{residual} - \frac{1}{2} N(i-x+1, x)$. \State \hspace{6ex} \textbf{else} \State \hspace{8ex} Encode $c_i = 0$. \State \hspace{6ex} \textbf{end if} \State \hspace{4ex} \textbf{end if} \State \hspace{2ex} \textbf{end for} \State \hspace{2ex} Add $x$ bridging bits/symbols according to the bridging method. \textit{(the $x+1$ LMBs from the next codeword are needed here if the second bridging method is adopted)} \State \textbf{end for} \State \textbf{Output:} Outgoing stream of binary C-LOCO codewords. \end{algorithmic} \label{alg_enc} \end{algorithm} We introduce now the encoding and decoding algorithms of our C-LOCO codes, which are based on Theorem~\ref{thm_induct}. Algorithm~\ref{alg_enc} is the encoding algorithm, and Algorithm~\ref{alg_dec} is the decoding algorithm. 
The steps from 18 to 31 in Algorithm~\ref{alg_enc} are to make sure forbidden patterns in $\mathcal{T}_x$ do not appear in any codeword. Observe that this part of the algorithm focuses on patterns of the form $0\bold{1}^j 0$, $1 \leq j \leq x$, because given the LMB in any $x+2$ consecutive bits, the problem arises only when a $1$ appears earlier, not later, than expected. \begin{algorithm}[H] \caption{Decoding C-LOCO Codes} \begin{algorithmic}[1] \State \textbf{Inputs:} Incoming stream of binary C-LOCO codewords, in addition to $m$ and $x$. \State Use (\ref{eqn_rec}) to compute $N(i, x)$, $i \in \{2, 3, \dots, m\}$. \State \textbf{for} each incoming codeword $\bold{c}$ of length $m$ \textbf{do} \State \hspace{2ex} Initialize $g(\bold{c})$ with $0$. \State \hspace{2ex} \textbf{if} $c_{m-1} = 1$ \textbf{then} \State \hspace{4ex} $g(\bold{c}) \leftarrow g(\bold{c}) + \frac{1}{2} N(m, x)$. \State \hspace{2ex} \textbf{end if} \State \hspace{2ex} \textbf{for} $i \in \{m-2, m-3, \dots, 0\}$ \textbf{do} \State \hspace{4ex} \textbf{if} $c_i = 1$ \textbf{then} \State \hspace{6ex} $g(\bold{c}) \leftarrow g(\bold{c}) + \frac{1}{2} N(i-x+1, x)$. \State \hspace{4ex} \textbf{end if} \State \hspace{2ex} \textbf{end for} \State \hspace{2ex} Compute $\bold{b}=\textup{binary}(g(\bold{c})-1)$. \textit{(decimal to binary)} \State \hspace{2ex} Ignore the next $x$ bridging bits/symbols. \State \textbf{end for} \State \textbf{Output:} Outgoing stream of binary messages. \end{algorithmic} \label{alg_dec} \end{algorithm} \begin{example}\label{ex_5} We illustrate Algorithm~\ref{alg_enc} by showing how to encode a message using the C-LOCO code $\mathcal{C}_{6,1}^{\textup{c}}$ given in Table~\ref{table_3}. Here, $N(0, 1) \triangleq 2$, $N(1, 1) \triangleq 2$, $N(2, 1) = 4$, $N(3, 1) = 6$, $N(4, 1) = 10$, $N(5, 1) = 16$, and $N(6, 1) = 26$. Moreover, $s^{\textup{c}} = \left \lfloor \log_2 24 \right \rfloor = 4$. Consider the message $1110$. 
From Step~6, $g(\bold{c}) = \textup{decimal}(1110)+1 = 15$, which is the initial value of the variable $\textup{residual}$. Since $\textup{residual} > \frac{1}{2}N(6, 1) = 13$, from Step~11, $c_{5}$ is encoded as $1$. At Step~12, $\textup{residual}$ becomes $15 - 13 = 2$. Then, the algorithm enters the \textbf{for} loop from Step~14 to Step~39. The remaining $5$ bits of the codeword are encoded as follows: \begin{itemize} \item At $i=4$, $\textup{residual} < \frac{1}{2}N(4, 1) = 5$. Consequently, $c_{4}$ is encoded as $0$ at Step~16. \item At $i=3$, $\textup{residual} < \frac{1}{2}N(3, 1) = 3$. Consequently, $c_{3}$ is encoded as $0$ at Step~16. \item At $i=2$, $\textup{residual} = \frac{1}{2}N(2, 1) = 2$. Here, $c_3 = 0$. From Steps~20 and 25, $\beta_0 = \frac{1}{2}N(2, 1) = 2$ and $\beta_1 = \frac{1}{2}N(2, 1) + \frac{1}{2}N(1,1) = 3$, respectively. Since $\beta_0 = \textup{residual} < \beta_1$, the condition in Step~26 is satisfied, leading to $f_1 =1$, which means that if $c_2$ is encoded as $1$, a forbidden pattern of the form $010$ will be created on $c_3$, $c_2$, and $c_1$. Consequently, $c_{2}$ is encoded as $0$ at Step~36 to prevent this scenario. \item At $i=1$, $\textup{residual} > \frac{1}{2}N(1, 1) = 1$. Here, $c_2 = 0$. From Steps~20 and 25, $\beta_0 = \frac{1}{2}N(1, 1) = 1$ and $\beta_1 = \frac{1}{2}N(1, 1) + \frac{1}{2}N(0,1) = 2$, respectively. Since $\beta_0 < \textup{residual} = \beta_1$, the condition in Step~26 is not satisfied, leading to $f_1 =0$. Consequently, $c_{1}$ is encoded as $1$ at Step~33, and $\textup{residual}$ becomes $2 - 1 = 1$. \item At $i=0$, $\textup{residual} = \frac{1}{2}N(0, 1) = 1$. Consequently, $c_{0}$ is encoded as $1$ at Step~33. \end{itemize} As a result, the codeword generated is $100011$, which is the codeword with index $g(\bold{c})=15$ in Table~\ref{table_3}. \end{example} Example~\ref{ex_3} in Section~\ref{sec_pract} already showed how the decoding algorithm works.
As demonstrated by Algorithm~\ref{alg_enc} and Algorithm~\ref{alg_dec} in addition to Theorem~\ref{thm_induct}, the encoding procedure of C-LOCO codes is mainly comparisons and subtractions, while the decoding procedure of C-LOCO codes is mainly additions. The size of the adders used to perform these tasks is the floor of the base-$2$ logarithm of the maximum value $g(\bold{c})$ can take for a message, and it is given by: \begin{equation} s^{\textup{c}} = \left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor, \end{equation} which is the message length. Table~\ref{table_4} links the rate of a C-LOCO code with its encoding and decoding complexity through the size of the adders to be used. For example, for a C-LOCO code with $x=1$, if a rate of $0.6667$ is satisfactory, small adders of size just $6$ bits are all that is needed. However, if the rate needs to be about $0.6842$, adders of size $13$ bits should be used. Moreover, for a C-LOCO code with $x=2$, if a rate of $0.5000$ is satisfactory, small adders of size just $4$ bits are all that is needed. However, if the rate needs to be about $0.5333$, adders of size $8$ bits should be used. Note that the cardinalities $N(i, x)$, $-x+1 \leq i \leq m$, should be stored in memory offline. Note also that the multiplication by $\frac{1}{2}$ is just a right shift by one bit position in binary. From Table~\ref{table_4}, the C-LOCO code $\mathcal{C}_{90,1}^{\textup{c}}$ has rate $0.6923$ and~adder size $63$ bits. The same rate is achieved in \cite{immink_table} for an RLL code with $d=1$ at code (resp., message) length $13$ (resp., $9$) bits. However, the technique in \cite{immink_table} is based on lookup tables; thus, the complexity of the encoding and decoding is governed by lookup tables of size $(2^9)(13)=6656$ bits, which is a significantly higher complexity than what we offer.
We end this section by discussing two more aspects of the proposed LOCO codes: error propagation and parallel encoding and decoding. The fixed length of LOCO codes makes them immune against error propagation from one codeword into the following ones. In particular, multiple errors occurring in one codeword do not affect the decoding of the following codewords. However, for large code lengths, a few bit errors in a LOCO codeword can affect many bits in the message, which is the reason why we recommend LOCO codes with moderate lengths. In contrast, FSM-based RLL codes with sliding window decoders suffer from error propagation among different codewords that is exacerbated with long codeword lengths (and also with long streams of codewords) \cite{howe_prop}. Furthermore, because of their fixed length, LOCO codes enable parallel encoding and decoding of different codewords if the complexity constraints of the system allow it. This advantage can be of significant value in data storage systems, where codewords are already written and thus fully available upon receiving (reading) them. On the other hand, FSM-based RLL codes do not enable efficient parallel encoding and decoding. The properties stated here for LOCO codes also apply to the balanced LOCO codes discussed in Section~\ref{sec_bala}. \section{Density Gains in MR Systems}\label{sec_mr} Our MR system model is shown in Fig.~\ref{fig_1}, and it consists of the following modules. \begin{figure*} \vspace{-0.5em} \center \includegraphics[trim={0.0in 1.0in 0.0in 1.2in}, width=7.0in]{Figure_MR_Bdiagram.pdf} \vspace{-0.9em} \caption{MR system model with LDPC and LOCO codes used.} \label{fig_1} \vspace{-0.5em} \end{figure*} \textbf{LDPC encoder:} This is an SC LDPC encoder, which takes $w$ bits of input data and generates an SC codeword of length $n$. The adopted SC code will be described shortly.
\textbf{LOCO encoder:} It takes the SC codeword as input, and using Algorithm~\ref{alg_enc}, encodes only the $n-w$ parity bits via a C-LOCO code to significantly increase their reliability by mitigating ISI for them, as previously illustrated. The parameters of the C-LOCO code will be described shortly, but its length is much smaller than $n-w$. Thus, there is a stream of C-LOCO codewords, with every two consecutive codewords separated by a bridging pattern $\bold{z}^x$. The output of the LOCO encoder is of length $n_{\textup{ov}}$. \textbf{NRZ signal generator:} It generates an NRZ stream of $n_{\textup{ov}}$ symbols, each of which is in $\{-A, +A\}$, except for the bridging symbols. Symbol $z$ for bridging corresponds to no transmission (no writing). \textbf{Interleaver:} A pseudo-random interleaver is applied only on the $w$ bits that are not encoded via the C-LOCO code. \textbf{PR channel:} We use the PR channel described in \cite{ahh_bas}. The MR channel effects are inter-symbol interference (intrinsic memory), jitter, and electronic noise. The channel density \cite{ahh_bas, shafa_2d}, which is the ratio of the read-head pulse duration at half the amplitude to the bit duration, is swept to generate the plots. The signal-to-noise ratio (SNR) is $13.00$ dB. A continuous-time filter (CTF) followed by a digital finite-impulse-response (DFIR) filter is applied to achieve the PR equalization target $[8$~$14$~$2]$. Observe that this PR target, which is recommended by the industry, behaves in a way similar to the channel impulse response \cite{ahh_bas, shafa_2d}. \textbf{BCJR detector:} A Bahl-Cocke-Jelinek-Raviv (BCJR) detector \cite{bcjr}, which is based on pattern-dependent noise prediction (PDNP) \cite{pdnp}, is then applied to the received stream to calculate $n_{\textup{ov}}$ likelihood ratios (LRs). There is a feedback loop incorporating the detector and the decoders.
\textbf{Deinterleaver:} It rearranges the LRs of the $w$ bits that were not encoded via the C-LOCO code, i.e., the ones that were originally interleaved. \textbf{LOCO decoder:} Initially, this decoder makes a hard decision on the $n_{\textup{ov}} - w$ bits that were encoded via the C-LOCO code using their LRs. If the $\mathcal{T}_x$ constraint is violated for a received word, the LOCO decoder tries to fix that by flipping the bit whose LR is closest to $1$ (the smallest $\log_e$ LR in magnitude). In other words, the LOCO decoder performs some sort of error correction here. Next, it decodes the original $n - w$ parity bits using Algorithm~\ref{alg_dec}. Finally, the LOCO decoder sends $n$ LRs to the LDPC decoder: the $w$ LRs left as they are, and $n-w$ highly reliable LRs. \textbf{LDPC decoder:} This is a fast Fourier transform based $q$-ary sum-product algorithm (FFT-QSPA) LDPC decoder \cite{dec_fft}, with $q$ being set to $2$ here. The number of global (detector-decoders) iterations is $10$, and the number of local (LDPC decoder only) iterations is $20$. Unless a codeword is reached, the LDPC decoder performs its prescribed number of local iterations for each global iteration. At the end of each global iteration, except the last one, the LDPC decoder sends its updated $n$ LRs in the feedback loop. \textbf{LR expander:} The BCJR detector operates on $n_{\textup{ov}}$ symbols. Thus, an LR expander is used to expand the LR vector from $n$ to $n_{\textup{ov}}$ via the information it receives from the LOCO and the LDPC decoders. \textbf{Interleaver:} The interleaver in the feedback branch of the detector-decoders loop is a pseudo-random interleaver, which is applied only on the $w$ LRs of the bits that were not encoded via the C-LOCO code. At the last global iteration, looping stops, and the LDPC decoder generates the data read. More details about some of these modules can be found in \cite{ahh_bas}.
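The constraint-repair step of the LOCO decoder described above can be sketched in a few lines. This is a simplified illustration for $x=1$, where we take the forbidden patterns to be $010$ and $101$; the sign convention for the log-LRs is an assumption, and a full decoder might iterate the repair, whereas this sketch performs a single flip:

```python
def repair_word(log_lrs, forbidden=("010", "101")):
    """Hard-decide a received word from its log-LRs and, if the
    constraint is violated, flip the least reliable bit.
    Assumed convention: log_lr > 0 means bit 1 is more likely."""
    bits = ["1" if l > 0 else "0" for l in log_lrs]
    word = "".join(bits)
    if any(p in word for p in forbidden):
        # Flip the bit whose LR is closest to 1, i.e., whose
        # log-LR is smallest in magnitude.
        i = min(range(len(log_lrs)), key=lambda j: abs(log_lrs[j]))
        bits[i] = "0" if bits[i] == "1" else "1"
        word = "".join(bits)
    return word

# Hard decision gives 010110, which violates the constraint; the least
# reliable bit (index 2) is flipped, yielding the valid word 011110.
print(repair_word([-3.0, 2.5, -0.2, 2.0, 2.0, -3.0]))
```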
\begin{remark} If the C-LOCO message length, $s^{\textup{c}}$, does not divide $n-w$, we pad with a few, say $\delta$, zeros. \end{remark} One of the two reasons why we do not apply the C-LOCO code on the entire LDPC codeword here is to limit the rate loss resulting from integrating the C-LOCO code in the MR system. The other reason will be discussed along with the simulation plots. Lemma~\ref{lem_rate_mr} gives the overall rate of the LDPC-LOCO coding scheme applied in our system. \begin{lemma}\label{lem_rate_mr} Consider the following LDPC-LOCO coding scheme. A C-LOCO code of rate $R_\textup{LOCO}^{\textup{c}}$ is used to encode only the parity bits of an LDPC code of rate $R_\textup{LDPC}$. The overall rate of this scheme is: \begin{equation}\label{eqn_rate_ov} R_\textup{ov} \approx \frac{R_\textup{LDPC} R_\textup{LOCO}^{\textup{c}}}{R_\textup{LDPC} R_\textup{LOCO}^{\textup{c}} + (1 - R_\textup{LDPC})}. \end{equation} \end{lemma} \begin{IEEEproof} The length of the LDPC codeword can be written as: \begin{equation} n = w + (n-w). \end{equation} Only those $n-w$ bits are going to be encoded via the C-LOCO code. Consequently, \begin{equation}\label{eqn_ovlen} n_\textup{ov} = w + (n-w+\delta)\frac{1}{R_\textup{LOCO}^{\textup{c}}}. \end{equation} As a result, the overall rate is: \begin{align}\label{eqn_ovrate_pr} R_\textup{ov} &= \frac{w}{n_\textup{ov}} = \frac{w}{w + (n-w+\delta)\frac{1}{R_\textup{LOCO}^{\textup{c}}}} \allowdisplaybreaks \nonumber \\ &= \frac{w/n}{w/n + (1-w/n+\delta/n)\frac{1}{R_\textup{LOCO}^{\textup{c}}}} \nonumber \\ &\approx \frac{R_\textup{LDPC}}{R_\textup{LDPC} + (1-R_\textup{LDPC})\frac{1}{R_\textup{LOCO}^{\textup{c}}}} \nonumber \\ &= \frac{R_\textup{LDPC} R_\textup{LOCO}^{\textup{c}}}{R_\textup{LDPC} R_\textup{LOCO}^{\textup{c}} + (1 - R_\textup{LDPC})}. \end{align} Note that $\delta$ is very small compared with $n$.
\end{IEEEproof} Lemma~\ref{lem_rate_mr} demonstrates that the rate loss due to integrating a C-LOCO code in the MR system in the proposed manner is limited. In fact, from the expression in (\ref{eqn_rate_ov}), as $R_\textup{LDPC}$ approaches $1$, $R_\textup{ov}$ approaches $R_\textup{LDPC}$. There are two SC codes used in our simulations. The two codes are constructed according to \cite{ahh_tit2}, which provides a method to design high-performance SC codes particularly for MR systems. This method is based on the optimal overlap, circulant power optimizer (OO-CPO) approach. SC~Code~1 has column weight $=4$, maximum row weight $=17$, circulant size $=37$, memory $=1$, and coupling length $=6$. Thus, SC~Code~1 has block length $=3774$ bits and rate $\approx 0.725$. SC~Code~2 has column weight $=4$, maximum row weight $=13$, circulant size $=47$, memory $=1$, and coupling length $=7$. Thus, SC~Code~2 has block length $=4277$ bits and rate $\approx 0.648$. The differences in length and rate between the two SC codes will be explained shortly. Only SC~Code~1 will be combined with a C-LOCO code. The C-LOCO code we use in the simulations is the code $\mathcal{C}_{18,1}^{\textup{c}}$. This code has $m = 18$ and $x = 1$. Thus, from (\ref{eqn_keff}), $\mathcal{C}_{18,1}^{\textup{c}}$ has $k_{\textup{eff}}^{\textup{c}} = 2(17) + 1 = 35$. Moreover, $\mathcal{C}_{18,1}^{\textup{c}}$ has $N^{\textup{c}}(18, 1) = N(18, 1) - 2 = 8360$, which means the message length is $s^\textup{c} = \left \lfloor \log_2 8360 \right \rfloor = 13$. Thus, from (\ref{eqn_rate}), the rate of $\mathcal{C}_{18,1}^{\textup{c}}$ is $\frac{13}{18+1} = 0.6842$ since one symbol $z$ is used for bridging.
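The quoted parameters of $\mathcal{C}_{18,1}^{\textup{c}}$ can be reproduced numerically. The following sketch uses brute-force enumeration, under the assumption that the $x=1$ constraint forbids the contiguous patterns $010$ and $101$ (an assumption consistent with the cardinalities used in this paper):

```python
from itertools import product
from math import floor, log2

def loco_cardinality(m):
    """N(m,1) by brute force: binary words of length m with no 010 or 101."""
    count = 0
    for bits in product("01", repeat=m):
        w = "".join(bits)
        if "010" not in w and "101" not in w:
            count += 1
    return count

m, x = 18, 1
N = loco_cardinality(m)            # cardinality of C_{18,1}
s_c = floor(log2(N - 2))           # C-LOCO message length (all-0/all-1 removed)
rate = s_c / (m + x)               # one z symbol is used for bridging
k_eff = 2 * (m - 1) + x            # longest run without a transition
print(N, s_c, round(rate, 4), k_eff)   # 8362 13 0.6842 35
```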
\begin{figure} \vspace{-0.5em} \center \includegraphics[trim={1.5in 0.9in 1.0in 0.9in}, width=3.7in]{Figure_LOCO_gains_1.pdf} \vspace{-0.9em} \caption{Density gains achieved by LOCO codes in MR systems.} \label{fig_2} \vspace{-0.5em} \end{figure} We generate three plots, as shown in Fig.~\ref{fig_2}, for the following three simulation setups: \begin{enumerate} \item SC~Code~1 (original SC code) is used for error correction, and no C-LOCO code is applied. \item SC~Code~2 (lower rate SC code) is used for error correction, and no C-LOCO code is applied. \item SC~Code~1 is combined with the C-LOCO code $\mathcal{C}_{18,1}^{\textup{c}}$ such that only the parity bits of SC~Code~1 are encoded via $\mathcal{C}_{18,1}^{\textup{c}}$. \end{enumerate} The energy per input data bit in all three setups is the same. For Setup~3, we have the following parameters: $w = 2738$ (see \cite{ahh_tit2}), $n = 3774$, $R_\textup{LOCO}^{\textup{c}} = 0.6842$, and $\delta = 4$. From (\ref{eqn_ovlen}), the overall length after applying the C-LOCO code in Setup~3 is: \begin{equation} n_\textup{ov} = 2738 + (1036+4)\frac{1}{0.6842} = 4258. \nonumber \end{equation} Furthermore, from (\ref{eqn_rate_ov}), the overall rate is $R_\textup{ov} \approx 0.643$. Thus, the overall length and rate in Setup~3 are similar to the length and rate of SC~Code~2 in Setup~2. The frame error rate (FER) versus density plots for the three setups are shown in Fig.~\ref{fig_2}. The figure demonstrates the gains of Setup~3, in which a C-LOCO code is applied in the MR system, over the other two setups. In particular, the density gain of Setup~3 over Setup~1 (resp., Setup~2) is about $20\%$ (resp., $16\%$) at FER $\approx 10^{-6}$. The density gain achieved in Setup~3 over Setup~2 implies that exploiting the additional redundancy by applying a C-LOCO code is significantly more effective than exploiting this redundancy by adding more parity bits.
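The Setup~3 numbers follow directly from the overall-length and overall-rate expressions above; a quick numerical check:

```python
w, n, delta = 2738, 3774, 4          # LDPC parameters for SC Code 1
R_loco = 13 / 19                     # rate of the C-LOCO code C_{18,1}^c
n_ov = w + (n - w + delta) / R_loco  # overall length after C-LOCO encoding
R_ov = w / n_ov                      # exact overall rate
print(round(n_ov), round(R_ov, 3))   # 4258 0.643
# The lemma's approximation (which ignores delta) gives nearly the same value:
R_ldpc = w / n
R_approx = (R_ldpc * R_loco) / (R_ldpc * R_loco + (1 - R_ldpc))
print(round(R_approx, 3))
```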
An intriguing observation from Fig.~\ref{fig_2} is that the error floor slope in Setup~3 is steeper than the slope in the other two setups. While applying the C-LOCO code to the entire LDPC codeword provides higher density gains, the overall rate loss becomes very high since the rate in this case becomes $R_\textup{ov} \approx R_\textup{LDPC} R_\textup{LOCO}^{\textup{c}}$. For example, if $\mathcal{C}_{18,1}^{\textup{c}}$ is applied to the entire codeword of SC~Code~1, the overall rate becomes $R_\textup{ov} \approx 0.496$, which is considerably lower than $R_\textup{ov}$ in Setup~3, which is $0.643$. Additionally, the density gains achieved diminish gradually as more bits are encoded via the C-LOCO code. In summary, the proposed idea in Setup~3 offers a better rate-density gain trade-off. Setup~3 is motivated by a particular understanding of graph-based codes. Even though only a group of bits in the LDPC codeword, which are the bits encoded via the LOCO code, have highly reliable LRs while decoding, the information in these highly reliable LRs will be spread to all bits during the message-passing procedure. Therefore, the LDPC decoder experiences a version of the channel with a higher effective~SNR, which results in the decoder, aided by the detector and the LOCO decoder, kicking off its operation at higher densities. \begin{remark} In this paper, we use the word ``moderate'' to refer to code lengths of LOCO codes. This usage should not be generalized to LDPC codes, since what is moderate for LOCO codes is very small for LDPC codes. \end{remark} \section{Balanced LOCO Codes}\label{sec_bala} A critical additional requirement in line codes, which appears in applications such as optical recording, Flash memories, and the USB and PCIe standards, is balancing \cite{knuth_bal, qin_flash, braun_lex}. Examples of balanced line codes are the $8$b/$10$b \cite{immink_2} and~the $64$b/$66$b \cite{walker_66} codes (the latter is not strictly DC-free).
Balanced line codes have zero average power at frequency zero, i.e., no DC power component, when the signal levels are $-A$ and $+A$. This is achieved by constraining the running disparity $p_r$ of any stream of codewords from the line code. The work in \cite{robert_spec1} relates the running disparity to the width of the power spectral null. The running disparity $p_r$ is measured before each new codeword in the stream, and $p_r$ equals the sum of disparities of all the previous codewords and their bridging patterns. The disparity of a codeword $\bold{c}$, $p(\bold{c})$, is defined as the difference between the number of $+A$ and $-A$ ($+A$ and $E$ in Flash) symbols in the transmitted (written) codeword after the signaling scheme is applied. When NRZ signaling is applied, this disparity is directly the difference between the number of $1$'s and $0$'s in the codeword. A standard way of balancing line codes is to encode each message into one of two codewords whose disparities have the same magnitude but opposite signs. Then, depending on the sign of the running disparity, one of these two codewords is picked for the incoming message. Codewords having zero disparity can be used to uniquely encode messages. For example, the $8$b/$10$b code adopts this way of balancing. This simple code is constructed to achieve balancing and self-clocking only, which is why it has a high rate. More advanced line codes, e.g., $\mathcal{T}_x$-constrained or RLL codes, have more requirements, e.g., improving the performance in data storage systems, making their rates lower than that of the simple line code above. Thus, balancing these constrained codes via the approach mentioned in this paragraph incurs a penalty. This penalty is either rate loss (rate reduction) for the same complexity or additional complexity for the same rate. In this section, we demonstrate another advantage of LOCO codes, which is that they can be balanced with the minimum penalty.
We start with the following lemma. \begin{lemma}\label{lem_inv} Define codeword $\bold{c}^0$ as a LOCO codeword in $\mathcal{C}_{m,x}$ that starts with $0$ from the left. Define codeword $\bold{c}^1$ as the LOCO codeword indexed by $N(m,x)-1-g(\bold{c}^0)$ in $\mathcal{C}_{m,x}$, where $g(\bold{c}^0)$ is the index of $\bold{c}^0$. The two codewords $\bold{c}^0$ and $\bold{c}^1$ are the complements of each other. \end{lemma} \begin{IEEEproof} Since $\bold{c}^0$ starts with $0$ from the left, using (\ref{eqn_g0}) gives: \begin{equation}\label{eqn_invp} g(\bold{c}^0) = \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^0_i = 1}. \end{equation} From the definition of $\bold{c}^1$, it has to start with $1$ from the left. Thus, using (\ref{eqn_g1}) gives: \begin{equation}\label{eqn_invm} g(\bold{c}^1) = \frac{1}{2} \left [N(m, x) + \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^1_i = 1} \right ]. \end{equation} Furthermore, by definition we have: \begin{equation} g(\bold{c}^1) \triangleq N(m,x)-1-g(\bold{c}^0). \end{equation} Consequently, using (\ref{eqn_invp}) and (\ref{eqn_invm}), we get: \begin{align} g(\bold{c}^1) &+ g(\bold{c}^0) = \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^0_i = 1} \nonumber \\ &+ \frac{1}{2} \left [N(m, x) + \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^1_i = 1} \right ] \nonumber \\ &\hspace{3.5em} = N(m,x)-1, \end{align} which means: \begin{align}\label{eqn_blast} \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^0_i = 1} &+ \frac{1}{2}\sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^1_i = 1} \nonumber \\ &\hspace{-3.5em} = \frac{1}{2}N(m,x)-1. \end{align} Observe that $\frac{1}{2}N(m,x)-1$ is the index of the LOCO codeword $0\bold{1}^{m-1}$. Thus, for a given codeword $\bold{c}^0$ starting with $0$ from the left in $\mathcal{C}_{m,x}$, the codeword $\bold{c}^1$ starting with $1$ from the left in $\mathcal{C}_{m,x}$, and having the $m-1$ RMBs being the complements of the $m-1$ RMBs in $\bold{c}^0$, satisfies (\ref{eqn_blast}).
Because the mapping from $g(\bold{c}^1)$ to $\bold{c}^1$ is one-to-one, such a codeword has to be the only codeword with that property. Since $c^0_{m-1}=0$ and $c^1_{m-1}=1$ are already complements, $\bold{c}^0$ and $\bold{c}^1$ are then the complements of each other. \end{IEEEproof} Note that since we adopt NRZ signaling, \begin{equation}\label{eqn_disp} p(\bold{c}^0) = - p(\bold{c}^1). \end{equation} Thus, and based on Lemma~\ref{lem_inv}, we now define the proposed balanced LOCO (B-LOCO) codes. \begin{definition}\label{def_bloco} A balanced LOCO (B-LOCO) code $\mathcal{C}_{m, x}^{\textup{b}}$ is a LOCO code in which each pair of codewords $\bold{c}^0$ and $\bold{c}^1$, having indices $g(\bold{c}^0)$ and $g(\bold{c}^1)=N(m,x)-1-g(\bold{c}^0)$ in $\mathcal{C}_{m, x}$, respectively, is used to encode a single message. The selected codeword $\bold{c}$ is either $\bold{c}^0$ or $\bold{c}^1$ depending on the sign of the running disparity $p_r$ as shown in Table~\ref{table_5}. Consequently, the cardinality of $\mathcal{C}_{m, x}^{\textup{b}}$ is: \begin{equation}\label{eqn_bcl_card} N^{\textup{b}}(m, x) = N(m, x). \end{equation} However, only a maximum of $\frac{1}{2}N^{\textup{b}}(m, x)$ codewords in $\mathcal{C}_{m, x}^{\textup{b}}$ correspond to distinct messages. \begin{table} \caption{The selection criterion for balancing in a B-LOCO code $\mathcal{C}_{m,x}^{\textup{b}}$.} \vspace{-0.5em} \centering \scalebox{1.00} { \begin{tabular}{|c|c|} \hline \makecell{$\textup{sign}(p_r)$} & \makecell{Selected codeword $\bold{c}$} \\ \hline $+$ & $\bold{c}^0$ or $\bold{c}^1$ such that $\textup{sign}(p(\bold{c}))$ is $-$ \\ \hline $-$ & $\bold{c}^0$ or $\bold{c}^1$ such that $\textup{sign}(p(\bold{c}))$ is $+$ \\ \hline \end{tabular}} \label{table_5} \vspace{-0.5em} \end{table} \end{definition} \begin{example}\label{ex_6} The B-LOCO code $\mathcal{C}_{6,1}^{\textup{b}}$ is shown in Table~\ref{table_6} with the codeword disparities.
Observe that (\ref{eqn_disp}) is always satisfied, i.e., $p(\bold{c}^0) = - p(\bold{c}^1)$. The cardinality of $\mathcal{C}_{6,1}^{\textup{b}}$ is: \begin{equation}\label{eqn_ex6} N^{\textup{b}}(6, 1) = N(6, 1) = 26. \nonumber \end{equation} However, only a maximum of $13$ codewords in $\mathcal{C}_{6, 1}^{\textup{b}}$ correspond to distinct messages. \begin{table} \caption{The B-LOCO code $\mathcal{C}_{6,1}^{\textup{b}}$. The CB-LOCO code $\mathcal{C}_{6,1}^{\textup{cb}}$ for all messages is the rows from the second until the ninth.} \vspace{-0.5em} \centering \scalebox{1.00} { \begin{tabular}{|c|c|c|c|c|c|} \hline \makecell{Message} & \makecell{$g^{\textup{b}}(\bold{c})$} & \makecell{$\bold{c}^0$} & \makecell{$p(\bold{c}^0)$} & \makecell{$\bold{c}^1$} & \makecell{$p(\bold{c}^1)$} \\ \hline { } & $0$ & $000000$ & $-6$ & $111111$ & $+6$ \\ \hline $000$ & $1$ & $000001$ & $-4$ & $111110$ & $+4$ \\ \hline $001$ & $2$ & $000011$ & $-2$ & $111100$ & $+2$ \\ \hline $010$ & $3$ & $000110$ & $-2$ & $111001$ & $+2$ \\ \hline $011$ & $4$ & $000111$ & $0$ & $111000$ & $0$ \\ \hline $100$ & $5$ & $001100$ & $-2$ & $110011$ & $+2$ \\ \hline $101$ & $6$ & $001110$ & $0$ & $110001$ & $0$ \\ \hline $110$ & $7$ & $001111$ & $+2$ & $110000$ & $-2$ \\ \hline $111$ & $8$ & $011000$ & $-2$ & $100111$ & $+2$ \\ \hline { } & $9$ & $011001$ & $0$ & $100110$ & $0$ \\ \cline{2-6} { } & $10$ & $011100$ & $0$ & $100011$ & $0$ \\ \cline{2-6} { } & $11$ & $011110$ & $+2$ & $100001$ & $-2$ \\ \cline{2-6} { } & $12$ & $011111$ & $+4$ & $100000$ & $-4$ \\ \hline \end{tabular}} \label{table_6} \vspace{-0.5em} \end{table} \end{example} The running disparity in the case of B-LOCO codes satisfies $-m \leq p_r \leq +m$ (see also Example~\ref{ex_6}). Moreover, because of the way codewords are chosen, as shown in Table~\ref{table_5}, this running disparity is around $0$ most of the time for long streams of codewords. The following theorem is the key theorem for encoding and decoding B-LOCO codes. 
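Both the complement pairing of Lemma~\ref{lem_inv} and the selection rule of Table~\ref{table_5} can be checked numerically on $\mathcal{C}_{6,1}$. A sketch, again under the assumption that the $x=1$ constraint forbids the contiguous patterns $010$ and $101$; the tie-breaking when $p_r = 0$ or $p(\bold{c}) = 0$ (keep $\bold{c}^0$) is a choice the table leaves free:

```python
from itertools import product

m = 6
code = sorted("".join(b) for b in product("01", repeat=m)
              if "010" not in "".join(b) and "101" not in "".join(b))
N = len(code)                          # N(6,1) = 26
comp = lambda c: c.translate(str.maketrans("01", "10"))

# Lemma: the codeword indexed by N-1-g is the complement of codeword g.
assert all(code[N - 1 - g] == comp(c) for g, c in enumerate(code))

def encode_stream(indices):
    """B-LOCO encoding: emit c^0 or its complement c^1 so that the
    selected codeword's disparity opposes the running disparity p_r."""
    p_r, out = 0, []
    for g in indices:                  # g < N/2 indexes c^0 (starts with 0)
        c = code[g]
        p = 2 * c.count("1") - m       # disparity under NRZ signaling
        if p_r * p > 0:                # same sign: switch to the complement
            c, p = comp(c), -p
        out.append(c)
        p_r += p
        assert -m <= p_r <= m          # running disparity stays bounded
    return out, p_r

words, p_r = encode_stream([1, 2, 7, 4, 12] * 20)
print(len(words), p_r)
```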
\begin{theorem}\label{thm_induct2} Consider a B-LOCO code $\mathcal{C}_{m,x}^{\textup{b}}$ with $m \geq 2$.~The index $g^{\textup{b}}(\bold{c})$ of a codeword $\bold{c} \in \mathcal{C}_{m,x}^{\textup{b}}$ is derived from $\bold{c}$ itself according to the following two equations: \noindent If the LMB $c_{m-1} = 0$: \begin{equation}\label{eqn_g20} g^{\textup{b}}(\bold{c}) = \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1}. \end{equation} If the LMB $c_{m-1} = 1$: \begin{equation}\label{eqn_g21} g^{\textup{b}}(\bold{c}) = \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 0}. \end{equation} Here, we use the abbreviated notation $g^{\textup{b}}(\bold{c})$ for simplicity. \end{theorem} \begin{IEEEproof} For the case of $c_{m-1} = 0$, it is clear that: \begin{equation} g^{\textup{b}}(\bold{c}) = g(\bold{c}^0), \end{equation} where $g(\bold{c}^0)$ is the index of $\bold{c}^0$ in $\mathcal{C}_{m,x}$. Thus, using (\ref{eqn_g0}): \begin{align}\label{eqn_prf1} g^{\textup{b}}(\bold{c}) &= \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^0_i = 1} \nonumber \\ &= \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 1}. \end{align} For the case of $c_{m-1} = 1$, $g^{\textup{b}}(\bold{c})$ must equal that of the corresponding codeword in $\mathcal{C}_{m,x}^{\textup{b}}$ that starts with $0$ from the left. From Lemma~\ref{lem_inv}, $\bold{c}$ in $\mathcal{C}_{m,x}^{\textup{b}}$ that has $c_{m-1} = 1$, which is $\bold{c}^1$ in $\mathcal{C}_{m,x}$, and its corresponding codeword in $\mathcal{C}_{m,x}^{\textup{b}}$ that starts with $0$ from the left, which is $\bold{c}^0$ in $\mathcal{C}_{m,x}$, are the complements of each other. 
Consequently, we conclude: \begin{align}\label{eqn_prf2} g^{\textup{b}}(\bold{c}) &= \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^0_i = 1} \nonumber \\ &= \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c^1_i = 0} \nonumber \\ &= \frac{1}{2} \sum_{i=0}^{m-2} N(i-x+1, x) \big \vert_{c_i = 0}, \end{align} which completes the proof. \end{IEEEproof} \begin{example}\label{ex_7} We illustrate Theorem~\ref{thm_induct2} via an example. Consider $\mathcal{C}_{6,1}^{\textup{b}}$ given in Table~\ref{table_6}. We check the two codewords indexed by $6$. From (\ref{eqn_g20}), the codeword starting with $0$ from the left has: \begin{align} g^{\textup{b}}(\bold{c}) &= \frac{1}{2} \sum_{i=0}^{4} N(i, x) \big \vert_{c_i = 1} \nonumber \\ &= \frac{1}{2} \left [ N(1, 1) + N(2, 1) + N(3, 1) \right ] \nonumber \\ &= \frac{1}{2} \left [ 2 + 4 + 6 \right ] = 6. \nonumber \end{align} From (\ref{eqn_g21}), the codeword starting with $1$ from the left has: \begin{align} g^{\textup{b}}(\bold{c}) &= \frac{1}{2} \sum_{i=0}^{4} N(i, x) \big \vert_{c_i = 0} \nonumber \\ &= \frac{1}{2} \left [ N(1, 1) + N(2, 1) + N(3, 1) \right ] \nonumber \\ &= \frac{1}{2} \left [ 2 + 4 + 6 \right ] = 6. \nonumber \end{align} \end{example} Bridging in B-LOCO codes is performed the same way as described in Section~\ref{sec_ralg} for LOCO codes. Define the disparity change resulting from adding a $z$ symbol after a B-LOCO codeword to be $0$, which makes sense as $z$ is the no transmission (no writing) symbol. Observe that whether the first method or the second method is used for bridging, the above analysis does not change. This statement is clear for the first method. As for the second method, note that the complement rule in Lemma~\ref{lem_inv} applies also for bridging patterns (see Table~\ref{table_2}), which justifies the statement. 
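Theorem~\ref{thm_induct2} can be verified exhaustively for $\mathcal{C}_{6,1}^{\textup{b}}$. In the following sketch, the sequence $N(0,1)=N(1,1)=2$, $N(i,1)=N(i-1,1)+N(i-2,1)$ is an assumed closed form for the cardinalities (it reproduces $N(6,1)=26$):

```python
from itertools import product

m, x = 6, 1
Nseq = [2, 2]                          # N(0,1), N(1,1)
while len(Nseq) <= m:
    Nseq.append(Nseq[-1] + Nseq[-2])   # N(i,1) = N(i-1,1) + N(i-2,1)

def g_b(c):
    """Index of a B-LOCO codeword: sum over the 1's if the LMB is 0,
    and over the 0's if the LMB is 1.
    c[0] is the LMB c_{m-1}; c[-1] is c_0."""
    target = "1" if c[0] == "0" else "0"
    return sum(Nseq[i - x + 1]
               for i in range(m - 1) if c[m - 1 - i] == target) // 2

code = sorted("".join(b) for b in product("01", repeat=m)
              if "010" not in "".join(b) and "101" not in "".join(b))
half = len(code) // 2
for g, c0 in enumerate(code[:half]):   # c^0 and its complement c^1 share g
    c1 = c0.translate(str.maketrans("01", "10"))
    assert g_b(c0) == g_b(c1) == g
print(g_b("001110"), g_b("110001"))    # 6 6, matching Example 7
```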
We use the first bridging method in this section since, in addition to its simplicity, it leaves the disparities of the codewords unchanged and thus does not increase their maximum magnitude. \begin{definition}\label{def_clob} A self-clocked B-LOCO (CB-LOCO) code $\mathcal{C}_{m,x}^{\textup{cb}}$ is the code resulting from removing the all $0$'s and the all $1$'s codewords from the B-LOCO code $\mathcal{C}_{m,x}^{\textup{b}}$. In particular: \begin{equation}\label{eqn_sclb} \mathcal{C}_{m,x}^{\textup{cb}} \triangleq \mathcal{C}_{m,x}^{\textup{b}} \setminus \{\bold{0}^m, \bold{1}^m\}, \end{equation} where $m \geq 2$. The cardinality of $\mathcal{C}_{m,x}^{\textup{cb}}$ is given by: \begin{equation}\label{eqn_sclb_card} N^{\textup{cb}}(m, x) = N^{\textup{b}}(m, x) - 2 = N(m, x) - 2. \end{equation} However, only a maximum of $\frac{1}{2}N^{\textup{cb}}(m, x)$ codewords in $\mathcal{C}_{m, x}^{\textup{cb}}$ correspond to distinct messages. \end{definition} Define $k_{\textup{eff}}^{\textup{cb}}$ as the maximum number of successive bit durations without a transition in a stream of CB-LOCO codewords that belong to $\mathcal{C}_{m,x}^{\textup{cb}}$, with each two consecutive codewords separated by $\bold{z}^x$. Recall that a transition is only from $0$ to $1$ or from $1$ to $0$. Consequently, we get: \begin{equation}\label{eqn_kbeff} k_{\textup{eff}}^{\textup{cb}} = k_{\textup{eff}}^{\textup{c}} = 2(m-1) + x. \end{equation} \begin{remark} A stream of B-LOCO codewords that belong to $\mathcal{C}_{m,x}^{\textup{b}}$, each having $g^{\textup{b}}(\bold{c})=0$ and using the first bridging method, is encoded as follows: \begin{align} \bold{0}^{m} - \bold{z}^x - \bold{1}^{m} - \bold{z}^x - \bold{0}^{m} - \bold{z}^x - \bold{1}^m - \dots. \nonumber \end{align} If the system can make use of the $0 - z$ and the $z - 1$ changes for self-clocking, the two codewords $\bold{0}^m$ and $\bold{1}^m$ can be kept in the code, and $k_{\textup{eff}}^{\textup{b}}$ will be less than $2(m-1) + x$.
Here, we assume that the system cannot use these changes for self-clocking, and that is why our definition for a transition is exclusively from $0$ to $1$ or from $1$ to $0$. \end{remark} Note that the maximum magnitude of the running disparity in the case of CB-LOCO codes is $m-2$, not $m$, because of the removal of the two codewords $\bold{0}^{m}$ and $\bold{1}^{m}$. Thus, CB-LOCO codes are better than B-LOCO codes in that regard. \begin{remark} If the second bridging method is used instead, the two codewords $\bold{0}^m$ and $\bold{1}^m$ are kept in the code, and $k_{\textup{eff}}^{\textup{b}}$ becomes $m+x+\left \lfloor \frac{m+x}{2} \right \rfloor$. However, we do not adopt this method here since it increases the maximum magnitude of the running disparity to $m+x$, in addition to its complexity. \end{remark} We are now ready to discuss the rate of CB-LOCO codes. A CB-LOCO code $\mathcal{C}_{m,x}^{\textup{cb}}$, with $x$ bridging bits/symbols associated to each codeword, has rate: \begin{align}\label{eqn_rateb} R_{\textup{LOCO}}^{\textup{cb}} &= \frac{\left \lfloor \log_2 \left ( \frac{1}{2} N^{\textup{cb}}(m, x) \right ) \right \rfloor}{m+x} \nonumber \\ &= \frac{\left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor - 1}{m+x}, \end{align} where $N(m, x)$ is obtained from the recursive relation (\ref{eqn_rec}). The numerator, which is $\left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor - 1$, is the length of the messages $\mathcal{C}_{m,x}^{\textup{cb}}$ encodes. Comparing the rate of the CB-LOCO code $\mathcal{C}_{m,x}^{\textup{cb}}$ to the C-LOCO code $\mathcal{C}_{m,x}^{\textup{c}}$ via subtracting (\ref{eqn_rateb}) from (\ref{eqn_rate}) gives: \begin{align} &R_{\textup{LOCO}}^{\textup{c}} - R_{\textup{LOCO}}^{\textup{cb}} \nonumber \\ &= \frac{\left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor}{m+x} - \frac{\left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor - 1}{m+x}. 
\nonumber \end{align} Consequently, \begin{align}\label{eqn_ratediff} R_{\textup{LOCO}}^{\textup{c}} - R_{\textup{LOCO}}^{\textup{cb}} = \frac{1}{m+x}. \end{align} Under the balancing approach of having two codewords to encode each message, the maximum number of codewords corresponding to distinct messages drops to at most half the cardinality of the unbalanced code. Thus, a balanced code achieves the minimum rate loss if the code has a rate loss of only ${1}/{(\textup{code length})}$ with respect to its unbalanced code, since this means the balanced code contains all the codewords of the unbalanced code. In other words, for each codeword in the unbalanced code, there exists another codeword to pair it with, such that the two codewords have their disparities with the same magnitude but opposite signs. Consequently, no codewords are skipped from the unbalanced code in order to achieve balancing. We refer to this rate loss as the \textbf{one-bit minimum penalty} because it can be viewed as a reduction of one bit from the message length. From the above discussion and (\ref{eqn_ratediff}), our CB-LOCO codes achieve the minimum rate loss, i.e., they achieve the one-bit minimum penalty. Observe that \textbf{asymptotically, i.e., as $m \rightarrow \infty$, the rate loss resulting from balancing LOCO codes tends to zero} from (\ref{eqn_ratediff}). Thus, CB-LOCO codes asymptotically achieve the same rates as C-LOCO codes. As shown in Table~\ref{table_7}, the rate of the moderate-length CB-LOCO code $\mathcal{C}_{116,1}^{\textup{cb}}$ (resp., $\mathcal{C}_{120,2}^{\textup{cb}}$) is within only $1.5\%$ (resp., $2\%$) of the capacity of an unbalanced $\mathcal{T}_x$-constrained code having $x=1$ (resp., $x=2$). As far as we know, balancing other constrained codes in the literature always incurs a notable rate loss, even asymptotically, with respect to the unbalanced codes \cite{knuth_bal, braun_lex, qin_flash}, which is not the case for LOCO codes.
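The one-bit minimum penalty and the $x=1$ rows of Table~\ref{table_7} can be reproduced numerically; a sketch assuming the recursion $N(m,1)=N(m-1,1)+N(m-2,1)$ with $N(1,1)=2$ and $N(2,1)=4$ (an assumption consistent with $N(6,1)=26$):

```python
from math import floor, log2

def N(m):
    """N(m,1) via the assumed x = 1 recursion."""
    a, b = 2, 4                        # N(1,1), N(2,1)
    for _ in range(m - 2):
        a, b = b, a + b
    return a if m == 1 else b

def rate_c(m, x=1):
    """C-LOCO rate: floor(log2(N - 2)) / (m + x)."""
    return floor(log2(N(m) - 2)) / (m + x)

def rate_cb(m, x=1):
    """CB-LOCO rate: one message bit fewer for the same length."""
    return (floor(log2(N(m) - 2)) - 1) / (m + x)

# One-bit minimum penalty: the difference is exactly 1/(m + x).
assert abs(rate_c(24) - rate_cb(24) - 1 / 25) < 1e-12
print(f"{rate_cb(14):.4f} {rate_cb(24):.4f}")   # 0.6000 0.6400
```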
As an illustration, the balancing penalty in \cite{knuth_bal} is an added redundancy of more than $\log_2 m$ (see also \cite{knuth_mod}), which is a costly penalty. Moreover, in order to reduce the rate loss due to balancing, the authors in \cite{braun_lex} adopt large code lengths, which is not needed for LOCO codes. In the finite-length regime, we achieve a higher rate at the same code length or the same rate at a smaller code length in comparison~with \cite{braun_lex}. \begin{table} \caption{Rates and adder sizes of CB-LOCO codes $\mathcal{C}_{m,x}^{\textup{cb}}$ for different values of $m$ and $x$. The unbalanced capacity is $0.6942$ for $x=1$ and $0.5515$ for $x=2$.} \vspace{-0.5em} \centering \scalebox{1.00} { \begin{tabular}{|c|c|c|} \hline \makecell{Values of $m$ and $x$} & \makecell{$R_\textup{LOCO}^{\textup{cb}}$} & \makecell{Adder size} \\ \hline $m=14$, $x=1$ & $0.6000$ & $9$ bits \\ \hline $m=24$, $x=1$ & $0.6400$ & $16$ bits \\ \hline $m=44$, $x=1$ & $0.6667$ & $30$ bits \\ \hline $m=54$, $x=1$ & $0.6727$ & $37$ bits \\ \hline $m=80$, $x=1$ & $0.6790$ & $55$ bits \\ \hline $m=116$, $x=1$ & $0.6838$ & $80$ bits \\ \hline $m=8$, $x=2$ & $0.4000$ & $4$ bits \\ \hline $m=15$, $x=2$ & $0.4706$ & $8$ bits \\ \hline $m=24$, $x=2$ & $0.5000$ & $13$ bits \\ \hline $m=42$, $x=2$ & $0.5227$ & $23$ bits \\ \hline $m=73$, $x=2$ & $0.5333$ & $40$ bits \\ \hline $m=120$, $x=2$ & $0.5410$ & $66$ bits \\ \hline \end{tabular}} \label{table_7} \vspace{-0.5em} \end{table} \begin{example}\label{ex_8} Consider again the B-LOCO code $\mathcal{C}_{6,1}^{\textup{b}}$ in Table~\ref{table_6}. From (\ref{eqn_kbeff}), the CB-LOCO code $\mathcal{C}_{6,1}^{\textup{cb}}$ derived from $\mathcal{C}_{6,1}^{\textup{b}}$ has: \begin{equation} k_{\textup{eff}}^{\textup{cb}} = 2(6-1) + 1 = 11.
\nonumber \end{equation} The length of the messages $\mathcal{C}_{6,1}^{\textup{cb}}$ encodes is: \begin{equation} \left \lfloor \log_2 \left( N(6, 1) - 2 \right ) \right \rfloor - 1 = \left \lfloor \log_2 24 \right \rfloor - 1 = 3. \nonumber \end{equation} The CB-LOCO code $\mathcal{C}_{6,1}^{\textup{cb}}$ is also shown in Table~\ref{table_6} for all messages. From (\ref{eqn_rateb}), the rate of $\mathcal{C}_{6,1}^{\textup{cb}}$ is: \begin{equation} R_{\textup{LOCO}}^{\textup{cb}} = \frac{\left \lfloor \log_2 24 \right \rfloor - 1}{6+1} = \frac{3}{7} = 0.4286. \nonumber \end{equation} \end{example} For larger values of $m$, the rate of a CB-LOCO code $\mathcal{C}_{m,x}^{\textup{cb}}$ exceeds $0.6667$ (resp., $0.5000$) for $x=1$ (resp., $x=2$) as demonstrated in Table~\ref{table_7}. These rates cannot be achieved by practical FSM-based balanced RLL codes having $d=x$. Moreover, even to approach these rates, the code length of the FSM-based balanced RLL code will be significantly larger than that of the CB-LOCO code. Recall that the rate of a practical FSM-based unbalanced RLL code is typically $0.6667$ (resp., $0.5000$) for $d=1$ (resp., $d=2$) \cite{siegel_mr, siegel_const}. \begin{remark} Note that lexicographically-ordered RLL codes constructed via the ideas in \cite{tang_bahl} do not have the balancing advantage of LOCO codes, which is the complement rule in Lemma~\ref{lem_inv}. Therefore, balancing these codes is associated with a higher penalty than balancing LOCO codes as a result of the many unused codewords. \end{remark} Algorithms~\ref{alg_enc} and \ref{alg_dec} can be modified to encode and decode CB-LOCO codes. The major changes are the following: \begin{enumerate} \item For both algorithms, the message length (adder size) is changed to $s^{\textup{cb}} = \left \lfloor \log_2 \left( N(m, x) - 2 \right ) \right \rfloor - 1$. \item For Algorithm~\ref{alg_enc}, the message here is encoded to $\bold{c} = \bold{c}^0$ initially.
After Step~40, $p(\bold{c})$, which is $p(\bold{c}^0)$, is calculated. Then, a check is made on the disparities $p_r$ and $p(\bold{c})$. If $p_r$ and $p(\bold{c})$ have the same sign, the codeword complement of $\bold{c}$ is transmitted (written). Otherwise, $\bold{c}$ is transmitted (written). The updated running disparity $p_r$ is then calculated for the next codeword. Only $p(\bold{c})$ is needed because we use the first bridging method. \item Let $o(\bold{c})$ be the number of $1$'s in codeword $\bold{c}$ in $\mathcal{C}_{m,x}^{\textup{cb}}$. For Algorithm~\ref{alg_enc}, $p(\bold{c})$ can be easily computed from: \begin{equation} p(\bold{c}) = 2o(\bold{c})-m. \end{equation} \item For Algorithm~\ref{alg_dec}, Steps~5, 6, and 7 are removed. Moreover, if $c_{m-1}=1$, the condition under which $g^{\textup{b}}(\bold{c})$ is increased by $\frac{1}{2} N(i-x+1, x)$ becomes if $c_i=0$ from (\ref{eqn_g21}) in Theorem~\ref{thm_induct2}. \end{enumerate} Table~\ref{table_7} also links the rate of a CB-LOCO code with its encoding and decoding complexity through the size of the adders to be used. \section{Conclusion}\label{sec_conc} We introduced LOCO codes, a new family of constrained codes, where the combination of recursive structure and lexicographic indexing of codewords enables simple mapping-demapping between the index and the codeword itself. We showed that this mapping-demapping enables low complexity encoding and decoding algorithms. We also showed that LOCO codes are capacity achieving, and that at moderate lengths, they provide a rate gain of up to $10\%$ compared with practical RLL codes that are used to achieve the same goals. Inherent symmetry of LOCO codes makes balancing easy. We demonstrated that the rate loss associated with balancing LOCO codes is minimal, and that this loss tends to zero in the limit, so that balanced LOCO codes achieve the same asymptotic rates as their unbalanced counterparts. 
Moreover, we demonstrated density gains of about $20\%$ in modern MR systems by using a LOCO code to protect only the parity bits of an LDPC code, thereby mitigating ISI. We suggest that LOCO codes provide a simple and effective practical method for improving the performance of a wide variety of data storage and computer systems. \section*{Acknowledgment}\label{sec_ack} This research was supported in part by NSF under grant CCF 1421177.
\section{Introduction} Circuit representations are a promising synthesis of symbolic and statistical methods in AI. They are ``deep'' layered data structures with statistical parameters, yet they also capture intricate structural knowledge. Recently, many representations have been proposed for learning tractable probability distributions: arithmetic circuits~\cite{lowd:uai08}, weighted SDD~\cite{BekkerNIPS15}, PSDD \cite{KisaVCD14}, cutset networks~\cite{rahman2014cutset} and sum-product networks (SPNs) \cite{poon2011sum}. Collectively, these approaches achieve the state of the art in discrete density estimation and vastly outperform classical probabilistic graphical model learners~\cite{gens2013learning,rooshenas2014learning,adel2015learning,rahman2016merging,Liang2017}. However, we have not observed the same success when deploying circuit representations for \emph{classification or discriminative learning}. Probabilistic circuit classifiers significantly lag behind the performance of neural networks~\cite{classificationStanding}. In this paper, we propose a new classification model called \emph{logistic circuits}, which shares many syntactic properties with the representations mentioned earlier. One can view logistic circuits as the discriminative counterpart to probabilistic circuits. Owing to their elegant properties, learning the parameters of a logistic circuit can be reduced to a logistic regression problem and is therefore convex. Learning logistic circuit structure is reduced to a simple local search problem using primitives from the probabilistic circuit learning literature~\cite{Liang2017}. We run experiments on standard image classification benchmarks (MNIST and Fashion) and achieve accuracy higher than much larger MLPs and even CNNs with an order of magnitude more parameters. For example, logistic circuits obtain 99.4\% accuracy on MNIST. 
Compared to other tractable learners on MNIST, and the state-of-the-art discriminative SPN learner in particular~\cite{rat-spn2018}, our logistic circuit learner cuts the error rate by a factor of three. Furthermore, we show our learner is highly data efficient, managing to still learn well with limited data. This paper proceeds as follows. Section~\ref{s:representation} introduces the syntax and semantics of logistic circuits. Sections~\ref{section: parameter learning} and~\ref{s:structurelearning} describe our parameter and structure learning algorithms, which Section~\ref{s:experiments} evaluates empirically. Section~\ref{s:generativeconnection} elaborates on the connection with tractable generative models, after which we conclude with related and future work. \section{Representation} \label{s:representation} This section introduces the logistic circuit representation. \subsubsection*{Notation} We use uppercase $X$ to denote a Boolean random variable and lowercase $x$ for a specific assignment to it. Interchangeably, we also interpret Boolean random variables as logical variables. A set of variables $\bf X$ and their joint assignments $\bf x$ are denoted in bold. A complete assignment $\bf x$ to all variables is a possible world, or interchangeably, a data sample. Literals are variables $X$ or their negation $\neg X$. Logical sentences are constructed from literals and connectives such as AND and OR in the standard way. An assignment $\bf x$ that satisfies a logical sentence $\alpha$ is denoted as ${\bf x} \models \alpha$. 
\begin{figure}[t] \centering \begin{subfigure}[t]{0.48\textwidth} \centering \scalebox{0.89}{ \begin{tikzpicture}[circuit logic US, nnf] \node (output) [] at (0,0){}; \node (root) [nnf2or] at ($(output) + (0pt,-0.7*30pt)$){}; \node (a1) [nnf2and] at ($(root) + (-60pt,-1*30pt)$){}; \node (a2) [nnf2and] at ($(root) + (60pt,-1*30pt)$){}; \node (o12) [nnf2or] at ($(a1) + (20pt,-1*30pt)$){}; \node (o21) [nnf2or] at ($(a2) + (-20pt,-1*30pt)$){}; \node (aeq1) [nnf2and] at ($(root) + (-60pt,-3*30pt)$){}; \node (aneq) [nnf2and] at ($(root) + (0pt,-3*30pt)$){}; \node (aeq2) [nnf2and] at ($(root) + (60pt,-3*30pt)$){}; \node (oeq) [nnf2or] at ($(root) + (20pt,-4.6*30pt)$){}; \node (oneq) [nnf2or] at ($(root) + (80pt,-4.6*30pt)$){}; \node (se1) [nnf2or] at ($(root) + (-70pt,-4.7*30pt)$){}; \node (se2) [nnf2or] at ($(root) + (-30pt,-4.7*30pt)$){}; \node (aab11) [nnf2and] at ($(oeq) + (-14.8pt,-1*30pt)$){}; \node (aab00) [nnf2and] at ($(oeq) + (14.7pt,-1*30pt)$){}; \node (aab10) [nnf2and] at ($(oneq) + (-14.8pt,-1*30pt)$){}; \node (aab01) [nnf2and] at ($(oneq) + (15.2pt,-1*30pt)$){}; \node (tc1) [nnfterm] at ($(a1) + (-20pt,-.8*30pt)$){$A$}; \node (tc0) [nnfterm] at ($(a2) + (20pt,-.8*30pt)$){$\neg A$}; \node (te1) [nnfterm] at ($(root) + (-72.5pt,-6.1*30pt)$){$B$}; \node (te0) [nnfterm] at ($(root) + (-27.4pt,-6.1*30pt)$){$\neg B$}; \node (ta1) [nnfterm] at ($(root) + (2.5pt,-7*30pt)$){$C$}; \node (ta0) [nnfterm] at ($(root) + (32.1pt,-7*30pt)$){$\neg C$}; \node (tb1) [nnfterm] at ($(root) + (97.8pt,-7*30pt)$){$D$}; \node (tb0) [nnfterm] at ($(root) + (67.8pt,-7*30pt)$){$\neg D$}; \begin{scope}[on background layer] \draw [nnfedge] (output) -- (root.output); \draw [nnfedge] (a1.output) -- ++(up:0.15) -| (root.input 1) node[pos=0.4,above left] {$-2.6$}; \draw [nnfedge] (a2.output) -- ++(up:0.15) -| (root.input 2) node[pos=0.4,above right] {$-5.8$}; \draw [nnfedge] (o12.output) -- ++(up:0.15) -| (a1.input 2); \draw [nnfedge] (o21.output) -- ++(up:0.15) -| (a2.input 1); \draw [nnfedge] (aeq1.output) -- ++(up:0.15) -| (o12.input 1) node[pos=0.3,above left] {$-1$};
\draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o12.input 2) node[pos=0.4,above right] {$3$}; \draw [nnfedge] (aeq2.output) -- ++(up:0.15) -| (o21.input 2) node[pos=0.3,above right] {$4$}; \draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o21.input 1) node[pos=0.4,above left] {$2.3$}; \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq1.input 2); \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq2.input 2); \draw [nnfedge] (oneq.output) -- ++(up:0.65) -| (aneq.input 2); \draw [nnfedge] (se1.output) -- ++(up:0.27) -| (aeq1.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| (aeq2.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| (aneq.input 1); \draw [nnfedge] (aab11.output) -- ++(up:0.15) -| (oeq.input 1) node[pos=0.3,above left] {$-0.5$}; \draw [nnfedge] (aab00.output) -- ++(up:0.15) -| (oeq.input 2) node[pos=0.3,above right] {$0.3$}; \draw [nnfedge] (aab10.output) -- ++(up:0.15) -| (oneq.input 1) node[pos=0.3,above left] {$1.5$}; \draw [nnfedge] (aab01.output) -- ++(up:0.15) -| (oneq.input 2) node[pos=0.3,above right] {$2.8$}; \draw [nnfedge] (tc1.north) -- ++(up:0.15) -| (a1.input 1); \draw [nnfedge] (tc0.north) -- ++(up:0.15) -| (a2.input 2); \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se1.input 1) node[pos=0.65,above left] {$-4$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se1.input 2) node[pos=0.52,above right] {$1$}; \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se2.input 1) node[pos=0.66,above left] {$3.9$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se2.input 2) node[pos=0.47,above right] {$4$}; \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab11.input 1); \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab10.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab01.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab00.input 1); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab11.input 2); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab01.input 2); \draw [nnfedge] (tb0.north) -- ++(up:0.75) -| (aab00.input 2); 
\draw [nnfedge] (tb0.north) -- ++(up:0.75) -| (aab10.input 2); \end{scope} \end{tikzpicture} } \caption{Logistic circuit\label{fig: logistic circuit}} \end{subfigure} \quad~\par~\par \begin{subfigure}[t]{0.48\textwidth} \centering \begin{sc} {\fontsize{9}{9}\selectfont \begin{tabular}{ @{} llll c c@{} } \toprule $A$ & $B$ & $C$ & $D$ & $g_r(ABCD) $& $\Pr(Y=1 \mid ABCD)$ \\ \midrule \midrule 1 & 0 & 1 & 1 & -3.1 & ~~4.31\%\\ 0 & 1 & 1 & 0 & ~1.9 & 86.99\%\\ 1 & 1 & 1 & 0 &~5.8 &99.70\%\\ \bottomrule \end{tabular} } \end{sc} \caption{Weights and classification probabilities for select examples}\label{fig: posterior distribution} \end{subfigure} \caption{A logistic circuit with example classifications.}\label{fig:1} \end{figure} \subsection{Logical Circuits} A logical circuit is a directed acyclic graph representing a logical sentence, as depicted in Figure~\ref{fig: logistic circuit} (ignoring parameters for now). Each inner node is either an AND gate or an OR gate.\footnote{We consider negation-normal-form circuits, where no negation is allowed except at the leaves/inputs~\cite{darwicheJAIR02}.} A leaf (input) node represents a Boolean literal, that is, $X$ or $\neg X$, which is satisfied only when $X$ is set to 1 (true) or 0~(false), respectively. The following properties are key for logical circuits to be well-behaved~\cite{darwicheJAIR02}. An AND gate is \emph{decomposable} if its inputs depend on disjoint sets of variables. For example, the top-most AND gates in Figure~\ref{fig: logistic circuit} depend on $A$ through one input and on $\{B,C,D\}$ through the other. When an AND gate has two inputs, they are called its prime (left) and sub (right). An OR gate is \emph{deterministic} if, for any single complete assignment, at most one of its inputs can be set to $1$. For example, the left input to the root OR gate in Figure~\ref{fig: logistic circuit} is $1$ precisely when $A=1$, and its other input is $1$ precisely when~$A=0$.
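Both properties can be verified mechanically on small circuits. The following is a minimal Python sketch, using a tuple-based circuit encoding of our own (not from the paper), that brute-forces decomposability of an AND gate and determinism of an OR gate:

```python
from itertools import product

# Hypothetical encodings: a literal is ("lit", var, polarity),
# gates are ("and", children) and ("or", children).
def lit(v, pos): return ("lit", v, pos)
def AND(*cs):    return ("and", cs)
def OR(*cs):     return ("or", cs)

def variables(n):
    """Set of variables a node depends on."""
    if n[0] == "lit":
        return {n[1]}
    return set().union(*(variables(c) for c in n[1]))

def sat(n, a):
    """Does the complete assignment a (a dict) satisfy node n?"""
    if n[0] == "lit":
        return a[n[1]] == n[2]
    if n[0] == "and":
        return all(sat(c, a) for c in n[1])
    return any(sat(c, a) for c in n[1])

def decomposable(and_node):
    """AND gate: inputs must depend on pairwise disjoint variable sets."""
    sets = [variables(c) for c in and_node[1]]
    return all(sets[i].isdisjoint(sets[j])
               for i in range(len(sets)) for j in range(i + 1, len(sets)))

def deterministic(or_node, all_vars):
    """OR gate: at most one input true under every complete assignment."""
    for vals in product([0, 1], repeat=len(all_vars)):
        a = dict(zip(all_vars, vals))
        if sum(sat(c, a) for c in or_node[1]) > 1:
            return False
    return True

gate = OR(AND(lit("A", 1), lit("B", 1)), AND(lit("A", 0), lit("B", 1)))
print(decomposable(gate[1][0]))         # True: one input mentions A, the other B
print(deterministic(gate, ["A", "B"]))  # True: the two branches disagree on A
print(deterministic(OR(lit("A", 1), lit("B", 1)), ["A", "B"]))  # False (A=B=1)
```

The brute-force check is exponential in the number of variables; it is only meant to make the two definitions concrete, not to be used on real circuits.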
Logical circuits can be extended to \textit{probabilistic circuits} that represent a probability distribution over binary random variables, for example by parameterizing wires with conditional distributions \cite{KisaVCD14}. Probabilistic circuits have been successfully used for generative learning \cite{Liang2017}. Section~\ref{s:generativeconnection} will discuss probabilistic circuits in more detail. \subsection{Logistic Circuits} \label{s: logistic circuits} This paper proposes \emph{logistic circuits} for classification. Syntactically, they are logical circuits where every AND is decomposable and every OR is deterministic. However, logistic circuits further associate real-valued parameters $\theta_1, \dots, \theta_m$ with the $m$ input wires to every OR gate. For example, the root OR node in Figure~\ref{fig: logistic circuit} associates parameters $-2.6$ and $-5.8$ with its two inputs. To give semantics to logistic circuits, we first characterize how a particular complete assignment $\bf x$ (one data example) propagates through the circuit. \begin{definition}[Boolean Circuit Flow] \label{definition: circuit flow} Consider a deterministic OR gate $n$. The Boolean flow $f(n,{\bf x},c)$ of a complete assignment $\bf x$ between parent $n$ and child $c$ is \begin{align*} f(n,{\bf x},c) = \begin{cases} 1 &\mbox{if~~} {\bf x} \models c \\ 0 & \mbox{otherwise} \end{cases} \end{align*} \end{definition} For example, under the assignment $A=0$, $B=1$, $C=1$, $D=0$, the root node in Figure~\ref{fig: logistic circuit} has a Boolean circuit flow of 0 with its left child and 1 with its right child. Note that the determinism property guarantees that under every OR gate, for a given example ${\bf x}$, at most one wire has a flow of 1, and the rest have a flow of $0$. We are now ready to define the logistic circuit semantics.
\begin{definition}[Logistic Circuit Semantics] \label{de: circuit semantics} A logistic circuit node $n$ defines the following weight function $g_{n}({\bf x})$. \begin{itemize} \item[--] If $n$ is a leaf (input) node, then $g_{n}({\bf x}) = 0$. \item[--] If $n$ is an AND gate with children $c_1,\dots,c_m$, then \begin{align*} {g}_{n}({\bf x}) = \sum_{i=1}^m g_{c_i}({\bf x}). \end{align*} \item[--] If $n$ is an OR gate with (child node, wire parameter) inputs $(c_1,\theta_1),\dots, (c_m, \theta_m)$, then \begin{align*} {g}_{n}({\bf x}) = \sum_{i=1}^m f(n, {\bf x}, c_i) \cdot \left({g}_{c_i}({\bf x}) + \theta_i\right). \end{align*} \end{itemize} At root node $r$ with weight function $g_r({\bf x})$, the logistic circuit defines the posterior distribution on class variable $Y$~as \begin{align} \label{equation: probability} {\Pr} ( Y = 1 \mid {\bf x}) = \frac{1}{1 + \exp\left(-g_{r}({\bf x}) \right)}. \end{align} \end{definition} Using Boolean circuit flow, this definition essentially collects all the parameters on wires with flow 1 that reach the root, in order to then make a prediction. This is illustrated in Figure~\ref{fig: logistic circuit} by coloring red the gates and wires whose parameters and weight function are propagated upward for the example assignment $A=0$, $B=1$, $C=1$, $D=0$. The logistic circuit in Figure~\ref{fig: logistic circuit} defines the same posterior predictions as the table in Figure~\ref{fig: posterior distribution}. Specifically, for the example assignment, the weight function simply sums the parameters colored in red: $-5.8+2.3+3.9+1.5 = 1.9$. We then apply the logistic function (Eq.~\ref{equation: probability}) to get the classification probability $\Pr(Y=1 \mid {\bf x}) = \frac{1}{1+\exp(-1.9)} = 86.99\%$. \subsection{Real-Valued Data} \label{s: real-valued data} The semantics given so far assume Boolean inputs ${\bf x}$, which is a rather restrictive assumption and prohibits many machine learning applications. 
Next, we augment the logistic circuit semantics such that they can classify examples with continuous variables. We interpret real-valued variables $q \in [0,1]$ as parameterizing an (independent) Bernoulli distribution (cf.~\citet{semanticLoss}). Each continuous variable represents the probability of the corresponding Boolean random variable $X$. For example, with $\bf q$ setting $A=0.4$, $B=0.8$, $C=0.2$, and $D=0.7$, the probability of $\neg A \land D$ would be $(1-0.4)\cdot 0.7=0.42$. The same distribution defines a probability for each logical sentence, and therefore each node in the logistic circuit. This allows us to generalize Boolean flow as follows. \begin{definition}[Probabilistic Circuit Flow] \label{definition: probabilistic flow} Consider a deterministic OR gate $n$. Let $\bf q$ be a vector of probabilities, one for each variable in $\bf X$. The probabilistic flow $f(n,{\bf q},c)$ of vector $\bf q$ between parent $n$ and child $c$ is \begin{align*} f(n,{\bf q},c) = {\Pr}_{\bf q}(c \mid n) = \frac{\Pr_{\bf q}(c \land n)}{ \Pr_{\bf q}(n) } = \frac{\Pr_{\bf q}(c)}{ \Pr_{\bf q}(n) }, \end{align*} where $\Pr_{\bf q}(.)$ is the fully-factorized distribution where each variable in $\bf X$ has the probability assigned by $\bf q$. \end{definition} Logistic circuit semantics now support continuous data (after normalizing to $[0,1]$), simply by replacing Boolean flow with probabilistic flow in Definition~\ref{de: circuit semantics}. Note that probabilistic circuit flow has Boolean circuit flow as a special case, when ${\bf q}$ happens to be binary. Furthermore, due to the determinism and decomposability properties, the probabilities in Definition~\ref{definition: probabilistic flow} can be computed efficiently, together with all probabilistic circuit flows and weight functions in the logistic circuit. We defer the discussion of these computational details to Section~\ref{section: computing flows}. 
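As a concreteness check, the semantics above (Boolean or probabilistic flow plus Eq.~\ref{equation: probability}) can be evaluated directly. The sketch below uses a tuple encoding and a tiny two-variable circuit with made-up parameters; all names and numbers are ours, for illustration only:

```python
import math

# Illustrative encoding: literals ("lit", var, polarity), AND gates
# ("and", children), OR gates ("or", [(child, theta), ...]).
def lit(var, positive):
    return ("lit", var, positive)

def AND(*children):
    return ("and", children)

def OR(*weighted_children):   # each entry is a (child, theta) pair
    return ("or", weighted_children)

def prob(node, q):
    """Pr_q(node) under the fully factorized distribution q (Def. 2)."""
    kind = node[0]
    if kind == "lit":
        _, var, positive = node
        return q[var] if positive else 1.0 - q[var]
    if kind == "and":          # decomposable AND: inputs are independent
        p = 1.0
        for c in node[1]:
            p *= prob(c, q)
        return p
    # deterministic OR: inputs are mutually exclusive, so probabilities add
    return sum(prob(c, q) for c, _ in node[1])

def g(node, q):
    """Weight function g_n of Def. 1, with probabilistic flows (Def. 2)."""
    kind = node[0]
    if kind == "lit":
        return 0.0
    if kind == "and":
        return sum(g(c, q) for c in node[1])
    pn = prob(node, q)
    total = 0.0
    for c, theta in node[1]:
        flow = prob(c, q) / pn          # f(n, q, c) = Pr_q(c) / Pr_q(n)
        total += flow * (g(c, q) + theta)
    return total

def predict(root, q):
    return 1.0 / (1.0 + math.exp(-g(root, q)))   # Eq. (1)

# Tiny circuit in the style of Figure 2: per-variable ORs under one AND,
# with a bias of 0.5 on the single root wire.
or_a = OR((lit("A", True), 1.0), (lit("A", False), -1.0))
or_b = OR((lit("B", True), 2.0), (lit("B", False), 0.0))
root = OR((AND(or_a, or_b), 0.5))

print(g(root, {"A": 1.0, "B": 0.0}))  # Boolean input: 0.5 + 1.0 + 0.0 = 1.5
print(predict(root, {"A": 0.4, "B": 0.8}))  # soft input: g = 1.9, ~0.8699
```

With the soft input $A=0.4$, $B=0.8$, the weight function evaluates to $0.5 + (0.4 - 0.6) + 1.6 = 1.9$, and the sigmoid yields the same $86.99\%$ that appears in the paper's running example for $g_r = 1.9$.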
In the rest of this paper, we will abuse notation and have ${\bf x}$ refer to Boolean inputs as well as continuous inputs~${\bf q}$ interchangeably. \section{Parameter Learning} \label{section: parameter learning} A natural next question is how to learn logistic circuit parameters from complete data, for a fixed given circuit structure (structure learning is discussed in Section~\ref{s:structurelearning}). Furthermore, we ask whether those learned parameters are guaranteed to be optimal, globally minimizing a loss function. We address these questions by showing how parameter learning can be reduced to logistic regression on a modified set of features, owing to logistic circuits' strong properties. \subsection{Special Cases} Before presenting the general reduction, we briefly discuss two special cases that establish some intuition. \subsubsection{Linear Weight Functions} Consider a vanilla logistic regression model on input variables (features) $\bf X$. Does there exist an equivalent logistic circuit with the same weight function? For sample ${\bf x}$, logistic regression with parameters ${\bm \theta}$ would have weight function ${\bf x} \cdot {\bm \theta}$. Following Definition~\ref{de: circuit semantics}, we obtain such a simple weight function (linear in the input variables) by placing OR gates over complementary pairs of literals and associating a $\theta$ parameter with each wire (see Figure~\ref{circuit: linear weights}).\footnote{The negated variable inputs and parameters $\theta_{\neg X}$ are redundant, but we keep them for the sake of consistency. Alternatively, we can fix $\theta_{\neg X} = 0$ for all $X$ to remove this redundancy.} A large parent AND gate collects these variable-wise weights into a single linear sum. Finally, an OR gate at the root adds the bias term regardless of the~input. \begin{proposition} For each classical logistic regression model, there exists an equivalent logistic circuit model.
\end{proposition} \begin{figure}[t] \centering \begin{minipage}{0.48\textwidth} \centering \scalebox{0.90}{ \begin{tikzpicture}[circuit logic US, nnf] \node (output) [] at (0,0){}; \node (root) [nnf3or] at ($(output) + (0pt,-0.7*30pt)$){}; \node (a1) [nnf4and] at ($(root) + (0pt,-1*30pt)$){}; \node (o1a) [nnf2or] at ($(a1) + (-6*15pt,-1*30pt)$){}; \node (o1b) [nnf2or] at ($(a1) + (-2*15pt,-1*30pt)$){}; \node (o1c) [nnf2or] at ($(a1) + (2*15pt,-1*30pt)$){}; \node (o1d) [nnf2or] at ($(a1) + (6*15pt,-1*30pt)$){}; \node (ta1) [nnfterm] at ($(o1a) + (-1*15pt,-1*30pt)$){$A$}; \node (ta0) [nnfterm] at ($(o1a) + (1*15pt,-1*30pt)$){$\neg A$}; \node (tb1) [nnfterm] at ($(o1b) + (-1*15pt,-1*30pt)$){$B$}; \node (tb0) [nnfterm] at ($(o1b) + (1*15pt,-1*30pt)$){$\neg B$}; \node (tc1) [nnfterm] at ($(o1c) + (-1*15pt,-1*30pt)$){$C$}; \node (tc0) [nnfterm] at ($(o1c) + (1*15pt,-1*30pt)$){$\neg C$}; \node (td1) [nnfterm] at ($(o1d) + (-1*15pt,-1*30pt)$){$D$}; \node (td0) [nnfterm] at ($(o1d) + (1*15pt,-1*30pt)$){$\neg D$}; \begin{scope}[on background layer] \draw [nnfedge] (output) -- (root.output); \draw [nnfedge] (a1.output) -- (root.input 2) node[pos=0.32,right] {$\theta_0$}; \draw [nnfedge] (o1a.output) -- ++(up:0.18) -| (a1.input 1); \draw [nnfedge] (o1b.output) -- ++(up:0.08) -| (a1.input 2); \draw [nnfedge] (o1c.output) -- ++(up:0.08) -| (a1.input 3); \draw [nnfedge] (o1d.output) -- ++(up:0.18) -| (a1.input 4); \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (o1a.input 1) node[pos=0.35,above left] {$\theta_A$}; \draw [nnfedge] (ta0.north) -- ++(up:0.15) -| (o1a.input 2) node[pos=0.35,above right] {$\theta_{\neg A}$}; \draw [nnfedge] (tb1.north) -- ++(up:0.15) -| (o1b.input 1) node[pos=0.35,above left] {$\theta_B$}; \draw [nnfedge] (tb0.north) -- ++(up:0.15) -| (o1b.input 2) node[pos=0.35,above right] {$\theta_{\neg B}$}; \draw [nnfedge] (tc1.north) -- ++(up:0.15) -| (o1c.input 1) node[pos=0.35,above left] {$\theta_C$}; \draw [nnfedge] (tc0.north) -- ++(up:0.15) -| (o1c.input 2) node[pos=0.35,above right]
{$\theta_{\neg C}$}; \draw [nnfedge] (td1.north) -- ++(up:0.15) -| (o1d.input 1) node[pos=0.35,above left] {$\theta_D$}; \draw [nnfedge] (td0.north) -- ++(up:0.15) -| (o1d.input 2) node[pos=0.35,above right] {$\theta_{\neg D}$}; \end{scope} \end{tikzpicture} } \caption{Logistic regression represented as a logistic circuit} \label{circuit: linear weights} \end{minipage} \end{figure} \subsubsection{Boolean Flow Indicators} Next, let us consider a special case that makes no assumptions about circuit structure, but that requires the inputs to be fully binary. Such a circuit would have Boolean flows through every wire. Instead of working with the input variables $\bf X$, we can introduce new features that are indicator variables, telling us how the example propagates through the circuit, and which wires have a Boolean flow that reaches the circuit root. The circuit flows (indicators) decide which parameters are summed into the weight function; this process has been implicitly revealed in Figure~\ref{fig: logistic circuit}. By introducing such indicators, we can always obtain a linear weight function of composite features that are extracted from sample ${\bf x}$. Next, we generalize this idea of introducing wire features to arbitrary logistic circuits. \subsection{Reduction to Logistic Regression} We will now consider the most general case, with continuous input data and no assumptions on the circuit structure. \begin{proposition} \label{proposition: logistic regression} Any logistic circuit model can be reduced to a logistic regression model over a particular feature~set. \end{proposition} \begin{corollary} Logistic circuit cross-entropy loss is convex. \end{corollary} To prove Proposition~\ref{proposition: logistic regression}, we need to rewrite the classification distribution in Definition~\ref{de: circuit semantics} as follows. $$ {\Pr} ( Y = 1 \mid {\bf x}) = \frac{1}{1+ \exp(- \bm{\mathbbm{x}} \cdot {\bm \theta})}. 
$$ Here, $\bm{\mathbbm{x}}$ is some vector of features extracted from the raw example $\bf x$. This feature vector can only depend on $\bf x$; not on the parameters $\bm \theta$. Thus, the fundamental question is whether we can decompose $g_n({\bf x})$ into $\bm{\mathbbm{x}} \cdot {\bf \theta}$ for all nodes $n$. We prove this to be true by induction: \begin{itemize} \item[--] \underline{Base case}: $n$ is a leaf (input) node. It is obvious $g_n$ can be expressed as $\bm{\mathbbm{x}} \cdot {\bf \theta}$ since $g_n$ always equals 0. \item[--] \underline{Induction step}: assume $g$ of all the nodes under node $n$ can be expressed as $\bm{\mathbbm{x}} \cdot {\bf \theta}$. We need to consider two cases: \begin{enumerate}[wide=0pt, leftmargin=\dimexpr\labelwidth + 2\labelsep\relax] \item If $n$ is an AND gate having (w.l.o.g.) two children, prime $p$ and sub $s$. Given $g_{p} = \bm{\mathbbm{x}}_{p} \cdot \theta_{p}$ and $g_{s} =\bm{\mathbbm{x}}_s \cdot \theta_s$, \begin{align*} g_n &= \bm{\mathbbm{x}}_{p} \cdot \theta_{p} + \bm{\mathbbm{x}}_s \cdot \theta_s \\ &= \begin{bmatrix} \bm{\mathbbm{x}}_p \\ \bm{\mathbbm{x}}_s \end{bmatrix} \cdot \begin{bmatrix} \theta_p \\ \theta_s \end{bmatrix}. \end{align*} \item If $n$ is an OR gate with (child node, wire parameter) inputs $\left\{(c_1,\theta_1),\dots,(c_m,\theta_m)\right\}$. Given $g_{c_i} = \bm{\mathbbm{x}}_{c_i} \cdot \theta_{c_i}$, \begin{align*} g_n & = \sum_i f(n,{\bf x},c_i) \cdot \left( \bm{\mathbbm{x}}_{c_i} \cdot \theta_{c_i}+ \theta_i\right) \\ &= \begin{bmatrix} f(n,{\bf x},c_1) \cdot\bm{\mathbbm{x}}_{c_1} \\ f(n,{\bf x},c_1) \\ \vdots \\ f(n,{\bf x},c_m) \cdot\bm{\mathbbm{x}}_{c_m} \\ f(n,{\bf x},c_m) \end{bmatrix} \cdot \begin{bmatrix} \theta_{c_1} \\ \theta_1\\ \vdots \\ \theta_{c_m} \\ \theta_m \end{bmatrix}. \end{align*} \end{enumerate} \end{itemize} Note that this proof holds true regardless of whether the input sample ${\bf x}$ is binary or real-valued. 
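Given the reduction, any off-the-shelf convex optimizer fits the parameters. Below is a self-contained sketch of this logistic-regression step using batch gradient descent on the cross-entropy loss; the feature matrix is random stand-in data (not actual circuit flow features), and all names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Minimize cross-entropy over theta for a fixed feature matrix X (convex)."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ theta)
        theta -= lr * X.T @ (p - y) / len(y)  # exact gradient of the loss
    return theta

# Stand-in for the extracted feature vectors: one row per training example.
rng = np.random.default_rng(0)
X = rng.random((200, 6))
true_theta = np.array([2.0, -3.0, 1.0, 0.0, 4.0, -1.0])
y = (X @ true_theta > 0).astype(float)   # linearly separable labels

theta = fit_logistic(X, y)
train_acc = np.mean((sigmoid(X @ theta) > 0.5) == (y == 1))
print(train_acc)
```

The paper itself uses stochastic gradient descent; full-batch descent is shown here only because it keeps the sketch short, and convexity guarantees both reach the same optimum.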
With this proof, it is obvious that learning the parameters of a logistic circuit is equivalent to logistic regression on features $\mathbbm{x}$. We refer readers to \citet{Rennie2005} for a detailed proof that logistic regression is convex. Given this correspondence, any convex optimization technique can now be brought to bear on the problem of learning the parameters of a logistic circuit. In particular, we use stochastic gradient descent for this task. \subsection{Global Circuit Flow Features} In the proof of Proposition~\ref{proposition: logistic regression}, features $\mathbbm{x}$ are computed recursively by induction. However, it is not clear what these features represent, and how they are connected to the input data. In this section we assign semantics to those extracted features. They are the \emph{global circuit flow} of the observed example through the circuit. Global circuit flow is defined with respect to the root of a logistic circuit. \begin{definition}[Global Circuit Flow] \label{definition: global flow} Consider a logistic circuit over variables $\bf X$ rooted at OR gate $r$. The global circuit flow $f_r(n, {\bf x}, c)$ of input ${\bf x} $ between parent $n$ and child $c$ is defined inductively as follows. The global circuit flow between root $r$ and its child $c$ is the (local) probabilistic circuit flow: $f_r(r, {\bf x}, c) = f(r, {\bf x}, c)$. Then, for any node $n$ with parents $v_1,\dots,v_m$, we have that \begin{itemize} \item[--] if $n$ is an AND gate, global flow from child $c$ is \begin{align*} f_r(n, {\bf x}, c) = \sum_{i=1}^m f_r(v_i, {\bf x}, n), \end{align*} \item[--] if $n$ is an OR gate, global flow from child $c$ is \begin{align*} f_r(n, {\bf x}, c) &= f(n, {\bf x},c) \cdot \sum_{i=1}^m f_r(v_i, {\bf x}, n). \end{align*} \end{itemize} \end{definition} The red wires in Figure~\ref{fig: logistic circuit} have a global circuit flow of 1 for the given Boolean input. 
In general, global circuit flow assigns a continuous probability value to each wire. Based on global circuit flow, we postulate the following alternative semantics for logistic circuits. \begin{definition}[Logistic Circuit Alternative Semantics] \label{definition: circuit semantic using global flow} Let $\mathcal{W}$ be the set of all wires $(n, \theta,c)$ between OR gates $n$ and children $c$ with parameters $\theta$. Then, a logistic circuit rooted at node $r$ defines the weight function $$ g_r({\bf x}) = \sum_{(n,\theta,c) \in \mathcal{W}} f_r(n,{\bf x},c) \cdot \theta. $$ \end{definition} Note that the definition of global circuit flows, as well as our alternative semantics, follows a top-down induction. In contrast, the original semantics in Definition~\ref{de: circuit semantics} follow a bottom-up induction. We resolve this discrepancy next. \begin{proposition} \label{proposition: features} The features $\mathbbm{x}$ constructed in the proof of Proposition~\ref{proposition: logistic regression} are equivalent to the global flows $f_r(n,{\bf x},c)$. \end{proposition} \begin{corollary} The bottom-up semantics of Definition~\ref{de: circuit semantics} and the top-down semantics of Definition~\ref{definition: circuit semantic using global flow} are equivalent. \end{corollary} \noindent We defer the proof of this proposition to Appendix~\ref{section: proof of proposition}. Recall that, without parameters, a logistic circuit is simply a logical circuit, which means that gates in a logistic circuit have real meaning: they correspond to some logical sentence. Hence, the values of the global circuit flow features~$\mathbbm{x}$ correspond to probabilities of these logical sentences according to the input vector ${\bf x}$. This provides us with a precious opportunity to assign meaning to the features learned by logistic circuits. We will revisit this point in Section~\ref{s: interpretability}, where we also visualize some global circuit flow features.
\subsection{Computing Global Flow Features Efficiently} \label{section: computing flows} While logistic circuit parameter learning is convex, we would like to also guarantee that the required feature computation is tractable. This section discusses efficient methods to calculate global flow features $\bm{\mathbbm{x}}$ (i.e., $f_r(n, {\bf x}, c)$) from training samples $\bf x$ offline, before parameter learning. As is clear from Definition~\ref{definition: probabilistic flow}, circuit flows make extensive use of node probabilities. We design our computation to consist of two parts, and dedicate the first part to the calculation of node probabilities. The first part is a bottom-up linear pass over the circuit starting with leaf nodes whose probabilities are directly provided by the input sample; see the details in Appendix~\ref{section: node probabilities}. The second part makes use of these node probabilities to calculate the global circuit flow features in linear time. It is a top-down implementation of the recursion in Definition~\ref{definition: global flow}; see its details in Appendix~\ref{s: calculation of global flows}. Note that these computations correspond to the partial derivative computations used in arithmetic circuits for the purpose of probabilistic inference~\cite{DarwicheJACM}. Our algorithm is completely compatible with fast vector arithmetic: instead of inputting one single sample each time, one can directly supply the algorithms with a vector of samples (e.g., a mini batch), and this yields significant speedups. \section{Structure Learning} \label{s:structurelearning} This section presents an algorithm to learn a compact logical circuit structure for logistic circuits from data. For simplicity of designing the primitive operations, we assume AND gates always have two inputs (prime and sub). \subsection{Learning Primitive} The split operation was first introduced to modify the structure of PSDD circuits \cite{Liang2017}. 
We adopt it here with minor changes\footnote{Compared to the splits in LearnPSDD~\cite{Liang2017}, we do not limit constraints to be on primes.} as the primitive operation for our structure learning algorithm. Splitting an AND gate happens by imposing two additional constraints that are \emph{mutually exclusive} and \emph{exhaustive}, in particular by making two opposing variable assignments. Executing a split creates partial copies of the gate and some of its descendants. Furthermore, one can choose to duplicate additional nodes up to a fixed depth (3 in our experiments). We refer readers to \citet{Liang2017} for further details on the algorithm for executing splits. Splits are ideal primitives to change the classifier induced by a logistic circuit: they directly affect the circuit flows (see Figure~\ref{fig: split with flow change}). By imposing constraints on AND gates, splits alter the node probabilities associated with the affected AND gates. Following Definition~\ref{definition: probabilistic flow}, the circuit flows on the wires out of those AND gates adapt accordingly. While Figure~\ref{fig: split with flow change} focuses on the immediately affected wires, the effect of a split on circuit flows can propagate downward for several levels, depending on the depth of node duplication. Still, the effects of a split on both the structure of a logistic circuit and the circuit flows are very local, contained in the sub-circuit rooted at the OR parent of the split AND gate. However, its effect on the parameters is global: once a split is executed, the whole parameter set needs to be re-trained.
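The flow changes tabulated in Figure~\ref{fig: split with flow change} follow directly from Definition~\ref{definition: probabilistic flow}. A quick numeric check (a throwaway helper of ours, mirroring the wire labels $f_0$, $f_1$, $f_2$ and assuming a fully factorized input over $A$ and $B$):

```python
def split_flows(a, b):
    """Wire flows before/after splitting the AND gate of the figure on A.

    a, b: marginal probabilities of A and B under the input q.
    """
    f0 = b            # before: f0 = Pr(B), since the root OR is exhaustive (Pr = 1)
    f1 = (1 - a) * b  # after the split: wire into AND(not A, B)
    f2 = a * b        # after the split: wire into AND(A, B)
    return f0, f1, f2

print(split_flows(0.5, 0.6))  # (0.6, 0.3, 0.3), matching the figure's table
print(split_flows(0.4, 0.8))  # f1 + f2 == f0: a split redistributes flow
```

The invariant $f_1 + f_2 = f_0$ makes the locality argument concrete: the split refines how flow is routed, without changing the total flow entering the affected sub-circuit.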
\begin{figure}[t] \centering \begin{subfigure}[t]{0.22\textwidth} \centering \scalebox{0.85}{ \begin{tikzpicture}[circuit logic US, nnf] \node (output) [] at (455pt,1.88*30pt){}; \node (root) [nnf2or] at ($($(output) + (0pt,-0.7*30pt)$)$){}; \node (a1) [nnf2and] at ($(root) + (-20pt,-1.0*30pt)$){}; \node (a2) [nnf2and] at ($(root) + (20pt,-1.0*30pt)$){}; \node (o12) [nnf2or] at ($(root) + (0pt,-1.95*30pt)$){}; \node (ta1) [nnfterm] at ($(o12) + (-10pt,-0.7*30pt)$){$A$}; \node (ta0) [nnfterm] at ($(o12) + (10pt,-0.7*30pt)$){$\neg A$}; \node (tb1) [nnfterm] at ($(a1) + (-2.8pt,-0.8*30pt)$){$B$}; \node (tb0) [nnfterm] at ($(a2) + (2.8pt,-0.8*30pt)$){$\neg B$}; \begin{scope}[on background layer] \draw [nnfedge] (output) -- (root.output); \draw [nnfedge] (ta1.north) -- ++(up:0.10) -| (o12.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.10) -| (o12.input 2); \draw [nnfedge] (tb1.north) -- (a1.input 1); \draw [nnfedge] (tb0.north) -- (a2.input 2); \draw [nnfedge] (o12.output) -- ++(up:0.10) -| (a1.input 2); \draw [nnfedge] (o12.output) -- ++(up:0.10) -| (a2.input 1); \draw [nnfedge,red] (a1.output) -- ++(up:0.10) -| (root.input 1) node[pos=0.3,above left,red] {$f_0$}; \draw [nnfedge] (a2.output) -- ++(up:0.10) -| (root.input 2); \end{scope} \end{tikzpicture} } \caption{Before split of $f_0$ on $A$} \label{fig: split:before} \end{subfigure} ~~~ \begin{subfigure}[t]{0.22\textwidth} \centering \scalebox{0.85}{ \begin{tikzpicture}[circuit logic US, nnf] \node (output) [] at (455pt,1.88*30pt){}; \node (root) [nnf3or] at ($($(output) + (0pt,-0.7*30pt)$)$){}; \node (a11) [nnf2and] at ($(root) + (-50pt,-1.0*30pt)$){}; \node (a12) [nnf2and] at ($(root) + (-20pt,-1.0*30pt)$){}; \node (a2) [nnf2and] at ($(root) + (20pt,-1.0*30pt)$){}; \node (o12) [nnf2or] at ($(root) + (0pt,-1.95*30pt)$){}; \node (ta1) [nnfterm] at ($(o12) + (-10pt,-0.7*30pt)$){$A$}; \node (ta0) [nnfterm] at ($(o12) + (10pt,-0.7*30pt)$){$\neg A$}; \node (ta0c) [nnfterm] at ($(o12) + (-52.8pt,-0.7*30pt)$){$\neg A$}; \node (tb1) [nnfterm] at ($(a1) +
(-9pt,-1.65*30pt)$){$B$}; n (tb0) [nnfterm] at ($(a2) + (2.8pt,-0.8*30pt)$){$\neg B$}; \begin{scope}[on background layer] \draw [nnfedge] (output) -- (root.output); \draw [nnfedge] (ta1.north) -- ++(up:0.10) -| (o12.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.10) -| (o12.input 2); \draw [nnfedge] (tb0.north) -- (a2.input 2); \draw [nnfedge] (tb1.north) -- ++(up:0.50) -| (a12.input 1); \draw [nnfedge] (tb1.north) -- ++(up:0.50) -| (a11.input 2); \draw [nnfedge] (ta0c.north) -- ++(up:0.10) -| (a11.input 1); \draw [nnfedge] (ta1.north) -- ++(up:0.10) -| (a12.input 2); \draw [nnfedge] (o12.output) -- ++(up:0.10) -| (a2.input 1); \draw [nnfedge,red] (a11.output) -- ++(up:0.20) -| (root.input 1) node[pos=0.3,above left,red] {$f_1$}; \draw [nnfedge,red] (a12.output) -- ++(up:0.10) -| (root.input 2) node[pos=0.2,below right,red] {$f_2$}; \draw [nnfedge] (a2.output) -- ++(up:0.10) -| (root.input 3); \end{scope} \end{tikzpicture} } \caption{After split of $f_0$ on $A$} \label{fig: split:after} \end{subfigure} \\[5pt] \begin{subfigure}[t]{0.48\textwidth} \centering \begin{sc} {\fontsize{9}{9}\selectfont \begin{tabular}{ @{} ll c c c cc@{} } \toprule $A$ & $B$ & & $f_0$ & & $f_1$ & $f_2$ \\ \midrule \midrule 1 & 1 & &1 & & 0 & 1 \\ 0 & 1 & &1 & & 1 & 0 \\ \midrule 0.5 & 0.6 & & 0.6 & & 0.30 & 0.30 \\ 0.4 & 0.8 & & 0.8 & & 0.48 & 0.32 \\ \bottomrule \end{tabular} } \end{sc} \caption{Circuit flow before and after the split.}\label{table: flow change} \end{subfigure} \caption{A split changes the circuit flow.}\label{fig: split with flow change} \end{figure} \subsection{Learning Algorithm} The overall structure learning algorithm for logistic circuits, built on top of the split operation, proceeds as follows. Iteratively, one split is executed to change the structure, followed by parameter learning. We only consider single-variable split constraints and first select which AND gate to split, followed by a selection of which variable to split on. 
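These two selection steps can be sketched as follows. The Python below is our own illustration: the data layout (per-example partial derivatives of each AND gate's output parameter, and per-example marginals of each candidate variable) and all function names are assumptions for exposition, not the released implementation.

```python
def variance(xs, ws=None):
    """(Weighted) population variance of a list of numbers."""
    if ws is None:
        ws = [1.0] * len(xs)
    total = sum(ws)
    mean = sum(w * x for w, x in zip(ws, xs)) / total
    return sum(w * (x - mean) ** 2 for w, x in zip(ws, xs)) / total

def select_gate(grads_per_gate):
    """Split the AND gate whose output parameter has the highest
    variance of its per-example partial derivatives."""
    return max(grads_per_gate, key=lambda g: variance(grads_per_gate[g]))

def select_variable(grads, marginals_per_var):
    """Split on the variable whose two groups (examples weighted by
    Pr(X) and by Pr(~X)) have the smallest weighted variances."""
    def score(var):
        p = marginals_per_var[var]  # per-example Pr(X=1)
        return (variance(grads, ws=p)
                + variance(grads, ws=[1 - q for q in p]))
    return min(marginals_per_var, key=score)

# Toy usage: gate g2 has conflicting gradient directions, so it is split;
# variable A separates the negative-gradient examples from the positive one.
grads = {"g1": [0.9, 1.0, 1.1], "g2": [-2.0, 2.1, -1.9]}
assert select_gate(grads) == "g2"
marg = {"A": [1.0, 0.0, 1.0], "B": [0.5, 0.5, 0.5]}
assert select_variable(grads["g2"], marg) == "A"
```

The intuition is that after splitting on such a variable, each new copy of the gate receives examples whose gradients point in one direction, so its new parameter can grow to a large magnitude.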
When using gradient descent, one hopes that the parameter on the AND gate output consistently has its partial derivatives pointing in the same direction for all training examples. This will steadily push the parameter to a large magnitude. If this is not the case, we use splits to alter the flow of examples through the circuit. Specifically, those AND gates whose associated output parameter has a large variance of its partial derivative (that is, the derivative of the loss function w.r.t.~that parameter) require splitting for the parameters to improve. We simply select the AND gate whose output parameter has the highest such variance on the training data. Given an AND gate to split, we consider candidate variables $X$ to execute the split with. We construct two sets of training examples that affect this gate: in one group, each example is weighted by the marginal probability of $X$; in the other, by the marginal probability of $\neg X$. Next, we calculate the within-group weighted variances of the partial derivatives. The variable with the smallest weighted variances gets picked, as this suggests the split will introduce new parameters whose gradients align in one direction. \begin{table}[t] \centering \begin{minipage}{0.48\textwidth} \caption{Classification accuracy of logistic circuits in context with commonly used existing models.
We report the details of those existing models in Appendix~\ref{s: model details}.} \label{table: accuracy} \centering {\fontsize{8.5}{9}\selectfont \begin{sc} \begin{tabular}{ @{}l c c @{} } \toprule Accuracy $\%$ on Dataset & Mnist & Fashion \\ \midrule\midrule Baseline: Logistic Regression & 85.3 & 79.3 \\ Baseline: Kernel Logistic Regression & 97.7 & 88.3 \\ Random Forest & 97.3 & 81.6 \\ 3-layer MLP\label{3MLP} & 97.5 & 84.8\\ RAT-SPN \cite{rat-spn2018} &98.1 & 89.5\\ SVM with RBF Kernel & 98.5 & 87.8 \\ 5-Layer MLP & 99.3 & 89.8 \\ \midrule Logistic Circuit (binary) & 97.4 & 87.6 \\ Logistic Circuit (real-valued) & 99.4 & 91.3\\ \midrule CNN with 3 conv layers & 99.1 &90.7\\ Resnet \cite{he2016cvpr}& 99.5 & 93.6 \\ \bottomrule \end{tabular} \end{sc} } \end{minipage} \end{table} \begin{table}[tb] \centering \begin{minipage}{0.48\textwidth} {\footnotesize \caption{Number of parameters of logistic circuits in context with existing SGD-based models, when achieving the classification accuracy reported in Table~\ref{table: accuracy} } \label{table: size} \centering {\fontsize{8.3}{9}\selectfont \begin{sc} \begin{tabular}{ @{}l r r @{} } \toprule Number of Parameters & Mnist & Fashion \\ \midrule\midrule Baseline: Logistic Regression & $<$1K & $<$1K \\ Baseline: Kernel Logistic Regression & 1,521 K & 3,930K\\ \midrule Logistic Circuit (real-valued) & 182K & 467K\\ Logistic Circuit (binary) & 268K & 614K \\ \midrule 3-layer MLP & 1,411K & 1,411K \\ RAT-SPN~ \cite{rat-spn2018} & 8,500K & 650K \\ CNN with 3 conv layers & 2,196K & 2,196K\\ 5-Layer MLP & 2,411K & 2,411K \\ Resnet \cite{he2016cvpr} & 4,838K & 4,838K \\ \bottomrule \end{tabular} \end{sc} }} \end{minipage} \end{table} \begin{table*}[tb] \caption{Comparison of logistic circuits with MLPs when trained with different percentages of the dataset.} \label{table: data efficiency} \centering {\fontsize{9}{9}\selectfont \begin{sc} \begin{tabular}{ @{}l c c c c c c@{} } \toprule \multirow{2}{*}{Accuracy $\%$ with 
$\%$ of Training Data }& \multicolumn{3}{c}{MNIST} & \multicolumn{3}{c}{Fashion}\\ \cmidrule{2-4} \cmidrule{5-7} &100$\%$ & 10$\%$ & 2$\%$ & 100$\%$ & 10$\%$ & 2$\%$ \\ \midrule\midrule 5-layer MLP & 99.3 & {\bf 98.2} & 94.3 & 89.8 & 86.5 & 80.9 \\ CNN with 3 Conv Layers & 99.1 & 98.1 & 95.3 &90.7 & 87.6 & 83.8 \\ \midrule Logistic Circuit (Binary) & 97.4 & 96.9 & 94.1 & 87.6 & 86.7 & 83.2 \\ Logistic Circuit (Real-Valued) & {\bf 99.4} & 97.8 & {\bf 96.1} & {\bf 91.3} & {\bf 87.8} & {\bf 86.0} \\ \bottomrule \end{tabular} \end{sc} } \end{table*} \section{Empirical Evaluation} \label{s:experiments} In this section, we empirically evaluate the competitiveness of our learner on three aspects: classification accuracy, model complexity, and data efficiency.\footnote{Open-source code and experiments are available at \url{https://github.com/UCLA-StarAI/LogisticCircuit}.} Moreover, we visualize the most important active feature with regards to the given sample to provide local interpretation for why the learned logistic circuit makes such classification. \subsection{Setup \& Data Preprocessing} We choose MNIST and Fashion\footnote{A dataset of Zalando's images, intended as a more challenging drop-in replacement of MNIST \cite{fashion2017}.} as our testbeds. Since logistic circuits are intended for binary classification, we use the standard ``one vs.~rest" approach to construct an ensemble multi-class classifier such that our method can be evaluated on these two datasets. When running the binary logistic circuit, we transform pixels that are smaller than their mean plus $0.05$ standard deviation to 0 and the rest to 1. When running the real-valued version, we transform pixels to $[0,1]$ by dividing them by 255. All experiments start with a predefined initial structure; we defer its details to Appendix~\ref{appendix: initial structure}. The learned structure with the highest F1 score on validation after 48 hours of running is used for evaluation. 
All experiments are run on single CPUs. \begin{figure}[t] \centering \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=0.95\textwidth]{digit0} \end{subfigure} ~ \begin{subfigure}[t]{0.22\textwidth} \centering \includegraphics[width=0.95\textwidth]{tshirt} \end{subfigure} \caption{Visualization of the single compositional feature that contributes most to the classification probability with regard to the input image. Features are marked in orange. Left: a digit 0 from MNIST. Right: a t-shirt from Fashion. } \label{fig: feature visualization} \end{figure} \subsection{Classification Accuracy} \label{s: classification accuracy} Table~\ref{table: accuracy} summarizes the classification accuracy on test data. Learning a logistic circuit on the binary data is on par with a 3-layer MLP; the real-valued version outperforms 5-layer MLPs and even CNNs with 3 convolutional layers. The fact that logistic circuits achieve better accuracy than CNNs is surprising, since logistic circuits do not use convolutions, which are specifically designed to exploit image invariances. In addition, we would like to emphasize our comparison with two of the baselines. As parameter learning of logistic circuits is equivalent to logistic regression, one can view structure learning of logistic circuits as a process of constructing composite features from raw samples. The significant improvement over standard logistic regression demonstrates the effectiveness of our method in extracting valuable features; kernel logistic regression can only partially bridge the gap in performance, and, as shown later, it does so at the cost of introducing many more parameters. We also want to call attention to our comparison with RAT-SPN, the current state of the art in discriminative learning for probabilistic circuits. SPNs are another form of circuit representation, with a less restrictive structure.
Parameter learning in SPN is not convex and generally requires other techniques such as EM or non-convex optimization. The empirical observation that our method achieves significantly better classification accuracy than RAT-SPN demonstrates that in structure learning, imposing more restrictions on the model's structural syntax may be beneficial. The syntactic restriction of logistic circuits requires decomposability and determinism; without them, convex parameter learning does not appear to be possible. As structure learning is built on top of parameter learning, a well-behaved parameter learning loss with a unique optimum can provide more informative guidance about how to adapt the structure, leading to a more competitive structure learning algorithm overall. \begin{figure*}[t] \centering \begin{subfigure}[t]{0.62\textwidth} \centering \scalebox{0.7}{ \begin{tikzpicture}[circuit logic US, nnf] \def30pt{30pt} n (output) [] at (105pt,1.88*30pt){} n (root) [nnf2or] at ($($(output) + (0pt,-0.7*30pt)$)$){}; n (ar1) [nnf2and] at ($(root) + (-60pt,-0.9*30pt)$){}; n (ar2) [nnf2and] at ($(root) + (60pt,-0.9*30pt)$){}; n (ty1) [nnfterm] at ($(ar1) + (20pt,-0.8*30pt)$){$Y$}; n (ty0) [nnfterm] at ($(ar2) + (-20pt,-0.8*30pt)$){$\neg Y$}; n (rootL) [nnf2or] at ($(0,0) + (0pt,-0.7*30pt)$){} n (a1) [nnf2and] at ($(rootL) + (-60pt,-1*30pt)$){}; n (a2) [nnf2and] at ($(rootL) + (60pt,-1*30pt)$){}; n (o12) [nnf2or] at ($(a1) + (20pt,-1*30pt)$){}; n (o21) [nnf2or] at ($(a2) + (-20pt,-1*30pt)$){}; n (aeq1) [nnf2and] at ($(rootL) + (-60pt,-3*30pt)$){}; n (aneq) [nnf2and] at ($(rootL) + (0pt,-3*30pt)$){}; n (aeq2) [nnf2and] at ($(rootL) + (60pt,-3*30pt)$){}; n (oeq) [nnf2or] at ($(rootL) + (20pt,-4.6*30pt)$){}; n (oneq) [nnf2or] at ($(rootL) + (80pt,-4.6*30pt)$){}; n (se1) [nnf2or] at ($(rootL) + (-70pt,-4.7*30pt)$){}; n (se2) [nnf2or] at ($(rootL) + (-30pt,-4.7*30pt)$){}; n (aab11) [nnf2and] at ($(oeq) + (-14.8pt,-1*30pt)$){}; n (aab00) [nnf2and] at ($(oeq) + (14.7pt,-1*30pt)$){}; n 
(aab10) [nnf2and] at ($(oneq) + (-14.8pt,-1*30pt)$){}; n (aab01) [nnf2and] at ($(oneq) + (15.2pt,-1*30pt)$){}; n (tc1) [nnfterm] at ($(a1) + (-20pt,-.8*30pt)$){$A$}; n (tc0) [nnfterm] at ($(a2) + (20pt,-.8*30pt)$){$\neg A$}; n (te1) [nnfterm] at ($(rootL) + (-72.5pt,-6.1*30pt)$){$B$}; n (te0) [nnfterm] at ($(rootL) + (-27.4pt,-6.1*30pt)$){$\neg B$}; n (ta1) [nnfterm] at ($(rootL) + (2.5pt,-7*30pt)$){$C$}; n (ta0) [nnfterm] at ($(rootL) + (32.1pt,-7*30pt)$){$\neg C$}; n (tb1) [nnfterm] at ($(rootL) + (97.8pt,-7*30pt)$){$D$}; n (tb0) [nnfterm] at ($(rootL) + (67.8pt,-7*30pt)$){$\neg D$}; \begin{scope}[on background layer] \draw [nnfedge] (output) -- (root.output); \draw [nnfedge] (ar1.output) -- ++(up:0.10) -| (root.input 1) node[pos=0.4,above left] {$0.6$}; \draw [nnfedge] (ar2.output) -- ++(up:0.10) -| (root.input 2) node[pos=0.4,above right] {$0.4$}; \draw [nnfedge] (rootL.output) -- ++(up:0.10) -| (ar1.input 1); \draw [nnfedge] (ty1.north) -- ++(up:0.15) -| (ar1.input 2); \draw [nnfedge] (a1.output) -- ++(up:0.15) -| (rootL.input 1) node[pos=0.4,above left] {$0.9$}; \draw [nnfedge] (a2.output) -- ++(up:0.15) -| (rootL.input 2) node[pos=0.4,above right] {$0.1$}; \draw [nnfedge] (o12.output) -- ++(up:0.15) -| (a1.input 2); \draw [nnfedge] (o21.output) -- ++(up:0.15) -| (a2.input 1); \draw [nnfedge] (aeq1.output) -- ++(up:0.15) -| (o12.input 1) node[pos=0.3,above left] {$0.2$}; \draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o12.input 2) node[pos=0.4,above right] {$0.8$}; \draw [nnfedge] (aeq2.output) -- ++(up:0.15) -| (o21.input 2) node[pos=0.3,above right] {$0.4$}; \draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o21.input 1) node[pos=0.4,above left] {$0.6$}; \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq1.input 2); \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq2.input 2); \draw [nnfedge] (oneq.output) -- ++(up:0.65) -| (aneq.input 2); \draw [nnfedge] (se1.output) -- ++(up:0.27) -| (aeq1.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| 
(aeq2.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| (aneq.input 1); \draw [nnfedge] (aab11.output) -- ++(up:0.15) -| (oeq.input 1) node[pos=0.3,above left] {$0.1$}; \draw [nnfedge] (aab00.output) -- ++(up:0.15) -| (oeq.input 2) node[pos=0.3,above right] {$0.9$}; \draw [nnfedge] (aab10.output) -- ++(up:0.15) -| (oneq.input 1) node[pos=0.3,above left] {$0.3$}; \draw [nnfedge] (aab01.output) -- ++(up:0.15) -| (oneq.input 2) node[pos=0.3,above right] {$0.7$}; \draw [nnfedge] (tc1.north) -- ++(up:0.15) -| (a1.input 1); \draw [nnfedge] (tc0.north) -- ++(up:0.15) -| (a2.input 2); \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se1.input 1) node[pos=0.65,above left] {$0.1$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se1.input 2) node[pos=0.52,above right] {$0.9$}; \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se2.input 1) node[pos=0.66,above left] {$0.8$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se2.input 2) node[pos=0.47,above right] {$0.2$}; \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab11.input 1); \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab10.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab01.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab00.input 1); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab11.input 2); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab01.input 2); \draw [nnfedge] (tb0.north) -- ++(up:0.75) -| (aab00.input 2); \draw [nnfedge] (tb0.north) -- ++(up:0.75) -| (aab10.input 2); \end{scope} n (rootR) [nnf2or] at ($(210pt,0) + (0pt,-0.7*30pt)$){} n (a1) [nnf2and] at ($(rootR) + (-60pt,-1*30pt)$){} n (a2) [nnf2and] at ($(rootR) + (60pt,-1*30pt)$){} n (o12) [nnf2or] at ($(a1) + (20pt,-1*30pt)$){} n (o21) [nnf2or] at ($(a2) + (-20pt,-1*30pt)$){} n (aeq1) [nnf2and] at ($(rootR) + (-60pt,-3*30pt)$){} n (aneq) [nnf2and] at ($(rootR) + (0pt,-3*30pt)$){} n (aeq2) [nnf2and] at ($(rootR) + (60pt,-3*30pt)$){} n (oeq) [nnf2or] at ($(rootR) + (20pt,-4.6*30pt)$){} n (oneq) [nnf2or] at ($(rootR) + 
(80pt,-4.6*30pt)$){} n (se1) [nnf2or] at ($(rootR) + (-70pt,-4.7*30pt)$){} n (se2) [nnf2or] at ($(rootR) + (-30pt,-4.7*30pt)$){} n (aab11) [nnf2and] at ($(oeq) + (-14.8pt,-1*30pt)$){} n (aab00) [nnf2and] at ($(oeq) + (14.7pt,-1*30pt)$){} n (aab10) [nnf2and] at ($(oneq) + (-14.8pt,-1*30pt)$){} n (aab01) [nnf2and] at ($(oneq) + (15.2pt,-1*30pt)$){} n (tc1) [nnfterm] at ($(a1) + (-20pt,-.8*30pt)$){$A$}; n (tc0) [nnfterm] at ($(a2) + (20pt,-.8*30pt)$){$\neg A$}; n (te1) [nnfterm] at ($(rootR) + (-72.5pt,-6.1*30pt)$){$B$}; n (te0) [nnfterm] at ($(rootR) + (-27.4pt,-6.1*30pt)$){$\neg B$}; n (ta1) [nnfterm] at ($(rootR) + (2.5pt,-7*30pt)$){$C$}; n (ta0) [nnfterm] at ($(rootR) + (32.1pt,-7*30pt)$){$\neg C$}; n (tb1) [nnfterm] at ($(rootR) + (97.8pt,-7*30pt)$){$D$}; n (tb0) [nnfterm] at ($(rootR) + (67.8pt,-7*30pt)$){$\neg D$}; \begin{scope}[on background layer] \draw [nnfedge] (rootR.output) -- ++(up:0.10) -| (ar2.input 2); \draw [nnfedge] (ty0.north) -- ++(up:0.15) -| (ar2.input 1); \draw [nnfedge] (a1.output) -- ++(up:0.15) -| (rootR.input 1) node[pos=0.4,above left] {$0.4$}; \draw [nnfedge] (a2.output) -- ++(up:0.15) -| (rootR.input 2) node[pos=0.4,above right] {$0.6$}; \draw [nnfedge] (o12.output) -- ++(up:0.15) -| (a1.input 2); \draw [nnfedge] (o21.output) -- ++(up:0.15) -| (a2.input 1); \draw [nnfedge] (aeq1.output) -- ++(up:0.15) -| (o12.input 1) node[pos=0.3,above left] {$0.2$}; \draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o12.input 2) node[pos=0.4,above right] {$0.8$}; \draw [nnfedge] (aeq2.output) -- ++(up:0.15) -| (o21.input 2) node[pos=0.3,above right] {$0.3$}; \draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o21.input 1) node[pos=0.4,above left] {$0.7$}; \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq1.input 2); \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq2.input 2); \draw [nnfedge] (oneq.output) -- ++(up:0.65) -| (aneq.input 2); \draw [nnfedge] (se1.output) -- ++(up:0.27) -| (aeq1.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| 
(aeq2.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| (aneq.input 1); \draw [nnfedge] (aab11.output) -- ++(up:0.15) -| (oeq.input 1) node[pos=0.3,above left] {$0.8$}; \draw [nnfedge] (aab00.output) -- ++(up:0.15) -| (oeq.input 2) node[pos=0.3,above right] {$0.2$}; \draw [nnfedge] (aab10.output) -- ++(up:0.15) -| (oneq.input 1) node[pos=0.3,above left] {$0.5$}; \draw [nnfedge] (aab01.output) -- ++(up:0.15) -| (oneq.input 2) node[pos=0.3,above right] {$0.5$}; \draw [nnfedge] (tc1.north) -- ++(up:0.15) -| (a1.input 1); \draw [nnfedge] (tc0.north) -- ++(up:0.15) -| (a2.input 2); \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se1.input 1) node[pos=0.65,above left] {$0.6$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se1.input 2) node[pos=0.52,above right] {$0.4$}; \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se2.input 1) node[pos=0.66,above left] {$0.9$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se2.input 2) node[pos=0.47,above right] {$0.1$}; \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab11.input 1); \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab10.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab01.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab00.input 1); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab11.input 2); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab01.input 2); \draw [nnfedge] (tb0.north) -- ++(up:0.75) -| (aab00.input 2); \draw [nnfedge] (tb0.north) -- ++(up:0.75) -| (aab10.input 2); \end{scope} \end{tikzpicture} } \caption{Probabilistic circuit for joint distribution $\Pr(Y,A,B,C,D)$} \label{circuit: prop circuit: prob } \end{subfigure} ~~~~~ \begin{subfigure}[t]{0.34\textwidth} \centering \scalebox{0.7}{ \begin{tikzpicture}[circuit logic US, nnf] \def30pt{30pt} n (output) [] at (455pt,1.88*30pt){} n (root) [nnf3or] at ($($(output) + (0pt,-0.7*30pt)$)$){}; n (rootLL) [nnf2or] at ($(0,0) + (455pt,-0.7*30pt)$){} n (a1) [nnf2and] at ($(rootLL) + (-60pt,-1*30pt)$){}; n (a2) [nnf2and] at ($(rootLL) + 
(60pt,-1*30pt)$){}; n (o12) [nnf2or] at ($(a1) + (20pt,-1*30pt)$){}; n (o21) [nnf2or] at ($(a2) + (-20pt,-1*30pt)$){}; n (aeq1) [nnf2and] at ($(rootLL) + (-60pt,-3*30pt)$){}; n (aneq) [nnf2and] at ($(rootLL) + (0pt,-3*30pt)$){}; n (aeq2) [nnf2and] at ($(rootLL) + (60pt,-3*30pt)$){}; n (oeq) [nnf2or] at ($(rootLL) + (20pt,-4.6*30pt)$){}; n (oneq) [nnf2or] at ($(rootLL) + (90pt,-4.6*30pt)$){}; n (se1) [nnf2or] at ($(rootLL) + (-80pt,-4.7*30pt)$){}; n (se2) [nnf2or] at ($(rootLL) + (-35pt,-4.7*30pt)$){}; n (aab11) [nnf2and] at ($(oeq) + (-14.8pt,-1*30pt)$){}; n (aab00) [nnf2and] at ($(oeq) + (14.7pt,-1*30pt)$){}; n (aab10) [nnf2and] at ($(oneq) + (-14.8pt,-1*30pt)$){}; n (aab01) [nnf2and] at ($(oneq) + (15.2pt,-1*30pt)$){}; n (tc1) [nnfterm] at ($(a1) + (-30pt,-.8*30pt)$){$A$}; n (tc0) [nnfterm] at ($(a2) + (30pt,-.8*30pt)$){$\neg A$}; n (te1) [nnfterm] at ($(rootLL) + (-82.5pt,-6.1*30pt)$){$B$}; n (te0) [nnfterm] at ($(rootLL) + (-32.4pt,-6.1*30pt)$){$\neg B$}; n (ta1) [nnfterm] at ($(rootLL) + (2.5pt,-7*30pt)$){$C$}; n (ta0) [nnfterm] at ($(rootLL) + (32.1pt,-7*30pt)$){$\neg C$}; n (tb1) [nnfterm] at ($(rootLL) + (107.8pt,-7*30pt)$){$D$}; n (tb0) [nnfterm] at ($(rootLL) + (77.8pt,-7*30pt)$){$\neg D$}; \begin{scope}[on background layer] \draw [nnfedge] (output) -- (root.output); \draw [nnfedge] (rootLL.output) -- (root.input 2) node[pos=0.4,right] {$\ln\frac{0.6}{0.4}$}; \draw [nnfedge] (a1.output) -- ++(up:0.15) -| (rootLL.input 1) node[pos=0.4,above left] {$\ln\frac{0.9}{0.4}$}; \draw [nnfedge] (a2.output) -- ++(up:0.15) -| (rootLL.input 2) node[pos=0.4,above right] {$\ln\frac{0.1}{0.6}$}; \draw [nnfedge] (o12.output) -- ++(up:0.15) -| (a1.input 2); \draw [nnfedge] (o21.output) -- ++(up:0.15) -| (a2.input 1); \draw [nnfedge] (aeq1.output) -- ++(up:0.15) -| (o12.input 1) node[pos=0.3,above left] {$\ln\frac{0.2}{0.2}$}; \draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o12.input 2) node[pos=0.4,above right] {$\ln\frac{0.8}{0.8}$}; \draw [nnfedge] (aeq2.output) -- 
++(up:0.15) -| (o21.input 2) node[pos=0.3,above right] {$\ln\frac{0.4}{0.3}$}; \draw [nnfedge] (aneq.output) -- ++(up:0.15) -| (o21.input 1) node[pos=0.4,above left] {$\ln\frac{0.6}{0.7}$}; \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq1.input 2); \draw [nnfedge] (oeq.output) -- ++(up:0.15) -| (aeq2.input 2); \draw [nnfedge] (oneq.output) -- ++(up:0.65) -| (aneq.input 2); \draw [nnfedge] (se1.output) -- ++(up:0.27) -| (aeq1.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| (aeq2.input 1); \draw [nnfedge] (se2.output) -- ++(up:0.52) -| (aneq.input 1); \draw [nnfedge] (aab11.output) -- ++(up:0.15) -| (oeq.input 1) node[pos=0.3,above left] {$\ln\frac{0.1}{0.8}$}; \draw [nnfedge] (aab00.output) -- ++(up:0.15) -| (oeq.input 2) node[pos=0.3,above right] {$\ln\frac{0.9}{0.2}$}; \draw [nnfedge] (aab10.output) -- ++(up:0.15) -| (oneq.input 1) node[pos=0.3,above left] {$\ln\frac{0.3}{0.5}$}; \draw [nnfedge] (aab01.output) -- ++(up:0.15) -| (oneq.input 2) node[pos=0.3,above right] {$\ln\frac{0.7}{0.5}$}; \draw [nnfedge] (tc1.north) -- ++(up:0.15) -| (a1.input 1); \draw [nnfedge] (tc0.north) -- ++(up:0.15) -| (a2.input 2); \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se1.input 1) node[pos=0.60,above left] {$\ln\frac{0.1}{0.6}$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se1.input 2) node[pos=0.45,above right] {$\ln\frac{0.9}{0.4}$}; \draw [nnfedge] (te1.north) -- ++(up:0.15) -| (se2.input 1) node[pos=0.25,below] {$\ln\frac{0.8}{0.9}$}; \draw [nnfedge] (te0.north) -- ++(up:0.40) -| (se2.input 2) node[pos=0.47,above right] {$\ln\frac{0.2}{0.1}$}; \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab11.input 1); \draw [nnfedge] (ta1.north) -- ++(up:0.15) -| (aab10.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab01.input 1); \draw [nnfedge] (ta0.north) -- ++(up:0.35) -| (aab00.input 1); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab11.input 2); \draw [nnfedge] (tb1.north) -- ++(up:0.55) -| (aab01.input 2); \draw [nnfedge] (tb0.north) -- ++(up:0.75) 
-| (aab00.input 2); \draw [nnfedge] (tb0.north) -- ++(up:0.75) -| (aab10.input 2); \end{scope} \end{tikzpicture} } \caption{Logistic circuit for $\Pr(Y=1 \mid A,B,C,D)$} \label{circuit: prop circuit: equivtoprob} \end{subfigure} \caption{A probabilistic circuit with parallel structures under class variable $Y$ and its equivalent logistic circuit for predicting~$Y$} \label{circuit: prop circuit} \end{figure*} \subsection{Model Complexity \& Data Efficiency} Table~\ref{table: size} summarizes the size of all compared models when achieving the reported accuracy. We can conclude that logistic circuits are significantly smaller than the alternatives, despite attaining higher accuracy. We design the next set of experiments to specifically investigate how well our structure learning algorithm performs under the setting where the number of training samples is limited. We have two additional sets of experiments, where only $2\%$ and $10\%$ of the original training data is supplied. Table~\ref{table: data efficiency} summarizes the performance in this limited-data setting. We mainly compare against a 5-layer MLP and CNN with 3 convolutional layers, whose performance is on par with our method under the full-data setting. As summarized in Table~\ref{table: data efficiency}, except on MNIST with $10\%$ training samples, real-valued logistic circuits achieve the best classification accuracy. Moreover, in both versions of logistic circuits, when the available training samples are reduced from $100\%$ to $2\%$, the accuracy only drops by around $3\%$ when evaluating on MNIST; around $5\%$ on Fashion. In contrast, a much larger drop occurs for 5-layer MLP and CNN. Specifically, MLP's accuracy drops by $5\%$ ($9\%$) while CNN's accuracy drops by $4\%$ ($7\%$) on MNIST (Fashion). This small magnitude of accuracy decrease illustrates how data efficient our proposed structure learning algorithm is. 
From a top-down perspective, each OR gate of a logistic circuit presents a weighted choice among its wires. Hence, one can view a logistic circuit as a decision diagram. Under this perspective, splits refine the branching rules of OR gates. As each branching rule naturally applies to multiple samples, we hypothesize that the splits selected by our structure learning algorithm reflect the general conditional feature information present in the dataset. \subsection{Local Explanation} \label{s: interpretability} Next, we aim to share some insights about how to explain a learned logistic circuit. Specifically, we investigate the question: ``Why does the logistic circuit classify a given sample ${\bf x}$ as $y$?'' Since any logistic circuit can be reduced to a logistic regression classifier, we can easily find the active global flow feature that contributes most to the given sample's classification probability, that is, the active feature with the largest contribution to $\mathbbm{x}\cdot\theta$. We visualize one such feature for MNIST and one for Fashion in Figure~\ref{fig: feature visualization} by marking the variables used in their corresponding logical sentences. \section{Connection to Probabilistic Circuits} \label{s:generativeconnection} In recent years, a large number of tractable probabilistic models have been proposed as target representations for generative learning of a joint probability distribution: arithmetic circuits~\cite{lowd:uai08}, weighted SDDs~\cite{BekkerNIPS15}, PSDDs~\cite{KisaVCD14}, cutset networks~\cite{rahman2014cutset}, and sum-product networks (SPNs)~\cite{poon2011sum}. These representations have various syntactic properties. Some put probabilities on terminals, others on edges. Some use logical notation (AND, OR), others use arithmetic notation ($\times$,$+$).
Nevertheless, they are all circuit languages built around the properties of decomposability and/or determinism. For our purpose, we consider a simple probabilistic circuit language based on the logistic circuit syntax, where now the $\theta$ parameters are assumed to be normalized probabilities.\footnote{We also assume \emph{smoothness}~\cite{darwicheJAIR02}.} \begin{definition}[Probabilistic Circuit Semantics] \label{de: probabilistic circuit semantics} A probabilistic circuit node $n$ defines the following joint distribution. \begin{itemize} \item[--] If $n$ is a leaf (input) node, then $\Pr_n({\bf x}) =[{\bf x} \models n]$. \item[--] If $n$ is an AND gate with children $c_1,\dots,c_m$, then \begin{align*} \Pr{_n}({\bf x}) = \prod_{i=1}^m \Pr{_{c_i}}({\bf x}). \end{align*} \item[--] If $n$ is an OR gate with (child node, wire parameter) inputs $(c_1,\theta_1),\dots, (c_m, \theta_m)$, then \begin{align*} \Pr{_n}({\bf x}) = \sum_{i=1}^m \Pr{_{c_i}}({\bf x})\cdot \theta_i. \end{align*} \end{itemize} \end{definition} Figure~\ref{circuit: prop circuit: prob } shows a probabilistic circuit for the joint distribution $\Pr(Y,A,B,C,D)$. This tractable circuit language is a relaxation of PSDDs \cite{KisaVCD14} and a specific type of SPN \cite{poon2011sum} where determinism holds throughout. It is also a type of arithmetic circuit. We are now ready to connect logistic and probabilistic circuits. It is well known that logistic regression is the discriminative counterpart of a naive Bayes generative model~\cite{ng2002discriminative}. A similar correspondence holds between our logistic and probabilistic circuits. \begin{proposition} \label{prop: correspondence} Consider a probabilistic circuit whose structure is of the form $(Y \land \alpha) \lor (\neg Y \land \beta)$, where sub-circuits $\alpha$ and $\beta$ are structurally identical. Then, there exists an equivalent logistic circuit for the conditional probability of $Y$ in the probabilistic circuit. 
Moreover, this logistic circuit has structure $\lor \alpha$ and its parameters can be computed in closed form as log-ratios of probabilistic circuit probabilities. \end{proposition} We first depict this correspondence intuitively in Figure~\ref{circuit: prop circuit}. The logistic circuit has the same structure as the two halves of the probabilistic circuit, and its parameters are computed from the probabilistic circuit probabilities. The distributions $\Pr(Y=1 \mid A,B,C,D)$ represented by the circuits in Figures~\ref{circuit: prop circuit: prob } and~\ref{circuit: prop circuit: equivtoprob} are identical. \paragraph{Formal Correspondence} Next, we present the formal proof of this correspondence for binary ${\bf x}$. Recall that in our circuits, only the input wires of OR gates are parameterized. Let $\mathcal{W}_\delta$ be the set that contains all these wires in circuit~$\delta$: $$\mathcal{W}_\delta = \left\{(n, c) \mid c\text{ is a gate with parent OR gate } n \right\}.$$ After expanding the equations in Definition~\ref{de: probabilistic circuit semantics} and following the top-down definition of global circuit flow (i.e., following Definition~\ref{definition: global flow}), one finds that the joint distribution induced by a probabilistic circuit $\delta$ can be rewritten as $$ \Pr{_\delta}({\bf x}) = \prod_{(n, c) \in \mathcal{W}_\delta} f_\delta(n,{\bf x},c) \cdot \theta_{(n,c)}^\delta. $$ We will exploit this finding in the derivation of the conditional distribution induced by the probabilistic circuit~$\gamma = (Y \land \alpha) \lor (\neg Y \land \beta)$. 
\begin{align} &\Pr{_{\gamma}}(Y=1 \mid {\bf x}) \nonumber \\ &\quad =\frac{\Pr_\gamma(Y\!=\!1)\Pr_\alpha({\bf x})}{\Pr_\gamma(Y\!=\!0)\Pr_\beta({\bf x}) + \Pr_\gamma(Y\!=\!1)\Pr_\alpha({\bf x})} \nonumber\\ &\quad = \frac{1}{1+\frac{\Pr_{\gamma}(Y=0)\Pr_\beta({\bf x})}{\Pr_{\gamma}(Y=1)\Pr_\alpha({\bf x})}} \nonumber \\ &\quad =\frac{1}{1+\frac{\Pr_{\gamma}(Y=0)\prod_{(n,c) \in \mathcal{W}_\beta} f_\beta(n,{\bf x},c)\theta_{(n,c)}^\beta}{\Pr_{\gamma}(Y=1)\prod_{(n,c) \in \mathcal{W}_\alpha} f_\alpha(n,{\bf x},c)\theta^\alpha_{(n,c)}}} \nonumber \end{align} As stated in Proposition~\ref{prop: correspondence} and shown in Figure~\ref{circuit: prop circuit}, sub-circuits $\alpha$ and $\beta$ share the same structure. Therefore, we can further simplify this equation as follows. \begin{align} &\Pr{_{\gamma}}(Y=1 \mid {\bf x}) \nonumber \\ &\quad = \frac{1}{1+\frac{\Pr_{\gamma}(Y=0)}{\Pr_{\gamma}(Y=1)}\prod_{(n,c) \in \mathcal{W}_{\alpha} } f_{\lor \alpha}(n, {\bf x}, c) \frac{\theta^\beta_{(n,c)}}{\theta^\alpha_{(n,c)}} } \nonumber \\ &\quad = \frac{1}{1+\exp\left[- g({\bf x}) \right]} = \Pr{_{\lor \alpha}}(Y=1 \mid {\bf x}) \nonumber \end{align} where \begin{align} g({\bf x})&= \log\frac{\Pr_\gamma(Y\!=\!1)}{\Pr_\gamma(Y\!=\!0)} + \!\!\!\sum_{(n,c) \in \mathcal{W}_{\alpha}} \!\! f_{\lor \alpha}(n,{\bf x},c) \log \frac{\theta^\alpha_{(n,c)}}{\theta^\beta_{(n,c)}} \label{eq:nb}\\ &=\theta^{\lor \alpha}_{\mathit{root}} + \sum_{(n,c) \in \mathcal{W}_{\alpha}}f_{\lor \alpha}(n,{\bf x},c) \cdot \theta_{(n,c)}^{\lor \alpha}. \label{eq:lr} \end{align} The transformation from Equation~\ref{eq:nb} to~\ref{eq:lr} expresses the logistic circuit parameters as the log-ratios of probabilistic circuit probabilities. For example, the class priors captured in the output wires of $\alpha$ and $\beta$ are now combined as a log-ratio to form the bias term for $\lor \alpha$, expressed by the root parameter.
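This correspondence can be checked numerically. The sketch below (with toy parameters of our own choosing) builds a naive-Bayes-shaped probabilistic circuit $(Y \land \alpha) \lor (\neg Y \land \beta)$ over two features and verifies that its conditional distribution matches the logistic model whose weights are the log-ratios of Equations~\ref{eq:nb} and~\ref{eq:lr}:

```python
from math import log, exp

# Toy circuit parameters (our assumptions, not from the paper):
p_y = 0.6             # Pr(Y=1), parameter on the Y-branch output wire
theta_a = [0.9, 0.3]  # Pr(X_i=1 | Y=1): leaf-wire parameters in alpha
theta_b = [0.4, 0.7]  # Pr(X_i=1 | Y=0): the same wires in beta

def pr_joint(y, x):
    """Bottom-up evaluation of the probabilistic circuit."""
    prior = p_y if y else 1 - p_y
    lik = 1.0
    for t, xi in zip(theta_a if y else theta_b, x):
        lik *= t if xi else 1 - t
    return prior * lik

def pr_y_generative(x):
    """Conditional distribution by Bayes' rule."""
    return pr_joint(1, x) / (pr_joint(1, x) + pr_joint(0, x))

def pr_y_logistic(x):
    """Logistic form: bias = log-odds of Y, weights = log theta ratios."""
    g = log(p_y / (1 - p_y))
    for ta, tb, xi in zip(theta_a, theta_b, x):
        g += log(ta / tb) if xi else log((1 - ta) / (1 - tb))
    return 1 / (1 + exp(-g))

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert abs(pr_y_generative(x) - pr_y_logistic(x)) < 1e-12
```

The `else` branches play the role of the $\neg X$ wires: a feature that is off contributes the log-ratio of the complementary leaf-wire parameters, exactly as the flow $f_{\lor\alpha}(n,{\bf x},c)$ selects which wire of each OR gate is active.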
This proof also provides us with a new perspective to understand the semantics of the learned parameters. The parameters represent the log-odds ratio of the features given different classes. Note that by Bayes' theorem, a naive Bayes model would derive its induced distribution in a sequence of steps similar to the ones above, resulting in Equation~\ref{eq:nb}. Given this correspondence, one can also view our proposed structure learning method as a way to construct meaningful features for a naive Bayes classifier. We know that after training, naive Bayes classifiers are equivalent to logistic regression classifiers (as in Equation~\ref{eq:lr}). \section{Related Work} \citet{gens2012discriminative} proposed the first parameter learning algorithm for discriminative SPNs, using MPE inference as a sub-routine. Without the determinism property, parameter learning of general SPNs is a harder problem than its logistic circuit counterpart, since it is non-convex. \citet{adel2015learning} boosted the accuracy of SPNs on MNIST to $97.6\%$ by extracting more representative features from raw inputs based on the Hilbert-Schmidt independence measure. \citet{rat-spn2018} further improved the classification ability of SPNs by drastically simplifying SPN structure requirements and utilizing a loss objective that hybridizes cross-entropy (discriminative learning) with log-likelihood (generative learning). \citet{rooshenas2016discriminative} developed a discriminative structure learning algorithm for arithmetic circuits. The method updates the circuit that represents a corresponding conditional random field (CRF) model by adding features conditioned on arbitrary evidence to the model. This work further relaxes the decomposability and smoothness properties of ACs for a more compact representation. However, it targets the setting where there are a large number of output variables, not single-variable classification.
All the aforementioned literature conforms to a common trend of abandoning properties of the chosen circuit representations for easier structure learning and better prediction accuracy. They argue that those special syntactic restrictions complicate the learning process. On the contrary, this paper chooses perhaps the most structure-restrictive circuit as the target representation. Instead of relaxing the target representation's syntactical requirements, our proposed method fully leverages the valuable properties that stem from these restrictions, and in particular convexity. \section{Conclusions} We have presented logistic circuits, a novel circuit-based classification model with convex parameter learning and a simple structure learning procedure based on local search. Logistic circuits outperform much larger classifiers and perform well in a limited data regime. Compared to other symbolic, circuit-based approaches, logistic circuits present a leap in performance on image classification benchmarks. Future work includes support for convolution, parameter tying, and structure sharing in the logistic circuits framework.
\section{Introduction} Many important economic decisions faced by individuals are binary in nature, including labour force participation, retirement, college enrolment, adoption of a new technology or health product, and so forth. This paper concerns nonparametric analysis of binary choice under general unobserved heterogeneity and income effects. The paper has two goals. The first is to understand, theoretically, what nonparametric restrictions utility maximization by heterogeneous consumers imposes upon choice-probabilities, i.e. whether there are analogs of Slutsky restrictions for binary choice under general unobserved heterogeneity and income effects, and conversely, whether these restrictions are also sufficient for observed choice-probabilities to be rationalizable. This issue is important for logical coherency between theory and empirics and, in particular, for prediction of demand and welfare in situations involving counterfactual, i.e. previously unobserved, budget sets. It is important in these exercises to allow for general unobserved heterogeneity, because economic theory typically does not restrict its dimension or distribution, and does not specify how it enters utility functions. To date, closed-form Slutsky conditions for rationalizability of demand under general heterogeneity were available only for continuous choice. The present paper, to our knowledge, is the first to establish them for the leading case of discrete demand, viz. binary choice. The second goal of the present paper is a practical one. It is motivated by the fact that in empirical applications of binary choice, requiring the estimation of elasticities, welfare calculations and demand predictions, researchers typically use parsimonious functional-forms for conditional choice probabilities.
This is because fully nonparametric estimation is often hindered by the curse of dimensionality, the sensitivity of estimates to the choice of tuning parameters and insufficient price variation, especially in consumer data from developed countries. The question therefore arises as to whether the economic theory of consumer behavior can inform the choice of such functional forms. Answering this question is our second objective. Since McFadden, 1973, discrete choice models of economic behavior have been studied extensively in the econometric literature, mostly under restrictive assumptions on utility functions and unobserved heterogeneity including, inter alia, quasi-linear preferences implying absence of income effects and/or parametrically specified heterogeneity distributions (c.f. Train, 2009 for a textbook treatment). Matzkin (1992) investigated the nonparametric identification of binary choice models with additive heterogeneity, where both the distribution of unobserved heterogeneity and the functional form of utilities were left unspecified. More recently, Bhattacharya (2015, 2018) has shown that in discrete choice settings, welfare distributions resulting from price changes are nonparametrically point-identified from choice probabilities without any substantive restriction on preference heterogeneity, and even when preference distribution and heterogeneity dimension are not identified. In the present paper, we consider a setting of binary choice by a population of budget-constrained consumers with general, unobserved heterogeneity. In this setting, we develop a characterization of utility maximization which takes the form of simple, closed-form shape restrictions on choice probability functions in the population. These nonparametric shape-restrictions can be consistently tested in the usual asymptotic econometric sense and are extremely easy to impose on specifications of choice-probabilities -- akin to testing or imposing monotonicity of regression functions.
Most importantly, they lead to computationally simple bounds for theory-consistent demand and welfare predictions on counterfactual budget sets -- an important goal of empirical demand analysis. Interestingly, our shape-restrictions differ in form from the well-known Slutsky inequalities for continuous goods. The above results are developed in a fully nonparametric context; nonetheless, they can help guide applied researchers intending to use simple parametric or semiparametric models. As a specific example, consider the popular probit/logit type model for binary choice of whether to buy a product or not.\ A standard specification is that the probability of buying depends (implicitly conditioning on other observed covariates) on its price $p$ and the decision-maker's income $y$, e.g. $\bar{q}\left( p,y\right) =F\left( \gamma _{0}+\gamma _{1}p+\gamma _{2}y\right) $, where $F\left( \cdot \right) $ is a distribution function. We will show below that these choice-probabilities are consistent with utility maximization by a heterogeneous population of consumers, if and only if $\gamma _{1}\leq 0$, and $\gamma _{1}+\gamma _{2}\leq 0$. While the first inequality simply means that demand falls with own price (holding income fixed), the second inequality is less obvious, and constitutes an important empirical characterization of utility maximization. For the case of \textit{continuous} goods, Lewbel, 2001 explored the question of when average demand, generated from maximization of heterogeneous individual preferences, satisfies standard properties of non-stochastic demand functions. More recently, for the case of two continuous goods (i.e. a good of interest plus the numeraire) under general heterogeneity, Hausman and Newey, 2016 have shown that constrained utility maximization is equivalent to quantiles of demand satisfying standard Slutsky negativity. The analog of the two goods setting in discrete choice is the case of binary alternatives.
Accordingly, our main result (Theorem 1 below) may be viewed as the discrete choice counterpart of Hausman and Newey, 2016, Theorem 1. Note however that quantiles are degenerate for binary outcomes, and indeed, the forms of our Slutsky-like shape restrictions are completely different from Hausman-Newey's quantile-based conditions for continuous choice. An alternative, algorithmic -- as opposed to closed-form and analytic -- approach to rationalizability of demand is the so-called \textquotedblleft revealed stochastic preference\textquotedblright\ (SRP, henceforth) method, which applies to very general choice settings where a heterogeneous population of consumers faces a finite number of budget sets, c.f. McFadden and Richter, 1990, McFadden, 2005. When budget sets are numerous or continuously distributed, as in household surveys with many income and/or price values, SRP is well-known to be operationally prohibitive, c.f. Anderson et al 1992, Page 54-5 and Kitamura and Stoye, 2016, Sec 3.3. Furthermore, the SRP conditions are difficult to impose on parametric specifications commonly used in practical applications, they change entirely in form upon addition of new budget sets, and are cumbersome to use for demand prediction on counterfactual budgets, especially in welfare calculations that typically require simultaneous prediction on a continuous range of budget-sets. In contrast, our approach yields rationality conditions which (a) are global, in that they characterize choice probability \textit{functions}, and their forms remain invariant to which and how many budget sets are observed in a dataset, and (b) are closed-form, analytic shape-restrictions, hence easy to impose, standard to test, and simple to use for counterfactual predictions of demand and welfare. As such, these shape-restrictions establish the analogs of Slutsky conditions -- the cornerstone of classical demand analysis -- for binary choice under general unobserved heterogeneity and income effects.
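The probit example from the introduction is easy to check numerically: under the claimed restrictions $\gamma_{1}\leq 0$ and $\gamma_{1}+\gamma_{2}\leq 0$, finite differences of $\bar{q}\left( p,y\right)$ satisfy both the own-price inequality and the less obvious combined inequality. The sketch below uses illustrative parameter values of our choosing.

```python
import math

def Phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative probit parameters satisfying the claimed
# rationalizability conditions: gamma1 <= 0 and gamma1 + gamma2 <= 0.
g0, g1, g2 = 0.5, -1.0, 0.6

def qbar(p, y):
    """Probability of buying at price p and income y."""
    return Phi(g0 + g1 * p + g2 * y)

eps = 1e-6
for p in [x / 2.0 for x in range(-6, 7)]:
    for y in [x / 2.0 for x in range(-6, 7)]:
        dq_dp = (qbar(p + eps, y) - qbar(p, y)) / eps
        dq_dy = (qbar(p, y + eps) - qbar(p, y)) / eps
        assert dq_dp <= 0.0            # demand falls with own price
        assert dq_dp + dq_dy <= 0.0    # the second, less obvious restriction
```

With $\gamma_{2}>-\gamma_{1}$ the second assertion would fail at every point, which is the sense in which the combined restriction genuinely constrains the specification beyond a negative price coefficient.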
\section{The Result} Consider a population of heterogeneous individuals, each choosing whether or not to buy an indivisible good. Let $N$ represent the quantity of numeraire which an individual consumes in addition to the binary good. If the individual has income $Y=y$, and faces a price $P=p$ for the indivisible good, then the budget constraint is $N+pQ=y$ where $Q\in \left\{ 0,1\right\} $ represents the binary choice. Individuals derive satisfaction from both the indivisible good as well as the numeraire. Upon buying, an individual derives utility from the good but has a lower amount of numeraire $y-p$ left; upon not buying, she enjoys utility from her outside option and a higher quantity of numeraire $y$. There is unobserved heterogeneity across consumers which affects their choice, and so on each budget set defined by a price $p$ and consumer income $y$, there is a (structural) probability of buying, denoted by $q\left( y,y-p\right) $; that is, if each member of the entire population were offered income $y$ and price $p$, then a fraction $q\left( y,y-p\right) $ would buy the good. It is more standard to write this choice probability as conditional on price and income, i.e., in the form $\bar{q}\left( p,y\right) $, but the equivalent $q\left( y,y-p\right) $ expression is an important and helpful step toward obtaining closed-form rationalizability results, as will become clear below. Indeed, one can go back and forth between the two specifications because $\bar{q}\left( c,d\right) \equiv q\left( d,d-c\right) $ and $q\left( a,b\right) \equiv \bar{q}\left( a-b,a\right) $. Also, for now, we implicitly condition our analysis on observed covariates, and later show how to incorporate them into the results.
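The change of variables between $\bar{q}\left( p,y\right)$ and $q\left( y,y-p\right)$ is purely mechanical and can be exercised directly; the logistic-type choice probability below is an arbitrary illustrative choice, since any function verifies the identities.

```python
import math

# Hypothetical smooth choice probability in (price, income) form.
def qbar(p, y):
    """Pr(buy | price p, income y); illustrative logit-type function."""
    return 1.0 / (1.0 + math.exp(-(0.5 - p + 0.3 * y)))

def q(a, b):
    """The q(y, y - p) parameterization: a = y (income), b = y - p."""
    return qbar(a - b, a)

# Round trip: qbar(c, d) == q(d, d - c) and q(a, b) == qbar(a - b, a).
for c in (-1.0, 0.0, 2.5):
    for d in (-2.0, 1.0, 3.0):
        assert abs(qbar(c, d) - q(d, d - c)) < 1e-15
        assert abs(q(c, d) - qbar(c - d, c)) < 1e-15
```

The second argument $b=y-p$ is the numeraire left after buying, which is why monotonicity in $b$ (rather than in income or price separately) is the natural object in what follows.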
Our main result establishes conditions that are necessary and sufficient for the conditional choice probability function to be generated from utility maximization by a heterogeneous population, where no a priori restriction is imposed on the dimension and functional form of the distribution of unobserved heterogeneity or on the functional form of utilities.\medskip \begin{theorem} For binary choice under general heterogeneity, the following two statements are equivalent: (i) The structural choice probability function $q\left( \cdot ,\cdot \right) $, defined above, satisfies that (A) $q\left( a,b\right) $ is non-increasing in $a$ for each fixed $b$, and non-decreasing in $b$ for each fixed $a$; (B) for each fixed $b\in \mathbb{R}$, it holds that $\lim_{a\downarrow -\infty }q\left( a,b\right) =1$, and (C) $q\left( a,b\right) $ is continuous in $a$ for each fixed $b$. (ii) There exists a pair of utility functions $W_{0}\left( \cdot ,\eta \right) $ and $W_{1}\left( \cdot ,\eta \right) $, where the first argument denotes the amount of numeraire, and $\eta $ denotes unobserved heterogeneity, and a distribution $G\left( \cdot \right) $ of $\eta $ such that \begin{equation*} q\left( a,b\right) =\int 1\left\{ W_{0}\left( a,\eta \right) \leq W_{1}\left( b,\eta \right) \right\} dG\left( \eta \right) \text{,} \end{equation*} where (A') for each fixed $\eta $, $W_{0}\left( a,\eta \right) $ is continuous and strictly increasing in $a$, and $W_{1}\left( b,\eta \right) $ is non-decreasing in $b$; (B') for each fixed $b$ and $\eta $, $W_{1}\left( b,\eta \right) >\lim_{a\downarrow -\infty }W_{0}\left( a,\eta \right) $; (C') for any $a,b$, it holds that $\int 1\left\{ W_{1}\left( b,\eta \right) =W_{0}\left( a,\eta \right) \right\} dG\left( \eta \right) =0$.\smallskip \end{theorem} Intuitively, conditions (A/A') mean that having more numeraire ceteris paribus is (weakly) better for every consumer.
To interpret condition (B), note that for fixed $b$, $\lim_{a\downarrow -\infty }q\left( a,b\right) \equiv \lim_{\left( a-b\right) \downarrow -\infty }q\left( a,b\right) $. Now, since $a-b=p$ is the price, conditions (B/B') say that everyone can be persuaded to buy product 1 by making its price low enough (perhaps even negative). Condition (C/C') -- the \textquotedblleft no-tie\textquotedblright\ assumption -- is standard in discrete choice models, and intuitively means that there is a continuum of tastes. Note that (A)-(C) place \textit{no restriction} on income effects, including their sign. In statement (ii), the functions $W_{j}\left( x,\eta \right) $ will correspond to the utility from choosing alternative $j\in \left\{ 0,1\right\} $ and being left with a quantity $x$ of the numeraire, with $\eta $ denoting unobserved heterogeneity. This notation allows for the case where different vectors of unobservables enter the two utilities, i.e. where the utilities are given by $u_{0}\left( \cdot ,\eta _{0}\right) $ and $u_{1}\left( \cdot ,\eta _{1}\right) $, respectively, with $\eta _{0}\neq \eta _{1}$; simply set $\eta \equiv \left( \eta _{0},\eta _{1}\right) $, $W_{0}\left( \cdot ,\eta \right) \equiv u_{0}\left( \cdot ,\eta _{0}\right) $, $W_{1}\left( \cdot ,\eta \right) \equiv u_{1}\left( \cdot ,\eta _{1}\right) $. In the proof of the above theorem, when showing (ii) implies (i), $\eta $ will be allowed to have \textit{any arbitrary and unknown} dimension and distribution; in showing (i) implies (ii) we will construct a scalar heterogeneity distribution that will rationalize the choice probabilities (see further discussion on this point under the heading "Observational Equivalence" in the next section). \medskip \begin{proof} That (ii) implies (i) is straightforward.
In particular, letting $W_{0}^{-1}\left( \cdot ,\eta \right) $ denote the inverse of $W_{0}\left( \cdot ,\eta \right) $, we have that \begin{equation*} q\left( a,b\right) =\int 1\left\{ a\leq W_{0}^{-1}\left( W_{1}\left( b,\eta \right) ,\eta \right) \right\} dG\left( \eta \right) \end{equation*} whence (C') implies continuity of $q\left( \cdot ,b\right) $, (B') implies that $\lim_{a\downarrow -\infty }q\left( a,b\right) =1$ for each $b$, and (A') implies (A). We now show that (i) implies (ii). Note that $\lim_{a\downarrow -\infty }q\left( a,b\right) =1$ for each $b$ implies that for any $u\in \left[ 0,1\right] $ and $b\in \mathbb{R}$, the set $\left\{ x:q\left( x,b\right) \geq u\right\} $ is non-empty. For any fixed $b\in \mathbb{R}$ and for $u\in \left[ 0,1\right] $, define \begin{equation} q^{-1}\left( u,b\right) \overset{def}{=}\sup \left\{ x:q\left( x,b\right) \geq u\right\} \text{.} \label{4} \end{equation} By condition (A) in (i), $q^{-1}\left( u,\cdot \right) $ must be non-decreasing. Now, consider a random variable $V\simeq Uniform\left( 0,1\right) $. Define $W_{0}\left( a,V\right) \overset{def}{=}a$ and $W_{1}\left( b,V\right) \overset{def}{=}q^{-1}\left( V,b\right) $, and note that by construction, $W_{0}\left( a,V\right) $ and $W_{1}\left( b,V\right) $ satisfy properties (A')-(C') listed in (ii) above. We will now show that $W_{0}\left( \cdot ,V\right) $ and $W_{1}\left( \cdot ,V\right) $ will rationalize the choice-probabilities $q\left( \cdot ,\cdot \right) $.
Indeed, given any fixed $b$, since $q\left( \cdot ,b\right) $ is continuous and non-increasing, we have that for any $v\in \left( 0,1\right) $ \begin{equation} a\leq q^{-1}\left( v,b\right) \overset{\text{by }q\left( \cdot ,b\right) \text{ }non\uparrow }{\Longrightarrow }q\left( a,b\right) \geq q\left( q^{-1}\left( v,b\right) ,b\right) \overset{\text{by }q\left( \cdot ,b\right) \text{ }cont.}{\Longrightarrow }q\left( a,b\right) \geq v\text{.} \label{a} \end{equation} To see why continuity is required for the last implication in (\ref{a}), suppose for some $v\in \left( 0,1\right) $, we have that $q\left( x,b\right) >v$ for all $x<c$, but $q\left( c,b\right) <v$, i.e. $q\left( \cdot ,b\right) $ takes a discontinuous `plunge' at $c$. Then $q^{-1}\left( v,b\right) =\sup \left\{ x:q\left( x,b\right) \geq v\right\} =c$, but $q\left( c,b\right) =q\left( q^{-1}\left( v,b\right) ,b\right) <v$. Continuity of $q\left( \cdot ,b\right) $ rules this out, and guarantees that $q\left( c,b\right) =q\left( q^{-1}\left( v,b\right) ,b\right) \geq v$; therefore, in (\ref{a}), $q\left( a,b\right) \geq q\left( q^{-1}\left( v,b\right) ,b\right) \Longrightarrow q\left( a,b\right) \geq v$.
Finally, by definition of $q^{-1}\left( \cdot ,b\right) $ as the supremum in (\ref{4}), we have that \begin{equation} q\left( a,b\right) \geq v\Longrightarrow a\leq q^{-1}\left( v,b\right) \text{.} \label{b} \end{equation} Therefore, by (\ref{a}) and (\ref{b}), we have that $q\left( a,b\right) \geq v\Longleftrightarrow a\leq q^{-1}\left( v,b\right) $, and thus, for $V\simeq U\left( 0,1\right) $, it follows that \begin{equation*} \Pr \left( q^{-1}\left( V,b\right) \geq a\right) =\Pr \left( V\leq q\left( a,b\right) \right) =q\left( a,b\right) \text{.} \end{equation*} Therefore, the utility functions $W_{0}\left( a,V\right) \equiv a$ and $W_{1}\left( b,V\right) \equiv q^{-1}\left( V,b\right) $ with heterogeneity $V\simeq Uniform\left( 0,1\right) $ rationalize the choice probabilities $q\left( \cdot ,\cdot \right) $, and satisfy all the properties specified in panel (ii) of Theorem 1. In particular, $W_{1}\left( b,\eta \right) $ is non-decreasing in $b$ (see (\ref{4})). \end{proof} \section{Discussion} \textbf{A. Slutsky Form}: To see the analogy between the shape restrictions in Theorem 1 and the traditional Slutsky inequality constraints with smooth demand, rewrite the choice probability on a budget set $\left( p,y\right) $ in the standard form as a function of price and income, viz. $\bar{q}\left( p,y\right) \equiv q\left( y,y-p\right) $, i.e., $q\left( a,b\right) \equiv \bar{q}\left( a-b,a\right) $.
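As a numerical aside, the constructive step of the proof is easy to simulate: for a hypothetical $q$ satisfying (A)-(C) (our illustrative choice below), one can compute $q^{-1}\left( v,b\right)$ by bisection and confirm by Monte Carlo that $W_{0}\left( a,V\right) =a$ and $W_{1}\left( b,V\right) =q^{-1}\left( V,b\right)$ with $V\simeq Uniform\left( 0,1\right)$ reproduce the choice probabilities.

```python
import math
import random

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical choice probability satisfying (A)-(C): continuous,
# decreasing in a, increasing in b, tending to 1 as a -> -infinity.
def q(a, b):
    return Phi(-0.8 * a + 0.5 * b)

def q_inv(v, b, lo=-50.0, hi=50.0):
    """sup{x : q(x, b) >= v}, by bisection, since q(., b) is decreasing."""
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if q(mid, b) >= v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Share of types V with W1(b, V) = q_inv(V, b) >= W0(a, V) = a
# should match q(a, b) up to Monte Carlo error.
random.seed(0)
draws = [random.random() for _ in range(10_000)]
for a, b in [(0.0, 0.0), (1.0, -1.0), (-0.5, 2.0)]:
    share = sum(q_inv(v, b) >= a for v in draws) / len(draws)
    assert abs(share - q(a, b)) < 0.03
```

The bisection is the numerical counterpart of the supremum in (\ref{4}); continuity and monotonicity of $q\left( \cdot ,b\right)$ are exactly what make it well defined.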
Then, under continuous differentiability, the shape restrictions (A) from Theorem 1 are equivalent to $\frac{\partial }{\partial b}\bar{q}\left( a-b,a\right) =\frac{\partial }{\partial b}q\left( a,b\right) \geq 0$, and $\frac{\partial }{\partial a}\bar{q}\left( a-b,a\right) =\frac{\partial }{\partial a}q\left( a,b\right) \leq 0$, i.e., for all $p,y$, \begin{eqnarray} \frac{\partial }{\partial p}\bar{q}\left( p,y\right) &\leq &0\text{,} \label{3} \\ \frac{\partial }{\partial p}\bar{q}\left( p,y\right) +\frac{\partial }{\partial y}\bar{q}\left( p,y\right) &\leq &0\text{.} \label{2} \end{eqnarray} The forms of these inequalities are distinct from textbook Slutsky conditions for \textit{nonstochastic} demand $q^{\ast }\left( p,y\right) $ for a \textit{continuous} good, which are given by \begin{equation} \frac{\partial }{\partial p}q^{\ast }\left( p,y\right) +q^{\ast }\left( p,y\right) \frac{\partial }{\partial y}q^{\ast }\left( p,y\right) \leq 0\text{ for all }p,y\text{.} \label{15} \end{equation} For a continuous good and under general unobserved heterogeneity, Hausman and Newey, 2016 show that (\ref{15}) also holds with $q^{\ast }\left( p,y\right) $ denoting any quantile of the demand distribution for fixed $\left( p,y\right) $ (see also Dette, Hoderlein and Neumeyer, 2016).
Thus, for binary choice with general heterogeneity, the forms of the Slutsky inequalities (\ref{3}) and (\ref{2}) are different from the continuous choice counterpart (\ref{15}).\footnote{Bhattacharya, 2015 (see also Lee and Bhattacharya, 2018) noted that (\ref{3}) (resp., (\ref{2})) is necessary for the CDF of equivalent variation (resp., compensating variation) resulting from price-changes to be non-decreasing.} In particular, the inequalities (\ref{3}) and (\ref{2}) are \textit{linear} in $\bar{q}\left( \cdot ,\cdot \right) $ (and $q\left( \cdot ,\cdot \right) $), unlike (\ref{15}), and hence easier to impose on nonparametric estimates of $q\left( \cdot ,\cdot \right) $ using, say, shape-preserving sieves that guarantee that $\frac{\partial }{\partial b}\hat{q}\left( a,b\right) \geq 0$, and $\frac{\partial }{\partial a}\hat{q}\left( a,b\right) \leq 0$ for all $a,b$. \begin{remark} It is tempting to think of (\ref{3}) and (\ref{2}) as (\ref{15}) with the level $q^{\ast }\left( p,y\right) $ replaced by 0 and 1 corresponding to either of the two possible individual choices. However, this interpretation is incorrect, since $\bar{q}\left( p,y\right) $ is \textbf{average} demand, and takes values strictly inside $\left( 0,1\right) $. In other words, $\bar{q}\left( p,y\right) $ is neither a quantile, nor individual demand at price $p$ and income $y$, and generically (e.g. in a probit model) does not take the values of 0 and 1. Thus (\ref{3}) and (\ref{2}) \textbf{cannot} be rewritten as \begin{equation*} \frac{\partial }{\partial p}\bar{q}\left( p,y\right) +\bar{q}\left( p,y\right) \frac{\partial }{\partial y}\bar{q}\left( p,y\right) \leq 0\text{ for all }p,y\text{,} \end{equation*} and, as such, are different from the continuous choice counterpart (\ref{15}). \end{remark} \textbf{B.
Observational Equivalence}: The construction in our proof of (i) $\Rightarrow $ (ii) shows that a rationalizable binary choice model with general heterogeneity of unspecified dimension is observationally equivalent to one where a scalar heterogeneity enters the utility function of one of the alternatives in a monotonic way, and the utility of the other alternative is non-stochastic.\footnote{For quantile demand in the continuous case, a result of similar spirit is discussed in Hausman-Newey, 2016, Page 1228-9, following Theorem 1. In general, a result holding for the continuous case with two goods does not necessarily imply that it also holds for the binary case. For example, welfare related results are different for the binary and the two-good continuous case, c.f. Hausman-Newey 2016, and Bhattacharya 2015, and so are Slutsky negativity conditions, as discussed above.} An intuitive explanation of this equivalence is that in the binary case, choice probabilities are determined solely by the marginal distribution of reservation price (given income) for alternative 1, and not the relative ranking of individual consumers in terms of their preferences within that distribution. So, as income varies, choice probabilities change only insofar as the marginal distribution of the reservation price changes, irrespective of how individual consumers' relative positions change within that distribution. It is worth pointing out here that a binary choice model with \textit{additive} scalar heterogeneity -- the so-called ARUM model -- is restrictive, and \textit{not} observationally equivalent to a binary choice model with general heterogeneity.
To see this, suppose choice probabilities are generated via the ARUM model, viz. \begin{eqnarray} q\left( a,b\right) &=&\Pr \left[ W_{1}\left( b\right) +\eta _{1}>W_{0}\left( a\right) +\eta _{0}\right] \notag \\ &=&\Pr \left[ \eta _{0}-\eta _{1}<W_{1}\left( b\right) -W_{0}\left( a\right) \right] \notag \\ &=&F_{\eta _{0}-\eta _{1}}\left[ W_{1}\left( b\right) -W_{0}\left( a\right) \right] \text{.} \label{7} \end{eqnarray} Assuming smoothness and strict monotonicity of $F_{\eta _{0}-\eta _{1}}\left[ \cdot \right] $, $W_{1}\left( \cdot \right) $ and $W_{0}\left( \cdot \right) $, and thus of $q\left( \cdot ,\cdot \right) $, it follows that \begin{eqnarray*} &&\frac{\partial ^{2}}{\partial a\partial b}\ln \left[ -\frac{\frac{\partial }{\partial b}q\left( a,b\right) }{\frac{\partial }{\partial a}q\left( a,b\right) }\right] \\ &=&\frac{\partial ^{2}}{\partial a\partial b}\ln \left( \frac{W_{1}^{\prime }\left( b\right) }{W_{0}^{\prime }\left( a\right) }\right) \text{, from (\ref{7})} \\ &=&\frac{\partial ^{2}}{\partial a\partial b}\left[ \ln \left( W_{1}^{\prime }\left( b\right) \right) -\ln \left( W_{0}^{\prime }\left( a\right) \right) \right] \\ &=&0\text{,} \end{eqnarray*} for every $a$ and $b$. This equality is obviously not true for a general smooth and strictly monotone $q\left( \cdot ,\cdot \right) $ satisfying Assumptions i(A)-i(C) of Theorem 1. \begin{remark} The construction of $q^{-1}\left( V,\cdot \right) $ in our proof of (i) $\Rightarrow $ (ii) is unrelated to the almost sure representation of a continuous random variable $X$ as $F_{X}^{-1}\left( U\right) $ with $U=F_{X}\left( X\right) $, where $F_{X}$ and $F_{X}^{-1}$ denote the CDF and quantile function of $X$, and $U$ is $U\left( 0,1\right) $.
Indeed, if we were to apply this so-called "probability-integral transform" to $X=W_{1}\left( a_{1},\eta \right) $ for a fixed $a_{1}$, we will have $W_{1}\left( a_{1},\eta \right) \overset{a.s.}{=}F_{W_{1}\left( a_{1},\eta \right) }^{-1}\left( U\left( a_{1}\right) \right) $, where the scalar-valued uniform process $U\left( a_{1}\right) \equiv F_{W_{1}\left( a_{1},\eta \right) }\left( W_{1}\left( a_{1},\eta \right) \right) $ will vary with $a_{1}$, unlike $V$ in the proof of our theorem above, and therefore cannot represent unobserved heterogeneity in consumer preferences. In other words, our constructed $q^{-1}\left( V,a_{1}\right) $ will \textit{not} equal the data generating process $W_{1}\left( a_{1},\eta \right) $ almost surely, but the probability that $q^{-1}\left( V,a_{1}\right) \geq a_{0}$ will equal the probability that $W_{1}\left( a_{1},\eta \right) \geq W_{0}\left( a_{0},\eta \right) $ for all $\left( a_{0},a_{1}\right) $.\medskip \end{remark} \textbf{C. Giffen Goods}: Our rationalizability condition (\ref{3}) says that the own price effect on average demand is negative. This condition has no counterpart in the continuous case, appears to rule out Giffen behavior and may, therefore, appear restrictive. We now show that this is not the case: indeed, Giffen goods cannot arise in binary choice models if utilities are non-satiated in the numeraire. To see this, let the utility of options $0$ and $1$ be given by $W_{0}\left( \cdot ,\eta \right) $ and $W_{1}\left( \cdot ,\eta \right) $ as in Theorem 1 above. Now note that if option 1 is Giffen for an $\eta $ type consumer with income $y$, then for some prices $p<p^{\prime }$ she buys at price $p^{\prime }$ but does not buy at $p$. Therefore \begin{equation*} W_{1}\left( y-p,\eta \right) <W_{0}\left( y,\eta \right) <W_{1}\left( y-p^{\prime },\eta \right) \text{,} \end{equation*} which is a contradiction, since $W_{1}\left( \cdot ,\eta \right) $ is non-decreasing and $y-p>y-p^{\prime }$.
In contrast, consider a \textit{continuous} good with utilities $W\left( x,y-px,\eta \right) $, where $x$ denotes the quantity of the continuous good, and $W\left( \cdot ,\cdot ,\eta \right) $ is increasing in both arguments. Now it is possible that $x$ is bought at price $p$ and $x^{\prime }$ is bought at price $p^{\prime }$ with $p<p^{\prime }$ and $x<x^{\prime }$. That is, we can have \begin{equation*} W\left( x,y-px,\eta \right) <W\left( x^{\prime },y-p^{\prime }x^{\prime },\eta \right) \text{,} \end{equation*} if $x^{\prime }$ is preferred sufficiently over $x$. The intuitive reason for this difference between the discrete and the continuous case is that in the former, the only non-zero option is 1. Indeed, in the continuous case, it is also not possible that $W\left( x,y-px,\eta \right) <W\left( x,y-p^{\prime }x,\eta \right) $ for any \textit{common} $x$ if $p<p^{\prime }$. Also, note that although Giffen behavior cannot arise in binary choice, there is no restriction on the \textit{sign of the income effect}. Indeed, (\ref{3}) and (\ref{2}) are compatible with both $\frac{\partial }{\partial y}\bar{q}\left( p,y\right) \geq 0$ and $\frac{\partial }{\partial y}\bar{q}\left( p,y\right) \leq 0$.\medskip \textbf{D. Parametric and Semiparametric Models}: For a probit/logit specification of the buying decision, viz. \begin{eqnarray} &&\bar{q}\left( p,y\right) \notag \\ &=&F\left( \gamma _{0}+\gamma _{1}p+\gamma _{2}y\right) \notag \\ &=&F\left( \gamma _{0}+\left( \gamma _{1}+\gamma _{2}\right) y-\gamma _{1}\left( y-p\right) \right) \text{,} \label{14} \end{eqnarray} where $F\left( \cdot \right) $ is a strictly increasing CDF, the shape restrictions of Theorem 1 amount to requiring $\gamma _{1}\leq 0$ and $\gamma _{1}+\gamma _{2}\leq 0$.
While the first inequality is intuitive, and simply says that the own price effect is negative, the second condition $\gamma _{1}+\gamma _{2}\leq 0$ is not a priori obvious, and shows the additional restriction implied by budget-constrained utility maximization. Now, applying Theorem 1, we obtain \begin{eqnarray*} &&F\left( \gamma _{0}+\left( \gamma _{1}+\gamma _{2}\right) y-\gamma _{1}\left( y-p\right) \right) \\ &=&\Pr \left( V\leq F\left( \gamma _{0}+\left( \gamma _{1}+\gamma _{2}\right) y-\gamma _{1}\left( y-p\right) \right) \right) \\ &=&\Pr \left( \frac{F^{-1}\left( V\right) -\gamma _{0}+\gamma _{1}\left( y-p\right) }{\gamma _{1}+\gamma _{2}}\geq y\right) \text{,} \end{eqnarray*} since $\gamma _{1}+\gamma _{2}<0$ (note that the condition $\lim_{a\downarrow -\infty }q\left( a,b\right) =1$ for each $b$ rules out $\gamma _{1}+\gamma _{2}=0$), implying the rationalizing utility functions \begin{eqnarray*} U_{1}\left( y-p,V\right) &=&\frac{F^{-1}\left( V\right) -\gamma _{0}}{\gamma _{1}+\gamma _{2}}+\underset{>0}{\underbrace{\left( \frac{\gamma _{1}}{\gamma _{1}+\gamma _{2}}\right) }}\left( y-p\right) \text{,} \\ U_{0}\left( y,V\right) &=&y\text{,} \end{eqnarray*} where $V\simeq U\left( 0,1\right) $. \begin{remark} Note that since the restrictions $\gamma _{1}\leq 0$ and $\gamma _{1}+\gamma _{2}\leq 0$ are linear in parameters, it is computationally straightforward to maximize a globally concave likelihood, such as probit or logit, subject to these constraints. \end{remark} The above discussion also applies to \textit{semiparametric} models where one need not specify the exact functional form of $F\left( \cdot \right) $. For example, the semiparametric method of Bhattacharya (2008), which only utilizes the strict monotonicity of the CDF $F\left( \cdot \right) $, can be applied to estimate the binary choice model, subject to our sign restriction and standard scale-normalization, viz. $\gamma _{1}=-1$ and $\gamma _{1}+\gamma _{2}\leq 0$, i.e.
using the specification that $\bar{q}\left( p,y\right) $ is a strictly increasing function of the linear index $-p+\gamma _{2}y$ with $\gamma _{2}\leq 1$.\smallskip \textbf{E. Random Coefficients}: An alternative parametric specification in this context is a random coefficient structure, popular in IO applications. It takes the form \begin{eqnarray*} &&\Pr \left( 1|price=p,income=y\right) \\ &=&\int F\left( \gamma _{1}p+\gamma _{2}y\right) dG\left( \gamma _{1},\gamma _{2},\theta \right) \\ &=&\int F\left( \left( \gamma _{1}+\gamma _{2}\right) y-\gamma _{1}\left( y-p\right) \right) dG\left( \gamma _{1},\gamma _{2},\theta \right) \\ &\equiv &H\left( y,y-p,\theta \right) \text{,} \end{eqnarray*} where $\gamma _{1}$ and $\gamma _{2}$ are now random variables with joint distribution $G\left( \cdot ,\cdot ,\theta \right) $, indexed by an unknown parameter vector $\theta $, and $F\left( \cdot \right) $ is a specified CDF (e.g. a logit). Theorem 1 then implies that the distribution $G\left( \cdot ,\cdot ,\theta \right) $ must be such that the choice probability function $H\left( \cdot ,\cdot ,\cdot \right) $ satisfies $\frac{\partial }{\partial y}H\left( y,\cdot ,\theta \right) \leq 0$ and $\frac{\partial }{\partial \left( y-p\right) }H\left( \cdot ,y-p,\theta \right) \geq 0$. One way to guarantee this would be to specify the support of $\gamma _{1}$ and of $\gamma _{1}+\gamma _{2}$ to lie in $\left( -\infty ,0\right) $.
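That such support restrictions deliver the required monotonicity of $H$ is easy to verify numerically. A minimal sketch (Python; the logit kernel, the lognormal-type supports, and all numbers are hypothetical choices made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw gamma1 and gamma1 + gamma2 with support in (-inf, 0), as suggested
# above; the distributions below are arbitrary illustrative choices.
g1 = -np.exp(rng.normal(0.0, 0.3, 100000))    # gamma1 < 0 w.p.1
g12 = -np.exp(rng.normal(-0.5, 0.3, 100000))  # gamma1 + gamma2 < 0 w.p.1
logistic = lambda t: 1.0 / (1.0 + np.exp(-t))

def H(y, b):
    # H(y, y-p, theta) = E[ F((gamma1+gamma2) y - gamma1 (y-p)) ], with
    # the expectation replaced by a Monte Carlo average over the draws.
    return logistic(g12 * y - g1 * b).mean()

ys = np.linspace(4.0, 8.0, 5)
bs = np.linspace(1.0, 5.0, 5)
dec_in_y = all(H(ys[i + 1], 3.0) <= H(ys[i], 3.0) for i in range(4))
inc_in_b = all(H(6.0, bs[i + 1]) >= H(6.0, bs[i]) for i in range(4))
print(dec_in_y, inc_in_b)  # prints: True True
```

Because the integrand is monotone for every draw of $\left( \gamma _{1},\gamma _{2}\right) $, the Monte Carlo average inherits the monotonicity exactly, regardless of simulation noise.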
Using Theorem 1, a utility structure that would rationalize such a model is \begin{equation*} U_{1}\left( y-p,\eta \right) =h\left( y-p,V,\theta \right) \text{; \ }U_{0}\left( y,\eta \right) =y\text{,} \end{equation*} where $V\simeq U\left( 0,1\right) $, and $h\left( y-p,v,\theta \right) $ is $\sup \left\{ x:H\left( x,y-p,\theta \right) \geq v\right\} $.\footnote{Note that an alternative preference distribution producing the same choice probabilities is given by $U_{1}\left( y-p,\eta \right) =-\gamma _{1}\left( y-p\right) $, $U_{0}\left( y,\eta \right) =\gamma _{0}-\left( \gamma _{1}+\gamma _{2}\right) y$, $\gamma _{0}\perp \left( \gamma _{1},\gamma _{2}\right) $, $\gamma _{0}\simeq F\left( \cdot \right) $, $\left( \gamma _{1},\gamma _{2}\right) \simeq G\left( \cdot ,\cdot ,\theta \right) $, $\gamma _{1}<0$, $\gamma _{1}+\gamma _{2}\leq 0$ w.p.1. This shows that the rationalizing preference distribution may not be unique.}\medskip \textbf{F. Counterfactuals}: Theorem 1 can be used to nonparametrically predict theory-consistent choice probabilities on counterfactual, i.e. previously unobserved, budget sets. Obviously, without shape restrictions, there is no nonparametric restriction on demand on counterfactual budgets. To see how to use shape restrictions, let $A$ denote the set of $\left( p,y\right) $ observed in the data. Then, using part (i) condition (A) of our theorem, the probability $\bar{q}\left( p^{\prime },y^{\prime }\right) $ of buying at counterfactual (i.e.
previously unobserved) price $p^{\prime }$ and income $y^{\prime }$ can be bounded as \begin{equation} \bar{q}\left( p^{\prime },y^{\prime }\right) \equiv q\left( y^{\prime },y^{\prime }-p^{\prime }\right) \in \left[ \sup_{\substack{ \left( p,y\right) \in A:\text{ }y\geq y^{\prime }, \\ y-p\leq y^{\prime }-p^{\prime }}}q\left( y,y-p\right) ,\inf_{\substack{ \left( p,y\right) \in A:\text{ }y\leq y^{\prime }, \\ y-p\geq y^{\prime }-p^{\prime }}}q\left( y,y-p\right) \right] \text{.} \label{1} \end{equation} The above calculation is extremely simple; for example, the lower bound requires collecting those observed budget sets $\left( p,y\right) $ in the data that satisfy $y\geq y^{\prime }$, $y-p\leq y^{\prime }-p^{\prime }$ (a one-line command in STATA), evaluating choice probabilities on them, and sorting these values.\smallskip \textbf{G. Welfare bounds}: Given bounds on choice probabilities, one can obtain lower and upper bounds on economically interesting functionals thereof, such as average welfare. For example, the average compensating variation (CV) -- i.e. the utility-preserving income compensation -- corresponding to a price change from $p_{0}$ to $p_{1}$ at income $y$ is given by $\int_{p_{0}}^{p_{1}}q\left( y+p-p_{0},y-p_{0}\right) dp$ (c.f. Bhattacharya, 2015).\footnote{The results in Bhattacharya (2015) are stated in terms of the standard forms of choice probabilities, viz. $\bar{q}\left( p,y\right) $ in our notation above. In particular, the average CV is $\int_{p_{0}}^{p_{1}}\bar{q}\left( p,y+p-p_{0}\right) dp$ which, in our present notation, is $\int_{p_{0}}^{p_{1}}q\left( y+p-p_{0},y-p_{0}\right) dp$.} This requires prediction of demand on a continuum of budget sets, viz. $\left\{ q\left( y+p-p_{0},y-p_{0}\right) :p\in \left[ p_{0},p_{1}\right] \right\} $.
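The bound (\ref{1}) can indeed be computed with a few lines of code. The following Python sketch uses hypothetical simulated budgets and a shape-respecting choice probability (non-increasing in its first argument, non-decreasing in its second), standing in for choice probabilities estimated from data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed budgets (p, y) and a choice probability q(a, b)
# satisfying the Theorem-1 shape restrictions; in an application q would
# be estimated on the observed budgets.
P = rng.uniform(1.0, 3.0, 500)
Y = rng.uniform(4.0, 8.0, 500)
q = lambda a, b: 1.0 / (1.0 + np.exp(0.5 * a - 1.5 * b))
q_obs = q(Y, Y - P)

def bounds(p_new, y_new):
    """Bounds on q(y_new, y_new - p_new) at a counterfactual budget."""
    lo_set = (Y >= y_new) & (Y - P <= y_new - p_new)
    hi_set = (Y <= y_new) & (Y - P >= y_new - p_new)
    lo = q_obs[lo_set].max() if lo_set.any() else 0.0
    hi = q_obs[hi_set].min() if hi_set.any() else 1.0
    return lo, hi

lo, hi = bounds(2.0, 6.0)
print(lo, hi)  # the true q(6.0, 4.0) lies inside [lo, hi] by construction
```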
Now, it follows from our discussion above, and by Theorem 1, that pointwise bounds on $q\left( y+p-p_{0},y-p_{0}\right) $ are given by \begin{eqnarray} L\left( y+p-p_{0},y-p_{0}\right) &\equiv &\sup_{\left( p^{\prime },y^{\prime }\right) \in A\text{, }y^{\prime }-p^{\prime }\leq y-p_{0}\text{, }y^{\prime }\geq y+p-p_{0}}q\left( y^{\prime },y^{\prime }-p^{\prime }\right) \notag \\ &\leq &q\left( y+p-p_{0},y-p_{0}\right) \notag \\ &\leq &\inf_{\left( p^{\prime },y^{\prime }\right) \in A\text{, }y^{\prime }-p^{\prime }\geq y-p_{0}\text{, }y^{\prime }\leq y+p-p_{0}}q\left( y^{\prime },y^{\prime }-p^{\prime }\right) \equiv M\left( y+p-p_{0},y-p_{0}\right) \text{.} \label{5} \end{eqnarray} This implies that the average CV at $y$ is bounded below by $\int_{p_{0}}^{p_{1}}L\left( y+p-p_{0},y-p_{0}\right) dp$, and above by $\int_{p_{0}}^{p_{1}}M\left( y+p-p_{0},y-p_{0}\right) dp$. We can, in fact, make a stronger statement, viz., that the smallest set $S\left( y,p_{0},p_{1}\right) $ containing \textit{all feasible values} of the average CV, based on the restrictions of Theorem 1, is given by the interval \begin{eqnarray*} &&I\left( y,p_{0},p_{1}\right) \\ &=&\left[ \int_{p_{0}}^{p_{1}}L\left( y+p-p_{0},y-p_{0}\right) dp,\int_{p_{0}}^{p_{1}}M\left( y+p-p_{0},y-p_{0}\right) dp\right] \text{.} \end{eqnarray*} This assertion requires a justification, because $I\left( y,p_{0},p_{1}\right) $ includes integrals of functions that violate the shape restrictions of Theorem 1 but nonetheless satisfy the pointwise bounds (\ref{5}). That justification is as follows. First note that, by definition, the \textit{set} $S\left( y,p_{0},p_{1}\right) $ is given by \begin{equation*} S\left( y,p_{0},p_{1}\right) =\left\{ \int_{p_{0}}^{p_{1}}f\left( y+p-p_{0},y-p_{0}\right) dp:\text{ }f\in \mathcal{F}\right\} \text{,} \end{equation*} where $\mathcal{F}$ is the collection of all functions $f\left( \cdot ,\cdot \right) :R\times R\rightarrow \left[ 0,1\right] $, satisfying the conditions (i) of Theorem 1, viz.
non-increasing and continuous in the first argument and non-decreasing in the second argument, and satisfying $L\left( y+p-p_{0},y-p_{0}\right) \leq f\left( y+p-p_{0},y-p_{0}\right) \leq M\left( y+p-p_{0},y-p_{0}\right) $ for all $p\in \left[ p_{0},p_{1}\right] $.\footnote{The limit condition (B) in Theorem 1 can be dropped in defining $S$ because $y,p_{0},p_{1}$ are all finite.} We want to show that $I\left( y,p_{0},p_{1}\right) =S\left( y,p_{0},p_{1}\right) $. First, note that $S\left( y,p_{0},p_{1}\right) \subseteq I\left( y,p_{0},p_{1}\right) $, because by definition, $L\left( y+p-p_{0},y-p_{0}\right) \leq f\left( y+p-p_{0},y-p_{0}\right) \leq M\left( y+p-p_{0},y-p_{0}\right) $ for each $p$, and $I\left( y,p_{0},p_{1}\right) $ is a connected interval. Now, we show that $I\left( y,p_{0},p_{1}\right) \subseteq S\left( y,p_{0},p_{1}\right) $. To see this, note that we can write any $i\in I\left( y,p_{0},p_{1}\right) $ as \begin{eqnarray*} i &=&\lambda \left( i\right) \times \int_{p_{0}}^{p_{1}}L\left( y+p-p_{0},y-p_{0}\right) dp+\left( 1-\lambda \left( i\right) \right) \times \int_{p_{0}}^{p_{1}}M\left( y+p-p_{0},y-p_{0}\right) dp \\ &=&\int_{p_{0}}^{p_{1}}\left[ \underset{=H^{\lambda \left( i\right) }\left( y+p-p_{0},y-p_{0}\right) \text{, say}}{\underbrace{\lambda \left( i\right) \times L\left( y+p-p_{0},y-p_{0}\right) +\left( 1-\lambda \left( i\right) \right) \times M\left( y+p-p_{0},y-p_{0}\right) }}\right] dp\text{,} \end{eqnarray*} for some real number $\lambda \left( i\right) \in \left[ 0,1\right] $.
But by definition, for every $\lambda \in \left[ 0,1\right] $, the function \begin{eqnarray*} &&H^{\lambda }\left( y+p-p_{0},y-p_{0}\right) \\ &\equiv &\lambda \times L\left( y+p-p_{0},y-p_{0}\right) +\left( 1-\lambda \right) \times M\left( y+p-p_{0},y-p_{0}\right) \end{eqnarray*} belongs to $\mathcal{F}$, since both $L\left( y+p-p_{0},y-p_{0}\right) $ and $M\left( y+p-p_{0},y-p_{0}\right) $, by definition, belong to $\mathcal{F}$, and monotonicity and continuity are preserved under convex combinations. Hence the integral $i=\int_{p_{0}}^{p_{1}}H^{\lambda \left( i\right) }\left( y+p-p_{0},y-p_{0}\right) dp\in S\left( y,p_{0},p_{1}\right) $, and thus $I\left( y,p_{0},p_{1}\right) \subseteq S\left( y,p_{0},p_{1}\right) $. Intuitively, even if $I\left( y,p_{0},p_{1}\right) $ contains the integral (say of value $v$) of a function satisfying the pointwise bounds but not the shape restrictions, there is another function satisfying the shape restrictions and respecting the same pointwise bounds, \textit{whose integral} has the same magnitude $v$.\medskip \textbf{H. Compatibility with SRP}: The welfare calculation above requires prediction of demand on a continuum of budget sets indexed by $p\in \left[ p_{0},p_{1}\right] $, which is operationally difficult -- if not practically impossible -- to implement using the finite-dimensional matrix-equation based SRP approach. But in simple cases where there is a small, finite number of budget sets, and it is easy to verify the SRP conditions, a natural question is whether our shape restrictions i(A) of Theorem 1 are compatible with the SRP based criterion for rationalizability; conditions i(B) and i(C) of Theorem 1 are of course irrelevant in such cases. Indeed, it is not hard to show that our shape restrictions are in fact \textit{necessary} for the SRP criterion to be satisfied.
To see this, suppose we observe behavior on two budget sets corresponding to price and income equal to $\left( p^{1},y\right) $ and $\left( p^{2},y\right) $. Let $a_{0}=y$ and $a_{1}^{j}=y-p^{j}$ for $j=1,2$. Then there are three alternatives to consider, viz. $\left( 0,a_{0}\right) ,\left( 1,a_{1}^{1}\right) $ and $\left( 1,a_{1}^{2}\right) $. WLOG assume $p^{1}<p^{2}$, i.e. $a_{1}^{1}>a_{1}^{2}$. Under nonsatiation in the numeraire, there are 3 possible preference profiles in the population, given by (i) $\left( 0,a_{0}\right) \succ \left( 1,a_{1}^{1}\right) \succ \left( 1,a_{1}^{2}\right) $, (ii) $\left( 1,a_{1}^{1}\right) \succ \left( 0,a_{0}\right) \succ \left( 1,a_{1}^{2}\right) $ and (iii) $\left( 1,a_{1}^{1}\right) \succ \left( 1,a_{1}^{2}\right) \succ \left( 0,a_{0}\right) $; assume the population proportions of these three profiles are $\left( \pi _{1},\pi _{2},\pi _{3}\right) $, respectively. Let $q\left( a_{0},a_{1}^{1}\right) $, $q\left( a_{0},a_{1}^{2}\right) $ denote the choice probabilities of alternative 1 on the two budgets, respectively. Then the SRP approach asks whether the matrix equation \begin{eqnarray} \left[ \begin{array}{ccc} 0 & 1 & 1 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{c} \pi _{1} \\ \pi _{2} \\ \pi _{3} \end{array} \right] &=&\left[ \begin{array}{c} q\left( a_{0},a_{1}^{1}\right) \\ q\left( a_{0},a_{1}^{2}\right) \end{array} \right] \text{, i.e.} \notag \\ \pi _{2}+\pi _{3} &=&q\left( a_{0},a_{1}^{1}\right) \text{, }\pi _{3}=q\left( a_{0},a_{1}^{2}\right) \text{,} \label{6} \end{eqnarray} has a solution $\left( \pi _{1},\pi _{2},\pi _{3}\right) $ in the unit simplex. Clearly, we need that $\pi _{2}+\pi _{3}\geq \pi _{3}$ (guaranteeing $\pi _{2}\geq 0$), which is precisely our shape restriction $q\left( a_{0},a_{1}^{1}\right) \geq q\left( a_{0},a_{1}^{2}\right) $ (as $a_{1}^{1}>a_{1}^{2}$).
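The two-budget system (\ref{6}) can be solved explicitly, which makes the equivalence with the shape restriction transparent. A minimal sketch (plain Python, with illustrative numbers):

```python
def srp_solvable(q1, q2):
    """Check whether (6) has a solution in the unit simplex, where
    q1 = q(a0, a1^1) and q2 = q(a0, a1^2) with a1^1 > a1^2.
    The system pins down pi3 = q2, pi2 = q1 - q2, pi1 = 1 - q1."""
    pi3 = q2
    pi2 = q1 - q2
    pi1 = 1.0 - q1
    return pi1 >= 0 and pi2 >= 0 and pi3 >= 0

print(srp_solvable(0.7, 0.4))  # True: q1 >= q2, shape restriction holds
print(srp_solvable(0.3, 0.6))  # False: q1 < q2 violates the restriction
```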
Similarly, by considering the budget sets $\left( p^{1},y^{1}\right) $ and $\left( p^{2},y^{2}\right) $ with $y^{1}<y^{2}$ and $y^{1}-p^{1}=y^{2}-p^{2}\equiv a_{1}$, say, and $a_{0}^{1}\equiv y^{1}<y^{2}\equiv a_{0}^{2}$, one can show that our shape restriction $q\left( a_{0}^{1},a_{1}\right) \geq q\left( a_{0}^{2},a_{1}\right) $ (as $a_{0}^{1}<a_{0}^{2}$) is necessary for the SRP condition analogous to (\ref{6}) to have an admissible solution. With more budget sets, the corresponding higher-dimensional matrix equations analogous to (\ref{6}) quickly become operationally impractical and cumbersome, as is well known in the literature (see introduction). In contrast, our shape restrictions, by being global conditions on the $q\left( \cdot ,\cdot \right) $ function, remain invariant to which and how many budget sets are considered. Furthermore, we already know via Theorem 1 above that these shape restrictions are also \textit{sufficient} for rationalizability for \textit{any} collection -- finite or infinite -- of budget sets.\footnote{It does not seem possible to show directly, i.e. \textit{without using Theorem 1}, that our shape restrictions are also \textit{sufficient} for the existence of admissible solutions to the analog of (\ref{6}) corresponding to \textit{every} arbitrary collection of budget sets. But given Theorem 1, this exercise is probably of limited interest.}\medskip \textbf{I. Observed Covariates}: One can accommodate observed covariates in our theorem. For example, let $X$ denote a vector of observed covariates, and let $q\left( a,b,x\right) $ denote the choice probability when $Y=a$, $Y-P=b$ and $X=x$.
If for each fixed $x$, $q\left( a,b,x\right) $ satisfies the same properties as (i) A-C in the statement of Theorem 1, then letting \begin{equation*} q^{-1}\left( u,b,x\right) \overset{def}{=}\sup \left\{ z:q\left( z,b,x\right) \geq u\right\} \text{,} \end{equation*} we can rationalize the choice probabilities $q\left( a,b,x\right) $ by setting $W_{1}\left( y-p,V,x\right) \equiv q^{-1}\left( V,y-p,x\right) $ and $W_{0}\left( y,V,x\right) \equiv y$, where $V\simeq U\left( 0,1\right) $.\medskip \textbf{J. Endogeneity}: Our results in Theorem 1 are stated in terms of \textit{structural} choice probabilities $q\left( \cdot ,\cdot \right) $. If budget sets are independent of unobserved heterogeneity (conditional on observed covariates), then these structural choice probabilities are equal to the observed conditional choice probabilities, i.e. \begin{equation*} q\left( a,b\right) =\Pr \left( 1|Y=a,Y-P=b\right) \text{.} \end{equation*} To date, all existing results on rationalizability of demand under heterogeneity, including McFadden and Richter (1990), Lewbel (2001) and Hausman and Newey (2016), maintain independence. If the independence condition is violated (even conditional on observed covariates), then Theorem 1 continues to remain valid as stated, since it concerns the structural choice probability $q\left( \cdot ,\cdot \right) $, but consistent estimation of $q\left( \cdot ,\cdot \right) $ will be more involved. In applications, if endogeneity of budget sets is a potential concern, then it would be advisable to estimate structural choice probabilities using methods for estimating average structural functions. A specific example is the method of control functions, c.f.
Blundell and Powell, 2003, which requires that $\eta \perp \left( P,Y\right) |V$, where $V$ is an estimable \textquotedblleft control function\textquotedblright\ -- typically a first stage residual from a regression of endogenous covariates on instruments.\pagebreak \begin{center} \textbf{References} \end{center} \begin{enumerate} \item Anderson, S.P., De Palma, A. and Thisse, J.F. 1992. Discrete choice theory of product differentiation. MIT press. \item Bhattacharya, D., 2015. Nonparametric welfare analysis for discrete choice. Econometrica, 83(2), pp.617-649. \item Bhattacharya, D., 2018. Empirical welfare analysis for discrete choice: Some general results. Quantitative Economics, 9(2), pp.571-615. \item Bhattacharya, D., 2008. A Permutation-Based Estimator for Monotone Index Models. Econometric Theory, 24(3), pp.795-807. \item Blundell, R., and James L. Powell, 2003. Endogeneity in nonparametric and semiparametric regression models. Econometric society monographs 36: 312-357. \item Dette, H., Hoderlein, S. and Neumeyer, N., 2016. Testing multivariate economic restrictions using quantiles: the example of Slutsky negative semidefiniteness. Journal of Econometrics, 191(1), pp.129-144. \item Hausman, J.A. and Newey, W.K., 2016. Individual heterogeneity and average welfare. Econometrica, 84(3), pp.1225-1248. \item Kitamura, Y. and Stoye, J., 2016. Nonparametric analysis of random utility models, forthcoming, Econometrica. \item Lee, Y.Y. and Bhattacharya, D., 2017. Applied welfare analysis for discrete choice with interval-data on income, mimeo. UC Irvine. \item Lewbel, A., 2001. Demand Systems with and without Errors. American Economic Review, pp. 611-618. \item McFadden, D., 1973. Conditional logit analysis of qualitative choice behavior. \item McFadden, D. and Richter, M.K., 1990. Stochastic rationality and revealed stochastic preference. Preferences, Uncertainty, and Optimality, Essays in Honor of Leo Hurwicz, Westview Press: Boulder, CO, pp.161-186. 
\item McFadden, D. 2005. Revealed Stochastic Preference: A Synthesis. Economic Theory 26(2): 245--264. \item Matzkin, R.L., 1992. Nonparametric and distribution-free estimation of the binary threshold crossing and the binary choice models. Econometrica, pp.239-270. \item Train, K.E., 2009. Discrete choice methods with simulation. Cambridge University Press. \end{enumerate} \end{document}
\section{Introduction} \noindent Recent multi-messenger observations of neutron stars and their mergers have ushered in a golden age of nuclear astrophysics. A few days after their formation in supernovae, neutron stars cool to temperatures well below $1$ MeV \cite{Prakash:2000jr,Yakovlev:2004yr,Yakovlev:2004iq,Brown:2017gxd}. The vast majority of matter is located in the core, where nucleons are compressed to densities from $0.5\,n_0$ to several times $n_0$, where $n_0=2.3\cdot 10^{17}\,\textrm{kg}/\textrm{m}^3$ is the nuclear saturation density. Under such conditions nuclear matter forms a degenerate plasma composed of electrons, muons, protons, and neutrons. The relative abundances of these particles are deduced from the requirements of charge neutrality and beta equilibrium, resulting in highly asymmetric nuclear matter, with typical proton fractions below $10\,\%$. During merger events temperatures are projected to be significantly higher, reaching up to $30$ MeV \cite{Oechslin:2006uk,Kiuchi:2009jt,Radice:2016dwd,Baiotti:2016qnr}. Densities up to four times nuclear saturation density (and beyond, if phase transitions are taken into account \cite{Bauswein:2018bma}) can be achieved, resulting in partially degenerate matter in near-equilibrium.\newline A particularly exciting prospect is the utilization of compact stars as ``laboratories'' to study the properties of nuclear matter under extreme conditions. For many observables transport phenomena represent a crucial link to microscopic physics, where they are calculated from reaction rates of particles traversing through the plasma. In this article, I calculate the rate of electromagnetic scattering (or Moeller scattering), taking into account the environment of fully and partially degenerate nuclear matter. Leptons and hadrons are treated as quasiparticles, immersed in weakly and strongly interacting Fermi liquids, respectively.
Electrons, and depending on the density also muons, are relativistic and weakly interacting, and their scattering is thus a particularly important mechanism for transport. The scattering rate $\Gamma$ (damping rate, interaction rate) may either be obtained from a direct calculation according to Fermi's golden rule, or, via the optical theorem, from the imaginary part of the fermion self energy. As outlined by Braaten and Pisarski \cite{Braaten:1989mz}, a consistent calculation of scattering amplitudes in hot or dense plasmas relies on the separation of two scales, termed ``hard'' and ``soft''. In the present case the hard scale is set by the Fermi momentum $k_f$, while the soft scale is of order $e\,k_f$, where $e\ll1$ is the gauge coupling constant of electromagnetic interactions. Dispersion relations of particles carrying soft momenta are strongly modified by interactions with the surrounding medium, while hard-momentum particles will traverse unhindered through the plasma. Transport phenomena in degenerate matter are predominantly determined by fermions in close proximity to the Fermi surface, whose momenta are hard by definition. Medium modifications to their dispersion relation stemming from electromagnetic interactions can be neglected for all practical purposes. The momentum of the exchanged photon, on the other hand, may either be hard or soft, depending on the angle between the scatterers. Medium modifications to the photon propagator thus play an important role, and need to be resummed to obtain consistent results. Vertex resummations become necessary only if all momenta flowing into or out of a vertex are soft, and can thus be neglected in the present context.
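The numbers behind this scale separation are simple to make explicit. A short sketch (Python; natural units as used throughout this paper, but the Fermi momentum below is a hypothetical, representative value for core electrons rather than a result of this article):

```python
import math

# In natural units the coupling is fixed by e^2 = 4*pi*alpha_f.
alpha_f = 1.0 / 137.0
e = math.sqrt(4.0 * math.pi * alpha_f)  # ~0.30, so e << 1

k_f = 100.0     # assumed electron Fermi momentum in MeV (illustrative)
soft = e * k_f  # soft scale e*k_f, well separated from the hard scale k_f
print(e, soft)
```

With $e\approx 0.30$, the soft scale lies roughly a factor of three below the hard scale, which is the separation the hard/soft resummation scheme relies on.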
To obtain the dressed photon propagator, the (relativistic) random phase approximation (RPA) is employed, which amounts to resumming one-loop polarization functions.\newline The surrounding medium manifests itself in the photon spectrum in two ways: electromagnetic interactions are screened, and a longitudinal component of the photon field, termed plasmon, arises. The latter corresponds to a pure collective excitation, and disappears from the spectrum upon approaching the hard momentum limit. When the exchanged momentum is soft, longitudinal and transverse scattering amplitudes exhibit very different characteristics. Interactions in the longitudinal channel are predominantly modified by Debye screening, stemming from the real part of the longitudinal polarization tensor $\Pi_L$. Debye screening persists even in the static limit, where one obtains the screening mass (Debye mass) as $\Pi_L(q_0=0,\,|\boldsymbol{q}|\rightarrow 0)=-m_D^2$. The transverse polarization tensor $\Pi_\perp$, in contrast, vanishes in the static limit, and the dominant contribution to \textit{dynamical} screening originates from Landau damping, encoded in the imaginary part of $\Pi_\perp$. The difference between Debye screening and Landau damping becomes crucial at finite temperature. While Debye screening renders the longitudinal scattering rate $\Gamma_L$ finite, \textit{dynamical} screening in the transverse channel is unable to do so, resulting in a logarithmic divergence of $\Gamma_\perp$ in the infrared, see Ref. \cite{Bellac:2011kqa} for a pedagogical review. The only exception is the fully degenerate (i.e., $T=0$) limit, where strict Pauli blocking prevents an infrared catastrophe \cite{LeBellac:1996kr,Vanderheyden:1996bw,Manuel:2000mk}. A finite result for $\Gamma_\perp$ at finite temperature requires an advanced resummation scheme, developed in Ref. \cite{Blaizot:1996az}.
Fortunately it is not the total rate $\Gamma$, but rather the energy loss per distance traveled, $-dE/dx$, which ultimately enters the transport integral. Both expressions differ by an additional factor of $q_0 / |\boldsymbol{v}|$ \cite{Braaten:1991jj}, where $\boldsymbol{v}$ is the velocity of the fermion. Combined with the effects from Landau damping, the additional power of $q_0$ is just enough to compensate the infrared divergence stemming from the Bose distribution $n_b\sim T / q_0$. The distinct characteristics of electric and magnetic screening are most pronounced at high densities and low temperatures, where they lead to very different results for the scattering rates in the two channels: Heiselberg and Pethick discovered \cite{Heiselberg:1993cr} that the scattering of ultrarelativistic particles is dominated by the exchange of (transverse) photons, while the scattering of non-relativistic particles is dominated by the exchange of longitudinal plasmons. In the context of nuclear matter the former are represented by electrons, and the latter are represented by nucleons. Muons interpolate between both cases, at least at lower densities.\newline Within the RPA it is straightforward to generalize the calculation of scattering rates to multi-component plasmas: the total energy loss of, say, an electron is simply given by the sum of the individual rates due to collisions with other electrons, muons, and protons in the plasma. In each case, the screening is provided by \textit{all} plasma constituents. This is essentially the approach used to calculate the lepton contribution to transport coefficients in neutron star cores in Refs. \cite{Shternin:2008es,Shternin:2018jop}, see Ref. \cite{Schmitt:2017efp} for a recent review. If strong interactions are taken into account, photons and plasmons couple to electromagnetically and strongly charged protons, which correlates both types of interactions.
These correlations allow for medium induced lepton-neutron scattering \cite{Bertoni:2014soa}, which is otherwise negligible since it arises only due to the small magnetic moment of the neutron \cite{FlowersItoh1976}. Given that proton fractions in dense nuclear matter are small, the question arises whether there are circumstances under which the impact of induced scattering is particularly amplified. Systematic studies have shown \cite{Stetina:2017ozh} that resumming induced interactions strongly modifies the photon spectrum at lower densities $n\sim(0.5-0.6)\, n_0$, where \textit{homogeneous} nuclear matter is projected to become unstable \cite{Baym:1971ax} \cite{Muller:1995ji} \cite{Li:1997ra}. The onset of this instability manifests itself, among other things, in a divergence of the Debye screening of \textit{strong} interactions, which, owing to the presence of protons, drags the electromagnetic screening in the longitudinal channel along with it. Since protons are non-relativistic at lower densities, the transverse channel is only mildly affected. The consistent inclusion of induced interactions into the electromagnetic scattering rates is a central aspect of this paper. \newline \newline In the context of neutron star phenomenology, the results obtained in this article are particularly relevant for the damping of oscillatory modes of a star, most importantly r-modes. While the excitation of r-modes in fast spinning stars seems unavoidable, they are known to become unstable with respect to the emission of gravitational waves \cite{Andersson:1997xt} \cite{Haskell:2015iia}. The fact that fast spinning stars are nevertheless observed in nature points towards an efficient damping mechanism. Viscous damping in the crust-core transition region has been identified as a promising candidate, but would have to be several times larger than previously calculated \cite{Ho:2011tt}. 
In the crust-core interface the impact of induced interactions is most pronounced, which makes it a region of particular interest to this article. It is commonly assumed that protons in the outer core are superconducting. Predictions for the magnitude of the superconducting gap vary, and even the complete absence of superconductivity has been projected, see Ref. \cite{Sedrakian:2018ydt} for a recent review. The interplay of induced interactions and superconductivity will be discussed further in the outlook. Whether the outer core of neutron stars connects directly to the crust is currently unknown, and the possibility of an intermediate layer comprised of nuclear clusters of various geometries, collectively called ``nuclear pasta'', has been suggested \cite{Pethick:1995di}. The existence of a ``pasta phase'' depends on the properties of nuclear interactions at subnuclear densities. Its absence would promote the outermost region of the core to a very privileged position, with profound impact on the spin evolution of neutron stars. \newline In light of the recent observation of a neutron star merger \cite{GBM:2017lvd,TheLIGOScientific:2017qsa,Metzger:2017wot} the question has emerged whether transport phenomena are potentially relevant for the modelling of merger events. Current simulations are based mostly on ideal (magneto)hydrodynamics. The importance of viscous effects has been investigated in Refs. \cite{Alford:2017rxf,Harutyunyan:2018mpe}, which estimate that electromagnetic scattering takes too much time to play a role during merger events. It is emphasized \cite{Harutyunyan:2018mpe}, however, that a definitive clarification can be reached only through fully numerical studies which include all possible dissipative effects. The calculation of scattering rates in partially degenerate matter with temperatures up to $\sim30$ MeV is consequently in demand.
It is reasonable to assume that under such conditions the rates will exhibit quite different characteristics compared to those computed for cold neutron star matter. \newline \newline \noindent This paper is organized as follows: Section \ref{sec:OT} reviews the relationship of the scattering rate $\Gamma$ and the fermion self energies in detail, and discusses various approximations to the full one-loop resummation. It contains subsection \ref{subsec:multi}, which discusses the generalization to a multi-component plasma and subsection \ref{subsec:induced} which introduces induced interactions. Section \ref{sec:FullDegen} evaluates $\Gamma$ at zero temperature, which is appropriate for old neutron stars. Finite temperature studies are discussed in Sec. \ref{sec:PartDegen}, in a range of $T=(0.1-1)$ MeV, covering the life span of neutron stars, and up to $30$ MeV relevant for partially degenerate matter in hot regions of neutron star mergers. Throughout this paper I use natural units $\hbar=c=k_b=1$, and the electric charge $e^2=4\pi\alpha_f$ where $\alpha_f=1/137$ is the fine structure constant, and a mostly negative metric convention $g^{\mu\,\nu}=\textrm{diag} (1, -1 ,-1 ,-1)$. \section{Scattering rate from optical theorem} \label{sec:OT} \begin{figure}[t] \includegraphics[scale=1]{2loop1.pdf} \caption{\label{fig:2loop} Central cuts of the two-loop fermion self energy correspond to Moeller scattering (a), Compton scattering (b), and interference terms between different channels of both (c). Cutting diagram (c) from the bottom left to top right puts all fermions on shell, and consequently yields the interference term to Moeller scattering. The opposite diagonal cut yields the interference contribution to Compton scattering. 
} \end{figure} \begin{figure}[t] \includegraphics[scale=1.13]{AllCuts.pdf} \\[2ex] \setlength{\belowcaptionskip}{-40pt} \caption{\label{fig:scatter} Processes contributing to Moeller scattering originate from (central) cuts of diagrams (a) and (c) in Fig. \ref{fig:2loop}. Both the $t$- and $u$-channel matrix elements correspond to cuts of diagram (a); the interference contribution can be extracted from diagram (c), as indicated. The exchanged four-momenta are conventionally labeled according to the Mandelstam variables $t=(p-p^\prime)^2=(k^\prime-k)^2$ and $u=(p-k^\prime)^2=(p^\prime-k)^2$, and are space-like in both channels. Note that, according to the unitarity rules, the right hand side of each cut corresponds to the complex conjugate amplitude, which differs from the left hand side in that the momentum flow is reversed. In addition to the $M_t\,M^*_u$ interference term, there is consequently the term $M^*_t\,M_u$ which can be obtained from diagram (c) upon reversing all arrows in the loop.} \end{figure} \noindent The scattering rate $\Gamma$ of a fermion immersed in a QED plasma can either be calculated directly, i.e., according to Fermi's golden rule, or via the optical theorem, which relates $\Gamma$ to the imaginary part of the fermion self energy. \textit{On-shell} fermions receive no contributions from their one-loop self energy; processes such as fermionic Landau damping are kinematically forbidden. At two-loop level there are three diagrams contributing to the self energy, see Fig. \ref{fig:2loop}. The optical theorem relates the corresponding imaginary parts to rates of elementary $2\rightarrow2$ scattering processes in QED, namely Moeller scattering and Compton scattering. The imaginary parts can be calculated using cutting rules (or unitarity rules, see e.g., Refs. \cite{Kobes:1985kc,Das:1997gg}): Moeller and Compton scattering are obtainable from central cuts of diagrams (a) and (b), respectively.
As both processes may occur in more than one channel, a complete description needs to account for interference terms. These can be extracted from diagram (c), which allows for two distinct central cuts, one contributing to Moeller scattering and one contributing to Compton scattering. \newline Moeller scattering is the dominant process leading to energy loss of fermions in cold and dense matter, and its study is the main focus of this article. If the fermions engaging in the scattering are \textit{identical}, there are indeed two channels available: direct ($t$-channel) and exchange ($u$-channel) scattering, see Fig. \ref{fig:scatter}. The total matrix element squared, $|M_t+M_u|^2$, thus contains the interference terms $M_t\,M_u^*$ and $M_u\,M_t^*$, which can be obtained from diagram (c), cutting diagonally from the bottom left to the top right corner. \newline In the following I outline the essential steps in the calculation of the scattering rate using a \textit{resummed} photon propagator. For simplicity, we shall consider a dense plasma composed of a single fermion species. One may object that a single-component plasma at large chemical potential fails the requirement of charge neutrality. Leaving this issue aside for a moment, the present discussion should be viewed as a mere preparation for the multi-component case introduced in the next section. The starting point is the \textit{retarded} self energy of on-shell fermions, $\Sigma^+_R$, which is obtained by projecting $\Sigma_R$ onto positive energy states\footnote{To calculate the scattering rate of antiparticles one proceeds analogously \cite{Manuel:2000mk}. In the present context antiparticles can safely be ignored.} \begin{equation}\label{eq:SigmaPlus} \Sigma^+=\frac{1}{2}\text{Tr}\left[\Lambda^+_{\boldsymbol{p}}\,\gamma_0\,\Sigma(p_0,\,\boldsymbol{p})\right]=\frac{1}{4\,\epsilon_{\boldsymbol{p}}}\text{Tr}\left[\left(\slashed{p}+m\right)\,\Sigma(p_0,\,\boldsymbol{p})\right]\,.
\end{equation} Positive and negative energy projectors are given by \begin{equation} \Lambda_{\boldsymbol{p}}^{\pm}=\frac{1}{2}\left(1\pm\gamma_{0}\frac{\boldsymbol{\gamma}\cdot\boldsymbol{p}+m}{\epsilon_{\boldsymbol{p}}}\right)\,,\label{eq:LambdaPM} \end{equation} with the usual relativistic dispersion relation $\epsilon_{\boldsymbol{p}}=\sqrt{\boldsymbol{p}^2+m^2}$. Note that the fermion self energy is in general not gauge invariant, except when it is evaluated on the fermion mass-shell. The interaction rate in turn is related to the imaginary part of the retarded self energy via \cite{Weldon:1983jn} \begin{equation} \label{eq:OTheorem} \Gamma(|\boldsymbol{p}|)=-2\,\text{Im}\,\Sigma^+_R=-\frac{1}{2\,\epsilon_{\boldsymbol{p}}}\text{Tr}\left[\left(\slashed{p}+m\right)\,\text{Im}\,\Sigma_R(p_0=\epsilon_{\boldsymbol{p}},\,\boldsymbol{p})\right]\,. \end{equation} The above expression corresponds to the \textit{total} rate, adding the contributions of particles and holes. If one is specifically interested in the interaction rate of the former or latter, Eq. \ref{eq:OTheorem} needs to be multiplied by $1-n_f(\epsilon_{\boldsymbol{p}})$ or $n_f(\epsilon_{\boldsymbol{p}})$, respectively. At strictly zero temperature Eq. \ref{eq:OTheorem} thus corresponds to the scattering rate of holes for $\epsilon_{\boldsymbol{p}}<\mu$, and to the scattering rate of particles for $\epsilon_{\boldsymbol{p}}>\mu$. A generic expression for the imaginary part of the retarded \textit{one loop} fermion self energy is derived in Appendix \ref{sec: RTFcalc}, and reads \begin{equation} \text{Im}\,\Sigma_{R}(p) = -\frac{e^{2}}{4}\int\frac{d^{4}q}{(2\pi)^{2}}\,\text{I}_{DB}(q_0,\,p_0)\,\gamma_{\mu}\left(\slashed{p}^{\prime}+m\right)\gamma_{\nu}\,\delta(p^{\prime2}-m^{2})\,\delta(q^{2})\,G^{\mu\nu}(q)\,,\label{eq:ImSigma} \end{equation} with $p^\prime=p-q$ as in Fig. \ref{fig:2loop}(a).
In the following we shall assume that we are always interested in retarded self energies and drop the index ``R''. The above expression includes the detailed balance factor \begin{equation}\label{eq:IDB} \text{I}_{DB}(q_0,\,p_0)=\text{sgn}(p_{0}^{\prime})\left[1+2n_{b}(q_{0})\right]+\text{sgn}(q_{0})\left[1-2N_{f}(p_{0}^{\prime})\right]\,, \end{equation} where the Fermi distribution covering particles and antiparticles is defined as \begin{equation} N_{f}(p_{0})=n_{f}(p_{0}-\mu)\,\Theta(p_{0})+n_{f}(p_{0}+\mu)\,\Theta(-p_{0})\,.\label{eq:FermiDistr} \end{equation} Calculations are carried out in Coulomb gauge, subject to the gauge fixing condition $\boldsymbol{\nabla}\cdot A=0$. The gauge fixing dependent factor $G^{\mu\nu}$ in Eq. \ref{eq:ImSigma} reads \begin{equation} G^{\mu\nu}(q)=\frac{q^{2}}{\boldsymbol{q}^{2}}\, g^{\mu0}g^{\nu0}+\delta^{\mu i}\delta^{\nu j}(\delta_{ij}-\hat{q}_{i}\hat{q}_{j})\,.\label{eq:PhotonGauge} \end{equation} The imaginary parts of diagrams corresponding to Compton scattering and Moeller scattering can be obtained from Eq. \ref{eq:ImSigma} by replacing the bare spectral function of the photon or fermion with dressed ones, i.e., by making the replacements \begin{equation} \delta(q^{2})\,\,G^{\mu\nu}(q)\rightarrow \rho_L\,P_L^{\mu\nu}+\rho_\perp\,P_\perp^{\mu\nu},\hspace{1cm} \left(\slashed{p}^{\prime}+m\right)\,\delta(p^{\prime\,2}-m^2)\rightarrow\gamma_{0}\,\rho_{0}+\boldsymbol{\gamma}\cdot\hat{\boldsymbol{p}}^\prime\,\rho_{\boldsymbol{p}^\prime}+\rho_m\,. \end{equation} \noindent Diagrams (a) and (b) in Fig. \ref{fig:2loop} leading to tree-level processes are obtained by dressing the internal photon or fermion propagator with a single self-energy insertion. Screening is taken into account if the self-energy insertions are resummed, leading to the scattering processes depicted in Fig. \ref{fig:ResumScatt}. Note that self-energy insertions cannot produce diagram (c), not even at tree level.
Interference contributions are consequently not included in the RPA resummation program, at least not in its simplest realization (i.e., in the Hartree approximation). To generate diagram (c) one needs to include vertex corrections in $\Sigma$, which, as mentioned in the introduction, can be neglected if one is interested in the dynamics of hard momentum fermions. In Ref. \cite{Shternin:2008es} interference contributions to electron and muon scattering in neutron star cores are computed from Fermi's golden rule, using bare vertices but a screened photon propagator. The obtained results are small compared to pure $t$- and $u$-channel contributions. The exact relationship between vertex corrections and screening effects in the various scattering channels is an interesting question which warrants further studies. \newline To proceed with the calculation of the self-energy on the left of Fig. \ref{fig:ResumScatt} we require the resummed photon spectrum. Employing the random phase approximation, longitudinal and transverse components read (in Coulomb gauge) \bea \rho_{L}(q) & = & -\frac{1}{\pi}\frac{\text{Im}\,\Pi^{00}}{(\text{Re}\,\Pi^{00} - \boldsymbol{q}^{2})^{2}+(\text{Im}\,\Pi^{00} )^{2}} + Z_L(q_0=\omega_L)\,\delta\left(\text{Re}\,\Pi^{00} -\boldsymbol{q}^2 \right)\,, \label{eq:SpecL}\\[3ex] \rho_{\perp}(q) & = & -\frac{1}{\pi}\frac{\text{Im}\,\Pi_\perp }{(\text{Re}\,\Pi_\perp -q^{2})^{2}+(\text{Im}\,\Pi_\perp )^{2}} + Z_\perp(q_0=\omega_\perp)\,\delta\left(\text{Re}\,\Pi_\perp-q^2\right)\,,\label{eq:SpecPerp} \eea where $\Pi_{00}$ and $\Pi_\perp=(\delta_{ij}-\hat{q}_i\hat{q}_j)\Pi_{ij}$ are the longitudinal and transverse photon polarization tensors. A detailed review of the photon spectrum in degenerate matter can be found in Ref. \cite{Stetina:2017ozh}.
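As an illustration, the continuum parts of Eqs. \ref{eq:SpecL} and \ref{eq:SpecPerp} can be evaluated numerically once $\text{Re}\,\Pi$ and $\text{Im}\,\Pi$ are supplied; the minimal sketch below implements only the Lorentzian form with placeholder inputs for the polarization tensor (the actual calculation uses the full one-loop expressions reviewed in Ref. \cite{Stetina:2017ozh}):

```python
import math

def rho_continuum(im_pi, re_pi, q2):
    """Continuum (Landau-damping) part of the RPA spectral functions,
    Eqs. (SpecL)/(SpecPerp):
        rho = -(1/pi) * Im Pi / ((Re Pi - q2)^2 + (Im Pi)^2),
    with q2 = |q|^2 in the longitudinal channel and q2 = q0^2 - |q|^2
    in the transverse one.  Re Pi and Im Pi are placeholder inputs."""
    return -im_pi / (math.pi * ((re_pi - q2) ** 2 + im_pi ** 2))

# Toy values: with the retarded convention Im Pi < 0 for q0 > 0,
# the spectral weight comes out positive.
rho = rho_continuum(im_pi=-1.0, re_pi=0.0, q2=4.0)
```

In the multi-component case discussed below, the same form applies with $\text{Re}\,\Pi$ and $\text{Im}\,\Pi$ replaced by the sums over all charged species.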
Expressions \ref{eq:SpecL} and \ref{eq:SpecPerp} each consist of a continuum contribution and a $\delta$ function corresponding to the energies of photons and plasmons\footnote{The delta distributions included in Eqs. \ref{eq:SpecL} and \ref{eq:SpecPerp} indicate that on-shell photons and plasmons are undamped. This is obviously an artifact of the one-loop resummation of the photon spectrum, which only captures imaginary parts due to Landau damping, and, at much higher energies, pair creation. Compton scattering and inverse Bremsstrahlung, which appear at two-loop order in QED, potentially fill this gap.}. The functions $Z_{L,\perp}$ represent the residua of the propagator evaluated at the poles $p_0=\omega_{L,\,\perp}$. Since the kinematics of Moeller scattering require the intermediate photons to carry space-like momenta, the poles located in the time-like region do not contribute to the calculation of the scattering rate. The continuum contribution in the space-like region corresponds to Landau damping, i.e., the scattering of soft photons with hard fermions thermalized in the plasma. \begin{figure}[t] \includegraphics[scale=1.15]{scatterSingle.pdf} \caption{\label{fig:ResumScatt} Left: Moeller scattering, obtainable from cuts of the fermion self-energy with a dressed photon propagator. Right: Compton scattering, obtainable from cuts of the fermion self-energy with a dressed fermion propagator.} \end{figure} \newline Equipped with the spectral function of the photon we may put the scattering rate Eq. \ref{eq:OTheorem} together. To match the momentum labels of final and initial states indicated in Figs. \ref{fig:2loop} and \ref{fig:scatter}, we attribute the momentum $p^\prime=p-q$ to the fermion propagator in the self energy Eq. \ref{eq:ImSigma}, and the momenta $k$ and $k^\prime=k+q$ to the fermion propagators in the photon polarization tensor. Ignoring anti-particles, Eq.
\ref{eq:OTheorem} becomes \begin{equation} \Gamma(\epsilon_{\boldsymbol{p}}) = -\frac{e^{2}}{4\epsilon_{\boldsymbol{p}}}\int\frac{d^{4}q}{(2\pi)^{2}}\frac{1}{2\epsilon_{\boldsymbol{p}^{\prime}}}\delta(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^{\prime}}-q_{0})\left[1+2n_{b}(q_{0})+\text{sgn}(q_{0})(1-2n_{f}^{-}(p_{0}^{\prime}))\right]\left[\rho_{L}(q)\,g^{\mu0}g^{\nu0}+\rho_{\perp}(q)\,P_{\perp}^{\mu\nu}\right]T_{\mu\nu}\,, \end{equation} where the trace reads \begin{equation} T_{\mu\nu}=4\left[p_{\mu}p_{\nu}^{\prime}+p_{\mu}^{\prime}p_{\nu}-g_{\mu\nu}(p\cdot p^{\prime}-m^{2})\right]\,, \end{equation} and is to be evaluated at $p_0=\epsilon_{\boldsymbol{p}}$, and $p_0^\prime=\epsilon_{\boldsymbol{p}^\prime}$. Note that a negative energy transfer $q_0<0$ corresponds to the inverse rather than the direct process. In this case $n_b(-q_0)=-[1+n_b(q_0)]$, such that the detailed balance factor obtains an overall negative sign. This sign is compensated by the spectral function of the photon, which is an odd function of $q_0$. In a fully degenerate plasma, direct and inverse processes correspond to the scattering rates of particles and holes respectively: a direct process involves a particle with energy $\epsilon_{\boldsymbol{p}}>\mu$, which scatters on a constituent of the Fermi sea with energy $\epsilon_{\boldsymbol{k}}<\mu$. As a result of Pauli blocking, both particles occupy final states above the Fermi surface, i.e., $\epsilon_{\boldsymbol{k}^\prime}>\mu$, $\epsilon_{\boldsymbol{p}^\prime}>\mu$. The energy transfer $q_0=\epsilon_{\boldsymbol{k}^\prime}-\epsilon_{\boldsymbol{k}}=\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^\prime}$ is positive, and a maximum of $q_0=\epsilon_{\boldsymbol{p}}-\mu$ can be transferred.
To picture the inverse process one may think of a hole with energy $\epsilon_{\boldsymbol{p}}<\mu$, which is filled by a particle with energy $\epsilon_{\boldsymbol{p}^\prime}$ ``falling'' into it from above, whereby the latter emits a virtual photon and leaves behind a hole in the final state with energy $\epsilon_{\boldsymbol{p}^\prime}$. The energy difference $\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^\prime}<0$ is transferred to the state $\epsilon_{\boldsymbol{k}^\prime}$, which is extracted from the Fermi sea. The roles of final and initial states are consequently interchanged, which is reflected in the detailed balance factor, see Eqs. \ref{eq:FermiRateP} and \ref{eq:FermiRateH} below. \newline After contracting the trace with longitudinal and transverse projectors, the final results for the rates are \begin{eqnarray}\label{eq:GammaL} \Gamma_{L}(\epsilon_{\boldsymbol{p}}) & = & \frac{e^{2}}{2}\,\int\frac{d^{3}\boldsymbol{q}}{(2\pi)^{2}}\,\rho_{L}(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^{\prime}},\,\boldsymbol{q})\,\left[1+n_{b}(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^{\prime}})-n_{f}^{-}(\epsilon_{\boldsymbol{p}^{\prime}})\right]\,\left(1+\frac{\epsilon_{\boldsymbol{p}}^{2}-\boldsymbol{p}\cdot\boldsymbol{q}}{\epsilon_{\boldsymbol{p}}\,\epsilon_{\boldsymbol{p}^{\prime}}}\right)\,,\\ \nonumber \\ \Gamma_{\perp}(\epsilon_{\boldsymbol{p}}) & = & e^{2}\,\int\frac{d^{3}\boldsymbol{q}}{(2\pi)^{2}}\,\rho_{\perp}(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^{\prime}},\,\boldsymbol{q})\,\left[1+n_{b}(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^{\prime}})-n_{f}^{-}(\epsilon_{\boldsymbol{p}^{\prime}})\right]\,\left(1+\frac{\boldsymbol{p}_{\perp}^{2}-\epsilon_{\boldsymbol{p}}^{2}+\boldsymbol{p}\cdot\boldsymbol{q}}{\epsilon_{\boldsymbol{p}}\,\epsilon_{\boldsymbol{p}^{\prime}}}\right)\,.\label{eq:GammaP} \end{eqnarray} Note the global factor of $2$ in $\Gamma_\perp$, which stems from the two transverse polarizations
of the photon. It is an instructive exercise to obtain the tree level rate as the leading order term in an $\alpha_f$ expansion of the resummed result. A detailed derivation of the results below can be found in Appendix \ref{sec:scatrate}. In the context of degenerate matter tree-level rates correspond to the scattering of fermions far away from the Fermi surface, governed by the exchange of hard-momentum photons\footnote{In this case the Rutherford singularity is unscreened, leading to divergent results for $\Gamma_\perp$ and $\Gamma_L$. To carry out the momentum integration one may follow the approach of Braaten and Yuan \cite{Braaten:1991dd}, and introduce a cutoff scale $q^*$ with $e k_f\ll q^* \ll k_f$, which separates the soft region (where medium effects are essential) from the hard region. Adding soft and hard contributions one finds that the $q^*$ dependence drops out \cite{LeBellac:1996kr}. The calculations in this article are based on the full one-loop resummation, and consequently cover both regions automatically. The introduction of an intermediate scale is not necessary.}. To expand $\rho_\perp$ (for hard momenta the photon is purely transverse) we briefly indicate the $e^2$ dependence of the polarization tensor $\Pi$ explicitly. To leading order one finds \begin{equation} \label{eq:rhoExpand} \rho_{\perp}(q) = - \frac{1}{\pi}\frac{e^2\,\text{Im}\,\Pi_\perp}{(e^2\,\text{Re}\,\Pi_\perp-q^{2})^{2}+(e^2\,\text{Im}\,\Pi_\perp)^{2}}\sim-\frac{e^2}{\pi}\frac{1}{q^{2}}\,\,\text{Im}\,\,\Pi_\perp\,\frac{1}{q^{2}}+\mathcal{O}(e^4)\,. \end{equation} Each factor of $1/q^2$ represents a free photon propagator. Plugging Eq. \ref{eq:rhoExpand} into the imaginary part of the fermion self energy yields the imaginary part of the two-loop diagram Fig. \ref{fig:2loop}(a). 
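The leading term of the expansion Eq. \ref{eq:rhoExpand} can also be checked numerically; the following sketch compares the full Lorentzian with its $\mathcal{O}(e^2)$ approximation at weak coupling (all numerical inputs are placeholder values, not physical polarization functions):

```python
import math

def rho_perp_full(e2, re_pi, im_pi, q2):
    # Full resummed form of Eq. (SpecPerp), with the e^2 dependence
    # of the polarization tensor written out explicitly.
    return -(e2 * im_pi) / (math.pi * ((e2 * re_pi - q2) ** 2 + (e2 * im_pi) ** 2))

def rho_perp_leading(e2, im_pi, q2):
    # Leading O(e^2) term of Eq. (rhoExpand): Im Pi sandwiched
    # between two free photon propagators 1/q^2.
    return -(e2 / math.pi) * im_pi / q2 ** 2

# For e^2 -> 0 the two expressions agree up to O(e^4) corrections.
full = rho_perp_full(e2=1e-4, re_pi=0.5, im_pi=-0.3, q2=-3.0)
lead = rho_perp_leading(e2=1e-4, im_pi=-0.3, q2=-3.0)
```

Increasing `e2` toward physical values makes the relative deviation grow linearly in $e^2$, as expected from the neglected terms.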
Note that it is imperative to include the factors $1-n_f(\epsilon_{\boldsymbol{p}})$ and $n_f(\epsilon_{\boldsymbol{p}})$ to obtain the correct detailed balance relation of particle and hole scattering, respectively. After a variable transformation and a reorganization of the thermal distribution functions one finds the tree-level (order $e^4$) rate of particles, \begin{equation} \Gamma_p(\epsilon_{\boldsymbol{p}})=\frac{1}{2\epsilon_{\boldsymbol{p}}}\,\int_{k}\,\frac{1}{2\epsilon_{\boldsymbol{k}}}\,n_{f}^{-}(\epsilon_{\boldsymbol{k}})\,\int_{k^{\prime}}\,\frac{1}{2\epsilon_{\boldsymbol{k}^\prime}}\left[1-n_{f}^{-}(\epsilon_{\boldsymbol{k}^{\prime}})\right]\,\int_{p^{\prime}}\,\,\frac{1}{2\epsilon_{\boldsymbol{p}^\prime}}\,\left[1-n_{f}^{-}(\epsilon_{\boldsymbol{p}^{\prime}})\right]\,\left(2\pi\right)^{4}\delta(p+k-p^{\prime}-k^{\prime})\,\left|M\right|^{2}\,,\label{eq:FermiRateP} \end{equation} and holes, \begin{equation} \Gamma_h(\epsilon_{\boldsymbol{p}})=\frac{1}{2\epsilon_{\boldsymbol{p}}}\,\int_{k}\,\frac{1}{2\epsilon_{\boldsymbol{k}}}\,\left[1-n_{f}^{-}(\epsilon_{\boldsymbol{k}})\right]\,\int_{k^{\prime}}\,\frac{1}{2\epsilon_{\boldsymbol{k}^\prime}}\,n_{f}^{-}(\epsilon_{\boldsymbol{k}^{\prime}})\,\int_{p^{\prime}}\,\frac{1}{2\epsilon_{\boldsymbol{p}^\prime}}\,n_{f}^{-}(\epsilon_{\boldsymbol{p}^{\prime}})\,\left(2\pi\right)^{4}\delta(p+k-p^{\prime}-k^{\prime})\,\left|M\right|^{2}\,,\label{eq:FermiRateH} \end{equation} with the standard short-hand notation for the integration measure, \begin{equation} \int_{p}=\int\frac{d^{3}\boldsymbol{p}}{(2\pi)^{3}}\,.\label{eq:measure} \end{equation} The squared matrix element $|M|^2$ emerges from the product of the traces included in $\Gamma$ and $\Pi$, and corresponds to $t$-channel or $u$-channel scattering, i.e., $|M|^2=|M_t|^2$ or $|M|^2=|M_u|^2$ (see Eq. \ref{eq:RateCompare} in Appendix \ref{sec:scatrate} for the explicit result).
The matrix element of the interference term does not factorize, and, as mentioned above, has to be extracted from the imaginary part of Fig. \ref{fig:2loop}(c). Deriving the rate according to Fermi's golden rule from field theory serves as a useful check to ensure that all statistical factors are accounted for correctly. The step from bare to screened interactions is now straightforward: writing the spectral functions Eq. \ref{eq:SpecL} and Eq. \ref{eq:SpecPerp} as $\rho_{j}=-(1/\pi)\,|D_{j}|^2\,\textrm{Im}\,\Pi_j$, where $j=\{L,\,\perp\}$ and $D_j$ is the dressed photon propagator, \begin{equation} D_{L}=\frac{1}{\boldsymbol{q}^2-\Pi_{00}}, \hspace{1cm}D_{\perp}=\frac{1}{q^2-\Pi_\perp}\,, \end{equation} one recovers the expansion Eq. \ref{eq:rhoExpand}, except that bare propagators are replaced by dressed ones. In conclusion, $\Gamma$ corresponds to the scattering rate depicted on the left hand side of Fig. \ref{fig:ResumScatt}. \subsection{Scattering in the Multi-Component Plasma} \label{subsec:multi} \begin{figure}[t] \includegraphics[scale=1.15]{scatterMulti.pdf} \caption{\label{fig:ScatterMulti} Total scattering rate calculated from the self energy of electrons, propagating through nuclear matter composed of other electrons, muons, and protons. In each case the scattering occurs via the exchange of plasmons and photons, whose dispersion relations at soft momenta are strongly modified by screening and damping effects of all fermions in the plasma. The photon spectrum in the multi-component plasma is obtained from the imaginary part of the dressed photon propagator Eq.
\ref{eq:PropEMP}.} \end{figure} \noindent Within the RPA it is particularly simple to extend the calculation of the scattering rate to a QED plasma composed of electrons, protons, and muons (EMP plasma): the total photon polarization tensor is simply given by the sum of the individual polarizations, i.e., $\Pi\rightarrow\Pi_e+\Pi_\mu+\Pi_p$, such that the dressed photon propagator reads \begin{equation} \label{eq:PropEMP} D^{\mu\nu}(q)=\frac{1}{\Pi_{00,\,e}(q)+\Pi_{00,\,\mu}(q)+{\Pi}_{00,\,p}(q)-\boldsymbol{q}^{2}}\,g^{\mu0}g^{\nu0}+\frac{1}{\Pi_{\perp,\,e}(q)+\Pi_{\perp,\,\mu}(q)+\Pi_{\perp,\,p}(q)-q^{2}}\,P_{\perp}^{\mu\nu}\,. \end{equation} It is easy to check that an expansion in $\alpha_f$ produces all possible combinations of one-loop insertions. The spectral functions $\rho_L$ and $\rho_\perp$ are accordingly obtained from the imaginary part of Eq. \ref{eq:PropEMP}, and can as well be expressed as a sum, $\rho_{j}=-(1/\pi)\,|D_{j}|^2\,\textrm{Im}\,(\Pi_{e,\,j}+\Pi_{\mu,\,j}+\Pi_{p,\,j})$. The results are inserted into the same expressions for $\Gamma_L$ and $\Gamma_\perp$, Eqs. \ref{eq:GammaL} and \ref{eq:GammaP}. If $\Gamma$ is computed from the self energy $\Sigma$ of, say, an electron, it may be interpreted as the sum of the individual scattering rates of electrons with all fermion species present in the plasma: $\Gamma_e=\Gamma_{e,\,e}+\Gamma_{e,\mu}+\Gamma_{e,\,p}$. In each of these channels, the screening receives contributions from all constituents, see Fig. \ref{fig:ScatterMulti}. Naturally, the same applies to the self energy of all other fermions.\newline The evaluation of electron, muon, and proton loops requires the determination of their respective chemical potentials and, in the case of the protons, also of the effective mass $m^*_p$. Under degenerate conditions these quantities can be extracted from an energy density functional, as outlined in detail in Ref.
\cite{Stetina:2017ozh}, see in particular section IIIA and Appendix B2 therein. In the following the essential steps are summarized. The properties of nuclear matter at a given density are extracted from an energy functional based on Skyrme type interactions \cite{Chamel:2006rc}, \begin{equation} \mathcal{E}[n]=\sum_{T=0,1}\left[\delta_{T,0}\frac{\hbar^{2}}{2m}\tau_{T}+C_{T}^{n}[n]\,n_{T}^{2}+C_{T}^{\tau}n_{T}\,\tau_{T}+C_{T}^{\boldsymbol{j}}\,\boldsymbol{j}_{T}^{2}\right]\,,\label{eq:EpsChamel} \end{equation} \noindent where the coefficients $C_{T}^{ {n,\tau,\boldsymbol{j} } }$ are related to standard Skyrme parameters \cite{Chamel:2006rc}. The functional Eq. \ref{eq:EpsChamel} depends on densities $n_a$, kinetic energies $\tau_a$, and currents $\boldsymbol{j}_a$, which in turn are related to the quasiparticle occupations via \begin{equation}\label{eq:functionals} n_{a}[n_{\boldsymbol{k,}a}]=\int_{k}n_{\boldsymbol{k},a}\,,\hspace{1cm}\tau_{a}[n_{\boldsymbol{k,}a}]=\int_{k}\boldsymbol{k}^{2}n_{\boldsymbol{k},a}\,,\hspace{1cm}\boldsymbol{j}_{a}[n_{\boldsymbol{k,}a}]=\int_{k}\boldsymbol{k}\,n_{\boldsymbol{k},a}\,, \end{equation} \noindent with the flavor index $a = n, p$. Isoscalar ($T=0$) and isovector ($T=1$) densities are given by $n_{0}=n=n_{n}+n_{p}$, $n_{1}=n_{n}-n_{p}$ (and similarly for $\boldsymbol{j}$ and $\tau$).
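At zero temperature, where the occupations reduce to step functions, the functionals Eq. \ref{eq:functionals} become simple radial integrals; the sketch below evaluates $n_a$ and $\tau_a$ for a filled Fermi sphere (assuming a spin degeneracy factor $g=2$ and units with $k_f=1$; the current $\boldsymbol{j}_a$ vanishes by isotropy):

```python
import math

def fermi_sphere_moments(kf, g=2, steps=100_000):
    # n   = g * int d^3k/(2 pi)^3 Theta(kf - |k|)      -> g kf^3 / (6 pi^2)
    # tau = g * int d^3k/(2 pi)^3 k^2 Theta(kf - |k|)  -> g kf^5 / (10 pi^2)
    # Radial midpoint rule; the angular integration gives a factor 4 pi.
    dk = kf / steps
    n = tau = 0.0
    for i in range(steps):
        k = (i + 0.5) * dk
        w = g * 4 * math.pi * k * k * dk / (2 * math.pi) ** 3
        n += w
        tau += w * k * k
    return n, tau
```

With $g=2$ this reproduces the familiar degenerate results $n=k_f^3/(3\pi^2)$ and $\tau=k_f^5/(5\pi^2)$.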
Quasiparticle dispersions, effective masses, and residual (density and current) interactions are obtained by taking the derivatives \begin{equation} \label{eq:derivatives} e_{\boldsymbol{k},a} = \frac{\delta\mathcal{E}}{\delta n_{a,\boldsymbol{k}}} = \frac{\hbar^{2}\boldsymbol{k}^{2}}{2m_{a}^{*}}+U_{a}\,,\hspace{1cm}\frac{\hbar^{2}}{2m_{a}^{*}} := \frac{\delta\mathcal{E}}{\delta\tau_{a}}\,,\hspace{1cm}f_{ab}=\frac{\delta^{2}\mathcal{E}}{\delta n_{a,\boldsymbol{k}}\,\delta n_{b,\boldsymbol{k}}}\,,\hspace{1cm}\bar{f}_{ab} \,\delta_{ij} = \frac{\delta^{2}\mathcal{E}}{\delta j_{a}^{i}\,\delta j_{b}^{j}}\,. \end{equation} Relations \ref{eq:derivatives} are obviously non-relativistic, and need to be matched with their relativistic counterparts before they can be incorporated into the RPA resummation. In particular the kinetic contributions to single-particle energies and chemical potentials (at zero temperature) are related via \begin{equation} e_{\boldsymbol{k},\,(kin,\,rel)}=\sqrt{\boldsymbol{k}^{2}+m^{*2}}+m-m^{*}\,,\hspace{1cm}\mu_{(kin,\,rel)}=\sqrt{k_{f}^{2}+m^{*2}}+m-m^{*}\,,\label{eq:EpsMuRPA} \end{equation} where adding $\delta m=m-m^*$ ensures that the leading term in a large $m^*$ expansion consists of the \textit{bare} mass plus the non-relativistic expression. The residual quasiparticle interactions will be required for the introduction of induced interactions in the next section. In a partial wave expansion, single particle energies and density-density potentials are related to $l=0$ Landau parameters, while effective masses and current-current potentials correspond to $l=1$ Landau parameters. The latter exhibit a stronger model dependence. At zero temperature, the derivatives Eq. \ref{eq:derivatives} are evaluated at the Fermi surface, i.e., by setting $n_{a,\,\boldsymbol{k}}\rightarrow n_{a,\,0}=\Theta(k_f-|\boldsymbol{k}|)$.
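The matching Eq. \ref{eq:EpsMuRPA} is easily verified numerically: for large $m^*$ the matched chemical potential approaches the bare mass plus the non-relativistic term $k_f^2/(2m^*)$. A minimal sketch (the values in MeV are purely illustrative):

```python
import math

def mu_matched(kf, m, m_star):
    # Eq. (EpsMuRPA): mu = sqrt(kf^2 + m*^2) + m - m*
    return math.sqrt(kf ** 2 + m_star ** 2) + m - m_star

# Large-m* check: mu -> m + kf^2/(2 m*) up to O(kf^4 / m*^3) corrections.
mu = mu_matched(kf=100.0, m=939.0, m_star=800.0)
mu_nonrel = 939.0 + 100.0 ** 2 / (2 * 800.0)
```

For $k_f=0$ the matched potential reduces to the bare mass $m$, as intended by the shift $\delta m = m - m^*$.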
Strictly speaking, the residual quasiparticle interactions are only valid in the static limit $f_{ab}(\boldsymbol{q}=\boldsymbol{0})$ and $\bar{f}_{ab}(\boldsymbol{q}=\boldsymbol{0})$, where $\boldsymbol{q} = \boldsymbol{k}-\boldsymbol{k^\prime}$ is the momentum transfer in the scattering of two quasi-particles. While it would certainly be desirable to study the momentum dependence of nuclear interactions, the current approach is a reasonable approximation, in particular with regard to the calculation of scattering rates, which are dominated by soft momentum exchange. To reduce the dependence on any single parameter set, several modern Skyrme forces recommended in Ref. \cite{Dutra:2012mb} are employed, including KDE0v1 \cite{Agrawal:2005ix}, SKRA \cite{SKRA}, SQMC700 \cite{Goriely:2001rbd}, LNS \cite{Cao:2005ac}, and NRAPR \cite{Steiner:2004fi}. A comparison of proton fractions and effective masses is shown in Fig. \ref{fig:Skyrmes}. \begin{figure} \includegraphics[scale=0.8]{pfrac.pdf}\hspace{0.8cm}\includegraphics[scale=0.58]{mstar.pdf} \caption{\label{fig:Skyrmes} Comparison of proton fractions and effective masses at zero temperature, using the Skyrme parameter sets recommended in Ref. \cite{Dutra:2012mb}. Homogeneous nuclear matter is stable above a critical density $n_c$, which has to be determined separately for each parameter set. The results depicted above are consequently cut off at slightly different values at the left hand side of each plot.} \end{figure} \begin{table}[t] \setlength{\belowcaptionskip}{-0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{parameter set} & NRAPR & SKRA & SQMC700 & LNS & KDE0v1\tabularnewline \hline \hline $n_{c}\,\,[n_{0}]$ & 0.539 & 0.543 & 0.539 & 0.590 & 0.594\tabularnewline \hline $n_{\mu}\,[n_{0}]$ & 0.747 & 0.772 & 0.773 & 0.801 & 0.698\tabularnewline \hline \end{tabular} \caption{\label{tab:CritDens}Critical densities for the stability of homogeneous nuclear matter and for the onset of muons at zero temperature.
The results for $n_c$ are obtained from evaluating condition Eq. \ref{eq:Stability} in $\beta$ equilibrium, employing the Skyrme parameters recommended in Ref. \cite{Dutra:2012mb}. The results for $n_\mu$ are obtained from the conditions $n_\mu=0$ and $m_\mu=\mu_e$.} \end{table} Stable homogeneous nuclear matter requires a positive curvature of the ground-state energy density $\mathcal{E}_0$ in the space spanned by $n_{n}$ and $n_{p}$, \begin{equation} \frac{\partial^{2}\mathcal{E}_{0}}{\partial n_{n}^{2}}\cdot\frac{\partial^{2}\mathcal{E}_{0}}{\partial n_{p}^{2}}-\left(\frac{\partial^{2}\mathcal{E}_{0}}{\partial n_{n}\partial n_{p}}\right)^{2}>0\,.\label{eq:Stability} \end{equation} The above condition defines a critical density $n_c$, below which nuclear matter strives to be in a clustered state \cite{Baym:1971ax,Muller:1995ji,Li:1997ra}, and which has to be determined for each given model. Note that Eq. \ref{eq:Stability} does not account for electromagnetism. Doing so turns the second-order phase transition at the critical density into a first-order one \cite{Lamb:1983djd}. While the nature of the phase transition is not relevant in the present context, it should be stressed that the screening mass, being a second-order derivative of the energy density, is in any case sensitive to the instability and diverges upon approaching $n=n_c$ from above. This behavior will be essential for the discussion in the next section.\newline At finite temperature, a rigorous evaluation of the derivatives Eq. \ref{eq:derivatives} requires a self-consistent approach, see e.g. Refs. \cite{Rrapaj:2014yba,Tan:2016ypx}.
In a first approximation the particle fractions at a given temperature can be obtained from the usual conditions imposed by $\beta$ equilibrium and charge neutrality, \bea \label{eq:BetaT} \mu_n(n,x_p,T)-\mu_p(n,x_p,T)=\mu_e(n,x_e,T)=\mu_\mu(n,x_\mu,T)\,,\hspace{1cm}x_p=x_e+x_\mu\,, \eea where in each case $\mu$ is obtained by inverting the expressions for the densities ($a=p,\,n$, $l=e,\,\mu$), \begin{equation} \label{eq:currents} x_a\,n=\frac{1}{\pi^2}\int d|\boldsymbol{p}|\,\boldsymbol{p}^2\,n_f\left[(e_{\boldsymbol{p},\,a}+U_a-\mu_a)/T\right]\,,\hspace{1cm}x_l\,n=\frac{1}{\pi^2}\int d|\boldsymbol{p}|\,\boldsymbol{p}^2\,n_f\left[(\sqrt{\boldsymbol{p}^2+m_l^2}-\mu_l)/T\right]\,. \end{equation} This procedure determines one specific $x_p(T)$ for each chosen temperature. Note that $U_a=U_a(n,\,x_p)$ and $m^*_a=m^*_a(n,\,x_p)$ vary implicitly with temperature, because they depend on $x_p$. The mean-field shift $U_a$ in the single particle energies only matters for the determination of the particle fractions. Once these are known, the actual (effective) chemical potentials to be used in a loop calculation are obtained from the same relation Eq. \ref{eq:currents} neglecting $U_a$: as far as the RPA is concerned, nucleons are regarded as free fermions (with effective masses). Particle fractions and chemical potentials at $T=10$ MeV, $T=20$ MeV, and $T=30$ MeV are displayed in Fig. \ref{fig:FracT}. Higher temperatures tend to reduce the differences between $x_p$ and $x_n$ and between $x_e$ and $x_\mu$, in agreement with the findings of Ref. \cite{Tan:2016ypx}. With increasing density or decreasing temperature the particle fractions smoothly approach those obtained at $T=0$. The variation of the chemical potentials is fairly small for all temperatures considered. For future reference, chemical potentials and effective masses for three different densities are listed in Tab. \ref{tab:NRAPR}.
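For the leptons, the second relation in Eq. \ref{eq:currents} can be inverted for $\mu_l$ by bisection, since the density is monotonic in the chemical potential. A self-contained sketch under simplifying assumptions (fixed-step midpoint quadrature, illustrative MeV-scale numbers; a production code would use adaptive quadrature):

```python
import math

def lepton_density(mu, m, T, steps=4000):
    # n = (1/pi^2) * int d|p| p^2 n_f[(sqrt(p^2+m^2)-mu)/T], Eq. (currents)
    p_max = abs(mu) + m + 30.0 * T   # integrand is negligible beyond this
    dp = p_max / steps
    n = 0.0
    for i in range(steps):
        p = (i + 0.5) * dp
        x = (math.sqrt(p * p + m * m) - mu) / T
        n += p * p * dp / (math.exp(min(x, 700.0)) + 1.0)  # guard exp overflow
    return n / math.pi ** 2

def solve_mu(n_target, m, T, lo=0.0, hi=2000.0):
    # Bisection on the monotonically increasing density n(mu).
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if lepton_density(mid, m, T) < n_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Round-tripping a chosen chemical potential through `lepton_density` and `solve_mu` recovers it, and in the degenerate limit the density approaches the familiar $k_f^3/(3\pi^2)$.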
\newline \begin{figure} \includegraphics[scale=0.6]{FracT.pdf}\hspace{0.8cm}\includegraphics[scale=0.6]{FracT0.pdf}\\[2ex] \includegraphics[scale=0.84]{PlotMuE.pdf}\hspace{0.8cm}\includegraphics[scale=0.6]{PlotMuNucl.pdf} \caption{\label{fig:FracT} Particle fractions (top) and (effective) chemical potentials (bottom) at three different temperatures $T=10,\,20,\,30$ MeV, calculated using NRAPR (Non-Relativistic Akmal, Pandharipande and Ravenhall \cite{Akmal:1998cf}) Skyrme forces. The upper right panel compares the relative abundances of protons (solid), electrons (dashed), and muons (dot-dashed) at $T=10$ MeV (gray) and $T=0$ (thick black), and shows that the finite temperature results approach the zero temperature limit with increasing density. While the particle fractions vary considerably with temperature, the impact on the chemical potentials is comparatively small. } \end{figure} \noindent To conclude this subsection, the space-like regions of $\rho_L$ and $\rho_\perp$ are displayed in Fig. \ref{fig:RhoMulti}. In the degenerate limit $\rho_L$ displays characteristic peaks, located in the vicinity of $v_{f,\,i}\,|\boldsymbol{q}|$, where $v_{f,\,i}$ is the Fermi velocity of a given species. With increasing temperature these structures are ``washed out'', in agreement with the findings of Ref. \cite{Heiselberg:1993cr}, which attaches particular importance to dynamical screening in degenerate matter. The three peaks in the spectrum originate from poles of the (real part of the) longitudinal photon propagator, indicating that there are not one but three plasmon modes, corresponding to the collective response of electrons, muons, and protons \cite{Stetina:2017ozh,Baldo:2008pb}. As expected, the transverse spectrum is much less structured due to its lack of static screening. For the same reason it is much larger than its longitudinal counterpart at soft energies. The characteristic landscape displayed in Fig.
\ref{fig:RhoMulti} defines dynamical screening in the multi-component plasma. Note that we have ignored neutrons entirely; the propagator Eq. \ref{eq:PropEMP} includes only those fermion loops which couple directly to the photon. \begin{table}[] \setlength{\belowcaptionskip}{0.5cm} \begin{tabular}{|c|c|c|c|c|c|} \hline & $\mu_{e} = \mu_\mu$ ~{[}MeV{]}~~~ & $\mu_{p}$~ {[}MeV{]}~~~ & $\mu_{n}$~ {[}MeV{]}~~~ & $m_{p}^{*}$~ {[}MeV{]}~~~ & $m_{n}^{*}$~ {[}MeV{]}~~~\tabularnewline \hline \hline $n=0.55\,n_{0}$ & 88 & 699 & 872 & 693 & 830\tabularnewline \hline $n=0.65\,n_{0}$ & 97 & 670 & 860 & 663 & 812\tabularnewline \hline $n=n_{0}$ & 122 & 589 & 819 & 575 & 752\tabularnewline \hline $n=2\,n_{0}$ & 162 & 457 & 739 & 419 & 618\tabularnewline \hline \end{tabular} \caption{Chemical potentials and effective masses calculated in $\beta$ equilibrium at fixed density and zero temperature. At the lower two densities muons are absent. The parameters listed above are extracted using NRAPR Skyrme forces, and matched to their relativistic counterparts. At first glance it may seem surprising that the chemical potentials of the nucleons decrease with increasing density. The difference $\mu-m^*$, however, increases as expected. } \label{tab:NRAPR} \end{table} \begin{figure} \includegraphics[scale=0.6]{RhoLongT.pdf}\hspace{0.8cm}\includegraphics[scale=0.82]{RhoPerpT.pdf} \caption{\label{fig:RhoMulti} Longitudinal (left) and transverse (right) spectral functions in the EMP plasma at saturation density. Chemical potentials and effective masses are obtained using NRAPR Skyrme forces, see Tab. \ref{tab:NRAPR}. The momentum is fixed at $|\boldsymbol{q}|=3$ MeV, and the continuum contributions due to Landau damping are displayed as a function of the photon energy $q_0$. The temperatures are set to $T=0.1$ MeV (solid), $T=10$ MeV (dashed), and $T=20$ MeV (dot-dashed). In addition, the thin blue and red lines display the respective spectral functions in the absence of muons and protons.
Under degenerate conditions, the longitudinal spectral function displays three characteristic peaks, located roughly at the plasma frequencies $\omega_{0,\,i}$ of each particle species. The transverse spectral function is less sensitive to temperature variations.} \end{figure} \subsection{Induced interactions} \label{subsec:induced} \begin{figure}[t] \includegraphics[scale=0.55]{Induced.pdf} \caption{\label{fig:inducedEN}Electron-neutron scattering induced by the polarizability of strongly and electromagnetically charged protons. Squared vertices depict strong interaction potentials. These contributions are resummed to obtain the dressed polarization tensor $\tilde{\Pi}_p$, which in turn enters the photon propagator. To connect again with a photon propagator, another proton loop has to be attached on the right hand side. The leading contribution to electromagnetic interactions is thus of order $\alpha_f^2$\,$f_{pn}^2$. } \end{figure} \begin{figure} \includegraphics[scale=0.8]{ScreeningCompare.pdf}\hspace{0.8cm}\includegraphics[scale=0.6]{ScreeningZoom.pdf} \setlength{\belowcaptionskip}{-0.5cm} \caption{\label{fig:Screen} Ratio of the resummed screening mass Eq. \ref{eq:DebyePN} to the one-loop expression, $\bar{m}_{D,\,p}=\tilde{m}_{D,\,p}/m^\prime_{D,\,p}$ with $m^{\prime\,2}_{D,p}=\mu\,k_f/\pi^2$, using the Skyrme parameter sets recommended in Ref. \cite{Dutra:2012mb}. To better resolve the various models that constitute the gray band, the right hand side displays an enlargement of the left hand side. The screening mass diverges upon approaching the critical density of homogeneous nuclear matter from above. Around $n\sim(1.5-2)\,n_0$ the ratio assumes a minimum; at $n=n_0$ it is close to $1$.} \end{figure} \noindent In a final step neutrons are included in the RPA resummation. Neglecting their small magnetic moment, neutrons modify electromagnetic scattering via an interaction induced by the polarizability of strongly and electromagnetically charged protons.
The resulting channel for lepton-neutron scattering is depicted in Fig. \ref{fig:inducedEN}. By definition, induced interactions do not alter the appearance of the photon propagator Eq. \ref{eq:PropEMP}. Using the residual density-density ($f_{ab}$) and current-current ($\bar{f}_{ab}$) potentials from Eq. \ref{eq:derivatives}, polarization effects due to strong interactions are resummed to obtain the polarization function $\tilde{\Pi}_p$, \bea \label{eq:ReProtonFull} \tilde{\Pi}_{00,\,p} & = & e^2 \frac{\Pi_{00,\,p}^{\prime}\,(1+f_{nn}\,\Pi_{00,\,n}^{\prime})}{1+f_{nn}\,\Pi_{00,\,n}^{\prime}+f_{pp}\,\Pi_{00,\,p}^{\prime}+\Pi_{00,\,p}^{\prime}\,\,\Pi_{00,\,n}^{\prime}(f_{pp}f_{nn}-f_{np}^{2})}\,,\\[3ex] \tilde{\Pi}_{\perp,\,p} & = & e^2 \frac{\Pi_{\perp,\,p}^{\prime}\,(1+\bar{f}_{nn}\,\Pi_{\perp,\,n}^{\prime})}{1+\bar{f}_{nn}\,\Pi_{\perp,\,n}^{\prime}+\bar{f}_{pp}\,\Pi_{\perp,\,p}^{\prime}+\Pi_{\perp,\,p}^{\prime}\,\,\Pi_{\perp,\,n}^{\prime}(\bar{f}_{pp}\bar{f}_{nn}-\bar{f}_{np}^{2})}\,, \label{eq:ReProtonFullPerp} \eea which replaces $\Pi_p$ in the dressed photon propagator Eq. \ref{eq:PropEMP}. The resummed quantity is consequently of order $e^2$, while the polarization functions $\Pi^\prime$ are independent of $e$. In order to obtain Eqs. \ref{eq:ReProtonFull} and \ref{eq:ReProtonFullPerp} we have assumed that the Lorentz structure of the nuclear potentials $f$ and $\bar{f}$ is identical to that of the photon Eq. \ref{eq:PhotonGauge}, that is, we have projected nuclear interactions on the vector channel, see Ref. \cite{Stetina:2017ozh} for details. This renders the RPA resummation particularly simple: the calculation of axial and mixed correlation functions is not required. In analogy to a relativistic mean field model, one may think of the potentials $f$ and $\bar{f}$ as the static limit of interactions mediated by a massive vector meson, $g_V^2/m_{meson}^2$. The (resummed) Debye mass can be obtained from the static limit of Eq.
\ref{eq:ReProtonFull}, and reads \newline \begin{equation} \label{eq:DebyePN} \tilde{m}_{D,\,p}^{2} = -\tilde{\Pi}_{00,\,p}(q_0,\,\boldsymbol{q}\rightarrow0)=e^2 \frac{m_{D,p}^{\prime\,2}\left(1+m_{D,n}^{\prime 2}f_{nn}\right)}{1+m_{D,n}^{\prime 2}f_{nn}+m_{D,p}^{\prime 2}f_{pp}+m_{D,p}^{\prime 2}m_{D,n}^{\prime 2}(f_{pp}f_{nn}-f_{np}^{2})}\,, \end{equation} \newline with the relativistic definition of the (one-loop) Debye mass $m^{\prime 2}_{D}=\mu\,k_{f}/\pi^2$ (note the missing factor of $e^2$). This expression can alternatively be obtained from the thermodynamic relation $m_D^2= \partial \mu_p / \partial n_p$, assuming that the proton and neutron chemical potentials are related (in the present case by $\beta$ equilibrium), $\mu_p=\mu_p(\mu_n)$. The denominator of Eq. \ref{eq:DebyePN} is precisely the stability condition Eq. \ref{eq:Stability}, and the static Debye screening consequently diverges upon approaching the critical density $n_c$ from above. This behaviour is illustrated in Fig. \ref{fig:Screen} for several Skyrme parameter sets. The rapid increase of Debye screening due to protons at lower densities will prove to be of great importance for electromagnetic scattering.\newline Due to the large mass of protons, induced interactions predominantly modify the longitudinal spectrum, while changes to the transverse spectrum are negligible \cite{Stetina:2017ozh}. The temperature dependence of the longitudinal photon spectrum including induced interactions is depicted in Fig. \ref{fig:RhoLongT}. Since neutrons are incorporated into the polarization function of protons, no additional peak in the vicinity of $v_{f,\,n}\,|\boldsymbol{q}|$ appears. Their presence is manifest in the part of the photon spectrum that is predominantly shaped by protons, at very space-like energies $q_0\leq v_{f,\,p}\,|\boldsymbol{q}|$.
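The resummed screening mass Eq. \ref{eq:DebyePN} is a simple rational function of the one-loop Debye masses and the residual potentials, which makes it easy to cross-check numerically. The following sketch (hypothetical parameter values; $e^2$ kept explicit, as in the text) reproduces the one-loop limit for vanishing potentials and exposes the divergence at the stability boundary:

```python
import math

def one_loop_debye_sq(mu, kf):
    """One-loop Debye mass squared m'^2_D = mu*kf/pi^2 (e^2 factored out); MeV^2."""
    return mu * kf / math.pi**2

def resummed_proton_debye_sq(e2, mDp2, mDn2, f_pp, f_nn, f_np):
    """RPA-resummed proton Debye mass squared, Eq. (DebyePN).

    mDp2, mDn2: one-loop proton/neutron Debye masses squared (e^2 factored out);
    f_ab: residual density-density potentials.  The denominator coincides with
    the stability condition, so the result diverges as it approaches zero."""
    num = mDp2 * (1.0 + mDn2 * f_nn)
    den = 1.0 + mDn2 * f_nn + mDp2 * f_pp + mDp2 * mDn2 * (f_pp * f_nn - f_np**2)
    return e2 * num / den
```

Setting all potentials to zero recovers $e^2\,m^{\prime\,2}_{D,p}$, while tuning $f_{np}$ so that the denominator vanishes mimics the divergence at the critical density visible in Fig. \ref{fig:Screen}.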
At lower temperatures one additionally finds a significant reduction of the height of the proton peak, which is relevant for the spectrum of collective excitations \cite{Stetina:2017ozh}. \begin{figure} \includegraphics[scale=0.5]{RhoLT1.pdf}\includegraphics[scale=0.5]{RhoLT2.pdf}\includegraphics[scale=0.5]{RhoLT3.pdf} \caption{\label{fig:RhoLongT} Temperature dependence of the longitudinal photon spectrum including induced interactions (thick blue line) at $n=0.55\,n_0$, calculated using NRAPR Skyrme forces. The momentum is fixed at $|\boldsymbol{q}|=3$ MeV, and the spectra are plotted for space-like energies. The zero temperature case is most relevant for the outer core of neutron stars. The plasma is composed of electrons, protons and neutrons; muons are absent. Thin gray lines show the corresponding spectra ignoring induced interactions. Significant modifications occur at very low energies $q_0$ and, at low temperatures, in proximity to the proton peak. The former are particularly relevant for scattering; the latter have repercussions on the spectrum of collective excitations. } \end{figure} \section{Scattering in the fully degenerate plasma} \label{sec:FullDegen} \noindent With the preparations of Sec. \ref{sec:OT} in hand, the scattering rates can be computed. This section elaborates on the calculation of scattering at strictly zero temperature. Subsection \ref{subsec:approx} discusses approximations to the full one-loop result, subsection \ref{subsec:EMP} considers the multi-component plasma (EMP plasma), and subsection \ref{subsec:ResultsInduced} takes induced interactions into account (EMPN plasma). The generic expressions for the longitudinal and transverse rates Eqs. \ref{eq:GammaL} and \ref{eq:GammaP} can be simplified in fully degenerate matter: The angular dependence of the thermal distribution functions can be eliminated by a shift of the integration variable $\boldsymbol{q}\rightarrow\boldsymbol{p}-\boldsymbol{q}$.
Trading momentum and azimuthal angle integration for integrations over the energies $\epsilon_{\boldsymbol{q}}=\sqrt{\boldsymbol{q}^2+m^2}$ and $\epsilon_{\boldsymbol{p}^\prime}$, and taking the degenerate limit of the distribution functions $1+n_b(x)\rightarrow\theta(x)$, $n_f(x)\rightarrow\theta(\mu-x)$, one arrives at \begin{eqnarray}\label{eq:GammaL0} \Gamma_{L}(\epsilon_{\boldsymbol{p}}) & = & \frac{e^{2}}{4\pi\left|\boldsymbol{p}\right|}\,\Theta\left[\pm(\epsilon_{\boldsymbol{p}}-\mu)\right]\int_{\mu}^{\epsilon_{\boldsymbol{p}}}\,d\epsilon_{\boldsymbol{q}}\,\epsilon_{\boldsymbol{q}}\,\int_{\epsilon^{-}}^{\epsilon^{+}}d\epsilon_{\boldsymbol{p}^{\prime}}\,\epsilon_{\boldsymbol{p}^{\prime}}\,\rho_{L}(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{q}},\,\sqrt{\epsilon_{\boldsymbol{p}^{\prime}}^{2}-m^{2}})\,K_{L}\,,\\[3ex] \Gamma_{\perp}(\epsilon_{\boldsymbol{p}}) & = & \frac{e^{2}}{2\pi\left|\boldsymbol{p}\right|}\,\Theta\left[\pm(\epsilon_{\boldsymbol{p}}-\mu)\right]\int_{\mu}^{\epsilon_{\boldsymbol{p}}}\,d\epsilon_{\boldsymbol{q}}\,\epsilon_{\boldsymbol{q}}\,\int_{\epsilon^{-}}^{\epsilon^{+}}d\epsilon_{\boldsymbol{p}^{\prime}}\,\epsilon_{\boldsymbol{p}^{\prime}}\,\rho_{\perp}(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{q}},\,\sqrt{\epsilon_{\boldsymbol{p}^{\prime}}^{2}-m^{2}}\,)\,K_{\perp}\,,\label{eq:GammaP0} \end{eqnarray} \newline where the positive and negative signs correspond to particles and holes respectively. We shall ignore this prefactor for the remainder of this section, keeping in mind that the cases $\epsilon_{\boldsymbol{p}}>\mu$ and $\epsilon_{\boldsymbol{p}}<\mu$ always correspond to the scattering rates of particles and holes.
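Because all integration boundaries are finite, Eqs. \ref{eq:GammaL0} and \ref{eq:GammaP0} can be evaluated by straightforward nested quadrature. The following sketch (trapezoidal rule, particle case $\epsilon_{\boldsymbol{p}}>\mu$; the kernel $K_L$ and the boundaries $\epsilon^\pm$ are transcribed from the expressions given below, and the spectral function is a user-supplied toy input) illustrates only the structure of the longitudinal integral, not a production implementation:

```python
import math

def gamma_long(p, mu, m, rho_long, e2=4.0 * math.pi / 137.036, n=200):
    """Schematic trapezoidal evaluation of Eq. (GammaL0) for a particle
    (eps_p > mu); energies in MeV.  rho_long(q0, q) stands in for the
    longitudinal spectral function; the theta prefactor is dropped, as in
    the text."""
    eps_p = math.sqrt(p * p + m * m)

    def inner(eps_q):
        # boundaries eps^{pm}(eps_q), transcribed from the text
        root = math.sqrt(eps_q**2 + m**2)
        lo = math.sqrt((root - p)**2 + m**2)
        hi = math.sqrt((root + p)**2 + m**2)
        h = (hi - lo) / n
        acc = 0.0
        for i in range(n + 1):
            ep = lo + i * h
            w = 0.5 if i in (0, n) else 1.0
            # kernel K_L in the new energy variables
            k_long = 1.0 + (eps_q**2 + eps_p**2 - ep**2 + m**2) / (2.0 * eps_q * eps_p)
            q = math.sqrt(max(ep**2 - m**2, 0.0))
            acc += w * ep * rho_long(eps_p - eps_q, q) * k_long
        return h * acc

    h = (eps_p - mu) / n
    acc = 0.0
    for i in range(n + 1):
        eq = mu + i * h
        w = 0.5 if i in (0, n) else 1.0
        acc += w * eq * inner(eq)
    return e2 / (4.0 * math.pi * p) * h * acc
```

The transverse rate follows the same pattern with $K_\perp$ and $\rho_\perp$ substituted.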
In terms of the new variables, the coefficients in the integrands of the longitudinal and transverse rates read \begin{eqnarray} K_{L} & = & 1+\frac{1}{2\epsilon_{\boldsymbol{q}}\,\epsilon_{\boldsymbol{p}}}\left(\epsilon_{\boldsymbol{q}}^{2}+\epsilon_{\boldsymbol{p}}^{2}-\epsilon_{\boldsymbol{p}^{\prime}}^{2}+m^{2}\right)\,,\\[2ex] K_{\perp} & = & 1+\frac{1}{2\epsilon_{\boldsymbol{p}}\,\epsilon_{\boldsymbol{q}}}\left[\epsilon_{\boldsymbol{p}}^{2}+\epsilon_{\boldsymbol{p}^{\prime}}^{2}-\epsilon_{\boldsymbol{q}}^{2}-3m^{2}-\frac{1}{2}\frac{(\epsilon_{\boldsymbol{p}}^{2}+\epsilon_{\boldsymbol{p}^{\prime}}^{2}-\epsilon_{\boldsymbol{q}}^{2}-m^{2})^{2}}{\epsilon_{\boldsymbol{p}^{\prime}}^{2}-m^{2}}\right]\,, \end{eqnarray} and the new integration boundaries are \begin{equation} \epsilon^{\pm}(\epsilon_{\boldsymbol{q}})=\sqrt{(\sqrt{\epsilon_{\boldsymbol{q}}^{2}+m^{2}}\pm\left|\boldsymbol{p}\right|)^{2}+m^{2}}\,. \end{equation} Note that as a result of Pauli blocking all integration boundaries are finite, which greatly simplifies the numerical evaluation. To calculate the energy loss in a multi-component plasma and to take into account induced interactions, the spectral function in Eqs. \ref{eq:GammaL0} and \ref{eq:GammaP0} has to be adapted accordingly. \subsection{Approximations to the full one-loop result} \label{subsec:approx} \noindent Hard dense loop (HDL) and weak screening approximations are frequently used in the calculation of transport coefficients. In the following these approximations are derived and compared to the full one-loop results Eqs. \ref{eq:GammaL0} and \ref{eq:GammaP0}. \subsubsection{Hard Dense Loop approximation} \noindent The hard dense loop (HDL) approximation takes into account that fermions contributing to scattering in degenerate matter carry hard momenta, and that their interaction rates are dominated by the exchange of soft photons.
The definition of ``soft" depends on the Fermi momenta $k_{f,i}$, and therefore on the masses of the fermions thermalized in the plasma. Expanding Eqs. \ref{eq:GammaL} and \ref{eq:GammaP} accordingly, and using $\epsilon_{\boldsymbol{p}^{\prime}}\simeq\epsilon_{\boldsymbol{p}}-\boldsymbol{v}\cdot\boldsymbol{q}$, where $\boldsymbol{v}=\boldsymbol{p}/\epsilon_{p}$ is the velocity of the fermions, one finds to leading order \begin{equation}\label{eq:GammaHDL} \Gamma_{L}(\epsilon_{\boldsymbol{p}})+\Gamma_{\perp}(\epsilon_{\boldsymbol{p}})=e^{2}\int\frac{d^{3}\boldsymbol{q}}{(2\pi)^{2}}\,\left[1+n_{b}(\boldsymbol{v}\cdot\boldsymbol{q})-n_{f}^{-}(\epsilon_{\boldsymbol{p}}-\boldsymbol{v}\cdot\boldsymbol{q})\right]\,\bigg[\rho_{L}(\boldsymbol{v}\cdot\boldsymbol{q}\,,\,\left|\boldsymbol{q}\right|)+\boldsymbol{v}^{2}\left(1-\cos^{2}\theta\right)\,\rho_{\perp}(\boldsymbol{v}\cdot\boldsymbol{q}\,,\,\left|\boldsymbol{q}\right|)\,\bigg]\,. \end{equation} The assembly of the appropriate approximations of the spectral functions Eqs. \ref{eq:SpecL} and \ref{eq:SpecPerp} requires the HDL expressions of $\Pi_{00}$ and $\Pi_\perp$, see e.g. Appendix A1 of Ref. \cite{Stetina:2017ozh} for a derivation, \begin{equation} \label{eq:RePiHDL} \Pi_{\textrm{HDL}}^{00}(q)=-m_{D}^{2}\left[1-\frac{1}{2}\, x\,\text{log}\, f(x)\right]\,,\,\hspace{1em}\hspace{1em}\Pi_{\perp,\,\textrm{HDL}}(q)=\frac{1}{2}m_{D}^{2}v_{f}^{2}x\left[x+\frac{1}{2}\left(1-x^{2}\right)\text{log}\, f(x)\right]\,.
\end{equation} From the above expressions one obtains the imaginary parts \begin{equation} \label{eq:ImPiHDL} \text{Im}\,\Pi_{\text{HDL}}^{00}(q)=-\frac{\pi}{2}m_{D}^{2}\, x\,\Theta(1-x^{2})\,,\,\hspace{1em}\hspace{1em}\text{Im}\, \Pi_{\perp,\,\text{HDL}}(q)=-\frac{\pi}{4}m_{D}^{2}\, v_{f}^{2}\, x\left(1-x^{2}\right)\Theta(1-x^{2})\,, \end{equation} with the dimensionless variable $x=q_{0}/(v_{f}\left|\boldsymbol{q}\right|)$, the standard definition of the relativistic Debye mass $m_{D}^{2}=e^{2}\mu k_{f}/\pi^{2}$, the Fermi velocity $v_{f}=k_{f}/\mu$ and the function $f(x)=\left|(x+1)/(x-1)\right|$. The step function $\Theta(1-x^2)$ restricts the spectral function to the domain in which Landau damping operates, i.e. $q_0< v_f\,|\boldsymbol{q}|$ \footnote{\noindent This is the approximate boundary in the soft region. When the full one-loop polarization tensor is used, Landau damping resides in the region $0\leq q_0\leq -\mu+\sqrt{\mu^2+|\boldsymbol{q}|^2+2k_f\mu}$ for $|\boldsymbol{q}|<2 k_f$.}. Using $u=\boldsymbol{v}\cdot\boldsymbol{q}$ as integration variable, Eq. \ref{eq:GammaHDL} simplifies to \begin{equation} \label{eq:GammaHDL2} \Gamma_{L}(\epsilon_{\boldsymbol{p}})+\Gamma_{\perp}(\epsilon_{\boldsymbol{p}})=\frac{e^{2}}{(2\pi)}\frac{1}{\left|\boldsymbol{v}\right|}\int_{0}^{B}du\,\int_{0}^{\infty}\left|\boldsymbol{q}\right|d\left|\boldsymbol{q}\right|\left[\rho_{L}(u\,,\,\left|\boldsymbol{q}\right|)+\left(\boldsymbol{v}^{2}-\frac{u^2}{\boldsymbol{q}^2}\right)\,\rho_{\perp}(u\,,\,\left|\boldsymbol{q}\right|)\right]\,, \end{equation} where the upper boundary is either determined by kinematics or Pauli blocking, $B=\textrm{min}\left(\left|\boldsymbol{v}\right|\left|\boldsymbol{q}\right|,\left|\mu-\epsilon_{\boldsymbol{p}}\right|\right)$. The absolute value used in the integration boundary takes care of particles and holes.
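The closed HDL forms Eqs. \ref{eq:RePiHDL} and \ref{eq:ImPiHDL} transcribe directly into code. The sketch below (MeV units; the light-cone points $x=\pm1$, where the logarithm is singular, must be avoided by the caller) can serve as building blocks for the approximate spectral functions:

```python
import math

def hdl_re_pi_00(q0, q, mD2, vf):
    """Re Pi_00 in HDL: -mD^2 [1 - (x/2) log f(x)], with x = q0/(vf |q|)."""
    x = q0 / (vf * q)
    return -mD2 * (1.0 - 0.5 * x * math.log(abs((x + 1.0) / (x - 1.0))))

def hdl_re_pi_perp(q0, q, mD2, vf):
    """Re Pi_perp in HDL: (mD^2 vf^2 / 2) x [x + (1 - x^2)/2 log f(x)]."""
    x = q0 / (vf * q)
    return 0.5 * mD2 * vf**2 * x * (
        x + 0.5 * (1.0 - x**2) * math.log(abs((x + 1.0) / (x - 1.0))))

def hdl_im_pi_00(q0, q, mD2, vf):
    """Im Pi_00: -(pi/2) mD^2 x inside the Landau-damping window |x| < 1."""
    x = q0 / (vf * q)
    return -0.5 * math.pi * mD2 * x if x**2 < 1.0 else 0.0

def hdl_im_pi_perp(q0, q, mD2, vf):
    """Im Pi_perp: -(pi/4) mD^2 vf^2 x (1 - x^2) for |x| < 1."""
    x = q0 / (vf * q)
    return -0.25 * math.pi * mD2 * vf**2 * x * (1.0 - x**2) if x**2 < 1.0 else 0.0
```

In the static limit $x\to0$ the longitudinal part reduces to $-m_D^2$ (Debye screening), while the transverse real part vanishes, reflecting the absence of static magnetic screening.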
Examples for calculations based on the HDL approximation of the scattering rate include degenerate quark plasmas \cite{Heiselberg:1993cr,Baym:1990uj,Baym:1991qf,Heiselberg:1992ha}, degenerate electron systems \cite{Shternin:2006uq}, and warm neutron star crusts \cite{Harutyunyan:2016rxm}. \subsubsection{Weak screening approximation} \noindent The HDL result Eq. \ref{eq:GammaHDL2} still requires a numerical evaluation. Following Refs. \cite{LeBellac:1996kr,Vanderheyden:1996bw,Manuel:2000mk}, analytic insights can be obtained by evaluating Eq. \ref{eq:GammaHDL2} in close proximity to the Fermi surface. To do so, one may approximate $\left|\boldsymbol{v}\right|$ by $v_{f}$, and expand the polarization functions for $x\ll1$. The fundamental difference between longitudinal and transverse scattering now becomes evident: plasmon exchange is screened by the Debye mass, and the longitudinal rate can be evaluated in the \textit{static} limit without subtleties. The leading contribution reads \begin{equation}\label{eq:GLongA} \Gamma_{L}\simeq\frac{e^{2}}{(4\pi)}\frac{m_{D}^{2}}{v_f^2}\int_{0}^{\left|\mu-\epsilon_{\boldsymbol{p}}\right|}u\, du\,\int_{0}^{\infty}d\left|\boldsymbol{q}\right|\,\frac{1}{(m_{D}^{2}+\boldsymbol{q}^{2})^{2}}=\frac{e^2}{32}\frac{1}{m_{D}v_{f}^{2}}\left(\mu-\epsilon_{\boldsymbol{p}}\right)^{2}\,. \end{equation} The above result corresponds essentially to an interaction rate calculated using a Thomas-Fermi screened interaction. Transverse photons on the other hand exhibit no static screening, $\Pi_{\perp}(q_{0}\rightarrow0)=0$. The leading contribution to $\Pi_\perp$ is linear in $q_{0}$ and imaginary, see Eq. \ref{eq:ImPiHDL}. As it turns out, implementing this simple form of \textit{dynamical} screening into the transverse spectral function is sufficient to obtain a finite result for the scattering rate at order $u^2$, as long as the temperature is strictly zero.
It reads \begin{equation}\label{eq:GPerpA} \Gamma_{\perp}\simeq\frac{e^{2}}{(2\pi)}m_{D}^{2}\,\int_{0}^{\left|\mu-\epsilon_{\boldsymbol{p}}\right|}du\,\int_{0}^{\infty}d\left|\boldsymbol{q}\right|\frac{4\,v_f^2\, u\,}{16\boldsymbol{q}^{4}+\pi^{2}\,m_{D}^{4}\,v_f^2\,u^{2}/\boldsymbol{q}^{2}}=\frac{e^2}{12\pi}v_{f}\left|\epsilon_{\boldsymbol{p}}-\mu\right|\,. \end{equation} \begin{figure}[t] \begin{framed} \includegraphics[scale=0.5]{ElLongN0.pdf}\hspace{1.4cm}\includegraphics[scale=0.5]{ElPerpn0.pdf}\\[2ex] \includegraphics[scale=0.5]{MuLongN0.pdf}\hspace{1.4cm}\includegraphics[scale=0.5]{MuPerpN0.pdf}\\[2ex] \hrule\vspace{0.2cm} \includegraphics[scale=0.34]{SmallELong.pdf}\,\,\,\includegraphics[scale=0.34]{SmallEPerp.pdf}\,\,\,\includegraphics[scale=0.335]{SmallMuLong.pdf}\,\,\,\includegraphics[scale=0.335]{SmallMuPerp.pdf} \end{framed} \setlength{\belowcaptionskip}{-8pt} \caption{\label{fig:ScreenComp} Longitudinal (blue) and transverse (red) scattering rates of electrons and muons at saturation density and zero temperature, calculated using NRAPR Skyrme forces, see Tab. \ref{tab:NRAPR} for a list of parameters. Dashed lines display the HDL approximations, dot-dashed lines display the weak screening approximation. $\Gamma$ is divided by the chemical potential to obtain dimensionless results, and multiplied by a factor of $10^4$ ($\mu_e/10^4\sim 0.012$). In fully degenerate plasmas $\Gamma_{L}$ and $\Gamma_{\perp}$ are exactly zero at the Fermi surface; for $|\boldsymbol{p}|< k_f$ the rates describe the damping of holes, for $|\boldsymbol{p}|> k_f$ they describe the damping of particles. The weak screening results Eqs. \ref{eq:GLongA} and \ref{eq:GPerpA} capture the momentum dependence in close proximity to the Fermi surface (see the bottom four figures), although the transverse rate of muon scattering exhibits sizeable deviations from the full result; the HDL results represent an excellent approximation in all cases. } \end{figure} \noindent Both results, first obtained in Ref.
\cite{Manuel:2000mk}\footnote{Mind the difference of a global factor of 2, stemming from the definition of $\Gamma$, Eq. \ref{eq:OTheorem} here and Eq. 19 in Ref. \cite{Manuel:2000mk}.}, highlight the distinct importance of electric and magnetic interactions for heavy and light particles in close proximity to the Fermi surface. In non-relativistic systems $v_f\ll1$, and longitudinal scattering becomes increasingly important. In the ultra-relativistic limit transverse scattering dominates. The weak screening approximation has been applied in the calculation of transport phenomena in white dwarfs and neutron star cores, see e.g. \cite{Shternin:2008es,Shternin:2018jop,Potekhin:1999yv}. \newline A comparison of weak-screening approximation, hard dense loop approximation, and full one-loop results is shown in Fig. \ref{fig:ScreenComp} for electrons and muons, which resemble ultra-relativistic and mildly-relativistic particles respectively. In each case the scattering rates are plotted in a range of $\pm$ 20 MeV around $|\boldsymbol{p}|=k_f$, where the quasiparticles are stable. It should be noted that the energies of particles and holes available for scattering are typically of the order of the temperature. The fully degenerate case is supposed to closely resemble conditions encountered in the core of neutron stars, where temperatures are well below $1$ MeV, while lepton chemical potentials are of the order of $100$ MeV. The calculation of scattering rates with $|\epsilon_{\boldsymbol{p}}-\mu|\gg1$ MeV is therefore a somewhat academic exercise. Equations \ref{eq:GammaL0} and \ref{eq:GammaP0} determine the rate at which a particle or hole of a given momentum $\boldsymbol{p}$, added to the system ``by hand", scatters with particles of the Fermi sea. The information on whether such particles are actually available in the system is not inherent to these expressions. To see this, take a look at Eqs.
\ref{eq:FermiRateP} and \ref{eq:FermiRateH}, where the thermal distribution functions have been reorganized to reproduce the results of Fermi's golden rule: no Fermi distributions for the initial particles with energies $\epsilon_{\boldsymbol{p}}$ remain. It is nevertheless interesting to study the impact of dynamical screening with increasing distance from the Fermi surface, in particular in light of the subsequent discussion of partially degenerate plasmas, where the momentum exchange can be significantly higher. \newline Each rate displayed in Fig. \ref{fig:ScreenComp} is essentially determined by two ingredients: the parameters of the scattering particle, i.e., its mass, momentum, and chemical potential, and the spectral function of the photon. The former determines the phase-space available for scattering (encoded in the integration boundaries of Eqs. \ref{eq:GammaL0} and \ref{eq:GammaP0}), and specific features of the integrand, in particular the $\boldsymbol{v}^2$ dependence of the transverse rate (see Eq. \ref{eq:GammaHDL}). The latter determines the screening in each channel. For ultra-relativistic particles (in the present case electrons) transverse scattering is indeed found to dominate in close proximity to the Fermi surface, as predicted by Heiselberg and Pethick \cite{Heiselberg:1993cr}. With increasing $|\epsilon_{\boldsymbol{p}}-\mu|$, however, longitudinal scattering catches up quickly and both rates become equally important. Muons interpolate between the two extremes of ultra-relativistic and non-relativistic particles: very close to the Fermi surface ($|\epsilon_{\boldsymbol{p}}-\mu|<1$ MeV) their transverse rates are larger, at about $1$ MeV distance both rates are essentially equal in magnitude, and even further away from the Fermi surface the longitudinal channel becomes dominant. Naturally, the characteristics of the rates are density dependent, and muon scattering becomes similar to electron scattering with increasing density.
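This crossover can be read off directly from the weak screening expressions Eqs. \ref{eq:GLongA} and \ref{eq:GPerpA}: equating the two rates gives $|\epsilon_{\boldsymbol{p}}-\mu|=(8/3\pi)\,v_f^3\,m_D$ as the distance from the Fermi surface at which the longitudinal channel catches up. A minimal sketch (MeV units, $e^2=4\pi\alpha$ in Heaviside-Lorentz convention; the numbers in the usage note are illustrative):

```python
import math

E2 = 4.0 * math.pi / 137.036   # e^2 = 4 pi alpha, Heaviside-Lorentz units

def gamma_long_ws(eps, mu, mD, vf):
    """Weak-screening longitudinal rate, Eq. (GLongA); energies in MeV."""
    return E2 / 32.0 * (mu - eps)**2 / (mD * vf**2)

def gamma_perp_ws(eps, mu, vf):
    """Weak-screening transverse rate, Eq. (GPerpA)."""
    return E2 / (12.0 * math.pi) * vf * abs(eps - mu)

def crossover(mD, vf):
    """Distance |eps - mu| at which both rates coincide: (8/3pi) vf^3 mD."""
    return 8.0 * vf**3 * mD / (3.0 * math.pi)
```

For $v_f\to1$ the crossover is of the order of $m_D$ itself, while the $v_f^3$ suppression moves it much closer to the Fermi surface for slower particles, consistent with the muon behaviour seen in Fig. \ref{fig:ScreenComp}.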
The HDL approximation works remarkably well in all four cases, up to about $10$ MeV distance from the Fermi surface. The weak-screening approximation can be applied in a range of $|\boldsymbol{p}|=(k_f\pm 1)$ MeV, though sizeable deviations occur for muons in the transverse channel. \subsection{Scattering in the EMP plasma } \label{subsec:EMP} \begin{figure} \includegraphics[scale=0.475]{wsn055L.pdf}\,\,\,\includegraphics[scale=0.45]{wsn0L.pdf}\,\,\,\includegraphics[scale=0.45]{ws2n0L.pdf}\\[2ex] \includegraphics[scale=0.45]{wsn055P.pdf}\,\,\,\includegraphics[scale=0.45]{wsn0P.pdf}\,\,\,\,\includegraphics[scale=0.45]{ws2n0P.pdf} \caption{\label{fig:WeakFull} Full one-loop results (solid) and weak-screening results (dot-dashed) in a range of ($k_f\pm1$) MeV, for three different densities. The relevant parameters are listed in Tab. \ref{tab:NRAPR}. The lowest density corresponds to a region close to the crust-core boundary of neutron stars, where muons are absent. At lower densities dynamical screening becomes important, even for particles with momenta $|\boldsymbol{p}|\sim k_f$, see also Fig. \ref{fig:RhoLongStat}. The longitudinal rates increase by more than two orders of magnitude upon approaching the crust-core boundary region from deeper inside the core. The transverse rates decrease, although by a much smaller amount.} \end{figure} \begin{figure} \includegraphics[scale=0.55]{ScreenLong.pdf}\hspace{1.5cm}\includegraphics[scale=0.55]{ScreenLong2.pdf} \setlength{\belowcaptionskip}{-8pt} \caption{\label{fig:RhoLongStat} Density dependence of the photon spectrum, illustrated by the example of $\rho_L$. Full one-loop results (solid) and weak-screening approximation (dot-dashed) of $\rho_L$ in the multi-component plasma (black) and a pure electron plasma (gray) are shown. In each case the photon momentum is fixed at $|\boldsymbol{q}|=1$ MeV. 
At very low energies $q_0<v_{f,\,p}|\boldsymbol{q}|$ ($|\boldsymbol{q}|\ll k_f$) the spectrum can properly be described by $\rho_L$ as given by Eq. \ref{eq:RhoWeak}. Collisions with higher energy transfer $q_0$ probe the spectral function in a region where it becomes highly dynamical, and where the weak screening approximation is practically useless. As the Fermi velocity of the protons increases with density, the accuracy of the weak screening results improves. } \end{figure} \begin{figure} \begin{framed} \,\,\includegraphics[scale=0.5]{EMPCOmpElong.pdf}\hspace{1.2cm}\includegraphics[scale=0.51]{EMPCOmpEperp.pdf}\\[2ex]\includegraphics[scale=0.5]{EMPCOmpMulong.pdf}\hspace{1.2cm}\includegraphics[scale=0.51]{EMPCOmpMuperp.pdf}\\[2ex] \hrule\vspace{0.2cm} \includegraphics[scale=0.33]{empELong.pdf}\,\,\,\includegraphics[scale=0.33]{empEPerp.pdf}\,\,\,\includegraphics[scale=0.33]{empMLong.pdf}\,\,\,\includegraphics[scale=0.33]{empMPerp.pdf} \end{framed} \setlength{\belowcaptionskip}{-10pt} \caption{\label{fig:EMPComp} Comparison of longitudinal and transverse scattering rates in an EMP plasma (thick) and a single-component plasma (thin) at saturation density and zero temperature. Relevant parameters can be found in Tab. \ref{tab:NRAPR}. Dashed lines indicate the corresponding HDL approximations. Further away from the Fermi surface scattering rates increase significantly in the EMP plasma, with the exception of transverse electron scattering, which decreases due to dynamical screening effects, see also Fig. \ref{fig:EEScat}. Though less pronounced, the impact of additional plasma constituents in close proximity to the Fermi surface is still sizeable (lower four figures). Electrons and muons, however, react in opposite ways: longitudinal (transverse) rates are increased (decreased) for electrons, while they are decreased (increased) for muons.
} \end{figure} \begin{figure} \begin{centering} \includegraphics[scale=0.5]{EeLong.pdf}\,\,\includegraphics[scale=0.49]{EmLong.pdf}\,\,\includegraphics[scale=0.49]{EpLong.pdf}\\[2ex]\includegraphics[scale=0.5]{EePerp.pdf}\,\includegraphics[scale=0.5]{EmPerp.pdf}\,\includegraphics[scale=0.5]{EpPerp.pdf}\\[2ex] \includegraphics[scale=0.515]{MeLong.pdf}\,\,\includegraphics[scale=0.47]{MmLong.pdf}\,\,\includegraphics[scale=0.47]{MpLong.pdf}\\[2ex] \includegraphics[scale=0.515]{MePerp.pdf}\,\includegraphics[scale=0.5]{MmPerp.pdf}\,\includegraphics[scale=0.5]{MpPerp.pdf} \end{centering} \caption{\label{fig:EEScat} Partial rates of longitudinal and transverse scattering of electrons and muons with other electrons, muons, and protons in a fully degenerate plasma at saturation density and zero temperature. Parameters according to Tab. \ref{tab:NRAPR}. The calculation of each contribution takes into account that interactions are screened by \textit{all} constituents of the plasma. Thin lines display the scattering rates in the fictitious single component plasmas, see also Fig. \ref{fig:EMPComp}. The total interaction rate of electrons is clearly dominated by electron-electron scattering, although electron-muon and electron-proton scattering yield sizeable contributions in the longitudinal channel. Screening effects of muons and protons \textit{increase} the rate of electron-electron scattering in the \textit{longitudinal} channel, while they \textit{decrease} it in the \textit{transverse} channel. In the latter case the reduction outweighs the small gains from transverse collisions with muons and protons, such that the total transverse rate of electrons is effectively reduced in the EMP plasma.\newline The energy loss of muons results predominantly from longitudinal scattering, except for a small region around $k_f$, where the curvature of the longitudinal rate remains flat, and the linear increase of $\Gamma_\perp$ becomes the dominant feature.
With increasing energy transfer, the longitudinal rates of muons become comparable to those of electrons. Compared to scattering in a ``muon plasma", screening reduces the rates in both channels. In the transverse channel, however, muon-electron scattering dominates such that the total transverse rate is in fact larger in the EMP plasma, see Fig. \ref{fig:EMPComp}.} \end{figure} \noindent To calculate the scattering rate of electrons and muons in a degenerate EMP plasma, the photon spectrum derived in subsection \ref{subsec:multi} is employed. Correlations with nuclear interactions are ignored for the moment. The main purpose of this subsection is the study of the relative importance of new scattering channels arising from collisions with the other plasma constituents. It is instructive to revisit the weak screening approximation, which, as before, can be obtained from the HDL result Eq. \ref{eq:GammaHDL2} upon replacing the longitudinal and transverse photon spectra by \begin{equation} \rho_L\rightarrow\frac{1}{2}\sum_i\frac{\,x_i\,m_{D,\,i}^2\,\Theta_i}{\left(\boldsymbol{q}^2+\sum_i\,m_{D,\,i}^2\right)^2}\,,\hspace{1cm} \rho_\perp\rightarrow\frac{3}{4}\sum_i\frac{x_i\,\omega_{0,\,i}^2\,\Theta_i}{\boldsymbol{q}^4+(3\,\pi / 4)^2\,\sum_i \left(x_i\,\omega_{0,\,i}^2\,\Theta_i\right)^2}\,, \label{eq:RhoWeak} \end{equation} with the plasma frequencies $\omega_{0,\,i}^2=e^2\,k_{f,\,i}^3/(3\pi^2\mu_{i})$, the parameter $x_i=q_0/ (v_{f,\,i}\,|\boldsymbol{q}|)$, and the kinematic restriction for Landau damping $\Theta_i=\Theta(1-x_i)$. The damping rate of quasi-particles with $|\boldsymbol{p}|\sim k_f$ is calculated analogously to Eqs. \ref{eq:GLongA} and \ref{eq:GPerpA}.
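The weak screening spectra of Eq. \ref{eq:RhoWeak} are likewise straightforward to transcribe. The sketch below (a direct transcription for $q_0\geq0$; each species is specified by hypothetical values of $m_{D,\,i}^2$, $\omega_{0,\,i}^2$, and $v_{f,\,i}$) sums the Landau-damping contributions of all constituents:

```python
import math

def plasma_freq_sq(e2, kf, mu):
    """Plasma frequency squared omega_0^2 = e^2 kf^3 / (3 pi^2 mu); MeV^2."""
    return e2 * kf**3 / (3.0 * math.pi**2 * mu)

def rho_long_ws(q0, q, species):
    """Longitudinal part of Eq. (RhoWeak); species carry 'mD2' and 'vf'.
    The window x_i < 1 implements the restriction Theta(1 - x_i)."""
    mD2_tot = sum(s["mD2"] for s in species)
    num = sum(q0 / (s["vf"] * q) * s["mD2"]
              for s in species if q0 / (s["vf"] * q) < 1.0)
    return 0.5 * num / (q**2 + mD2_tot)**2

def rho_perp_ws(q0, q, species):
    """Transverse part of Eq. (RhoWeak); species additionally carry 'w02'."""
    terms = [q0 / (s["vf"] * q) * s["w02"]
             for s in species if q0 / (s["vf"] * q) < 1.0]
    return 0.75 * sum(terms) / (
        q**4 + (0.75 * math.pi)**2 * sum(t * t for t in terms))
```

Dropping the heavy species from the input lists reproduces single-component spectra analogous to the thin curves in Fig. \ref{fig:RhoLongStat}.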
For a given lepton species $l=\{e,\,\mu\}$ scattering with all other constituents of the plasma (including themselves) one finds \begin{equation}\label{eq:LongEMPAn} \Gamma_{L,\,l}\sim\frac{e^2}{32}\,\frac{1}{v_{f,\,l}}\,\left(\epsilon_{\boldsymbol{p}}-\mu_l\right)^2\,\frac{1}{M^3}\,\left(\sum_i \frac{m_{D,\,i}^2}{v_{f,\,i}}\right)\,,\hspace{1cm}\epsilon_{\boldsymbol{p}}=\sqrt{\boldsymbol{p}^2+m_l^2}\,,\hspace{1cm}M=\sqrt{\sum_i m_{D,\,i}^2}\,, \end{equation} where, depending on the density, $i=\{e,\,p\}$ or $i=\{e,\,\mu,\,p\}$. To leading order, the \textit{transverse} rate retains its form Eq. \ref{eq:GPerpA} in the multi-component plasma, i.e. remains ignorant of other plasma constituents, see e.g. Fig. \ref{fig:EMPComp}. A comparison with the full one-loop results for $\Gamma(\boldsymbol{p})$ in a range $|\boldsymbol{p}|=(k_f\pm 1)$ MeV is shown in Fig. \ref{fig:WeakFull}. The accuracy of the weak-screening approximation increases with the density. At lower densities weak-screening results tend to underestimate longitudinal rates and overestimate transverse rates. The transverse rates are particularly sensitive to higher order screening effects. The discrepancy in the longitudinal channel can be understood by taking a closer look at the photon spectrum, see Fig. \ref{fig:RhoLongStat}. At very low energies $q_0\ll v_{f,\,p}|\boldsymbol{q}|$ the longitudinal spectrum is predominantly shaped by Landau damping of protons, which is captured correctly by the approximate expression Eq. \ref{eq:RhoWeak}. As the Fermi velocity $v_{f,\,p}$ decreases with decreasing density, the domain in which Eq. \ref{eq:RhoWeak} describes the photon spectrum correctly becomes smaller, and the range of $|\epsilon_{\boldsymbol{p}}-\mu|$ in which reliable results for the rates can be obtained decreases accordingly.\newline Full one-loop and HDL results for electrons and muons with momenta $(k_f\pm 30)$ MeV are displayed in Fig.
\ref{fig:EMPComp}, and compared to the corresponding rates in a \textit{single component} plasma. The HDL results remain an excellent approximation for quasiparticles with momenta in a range of roughly $|\boldsymbol{p}|\sim(k_f\pm 10)$ MeV; in the range of $|\boldsymbol{p}|\sim(k_f\pm 1)$ MeV they are virtually indistinguishable from full one-loop results. In addition, Fig. \ref{fig:EEScat} shows a decomposition of $\Gamma_{L,\,\perp}$ into the partial energy losses inflicted by collisions with electrons, muons, and protons. The \textit{longitudinal} rates of electrons increase strongly in the multi-component plasma, both far away from and in close proximity to the Fermi surface. This happens to a small extent because of screening effects, see Fig. \ref{fig:EEScat} (a), and predominantly because of contributions from electron-muon (b) and electron-proton (c) scattering. The rates of electrons in the \textit{transverse} channel, in contrast, are clearly dominated by electron-electron scattering (d); contributions from electron-muon (e) and electron-proton (f) scattering are small. Screening effects of heavy fermions \textit{reduce} the rate of transverse electron-electron scattering, and the small gains from collisions with muons and protons cannot compensate this reduction. The net result is that the transverse damping rate of electrons is reduced in the EMP plasma for any given momentum $|\boldsymbol{p}|$. In total, electron scattering remains dominated by the exchange of transverse photons close to the Fermi surface, but becomes dominated by the exchange of longitudinal plasmons further away, see Fig. \ref{fig:EMPComp}. Muons close to the Fermi surface exhibit characteristics opposite to those of electrons: screening in the EMP plasma reduces (increases) the magnitude of longitudinal (transverse) rates, and transverse rates remain the dominant contribution. Further away, the rates in both channels increase strongly.
Particularly important for this increase are muon-proton scattering (i) for the longitudinal channel, and muon-electron scattering (j) for the transverse channel. All things considered, scattering rates of fermions in close proximity to their respective Fermi surfaces in an EMP plasma are clearly dominated by transverse electron-electron scattering. \subsection{Scattering in the EMPN plasma} \label{subsec:ResultsInduced} \begin{figure}[t] \hspace{0.6cm}\includegraphics[scale=0.5]{ComparisonLongN0.pdf}\hspace{1cm}\includegraphics[scale=0.5]{ComparisonLongInducedN0.pdf}\\[2ex] \hspace{0.6cm}\includegraphics[scale=0.5]{ComparisonMuLongN0.pdf}\hspace{1cm}\includegraphics[scale=0.5]{ComparisonMuLongInducedN0.pdf} \caption{\label{fig:ComparisonN0} Comparison of longitudinal interaction rates of electrons and muons, without (left) and with (right) induced interactions. The same Skyrme models as in Fig. \ref{fig:Screen} are used; parameters can be found in Tab. \ref{tab:NRAPR}. At fixed density, the Fermi momenta of electrons come out slightly different in each model, and the rates are plotted in a range of $\pm2$ MeV around these values. To facilitate a comparison, the results are not normalized over $\mu_e$ as in the previous figures.
At saturation density, the bands on the left- and right-hand side overlap almost perfectly, indicating that the impact of induced interactions is minimal.} \end{figure} \begin{figure} \begin{centering} \,\,\includegraphics[scale=0.56]{CompareBandN61.pdf}\hspace{0.85cm}\includegraphics[scale=0.56]{ComparisonLongInducedN61.pdf}\\[2ex]\includegraphics[scale=0.56]{CompareBand2N0.pdf}\hspace{0.75cm}\includegraphics[scale=0.56]{ComparisonLongInduced2N0.pdf}\\[2ex] \includegraphics[scale=0.56]{CompareBandMuon2N0.pdf}\hspace{0.85cm}\includegraphics[scale=0.56]{ComparisonMuLongInduced2N0.pdf} \end{centering} \setlength{\belowcaptionskip}{-8pt} \caption{\label{fig:InducedComp} Scattering rates of electrons close to the crust-core boundary, and of electrons and muons deeper inside the core. Gray and blue bands represent the results for $\Gamma_L$ with and without induced interactions, respectively. The panels on the right-hand side resolve the various Skyrme models that constitute the gray bands. It seems counter-intuitive at first that the width of the gray band broadens at lower densities, where Skyrme models are supposed to be well constrained. This happens mainly because the results are sensitive to the relative distance of $n$ to the critical density $n_c$, which is slightly different in each model. As expected, the impact of induced interactions at $n=0.61\,n_0$ is substantial. Even when induced interactions are ignored, the scattering rate of electrons increases by more than an order of magnitude going from $n=2\,n_0$ to $n=0.61\,n_0$. The muon rates increase by roughly a factor of $2$ going from $n=2\,n_0$ to $n=n_0$, see Fig. \ref{fig:inducedEN}. Deeper inside the core the scattering rates of muons become as large as those of electrons.
} \end{figure} \begin{figure} \begin{centering} \includegraphics[scale=0.6]{PerpCompElectron.pdf}\,\hspace{0.5cm}\includegraphics[scale=0.62]{PerpCompMuon.pdf}\\[2ex] \end{centering} \caption{\label{fig:PerpCompare} Transverse rates of electrons (left) and muons (right) at $n=0.61\,n_0$ (where muons are absent), $n=n_0$, and $n=2\,n_0$, using the Skyrme parameters listed in Tab. \ref{tab:NRAPR}. The transverse rates increase with density, in particular for muons, which become increasingly relativistic in nature. Transverse scattering of electrons hardly exhibits any model dependence. The relative model dependence for muon rates is larger, in part due to variations in the critical densities for muon onset, see Tab. \ref{tab:CritDens}. } \end{figure} \begin{figure}[!h] \begin{centering} \includegraphics[scale=0.5]{Compare11.pdf}\,\includegraphics[scale=0.5]{Compare21.pdf}\includegraphics[scale=0.5]{Compare31.pdf}\\[2ex] \includegraphics[scale=0.5]{CompareP1.pdf}\,\includegraphics[scale=0.5]{CompareP21.pdf}\,\includegraphics[scale=0.5]{CompareP31.pdf} \end{centering} \setlength{\belowcaptionskip}{-8pt} \caption{\label{fig:LowDens} Electromagnetic scattering of electrons and protons close to the crust-core boundary, computed using NRAPR and SQMC700 Skyrme parameters, which exhibit similar critical densities, see Tab. \ref{tab:NRAPR}. All rates are normalized over the same chemical potential $\mu_e\sim87.2$ MeV (NRAPR at $n\sim n_c$). Solid and dashed blue lines depict the longitudinal rates with and without induced interactions. Dashed red lines depict the transverse rates for comparison, for which induced interactions can safely be ignored. The three densities cover the region in which muons are absent. The transverse rates of protons are tiny. The longitudinal rates are still small compared to those of electrons, but the impact of induced interactions is striking.
The inclusion of induced interactions limits the dominance of transverse rates for electrons to a tiny region around the Fermi surface, in particular at densities close to $n_c$. } \end{figure} \noindent In a final step, neutrons are included in the framework of electromagnetic scattering. As summarized in Sec. \ref{subsec:induced}, this is a two-step process: strong interactions of protons with other protons and neutrons are resummed to obtain the dressed proton polarization function $\tilde{\Pi}_p$, which then replaces the bare polarization function $\Pi_p$ in the dressed photon propagator Eq. \ref{eq:PropEMP}. Strong interactions thus appear nested within electromagnetic ones. The properties of nuclear matter exhibit significant model dependence at higher densities, see e.g. Fig. \ref{fig:Skyrmes}, and a comparison of different Skyrme forces becomes crucial. Since the transverse channel remains mostly unaffected by induced scattering with nucleons, this section focuses on the computation of longitudinal scattering rates. The strongest impact can be expected at densities well below nuclear saturation density $n_0$, where screening effects originating from strong interactions increase rapidly, see Fig. \ref{fig:Screen}. The EMPN plasma at lower densities resembles the environment encountered at the crust-core boundary of neutron stars. Note that Skyrme forces are well constrained at lower densities and allow for reliable predictions. Deeper inside the core, induced interactions lead to a decrease of electric screening, although by a much smaller amount. To set the stage, and to connect with the results shown in the previous subsections, it is instructive to take a look at the case $n=n_0$ first. Fig. \ref{fig:ComparisonN0} compares the longitudinal rates with and without induced interactions, employing all Skyrme models listed in subsection \ref{subsec:induced}.
At fixed density, each model predicts slightly different values for chemical potentials, effective masses, and residual quasiparticle interactions in $\beta$ equilibrium. The bands encompassing the rates with and without induced lepton-neutron scattering overlap almost perfectly, indicating that induced interactions are negligible at saturation density. \newline The impact of induced interactions at lower and higher densities is studied at $n=0.61\,n_0$ and $n=2\,n_0$. The former density is chosen such that all Skyrme models are very well within the stability region of homogeneous nuclear matter, see Tab. \ref{tab:CritDens}. The latter corresponds to a region deeper in the core, where a small muon population of about 2 \% is present, see Fig. \ref{fig:FracT}. The results are shown in Fig. \ref{fig:InducedComp}; blue and gray bands correspond to predictions for the rates without and with induced interactions, respectively. Even in the absence of induced interactions, the longitudinal rate of electrons increases strongly upon approaching the crust-core boundary, compared to $n=2\,n_0$ by roughly an order of magnitude (see also Fig. \ref{fig:WeakFull}). At $n\sim0.6\,n_0$, induced interactions lead to an additional increase of more than a factor of 2, depending on the momentum of the quasiparticles. The transverse rates of electrons \textit{decrease} upon approaching the crust-core boundary, albeit by a much smaller amount, see Fig. \ref{fig:PerpCompare}. The range of momenta for which transverse scattering dominates is thus drastically reduced at lower densities. As for muons, their longitudinal rates become comparable in magnitude to those of electrons around $n=2\,n_0$.
The decrease (increase) of longitudinal (transverse) rates between $n_0$ and $2\,n_0$ amounts in both cases to roughly a factor of $2$.\newline The fact that induced interactions primarily modify scattering in the longitudinal channel makes them particularly relevant for the damping rates of \textit{heavy} fermions. The absence of muons in the crust-core transition region leaves protons as the only massive particles that may engage directly in electromagnetic scattering. The proton fraction is small at lower densities, and their interaction rates are certainly dominated by collisions with other protons and neutrons mediated by strong interactions. It is nevertheless interesting to investigate electromagnetic scattering of heavy fermions at densities close to $n_c$, using protons as an example. To perform the calculation and to compare the results with corresponding rates of electrons, two Skyrme models, NRAPR and SQMC700, with almost identical critical densities are employed, see Tab. \ref{tab:CritDens}. The results are displayed in Fig. \ref{fig:LowDens}. The proton rates indeed benefit several times more from the impact of induced interactions than the electron rates, and reach up to $20$ \% of the latter. Given this drastic increase, it seems reasonable to expect a similar effect in scattering processes mediated by strong interactions; after all, the observed effect originates from a sharp increase of \textit{strong} Debye screening shown in Fig. \ref{fig:Screen}. This point will be discussed further in the outlook. \section{Finite Temperature}\label{sec:PartDegen} \noindent Having conducted a thorough study of scattering in (fully) degenerate nuclear matter, one may ask how the results evolve with increasing temperature. Without a sharp Fermi surface, quasiparticles exhibit a finite lifetime at any given momentum. Hole states with $\epsilon_{\boldsymbol{p}}>\mu$ as well as particle states with $\epsilon_{\boldsymbol{p}}<\mu$ become available.
A major obstacle is the infrared divergence of the transverse rate $\Gamma_\perp$ in the absence of strict Pauli blocking. To see this, consider the HDL expression Eq. \ref{eq:GammaHDL}, ignore the term proportional to $1-n_f$ (as it is infrared finite), and approximate the Bose distribution by $n_b\sim T/q_0$, where as usual $q_0=\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^\prime}\simeq \boldsymbol{v}_p\cdot\boldsymbol{q}$. The singular region can be isolated by considering collisions involving quasistatic ($q_0\rightarrow0$), ultra-soft ($\boldsymbol{q}\rightarrow0$) photons. The resulting integrand differs from that of Eq. \ref{eq:GPerpA} only by the additional factor $T/q_0$, \begin{equation}\label{eq:diverge} \Gamma_{\perp}\simeq\frac{e^{2}}{(2\pi)}m_{D}^{2}\,v_f^2\,T\,\int_{0}^{\infty}d\left|\boldsymbol{q}\right|\int_{0}^{v_f\,\left|\boldsymbol{q}\right|}du\,\frac{4}{16\,\boldsymbol{q}^{4}+\pi^{2}\,m_{D}^{4}\,v_f^{2}\,u^{2}/\boldsymbol{q}^{2}}=\frac{e^{2}}{2\pi^2}\,v_f\,T\int_0^{\infty}d|\boldsymbol{q}|\,\frac{1}{|\boldsymbol{q}|}\textrm{arctan}\left(\pi\frac{m_D^2\,v_f}{4\,\boldsymbol{q}^2}\right).\\[1ex] \end{equation} In the limit $\boldsymbol{q}\rightarrow0$ the arctangent approaches $\pi/2$ and the transverse rate is logarithmically divergent. The consistent calculation of the lifetime of fermions in gauge theory plasmas at finite temperature is a long-standing problem, see e.g. Refs. \cite{Blaizot:1996az}, \cite{Pisarski:1988vd,Lebedev:1990kt,Rebhan:1992ak,Pisarski:1993rf,Flechsig:1995sk,Takashiba:1995qa}. In QCD, where self-interactions of gluons lead to a magnetic screening mass, the introduction of an infrared cut-off seems natural. In QED the resolution of the infrared problem has to lie elsewhere, and techniques to resum the leading order divergences have been developed for hot matter at $\mu=0$, see Refs. \cite{Blaizot:1996az}, \cite{Takashiba:1995qa,Blaizot:1997kw}.
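The logarithmic nature of the divergence in Eq. \ref{eq:diverge} is also easy to verify numerically: cutting the $|\boldsymbol{q}|$ integral off at an infrared scale $\lambda$, each decade of reduction in $\lambda$ must add $(\pi/2)\ln 10$ to the (dimensionless) integral. A minimal Python check; the cutoffs, the grid, and the choice $m_D^2=v_f=1$ are illustrative assumptions:

```python
import math

def regulated_integral(lam, Lam=10.0, mD2=1.0, vf=1.0, n=4000):
    """Integral of arctan(pi*mD2*vf/(4 q^2))/q from q=lam to q=Lam,
    evaluated by the trapezoidal rule on a logarithmic grid
    (substituting u = ln q, so that dq/q = du)."""
    lo, hi = math.log(lam), math.log(Lam)
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        q = math.exp(lo + k * h)
        g = math.atan(math.pi * mD2 * vf / (4.0 * q * q))  # integrand times q
        total += g * (0.5 if k in (0, n) else 1.0)
    return total * h

# Lowering the infrared cutoff by a decade adds ~ (pi/2) ln(10) ~ 3.62:
step = regulated_integral(1e-7) - regulated_integral(1e-6)
```

The logarithmic grid handles the slowly varying arctangent accurately, and the computed decade step reproduces $(\pi/2)\ln 10$, i.e. exactly the behaviour expected from the limit $\textrm{arctan}\rightarrow\pi/2$ quoted above.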
An application of these methods in the present context is certainly beyond the scope of this article. Instead, I shall compute the energy loss per distance travelled, $-dE/dx$, which is easily derivable from $\Gamma$, and which is infrared finite. From a formal point of view, the quantity $-dE/dx$ is less fundamental, as it does not represent a direct link between the fermion self energy and scattering theory. In addition, the physical interpretation as the (inverse) quasiparticle lifetime is lost. From a practical point of view, it is worth studying $-dE/dx$, because it is the quantity which ultimately enters the transport integral. Following Braaten and Thoma \cite{Braaten:1991jj}, $-dE/dx$ is obtainable from $\Gamma_{L}$ and $\Gamma_{\perp}$ upon inserting $(\epsilon_{\boldsymbol{p}}-\epsilon_{\boldsymbol{p}^\prime})/v_{\boldsymbol{p}}$ into their integrands, which precisely compensates the divergence stemming from the Bose distribution in the transverse channel. Note that the logarithmic divergence in Eq. \ref{eq:diverge} is obtained despite the inclusion of dynamical screening. In the absence of Landau damping, Eq. \ref{eq:diverge} is quadratically divergent, while $-dE/dx$ is logarithmically divergent. \newline To connect with the zero temperature results it is worth taking a look at $-dE/dx$ in the (fully) degenerate limit. One may again ask for the behaviour in immediate proximity to the Fermi surface, which amounts to adding a factor of $u/v_f$ in the integrands of Eqs. \ref{eq:GLongA} and \ref{eq:GPerpA}. Using the same notation as in Eq.
\ref{eq:LongEMPAn} the results for a given lepton species $l=\{e,\,\mu\}$ read \begin{equation} \label{eq:dEdxT0} -\left.\frac{dE}{dx}\right|_{L,\,l}\sim\frac{e^2}{48}\,\frac{1}{v_{f,\,l}^2}\,\left|\epsilon_{\boldsymbol{p}}-\mu_l\right|^3\,\frac{1}{M^3}\,\left(\sum_i \frac{m_{D,\,i}^2}{v_{f,\,i}}\right)\,,\hspace{1cm}-\left.\frac{dE}{dx}\right|_{\perp,\,l}\sim\frac{e^2}{18\pi}\left|\epsilon_{\boldsymbol{p}}-\mu_l\right|^2\,.\\[1ex] \end{equation} The additional power of $|\epsilon_{\boldsymbol{p}}-\mu|$ in both channels results in a smaller curvature (i.e., a slower increase) of $-dE/dx$ around the Fermi surface. Note that transport coefficients are determined by the integral of $-dE/dx$ over the momenta $|\boldsymbol{p}|$ of the incoming fermions, and thus depend on the area under $-dE/dx$ rather than the energy loss itself. The relative increase of this area due to induced interactions is particularly large if the region of interest around $k_f$ is small, and the temperatures are very small compared to $\mu$, see Fig. \ref{fig:DeDxLowT} below. In the following, the scattering rates are investigated for a wide range of temperatures. \subsection{Degenerate matter: neutron stars} \begin{figure}[t] \begin{centering} \includegraphics[scale=0.55]{dEdxLongN055-Tlow.pdf}\hspace{1cm}\includegraphics[scale=0.55]{dEdxPerpN055-Tlow.pdf}\\[1ex] \includegraphics[scale=0.55]{dEdxLongN055-Tlow3.pdf}\hspace{1cm}\includegraphics[scale=0.55]{dEdxPerpN055-Tlow3.pdf}\\[1ex] \includegraphics[scale=0.55]{dEdxLongN055-Tlow2.pdf}\hspace{1cm}\includegraphics[scale=0.55]{dEdxPerpN055-Tlow2.pdf} \end{centering} \setlength{\belowcaptionskip}{-8pt} \caption{\label{fig:DeDxLowT} Energy loss of electrons (p) and holes (h) in nuclear matter (EP plasma) at $n=0.55\,n_0$ and three different temperatures, calculated using NRAPR Skyrme forces, see Tab. \ref{tab:NRAPR}. The conversion factor to obtain the energy loss in units of $\textrm{MeV}^2$ reads $\mu_e^2\,10^{-6}\sim7.8\cdot10^{-3}$.
Blue and red lines correspond to longitudinal and transverse rates for particles; hole contributions are shown in black. Dashed lines in the longitudinal channel display the corresponding results in the absence of induced interactions. The rates at very low temperatures of $T=0.1$ MeV closely resemble the fully degenerate case. The energy loss increases rapidly with temperature, and sizeable contributions to the scattering rate of particles with $|\boldsymbol{p}|\leq k_f$ (or holes with $|\boldsymbol{p}|\geq k_f$) emerge. The impact of induced interactions on the energy loss remains large, but in relative terms it is much smaller than at $T=0.1$ MeV. Screening effects are less important, and the longitudinal rates are larger than their transverse counterparts for any given momentum $|\boldsymbol{p}|$ at $T=0.5$ MeV and $T=1$ MeV.} \end{figure} \noindent Temperatures in the range of $T=0.1$ MeV to $T=1$ MeV are relevant for young and middle-aged neutron stars. While tiny compared to the chemical potentials of leptons and nucleons, finite temperature effects become important for the calculation of $-dE/dx$ close to $|\boldsymbol{p}|=k_f$, where the rates otherwise go to zero. Fig. \ref{fig:DeDxLowT} displays the energy loss for electrons and electron-holes, computed using the full one-loop expression for $-dE/dx$. As before, the density is set to $n=0.55\,n_0$, and temperatures of $T=0.1$ MeV, $T=0.5$ MeV, and $T=1.0$ MeV are studied. The tiny ratio of $T/\mu$ renders the numerical integration over Fermi distributions particularly challenging, and the comparison with strictly zero temperature results (which are comparatively easy to obtain) serves as a valuable cross-check. \newline At finite temperature, $-dE/dx$ assumes a finite value for any given momentum $|\boldsymbol{p}|$. The intersection of the results for electrons and electron-holes marks the location of the Fermi surface at zero temperature, where the quasiparticles are no longer stable.
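The zero-temperature cross-check mentioned above rests on simple power laws: near the Fermi surface the limiting expressions Eq. \ref{eq:dEdxT0} are cubic in $\delta=\epsilon_{\boldsymbol{p}}-\mu_l$ in the longitudinal channel and quadratic in the transverse one. A minimal Python illustration; the prefactor conventions, Debye masses, and Fermi velocities are placeholder assumptions:

```python
import math

E2 = 4.0 * math.pi / 137.036  # e^2 = 4*pi*alpha (placeholder convention)

def dedx_long(delta, v_f_l, mD2, vf):
    """Eq. (dEdxT0), longitudinal channel:
    -dE/dx|_L ~ (e^2/48)(1/v_{f,l}^2) |delta|^3 (sum_i m_{D,i}^2/v_{f,i}) / M^3."""
    M3 = sum(mD2) ** 1.5                       # M^3 with M^2 = sum_i m_{D,i}^2
    S = sum(m2 / v for m2, v in zip(mD2, vf))  # sum_i m_{D,i}^2 / v_{f,i}
    return E2 / 48.0 * abs(delta) ** 3 / v_f_l**2 * S / M3

def dedx_perp(delta):
    """Eq. (dEdxT0), transverse channel: -dE/dx|_perp ~ (e^2/(18*pi)) delta^2."""
    return E2 / (18.0 * math.pi) * delta**2
```

Doubling the distance to the Fermi surface thus multiplies the longitudinal energy loss by $8$ and the transverse one by $4$, which is the smaller curvature of $-dE/dx$ around $k_f$ noted below Eq. \ref{eq:dEdxT0}.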
The most interesting aspect to study is the interplay of finite temperature and induced interactions. At $T=0.1$ MeV the rates are almost identical to the results obtained at zero temperature: the imaginary part at $|\boldsymbol{p}|=k_f$ is tiny, contributions of particles (holes) with $\epsilon_{\boldsymbol{p}}<\mu$ ($\epsilon_{\boldsymbol{p}}>\mu$) play no role, and induced interactions boost the longitudinal rates by a large amount, particularly in close proximity to the (zero temperature) Fermi surface. At $T=0.5$ and $T=1.0$ MeV the available phase space for scattering increases, and longitudinal and transverse contributions to $-dE/dx$ grow accordingly. The distinct characteristics of both channels at very low temperatures, arising mainly due to screening effects, are much less noticeable. The dominance of transverse scattering has disappeared completely; energy loss due to the exchange of longitudinal plasmons is larger at any given momentum. The fact that all three temperatures satisfy the condition $T\ll\mu_e$ highlights the importance of finite temperature effects, even in highly degenerate systems. Induced interactions still lead to a sizeable increase of the longitudinal rates. However, in comparison with the corresponding energy loss due to pure electromagnetic scattering the impact is much less pronounced. In general, induced interactions are important whenever screening effects are the dominant feature, and therefore play a particularly important role for old neutron stars. \subsection{Partially degenerate matter: neutron star mergers} \begin{figure}[t] \includegraphics[scale=0.55]{GammaEdEdxLong-ThighEMP.pdf}\hspace{1cm}\includegraphics[scale=0.55]{GammaEdEdxPerp-ThighEMP.pdf} \setlength{\belowcaptionskip}{-8pt} \caption{\label{fig:DeDxHighTEMP}Energy loss of electrons (p) and electron-holes (h) at saturation density at $T=10$ MeV (solid), $T=20$ MeV (dashed), and $T=30$ MeV (dot-dashed), calculated using NRAPR Skyrme forces.
The results differ substantially from those calculated for (fully) degenerate matter. Particles (holes) with $|\boldsymbol{p}|<k_f$ ($|\boldsymbol{p}|>k_f$) add sizeable contributions to the total rate. At $T=30$ MeV the rates increase (decrease) almost linearly with increasing momentum $|\boldsymbol{p}|$.} \end{figure} \noindent \begin{figure}[t] \includegraphics[scale=0.55]{dEdxLongN055-ThighINDUCED.pdf}\hspace{1cm}\includegraphics[scale=0.55]{dEdxPerpN055-Thigh.pdf}\\[2ex] \includegraphics[scale=0.55]{dEdxLongN0-ThighINDUCED.pdf}\hspace{1cm}\includegraphics[scale=0.55]{dEdxPerp-ThighEMP.pdf}\\[2ex] \includegraphics[scale=0.55]{dEdxLong2N0-ThighINDUCED.pdf}\hspace{1cm}\includegraphics[scale=0.55]{dEdxPerp2N0-Thigh.pdf}\\[2ex] \setlength{\belowcaptionskip}{-8pt} \caption{\label{fig:DeDxHighTINDUCED} Energy loss of electrons in an EPN plasma ($n=0.55\,n_0$), and an EMPN plasma ($n=n_0$ and $n=2\,n_0$), at $T=10$ MeV (solid), $T=20$ MeV (dashed), and $T=30$ MeV (dot-dashed). Calculations are based on NRAPR Skyrme forces, and account for induced lepton-neutron scattering. Thin lines correspond to the results neglecting induced interactions. As usual, considerable modifications occur predominantly at lower densities, but are far less relevant than in the degenerate regime. In contrast to the fully degenerate limit, longitudinal and transverse rates now both decrease with density.
In the longitudinal channel this decrease is stronger, albeit much less pronounced than in the degenerate limit.} \end{figure} \begin{figure}[t] \begin{centering} \includegraphics[scale=0.55]{dEdxLongN0Muons-ThighINDUCED.pdf}\hspace{1cm}\includegraphics[scale=0.55]{GammaMdEdxPerp-ThighEMP.pdf}\\[2ex] \includegraphics[scale=0.55]{dEdxLong2N0Muons-ThighINDUCED.pdf}\hspace{1cm}\includegraphics[scale=0.55]{dEdxPerp2N0mu-Thigh.pdf} \end{centering} \caption{\label{fig:DeDxHighTINDUCEDMuons} Energy loss of muons in an EMPN plasma at $n=n_0$ and $n=2\,n_0$, taking into account induced interactions. Same parameters as in Fig. \ref{fig:DeDxHighTINDUCED}. The reduction of the energy loss at $n=2\,n_0$ is more pronounced than for electrons, see Fig. \ref{fig:DeDxHighTINDUCED}.} \end{figure} \noindent Finally, I take a first glimpse at scattering rates at higher temperatures, ranging from $T=10$ MeV to $T=30$ MeV. While far too high to be realized in neutron stars at any stage of their evolution, temperatures of tens of MeV are very relevant for the hot regions in neutron star mergers. The results presented in this section are subject to two major sources of uncertainty: First, partially degenerate conditions in principle require a rigorous iterative determination of effective masses, chemical potentials, and residual quasiparticle interactions for each given temperature. The simple estimates carried out in subsection \ref{subsec:multi} suggest that temperature variations of these quantities are small, but a thorough check is certainly desirable. Second, the increasing thermal energy leads to larger transfer of energy and momentum during collisions. The momentum dependence of the residual quasiparticle interactions $f_{ab}$, not included in the present approach, may thus become important for the correct description of dynamical screening effects in the strong sector of the RPA resummation.
Previous results indicate that the importance of induced interactions is reduced at higher temperatures anyway, and the associated uncertainties should not change the outcome of this study on a qualitative level. In any case, the results of this section have to be regarded as a first step towards the more ambitious goal of obtaining a quantitatively accurate description of electromagnetic scattering in partially degenerate nuclear matter.\newline Extrapolating the results of Fig. \ref{fig:DeDxLowT}, one would expect that increasing temperatures favor the longitudinal scattering rates. Fig. \ref{fig:DeDxHighTEMP} shows that the longitudinal rates are indeed several times larger than transverse ones for temperatures above $10$ MeV. The local suppression of the total rate around $|\boldsymbol{p}|=k_f$ has completely disappeared; at $T=30$ MeV the scattering rates of electrons (electron holes) increase (decrease) almost linearly with the momentum. Hole states (particle states) with $|\boldsymbol{p}|>k_f$ ($|\boldsymbol{p}|<k_f$) are increasingly populated, and their energy loss becomes an important contribution to transport. A realistic calculation of transport coefficients in partially degenerate matter should take these features into account.\newline It remains to take a look at the impact of induced interactions for the temperatures considered in Fig. \ref{fig:DeDxHighTINDUCED}. Induced interactions inherit their temperature dependence from the neutron and proton polarization functions $\Pi_{n,p}$, incorporated in the resummed proton polarization function $\tilde{\Pi}_p$. Increasing temperatures enlarge the low energy tail of the longitudinal photon spectrum at $q_0<v_{f,\,p}\,|\boldsymbol{q}|$, which is of primary importance for Coulomb scattering, see Fig. \ref{fig:RhoLongT}.
A numerical survey at $n=0.55\,n_0$ (EPN plasma) and $n=n_0$, $n=2\,n_0$ (EMPN plasma) shows again the emergence of a familiar pattern: at lower densities the impact of induced interactions is clearly noticeable, increasing longitudinal rates by up to $20$ \%. Around saturation density the impact is negligible, and above saturation density it leads to slightly reduced rates in the longitudinal channel. The reduction at $n=2\,n_0$ is much more pronounced for muons than for electrons, see Fig. \ref{fig:DeDxHighTINDUCEDMuons}. In all cases, the relative impact of induced interactions is indeed much smaller than in the degenerate regime discussed in Fig. \ref{fig:DeDxLowT}. It is noteworthy that both longitudinal and transverse rates now increase with decreasing density. The increase is more pronounced in the longitudinal channel, but not as large as in the degenerate limit, see Fig. \ref{fig:InducedComp}. \section{Summary and Outlook} \noindent I have presented an extensive study of electromagnetic scattering in the environment of dense homogeneous nuclear matter. Effective masses, chemical potentials, and residual interactions of the nucleon quasiparticles in beta equilibrium have been extracted from an energy functional based on Skyrme type interactions. The relationship between the scattering rate $\Gamma$ and the loop expansion of the fermion self energy has been discussed in detail. Debye screening and Landau damping of electromagnetic interactions due to electrons, muons, protons, and neutrons have been incorporated employing the random phase approximation (RPA). Hard dense loop (HDL) and weak-screening approximations of the full RPA result have been discussed and compared. At finite temperature, the energy loss per distance travelled, $-dE/dx$, rather than the scattering rate has been calculated, as the latter suffers from an infrared divergence within the RPA.
A central aspect of this article has been the correlations of strong and electromagnetic interactions, occurring as a result of collisions with protons, which carry both charges. The induced interactions with neutrons affect electromagnetic scattering in an implicit manner, i.e., via screening effects embodied in the photon propagator. While the calculation of lepton scattering rates has been the natural focus of this article, electromagnetic scattering of protons has been examined as well. In the following, the main results are summarized. \subsection{Summary of results} \noindent The results for the scattering rates at zero temperature highlight the importance of screening effects in (fully) degenerate matter. A comparison of \textit{weak-screening} approximation and full RPA results reveals that the former delivers reliable results for the scattering rates of light plasma constituents (i.e., electrons and electron-holes) at very high densities. At lower densities of about $n\sim0.6\,n_0$ dynamical screening in the multi-component plasma beyond leading order in the photon energy $q_0$ becomes important, see Fig. \ref{fig:WeakFull}. The \textit{hard dense loop} resummation captures these effects correctly, and represents an excellent approximation at any given density, as long as one is interested in the energy loss of fermions with momenta in a range of $(|\boldsymbol{p}|-k_f)\lesssim \pm5$ MeV. Given that HDL results are fairly easy to handle they represent the best compromise between accuracy and computational effort, in particular if the energy transfer in collisions is not too large (which is practically always the case in the core of neutron stars). The scattering of relativistic fermions in highly degenerate plasmas has been studied in detail by Heiselberg and Pethick, who conclude that the exchange of (transverse) photons represents the dominant channel for interaction in such an environment \cite{Heiselberg:1993cr}.
To what extent this remains true for leptons in nuclear matter is again a density dependent question: the longitudinal rates of electrons \textit{increase} by more than an order of magnitude going from $n=2\,n_0$ to $n=0.55\,n_0$, see Fig. \ref{fig:WeakFull}, while the transverse rates \textit{decrease} slightly, see Fig. \ref{fig:PerpCompare}. The dominance of transverse scattering is thus much less pronounced at lower densities. A similar pattern emerges for muons, which are mildly relativistic at $n=n_0$. At saturation density their rates are much smaller than those of electrons, in particular in the transverse channel, see Fig. \ref{fig:PerpCompare}. At $n=2\,n_0$ the rates of electrons and muons are comparable in both channels, and transverse scattering clearly dominates. In general, the scattering rate of a given fermion species increases considerably when subjected to a multi-component plasma. The only exception is the transverse rate of electrons, which is dominated by electron-electron scattering, and, as a result of screening effects, decreases slightly in the presence of muons and protons. \newline \noindent When correlations with nuclear interactions are taken into account, the photon propagator becomes sensitive to ground state properties of nuclear matter. This is reflected in the proton polarization function $\tilde{\Pi}_p$, resummed in the subspace of protons and neutrons interacting via the residual quasiparticle potentials $f_{pp}$, $f_{pn}$, and $f_{nn}$. In the static limit $\tilde{\Pi}_p$ yields the (strong) Debye mass, which diverges upon approaching the critical density $n_c$ for the stability of homogeneous nuclear matter, see Fig. \ref{fig:Screen}. The quantitative repercussions on the scattering rates are to some extent model dependent, and the robustness of the results has been tested using five different modern Skyrme forces.
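For orientation, the resummation of $\tilde{\Pi}_p$ in the proton-neutron subspace takes the standard matrix-RPA form. The following is a sketch only; the sign and normalization conventions for $\Pi_{p,n}$ and $f_{ab}$ follow the usual Landau Fermi-liquid notation and may differ from those adopted in Sec. \ref{subsec:induced}, \begin{equation} \tilde{\Pi}_p=\frac{\Pi_p\,\left(1-f_{nn}\,\Pi_n\right)}{\left(1-f_{pp}\,\Pi_p\right)\left(1-f_{nn}\,\Pi_n\right)-f_{pn}^2\,\Pi_p\,\Pi_n}\,, \end{equation} so that the neutron polarization $\Pi_n$ enters the photon propagator only implicitly, through the screening encoded in the denominator.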
The sharp increase of static screening close to $n_c$ enlarges the low energy tail of the photon spectrum, see Fig. \ref{fig:RhoLongT}, which in turn increases the longitudinal scattering rates by more than a factor of $2$. At densities above $n_0$ induced interactions \textit{reduce} the Debye screening of protons (and consequently $\Gamma_L$), although by a much smaller amount. Muons are absent at densities below $n\sim0.7\,n_0$, where the impact of induced interactions is most pronounced, and thus experience only the reduction at higher densities. The lack of static screening in the \textit{transverse} channel entails that induced interactions are of minor importance for $\Gamma_\perp$. The additional boost of longitudinal scattering further suppresses the dominance of transverse rates at densities close to $n_c$, see Fig. \ref{fig:LowDens}. To study the impact of induced interactions on \textit{heavy} fermions at the edge of stability, the scattering rates of protons have been computed, see Fig. \ref{fig:LowDens}. The impact on the longitudinal rates of protons is indeed striking, resulting in a boost of roughly a factor of $5$. All of the above results are robust, and to a large extent independent of the chosen Skyrme model. At lower densities, variations in the numerical results occur at fixed density $n$, because the impact of induced interactions is very sensitive to the relative distance to the critical density $n_c$, which takes a slightly different value in each model. Using Skyrme parameter sets with similar critical densities results again in very narrow bands, see Fig. \ref{fig:LowDens}. \newline \noindent At finite temperature, the energy loss per distance travelled, $-dE/dx$, has been computed under degenerate conditions, with $T\leq 1$ MeV, and under partially degenerate conditions, with $10\,\textrm{MeV} \leq T \leq 30$ MeV.
At temperatures of $T=0.1$ MeV the scattering rates are basically indistinguishable from those calculated at zero temperature. As the phase space available for scattering increases with the temperature, the characteristics of the scattering rates change in several ways: temperatures of $T=(0.5-1)$ MeV, though tiny compared to the electron chemical potential, lead to a substantial increase of the energy loss of fermions with $|\boldsymbol{p}|\sim k_f$. At a distance of about $|\boldsymbol{p}|\sim(k_f\pm1)$ MeV from the Fermi surface the rates at $T=1$ MeV are still more than twice as large as those at $T=0$, see Fig. \ref{fig:DeDxLowT}. The dominance of the transverse channel has completely disappeared, as longitudinal contributions are larger for fermions at any given momentum. Induced interactions lose much of their importance: in the absence of a (sharp) Fermi surface, there is no longer a region around $k_f$ where induced lepton-neutron scattering represents the dominant contribution to the longitudinal rate. This trend continues in partially degenerate matter at temperatures of $T=(10-30)$ MeV. Longitudinal rates easily outgrow transverse ones, in the case of muons by about 5-10 times, in the case of electrons still by about 3 times, see Fig. \ref{fig:DeDxHighTEMP}. At $T=30$ MeV the rates of electrons (holes) increase (decrease) almost linearly with their momenta, sharing little in common with the typical characteristics of scattering in fully degenerate matter. Longitudinal and transverse rates both decrease with increasing density, in contrast to degenerate matter where transverse rates increase, see Fig. \ref{fig:PerpCompare}. In most cases induced interactions are of minor importance, with the exception of electrons at lower densities, where the increase can be as large as $20$\%, see Fig. \ref{fig:DeDxHighTINDUCED}. 
\subsection{Relevance for neutron star phenomenology and outlook} \noindent A particular focus of this article has been the computation of electron and muon scattering rates under conditions similar to those realized in the crust-core boundary of neutron stars. The transport properties of this region play an important role for the spin evolution of neutron stars. Among the biggest mysteries in nuclear astrophysics is the observation of high-frequency pulsars. As a neutron star spins up by accretion of matter, various oscillation modes, among them r-modes, are excited. R-modes are known to be generically unstable with respect to the emission of gravitational waves, to which they transmit angular momentum. As a result, the spin-up of the star should be limited by a certain critical frequency. The r-mode amplitudes grow provided that potential damping mechanisms operate on a longer timescale than the emission of gravitational waves. One of them is viscosity in the crust-core interface, which, however, would have to be several times larger than previously calculated \cite{Ho:2011tt} to stabilize fast spinning stars. Several effects, including superconductivity/superfluidity \cite{Gusakov:2013jwa} or meson condensation \cite{Kolomeitsev:2014gfa}, have been considered in order to explain the apparent discrepancy. Because electrons are weakly interacting and ultra-relativistic, they are considered a dominant contributor to transport. Induced interactions strongly increase the scattering rates of electrically charged particles at densities which are just about high enough to support stable homogeneous nuclear matter, i.e., at the boundary to the crust. In addition, they are particularly pronounced at the very small temperatures typical of accreting neutron stars. The energy loss $-dE/dx$ of electrons integrated in a small range of momenta around $k_f$ increases by more than a factor of $3$. 
The increase of the scattering rate (or in other words the reduction of the mean free path) indicates that the capability of electrons to transport heat, charge, or momentum has previously been overestimated. How large the friction between crust and core can become depends, among other things, on the existence of a ``nuclear pasta phase'' \cite{Pethick:1995di}, which potentially smears out the transition, and reduces the damping effect of the viscous boundary layer \cite{Haskell:2012vg}. The rapid increase of Debye screening particularly impacts heavy fermions, and motivates a refined study of the transport properties of nucleons in the crust-core interface. It is certainly interesting to ask whether or not nuclear matter in its simplest manifestation is sufficient to explain the observation of fast spinning stars.\newline If gravitational wave asteroseismology of isolated neutron stars becomes available, it would represent an outstanding tool to probe the interior of neutron stars. Neutron star mergers are well within reach of current gravitational-wave detector sensitivities, and the relevance of transport phenomena for their simulation is the subject of several ongoing studies. The calculations carried out under partially degenerate conditions reveal substantial differences from the scattering rates in highly degenerate matter. These should be taken into account in the calculation of transport coefficients, allowing for a better assessment of their importance. Induced interactions most likely play a minor role at higher temperatures. \newline \newline \noindent Several approximations employed in this article warrant further studies. Vertex corrections, briefly discussed in section \ref{sec:OT}, might become important in the calculation of scattering rates at higher temperatures. Without vertex corrections, interference contributions to Moeller scattering cannot be extracted from the fermion self energy. An example is the diagram depicted in Fig. 
\ref{fig:scatter} (c), which may be expressed as $\Sigma(p)\propto\int d^4 q\,\Gamma(q)^\mu\,D_{\mu\nu}(q)\,\gamma^\nu\,S(p-q)$, where $\Gamma^\mu$ is the one-loop vertex. It is certainly not unusual to include vertex corrections in the Braaten and Pisarski resummation program. However, how resummed vertices relate to screening effects in the various scattering channels has not been explored in detail yet. In addition, the momentum dependence of residual particle-hole interactions should be extracted from microscopic approaches, see e.g. Ref. \cite{Benhar:2017oli}. To do so would allow for a rigorous study of dynamical screening effects in the strong sector of the RPA resummation. Finally, a fully iterative determination of the mean-field energies and interaction potentials of nucleon quasiparticles at finite temperature would be desirable. The refined results for effective masses and chemical potentials would improve the accuracy of the RPA calculation of scattering rates in partially degenerate nuclear matter. \newline In addition to the technical aspects mentioned above it would be important to assess the impact of superconductivity and magnetic fields on the calculations presented in this article. It is commonly, though not unanimously, expected that protons in the outer core of neutron stars are superconducting. Scattering at temperatures below the corresponding critical temperature is subject to the Meissner effect, which introduces static screening to the transverse channel. The resulting interplay of the Meissner effect and induced interactions should be studied in detail. Finally, calculating the scattering rates in the presence of a magnetic field requires a careful reevaluation of the fermion self energy (i.e., the scattering matrix element). 
\section*{Acknowledgements} \noindent I would like to thank Sanjay Reddy and Ermal Rrapaj for many useful discussions during the execution of this project, Ingo Tews and Ermal Rrapaj for a critical reading of the manuscript, and Alessandro Roggero for providing valuable help in the development of the numerical code. I have been supported by a Schroedinger Fellowship of the Austrian Science Fund FWF, project no. J3639.
\section{Introduction}\label{sec:01} Microring resonators (MRRs) have been widely employed for on-chip optical interconnects, nonlinear optics, and sensing, owing to their compact size, high quality ($Q$) factor, and compatibility with the integration of other passive and active photonic devices~\cite{Lipson2018, Alex2018, sun2015single, kippenberg2011microresonator, ferdous2011spectral, zhang2016highly}. MRRs are normally side-coupled with a bus-waveguide to access their resonant modes, yielding periodic resonant dips (usually symmetric Lorentzian-type) in the bus-waveguide's transmission spectra. To carry out MRR-based optical sensing, filtering, modulation, and switching, transmission variations of the bus-waveguide around the resonant dips are required, which are achieved by shifting MRR's resonance either toward or away from the signal wavelength. To obtain a large transmission variation, a shift of the resonant wavelength larger than the linewidth of the resonant dip is desired~\cite{Fanoreview, chao2003biochemical, fan2002sharp}. Thus, lineshapes of MRR's resonant dips, i.e., sharpness and linewidth, strongly determine the performance of MRR-based devices, such as power consumption, sensing sensitivity, modulation depth, extinction ratio, etc~\cite{fan2002sharp, mario2006asymmetric}. Considerable work has been devoted to modifying MRR's resonance lineshape into a Fano-type, which has an asymmetric and sharp slope around the resonant wavelength. As opposed to the usual symmetric Lorentzian resonances, the wavelength range for tuning the Fano resonance transmission from 0\% to 100\% could be much narrower than the full linewidth of the resonance itself~\cite{fan2002sharp}. It therefore enables devices with improved performance, such as all-optical switches with an order-of-magnitude energy reduction and a ratio-metric wavelength monitor with an ultrahigh resolution of 0.8 pm~\cite{heuck2013improved, mario2006asymmetric, wang2016fano}. 
Fano resonances in MRRs are normally realized by interfering the resonance pathway with a coherent background pathway. A straightforward approach is the utilization of Mach-Zehnder interferometers (MZIs), where MRRs are either side-coupled to one arm of a MZI or inserted into a MZI to cause the two resonant beams propagating in the MRRs to interfere. MZI's excess waveguide-arms allow easy tunability of Fano lineshapes. Unfortunately, MRR devices incorporating MZIs lose their compactness. By coherently coupling multiple resonant modes from multiple MRRs into one bus-waveguide, Fano resonances could also be realized. Nevertheless, it is challenging to achieve precise structural designs and device fabrication~\cite{tobing2008box, darmawan2005phase}. Also, more MRRs generate a larger footprint. It is also possible to alter the MRR's resonance into an electromagnetically induced transparency (EIT) lineshape, showing a narrow transparency peak residing in a broad transmission valley. It has potential for ultra-dense on-chip wavelength division multiplexing~\cite{mancinelli2011coupled} as well as enhancing the cavity's finesse~\cite{tobing2008finesse}. EIT resonances originate from coherent interferences between coupled resonant modes. They are realized in structures consisting of two or more MRRs coupled with a bus-waveguide or a MZI~\cite{darmawan2008resonance, mancinelli2011coupled, smith2004coupled, totsuka2007slow}, which have large device footprints and require rigorous considerations of the resonant modes' overlap. {In addition, while Lorentzian, Fano, and EIT resonance lineshapes have different functionalities, to the best of our knowledge, a MRR-based structure supporting these three lineshapes simultaneously has not been reported. 
Moreover, the ability to switch the resonance lineshape at a specific wavelength among the three types by simply tuning the structure parameters is desired.} In this letter, we demonstrate that, by simply coupling a MRR with a bus-waveguide inserted with two air-holes, as shown schematically in Fig.~\ref{fig:model}(a), all of the above-mentioned resonance lineshapes could be realized. The structure is compact and offers large design and fabrication tolerances. The two air-holes constitute a low-finesse Fabry-Perot (FP) cavity in the bus-waveguide, producing broadband resonant peaks to couple with MRR's multiple resonant modes. {Here, the air-holes are designed with circular shapes, as they suffer fewer fabrication imperfections than air-holes with triangular or rectangular shapes.} The transmission spectrum of the coupled system is analyzed by the transfer matrix method. Lorentzian, Fano, and EIT resonance lineshapes are obtained over a free spectral range (FSR) of the FP cavity. For a specific resonant mode of the MRR, its lineshape could be tuned readily among the three types by changing the distance between the two air-holes. The results of the theoretical analysis are verified experimentally by devices fabricated on a silicon-on-insulator chip. Different from the previously reported structures, the proposed design has only one MRR and a bus-waveguide, making it very compact. In addition, the broadband resonances of the FP cavity facilitate their overlaps with MRR's narrowband resonances. Hence, it is not necessary to carefully consider the design and fabrication of the two air-holes. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{figure1.pdf} \caption{ (a) Schematic and (b) theoretical model of the proposed structure, where the MRR is coupled with a bus-waveguide inserted with two air-holes. 
(c) Calculated transmission spectra of a bus-waveguide with two air-holes (FP), and a MRR coupled with a bus-waveguide with (FP-MRR) and without (WG-MRR) air-holes. {(d) Simulated transmission spectrum of the FP-MRR structure using a FDTD method.} } \label{fig:model} \end{figure} \section{Model and theory}\label{sec:02} The transmission spectrum of the proposed structure could be obtained by applying a transfer matrix analysis on the model shown in Fig.~\ref{fig:model}(b)~\cite{fan2002sharp}. The forward (backward) propagating mode in the bus-waveguide has the amplitudes $a_1$ ($b_1$) at the left end and $a_2$ ($b_2$) at the right end. The two air-holes work as partial reflectors of the propagating modes in the bus-waveguide. For simplicity, they are designed with the same diameters. With an amplitude reflection coefficient of $r_h$, one of the air-holes generates a transfer matrix for the propagating mode of \begin{equation} M_h=\frac{1}{i\sqrt{1-{r_h}^2}}\begin{bmatrix} {-1}&{-r_h}\\ {r_h}&1 \end{bmatrix} \end{equation} The two air-holes are located at distances of $l_1$ and $l_2$ from the waveguide-MRR coupling point. When light propagates through the two waveguide sections, the transfer matrices are determined by the phase shifts in the form of \begin{equation} M_{l_1}=\begin{bmatrix} {e^{i2\pi{nl_1}/\lambda}}&{0}\\ {0}&{e^{-i2\pi{nl_1}/\lambda}} \end{bmatrix}, M_{l_2}=\begin{bmatrix} {e^{i2\pi{nl_2}/\lambda}}&{0}\\ {0}&{e^{-i2\pi{nl_2}/\lambda}} \end{bmatrix} \end{equation} Here, $n$ is the effective refractive index of the propagating mode, and $\lambda$ is the operating wavelength. In MRRs, the resonances are traveling waves, and the backward scattering in the waveguide-MRR coupling could be neglected~\cite{bogaerts2012silicon}. 
For a forward or backward waveguide mode propagating through the coupling point, its transmission spectrum is governed by $t_R(\lambda)=\frac{t-ae^{i2\pi{nL_R}/\lambda}}{1-tae^{i2\pi{nL_R}/\lambda}}$~\cite{heebner2008optical}, where $t$ is the field transmission coefficient at the waveguide-MRR coupling region, and $a=\exp(-\alpha{L_R})$ is the round trip amplitude for a MRR with perimeter $L_R$ and linear loss coefficient $\alpha$. The corresponding transfer matrix for an incident light after the directional waveguide-MRR coupling could be described as \begin{equation} M_R=\begin{bmatrix} {t_R}&{0}\\ {0}&1 \end{bmatrix} \end{equation} The transfer matrix equation for the incoming and outgoing wave amplitudes of the entire coupled system is governed by \begin{equation} \begin{bmatrix} b_1\\ a_2\end{bmatrix}=M_hM_{l_2}M_RM_{l_1}M_h\begin{bmatrix} a_1\\ b_2 \end{bmatrix} \end{equation} To be consistent with the common operations of MRR-based devices, we consider that light is injected only at the left input port with normalized amplitude, i.e., $a_1=1$, $b_2=0$. The final power transmission spectrum of the coupled system could be calculated as \begin{equation} T(\lambda)=\left|{\frac{a_2}{a_1}}\right|^2=\left|\frac{{(1-r^2_h)}{t_R(\lambda)}e^{\frac{i2\pi{nl}}\lambda}}{1-{r^2_h}{t_R(\lambda)}e^{\frac{i4\pi{nl}}\lambda}}\right|^2 \label{eq:T} \end{equation} where $l=l_1+l_2$. We could transform Eq.~\ref{eq:T} into a straightforward form to indicate the coupling between the MRR and FP cavity, as shown below \begin{equation} T(\lambda)=\left|\frac{{(1-r^2_h)}e^{\frac{i2\pi{nl}}\lambda}}{1-{r^2_h}e^{\frac{i4\pi{nl}}\lambda}}\right|^2\left|t_R(\lambda)\right|^2\left|\frac{1-{r^2_h}e^{\frac{i4\pi{nl}}\lambda}}{1-{r^2_h}{t_R(\lambda)}e^{\frac{i4\pi{nl}}\lambda}}\right|^2 \label{eq:T6} \end{equation} On the right side of Eq.~\ref{eq:T6}, the first and second terms are sole transmissions of the FP cavity and the side-coupled MRR, respectively; the third term indicates their resonance coupling. 
By assuming $r_h=0$, i.e., no air-holes in the bus-waveguide, the transmission spectrum of a conventional bus-waveguide coupled MRR could be calculated from Eq.~\ref{eq:T6}, as displayed in the red curve (WG-MRR) of Fig.~\ref{fig:model}(c). Here, the parameters of the waveguide-coupled MRR are $t=0.9$, $a=0.99$, $nL_R=200~\mu$m, and the spectrum is calculated over a range of 1,400-1,700 nm. Periodic resonant dips with a FSR of $\sim$12 nm are obtained. Because $t\neq{a}$, the extinction ratio (ER) of the dips is not high, which is calculated as $10\log_{10}({\frac{1-ta}{t-a}})^2=1.67$~dB. Their quality ($Q$) factors determined by $t$ and $a$ are estimated as 3,500. In another case, if we modify $t=1$, $a=0$ and set the two air-holes with parameters of $nl=10~\mu$m and $r_h=0.78$, no MRR is coupled with the bus-waveguide. The coupled system's transmission is determined only by the FP cavity of the two air-holes. The calculated transmission from Eq.~\ref{eq:T6} is plotted as shown by the green curve (FP) in Fig.~\ref{fig:model}(c). When the phase delay between the two air-holes $\delta=2\pi{nl/\lambda}$ is an integer multiple of $\pi$, there are resonant peaks with broad linewidths ($Q$$\approx$87) due to the low reflection coefficients of the air-holes. The ER of the resonant peaks is $10\log_{10}({\frac{1+r^2_h}{1-r^2_h}})^2=12.47$~dB. The blue curve in Fig.~\ref{fig:model}(c) displays the transmission spectrum of a coupled system (FP-MRR) with parameters of $t=0.9$, $a=0.99$, $nL_R=200~\mu$m, $nl=10~\mu$m and $r_h=0.78$, which includes both the MRR and FP cavity. Because of couplings between their resonances, Lorentzian, Fano, and EIT resonance lineshapes are realized simultaneously over each FSR of the broad FP oscillation background. At a FP cavity's resonant peak, i.e., $\delta$ is an integer multiple of $\pi$, if there is a resonance of MRR, their coherent overlap gives rise to a symmetric Lorentzian resonant dip. 
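The limiting cases discussed above can be checked numerically. The following is a minimal Python sketch (our own code and function names, not the paper's) that evaluates Eq.~\ref{eq:T} on the parameters just quoted, with all lengths expressed in nanometers:

```python
import cmath
import math

def t_ring(lam, t, a, nLR):
    # Field transmission t_R(lambda) of a waveguide side-coupled MRR
    phase = cmath.exp(2j * math.pi * nLR / lam)
    return (t - a * phase) / (1 - t * a * phase)

def transmission(lam, t, a, nLR, nl, rh):
    # Power transmission of the coupled FP-MRR system, Eq. (5)
    tR = t_ring(lam, t, a, nLR)
    delta = 2 * math.pi * nl / lam  # phase delay between the two air-holes
    num = (1 - rh**2) * tR * cmath.exp(1j * delta)
    den = 1 - rh**2 * tR * cmath.exp(2j * delta)
    return abs(num / den) ** 2

# Parameters quoted in the text (lengths in nm)
t, a, nLR, nl, rh = 0.9, 0.99, 200_000.0, 10_000.0, 0.78
lams = [1400.0 + 0.002 * i for i in range(150_001)]  # 1,400-1,700 nm grid

# WG-MRR limit (r_h = 0): shallow Lorentzian dips
T_wg = [transmission(lam, t, a, nLR, nl, 0.0) for lam in lams]
er_wg = 10 * math.log10(max(T_wg) / min(T_wg))

# Bare FP cavity limit (t = 1, a = 0): broad resonant peaks
T_fp = [transmission(lam, 1.0, 0.0, nLR, nl, rh) for lam in lams]
er_fp = 10 * math.log10(max(T_fp) / min(T_fp))
```

On this grid the WG-MRR extinction ratio comes out at roughly 1.66 dB (the 1.67 dB value in the text assumes unit off-resonance transmission), and the bare-FP extinction ratio agrees with the closed-form expression $20\log_{10}\frac{1+r_h^2}{1-r_h^2}$.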
As indicated in Fig.~\ref{fig:model}(c), the Lorentzian resonant dips in the transmissions of WG-MRR and FP-MRR have significantly different ERs and $Q$ factors. While the maximum transmissions around the dips of WG-MRR and FP-MRR are similar, the bottom of the FP-MRR dip reduces to an ultralow value of $0.0443$, resulting in a greatly improved ER of $10\log_{10}({\frac{1-ta-r^2_h(t-a)}{(1-r^2_h)(t-a)}})^2=13.54$~dB. In addition, the optical field of the resonant mode in the FP cavity couples with the MRR many times during its backward and forward reflections, which introduces extra waveguide-coupling losses to the MRR and results in a decreased $Q$ factor. The $Q$ factor of the Lorentzian dip in the FP-MRR is estimated to be 1,000 by a Lorentzian fitting. At a specific wavelength, if the phase delay between the two air-holes $\delta$ is an odd multiple of $\pi/2$, the FP cavity is in a completely destructive interference state. A MRR's resonance coupled with this state would generate an EIT transparency peak over the broad transmission valley, as illustrated in Fig.~\ref{fig:model}(c). Calculated from Eq.~\ref{eq:T6}, the ER of the EIT lineshape has a value of $10\log_{10}({\frac{(1+r^2_h)(t-a)}{1-ta+r^2_h(t-a)}})^2=8.67$~dB, which is larger than the ER of the Lorentzian resonant dips in WG-MRR. Its $Q$ factor is estimated as 13,000 by a Lorentzian fitting, which is much higher than that of the resonant dips in WG-MRR. This increased $Q$ factor in the EIT lineshape represents an improvement of the photon lifetime and finesse of the MRR by adding a FP cavity in the bus-waveguide. An optimized $r_h$ could be chosen for a much higher ER; for instance, ER $=$~13 dB is obtained for $r_h=0.9$. The destructive interference between the two air-holes results in a weakened waveguiding mode around the waveguide-MRR coupling region, which reduces the waveguide-MRR coupling coefficient. 
For light resonant in the MRR, less waveguide-coupling loss therefore promises an improved $Q$ factor. In conventional waveguide-MRR coupling systems, to achieve transmission peaks, an additional drop-waveguide is required to couple with the MRR~\cite{bogaerts2012silicon}, which makes the device complicated and produces more coupling losses in the MRR. In the proposed FP-MRR, the EIT lineshape provides a transmission peak with only one bus-waveguide and the linewidth is much narrower, potentially expanding compact MRR's applications in nonlinear optics, sensing, filtering, and switching. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{figure2.pdf} \caption{ {(a) ERs, (b) $Q$ factors, (c) SRs, and (d) Fano parameters ($q$) of the different resonance lineshapes in the FP-MRR spectrum with the parameter $nL_R$ equal to $200~\mu$m (blue curves with squares) and $400~\mu$m (scattered triangular points). } } \label{fig:ER} \end{figure} When $\delta$ is in the ranges of $(k\pi,k\pi+\pi/2)$ and $(k\pi+\pi/2,k\pi+\pi)$, where $k$ is an integer, the waveguiding modes are off-resonance from the FP cavity. Their couplings with MRR's resonances give rise to Fano resonance lineshapes over the slopes of the FP oscillation background, as shown in Fig.~\ref{fig:model}(c). The Fano resonance lineshapes depend critically on the positions of the resonant wavelengths with respect to the FP resonant peaks. On different sides of the FP resonant peaks, the asymmetric directions of the Fano resonances are opposite. {The above obtained resonance lineshapes are further verified by mode simulations using a finite difference time domain (FDTD) technique (Lumerical Inc.). In the modelled FP-MRR, the bus-waveguide and MRR have the same height and width of 220 nm and 500 nm, and the refractive index is chosen as 3.4. The MRR has a radius of 7.2 $\mu$m, and is coupled with the bus-waveguide with a gap of 100 nm. 
Two circular air-holes with a radius of 160 nm are inserted in the bus-waveguide with a distance of 2.8 $\mu$m. The simulated transmission spectrum of the proposed FP-MRR is plotted in Fig.~\ref{fig:model}(d), showing consistent results with the theoretical prediction (blue curve in Fig.~\ref{fig:model}(c)). Lorentzian dips and EIT transparency peaks are obtained at the resonant peaks and transmission valleys of the FP oscillation background, respectively. There are Fano resonance lineshapes over the slopes of the FP cavity background.} To describe distinct features of the three types of lineshapes, we plot ERs, $Q$ factors, slope rates (SRs), and Fano parameters $q$~\cite{Fanoreview} of the resonance lineshapes in the FP-MRR spectrum {(blue curve in Fig.~\ref{fig:model}(c)).} While ERs of the Lorentzian and EIT resonance lineshapes could be deduced from Eq.~\ref{eq:T6}, Fano resonance's ER has no explicit expression. From the transmission spectrum of the FP-MRR, ERs are calculated numerically by taking the maximum and minimum around each resonance, as shown in Fig.~\ref{fig:ER}(a). Fano resonances' ERs are between those of the Lorentzian and EIT resonances; ERs for all the resonance lineshapes follow a sine-like function of wavelength. $Q$ factors of the three resonance lineshapes are extracted as well by fitting them with Lorentzian or Fano functions, as illustrated in Fig.~\ref{fig:ER}(b). The maximum and minimum $Q$ factors are achieved in EIT and Lorentzian resonances, respectively, while Fano resonances have $Q$ factors lying between them. Different $Q$ factors are attributed to the varied coupling strengths between MRR and waveguiding modes, which are determined by the interference states of the optical field confined by the two air-holes in the bus-waveguide. SR, as another important figure of merit for resonance lineshapes, is calculated by taking derivatives of the slopes in the resonance lineshapes. 
It determines the wavelength range for tuning the transmission between maximum and minimum. A calculated SR spectrum for the resonance lineshapes is plotted in Fig.~\ref{fig:ER}(c). For the symmetric Lorentzian and EIT resonance lineshapes, the SR is determined by the $Q$ factors, i.e., linewidths. The Lorentzian (EIT) resonance with the lower (higher) $Q$ factor has a smaller (larger) SR. Note that for Fano resonances with asymmetric lineshapes, the SRs could be higher than that of the EIT resonance, though the $Q$ factors might be lower. This promises devices with higher performance, relying on the much narrower wavelength range for switching between the maximum and minimum transmissions, while its ER is not the highest (as shown in Fig.~\ref{fig:ER}(a)). The Fano parameter $q$ characterizes the specific asymmetric profile of the Fano lineshape~\cite{Fanoreview}, which is estimated by fitting the resonances with Fano functions, as plotted in Fig.~\ref{fig:ER}(d). A cotangent-like function is obtained for different $q$ at the resonances, showing $q=0$ ($q=\pm{\infty}$) at the Lorentzian (EIT) resonance. Additionally, both the Lorentzian dips and EIT peaks in the FP-MRR have the same central wavelengths as those of the Lorentzian dips in the WG-MRR. However, the maximum and minimum of the Fano lineshapes shift slightly from the WG-MRR's resonant wavelengths due to the complex interference mechanism. {We also study the resonance lineshapes of another FP-MRR by changing the MRR's perimeter into $nL_R=400~\mu$m. With the same principle of mode interferences, Lorentzian, Fano, and EIT resonance lineshapes are obtained. The figures of merit of the resonance lineshapes are displayed in the scattered triangular points of Fig.~\ref{fig:ER}, which have the same trends as those in the FP-MRR with a perimeter of $nL_R=200~\mu$m (see blue curves with squares in Fig.~\ref{fig:ER}). The calculated ER, $Q$, SR, and $q$ of this FP-MRR with larger perimeter could be explained according to the above analysis. 
Because of the lower scattering loss of the larger MRR, the $Q$ factors are increased for the resonant modes. As a result, SRs, which are mainly determined by the resonance linewidth, have much higher values for the FP-MRR with the larger MRR. Since the coupling gap between the MRR and bus-waveguide is not changed, the ER and $q$ values are almost unchanged.} \section{Experimental results}\label{sec:03} To verify the above theoretical analysis, we fabricate the proposed MRR device on a silicon-on-insulator chip with a 220 nm thick top silicon layer and a 2 $\mu$m thick buried oxide layer. Electron beam lithography is used to define the device patterns, which are then transferred onto the top silicon layer by inductively coupled plasma etching. Grating couplers with two-dimensional air-hole arrays are designed at both ends of the bus-waveguides~\cite{liu2010high}. The bus-waveguide and MRR are designed with the same width of 500 nm, and their coupling gap is 120 nm. Figure~\ref{fig:exp}(a) displays an optical microscope image of one of the fabricated devices, where the MRR has a radius of 80 $\mu$m. The scanning electron microscope (SEM) image of the waveguide-MRR coupling region is shown in Fig.~\ref{fig:exp}(b), displaying the two air-holes with a distance of 20 $\mu$m and a radius of 150 nm. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{figure3.pdf} \caption{ (a) Optical microscope image of one of the fabricated devices; (b) SEM image of the waveguide-MRR coupling region with two air-holes inserted in the bus-waveguide; {(c) Measured transmission spectrum of the fabricated device (black line) and fitting result by Eq.~\ref{eq:T} (red dashed line), showing Lorentzian, Fano, and EIT resonance lineshapes over the FP oscillations. } } \label{fig:exp} \end{figure} The fabricated devices are characterized by coupling a narrowband tunable laser (TL) into the input grating coupler, and the transmission powers are monitored using a photodiode. 
By tuning the TL wavelength over a range of { 1,557-1,583 nm }in steps of 0.02 nm, the device transmission spectrum could be analyzed. {The black line in Fig.~\ref{fig:exp}(c)} shows the result of the measurement from the device shown in Fig.~\ref{fig:exp}(a). Confined by the two air-holes, the waveguide transmission presents FP oscillations with FSRs of about 14 nm. Superimposed on them are the various lineshapes of the MRR's resonances. {On the top of the FP oscillation (at the wavelengths of 1563.9 nm and 1578.4 nm), the symmetric Lorentzian resonance dips are observed.} Fano resonance lineshapes with opposite asymmetric directions are formed on both sides of the FP slopes. {At the valley of the FP oscillation background (at 1570.5 nm), an EIT transparency peak appears.} These experimental results are consistent with the predictions in Fig.~\ref{fig:model}. {As shown in the red dashed line of Fig.~\ref{fig:exp}(c), the experimental result is fitted by the theoretical prediction of Eq.~\ref{eq:T} with parameters of $t=0.75$, $a=0.96$ and $r_h=0.61$, presenting their good consistency. Both of them have Lorentzian, Fano, and EIT resonance lineshapes at the corresponding resonant wavelengths. However, because of fabrication imperfections, ERs of the FP oscillations are lower than the theoretical values due to the lower reflectivities of the air-holes, which also makes the EIT peaks less prominent than in the theoretical fitting. From the experimental measurements, the maximum ERs and SRs of the Fano resonances are estimated as 20 dB and 280 dB/nm, respectively, averaged over tens of devices. The values calculated from the corresponding theoretical predictions are 30 dB and 450 dB/nm, respectively. By optimizing the fabrication processes in future devices, better figures of merit are expected from the proposed structure. 
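Fano fits like the ones above are commonly based on the standard normalized Fano lineshape; the sketch below (our own normalization and naming, not the fitting code used for the measurements) reproduces the limiting cases of the asymmetry parameter $q$ discussed in connection with Fig.~\ref{fig:ER}(d):

```python
def fano(lam, lam0, gamma, q):
    # Standard normalized Fano lineshape: eps is the reduced detuning from
    # the resonance lam0 with linewidth gamma; q sets the asymmetry.
    # q = 0 gives a symmetric dip (Lorentzian anti-resonance), q -> +/-inf a
    # symmetric peak (EIT-like), and intermediate q an asymmetric profile.
    eps = 2.0 * (lam - lam0) / gamma
    return (q + eps) ** 2 / ((1.0 + q ** 2) * (1.0 + eps ** 2))
```

For finite $q$ the maximum (value 1) sits at $\epsilon=1/q$ and the zero at $\epsilon=-q$, so the sign of $q$ fixes the asymmetric direction, consistent with the opposite orientations observed on the two sides of each FP peak.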
} \section{Conclusions}\label{sec:04} In conclusion, we demonstrated a compact structure for realizing Lorentzian, Fano and EIT resonance lineshapes in a waveguide side-coupled MRR. By adding two air-holes into the bus-waveguide, all of the three lineshapes could be obtained simultaneously in the waveguide transmission. For a specific resonance, its lineshape could be designed reliably by changing the distance between the two air-holes. The different resonance lineshapes achieved flexibly in the proposed structure promise great potential to expand MRRs' applications in optical interconnects, nonlinear optics, and sensing. \\ \section*{Acknowledgments}\label{sec:05} Financial support was provided by the National Natural Science Foundation of China (61522507, 61775183, 11634010); the Key Research and Development Program (2017YFA0303800, 2018YFA0307200); the Key Research and Development Program in Shaanxi Province of China (2017KJXX-12); the Fundamental Research Funds for the Central Universities (3102018jcc034, 3102017jc01001). \bibliographystyle{plain}
\section{Introduction} \paragraph*{Fixed-parameter tractability.} For many important computational problems, the best known algorithms have a worst-case running time that scales exponentially or worse with the size of the input. Generally, however, the size of an input instance is a poor indicator of whether the instance is indeed difficult to solve. This is because for most natural problems, a good fraction of all instances of a given size can be solved much more efficiently than the worst-case instance of that size. To gain a better understanding of the \emph{complexity of individual instances}, we might define a function $\kappa:{\{\bits0, \bits1\}}^\ast\rightarrow \mathbb{N}$ that assigns to each instance $x$ a numeric \emph{parameter} $\kappa(x)$. This parameter then indicates the extent to which certain features that we have identified as a potential cause of computational hardness are present in the given instance. If the function $\kappa$ is itself polynomial-time computable, we call it a \emph{parameterization}. We shall assume that $\kappa(x) \leq \length{x}$ holds for all $x \in {\{\bits0, \bits1\}}^\ast$. Consider a problem for which the fastest known algorithm has a worst-case running time in $2^{\bigO(\length{x})}$. If, for some parameterization $\kappa$, we can give an algorithm of which the worst-case running time on any instance $x$ is in $2^{\bigO(\kappa(x))}\poly(\length{x})$ and, furthermore, we have that $\kappa(x) \ll \length{x}$ holds for at least \emph{some} arbitrarily large instances, then we can argue that $\kappa$ is a more accurate measure of the complexity of instances than is their size, since the running time of the second algorithm is exponential \emph{only} in the parameter value. Note that this implies that interesting parameterizations cannot be monotonic functions.
More generally, for $X \subseteq {\{\bits0, \bits1\}}^\ast$ and a parameterization $\kappa$, a \emph{parameterized problem} $(X, \kappa)$ is said to be \emph{fixed-parameter tractable (fpt)} if, for some computable function $f$ and constant $c \geq 0$, there is an algorithm solving any instance $x$ of $X$ in time $f(\kappa(x))\length{x}^c$.\footnote{ From here onward, we may write $k$ for $\kappa(x)$ when there is no risk of confusion. Also, $n$ stands for $\length{x}$ when specifying the complexity of an algorithm. } The essential feature of such running times is that the parameter value and instance size appear only in separate factors. \paragraph*{Kernelization.} An important notion in the study of fixed-parameter tractability is that of \emph{kernelization}. Informally, a kernelization (or kernel) for a parameterized problem is a polynomial-time algorithm that, for any input instance, outputs an equivalent instance of which the size is upper-bounded by a function of the parameter. This type of algorithm is usually presented as a formalization of preprocessing in the parameterized setting. It reduces any instance with large size but small parameter value to an equivalent smaller instance, after which some other algorithm (possibly one with large complexity) is used to solve the reduced instance. Another explanation, which fits well with the idea of studying the complexity of individual instances, is that a kernelization extracts the \emph{hard core} of an instance. Of particular interest is the case where the upper bound on the size of the output instance of a kernelization is itself a polynomial function in the parameter. Such \emph{polynomial kernelizations} are important because they offer a quick way to obtain efficient fpt-algorithms for a problem. 
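As a concrete illustration of kernelization (not one of this paper's constructions), the classic Buss-style reduction for \pr{$k$-Vertex-Cover} can be sketched as follows; the function name and encoding are ours, and the sketch covers only the two standard reduction rules:

```python
def buss_kernel(edges, k):
    """Buss-style kernelization sketch for k-Vertex-Cover.

    Rule 1: a vertex of degree > k must be in every cover of size <= k,
    so take it into the cover and decrement k. Rule 2: once no such
    vertex remains, a yes-instance has at most k * k edges, so larger
    instances are trivial 'no'-instances. Returns the reduced instance
    (edge set, k') of size O(k^2), or None for a trivial 'no'.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:                       # Rule 1: forced vertex
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:         # Rule 2: too many edges
        return None
    return edges, k
```

The output instance has at most $k^2$ edges, so its size is bounded by a polynomial in the parameter alone, as the definition requires.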
If $X$ is solvable in exponential time, then the existence of a polynomial kernelization for $(X, \kappa)$ means that the problem can be solved in time $2^{\poly(k)}\poly(n)$, which roughly corresponds to what we might reasonably consider to be useful in practice. Conversely, for many parameterized problems that can be solved by algorithms with such running times (for example, \pr{$k$-Vertex-Cover}), it is also possible to show the existence of polynomial kernelizations. However, there are also exceptions, such as the \pr{$k$-Path} problem, where an algorithm with time complexity $2^{\bigO(k)}\poly(n)$, but no polynomial kernelization, is known. It was a long-standing open question whether the existence of polynomial kernelizations is equivalent to having fpt-algorithms with a particular kind of running time. Eventually, \citet{bodlaender2009problems} showed that for many fixed-parameter tractable problems (including \pr{$k$-Path}), the existence of polynomial kernels would imply the unlikely complexity-theoretic inclusion $\cl{NP} \subseteq \cladv{coNP}{poly}$. This framework for proving conditional lower bounds against polynomial kernels was subsequently considerably extended and strengthened \citep{bodlaender2014kernelization,drucker2015new} (see also the survey of \citet{kratsch2014recent}). In the same paper, \citeauthor{bodlaender2009problems} also unconditionally prove the existence of a parameterized problem that is solvable in time $\bigO(2^k n)$, but has no polynomial kernels, thus ruling out the possibility of an equivalence between polynomial kernels and fpt-algorithms with running times of the form $2^{\poly(k)}\poly(n)$. \paragraph*{Generalized kernelization.} A \emph{Turing kernelization} is an algorithm that can solve any instance of a parameterized problem in polynomial-time, provided it can query an oracle for the same problem with instances of which the size is upper-bounded by a function of the parameter value of the input.
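The kernelize-then-solve pipeline behind such $2^{\poly(k)}\poly(n)$ running times can be sketched generically. This is a hedged illustration for \pr{$k$-Vertex-Cover}: `kernel` stands for any polynomial kernelization (e.g.\ a Buss-style reduction) that returns a reduced instance or `None` for a trivial `no'; the brute-force stage is then exponential only in the reduced parameter:

```python
from itertools import combinations

def solve_vc_via_kernel(edges, k, kernel):
    """Kernelize-then-solve sketch: run a polynomial kernelization,
    then solve the O(k^2)-size reduced instance exhaustively, giving a
    2^poly(k) * poly(n) algorithm overall. `kernel(edges, k)` is an
    assumed kernelization returning (reduced_edges, k') or None."""
    reduced = kernel(edges, k)
    if reduced is None:
        return False
    edges2, k2 = reduced
    vertices = sorted({v for e in edges2 for v in e})
    # exhaustive search on the kernel: try all vertex sets of size <= k'
    for r in range(k2 + 1):
        for subset in combinations(vertices, r):
            cover = set(subset)
            if all(cover & e for e in edges2):
                return True
    return False
```

The inefficient stage touches only the kernel, never the original instance, which is exactly why a polynomial kernel upgrades an exponential-time algorithm to an fpt one.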
The idea here is that if we are willing to run an inefficient algorithm on an instance of size bounded in terms of the parameter alone (as was the case with \emph{regular} kernelizations), then we might as well run this algorithm on more than one such instance. A regular kernelization can be regarded as a particular, restricted type of Turing kernelization that a) runs the polynomial kernelization algorithm on the input, b) queries the oracle for the resulting output instance, and c) outputs the oracle's answer. As in the case of regular kernelizations, a \emph{polynomial Turing kernelization} is such that the bound on the size of the query instances is itself a polynomial function. Polynomial Turing kernelizations are not as well-understood as regular kernels. The methods for proving lower bounds against the size of regular kernels do not seem to apply to them. Indeed, there are problems that most likely have no polynomial kernels, but which \emph{do} admit a polynomial Turing kernelization. An example is \mbox{\pr{$k$-Leaf-Subtree}} (called \pr{Max-Leaf-Subtree} in \citep{cygan2016parameterized}). Furthermore, there are only a few examples of non-trivial polynomial Turing kernelizations for problems that are not believed to admit polynomial regular kernelizations, such as restricted versions of \pr{$k$-Path} \citep{jansen2016turing,jansen2018turing} and of \pr{$k$-Independent Set} \citep{thomasse2017polynomial}. Whether the general versions of these problems also have polynomial Turing kernels is a major open question in this field. Compared to the regular kind, polynomial Turing kernelizations have a number of computational advantages, such as the ability to output the opposite of the oracle's answer to a query (non-monotonicity), the ability to make polynomially (in the size of the input) many queries, and the ability to adapt query instances based on answers to previous queries (adaptiveness).
Rather than focus on specific computational problems to determine the difference in strength between Turing and regular kernelizations, we instead look into the possibility of unconditionally separating the computational strengths of these two types of algorithms in general. We investigate and answer a number of questions that, to our knowledge, were all open until now: \begin{itemize} \item Without relying on any complexity-theoretic assumptions, can we prove the existence of parameterized problems that admit polynomial Turing but not polynomial regular kernelizations? If so, which of the computational advantages of Turing kernelizations are sufficient for an unconditional separation? Note that for \mbox{\pr{$k$-Leaf-Subtree}}, only a larger number of queries is used, the known polynomial Turing kernel being both monotone and non-adaptive \citep[see][Section~9.4]{cygan2016parameterized}. On the other hand, the kernels in \citep{jansen2016turing} and \citep{thomasse2017polynomial} are adaptive. \item Does every parameterized problem that is decidable in time $2^{\poly(k)}\poly(n)$, also admit a polynomial Turing kernelization? \item To what extent can we relax the restrictions on regular kernelizations (viewed as Turing kernelizations), while still being able to apply known lower bound techniques? For example, can we rule out, for some natural problems, the existence of non-monotone kernels that make a few adaptive oracle queries? 
\end{itemize} \subsection{Overview of our results} \begin{wrapfigure}[20]{r}{0pt} \begin{tikzpicture} \graph[grow up=1.363,nodes={anchor=center,align=center,inner xsep=1.4em},edges={double equal sign distance,-implies}]{ ker/"polynomial kernels" -- cTuring/"polynomial Turing kernels with\\a constant number of queries" -- psize/"psize kernels" -- tt/"polynomial\\truth-table kernels" -- T/"polynomial\\Turing kernels" -- super/"fixed-parameter tractable"; }; \coordinate (bottom left) at (current bounding box.south west); \path (current bounding box.north east) ++(0pt, -16pt) coordinate (top right); \pgfresetboundingbox; \useasboundingbox (bottom left) rectangle (top right); \end{tikzpicture} \caption{ A hierarchy of polynomial kernels. Arrows signify a strict increase in computational power. } \label{fig:hierarchy} \end{wrapfigure} We show that each of the advantages of polynomial Turing kernelizations over polynomial regular kernelizations is, by itself, enough to unconditionally separate the two notions. This produces a hierarchy of kernelizability within the class of problems that admit polynomial Turing kernelizations; see Figure~\ref{fig:hierarchy}. Specifically, we show that: \begin{itemize} \item there are problems that are not polynomially kernelizable, but do admit a polynomial Turing kernelization that makes a single oracle query (Theorem~\ref{thm:h1}); \item there are problems that admit non-adaptive polynomial Turing kernelizations (also known as polynomial \emph{truth-table} kernelizations), but cannot be solved by polynomial Turing kernelizations making a constant number of queries, even adaptively (Theorems~\ref{thm:htt} and~\ref{thm:hpsize}); \item there are problems that admit adaptive polynomial Turing kernelizations but not polynomial truth-table kernelizations (Theorem~\ref{thm:ht}).
\end{itemize} Next, we show (Theorem~\ref{thm:htop}) that it is not enough for a problem to be decidable in time $2^{\poly(k)}\poly(n)$ in order for it to have a polynomial Turing kernelization. In fact, the problem we construct can be solved in time $\bigO(2^k n)$. Our theorem is stronger than a comparable result of \citeauthor{bodlaender2009problems}, who only exclude regular kernelizations. We obtain a considerably simpler proof, harnessing the Time Hierarchy Theorem instead of a direct diagonalization. Finally, we ask how far up the hierarchy the known methods for proving lower bounds against polynomial kernelization can be applied. The example of \mbox{\pr{$k$-Leaf-Subtree}} shows that they should already fail somewhere below polynomial truth-table kernelizations. Indeed, we identify what we call \emph{psize kernelizations} as the apparently strongest type of polynomial Turing kernel that can be ruled out by current lower bound techniques (Section~\ref{sec:lowerbounds}). A psize kernelization makes $\poly(k)$ non-adaptive oracle queries (of size $\poly(k)$), and then feeds the oracle's answers into a poly-sized circuit to compute its own final answer. In terms of computational power, this type of kernelization stands between polynomial Turing kernelizations that make only a constant number of queries and polynomial truth-table kernelizations (Section~\ref{sec:separations}, Theorems~\ref{thm:htt} and~\ref{thm:hpsize}). \subsection{Proof techniques} The price we pay for being able to prove unconditional separations is that the problems we construct in the proofs are artificial rather than natural. This is unavoidable, however, because computational problems that arise naturally will typically belong to classes that are hard to separate from $\cl{P}$ (such as $\cl{NP}$, $\cl{PH}$, $\cl{PP}$, etc.).
Thus, any claim that some parameterized version of a natural problem admits no polynomial kernelization would currently have to rely on some complexity-theoretic assumptions. In the construction of every problem witnessing a separation, diagonalization will be involved, in one way or another. However, the application of diagonalization arguments in this context has some subtle issues. An intuitive reason for this is the fact that it is very difficult to control the complexity of a problem that is constructed via an argument using diagonalization against polynomial-time machines. Without additional complexity-theoretic assumptions, such problems can be forced to reside in powerful classes such as $\cl{EXP}$. Positioning them in any interesting smaller classes is not straightforward. By contrast, the difference between $\cl{P}$ and the class of problems that can be decided in polynomial-time with a very restricted form of access to an oracle, seems rather thin, and it is by no means clear whether a problem that is constructed via diagonalization can be placed between these two classes. In Section~\ref{sec:separations} we discuss these issues, as well as how to overcome them, in detail. Here, let us mention that the overall structure of our artificial problems resembles that of examples of natural problems which, subject to complexity-theoretic assumptions, admit polynomial Turing but not regular kernelizations. Because of this, even the artificial examples we construct provide new insights into the power of Turing kernelization. \section{Preliminaries} We assume familiarity with standard notations and the basics of parameterized complexity theory, and refer the reader to \citep{flum2006parameterized} for the necessary background. Here we review only the definitions of the notions most important for our work.
\begin{definition} A \emph{kernelization} (or \emph{kernel}) for a parameterized problem $(X, \kappa)$, where $X$ is a subset of ${\{\bits0, \bits1\}}^\ast$ and $\kappa$ is a parameterization, is a polynomial-time algorithm that, on a given input $x \in {\{\bits0, \bits1\}}^\ast$, outputs an instance $x' \in {\{\bits0, \bits1\}}^\ast$ such that $x \in X \Leftrightarrow x' \in X$ holds, and, for some fixed computable function $f$, we have $\length{x'} \leq f(\kappa(x))$. The function $f$ is referred to as the size of the kernel. The kernel is said to be \emph{polynomial} if $f$ is a polynomial. \end{definition} \begin{definition} A \emph{Turing kernelization} for a parameterized problem $(X, \kappa)$ is a polynomial-time algorithm that decides any instance $x$ of $X$ using oracle queries to $X$ of restricted size. For some fixed computable function $f$ that is independent of the input, the size of the queries must be upper bounded by $f(\kappa(x))$. A Turing kernelization is \emph{polynomial} if $f$ is a polynomial. A Turing kernelization is a \emph{truth-table kernelization} if, on every input, all of its oracle queries are independent of the oracle's answers. Thus, as an oracle machine, a truth-table kernelization is non-adaptive. \end{definition} A parameterized problem that exemplifies the relevance of our results is \mbox{\pr{$k$-Leaf-Subtree}}, where a graph $G$ and integer $k$ are given, and the question is whether $G$ has a subtree with at least $k$ leaves. This problem admits a polynomial Turing kernelization but no polynomial regular kernelization, unless $\cl{NP} \subseteq \cladv{coNP}{poly}$. See Section~9.4 of \citep{cygan2016parameterized} for a proof of the former, and Chapter~15 of the same reference for a proof of the latter fact. 
\section{Separations} \label{sec:separations} To prove an unconditional separation between polynomial Turing kernelizability and polynomial regular kernelizability (or between two intermediate kinds of kernelizability), we construct a problem of which the instances can be solved in polynomial time with oracle queries for small instances of the same problem. We shall make sure that the instances cannot be solved in polynomial time without such queries (recall that a problem decidable in polynomial time trivially admits a polynomial kernelization). These requirements prevent us from constructing the classical part of our parameterized problem via simple diagonalization against polynomial-time machines. The instances of the resulting language would not depend on each other in a way that would allow oracle queries to be useful, nor would all instances be solvable in time $p(n)$ for some \emph{fixed} polynomial $p$. Solving an instance of such a language requires simulating Turing machines (\emph{TM}s) for a polynomial number of steps, but the degree of these polynomials increases with $n$. Thus, a hypothetical polynomial Turing kernelization would neither be able to solve the instances of such a language directly within the allowed time, nor use its oracle access to speed up the computation. An additional difficulty arises due to the bound on the size of the oracle queries (polynomial in $k$). If the parameter value of an instance $x$ is too small relative to $\length{x}$, then the restricted oracle access of a polynomial Turing kernelization may offer no computational advantage, since the instances for which the oracle can be queried will be small enough to be solved directly within the required time bound.
These issues can be overcome by designing a problem that shares what seems to be the essential feature of natural problems that, under complexity-theoretic assumptions, admit polynomial Turing but not polynomial (regular) kernelizations, such as the \mbox{\pr{$k$-Leaf-Subtree}} problem. Recall that for this problem, a quadratic kernelization exists for the case when the input graph is connected, but that a polynomial kernelization for general graphs is unlikely to exist. The known polynomial Turing kernelization for this problem works on general graphs by computing the kernel for each connected component of the input graph, and then querying the oracle for each of the $\bigO(n)$ resulting instances of size $\bigO(k^2)$ \citep[see][Section~9.4]{cygan2016parameterized}. The crucial aspect here is that although the general problem may not admit polynomial kernelizations, it has a subproblem that does. Furthermore, the polynomial Turing kernelization only queries instances of this subproblem. The problems we construct will also have a polynomially kernelizable ``core,'' as well as a ``shell'' of instances that can be solved efficiently with small queries to the core. Taking $V$ to be some decidable language, we can define \begin{equation*} X(V) = \{\texttt{0}x \suchthat x \in V\} \cup \left\{\texttt{1}x \suchthat[\middle] \textrm{. . .}\right\}, \end{equation*} where the ellipsis stands for a suitable condition that can be verified with small queries to $V$. With the parameterization $\kappa$ such that $\kappa(\texttt{0}x) = \length{x}$ and $\kappa(\texttt{1}x) = \log\length{x}$ for all $x \in {\{\bits0, \bits1\}}^\ast$, the first set in the above disjoint union plays the role of the polynomially kernelizable core (it admits the trivial kernelization), while the second set plays the role of the shell. 
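The component-wise query pattern just described can be sketched abstractly. The names are ours: `kernelize` stands for the assumed quadratic kernel on connected graphs, and `oracle` for the size-restricted oracle of the Turing kernelization:

```python
def turing_kernel(components, k, kernelize, oracle):
    """Non-adaptive Turing-kernelization pattern (sketch).

    For each connected component of the input graph, compute an
    equivalent instance of size O(k^2) and query the oracle for it.
    The input graph has a subtree with >= k leaves iff some connected
    component does, so the kernelization accepts iff some query does.
    """
    queries = [kernelize(comp, k) for comp in components]  # O(n) queries of size O(k^2)
    return any(oracle(q) for q in queries)
```

Note that all queries are fixed before any is answered, so this pattern is monotone and non-adaptive; its only advantage over a regular kernel is the number of queries.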
The crucial observation now is that we can choose the condition that determines membership of an element of the form $\texttt{1}x$ in $X(V)$ in such a way that a polynomial-time algorithm can decide the instance using small queries of the form $\texttt{0}w$, \emph{regardless of the choice of $V$}. Having thus secured the existence of a polynomial Turing kernelization (perhaps one that is further restricted), we are now free to construct $V$ via diagonalization against some weaker type of kernelization, so as to get the desired separation. Using this approach, we prove that each of the computational advantages a polynomial Turing kernelization has over polynomial (regular) kernelizations, results in a strictly stronger type of kernelization, as shown in Figure~\ref{fig:hierarchy}. \begin{theorem} \label{thm:h1} There is a parameterized problem that has a polynomial Turing kernelization using only a single oracle query, but admits no polynomial kernelizations. \end{theorem} \begin{proof} Given any decidable set $V$, we can define \begin{equation*} X(V) = \{\texttt{0}x \suchthat x \in V\} \cup \left\{\texttt{1}x \suchthat[\middle] \log\length{x} \in \mathbb{N}\text{ and }\texttt{0}^{\log\length{x}} \notin V\right\}, \end{equation*} parameterized so that for all $x \in {\{\bits0, \bits1\}}^\ast$, $\kappa(\texttt{0}x) = \length{x}$ and $\kappa(\texttt{1}x) = \log\length{x}$. Clearly, the problem $(X(V), \kappa)$ has a polynomial Turing kernelization making a single query, regardless of the decidable set $V$. For instances of the form $\texttt{0}x$, the answer can be obtained by querying the oracle directly for the input, and if the input is $\texttt{1}x$, one can query $\texttt{0}^{\log\length{x} + 1}$ and output the opposite answer. We shall construct the set $V$ by diagonalization, ensuring that $X(V)$ does not admit a polynomial (regular) kernelization. 
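A minimal sketch of this one-query kernelization, with the oracle for $X(V)$ abstracted as a Python callable and binary strings modelled as ordinary strings (the modelling is ours):

```python
import math

def single_query_kernel(x, oracle):
    """One-query Turing kernelization for X(V) (Theorem h1), sketched.

    A core instance '0'+w is queried directly (query size k + 1). A
    shell instance '1'+w is in X(V) iff log|w| is an integer and
    '0'^{log|w|} is not in V, i.e. iff '0'*(log|w| + 1) is NOT in X(V),
    so the kernel outputs the negation of its single oracle answer.
    """
    tag, body = x[0], x[1:]
    if tag == "0":
        return oracle(x)
    log_n = math.log2(len(body))
    if not log_n.is_integer():
        return False
    return not oracle("0" * (int(log_n) + 1))   # non-monotone: negate the answer
```

The single query in the shell case has size $\log\length{x} + 1$, linear in the parameter; the negation is exactly the non-monotonicity that the diagonalization below exploits.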
Note that the kernelization procedures we diagonalize against can query $X(V)$, whereas we only decide the elements of $V$. Because every problem that admits a polynomial kernelization can also be decided by a polynomial-time TM that makes a single query of size $\poly(k)$ and then outputs the oracle's answer, we only need to diagonalize against this type of TM. As in a standard diagonalization argument, we run every such machine for an increasing number of steps, using as input the string $\texttt{1}\texttt{0}^{2^n}$ (the parameter value of which is $n$), where $n$ is chosen large enough for decisions made at previous stages to not interfere with the current simulation. Each machine is simulated until it runs out of time or makes an oracle query. Whenever the machine makes an oracle query different from $\texttt{1}\texttt{0}^{2^n}$, we answer it according to the current state of the set $V$. To complete the diagonalization, we either add $\texttt{0}^n$ to $V$ or not, so as to ensure the machine's answer is incorrect. Note that for sufficiently large values of $n$, the string $\texttt{1}\texttt{0}^{2^n}$ cannot be queried, because $2^n$ outgrows any fixed polynomial in $n$ ($\in \poly(k)$). Additionally, a query to $\texttt{0}\texttt{0}^n$ is of no concern as the machine is incapable of negating the answer of the oracle. \end{proof} Next, we show that polynomial truth-table kernelizations, which can make $\poly(n)$ oracle queries of size $\poly(k)$ but cannot change their queries based on the oracle's previous answers, are more powerful than a restricted version of the same type of kernelization that makes at most $\poly(k)$ queries. This restricted form of polynomial truth-table kernelization is of further interest because it can be ruled out by the current lower bounds techniques (see Section~\ref{sec:lowerbounds}). We give the definition here. 
\begin{definition} \label{def:psize} A polynomial truth-table kernelization is a \emph{psize kernelization} if, on any input instance with parameter value $k$, it makes at most $\poly(k)$ oracle queries and its output can be expressed as the output of a $\poly(k)$-sized circuit that takes the answers of the oracle queries as input. \end{definition} The proof of the next theorem follows the same pattern as that of Theorem~\ref{thm:h1}, except that in the diagonalization part of the proof we now use the restriction on the number of queries the machines can make. Recall that in Theorem~\ref{thm:h1} we made use of the machine's monotonicity, that is, the fact that its output must be equivalent to the outcome of its single oracle query. \begin{theorem} \label{thm:htt} There is a parameterized problem that has a polynomial truth-table kernelization but no psize kernelization. \end{theorem} \begin{proof} Given any decidable set $V$, we can define \begin{equation*} X(V) = \{\texttt{0}v \suchthat v \in V\} \cup \left\{\texttt{1}x \suchthat[\middle] \log\length{x} \in \mathbb{N}\text{ and }{\{\bits0, \bits1\}}^{\log\length{x}} \cap V \neq \emptyset\right\}, \end{equation*} parameterized so that for all $x \in {\{\bits0, \bits1\}}^\ast$, $\kappa(\texttt{0}x) = \length{x}$ and $\kappa(\texttt{1}x) = \log\length{x}$. Clearly, $(X(V), \kappa)$ has a polynomial truth-table kernelization regardless of $V$: on input $\texttt{0}x$ it queries the oracle for the input, and on input $\texttt{1}x$, with $\log\length{x} \in \mathbb{N}$, it queries the oracle with each string $\texttt{0}y$, for all $y \in {\{\bits0, \bits1\}}^{\log\length{x}}$, and accepts if one of the queries has a positive answer (otherwise it rejects). This procedure runs in polynomial time and makes at most $n$ oracle queries on any input of length $n+1$. We construct $V$ by diagonalizing against psize kernelization algorithms. 
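The non-adaptive query pattern from this proof can be sketched as before, with the oracle for $X(V)$ abstracted as a callable (our modelling assumptions as in the previous sketch):

```python
import math
from itertools import product

def truth_table_kernel(x, oracle):
    """Polynomial truth-table kernelization for X(V) (Theorem htt), sketched.

    On a shell instance '1'+w with log|w| an integer, all n queries
    '0'+y for y in {0,1}^{log|w|} are fixed before any is answered,
    and the kernel accepts iff at least one answer is positive.
    """
    tag, body = x[0], x[1:]
    if tag == "0":
        return oracle(x)
    log_n = math.log2(len(body))
    if not log_n.is_integer():
        return False
    queries = ["0" + "".join(bits) for bits in product("01", repeat=int(log_n))]
    answers = [oracle(q) for q in queries]   # non-adaptive: queries independent of answers
    return any(answers)
```

The number of queries here is $n$ on inputs of length $n+1$, far more than the $\poly(k) = \poly(\log n)$ queries a psize kernelization may make, which is exactly the gap the diagonalization exploits.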
To do this, we consider a computable list of TMs such that every machine appears infinitely often. At stage $i$ of the construction we choose $n$, a power of $2$, so that membership in $V$ has not been decided at a previous stage for any strings of length at least $\log n$. We then run the $i$-th machine on input $\texttt{1}\texttt{0}^n$ for $n^i$ steps. All new oracle queries are answered with `no', all other queries are answered so as to be consistent with previous answers. If the machine at stage $i$ terminates without having queried the oracle for all strings of the form $\texttt{0}y$ with $y \in {\{\bits0, \bits1\}}^{\log n}$, we add an unqueried string of this length to $V$ if and only if the machine rejects. If $P$ is a psize kernelization, then the number of oracle queries it makes on an input $\texttt{1}x$ is upper-bounded by $q(\log\length{x})$, for some fixed polynomial $q$. This is clearly $o(\length{x})$, so for some sufficiently large $i$ and $n$, $P$ will terminate without having queried all $n$ strings which can determine the correct answer. Thus, our diagonalization procedure will ensure that it terminates with the incorrect answer. On the other hand, the above-mentioned polynomial truth-table kernelization will always query all necessary strings in order to output the correct answer. \end{proof} Note that the conclusion of the above proof is actually that there exists a parameterized problem with a polynomial truth-table kernelization making $n-1$ oracle queries, that admits no polynomial (possibly adaptive!) Turing kernelization making fewer than $n-2$ queries on certain inputs of length $n$. A psize kernel fits this condition, but is much more restricted (in particular, the number of allowed queries is polynomial in the parameter value). 
Via a very similar proof, with a diagonalization argument relying on the number of oracle queries a machine can make, we can show that psize kernelizations are stronger than polynomial Turing kernelizations making any fixed finite number of queries, even adaptively. \begin{theorem} \label{thm:hpsize} There is a parameterized problem that has a psize kernelization but no polynomial Turing kernelization making only a constant number of (possibly adaptive) queries. \end{theorem} We can also show that adaptive queries provide a concrete computational advantage. The proof of the separation between general polynomial Turing and truth-table kernelizations also follows the pattern of the previous three theorems, but with a more involved diagonalization argument, due to the need to distinguish between adaptive and non-adaptive oracle TMs. \begin{theorem} \label{thm:ht} There is a parameterized problem that has a polynomial Turing kernelization but no polynomial truth-table kernelization. \end{theorem} \begin{proof} For any decidable set $V$ we can define the function $s^V\colon {\{\bits0, \bits1\}}^\ast \to {\{\bits0, \bits1\}}^\ast$ by \begin{equation*} s^V(q) = \begin{cases} \texttt{0}q & \text{if $q \notin V$}, \\ \texttt{1}q & \text{if $q \in V$}. \end{cases} \end{equation*} Also for a decidable set $V$, we define the following parameterized problem: \begin{equation*} X(V) = \{\texttt{0}x \suchthat x \in V\} \cup \left\{\texttt{1}x \suchthat[\middle] \log\length{x} \in \mathbb{N}\text{ and } \underbrace{(s^V \circ s^V \circ \cdots \circ s^V)}_{(\log\length{x})^2\text{ times}}(\texttt{0}^{\log{\length{x}}}) \in V\right\}, \end{equation*} where the parameterization is defined so that for all $x \in {\{\bits0, \bits1\}}^\ast$, $\kappa(\texttt{0}x) = \length{x}$ and $\kappa(\texttt{1}x) = \log\length{x}$.
The problem $X(V)$ has a polynomial Turing kernelization regardless of the set $V$: On inputs of the form $\texttt{0}x$, the machine queries the oracle with its input (whose size is linear in the parameter value), and outputs the answer. On inputs of the form $\texttt{1}x$ the machine makes the following $(\log\length{x})^2$ queries: $\texttt{0}^{\log\length{x}+1}$, $\texttt{0}b_1 \texttt{0}^{\log\length{x}}$, $\texttt{0}b_2 b_1 \texttt{0}^{\log\length{x}},\ldots,\texttt{0}b_{(\log\length{x})^2}\ldots b_1 \texttt{0}^{\log\length{x}}$, where $b_i$ is the outcome of the $i$-th query, for each $i \le (\log\length{x})^2$. The output is the answer of the last oracle query. Since each of the queries in the second case is of size at most quadratic in $\kappa(1x) = \log\length{x}$, this procedure is a polynomial Turing kernelization. We now construct the set $V$ so that no polynomial truth-table kernelization can solve $X(V)$. Consider a variant of oracle TMs where the oracle can be queried for an arbitrary number of queries at once. Let $P_1, P_2, \ldots$ be a computable list of all such TMs in which each machine appears infinitely often. At each stage $i \in \mathbb{N}$, we set $n$ to be the smallest positive integer so that no oracle queries to $X(V)$ at any previous stage of the simulation depend on instances of $V$ of size at least $n$, and so that $n > i$ and $2^n > n^i$. At stage $i$ of the construction, we run $P_i$ on input $\texttt{1}\texttt{0}^{2^n}$ for $(2^n)^i$ steps (note that this is a polynomial of degree $i$ in $2^n+1$, the size of the input). In case the machine queries the oracle, let $S$ be the set of strings it queries. If $S$ includes strings of length at least $2^n$, we move on to the next stage. In particular, when no query of length $2^n + 1$ is made, $P_i$ is not making a query with prefix \texttt{1} that is equivalent to the input. 
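The adaptive query chain of this kernelization can be sketched as follows, again with the oracle abstracted as a callable (our modelling): each answer bit is prepended to the current string, mirroring $s^V$, and the final query decides membership.

```python
import math

def adaptive_kernel(x, oracle):
    """Adaptive Turing kernelization for X(V) (Theorem ht), sketched.

    Follows the chain w_0 = 0^{log n}, w_i = b_i w_{i-1}, where bit b_i
    is the oracle's answer to '0'+w_{i-1} (i.e., whether w_{i-1} is in V),
    mirroring s^V. After (log n)^2 steps, the final query '0'+w asks
    whether the chain's endpoint lies in V. Every query has size
    O((log n)^2), quadratic in the parameter.
    """
    tag, body = x[0], x[1:]
    if tag == "0":
        return oracle(x)
    log_n = math.log2(len(body))
    if not log_n.is_integer():
        return False
    log_n = int(log_n)
    w = "0" * log_n
    for _ in range(log_n ** 2):
        b = oracle("0" + w)                 # adaptive: the next query depends on b
        w = ("1" if b else "0") + w
    return oracle("0" + w)
```

Because each query is built from the previous answer, a non-adaptive machine cannot follow this chain without guessing it in advance, which is the intuition the diagonalization below makes precise.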
By the time bound, we have $\size{S} \leq 2^{ni}<2^{n^2}$, so there must be a string $y = b_{n^2}\ldots b_{2}b_{1}\texttt{0}^{n}$, $b_j \in {\{\bits0, \bits1\}}$, such that $\texttt{0}y$ is not in $S$. The queries in $S$ are answered as follows: all queries also made at previous stages are answered so as to be consistent with previous answers; all queries of the form $\texttt{0}b_j\ldots b_2b_1\texttt{0}^n$, with $j \le n^2 - 1$, are answered with $b_{j+1}$; all other queries are answered with \texttt{0} (`no'). For all $j \le n^2 - 1$ such that $b_{j+1} = \texttt{1}$, we place $b_j\ldots b_2b_1\texttt{0}^n$ into $V$. After thus answering the queries in $S$, we resume the simulation of $P_i$ for the remainder of its allotted $2^{ni}$ steps and treat every subsequent invocation of the query instruction as a crash. Finally, we place $y$ into $V$ if and only if $P_i$ terminated within the time bound and rejected, making $\texttt{1}\texttt{0}^{2^n}$ a `yes'-instance if and only if $P_i$ rejects it. Assume now that there is a polynomial truth-table kernelization for $X(V)$. Such a procedure will eventually be targeted in the above construction. Indeed, a problem has a truth-table kernelization precisely when it is decided by a machine that runs in polynomial time and can make all its queries at once. Let $i$ be such that $P_i$ is a polynomial truth-table kernelization for $X(V)$, running in time $p(\length{x})$ on any input of the form $\texttt{1}x$, and non-adaptively making oracle queries of size at most $q(\log\length{x})$, where $p$ and $q$ are fixed polynomials. As this machine occurs infinitely often in the list $P_1, P_2, \ldots$, we may assume that $i$ and its corresponding $n$ are large enough for $P_i$ to terminate on input $\texttt{1}\texttt{0}^{2^n}$, because we have $p(2^n+1) < 2^{ni}$. Moreover, we may assume that $i$ and $n$ are large enough for $q(n) < n^i < 2^n$ to hold.
As $P_i$ will not be able to query all strings of the form $\texttt{0}y\texttt{0}^{n}$ with $\length{y} = n^2$, it will, by our construction of $V$, incorrectly decide some instance of $X(V)$. \end{proof} Finally, we show that decidability in time $2^{\poly(k)}\poly(n)$ does not guarantee the existence of polynomial Turing kernelizations for the same problem. This strengthens a theorem of \citet{bodlaender2009problems}, who construct a problem with the above complexity but rule out only polynomial regular kernelizations. \begin{theorem} \label{thm:htop} For every time-constructible function $g(k) \in 2^{o(k)}$, there is a problem that is solvable in time $\bigO(2^k n)$ but admits no Turing kernelization of size $g(k)$. In particular, there is a problem that is solvable in time $\bigO(2^k n)$ but admits no polynomial Turing kernelization. \end{theorem} \begin{proof} Let $g(k)$ be a time-constructible function in $2^{o(k)}$. Without loss of generality, we may assume that $g(k)$ is also in $\Omega\left(2^{(\log k)^2}\right)$. Let $\kappa: \mathbb{N} \to \mathbb{N}$ be a time-constructible function such that we have $\kappa(n) \in \omega(\log n)\cap o(n)$ as well as $\kappa(g(k)) \in o(k)$ (for example, $\kappa(n) = \log n \log\left(\frac{g^{-1}(n)}{\log n}\right)$ is suitable). Let $t(n) = 2^{\kappa(n)}n$ and let $L$ be a language in $\mathbf{DTIME}(t(n))\setminus \mathbf{DTIME}(o(t(n)/\log t(n)))$. Such a language exists by the Time Hierarchy Theorem. Assigning each instance $x$ of $L$ the parameter value $k = \kappa(\length{x})$, we find that $L$ can be solved in time $\bigO(2^k n)$. Furthermore, we have \begin{equation*} \frac{t(n)}{\log t(n)} = \frac{2^{\kappa(n)}n}{\kappa(n)+\log n} \in \Omega\left(2^{\kappa(n)}\right), \end{equation*} so we may conclude $2^{o(\kappa(n))}\subseteq o(t(n)/\log t(n))$.
Assume now that for some polynomial $p$, there exists a Turing kernelization for $L$ that runs in time $p(n)$ and queries the oracle with instances of size bounded by $g(k)$, where we set $k = \kappa(n)$. We show that such a Turing kernelization can be used to solve $L$ in time $o(t(n)/\log t(n))$, contradicting the choice of the language. Our new algorithm will solve any instance $x$ with parameter value $k = \kappa(\length{x})$ by running the Turing kernelization on it, except that the instances for which the oracle is supposed to be queried are solved directly using the $\bigO(2^{\kappa(n)} n)$-time algorithm whose existence is guaranteed by the choice of $L$. The total running time of this new algorithm is then upper-bounded by: \begin{equation*} p(n) + p(n)2^{\kappa\left(g(k)\right)}g(k) \in 2^{o(k)} = 2^{o(\kappa(n))}, \end{equation*} which contradicts the lower bound on the deterministic time complexity of $L$. \end{proof} \section{Lower Bounds} \label{sec:lowerbounds} An immediate consequence of the separations arrived at in the previous section is that not all fixed-parameter tractable problems have polynomial kernelizations. However, for any particular parameterized problem the (non-)existence of a polynomial kernelization may not be easy to establish. The most fruitful program for deriving superpolynomial lower bounds on the size of regular kernelizations was started by \citet{bodlaender2009problems}. While a straightforward application of their technique to Turing kernelizations is not possible, an extension to the psize level in our hierarchy, Figure~\ref{fig:hierarchy}, is feasible. In order to keep our presentation focussed, we shall include only a limited exposition of the lower bound technique. For a more complete overview, refer to \citep{downey2016fundamentals,kratsch2014recent}, or turn to \citep{bodlaender2014kernelization} for an in-depth treatment.
Central to the lower bounds engine are two similar-looking classifications of instance aggregation. The first of these does not involve a parameterization. \begin{definition} A \emph{weak \textbf{and}-distillation} (\emph{weak \textbf{or}-distillation}) of a set $X$ into a set $Y$ is an algorithm that \begin{compactitem} \item receives as input a finite sequence of strings $x_1, x_2, \ldots, x_t$, \item uses time polynomial in $\sum_{i=1}^t \length{x_i}$, \item outputs a string $y$ such that \begin{compactitem} \item we have $y \in Y$ if and only if for all (any) $i$ we have $x_i \in X$, \item $\length{y}$ is bounded by a polynomial in $\max_{1 \le i \le t} \length{x_i}$. \end{compactitem} \end{compactitem} \end{definition} Note how the size of the output of a distillation is bounded by a polynomial in the \emph{maximum} size of its inputs and not by the sum of the input sizes. Originally, distillations were considered only for the case where the target set $Y$ equals $X$, hence the \emph{weak} designator in this more general definition. The parameterized counterpart to distillations is, as we shall soon see, more lenient than the non-parameterized one. \begin{definition} An \emph{\textbf{and}-compositional} (\emph{\textbf{or}-compositional}) parameterized problem $(X, \kappa)$ is one for which there is an algorithm that \begin{compactitem} \item receives as input a finite sequence of strings $x_1, x_2, \ldots, x_t$ sharing a parameter value $k = \kappa(x_1) = \kappa(x_2) = \ldots = \kappa(x_t)$, \item uses time polynomial in $\sum_{i=1}^t \length{x_i}$, \item outputs a string $y$ such that \begin{compactitem} \item we have $y \in X$ if and only if for all (any) $i$ we have $x_i \in X$, \item $\kappa(y)$ is bounded by a polynomial in $k$. \end{compactitem} \end{compactitem} \end{definition} Here, a bound is placed on the \emph{parameter value} of the output of the algorithm, instead of on the \emph{length} of the output.
Additionally, this bound is a function of the \emph{unique} parameter value shared by all input strings. Conceptually, a bound of this kind makes sense as parameter values serve as a proxy of the computational hardness of instances. Thus, a parameterized problem is compositional when instances can be combined efficiently without an increase in computational hardness. Generalizing the results of \citet{bodlaender2009problems,bodlaender2014kernelization}, we find that not just regular polynomial kernelizations, but also psize kernelizations tie the two ways of aggregating instances together. For our proof to work, two aspects of the definition of psize kernelizations on page~\pageref{def:psize} that were not made explicit are crucial. Firstly, because a psize kernelization is a \emph{polynomial} truth-table kernelization, the size of the queries can be bounded by a polynomial in the parameter value. Secondly, it is important to note that the circuits involved must be uniformly computable from the input instances. \begin{theorem} If $(X, \kappa)$ is an \textbf{and}-compositional (\textbf{or}-compositional) parameterized problem that has a psize kernelization, then $X$ has a weak \textbf{and}-distillation (weak \textbf{or}-distillation). \end{theorem} \begin{proof} Given a set $X$, consider the following set based on circuits and inputs derived from membership in $X$, \begin{equation*} C(X) = \{\langle\phi, (x_1, x_2, \ldots, x_t)\rangle \suchthat \text{$\phi$ is a circuit with $t$ inputs, accepting $(x_1 \in X, \ldots, x_t \in X)$}\}. \end{equation*} Note that a pairing of the specification of a circuit $\phi$ and $t$ strings $(x_1, x_2, \ldots, x_t)$ can be done so that $\length{\langle\phi, (x_1, x_2, \ldots, x_t)\rangle}$ is bounded by a polynomial in $\length{\phi} + \length{x_1} + \length{x_2} + \ldots + \length{x_t}$. We sketch the workings of a distillation that is given $x_1, x_2, \ldots, x_t$ as input.
This procedure is adapted from \citep{bodlaender2009problems}. First, the inputs are grouped by their parameter value $k_i = \kappa(x_i)$ and the composition algorithm is applied to each group, obtaining $(y_1, k'_1), (y_2, k'_2), \ldots, (y_s, k'_s)$. Taking $k_\mathrm{max} = \max_{1 \le i \le t} k_i$, we have $s \le k_\mathrm{max}$ and, for some polynomial $p$, all $k'_i$ are bounded by $p(k_\mathrm{max})$. Next, the psize kernelization is applied to each $(y_i, k'_i)$, obtaining $s$ polynomial sized circuits and $s$ sequences of strings to query in order to get the inputs of the circuits. These circuits and strings can be amalgamated (dependent on the type of composition) into a single circuit $\phi$ and sequence of strings $(z_1, z_2, \ldots, z_r)$. We claim that the mapping of $(x_1, x_2, \ldots, x_t)$ to $\langle\phi, (z_1, z_2, \ldots, z_r)\rangle$ constitutes a weak distillation of $X$ into $C(X)$. Both $s$ and $k_\mathrm{max}$ are bounded by $\max_{1 \le i \le t} \length{x_i}$, since, for all $i$, we have $k_i \le \length{x_i}$. Therefore, the proposed weak distillation procedure produces an output of which the size is bounded by a polynomial in $\max_{1 \le i \le t} \length{x_i}$ and its running time is indeed polynomial in $\sum_{i=1}^t \length{x_i}$. Moreover, by definition of a psize kernelization the required preservation of membership is satisfied, hence the procedure is truly a weak distillation of $X$ into $C(X)$. \end{proof} Assuming we have $\cl{NP} \not\subseteq \cladv{coNP}{poly}$, it has been shown that \cl{NP}-hard problems admit neither weak \textbf{or}-distillations \citep{fortnow2011infeasibility}, nor weak \textbf{and}-distillations \citep{drucker2015new}. Thus we can further our generalization of the results of \citet{bodlaender2014kernelization}. 
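The composition-then-kernelization pipeline of the proof above can be rendered abstractly. In the following hedged sketch, `compose`, `kernelize` and `amalgamate` are stand-ins for the composition algorithm, the psize kernelization, and the circuit amalgamation; none of these concrete names come from the paper.

```python
from itertools import groupby

def weak_distillation(instances, kappa, compose, kernelize, amalgamate):
    """Group inputs by parameter value, compose each group, kernelize each
    composed instance into a (circuit, queries) pair, then amalgamate."""
    keyed = sorted(instances, key=kappa)   # groupby needs sorted input
    groups = [(k, list(g)) for k, g in groupby(keyed, key=kappa)]
    composed = [compose(g, k) for k, g in groups]       # the (y_i, k'_i)
    kernels = [kernelize(y, kp) for y, kp in composed]  # circuits + queries
    return amalgamate(kernels)                          # <phi, (z_1..z_r)>
```

With trivial stubs (concatenation as `compose`, a tagged pass-through as `kernelize`), one composed instance is produced per distinct parameter value, matching the bound $s \le k_\mathrm{max}$ in the proof.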
\begin{corollary} If $(X, \kappa)$ is an \textbf{and}-compositional (\textbf{or}-compositional) parameterized problem and $X$ is \cl{NP}-hard, then $(X, \kappa)$ does not have a psize kernelization unless $\cl{NP} \subseteq \cladv{coNP}{poly}$. \end{corollary} Accordingly, our hierarchy of polynomial kernels is not merely synthetic and the place of many natural problems in the hierarchy is lower bounded. In light of the more general setting of \citet{bodlaender2014kernelization}, we remark that a generalization of our results to cross-composition (generalizing compositionality) and psize compression (generalizing psize kernelization) is immediate. \section{Classical Connections} Algorithms for fixed-parameter tractable problems are not easily diagonalized against. Such algorithms have a running time of the form $f(\kappa(x))\length{x}^c$, where $f$ is a computable function and $c$ a constant. The challenge in diagonalizing is caused by the absence of a computable sequence of computable functions such that every computable function is outgrown by a member of the sequence. However, as witnessed by this document, diagonalization can be used to uncover structure \emph{inside} \cl{FPT}. Key to this possibility is that a problem is fixed-parameter tractable precisely when it is kernelizable, and the running time bound for kernelizations does not include arbitrary computable functions. While, to our knowledge, not done before in a parameterized context, separating many--one, truth-table, and Turing reductions is an old endeavour, dating back to \citet{ladner1975comparison}. Indeed, kernelizations are in essence reductions, more specifically, they are \emph{autoreductions} in the spirit of \citet{trakhtenbrot1970autoreducibility}. Since kernelizations come with a time bound, a Turing kernelization could more accurately be described as a \emph{bounded Turing} kernelization, or \emph{weak truth-table} kernelization \citep[see][Section~3.8]{soare2016turing}. 
However, the adaptiveness of a Turing kernelization entails that the number of different queries it \emph{could} make (unaware of the answers of the oracle) is much higher than that of a truth-table kernelization, given the same time bound. In that sense, our separation based on adaptiveness, Theorem~\ref{thm:ht}, is also a separation based on the number of queries made. An important feature of kernelizations is not covered by an interpretation of kernelizations as autoreductions. Where the definition of an autoreduction excludes querying the input string, the definition of a kernelization imposes a stronger condition on the queries, namely a size bound as a function of the parameter value. In this light, it may be worthwhile comparing kernelizations to a more restrictive type of autoreduction, the \emph{self-reduction} \citep[see][Section~4.5]{balcazar1995structural}. Self-reducibility is defined in \citep{balcazar1995structural} as autoreducibility where all queries are shorter than the input. However, many of the results around self-reducibility extend to more general orders than the ``shorter than''-order and the definition can be generalized \citep{ko1983self}. While the size bound on the queries that is required of kernelizations does not fit the self-reducibility scheme perfectly, the similarities in the definitions urge the consideration of other forms of self-reducibility in a parameterized context. In particular, reducibility with a decreasing parameter value may be of interest. \bibliographystyle{plainnat}
\section{Conclusion} \label{sec:conclusion} This paper presented an algorithm for generating a feasible parameter set partition applicable to hybrid MPC problems. The algorithm consists of systematically breaking down the feasible parameter set into smaller simplices until these can be assigned an integer solution that is feasible everywhere in them. Convergence in a finite number of iterations was proven with novel insight into an overlap characteristic of the MINLP. The on-line evaluation of the partition takes polynomial time and thus can be used as a guaranteed real-time warm start of a mixed-integer solver. Extensive testing on randomly generated systems confirmed the complexity calculations, showed favorable convergence properties and suggested that the algorithm is robust enough to be applied to a wide variety of hybrid MPC problems. \section{Illustrative Example} \label{sec:example} This section tests Algorithm~\ref{alg:phase1} on a set of randomly generated dynamical systems. The goal is to demonstrate robustness by showing that the algorithm runs successfully for a generic system and to verify the complexity analysis of Section~\ref{subsec:complexity}. \subsection{MPC Problem Instance Generation} We first explain how the MPC problem instance is created for a randomly generated dynamical system. Consider the following multiple degree-of-freedom (DOF) oscillator, in continuous time (time is omitted for notational simplicity) and in its configuration basis: \begin{equation} \label{eq:mdof_config} M\ddot r+C\dot r+Kr = Lu, \end{equation} where $M\in\mathbb R^{n_r\times n_r}$, $M\succ 0$, is the mass matrix, $C\in\mathbb R^{n_r\times n_r}$ is the damping matrix, $K\in\mathbb R^{n_r\times n_r}$, $K\succeq 0$, is the stiffness matrix, $L\in\mathbb R^{n_r\times n_u}$ is an input map, $r\in\mathbb R^{n_r}$ is a vector of generalized coordinates and $u\in\mathbb R^{n_u}$ is the input.
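For reference, writing (\ref{eq:mdof_config}) in first-order form with state $x = (r, \dot r)$ amounts to assembling the companion-form blocks $A = \begin{bmatrix}0 & I\\ -M^{-1}K & -M^{-1}C\end{bmatrix}$ and $B = \begin{bmatrix}0\\ M^{-1}L\end{bmatrix}$. A minimal pure-Python sketch, assuming a diagonal mass matrix (as sampled below) so that $M^{-1}$ is trivial:

```python
def mdof_state_space(M_diag, C, K, L):
    """Assemble A, B for x = [r; rdot] from the configuration-space
    matrices; M is assumed diagonal (given as a list of masses)."""
    n = len(M_diag)
    Minv = [1.0 / m for m in M_diag]
    A = [[0.0] * (2 * n) for _ in range(2 * n)]
    B = [[0.0] * len(L[0]) for _ in range(2 * n)]
    for i in range(n):
        A[i][n + i] = 1.0                         # upper-right identity block
        for j in range(n):
            A[n + i][j] = -Minv[i] * K[i][j]      # -M^{-1} K
            A[n + i][n + j] = -Minv[i] * C[i][j]  # -M^{-1} C
        for j in range(len(L[0])):
            B[n + i][j] = Minv[i] * L[i][j]       # M^{-1} L
    return A, B
```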
The task is to generate $M,C,K$ and $L$ such that the system is controllable and has poles located in a prescribed region of the complex plane. Assuming that Caughey's condition holds \cite{SrikanthaPhani2003} such that $M,C,K$ are simultaneously diagonalizable, (\ref{eq:mdof_config}) can be written in its modal basis: \begin{equation} \label{eq:mdof_modal} \ddot\eta+\Lambda\dot\eta+\Omega\eta = \Gamma u, \end{equation} where $\eta=T^{\scriptscriptstyle\mathsf{T}} M^{1/2}r\in\mathbb R^{n_r}$ are the modal coordinates, $T\in\mathbb R^{n_r\times n_r}$ is the modal matrix, $\Lambda=T^{\scriptscriptstyle\mathsf{T}} M^{-1/2}CM^{-1/2}T$, $\Omega = T^{\scriptscriptstyle\mathsf{T}} M^{-1/2}KM^{-1/2}T$ and $\Gamma=T^{\scriptscriptstyle\mathsf{T}} M^{-1/2}L$. The matrices $\Lambda$ and $\Omega$ are diagonal such that each row of (\ref{eq:mdof_modal}) is a 1-DOF oscillator which contributes two poles to the overall system: \begin{equation} \label{eq:3} \ddot\eta_i+2\zeta_i\omega_{\mathrm{n},i}\dot\eta_i+\omega_{\mathrm{n},i}^2\eta_i = u_i\quad i=1,\dots,n_r, \end{equation} where we chose $\Gamma=I_{n_r}$ which ensures that (\ref{eq:mdof_config}) is controllable, $\zeta_i$ is the damping ratio and $\omega_{\mathrm{n},i}$ is the natural frequency of the $i$-th mode. Note that this implies $n_u=n_r$. To generate (\ref{eq:mdof_config}), it remains to choose $M$, $T$, $\Lambda$ and $\Omega$. We choose a uniform random diagonal $M=\mathrm{diag}(\{m_i\in [0.1,1]\}_{i=1}^{n_r})$, $T$ as the orthogonal matrix from QR decomposition of a Gaussian random matrix and $\Lambda,\Omega$ from randomly generating poles in the $s$-plane such that 1) the damping rate is $\in [1,10]$ s and 2) the damped frequency is $\le 2\pi$ rad/s and 3) damping ratio $\zeta\le 1$ with a probability of $0.2$ that $\zeta=1$. 
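The pole-sampling recipe above can be sketched as follows. This is an illustrative interpretation, not the authors' code: in particular, "damping rate in $[1,10]$ s" is read here as the decay time constant $\tau$, so the pole real part is $-1/\tau$; the damped frequency is then sampled directly, which automatically yields $\zeta \le 1$.

```python
import math, random

def random_modes(n_r, seed=0):
    """Sample n_r stable second-order modes: decay time constant tau in
    [1, 10] s, damped frequency <= 2*pi rad/s, and zeta = 1 (critically
    damped) with probability 0.2."""
    rng = random.Random(seed)
    modes = []
    for _ in range(n_r):
        tau = rng.uniform(1.0, 10.0)                 # decay time constant [s]
        sigma = 1.0 / tau                            # |Re(pole)|
        # with probability 0.2 the mode is critically damped (zeta = 1)
        omega_d = 0.0 if rng.random() < 0.2 else rng.uniform(0.0, 2 * math.pi)
        omega_n = math.hypot(sigma, omega_d)         # natural frequency
        zeta = sigma / omega_n                       # damping ratio, <= 1
        modes.append((zeta, omega_n))
    return modes
```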
Once generated, the system (\ref{eq:mdof_config}) is written in state space form $\dot x = Ax+Bu+Ew$: \begin{equation} \label{eq:mdof_ss} \hspace{-4mm} \setlength\arraycolsep{1pt} \begin{bmatrix} \dot r \\ \ddot r \end{bmatrix} = \begin{bmatrix} 0 & I_{n_r} \\ -M^{-1}K & -M^{-1}C \end{bmatrix} \begin{bmatrix} r \\ \dot r \end{bmatrix} + \begin{bmatrix} 0 \\ M^{-1}L \end{bmatrix}u + \begin{bmatrix} 0 \\ I_{n_r} \end{bmatrix} w, \end{equation} where $w\in\mathbb R^{n_r}$ is an exogenous disturbance force acting along each generalized coordinate. This system is discretized via zero-order hold with sampling frequency $\omega_{\mathrm{s}}=10\max_{\lambda\in\mathrm{spec}(A)}|\lambda|$, i.e. ten times faster than the fastest natural frequency present in the system \cite{Antsaklis2007}. Following Section~\ref{subsec:Theta}, $\Theta$ is chosen to be the smallest robust invariant set for (\ref{eq:mdof_ss}) using the uncertainty set $\mathcal W\triangleq\{w:\|w\|_\infty\le 10^{-3}\}$ and an LQR controller with a $Q_{\mathrm{lqr}}=0.1I_{2n_r}$ state penalty and an $R_{\mathrm{lqr}}=I_{n_u}$ input penalty \cite{Hennet1995,Trodden2016}. For the MPC law, the uncertainty model is changed to be norm-bounded: \begin{equation} \label{eq:norm_bounded_uncertainty} \mathcal W'\triangleq\{w\in\mathbb R^{n_r}:\|w\|_2\le 0.4\cdot\frac{10^{-3}\|x\|_2}{\max_{v\in\mathcal V(\Theta)}\|v\|_2}\}, \end{equation} which is a smaller uncertainty but, importantly, introduces second-order cone constraints into (\ref{eq:minlp}) \cite{Malyuta2019}. The ad hoc factor of $0.4$ is used to reduce uncertainty such that a planning horizon of $N=3$ is feasible for the robust MPC law, whereas only $N=1$ is guaranteed by the computation method for $\Theta$ \cite{Trodden2016}. 
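The zero-order-hold step above can be carried out with the standard augmented-matrix identity $\exp\left(\begin{bmatrix}A & B\\ 0 & 0\end{bmatrix}T\right) = \begin{bmatrix}A_d & B_d\\ 0 & I\end{bmatrix}$. A pure-Python sketch using a truncated Taylor series, adequate only for small, well-scaled $AT$ (as with the sampling period $T = 2\pi/\omega_{\mathrm{s}}$ implied by the choice of $\omega_{\mathrm{s}}$ above); a production code would use a library routine instead:

```python
def zoh_discretize(A, B, T, terms=30):
    """Zero-order-hold discretization of xdot = A x + B u via the
    exponential of the augmented matrix [[A, B], [0, 0]] * T."""
    n, m = len(A), len(B[0])
    N = n + m
    # augmented matrix [[A, B], [0, 0]], scaled by the sampling period T
    M = [[(A[i][j] if j < n else B[i][j - n]) * T if i < n else 0.0
          for j in range(N)] for i in range(N)]
    E = [[float(i == j) for j in range(N)] for i in range(N)]  # identity
    P = [row[:] for row in E]
    for k in range(1, terms):                  # accumulate E += M^k / k!
        P = [[sum(P[i][l] * M[l][j] for l in range(N)) / k for j in range(N)]
             for i in range(N)]
        E = [[E[i][j] + P[i][j] for j in range(N)] for i in range(N)]
    Ad = [row[:n] for row in E[:n]]            # top-left block
    Bd = [row[n:] for row in E[:n]]            # top-right block
    return Ad, Bd
```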
Finally, to make the control problem mixed-integer the control input is constrained to be in a non-convex set: \begin{align} \label{eq:input_constraint} u &\in \mathcal U=\{0\}\cup(\mathcal U_{\mathrm{ext}}\setminus \mathcal U_{\mathrm{int}}), \\ \mathcal U_{\mathrm{ext}} &\triangleq \{u\in\mathbb R^{n_r}:-u_{\max}\le u\le u_{\max}\}, \nonumber \\ \mathcal U_{\mathrm{int}} &\triangleq \{u\in\mathbb R^{n_r}:-10^{-3}u_{\max}\le u\le 10^{-3}u_{\max}\}, \nonumber \end{align} where $u_{\max,i}=\max_{v\in\mathcal V(\Theta)}|e_i^{\scriptscriptstyle\mathsf{T}} K_{\mathrm{lqr}}v|$ is the largest input magnitude required by the LQR controller along the $i$-th generalized coordinate. Note that since $\mathcal U_{\mathrm{ext}}$ and $\mathcal U_{\mathrm{int}}$ are origin-centered hyperrectangles, one can write $\mathcal U=\cup_{i=1}^{2n_r+1}\mathcal U_i$ where $\mathcal U_i$ are convex polytopes and $\mathcal U_1=\{0\}$. There are then $N(2n_r+1)$ degrees of freedom to choose which convex subsets of $\mathcal U$ the control inputs are to be in. The robust MPC law is then: \begin{equation} \label{eq:mpc_law} \hspace{-4mm} \@ifnextchar[{\@with}{\@without}[ x_0 = \theta, \\ & x_{k+1} = Ax_k+Bu_k+Ew_k\quad k=0:N-1, \\ & x_k\in\Theta\,\,\forall w_j\in\mathcal W'\,\,\,\, j=0:k-1,k=1:N, \\ & u_k\in\mathcal U_i\text{ for some }i\in\{1:2n_r+1\}, k=0:N-1, ]{}{\min}{x_k,u_k}{\sum_{k=0}^{N-1}u_k^{\scriptscriptstyle\mathsf{T}} R_{\mathrm{lqr}}u_k +x_{k+1}^{\scriptscriptstyle\mathsf{T}} Q_{\mathrm{lqr}}x_{k+1}} \hspace{-10mm} \end{equation} which can be transformed into the form (\ref{eq:minlp}) via H\"older's inequality as was shown in \cite{Malyuta2019}. Note that $p=2n_r$, therefore (\ref{eq:mpc_law}) is constrained to even parameter dimensions. \subsection{Algorithm Performance Statistics} Algorithm~\ref{alg:phase1} is applied to 100 randomly generated instances of (\ref{eq:mpc_law}) with $n_r=1,2,3$. 
This demonstrates that the algorithm can scale up to at least $p=6$ and $m=21$, with higher dimensions likely possible as discussed in Section~\ref{sec:future}. \begin{figure} \centering \begin{subfigure}[t]{.5\columnwidth} \centering \includegraphics[width=1\linewidth]{vol_vs_cell.pdf} \caption{Volume versus cell count.} \label{fig:vol_vs_cell} \end{subfigure}% \begin{subfigure}[t]{.5\columnwidth} \centering \includegraphics[width=1\linewidth]{vol_vs_time.pdf} \caption{Volume versus runtime.} \label{fig:vol_vs_time} \end{subfigure} \caption{Normalized convergence plots. Cumulative closed volume, cumulative closed leaf count and runtime are normalized by their final respective values. The solid/dashed lines show the envelope max/min while the dotted reference line shows linear convergence.} \label{fig:convergence} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{.5\columnwidth} \centering \includegraphics[width=1\linewidth]{runtime_vs_cell.pdf} \caption{Total runtime statistics.} \label{fig:runtime_scatter} \end{subfigure}% \begin{subfigure}[t]{.5\columnwidth} \centering \includegraphics[width=1\columnwidth]{cell_boxplot.pdf} \caption{Cell count statistics.} \label{fig:cell_boxplot} \end{subfigure} \begin{subfigure}[t]{.5\columnwidth} \centering \includegraphics[width=1\linewidth]{depth_boxplot.pdf} \caption{Tree depth statistics.} \label{fig:depth_boxplot} \end{subfigure}% \begin{subfigure}[t]{.5\columnwidth} \centering \includegraphics[width=1\linewidth]{kappa_boxplot.pdf} \caption{Normalized overlap statistics.} \label{fig:kappa_boxplot} \end{subfigure} \caption{Final partition statistics. Whiskers show the full range.} \label{fig:statistics} \end{figure} Figure~\ref{fig:convergence} shows convergence plots using the fraction of the volume of $\Theta$ made up by the closed leaves as the metric.
Since at first none and in the end all leaves are closed, this metric goes from $0$ at the start to $1$ at the end of Algorithm~\ref{alg:phase1} and is easily evaluated since the volume of a simplex is well known \cite{Stein1966}. The algorithm has a favorable convergence characteristic in that no convergence curve in our tests deviated significantly from a linear rate. The practical significance of this is that the algorithm progresses steadily towards filling up the entire volume of $\Theta$ rather than being very slow at the beginning and very fast at the end or vice versa. Either case would make the algorithm hard for the user to supervise since, especially for $p>3$, it becomes difficult to diagnose the reason for slow convergence. Figure~\ref{fig:runtime_scatter} shows the wall-clock total runtime corresponding to each run, which appears to increase linearly and with unity slope as a function of the final partition cell count. The linear trend agrees with the linear output complexity of Lemma~\ref{lemma:outputcomplexity} while the unity slope may be interpreted as meaning that, regardless of $p$, it takes the current implementation on average 1 second to add 1 closed leaf to the partition tree. Note that this measurement includes the time taken to traverse potentially many layers of the tree until adding a closed leaf (up to about 20 layers according to Figure~\ref{fig:depth_boxplot}). The fact that $p=2,4,6$ all lie on the same trend line indicates that the current implementation's bottleneck is not the complexity of the intermediate MINLPs that have to be solved but rather, e.g., database access speed. Figures~\ref{fig:cell_boxplot}, \ref{fig:depth_boxplot} and \ref{fig:kappa_boxplot} show statistics on the final tree leaf count and depth along with fitted complexity curves resulting from Section~\ref{subsec:complexity}.
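The volume computation underlying the convergence metric is elementary: a $p$-simplex with vertices $v_0,\ldots,v_p$ has volume $|\det(v_1-v_0,\ldots,v_p-v_0)|/p!$. A pure-Python sketch via Gaussian elimination with partial pivoting, illustrative rather than numerically hardened:

```python
def simplex_volume(verts):
    """Volume of the p-simplex with the given p+1 vertices (tuples)."""
    p = len(verts) - 1
    # edge matrix (v_1 - v_0, ..., v_p - v_0), one edge per row
    M = [[verts[i + 1][j] - verts[0][j] for j in range(p)] for i in range(p)]
    det = 1.0
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(M[r][c]))  # partial pivot
        if abs(M[piv][c]) < 1e-15:
            return 0.0                     # degenerate (flat) simplex
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
        det *= M[c][c]
        for r in range(c + 1, p):
            f = M[r][c] / M[c][c]
            for j in range(c, p):
                M[r][j] -= f * M[c][j]
    fact = 1
    for k in range(2, p + 1):              # p!
        fact *= k
    return abs(det) / fact
```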
Figure~\ref{fig:cell_boxplot} shows clearly that the partition tree leaf count increases exponentially with $p$ as stipulated by Corollary~\ref{corollary:phase1cellcountcomplexity}. Interestingly, we note from Figure~\ref{fig:depth_boxplot} that for some instances of (\ref{eq:mpc_law}) the tree depth does not go beyond the first layer. In other words, sometimes the Delaunay triangulation of $\Theta$ on line~\ref{alg:phase1:line:initialdelaunay} of Algorithm~\ref{alg:phase1} suffices. Note that because the complexities in Theorem~\ref{theorem:phase1treedepthcomplexity} and Corollary~\ref{corollary:phase1cellcountcomplexity} depend also on the overlap $\kappa$, which currently cannot be computed a priori, the regressions in Figures~\ref{fig:cell_boxplot} and \ref{fig:depth_boxplot} carry an omitted-variable bias. However, Corollary~\ref{corollary:phase1cellcountcomplexity} makes it possible to compute normalized values for $\kappa$ by assuming that the deviations in Figure~\ref{fig:cell_boxplot} of the actual cell count from the fitted one are due to $\kappa$ alone. This effectively captures the normalized variation required from $\kappa$ in order to explain the deviation of observed results from the regressed theoretical values. This is shown in Figure~\ref{fig:kappa_boxplot} where $\kappa=1/e$ if the match between the fitted and predicted cell counts is perfect. As expected, the effect of $\kappa$ diminishes for higher $p$ where the exponential complexity in $p$ dominates over the polynomial complexity in $\kappa$. \section{Future Work} \label{sec:future} Algorithm~\ref{alg:phase1} is subject to several potential improvements. First, it would be interesting to compute the overlap $\kappa$ given $\Theta$ and (\ref{eq:minlp}). This way, Algorithm~\ref{alg:phase1} could be certified to converge a priori.
Next, the partitioning process is parallelizable since lines~\ref{alg:phase1:line:getfirstopenleaf}-\ref{alg:phase1:line:closeleaf} can be executed in parallel for different leaves. Assuming that database communication latency can be highly optimized to the point of being negligible, we can say that lines~\ref{alg:phase1:line:getfirstopenleaf}-\ref{alg:phase1:line:closeleaf} can be made to execute entirely in parallel. Amdahl's law then predicts that $t_{\mathrm{p}}=t_{\mathrm{s}}/n_{\text{proc}}$ where $t_{\mathrm{p}}$, $t_{\mathrm{s}}$ and $n_{\text{proc}}$ are the parallel total runtime, serial total runtime and number of processors used, respectively. The total runtime can thus be reduced in inverse proportion to the number of processors available. Given that modern university facilities can typically provide access to on the order of $10^2$ processors and that another $10^2$ factor can be achieved by using a compiled programming language, we expect that a compiled parallel implementation can yield a speedup of at least $10^4$. According to Figure~\ref{fig:runtime_scatter}, this means that one could compute partitions with $p=4$ in 0.1~s, with $p=6$ in 10~s and (extrapolating) with $p=8$ in 1000~s. \section{Introduction} \label{introduction} Model predictive control (MPC) is a discrete-time control technique in which a receding horizon optimization problem is solved in order to determine the optimal control input at each time step. Advanced formulations of MPC include hybrid MPC (HMPC) and robust MPC (RMPC) \cite{Mayne2000,Bemporad2007,Mayne2014}. HMPC handles systems like chemical power plants, pipelines and aerospace vehicles whose dynamics either involve explicit discrete switches such as valves \cite{Bemporad1999a,OcampoMartinez2007} or have nonlinearities that can be appropriately modeled via a piecewise affine approximation \cite{Blackmore2012,Schouwenaars2006}.
RMPC handles systems that are affected by uncertainties such as in their dynamics, in the state estimate and in the input \cite{Bemporad2007,Malyuta2019}. Many practical applications call for a combined control of uncertain hybrid systems which requires solving a convex mixed-integer nonlinear program (MINLP) \cite{Richards2003,Hen2002,Corona2006}. While possible on powerful hardware, on-line MINLP solution is both slow and $\mathcal{NP}$-complete{} \cite{Bemporad1999a}, meaning that there is generally no real-time performance guarantee. To improve MPC on-line computational efficiency, some authors have worked on explicit MPC techniques which reduce on-line computational demand by pre-computing off-line all or part of the optimal solution. On-line it typically remains to evaluate a piecewise affine (PWA) function. Various explicit MPC methodologies have been proposed \cite{Alessio2009,Bemporad2006,Pistikopoulos2012}. When the MPC law is a linear or a quadratic program, an exact explicit law can be obtained by solving a multiparametric program. Exact solutions for more general programs are generally not possible due to non-convexity of common active constraint regions \cite{Bemporad2006b}. Instead, approximate solutions have been proposed via local linearization \cite{Pistikopoulos2007a,Oberdieck2017} or via optimal cost bounding by PWA functions over simplices \cite{Bemporad2006b,MunozDeLaPena2004,MunozDeLaPena2006} or hyperrectangles \cite{Johansen2004}. An approximate explicit solution to mixed-integer quadratic programs has been proposed based on difference of convex programming \cite{Alessio2006} and for MINLPs based on local linearization and primal/master subproblems \cite{Dua1998,Rowe2003}. Multiparametric programming, however, is restricted to relatively low dimensional systems due to the worst-case exponential partition complexity. 
Several authors in hybrid MPC have therefore suggested to retain on-line mixed-integer programming and either to reduce the integer variable's degrees of freedom \cite{Ingimundarson2007} or to abort the solver at a suboptimal solution when computation time becomes excessive \cite{Bemporad1999a}. The former solution, however, has no rigorous way of selecting a reference integer solution while the latter relies on the ability to use the previous time step's solution to warm start the mixed-integer solver, which is not always possible, for example in some robust MPC schemes \cite{Malyuta2019}. To address the issue of guaranteed real-time computation of a feasible integer solution in the general setting, our main contribution is a novel partitioning algorithm which pre-computes feasible integer solutions in a polytopic subset of the state space. This partition is stored as a tree which can be queried in polynomial time. As a result, the partition provides a guaranteed real-time warm start capability to the mixed-integer solver and is thus directly useful to \cite{Bemporad1999a,Ingimundarson2007}. Our second contribution is a convergence proof of the algorithm which, for the first time in the literature, considers an overlap property as being a driver of partition complexity. The paper is organized as follows. In Section~\ref{sec:problem_formulation} the scope of MPC formulations that our algorithm can handle is defined as a generic MINLP. The algorithm is then described in Section~\ref{sec:phase1} and its convergence, complexity and use-cases are discussed in Section~\ref{sec:properties}. The algorithm is tested on a large set of randomly generated dynamical systems with up to 6 states and 21 integer variable degrees of freedom, indicating that it is robust and can scale to medium-dimensional systems. Section~\ref{sec:future} outlines future research directions and is followed by some concluding remarks in Section~\ref{sec:conclusion}.
\section{Feasible Commutation Map Computation} \label{sec:phase1} This section proposes an algorithm for computing $f_\delta$. The main idea is to generate a coarse simplicial partition $\mathcal R=\{(\mathcal R_i,\delta_i)\}_{i=1}^{|\mathcal R|}$ such that $\Theta=\bigcup_{i=1}^{|\mathcal R|}\mathcal R_i$ and where each cell $\mathcal R_i$ is associated with a fixed commutation $\delta_i$ that is feasible everywhere in $\mathcal R_i$, i.e. $\mathcal R_i\subseteq\Theta^*_{\delta_i}$. We then define: \begin{equation} \label{eq:feasible_commutation_function} f_\delta(\theta)=\delta_i\text{ such that }\theta\in\mathcal R_i. \end{equation} \begin{lemma} \label{lemma:convexity} For any fixed $\delta\in\mathbb I^m$, $\Theta^*_\delta$ is a convex set and $V^*_\delta$ is a convex function. \begin{proof} Suppose $\theta',\theta''\in\Theta^*_\delta$. Let $\alpha',\alpha''\in [0,1]$, $\alpha'+\alpha''=1$ and $\theta=\alpha'\theta'+\alpha''\theta''$. Since $g,h$ are affine in their first argument and $\mathcal K$ is a convex cone: \begin{equation*} \begin{split} g(\theta,x,\delta)= \alpha'g(\theta',x,\delta)+\alpha''g(\theta'',x,\delta)=0, \\ h(\theta,x,\delta)= \alpha'h(\theta',x,\delta)+\alpha''h(\theta'',x,\delta)\in\mathcal K, \end{split} \end{equation*} so $\theta\in\Theta^*_\delta$ which is thus a convex set. Next, (\ref{eq:nlp}) is a minimization of $f$ over a convex set in $x$ which preserves convexity in $\theta$ by the joint convexity property of $f$ \cite{Boyd2004}. Thus, $V^*_\delta$ is a convex function. 
\end{proof} \end{lemma} \begin{algorithm} \centering \begin{algorithmic}[1] \State $\mathcal R\gets\emptyset$, $\bar\Theta\gets \Theta$ \For{all $\delta\in \mathbb I^m$} \State $\mathcal R\gets\{(\mathcal R',\delta): \mathcal R'\in\bar\Theta\cap\Theta^*_\delta\}\cup\mathcal R$ \State $\bar\Theta\gets\bar\Theta\setminus\Theta^*_\delta$ \If{$\bar\Theta=\emptyset$} \State STOP \EndIf \EndFor \caption{Brute force $f_\delta$ computation.} \label{alg:phase1bf} \end{algorithmic} \end{algorithm} Since $\Theta^*_\delta$ is convex, an arbitrarily precise inner approximation of it can be found \cite{Dueri2016}. A conceptually trivial method for generating $\mathcal R$ is given by Algorithm~\ref{alg:phase1bf}. The set difference and intersection operations in Algorithm~\ref{alg:phase1bf} are element-wise \cite{Baotic2009}. The idea is to exploit the ability to inner approximate $\Theta^*_\delta$ to procedurally ``cover'' $\Theta$ by intersecting it with fixed-commutation feasible parameter sets. The covering problem is combinatorial, however: in the worst case all $2^m$ possible values of $\delta$ are needed, excepting those that are infeasible directly due to the constraints in (\ref{eq:minlp}). Furthermore, accurate polytopic inner approximation of $\Theta^*_\delta$ in spaces of dimension higher than about $\mathbb R^4$ suffers from excessive vertex count \cite{Dueri2016}. Last but not least, the set intersection and set difference operations used by Algorithm~\ref{alg:phase1bf} have poor numerical properties, such as creating badly conditioned (i.e. quasi lower-dimensional) polytopes. One would like instead an algorithm which 1) can avoid exploring all $2^m$ combinations for $\delta$, 2) minimizes vertex count and 3) uses only numerically robust operations. 
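To make the covering idea concrete, the following self-contained sketch (illustrative only, not from the implementation) runs the loop of Algorithm~\ref{alg:phase1bf} on a toy one-dimensional parameter set, where each $\Theta^*_\delta$ is a hypothetical interval so that set intersection and element-wise set difference are trivial:

```python
from itertools import product

# Toy 1-D stand-in for the fixed-commutation feasible parameter sets:
# each commutation delta maps to a (hypothetical) interval of Theta
# where the fixed-commutation problem is feasible; None means infeasible.
THETA = (0.0, 1.0)
FEASIBLE = {
    (0, 0): (0.0, 0.45),
    (0, 1): (0.3, 0.8),
    (1, 0): (0.7, 1.0),
    (1, 1): None,
}

def subtract(intervals, cut):
    """Element-wise set difference of a list of intervals and one interval."""
    lo, hi = cut
    out = []
    for a, b in intervals:
        if b <= lo or a >= hi:  # disjoint: keep as-is
            out.append((a, b))
            continue
        if a < lo:              # left remainder survives
            out.append((a, lo))
        if b > hi:              # right remainder survives
            out.append((hi, b))
    return out

def brute_force_cover(theta, feasible):
    """Cover Theta with fixed-commutation cells (the brute-force loop)."""
    uncovered = [theta]
    partition = []
    for delta in product((0, 1), repeat=2):  # all 2^m commutations
        region = feasible[delta]
        if region is None or not uncovered:
            continue
        for a0, b0 in uncovered:             # element-wise intersection
            a, b = max(a0, region[0]), min(b0, region[1])
            if a < b:
                partition.append(((a, b), delta))
        uncovered = subtract(uncovered, region)
    return partition, uncovered

cells, leftover = brute_force_cover(THETA, FEASIBLE)
```

In one dimension the set operations are benign; the numerical issues described above arise precisely because no such simple interval arithmetic exists for general polytopes.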
We thus propose Algorithm~\ref{alg:phase1}. The first requirement is addressed via (\ref{eq:maxvol}), discussed below. The second is addressed by using only simplices, which have the lowest vertex count among full-dimensional polytopes. The third is addressed by working solely in the vertex representation, which is much more numerically robust than the halfspace representation and avoids the numerically fragile operation of converting between the two representations. \begin{algorithm} \centering \begin{algorithmic}[1] \State Create empty tree with open leaf $\Theta$ as root \State $\mathcal S\gets\mathtt{delaunay}(\mathcal V(\Theta))$ \label{alg:phase1:line:initialdelaunay} \State Add child open leaves $\mathcal S_i$ $\forall\mathcal S_i\in\mathcal S$ \label{alg:phase1:line:init} \While{any open leaf exists} \label{alg:phase1:line:whileloop} \State $\mathcal R\gets\text{the first open leaf}$ \label{alg:phase1:line:getfirstopenleaf} \If{(\ref{eq:minlp}) is infeasible for $\theta=c^{\mathcal R}$} \label{alg:phase1:line:barycenterinfeascheck} \State STOP, $\Theta\setminus\Theta^*\ne\emptyset$ \label{alg:phase1:line:stop} \Else \State $\hat\delta\gets\text{solve (\ref{eq:maxvol})}$ \label{alg:phase1:line:findmaxvolcommutation} \If{(\ref{eq:maxvol}) is infeasible} \State $\bar v,\bar v'\gets\arg\max_{v,v'\in\mathcal V(\mathcal R)}\|v-v'\|_2$ \label{alg:phase1:line:longestedge} \State $v_{\mathrm{mid}}\gets (\bar v+\bar v')/2$ \State Add child open leaf ${\mathop \mathrm{co}}\{(\mathcal V(\mathcal R)\setminus\{\bar v\})\cup\{v_{\mathrm{mid}}\}\}$ \State Add child open leaf ${\mathop \mathrm{co}}\{(\mathcal V(\mathcal R)\setminus\{\bar v'\})\cup\{v_{\mathrm{mid}}\}\}$ \label{alg:phase1:line:splitalonglongestedge2} \Else \label{alg:phase1:line:vertexfeasibilitytest} \State Replace leaf with closed leaf $(\mathcal R,\hat\delta)$ \label{alg:phase1:line:closeleaf} \EndIf \EndIf \EndWhile \caption{Proposed computation of $f_\delta$.} \label{alg:phase1} \end{algorithmic} \end{algorithm} 
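The longest-edge split on lines~\ref{alg:phase1:line:longestedge}-\ref{alg:phase1:line:splitalonglongestedge2} can be sketched in two dimensions as follows (an illustrative pure-Python fragment, not part of the implementation):

```python
from itertools import combinations
import math

def area(tri):
    """Area of a 2-D simplex (triangle)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def bisect_longest_edge(tri):
    """Split a triangle at the midpoint of its longest edge; each child
    replaces one endpoint of that edge by the midpoint."""
    v, vp = max(combinations(tri, 2), key=lambda e: math.dist(e[0], e[1]))
    mid = ((v[0] + vp[0]) / 2.0, (v[1] + vp[1]) / 2.0)
    child1 = [mid if w == v else w for w in tri]
    child2 = [mid if w == vp else w for w in tri]
    return child1, child2

R = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
c1, c2 = bisect_longest_edge(R)  # each child has half the volume of R
```

The halved volume of each child is what drives the convergence argument of Section~\ref{subsec:phase1convergence}.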
Algorithm~\ref{alg:phase1} creates a simplicial partition of $\Theta$ as follows. The partition is stored as a tree whose leaves are cells $(\mathcal S,\delta)$ storing the set $\mathcal S$ and associated commutation $\delta$. Non-leaf nodes in the tree store just the sets and make evaluating (\ref{eq:feasible_commutation_function}) more efficient (see Section~\ref{subsec:complexity}). A ``closed leaf'' refers to a cell that will be a leaf in the final tree while an ``open leaf'' will be further partitioned at the next iteration and thus will be merely a non-leaf node in the final tree. In the algorithm, we refer to nodes directly by their contents, i.e. $\mathcal S$ for open and $(\mathcal S,\delta)$ for closed leaves. On lines~1-\ref{alg:phase1:line:init} the tree root is initialized to $\Theta$ and, since generally $\Theta$ is not a simplex, Delaunay triangulation is first applied \cite[Section~9.3]{deBerg2008}. The tree is then iterated on line~\ref{alg:phase1:line:whileloop} in a depth-first manner until no open leaves are left. If Assumption~\ref{assumption:positiveoverlap} in Section~\ref{subsec:phase1convergence} does not hold, a depth-first search disproves it more quickly, such that the algorithm fails with less wasted computation. An open leaf $\mathcal R$ in the tree is selected on line~\ref{alg:phase1:line:getfirstopenleaf}. First, it is checked whether (\ref{eq:minlp}) is feasible at its barycenter. If not, this point is a certificate of infeasibility of (\ref{eq:minlp}) in $\Theta$, in which case Section~\ref{subsec:Theta} should be consulted. If however (\ref{eq:minlp}) is feasible, then lines~\ref{alg:phase1:line:findmaxvolcommutation}-\ref{alg:phase1:line:closeleaf} partition $\mathcal R$ into at most two simplices. First, it is checked whether $\mathcal R$ is fully contained inside some particular $\Theta^*_\delta$. The following lemma is used for this purpose. 
\begin{lemma} \label{lemma:containment} $\mathcal R\subseteq\Theta^*_{\hat\delta}$ $\Leftrightarrow$ (\ref{eq:nlp}) is feasible $\forall\theta\in\mathcal V(\mathcal R)$ and $\delta=\hat\delta$. \begin{proof} $(\Rightarrow)$ Since $\mathcal R\subseteq\Theta^*_{\hat\delta}$, $\theta\in\mathcal V(\mathcal R)\Rightarrow\theta\in\Theta_{\hat\delta}^*$. Since $\Theta_{\hat\delta}^*$ is the fixed-commutation feasible parameter set, (\ref{eq:nlp}) is by definition feasible. $(\Leftarrow)$ Any $\theta$ such that (\ref{eq:nlp}) is feasible satisfies, by definition, $\theta\in\Theta^*_{\hat\delta}$. By Lemma~\ref{lemma:convexity}, since $\Theta^*_{\hat\delta}$ is convex, ${\mathop \mathrm{co}}\{\theta\in\mathcal V(\mathcal R)\}\equiv\mathcal R\subseteq\Theta^*_{\hat\delta}$. \end{proof} \end{lemma} Using Lemma~\ref{lemma:containment}, one can efficiently check if $\mathcal R\subseteq\Theta^*_\delta$ for some $\delta$ via the following feasibility MINLP: \begin{equation} \label{eq:maxvol} \@ifnextchar[{\@with}{\@without}[ g(\theta,x_\theta,\delta)=0 \quad & \forall \theta\in\mathcal V(\mathcal R), \\ &&& h(\theta,x_\theta,\delta)\in\mathcal K& \forall \theta\in\mathcal V(\mathcal R), \\ &&& \delta\in\mathbb I^m, ]{\hat \delta(\mathcal R)}{\find}{}{\delta} \tag{V$^{\mathcal R}$} \end{equation} where $x_\theta$ denotes a feasible decision vector corresponding to the particular value of $\theta$. Problem (\ref{eq:maxvol}) can be solved as a standard MINLP; if a feasible commutation is found, it is associated with $\mathcal R$ and the leaf is closed. If (\ref{eq:maxvol}) is infeasible, however, then $\mathcal R$ is not fully contained in any $\Theta^*_\delta$. In this case, $\mathcal R$ is split into two smaller simplices at the midpoint of its longest edge. As explained in Section~\ref{subsec:phase1convergence}, this yields a volume reduction that necessarily leads to convergence if Assumption~\ref{assumption:positiveoverlap} holds. 
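As an illustration of Lemma~\ref{lemma:containment}, the following toy 2-D fragment (hypothetical halfplane data standing in for $\Theta^*_\delta$; a stand-in for the actual feasibility MINLP (\ref{eq:maxvol})) checks containment of a simplex by testing feasibility only at its vertices:

```python
# Theta*_delta is modeled as a hypothetical halfplane intersection
# {theta : a^T theta <= b}; convexity is what the lemma relies on.
HALFPLANES = [((1.0, 0.0), 0.9), ((0.0, 1.0), 0.9)]  # x <= 0.9, y <= 0.9

def feasible(theta):
    """Stand-in for feasibility of the fixed-commutation problem at theta."""
    return all(a[0] * theta[0] + a[1] * theta[1] <= b for a, b in HALFPLANES)

def simplex_contained(vertices):
    """Vertex test of the lemma: feasibility at all vertices implies
    containment of the whole simplex in the convex feasible set."""
    return all(feasible(v) for v in vertices)

inside = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
outside = [(0.0, 0.0), (1.5, 0.0), (0.0, 0.5)]
```

The vertex test requires only $|\mathcal V(\mathcal R)|$ feasibility checks, which is what makes (\ref{eq:maxvol}) a single finite-dimensional problem rather than a check over a continuum of parameters.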
\section{Phase II: Computing $f_\delta^\epsilon$} \label{sec:phase2} This section presents the next and final phase of the algorithm in which $f_\delta$ output by phase I is refined into the suboptimal commutation function $f_\delta^\epsilon$. This is achieved by subdividing the phase I partition of $\Theta$ further into simplices until an $\epsilon$-suboptimal $\delta$ choice is found. We begin by explaining the core logic step in the algorithm. Let $(\mathcal R,\delta)$ be one of the cells of the phase I partition. We wish to know whether $\delta$ is $\epsilon$-suboptimal in $\mathcal R$. This can be encoded in the following two bilevel MINLPs for the absolute and relative errors respectively: \begin{align} \label{eq:absolute_error} e_{\mathrm{a}}^*(\mathcal R) &= \max_{\theta\in\mathcal R,\delta'\in\mathbb I^m\setminus\{\delta\}} V_\delta^*(\theta)-V_{\delta'}^*(\theta), \tag{E$^{\mathcal R}_{\mathrm{a}}$} \\ \label{eq:relative_error} e_{\mathrm{r}}^*(\mathcal R) &= \max_{\theta\in\mathcal R,\delta'\in\mathbb I^m\setminus\{\delta\}} (V_\delta^*(\theta)-V_{\delta'}^*(\theta))/V_{\delta'}^*(\theta). \tag{E$^{\mathcal R}_{\mathrm{r}}$} \end{align} If $e_{\mathrm{a}}^*(\mathcal R)\le\epsilon_{\mathrm{a}}$ or $e_{\mathrm{r}}^*(\mathcal R)\le\epsilon_{\mathrm{r}}$ then, according to Definition~\ref{definition:suboptimal_commutation_function}, $\delta$ is $\epsilon$-suboptimal in $\mathcal R$. Both problems, however, are non-convex and as such their globally optimal solution is not readily obtainable. Nevertheless, we can formulate a tractable alternative optimization that is \textit{sufficient} to guarantee $e_{\mathrm{a}}^*(\mathcal R)\le\epsilon_{\mathrm{a}}$ or $e_{\mathrm{r}}^*(\mathcal R)\le\epsilon_{\mathrm{r}}$. We focus first on $e_{\mathrm{a}}^*(\mathcal R)$. 
\begin{definition} Let $v_i\in\mathcal V(\mathcal R)$ be the $i$-th vertex of $\mathcal R$ and let $\theta=\sum_{i=1}^{|\mathcal V(\mathcal R)|}\alpha_iv_i$, $\sum_{i=1}^{|\mathcal V(\mathcal R)|}\alpha_i=1$ and $\alpha_i\ge 0$ $\forall i=1:|\mathcal V(\mathcal R)|$. The affine over-approximator over $\mathcal R$ of $V^*_\delta(\theta)$ is: \begin{equation} \label{eq:over_approximator} \bar V_\delta(\theta)\triangleq\sum_{i=1}^{|\mathcal V(\mathcal R)|}\alpha_iV^*_\delta(v_i). \end{equation} \end{definition} \begin{proposition} \label{proposition:over_approximator} The affine over-approximator (\ref{eq:over_approximator}) satisfies $V^*_{\delta}(\theta)\le\bar V_{\delta}(\theta)$ $\forall\theta\in\mathcal R$ and with equality at the vertices. \begin{proof} Since $V^*_{\delta}$ is convex (Lemma~\ref{lemma:convexity}), Jensen's inequality implies $V^*_\delta(\theta)\le\bar V_\delta(\theta)$. \end{proof} \end{proposition} Consider now the following convex MINLP: \begin{equation} \label{eq:absolute_error_approx} \bar e_{\mathrm{a}}(\mathcal R) = \max_{\theta\in\mathcal R,\delta'\in\mathbb I^m\setminus\{\delta\}} \bar V_\delta(\theta)-V^*_{\delta'}(\theta) \tag{$\bar{\text{E}}^{\mathcal R}_{\mathrm{a}}$} \end{equation} \begin{lemma} \label{lemma:absolute_error} The error measures satisfy $e^*_{\mathrm{a}}(\mathcal R)\le \bar e_{\mathrm{a}}(\mathcal R)$. \begin{proof} Let $\theta\in\mathcal R$ and $\delta'\in\mathbb I^m\setminus\{\delta\}$. By Proposition~\ref{proposition:over_approximator}, $V^*_\delta(\theta)\le\bar V_\delta(\theta)\Rightarrow V^*_\delta(\theta)-V^*_{\delta'}(\theta)\le\bar V_\delta(\theta)-V^*_{\delta'}(\theta)$. The result follows directly. \end{proof} \end{lemma} \begin{theorem} \label{theorem:absolute_error_check_sufficiency} Verifying $\bar e_{\mathrm{a}}(\mathcal R)\le\epsilon_{\mathrm{a}}$ is sufficient for $e_{\mathrm{a}}^*(\mathcal R)\le\epsilon_{\mathrm{a}}$ to hold. 
\begin{proof} By Lemma~\ref{lemma:absolute_error}, $e_{\mathrm{a}}^*(\mathcal R)\le\bar e_{\mathrm{a}}(\mathcal R)\le\epsilon_{\text{a}}$. \end{proof} \end{theorem} As a tractable alternative to (\ref{eq:relative_error}), we suggest the following over-approximation \cite{MunozDeLaPena2004}: \begin{equation*} \label{eq:relative_error_approx} \bar e_{\mathrm{r}}(\mathcal R) = \frac{ \bar e_{\mathrm{a}}(\mathcal R) }{ \min_{\theta\in\mathcal R}V^*_\delta(\theta) }. \tag{$\bar{\text{E}}^{\mathcal R}_{\mathrm{r}}$} \end{equation*} Evidently, $e_{\mathrm{r}}^*(\mathcal R)\le \bar e_{\mathrm{r}}(\mathcal R)$. As for the absolute error, we have the following theorem. \begin{theorem} \label{theorem:relative_error_check_sufficiency} Verifying $\bar e_{\mathrm{r}}(\mathcal R)\le\epsilon_{\mathrm{r}}$ is sufficient for $e_{\mathrm{r}}^*(\mathcal R)\le\epsilon_{\mathrm{r}}$ to hold. \begin{proof} We have $e_{\mathrm{r}}^*(\mathcal R)\le\bar e_{\mathrm{r}}(\mathcal R)\le\epsilon_{\text{r}}$. \end{proof} \end{theorem} The result of Theorems~\ref{theorem:absolute_error_check_sufficiency} and \ref{theorem:relative_error_check_sufficiency} is that the intractable (\ref{eq:absolute_error}) and (\ref{eq:relative_error}) can be replaced by the tractable (\ref{eq:absolute_error_approx}) and (\ref{eq:relative_error_approx}) in order to guarantee the $\epsilon$-suboptimality requirement. Note that because $\epsilon$-suboptimality checks only $\delta'\in\mathbb I^m\setminus\{\delta\}$, it is less stringent than requiring suboptimality with respect to all $\delta\in\mathbb I^m$. While in the former case partitioning stops as soon as the sufficient conditions are met, the latter case would partition all the way until $V^*_\delta$ is sufficiently accurately approximated. The result is a coarser partition, thanks to our semi-explicit programming approach. 
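A one-dimensional numeric sketch of the over-approximator (\ref{eq:over_approximator}) and of Lemma~\ref{lemma:absolute_error} (illustrative only; the value functions below are hypothetical convex stand-ins for $V^*_\delta$ and $V^*_{\delta'}$):

```python
# Hypothetical convex stand-ins for the fixed-commutation value functions.
V_d = lambda t: t * t               # V*_delta, convex (cf. Lemma 1)
V_dprime = lambda t: 0.1 + 0.5 * t  # V*_delta' for some other commutation

def V_bar(t, a=0.0, b=1.0):
    """Affine over-approximator: interpolate V*_delta through the
    vertices of the 1-D cell R = [a, b] using barycentric coordinates."""
    alpha = (t - a) / (b - a)
    return (1 - alpha) * V_d(a) + alpha * V_d(b)

grid = [i / 100.0 for i in range(101)]
e_star = max(V_d(t) - V_dprime(t) for t in grid)   # sampled exact error
e_bar = max(V_bar(t) - V_dprime(t) for t in grid)  # sampled over-approximation
```

The key point is that replacing the concave inner term $V^*_\delta$ by the affine $\bar V_\delta$ turns the error maximization into a convex problem while preserving the upper-bound relation $e^*_{\mathrm{a}}\le\bar e_{\mathrm{a}}$.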
When both $\bar e_{\mathrm{a}}(\mathcal R)>\epsilon_{\mathrm{a}}$ and $\bar e_{\mathrm{r}}(\mathcal R)>\epsilon_{\mathrm{r}}$, however, nothing can be said as to whether $e_{\mathrm{a}}^*(\mathcal R)\le\epsilon_{\mathrm{a}}$ or $e_{\mathrm{r}}^*(\mathcal R)\le\epsilon_{\mathrm{r}}$. In this case, the algorithm will subdivide $\mathcal R$ in order to decrease the over-approximation error $\max_{\theta\in\mathcal R}\bar V_\delta(\theta)-V^*_\delta(\theta)$. We now propose Algorithm~\ref{alg:phase2} to compute $f_\delta^\epsilon$. Aside from $\epsilon_{\mathrm{a}}$ and $\epsilon_{\mathrm{r}}$, the algorithm requires $\bar\rho$, the maximum tolerated simplex condition number. Given $v_i\in\mathcal V(\mathcal R)$ where $\mathcal R\subset\mathbb R^p$ is a simplex, the condition number is given by: \begin{equation} \label{eq:condition_number} \rho(\mathcal R)\triangleq\frac{\overline\sigma( \begin{bmatrix} v_2-v_1 & \cdots & v_{p+1}-v_1 \end{bmatrix} )}{ \underline\sigma( \begin{bmatrix} v_2-v_1 & \cdots & v_{p+1}-v_1 \end{bmatrix} )}, \end{equation} where $\overline\sigma$ and $\underline\sigma$ are the largest and smallest singular values respectively. Since Algorithm~\ref{alg:phase1} cannot produce lower-dimensional simplices, no division by zero can occur in (\ref{eq:condition_number}). Through $\bar\rho$, Algorithm~\ref{alg:phase2} takes the practical approach of giving up the $\epsilon$-suboptimality guarantee in very badly conditioned cells. If $\bar\rho$ is large, $\epsilon$-suboptimality will eventually be guaranteed everywhere but the numerics may become sensitive and the partition complexity prohibitive. By reducing $\bar\rho$, the partition is computed more quickly at the cost of neglecting a few cells where further partitioning would cause $\rho(\mathcal R)>\bar\rho$. In our experience, a good $\bar\rho$ setting yields $<1\%$ of such cells in the final partition (see Section~\ref{sec:example}). 
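For $p=2$ the condition number (\ref{eq:condition_number}) can be evaluated in closed form from the $2\times 2$ edge matrix; the sketch below (illustrative pure Python) shows that a sliver simplex has a much larger $\rho$ than a right simplex:

```python
import math

def condition_number(tri):
    """Condition number of a 2-D simplex: ratio of the singular values
    of the edge matrix E = [v2 - v1, v3 - v1], using the closed-form
    singular values of a 2x2 matrix."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    a, c = x2 - x1, y2 - y1  # first column of E
    b, d = x3 - x1, y3 - y1  # second column of E
    q = a * a + b * b + c * c + d * d  # trace of E^T E
    det = a * d - b * c                # det(E)
    disc = math.sqrt(max(q * q - 4.0 * det * det, 0.0))
    s_max = math.sqrt((q + disc) / 2.0)
    s_min = math.sqrt((q - disc) / 2.0)
    return s_max / s_min

right = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]    # well-conditioned
sliver = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.01)]  # badly conditioned
```

In higher dimensions a numerical SVD of the $p\times p$ edge matrix replaces the closed form.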
The algorithm also requires parameters $\pi_{\mathrm{a}}$ and $\pi_{\mathrm{r}}$ which define the absolute and relative keep-out radii by which $\bar\theta_{\mathrm{a}}(\mathcal R)$, the maximizer of (\ref{eq:absolute_error_approx}), must be separated from every $v\in\mathcal V(\mathcal R)$. Practical experience has shown that if $\exists v\in\mathcal V(\mathcal R)$ such that $\|v-\bar\theta_{\mathrm{a}}(\mathcal R)\|_2$ is very small, the partitioning process slows down significantly. In this case Algorithm~\ref{alg:phase2} prefers to partition $\mathcal R$ at the midpoint of its longest edge, a heuristic that we have found to work well when $\mathcal R$ needs to be split but $\bar\theta_{\mathrm{a}}(\mathcal R)$ cannot be used as a split point due to $\pi_{\mathrm{a}}$ or $\pi_{\mathrm{r}}$. \begin{algorithm}[t] \centering \begin{algorithmic}[1] \State Open all leaves of tree output by Algorithm~\ref{alg:phase1} \While{any open leaf exists} \For{every open leaf $(\mathcal R,\delta)$} \If{$\mathcal R$ is not a simplex} \label{alg:phase2:line:polytosimplex} \State $\mathcal S\gets\mathtt{delaunay}(\mathcal V(\mathcal R))$ \State Add child open leaves $(\mathcal S_i,\delta)$ $\forall\mathcal S_i\in\mathcal S$ \label{alg:phase2:line:polytosimplexend} \Else \State $\bar e_{\mathrm{a}}(\mathcal R)\gets\text{solve (\ref{eq:absolute_error_approx})}$ \label{alg:phase2:line:suboptimalitycheckstart} \State $\bar e_{\mathrm{r}}(\mathcal R)\gets\text{solve (\ref{eq:relative_error_approx})}$ \State $\text{infeasible}\gets\text{(\ref{eq:absolute_error_approx}) or (\ref{eq:relative_error_approx}) infeasible}$ \State $\text{$\epsilon$-suboptimal}\gets\text{$\bar e_{\mathrm{a}}(\mathcal R)\le\epsilon_{\mathrm{a}}$ or $\bar e_{\mathrm{r}}(\mathcal R)\le\epsilon_{\mathrm{r}}$}$ \If{infeasible or $\epsilon$-suboptimal} \State Close leaf \label{alg:phase2:line:suboptimalitycheckend} \Else \State $\bar l\gets\max_{v,v'\in\mathcal V(\mathcal R)}\|v-v'\|_2$ \State 
$r_{\text{ko}}\gets\max\{\pi_{\mathrm{a}},\pi_{\mathrm{r}}\bar l\}$ \State $\bar\theta_{\mathrm{a}}(\mathcal R),\bar\delta_{\mathrm{a}}(\mathcal R)\gets\arg\max\text{ of (\ref{eq:absolute_error_approx})}$ \If{$\exists v\in\mathcal V(\mathcal R)$ such that $\|\bar\theta_{\mathrm{a}}(\mathcal R)-v\|_2\le r_{\text{ko}}$} \State $\bar v,\bar v'\gets\arg\max_{v,v'\in\mathcal V(\mathcal R)}\|v-v'\|_2$ \label{alg:phase2:line:edgesplit} \State $\mathcal S\gets\mathtt{triangulate}(\mathcal R,(\bar v+\bar v')/2)$ \If{$\exists \mathcal S_i\in\mathcal S$ such that $\rho(\mathcal S_i)>\bar\rho$} \State Close leaf with warning \label{alg:phase2:line:suboptimalclose} \EndIf \Else \State $\mathcal S\gets\mathtt{triangulate\_snap}(\bar V_\delta,\mathcal R,\bar\delta_{\mathrm{a}}(\mathcal R))$ \If{$|\mathcal S|=1$} \State \Goto{alg:phase2:line:edgesplit} \EndIf \EndIf \For{each $\mathcal S_i\in\mathcal S$} \label{alg:phase2:line:changecommutation} \If{(\ref{eq:nlp}) feas. for $\delta=\bar\delta_{\mathrm{a}}(\mathcal R)$, $\forall\theta\in\mathcal V(\mathcal S_i)$} \State Add child open leaf $(\mathcal S_i,\bar\delta_{\mathrm{a}}(\mathcal R))$ \Else \State Add child open leaf $(\mathcal S_i,\delta)$ \EndIf \EndFor \EndIf \EndIf \EndFor \EndWhile \label{alg:phase2:line:end} \caption{Phase II computation of $f_\delta^{\epsilon}$.} \label{alg:phase2} \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \centering \begin{algorithmic}[1] \Function{$\mathtt{triangulate\_snap}$}{$V$,$\mathcal R$,$\delta$} \State Denote $\mathcal V(\mathcal R)=\{v_1,\dots,v_{\ell}\}$ \State $c\gets v_{\ell},E\gets \setlength\arraycolsep{1pt} \begin{bmatrix} v_1-v_{\ell} & \cdots & v_p-v_{\ell} \end{bmatrix}$ \label{alg:snap:line:basis} \State $\theta'\gets\text{solve (\ref{eq:absolute_error_approx_constrained})}$ \State $\mathcal S\gets\mathtt{triangulate}(\mathcal R,\theta')$ \State $i\gets\arg\max_{i=1,\dots,|\mathcal S|}\rho(\mathcal S_i)$ \If{$\rho(\mathcal S_i)>\bar\rho$} \State $\text{facet}_i\gets {\mathop \mathrm{co}}(\mathcal V(\mathcal S_i)\setminus\{\theta'\})$ \State $\mathcal S\gets\mathtt{triangulate\_snap}(V,\text{facet}_i,\delta)$ \State $v\gets\mathcal V(\mathcal R)\setminus\text{facet}_i$ \State $\mathcal S\gets\{{\mathop \mathrm{co}}(\mathcal V(\mathcal D)\cup\{v\}):\mathcal D\in\mathcal S\}$ \EndIf \State \Return $\mathcal S$ \EndFunction \caption{Condition number-aware triangulation.} \label{alg:triangulate_snap} \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:phase2} proceeds as follows. First, all leaves of the tree output by Algorithm~\ref{alg:phase1} are re-opened for further partitioning. Lines~\ref{alg:phase2:line:polytosimplex}-\ref{alg:phase2:line:end} partition the cell $(\mathcal R,\delta)$. Because Algorithm~\ref{alg:phase1} can return general polytopes, these are first partitioned via Delaunay triangulation on lines~\ref{alg:phase2:line:polytosimplex}-\ref{alg:phase2:line:polytosimplexend}. When $\mathcal R$ is a simplex, the sufficient conditions provided by Theorems~\ref{theorem:absolute_error_check_sufficiency} and \ref{theorem:relative_error_check_sufficiency} are used to check $\epsilon$-suboptimality on lines~\ref{alg:phase2:line:suboptimalitycheckstart}-\ref{alg:phase2:line:suboptimalitycheckend}. If $\delta$ is the only feasible commutation in $\mathcal R$, (\ref{eq:absolute_error_approx}) and (\ref{eq:relative_error_approx}) will be infeasible. In this case, $\delta$ is certainly the optimal commutation as it is the only feasible one. However, if $\epsilon$-suboptimality is not verified, $\mathcal R$ must be further partitioned. We first check whether the point of maximum suboptimality $\bar\theta_{\mathrm{a}}(\mathcal R)$ is separated from every vertex by at least the aforementioned keep-out radius. 
If not, $\mathcal R$ is partitioned into two simplices at the midpoint of its longest edge, where $\mathtt{triangulate}(\mathcal S,\theta)$ partitions $\mathcal S$ into $\{\mathcal S'={\mathop \mathrm{co}}((\mathcal V(\mathcal S)\setminus\{v\})\cup\{\theta\}):v\in\mathcal V(\mathcal S),\mathcal S'\text{ full-dimensional}\}$. If the resulting simplices exceed the condition number tolerance, $\mathcal R$ is not partitioned and is closed on line~\ref{alg:phase2:line:suboptimalclose} with a warning that $\bar e_{\mathrm{a}}(\mathcal R)>\epsilon_{\mathrm{a}}$ and $\bar e_{\mathrm{r}}(\mathcal R)>\epsilon_{\mathrm{r}}$. Line~\ref{alg:phase2:line:suboptimalclose} is the only case where the user-requested $\epsilon$-suboptimality is not achieved and deserves justification. First, we note that by setting $\bar\rho$ large enough, one can cause line~\ref{alg:phase2:line:suboptimalclose} to never be visited. However, two practical issues may prevent arbitrarily large $\bar\rho$ settings: 1) badly conditioned simplices can cause failure due to numerical precision errors and 2) bad conditioning is symptomatic of a prohibitively complex partition resulting from too small $\epsilon_{\mathrm{a}}$ or $\epsilon_{\mathrm{r}}$. In this respect, $\bar\rho$ is advantageous because in practice the partition complexity required by a certain $\epsilon_{\mathrm{a}}$ and $\epsilon_{\mathrm{r}}$ cannot be known a priori \cite{MunozDeLaPena2004}. Thanks to $\bar\rho$, however, excessive partitioning is abandoned and one can check post factum the \textit{achieved} $\bar e_{\mathrm{a}}(\mathcal R)$ and $\bar e_{\mathrm{r}}(\mathcal R)$ for all leaves of the partition tree. One can then either increase $\bar\rho$ or accept the achieved values. When $\bar\theta_{\mathrm{a}}(\mathcal R)$ passes the keep-out check, $\mathcal R$ is partitioned at $\bar\theta_{\mathrm{a}}(\mathcal R)$ via Algorithm~\ref{alg:triangulate_snap}. 
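The $\mathtt{triangulate}(\mathcal S,\theta)$ operation can be sketched in 2-D as follows (illustrative pure Python; degenerate children are detected here by a volume tolerance rather than an exact rank test):

```python
def area(tri):
    """Area of a 2-D simplex (triangle)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def triangulate(tri, theta, tol=1e-12):
    """Partition a triangle at theta: replace each vertex by theta in
    turn and keep only the full-dimensional (non-degenerate) children."""
    children = []
    for i in range(3):
        child = list(tri)
        child[i] = theta
        if area(child) > tol:
            children.append(child)
    return children

R = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
interior = triangulate(R, (0.25, 0.25))  # interior point: 3 children
on_edge = triangulate(R, (0.5, 0.0))     # point on an edge: 2 children
```

When $\theta$ lies on the boundary of $\mathcal S$, the degenerate children are dropped, which is why the longest-edge midpoint split yields exactly two simplices.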
On line~\ref{alg:snap:line:basis}, a basis of the convex set $\mathcal R$, which is a (possibly lower-dimensional) simplex, is constructed. The point at which to split $\mathcal R$ is then obtained by solving a fixed-commutation maximum absolute error NLP on the affine set $\mbox{\textrm{range}}(E)+c$: \begin{equation} \label{eq:absolute_error_approx_constrained} \@ifnextchar[{\@with}{\@without}[ \theta=E\alpha+c \\ &&& \alpha\ge 0,\text{ }\textstyle{\sum}_{i=1}^\ell\alpha_i=1. ]{\theta_{\delta}(c,E,V)}{\arg\max}{\alpha,\theta}{V(\theta)-V_\delta^*(\theta)} \tag{$\bar{\text{E}}_{\mathrm{a}}^{\delta,c,E,V}$} \end{equation} It may happen, however, that $\theta_{\delta}(c,E,V)$ is close to a lower-dimensional face of $\mathcal R$, for example a facet, in which case $\mathtt{triangulate}$ will return at least one very badly conditioned simplex. In this situation, we identify the worst-conditioned simplex in the triangulation of $\mathcal R$ and re-run $\mathtt{triangulate\_snap}$ on the lower-dimensional polytope formed by the vertices of $\mathcal R$ associated with the worst-conditioned simplex. This recursion returns either a well-conditioned partition of $\mathcal R$ or, if solving the lower-dimensional problems did not help to improve the condition number, $\mathcal R$ itself. In the latter case, Algorithm~\ref{alg:phase2} falls back to the longest edge midpoint partitioning heuristic. With the partition $\mathcal S$ of $\mathcal R$ obtained, lines~\ref{alg:phase2:line:changecommutation}-\ref{alg:phase2:line:end} of Algorithm~\ref{alg:phase2} create a new open leaf for each cell $\mathcal S_i\in\mathcal S$ with the associated commutation $\bar\delta_{\mathrm{a}}(\mathcal R)$ whenever possible. In situations where $\bar\delta_{\mathrm{a}}(\mathcal R)$ turns out to be feasible at $\bar\theta_{\mathrm{a}}(\mathcal R)$ but not at every vertex of a cell $\mathcal S_i$, that leaf is created with the original $\delta$ and we rely on the partitioning process to eventually assign the $\epsilon$-suboptimal commutation. 
\section{Problem Formulation} \label{sec:problem_formulation} This section defines a template MINLP that generates all MPC laws that our algorithm can handle. Because MPC is fundamentally an optimization problem, we do this without specific mention of a receding-horizon optimal control problem. We use the following notation. $\mathbb R$ denotes the set of reals, $\mathbb I\triangleq\{0,1\}$ the binary set and $\mathbb B\triangleq\{x:\|x\|_2\le 1\}$ the unit ball. Unless otherwise specified, matrices are uppercase (e.g. $A$), scalars, vectors and functions are lowercase (e.g. $x$) and sets are calligraphic uppercase (e.g. $\mathcal S$). We use $1_n\in\mathbb R^n$ to denote the vector of ones and $I_n\in\mathbb R^{n\times n}$ to denote the identity matrix. The scalar $\ell$ is a placeholder for some integer value. The operator $\mathrm{diag}(\{x_i\}_{i=1}^\ell)\in\mathbb R^{\ell\times\ell}$ creates a diagonal matrix with value $x_i$ at row and column $i$ and zero otherwise. The constraint $M\succ(\succeq) 0$ means that $M\in\mathbb R^{n\times n}$ is positive (semi)definite. $\mathcal S^{\mathrm{c}}$, $\partial\mathcal S$, $c^{\mathcal S}$ and $\mathcal V(\mathcal S)$ denote respectively the complement, boundary, barycenter and vertices of $\mathcal S$ (with the latter only relevant when $\mathcal S$ is a polytope). Given $\mathcal A\subseteq\mathbb R^n$, $s\in\mathbb R$ and $b\in\mathbb R^n$, $\mathcal A+b\triangleq\{a+b\in\mathbb R^n:a\in\mathcal A\}$ translates and $s\mathcal A\triangleq\{sa: a\in\mathcal A\}$ scales the set $\mathcal A$. The shorthand $a:b$ denotes the integer sequence $a,\dots,b$. 
The following multiparametric conic MINLP generates all MPC formulations that our algorithm can handle: \begin{equation} \label{eq:minlp} \@ifnextchar[{\@with}{\@without}[ g(\theta,x,\delta)=0, \\ &&& h(\theta,x,\delta)\in\mathcal K, \\ &&& \delta\in\mathbb I^m, ]{V^*(\theta)}{\min}{x,\delta}{f(\theta,x,\delta)} \tag{P$_\theta$} \end{equation} where $\theta\in\mathbb R^p$ is the parameter, $x\in\mathbb R^n$ is the decision vector and $\delta\in\mathbb I^m$ is a binary vector called the \textit{commutation}. The cost function $f:\mathbb R^p\times\mathbb R^n\times\mathbb I^m\to\mathbb R$ is jointly convex in its first two arguments while the constraint functions $g:\mathbb R^p\times\mathbb R^n\times\mathbb I^m\to\mathbb R^{\ell}$ and $h:\mathbb R^p\times\mathbb R^n\times\mathbb I^m\to\mathbb R^d$ are affine in their first two arguments. The functions can be nonlinear in the last argument. The convex cone $\mathcal K=\mathcal C_1^{c_1}\times\cdots\times\mathcal C_{q}^{c_q}\subset\mathbb R^d$ is a Cartesian product of $q$ convex cones where $\mathcal C_i^{c_i}\subset\mathbb R^{c_i}$ and $d\triangleq \sum_{i=1}^{q}c_i$, similar to \cite{Domahidi2013}. Examples include the positive orthant $\mathbb R_+^n$, the second-order cone $\mathcal Q^{\ell}=\{(t,z)\in\mathbb R\times\mathbb R^{\ell-1}: t\ge\|z\|_2\}$ and the positive semidefinite cone $\mathcal S_+^\ell=\{Z\in\mathbb R^{\ell\times\ell} : Z\succeq 0\}$. We also define the following fixed-commutation multiparametric conic NLP: \begin{equation} \label{eq:nlp} \@ifnextchar[{\@with}{\@without}[ g(\theta,x,\delta)=0, \\ &&& h(\theta,x,\delta)\in\mathcal K, ]{V^*_\delta(\theta)}{\min}{x}{f(\theta,x,\delta)} \tag{P$_{\theta}^\delta$} \end{equation} which corresponds to (\ref{eq:minlp}) where $\delta$ has been fixed, i.e. assigned a specific value. The following definitions closely follow \cite{Bemporad2006b}. 
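To fix ideas, consider a toy scalar instance of (\ref{eq:minlp}) (hypothetical, for illustration only): minimize $(x-\theta)^2$ subject to the disjunction $x\le 0$ (for $\delta=0$) or $x\ge 1$ (for $\delta=1$). Fixing $\delta$ yields a convex projection problem of the form (\ref{eq:nlp}) with a closed-form $V^*_\delta$, and $V^*$ is recovered by enumerating the $2^m=2$ commutations:

```python
def V_fixed(theta, delta):
    """Closed-form solution of the fixed-commutation convex NLP:
    minimize (x - theta)^2 over x <= 0 (delta = 0) or x >= 1 (delta = 1)."""
    x = min(theta, 0.0) if delta == 0 else max(theta, 1.0)
    return (x - theta) ** 2

def V_star(theta):
    """Solve the toy MINLP by enumerating the 2^m = 2 commutations."""
    return min(V_fixed(theta, d) for d in (0, 1))
```

Each $V^*_\delta$ is convex in $\theta$, while $V^*$, being their pointwise minimum, generally is not; this is precisely the structure the fixed-commutation decomposition exploits.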
\begin{definition} The feasible parameter set $\Theta^*\subset\mathbb R^p$ is the set of all $\theta$ parameters for which (\ref{eq:minlp}) is feasible. \end{definition} \begin{definition} The fixed-commutation feasible parameter set $\Theta_{\delta}^*\subset\mathbb R^p$ is the set of all $\theta$ parameters for which (\ref{eq:nlp}) is feasible. \end{definition} \begin{definition} The feasible commutation map $f_\delta:\Theta^*\to\mathbb I^m$ maps $\theta\in\Theta^*$ to a commutation $\delta$ such that $\theta\in\Theta^*_\delta$. \end{definition} This paper presents an algorithm for computing $f_\delta$ over a subset of its domain $\Theta\subseteq\Theta^*$. We shall assume in Section~\ref{sec:phase1} that the set $\Theta$ is available as a convex and full-dimensional polytope in vertex representation. Section~\ref{subsec:Theta} discusses how one might obtain $\Theta$. It is implicitly assumed throughout this paper that (\ref{eq:minlp}) and all related problems are appropriately scaled. We suggest a per-axis unit scaling such that $\max_{\theta\in\Theta}|e_i^{\scriptscriptstyle\mathsf{T}}\theta|=1$ $\forall i=1,\dots,p$ where $\{e_i\}_{i=1}^p$ is the standard Euclidean basis. \section{Properties} \label{sec:properties} \subsection{Convergence} \label{subsec:phase1convergence} The main result of this section is Theorem~\ref{theorem:phase1convergence} which guarantees convergence of Algorithm~\ref{alg:phase1} under an assumption. \begin{definition} \label{definition:overlap} Let $\Delta\triangleq\{\delta\in\mathbb I^m:\Theta^*_\delta\cap\Theta\ne\emptyset\}$. The largest value $\kappa\in\mathbb R_+$ such that $\forall\theta\in\Theta$ $\exists\delta\in\Delta$ such that $(\kappa\mathbb B+\theta)\setminus(\Theta^*\cap\Theta)^{\mathrm{c}}\subseteq\Theta^*_\delta$ is called the overlap. \end{definition} \begin{assumption} \label{assumption:positiveoverlap} The overlap is positive, i.e. $\kappa>0$. 
\end{assumption} The overlap depends on (\ref{eq:minlp}) and the choice of $\Theta$. Assumption~\ref{assumption:positiveoverlap} implies that a non-zero overlap between fixed-commutation feasible parameter sets exists everywhere near $\partial\Theta^*_\delta$ $\forall\delta\in\Delta$. This ``fuzzy'' commutation transition property is instrumental for the convergence proof in Theorem~\ref{theorem:phase1convergence}. \begin{theorem} \label{theorem:phase1convergence} If Assumption~\ref{assumption:positiveoverlap} holds then Algorithm~\ref{alg:phase1} either converges or fails in a finite number of iterations. \begin{proof} Let $\mathcal R_k$ be the leaf chosen at the $k$-th call of line~\ref{alg:phase1:line:getfirstopenleaf}. Whenever $\mathcal R_k$ is not closed, it can be shown that the partition along its longest edge on lines~\ref{alg:phase1:line:longestedge}-\ref{alg:phase1:line:splitalonglongestedge2} halves the volume, therefore $\lim_{k\to\infty}\mathrm{vol}(\mathcal R_k)=0$. Since the longest edge length is also halved, $\exists k$ large enough such that $\mathcal R_k\subseteq(\kappa\mathbb B+c^{\mathcal R_k})$. Two possibilities exist: 1) $\mathcal R_k\subseteq(\kappa\mathbb B+c^{\mathcal R_k})\setminus(\Theta^*\cap\Theta)^{\mathrm{c}}$ or 2) $\mathcal R_k\cap(\Theta^*)^{\mathrm{c}}\ne\emptyset$. In the first case, the $\hat\delta$ picked on line~\ref{alg:phase1:line:findmaxvolcommutation} is then the one feasible $\forall\theta\in\mathcal V(\mathcal R_k)$. Leaf $\mathcal R_k$ is therefore closed on line~\ref{alg:phase1:line:closeleaf}. By this logic, for a large enough (but finite) $k$ all regions that do not intersect the infeasible parameter set $(\Theta^*)^{\mathrm{c}}$ get closed. If the second case does not occur, the algorithm terminates. In the second case, recall that $\lim_{k\to\infty}\mathrm{vol}(\mathcal R_k)=0$. 
Since $\mathcal R_k\cap(\Theta^*)^{\mathrm{c}}\ne\emptyset$, in a finite number of iterations $c^{\mathcal R_k}\notin\Theta^*$ so line~\ref{alg:phase1:line:barycenterinfeascheck} will evaluate to true and the algorithm will fail on line~\ref{alg:phase1:line:stop}. \end{proof} \end{theorem} \begin{corollary} \label{corollary:phase1nonconvergence} If Assumption~\ref{assumption:positiveoverlap} does not hold then Algorithm~\ref{alg:phase1} does not converge. \begin{proof} If Assumption~\ref{assumption:positiveoverlap} does not hold then there exists a region $\Theta'\subseteq\Theta$ such that $\forall\theta\in\Theta'$, $(\kappa\mathbb B+\theta)\setminus(\Theta^*\cap\Theta)^{\mathrm{c}}\subseteq\Theta^*_\delta$ for some $\delta\in\Delta\Leftrightarrow\kappa=0$. This, however, implies that the only simplex that would validate Lemma~\ref{lemma:containment} is a lower-dimensional one, i.e. with zero volume. Since this occurs at iteration $k=\infty$, Algorithm~\ref{alg:phase1} does not converge. \end{proof} \end{corollary} Theorem~\ref{theorem:phase1convergence} and Corollary~\ref{corollary:phase1nonconvergence} suggest that $\kappa>0$ is not only necessary and sufficient for convergence, but also that $\kappa$ drives the convergence rate. A small $\kappa$ implies a high iteration count $k$ until Assumption~\ref{assumption:positiveoverlap} guarantees leaf closure. We call a MINLP with large $\kappa$ ``well-conditioned''; for such problems, Algorithm~\ref{alg:phase1} converges more quickly, with a rate derived in Corollary~\ref{corollary:phase1cellcountcomplexity} of Section~\ref{subsec:complexity}. \subsection{Complexity} \label{subsec:complexity} In this section we analyze the complexity of Algorithm~\ref{alg:phase1} in terms of the partition cell count as well as the on-line evaluation complexity. \begin{theorem} \label{theorem:phase1treedepthcomplexity} The maximum tree depth $\tau$ of Algorithm~\ref{alg:phase1} is $\mathcal O(p^2\log(\kappa^{-1}))$.
\begin{proof} Algorithm~\ref{alg:phase1} reduces search space volume by halving the longest edge length on lines~\ref{alg:phase1:line:longestedge}-\ref{alg:phase1:line:splitalonglongestedge2}. Suppose that $l_0$ is the longest edge length of a simplex $\mathcal R\subset\mathbb R^p$; then its length is $l_k\triangleq l_0/2^k$ after $k$ divisions. We wish to determine the number of divisions necessary until $\mathcal R\subseteq\kappa\mathbb B+c^{\mathcal R}$ and gets closed by Theorem~\ref{theorem:phase1convergence}. This approximately requires $l_k\le\kappa\Rightarrow k\ge\log_2(l_0/\kappa)$. Since $\mathcal R$ has $p(p+1)/2$ edges, an approximate number of required subdivisions, i.e. the depth of the partition tree, is given by: \begin{equation} \label{eq:minkexact} \tau = \left\lceil\frac{p(p+1)\log_2(l_0/\kappa)}{2}\right\rceil, \end{equation} which yields $\tau=\mathcal O(p^2\log(\kappa^{-1}))$. \end{proof} \end{theorem} \begin{corollary} \label{corollary:phase1cellcountcomplexity} The maximum tree leaf count $\eta$ of Algorithm~\ref{alg:phase1} is $\mathcal O(2^{p^2\log(\kappa^{-1})})$. \begin{proof} In the worst case, Algorithm~\ref{alg:phase1} generates a full binary tree of depth $\tau=\mathcal O(p^2\log(\kappa^{-1}))$ according to Theorem~\ref{theorem:phase1treedepthcomplexity}, neglecting the first layer where the node count depends on $|\mathcal S|$ on line~\ref{alg:phase1:line:initialdelaunay}. Such a tree contains $\eta=2^\tau=\mathcal O(2^{p^2\log(\kappa^{-1})})$ leaves. \end{proof} \end{corollary} Corollary~\ref{corollary:phase1cellcountcomplexity} tells us that the leaf count is exponential in the parameter dimension and polynomial in the inverse overlap $\kappa^{-1}$. However, if we assume that the algorithm terminates with a given finite leaf count then the following lemma states that a linearly proportional number of problems will have had to be solved.
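As a quick numeric illustration of the depth and leaf-count bounds above, the following minimal sketch (hypothetical problem sizes; $l_0$ and $\kappa$ are assumed inputs, not values from the paper) evaluates \eqref{eq:minkexact} and the corresponding worst-case leaf count $\eta=2^\tau$:

```python
import math

def tree_depth(p, l0, kappa):
    """Worst-case partition tree depth: all p(p+1)/2 edges of a simplex
    are halved until the longest edge length drops below the overlap kappa."""
    return math.ceil(p * (p + 1) * math.log2(l0 / kappa) / 2)

def leaf_count(p, l0, kappa):
    """Worst-case leaf count of a full binary tree of depth tau."""
    return 2 ** tree_depth(p, l0, kappa)

# depth grows like p^2 log(1/kappa): a well-conditioned MINLP
# (large kappa) yields a much shallower tree
print(tree_depth(2, 1.0, 0.5), leaf_count(2, 1.0, 0.5))   # 3 8
print(tree_depth(4, 1.0, 0.1), leaf_count(4, 1.0, 0.1))
```

Even for modest $p$ the worst-case count grows very quickly, which is why a large overlap $\kappa$ matters in practice.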
\begin{lemma} \label{lemma:outputcomplexity} The iteration count $\iota$ of Algorithm~\ref{alg:phase1} is $\mathcal O(\eta)$. \begin{proof} A full binary tree with $\eta$ leaves has $2\eta-1$ nodes, thus at most $\iota=\mathcal O(\eta)$ iterations will have occurred. \end{proof} \end{lemma} We have analyzed the tree complexity alone, disregarding the complexity of the optimization problems that need to be solved at each iteration of Algorithm~\ref{alg:phase1}. Unlike \cite{Bemporad2006b} where convex NLPs need to be solved at each iteration (due to their original, implicit MPC algorithm being non-hybrid), we must solve MINLPs whose solution time is $\mathcal O(n^\ell2^m)$ in the worst case. However, the basic assumption is that the problem \textit{is} solvable in the first place and that on-line computation is offloaded to an off-line solution. Therefore, we do not consider the issue of practically solving (\ref{eq:minlp}) for a given $\theta\in\Theta$. \begin{theorem} \label{theorem:onlinefdeltacomplexity} The on-line evaluation complexity of $f_\delta$ is $\mathcal O(p^3)$. \begin{proof} Algorithm~\ref{alg:phase1} outputs a tree with $\eta_{\mathrm{o}}$ nodes at the first level followed by a binary tree thereafter, where $\eta_{\mathrm{o}}$ is the number of simplices generated by the Delaunay triangulation of $\Theta$ on line~\ref{alg:phase1:line:initialdelaunay}. Since a simplex in $\mathbb R^p$ has $p+1$ facets, there are $p+1$ inequalities to evaluate in order to check $\theta\in\mathcal R$. Given a depth $\tau$, there are at most $(\eta_{\mathrm{o}}+\tau-2)(p+1)$ inequalities to evaluate. Since $\tau=\mathcal O(p^2)$ by Theorem~\ref{theorem:phase1treedepthcomplexity}, the evaluation complexity of $f_\delta$ is $\mathcal O(p^3)$.
\end{proof} \end{theorem} Since the evaluation complexity of $f_\delta$ is polynomial by Theorem~\ref{theorem:onlinefdeltacomplexity}, $f_\delta$ can be used for a guaranteed real-time warm start of an on-line mixed-integer solver. \subsection{Choice of $\Theta$} \label{subsec:Theta} Throughout this paper we have assumed that $\Theta$ is available. We now explain possible methods of obtaining it. First of all, $\Theta\subseteq\Theta^*$ should hold. If it does not, per Theorem~\ref{theorem:phase1convergence} Algorithm~\ref{alg:phase1} will report it in a finite number of iterations and a different $\Theta$ must be chosen. Assuming the task of (\ref{eq:minlp}) is to drive $\theta$ (e.g. the current state) to the origin, a $\Theta$ in a small enough neighborhood of $0\in\mathbb R^p$ should satisfy this property as long as (\ref{eq:minlp}) is a well-defined controller. Since $f_\delta$ is defined only over $\Theta$, in practice it must be ensured that $\theta\notin\Theta$ is never encountered during runtime. This requires $\Theta$ to be a controlled invariant set of (\ref{eq:minlp}). A straightforward method for ensuring this is to include the constraint $\Theta^+\subseteq\Theta$ in (\ref{eq:minlp}) where $\Theta^+$ models all possible future values of $\theta$. This is a common constraint in RMPC. In this case, which is the most common one in practice, $\Theta$ is explicitly known. Note that convergence of Algorithm~\ref{alg:phase1} in this case certifies recursive feasibility of (\ref{eq:minlp}). Finally, note from the proof of Theorem~\ref{theorem:onlinefdeltacomplexity} that the practical complexity of $f_\delta$ is $\mathcal O(\eta_{\mathrm{o}})$ since in practice $\eta_{\mathrm{o}}\gg p$ and $\eta_{\mathrm{o}}\gg \tau$. There is therefore considerable interest in making the Delaunay triangulation of $\Theta$ yield a small $\eta_{\mathrm{o}}$, e.g. by using a simplex or a small number of simplices to define $\Theta$.
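For intuition on the on-line point-location step, the membership test $\theta\in\mathcal R$ can be sketched via barycentric coordinates (a generic simplex membership check, not the paper's implementation): a point lies in a simplex exactly when its barycentric coordinates with respect to the $p+1$ vertices are all nonnegative, which is equivalent to evaluating the simplex's $p+1$ facet inequalities.

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def in_simplex(theta, vertices, tol=1e-9):
    """theta in conv(vertices) iff all barycentric coordinates lam >= 0,
    where V^T lam = theta and sum(lam) = 1 (vertices: p+1 points in R^p)."""
    p = len(theta)
    A = [[v[i] for v in vertices] for i in range(p)] + [[1.0] * (p + 1)]
    b = list(theta) + [1.0]
    lam = solve(A, b)
    return all(l >= -tol for l in lam)

# unit simplex in R^2 with vertices (0,0), (1,0), (0,1)
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(in_simplex((0.2, 0.3), tri))   # interior point
print(in_simplex((0.8, 0.8), tri))   # point outside
```

Walking the output tree then amounts to repeating this test at each visited node, which gives the $(\eta_{\mathrm{o}}+\tau-2)(p+1)$ inequality count used in the proof above.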
\subsection{Extensions} \label{subsec:extensions} The algorithm has thus far been presented as a method for partitioning $\Theta\subseteq\Theta^*$. A possible extension is to partition the entire $\Theta^*$. Since by Lemma~\ref{lemma:convexity} $\Theta^*_\delta$ is convex, it may be inner-approximated with arbitrary precision \cite{Dueri2016}. Doing so for each $\delta$, one can run the algorithm for each commutation $\delta\in\mathbb I^m$ by taking $\Theta=\Theta^*_\delta$. To remove overlapping regions, the intersection of $\Theta^*_\delta$ with all previously considered fixed-commutation feasible parameter sets can be removed. \section{Acknowledgment} Part of the research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Government sponsorship acknowledged. The authors would like to extend special gratitude to Daniel P. Scharf, Jack Aldritch and Carl Seubert for their helpful insight and discussions. \bibliographystyle{ieeetr}
\section{Introduction} Estimates for character sums have wide applications in number theory. A fundamental result is the well-known P\'olya-Vinogradov inequality (see for example \cite[Chap. 23]{Da}), which asserts that for any non-principal Dirichlet character $\chi$ modulo $q$, $M \in \ensuremath{\mathbb Z}$ and $N \in \ensuremath{\mathbb N}$, \begin{align} \label{pv} \sum_{M < n \leq M+N} \chi(n) \ll q^{1/2}\log q. \end{align} One may regard the P\'olya-Vinogradov inequality as a mean value estimate for characters and one expects to obtain asymptotic formulas if an extra average over the characters is introduced. We are thus led to the investigation of the following expression \begin{align*} \sum_{\chi \in S}\sum_{n} \chi(n) \end{align*} where $S$ is a certain set of characters. For example, when $S$ is the set of all Dirichlet characters modulo $q$, then it follows from the orthogonality relation of the characters that the above sum amounts to counting the number of integers that are congruent to $1$ modulo $q$. \newline Another interesting choice for $S$ is to take $S$ to contain characters of a fixed order. A basic and important case is the set of quadratic characters. In \cite{CFS}, J. B. Conrey, D. W. Farmer and K. Soundararajan studied the following sum: \begin{align*} S_2(X,Y)= \sum_{\substack {m \leq X \\ (m, 2)=1}}\sum_{\substack {n \leq Y \\ (n, 2)=1}} \leg {m}{n}, \end{align*} where $\leg {m}{n}$ is the Jacobi symbol. \newline While it is relatively easy to obtain an asymptotic formula for $S_2(X, Y)$ when $Y=o(X/\log X)$ or $X=o(Y/\log Y)$ using \eqref{pv}, it is more subtle to treat the remaining $XY$-ranges. Using a Poisson summation formula developed in \cite{sound1}, a valid asymptotic formula for $S_2(X, Y)$ for all $X,Y$ is obtained in \cite{CFS}. Most interestingly, the formula exhibits a transition in the behavior of $S_2(X,Y)$ when $X$ and $Y$ are of comparable size.
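For small $X$ and $Y$ the double sum $S_2(X,Y)$ can be computed directly; the sketch below (an illustrative brute force, far removed from the Poisson-summation treatment of \cite{CFS}) implements the Jacobi symbol via the standard binary reciprocity algorithm:

```python
def jacobi(m, n):
    """Jacobi symbol (m/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    m %= n
    t = 1
    while m != 0:
        while m % 2 == 0:            # pull out factors of 2
            m //= 2
            if n % 8 in (3, 5):
                t = -t
        m, n = n, m                  # reciprocity flip
        if m % 4 == 3 and n % 4 == 3:
            t = -t
        m %= n
    return t if n == 1 else 0

def S2(X, Y):
    """Brute-force S_2(X, Y): sum of (m/n) over odd m <= X, odd n <= Y."""
    return sum(jacobi(m, n)
               for n in range(1, Y + 1, 2)
               for m in range(1, X + 1, 2))

print(S2(9, 9))   # feasible only for tiny ranges
```

Such direct computation is useful only as a sanity check; the interesting range $X \asymp Y$ requires the analytic tools discussed below.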
\newline Recently, the authors \cite{G&Zhao2019} studied the mean values of some quadratic, cubic and quartic {H}ecke characters. These are analogues of $S_2(X,Y)$ in number fields. Again, the most interesting case is when $X, Y$ are of comparable size. A similar behavior was shown to exist by I. Petrow in an earlier study \cite{Petrow} on the mean values of shifted convolution sums of Fourier coefficients of Hecke eigenforms. \newline In this paper, we return to the classical setting by studying the mean values of cubic and quartic Dirichlet characters. To form such sums, we note that primitive cubic Dirichlet characters exist modulo a rational prime $p$ if and only if $p \equiv 1 \pmod 3$, and there are precisely two such characters, complex conjugates of each other, when they exist. The same conclusions apply to primitive quartic Dirichlet characters, except that we need $p \equiv 1 \pmod 4$. Based on these observations, we introduce the following sets of Dirichlet characters that we aim to study. We define the set $S_{3,1}$ to be the set that contains the principal Dirichlet character modulo $1$. For any integer $n > 1$, we define the set $S_{3,n}$ to be non-empty if and only if $n$ is a product of powers of primes which are congruent to $1$ modulo $3$, in which case by writing $n=\prod^{k}_{i=1}p^{\alpha_i}_i$ with $p_i \equiv 1 \pmod 3$ and $\alpha_i \geq 1$, we define \begin{align} \label{S3} S_{3,n}=\left\{ \prod^k_{i=1}\chi^{\alpha_i}_i: \chi_i \hspace{0.05in} \text{primitive cubic Dirichlet character modulo $p_i$} \right\}. \end{align} Similarly, let $S_{4,1}$ be the set that contains the principal Dirichlet character modulo $1$.
For any integer $n > 1$, we define the set $S_{4,n}$ to be non-empty if and only if $n$ is a product of powers of primes which are congruent to $1$ modulo $4$, in which case by writing $n=\prod^{k}_{i=1}p^{\alpha_i}_i$ with $p_i \equiv 1 \pmod 4$ and $\alpha_i \geq 1$, we define \begin{align} \label{S4} S_{4,n}=\left\{ \prod^k_{i=1}\chi^{\alpha_i}_i: \chi_i \hspace{0.05in} \text{primitive quartic Dirichlet character modulo $p_i$} \right\}. \end{align} We are now ready to define the following character sums of our interest. For $i=3$, $4$, $X$, $Y>1$, let \begin{align*} S_i(X,Y) =\sum_{n \leq Y}\sum_{\chi \in S_{i,n}} \sum_{m \leq X} \chi(m). \end{align*} Our goal in this paper is to evaluate $S_i(X,Y)$ asymptotically for $i=3,4$. Our method here only allows us to treat the situation in which $Y \leq X$, a condition we shall assume henceforth. In this case, one expects that the main contribution to $S_i(X,Y)$ comes from the terms in which $n$ is a cube if $i=3$ or a fourth power if $i=4$. Treating the remaining terms using the P\'olya-Vinogradov inequality \eqref{pv}, we deduce that \begin{align} \label{Sieleasmp} S_i(X,Y) = C_i\frac {XY^{1/i}}{\sqrt{\log Y}}+O \left( Y^{3/2+\varepsilon} \right) , \quad i=3,4, \end{align} for some constants $C_i$. The above allows us to obtain asymptotic formulas for $S_i(X, Y), i=3,4$ when $Y$ is small compared to $X$. Thus, it is again more subtle to obtain asymptotic formulas for $S_i(X, Y), i=3,4$ when $Y$ is close to $X$. This is precisely what we want to study in this paper. In view of \eqref{Sieleasmp}, we may assume that $Y \geq X^{6/7}$ when studying $S_3(X,Y)$ and that $Y \geq X^{4/5}$ when studying $S_4(X,Y)$.
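To make \eqref{S3} concrete for $n=p$ prime, the two primitive cubic characters modulo $p \equiv 1 \pmod 3$ can be listed explicitly through a discrete logarithm to a primitive root. The following is a small illustrative sketch (brute-force, suitable only for tiny $p$):

```python
import cmath

def primitive_root(p):
    """Smallest primitive root modulo a prime p (brute force)."""
    factors = [q for q in range(2, p) if (p - 1) % q == 0
               and all(q % d for d in range(2, q))]
    for g in range(2, p):
        if all(pow(g, (p - 1) // q, p) != 1 for q in factors):
            return g

def cubic_characters(p):
    """The two primitive cubic Dirichlet characters modulo p = 1 (mod 3),
    returned as dicts m -> chi(m), with values among the cube roots of unity."""
    assert p % 3 == 1
    g = primitive_root(p)
    dlog = {pow(g, k, p): k for k in range(p - 1)}   # discrete log table
    omega = cmath.exp(2j * cmath.pi / 3)
    # chi_t(g^k) = omega^(t*k); t = 1, 2 give the two conjugate characters
    return [{m: omega ** ((t * dlog[m]) % 3) for m in range(1, p)}
            for t in (1, 2)]

chi, chibar = cubic_characters(7)   # e.g. chi(m) = 1 exactly on cubes mod 7
```

The two characters returned are complex conjugates of each other, matching the count of primitive cubic characters modulo $p$ noted above.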
Our main result is \begin{theorem} \label{cubicquarticmean} For large $X \geq Y$ and any $\varepsilon>0$, we have for $Y \geq X^{6/7}$, \begin{align} \label{S3asymp} S_3(X,Y)= C_1\frac {XY^{1/3}}{\sqrt{\log Y}}+O \left (\frac {XY^{1/3}}{(\log Y)^{3/2}}+Y^{4/3}+ Y^{24/17+\varepsilon}\left ( \frac {Y}{X} \right )^{5/17} \right ). \end{align} We also have for $Y \geq X^{4/5}$, \begin{align} \label{S4asymp} S_4(X,Y)= C_2\frac {XY^{1/4}}{\sqrt{\log Y}}+O \left (\frac {XY^{1/4}}{(\log Y)^{3/2}}+Y^{91/62+\varepsilon}\left(\frac {Y}{X} \right)^{9/31}+ X Y^{13/31+\varepsilon} \left ( \frac {Y}{X} \right )^{38/31} \right ) . \end{align} Here $C_1, C_2$ are constants given in \eqref{C1} and \eqref{C2}, respectively. \end{theorem} We point out here that $C_1$ and $C_2$ are positive (see the discussions below \eqref{C1} and \eqref{C2}). Our proof of Theorem~\ref{cubicquarticmean} follows the line of treatment in \cite{CFS} by first applying the Poisson summation formula to convert the sum over $m$ in $S_i(X,Y)$ to its dual sum. The zero frequency gives us the main term as usual. To treat the contribution of the remaining terms, we transform the sum over $n$ in $S_i(X,Y)$, using Lemma \ref{lemma:cubicquarticclass}, into another sum over algebraic integers in an imaginary quadratic number field. The resulting sums involve certain Gauss sums, for which we shall use a result of S. J. Patterson \cite{P} (see Lemma~\ref{lem1}). This treatment is also inspired by the method used in \cite{B&Y}. \subsection{Notations} The following notations and conventions are used throughout the paper.\\ \noindent $e(z) = \exp (2 \pi i z) = e^{2 \pi i z}$, $\omega=e(1/3)$. \newline $f =O(g)$ or $f \ll g$ means $|f| \leq cg$ for some unspecified positive constant $c$. \newline $\varepsilon$ denotes an arbitrarily small positive number, which may differ from line to line. \newline $N(n)$ denotes the norm of $n \in \ensuremath{\mathbb Z}[\omega]$ or $n \in \ensuremath{\mathbb Z}[i]$.
\newline $\varpi$ denotes a prime element in $\ensuremath{\mathbb Z}[\omega]$ or $\ensuremath{\mathbb Z}[i]$ and $p$ denotes a prime in $\ensuremath{\mathbb Z}$. \newline \section{Preliminaries} \label{sec 2} \subsection{Cubic and quartic symbols} \label{sec2.4} For any number field $K$, let $\mathcal{O}_K$ denote the ring of integers in $K$ and $U_K$ the group of units in $\mathcal{O}_K$. Throughout this paper, set $K_{\omega}=\ensuremath{\mathbb Q}(\omega), K_i=\ensuremath{\mathbb Q}(i)$, where $\omega=\exp(2\pi i/3)$. It is well-known that both $\ensuremath{\mathbb Q}(i)$ and $\ensuremath{\mathbb Q}(\omega)$ have class number one and that $\mathcal{O}_{K_{\omega}}=\ensuremath{\mathbb Z}[\omega], \mathcal{O}_{K_i}=\ensuremath{\mathbb Z}[i]$. Recall that every ideal in $\mathbb{Z}[\omega]$ co-prime to $3$ has a unique generator congruent to $1$ modulo $3$ (see \cite[Proposition 8.1.4]{BEW}) and every ideal in $\mathbb{Z}[i]$ coprime to $2$ has a unique generator congruent to $1$ modulo $(1+i)^3$ (see the paragraph above Lemma 8.2.1 in \cite{BEW}). These generators are called primary. \newline Let $\leg{\cdot}{n}_3$ be the cubic residue symbol in $\mathcal{O}_{K_{\omega}}$. For a prime $\varpi \in \mathcal{O}_{K_{\omega}}$ with $N(\varpi) \neq 3$, the cubic symbol is defined for $a \in \mathcal{O}_{K_{\omega}}$, $(a, \varpi)=1$, by $\leg{a}{\varpi}_3 \equiv a^{(N(\varpi)-1)/3} \pmod{\varpi}$, with $\leg{a}{\varpi}_3 \in \{ 1, \omega, \omega^2 \}$. When $\varpi | a$, we define $\leg{a}{\varpi}_3 =0$. Then the cubic symbol can be extended to any composite $n$ with $(N(n), 3)=1$ multiplicatively. We extend the definition of $\leg{\cdot }{n}_3$ to $n \in U_{K_{\omega}}$ by setting $\leg{\cdot}{n}_3=1$. Recall that the cubic reciprocity law \cite[Theorem 7.8]{Lemmermeyer} states that for two primary $m, n \in \mathcal{O}_{K_{\omega}}$, \begin{equation*} \leg{m}{n}_3 = \leg{n}{m}_3. \end{equation*} The quartic case is similar.
Let $\leg {\cdot}{n}_4$ be the quartic residue symbol in $\mathcal{O}_{K_{i}}$. Suppose that $\varpi \in \mathcal{O}_{K_{i}}$ is a prime with $N(\varpi) \neq 2$. If $a \in \mathcal{O}_{K_{i}}$, $(a, \varpi)=1$, then we define $\leg{a}{\varpi}_4$ by $ \leg{a}{\varpi}_4 \equiv a^{(N(\varpi)-1)/4} \pmod{\varpi}$, with $\leg{a}{\varpi}_4 \in \{ \pm 1, \pm i \}$. Set $\leg{a}{\varpi}_4 =0$ if $\varpi | a$. Now the quartic character can be extended to any composite $n$ with $(N(n), 2)=1$ multiplicatively. As before, we extend the definition of $\leg{\cdot }{n}_4$ to $n \in U_{K_{i}}$ by setting $\leg{\cdot}{n}_4=1$. The quartic reciprocity law, \cite[Theorem 6.9]{Lemmermeyer}, states that for two primary $m, n \in \mathcal{O}_{K_{i}}$, \begin{equation} \label{quartrec} \leg{m}{n}_4 = \leg{n}{m}_4(-1)^{((N(n)-1)/4)((N(m)-1)/4)}. \end{equation} Similar to \cite[Lemma 2.1]{B&Y} and \cite[Section 2.2]{G&Zhao}, we have the following description of $S_{i,n}$, $i=3$, $4$ defined in \eqref{S3} and \eqref{S4} using the cubic and quartic residue symbols. \begin{lemma} \label{lemma:cubicquarticclass} For any non-empty $S_{3,n}$, there is a bijection between $S_{3,n}$ and the set of cubic residue symbols of the form $\chi_{3,q}:m \rightarrow (\frac{m}{q})_3$ for some $q \in \mathcal{O}_{K_{\omega}}$, where $q$ is primary, not divisible by any $\ensuremath{\mathbb Q}$-rational primes, and $N(q) = n$. Similarly, a bijection exists between any non-empty $S_{4,n}$ and the set of quartic residue symbols of the form $\chi_{4,q}:m \rightarrow (\frac{m}{q})_4$ for some $q \in \mathcal{O}_{K_{i}}$ with $q$ primary, not divisible by any $\ensuremath{\mathbb Q}$-rational primes and $N(q) = n$. \end{lemma} \subsection{Gauss sums} \label{section:Gauss} Let $\chi$ be a Dirichlet character of modulus $n$. For any $r \in \ensuremath{\mathbb Z}$, we define the Gauss sum $\tau(r, \chi)$ as follows: \begin{align} \label{taur} \tau(r, \chi)=\sum_{x \bmod {n}}\chi(x)e\left( \frac {rx}{n} \right).
\end{align} Similarly, for any $n, r \in \mathcal{O}_{K_{\omega}}$, we define \begin{align*} g_3(r,n) = \sum_{x \bmod{n}} \leg{x}{n}_3 \widetilde{e}_{\omega}\leg{rx}{n}, \quad \mbox{where} \quad \widetilde{e}_{\omega}(z) =\exp \left( 2\pi i \left( \frac {z}{\sqrt{-3}} - \frac {\bar{z}}{\sqrt{-3}} \right) \right). \end{align*} Similarly, for any $n, r \in \mathcal{O}_{K_{i}}$, set \begin{align*} g_4(r,n) = \sum_{x \bmod{n}} \leg{x}{n}_4 \widetilde{e}_i\leg{rx}{n}, \quad \mbox{with} \quad \widetilde{e}_i(z) =\exp \left( 2\pi i \left( \frac {z}{2i} - \frac {\bar{z}}{2i} \right) \right) . \end{align*} The following property of $g_i(r,n)$ for $i=3,4$ can be found in \cite[(12)]{B&Y} and \cite[Lemma 2.3]{G&Zhao4}: \begin{align} \label{eq:gmult} g_i(rs,n) & = \overline{\leg{s}{n}}_i g_i(r,n), \quad (s,n)=1, \qquad \mbox{$n$ primary}. \end{align} Note that we have \cite[(13)]{B&Y} for $(n_1, n_2)=1$ and $n_1, n_2$ primary, \begin{align} \label{gprod} g_3(k, n_1n_2)=\overline{\leg {n_1}{n_2}}_3 g_3(k, n_1)g_3(k,n_2). \end{align} Similarly, \cite[Lemma 2.3]{G&Zhao4} gives that for $(n_1, n_2)=1$ and $n_1, n_2$ primary, \begin{align} \label{g4prod} g_4(r,n_1 n_2) =\leg{n_2}{n_1}_4\leg{n_1}{n_2}_4g_4(r, n_1) g_4(r, n_2)=(-1)^{((N(n_1)-1)/4)((N(n_2)-1)/4)}\leg{n^2_1}{n_2}_4g_4(r, n_1) g_4(r, n_2), \end{align} where the last equality above follows from the quartic reciprocity law, \eqref{quartrec}. It is well-known (see \cite[(11)]{B&Y} and \cite[Prop. 6.5]{Lemmermeyer}) that for a primary prime $\varpi$ (belonging to the corresponding ring of integers), \begin{align*} |g_i(1,\varpi)|=N(\varpi)^{1/2}, \quad i=3,4. \end{align*} Using \eqref{gprod} or \eqref{g4prod} to reduce the general case to the prime case and applying the above bound, we deduce that when $r, n$ are in the corresponding ring of integers with $n$ being square-free, $g_i(r,n) \neq 0$, $i=3$, $4$ only when $(r,n)=1$, in which case we get \begin{align} \label{grnbound} |g_i(r,n)| \leq N(n)^{1/2}, \quad i=3,4. 
\end{align} In the proof of Theorem \ref{cubicquarticmean}, we need a relation between $\tau$ and $g_i$, $i=3$, $4$. Analogous to the discussions in \cite[Section 2.2]{B&Y} (but be aware that the notation $g_3(r,n)$ there is slightly different from ours), one shows that for $r \in \ensuremath{\mathbb Z}$ and any $\chi \in S_{3,n}$ which corresponds to $\chi_{3,q}$ by Lemma \ref{lemma:cubicquarticclass}, \begin{align} \label{taug3} \tau(r, \chi)=\overline{\leg {\sqrt{-3}}{q}}_3 g_3(r, q). \end{align} A similar relation exists for quartic Gauss sums. For $r \in \ensuremath{\mathbb Z}$ and any $\chi \in S_{4,n}$ which corresponds to $\chi_{4,q}$ by Lemma \ref{lemma:cubicquarticclass}, we have (\cite[p. 894]{G&Zhao}) \begin{equation} \label{tau} \tau(r, \chi)=\overline{\leg {2i}{q}}_4 \leg {\overline{q}}{q}_4 g_4(r, q). \end{equation} In order to apply \eqref{tau}, we need to express $\leg {\overline{q}}{q}_4$ in terms of ray class characters $\pmod {16}$ in $\ensuremath{\mathbb Q}(i)$. To that end, note that we have $q \equiv 1 \pmod {(1+i)^3}$ with $q$ having no rational prime divisors. If we write $q=a+bi$ with $a,b \in \ensuremath{\mathbb Z}$, then we deduce that $(a, b)=1$ so that \begin{align*} \leg {\overline{q}}{q}_4= \leg {a-bi}{a+bi}_4=\leg {2a}{a+bi}_4=\leg {2(-1)^{(N(q)-1)/4}}{a+bi}_4\leg {(-1)^{(N(q)-1)/4}a}{a+bi}_4. \end{align*} We further observe that $q=a+bi$, $a$, $b \in \ensuremath{\mathbb Z}$, in $\ensuremath{\mathbb Z}[i]$ is congruent to $1 \bmod{(1+i)^3}$ if and only if $a \equiv 1 \pmod{4}$, $b \equiv 0 \pmod{4}$ or $a \equiv 3 \pmod{4}$, $b \equiv 2 \pmod{4}$ by \cite[Lemma 6, p. 121]{I&R}. It follows from this that we have \begin{align} \label{a&b} a \equiv (-1)^{(N(q)-1)/4} \pmod 4, \quad b \equiv 1-(-1)^{(N(q)-1)/4} \pmod 4.
\end{align} As $(-1)^{(N(q)-1)/4}a$ is primary according to \eqref{a&b}, we have by the quartic reciprocity law, \begin{align*} \leg {(-1)^{(N(q)-1)/4}a}{a+bi}_4=(-1)^{((N(a)-1)/4)((N(q)-1)/4)} \leg {a+bi}{a}_4= \leg {a+bi}{a}_4= \leg {bi}{a}_4= \leg {b}{a}_4 \leg {i}{a}_4= \leg {i}{a}_4, \end{align*} where the last equality follows from \cite[Proposition 9.8.5]{I&R}, which states that for $a, b \in \ensuremath{\mathbb Z}$, $(a, 2b)=1$, \begin{align*} \leg {b}{a}_4=1. \end{align*} One of the supplementary laws to the quartic reciprocity law states that if $n=a+bi$ is primary, then \begin{align} \label{2.05} \leg {i}{n}_4=i^{(1-a)/2}. \end{align} It follows from \eqref{2.05} that \begin{align*} \leg {i}{a}_4=i^{(1-(-1)^{\frac {N(q)-1}{4}}a)/2}=(-1)^{(a^2-1)/8} =: \lambda_0(q). \end{align*} It is easy to check that $\lambda_0$ is a ray class character $\pmod {16}$ in $\ensuremath{\mathbb Q}(i)$. \newline On the other hand, note that, by definition, \begin{align*} \leg {(-1)^{(N(q)-1)/4}}{a+bi}_4=(-1)^{\frac {N(q)-1}{4}\cdot \frac {N(q)-1}{4}}=(-1)^{(N(q)-1)/4}=\leg {-1}{q}_4. \end{align*} We then deduce that \begin{align*} \leg {\overline{q}}{q}_4= \leg {-2}{q}_4\lambda_0(q). \end{align*} We conclude from this and \eqref{tau} that for $r \in \ensuremath{\mathbb Z}$ and any $\chi \in S_{4,n}$ which corresponds to $\chi_{4,q}$ by Lemma \ref{lemma:cubicquarticclass}, we have \begin{align*} \tau(r, \chi)=\overline{\leg {-i}{q}}_4 \lambda_0(q)g_4(r, q). \end{align*} \subsection{Analytic behavior of Dirichlet series associated with Gauss sums} \label{section: smooth Gauss} In the proof of Theorem \ref{cubicquarticmean}, we need to know the analytic behavior of certain Dirichlet series associated with cubic or quartic Gauss sums.
For any ray class character $\psi_3$ $\pmod {9}$ in $\ensuremath{\mathbb Q}(\omega)$ and any ray class character $\psi_4$ $\pmod {16}$ in $\ensuremath{\mathbb Q}(i)$, we define \begin{align*} G_3(s,dk; \psi_3)=\sum_{\substack{n \equiv 1 \bmod {3}\\ (n,d)=1}} \frac { \psi_3(n) g_3(dk,n)}{N(n)^{s}}, \quad G_4(s,dk;\psi_4) =\sum_{\substack{n \equiv 1 \bmod {(1+i)^3} \\ (n,d)=1}} \frac {\psi_4(n) g_4(dk,n)}{N(n)^{s}}. \end{align*} We deduce from a general result of S. J. Patterson \cite[Lemma, p. 200]{P} the following analytic behavior of $G_i$, with $i=3,4$ (see also \cite[Lemma 3.5]{B&Y}). \begin{lemma} \label{lem1} The functions $G_i(s,dk; \psi_i), i=3,4$ have meromorphic continuation to the half plane $\Re (s) > 1$. They are holomorphic in the region $\sigma=\Re(s) > 1$ except possibly for a pole at $s = 1+1/i$. For any $\varepsilon>0$, let $\sigma_1 = 3/2+\varepsilon$; then for $\sigma_1 \geq \sigma \geq \sigma_1-1/2$, $|s-(1+1/i)|>1/(2i)$, we have \[ G_i(s,dk;\psi_i) \ll N(dk)^{\frac 12(\sigma_1-\sigma+\varepsilon)}(1+t^2)^{\frac {i-1}2(\sigma_1-\sigma+\varepsilon)}, \] where $t=\Im(s)$ and the norm is taken in the corresponding number field. Moreover, the residue satisfies \[ \mathrm{Res}_{s=1+1/3}G_3(s,dk;\psi_3) \ll N((dk)_1)^{-1/6+\varepsilon}, \quad \mathrm{Res}_{s=1+1/4}G_4(s,dk;\psi_4) \ll N(dk)^{1/8+\varepsilon}, \] where we write $dk=(dk)_1(dk)^2_2(dk)^3_3$ with $(dk)_1(dk)^2_2$ cubic-free in $\ensuremath{\mathbb Z}[\omega]$. \end{lemma} \section{Proof of Theorem \ref{cubicquarticmean}} \label{sec 3} \subsection{Initial Reductions} Let $\Phi(t), W(t)$ be two real-valued and non-negative smooth functions compactly supported in $(0,1)$, satisfying $\Phi(t)=W(t)=1$ for $t \in (1/U, 1-1/U)$ and such that $\Phi^{(j)}(t), W^{(j)}(t)\ll_j U^j$ for all integers $j \geq 0$.
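Such cutoffs exist by the standard $C^\infty$ gluing construction; for concreteness, the following sketch (one common choice, not the only one) builds a $\Phi$ that equals $1$ on $[1/U, 1-1/U]$ and is supported in $(0,1)$:

```python
import math

def psi(x):
    """The classical smooth glue: exp(-1/x) for x > 0, extended by 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def smoothstep(x):
    """C-infinity transition: equals 0 for x <= 0 and 1 for x >= 1."""
    return psi(x) / (psi(x) + psi(1.0 - x))

def Phi(t, U):
    """Smooth cutoff: 1 on [1/U, 1 - 1/U], supported in (0, 1).
    By the chain rule its j-th derivative is O(U^j), as required."""
    return smoothstep(U * t) * smoothstep(U * (1.0 - t))
```

The derivative bound $\Phi^{(j)} \ll_j U^j$ comes from the chain rule, each differentiation of $t \mapsto \mathrm{smoothstep}(Ut)$ producing one factor of $U$.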
We consider the following smoothed sum \begin{align*} S_i(X,Y;U) =\sum_{n}\sum_{\chi \in S_{i,n}} \sum_{m} \chi(m) \Phi\left( \frac {n}{Y} \right) W \left( \frac {m}{X} \right), \quad i=3,4, \end{align*} where $U$ is a parameter to be chosen later. Applying the P\'olya-Vinogradov inequality in a way similar to the argument described in the Introduction, we see that when $Y \leq X$, for any $\varepsilon>0$, \begin{align} \label{1stredn} \Big |S_i(X,Y)- S_i(X,Y;U) \Big | \ll \frac {XY^{1/i+\varepsilon}+Y^{3/2+\varepsilon}}{U}. \end{align} As the treatments for $S_3(X,Y)$ and $S_4(X,Y)$ are similar, we shall concentrate on the proof of the case $S_3(X,Y)$ in what follows and discuss briefly the proof of $S_4(X,Y)$ at the end of the paper. Let $\chi$ be a Dirichlet character of modulus $q$. Then we have the following well-known Poisson summation formula: \begin{equation} \label{eq:Poisson1dim} \sum_{m \in \ensuremath{\mathbb Z}}\chi(m) w\leg{m}{M} = \frac{M}{q} \sum_{k \in \ensuremath{\mathbb Z}} \tau(k, \chi) \widetilde{w}\leg{kM}{q}, \end{equation} where $\tau(k, \chi)$ is the Gauss sum defined in \eqref{taur} and $\widetilde{w}$ is the Fourier transform of $w$.
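Identity \eqref{eq:Poisson1dim} is easy to sanity-check numerically: with the self-dual weight $w(x)=e^{-\pi x^2}$ (whose Fourier transform is itself) and, say, the quadratic character modulo $5$, both sides agree once the rapidly decaying tails are truncated. An illustrative sketch:

```python
import cmath, math

q = 5
def chi(m):                      # quadratic (Legendre) character mod 5
    m %= q
    if m == 0:
        return 0
    return 1 if pow(m, (q - 1) // 2, q) == 1 else -1

def tau(k):                      # Gauss sum: sum over x mod q of chi(x) e(kx/q)
    return sum(chi(x) * cmath.exp(2j * math.pi * k * x / q)
               for x in range(q))

def w(x):                        # self-dual Gaussian weight
    return math.exp(-math.pi * x * x)

M = 3.0
K = 60                           # truncation point; tails are negligible
lhs = sum(chi(m) * w(m / M) for m in range(-K, K + 1))
rhs = (M / q) * sum(tau(k) * w(k * M / q) for k in range(-K, K + 1))
print(abs(lhs - rhs))            # agreement to machine precision
```

The dual sum on the right is much shorter in practice, since $\widetilde{w}(kM/q)$ decays rapidly; this is exactly what makes the Poisson step profitable in the proof.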
\newline Applying \eqref{eq:Poisson1dim} and \eqref{taug3}, we see that \begin{align*} S_3(X,Y;U) &=X \sum_{k \in \ensuremath{\mathbb Z} } \sum_{n}\sum_{\chi \in S_{3,n}} \frac { \tau(k, \chi)}{n} \Phi \left( \frac {n}{Y} \right)\widetilde{W} \leg{kX}{n} \\ & =X \sum_{k \in \ensuremath{\mathbb Z} } \ \sideset{}{'}\sum_{n \equiv 1 \bmod 3} \frac { \overline{\leg {\sqrt{-3}}{n}_3}g_3(k, n)}{N(n)} \Phi \left( \frac {N(n)}{Y} \right)\widetilde{W} \left(\frac {kX}{N(n)}\right) =: M_0+R, \end{align*} where $\Sigma'$ indicates that the sum is over $n \in \ensuremath{\mathbb Z}[\omega]$ with no $\ensuremath{\mathbb Q}$-rational prime divisor, \[ M_0 =X\widetilde{W} \left(0\right) \sideset{}{'}\sum_{n \equiv 1 \pmod 3} \frac { \overline{\leg {\sqrt{-3}}{n}}_3 g_3(0, n)}{N(n)} \Phi \left( \frac {N(n)}{Y} \right), \] and \[ R =X \sum_{\substack{k \in \ensuremath{\mathbb Z} \\ k \neq 0} } \sideset{}{'}\sum_{n \equiv 1 \pmod 3} \frac { \overline{\leg {\sqrt{-3}}{n}}_3 g_3(k, n)}{N(n)} \Phi \left( \frac {N(n)}{Y} \right)\widetilde{W} \left(\frac {kX}{N(n)}\right). \] \subsection{The Term $M_{0}$} We estimate $M_{0}$ first. It follows directly from the definition that $g_3(0,n)=\varphi_{\omega}(n)$ if $n$ is a cube and $g_3(0,n)=0$ otherwise. Here $\varphi_{\omega}(n)$ denotes the number of reduced residue classes in $\ensuremath{\mathbb Z}[\omega]/(n)$. Thus \begin{align*} M_{0}= X\widetilde{W}(0)\sideset{}{'}\sum_{\substack {n \equiv 1 \bmod {3} \\ \text{$n$ a cube}}}\frac {\varphi_{\omega}(n)}{N(n)}\Phi \left( \frac {N(n)}{Y} \right).
\end{align*} As it is easy to see that $n^3$ has no $\ensuremath{\mathbb Q}$-rational prime divisor if and only if $n$ has no $\ensuremath{\mathbb Q}$-rational prime divisor, we can recast $M_0$ upon replacing $n$ by $n^3$ as \begin{align*} M_{0}= X\widetilde{W}(0)\sideset{}{'}\sum_{\substack {n \equiv 1 \bmod {3}}}\frac {\varphi_{\omega}(n^3)}{N(n^3)}\Phi \left( \frac {N(n^3)}{Y} \right) =X\widetilde{W}(0)\sideset{}{'}\sum_{\substack {n \equiv 1 \bmod {3}}}\frac {\varphi_{\omega}(n)}{N(n)}\Phi \left( \frac {N^3(n)}{Y} \right). \end{align*} By Mellin inversion, we have \begin{align*} \Phi \left( \frac {N^3(n)}{Y} \right) = \frac 1{2\pi i}\int\limits_{(2)} \left( \frac{Y}{N^3(n)} \right)^s\widehat{\Phi}(s) \mathrm{d} s \quad \mbox{where} \quad \widehat{\Phi}(s)=\int\limits^{\infty}_{0}\Phi(t)t^{s-1} \mathrm{d} t. \end{align*} Integration by parts shows that $\widehat{\Phi}(s)$ satisfies, for all $\Re(s) > 0$ and all integers $A>0$, the bound \begin{align} \label{boundsforphi} \widehat{\Phi}(s) \ll (1+|s|)^{-A} U^{A-1}. \end{align} We then deduce that \begin{align} \label{M0} M_{0}= \frac{X\widetilde{W}(0)}{2\pi i}\int\limits_{(2)}Y^s\widehat{\Phi}(s)\left( \sideset{}{'}\sum_{n \equiv 1 \bmod 3}\frac {\varphi_{\omega}(n)}{N(n)^{1+3s}} \right) \mathrm{d} s . \end{align} Note that (see \cite[Prop. 9.1.2]{I&R}) a prime $\varpi \in \ensuremath{\mathbb Z}[\omega]$ is not $\ensuremath{\mathbb Q}$-rational if and only if $N(\varpi)=p \equiv 1 \pmod 3$.
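In the Euler products that follow, the average $(\chi_{0,3}(p)+\chi_{1,3}(p))/2$ of the two Dirichlet characters modulo $3$ serves as a detector for the split primes $p \equiv 1 \pmod 3$; a quick numeric confirmation over small primes (illustrative only):

```python
def chi03(n):
    """Principal Dirichlet character modulo 3."""
    return 0 if n % 3 == 0 else 1

def chi13(n):
    """Non-principal Dirichlet character modulo 3."""
    return 0 if n % 3 == 0 else (1 if n % 3 == 1 else -1)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# the character average equals 1 at p = 1 (mod 3) and 0 otherwise
for p in (n for n in range(2, 500) if is_prime(n)):
    detector = (chi03(p) + chi13(p)) / 2
    assert detector == (1 if p % 3 == 1 else 0)
```

This elementary identity is what allows a product restricted to $p \equiv 1 \pmod 3$ to be rewritten as a product over all primes.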
We then deduce that when $\Re(s)>1$, \begin{align*} & \sideset{}{'}\sum_{n \equiv 1 \bmod 3}\frac {\varphi_{\omega}(n)}{N(n)^{1+s}} = \sum_{\substack{ n \\ \varpi | n \Rightarrow N(\varpi)=p \equiv 1 \bmod 3}} \frac {\varphi_{\omega}(n)}{N(n)^{1+s}} \\ = & \prod_{\substack{p \\ p \equiv 1 \bmod 3}} \left( 1+ \left( 1-\frac 1p \right)\frac {p^{-s}}{1-p^{-s}} \right) =\prod_{\substack{p}}\left(1+\left ( \frac {\chi_{0,3}(p)+\chi_{1,3}(p)}{2} \right )(1-\frac 1p)\frac {p^{-s}}{1-p^{-s}} \right ), \end{align*} where $\chi_{0,3}$ is the principal Dirichlet character modulo $3$ and $\chi_{1,3}$ the non-principal Dirichlet character modulo $3$. Let $L(s, \chi_{i,3})$ stand for the corresponding Dirichlet $L$-functions for $i=0$, $1$. Now we define for $\Re(s)>1$, \begin{align*} f(s)= L^{-1}(s, \chi_{0,3})L^{-1}(s, \chi_{1,3}) \prod_{\substack{p}}\left(1+\left ( \frac {\chi_{0,3}(p)+\chi_{1,3}(p)}{2} \right ) \left( 1-\frac 1p \right)\frac {p^{-s}}{1-p^{-s}} \right )^2 . \end{align*} Observe that $f(s)=\prod_pf_p(s)$ with \begin{align*} f_p(s)= & \left(1+\frac {\chi_{0,3}(p)+\chi_{1,3}(p)}{p^{2s}}\cdot \frac {1-p^{s-1}}{1-p^{-s}}- \frac {(\chi_{0,3}(p)+\chi_{1,3}(p))^2}{4} \left(1-\frac 1p \right) \frac {1}{p^{2s}(1-p^{-s})} \right. \\ & \hspace*{1cm} +\frac {\chi_{0,3}(p)\chi_{1,3}(p)}{p^{2s}}+\frac {(\chi_{0,3}(p)+\chi_{1,3}(p))\chi_{0,3}(p)\chi_{1,3}(p)}{4} \left( 1-\frac 1p \right) \frac {1}{p^{3s}(1-p^{-s})} \\ & \hspace*{1cm} \left. +\frac {(\chi_{0,3}(p)+\chi_{1,3}(p))^2}{4}\left( 1-\frac 1p \right) \frac {1}{p^{2s} (1-p^{-s} )^2}\left(1-\frac {\chi_{0,3}(p)+\chi_{1,3}(p)}{p^s}+\frac {\chi_{0,3}(p)\chi_{1,3}(p)}{p^{2s}} \right ) \right). \end{align*} It follows from the expression of $f_p(s)$ that $f(s)$ is analytic for $\Re(s)>1/2$.
\newline We then derive from \eqref{M0} that \begin{align} \label{M0g} M_{0} = X\widetilde{W}(0)\frac 1{2\pi i}\int\limits_{(2)}\frac {Y^sg(s)}{\sqrt{s-1/3}} \mathrm{d} s, \end{align} where \begin{align*} g(s)=\widehat{\Phi}(s)\sqrt{ \left( s-\frac 13 \right)L(3s, \chi_{0,3})L(3s, \chi_{1,3})f(3s)}. \end{align*} It is easy to see that $g(s)$ is analytic in a neighbourhood of $1/3$ and that \begin{equation} \label{gform} g(s)=g(1/3)+O(|s-1/3|) \end{equation} when $s$ is near $1/3$. \newline We now follow a method of E. Landau \cite{Landau1} (see also \cite[p. 187, exercise 21]{MVa1}) by shifting the contour of integration in \eqref{M0g} to $\mathcal{C'} \cup \mathcal{C}$, where $\mathcal{C'}$ is the contour running vertically from $1/3-\varepsilon_0-i\infty$ to $1/3 -\varepsilon_0 - i\delta$ and from $1/3-\varepsilon_0 + i\delta$ to $1/3-\varepsilon_0+i\infty$. The contour $\mathcal{C}$ runs from $1/3-\varepsilon_0-i\delta$ to $1/3 - i\delta$ horizontally, then along the semicircle $1/3 + \delta e^{i\theta}$, $-\pi/2 \leq \theta \leq \pi/2$, and finally along a horizontal line to $1/3 -\varepsilon_0+ i\delta$. Here $\varepsilon_0$ is sufficiently small and $\delta = 1/ \log Y$. \newline To estimate the integral over $\mathcal{C'}$, we note that $L(s, \chi_{1,3})$ is bounded when $\Re(s)>0$ and that (see \cite[p. 100, exercise 3]{iwakow}) when $\sigma=\Re(s) \geq 0$, \begin{align} \label{Lchibound} L(s, \chi_{i,3}) \ll (1+|s|)^{(1-\sigma)/2+\epsilon}, \quad i=0,1. \end{align} We now divide the integral over $\mathcal{C'}$ into two parts, \[ \int\limits_{\mathcal{C}'} = \int\limits_{\mathcal{C}'_1} + \int\limits_{\mathcal{C}'_2} ,\] where $\mathcal{C}'_1$ is the part of $\mathcal{C'}$ with $|\Im(s)| \leq T$ ($T$ is to be chosen later) and $\mathcal{C}'_2$ is the rest.
Applying \eqref{Lchibound} with \eqref{boundsforphi} by taking $A=1$ for the integral over $\mathcal{C}'_1$ and \eqref{Lchibound} with \eqref{boundsforphi} by taking $A=2$ for the integral over $\mathcal{C}'_2$, we deduce that the integral over $\mathcal{C'}$ is \begin{align} \label{estC} \ll XY^{1/3-\varepsilon_0} \left( T^{2\varepsilon_0}+\frac {U}{T^{1-2\varepsilon_0}} \right) \ll XY^{1/3-\varepsilon_0}U^{2\varepsilon_0} , \end{align} upon setting $T=U$. \newline Next, the integral over $\mathcal{C}$ is, using \eqref{gform}, \begin{align*} =\frac 1{2\pi i}\int\limits_{\mathcal{C}}\frac {Y^sg(1/3)}{\sqrt{s-1/3}} \mathrm{d} s+O \left( \int\limits_{\mathcal{C}}\frac {Y^{\sigma}|s-1/3|}{\sqrt{|s-1/3|}} \mathrm{d} |s| \right). \end{align*} Applying \cite[Theorem C.3]{MVa1} with $s = 1/2$, we see that \begin{align} \label{estC'1} \frac 1{2\pi i}\int\limits_{\mathcal{C}}\frac {Y^sg(1/3)}{\sqrt{s-1/3}} \mathrm{d} s=\frac {g(1/3)Y^{1/3}}{\sqrt{\pi \log Y}}+O\left( Y^{1/3-\varepsilon_0} \right). \end{align} On the other hand, it is easy to show that \begin{align} \label{estC'2} \int\limits_{\mathcal{C}}\frac {Y^{\sigma}|s-1/3|}{\sqrt{|s-1/3|}} \mathrm{d} |s| \ll \frac {Y^{1/3}}{(\log Y)^{3/2}} . \end{align} Combining \eqref{estC}, \eqref{estC'1} and \eqref{estC'2}, we see that \begin{align*} M_{0} = g \left(\frac 13 \right)\widetilde{W}(0)\frac {XY^{1/3}}{\sqrt{\pi \log Y}}+O \left( \frac {XY^{1/3}}{(\log Y)^{3/2}}+XY^{1/3-\varepsilon_0}U^{2\varepsilon_0} \right).
\end{align*} Using the observations that \begin{align*} \widehat{\Phi} \left( \frac 13 \right)=3+O \left(\frac 1U \right) \quad \mbox{and} \quad \widetilde{W}(0)=1+O\left( \frac 1U \right), \end{align*} we can further rewrite $M_0$ as \begin{align} \label{M0asymp} M_{0} = C_1\frac {XY^{1/3}}{\sqrt{\log Y}}+O \left( \frac {XY^{1/3}}{(\log Y)^{3/2}}+\frac {XY^{1/3}}{(\log Y)^{1/2}U}+XY^{1/3-\varepsilon_0}U^{2\varepsilon_0} \right), \end{align} where \begin{align} \label{C1} C_1=\frac {3}{\sqrt{\pi}}\sqrt{\lim_{s \rightarrow 1/3} \left( s-\frac 13 \right)L(3s, \chi_{0,3})L(3s, \chi_{1,3})f(3s)}. \end{align} We note that as $\chi_{1,3}$ is quadratic, it is easy to see that $C_1>0$. \subsection{The Term $R$} Now suppose $k \neq 0$. We first apply the M\"obius function to detect the condition that $n \equiv 1 \pmod 3$ has no rational prime divisor, obtaining \begin{align*} R & =X \sum_{\substack{k \in \ensuremath{\mathbb Z} \\ k \neq 0} } \ \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \bmod 3}} \mu_{\ensuremath{\mathbb Z}}(d) \sum_{\substack{ n \in \ensuremath{\mathbb Z}[\omega] \\ n \equiv 1 \bmod 3}} \frac { \overline{\leg {\sqrt{-3}}{nd}}_3 g_3(k, nd)}{N(nd)} \Phi \left( \frac {N(nd)}{Y} \right)\widetilde{W} \left(\frac {kX}{N(nd)}\right), \end{align*} where we define $\mu_{\ensuremath{\mathbb Z}}(d)=\mu(|d|)$, the usual M\"obius function. \newline We now apply \eqref{gprod} to conclude that \begin{align*} R &= X \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0}} \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \bmod 3}} \mu_{\ensuremath{\mathbb Z}}(d)\frac { \overline{\leg {\sqrt{-3}}{d}}_3 g_3(k, d)}{N(d)} H(k,d; X,Y), \end{align*} where \begin{align*} H(k,d; X, Y)= \sum_{\substack{ n \in \ensuremath{\mathbb Z}[\omega] \\ n \equiv 1 \bmod 3}} \frac { \overline{\leg {d\sqrt{-3}}{n}}_3 g_3(k, n)}{N(n)} \Phi \left( \frac {N(nd)}{Y} \right)\widetilde{W} \left(\frac {kX}{N(nd)}\right).
\end{align*} We now write \begin{align*} R = R_1(Z)+R_2(Z), \end{align*} with \[ R_1(Z) = X \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0}} \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \pmod 3 \\ |d| \leq Z}} \mu_{\ensuremath{\mathbb Z}}(d)\frac { \overline{\leg {\sqrt{-3}}{d}_3}g_3(k, d)}{N(d)} H(k,d; X,Y) \] and \[ R_2(Z) = X \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0}} \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \pmod 3 \\ |d| > Z}} \mu_{\ensuremath{\mathbb Z}}(d)\frac { \overline{\leg {\sqrt{-3}}{d}_3}g_3(k, d)}{N(d)} H(k,d; X,Y) . \] We first estimate $R_2(Z)$ by noting that it follows from the definition of $\Phi$ that $H=0$ unless $N(nd) \leq Y$. We also note that if $d$ is square-free as a rational integer, it is also square-free in $\ensuremath{\mathbb Z}[\omega]$. Hence it follows from \eqref{grnbound} that \begin{align} \label{gdbound} g_3(k,d) \leq N(d)^{1/2}. \end{align} We further note that it follows from the definition of $W$ and integration by parts that for any $l \geq 0$, $j \geq 1$, \begin{align} \label{Wbound} \widetilde{W}^{(l)}(t) \ll \frac {U^{j-1}}{|t|^j}. \end{align} Applying the above bound with $l=0$, $j=1$ or $j=2$ together with the trivial bound $g_3(k,n) \leq N(n)$, we deduce that \begin{align*} H(k,d; X, Y) \ll & \sum_{N(n) \leq Y/d^2} \left| \widetilde{W} \left(\frac {kX}{N(nd)}\right) \right| \ll \min \left(\sum_{N(n) \leq Y/d^2}\frac {N(nd)}{|k|X}, \sum_{N(n) \leq Y/d^2}\left(\frac {N(nd)}{kX}\right)^2U \right ) \\ \ll & \min \left(\sum_{N(n) \leq Y/d^2}\frac {Y}{|k|X}, \sum_{N(n) \leq Y/d^2}\left(\frac {Y}{kX}\right)^2U \right )\ll \min \left(\frac {Y^2}{|k|d^2X}, \frac {Y}{d^2}\left(\frac {Y}{kX}\right)^2U \right ). 
\end{align*} Using \eqref{gdbound} and the above bound for $H$ leads us to the bound \begin{align} \label{R2} R_2(Z) \ll X \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0 \\ |k| \leq U}} \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \bmod 3 \\ |d| > Z}} \frac {Y^2}{|k|d^2X}+ X \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0 \\ |k|>U}} \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \bmod 3 \\ |d| > Z}} \frac {Y}{d^3}\left(\frac {Y}{kX}\right)^2U \ll \frac {XY}{Z^2}\left(\frac {Y}{X}\right)^2U^{\varepsilon}. \end{align} Now, to estimate $R_1(Z)$, we apply the Mellin inversion to obtain \[ \Phi \left( \frac {N(n)}{Y} \right) \widetilde{W}\left(\frac {kX}{N(n)}\right) = \frac 1{2\pi i}\int\limits_{(2)} \left( \frac{Y}{N(n)} \right)^s\tilde{f}(s,k) \mathrm{d} s, \quad \mbox{where} \quad \tilde{f}(s,k)=\int\limits^{\infty}_{0}\Phi(t)\widetilde{W}\left(\frac {kX}{Yt}\right) t^{s-1} \mathrm{d} t. \] Integration by parts and using \eqref{Wbound} shows that $\tilde{f}(s,k)$ is a function satisfying the bound \begin{align} \label{boundsforf} \tilde{f}(s,k) \ll (1+|s|)^{-D} \left( 1+\frac {|k|X}{Y} \right)^{-E+D} U^{E-1}, \end{align} for all $\Re(s) > 0$, and integers $D \geq 0, E>0$. We deduce from the above discussions and \eqref{eq:gmult} that \begin{align*} H(k,d; X, Y) &= \frac 1{2\pi i}\int\limits_{(2)}\tilde{f}(s,k)\left ( \frac {Y}{d^2} \right )^sG(1+s,dk) \mathrm{d} s, \quad \mbox{where} \quad G(1+s,dk) = \sum_{\substack{n \in \ensuremath{\mathbb Z}[\omega] \\ n \equiv 1 \bmod 3 \\ (n,d)=1}} \frac { \overline{\leg {\sqrt{-3}}{n}}_3 g_3(dk, n)}{N(n)^{1+s}}. \end{align*} We now move the line of integration to the line $\Re(s) = \varepsilon$. By Lemma \ref{lem1}, the only possible pole is at $s = 1/3$.
Thus we may write $R_1(Z) = R_{1,1}(Z)+R_{1,2}(Z)$, where \begin{align*} R_{1,1}(Z) = & X Y^{1/3} \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0}} \ \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \bmod 3 \\ |d| \leq Z}} \mu_{\ensuremath{\mathbb Z}}(d)\frac { \overline{\leg {\sqrt{-3}}{d}}_3 g_3(k, d)}{N(d)d^{2/3}} \tilde{f} \left( \frac1{3}, k \right)\text{Res}_{s=1/3} G(1+s, dk), \\ R_{1,2}(Z) = & \frac{X}{2\pi i} \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0}} \ \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ d \equiv 1 \bmod 3 \\ |d| \leq Z }} \mu_{\ensuremath{\mathbb Z}}(d)\frac { \overline{\leg {\sqrt{-3}}{d}}_3 g_3(k, d)}{N(d)} \int\limits_{(\varepsilon)}\tilde{f}(s,k)\left ( \frac {Y}{d^2} \right )^sG(1+s,dk) \mathrm{d} s. \end{align*} To estimate $R_{1,1}(Z)$, we note first that by the remark above \eqref{grnbound}, we may restrict the sum over $d$ to those satisfying $(d,k)=1$. We then apply Lemma \ref{lem1} and the bound \eqref{boundsforf} with $D=0, E=1$ to obtain \begin{align} \label{R11} R_{1,1}(Z) \ll & X Y^{1/3} \sum_{\substack {k=k_1k_2^2k_3^3 \in \ensuremath{\mathbb Z} \\ k \neq 0}} \left ( \frac {Y}{|k|X} \right ) \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ 0 \neq |d| \leq Z\\ (d,k)=1}} \frac {N(dk_1)^{-1/6+\varepsilon}}{|d|^{5/3}} \ll X Y^{1/3}\left ( \frac {Y}{X} \right ). \end{align} To estimate $R_{1,2}(Z)$, we apply Lemma \ref{lem1} and bound \eqref{boundsforf} with $D=2$, $E=4$ to get that \begin{equation} \label{R12} \begin{split} R_{1,2}(Z) \ll & X Y^{\varepsilon} \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0 }}\left ( \frac {Y}{kX} \right )^2U^3 \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ 0 \neq |d| \leq Z}} \frac {N(d)^{1/4}N(k)^{1/4}}{|d|} \int\limits_{\ensuremath{\mathbb R}}(1+|t|)^{-1-\varepsilon} \mathrm{d} t \\ \ll & X Y^{\varepsilon} \left ( \frac {Y}{X} \right )^2 U^{3}Z^{1/2}.
\end{split} \end{equation} Combining \eqref{R2}, \eqref{R11} and \eqref{R12}, we conclude that \begin{align} \label{R} R \ll & \frac {XY}{Z^2}\left(\frac {Y}{X}\right)^2U^{\varepsilon}+X Y^{1/3}\left ( \frac {Y}{X} \right )+ X Y^{\varepsilon} \left ( \frac {Y}{X} \right )^2 U^{3}Z^{1/2}. \end{align} \subsection{Conclusion} We now combine \eqref{1stredn}, \eqref{M0asymp} and \eqref{R} and adjust the value of $\varepsilon_0$ to see that \begin{equation} \label{S3final} \begin{split} S_3(X,Y)= C_1\frac {XY^{1/3}}{\sqrt{\log Y}}+& O\left( \frac {XY^{1/3}}{(\log Y)^{3/2}}+X Y^{1/3} \left( \frac {Y}{X} \right) + XY^{1/3-\varepsilon}U^{2\varepsilon} \right.\\ & \hspace*{1cm} \left. + \frac {XY^{1/3+\varepsilon}+Y^{3/2+\varepsilon}}{U}+\frac {XY}{Z^2}\left(\frac {Y}{X}\right)^2U^{\varepsilon}+ X Y^{\varepsilon} \left ( \frac {Y}{X} \right )^2 U^{3}Z^{1/2} \right) , \end{split} \end{equation} where $C_1$ is given in \eqref{C1}. As $Y \geq X^{6/7}$ implies that $Y^{3/2} \geq XY^{1/3}$, we now choose the values of $U, Z$ so that \begin{align} \label{UZ1} \frac {Y^{3/2}}{U}= \frac {XY}{Z^2}\left(\frac {Y}{X}\right)^2= X\left ( \frac {Y}{X} \right )^2 U^{3}Z^{1/2} . \end{align} We then deduce that \begin{align*} U = Y^{3/34}\left(\frac {X}{Y}\right)^{5/17}. \end{align*} Substituting the above value of $U$ into \eqref{S3final}, making use of \eqref{UZ1} and our assumption that $Y \geq X^{6/7}$, we arrive at the expression given in \eqref{S3asymp}. \newline We end this paper by giving a sketch of the proof of \eqref{S4asymp}.
We proceed in a way similar to the proof of \eqref{S3asymp} and the main term corresponding to $M_0$ given in \eqref{M0asymp} is \begin{align*} C_2\frac {XY^{1/4}}{\sqrt{\log Y}}+O \left( \frac {XY^{1/4}}{(\log Y)^{3/2}}+\frac {XY^{1/4}}{(\log Y)^{1/2}U}+XY^{1/4-\varepsilon_0}U^{3\varepsilon_0} \right), \end{align*} where \begin{equation} \label{C2} C_2= \frac {4}{\sqrt{\pi}}\sqrt{\lim_{s \rightarrow 1/4} \left( s-\frac 14 \right)L(4s, \chi_{0,4})L(4s, \chi_{1,4})h(4s)}, \end{equation} with \[ h(s)= L^{-1}(s, \chi_{0,4})L^{-1}(s, \chi_{1,4})\left ( \prod_{\substack{p}}\left(1+\left ( \frac {\chi_{0,4}(p)+\chi_{1,4}(p)}{2} \right ) \left(1-\frac 1p \right)\frac {p^{-s}}{1-p^{-s}} \right ) \right )^2. \] Here $\chi_{0,4}$ is the principal Dirichlet character modulo $4$ and $\chi_{1,4}$ the non-principal Dirichlet character modulo $4$, and $L(s, \chi_{i,4})$ stands for the corresponding Dirichlet $L$-function for $i=0,1$. It is also easy to see that $C_2>0$. \newline The estimation corresponding to $R_2$ for $S_3(X,Y)$ remains the same for $S_4(X,Y)$. The estimation corresponding to $R_{1,1}$ for $S_3(X,Y)$ in this case becomes (note that by \eqref{g4prod} and \eqref{eq:gmult}, we have $g_4(d^2k, n)$ in place of $g_3(dk,n)$) \begin{align*} X Y^{1/4} \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0 \\ |k| \leq YU^3/X}} & \left ( \frac {Y}{|k|X} \right ) \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ 0 \neq |d| \leq Z}} \frac {N(d^2)^{1/8+\varepsilon}N(k)^{1/8+\varepsilon}}{|d|}+ X Y^{1/4} \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0 \\ |k| >YU^3/X}}\left ( \frac {Y}{|k|X} \right )^2U^3 \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ 0 \neq |d| \leq Z}} \frac {N(d^2)^{1/8+\varepsilon}N(k)^{1/8+\varepsilon}}{|d|} \\ \ll & X Y^{1/4} \left ( \frac {Y}{X} \right )^{5/4} U^{3/4+\varepsilon}Z^{1/2+\varepsilon}, \end{align*} provided that \begin{align} \label{Ucondition} YU^3/X \geq 1.
\end{align} The terms in $S_4(X,Y)$ analogous to $R_{1,2}$ in $S_3(X,Y)$ become (by applying Lemma \ref{lem1} and \eqref{boundsforf} with $D=3$, $E=5$) \begin{align*} \ll X Y^{\epsilon} \sum_{\substack {k \in \ensuremath{\mathbb Z} \\ k \neq 0}}\left ( \frac {Y}{kX} \right )^2U^4 \sum_{\substack{ d \in \ensuremath{\mathbb Z} \\ 0 \neq |d| \leq Z}} \frac {N(d^2)^{1/4}N(k)^{1/4}}{|d|} \int\limits_{\ensuremath{\mathbb R}}(1+|t|)^{-3/2} \mathrm{d} t \ll X Y^{\epsilon} \left ( \frac {Y}{X} \right )^2 U^4 Z. \nonumber \end{align*} We then conclude that when $Y \geq X^{4/5}$, \begin{align*} S_4(X,Y)=C_2 \frac {XY^{1/4}}{\sqrt{\log Y}} +& O \left( \frac {XY^{1/4}}{(\log Y)^{3/2}} + XY^{1/4-\varepsilon}U^{3\varepsilon} \right. \\ & \hspace*{1cm} \left. +X Y^{1/4}\left ( \frac {Y}{X} \right )^{5/4}U^{3/4+\varepsilon}Z^{1/2+\varepsilon}+ \frac {Y^{3/2+\varepsilon}}{U}+\frac {XY}{Z^2}\left(\frac {Y}{X}\right)^2U^{\varepsilon}+ X Y^{\varepsilon} \left ( \frac {Y}{X} \right )^2 U^4Z \right) , \end{align*} where $C_2$ is given in \eqref{C2}. \newline We now choose $U, Z$ such that \begin{align*} X Y^{1/4}\left ( \frac {Y}{X} \right )^{5/4}U^{3/4}Z^{1/2}= \frac {XY}{Z^2}\left(\frac {Y}{X}\right)^2=X \left ( \frac {Y}{X} \right )^2 U^4Z. \end{align*} This implies that \begin{align*} Z=Y^{9/31}\left (\frac {Y}{X}\right )^{12/31}, \quad U=Y^{1/31}\left (\frac {X}{Y} \right )^{9/31}. \end{align*} One checks that \eqref{Ucondition} is satisfied when $Y \geq X^{4/5}$ and this leads to the expression for $S_4(X,Y)$ in \eqref{S4asymp}. \newline \noindent{\bf Acknowledgments.} P. G. is supported in part by NSFC grant 11871082 and L. Z. by the FRG grant PS43707 and the Faculty Silverstar Award PS49334. Parts of this work were done when P. G. visited the University of New South Wales (UNSW) in August 2018. He wishes to thank UNSW for the invitation, financial support and warm hospitality during his pleasant stay.
\section{Introduction} In this paper, we study how noisy memory stochastic differential equations (SDEs), introduced in Dahl et al.~\cite{Dahl}, are connected to Volterra equations. We also discuss existence and uniqueness of solutions to noisy memory SDEs. Since such equations usually cannot be solved analytically, we derive an Euler-Maruyama scheme for a numerical approximation of the solution. We prove that this scheme has mean square order of convergence $\sqrt{\Delta t}$. One should note the following unique features of the analysis: \begin{itemize} \item{The stochastic differential equation (SDE) is driven by {\it generalized noisy memory}: The evolution of the state $X$ at any time $t$ depends on its past history $\int_{t-\delta}^t \phi(t,s) X(s) \, dB(s)$, where $\delta$ is the memory span and $dB$ is white noise.} \item{Noisy memory SDEs where the memory does not include a time-dependent function can be rewritten as two-dimensional SDEs with delay (see Dahl et al.~\cite{Dahl}). Hence, one may solve such equations using numerical methods for delay SDEs, see e.g., Buckwar~\cite{Buckwar}, Carletti~\cite{Carletti}, Mao and Sabanis~\cite{MaoSabanis} and Milstein and Tretyakov~\cite{Milstein}. However, scaling the memory by a time-dependent function implies that generalized noisy memory SDEs cannot be rephrased as SDEs with delay. To the best of our knowledge, no current numerical methods work for approximating the solutions of generalized noisy memory SDEs. However, our numerical scheme works for all noisy memory SDEs, including the generalized ones.} \item{We prove that the Euler-Maruyama scheme has mean square order of convergence $\sqrt{\Delta t}$. This is the same rate as for the Euler-Maruyama method for classical SDEs.
Hence, the added complexity from the noisy memory in the SDE does not reduce the order of convergence of the Euler-Maruyama scheme.} \end{itemize} Noisy memory SDEs can be applied to model animal populations where the population growth depends in some stochastic way on the previous population states, as well as the current number of animals. This kind of memory effect can be useful in the modeling of species where there is a natural delay in the population growth caused by e.g., hatching of eggs for fish, or larvae becoming butterflies. This delay may depend on time, through for instance seasonal weather effects. This motivates generalized noisy memory SDEs. For applications of stochastic delay equations connected to population dynamics, see~\cite{Kuang}. Other applications of stochastic delay equations include the spread of infectious diseases, see Beretta et al.~\cite{BerettaEtAl}, applications in physics and engineering, see Kolmanovskii and Myshkis~\cite{Kolmanovskii}, and financial applications, see {\O}ksendal and Sulem~\cite{OksendalSulem}. Stochastic systems with memory, and the related field of stochastic systems with delay, have been important fields of research in recent years, see for example Mohammed and Zhang~\cite{MohammedZhang}, Mohammed~\cite{Mohammed_Memory} and {\O}ksendal, Sulem and Zhang~\cite{OSZ1}. Introducing a noisy memory $Z(t)$, as opposed to a perfect memory, is a natural generalization of this research. Verriest and Florchinger~\cite{Verriest}, Verriest~\cite{Verriest2} and Verriest and Michiels~\cite{VerriestMichiels} all consider stochastic delay differential equations and derive corresponding stability results on the solution. Li and Cao~\cite{Li} derive a two-step Euler-Maruyama method for a nonlinear neural stochastic delay differential equation.
The structure of the paper is as follows: In Section~\ref{sec: NoisySDEandEuler} we introduce the noisy memory SDE, show a connection between noisy memory SDEs and stochastic Volterra equations and give the Euler scheme to approximate the solution of the noisy SDE. Then, in Section~\ref{sec: main_result} we state our main result on the order of convergence of the Euler method and prove several lemmas needed to prove this. In Section~\ref{sec: proof_main}, we complete the proof of the main theorem. Finally, in Section~\ref{sec: num_ex}, we derive an analytical solution to a noisy SDE, and give a numerical example illustrating the convergence of the Euler method. \section{The noisy memory SDE and the Euler scheme} \label{sec: NoisySDEandEuler} In this section, we introduce a stochastic differential equation with noisy memory, and derive the corresponding Euler scheme. Let $B_t(\omega) = B(t, \omega); (t, \omega) \in [-\delta, \infty) \times \Omega$ be a Brownian motion on a complete filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \geq 0}, P)$. We assume that $\mathbb{F} := \{\mathcal{F}_t\}_{t \geq 0}$ is the filtration generated by $\{B_t\}_{t \geq 0}$ (augmented with the $P$-null sets). Let $\delta > 0$ be a time-delay determining the length of the memory-interval. Also, consider the functions $b : \Omega \times [0,T] \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, $\sigma : \Omega \times [0,T] \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, $\xi : [0,T] \rightarrow \mathbb{R}$ and $\phi :[0,T] \times [0,T] \rightarrow \mathbb{R}$. 
We will develop a numerical method for the following stochastic differential equation with noisy memory: \begin{equation} \label{eq: noisy_mem_SDE} \begin{array}{llll} dX(t) &=& b(t,X(t),Z(t))dt + \sigma(t,X(t),Z(t))dB(t), \mbox{ } t \in (0,T] \\[\smallskipamount] X(t) &=& \xi(t), \mbox{ } t \in [-\delta,0], \end{array} \end{equation} \noindent where the stochastic process \begin{equation} \label{eq: Z} Z(t) := \int_{t-\delta}^t \phi(t, s)X(s) dB(s) \end{equation} \noindent is the (generalized) \emph{noisy memory} of $X(t)$, see also Dahl et al.~\cite{Dahl}. If $\phi(t,s)=1$ for all $t,s \in [0,T]$, equation~\eqref{eq: noisy_mem_SDE} is a non-generalized (or regular) noisy memory SDE. For more on stochastic differential equations in general, see for instance {\O}ksendal~\cite{Oksendal}. The parameter $\delta$ is the memory parameter which gives the length of the memory-interval. Note that the memory is noisy due to the It{\^o} integral in the definition. Intuitively, this means that the system does not have a perfect memory, but a slightly distorted one. The (deterministic) function $\phi$ inside the noisy memory It{\^o} integral allows the noisy memory to vary both with the time of the memory and with the current time. \begin{remark} \label{remark: Buckwar} \rm{Buckwar~\cite{Buckwar2} considers the same type of memory as in~\eqref{eq: Z}, but with a deterministic Lebesgue integral instead of a stochastic It{\^o} integral. That is, they consider a deterministically distributed memory instead of a stochastically distributed one, as we do.} \end{remark} Note that the Brownian motion is also defined for negative times, i.e., on $[-\delta,0]$. For a detailed presentation of how this is done, see Holden et al.~\cite{Holden} (Section 2.1.1).
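To make the discretization concrete, the following minimal Python sketch implements a naive Euler--Maruyama step for~\eqref{eq: noisy_mem_SDE}, with the noisy memory~\eqref{eq: Z} replaced by a sum of the form $\sum_j \phi(t_i,t_j)X_j \Delta B_j$ over the window $[t_i-\delta, t_i]$. All function names are ours, the coefficients passed in are illustrative placeholders, and we assume for simplicity that $\delta$ is an integer multiple of the step size; this is a sketch, not a verbatim transcription of the scheme analyzed later in the paper.

```python
import numpy as np

# Hedged illustrative sketch (not verbatim from the paper): a naive
# Euler-Maruyama discretization of dX = b(t, X, Z) dt + sigma(t, X, Z) dB,
# where the noisy memory Z(t_i) is approximated by
# sum_j phi(t_i, t_j) X_j dB_j over the window [t_i - delta, t_i].
def euler_noisy_memory(b, sigma, phi, xi, delta, T, N, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    dt = T / N
    m = int(round(delta / dt))                      # grid points per memory window
    t = np.linspace(-delta, T, N + m + 1)           # grid on [-delta, T]
    dB = rng.normal(0.0, np.sqrt(dt), size=N + m)   # increments ~ N(0, dt)
    X = np.empty(N + m + 1)
    X[: m + 1] = [xi(s) for s in t[: m + 1]]        # initial path on [-delta, 0]
    for i in range(m, N + m):
        # discretized noisy memory over [t_i - delta, t_i]
        Z = sum(phi(t[i], t[j]) * X[j] * dB[j] for j in range(i - m, i))
        X[i + 1] = X[i] + b(t[i], X[i], Z) * dt + sigma(t[i], X[i], Z) * dB[i]
    return t, X
```

As a quick sanity check, with $\sigma \equiv 0$ and a drift that ignores the memory, the recursion collapses to the classical deterministic Euler method.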
\subsection{A connection between noisy memory SDEs and stochastic Volterra equations} \label{sec: Volterra} Consider a very simple SDE with noisy memory, \begin{equation} \label{eq: simple_SDE_noisy} dX(t) = Z(t)dt = (\int_{t-\delta}^t X(u) dB(u)) dt \end{equation} \noindent where $Z$ is given as in equation~\eqref{eq: Z} with $\phi(t,s)=1$ for all $t,s \in [0,T]$. By changing the order of integration in \eqref{eq: simple_SDE_noisy}, we see that \begin{equation} \label{eq: noisySDE_Volterra_simple} \begin{array}{llll} X(t) &=& X(0) + \int_0^t \int_{s-\delta}^s X(u) dB(u) ds \\[\smallskipamount] &=& X(0) + \int_0^t \int_u^{\min\{u+\delta, t\}} 1 ds X(u) dB(u) \\[\smallskipamount] &=& X(0) + \int_0^t \min\{t-u, \delta\} X(u) dB(u). \end{array} \end{equation} This is a linear stochastic Volterra equation, see {\O}ksendal and Zhang~\cite{OksendalZhang_Volterra}. Such equations do not have a simple analytical solution. However, they can be solved using an iterative method, see {\O}ksendal and Zhang~\cite{OksendalZhang_Volterra} and \cite{OksendalZhang_linear}. Now, consider equation~\eqref{eq: noisy_mem_SDE}.
If we assume that $b$ may be decomposed as $b(t,X(t),Z(t))= \tilde{b}(t,X(t)) + aZ(t)$, where $a \in \mathbb{R}$ and $\sigma(t,X(t),Z(t)) = \sigma(t,X(t))$, then we can rewrite equation~\eqref{eq: noisy_mem_SDE} as a stochastic Volterra equation: \begin{equation} \label{eq: noisy_mem_SDE_Volterra} \begin{array}{llll} X(t) &=& X(0) + \int_0^t b(s,X(s),Z(s))ds + \int_0^t\sigma(s,X(s))dB(s) \\[\smallskipamount] &=& X(0) + \int_0^t \tilde{b}(s,X(s))ds + a\int_0^t \int_{s-\delta}^s \phi(s,u) X(u)dB(u)ds \\[\smallskipamount] &&+ \int_0^t\sigma(s,X(s))dB(s) \\[\smallskipamount] &=& X(0) + \int_0^t \tilde{b}(s,X(s))ds + a\int_0^t \int_{u}^{\min\{u+\delta,t \}} \phi(s,u) ds X(u)dB(u) \\[\smallskipamount] &&+ \int_0^t\sigma(s,X(s))dB(s) \\[\smallskipamount] &=& X(0) + \int_0^t \tilde{b}(s,X(s))ds + \int_0^t \{ \tilde{\phi}(t,s) X(s) + \sigma(s,X(s)) \}dB(s) \\[\smallskipamount] \end{array} \end{equation} \noindent where the third equality follows from the same kind of calculations as in~\eqref{eq: noisySDE_Volterra_simple} and $\tilde{\phi}(t,s):= a \int_{s}^{\min\{ s+\delta,t \}} \phi(u,s) du$. This is a stochastic Volterra equation. Conditions for the existence of a unique solution to such equations can be found in e.g., Wang~\cite{Wang}. The previous argument shows that non-trivial noisy memory SDEs are at least as difficult to solve as stochastic Volterra equations. \subsection{Existence of solution} In this section, we prove some results on the existence of a unique solution to equation~\eqref{eq: noisy_mem_SDE}. \begin{theorem} \label{thm: existence_non_general} In the (non-generalized) case where $\phi(t,s)=1$ for all $t,s \in [0,T]$, the following assumptions on the coefficient functions $b$ and $\sigma$ are sufficient for the existence of a unique solution to equation~\eqref{eq: noisy_mem_SDE}: \begin{enumerate} \item[$(i)$] The functions $ b(\omega, t,\cdot)$ and $\sigma(\omega,t, \cdot)$ are assumed to be $C^1$ for each fixed $\omega \in \Omega,t \in [0,T]$. 
\item[$(ii)$] The functions $b(\cdot,x,z)$ and $\sigma(\cdot, x,z)$ are predictable for each $x,z$. \item[$(iii)$] \emph{Lipschitz condition:} The functions $b$ and $\sigma$ are Lipschitz continuous in the variables $x$ and $z$ with a Lipschitz constant $D$ which is independent of the variables $t, \omega$, i.e.: \[ \begin{array}{lll} |b(t,x_1,z_1)-b(t,x_2,z_2)| \leq D(|x_1-x_2| + |z_1-z_2|) \\[\smallskipamount] |\sigma(t,x_1,z_1)-\sigma(t,x_2,z_2)| \leq D(|x_1-x_2| + |z_1-z_2|). \end{array} \] \item[$(iv)$] \emph{Linear growth condition:} The functions $b$ and $\sigma$ satisfy the linear growth condition in the variables $x$ and $z$ with the linear growth constant $C$ independent of the variables $t,\omega$, i.e.: \[ \begin{array}{lll} |b(t,x,z)| + |\sigma(t,x,z)| \leq C(1 + |x| + |z|). \end{array} \] \end{enumerate} \end{theorem} \begin{proof} Assumptions $(i)$ and $(ii)$ are sufficient to ensure that the integrands in equation~\eqref{eq: noisy_mem_SDE} have predictable versions, whenever $X$ is c\`{a}dl\`{a}g and adapted. Together with the Lipschitz and linear growth conditions, this ensures that there exists a unique c\`{a}dl\`{a}g adapted solution $X$ to the equation~\eqref{eq: noisy_mem_SDE}, satisfying \[ E[\sup_{t\in[-\delta,T]}|X(t)|^2]<\infty. \] This can be seen by regarding equation~\eqref{eq: noisy_mem_SDE} as a stochastic functional differential equation in the sense of Mohammed~\cite{MR754561}. \end{proof} However, we are also interested in having conditions for there to exist a unique solution to equation~\eqref{eq: noisy_mem_SDE} for some general function $\phi(t,s)$: \begin{theorem} \label{thm: generalized} Consider the generalized case, where $\phi(t,s)$ is some arbitrary function. Assume that $b(t,X(t),Z(t))= \tilde{b}(t,X(t)) + aZ(t)$, where $a \in \mathbb{R}$ and $\sigma(t,X(t),Z(t)) = \sigma(t,X(t))$. 
Then, under some fairly weak additional regularity conditions (see Wang~\cite{Wang}), there exists a unique solution to the noisy memory SDE~\eqref{eq: noisy_mem_SDE}. \end{theorem} \begin{proof} In this setting, the derivation of Section~\ref{sec: Volterra} combined with the conditions in Wang~\cite{Wang} guarantees existence of a unique solution. \end{proof} Note that it may be possible to prove the existence and uniqueness of a solution to equation~\eqref{eq: noisy_mem_SDE} in general (without assumptions on the functions $b$ and $\sigma$). However, this is beyond the scope of this paper. The Euler method presented here holds for all SDEs of the form \eqref{eq: noisy_mem_SDE}. \subsection{The Euler scheme} Let $\omega \in \Omega$ be a scenario. Let $N > 0$ be a (large) natural number and let the time step in the approximation be $\Delta t := T/N$. Then, $\Pi_{pos}:=\{n \Delta t\}_{n =0,1,\ldots,N}$ is a partition of the time interval $[0,T]$. Similarly, one can partition the interval $[-\delta,0]$ as $\Pi_{neg} :=\{-\delta, -\delta + \Delta t, ..., -\delta + k \Delta t\}$, where $k$ is the largest integer such that $-\delta + k \Delta t \leq 0$. For $i=1,\ldots, N$, let $\Pi_i$ denote the partition of the interval $[t_i-\delta, t_i]$ given by \[ \Pi_i := (\Pi_{pos} \cup \Pi_{neg}) \cap [t_i-\delta,t_i], \] \noindent i.e., the partition of $[t_i-\delta,t_i]$ coming from the partition of the whole time interval.
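As a concrete illustration, the grids $\Pi_{pos}$, $\Pi_{neg}$ and $\Pi_i$ can be generated as follows (a small Python sketch under the conventions above; the function names are ours, and a small tolerance guards the floating-point interval test):

```python
import numpy as np

def partitions(T, N, delta):
    """Sketch of the grids Pi_pos, Pi_neg and Pi_i = (Pi_pos u Pi_neg) cap [t_i - delta, t_i]."""
    dt = T / N
    pi_pos = [n * dt for n in range(N + 1)]
    k_max = int(np.floor(delta / dt))        # largest k with -delta + k*dt <= 0
    pi_neg = [-delta + k * dt for k in range(k_max + 1)]
    grid = sorted(set(pi_neg) | set(pi_pos))
    def pi_i(i):
        t_i = i * dt                         # window [t_i - delta, t_i]
        return [s for s in grid if t_i - delta - 1e-12 <= s <= t_i + 1e-12]
    return grid, pi_i
```

For instance, with $T=1$, $N=10$ and $\delta=0.25$ the window $[t_3-\delta, t_3]$ contains the three grid points $0.1$, $0.2$ and $0.3$.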
A natural generalization of the Euler scheme for standard SDEs (see for instance Iacus~\cite{Iacus}) to the noisy memory SDE case is the following: \begin{equation} \label{eq: euler_scheme} \begin{array}{lll} X_{i+1}(\omega) = X_i(\omega) + b(t_i, X_i(\omega), Z_i(\omega)) \Delta t + \sigma(t_i,X_i(\omega),Z_i(\omega)) \Delta B_i(\omega) \end{array} \end{equation} \noindent where the increments $\Delta B_i(\omega) := B(t_{i+1},\omega) - B(t_{i},\omega)$ of the Brownian motion are independent and normally distributed with mean $0$ and variance $\Delta t$, and $Z_i(\omega) := \sum_{t_j \in \Pi_i} \phi(t_i,t_j) X_j(\omega) \Delta B_j(\omega)$ approximates the noisy memory process. Note also that this is a pathwise (i.e., $\omega$-wise) approximation. However, in the next section, we will study the mean square error of the approximation in order to determine the convergence properties of this approximation to the exact solution. Throughout the paper, we will assume that $\delta > \Delta t$. This assumption is valid, as we are interested in what happens for small time steps. \section{The main result} \label{sec: main_result} It turns out that the noisy memory Euler scheme~\eqref{eq: euler_scheme} has mean-square order of convergence $\sqrt{\Delta t}$, which is the same as for ordinary SDEs (see Allen~\cite{Allen} and Mao~\cite{Mao}, Theorem 7.3). We summarize this in the following main result: \begin{theorem} \label{thm: Euler_scheme} The Euler approximation scheme for the solution of the stochastic noisy memory SDE \eqref{eq: noisy_mem_SDE} with constant time steps $\Delta t =\frac{T}{N}$ has mean-square order of convergence $\sqrt{\Delta t}$. That is, there exists a constant $\tilde{C}(T)$ such that if $X$ is the exact solution of the noisy memory SDE and $X_i$ is the approximated solution (at the same point), then \[ E[(X(t_i) - X_i)^2] \leq \tilde{C}(T) \Delta t \] \noindent in all the approximation points $t_i$, $i=1, \ldots, N$.
\end{theorem} The rest of this section is devoted to some lemmas which are needed to prove this theorem. The final proof of Theorem~\ref{thm: Euler_scheme} will be given in Section~\ref{sec: proof_main}. \subsection{Some lemmas} In this section we prove some lemmas concerning the solution of the noisy memory SDE, which will be used later on in order to compute the order of convergence for the Euler approximation scheme. We need some Lipschitz-type conditions on the given functions. Assume that there exist constants $K_1, K_2 > 0$ (independent of $\omega \in \Omega$) such that \begin{equation} \label{assumption: b_and_sigma} \begin{array}{llll} |b(t,x_1,z_1) - b(s,x_2,z_2)|^2 \leq K_1(|t-s| + |x_1-x_2|^2 + |z_1-z_2|^2) \\[\smallskipamount] |\sigma(t,x_1,z_1) - \sigma(s,x_2,z_2)|^2 \leq K_2(|t-s| + |x_1-x_2|^2 + |z_1-z_2|^2). \end{array} \end{equation} \noindent We also assume that there exist constants $K_3, K_4 > 0$ such that \begin{equation} \label{assumption: b_and_sigma_2} \begin{array}{llll} b(t,x,z)^2 \leq K_3(1 + x^2 + z^2) \\[\smallskipamount] \sigma(t,x,z)^2 \leq K_4(1 + x^2 + z^2). \end{array} \end{equation} \noindent For notational simplicity, we let $k=\max\{K_1, K_2, K_3, K_4\}$. In addition, we assume that the (real valued, deterministic) function $\phi$ is square integrable, so there exists a constant $\tilde{K}$ such that \begin{equation} \label{eq: sq_int} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \phi(t,s)^2 dt ds \leq \tilde{K}. \end{equation} % % \noindent In the following, let $X$ be the solution of the noisy memory SDE~\eqref{eq: noisy_mem_SDE} and let $Z$ be the corresponding noisy memory process. In the following proofs, we will often use the inequality \begin{equation} \label{eq: quad} 2|ab| \leq a^2 + b^2. \end{equation} \noindent Note that inequality \eqref{eq: quad} implies that $(a+b)^2 \leq 2a^2 + 2b^2$.
\begin{lemma} \label{lemma: Xbounded} There exists a constant $M > 0$ such that $E[X(t)^2] \leq M$ for all $t \in [0,T]$. \end{lemma} \begin{proof} Define $v(t) := E[X(t)^2]$. Then, \[ \begin{array}{lll} v(t) &= E[(X(0) + \int_0^t b(s,X(s),Z(s)) ds + \int_0^t \sigma(s,X(s),Z(s)) dB(s))^2] \\[\smallskipamount] &\leq 4E[X(0)^2] + 4E[(\int_0^t b(s,X(s),Z(s))ds)^2] + 2E[(\int_0^t \sigma(s,X(s),Z(s))dB(s))^2] \\[\smallskipamount] &= 4E[X(0)^2] + 4E[(\int_0^t b(s,X(s),Z(s))ds)^2] + 2E[\int_0^t \sigma(s,X(s),Z(s))^2 ds] \\[\smallskipamount] &\leq 4E[X(0)^2] + 4t k \int_0^t E[1 + X(s)^2+ (\int_{s-\delta}^s \phi(s,u) X(u)dB(u))^2]ds \\[\smallskipamount] &\hspace{0.3cm} +2 k \int_0^t E[1 + X(s)^2+ (\int_{s-\delta}^s \phi(s,u) X(u)dB(u))^2]ds \\[\smallskipamount] &\leq 4E[X(0)^2] + 4t k \tilde{K} \int_0^t E[1 + X(s)^2+ \int_0^t X(u)^2 du]ds \\[\smallskipamount] &\hspace{0.3cm} +2 k \tilde{K} \int_0^t E[1 + X(s)^2+ \int_0^t X(u)^2du]ds \\[\smallskipamount] &= 2(2E[X(0)^2] + k \tilde{K} t (2t+1)) + 2 k \tilde{K} (2t+1)(1+t) \int_0^t v(s)ds \end{array} \] \noindent where the first inequality uses inequality \eqref{eq: quad} twice, the second equality uses the It{\^o} isometry, the second inequality uses the Cauchy-Schwarz inequality and assumption \eqref{assumption: b_and_sigma_2}, and the third inequality uses the It{\^o} isometry and the bound on $\phi$. Hence, \[ v(t) \leq a(t) + b(t)\int_0^t v(s)ds, \] \noindent where $a(t), b(t)$ are the non-decreasing real-valued functions given by \[ \begin{array}{lll} a(t):=2(2E[X(0)^2] + k \tilde{K} t (2t+1)) \\[\smallskipamount] b(t) := 2 k \tilde{K} (2t+1)(1+t). \end{array} \] In particular, $v(t) \leq a(T) + b(T)\int_0^t v(s)ds$ for all $t \in [0,T]$, so by Gr{\"o}nwall's inequality (see {\O}ksendal~\cite{Oksendal}), \[ E[X(t)^2] \leq a(T) e^{b(T)t} \leq a(T) e^{b(T)T} =: M. \] This proves that $E[X(t)^2]$ is bounded.
\end{proof} \begin{lemma} \label{lemma: X_lipschitz} There exists a constant $c > 0$ such that for all $t,s \in [0,T]$, \[ E[(X(t) - X(s))^2] \leq c|t-s|. \] \end{lemma} \begin{proof} Let $t,s \in [0,T]$, $s \leq t$ (if $t < s$, change roles), \[ \begin{array}{lll} E[(X(t)-X(s))^2] &= E[(\int_s^t b(u,X(u), Z(u))du + \int_s^t \sigma(u,X(u), Z(u))dB(u))^2] \\[\smallskipamount] &\leq 2E[(\int_s^t b(u,X(u), Z(u))du)^2] + 2E[(\int_s^t \sigma(u,X(u), Z(u))dB(u))^2] \\[\smallskipamount] &\leq 2(t-s) \int_s^t E[b(u,X(u),Z(u))^2] du + 2\int_s^t E[\sigma(u,X(u),Z(u))^2]du \\[\smallskipamount] &\leq 2(t-s) k \int_s^t (1 + E[X(u)^2] + E[(\int_{u-\delta}^u \phi(u,w) X(w) dB(w))^2])du \\[\smallskipamount] &\hspace{0.3cm} + 2 k \int_s^t (1 + E[X(u)^2] + E[(\int_{u-\delta}^u \phi(u,w) X(w) dB(w))^2]) du \\[\smallskipamount] &\leq 2(t-s) k \tilde{K} \int_s^t (1 + E[X(u)^2] + E[\int_{u-\delta}^u X(w)^2 dw])du \\[\smallskipamount] &\hspace{0.3cm}+ 2 k \tilde{K} \int_s^t (1 + E[X(u)^2] + E[\int_{u-\delta}^u X(w)^2 dw]) du \\[\smallskipamount] &\leq 2(t-s) k \tilde{K} \int_s^t (1 + M + \int_{u-\delta}^u M dw)du \\[\smallskipamount] &\hspace{0.3cm}+ 2 k \tilde{K} \int_s^t (1 + M + \int_{u-\delta}^u M dw) du \\[\smallskipamount] &= 2 k \tilde{K} (t-s)^2(1 + M + M \delta) + 2 k \tilde{K}(1 + M + M \delta)(t-s) \\[\smallskipamount] &\leq c (t-s) \end{array} \] \noindent where $c := 2 k \tilde{K}(1 + M + M \delta)(T+1) > 0$, the first inequality uses some algebra and inequality \eqref{eq: quad}, the second inequality uses the It{\^o} isometry and the Cauchy-Schwarz inequality, the third inequality follows from assumption~\eqref{assumption: b_and_sigma_2}, the fourth inequality uses the It{\^o} isometry and the bound on $\phi$, and the fifth inequality follows from Lemma~\ref{lemma: Xbounded}. Note that the final inequality holds since $(t-s) \leq T$. \end{proof} \begin{lemma} \label{lemma: Z_bounded} It holds that $E[Z(t)^2] \leq \tilde{K} M\delta$ for all $t \in [0,T]$.
\end{lemma} \begin{proof} \[ \begin{array}{lll} E[Z(t)^2] &=& E[(\int_{t-\delta}^t \phi(t,s) X(s) dB(s))^2] \\[\smallskipamount] &=& E[\int_{t-\delta}^t \phi(t,s)^2 X(s)^2 ds] \\[\smallskipamount] &\leq& \int_{t-\delta}^t \tilde{K} M ds = \tilde{K} M \delta, \end{array} \] \noindent where the second equality uses the It{\^o} isometry (see e.g., {\O}ksendal~\cite{Oksendal}) and the inequality follows from Lemma~\ref{lemma: Xbounded}. \end{proof} \begin{lemma} \label{lemma: Z_lipschitz} There exists a constant $\tilde{N} > 0$ such that \[ E[|Z(t)-Z(s)|^2] \leq \tilde{N}|t-s|. \] \end{lemma} \begin{proof} Assume that $t > s$. If not, change the roles of $t$ and $s$. We consider two cases: \begin{enumerate} \item[$(i)$] $s \notin [t-\delta,t]$, i.e., $s < t-\delta$: Then, \[ \begin{array}{llll} E[|Z(t)-Z(s)|^2] = E[|\int_{t-\delta}^t \phi(t,u)X(u) dB(u) - \int_{s-\delta}^s \phi(s,u)X(u) dB(u)|^2] \\[\smallskipamount] % \hspace{1.7cm} \leq 2E[(\int_{t-\delta}^t \phi(t,u) X(u) dB(u))^2] + 2E[(\int_{s-\delta}^s \phi(s,u) X(u) dB(u))^2] \\[\smallskipamount] % \hspace{1.7cm} = 2E[\int_{t-\delta}^t \phi(t,u)^2 X(u)^2 du] + 2E[\int_{s-\delta}^s \phi(s,u)^2 X(u)^2 du] \\[\smallskipamount] \hspace{1.7cm} \leq 2 \int_{t-\delta}^t \tilde{K}M du + 2\int_{s-\delta}^s \tilde{K}M du \leq 4M\tilde{K} (t-s) \end{array} \] \noindent where the first inequality uses inequality \eqref{eq: quad}, the second equality uses the It{\^o} isometry, the second inequality uses Lemma~\ref{lemma: Xbounded} and the final inequality follows from $s < t-\delta$, i.e., $\delta < t-s$. 
\item[$(ii)$] $s \in [t-\delta,t]$: In this case, \[ \begin{array}{llll} E[|Z(t)-Z(s)|^2] = E[|\int_{t-\delta}^t \phi(t,u) X(u) dB(u) - \int_{s-\delta}^s \phi(s,u) X(u) dB(u)|^2] \\[\smallskipamount] \hspace{1.3cm} =E[|-\int_{s-\delta}^{t-\delta} \phi(s,u)X(u)dB(u) + \int_{t-\delta}^s (\phi(t,u) - \phi(s,u)) X(u)dB(u) \\[\smallskipamount] \hspace{1.7cm} + \int_s^t \phi(t,u)X(u) dB(u)|^2] \\[\smallskipamount] \hspace{1.3cm} \leq 2E[|\int_{t-\delta}^{s} (\phi(t,u) - \phi(s,u))X(u)dB(u)|^2] \\[\smallskipamount] \hspace{1.7cm} + 2E[|\int_s^t\phi(t,u) X(u)dB(u) - \int_{s-\delta}^{t-\delta} \phi(s,u) X(u) dB(u)|^2] \\[\smallskipamount] \hspace{1.3cm} \leq 4 E[(\int_s^t \phi(t,u)X(u)dB(u))^2] + 4E[(\int_{s-\delta}^{t-\delta} \phi(s,u)X(u) dB(u))^2] \\[\smallskipamount] \hspace{1.7cm} + 2\int_{t-\delta}^{s} (\phi(t,u) - \phi(s,u))^2 E[X(u)^2]du \\[\smallskipamount] \hspace{1.3cm} \leq 4\int_s^t \tilde{K} E[X(u)^2]du + 4\int_{s-\delta}^{t-\delta} \tilde{K} E[X(u)^2]du + 2\int_{t-\delta}^s \tilde{K}(t-s)M du \\[\smallskipamount] \hspace{1.3cm}\leq 2(t-s)\tilde{K}M(4+ \delta) \end{array} \] \noindent where the second equality follows from $s \in [t-\delta,t]$, the first and second inequalities follow from inequality \eqref{eq: quad}, the third inequality follows from the It{\^o} isometry and assumptions~\eqref{eq: phi_begrenset}-\eqref{eq: phi_lin}, and the final inequality follows from Lemma~\ref{lemma: Xbounded}. \end{enumerate} Hence, by combining the two items above, we see that \[ E[|Z(t)-Z(s)|^2] \leq \max\{2M\tilde{K} (4+ \delta), 4M\tilde{K}\}(t-s). \] The lemma follows by defining $\tilde{N}$ to be this maximum. \end{proof} \section{Error analysis and proof of the main theorem} \label{sec: proof_main} In this section, we derive an error bound for the Euler approximation method for SDEs with generalized noisy memory.
We shall see that the approximation converges to the solution of the noisy memory SDE and find the order of convergence, and thereby prove our main result, Theorem~\ref{thm: Euler_scheme}. Similarly to Allen~\cite{Allen}, for $t \in [t_i,t_{i+1}]$, $i=1, \ldots, N$, define \begin{equation} \label{eq: X_tilde} \hat{X}(t) := X_i + \int_{t_i}^t b(t_i, X_i, Z_i)ds + \int_{t_i}^t \sigma(t_i,X_i,Z_i) dB(s). \end{equation} Note that $\hat{X}(t_i) = X_i$ for $i=1,2,\ldots,N$, i.e., in the time nodes, the process $\hat{X}$ equals the approximation to the solution of the noisy memory process. We study the error \begin{equation} \label{eq: error} \epsilon(t) = X(t) - \hat{X}(t) \end{equation} \noindent where $X$ is the exact solution to the noisy memory SDE~\eqref{eq: noisy_mem_SDE}. The goal of this section is to prove that there exists a constant $\tilde{C}$ such that $E[\epsilon(t_i)^2] \leq \tilde{C} \Delta t$ for $t_i \in \Pi_{pos}$. From the definitions, \[ d\epsilon(t) = (b(t,X(t),Z(t)) - b(t_i, X_i, Z_i)) dt + (\sigma(t,X(t),Z(t)) - \sigma(t_i,X_i,Z_i)) dB(t) \] \noindent and $\epsilon(t_i)=X(t_i)-X_i$ for $i=1, 2, \ldots, N$. From It{\^o}'s formula applied to the function $g(t,\epsilon)=\epsilon^2$, we see that \[ \begin{array}{llll} d[\epsilon(t)^2] &=& 2 (X(t) - \hat{X}(t))(b(t,X(t),Z(t)) - b(t_i, X_i,Z_i)) dt \\[\smallskipamount] &&+ 2(X(t)-\hat{X}(t))(\sigma(t,X(t),Z(t)) - \sigma(t_i, X_i, Z_i)) dB(t) \\[\smallskipamount] &&+ (\sigma(t,X(t),Z(t)) - \sigma(t_i,X_i,Z_i))^2 dt.
\end{array} \] Hence, \begin{equation} \label{eq: stjerne} \begin{array}{lll} E[\epsilon(t_{i+1})^2] &=& E[\epsilon(t_i)^2] + 2E[\int_{t_i}^{t_{i+1}} (X(t) - \hat{X}(t)) (b(t,X(t),Z(t)) - b(t_i,X_i,Z_i)) dt] \\[\smallskipamount] &&+ E[\int_{t_i}^{t_{i+1}} (\sigma(t,X(t),Z(t)) - \sigma(t_i,X_i,Z_i))^2 dt] \\[\smallskipamount] &\leq& E[\epsilon(t_i)^2] + \int_{t_i}^{t_{i+1}} E[(X(t) - \hat{X}(t))^2] dt \\[\smallskipamount] &&+ \int_{t_i}^{t_{i+1}} E[(b(t,X(t),Z(t)) - b(t_i,X_i,Z_i))^2] dt \\[\smallskipamount] &&+ \int_{t_i}^{t_{i+1}} E[(\sigma(t,X(t),Z(t)) - \sigma(t_i,X_i,Z_i))^2] dt \end{array} \end{equation} \noindent where the inequality follows from inequality \eqref{eq: quad}. Note that \[ \begin{array}{lll} E[(b(t,X(t),Z(t)) - b(t_i,X_i,Z_i))^2] \\[\smallskipamount] \hspace{0.7cm}= E[(b(t,X(t),Z(t)) - b(t_i,X(t_i),Z(t_i)) + b(t_i,X(t_i),Z(t_i))- b(t_i,X_i,Z_i))^2] \\[\medskipamount] \hspace{0.7cm}\leq 2E[(b(t,X(t),Z(t)) - b(t_i,X(t_i),Z(t_i)))^2] + 2E[(b(t_i,X(t_i),Z(t_i))- b(t_i,X_i,Z_i))^2] \\[\medskipamount] \hspace{0.7cm}\leq 2k E[|t-t_i| + |X(t) - X(t_i)|^2 + |Z(t) - Z(t_i)|^2 + |X(t_i) - X_i|^2 + |Z(t_i) - Z_i|^2] \end{array} \] \noindent where the first inequality follows from inequality \eqref{eq: quad}. The final inequality follows from assumption~\eqref{assumption: b_and_sigma}. Similarly, one can prove that \[ \begin{array}{lll} E[(\sigma(t,X(t),Z(t)) - \sigma(t_i,X_i,Z_i))^2] &\leq& 2k E[|t-t_i| + |X(t) - X(t_i)|^2 \\[\smallskipamount] &&+ |Z(t) - Z(t_i)|^2 + |X(t_i) - X_i|^2 + |Z(t_i) - Z_i|^2]. \end{array} \] Therefore, combining this with inequality~\eqref{eq: stjerne} and using the definition of the error $\epsilon(t)$, \begin{equation} \label{eq: error_X_mellom} \begin{array}{lll} E[\epsilon(t_{i+1})^2] &\leq& E[\epsilon(t_i)^2] + \int_{t_i}^{t_{i+1}} E[\epsilon(t)^2] dt + 4k \int_{t_i}^{t_{i+1}} E[|t-t_i| + |X(t) - X(t_i)|^2 \\[\smallskipamount] &&+ |Z(t) - Z(t_i)|^2 + |X(t_i) - X_i|^2 + |Z(t_i) - Z_i|^2] dt.
\end{array} \end{equation} Due to the noisy memory process, there is an additional source of error, compared to approximation of regular SDEs. In the following, let $X(t)$ be an exact solution of the noisy memory SDE~\eqref{eq: noisy_mem_SDE}, and let $X_j$, $t_j \in [0,T]$, be its approximation from the Euler method~\eqref{eq: euler_scheme}. For $i=1, \ldots, N$, define $Z_i^B := \sum_{j \in \Pi_i} \phi(t_i,t_j) X(t_j) \Delta B_j$, i.e., the approximated noisy memory process involving the exact solution $X$. \begin{lemma} \label{lemma: Z_minus_Z_B} For a time $t_i$ in the partition of the time interval and $Z_i^B = \sum_{j \in \Pi_i} \phi(t_i,t_j)X(t_j) \Delta B_j$, we have \[ E[|Z(t_i) - Z_i^B|^2] \leq \Delta t \delta \tilde{K}(M + c). \] \end{lemma} \begin{proof} From the definitions, \[ \begin{array}{lll} E[|Z(t_i) - Z_i^B|^2] &=& E[|\int_{t_i-\delta}^{t_i} \phi(t_i,s) X(s) dB(s) - \sum_{j \in \Pi_i} \phi(t_i,t_j)X(t_j) \Delta B_j|^2] \\[\smallskipamount] &=& E[|\sum_{j \in \Pi_i} \int_{t_j}^{t_{j+1}} (\phi(t_i,s)X(s) - \phi(t_i,t_j)X(t_j))dB(s) |^2] \\[\smallskipamount] &=& E[|\sum_{j \in \Pi_i} \int_{t_j}^{t_{j+1}} ( \{\phi(t_i,s)X(s) - \phi(t_i,t_j)X(s) \} \\[\smallskipamount] &&+ \{ \phi(t_i,t_j)X(s)- \phi(t_i,t_j)X(t_j)\} )dB(s) |^2] \\[\smallskipamount] &\leq& 2E[\sum_{j \in \Pi_i} \int_{t_j}^{t_{j+1}} X(s)^2(\phi(t_i,s) - \phi(t_i,t_j))^2 ds] \\[\smallskipamount] &&+2E[\sum_{j \in \Pi_i} \int_{t_j}^{t_{j+1}} (X(s)-X(t_j))^2 \phi(t_i,t_j)^2 ds] \\[\smallskipamount] &\leq& 2 \sum_{j \in \Pi_i} \int_{t_j}^{t_{j+1}} M \tilde{K} (s-t_j) ds + 2 \sum_{j \in \Pi_i} \int_{t_j}^{t_{j+1}} c(s - t_j) \tilde{K} ds \\[\smallskipamount] &=& \delta \Delta t \tilde{K}(M + c ) \end{array} \] \noindent where the fourth step uses inequality \eqref{eq: quad} and the It\^{o} isometry (see e.g. {\O}ksendal \cite{Oksendal}), and the final inequality follows from Lemma~\ref{lemma: Xbounded}, Lemma~\ref{lemma: X_lipschitz} and assumptions~\eqref{eq: phi_begrenset}-\eqref{eq: phi_lin}.
\end{proof} We can now prove the following lemma which relates the error in the noisy memory process $Z$ to the error in the solution process $X$. \begin{lemma} \label{lemma: error_Z} For $i=1, \ldots, N$, let $Z(t_i)$ be the noisy memory process, and $Z_i$ the approximated noisy memory process, then \[ E[|Z(t_i)-Z_i|^2] \leq 2\Delta t \delta \tilde{K}(M + c) + \tilde{K} \Delta t \sum_{j \in \Pi_i} E[\epsilon(t_j)^2]. \] \end{lemma} \begin{proof} First, note that \[ \begin{array}{lll} E[|Z_i^B-Z_i|^2] &=& E[(\sum_{j \in \Pi_i} X(t_j) \phi(t_i,t_j) \Delta B_j - \sum_{j \in \Pi_i} X_j \phi(t_i,t_j) \Delta B_j)^2] \\[\smallskipamount] &=& \sum_{j \in \Pi_i} \phi(t_i,t_j)^2 E[(X(t_j) - X_j)^2] \Delta t \\[\smallskipamount] &\leq& \tilde{K} \Delta t \sum_{j \in \Pi_i} E[\epsilon(t_j)^2] \end{array} \] \noindent where the second equality uses the discrete It{\^o} isometry and the final inequality uses the bound on $\phi$. Therefore, \[ \begin{array}{lll} E[|Z(t_i)-Z_i|^2] &=& E[|Z(t_i) - Z_i^B + Z_i^B-Z_i|^2] \\[\smallskipamount] &\leq& 2E[|Z(t_i) - Z_i^B|^2] + 2E[|Z_i^B-Z_i|^2] \\[\smallskipamount] &\leq& 2\Delta t \delta \tilde{K} (M + c) + \tilde{K} \Delta t \sum_{j \in \Pi_i} E[\epsilon(t_j)^2] \end{array} \] \noindent where the first inequality uses inequality \eqref{eq: quad} and the second inequality uses Lemma~\ref{lemma: Z_minus_Z_B}. \end{proof} We are nearly ready to prove our main result, Theorem~\ref{thm: Euler_scheme}. However, we need one more lemma: \begin{lemma} \label{lemma: exponent} Let $x >0$ and $n \in \mathbb{N}$. Then, \[ (1+\frac{x}{n})^n \leq e^x. \] \end{lemma} \begin{proof} The exponential function is convex, and therefore it dominates its first order Taylor approximation at $0$, so $e^y \geq 1+y$ for all $y$. By inserting $y=x/n$ and taking the $n$'th power, the desired inequality follows. \end{proof} \noindent Finally, using all of these lemmas, we are ready to prove our main result.
\medskip \noindent \textbf{Proof of Theorem~\ref{thm: Euler_scheme}:} Recall that we would like to prove that the expected squared error of the numerical scheme is bounded by some constant (depending on the terminal time) times the time step. That is, we want to prove that $E[\epsilon(t_{i})^2] \leq \tilde{C}(T) \Delta t$. By combining inequality \eqref{eq: error_X_mellom} with Lemma~\ref{lemma: X_lipschitz}, Lemma~\ref{lemma: Z_lipschitz} and Lemma~\ref{lemma: error_Z}, we see that \[ \begin{array}{lll} E[\epsilon(t_{i+1})^2] &\leq& E[\epsilon(t_i)^2] + \int_{t_i}^{t_{i+1}} E[\epsilon(t)^2]dt + 4kE[\epsilon(t_i)^2]\Delta t + 2k(\Delta t)^2 \\[\smallskipamount] &&+ 4k c \int_{t_i}^{t_{i+1}} (t-t_i)dt + 4k\int_{t_i}^{t_{i+1}} \tilde{N}(t-t_i)dt \\[\smallskipamount] &&+ 4k\int_{t_i}^{t_{i+1}}(2\Delta t \delta (M\tilde{K} + \tilde{K}c ) + \tilde{K} \Delta t \sum_{j \in \Pi_i} E[\epsilon(t_j)^2])dt \\[\smallskipamount] &=&E[\epsilon(t_i)^2](1+ 4k \Delta t) + 2k(\Delta t)^2\big(1 + c + \tilde{N}+ 4 \delta (M \tilde{K} + \tilde{K} c) \\[\smallskipamount] &&+ 2\tilde{K} \sum_{j \in \Pi_i} E[\epsilon(t_j)^2]\big) + \int_{t_i}^{t_{i+1}}E[\epsilon(t)^2] dt.
\end{array} \] Hence, by using the Bellman-Gr{\"o}nwall inequality, we see that \[ \begin{array}{lll} E[\epsilon(t_{i+1})^2] &\leq& E[\epsilon(t_i)^2](1+ 4k \Delta t) + 2k(\Delta t)^2\big(1 + c + \tilde{N}+ 4 \delta (M \tilde{K} + \tilde{K} c)\\[\smallskipamount] &&+ 2\tilde{K} \sum_{j \in \Pi_i} E[\epsilon(t_j)^2]\big) + \int_{t_i}^{t_{i+1}} e^{t_{i+1}-t} \big\{E[\epsilon(t_i)^2](1+ 4k \Delta t) \\[\smallskipamount] &&+ 2k(\Delta t)^2\big(1 + c + \tilde{N} + 4 \delta (M\tilde{K} + \tilde{K}c)+ 2\tilde{K} \sum_{j \in \Pi_i} E[\epsilon(t_j)^2]\big)\big\} dt \\[\smallskipamount] &=& e^{\Delta t}\big\{E[\epsilon(t_i)^2](1+ 4k \Delta t) + 2k(\Delta t)^2\big(1 + c + \tilde{N}+ 4 \delta (M\tilde{K} + \tilde{K} c)\\[\smallskipamount] &&+ 2\tilde{K} \sum_{j \in \Pi_i} E[\epsilon(t_j)^2]\big) \big\}. \end{array} \] For $i=1, \ldots,N$, define $a_i := E[\epsilon(t_i)^2]$. From the previous computations, we know that \begin{equation} \label{eq: a_i} a_{i+1} \leq R a_i + S \sum_{j \in \Pi_i} a_j + A \end{equation} \noindent where \[ R:=e^{\Delta t}(1 + 4k\Delta t) > 0, \mbox{ } S:= 4 \tilde{K} k (\Delta t)^2 e^{\Delta t} > 0 \] \noindent and \[ A:= 2k(\Delta t)^2 e^{\Delta t}(1 + c + \tilde{N} + 4 \delta (M\tilde{K} + \tilde{K} c)) > 0. \] Note that, since $\Pi_i$ contains at most $\frac{\delta}{\Delta t}$ indices, \begin{equation} \label{eq: for_indusksjon} \begin{array}{lll} a_{i+1} &\leq& Ra_i + A + S\sum_{j \in \Pi_i} a_j \\[\smallskipamount] &\leq& \big(R + S \frac{\delta}{\Delta t}\big)\max_{j \leq i} a_j +A \\[\smallskipamount] &=:& \tilde{R} \max_{j \leq i} a_j +A \end{array} \end{equation} \noindent where $\tilde{R} := R + S \frac{\delta}{\Delta t} = e^{\Delta t} (1 + 4k \Delta t (1+\delta \tilde{K}))$.
By induction, and the fact that the initial approximation error is $0$, inequality~\eqref{eq: for_indusksjon} implies that $a_n \leq A \frac{\tilde{R}^n -1}{\tilde{R} - 1}$ for $n=1, \ldots, N$, i.e., \begin{equation} \label{eq: nesten_ferdig} \begin{array}{lll} E[\epsilon(t_n)^2] &\leq& 2e^{\Delta t} k (\Delta t)^2 (1 + c + \tilde{N} + 4\delta (M\tilde{K} + \tilde{K} c)) \frac{e^{N\Delta t}(1 +4k(1 +\delta \tilde{K}) \Delta t)^n - 1}{e^{\Delta t}(1 + 4k(1 +\delta \tilde{K}) \Delta t) -1} \\[\smallskipamount] &\leq& \Delta t \frac{e^T}{2(1+\delta \tilde{K})}(1 + 4k (1 +\delta \tilde{K}) \Delta t)^n (1 + c + \tilde{N} + 4 \delta (M \tilde{K} + \tilde{K} c)). \end{array} \end{equation} Now, note that $e^{4k(1 +\delta \tilde{K})T} \geq (1+\frac{4k(1 +\delta \tilde{K})T}{n})^n$ because of Lemma~\ref{lemma: exponent}. By combining Lemma~\ref{lemma: exponent} with the inequality~\eqref{eq: nesten_ferdig} and recalling that $\Delta t = \frac{T}{N}$, we reach our goal \[ \begin{array}{lll} E[\epsilon(t_n)^2] &\leq& \Delta t \frac{e^{T(1+4k(1 +\delta \tilde{K}) )}}{2 (1+\delta \tilde{K})}(1 + c + \tilde{N} + 4\delta (M \tilde{K} + \tilde{K} c)) \\[\smallskipamount] &=:& \Delta t \tilde{C}(T) \end{array} \] \noindent where $\tilde{C}(T):=\frac{e^{T(1+4k(1 +\delta \tilde{K}) )}}{2(1+\delta \tilde{K})}(1 + c + \tilde{N} + 4\delta (M \tilde{K} + \tilde{K} c))$. This completes the proof of Theorem~\ref{thm: Euler_scheme}. \QED \section{A noisy memory SDE with an analytical solution and a numerical example} \label{sec: num_ex} In this section, we will compare the exact solution of a (very simple) SDE with noisy memory to the approximation given by the Euler method. We consider the following SDE with noisy memory: \begin{equation} \label{eq: num_ex_simple_SDE} \begin{array}{lll} dX(t) &=& Z(t)dB(t) \mbox{ for } t \in [0,T] \\[\smallskipamount] X(t) &=& 1, \mbox{ } t \in [-\delta,0).
\end{array} \end{equation} \noindent where $Z(t) := \int_{t-\delta}^t X(s) dB(s)$, so $\phi(t,s)=1$ for all $t,s \in [0,T]$. We can solve \eqref{eq: num_ex_simple_SDE} analytically by using a technique from Dahl et al. \cite{Dahl}, based on rewriting the noisy SDE~\eqref{eq: num_ex_simple_SDE} as a two-dimensional SDE with delay. This kind of delay equation can be solved iteratively for each $\delta$-interval. First, we rewrite the noisy SDE~\eqref{eq: num_ex_simple_SDE} by defining $X_1(t) :=X(t)$ and $X_2(t):=\int_{-\delta}^t X_1(s)dB(s)$, $t \in [-\delta,T]$. Note that from these definitions, $Z(t) = X_2(t)-X_2(t-\delta)$, $t \in [0,T]$. Then, the noisy SDE~\eqref{eq: num_ex_simple_SDE} can be rewritten as a two-dimensional SDE with delay: \begin{equation} \label{eq: rewritten_SDE} \begin{array}{llll} dX_1(t)&=& (X_2(t)-X_2(t-\delta))dB(t), t \in (0,T], \\[\smallskipamount] dX_2(t)&=&X_1(t)dB(t), t \in (-\delta,T], \\[\smallskipamount] X_1(t)&=&1, t \in [-\delta,0], \\[\smallskipamount] X_2(-\delta)&=&0. \end{array} \end{equation} Note that $X_1(t)$ and $X_2(t)$ are known from the initial conditions for $t \in [-\delta,0]$. We write \eqref{eq: rewritten_SDE} in matrix form. Define $Y(t) := (X_1(t), X_2(t))^{\intercal}$ (where $(\cdot,\cdot)^{\intercal}$ denotes the transpose), $t \in [-\delta,T]$.
Then, from \eqref{eq: rewritten_SDE} \begin{equation} \label{eq: rewritten_matrix} \begin{array}{lll} dY(t) &=& \textbf{a} Y(t) dB(t) + \textbf{b} Y(t-\delta)dB(t), t \in [0,T]\\[\smallskipamount] Y(t) &=& (1, B(t)-B(-\delta))^{\intercal}, t \in [-\delta,0], \end{array} \end{equation} \noindent where $\textbf{a}$ and $\textbf{b}$ are in $\mathbb{R}^{2 \times 2}$ and defined by \[ \textbf{a}= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \mbox{ } \textbf{b}= \begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}. \] For $t \in [0,\delta]$, we may rewrite \eqref{eq: rewritten_matrix} as \[ \begin{array}{lll} dY(t) &=& \textbf{a} Y(t)dB(t) + \begin{bmatrix} B(-\delta)-B(t-\delta) \\ 0 \end{bmatrix} dB(t), \end{array} \] \noindent which is a regular SDE without delay. For notational simplicity, define $K(t-\delta):=B(t-\delta)-B(-\delta)$. Note that for $t \in [0,\delta]$, $K(t-\delta)$ is determined by the path of $B$ on $[-\delta,0]$, and is hence known at time $0$. To solve this equation, define \[ F(t) := \exp(-\textbf{a}B(t) + \frac{1}{2}\textbf{a}^2 t) \] \noindent where we (in general) define the matrix exponential for a matrix $\textbf{A} \in \mathbb{R}^{n \times n}$, $n \in \mathbb{N}$, as \[ \exp(\textbf{A}) = \sum_{n=0}^{\infty} \frac{1}{n!} \textbf{A}^n. \] By this definition, we find (by analyzing the infinite sum) that \[ \begin{array}{lll} F(t)=e^{\frac{1}{2}t} \begin{bmatrix} \cosh B(t) & -\sinh B(t) \\ -\sinh B(t) & \cosh B(t) \end{bmatrix}. \end{array} \] Note also that \[ \textbf{a}^2=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \] \noindent which clearly commutes with the matrices $F(t)$ (for all $t$) and $\textbf{a}$. Also, the matrices $\textbf{a}$ and $F(t)$ commute (for all $t$). This justifies the following calculations: By the two-dimensional It{\^o} formula, \[ \begin{array}{lll} dF(t) &=& F(t)(\textbf{a}^2 dt - \textbf{a} dB(t)).
\end{array} \] Hence, by the It{\^o} product rule, \[ d(F(t)Y(t))= F(t)\begin{bmatrix} -K(t-\delta) \\ 0 \end{bmatrix}dB(t) - \textbf{a} F(t) \begin{bmatrix} -K(t-\delta) \\ 0 \end{bmatrix} dt. \] By integrating between times $0$ and $t$, \begin{equation} \label{eq: solution_SDE_exact} \begin{array}{lll} Y(t) &=& e^{\textbf{a} B(t) - \frac{1}{2} \textbf{a}^2 t} \Big( Y(0) + \int_0^t F(s) \begin{bmatrix} -K(s-\delta) \\ 0 \end{bmatrix} dB(s) \\[\smallskipamount] &&- \int_0^t \textbf{a} F(s) \begin{bmatrix} -K(s-\delta) \\ 0 \end{bmatrix} ds \Big) \end{array} \end{equation} \noindent where $Y(0) = (1, -B(-\delta))^{\intercal}$. The first component of this solution $Y(t)$ is the exact solution $X(t)$ of the noisy memory SDE~\eqref{eq: num_ex_simple_SDE} for times $t \in [0, \delta]$. Furthermore, one can continue and iteratively solve \eqref{eq: num_ex_simple_SDE} for the interval $[\delta,2\delta]$ using the solution based on~\eqref{eq: solution_SDE_exact} as an initial condition. By continuing like this, one can solve the equation on the entire interval $[0,T]$. We will not calculate more solutions, as the one calculated above is sufficient for our goal of illustrating the Euler method. We now compare the exact solution just derived to the numerical approximation based on the Euler method. Let $\delta=1$ and $T=1$. It would perhaps be more realistic to choose $T$ larger than $\delta$ (i.e., the time span of interest is greater than the time of memory). However, as the previous exact solution gets very complicated for $\delta < T$, we restrict ourselves to the case $\delta=T$. Figure \ref{fig: plot} shows $1000$ simulated paths of the exact solution~\eqref{eq: solution_SDE_exact} plotted against the corresponding paths of the approximated Euler solution, using time steps of size $\Delta t= 1/100$.
In addition, the corresponding mean square error has been computed by Monte Carlo simulation (with these $1000$ simulations), and this error has also been plotted. As seen by the dashed line (representing the mean square error of the Euler approximation method) in Figure \ref{fig: plot}, the Euler method approximates the exact solution well in a mean square sense. \begin{figure}[h!] \centering \includegraphics[width=1.1\textwidth]{error_and_paths_timestep100_MC1000.pdf} \caption{Plot of $1000$ paths of the exact solution of noisy SDE and the corresponding Euler approximation paths as well as the mean square error computed by Monte Carlo simulation.} \label{fig: plot} \end{figure}
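The Euler side of this comparison can be sketched in a few lines of Python. The helper below is our own illustration (the name `simulate_example` and the vectorization over paths are not part of the paper); it simulates only the Euler paths of \eqref{eq: num_ex_simple_SDE}, with $\phi \equiv 1$ and history $X(t)=1$ on $[-\delta,0)$, not the exact solution \eqref{eq: solution_SDE_exact}.

```python
import numpy as np

def simulate_example(T=1.0, delta=1.0, N=100, n_paths=1000, seed=0):
    """Euler paths for dX = Z dB with Z(t) = int_{t-delta}^t X(s) dB(s)
    (phi = 1) and history X(t) = 1 on [-delta, 0).  Vectorized over paths.
    Returns an array of shape (n_paths, N + 1) with the paths on [0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / N
    m = int(round(delta / dt))
    # increments of B on [-delta, T]; the first m increments cover the history window
    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, m + N))
    X = np.ones((n_paths, m + N + 1))      # X = 1 on the history interval
    for i in range(m, m + N):
        # memory sum over the last m increments, weighted by the past path
        Z = np.sum(X[:, i - m:i] * dB[:, i - m:i], axis=1)
        X[:, i + 1] = X[:, i] + Z * dB[:, i]   # b = 0, sigma(t, x, z) = z
    return X[:, m:]
```

Averaging the squared difference between these paths and sampled exact paths over the simulations then gives the Monte Carlo estimate of the mean square error shown in Figure \ref{fig: plot}.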
\section{Preliminary results}\label{pre} To prove the main theorems of this paper, we need the following results concerning convexity, the truncation operator and some properties of weights. \begin{lem}{\cite[Lemma 2.7]{BBG}}\label{conv} For every finite set $A\subset \mathbb N$, we have $$\text{co}(\lbrace \mathbf{1}_{\varepsilon A} : \vert \varepsilon\vert=1\rbrace)= \left\lbrace \sum_{n\in A}z_n e_n : \vert z_n\vert \leq 1\right\rbrace,$$ where $\text{co}(S)=\lbrace \sum_{j=1}^n \alpha_j x_j : x_j \in S, 0\leq \alpha_j\leq 1, \sum_{j=1}^n \alpha_j = 1, n\in\mathbb N\rbrace$. \end{lem} For each $\alpha>0$, we define the \textit{truncation function} of $z\in\mathbb F$ as $$T_\alpha (z) = \alpha\mathrm{sgn \,}(z),\; \vert z\vert > \alpha,\;\; T_\alpha(z) = z,\; \vert z\vert \leq\alpha.$$ We can extend $T_\alpha$ to an operator in $\mathbb{X}$ by $T_\alpha(x)\sim \sum_{i=1}^\infty T_\alpha(e_i^*(x))e_i$, that is, $$T_\alpha(x):=\sum_{i=1}^\infty T_\alpha(e_i^*(x))e_i = \alpha\mathbf{1}_{\varepsilon\Gamma_\alpha}+P_{\Gamma_\alpha^c}(x),$$ where $\Gamma_\alpha = \lbrace n : \vert e_n^*(x)\vert > \alpha\rbrace$ and $\varepsilon_j = \mathrm{sgn \,}(e_j^*(x))$ with $j\in \Gamma_\alpha$. This is a well-defined operator for all $x\in \mathbb{X}$ since $\Gamma_\alpha$ is a finite set. This operator was introduced in \cite{DKK} to show the equivalence between almost-greediness and semi-greediness, and the authors proved that for quasi-greedy bases, this operator is uniformly bounded. Also, in \cite{BBG}, the authors showed the same result but with a slight improvement of the boundedness constant. \begin{prop}{\cite[Lemma 2.5]{BBG}}\label{truncation} Assume that $\mathcal B$ is a $C_q$-quasi-greedy basis in a Banach space $\mathbb{X}$. Then, for every $\alpha>0$, $$\Vert T_\alpha(x)\Vert\leq C_q\Vert x\Vert,\; \forall x\in\mathbb{X}.$$ \end{prop} The last result that we will use is related to weights.
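On finite real coefficient sequences, the action of $T_\alpha$ can be sketched directly. The helper below is a minimal illustration of the definition (the name `truncate` is ours, and it works on finite real sequences only, whereas the operator in the text acts on general $x \in \mathbb{X}$ through the coefficient functionals $e_i^*$).

```python
import numpy as np

def truncate(coeffs, alpha):
    """Truncation operator T_alpha on a finite real coefficient sequence:
    coefficients with |c| > alpha are replaced by alpha * sgn(c),
    all others are left unchanged."""
    c = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(c) > alpha, alpha * np.sign(c), c)
```

For instance, `truncate([2.0, -0.5, 1.0, -3.0], 1.0)` caps the first and last coefficients at $\pm 1$ and leaves the others untouched.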
\begin{prop}\label{p:find c0} Let $\mathcal{B}$ be a basis in a Banach space $\mathbb X$. \begin{enumerate} \item[i)] Assume that $w(A)\leq \limsup_{n\rightarrow\infty} w_n$. If $\mathcal B$ is $C_{sg}$-$w$-semi-greedy, then $$\max_{\vert\varepsilon\vert=1}\Vert \mathbf{1}_{\varepsilon A}\Vert \leq C_{sg}c_2(1+c_2^2).$$ If in addition $\mathcal B$ is Schauder, it is possible to get that $\max_{\vert\varepsilon\vert=1}\Vert \mathbf{1}_{\varepsilon A}\Vert \leq 2C_{sg}\mathfrak{K}_b$. If $\mathcal B$ is $C_{sd}$-$w$-disjoint-super-democratic, then $$\max_{\vert\varepsilon\vert=1}\Vert \mathbf{1}_{\varepsilon A}\Vert \leq c_2C_{sd}.$$ \item[ii)] If $\mathcal B$ is $C_{sg}$-$w$-semi-greedy or $C_{sd}$-$w$-disjoint-super-democratic and $\sup_n w_n = \infty$ or $\sum_n w_n <\infty$, then $\mathcal B$ is equivalent to the $c_0$-basis. \item[iii)] If $\mathcal B$ is $C_{sg}$-$w$-semi-greedy or $C_{sd}$-$w$-disjoint-super-democratic and $\inf_n w_n = 0$, $\mathcal B$ contains a subsequence equivalent to the $c_0$-basis. \end{enumerate} \end{prop} \begin{proof} The case of $w$-disjoint-super-democracy is proved in \cite{BDKOW} assuming $w$-super-democracy, but the same proof is also valid for $w$-disjoint-super-democracy. The case of $w$-semi-greediness is proved in \cite{DKTW} but assuming that the basis is Schauder. Here, we show this result for general bases using similar ideas. \begin{itemize} \item[i)] Find $n\in\mathbb N\setminus A$ such that $w(A)<w_n$.
Hence, if we consider the element $x:=\mathbf 1_{\varepsilon A} + (1+\delta)e_n$ with $\delta>0$ and $\varepsilon$ a sign, applying the TCGA with $\lbrace n\rbrace$ the greedy set of $x$, and writing $x$ minus its Chebyshev greedy approximation as $\mathbf{1}_{\varepsilon A}+\lambda e_n$ for some scalar $\lambda$, \begin{eqnarray*} \Vert \mathbf{1}_{\varepsilon A}\Vert &\leq& \Vert \mathbf{1}_{\varepsilon A}+\lambda e_n\Vert + \Vert \lambda e_n\Vert\\ &\leq& C_{sg}\sigma_{w_n}^w(x) + \vert\lambda\vert c_2\\ &\leq& C_{sg}\Vert (1+\delta)e_n\Vert + c_2\vert e_n^*(\mathbf{1}_{\varepsilon A}+\lambda e_n)\vert\\ &\leq& C_{sg}c_2(1+\delta) +c_2^2\Vert \mathbf{1}_{\varepsilon A}+\lambda e_n\Vert \\ &\leq& (1+\delta)(C_{sg}c_2+C_{sg}c_2^3). \end{eqnarray*} Taking limits when $\delta$ goes to $0$, $\Vert\mathbf{1}_{\varepsilon A}\Vert \leq C_{sg}c_2(1+c_2^2)$. \item[ii)] If $\sup_n w_n=\infty$, by i), $\Vert \mathbf{1}_{\varepsilon A}\Vert \leq C_{sg}c_2(1+c_2^2)$ for any finite set $A$ and for all possible signs $\varepsilon$. Hence, the basis is equivalent to the $c_0$-basis. If $\sum_n w_n<\infty$, we can choose a number $m\in\mathbb N$ such that $\sum_{i=m+1}^\infty w_i<w_1$. We can assume that $\min_{i\in A} i\geq m+1$. Hence, with the same procedure as in i), $\Vert \mathbf{1}_{\varepsilon A}\Vert \leq C_{sg}c_2+C_{sg}c_2^3$. \item[iii)] Choosing a subsequence $(n_k)_k$ such that $\sum_{k=1}^\infty w_{n_k}<\infty$, we apply item ii) and obtain that $(e_{n_k})_k$ is equivalent to the $c_0$-basis. \end{itemize} \end{proof} \section{The Property (C)}\label{propc} It is well known that one of the properties that quasi-greediness preserves is the so called Property (C).
\begin{defn} We say that a basis $\mathcal B$ in a Banach space $\mathbb X$ has the \textbf{Property (C)} if there exists a positive constant $C$ such that for any $x\in\mathbb X$ and any greedy set $G$ of $x$, $$\min_{j\in G}\vert e_j^*(x)\vert \Vert \mathbf{1}_{\varepsilon G}\Vert \leq C\Vert x\Vert,\; \forall \vert\varepsilon\vert=1.$$ We denote by $C_u$ the least constant that verifies the above inequality and we say that $\mathcal B$ has the $C_u$-Property (C). \end{defn} Although quasi-greediness implies Property (C), the converse is false as \cite[Example 5.5]{BBG} shows. The following is a known inequality from \cite{DKKT}. \begin{lem}{\cite[Lemma 2.2]{DKKT}}\label{propC} If $\mathcal B$ is a $C_q$-quasi-greedy basis in $\mathbb X$, then, for all $x\in\mathbb X$ and every greedy set $G$ of $x$, we have \begin{eqnarray}\label{ineqC} \min_{j\in G}\vert e_j^*(x)\vert \Vert \mathbf{1}_{\varepsilon G}\Vert \leq 2C_q\Vert x\Vert, \end{eqnarray} where $\varepsilon = \lbrace\mathrm{sgn \,}(e_j^*(x))\rbrace$. \end{lem} \begin{rem} Since any quasi-greedy basis is unconditional for constant coefficients (see \cite{Woj}), that is, $\Vert \mathbf{1}_{\varepsilon A}\Vert \approx \Vert \mathbf{1}_A\Vert$ for any sign $\varepsilon$ and any finite set $A$, from Lemma \ref{propC} we can deduce that any quasi-greedy basis has the Property (C). \end{rem} \cite[Proposition 4.10]{BDKOW} shows that any Schauder and $w$-semi-greedy basis satisfies the Property (C) assuming that $0<\inf_n w_n\leq \sup_n w_n<\infty$. Here, we extend this result by proving that $w$-semi-greediness implies the Property (C) for any weight $w$. Before that, we prove the following lemma. \begin{lem}\label{lemma1} Assume that $\mathcal B$ is a Schauder basis with constant $\mathfrak{K}_b$ and $C_{sg}$-$w$-semi-greedy in a Banach space $\mathbb X$.
Then, for every $x\in\mathbb X$, every greedy set $G$ of $x$, and every set $F$ such that $F>\mathrm{supp \,}(x)$ and $w(F)\leq w(G)$, we have $$\min_{i\in G}\vert e_i^*(x)\vert \Vert\mathbf{1}_{\eta F}\Vert \leq C_{sg}(1+\mathfrak{K}_b)\Vert x\Vert,\; \forall \vert\eta\vert=1.$$ \end{lem} \begin{proof} Take $x\in\mathbb X$, a greedy set $G$ of $x$, and a set $F$ and a sign $\eta$ as in the statement of the lemma. Define the following element $y:= \min_{i\in G}\vert e_i^*(x)\vert \mathbf{1}_{\eta F}+ P_{G^c}(x)+ \sum_{i\in G}(e_i^*(x)+\delta\varepsilon_i)e_i$, where $\delta>0$ and $\varepsilon\equiv \lbrace\mathrm{sgn \,} (e_j^*(x))\rbrace$. Then, $G$ is a greedy set of the element $y$. Hence, applying the TCGA, \begin{eqnarray*} \min_{i\in G}\vert e_i^*(x)\vert \Vert\mathbf{1}_{\eta F}\Vert&\leq& (1+\mathfrak{K}_b)\Vert \min_{i\in G}\vert e_i^*(x)\vert \mathbf{1}_{\eta F}+ P_{G^c}(x)+ \sum_{i\in G}a_ie_i\Vert\\ &\leq& (1+\mathfrak{K}_b)C_{sg}\sigma_{w(G)}^w(y)\\ &\leq& (1+\mathfrak{K}_b)C_{sg}\Vert y-\min_{i\in G}\vert e_i^*(x)\vert \mathbf{1}_{\eta F}\Vert\\ &=& (1+\mathfrak{K}_b)C_{sg}\Vert P_{G^c}(x)+ \sum_{i\in G}(e_i^*(x)+\delta\varepsilon_i)e_i\Vert. \end{eqnarray*} Taking limits when $\delta$ goes to $0$, we obtain the result. \end{proof} \begin{prop}\label{proop} Let $\mathcal B$ be a Schauder basis with constant $\mathfrak{K}_b$ in a Banach space $\mathbb X$. If $\mathcal B$ is $C_{sg}$-$w$-semi-greedy, then $\mathcal B$ has the $C_u$-Property (C) with $C_u\leq \mathfrak{K}_bC_{sg}((1+\mathfrak{K}_b)C_{sg}+c_2^2)$. \end{prop} \begin{proof} Take $x\in\mathbb X$ and let $G$ be a greedy set of $x$, $\alpha=\min_{i\in G}\vert e_i^*(x)\vert$ and $\vert \varepsilon\vert=1$. We consider different cases (these cases are inspired by \cite{DKTW}). \textbf{Case 1:} $\sum_{n=1}^\infty w_n = \infty$ and $\sup_n w_n <\infty$.
\textbf{Case 1.1:} If $w(G)>\limsup_{n\rightarrow\infty} w_n$, since $\sum_{n}w_n = \infty$, we can choose $E$ and $n_0\in\mathbb N$ with $E> \mathrm{supp \,}(x)$ and $n_0>\max E$ such that $$w (E)\leq w(G)<w(E)+w_{n_0}.$$ Define then the element $y:=\alpha \mathbf{1}_{\varepsilon G}+(\alpha+\delta)\mathbf{1}_{F}$, where $\delta>0$ and $F=E\cup\lbrace n_0\rbrace$. Then, $F$ is a greedy set of $y$ and hence, applying the TCGA, \begin{eqnarray*} \alpha\Vert\mathbf{1}_{\varepsilon G}\Vert &\leq& \mathfrak{K}_b\Vert\alpha \mathbf{1}_{\varepsilon G}+\sum_{i\in F}a_i e_i\Vert\\\nonumber &\leq& \mathfrak{K}_bC_{sg}\sigma_{w(F)}^w(y)\leq \mathfrak{K}_bC_{sg}\Vert y- \alpha \mathbf{1}_{\varepsilon G}\Vert\\\nonumber &=& \mathfrak{K}_bC_{sg}\Vert (\alpha+\delta)\mathbf{1}_{F}\Vert. \end{eqnarray*} Taking limits when $\delta$ goes to $0$, \begin{eqnarray}\label{p_1} \alpha\Vert\mathbf{1}_{\varepsilon G}\Vert\leq \mathfrak{K}_bC_{sg}\Vert \alpha\mathbf{1}_{F}\Vert\leq \mathfrak{K}_bC_{sg}\Vert \alpha\mathbf{1}_{E}\Vert+ \mathfrak{K}_bC_{sg}c_2\alpha\leq \mathfrak{K}_bC_{sg}\Vert\alpha\mathbf{1}_{E}\Vert+ \mathfrak{K}_bC_{sg}c_2^2\Vert x\Vert. \end{eqnarray} Now, it is only necessary to estimate $\Vert \alpha \mathbf{1}_E\Vert$. For that, we only have to apply Lemma \ref{lemma1}, and then we obtain that \begin{eqnarray}\label{p_2} \Vert \alpha\mathbf{1}_{E}\Vert\leq C_{sg}(1+\mathfrak{K}_b)\Vert x\Vert. \end{eqnarray} Using \eqref{p_1} and \eqref{p_2}, we obtain the result in this case.
\textbf{Case 1.2:} Now, if $w(G)\leq \limsup_{n\rightarrow\infty} w_n$, using Proposition \ref{p:find c0}, $$\max_{\vert\varepsilon\vert=1}\Vert\mathbf{1}_{\varepsilon G}\Vert\leq 2C_{sg}\mathfrak{K}_b.$$ Hence, $$\alpha \Vert\mathbf{1}_{\varepsilon G}\Vert \leq 2C_{sg}\mathfrak{K}_b\alpha \leq 2c_2C_{sg}\mathfrak{K}_b\Vert x\Vert.$$ \textbf{Case 2:} If $\sum_n w_n<\infty$ or $\sup_n w_n =\infty$, using Proposition \ref{p:find c0}, $\mathcal B$ is equivalent to the $c_0$-basis and the result is trivial. \end{proof} \section{Proof of the main results}\label{proofs} \begin{proof}[Proof of Theorem \ref{main1}]: First, we prove item a). Assume that $\mathcal B$ is $C_a$-$w$-almost-greedy and take two sets $A, B\in\mathbb N^{<\infty}$ with $w(A)\leq w(B)$ and two signs $\varepsilon, \varepsilon'$. First, we show that \begin{eqnarray}\label{alm1} \Vert \mathbf{1}_{A'}\Vert \leq C_a\Vert \mathbf 1_B\Vert,\; \forall A'\subset A. \end{eqnarray} Define the element $x:=\mathbf{1}_{A'\setminus B}+\mathbf{1}_{A'\cap B}+(1+\delta)\mathbf{1}_{B\setminus A'}$, with $\delta>0$. Since $w(A')\leq w(A)\leq w(B)$, then $w(A'\setminus B)\leq w(B\setminus A')$. Thus, $$\Vert \mathbf{1}_{A'}\Vert= \Vert x-\mathcal G_{\vert B\setminus A'\vert}(x)\Vert \leq C_a\Vert x-P_{A'\setminus B}(x)\Vert\leq C_a\Vert \mathbf{1}_{A'\cap B}+(1+\delta)\mathbf{1}_{B\setminus A'}\Vert.$$ Hence, taking limits when $\delta$ goes to $0$, we prove \eqref{alm1}. Since \eqref{alm1} does not change if we apply the estimate to the basis $\lbrace\varepsilon'_n e_n\rbrace_n$ for any $\vert\varepsilon'\vert=1$, we can assume that $\varepsilon' \equiv 1$.
Now, to conclude the result, we note that $\mathbf{1}_{\varepsilon A}\in 2S$ in the real case and $\mathbf{1}_{\varepsilon A}\in 4S$ in the complex case, where $$S=\lbrace \sum_{A'\subset A}\vartheta_{A'}\mathbf{1}_{A'} : \sum_{A'\subset A}\vert \vartheta_{A'}\vert\leq 1\rbrace.$$ Hence, applying this observation together with \eqref{alm1} and \cite[Lemma 6.4]{DKO}, $$\Vert \mathbf{1}_{\varepsilon A}\Vert\leq 2\kappa C_a\Vert \mathbf 1_{\varepsilon' B}\Vert.$$ This proves the $w$-super-democracy. To show the $w$-disjoint-super-democracy, take $A, B, \varepsilon, \varepsilon'$ as in the beginning with $A\cap B=\emptyset$. Define the element $x:=\mathbf{1}_{\varepsilon A}+(1+\delta)\mathbf{1}_{\varepsilon' B}$ with $\delta>0$. Hence, the set $B$ is the greedy set of $x$ with cardinality $m:=\vert B\vert$. Thus, $$\Vert \mathbf{1}_{\varepsilon A}\Vert=\Vert x-\mathcal{G}_m(x)\Vert\leq C_a\Vert x-P_A(x)\Vert = C_a\Vert (1+\delta)\mathbf{1}_{\varepsilon' B}\Vert.$$ Taking limits when $\delta$ goes to $0$, we obtain that $\mathcal B$ is $C_{sd}$-$w$-disjoint-super-democratic with $C_{sd}\leq C_a$. The proof of quasi-greediness is trivial since in the definition of $w$-almost-greediness we can take $A=\emptyset$. b) Assume now that $\mathcal B$ is $C_q$-quasi-greedy and $C_{sd}$-$w$-disjoint-super-democratic. Take $m\in\mathbb N$, $\mathcal G_m(x)=P_A(x)$ and $B$ such that $w(B)\leq w(A)$ and $\Vert x-P_B(x)\Vert<\tilde{\sigma}_{w(A)}^w(x)+\delta$ with $\delta>0$. We have the following decomposition: $$x-P_A(x)=P_{(A\cup B)^c}(x-P_B(x))+P_{B\setminus A}(x).$$ On the one hand, since $A\setminus B$ is a greedy set of $x-P_B(x)$, \begin{eqnarray}\label{one} \Vert P_{(A\cup B)^c}(x-P_B(x))\Vert \leq C_q\Vert x-P_B(x)\Vert.
\end{eqnarray} On the other hand, since $w(B\setminus A)\leq w(A\setminus B)$, using Lemma \ref{conv} and $w$-disjoint-super-democracy with $\varepsilon\equiv\lbrace\mathrm{sgn \,} (e_j^*(x))\rbrace$, $$\Vert P_{B\setminus A}(x)\Vert \leq C_{sd}\max_{j\in B\setminus A}\vert e_j^*(x)\vert\Vert\mathbf{1}_{\varepsilon(A\setminus B)}\Vert\leq C_{sd}\min_{j\in A\setminus B}\vert e_j^*(x)\vert\Vert\mathbf{1}_{\varepsilon(A\setminus B)}\Vert.$$ Now, by Lemma \ref{propC}, using that $A\setminus B$ is a greedy set of $x-P_B(x)$ and $\min_{j\in A\setminus B}\vert e_j^*(x)\vert = \min_{j\in A\setminus B}\vert e_j^*(x-P_B(x))\vert$, \begin{eqnarray}\label{two} \Vert P_{B\setminus A}(x)\Vert \leq C_{sd}\min_{j\in A\setminus B}\vert e_j^*(x-P_B(x))\vert\Vert\mathbf{1}_{\varepsilon(A\setminus B)}\Vert\leq 2C_qC_{sd}\Vert x-P_B(x)\Vert. \end{eqnarray} By \eqref{one} and \eqref{two}, we obtain that $\mathcal B$ is $C_a$-$w$-almost-greedy with $C_a\leq C_q+2C_qC_{sd}$. \end{proof} \begin{proof}[Proof of Theorem \ref{main}]: Assume that $\mathcal B$ is $C_{sg}$-$w$-semi-greedy. To establish the $w$-super-democracy and quasi-greediness we consider the same cases as in Proposition \ref{proop}. \textbf{Case 1:} $\sum_{n=1}^\infty w_n = \infty$ and $\sup_n w_n <\infty$. \begin{itemize} \item To prove quasi-greediness, take $x\in\mathbb X$ with $\vert \mathrm{supp \,}(x)\vert<\infty$ (without loss of generality, $\max_j \vert e_j^*(x)\vert \leq 1$) and $m\in\mathbb N$, and consider first the case $w(A_m(x))>\limsup_{n\rightarrow\infty} w_n$. Since $\sum_{n}w_n = \infty$, we can choose $E$ and $n_0\in\mathbb N$ with $E> \mathrm{supp \,}(x)$ and $n_0>\max_{i\in E} i$ such that $$w (E)\leq w(A_m(x))<w(E)+w_{n_0}.$$ Set $F:=E\cup\lbrace n_0\rbrace$ and $\alpha=\min_{j\in A_m(x)}\vert e_j^*(x)\vert$. Define the element $$y:= (x-\mathcal G_m(x))+(\alpha+\delta)\mathbf{1}_F,$$ with $\delta>0$.
Hence, the greedy set of $y$ is $F$ and then, if the scalars $(a_n)_n$ are given by the TCGA, \begin{eqnarray*} \Vert x-\mathcal G_m(x)\Vert &\leq&\mathfrak{K}_b\Vert x-\mathcal G_m(x)+\sum_{n\in F}a_n e_n\Vert\leq \mathfrak{K}_bC_{sg} \sigma_{w(F)}^w(y)\\ &\leq& C_{sg} \mathfrak{K}_b\Vert (x-\mathcal G_m(x))+\sum_{i\in A_m(x)}e_i^*(x)e_i +(\alpha + \delta)\mathbf{1}_F\Vert\\ &=& C_{sg} \mathfrak{K}_b\Vert x+(\alpha + \delta)\mathbf{1}_F\Vert. \end{eqnarray*} Taking limits when $\delta$ goes to $0$, \begin{eqnarray}\label{ine1} \Vert x-\mathcal G_m(x)\Vert \leq C_{sg} \mathfrak{K}_b(\Vert x\Vert +\Vert \alpha \mathbf{1}_E\Vert +\alpha\Vert e_{n_0}\Vert). \end{eqnarray} Of course, $\alpha\Vert e_{n_0}\Vert \leq c_2^2\Vert x\Vert$, so we only have to estimate $\Vert \alpha \mathbf{1}_E\Vert$. For that, using Lemma \ref{lemma1}, \begin{eqnarray} \alpha\Vert \mathbf{1}_E\Vert \leq C_{sg}(1+\mathfrak{K}_b)\Vert x\Vert. \end{eqnarray} Then, we have that the basis is quasi-greedy for elements with finite support with $$\Vert x-\mathcal G_m(x)\Vert \leq C_{sg} \mathfrak{K}_b(1+(1+\mathfrak{K}_b)C_{sg} +c_2^2)\Vert x\Vert.$$ Define now $C_1= C_{sg} \mathfrak{K}_b(1+(1+\mathfrak{K}_b)C_{sg} +c_2^2)$. To show the quasi-greediness for any $x\in\mathbb X$, we need the following result (see \cite[Lemma 2.2]{Oik}): if $x\in\mathbb{X}$ and $A_m(x)$ is the greedy set of cardinality $m$ of $x$, then for any $\varepsilon>0$ there exists $y\in\mathbb{X}$ with $\vert \mathrm{supp \,}(y)\vert<\infty$ such that $\Vert x-y\Vert<\varepsilon$ and $A_m(x)=A_m(y)$. 
Using that, we proceed as follows: \begin{eqnarray*} \Vert x-\mathcal G_m(x)\Vert &\leq& \Vert x-y\Vert + \Vert y-\mathcal G_m(y)\Vert + \Vert \mathcal G_m(y)-\mathcal G_m(x)\Vert\\ &\leq& \Vert x-y\Vert + \Vert P_{A_m(x)}(x-y)\Vert + C_1\Vert y\Vert\\ &\leq& \Vert x-y\Vert(1+\Vert P_{A_m(x)}\Vert)+C_1\Vert x-y\Vert + C_1\Vert x\Vert\\ &\leq& \Vert x-y\Vert(1+\Vert P_{A_m(x)}\Vert+C_1)+C_1\Vert x\Vert\\ &\leq&\varepsilon(1+\Vert P_{A_m(x)}\Vert+C_1)+C_1\Vert x\Vert. \end{eqnarray*} Taking now limits when $\varepsilon$ goes to $0$, we obtain that $\mathcal B$ is $C_q$-quasi-greedy with $C_q\leq C_1$. Now, consider that $w(A_m(x))\leq \limsup_{n\rightarrow\infty} w_n$. Using Proposition \ref{p:find c0}, $$\max_{\vert\varepsilon\vert=1}\Vert\mathbf{1}_{\varepsilon A_m(x)}\Vert\leq 2C_{sg}\mathfrak{K}_b.$$ Then, using convexity, $$\Vert\mathcal G_m(x)\Vert \leq \max_{j}\vert e_j^*(x)\vert2C_{sg}\mathfrak{K}_b\leq 2c_2C_{sg}\mathfrak{K}_b\Vert x\Vert.$$ Hence, $\mathcal B$ is quasi-greedy with $C_q \leq 2c_2C_{sg}\mathfrak{K}_b+1$. \item To show the $w$-super-democracy in this case, take $A, B\in\mathbb N^{<\infty}$ such that $w(A)\leq w(B)$ and two signs $\varepsilon, \varepsilon'$. If $w(B)>\limsup_{n\rightarrow \infty} w_n$, we can take the set $F$ as before, that is, $F=E\cup\lbrace n_0\rbrace$ such that $w(E)\leq w(B)<w(F)$, $n_0>\max E$ and $E>A\cup B$. Then, taking the element $x:=\mathbf{1}_{\varepsilon A}+(1+\delta)\mathbf{1}_F$, with $\delta>0$, the greedy set of $x$ is $F$. Using the scalars $(a_i)_{i\in F}$ given by the TCGA, we have that $$\Vert\mathbf{1}_{\varepsilon A}\Vert\leq \mathfrak{K}_b\left\Vert \mathbf{1}_{\varepsilon A}+\sum_{i\in F}a_i e_i\right\Vert \leq \mathfrak{K}_bC_{sg}\sigma_{w(F)}^w(x)\leq \mathfrak{K}_bC_{sg}\Vert (1+\delta)\mathbf{1}_F\Vert.$$ Taking $\delta \searrow 0$, $\Vert \mathbf{1}_{\varepsilon A}\Vert \leq \mathfrak{K}_bC_{sg} (\Vert \mathbf{1}_E\Vert + c_2^2\Vert \mathbf{1}_{\varepsilon' B}\Vert)$.
Now, as $E>B$, taking the element $x:=(1+\delta)\mathbf{1}_{\varepsilon' B}+\mathbf{1}_E$ and using the same ideas as before with $w(B)\geq w(E)$, we obtain that $$\Vert \mathbf{1}_E\Vert \leq 2\mathfrak{K}_bC_{sg} \Vert\mathbf{1}_{\varepsilon' B}\Vert.$$ Hence, the basis is $w$-super-democratic with constant $C_{s}\leq 2\mathfrak{K}_b^2C_{sg}^2 + \mathfrak{K}_bC_{sg} c_2^2$. If $w(B)\leq \limsup_{n\rightarrow \infty}w_n$, using Proposition \ref{p:find c0}, $$\Vert \mathbf{1}_{\varepsilon A}\Vert \leq 2c_2C_{sg}\mathfrak{K}_b\Vert\mathbf{1}_{\varepsilon' B}\Vert.$$ \end{itemize} \textbf{Case 2:} If $\sum_n w_n<\infty$ or $\sup_n w_n =\infty$, using Proposition \ref{p:find c0}, $\mathcal B$ is equivalent to the canonical basis of $c_0$ and the result is trivial. This proves item a). Now, we show b). Assume that $\mathcal B$ is $C_q$-quasi-greedy and $C_{sd}$-$w$-disjoint-super-democratic. Take $m\in\mathbb N$, $\mathrm{supp \,}(\mathcal G_m(x))=A_m(x)$ and $z=\sum_{n\in B}a_n e_n$ such that $\Vert x-z\Vert<\sigma_{w(A_m(x))}^w(x)+\delta$ with $\delta>0$. If $\alpha = \max_{j\not \in A_m(x)}\vert e_j^*(x)\vert$, we take the element $\nu$ as defined in \cite{DKK}: $$\nu:=\sum_{i\in A_m(x)}T_\alpha(y_i)e_i + P_{(A_m(x))^c}(x) = \sum_{i=1}^\infty T_\alpha(y_i)e_i + \sum_{i\in B\setminus A_m(x)}(e_i^*(x)-T_\alpha(y_i))e_i,$$ where $y_i=e_i^*(x)-a_i$. Of course, $\nu$ satisfies that $\mathrm{supp \,}(x-\nu)\subset A_m(x)$ and we will prove that $\Vert \nu\Vert \leq (C_q + 4C_{sd}C_q^2)\Vert x-z\Vert$. On the one hand, using Proposition \ref{truncation}, \begin{eqnarray}\label{on1} \Vert\sum_{i=1}^\infty T_\alpha(y_i)e_i\Vert \leq C_q\Vert x-z\Vert.
\end{eqnarray} On the other hand, since $\vert e_i^*(x)-T_\alpha(y_i)\vert \leq 2\alpha$ for all $i\in B\setminus A_m(x)$, using $C_{sd}$-$w$-disjoint-super-democracy with $\eta\equiv\lbrace\mathrm{sgn \,}(e_j^*(x-z))\rbrace$, $w(B)\leq w(A_m(x))$ and Lemma \ref{conv}, \begin{eqnarray}\label{key} \Vert \sum_{i\in B\setminus A_m(x)}(e_i^*(x)-T_\alpha(y_i))e_i\Vert &\leq& 2\alpha C_{sd}\Vert\mathbf{1}_{\eta (A_m(x)\setminus B)}\Vert\\\nonumber &\leq& 2C_{sd}\min_{i\in A_m(x)\setminus B}\vert e_i^*(x)\vert\Vert\mathbf{1}_{\eta (A_m(x)\setminus B)}\Vert. \end{eqnarray} Now, if we take $\Lambda:=\lbrace n : \vert e_n^*(x-z)\vert \geq \min_{i\in A_m(x)\setminus B}\vert e_i^*(x)\vert, n\not\in A_m(x)\setminus B\rbrace$, then $C=\Lambda\cup (A_m(x)\setminus B)$ is a greedy set of $x-z$. Using quasi-greediness, $$\Vert \mathbf{1}_{\eta(A_m(x)\setminus B)}\Vert \leq C_q\Vert\mathbf{1}_{\eta C}\Vert.$$ Finally, using this fact and Lemma \ref{propC}, \begin{eqnarray*} 2C_{sd}\min_{i\in A_m(x)\setminus B}\vert e_i^*(x)\vert\Vert \mathbf{1}_{\eta (A_m(x)\setminus B)}\Vert&\leq& 2C_qC_{sd}\min_{i\in C}\vert e_i^*(x-z)\vert\Vert\mathbf{1}_{\eta C}\Vert\\ &\leq& 4C_{sd}C_q^2\Vert x-z\Vert. \end{eqnarray*} Hence, the basis $\mathcal B$ is $C_{sg}$-$w$-semi-greedy with $C_{sg}\leq C_q + 4C_{sd}C_q^2$ and b) is proved. Now, we prove item c). For that, we only have to estimate the inequality \eqref{key} in a different way. Consider that the basis is $C_q$-quasi-greedy and $C_s$-$w$-super-democratic and take the sets $A_m(x), B,C$ and $\eta=\lbrace\mathrm{sgn \,} (e_j^*(x-z))\rbrace$ as in b).
It is clear that $w(B\setminus A_m(x))\leq w(A_m(x)\setminus B)\leq w(C)$, hence, applying the $w$-super-democracy, Lemma \ref{conv} and Lemma \ref{propC} in \eqref{key}, \begin{eqnarray*} \Vert \sum_{i\in B\setminus A_m(x)}(e_i^*(x)-T_\alpha(y_i))e_i\Vert &\leq& 2\alpha C_{s}\Vert\mathbf{1}_{\eta C}\Vert\\ &\leq& 2C_{s}\min_{i\in A_m(x)\setminus B}\vert e_i^*(x)\vert\Vert\mathbf{1}_{\eta C}\Vert\\ &=&2C_{s}\min_{i\in C}\vert e_i^*(x-z)\vert\Vert\mathbf{1}_{\eta C}\Vert\\ &\leq & 4C_{s}C_q\Vert x-z\Vert. \end{eqnarray*} Thus, the basis is $C_{sg}$-$w$-semi-greedy with $C_{sg}\leq C_q+4C_qC_s$. This completes the proof. \end{proof} \end{section} \section{$\rho$-admissibility and semi-greediness}\label{rho} In most of the papers studying characterizations of greedy-type bases, the most general setting involves only the condition of (strong) Markushevich bases. Then, a natural question is whether we can remove the Schauder condition in Theorem \ref{main}. This question is closely related to Question 1 posed in \cite{B}. Here, we present a condition weaker than being a Schauder basis under which we can characterize semi-greediness, that is, we prove the version of Theorem \ref{main} for the weight $w=(1,1,...)$. For that purpose, we consider the following definition, which can be found in \cite{BBGHO2}. \begin{defn} For $\rho\geq 1$, we say that $(e_n)_{n=1}^\infty$ is \textbf{$\rho$-admissible} if the following holds: for each finite set $A\subset \mathbb N$, there exists $n_0=n_0(A)$ such that, for all sets $B$ with $\min_{i\in B}i \geq n_0$ and $\vert B\vert\leq\vert A\vert$, $$\left\Vert \sum_{n\in A} \alpha_n e_n\right\Vert \leq \rho\left\Vert \sum_{n\in A\cup B} \alpha_n e_n\right\Vert,\; \forall \alpha_n\in\mathbb F.$$ \end{defn} Of course, this condition is satisfied by Schauder bases, but, in fact, it is satisfied by more general bases.
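The Schauder case can be checked directly; we add the short verification here for the reader (it is not needed later): if $\mathcal B$ is a Schauder basis with basis constant $\mathfrak{K}_b$ and $S_N$ denotes the $N$-th partial-sum projection, one may take $n_0(A)=\max A+1$.

```latex
% Sketch: a Schauder basis with constant K_b is K_b-admissible.
% If \min_{i\in B} i \geq n_0(A) := \max A + 1, then the sum over A is the
% partial-sum projection S_{\max A} of the sum over A \cup B, so
\left\Vert \sum_{n\in A}\alpha_n e_n\right\Vert
 = \left\Vert S_{\max A}\Big(\sum_{n\in A\cup B}\alpha_n e_n\Big)\right\Vert
 \leq \mathfrak{K}_b\left\Vert \sum_{n\in A\cup B}\alpha_n e_n\right\Vert,
 \qquad \forall\, \alpha_n\in\mathbb F.
```

Hence every Schauder basis is $\mathfrak{K}_b$-admissible, in line with the remark above.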
We recall some classical definitions: \begin{itemize} \item $(e_n)_{n=1}^\infty$ is \textbf{weakly null} if $$\lim_{n\rightarrow \infty} x^*(e_n)=0,\; \forall x^*\in\mathbb X^*.$$ \item Given $Y\subset\mathbb X^*$, $(e_n)_{n=1}^\infty$ is \textbf{$Y$-null} if $$\lim_{n\rightarrow\infty}y(e_n)=0,\; \forall y\in Y.$$ \item Given $\kappa\in (0,1]$, a set $Y\subset\mathbb X^*$ is \textbf{$\kappa$-norming} whenever $$\kappa\Vert x\Vert \leq \sup_{x^*\in Y, \Vert x^*\Vert\leq 1}\vert x^*(x)\vert,\; \forall x\in\mathbb X.$$ \end{itemize} In \cite{BBGHO2}, we can find the following result. \begin{prop} Let $\lbrace e_n, e_n^*\rbrace_{n=1}^\infty$ be a biorthogonal system in $\mathbb X\times\mathbb X^*$. Suppose that the sequence $\lbrace \tilde{e}_n := \Vert e_n^*\Vert e_n\rbrace_{n=1}^\infty\subset\mathbb X$ is $Y$-null, for some subset $Y\subset\mathbb X^*$ which is $\kappa$-norming. Then, $\lbrace e_n\rbrace_{n=1}^\infty$ is $\rho$-admissible for every $\rho>1/\kappa$. \end{prop} Some examples of bases that are not Schauder satisfying the above proposition can be found in Section 3 of \cite{BBGHO2}. One of them is the trigonometric system in $C([0,1])$. Our contribution in this case is the following theorem. \begin{thm} Assume that $\mathcal B$ is a $\rho$-admissible basis. Then, $\mathcal B$ is semi-greedy if and only if $\mathcal B$ is quasi-greedy and disjoint-super-democratic. \end{thm} \begin{proof} We only have to show that semi-greediness implies quasi-greediness and disjoint-super-democracy. The ideas that we use are the same as in Theorem \ref{main} (and \cite[Theorem 1.10]{B}), applying the condition of $\rho$-admissibility. First, we show disjoint-super-democracy. Take two sets $A$ and $B$ such that $\vert A\vert\leq \vert B\vert$, $A\cap B=\emptyset$ and two signs $\varepsilon, \eta$. Since the basis is $\rho$-admissible, we can find a set $F$ such that $\vert F\vert=\vert A\cup B\vert$ and $F>A\cup B$. Now, select a set $C\subset F$ such that $\vert C\vert = \vert A\vert$.
Hence, \begin{eqnarray}\label{super1} \left\Vert \sum_{n\in A\cup B} \alpha_n e_n\right\Vert \leq \rho\left\Vert \sum_{n\in A\cup B\cup C} \alpha_n e_n\right\Vert,\; \forall \alpha_n\in\mathbb F. \end{eqnarray} Consider the element $x:=\mathbf{1}_{\varepsilon A} +(1+\delta)\mathbf{1}_C$, with $\delta>0$. Using the TCGA, $$x-\mathcal{CG}_m(x)=\mathbf{1}_{\varepsilon A} + \sum_{i\in C} a_i e_i.$$ Using semi-greediness and \eqref{super1} with $\alpha_n = \varepsilon_n$ if $n\in A$, $\alpha_n = 0$ if $n\in B$ and $\alpha_n = a_n$ if $n\in C$, $$\Vert \mathbf{1}_{\varepsilon A}\Vert \leq \rho \Vert \mathbf{1}_{\varepsilon A} + \sum_{i\in C} a_i e_i\Vert \leq C_{sg}\rho \Vert x-\mathbf{1}_{\varepsilon A}\Vert = C_{sg}\rho\Vert (1+\delta)\mathbf{1}_C\Vert.$$ Taking limits when $\delta$ goes to $0$, we obtain that \begin{eqnarray}\label{super2} \Vert\mathbf{1}_{\varepsilon A}\Vert \leq C_{sg}\rho\Vert \mathbf{1}_C\Vert. \end{eqnarray} Now, consider $y:=(1+\delta)\mathbf{1}_{\eta B} +\mathbf{1}_C$ with $\delta>0$, so that $$y-\mathcal{CG}_m(y)=\sum_{i\in B} b_i e_i + \mathbf{1}_C.$$ As before, using semi-greediness and \eqref{super1} with $\alpha_n = 0$ if $n\in A$, $\alpha_n = b_n$ if $n\in B$ and $\alpha_n = 1$ if $n\in C$, \begin{eqnarray*} \Vert \mathbf{1}_{ C}\Vert \leq (1+\rho) \Vert \mathbf{1}_{ C} + \sum_{i\in B} b_i e_i\Vert \leq C_{sg}(1+\rho) \Vert y-\mathbf{1}_{C}\Vert = C_{sg}(1+\rho)\Vert (1+\delta)\mathbf{1}_{\eta B}\Vert. \end{eqnarray*} Taking limits when $\delta$ goes to $0$ and using \eqref{super2}, we obtain that $$\Vert \mathbf{1}_{\varepsilon A}\Vert \leq C_{sg}^2 \rho(1+\rho)\Vert\mathbf{1}_{\eta B}\Vert.$$ Thus, the basis is $C_{sd}$-disjoint-super-democratic with $C_{sd}\leq C_{sg}^2\rho(1+\rho)$. Now, we prove quasi-greediness. Since our basis is strong Markushevich, it is enough to consider $x\in\mathbb X$ with finite support $A=\mathrm{supp \,}(x)$, as we have explained in Theorem \ref{main}.
Using the $\rho$-admissibility, we can find a set $C$ such that $\vert C\vert = \vert A\vert$, $C>A$ and \begin{eqnarray}\label{semi1} \Vert \sum_{n\in A}\alpha_n e_n\Vert \leq \rho\Vert \sum_{n\in A\cup C}\alpha_n e_n\Vert,\; \forall \alpha_n\in\mathbb F. \end{eqnarray} Take $m\in\mathbb N$ and $\delta>0$. Define the element $y:= (x-\mathcal G_m(x))+(\alpha+\delta)\mathbf{1}_F$, where $F\subset C$ with $\vert F\vert=m$, $\alpha = \min_{j\in A_m(x)}\vert e_j^*(x)\vert$ and $A_m(x)$ is the greedy set of $x$ with cardinality $m$. Then, using the TCGA, $$y-\mathcal{CG}_m(y) = (x-\mathcal G_m(x))+\sum_{i\in F}a_i e_i.$$ Using semi-greediness and \eqref{semi1} with $\alpha_n = 0$ if $n\in A_m(x)$, $\alpha_n = e_n^*(x)$ if $n\in A\setminus A_m(x)$, $\alpha_n = a_n$ if $n\in F$ and $\alpha_n = 0$ if $n\in C\setminus F$, $$\Vert x-\mathcal G_m(x)\Vert \leq \rho\Vert y-\mathcal{CG}_m(y)\Vert\leq \rho C_{sg}\sigma_{m}(y) \leq C_{sg}\rho(\Vert x\Vert + \Vert(\alpha+\delta)\mathbf 1_F\Vert).$$ Taking limits when $\delta$ goes to $0$, \begin{eqnarray}\label{proof_1} \Vert x-\mathcal G_m(x)\Vert \leq C_{sg}\rho(\Vert x\Vert + \Vert \alpha \mathbf{1}_F\Vert). \end{eqnarray} Now, take $\eta\equiv\lbrace \mathrm{sgn \,}(e_i^*(x))\rbrace$ and define $z:=\sum_{i\in A_m(x)}(e_i^*(x)+\delta\eta_i)e_i + P_{(A_m(x))^c}(x)+\alpha \mathbf 1_F$ for $\delta>0$. Thus, by the TCGA, $$z-\mathcal{CG}_m(z) = \sum_{i\in A_m(x)}b_i e_i + P_{(A_m(x))^c}(x)+\alpha \mathbf{1}_F.$$ Again, using semi-greediness and \eqref{semi1} with $\alpha_n = b_n$ if $n\in A_m(x)$, $\alpha_n = e_n^*(x)$ if $n\in A\setminus A_m(x)$, $\alpha_n = \alpha$ if $n\in F$ and $\alpha_n = 0$ if $n\in C\setminus F$, \begin{eqnarray*} \Vert \alpha \mathbf{1}_F\Vert &\leq& (1+\rho)\Vert \sum_{i\in A_m(x)}b_i e_i + P_{(A_m(x))^c}(x)+\alpha \mathbf{1}_F\Vert \leq C_{sg}(1+\rho)\Vert z-\alpha \mathbf{1}_F\Vert\\ &=& C_{sg}(1+\rho)\Vert \sum_{i\in A_m(x)}(e_i^*(x)+\delta\eta_i)e_i + P_{(A_m(x))^c}(x)\Vert.
\end{eqnarray*} Taking limits when $\delta$ goes to $0$, \begin{eqnarray}\label{proof_2} \Vert \alpha \mathbf{1}_F\Vert\leq C_{sg}(1+\rho)\Vert x\Vert. \end{eqnarray} By \eqref{proof_1} and \eqref{proof_2}, $\mathcal B$ is $C_q$-quasi-greedy with $C_q\leq C_{sg}\rho(1+(1+\rho)C_{sg})$. \end{proof} \begin{rem} We have studied the characterization of semi-greediness using the $\rho$-admissibility. But, at the moment, we do not know whether it is possible to prove the same characterization for $w$-semi-greediness, since $\rho$-admissibility is formulated in terms of the cardinality of the sets involved rather than their weights. \end{rem} \section{Final comments}\label{final} In this last section we discuss two questions. The first one is to show that Remark \ref{rem1} is an improvement with respect to the bound of Theorem \ref{walm}. To prove that, we establish the following result, which is the weighted version of \cite[Lemma 3.5]{BBG}. \begin{prop}\label{proprem} Assume that $\mathcal B$ is a basis in a Banach space $\mathbb X$. If $\mathcal B$ is $C_{d}$-$w$-democratic and $C_q$-quasi-greedy, then $\mathcal B$ is $C_{s}$-$w$-super-democratic with $C_{s}\leq 4\kappa^2 C_qC_d$, where $\kappa=1$ if $\mathbb F=\mathbb R$ and $\kappa=2$ if $\mathbb F=\mathbb C$. \end{prop} \begin{proof} First, we prove the result in the real case. Consider two sets $A, B$ with $w(A)\leq w(B)$ and two signs $\varepsilon, \eta$. If we denote $A^{\pm} = \lbrace n\in A : \varepsilon_n = \pm 1\rbrace$, using the democracy with $w(A^{\pm})\leq w(A)\leq w(B)$, \begin{eqnarray}\label{demo1} \Vert \mathbf{1}_{\varepsilon A}\Vert \leq \Vert \mathbf{1}_{A^+}\Vert + \Vert\mathbf{1}_{A^-}\Vert \leq 2C_d\Vert \mathbf{1}_B\Vert. \end{eqnarray} Now, we decompose $B$ in the same way as $A$, that is, $B^{\pm}= \lbrace n\in B : \eta_n = \pm 1\rbrace$.
Hence, using quasi-greediness, \begin{eqnarray}\label{demo2} \Vert \mathbf{1}_B\Vert\leq \Vert \mathbf{1}_{B^+}\Vert + \Vert\mathbf{1}_{B^-}\Vert\leq 2C_q\Vert \mathbf{1}_{\eta B}\Vert. \end{eqnarray} Then, by \eqref{demo1} and \eqref{demo2}, the basis is $C_{s}$-$w$-super-democratic with $C_{s}\leq 4C_qC_d$. For the complex case, we can proceed using \cite[Lemma 6.4]{DKO} as in Theorem \ref{main1} to conclude that $\mathcal B$ is $C_{s}$-$w$-super-democratic with $C_{s}\leq 4\kappa^2C_qC_d$. \end{proof} The second question that we study is related to $w$-super-democracy and $w$-disjoint-super-democracy. We know that, if $w=(1,1,...)$, that is, $w(A)=\vert A\vert$, a basis $\mathcal B$ is super-democratic if and only if $\mathcal B$ is disjoint-super-democratic. Quantitatively, \begin{itemize} \item If $\mathcal B$ is $C_s$-super-democratic, then $\mathcal B$ is $C_{sd}$-disjoint-super-democratic with $C_{sd}\leq C_s$. \item If $\mathcal B$ is $C_{sd}$-disjoint-super-democratic, then $\mathcal B$ is $C_s$-super-democratic with $C_s\leq C_{sd}^2$. \end{itemize} This result is straightforward. Indeed, if the basis is super-democratic, then it is automatically disjoint-super-democratic. For the converse, if $\mathcal B$ is $C_{sd}$-disjoint-super-democratic and we take $\vert A\vert\leq \vert B\vert$ and $C$ such that $C>(A\cup B)$ with $\vert A\vert=\vert C\vert$, then $$\dfrac{\Vert\mathbf{1}_{\varepsilon A}\Vert}{\Vert \mathbf{1}_{\varepsilon' B}\Vert}=\dfrac{\Vert\mathbf{1}_{\varepsilon A}\Vert}{\Vert \mathbf{1}_{C}\Vert}\dfrac{\Vert\mathbf{1}_{C}\Vert}{\Vert \mathbf{1}_{\varepsilon' B}\Vert}\leq C_{sd}^2\Rightarrow C_s\leq C_{sd}^2.$$ Now, we ask about the same equivalence for general weights. The result is the following: \begin{prop} Assume that $\mathcal B$ is a basis in a Banach space $\mathbb X$. a) If $\mathcal B$ is $C_s$-$w$-super-democratic, then $\mathcal B$ is $C_{sd}$-$w$-disjoint-super-democratic with $C_{sd}\leq C_s$.
b) If $\mathcal B$ is $C_{sd}$-$w$-disjoint-super-democratic, then $\mathcal B$ is $C_s$-$w$-super-democratic with $C_s\leq C_{sd}(1+c_2^2C_{sd})$. \end{prop} \begin{proof} Only item b) requires a proof. We proceed as in the proof of Theorem \ref{main}. Take $A$ and $B$ such that $w(A)\leq w(B)$. \textbf{Case 1:} $\sum_{n=1}^\infty w_n = \infty $ and $\sup_n w_n<\infty$. \textbf{Case 1.1:} Assume that $\limsup_{n\rightarrow \infty}w_n < w(B)$. Since $\sum_n w_n =\infty$, we can take $E$ and $n_0$ with $n_0>E>A\cup B$ such that $$w(E)\leq w(B)<w(E\cup \lbrace n_0\rbrace).$$ In this case, since $A\cap (E\cup\lbrace n_0\rbrace)=\emptyset$, \begin{eqnarray}\label{sup1} \Vert \mathbf{1}_{\varepsilon A}\Vert \leq C_{sd}\Vert \mathbf{1}_{E\cup \lbrace n_0\rbrace}\Vert \leq C_{sd}\Vert\mathbf{1}_{E}\Vert + C_{sd}c_2\leq C_{sd}(1+c_2^2)\Vert\mathbf{1}_{E}\Vert . \end{eqnarray} On the other hand, due to $w(E)\leq w(B)$ and $E\cap B=\emptyset$, \begin{eqnarray}\label{sup2} \Vert \mathbf{1}_{E}\Vert \leq C_{sd}\Vert \mathbf{1}_{\varepsilon' B}\Vert. \end{eqnarray} Using \eqref{sup1} and \eqref{sup2}, we obtain that $\mathcal B$ is $C_s$-$w$-super-democratic with $C_s\leq C_{sd}^2(1+c_2^2)$. \textbf{Case 1.2}: $w(B)\leq \limsup_{n\rightarrow\infty}w_n$. Using item i) of Proposition \ref{p:find c0}, we obtain that $$\Vert \mathbf{1}_{\varepsilon A}\Vert\leq C_{sd}c_2^2\Vert\mathbf{1}_{\eta B}\Vert.$$ \textbf{Case 2:} If $\sum_n w_n<\infty$ or $\sup_n w_n =\infty$, the basis is equivalent to the $c_0$-basis and the result is trivial. This finishes the proof. \end{proof} \textbf{Question:} Recently, in \cite{BBGHO2}, the authors proved that for Schauder bases and $w=(1,1,...)$, the constants of super-democracy and disjoint-super-democracy are of the same order up to the basis constant, that is, $C_{sd}\leq C_s\leq 2(\mathfrak{K}_b+1)C_{sd}+\kappa \mathfrak{K}_b$, where $\kappa =\sup_{n}\Vert e_n\Vert\Vert e_n^*\Vert$. Is it possible to show the same result for general weights?
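As a minimal sanity check for these constants (our addition, not taken from \cite{BBGHO2}): for the canonical basis of $c_0$, all the democracy-type constants above are trivially equal to $1$, since

```latex
% Canonical basis of c_0: for every finite set A and every sign eps,
\Vert \mathbf{1}_{\varepsilon A}\Vert_{c_0}
  = \sup_{n\in A}\vert \varepsilon_n\vert = 1,
% hence C_d = C_s = C_{sd} = 1 for any weight w, and the displayed bound
% C_{sd} <= C_s <= 2(K_b + 1)C_{sd} + kappa*K_b
% holds with room to spare, because here K_b = 1 and
% kappa = sup_n ||e_n|| ||e_n^*|| = 1.
```

This example also illustrates why Case 2 above is trivial: once a basis is equivalent to the $c_0$-basis, all the quantities being compared are uniformly bounded.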
\section{Introduction} Non-smooth functions have played a key role in analysis since the nineteenth century. One fundamental development in this vein came with the introduction of Sobolev spaces, which turned out to be a key tool in the study of nonlinear partial differential equations and the calculus of variations. Although classically Sobolev functions themselves were not smooth, they were defined on smooth objects such as domains in the Euclidean space or, more generally, Riemannian manifolds. By the late 1970s it became well recognized that several results in real analysis required little structure from the underlying ambient space, and could be generalized to non-smooth settings, such as the so-called spaces of homogeneous type. The latter spaces are metric spaces equipped with a doubling Borel measure (see \cite{CoWe71,CoWe77}). In fact, maximal functions, Hardy spaces, functions of bounded mean oscillation, and singular integrals of Calder\'on-Zygmund type all continue to have a fruitful theory in the context of spaces of homogeneous type. However, this rich theory was, in a sense, only zeroth-order analysis, given that no derivatives were involved. The study of first-order analysis with suitable generalizations of derivatives, a fundamental theorem of calculus, and Sobolev spaces, in the setting of spaces of homogeneous type, was initiated in the 1990s. This area, known as analysis on metric spaces, has since grown into a multifaceted theory which continues to play an important role in many areas of contemporary mathematics. For an introduction to the subject we recommend \cite{AlMi,AGS,AT,BB,cheeger,hajlasz,SMP,HKST,HeinonenK,shanmugalingam}. One of the main objects of study in analysis on metric spaces is the class of spaces supporting Poincar\'e inequalities, introduced in \cite{HeinonenK}.
To define this notion, recall that a {\em metric-measure space} $(X,d,\mu)$ is a metric space $(X,d)$ with a Borel measure $\mu$ such that $0<\mu\big(B(x,r)\big)<\infty$ for all $x\in X$ and all $r\in(0,\infty)$. If the measure $\mu$ is {\em doubling}, i.e., there exists a constant $C\in(0,\infty)$ such that $\mu(2B)\leq C\mu(B)$ for all balls\,\,$B\subseteq X$, then we call $(X,d,\mu)$ a {\em doubling metric-measure space}. The notation $\tau B$ stands for the dilation of the ball $B$ by the factor $\tau\in(0,\infty)$, i.e., if $B=B(x,r)$ with $x\in X$ and $r\in(0,\infty)$, then $\tau B:=B(x,\tau r)$. A Borel function $g:X\to[0,\infty]$ is said to be an {\em upper gradient} of another Borel function $u:X\to\mathbb{R}$ if \begin{equation} |u(x)-u(y)|\leq\int_{\gamma_{xy}} g\,ds \end{equation} holds for each $x,y\in X$ and all rectifiable curves $\gamma_{xy}$ joining $x$ and $y$. Finally, a metric-measure space $(X,d,\mu)$ is said to {\em support a $p$-Poincar\'e inequality}, $p\in[1,\infty)$, if there exist constants $C\in(0,\infty)$ and $\sigma\in[1,\infty)$ such that \begin{equation} \label{ppoin} \mvint_{B} |u-u_{B}|\,d\mu\leq Cr \left(\,\, \mvint_{\sigma B}g^{p}\, d\mu\right)^{1/p}, \end{equation} whenever $B$ is a ball of radius $r\in(0,\infty)$, $u\in L^1_{\rm loc}(X,\mu)$, and $g:X\to[0,\infty]$ is an upper gradient of $u$. Here and in what follows the barred integral and $f_E$ stand for the integral average: $$ f_E=\mvint_Ef\, d\mu =\frac{1}{\mu(E)}\int_E f\, d\mu, $$ where $E$ is a $\mu$-measurable set of positive measure. To be consistent with the definition of the upper gradient, in what follows we will always assume that functions $u\in L^1_{\rm loc}(X,\mu)$ are everywhere finite Borel representatives. The above definitions of the upper gradient and of spaces supporting Poincar\'e inequalities are due to Heinonen and Koskela in \cite{HeinonenK} (see also \cite{HKST} for a more detailed exposition).
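To fix ideas, we recall the classical model case (added here for orientation only; it is not used in the sequel): in $\mathbb R^n$ with the Euclidean distance and the Lebesgue measure, the modulus of the gradient of a smooth function is an upper gradient, and the classical Poincar\'e inequality holds with $\sigma=1$.

```latex
% Model case: X = R^n, d the Euclidean metric, mu = Lebesgue measure.
% For u smooth and any rectifiable curve \gamma_{xy} from x to y,
% parametrized by arc length, the fundamental theorem of calculus gives
\vert u(x)-u(y)\vert
  = \Big\vert \int_{\gamma_{xy}} \nabla u\cdot \gamma'\, ds\Big\vert
  \leq \int_{\gamma_{xy}} \vert\nabla u\vert\, ds,
% so g = |\nabla u| is an upper gradient of u, and the classical
% p-Poincare inequality holds on balls with dilation sigma = 1:
\mvint_{B}\vert u-u_{B}\vert\, dx
  \leq C(n,p)\, r\left(\,\mvint_{B}\vert\nabla u\vert^{p}\, dx\right)^{1/p}.
```

In this classical setting the doubling property of the Lebesgue measure is automatic; the point of the present note is that, conversely, inequalities of this type force doubling.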
It was proved in \cite{SMP2} and \cite[Theorem~5.1]{SMP} that if a doubling metric-measure space supports a $p$-Poincar\'e inequality, then the $p$-Poincar\'e inequality self-improves in the sense that for some $q\in(p,\infty)$ and $C'\in(0,\infty)$, there holds \begin{equation} \label{REW-1} \left(\, \mvint_{B} |u-u_{B}|^{q}\, d\mu\right)^{1/q}\leq C'r \left(\,\, \mvint_{5\sigma B}g^{p}\, d\mu\right)^{1/p}, \end{equation} whenever $B$ is a ball of radius $r\in(0,\infty)$, $u\in L^1_{\rm loc}(X,\mu)$, and $g:X\to[0,\infty]$ is an upper gradient of $u$. The purpose of this note is to show that the family of inequalities in \eqref{REW-1} on a metric-measure space implies that the underlying measure is doubling, thus providing a characterization of doubling metric-measure spaces supporting Poincar\'e inequalities without assuming a priori that the measure is doubling; see Theorem~\ref{EPequiv} below. This result is a minor refinement of a beautiful result in \cite{korobenkomr}, where it was proved that, in a related context, a family of weak Sobolev inequalities implies that the measure is doubling. However, the authors considered Sobolev inequalities in which the balls have the same radius on both sides, and such a condition is stronger than the one in \eqref{REW-1}. Moreover, they did not address the important applications to Sobolev spaces supporting Poincar\'e inequalities. While the proof presented below is almost the same as the one in \cite{korobenkomr}, it is important to provide details:\ the proof employs an infinite iteration of Sobolev inequalities, and since we now have balls of different sizes on both sides, it is not obvious without checking the details that this will not cause the estimates to blow up. This paper should be regarded as a supplement to the work of \cite{korobenkomr} and an advertisement of their work.
Different, but related, iterative arguments to the one presented below were used in \cite{carron,gorka,hajlaszkt1,hajlaszkt2,hebey} in the proofs that a Sobolev inequality implies a measure density condition. Another application of the method developed in \cite{korobenkomr} is given in the forthcoming paper \cite{AGH}. We now state the main result of this note. \begin{theorem} \label{EPequiv} Let $(X,d,\mu)$ be a metric-measure space and fix $p\in[1,\infty)$. Then the following two statements are equivalent. \begin{enumerate} \item[(a)] The measure $\mu$ is doubling and the space $(X,d,\mu)$ supports a $p$-Poincar\'{e} inequality. \vskip.08in \item[(b)] There exist $q\in(p,\infty)$, $C_P\in [1,\infty)$, and $\sigma\in[1,\infty)$ such that \begin{equation} \label{eq20} \left(\, \mvint_{B} |u-u_{B}|^{q}\, d\mu\right)^{1/q}\leq C_Pr \left(\,\, \mvint_{\sigma B}g^{p}\, d\mu\right)^{1/p}, \end{equation} whenever $B$ is a ball of radius $r\in(0,\infty)$, $u\in L^1_{\rm loc}(X,\mu)$, and $g:X\to[0,\infty]$ is an upper gradient of $u$. \end{enumerate} \end{theorem} \begin{remark} We could assume that \eqref{eq20} holds with $C_P\in (0,\infty)$, but the estimates presented below are more elegant if $C_P\geq 1$. Clearly, if \eqref{eq20} holds with a constant strictly greater than zero, then we can increase it to a constant greater than or equal to $1$. \end{remark} A positive locally integrable function $0<w\in L^1_{\rm loc}(\mathbb R^n)$ defines an absolutely continuous measure $d\mu=w(x)\, dx$ with weight $w$. The class of so-called {\em $p$-admissible} weights plays a fundamental role in nonlinear potential theory \cite{HKM}. To keep the presentation brief, we will not recall the definition of a $p$-admissible weight, but we refer the reader to \cite{HKM} for details. As an immediate consequence of Theorem~\ref{EPequiv} and \cite[Theorem~2]{SMP2} we obtain a new characterization of $p$-admissible weights.
A variant of this result has also been proved in \cite{korobenkomr}. \begin{corollary} A function $0<w\in L^1_{\rm loc}(\mathbb R^n)$ is a $p$-admissible weight for some $1<p<\infty$ if and only if there exist $q\in (p,\infty)$, $C\in [1,\infty)$ and $\sigma\in [1,\infty)$ such that $$ \left(\,\mvint_B |u-u_B|^q\, d\mu\right)^{1/q}\leq Cr\left(\,\mvint_{\sigma B} |\nabla u|^p\, d\mu\right)^{1/p} $$ whenever $B\subset\mathbb R^n$ is a ball of radius $r\in (0,\infty)$, $u\in C^\infty(\sigma B)$ and $d\mu=w\, dx$. \end{corollary} \begin{proof}[Proof of Theorem~\ref{EPequiv}] The implication {\it (a)} $\Rightarrow$ {\it (b)} follows immediately from \cite[Theorem~1]{SMP2}. Note, however, that the constant $\sigma$ in \eqref{eq20} might be larger than that in the $p$-Poincar\'e inequality (see \eqref{REW-1}). Thus we will focus on proving that {\it(a)} follows from {\it (b)}. To this end, suppose that $X$ satisfies the condition displayed in \eqref{eq20}. Making use of H\"older's inequality and the fact that $1\leq p<q$, we may conclude that the $(q,p)$-Poincar\'e inequality in \eqref{eq20} implies that the space $(X,d,\mu)$ supports a $p$-Poincar\'e inequality (see \eqref{ppoin}). It remains to show that the condition in \eqref{eq20} forces the measure $\mu$ to be doubling. Fix a ball $B:=B(x,r)$, $x\in X$, $r\in(0,\infty)$, and observe that specializing \eqref{eq20} to the case when $B$ is replaced by $2\sigma B$ yields \begin{equation} \label{eq-JK2} \left(\,\, \mvint_{2\sigma B} |u-u_{2\sigma B}|^q\, d\mu\right)^{1/q} \leq 2C_P\sigma r\left(\,\, \mvint_{2\sigma^2 B} g^p\, d\mu\right)^{1/p}, \end{equation} whenever $u\in L^{1}_{\rm loc}(X,\mu)$ and $g:X\to[0,\infty]$ is an upper gradient of $u$.
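The H\"older step invoked above (the passage from the $(q,p)$-Poincar\'e inequality to the $p$-Poincar\'e inequality) is short enough to record explicitly: applying Jensen's inequality with exponent $q>1$ to the probability measure $\mu/\mu(B)$ on a ball $B$, and then \eqref{eq20}, gives

```latex
\mvint_{B}|u-u_{B}|\,d\mu
  \leq\left(\,\mvint_{B}|u-u_{B}|^{q}\,d\mu\right)^{1/q}
  \leq C_P\, r\left(\,\,\mvint_{\sigma B}g^{p}\,d\mu\right)^{1/p},
```

which is precisely the $p$-Poincar\'e inequality \eqref{ppoin} with $C=C_P$.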
Since $p\geq1$, it follows from \eqref{eq-JK2} and H\"older's inequality that \begin{align} \label{eq-JK3} \left(\,\, \mvint_{2\sigma B} |u|^q\, d\mu\right)^{1/q} &\leq \left(\,\, \mvint_{2\sigma B} |u-u_{2\sigma B}|^q\, d\mu\right)^{1/q} +|u_{2\sigma B}| \nonumber\\[4pt] &\leq 2\sigma rC_P \left(\,\, \mvint_{2\sigma^2 B} g^p\, d\mu\right)^{1/p}+ \left(\,\, \mvint_{2\sigma B} |u|^p\, d\mu\right)^{1/p}. \end{align} We now define a collection of functions $\{u_j\}_{j\in\mathbb{N}}$ as follows:\ for each fixed $j\in\mathbb{N}$, let $r_j:=(2^{-j-1}+2^{-1})r$ and set $B_j:=B(x,r_j)$. Then \begin{equation}\label{JG-1} \frac{1}{2}r<r_{j+1}<r_j\leq\frac{3}{4}r, \quad\forall\,j\in\mathbb{N}. \end{equation} For each $j\in\mathbb{N}$, let $u_j:X\to\mathbb{R}$ be the function defined by setting for each $y\in X$, \begin{eqnarray}\label{udef} u_j(y):= \left\{ \begin{array}{ll} \,\qquad 1\quad &\mbox{if $y\in B_{j+1}$,} \\[6pt] \displaystyle\frac{r_j-d(x,y)}{r_j-r_{j+1}} &\mbox{if $y\in B_j\setminus B_{j+1}$,} \\[15pt] \,\qquad 0 &\mbox{if $y\in X\setminus B_j$.} \end{array} \right. \end{eqnarray} Noting that $\displaystyle (r_j-r_{j+1})^{-1}=2^{j+2}r^{-1}$, a straightforward computation shows that $u_j$ is $2^{j+2}r^{-1}$-Lipschitz on $X$ and that the function $g_j:=2^{j+2}r^{-1}\chi_{B_j}$ is an upper gradient of $u_j$, where $\chi_{B_j}$ denotes the characteristic function of the set $B_j$. In particular, we have that $u_j\in L^{1}_{\rm loc}(X,\mu)$ and that the functions $u_j$ and $g_j$ satisfy \eqref{eq-JK3}.
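The arithmetic behind the Lipschitz constant of the cutoff functions is easy to check numerically. The following sketch (plain Python; the radius value is an arbitrary illustrative choice) verifies that $r_j-r_{j+1}=2^{-j-2}r$, so the linear ramp in \eqref{udef} has slope exactly $2^{j+2}/r$, and that the radii stay nested in $(r/2,\,3r/4]$ as claimed in \eqref{JG-1}.

```python
# Sanity check: with r_j = (2^{-j-1} + 2^{-1}) r, the annulus B_j \ B_{j+1}
# has width r_j - r_{j+1} = 2^{-j-2} r, so the cutoff u_j, which ramps
# linearly from 1 to 0 across it, is (2^{j+2}/r)-Lipschitz.

def r_(j, r):
    # radius of the ball B_j in the proof
    return (2.0 ** (-j - 1) + 0.5) * r

def lipschitz_constant(j, r):
    # slope of the linear ramp of u_j
    return 1.0 / (r_(j, r) - r_(j + 1, r))

r = 3.7  # arbitrary radius
for j in range(1, 10):
    # width of the annulus equals 2^{-j-2} r ...
    assert abs(r_(j, r) - r_(j + 1, r) - 2.0 ** (-j - 2) * r) < 1e-12
    # ... hence the Lipschitz constant is 2^{j+2} / r
    assert abs(lipschitz_constant(j, r) - 2 ** (j + 2) / r) < 1e-9
    # radii stay nested in (r/2, 3r/4]
    assert r / 2 < r_(j + 1, r) < r_(j, r) <= 0.75 * r
print("cutoff slopes verified")
```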
Observe that for each fixed $j\in\mathbb{N}$, we have (keeping in mind $\sigma\geq1$) \begin{eqnarray} \label{HD-1} 2\sigma rC_P\left(\,\, \mvint_{2\sigma^2 B} g_j^p\, d\mu\right)^{1/p} = \sigma C_P2^{j+3}\left(\frac{\mu(B_j)}{\mu(2\sigma^2 B)}\right)^{1/p} \leq \sigma C_P 2^{j+3}\left(\frac{\mu(B_j)}{\mu(2\sigma B)}\right)^{1/p} \end{eqnarray} and \begin{eqnarray} \label{HD-2} \left(\,\, \mvint_{2\sigma B}|u_j|^p\, d\mu\right)^{1/p} \leq \left(\frac{\mu(B_j)}{\mu(2\sigma B)}\right)^{1/p}. \end{eqnarray} Moreover, \begin{equation} \label{HD-3} \left(\,\, \mvint_{2\sigma B} |u_j|^{q}\, d\mu\right)^{1/q}\geq\left(\frac{\mu(B_{j+1})}{\mu(2\sigma B)}\right)^{1/q}. \end{equation} In concert, \eqref{HD-1}-\eqref{HD-3} and the outermost sides of the inequality in \eqref{eq-JK3} give \begin{align} \label{HD-4} \left(\frac{\mu(B_{j+1})}{\mu(2\sigma B)}\right)^{1/q} &\leq\sigma C_P 2^{j+4} \left(\frac{\mu(B_j)}{\mu(2\sigma B)}\right)^{1/p}, \quad\forall\,j\in\mathbb{N}. \end{align} Therefore \begin{align} \label{HD-42} \mu(B_{j+1})^{1/q} \leq\sigma C_P 2^{j+4}\frac{\mu(B_j)^{1/p}}{\mu(2\sigma B)^{(q-p)/pq}}, \quad\forall\,j\in\mathbb{N}. \end{align} With $\alpha:=q/p\in(1,\infty)$ we raise both sides of the inequality in \eqref{HD-42} to the power $p/\alpha^{j-1}$ in order to obtain \begin{align} \label{HD-5} \mu(B_{j+1})^{1/\alpha^{j}}\leq 2^{p(j+4)/\alpha^{j-1}} \bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{p/\alpha^{j-1}}\mu(B_j)^{1/\alpha^{j-1}}, \quad\forall\,j\in\mathbb{N}.
\end{align} If we let $P_j:=\mu(B_j)^{1/\alpha^{j-1}}$, then the inequality in \eqref{HD-5} becomes \begin{align}\label{HD-6} P_{j+1}\leq 2^{p(j+4)/\alpha^{j-1}} \bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{p/\alpha^{j-1}}P_j, \quad\forall\,j\in\mathbb{N}, \end{align} which, together with an inductive argument and the fact that $P_1\leq\mu(B)$, implies \begin{align}\label{HD-7} P_{j+1}&\leq P_1\prod_{k=1}^j \left[2^{p(k+4)/\alpha^{k-1}} \bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{p/\alpha^{k-1}}\right] \nonumber\\[4pt] &\leq \mu(B)\prod_{k=1}^j \left[2^{p(k+4)/\alpha^{k-1}} \bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{p/\alpha^{k-1}}\right], \quad\forall\,j\in\mathbb{N}. \end{align} We claim that the product in \eqref{HD-7} converges as $j\to\infty$. Indeed, observe that \begin{align}\label{QW-1} \prod_{k=1}^\infty \bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{p/\alpha^{k-1}} &=\bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{p\sum_{k=1}^\infty\alpha^{1-k}} \nonumber\\[4pt] &= \bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{\frac{p\alpha}{\alpha-1}} =\bigg(\frac{\sigma C_P}{\mu(2\sigma B)^{(q-p)/pq}}\bigg)^{\frac{pq}{q-p}}, \end{align} and \begin{equation}\label{QW-3} \prod_{k=1}^\infty\big(2^{p(k+4)}\big)^{1/\alpha^{k-1}} =2^{\sum_{k=1}^\infty p(k+4)\alpha^{1-k}}=:A(p,q)\in(0,\infty). \end{equation} On the other hand, it follows from \eqref{JG-1} that \begin{equation} 0<\mu(2^{-1}B)^{1/\alpha^{j-1}}\leq P_j=\mu(B_j)^{1/\alpha^{j-1}}\leq \mu(B)^{1/\alpha^{j-1}}<\infty, \end{equation} which, in turn, further implies $\displaystyle\lim\limits_{j\to\infty}P_j=1$. Consequently, passing to the limit in \eqref{HD-7} yields \begin{align}\label{ME12} 1&\leq \mu(B)\frac{\big(\sigma C_P\big)^{pq/(q-p)}}{\mu(2\sigma B)}A(p,q). \end{align} Hence, \begin{align}\label{ME13} \mu(2\sigma B)\leq \big(\sigma C_P\big)^{pq/(q-p)}A(p,q)\, \mu(B). \end{align} Since $\sigma\geq1$, it follows that $\mu$ is doubling. 
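The convergence of the first product rests only on the geometric series $\sum_{k\geq1}\alpha^{1-k}=\alpha/(\alpha-1)$. A quick numerical sketch (the sample values of $p$ and $q$ are arbitrary illustrative choices) confirms the exponent arithmetic in \eqref{QW-1}.

```python
# Check the exponent in (QW-1): with alpha = q/p > 1,
# p * sum_{k=1}^infty alpha^{1-k} = p*alpha/(alpha-1) = pq/(q-p).
from math import isclose

for p, q in [(1.0, 2.0), (2.0, 3.0), (2.0, 5.0)]:
    alpha = q / p
    # truncated geometric series; the tail is O(alpha^{-400}), negligible
    partial = p * sum(alpha ** (1 - k) for k in range(1, 400))
    assert isclose(partial, p * alpha / (alpha - 1), rel_tol=1e-9)
    # the closed form pq/(q-p) used in (QW-1)
    assert isclose(p * alpha / (alpha - 1), p * q / (q - p), rel_tol=1e-12)
print("exponent in (QW-1) verified")
```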
This finishes the proof of the second implication and, in turn, the proof of the theorem. \end{proof} \begin{remark} In the proof of the implication {\it (b)} $\Rightarrow$ {\it (a)} in Theorem~\ref{EPequiv}, one can compute the constant $A(p,q)$ appearing in \eqref{QW-3} by observing that (keeping in mind $\alpha=q/p$), \begin{align} {\sum_{k=1}^\infty p(k+4)\alpha^{1-k}} &=p\sum_{k=1}^\infty \frac{k}{\alpha^{k-1}}+4p\sum_{k=1}^\infty \frac{1}{\alpha^{k-1}} \nonumber\\[4pt] &=\frac{p}{(1-1/\alpha)^2}+\frac{4p\alpha}{\alpha-1} =\frac{pq^2}{(q-p)^2}+\frac{4pq}{q-p}. \end{align} Therefore, $$A(p,q)=2^{\frac{pq^2}{(q-p)^2}+\frac{4pq}{q-p}}.$$ Hence, condition \eqref{eq20} implies that the measure $\mu$ satisfies the following doubling condition: $$ \mu(2B)\leq\Big(\sigma C_P2^{\frac{q}{(q-p)}+4}\Big)^{pq/(q-p)}\mu(B)\quad \mbox{for all balls\,\,$B\subseteq X$.} $$ \end{remark}
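The closed form for $A(p,q)$ can be double-checked numerically; the sketch below (arbitrary sample values of $p$ and $q$) compares truncated partial sums of the defining series in \eqref{QW-3} against the formula from the remark.

```python
# Check A(p,q) = 2^{ pq^2/(q-p)^2 + 4pq/(q-p) } against its defining
# series sum_{k>=1} p(k+4) alpha^{1-k} with alpha = q/p (see (QW-3)).
from math import isclose

def exponent_series(p, q, terms=500):
    # truncated series from (QW-3); the tail decays geometrically
    alpha = q / p
    return sum(p * (k + 4) * alpha ** (1 - k) for k in range(1, terms + 1))

def exponent_closed(p, q):
    # closed form derived in the remark
    return p * q**2 / (q - p) ** 2 + 4 * p * q / (q - p)

for p, q in [(1.0, 2.0), (2.0, 3.0), (1.5, 4.0)]:
    assert isclose(exponent_series(p, q), exponent_closed(p, q), rel_tol=1e-9)
print("A(p,q) exponent verified")
```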
\section{Introduction} \indent Owing to its effectiveness on nonsmooth regularized problems and its suitability for distributed computing in complex optimization, the Alternating Direction Method of Multipliers (ADMM) has received a great deal of attention in recent years \cite{boyd2011distributed}. The standard ADMM was originally proposed to solve the following separable convex optimization problem: \begin{align*} \min\nolimits_{x,z} f(x)+g(z)\ \ \ s.t. \ Ax+Bz=c \end{align*} where $f(x)$ and $g(z)$ are closed convex functions, $A$ and $B$ are matrices and $c$ is a vector. There are extensive reports in the literature exploring the theoretical properties of ADMM and its variants for convex optimization problems, including multi-block ADMM \cite{deng2017parallel}, Bregman ADMM \cite{wang2014bregman}, fast ADMM \cite{goldstein2014fast,kadkhodaie2015accelerated}, and stochastic ADMM \cite{ouyang2013stochastic}. ADMM has now been extended to cover a wide range of nonconvex problems and has achieved significant performance in many practical applications \cite{xu2016empirical}. Unlike convex problems, nonconvex optimization based on ADMM is much more difficult, and the behavior of ADMM for nonconvex problems has been largely a mystery \cite{xu2016empirical}. Current theoretical analyses of nonconvex ADMM typically focus on special nonconvex problems with strict conditions. Most of the existing work establishes theoretical guarantees under the assumption that $x$ and $z$ are either decoupled variables or both from convex sets. Recently, however, there have been an increasing number of real-world applications where the objective functions are multi-convex (i.e., nonconvex in all the variables jointly but convex in each one when all the others are fixed).
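For reference, the standard ADMM updates for the separable problem above can be sketched on a toy instance. The choices below ($f(x)=\frac12\Vert x-a\Vert^2$, $g(z)=\lambda\Vert z\Vert_1$, $A=I$, $B=-I$, $c=0$, and all numerical values) are illustrative assumptions, not taken from the paper; with them, both updates have closed forms and the iteration uses the scaled dual variable.

```python
# Minimal sketch of standard ADMM for the special case
#   f(x) = 0.5*||x - a||^2,  g(z) = lam*||z||_1,  A = I, B = -I, c = 0,
# i.e. a lasso-style toy problem whose solution is soft_threshold(a, lam).
# All parameter values are illustrative assumptions.

def soft_threshold(v, t):
    # componentwise prox of t*|.|
    return [max(abs(vi) - t, 0.0) * (1 if vi > 0 else -1) for vi in v]

def admm(a, lam=0.5, rho=1.0, iters=100):
    n = len(a)
    x, z, y = [0.0] * n, [0.0] * n, [0.0] * n  # y is the scaled dual
    for _ in range(iters):
        # x-update: argmin 0.5||x-a||^2 + (rho/2)||x - z + y||^2 (closed form)
        x = [(ai + rho * (zi - yi)) / (1.0 + rho) for ai, zi, yi in zip(a, z, y)]
        # z-update: argmin lam||z||_1 + (rho/2)||x - z + y||^2 (soft threshold)
        z = soft_threshold([xi + yi for xi, yi in zip(x, y)], lam / rho)
        # dual update on the residual x - z
        y = [yi + xi - zi for yi, xi, zi in zip(y, x, z)]
    return x, z

x, z = admm([2.0, -0.3, 1.0])  # converges toward [1.5, 0.0, 0.5]
```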
For example, a descriptive model and a generative model may be optimized alternately in an adversarial learning framework; for instance, the descriptive model may train a classifier while the generative model maximizes the probability of the classifier making mistakes \cite{goodfellow2014generative}, or a dictionary learning application may learn the dictionary and the coefficients simultaneously \cite{mairal2009supervised}. Nonnegative matrix factorization, which aims to decompose a matrix into a product of two matrices, has been applied widely in computer vision, machine learning and various other fields \cite{lee2001algorithms}, and the bilinear matrix inequality problem has been designed for the analysis of linear and nonlinear uncertain systems \cite{hassibi1999path}. All of these can be considered special cases of the following problem, which is our focus in this paper: \begin{align*}\nonumber &\mbox{\textbf{Problem 1:}}\\ &\nonumber \min\nolimits_{x_1,\cdots,x_n,z} f(x_1,\cdots,x_n)+\sum\nolimits_{i=1}^n g_i(x_i)+h(z)\ \ \ \ \ \ \ \ \ \ \ \ \ \\\nonumber &\ \ \ \ \ \ \ s.t. \ l(x_1,\cdots,x_n)\leq 0,\ \sum\nolimits_{i=1}^n A_i x_i-z=0 \end{align*} \normalsize where $x_i\in \mathbb{R}^{p_i\times 1}(i=1,\cdots,n)$, $z\in \mathbb{R}^{q\times 1}$, $f(x_1,x_2,\cdots,x_n)$ and $l(x_1,x_2,\cdots,x_n): \mathbb{R}^p\to \mathbb{R}\cup\{\infty\}\ (p=\sum\nolimits_{i=1}^n p_i)$ are proper, continuous, multi-convex and possibly nonsmooth functions, $g_i(x_i)(i=1,\cdots,n)$ are proper, continuous, convex and possibly nonsmooth functions, and $h(z)$ is a proper, differentiable and convex function. $A_i\in \mathbb{R}^{q\times p_i}(i=1,\cdots,n)$ are matrices with full column rank.\\ However, Problem 1 is very difficult to solve. First, the objective function is nonconvex: the coupled term $f(x_1,\cdots,x_n)$ is jointly nonconvex, and the tightly coupled variables lie in a nonconvex feasible set.
This type of problem has not yet been rigorously and systematically investigated. Second, Problem 1 has multiple constraints: aside from the equality constraint $\sum\nolimits_{i=1}^n A_ix_i-z=0$, the inequality constraint involves a coupled and nonsmooth function $l(x_1,\cdots,x_n)$. There is no existing ADMM framework that addresses optimization problems with coupled inequality constraints like Problem 1. Moreover, the convergence properties of an ADMM designed to solve Problem 1 remain unknown. In order to address these challenges simultaneously, we propose a novel multi-convex inequality-constrained Alternating Direction Method of Multipliers (miADMM) to solve Problem 1. Our proposed new method, miADMM, splits the complex Problem 1 into multiple smaller subproblems, each of which is projected onto a convex set and thus can be solved exactly. These solvable subproblems support the convergence guarantee of the miADMM. Furthermore, we propose novel mild conditions that ensure the global convergence of miADMM, so that it always converges to a critical point for any initialization \cite{lanckriet2009convergence}. Our contributions in this paper include: \begin{itemize} \item We propose a novel generic framework for multi-convex inequality-constrained ADMM (miADMM) to solve Problem 1. The miADMM breaks the nonconvex Problem 1 into small local convex subproblems, which are then coordinated to find a solution to Problem 1. The standard ADMM is a special case of our miADMM. \item We investigate the convergence properties of the new miADMM. Specifically, we prove that the variables in Problem 1 and their gradients are bounded during the iterations, and that the objective value decreases monotonically. Moreover, miADMM is guaranteed to converge to a critical point. The convergence rate of miADMM is $o(1/k)$. \item We demonstrate several important and promising applications that are special cases of our proposed miADMM framework, and benefit from its theoretical properties.
Specifically, we present five applications in the fields of machine learning and control, and give concrete algorithms to solve them using our miADMM framework. \item We conduct extensive experiments to validate our proposed miADMM. Experiments on a synthetic dataset and ten real-world datasets demonstrate its effectiveness, scalability, and convergence properties. \end{itemize} \indent The rest of this paper is organized as follows: Section \ref{sec:related_work} summarizes previous work related to this paper. Section \ref{sec:miADMM} introduces the new miADMM algorithm and its convergence properties. In Section \ref{sec:application}, the miADMM algorithm is applied to several important applications. The extensive experiments that have been conducted are described in Section \ref{sec:experiment}. The paper concludes with a summary of the work in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} \textbf{Multi-convex optimization problem:} Several works have studied multi-convex problems. The earliest work required that the objective function be continuously differentiable and strictly convex \cite{warga1963minimizing}. Various conditions on the separability and regularity of the objective functions have been discussed in \cite{tseng1993dual,tseng2001convergence}. In the most recent work, Xu and Yin presented three types of multi-convex algorithms and analyzed convergence under either a Lipschitz differentiability or a strong convexity assumption \cite{xu2013block}. For a comprehensive survey, see \cite{shen2017disciplined}. However, to the best of our knowledge, few of these works allow the objective function to be nonsmooth and coupled at the same time.\\ \textbf{Nonconvex ADMM}: Despite the outstanding empirical performance of nonconvex ADMM, theoretical research on it remains limited, owing to the complexity of both multiple coupled variables and various (inequality and equality) constraints. Specifically, Hong et al.
\cite{hong2014block} and Cui et al. \cite{cui2015convergence} proposed a majorized ADMM and gave a convergence guarantee when the step length was either small or large. Gao and Zhang discussed the convergence properties when the coupled objective function was jointly convex \cite{gao2017first}. Wang et al. presented their convergence conditions when the coupled objective function was nonconvex and nonsmooth \cite{wang2015global}. Chen et al. discussed quadratic coupled terms \cite{chen2015extended}. \section{Multi-convex Inequality-constrained ADMM (miADMM)} \label{sec:miADMM} \indent In this section, we present the framework of the new miADMM. Section \ref{sec:algorithm} shows the formulation of miADMM, and in Section \ref{sec:convergence} we prove the theoretical convergence of the miADMM based on several mild assumptions. \vspace{-0.3cm} \subsection{The miADMM algorithm} \vspace{-0.3cm} \label{sec:algorithm} \indent In Problem 1, the variables in the inequality constraint $l(x_1,\cdots,x_n)\le 0$ are coupled and difficult to handle. To overcome this challenge, we include $l(x_1,\cdots,x_n)$ in an indicator function, and thus the augmented Lagrangian function can be reformulated as follows: \begin{align} \nonumber &L_\rho(x_1,\!\cdots,\!x_n,\!z,\!y)\\\nonumber &=f(x_1,\!\cdots,\!x_n)\!+\!\mathbb{I}(l(x_1\!,\!\cdots\!,\!x_n))\!+\!\sum\nolimits_{i\!=\!1}^n g_i(x_i)\!+\!h(z)\\&\!+\!y^T(\sum\nolimits_{i\!=\!1}^n A_i x_i\!-\!z)\!+\!(\rho/2)\Vert \sum\nolimits_{i\!=\!1}^n A_i x_i\!-\!z \Vert^2_2 \label{eq:lagrangian function} \end{align} \normalsize where $\mathbb{I}(l(x_1,\cdots,x_n))$ is an indicator function which equals ``0'' if $l(x_1,\cdots,x_n)\leq 0$ and $+\infty$ otherwise, $y$ is a dual variable and $\rho>0$ is a penalty parameter. The miADMM optimizes the following $n+1$ subproblems alternately.
\begin{align} \nonumber x_i^{k+1} &\leftarrow\arg\min\nolimits_{x_i} L_\rho(\cdots,x^{k+1}_{i-1},x_i,x^k_{i+1},\cdots)(i\!=\!1,\!\cdots\!,\!n)\\\nonumber &= \arg\min\nolimits_{x_i} f(\cdots\!,\!x^{k\!+\!1}_{i\!-\!1}\!,\!x_i\!,\!x^{k}_{i\!+\!1},\cdots)\!+\!g_i(x_i)\!+\!(y^k)^T\!A_i\!x_i\\\nonumber&+\mathbb{I}(l(\cdots,x^{k\!+\!1}_{i\!-\!1},\!x_i\!,\! x^{k}_{i\!+\!1}\!,\!\cdots))\\\nonumber&\!+\!(\rho/2)\!\Vert\sum\nolimits_{j\!=\!1}^{i\!-\!1}\! A_j x^{k\!+\!1}_j\!+\!A_ix_i\!+\!\sum\nolimits_{j=i\!+\!1}^{n}\! A_jx^k_j\!-\!z^k\Vert^2_2\\ z^{k+1} &\leftarrow \arg\min\nolimits_{z} L_\rho(\cdots,x_n^{k+1},z)\label{eq:update z}\\\nonumber&\!=\!\arg\min\nolimits_z\! h(z)\!-\!(y^T)^k\!z\!+\!(\rho/2)\!\Vert \sum\nolimits_{i\!=\!1}^n A_ix^{k\!+\!1}_i\!-\!z\Vert^2_2. \end{align} The first $n$ subproblems can be written equivalently in the following form for $i=1,\cdots,n$ \begin{align} \nonumber x_i^{k+1} &\leftarrow \arg\min\nolimits_{x_i} f(\cdots\!,\!x^{k\!+\!1}_{i\!-\!1}\!,\!x_i\!,\!x^{k}_{i\!+\!1}\!,\!\cdots)\!+\!g_i(x_i)\!+\!(y^k)^T\!A_i\!x_i\\\nonumber&\!+\!(\rho/2)\Vert\sum\nolimits_{j\!=\!1}^{i\!-\!1} A_j x^{k+1}_j\!+\!A_ix_i\!+\!\sum\nolimits_{j=i\!+\!1}^{n} A_jx^k_j\!-\!z^k\Vert^2_2 \\& s.t. \ l(\cdots,x^{k+1}_{i-1},x_i, x^{k}_{i+1},\cdots)\leq 0 \label{eq:update x} \end{align} Algorithm \ref{algo:miADMM} summarizes the miADMM for Problem 1. Concretely, Lines 3--5 and Line 6 update $x^{k+1}_i\,(i=1,\cdots,n)$ and $z^{k+1}$, respectively. Line 7 updates the dual variable $y^{k+1}$, which follows the routine of the standard ADMM. Each subproblem is convex and solvable. \begin{algorithm} \scriptsize \caption{ miADMM Algorithm to Solve Problem 1} \begin{algorithmic}[1] \REQUIRE $A_i(i=1,\cdots,n)$. \ENSURE $x_i(i=1,\cdots,n),z$. \STATE Initialize $\rho$, $k=0$. \REPEAT \FOR{i=1 to n} \STATE Update $x^{k+1}_i$ in Equation \eqref{eq:update x}. \ENDFOR \STATE Update $z^{k+1}$ in Equation \eqref{eq:update z}.
\STATE $y^{k+1}\leftarrow y^k+\rho(\sum\nolimits_{i=1}^n A_ix_i^{k+1}-z^{k+1})$.\\ \STATE $k\leftarrow k+1$. \UNTIL convergence. \STATE Output $x_i(i=1,\cdots,n),z$. \end{algorithmic} \label{algo:miADMM} \end{algorithm} \subsection{Convergence Analysis} \label{sec:convergence} \indent In this section, we analyze the conditions and properties required for the global convergence of miADMM. We first present the necessary definitions and assumptions, then prove several key properties that lead to the global convergence. \subsubsection{Definitions and assumptions} \indent First, recall the definition of Lipschitz differentiability \cite{cavalletti2016tangent}: \begin{definition}[Lipschitz differentiability] Any arbitrary differentiable function $f: \mathbb{R}^m\rightarrow \mathbb{R}$ is Lipschitz differentiable if for any $x^{'}, x^{''}\in \mathbb{R}^m$, \begin{align*} \Vert \nabla f(x^{'})-\nabla f(x^{''})\Vert \leq D \Vert x^{'}-x^{''}\Vert \end{align*} where $D\geq 0$ is a constant and $\nabla f(x)$ denotes the gradient of $f(x)$. \end{definition} \indent This can be generalized to a new definition of Lipschitz subdifferentiability as follows: \begin{definition}[Lipschitz Subdifferentiability] Any arbitrary function $f: \mathbb{R}^m \rightarrow \mathbb{R}$ is Lipschitz subdifferentiable if for any $x^{'}$ and $x^{''}$, there exist two subgradients $d_1\in \partial f(x^{'})$ and $d_2\in \partial f(x^{''})$ such that \begin{align*} \Vert d_1-d_2\Vert \leq M \Vert x^{'}-x^{''}\Vert \end{align*} where $M\geq 0$ is a constant and $\partial f(x)$ denotes the subgradient of $f(x)$. \label{def: subdifferentiable} \end{definition} \indent It is easy to see that Lipschitz subdifferentiability is a generalization of Lipschitz differentiability \cite{beck2009fast}, as all Lipschitz differentiable functions are also Lipschitz subdifferentiable.
Moreover, the indicator function $\mathbb{I}(\bullet)$ is not Lipschitz differentiable, but it is Lipschitz subdifferentiable with $M=0$. This property is crucial in proving Property \ref{pro:property 3}, as discussed later. Next, several mild assumptions are imposed to ensure the global convergence of the new method: \begin{assumption}[Coercivity] $f(x_1,\cdots,x_n)+\sum\nolimits_{i=1}^n g_i(x_i)+h(z)$ is coercive over the nonempty set $F=\{(x_1,\cdots,x_n,z): l(x_1,\cdots,x_n)\leq 0, \sum\nolimits_{i=1}^n A_ix_i-z=0\}$. In other words, $f(x_1,\cdots,x_n)+\sum\nolimits_{i=1}^n g_i(x_i)+h(z) \rightarrow \infty$ if $(x_1,\cdots,x_n,z)\in F$ and $\Vert(x_1,\cdots,x_n,z)\Vert \rightarrow \infty$. \label{ass:assumption 1} \end{assumption} \indent Coercivity is a weak condition on the objective function, and many applications satisfy this assumption. For example, most common loss functions, including the log loss, hinge loss and square loss, do so. \begin{assumption}[Lipschitz Differentiability and Subdifferentiability] $h(z)$ is Lipschitz differentiable with constant $H\geq 0$, and $f(x_1,\cdots,x_n)$ is Lipschitz subdifferentiable with constant $M\geq 0$. \label{ass:assumption 2} \end{assumption} \indent Many problems can be reformulated into an equivalent miADMM formulation by introducing $z$ and setting $h(z)=0$, as discussed below. Since $h(z)=0$ is Lipschitz differentiable with constant $H=0$, this assumption is satisfied. Based on Definition \ref{def: subdifferentiable}, $f(x_1,\cdots,x_n)+\mathbb{I}(l(x_1,\cdots,x_n))$ is also Lipschitz subdifferentiable with $M\geq 0$. \subsubsection{Key Properties} \indent This section focuses on the global convergence of the miADMM algorithm. Specifically, if Assumptions 1-2 are satisfied, then Properties 1-3 also hold, as shown below. They are the key properties that ensure the convergence of the miADMM: as long as they hold, the miADMM is guaranteed to converge to a critical point globally.
\begin{property}[Boundedness] If $\rho> 2H$, then starting from any $(x_1^0,\cdots,x_n^0,z^0,y^0)$ such that $l(x_1^0,\cdots,x_n^0)\leq 0$, $\{x_1^k,\cdots,x_n^k,z^k,y^k\}$ is bounded, and $L_\rho(x_1^k,\cdots,x_n^k,z^k,y^k)($defined in Equation \eqref{eq:lagrangian function}$)$ is lower bounded.\label{pro:property 1} \end{property} \indent Property \ref{pro:property 1} confirms that all variables are bounded and that the value of $L_\rho$ has a lower bound. It is proven under Assumptions \ref{ass:assumption 1} and \ref{ass:assumption 2}, and its proof can be found in Theorem \ref{thero: property 1} in the supplementary materials. \begin{property}[Sufficient Descent] \label{pro:property 2} If $\rho>2H$ so that $C_1=\rho/2-H/2-H^2/\rho>0$, then there exists $C_2=\min(\rho/2,C_1)$ such that\small \begin{align} \nonumber &L_\rho(x_1^k,\cdots,x_n^k,z^k,y^k)-L_\rho(x_1^{k+1},\cdots,x_n^{k+1},z^{k+1},y^{k+1})\\&\geq C_2(\Vert z^{k+1}-z^k\Vert^2_2+\sum\nolimits_{i=1}^n\Vert A_i(x_i^{k+1}-x_i^{k})\Vert^2_2)\label{eq: property2} \end{align}\normalsize \end{property} \indent By Property \ref{pro:property 2}, the value of $L_\rho$ is guaranteed to decrease monotonically provided $\rho$ is sufficiently large. Property \ref{pro:property 2} holds under Assumptions \ref{ass:assumption 1} and \ref{ass:assumption 2}, and its proof can be found in Theorem \ref{thero: property 2} in the supplementary materials. \begin{property}[Subgradient Bound] \label{pro:property 3} There exist $C_3>0$ and $d^{k+1}\in \partial L_\rho(x_1^{k+1},\cdots,x_n^{k+1},z^{k+1},y^{k+1})$ such that \begin{align} \Vert d^{k+1}\Vert \leq C_3(\Vert z^{k+1}-z^k\Vert+\sum\nolimits_{i=1}^n\Vert x_i^{k+1}-x_i^k\Vert)\label{eq: property3} \end{align} \end{property} \indent Property \ref{pro:property 3} states that the subgradient of $L_\rho$ has an upper bound, which requires Assumption \ref{ass:assumption 2}. Its proof can be found in Theorem \ref{thero: property 3} in the supplementary materials.
The following three theorems summarize the convergence of the miADMM. The first theorem confirms that the three properties are satisfied by miADMM. \begin{theorem}[Convergence Properties] If $\rho>2H$ and Assumptions \ref{ass:assumption 1} and \ref{ass:assumption 2} hold, then miADMM satisfies Properties \ref{pro:property 1}, \ref{pro:property 2} and \ref{pro:property 3}. \label{thero: theorem 1} \end{theorem} \begin{proof} This follows from Theorems \ref{thero: property 1}, \ref{thero: property 2} and \ref{thero: property 3} in the supplementary materials. \end{proof} \indent The second theorem ensures that the miADMM converges to a critical point for any initial point. \begin{theorem}[Global Convergence] For the variables $(x_1,\cdots,x_n,z,y)$ in Problem 1, starting from any $(x_1^0,\cdots,x_n^0,z^0,y^0)$ such that $l(x_1^0,\cdots,x_n^0)\leq 0$, the sequence generated by miADMM has at least one limit point $(x_1^*,\cdots,x_n^*,z^*,y^*)$, and any limit point $(x_1^*,\cdots,x_n^*,z^*,y^*)$ is a critical point. That is, $0\in \partial L_\rho(x_1^*,\cdots,x_n^*,z^*,y^*)$. \label{thero: theorem 2} \end{theorem} \begin{proof} Since $(x_1^k,\cdots,x_n^k,z^k,y^k)$ is bounded, there exists a subsequence $(x_1^s,\cdots,x_n^s,z^s,y^s)$ such that $(x_1^s,\cdots,x_n^s,z^s,y^s)\rightarrow (x_1^*,\cdots,x_n^*,z^*,y^*)$, where $(x_1^*,\cdots,x_n^*,z^*,y^*)$ is a limit point. By Properties \ref{pro:property 1} and \ref{pro:property 2}, $L_\rho(x_1^k,\cdots,x_n^k,z^k,y^k)$ is non-increasing and lower bounded; hence $\Vert A_i(x_i^{k+1}-x_i^{k})\Vert\rightarrow 0\,(i=1,\cdots,n)$ and $\Vert z^{k+1}-z^{k}\Vert\rightarrow 0$ as $k\rightarrow \infty$. By Property \ref{pro:property 3}, we infer that there exists $d^k\in \partial L_\rho(x_1^{k},\cdots,x_n^{k},z^k,y^k)$ such that $\Vert d^k\Vert \rightarrow 0$ as $k\rightarrow \infty$. In particular, $\Vert d^s\Vert \rightarrow 0$ as $s\rightarrow \infty$.
According to the definition of the general subgradient (Definition 8.3 in \cite{rockafellar2009variational}), we have $0\in \partial L_\rho(x_1^*,\cdots,x_n^*,z^*,y^*)$. \end{proof} \indent The third theorem proves that our proposed miADMM achieves a convergence rate of $o(1/k)$, despite the nonconvex and complex nature of Problem 1. This rate matches the state of the art, even compared with methods for simpler convex problems. \begin{theorem}[Convergence Rate] For a sequence $(x^k_1,\cdots,x^k_n,z^k,y^k)$, define $u_k=\min\nolimits_{0\leq l\leq k}(\Vert z^{l+1}-z^l\Vert^2_2+\sum\nolimits_{i=1}^n\Vert A_i(x_i^{l+1}-x_i^{l})\Vert^2_2)$; then the convergence rate of $u_k$ is $o(1/k)$. \label{thero: theorem 3} \end{theorem} \indent The proof of this theorem is given in Appendix \ref{sec:convergence rate proof} in the supplementary materials. The $o(1/k)$ convergence rate of miADMM is consistent with much of the existing work analyzing the convex ADMM, including \cite{he20121,lin2015sublinear,deng2017parallel}. Our contribution in terms of the convergence rate is that we extend the $o(1/k)$ guarantee to multi-convex problems (Problem 1). \section{Applications} \label{sec:application} \indent In this section, we apply our proposed miADMM to several real-world applications, all of which conform to Problem 1 and benefit from the convergence properties of the miADMM. \indent The formulation of Problem 1 arises widely in applications, including nonnegative matrix factorization, nonnegative tensor completion and dictionary learning \cite{shen2017disciplined,xu2013block}. In the following sections, five novel applications are introduced in turn: weakly constrained multi-task learning, learning with sign-network constraints, the bilinear matrix inequality problem, sparse dictionary learning, and nonnegative matrix factorization.
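Before turning to the individual applications, the control flow of Algorithm \ref{algo:miADMM} that each of them instantiates can be sketched generically. In the sketch below, `argmin_xi` and `argmin_z` are placeholders (assumptions supplied by the user) standing in for exact minimizers of the convex subproblems \eqref{eq:update x} and \eqref{eq:update z}; the scalar toy instance at the end is purely illustrative and not taken from the paper.

```python
# Schematic sketch of the miADMM loop (Algorithm 1), scalar case for brevity.
# argmin_xi(i, xs, z, y) and argmin_z(xs, y) are user-supplied exact solvers
# of the convex subproblems; they are placeholder assumptions.

def miadmm(xs0, z0, y0, A, argmin_xi, argmin_z, rho, max_iter=200, tol=1e-8):
    xs, z, y = list(xs0), z0, y0
    for _ in range(max_iter):
        for i in range(len(xs)):          # Gauss-Seidel sweep over the n blocks
            xs[i] = argmin_xi(i, xs, z, y)
        z_new = argmin_z(xs, y)
        r = sum(Ai * xi for Ai, xi in zip(A, xs)) - z_new  # primal residual
        y = y + rho * r                   # dual ascent step (Line 7)
        if abs(r) < tol and abs(z_new - z) < tol:
            z = z_new
            break
        z = z_new
    return xs, z, y

# Toy instance (illustrative, n = 1): min 0.5*(x-1)^2 + 0.5*z^2  s.t.  x = z,
# whose solution is x = z = 0.5; both subproblems have closed-form minimizers.
rho = 1.0
xs, z, y = miadmm(
    [0.0], 0.0, 0.0, [1.0],
    argmin_xi=lambda i, xs, z, y: (1.0 + rho * z - y) / (1.0 + rho),
    argmin_z=lambda xs, y: (y + rho * xs[0]) / (1.0 + rho),
    rho=rho,
)
```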
\subsection{Weakly-constrained Multi-task Learning} \label{sec:weak_multitask} {\normalsize In multi-task learning problems, multiple tasks are learned jointly to achieve better performance than learning each task independently \cite{zhang2017survey}. Most work on multi-task learning has tended to enforce the assumption of similarity among the feature weight values across tasks \cite{argyriou2007multi,chen2011integrating,wang2017multi,zhang2017survey,zhou2011malsar} because this makes it possible to use convex regularization terms like $\ell_{2,1}$ norms \cite{wang2017multi} and Graph Laplacians \cite{zhou2011malsar}. However, this assumption is usually too strong and is seldom satisfied by real-world data. Instead of requiring feature weights to be similar in magnitude, a more conservative but arguably more reasonable assumption is that multiple tasks share similar polarities for the same feature: if a feature is positively relevant to the output of one task, then its weight will also be positive for other related tasks. This assumption is appropriate for many applications. For example, the feature `number of clinic visits' will be positively related to flu outbreaks, while the feature `popularity of vaccination' will be negatively related to them, even though their feature weights can vary dramatically across different countries (the tasks here). We enforce this by requiring every pair of tasks with neighboring indices to share the same weight signs.
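To make the constraint concrete, the following minimal sketch (the function name and the task-by-feature matrix layout are ours, not the paper's) checks whether a weight matrix satisfies the neighboring-sign requirement $w_{i,j}w_{i+1,j}\geq 0$:

```python
import numpy as np

def weak_sign_feasible(W):
    """Check the weak multi-task constraint w_{i,j} * w_{i+1,j} >= 0
    for a task-by-feature weight matrix W (n tasks x m features):
    neighboring tasks must agree in sign for every feature, while
    their magnitudes may differ freely."""
    W = np.asarray(W, dtype=float)
    return bool(np.all(W[:-1, :] * W[1:, :] >= 0))
```

A zero weight is compatible with either sign, which is why the product constraint is non-strict.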
This optimization objective is shown as follows:} \small \begin{align} &\min\nolimits_{w_1,\cdots,w_n}\sum\nolimits_{i=1}^n Loss_i(w_i)+\Omega_i(w_i) \label{eq: weakly-constrained multi-task learning}\\ \nonumber &s.t., \ w_{i,j}w_{i+1,j}\geq 0 \ (i=1,2,\cdots,n-1, j=1,2,\cdots,m) \end{align} \normalsize where $n$ and $m$ denote the number of tasks and features, respectively, $w_{i,j}$ is the weight of the $j$-th feature in the $i$-th task, $w_i$ is the weight vector of the $i$-th task, and $Loss_i(w_i)$ and $\Omega_i(w_i)$ are the loss function and the regularization term of the $i$-th task, respectively. The inequality constraint implies that the $i$-th task and the $(i+1)$-th task share the same signs for their weights.\\ \indent However, Equation \eqref{eq: weakly-constrained multi-task learning} is nonconvex and thus difficult for existing frameworks to optimize. Fortunately, our miADMM can address this issue by rewriting Equation \eqref{eq: weakly-constrained multi-task learning} in the following form:\small \begin{gather} \min\nolimits_{w_1,\cdots,w_n,z} \sum\nolimits_{i=1}^n Loss_i(w_i)+\Omega_i(w_i) \label{prob: multi-task learning}\\ \nonumber s.t. \ w_{i,j}w_{i+1,j}\geq 0 \ (i=1,2,\cdots,n-1,\ j=1,\cdots,m), \ z=[w_1;\cdots;w_n] \end{gather}\normalsize where $z$ is an auxiliary variable introduced to make this problem compatible with Problem 1. The miADMM algorithm for this case is shown in Appendix \ref{sec:multi-task learning} in the supplementary materials. \vspace{-0.4cm} \subsection{Learning with Signed-Network Constraints} \label{sec:network_constraints} \indent The application of network models to social network analysis has attracted the attention of many researchers \cite{carrington2005models}. For example, influential societal events often spread across many social networking sites and are expressed in different languages.
Such multi-lingual indicators usually transmit similar semantic information through networks, and have thus been utilized to facilitate social event forecasting \cite{zhao2018distant}. The problem with network constraints is formulated as follows: \begin{gather*} \min\nolimits_{\beta_1,\cdots,\beta_n} Loss(\beta_1,\cdots,\beta_n)+\sum\nolimits_{i=1}^n \omega_i(\beta_i)\\ s.t. \ \exists (\beta_i,\beta_j) \in E_s, \ \exists (\beta_k,\beta_l) \in E_d \ (1\leq i,j,k,l\leq n) \end{gather*} where $\beta_i$ is the weight of the $i$-th node, $Loss(\beta_1,\cdots,\beta_n)$ is a loss function and $\omega_i(\beta_i)$ is a regularization term for the $i$-th node. $E_s$ and $E_d$ are two edge sets that represent two opposite relationships: $(\beta_i,\beta_j) \in E_s$ means that there exist $\beta_{i,u}$ and $\beta_{j,v}$ such that $\beta_{i,u}\beta_{j,v}\geq 0$, while $(\beta_i,\beta_j) \in E_d$ means that there exist $\beta_{i,u}$ and $\beta_{j,v}$ such that $\beta_{i,u}\beta_{j,v}\leq 0$, where $\beta_{i,u}$ and $\beta_{j,v}$ denote the $u$-th and $v$-th elements of $\beta_i$ and $\beta_j$, respectively. This problem can be reformulated equivalently as follows: \vspace{-0.4cm} \begin{gather} \min\nolimits_{\beta_1,\cdots,\beta_n,z} Loss(\beta_1,\cdots,\beta_n)+\sum\nolimits_{i=1}^n \omega_i(\beta_i)\label{prob: muli-lingual}\\ \nonumber s.t. \ \exists (\beta_i,\beta_j) \in E_s, \ \exists (\beta_k,\beta_l) \in E_d \ (1\leq i,j,k,l\leq n), \ z=[\beta_1;\cdots;\beta_n] \end{gather} where $z$ is an auxiliary variable to fit this problem into Problem 1. The miADMM algorithm for this case is also shown in Appendix \ref{sec:muli-lingual} in the supplementary materials. \subsection{Bilinear Matrix Inequality Problem} \indent The Bilinear Matrix Inequality (BMI) problem has broad applications across many system and control designs \cite{vanantwerp2000tutorial, chiu2017method}.
Consider the following BMI formulation: \begin{gather*} \min\nolimits_{\alpha,\beta} u^T \alpha+v^T \beta\\ s.t. \ S+\sum\nolimits_{i=1}^{n_1}\alpha_i U_i+\sum\nolimits_{j=1}^{n_2}\beta_j V_j+\sum\nolimits_{i=1}^{n_1}\sum\nolimits_{j=1}^{n_2} \alpha_i\beta_j K_{ij} \preceq 0 \end{gather*} where $S$, $U_i\ (i=1,\cdots,n_1)$, $V_j\ (j=1,\cdots,n_2)$ and $K_{ij}\ (i=1,\cdots,n_1, j=1,\cdots,n_2)$ are symmetric matrices, $u$, $v$, $\alpha$ and $\beta$ are vectors, and $X \preceq 0$ denotes that $X$ is negative semi-definite. Minimizing over $\alpha$ and $\beta$ alternately is a popular method for dealing with the BMI problem because of its simplicity and effectiveness \cite{chiu2017method}, as each subproblem is then a linear matrix inequality problem and can thus be solved efficiently. However, this method does not necessarily converge. Instead, the application of our miADMM ensures global convergence, as the problem can be reformulated as follows: \begin{gather} \min\nolimits_{\alpha,\beta,z} u^T \alpha+v^T \beta \label{prob: BMI}\\ \nonumber s.t. \ S+\sum\nolimits_{i=1}^{n_1}\alpha_i U_i+\sum\nolimits_{j=1}^{n_2}\beta_j V_j+\sum\nolimits_{i=1}^{n_1}\sum\nolimits_{j=1}^{n_2} \alpha_i\beta_j K_{ij} \preceq 0\\\nonumber z=[\alpha;\beta] \end{gather} where $z$ is an auxiliary variable to fit this problem into Problem 1. The miADMM algorithm for this example is shown in Appendix \ref{sec:BMI} in the supplementary materials. \vspace{-0.4cm} \subsection{Sparse Dictionary Learning} \indent The sparse dictionary learning problem aims to decompose the data matrix $X$ into a product of a dictionary $D$ and a sparse matrix $Y$ \cite{shen2017disciplined}, and is formulated as follows: \begin{gather*} \min\nolimits_{D,Y} (1/2)\Vert DY-X\Vert^2_F+\gamma\Vert Y\Vert_1\ \ \ \ \ \ \ s.t., \ \Vert D\Vert_F \leqslant 1 \end{gather*} where $\gamma>0$ is a penalty parameter.
It can be reformulated as follows: \begin{gather} \min\nolimits_{D,Y,Z} (1/2)\Vert DY-X\Vert^2_F+\gamma\Vert Y\Vert_1 \label{prob: dictionary learning}\\ \nonumber s.t., \Vert D\Vert_F \leqslant 1, Z=[D;Y] \end{gather} where $Z$ is an auxiliary variable to fit this problem into Problem 1. The miADMM algorithm for this problem is shown in Appendix \ref{sec:SDL} in the supplementary materials. \vspace{-0.3cm} \subsection{Nonnegative Matrix Factorization} \indent Nonnegative matrix factorization is a classical problem that is broadly applicable to a number of different applications \cite{boyd2011distributed,lee2001algorithms}. The goal of the nonnegative matrix factorization problem is to decompose a matrix $U$ into a product of two nonnegative matrices $V$ and $W$. The problem is formulated as: \begin{gather*} \min\nolimits_{V,W} \Vert U-VW\Vert^2_F\ \ \ s.t.,\ V\geq 0, W\geq 0 \end{gather*} Unlike the solution suggested by \cite{boyd2011distributed}, our proposed miADMM, which includes a convergence guarantee, reformulates the problem as follows: \begin{gather} \min\nolimits_{V,W,Z} \Vert U-VW\Vert^2_F \label{prob: NMF}\\ \nonumber s.t. \ V\geq 0, \ W\geq 0, \ Z=[V;W] \end{gather} where $Z$ is an auxiliary variable that is incorporated to fit this problem into Problem 1. The miADMM algorithm for this factorization is shown in Appendix \ref{sec:NMF} in the supplementary materials. \section{Experiments} \label{sec:experiment} In this section, we validate miADMM on a synthetic dataset and ten real-world datasets across several applications, comparing its scalability, effectiveness, and convergence properties with several existing state-of-the-art methods. All the experiments were conducted on a 64-bit machine with an Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz and 16.0GB memory.
\subsection{Experiment I: Synthetic Dataset} \indent A straightforward numerical application of our miADMM framework is to solve the following regularized linear regression problem with biconvex constraints: \begin{align} \nonumber \label{eq:synthetic} \min\nolimits_{\alpha,\beta}&\sum\nolimits_{i=1}^N (y_i-\sum\nolimits_{j=1}^{M}\alpha_j x_{i,j}-\sum\nolimits_{j=1}^{M} \beta_j x_{i,j+M})^2\\&+\lambda_1(\sum\nolimits_{i=1}^M (\alpha^2_i+\beta^2_i))\\\nonumber &s.t. \ \alpha_i\beta_i\leq 0\ (i=1,\cdots,M) \end{align} where $y_i\ (i=1,\cdots,N)$ is the response of the $i$-th sample and $x_{i,j}\ (i=1,\cdots,N, j=1,\cdots,2M)$ denotes the $j$-th feature of the $i$-th sample. $\alpha_i\ (i=1,\cdots,M)$ and $\beta_i\ (i=1,\cdots,M)$ represent the coefficients of the first $M$ features and the second $M$ features, respectively. $\lambda_1>0$ is a penalty parameter. Hence, $N$ and $2M$ are the numbers of samples and features, respectively. \textbf{Data Generation and Parameter Settings.} \indent The true $\alpha$ and $\beta$ were generated from a uniform distribution between $-1$ and $1$. The $2M$ features were generated from two uniform distributions between $-1$ and $1$. $y$ was generated from the linear regression $y=x[\alpha;\beta]+\varepsilon$, where the error term $\varepsilon$ follows a Gaussian distribution. $N$ and $M$ were both set to $1,000$. $\lambda_1$ and $\rho$ were set to $1$ and $0.1$, respectively. \textbf{Baselines.} \indent In order to test the scalability of miADMM, two baselines were utilized for comparison: \emph{1) Block Coordinate Descent (BCD) \cite{xu2013block}.} BCD is an intuitive method for solving multi-convex problems, which optimizes each variable alternately. \emph{2) Interior Point Method (IPM) \cite{mehrotra1992implementation}.} IPM is a classic barrier method for solving nonlinear optimization problems.
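The data-generation procedure described above can be sketched as follows. This is our own minimal reproduction; the noise standard deviation is an assumption, since the paper does not specify the scale of the Gaussian error term:

```python
import numpy as np

def generate_synthetic(N=1000, M=1000, noise_std=0.1, seed=0):
    """Generate the synthetic regression data: true alpha and beta and
    the 2M features are drawn uniformly from [-1, 1], and responses are
    y = x @ [alpha; beta] + eps with Gaussian noise eps.
    noise_std is our assumption (not specified in the paper)."""
    rng = np.random.default_rng(seed)
    alpha = rng.uniform(-1, 1, M)
    beta = rng.uniform(-1, 1, M)
    x = rng.uniform(-1, 1, (N, 2 * M))
    y = x @ np.concatenate([alpha, beta]) + rng.normal(0.0, noise_std, N)
    return x, y, alpha, beta
```

With `noise_std=0` the responses are exactly linear in the features, which is convenient for sanity-checking a solver.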
\begin{figure*}[h] \centering \includegraphics[width=\textwidth] {synthetic.pdf} \vspace{-0.5cm}\caption{Convergence and scalability on the synthetic dataset.\vspace{-0.5cm}} \label{fig:synthetic} \end{figure*} \textbf{Performance on Convergence and Scalability.} The problem in Equation \eqref{eq:synthetic} satisfies the convergence conditions and is thus guaranteed to converge by our miADMM. This is demonstrated by Figure \ref{fig:synthetic}(a), which illustrates the change of the residual along the iteration steps and shows its convergence. Additionally, Figure \ref{fig:synthetic}(b) shows that the objective value also converges. Moreover, Figures \ref{fig:synthetic}(c) and (d) show the scalability of our miADMM and the comparison methods in $N$ (i.e., the number of samples) and $M$ (i.e., half the number of features). The results show that the time cost increases linearly in both $N$ and $M$, and that miADMM generally costs the least time of all the methods, especially compared to IPM. This is because our miADMM splits the biconvex constraints into two subproblems that are much easier to solve. \subsection{Experiment II: Weakly-constrained Multi-task Learning} To evaluate the effectiveness of our method on the application of weakly-constrained multi-task learning described in Equation \eqref{prob: multi-task learning}, we use a real-world school dataset. It consists of three years of examination scores for 15,362 students from 139 secondary schools, which are treated as tasks for examination score prediction based on 27 input features such as the year of the examination, school-specific features, and student-specific features. The dataset is publicly available and a detailed description can be found in the original paper \cite{li2015multi}. $\rho$ was set to $1000$ for miADMM. \textbf{Metrics.} \indent In this experiment, five metrics were utilized to evaluate model performance.
Mean Squared Error (MSE) measures the average of the squared differences between observation and estimation. Unlike MSE, Mean Squared Logarithmic Error (MSLE) measures the average squared difference between the logarithms of observation and estimation, and thus penalizes their ratio rather than their absolute difference. Mean Absolute Error (MAE) measures the average absolute difference between observation and estimation. Lower values of these three metrics indicate a better regression model. Explained Variance (EV) measures the proportion of the variance in the observations that is captured by the model. The coefficient of determination, or R2 score, is the proportion of the variance in the dependent variable that is predictable from the independent variables. Higher EV and R2 scores indicate a better regression model. \textbf{Baselines.} \indent In order to validate the effectiveness of miADMM, five benchmark multi-task learning models serve as comparison methods. Loss functions were set to least-squares errors. All parameters were set based on 5-fold cross-validation on the training set.\\ \indent 1. Multi-task learning with Joint Feature Selection (JFS) \cite{argyriou2007multi,zhou2011malsar}. JFS is one of the most commonly used strategies in multi-task learning. It captures the relatedness of multiple tasks by constraining the weight matrix so that tasks share a common set of features.\\ \indent 2. Clustered Multi-Task Learning (CMTL) \cite{zhou2011clustered,zhou2011malsar}. CMTL assumes that multiple tasks are clustered into several groups, and tasks in the same group are similar to each other.\\ \indent 3. Multi-task Lasso (mtLasso) \cite{zhou2011malsar}. mtLasso extends the classic Lasso model to the multi-task learning setting.\\ \indent 4. A convex relaxation of Alternating Structure Optimization (cASO) \cite{zhou2011malsar,ando2005framework}. cASO decomposes each task into two components: a task-specific feature mapping and a task-shared feature mapping.\\ \indent 5. Robust Multi-Task Learning (RMTL) \cite{chen2011integrating,zhou2011malsar}.
RMTL aims to detect irrelevant tasks (outliers) among multiple tasks. One way to achieve this is to decompose the model into two parts: a low-rank structure to capture task relatedness and a group-sparse structure to detect outliers. \textbf{Performance.} As discussed in Section \ref{sec:weak_multitask}, the convergence of our miADMM is guaranteed by our theoretical framework. To verify this, Figures \ref{fig:convergences}(a) and \ref{fig:convergences}(b) illustrate the dual residuals and objective values across iterations, which clearly demonstrate the convergence of miADMM on this nonconvex problem. The performance of examination score prediction on this dataset is illustrated in Table \ref{tab:multi-task performance}. It shows that the weakly-constrained multi-task learning model optimized by miADMM achieves the best performance on all the metrics, compared to the other five methods. This is because our method only enforces that the signs of the feature weights agree across tasks, while the comparison methods typically impose overly aggressive assumptions on the similarity among tasks. For example, CMTL enforces that correlated tasks have similar feature weights using a squared regularization on the difference between feature weights. JFS, mtLasso, and RMTL also tend to enforce similar feature weights across tasks via the $\ell_{2,1}$ norm; because their enforcement is weaker than CMTL's, they obtain better performance. Finally, cASO achieves relatively weak performance because it optimizes an approximation of a nonconvex problem, so its solutions may be far from the optima of the original problem.\\ \textbf{Scalability.} To investigate the scalability of miADMM compared with all baselines in Experiment II, we measured their training times on the school dataset as the number of features varied.
The training time was averaged over 20 runs.\\ \indent Figure \ref{fig:scalability} shows the training time of all methods when the number of features ranges from 10 to 28. The training time of all methods increased linearly with the number of features. cASO was the most efficient of all methods, while miADMM ranked second. mtLasso, JFS, and RMTL also trained a model within 5 seconds on average, while CMTL was the most time-consuming, requiring more than 10 seconds. \begin{figure*}[h] \centering \includegraphics[width=\textwidth] {convergences.pdf} \vspace{-0.8cm}\caption{Convergence curves on Experiments II and III.\vspace{-0.3cm}} \label{fig:convergences} \end{figure*} \begin{table} \small \centering \caption{Performance in Experiment II: miADMM outperformed the other methods in all the metrics.} \begin{tabular}{c|c|c|c|c|c} \hline\hline Method & MSE & MSLE&MAE &EV & R2 \\ \hline JFS&114.1583&0.4457&8.4560&0.2945&0.2945\\ \hline CMTL&115.5530&0.4517&8.5067&0.2859&0.2859\\ \hline mtLasso&115.2800&0.4522&8.4874&0.2876&0.2876 \\\hline cASO&157.9920&0.5235&9.4062&0.1472&0.1472\\\hline RMTL&114.1846&0.4478&8.4513&0.2944&0.2943 \\\hline miADMM&\textbf{113.6600}&\textbf{0.4457}&\textbf{8.4168}&\textbf{0.2976}&\textbf{0.2976}\\ \hline\hline \end{tabular} \label{tab:multi-task performance} \end{table} \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{multitaskScalability.png} \end{center} \vspace{-0.5cm}\caption{The training time of all methods in Experiment II: the training time of all methods increased linearly with the number of features.\vspace{-0.5cm}} \label{fig:scalability} \end{figure} \begin{table*} \centering\small \caption{Event forecasting performance in AUC in each of the 9 datasets\vspace{-0.3cm}}\label{tab:multi-lingual performance} \begin{tabular}{l|l|l|l|l|l|l|l|l|l}\hline\hline & BR & CL & CO & EC & EL & MX & PY & UY & VE \\\hline\hline LogReg & 0.686 & 0.677 & 0.644 & 0.599 & 0.618 & 0.661 & 0.616 &
0.628 & 0.667 \\ LASSO & 0.685 & 0.677 & 0.648 & 0.603 & 0.636 & 0.665 & 0.615 & 0.666 & 0.669 \\ MTL & 0.722 & 0.669 & 0.810 & 0.617 & 0.772 & \textbf{0.795} & 0.600 & 0.811 & 0.771 \\ MREF & 0.714 & 0.563 & 0.515 & 0.784 & 0.612 & 0.693 & 0.658 & 0.681 & 0.588 \\ DHML & \textbf{0.845} & \textbf{0.683} & \textbf{0.846} & \textbf{\underline{0.839} } & \textbf{\underline{0.780} }& 0.793 & \textbf{\underline{0.737} } & \textbf{0.835} & \textbf{0.835}\\ miADMM &\textbf{\underline{0.847}} &\textbf{\underline{0.691}} &\textbf{\underline{0.851} } & \textbf{0.838 } & \textbf{0.774} &\textbf{\underline{0.800} }& \textbf{0.736} &\textbf{\underline{0.836}} &\textbf{\underline{0.859}}\\\hline\hline \end{tabular} \end{table*} \begin{table}[] \centering \scriptsize \begin{tabular}{p{1cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}|p{0.6cm}}\hline & BR & CL & CO & EC & EL & MX & PY & UY & VE \\\hline LogReg&30,193&2,981&8,060&312&551&17,712&7,297&748&5,563 \\ LASSO&1,535&242&780&295&261&2,043&527&336&1,008\\ MTL&233&35&108&17&17&853&40&20&49 \\ MREF&25,889&6,521&14,714&4,332&4,669&31,349&9,495&5,305&5,769 \\ DHML&332&852&87&46&33&175&242&82&179 \\ miADMM &\textbf{20}&\textbf{12}&\textbf{17}&\textbf{7}&\textbf{3}&\textbf{30}&\textbf{6}&\textbf{4}&\textbf{22}\\\hline \end{tabular} \centering \captionof{table}{Comparison of running time (in seconds) on 9 datasets in Experiment III: the miADMM was the most efficient method.} \label{tab:runtime}\vspace{-0.3cm} \end{table}\vspace{-0.3cm} \vspace{-0.3cm}\subsection{Experiment III: Event Forecasting with Multi-lingual Indicators}\vspace{-0.3cm} \textbf{Datasets.} To evaluate the performance of our miADMM on the application in Section \ref{sec:network_constraints}, extensive experiments on nine real-world datasets have been performed. The dataset is obtained by randomly sampling 10\% (by volume) of the Twitter data from Jan 2013 to Dec 2014. 
The data from the first and second years were used as the training and test sets, respectively. For the topic of interest (i.e., social unrest), we used the 1,806 keywords in the three major languages of Latin America, namely English, Spanish, and Portuguese, provided by \cite{zhao2018distant}. Their translation relationships have also been labeled as semantic links among them, such as ``protest'' in English, ``protesta'' in Spanish, and ``protesto'' in Portuguese. The event forecasting results were validated against a labeled event set, known as the gold standard report (GSR), which is publicly available \cite{EN8FUW_2017}. \textbf{Metric and Baselines.} The metric used to evaluate the performance is the Area Under the Receiver operating characteristic Curve (AUC). Five comparison methods were used: the state-of-the-art Multi-task Learning (MTL), Multi-resolution Event Forecasting (MREF), and Distant-supervision of Heterogeneous Multitask Learning (DHML), as well as the classic logistic regression (LogReg) and Lasso. $\rho$ was set to 1 for miADMM. All the hyper-parameters were tuned by 5-fold cross-validation.\\ \textbf{Performance.} As shown in Table \ref{tab:multi-lingual performance}, miADMM generally performs the best among all the methods, with DHML the second-best performer. Both of them typically outperform the others by at least 5\%-10\%. This is because both leverage the multilingual correlations among features to boost the generalizability of the model. Thanks to the multi-task learning framework, MTL and MREF obtained competitive performance with AUC typically over 0.7, outperforming simple methods like LogReg and LASSO by 5\% on average.\\ \textbf{Efficiency.} In Experiment III, we also compared the training time of miADMM with all baselines on the 9 datasets. The training time was averaged over 5 runs.\\ \indent The training times are shown in Table \ref{tab:runtime}.
Overall, miADMM was the most efficient of all the methods on every dataset, consuming no more than 30 seconds on any of them. MTL ranked second, but it spent hundreds of seconds on some datasets, such as BR and MX. The most time-consuming baselines, LogReg and MREF, required thousands of seconds or more to train a model. \section{Conclusions} \label{sec:conclusion} We propose a novel generic framework for multi-convex inequality-constrained optimization with multiple coupled variables, a new variant of ADMM named miADMM. miADMM not only inherits the merits of general ADMMs but also provides strong convergence guarantees under mild conditions. In addition, several machine learning applications of recent interest are presented as special cases of our proposed miADMM. Extensive experiments have been conducted on a synthetic dataset and ten real-world datasets, and demonstrate the effectiveness, scalability, and convergence properties of our proposed miADMM. In the future, we may explore milder conditions than Lipschitz subdifferentiability, because some nonsmooth functions such as $\max(\bullet)$ and $\min(\bullet)$ do not satisfy it. \bibliographystyle{plain}
\section{Introduction} \begin{table}[ht!] \footnotesize \centering \begin{tabular}{ p{0.9\linewidth} } \thickhline \textbf{Context} \\ \hline although united methodist practices and interpretation of beliefs have evolved over time , these practices and beliefs can be traced to the writings of the church 's founders , especially \textbf{john wesley and charles wesley} ( anglicans ) , but also philip william otterbein and martin boehm ( united brethren ) , and jacob albright ( evangelical association ) . \\ \hhline{=} \textbf{Answer} \\ \hline john wesley and charles wesley \\ \hhline{=} \textbf{Ground Truth Question} \\ \hline who were two of the founders of the united methodist church ? \\ \hhline{=} \textbf{No fine tuning} \\ \hline which two methodist can be traced to the church 's founders ? \\ \hhline{=} \textbf{LM reward} \\ \hline according to the writings of the church 's founders , according to the writings of the church 's founders , [...] \\ \hhline{=} \textbf{QA reward} \\ \hline who in anglicans ? \\ \hhline{=} \textbf{LM and QA reward} \\ \hline who are the writings of the church 's founders ? \\ \hhline{=} \textbf{Discriminator reward} \\ \hline who founded the church 's founders ? \\ \hhline{=} \textbf{Discriminator reward, adversarial discriminator} \\ \hline who were two western methodist practices ? \\ \hhline{=} \textbf{LM, QA and discriminator reward, adversarial discriminator} \\ \hline who are the anglicans of the church ? \\ \thickhline \end{tabular} \caption{Example generated questions for various fine-tuning objectives. 
The model trained on a QA reward has learned to simply point at the answer and exploit the QA model, while the model trained on a language model objective has learned to repeat common phrase templates.} \label{tab:rlexamplecomparison-2} \end{table} Posing questions about a document in natural language is a crucial aspect of the effort to automatically process natural language data, enabling machines to ask clarification questions \cite{quarc:emnlp18}, become more robust to queries \cite{Yu2018}, and to act as automatic tutors \cite{Heilman2010}. Recent approaches to question generation have used Seq2Seq \cite{Sutskever2014} models with attention \cite{Bahdanau2014} and a form of copy mechanism \cite{Vinyals2015, Gulcehre2016}. Such models are trained to generate a plausible question, conditioned on an input document and answer span within that document \cite{Zhou2018,Du,Du2018,Maluuba}. There are currently no dedicated question generation datasets, and authors have used the context-question-answer triples available in SQuAD. Only a single question is available for each context-answer pair, and models are trained using teacher forcing \cite{TeacherForce}. This lack of diverse training data combined with the one-step-ahead training procedure exacerbates the problem of exposure bias \cite{Ranzato2015}. The model does not learn how to distribute probability mass over sequences that are valid but different to the ground truth; during inference, the model must predict the whole sequence, and may not be robust to mistakes during decoding. Recent work has investigated training the models directly on a performance based objective, either by optimising for BLEU score \cite{Kumar2018} or other quality metrics \cite{Maluuba}. By decoupling the training procedure from the ground truth data, the model is able to explore the space of possible questions and become more robust to mistakes during decoding. 
While the metrics used often seem to be intuitively good choices, there is an assumption that they are good proxies for question quality which has not yet been confirmed. Our contributions are as follows. We perform fine tuning using a range of rewards, including an adversarial objective. We show that although fine tuning leads to increases in reward scores, the resulting models perform worse when evaluated by human workers. We also demonstrate that the generated questions exploit weaknesses in the reward models. \section{Background} Many of the advances in natural language generation have been led by machine translation \cite{Sutskever2014,Bahdanau2014,Gulcehre2016}. Previous work on question generation has made extensive use of these techniques. \citet{Du} use a Seq2Seq based model to generate questions conditioned on context-answer pairs, and build on this work by preprocessing the context to resolve coreferences and adding a pointer network \cite{Du2018}. Similarly, \citet{Zhou2018} use a part-of-speech tagger to augment the embedding vectors. Both authors perform a human evaluation of their models, and show significant improvement over their baseline. \citet{Kumar2018} use a similar model, but apply it to the task of generating questions without conditioning on a specific answer span. \citet{Song} use a modified context encoder based on multi-perspective context matching \cite{Wang2016}. \citet{Kumar} propose a framework for fine tuning using policy gradients, using BLEU and other automatic metrics linked to the ground truth data as the rewards. \citet{Maluuba} describe a Seq2Seq model with attention and a pointer network, with an additional encoding layer for the answer. They also describe a method for further tuning their model on language model and question answering reward objectives using policy gradients. Unfortunately they do not perform any human evaluation to determine whether this tuning led to improved question quality. 
For the related task of summarisation, \citet{Paulus2017} propose a framework for fine tuning a summarisation model using reinforcement learning, with the ROUGE similarity metric used as the reward. \begin{table*}[ht!] \footnotesize \centering \begin{tabular}{ c c c c | c c c c c} \thickhline \multicolumn{4}{c|}{\textbf{Features}} & \multicolumn{5}{c}{\textbf{Metrics}} \\ \rot{QA reward} & \rot{LM reward} & \rot{Discriminator reward} & \rot{Adversarial discriminator} & \rot{NLL} & \rot{BLEU} & \rot{QA} & \rot{LM} & \rot{Discriminator} \\ \hhline{====|=====} - & \checkmark & - & - & -0.7 & -1.9 & -3.7 & -13.4 & +1.5 \\ \checkmark & - & - & - & +1.7 & -4.5 & \textbf{+3.9} & +226 & +5.4 \\ \checkmark & \checkmark & - & - & -0.5 & -2.6 & +2.0 & \textbf{-16.3} & +2.9 \\ - & - & \checkmark & - & \textbf{-0.8} & -1.8 & -2.1 & -9.4 & +2.5 \\ - & - & \checkmark & \checkmark & +6.4 & -2.7 & -2.5 & -1.0 & \textbf{+10.8} \\ \checkmark & \checkmark & \checkmark & \checkmark & +1.0 & -2.4 & +1.3 & -6.2 & +10.0 \\ \thickhline \end{tabular} \caption{Changes in automatic evaluation metrics after models were fine tuned on various objectives. QA refers to the F1 score obtained by a question answering system on the generated questions. LM refers to the perplexity of generated questions under a separate language model. The discriminator reward refers to the percentage of generated sequences that fooled the discriminator. Lower LM and NLL scores are better. BLEU scores decreased in all cases.} \label{tab:results-rl} \end{table*} \begin{table*}[ht!] 
\footnotesize \centering \begin{tabular}{ l | c c c} \thickhline Model & Fluency & Relevance \\ \hhline{=|===} No fine tuning & \textbf{3.34} & \textbf{3.12} \\ +QA, LM rewards & 3.05 & 2.75 \\ +QA, LM, discriminator rewards +Adversarial discriminator & 2.89 & 2.82 \\ \hline Ground Truth & 4.67 & 4.72 \\ \thickhline \end{tabular} \caption{Summary of human evaluation of selected models} \label{tab:results-human} \end{table*} \begin{figure*}[ht] \centering \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/human_qa_relevance.pdf} \caption{QA scores plotted against human relevance scores for all rated questions.} \end{subfigure}% ~ \begin{subfigure}[t]{0.49\textwidth} \centering \includegraphics[width=0.99\textwidth]{figures/human_lm_fluency.pdf} \caption{LM scores plotted against human fluency scores for all rated questions.} \end{subfigure} \caption{Comparison of human and automatic metrics.} \label{fig:human_qa_rel} \end{figure*} \section{Experimental setup} The task is to generate a natural language question, conditioned on a document and answer. For example, given the input document ``this paper investigates rewards for question generation" and answer ``question generation", the model should produce a question such as ``what is investigated in the paper?" \subsection{Model description} We use the model architecture described by \citet{Maluuba}. Briefly, this is a Seq2Seq model \cite{Sutskever2014} with attention \cite{Bahdanau2014} and copy mechanism \citep{Vinyals2015, Gulcehre2016}. \citet{Maluuba} also add an additional answer encoder layer, and initialise the decoder with a hidden state constructed from the final state of the encoder. Beam search \cite{BeamSearch} is used to sample from the model at inference time. The model was trained using maximum likelihood before fine tuning was applied. 
Our implementation achieves a competitive BLEU-4 score \cite{Papineni} of $13.5$ on the test set used by \citet{Du}, before fine tuning. \subsection{Fine tuning} Generated questions should be formed of language that is both \textit{fluent} and \textit{relevant} to the context and answer. We therefore performed fine tuning on a trained model, using rewards given either by the negative perplexity under an LSTM language model, or the F1 score attained by a question answering (QA) system, or a weighted combination of both. The language model is a standard recurrent neural network formed of a single LSTM layer. For the QA system, we use QANet \cite{Yu2018} as implemented by \citet{QANetGithub}. \subsection{Adversarial training} Additionally, we propose a novel approach by learning the reward directly from the training data, using a \textit{discriminator} detailed in Appendix~\ref{sec:discriminator}. We pre-trained the discriminator to predict whether an input question and associated context-answer pair were generated by our model, or originated from the training data. We then used as the reward the probability estimated by the discriminator that a generated question was in fact real. In other words, the generator was rewarded for successfully fooling the discriminator. We also experimented with interleaving updates to the discriminator within the fine tuning phase, allowing the discriminator to become adversarial and adapt alongside the generator. These rewards $R(\hat{Y})$ were used to update the model parameters via the REINFORCE policy gradient algorithm \cite{Williams1992}, according to $\nabla \mathcal{L} = \nabla \frac{1}{l} \sum \limits_t (\frac{R(\hat{Y})-\mu_R}{\sigma_R}) \log p(\hat{y}_t | \hat{y}_{< t}, \mathbf{D}, \mathbf{A})$. We teacher-forced the decoder with the generated sequence to reproduce the activations calculated during beam search, to enable backpropagation.
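The update rule above can be written as a surrogate loss whose gradient is the REINFORCE estimator. The sketch below is illustrative, not the authors' implementation: `token_logprobs` stands in for the per-token log-probabilities $\log p(\hat{y}_t \mid \hat{y}_{<t}, \mathbf{D}, \mathbf{A})$, and the sign convention assumes the loss is minimised.

```python
def reinforce_loss(token_logprobs, reward, mu_R, sigma_R):
    """Policy-gradient surrogate loss for one sampled sequence Y-hat.

    The per-token log-probabilities are scaled by the standardised
    reward (R - mu_R) / sigma_R and averaged over the sequence length l;
    differentiating this scalar w.r.t. the model parameters recovers the
    REINFORCE gradient. The negative sign makes maximising the scaled
    log-likelihood equivalent to minimising this loss.
    """
    advantage = (reward - mu_R) / sigma_R
    l = len(token_logprobs)
    return -advantage * sum(token_logprobs) / l
```

In a framework with automatic differentiation, the standardised reward would be treated as a constant (detached from the graph) so that gradients flow only through the log-probabilities.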
All rewards were normalised with a simple form of PopArt \cite{VanHasselt}, with the running mean $\mu_R$ and standard deviation $\sigma_R$ updated online during training. We continued to apply a maximum likelihood training objective during this fine tuning. \subsection{Evaluation} We report the negative log-likelihood (NLL) of the test set under the different models, as well as the corpus-level BLEU-4 score \cite{Papineni} of the generated questions compared to the ground truth. We also report the rewards achieved on the test set, as the QA, LM and discriminator scores. For the human evaluation, we follow the standard approach in evaluating machine translation systems \cite{Koehn2006}, as used for question generation by \citet{Du2018}. We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer. \section{Results} Table~\ref{tab:results-rl} shows the changes in automatic metrics for models fine tuned on various combinations of rewards, compared to the model without tuning. In all cases, the BLEU score decreased, as the training objective was no longer closely coupled to the training data. In general, models achieved better scores on the metrics on which they were fine tuned. Jointly training on a QA \textit{and} LM reward resulted in better LM scores than training on only an LM reward. We conclude that fine tuning using policy gradients can be used to attain higher rewards, as expected. Table~\ref{tab:results-human} shows the human evaluation scores for a subset of the fine tuned models. The model fine tuned on a QA and LM objective is rated as significantly worse by human annotators, despite achieving higher scores in the automatic metrics. In other words, the training objective given by these reward sources does not correspond to true question quality, despite them being intuitively good choices.
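The agreement between the automatic scores and the human ratings (Figure~\ref{fig:human_qa_rel}) is measured by the sample Pearson correlation coefficient, which can be computed with a minimal stdlib sketch; the rating lists used in practice would be the per-question scores, not the illustrative values below.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```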
The model fine tuned using an adversarial discriminator has also failed to achieve better human ratings, with the discriminator model unable to learn a useful reward source. Table~\ref{tab:rlexamplecomparison-2} shows an example where fine tuning has not only failed to improve the quality of generated questions, but has caused the model to exploit the reward source. The model fine tuned on an LM reward has degenerated into producing a loop of words that is evidently deemed probable, while the model trained on a QA reward has learned that it can simply point at the location of the answer. This observation is supported by the metrics; the model fine tuned on a QA reward has suffered a catastrophic worsening in LM score of +226. Figure~\ref{fig:human_qa_rel} shows the automatic scores against human ratings for all rated questions. The correlation coefficient between human relevance and automatic QA scores was 0.439, and between fluency and LM score was only 0.355. While the automatic scores are good indicators of whether a question will achieve the lowest human rating or not, they do not differentiate clearly between the higher ratings: a model trained on these objectives will not necessarily learn to generate better questions. A good question will likely attain a high QA and LM score, but the converse is not true; a sequence may exploit the weaknesses of the metrics and achieve a high score \textit{despite} being unintelligible to a human. We conclude that fine tuning a question generation model on these rewards does not lead to better quality questions. \section{Conclusion} In this paper, we investigated the use of external reward sources for fine tuning question generation models to counteract the lack of task-specific training data. We showed that although fine tuning can be used to attain higher rewards, this does not equate to better quality questions when rated by humans.
Using QA and LM rewards as a training objective causes the generator to expose the weaknesses in these models, which in turn suggests a possible use of this approach for generating adversarial training examples for QA models. The QA and LM scores are well correlated with human ratings at the lower end of the scale, suggesting they could be used as part of a reranking or filtering system.
\section{Introduction} All graphs considered in this paper are simple, undirected and finite. A graph $\Gamma$ is said to be \textit{$G$-oriented} (or \textit{$G$-half-arc-transitive}) with respect to some group $G \leq \Aut(\Gamma)$ if $G$ acts transitively on the vertices and edges of $\Gamma$, but $G$ is not transitive on the arcs. In this case, $G$ has exactly two orbits on the arcs, and for each arc-orbit $\Delta$ and each edge $\{\alpha, \beta\}$, $\Delta$ contains exactly one of the arcs $(\alpha, \beta)$ or $(\beta, \alpha)$. Thus $\Delta$ is a $G$-invariant orientation of the edge set of $\Gamma$. Every $G$-oriented graph necessarily has even valency and all connected components of a $G$-oriented graph are pairwise isomorphic $G$-oriented graphs. It is thus natural to restrict attention to $G$-oriented graphs which are connected. For each even integer $m\geq2$, we let $\OG(m)$ denote the family of graph-group pairs $(\Gamma, G)$ where $\Gamma$ is connected, $m$-valent and $G$-oriented. It is easy to see that the family $\OG(2)$ consists only of oriented cycle graphs. On the other hand, the study of the family $\OG(4)$ has been an active area of research for several decades and has taken a number of different directions (especially because of its connection with the embedding of graphs into Riemann surfaces). For a good summary of this research up to 1998 see \cite{maruvsivc1998recent}; for a more recent overview see \cite[Section 2]{al2015finite}. A particularly useful tool for studying $\OG(4)$ was given in \cite{maruvsivc1998half}, where several important combinatorial parameters were defined for graphs in this family based on certain cyclic subgraphs called \textit{$G$-alternating cycles}. This led to the formulation of an approach to studying $\OG(4)$ by considering various quotients defined in terms of the $G$-alternating cycles, see \cite{maruvsivc1999tetravalent}.
The combined results of \cite{maruvsivc1998half,maruvsivc1999tetravalent} provide a complete classification of some subfamilies of $\OG(4)$ and prove that pairs $(\Gamma, G)\in\OG(4)$ not contained in these subfamilies are covers of other members of $\OG(4)$ satisfying certain combinatorial conditions. In particular, this approach naturally identifies two subfamilies of $\OG(4)$ as `alternating-cycle-basic' in the sense that all 4-valent $G$-oriented graphs (other than those already classified) are covers of these basic members. However, the analysis in \cite{maruvsivc1998half,maruvsivc1999tetravalent} provided no tools for studying the `basic' graphs relative to this reduction. More recently, a new framework for studying the family $\OG(4)$ was proposed in \cite{al2015finite} and developed further in \cite{al2016normal,al2017finite}. This new approach aims to analyse $\OG(4)$ using a normal quotient reduction, a method which has been successfully used to study other families of graphs with prescribed symmetry conditions (see for instance \cite{morris2009strongly, praeger1993nan, praeger1999finite}), but has never been applied to oriented graphs. The aim of this approach (explained in detail below) is to describe the family $\OG(4)$ in terms of graph quotients arising from normal subgroups of the groups contained in this family. In particular, it is again possible to identify three subfamilies of $\OG(4)$ which are `normal-quotient-basic' in the sense that all pairs $(\Gamma, G)\in \OG(4)$ are \textit{normal} covers of at least one of these basic pairs (see Section \ref{ssNormal}). It is likely that these two approaches converge in a significant proportion of cases. The quotient graph $\Gamma_{\B}$ constructed in \cite{maruvsivc1999tetravalent} from a given pair $(\Gamma, G) \in \OG(4)$, related to the $G$-alternating cycles, has been studied again recently by Ramos Rivera and \v{S}parl \cite[Construction 5.4]{rivera2019new}.
Provided that a mild condition on the parameters is satisfied (the attachment number should be less than the radius), they prove that $\Gamma_{\B}$ is a normal quotient \cite[Theorem 5.6]{rivera2019new} and hence may be studied using the powerful theory developed in \cite{al2016normal,al2017finite,al2015finite}, supplemented by the results of this paper. In this paper we answer \cite[Problem 1.2]{al2015finite} and provide a description of the pairs $(\Gamma, G)\in \OG(4)$ of biquasiprimitive type (one of the three families of pairs defined to be `basic' with respect to normal quotients). Our solution provides an important step towards a description of $\OG(4)$ in terms of normal quotients. For a detailed description of this programme and definitions of all basic pairs see Section \ref{ssNormal}. \bigskip \noindent\textbf{Biquasiprimitive Basic Pairs.} If $(\Gamma, G)\in \OG(4)$ is basic of biquasiprimitive type then the group $G$ contains a normal subgroup $N$ with exactly two orbits on the vertices of $\Gamma$, and all nontrivial normal subgroups of $G$ have at most two orbits. It is easy to see that $\Gamma$ is bipartite: since $\Gamma$ is connected there is an edge joining vertices in different $N$-orbits, and since $G$ normalizes $N$ and $\Gamma$ is $G$-edge-transitive, each edge joins vertices in different $N$-orbits. Thus the two orbits of $N$ form a bipartition of $\Gamma$. It follows that there is an index two subgroup $G^+$ of $G$ which fixes the two parts of the bipartition of $\Gamma$ setwise. The main result of this paper is the following theorem which describes the biquasiprimitive basic pairs $(\Gamma, G)\in \OG(4)$ in a manner analogous to \cite[Theorem 1.3]{al2015finite} for the quasiprimitive case. \begin{Theorem}\label{MainResult} Suppose that $(\Gamma, G)\in \OG(4)$ is basic of biquasiprimitive type. Then $G$ has a unique minimal normal subgroup $N= \soc(G)$ contained in a unique intransitive index $2$ subgroup $G^+ \leq G$.
Furthermore, $N \cong T^k$ where $T$ is a finite simple group and exactly one of the following holds: \begin{enumerate}[(a)] \item $T$ is abelian and $k\leq 2$, or \item $T$ is nonabelian, $k \in \{1,2,4\}$, and $N$ is the unique minimal normal subgroup of $G^+$, or \item $T$ is nonabelian, $k= 2\ell$ with $\ell \in \{1,2,4\}$, $G^+$ has exactly two minimal normal subgroups each isomorphic to $T^\ell$, and $N$ is the direct product of these two subgroups. \end{enumerate} Moreover, there are infinitely many biquasiprimitive basic pairs $(\Gamma, G)\in \OG(4)$ described by each of the cases (a)--(c) and each value of $k$ in each case. \end{Theorem} The first part of this paper establishes, in two steps, that one of the cases (a)--(c) of Theorem \ref{MainResult} must hold. In Section \ref{SecTwoTypes} we show that if $(\Gamma, G)\in \OG(4)$ is basic of biquasiprimitive type then $G$ has a unique minimal normal subgroup $N$ and one of the three cases (a), (b) or (c) holds for some $k \geq 1$. For this we use the structure theorem for biquasiprimitive groups given in \cite{praeger2003finite}. Then in Section \ref{SecRestricting} we use combinatorial arguments to obtain the various possibilities for the value of $k$ (the number of simple direct factors of $\soc(G)$) in each case. In the second part of this paper (Section \ref{SecConstructions}), we provide methods for constructing biquasiprimitive basic pairs and use them to construct various families of examples. In particular, we provide an infinite family of basic pairs for each of the cases described in Theorem \ref{MainResult}, for each possible value of $k$, thus proving the final assertion of Theorem \ref{MainResult}. \section{Preliminaries} Unless otherwise stated we will let $V\Gamma$, $E\Gamma$ and $A\Gamma$ denote the vertex-, edge-, and arc-set of a given graph $\Gamma$ (an \textit{arc} is an ordered pair of adjacent vertices).
Given a vertex $\alpha \in V\Gamma$ we let $\Gamma(\alpha)$ denote the neighbourhood of $\alpha$ in $\Gamma$. For fundamental graph-theoretic concepts we refer the reader to \cite{godsil200AGT}, and for group-theoretic concepts not defined here to \cite{praeger2018permutation}. Given a group $G$ acting on a set $X$, we will always let $G^X$ denote the subgroup of $\Sym(X)$ induced by the group $G$. Given a permutation $g\in G$ and an element $x\in X$ we will let $x^g$ denote the image of $x$ under $g$. A permutation group $G^X$ is said to be \textit{semiregular} if only the identity element of $G$ fixes a point in $X$, and is said to be \textit{regular} if it is semiregular and transitive. \subsection{$G$-oriented Graphs} If $\Gamma$ is a $G$-oriented graph then the group $G$ is transitive on the vertices and edges but not on the arcs of $\Gamma$. It follows that the group $G$ has two orbits on the arc set of $\Gamma$ and these two orbits are paired. (Every arc $(u,v)$ in one orbit will have its reverse arc $(v,u)$ in the other orbit.) Either of these two $G$-orbits on the arc set of $\Gamma$ naturally gives rise to a $G$-invariant orientation of the edges of $\Gamma$: simply take any arc $(u,v)$ of $\Gamma$ and then orient each edge $\{x,y\}$ from $x$ to $y$ if and only if $(u,v)^g = (x,y)$ for some $g\in G$. Given a pair $(\Gamma, G)\in \OG(4)$, a vertex $v_0 \in V\Gamma$ and any $G$-invariant orientation of $E\Gamma$, it will always be the case that two of the four edges incident to $v_0$ are oriented from $v_0$ to one of its neighbours, while the other two edges are oriented from a neighbour to $v_0$. In particular, the stabilizer $G_{v_0}$ of a vertex $v_0 \in V\Gamma$ will always have two orbits of length two on the neighbourhood of $v_0$, and we can think of these two orbits as the in-neighbours and out-neighbours of $v_0$ with respect to a given orientation. We denote these two sets by $\Gamma_{in}(v_0)$ and $\Gamma_{out}(v_0)$ respectively.
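The orientation-from-arc-orbits construction can be checked computationally on the one family already classified: under the rotation group $\mathbb{Z}_n$, the arcs of the cycle $C_n$ fall into exactly two paired orbits (the `clockwise' and `anticlockwise' arcs), recovering the $G$-invariant orientation and illustrating why $\OG(2)$ consists of oriented cycles. The following sketch is illustrative and not part of the paper; group elements are represented simply as vertex maps.

```python
def arc_orbits(arcs, generators):
    """Partition a set of arcs into orbits under a permutation group.
    Each generator is a dict mapping vertex -> image vertex; in a finite
    group, closing under the generators closes under the whole group."""
    remaining, orbits = set(arcs), []
    while remaining:
        seed = next(iter(remaining))
        orbit, frontier = {seed}, [seed]
        while frontier:
            u, v = frontier.pop()
            for g in generators:
                img = (g[u], g[v])
                if img not in orbit:
                    orbit.add(img)
                    frontier.append(img)
        orbits.append(orbit)
        remaining -= orbit
    return orbits

# The cycle C_6 under the rotation i -> i+1 generating Z_6.
n = 6
rot = {i: (i + 1) % n for i in range(n)}
arcs = [(i, (i + 1) % n) for i in range(n)] + \
       [((i + 1) % n, i) for i in range(n)]  # both arcs of every edge
orbits = arc_orbits(arcs, [rot])
```

Each edge contributes one arc to each of the two orbits, so either orbit is a $G$-invariant orientation of the edge set.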
Given a connected, 4-valent, $G$-vertex-transitive graph $\Gamma$, we may show that $(\Gamma, G) \in \OG(4)$ by showing that $G_{v_0}$ has two orbits of size 2 on $\Gamma(v_0)$, and that no element of $G$ reverses an edge of $\Gamma$. An \textit{oriented $s$-arc} of a $G$-oriented graph is a sequence of vertices $(v_0, v_1,...,v_s)$ of $\Gamma$, such that for each $i \in \{0,...,s-1\}$, $v_i$ and $v_{i+1}$ are adjacent, and each edge $\{v_i, v_{i+1}\}$ is oriented from $v_i$ to $v_{i+1}$. We will make use of the following important fact concerning oriented $s$-arcs of $G$-oriented graphs, the proof of which can be found in the first part of the proof of \cite[Lemma 6.2]{al2015finite}. \begin{Lemma}\label{regular_s_arcs} Let $(\Gamma, G) \in \OG(4)$ and let $s\geq 1$ be the largest integer such that $G$ acts transitively on the oriented $s$-arcs of $\Gamma$. Then $G$ acts regularly on the oriented $s$-arcs of $\Gamma$. \end{Lemma} Now let $(\Gamma, G) \in \OG(4)$ and take a vertex $\alpha \in V\Gamma$. Let $s$ be as in the statement of Lemma \ref{regular_s_arcs} and consider an oriented $s$-arc $(v_0,v_1,...,v_s)$ of $\Gamma$ where $\alpha := v_0$. Since $G$ is regular on the oriented $s$-arcs of $\Gamma$, it follows that the vertex-stabilizer $G_{v_0}$ is regular on the oriented $s$-arcs starting at $v_0$. From this it follows that for each $i$ with $0\leq i \leq s$, the subgroup $G_{{v_0},...,{v_{s-i}}}$ has order $2^i$. In particular, $|G_{{v_0},...,{v_{s-1}}}| = 2$, and the stabilizer of a vertex $G_{v_0} = G_\alpha$ is a 2-group. \subsection{Normal Quotients} \label{ssNormal} Given a pair $(\Gamma, G) \in \OG(4)$ and a normal subgroup $N$ of $G$, we define a new graph $\Gamma_N$ called a \textit{$G$-normal-quotient} of $\Gamma$.
The vertices of $\Gamma_N$ are the $N$-orbits on the vertices of $\Gamma$, with an edge between two $N$-orbits $\{B, C\}$ in $\Gamma_N$ if and only if there is an edge of the form $\{\alpha, \beta\}$ in $\Gamma$, with $\alpha \in B$ and $\beta \in C$. The group $G$ induces a group $G_N = G/K$ of automorphisms of $\Gamma_N$, where $K$ is the kernel of the $G$-action on $\Gamma_N$. By definition $N\leq K$, and hence the $K$-orbits are the same as the $N$-orbits so $\Gamma_K=\Gamma_N$. However $K$ may be strictly larger than $N$. If $(\Gamma_N, G_N)$ is itself a member of $\OG(4)$, that is, $\Gamma_N$ is a 4-valent $G_N$-oriented graph, then $\Gamma$ is said to be a \textit{$G$-normal cover of $\Gamma_N$}. In general however, the pair $(\Gamma_N, G_N)$ need not lie in $\OG(4)$, and the various possibilities for such normal quotient pairs $(\Gamma_N, G_N)$ were identified in \cite[Theorem 1.1]{al2015finite}. In particular, it was proved that for any $(\Gamma, G) \in \OG(4)$, and any nontrivial normal subgroup $N$ of $G$, either $(\Gamma_N, G_N)$ is also in $\OG(4)$ and $\Gamma$ is a $G$-normal cover of $\Gamma_N$, or $\Gamma_N$ is isomorphic to $K_1$, $K_2$ or a cycle $C_r$, for some $r\geq 3$. A pair $(\Gamma_N, G_N)$ where $\Gamma_N$ is isomorphic to one of $K_1$, $K_2$ or $C_r$ is defined to be \textit{degenerate}, while a pair $(\Gamma, G) \in \OG(4)$ for which $(\Gamma_N, G_N)$ is degenerate relative to every non-trivial normal subgroup $N$ of $G$ is defined to be \textit{basic}. Since \cite[Theorem 1.1]{al2015finite} ensures that every member of $\OG(4)$ is a normal cover of a basic pair, this result suggests a framework for studying the family $\OG(4)$ using normal quotient reduction. The goal of this framework is to improve understanding of this family by developing a theory to describe the basic pairs in $\OG(4)$, and subsequently developing a theory to describe the $G$-normal covers of these basic pairs.
Work in this direction was initiated in \cite{al2015finite} where the basic pairs were further divided into three types and the basic pairs of quasiprimitive type were analysed. A pair $(\Gamma, G) \in \OG(4)$ is said to be basic of \textit{quasiprimitive type} if all $G$-normal quotients $\Gamma_N$ of $\Gamma$ are isomorphic to $K_1$. This occurs precisely when all non-trivial normal subgroups of $G$ are transitive on the vertices of $\Gamma$. A permutation group with this property is said to be quasiprimitive, and a general structure theorem for quasiprimitive groups, analogous to the O'Nan-Scott Theorem for primitive permutation groups, is available in \cite{praeger1993nan}. Using this tool, as well as combinatorial properties of the family $\OG(4)$, it was shown in \cite[Theorem 1.3]{al2015finite} that if $(\Gamma, G) \in \OG(4)$ is basic of quasiprimitive type, then $G$ has a unique minimal normal subgroup $N \cong T^k$ where $T$ is a nonabelian finite simple group and $k\leq 2$. Of course, every pair $(\Gamma, G) \in \OG(4)$ will have at least one normal quotient $\Gamma_N$ isomorphic to $K_1$ since we may take the quotient with respect to the full group $G$. If the only normal quotients of a pair $(\Gamma, G) \in \OG(4)$ are the graphs $K_1$ or $K_2$, and $\Gamma$ has at least one $G$-normal quotient isomorphic to $K_2$, then $(\Gamma, G)$ is said to be basic of \textit{biquasiprimitive type}. The group $G$ here is biquasiprimitive: it is not quasiprimitive but each nontrivial normal subgroup has at most two orbits. Again, there is a structure theorem for biquasiprimitive groups available in \cite{praeger2003finite}.
The basic pairs in $\OG(4)$ which are neither quasiprimitive nor biquasiprimitive must have at least one normal quotient isomorphic to a cycle graph $C_r$, and hence are said to be of \textit{cycle type}. Work towards describing the basic pairs of cycle type was initiated in \cite{al2016normal} where several important families of these graphs, which have already been discussed in the literature, were analysed from a normal quotient point of view. A more general analysis of these pairs was done in \cite{al2017finite}; however, further work is required to understand this type. The above discussion outlining the three types of basic pairs $(\Gamma, G) \in \OG(4)$ is summarised in Table \ref{BasicTable}. This table also includes references to the papers where the corresponding basic pairs were previously studied. The objective of this paper is to describe the basic pairs $(\Gamma, G) \in \OG(4)$ of biquasiprimitive type, several families of which were constructed in \cite{PPMatrix}. \begin{table} \begin{tabular}{ l l l l } \hline Basic Type & Possible $\Gamma_N$ for $1\neq N \lhd G$ & Conditions on $G$-action on vertices & Reference \\ \hline Quasiprimitive & $K_1$ only & quasiprimitive & \cite{al2015finite}\\ Biquasiprimitive & $K_1$ and $K_2$ only & biquasiprimitive & -- \\ Cycle & At least one $C_r $ $(r\geq 3)$ & at least one quotient action $D_{2r}$ or $\mathbb{Z}_r$& \cite{al2016normal,al2017finite}\\ \hline \end{tabular}\caption{Types of Basic Pairs $(\Gamma, G) \in \OG(4)$.}\label{BasicTable} \end{table} \subsection{Bi-Cayley Graphs.}\label{ssBiCay} A \textit{bi-Cayley graph} $\Gamma$ is a graph which admits a semiregular group of automorphisms $H$ with two orbits on the vertex set of $\Gamma$. These graphs are important for our purposes as for many of the pairs $(\Gamma, G)\in \OG(4)$ studied in this paper, the group $G$ will have a normal subgroup $N$ contained in $G^+$ and acting semiregularly on the two parts of the bipartition of $\Gamma$.
In such cases, $\Gamma$ is a bi-Cayley graph. Every bi-Cayley graph of a group $H$ may be constructed in the following way. Let $R$ and $L$ be inverse-closed subsets of $H$ which do not contain the identity, and let $S$ be a subset of $H$. Define the graph $\Gamma = \BiCay(H, R, L, S)$ to be a graph whose vertex set is the union of the sets $H_0=\{h_0 : h \in H\}$ and $H_1=\{h_1 : h \in H\}$ (two copies of the group $H$), and whose edge set is the union of the \textit{right edges} $\{\{h_0, g_0\} : gh^{-1}\in R\}$, the \textit{left edges} $\{\{h_1, g_1\} : gh^{-1}\in L\}$, and the \textit{spokes} $\{\{h_0, g_1\} : gh^{-1}\in S\}$. Note that if $\Gamma$ is connected then $H$ is generated by $R\cup L \cup S$ (however the converse does not necessarily hold). The group $H$ then acts by right multiplication on the vertices of $\Gamma$, and this action is semiregular with two orbits $H_0$ and $H_1$. See for instance \cite{conder2016edge,zhou2016automorphisms}. \section{Biquasiprimitive Basic Pairs: two types.} \label{SecTwoTypes} Suppose now that $(\Gamma, G) \in \OG(4)$ is a basic pair of biquasiprimitive type and recall that this implies that $\Gamma$ is bipartite. Let $X$ denote the vertex set of $\Gamma$ with $\{\Delta, \Delta'\}$ the bipartition of $X$, and let $G^+$ be the index 2 subgroup of $G$ fixing the two biparts $\Delta$ and $\Delta'$ setwise. Since $\Gamma$ is $G$-vertex-transitive it follows that $G^+$ is transitive on both $\Delta$ and $\Delta'$. In this section we will begin working towards the proof of Theorem \ref{MainResult}. We start with a lemma about the intransitive normal subgroups of $G$. \begin{Lemma}\label{G^+ faithful} Let $ (\Gamma, G) \in \OG(4)$ be basic of biquasiprimitive type, and let $X$ denote the vertex set of $\Gamma$. Let $G^+$ be the subgroup of $G$ of index two with orbits $\Delta, \Delta'$ (the biparts of $X$). 
Then \begin{enumerate} \item [(a)] $G^+$ is faithful on $\Delta$ $($and $\Delta')$, and \item[(b)] any non-trivial intransitive normal subgroup $N$ of $G$ must have the sets $\Delta$ and $ \Delta'$ as its two orbits on $X$. In particular, $N$ is contained in $G^+$. \end{enumerate} \end{Lemma} \begin{proof} (a). Let $K$ be the subgroup of $G^+$ fixing $\Delta$ pointwise and suppose that $K \neq 1$, and hence that $K$ acts non-trivially on $\Delta'$. If $g \in G\backslash G^+$ then $K^g$ is the pointwise stabilizer of $\Delta'$ in $G^+$, and hence $K \cap K^g = 1$, so $\langle K, K^g \rangle \cong K \times K^g$. Now since both $K$ and $K^g$ are normal in $G^+$, and since $g^2 \in G^+$ (because $|G:G^+| = 2$), it follows that $(K \times K^g)^g = K \times K^g$, and so $K\times K^g$ is a normal subgroup of $G$ contained in $G^+$. Thus $ K \times K^g$ has two orbits $\Delta$ and $\Delta'$. But this implies that $K$ is transitive on $\Delta'$, which is impossible since for any $\alpha \in \Delta$ we have $K \leq G_\alpha$, and $G_\alpha$ is not transitive on $\Gamma(\alpha) \subset \Delta'$. Thus part (a) holds. (b). Since $|V\Gamma| \geq |\{\alpha\} \cup \Gamma(\alpha)| = 5$, it follows that $|N| \geq \frac{1}{2}|V\Gamma| >2$, hence $N\cap G^+ \neq 1$ since $|N:N\cap G^+|\leq 2$. Thus $N\cap G^+$ is a nontrivial intransitive normal subgroup of $G$ contained in $G^+$, so its orbits are $\Delta$ and $\Delta'$, and these must also be the orbits of the intransitive normal subgroup $N$. \end{proof} Next we introduce a convenient framework for investigating these graphs, based on the Imprimitive Wreath Embedding Theorem \cite[Theorem 5.5]{praeger2018permutation} which identifies the vertex set $X$ with $\{ v_i\mid v\in V, i\in\{0,1\}\}$, and $G$ with a transitive subgroup of $\Sym(V)\wr\Sym(2)$ in its natural imprimitive action, so that $\Delta = \{ v_0\mid v\in V\}$ and $\Delta'=\{ v_1\mid v\in V\}$. 
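The $\BiCay(H, R, L, S)$ construction of Section~\ref{ssBiCay}, whose two copies $H_0$ and $H_1$ mirror the identification $X=\{v_i \mid v\in V, i\in\{0,1\}\}$ used here, can be written out explicitly for a cyclic group $H=\mathbb{Z}_n$ (written additively, so the membership test $gh^{-1}\in R$ becomes $g-h \bmod n \in R$). The sketch below is illustrative and not part of the paper.

```python
def bi_cayley_edges(n, R, L, S):
    """Edge set of BiCay(Z_n, R, L, S): vertices (h, 0) and (h, 1) are
    two copies of Z_n. Right edges use R, left edges use L, spokes use S;
    R and L are inverse-closed and exclude the identity 0."""
    H = range(n)
    right = {frozenset({(h, 0), (g, 0)}) for h in H for g in H
             if (g - h) % n in R}
    left = {frozenset({(h, 1), (g, 1)}) for h in H for g in H
            if (g - h) % n in L}
    spokes = {frozenset({(h, 0), (g, 1)}) for h in H for g in H
              if (g - h) % n in S}
    return right | left | spokes
```

For example, with $n=4$, $R=L=\emptyset$ and $S=\{0,1\}$ each vertex has valency $2$ and the graph is an $8$-cycle; right multiplication $h\mapsto h+a$ preserves each copy of $\mathbb{Z}_4$ and is semiregular with the two copies as its orbits.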
Since $G$ is transitive, its subgroup $G^+$ induces transitive subgroups $(G^+)^\Delta$ and $(G^+)^{\Delta'}$ on $\Delta$ and $\Delta'$, each of which we identify with a transitive subgroup of $\Sym(V)$. Let $\tau\in\Sym(V)\wr\Sym(2)$ generate the top group, that is, $\tau: v_\varepsilon \rightarrow v_{1-\varepsilon}$ for each $v\in V, \varepsilon\in\{0,1\}$, and note that $\tau$ conjugates each element $(h_1, h_2)\in\Sym(V)\times\Sym(V)$ to its reverse $(h_2, h_1)$. For a group $H$, $y\in H$, and $\varphi\in\Aut(H)$, we denote by $\iota_y$ the inner automorphism of $H$ induced by $y$, that is $h\mapsto y^{-1}hy$, and by $ \Diag_\varphi(H\times H)=\{(h,h^\varphi)\mid h\in H\}$ the diagonal subgroup of $H\times H$ corresponding to $\varphi$. \begin{Proposition}\label{iso} Let $ (\Gamma, G) \in \OG(4)$ be basic of biquasiprimitive type, and let $X$ denote the vertex set of $\Gamma$. Let $G^+$ be the subgroup of $G$ of index two with orbits $\Delta, \Delta'$ in $X$, and let $H$ be the permutation group induced by $G^+$ on $\Delta$. Let $\alpha\in\Delta$ and $\beta\in \Gamma(\alpha)\subseteq \Delta'$. Then replacing $G$ by a conjugate in $\Sym(X)$ if necessary, we may take $X=\{ v_i\mid v\in V, i\in\{0,1\}\}, \Delta, \Delta'$ and $\alpha=u_0$ as above, where $u\in V$, and we may identify $H$ with a transitive subgroup of $\Sym(V)$, such that \begin{enumerate} \item[(a)] $G\leq H\wr\Sym(2)$, so $H = (G^+)^\Delta = (G^+)^{\Delta'}$; and \item[(b)] for some $y\in H$ and $\varphi\in\Aut(H)$ with $\varphi^2= \iota_y$, we have $G^+ = \Diag_\varphi(H\times H),$ and $G=\langle G^+, g\rangle$, where $g:=(y,1)\tau$, and $\beta= \alpha^g=(u^y)_1$. Also $G_\alpha = G_\alpha^+ \cong H_\alpha$ is a $2$-group. 
\end{enumerate} \end{Proposition} \begin{proof} The first assertion that we may choose the identification of $X, \Delta, \Delta'$ so that the transitive subgroups $(G^+)^\Delta$ and $(G^+)^{\Delta'}$ determine the same subgroup $H$ of $\Sym(V)$ follows from the embedding theorem \cite[Theorem 5.5]{praeger2018permutation}. Thus $G\leq H\wr\Sym(2)$, and $G^+\leq H\times H$. By Lemma~\ref{G^+ faithful}, $G^+$ is a diagonal subgroup of $H\times H$, so $G^+ = \Diag_\varphi(H\times H),$ for some $\varphi\in\Aut(H)$. Since $G$ is transitive on $X$, there exists $g=(h_1, h_2)\tau\in G$ such that $\beta=\alpha^g$. Set $s:= (1, h_2)\in H\times H$. Then $s$ induces a graph isomorphism from $\Gamma$ to the graph $\Gamma^s$ with vertex set $X$ and arc set consisting of all pairs $(v_\varepsilon^s, w_{1-\varepsilon}^s) = (v_\varepsilon, (w^{h_2})_{1-\varepsilon})$, where $(v_\varepsilon, w_{1-\varepsilon})$ is an arc of $\Gamma$. Moreover $(\Gamma^s, G^s)\in\OG(4)$, the group $G^s =\langle (\Diag_{\varphi}(H\times H))^s, g^s\rangle$, and we have $(\Diag_\varphi(H\times H))^s = \Diag_{\varphi\iota_{h_2}}(H\times H)$ and \[ g^s = (1, h_2^{-1})(h_1,h_2)\tau (1,h_2) = (h_1h_2, 1)\tau. \] Set $y:= h_1h_2$. Then $g^s$ maps $\alpha^s$ to its out-neighbour $\beta^s$ in $\Gamma^s$, and we have $\alpha^s = \alpha$, and $\beta^s = (\alpha^g)^s = (\alpha^{s})^{g^s} = \alpha^{g^s} = (u_0)^{(y,1)\tau} = (u^y)_1$. Now replace $\Gamma, G, g, \varphi, \alpha, \beta$ by $\Gamma^s, G^s, g^s, \varphi\iota_{h_2}, \alpha, \beta^s$. Then all assertions are proved apart from the equality $\varphi^2=\iota_y$, which we now prove (for the new $\varphi)$. Since $g=(y,1)\tau$ normalises $G^+ = \Diag_\varphi(H\times H)$, it follows that, for all $h\in H$, $G^+$ contains $(h,h^\varphi)^g = (h,h^\varphi)^{(y,1)\tau} = (h^\varphi, h^y)$ and hence we must have $h^y = (h^\varphi)^\varphi$ for all $h\in H$, that is to say, $\varphi^2=\iota_y$. 
Finally $G_\alpha = G^+_\alpha = \{(h,h^\varphi) : \alpha^{(h, h^\varphi)} = (u^h)_0 = u_0\} \cong H_u$, and we know already that $G_\alpha$ is a 2-group. \end{proof} Now we apply the structure theorem from \cite{praeger2003finite} for biquasiprimitive groups. It turns out that only two of the various possible structures given in Theorem 1.1 of \cite{praeger2003finite} can arise as groups of automorphisms of 4-valent oriented graphs of basic biquasiprimitive type. Note that the stabiliser $G_\alpha = \{(h,h^\varphi)\mid h\in H_u\}\cong H_u$. \begin{Proposition}\label{twotypes} Under the assumptions of Proposition~\ref{iso}, the automorphism $\varphi$ is nontrivial, and $G$ has a unique minimal normal subgroup $N=\soc(G)$. Moreover $N=\Diag_\varphi(M\times M)\cong M$ where $M=\soc(H)\cong T^k$ for some simple group $T$ and $k\geq1$, and either \begin{enumerate} \item[(a)] $H$ is quasiprimitive and $M$ is its unique minimal normal subgroup, or \item[(b)] $H$ is not quasiprimitive and $M=R\times R^\varphi$ where $R, R^\varphi$ are intransitive minimal normal subgroups of $H$. In this case $G^+$ has two minimal normal subgroups, namely $K:=\Diag_\varphi(R\times R)$ and $L = \Diag_\varphi(R^\varphi\times R^\varphi)$, and these are the only minimal normal subgroups if $T$ is nonabelian. Moreover, $N=K\times L$ (so $k=2\ell$ and $K\cong L\cong R\cong T^\ell$). \end{enumerate} \end{Proposition} \begin{proof} We examine the possibilities for the structure of $G$ given in \cite[Theorem 1.1]{praeger2003finite}. Since $G_\alpha$ is a 2-group, cases (b) and (c)(ii) do not arise, and since $G^+$ is faithful on $\Delta$, the possible cases are (a)(i) and (c)(i). Consider first case (a)(i). Since $|X|>4$, the element $g = (y,1)\tau$ does not centralise $G^+$. A straightforward computation shows that $C_{G^+}(g)$ consists of all pairs $(h,h^\varphi)$ such that $h\in C_H(\varphi)$. Thus $\varphi$ is nontrivial. 
Moreover in case (a)(i), $H$ is quasiprimitive on $V$ and the stabiliser $H_u\cong G_\alpha$ is a 2-group. We now apply the O'Nan--Scott Theorem for quasiprimitive groups from \cite{praeger1993nan}. This theorem tells us that if $H$ has more than one minimal normal subgroup, then the stabiliser $H_u$ is not solvable. Thus $H$ has a unique minimal normal subgroup $M=\soc(H)\cong T^k$ where $T$ is a simple group and $k\geq 1$. Now $G^+$ has a minimal normal subgroup $N=\Diag_\varphi(M\times M)\cong M$ and since $G^+ \cong H$ it follows that $N$ is the unique minimal normal subgroup of $G^+$. It remains to consider case (c)(i). Here again, $G$ has a unique minimal normal subgroup $N=\Diag_\varphi(M\times M)$, but in this case $M=\soc(H)=R\times R^\varphi$ where $R, R^\varphi$ are intransitive minimal normal subgroups of $H$. In particular $\varphi$ is nontrivial, and $R\cong R^\varphi\cong T^\ell$ and $M\cong T^k$ with $k=2\ell$. Here $K$ and $L$, as in part (b), are the minimal normal subgroups of $G^+$, and are interchanged by $g$ (noting that, for $(h,h^\varphi)\in K$, the conjugate $(h,h^\varphi)^g = (h^\varphi, h^y)\in L$, since $\varphi^2=\iota_y$, and vice versa). If $T$ is nonabelian then, since $R$ is a minimal normal subgroup of $H$, it follows that $H$ permutes the simple direct factors of $R$ (and $R^\varphi$) transitively. Hence these are the only minimal normal subgroups of $H$, and $K$ and $L$ are the only minimal normal subgroups of $G^+$. On the other hand, if $T=C_p$ then, as an $H_u$-module, $M$ has two composition factors, each isomorphic to $R$. In particular, $H$ may have other minimal normal subgroups. However, for any such subgroup $S$ we have $S\cong R$, as there are just two composition factors and both are isomorphic to $R$. Also, since $N$ is the unique minimal normal subgroup of $G$, it follows that $M = S \times S^\varphi$ as well.
\end{proof} In summary, if $(\Gamma, G)\in \OG(4)$ is basic and biquasiprimitive, then $N:= \soc(G)$ is the unique minimal normal subgroup of $G$, and is contained in $G^+$. In particular $N$ is transitive on the two $G^+$-orbits $\Delta$ and $\Delta'$, and since $G_\alpha = G^+_\alpha$, it follows that $G^+ = NG_\alpha$. Using the framework of Proposition~\ref{iso}, we can specify the neighbours of $\alpha = u_0$ and of $\alpha^{g^{-1}}=u_1$. We denote by $\Gamma_{out}(\gamma)$ and $\Gamma_{in}(\gamma)$ the 2-subsets of out-neighbours and in-neighbours of a vertex $\gamma$, respectively. Each of these two sets is an orbit of the stabiliser $G_\gamma$, and we can always choose an element of $G_\gamma$ that acts fixed-point-freely on $\Gamma(\gamma)$ (whether the induced group has order 2 or 4). For the vertex $\alpha$, such an element is of the form $(z^{\varphi^{-1}},z)$ for some $z\in (H_u)^\varphi$. Since we did not specify this above, let us now decide that the vertex $\beta=(u^y)_1$ in Proposition~\ref{iso} lies in $\Gamma_{in}(\alpha)$. \begin{Lemma}\label{neighbours} Use the notation of Proposition~\ref{iso} (in particular that $g=(y,1)\tau$ and $\alpha=u_0$), and let $(z^{\varphi^{-1}},z)\in G_\alpha$ be fixed-point-free on $\Gamma(\alpha)$, for some $z\in (H_u)^\varphi$. Then \begin{enumerate} \item[(a)] $\Gamma_{in}(\alpha)=\{ (u^y)_1, (u^{yz})_1 \}$ and $\Gamma_{out}(\alpha) = \{ u_1, (u^z)_1 \}$; and \item[(b)] for $\gamma :=\alpha^{g^{-1}} = u_1$, $\Gamma_{in}(\gamma)=\{ u_0, (u^{yzy^{-1}})_0 \}$ and $\Gamma_{out}(\gamma) = \{ (u^{y^{-1}})_0, (u^{zy^{-1}})_0 \}$. \end{enumerate} \end{Lemma} \begin{proof} As mentioned above, we assume that the vertex $\beta=\alpha^g=(u^y)_1$ in Proposition~\ref{iso} lies in $\Gamma_{in}(\alpha)$. As $(z^{\varphi^{-1}},z)\in G_\alpha$ is fixed-point-free on $\Gamma(\alpha)$, the second vertex in $\Gamma_{in}(\alpha)$ is $\beta^{(z^{\varphi^{-1}},z)} = (u^{yz})_1$. Note that $g^{-1} = (1,y^{-1})\tau$.
Applying $g^{-1}$ to $\{\alpha\}\cup\Gamma_{in}(\alpha)$ we find first that $\alpha^{g^{-1}}= u_1$ and then that $\Gamma_{in}(u_1)$ consists of the vertices $(u^y)_1^{g^{-1}} = u_0$ and $(u^{yz})_1^{g^{-1}} = (u^{yzy^{-1}})_0$. In particular $u_1\in\Gamma_{out}(u_0)$ and the second vertex in this set is therefore $u_1^{(z^{\varphi^{-1}},z)} = (u^{z})_1$. This completes the proof of part (a). Finally, applying $g^{-1}$ to $\{\alpha\}\cup\Gamma_{out}(\alpha)$ we find that $\Gamma_{out}(u_1)$ consists of the vertices $(u)_1^{g^{-1}} = (u^{y^{-1}})_0$ and $(u^{z})_1^{g^{-1}} = (u^{zy^{-1}})_0$. \end{proof} \section{Biquasiprimitive Basic Pairs: Restricting the Socle}\label{SecRestricting} We will now show that for any biquasiprimitive basic pair $(\Gamma, G) \in \OG(4)$, the unique minimal normal subgroup $N$ of $G$ is a direct product of $k$ finite simple groups, where $k$ takes one of only a few possible values depending on the structure of $G$. We deduce these values of $k$ by separately considering the cases when $N$ is abelian and nonabelian. We first consider the case where the minimal normal subgroup $N = \soc(G)$ is abelian. Since $N$ is contained in $G^+$, this implies that $N$ acts transitively, and hence regularly, on $\Delta$ (and $\Delta'$). In particular, $\Gamma$ is a bi-Cayley graph over $N$, that is, $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$, and $N = C_p^k$ for some $k \geq 1$. \begin{Lemma}\label{abelianBound} Let $(\Gamma, G) \in \OG(4)$ be basic of biquasiprimitive type and suppose that $N = \soc(G)$ is abelian. Then $N = C_p^k$ with $k \leq 2$ and $p$ an odd prime. \end{Lemma} \begin{proof} Since $N = C_p^k$ is an abelian normal subgroup of $G$ contained in $G^+$, $N$ is regular on the two $G^+$-orbits $\Delta$ and $\Delta'$, and $\Gamma \cong\BiCay(N, \emptyset, \emptyset, S)$, for some subset $S \subseteq N$ as defined in Subsection \ref{ssBiCay}.
We can view $\Delta$ and $\Delta'$ as two copies of the group $N$, so $\Delta = N_0$ and $\Delta' = N_1$, with each vertex $n_0 \in \Delta$ adjacent to $(n + s)_1 \in \Delta'$, where $s \in S$. Since $\Gamma$ is connected and 4-valent, it follows that $\langle S \rangle = N$ and $|S| = 4$; in particular $k \leq 4$. Suppose first that $k = 4$ and $S = \{s_1, s_2, s_3, s_4\} \subseteq N$. We may view $N = C_p^4$ as a 4-dimensional vector space over the finite field $\mathbb{F}_p$. Since $S$ generates $N$, it follows that the elements of $S$, viewed as vectors of this space, are linearly independent. Since $\Gamma$ is connected, there is a path from $(0_N)_0 \in \Delta$ to $(0_N)_1 \in \Delta'$. Moreover, this path has odd length since $\Delta$ and $\Delta'$ are independent sets. Let $P$ be a path from $(0_N)_0$ to $(0_N)_1$; then $P$ is of the form $$(0_N)_0, (t_1)_1, (t_1-t_2)_0, (t_1-t_2+t_3)_1,\dots,(t_1-t_2+t_3 - \dots+t_r)_1 = (0_N)_1$$ where each $t_i \in S$ for $1\leq i\leq r$. In particular, since $P$ has odd length, $r$ is an odd integer. Now consider the expression $t_1-t_2+t_3 - \dots+t_r = 0_N$. For each $j$ with $1\leq j\leq 4$, let $\alpha_j$ be the number of odd $i$ such that $t_i = s_j$, and let $\beta_j$ be the number of even $i$ such that $t_i = s_j$. Notice that since the length of the path $P$ is odd, the sum $\sum_{j=1}^4 \alpha_j$ equals exactly $1 + \sum_{j=1}^4 \beta_j$, and so $ \sum_{j=1}^{4} (\alpha_j-\beta_j) =1 $ (an equation over the integers $\mathbb{Z}$). On the other hand, since $t_1-t_2+t_3 - \dots+t_r = 0_N$ (an equation in the group $N$), we get that $$0_N = \sum_{j=1}^{4} (\alpha_j-\beta_j)s_j,$$ and since the elements $s_j$ of $S$ are linearly independent, it follows that $\alpha_j \equiv \beta_j$ mod $p$, for each $j$. Hence $$0 \equiv \sum_{j=1}^{4} (\alpha_j-\beta_j)\mod p,$$ contradicting the fact that $ \sum_{j=1}^{4} (\alpha_j-\beta_j) =1$; thus $k\neq 4$. Next suppose that $N = C_p^k$ with $k= 3$.
Since $k$ is odd, it follows by Proposition \ref{twotypes} that $G^+ = NG_\alpha$ is quasiprimitive on $\Delta = N$. In particular, since $N$ is regular on $\Delta$, no proper non-trivial subgroup of $N$ is normal in $G^+$. Since $N$ acts trivially on itself by conjugation, this implies that conjugation by $G_\alpha$ fixes no proper non-trivial subgroup of $N$. However, $G_\alpha$ is a 2-group, and $N$ has exactly $p^2 + p +1$ subgroups of order $p$. Since this number is odd, some subgroup must be left fixed under conjugation by $G_\alpha$ and hence must be normal in $G^+$, a contradiction. Therefore $k \leq 2$. To see that $p$ must be odd, notice that if $k = 2$ then again conjugation by $G_\alpha$ cannot fix any of the $p+1$ subgroups of $N$ of order $p$, implying that $p \neq 2$. On the other hand, if $k = 1$ then the fact that $|V\Gamma|>4$ implies that $N \neq C_2$. \end{proof} The next lemma concerns the case when $N =\soc(G)$ is nonabelian. The proof develops ideas used to prove a similar result for quasiprimitive basic pairs in \cite[Lemma 6.2]{al2015finite}. \begin{Lemma}\label{nonabelianBound} Let $(\Gamma, G) \in \OG(4)$ be basic of biquasiprimitive type and suppose that $N = \soc(G)$ is nonabelian. Then either \begin{enumerate}[(a)] \item $N$ is a minimal normal subgroup of $G^+$ and $N = T^k$, for some nonabelian simple group $T$ and $k \in \{1,2,4\}$; or \item $N = K \times K^g$ where $g\in G \backslash G^+$, and $K = T^\ell$ is a minimal normal subgroup of $G^+$ with $T$ a nonabelian simple group and $\ell \in \{1,2,4\}$. In particular, $N\cong T^k$ with $k = 2 \ell$. \end{enumerate} \end{Lemma} \begin{proof} Let $(\Gamma, G) \in \OG(4)$ be as in the statement of the lemma and suppose that $N= \soc(G)$ is nonabelian. The possible cases (a) and (b) here correspond directly to the two cases of Proposition \ref{twotypes}.
The group $K$ in case (b) is the subgroup $K := \{(r, r^\varphi) : r\in R\}$ of Proposition \ref{twotypes}, and so $K^g = \{(r^\varphi, r^y) : r\in R\}$, where $R$ is an intransitive minimal normal subgroup of $H$. Since $N = \soc(G)$ is nonabelian, it follows that $N$ is a direct product of isomorphic nonabelian simple groups $T$. In particular, $N = T^k$ for $k\geq1$, and in case (b), $k = 2\ell$ where $K = T^\ell$ and $\ell \geq 1$. We will now show that $k$ divides $4$ in case (a) and that $\ell$ divides $4$ in case (b). As $N = \soc(G)$, we will identify $N$ with its group of inner automorphisms Inn($N$), and regard $G$ as a subgroup of $\Aut(N) \cong \Aut(T) \wr \Sym(k)$. The representations of elements will therefore differ from those used in Proposition \ref{twotypes}. Let $s\geq 1$ be the largest integer such that $G$ acts transitively on the oriented $s$-arcs of $\Gamma$. By Lemma \ref{regular_s_arcs}, this implies that $G$ is regular on the oriented $s$-arcs of $\Gamma$. Consider now an oriented $s$-arc $(v_0,v_1,\dots,v_s)$ of $\Gamma$. By regularity, the pointwise stabiliser $G_{v_0,\dots,v_{s-1}}$ has order 2, say $G_{v_0,\dots,v_{s-1}} = \langle h_1 \rangle \cong C_2$. Now let $g \in G\backslash G^+$ be an automorphism of $\Gamma$ taking the oriented $s$-arc $(v_0,v_1,\dots,v_s)$ to the oriented $s$-arc $(v_1, v_2,\dots,v_s, v_{s+1})$, where $v_{s+1}$ is some out-neighbour of $v_s$. For each $2 \leq i \leq s$, define $h_i := h^{g^{-1}}_{i-1}$. It is clear that for each $i \leq s$ we have $$ G_{v_0,\dots,v_{s-i}} = \langle h_1, \dots, h_i\rangle.$$ We may write the automorphisms $h_1, g \in G$ as elements of $\Aut(N) \cong \Aut(T) \wr \Sym(k)$, so that $h_1 = f\sigma$ and $g = f'\tau$ where $f, f' \in \Aut(T)^k$ and $\sigma, \tau \in \Sym(k)$. In fact, in case (b), $\sigma, \tau \in \Sym(\ell) \wr \Sym(2)$ with $\sigma \in \Sym(\ell)\times \Sym(\ell)$. In either case, $h_1^2 =1$ implies that $\sigma^2 =1$.
Now let $\pi$ denote the projection map $\pi: \Aut(N) \rightarrow \Sym(k)$, so that $(h_1)\pi = \sigma$ and $(g)\pi = \tau$, and let $P := (G^+)\pi = (NG_{v_{0}})\pi = (G_{v_0})\pi$. Note that $P$ is a 2-group since $G_{v_0}$ is a 2-group, and moreover $$P = (G_{v_0})\pi = \langle h_1,h_2,\dots, h_s\rangle\pi = \langle \sigma, \sigma^{\tau^{-1}},\dots, \sigma^{\tau^{-(s-1)}}\rangle.$$ We claim that $\sigma$ is not contained in any proper $\tau$-invariant subgroup of $P$. Suppose on the contrary that $\bar{P}$ is a proper $\tau$-invariant subgroup of $P$ containing $\sigma$. Since $\bar{P}$ is $\tau$-invariant, it follows that $\sigma^{\tau^{-i}} \in \bar{P}$ for all $i \in \mathbb{Z}$, implying that $P\leq \bar{P}$ and hence that $P = \bar{P}$, a contradiction. Notice that $P$ is a subgroup of index 1 or 2 in $(G)\pi$, and that $P$ is transitive in case (a) or has two orbits of length $\ell$ in case (b), so $k$ divides $|P|$ or $\ell$ divides $|P|$, respectively. We will now consider separately the two possibilities for the index of $P$ in $(G)\pi$ and show that in either case $|P|$ divides 4. Suppose first that $P = (G)\pi$ and let $M$ be a maximal subgroup of $P$ containing $\langle \sigma \rangle$. Since $P$ is a 2-group, it follows that $M$ is normal in $P$ and, in particular, must be $\tau$-invariant. Since $\sigma$ cannot be contained in any proper $\tau$-invariant subgroup of $P$, it follows that $P = \langle \sigma \rangle$ with order at most 2, and therefore that $k \leq 2$ (or $\ell \leq 2$). Suppose on the other hand that $P$ is an index 2 subgroup of $(G)\pi$; in particular, this implies that the order of $\sigma$ is 2. In this case $\tau\in (G)\pi \backslash P$. However $g^2 \in G^+$ and hence $\tau^2 \in P$. Furthermore, $\sigma$ does not lie in any proper $\tau$-invariant subgroup $Q$ of $P$ (otherwise we could use the same argument as in the previous paragraph to show that $Q = P$, a contradiction).
Now let $L := \Phi(P)$, the Frattini subgroup of $P$, and note that $P/L$ is elementary abelian. Then $L$ is $\tau$-invariant since $\tau$ normalises $P$, so $L$ does not contain $\sigma$. Setting $J := \langle L, \sigma \rangle$, it follows that $J/L$ has order 2, and conjugation by $\tau^{-1}$ maps $J/L$ to $(J^{\tau^{-1}})/L$. However, $J$ is normal in $P$ since $P/L$ is elementary abelian. In particular, since $\tau^2 \in P$, conjugation by $\tau^2$ fixes $J$ and $J/L$. Therefore repeated applications of conjugation by $\tau$ simply interchange the two (possibly equal) subgroups $J/L$ and $(J^{\tau^{-1}})/L$ of $P/L$, and each generator $\sigma^{\tau^{-i}}$ of $P$ lies in either $J$ or $J^{\tau^{-1}}$. It follows that $P/L$ is generated by $J/L$ and $J^{\tau^{-1}}/L$, and hence that $P/L \cong C_2^c$ for some $c \leq 2$. If $c =1$ then $P \cong \langle \sigma \rangle$ and this implies that $k= 2$ in case (a), or that $\ell =2$ in case (b). On the other hand, if $P/L \cong C_2^2$, then $P = \langle \sigma, \sigma^{\tau^{-1}} \rangle = \langle h_1, h_2 \rangle \pi$, and since we know that $\langle h_1, h_2 \rangle = G_{v_0,\dots,v_{s-2}}$ has order $2^2 = 4$, it follows that the order of $P$ divides 4. In particular $k$ divides 4 in case (a), or $\ell$ divides 4 in case (b). This completes the proof. \end{proof} The first assertions of Theorem \ref{MainResult} now follow directly from Proposition \ref{twotypes} together with Lemmas \ref{abelianBound} and \ref{nonabelianBound}. \section{Constructing Biquasiprimitive Pairs}\label{SecConstructions} In this section we complete the proof of Theorem \ref{MainResult}. We do this by explicitly constructing examples of biquasiprimitive pairs corresponding to the different cases of Theorem \ref{MainResult}. In each of the three cases (a)--(c) of Theorem \ref{MainResult}, the parameter $k$ (the number of simple direct factors of the socle of $G$) can take several different values.
In case (a) there are two possibilities for the value of $k$, while in each of the cases (b) and (c) there are three possibilities. Thus Theorem \ref{MainResult} gives a total of eight different possibilities for the structure of $\soc(G)$ of a biquasiprimitive pair $(\Gamma, G)$ where the number of simple direct factors is taken into account. To complete the proof, we therefore provide eight infinite families of biquasiprimitive basic pairs corresponding to these distinct cases. In Subsection \ref{Methods} we will describe two methods for constructing biquasiprimitive basic pairs. In short, Method \ref{BiMethod} uses the standard bi-Cayley graph construction described in Subsection \ref{ssBiCay}, while Method \ref{CosetMethod} is a more general coset graph construction developed from Proposition \ref{iso}. All of our constructions of biquasiprimitive pairs will use one of these two methods. The examples constructed to complete the proof of Theorem \ref{MainResult} are given in Constructions \ref{Abelian k=1} - \ref{Nonabelian ell=4} of this section. Table \ref{ConstructionTable} shows all of these constructions along with the explicit simple group $T$ used in each case. The `Methods Used' column refers to one of the two methods developed in Subsection \ref{Methods} for producing biquasiprimitive pairs. The construction numbers are included for easy reference. 
\begin{table}[h] \begin{tabular}{ l l l l l} \hline Case described in Theorem \ref{MainResult} & Value of $k$ & Simple Group $T$ & Construction \# & Method Used \\ \hline Case (a) & $k = 1$ & $\mathbb{Z}_p$, $p \equiv 1$ mod 4 & Construction \ref{Abelian k=1} & Method \ref{BiMethod}\\ & $k = 2$ & $\mathbb{Z}_p$, $p \equiv 3$ mod 4 & Construction \ref{Abelian k=2} & Method \ref{BiMethod} \\ Case (b) & $k = 1$ & Alt($n$), $n\geq5$ odd & Construction \ref{Nonabelian k=1} & Method \ref{BiMethod}\\ & $k = 2$ & Alt($n$), $n\geq5$ odd & Construction \ref{Nonabelian k=2} & Method \ref{BiMethod}\\ & $k = 4$ & $\PSL(2,p)$, $p\geq 7$ & Construction \ref{Nonabelian k=4} & Method \ref{CosetMethod}\\ Case (c) & $k = 2$ & Alt($n$), $n\geq5$ odd & Construction \ref{Nonabelian ell=1} & Method \ref{BiMethod} \\ & $k = 4$ & $\PSL(2,p)$, $p\geq 7$ & Construction \ref{Nonabelian ell=2} & Method \ref{CosetMethod}\\ & $k = 8$ & $\PSL(2,p)$, $p\geq 7$ & Construction \ref{Nonabelian ell=4} & Method \ref{CosetMethod}\\ \hline \end{tabular}\caption{Constructions of basic biquasiprimitive pairs $(\Gamma, G)$ with $\soc(G) \cong T^k$ as described in the various cases of Theorem \ref{MainResult}.}\label{ConstructionTable} \end{table} \subsection{Two Methods for Constructing Biquasiprimitive Pairs}\label{Methods} One way to construct biquasiprimitive pairs is to use the `standard' bi-Cayley construction described in Subsection \ref{ssBiCay}. Specifically, if $(\Gamma, G)\in \OG(4)$ is basic and biquasiprimitive, and the unique minimal normal subgroup $N$ of $G$ is semiregular on the two $G^+$-orbits, then we can take $\Gamma$ to be a bi-Cayley graph $\Gamma:= \BiCay(N, \emptyset, \emptyset, S)$ (for some subset $S$ of $N$ of cardinality 4). In our constructions involving bi-Cayley graphs presented in the form $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$ we will always use the natural labelling of the vertex set $V\Gamma$.
That is, we let $V\Gamma = N_0\cup N_1$ consist of two copies of the group $N$, with each vertex labelled $(n)_\varepsilon$ for $n\in N$ and $\varepsilon \in \{0,1\}$. Suppose now that $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$ where $S = S^{-1}$. Of course, such a graph is bipartite, with $N_0$ and $N_1$ forming the bipartition. In order to show that $\Gamma$ is connected, it suffices to show that the vertex set $N_0$ lies in a single connected component of $\Gamma$, or in other words that there is a path from $(1_N)_0$ to $(n)_0$ for any $n\in N$ (vertex-transitivity then ensures that this holds for $N_1$ also). Any such path must have even length and consist of repeated left multiplication in $N$ by an element of $S$ followed by an element of $S^{-1} = S$. In particular, the graph $\Gamma$ is connected if $\langle S^2\rangle = N$. Hence we have the following simple method for constructing biquasiprimitive basic pairs $(\Gamma, G)$. \begin{method}\label{BiMethod} Take a group $N = T^k$ where $T$ is a simple group and $k \geq 1$, and construct a pair $(\Gamma, G)$ with $N = \soc(G)$ as follows: \begin{enumerate} \item Let $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$, where $S\subset N$ is such that $S= S^{-1}$, $|S| = 4$, and $\langle S \rangle = \langle S^2 \rangle = N$. \item Take a group $G$ with $N \leq G \leq N_{\Aut(\Gamma)}(N)$ for which $\Gamma$ is $G$-oriented. This gives $(\Gamma, G)\in \OG(4)$. \item Show that $N$ is the unique minimal normal subgroup of $G$ to get that $(\Gamma, G)$ is biquasiprimitive. \end{enumerate} \end{method} Note that $N_{\Aut(\Gamma)}(N)$ (the normaliser of $N$ in $\Aut(\Gamma)$) was determined in \cite[Theorem 1.1]{zhou2016automorphisms}. In fact, in our constructions we will only use the following fact, which follows from \cite[Lemmas 3.2 and 3.3]{zhou2016automorphisms}. \begin{Proposition}\label{AutBiCay} Let $\Gamma = \BiCay(N,\emptyset,\emptyset, S)$ be as defined in Subsection \ref{ssBiCay} with $S = S^{-1}$.
Suppose $\alpha \in \Aut(N)$ with $S^\alpha = S$. Then the permutations $\delta_\alpha$ and $\sigma_\alpha$ of $V\Gamma$ defined by $\delta_\alpha: x_\varepsilon \mapsto (x^\alpha)_{1-\varepsilon}$ and $\sigma_\alpha: x_\varepsilon \mapsto (x^\alpha)_{\varepsilon}$, for $x\in N$ and $\varepsilon \in \{0,1\}$, are both automorphisms of $\Gamma$. Moreover, both $\delta_\alpha$ and $\sigma_\alpha$ normalise the semiregular subgroup $N\leq \Aut(\Gamma)$. \end{Proposition} More generally, we may construct biquasiprimitive pairs $(\Gamma, G)$ by using the coset graph construction. For a group $G$, a proper subgroup $S$, and an element $g \in G$, the \textit{coset graph} $\Gamma = \Cos(G, S, g)$ is the undirected graph with vertex set $\{Sx : x \in G\}$ and edges $\{Sx, Sy\}$ if and only if $xy^{-1} \in SgS$ or $yx^{-1} \in SgS$. The group $G$ acting by right multiplication on $V\Gamma$ induces a vertex-transitive and edge-transitive group of automorphisms of $\Gamma$, and this action is faithful if and only if $S$ is core-free in $G$. Furthermore, the graph $\Gamma$ is connected if and only if $\langle S, g\rangle = G$, and is $G$-oriented and 4-valent if and only if $g^{-1} \notin SgS$ and $|S: S\cap S^g| = 2$ (see the discussion at the beginning of \cite[Section 5]{al2015finite}). In summary, if $\Gamma = \Cos(G, S, g)$, then $(\Gamma, G) \in \OG(4)$ if and only if \begin{enumerate}[(1)] \item \hspace{1cm} $S$ is core-free in $G$, \hspace{.5cm} $g^{-1} \notin SgS$, \hspace{.5cm} $|S: S\cap S^g| = 2$, \hspace{.2cm} and \hspace{.2cm} $\langle S, g\rangle = G$. \end{enumerate} Moreover, for each pair $(\Gamma, G) \in \OG(4)$ there exist $S \leq G$ and $g \in G$ such that $\Gamma = \Cos(G, S, g)$ and (1) holds. We can use Proposition \ref{iso} on the structure of biquasiprimitive basic pairs $(\Gamma, G) \in \OG(4)$, together with the coset graph construction given above, to find examples of coset graphs of biquasiprimitive type.
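The coset graph construction and condition (1) lend themselves to a direct computational check. The following Python sketch is purely illustrative and is not one of the constructions of this paper: the instance $G = \Sym(4)$, $S = \langle (0\,1)\rangle$, $g = (0\,1\,2\,3)$ is an assumption chosen only because it happens to satisfy condition (1) ($S$ is core-free since $\Sym(4)$ has no normal subgroup of order 2). The script builds $\Cos(G, S, g)$ on the right cosets of $S$ and verifies that every vertex has exactly two out-neighbours.

```python
def compose(p, q):
    """Composition: apply q first, then p (permutations as tuples)."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def generated(gens):
    """The subgroup generated by gens: closure under composition."""
    elems = {tuple(range(len(gens[0])))}
    frontier = list(gens)
    while frontier:
        a = frontier.pop()
        if a in elems:
            continue
        elems.add(a)
        for b in list(elems):
            frontier.append(compose(a, b))
            frontier.append(compose(b, a))
    return elems

# Toy instance: G = Sym(4), S = <(0 1)> (order 2, core-free), g = (0 1 2 3).
e = (0, 1, 2, 3)
t = (1, 0, 2, 3)            # the transposition (0 1)
g = (1, 2, 3, 0)            # the 4-cycle (0 1 2 3)
G = generated([t, g])
S = {e, t}
SgS = {compose(s1, compose(g, s2)) for s1 in S for s2 in S}

# Condition (1): g^{-1} not in SgS, |S : S cap S^g| = 2, <S, g> = G.
Sg = {compose(compose(inverse(g), s), g) for s in S}   # the conjugate S^g
assert inverse(g) not in SgS
assert len(S) // len(S & Sg) == 2
assert generated(list(S) + [g]) == G                   # <S, g> = G

# Vertices are the right cosets Sx; there is an arc Sx -> Sy iff
# y x^{-1} lies in SgS, i.e. the out-neighbours of Sx are the cosets
# S(dx) for d in SgS (well defined: S.SgS = SgS).
cosets = {frozenset(compose(s, x) for s in S) for x in G}
def out_neighbours(Sx):
    x = next(iter(Sx))      # any representative gives the same answer
    return {frozenset(compose(s, compose(d, x)) for s in S) for d in SgS}

print(len(cosets))                               # 12 vertices
print({len(out_neighbours(c)) for c in cosets})  # every out-valency is 2
```

Of course, such a check says nothing about biquasiprimitivity; the constructions that follow are designed precisely to control $\soc(G)$.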
We begin by providing a general construction which uses a permutation group $H$ (with some prescribed properties) to produce a pair $(\Gamma, G)$ where $\Gamma$ is a coset graph for $G$, and $G$ has an index 2 subgroup isomorphic to $H$. In the remainder of this section we will show that under certain conditions the pairs $(\Gamma, G)$ constructed in this way are biquasiprimitive. \begin{Construction}\label{GeneralCoset} Take a permutation group $H$, a proper subgroup $V\leq H$, a non-identity element $y\in H$, and an automorphism $\varphi \in \Aut(H)$ such that $\varphi^2 = \iota_y$. Now consider the group $H \wr S_2$ and define two of its subgroups $G^+ := \Diag_{\varphi}(H\times H)$, and $S := \Diag_{\varphi}(V \times V)$. Also define an element $g := (y,1)(12)\in H \wr S_2$. Finally construct the graph-group pair $(\Gamma, G)$ where $G := \langle G^+, g \rangle \leq H \wr S_2$ and $\Gamma := \Cos(G, S, g)$. \end{Construction} It is clear that the construction of the group $G$ in this way corresponds to the formulation of the biquasiprimitive permutation group $G$ given in Proposition \ref{iso}. Notice in particular that using this construction, the pair $(\Gamma, G)$ is completely determined by the choices of appropriate $H, V, y$ and $\varphi$. Hence we will say that a tuple $(H, V, y, \varphi)$ is \textit{appropriate} if $H, V, y$ and $\varphi$ satisfy the conditions of Construction \ref{GeneralCoset}. In many of the constructions that follow, we will simply apply Construction \ref{GeneralCoset} on an appropriate $(H, V, y, \varphi)$ to create pairs $(\Gamma, G)$. The following lemma gives a sufficient condition for $(\Gamma, G)$ constructed in this way to be a member of $\OG(4)$. \begin{Lemma}\label{cond2} Let ($\Gamma, G$) be a graph-group pair constructed using Construction \ref{GeneralCoset} on an appropriate $(H, V, y, \varphi)$. 
Then $(\Gamma, G) \in \OG(4)$ if \begin{enumerate}[(2)] \item \hspace{1cm} $V$ is core-free in $H$, \hspace{.5cm} $y \notin VV^\varphi$, \hspace{.5cm} $|V: V\cap V^\varphi| = 2$, \hspace{.2cm} and \hspace{.2cm} $\langle V, y\rangle = H$. \end{enumerate} \end{Lemma} \begin{proof} Let $G^+$ and $S$ be the subgroups of $G$ defined in the construction, and let $\Gamma = \Cos(G, S, g)$. Suppose that (2) holds. We will show that $(\Gamma, G) \in \OG(4)$ by showing that (1) holds also. First, since $H \cong G^+$, $S \cong V$, and $V$ is core-free in $H$, it follows that $S$ is core-free in $G^+$ and hence is core-free in $G$. Next, we will show that $y\notin V V^\varphi$ implies that $g^{-1} \notin SgS$. Notice that $g^{-1} = (1,y^{-1})(12)$, while for any element $z \in SgS$ we have $z = (s,s^\varphi)(y,1)(12)(t,t^\varphi) = (syt^\varphi, s^\varphi t)(12)$ for some $s,t \in V$. Thus if $g^{-1} = z$ for some $z \in SgS$, then $1 = syt^\varphi$ and hence $y \in VV^\varphi$. For the last two conditions, notice that if we take $x \in G^+$ then $x^g = (h, h^\varphi)^g = (h^\varphi, h^y)$ for some $h\in H$. In particular, for $s \in S$ we have $s^g = (t^\varphi, t^y)$ where $t \in V$. So $s^g \in S$ if and only if $t^\varphi \in V$. Since $V \cong S$ we get that $|S:S\cap S^g| = |V: V \cap V^\varphi|$. Finally, it is easy to check that $g^2 = (y,y) \in G^+$. Hence if $\langle V, y \rangle = H$, then $\langle S, g^2 \rangle = \Diag_{\varphi}(\langle V, y\rangle\times \langle V, y\rangle) = \Diag_{\varphi}(H\times H) = G^+$ and so $\langle S, g \rangle = G$. \end{proof} Hence we have an easy condition for ensuring that pairs $(\Gamma, G)$ formed using Construction \ref{GeneralCoset} are contained in $\OG(4)$. Our next goal is to provide a simple condition under which such pairs are biquasiprimitive. \begin{Lemma}\label{Cosmin} Let $(\Gamma, G)$ be a graph-group pair constructed using Construction \ref{GeneralCoset} on an appropriate $(H, V, y, \varphi)$.
Let $G^+$ and $S$ be as defined in that construction. Then \begin{itemize} \item Every minimal normal subgroup of $G$ is contained in $G^+$. \item If $\soc(G^+) \cong \soc(H)$ is a minimal normal subgroup of $G$ then it is the unique minimal normal subgroup of $G$. \end{itemize} \end{Lemma} \begin{proof} For the first part, notice that $|G:G^+| = 2$ (since $G = \langle G^+, g\rangle$, $g$ normalises $G^+$, and $g^2 = (y,y) \in G^+$). Now consider a minimal normal subgroup $N$ of $G$ and suppose that $G^+\cap N \neq N$. By the minimality of $N$ it follows that $G^+ \cap N = 1$, implying that $G = G^+ \times N$. But this implies that $N = \langle g \rangle$ with order 2, a contradiction since $g^2 = (y,y) \neq 1$. Hence $N\leq G^+$. For the second part, suppose that $\soc(G^+)$ is a minimal normal subgroup of $G$ and take a minimal normal subgroup $N$ of $G$ with $N \neq \soc(G^+)$. Then by the first part, $N$ is normal in $G^+$. In particular, $N\cap \soc(G^+) \neq 1$, a contradiction. \end{proof} The above result gives the following corollary. \begin{Corollary}\label{CorMin} Suppose that $(\Gamma, G) \in \OG(4)$ where $(\Gamma, G)$ is constructed by Construction \ref{GeneralCoset} on an appropriate $(H, V, y, \varphi)$. Let $G^+$ and $S$ be as defined in that construction. Suppose further that $H = MV$ where $M = \soc(H) \cong T^k$ for some simple group $T$ and $k\geq1$. If $\soc(G^+) \cong \soc(H)$ is a minimal normal subgroup of $G$, then $(\Gamma, G)$ is biquasiprimitive. \end{Corollary} \begin{proof} The vertex set of $\Gamma$ is the set of right cosets of $S$ in $G$. Hence there are two $G^+$-orbits, namely $\Delta = \{Sx:x\in G^+\}$ and $\Delta' = \{Sgx:x\in G^+\}$. If $N = \soc(G^+) \cong M$ is a minimal normal subgroup of $G$ then $N$ is the unique such subgroup by Lemma \ref{Cosmin}. Moreover, the condition $H = MV$ implies that $G^+ = NS$, so $N$ is transitive on the two $G^+$-orbits $\Delta$ and $\Delta'$, and hence $G$ is biquasiprimitive on $V\Gamma$.
\end{proof} The above results now provide the following method for constructing biquasiprimitive pairs in $\OG(4)$. \begin{method}\label{CosetMethod} Take a group $M = T^k$ for some simple group $T$ and $k\geq1$, and define a group $H := MV$ where $M = \soc(H)$ and $V$ is a proper subgroup of $H$. Also take a non-identity element $y \in H$ and an automorphism $\varphi \in \Aut(H)$ with $\varphi^2 = \iota_y$, so that $(H, V, y, \varphi)$ is appropriate. \begin{enumerate} \item Apply Construction \ref{GeneralCoset} to $(H, V, y, \varphi)$ to create a pair $(\Gamma, G)$. \item Show that $H, V, y$ and $\varphi$ satisfy condition (2) of Lemma \ref{cond2} to get that $(\Gamma, G) \in \OG(4)$. \item Show that $\soc(G^+) \cong M$ is a minimal normal subgroup of $G$ to get that $(\Gamma, G)$ is biquasiprimitive (by Corollary \ref{CorMin}). \end{enumerate} \end{method} \subsection{Constructing Examples} We now provide constructions of basic biquasiprimitive pairs $(\Gamma, G) \in \OG(4)$ with the various possible structures for $\soc(G)$ as described in cases (a)--(c) of Theorem \ref{MainResult}. We will use both the bi-Cayley graph construction described in Subsection \ref{ssBiCay} (Method \ref{BiMethod}) and the coset graph construction developed in the last part of the previous subsection (Method \ref{CosetMethod}). We begin with examples of biquasiprimitive basic pairs $(\Gamma, G)$ with $\soc(G)$ abelian. Note that all 4-valent bi-Cayley graphs over an abelian group are arc-transitive \cite[Proposition 1.3]{conder2016edge}. \begin{Construction}\label{Abelian k=1} Take a prime $p \equiv 1$ mod 4 and let $q \in \mathbb{Z}_p$ be such that $q^2 \equiv -1$ mod $p$. Let $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$ with vertex set $N_0 \cup N_1$, where $N = \mathbb{Z}_p$ and $S = \{\pm 1, \pm q\}$. Define a permutation $\delta$ of the vertices of $\Gamma$ by $x^\delta_\varepsilon = (x\cdot q)_{1-\varepsilon}$ for $\varepsilon \in \{0,1\}$, and set $G := N \rtimes\langle \delta\rangle$.
\end{Construction} \begin{Lemma}\label{firstConstructionLemma} For $\Gamma, G$ as in Construction \ref{Abelian k=1}, $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (a) with $k = 1$. \end{Lemma} \begin{proof} Since $|S| = 4$ and $\langle S \rangle = \langle S^2\rangle = N$, it follows that $\Gamma$ is 4-valent and connected. Also, by Proposition \ref{AutBiCay}, $\delta \in \Aut(\Gamma)$ since it is induced by an automorphism of $N$ fixing $S$ setwise. Notice that the automorphism $\delta$ has order 4 and that the stabiliser of the vertex $(0)_0$ is $\langle \delta^2 \rangle \cong C_2$. This group has two orbits of length two on the neighbourhood of $(0)_0$, namely $\{(1)_1, (-1)_1\}$ and $\{(q)_1, (-q)_1\}$. Now, any automorphism $g\in G$ is of the form $g = n \delta^i$ with $n\in N$ and $i \in \{1,2,3,4\}$. In particular, any automorphism taking the vertex $(0)_0$ to its neighbour $(1)_1$ must be of the form $g = n \delta^i$ with $n\in N$ and $i \in \{1,3\}$. This gives just two possibilities for such an automorphism, namely $g_1 = q^3 \delta$ and $g_2 = q \delta^3$, where $q^3$ and $q$ are viewed as elements of $N$. These two automorphisms map $(1)_1$ to $(1+q)_0$ and $(1-q)_0$ respectively. Thus no element of $G$ can reverse edges and $\Gamma$ is $G$-oriented. Since the only proper non-trivial normal subgroups of $G$ are $N$ and $N\langle \delta^2 \rangle$, it follows that $(\Gamma, G)$ is basic of biquasiprimitive type. \end{proof} \begin{Construction} \label{Abelian k=2} Let $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$ where $N = \mathbb{Z}_p^2$ for a prime $p \equiv 3$ mod 4, and $S = \{\pm(1,0), \pm(0,1)\}$. Let $\delta$ be a permutation of $V\Gamma$ taking a vertex $(x,y)_{\varepsilon}$ to $(y,-x)_{1-\varepsilon}$, where $x, y\in \mathbb{Z}_p$ and $\varepsilon\in\{0,1\}$, and let $G := N\rtimes \langle \delta \rangle$.
\end{Construction} \begin{Lemma} For $\Gamma, G$ as in Construction \ref{Abelian k=2}, $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (a) with $k = 2$. \end{Lemma} \begin{proof} First note that $|S| = 4$ and $\langle S \rangle = \langle S^2\rangle = N$ so $\Gamma$ is 4-valent and connected. Also by Proposition \ref{AutBiCay}, $\delta \in \Aut(\Gamma)$. Furthermore, the automorphism $\delta$ has order $4$, and for the vertex $\alpha = (0,0)_0$, we have $G_\alpha = \langle\delta^2\rangle \cong C_2$ with two orbits of length two on the neighbourhood of $\alpha$. Moreover, any automorphism in $G$ taking the vertex $(0,0)_0$ to its neighbour $(1,0)_1$ must be either $g_1 = (0,1)\delta$ or $g_2 = (0,-1)\delta^3$ where $(0,1)$ and $(0,-1)$ are elements of $N$. However, neither of these automorphisms maps $(1,0)_1$ to $(0,0)_0$ and so no $g\in G$ can reverse edges of $\Gamma$ and $(\Gamma, G) \in \OG(4)$. To show that $(\Gamma, G)$ is basic of biquasiprimitive type, notice that the setwise stabilizer $G^+$ of the two parts $N_0$ and $N_1$ of $V\Gamma$ is $N \rtimes\langle \delta^2 \rangle$, with $\delta^2$ acting as inversion on $N$. Hence the nontrivial normal subgroups of $G^+$ are $N$, and the subgroups of $N$ isomorphic to $\mathbb{Z}_p$ (all intransitive on $N_0$ since $N$ is regular). Therefore we need to check that none of the subgroups of $N$ of order $p$ is normal in $G$. To this end, notice that the subgroups corresponding to the direct factors of $N$ are swapped by conjugation by $\delta$ in $G$, and hence are not normal. All other nontrivial proper subgroups of $N$ are of the form $\langle(1, x)\rangle$ with $x \in \mathbb{Z}^*_p$. Hence if $\langle(1,x)\rangle^\delta = \langle(x,-1)\rangle = \langle(1,x)\rangle,$ then $c(1,x) = (x,-1)$ for some $c \in \mathbb{Z}^*_p$. It follows that $c = x$ and so $x^2 \equiv -1 $ mod $p$, but this is impossible since $p \equiv 3$ mod 4.
Thus the only proper non-trivial normal subgroups of $G$ are $N$ and $G^+$, both of which are transitive on the two biparts of $V\Gamma$. \end{proof} Next we give constructions of biquasiprimitive basic pairs $(\Gamma, G)\in \OG(4)$ with $\soc(G)$ nonabelian. Note that any nonabelian simple group $T$ can be generated by an involution and an element of prime order \cite{king2017generation}. In particular, all nonabelian simple groups can be generated by two elements. In each of our constructions of biquasiprimitive pairs with nonabelian socle we will use a simple group $T$ and a generating pair $\{a,b\}$ with prescribed properties. We begin with constructions of biquasiprimitive basic pairs $(\Gamma, G)\in \OG(4)$ with $\soc(G)$ nonabelian and as described in Theorem \ref{MainResult} case (b). \begin{Construction}\label{Nonabelian k=1} Let $T$ be a nonabelian simple group, and let $\{a,b\}$ be a generating set for $T$ where $a$ is an involution and the elements $b$ and $ab$ have odd order. Let $N = T$, $S_0 = \{ab, ba\}$, $S= S_0\cup S_0^{-1}$, and let $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$. Define two permutations $\delta$ and $\sigma$ of $V\Gamma$ where $x^\delta_\varepsilon = (x^a)_{1-\varepsilon}$, and $x^\sigma_\varepsilon = (x^a)_{\varepsilon}$ for $\varepsilon \in \{0,1\}$, and set $G := N \rtimes\langle \sigma, \delta\rangle$. \end{Construction} \begin{remark} For an explicit example of a simple group $T$ and generating set $\{a,b\}$ as in Construction \ref{Nonabelian k=1} take $T$ to be the alternating group Alt($n$) for odd $n\geq 5$, and let $a = (12)(34)$ and $b = (12 \dots n)$. \end{remark} \begin{Lemma}\label{k=1lemma} For $\Gamma, G$ as in Construction \ref{Nonabelian k=1}, $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (b) with $k = 1$.
\end{Lemma} \begin{proof} Since $N$ is nonabelian and the order of $b$ is odd, it follows that $S_0 \cap S_0^{-1} = \emptyset$ and hence that $|S|= 4$ and $\Gamma$ is 4-valent. Again, using the fact that $b$ has odd order, it is easy to check that $a,b \in \langle S \rangle$ and hence that $\langle S \rangle = N$. Now consider $S^2$. This set contains the elements $abab, b^2$ and $baba$. In particular, $\langle S^2 \rangle$ contains $b$ and hence also contains $aba$. Since $aba$ and $abab$ are contained in $\langle S^2 \rangle$ and the order of $ab$ is odd, it follows that $a \in \langle S^2 \rangle $ and hence $ \langle S^2 \rangle = N$. Therefore $\Gamma $ is connected. Next, notice that both $\sigma $ and $\delta$ are induced by conjugation by $a$ in $N$ and this automorphism fixes $S$ setwise. Hence $\sigma$ and $\delta$ are automorphisms of $\Gamma$ by Proposition \ref{AutBiCay}. The stabilizer of the vertex $(1_N)_0$ is $\langle \sigma \rangle$ with two orbits on the neighbours of $(1_N)_0$, namely $\{(ab)_1, (ba)_1\}$ and $\{(b^{-1}a)_1, (ab^{-1})_1\}$. Furthermore, a straightforward check shows that the only automorphisms in $G$ mapping $(1_N)_0$ to $(ab)_1$ are $g_1 = (ab)\sigma\delta$ and $g_2 = (ba)\delta$ (where $(ab)$ and $(ba)$ denote elements of $N$) and neither of these maps $(ab)_1$ to $(1_N)_0$. This implies that $\Gamma$ is $G$-oriented and hence that $(\Gamma, G) \in \OG(4)$. Now notice that neither $\langle \sigma \rangle$ nor $\langle \delta \rangle$ is normal in $G$. On the other hand, $N$ is a normal, and hence (since $N$ is simple) minimal normal, subgroup of $G$, and is the unique such subgroup. Since $N$ clearly has two orbits on $V\Gamma$, it follows that $G$ is biquasiprimitive on the vertices of $\Gamma$. \end{proof} \begin{Construction}\label{Nonabelian k=2} Let $T$ be a nonabelian simple group, and let $\{a,b\}$ be a generating set for $T$ such that no automorphism of $T$ swaps $a$ and $b$, and the elements $a$ and $b$ have odd order.
Let $N = T\times T$, $S_0 = \{(a,b),(b,a)\}$, $S= S_0\cup S_0^{-1}$, and let $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$. Define two permutations $\delta$ and $\sigma$ of $V\Gamma$ where $(x,y)^\delta_\varepsilon = (y,x)_{1-\varepsilon}$, and $(x,y)^\sigma_\varepsilon = (y,x)_{\varepsilon}$ for $\varepsilon \in \{0,1\}$. Set $G := N \rtimes\langle \sigma, \delta\rangle$. \end{Construction} \begin{remark} For an explicit example of a simple group $T$ and generating set $\{a,b\}$ as in Construction \ref{Nonabelian k=2} take $T$ to be the alternating group Alt($n$) for odd $n\geq 5$, and let $a = (123)$ and $b = (12 \dots n)$. \end{remark} \begin{Lemma}\label{k=2lemma} For $\Gamma, G$ as in Construction \ref{Nonabelian k=2}, $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (b) with $k = 2$. \end{Lemma} \begin{proof} First notice that $S_0\cap S_0^{-1} = \emptyset$ since if $(a,b)^{-1} = (a,b)$, then both $a$ and $b$ are involutions, while if $(a,b)^{-1} = (b,a)$ then $\langle a,b \rangle = \langle a \rangle$ is cyclic, and neither of these is possible. In particular, $|S| = 4$ and $\Gamma$ is 4-valent. To see that $\Gamma$ is connected, consider the following. The projections of $\langle S\rangle $ onto the simple direct factors of $N = T\times T$ are both equal to the group $\langle a, b \rangle = T$. Hence either $\langle S \rangle = N$ or $\langle S\rangle = \{(t,t^\varphi) : t\in T\}$ for some $\varphi \in \Aut(T)$. In the latter case, $(a,b) = (a, a^\varphi)$ so $b= a^\varphi$; similarly $(b,a) = (b, b^\varphi)$ so $a = b^\varphi$. But by our assumption no such automorphism $\varphi$ exists. Hence $N = \langle S\rangle$. Finally, notice that since both $a$ and $b$ have odd order, we have $(a,b) \in \langle (a^2,b^2) \rangle$ (and similarly $(b,a) \in \langle (b^2,a^2) \rangle$).
In particular both $(a,b)$ and $(b,a)$ are contained in $\langle S^2 \rangle $, so $N = \langle S\rangle = \langle S^2 \rangle $, and $\Gamma$ is connected. Once again Proposition \ref{AutBiCay} implies that $\sigma, \delta \in \Aut(\Gamma)$. Now it is clear that $G$ acts transitively on the vertices of $\Gamma$ and the stabilizer of the vertex $(1_N)_0$ is exactly $\langle \sigma \rangle \cong C_2$ with two orbits on the neighbourhood of $(1_N)_0$. Moreover, it is easy to check that no automorphism in $G$ can reverse edges, as follows. The only automorphisms taking $(1_N)_0$ to $(a,b)_1$ are $g_1 = n_1\sigma\delta$ and $g_2 = n_2\delta$ where $n_1 = (a,b)$ and $n_2 = (b,a)$ are elements of $N$. Since neither of these maps $(a,b)_1$ to $(1_N)_0$, it follows that $\Gamma$ is $G$-oriented and $(\Gamma, G) \in \OG(4)$. Finally, since conjugation by $\sigma$ in $G$ interchanges the two simple direct factors of $N$, it follows that $N$ is a minimal normal subgroup of $G$ and so is the unique minimal normal subgroup. Of course, $N$ has two orbits on $V\Gamma$; thus $G$ is biquasiprimitive on the vertices of $\Gamma$. \end{proof} Next we give a construction of biquasiprimitive basic pairs as described in Theorem \ref{MainResult} case (b) with $k = 4$. This time we will use Method \ref{CosetMethod}. We will use the same simple group $T$ and generating pair $\{a,b\}$ in Constructions \ref{Nonabelian k=4}, \ref{Nonabelian ell=2} and \ref{Nonabelian ell=4}. Hence we begin with the following important remark. \begin{remark}\label{PSLgen} For a prime $p\geq 7$ let $T$ denote the simple group $\PSL(2,p)$. Then $T$ is generated by two elements $a$ and $b$ where $$a:=\begin{pmatrix} 0 & 1\\-1 & 0 \end{pmatrix} \hbox{ and } b:= \begin{pmatrix} 0 & 1\\-1 & 1 \end{pmatrix}.$$ Moreover $a$ and $b$ have orders 2 and 3 respectively, while $ab$ and $ab^2$ have order $p$ \cite[Section 7.5]{coxeter2013generators}.
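The order claims in this remark are easy to verify numerically for any particular prime. The following Python sketch (illustrative only; the paper's own computations were performed in \textsc{Magma}, and $p = 7$ is an arbitrary admissible choice) computes orders in $\PSL(2,p)$ by iterating matrix powers until $\pm I$ is reached.

```python
# Illustrative check for p = 7: in PSL(2,p) the matrices a, b above have
# orders 2 and 3, while a*b and a*b^2 have order p.
p = 7

def mul(M, N):
    # 2x2 matrix product over GF(p)
    return [[(M[0][0]*N[0][0] + M[0][1]*N[1][0]) % p,
             (M[0][0]*N[0][1] + M[0][1]*N[1][1]) % p],
            [(M[1][0]*N[0][0] + M[1][1]*N[1][0]) % p,
             (M[1][0]*N[0][1] + M[1][1]*N[1][1]) % p]]

def psl_order(M):
    # An element of PSL(2,p) is trivial iff its matrix is +I or -I.
    I, negI = [[1, 0], [0, 1]], [[p - 1, 0], [0, p - 1]]
    X, n = M, 1
    while X != I and X != negI:
        X, n = mul(X, M), n + 1
    return n

a = [[0, 1], [p - 1, 0]]          # the matrix a of the remark, mod p
b = [[0, 1], [p - 1, 1]]          # the matrix b of the remark, mod p
print(psl_order(a), psl_order(b),
      psl_order(mul(a, b)), psl_order(mul(a, mul(b, b))))   # 2 3 7 7
```

The same loop works for any prime $p \geq 7$, replacing the first line accordingly.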
\end{remark} \begin{Construction}\label{Nonabelian k=4} For a prime $p \geq 7$ let $T$ denote the simple group $\PSL(2,p)$ generated by two elements $a$ and $b$ such that $a$ and $b$ have orders $2$ and $3$ respectively while $ab$ and $ab^2$ have order $p$. Take the group $T \wr S_4$ with $S_4$ acting by permuting the four direct factors of $T^4$ and define the following elements of this group: $$ \tilde{\varphi} := (b,ba,ab,aba)(13),$$ $$y:= \tilde{\varphi}^2 = (bab,baba,ab^2,ab^2a),$$ $$h_1 := (a,a,a,a)(12)(34),$$ $$h_2 := h_1^{\tilde{\varphi}} = (b^{-1}aba,ab^{-1}ab,b^{-1}aba,ab^{-1}ab)(14)(23).$$ Now let $V := \langle h_1, h_2 \rangle $ and define the subgroup $H := T^4 \rtimes V \leq T \wr S_4$. Notice that conjugation by $\tilde{\varphi}$ in $T \wr S_4$ induces an automorphism $\varphi \in \Aut(H)$; in particular, $\varphi^2$ is the inner automorphism of $H$ corresponding to conjugation by $y \in H$. Finally, apply Construction \ref{GeneralCoset} using $H$, $V$, $y$ and $\varphi$ to get the pair $(\Gamma, G)$. \end{Construction} \begin{Lemma} Let $\Gamma, G$ be as in Construction \ref{Nonabelian k=4}. Then $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (b) with $k = 4$. \end{Lemma} \begin{proof} Since Construction \ref{Nonabelian k=4} is a special case of Construction \ref{GeneralCoset}, in order to show that $(\Gamma, G) \in \OG(4)$ it suffices to show that condition (2) of Lemma \ref{cond2} is satisfied. First notice that $V \cong \mathbb{Z}_2^2$ since $h_1$ and $h_2$ are commuting involutions. Also $V$ is core-free in $H$ since, for instance, $V\cap V^y = 1$. It is also easy to check that $V\cap V^\varphi = \langle h_2\rangle$, and so $|V: V\cap V^\varphi| = 2$. Now suppose that $y \in VV^\varphi$, so that $y = vu$ for some $v\in V$ and $u \in V^\varphi$. Since $y \in T^4$, this implies that $vu\in T^4$, meaning that if we take $\pi$ to be the projection map $H \rightarrow S_4$, then $\pi(v) = \pi(u)$.
Hence the only possibilities for $(v,u)$ with $y = vu$ that need to be considered are $(h_1,h_2^\varphi)$, $(h_2,h_2)$, and $(h_1h_2,h_2h_2^\varphi)$. The second possibility gives $h_2^2 = 1 \neq y$, while the first and third both give $y = h_1h_2^\varphi$. However, it is easy to check that $h_1h_2^\varphi = h_1h_1^y$ has $bab^2$ in its third coordinate while $y$ has $ab^2$ in its third coordinate. Hence $y \notin VV^\varphi$. It remains to show that $\langle V, y\rangle = H$. To prove this claim, it is sufficient to show that $T^4 \leq \langle V, y\rangle$. To this end, let $y_1 := y^{h_1}$ and $y_2 := y^{h_2}$, so that we have \begin{align*} y &= (bab,baba,ab^2,ab^2a), \\ y_1 &= (abab,ababa,b^2,b^2a)\hbox{, and} \\ y_2 &= (b^2ab^2ab,ab^2abababa,b^2abab^2,ab^2). \end{align*} We will show that $T^4 = \langle y,y_1,y_2\rangle \leq \langle V,y\rangle$. First, it is straightforward to check that the group $\langle y,y_1,y_2\rangle$ projects onto each direct factor of $T^4$. Consider now the elements of $T$ appearing as coordinates of $y, y_1$ and $y_2$. It is easy to see that the three elements $ab^2a,$ $ b^2,$ and $b^2ab^2ab$ have order 3. On the other hand, using the fact that $ab$ and $ab^2$ have order $p$, we can check that $abab, bab$ and $b^2abab^2$ also have order $p$. The remaining elements appearing as coordinates of $y, y_1$ and $y_2$ are conjugates of these elements of order $p$ and hence also have the same order. In particular, since the only elements of order 3, namely $ab^2a$, $b^2$, and $b^2ab^2ab$, appear in the fourth, third and first coordinates of $y, y_1$ and $y_2$ respectively, and $\langle y,y_1,y_2\rangle$ is a subdirect subgroup of $T^4$, it follows that $T^4 = \langle y,y_1,y_2\rangle$ and so $\langle V, y\rangle = H$. So by Lemma \ref{cond2}, $(\Gamma, G) \in \OG(4)$. Finally, we show that $(\Gamma, G)$ is a biquasiprimitive basic pair.
Since $H$ acts transitively on the simple direct factors of $T^4$, it follows that $T^4$ is a minimal normal subgroup of $H$, and is the unique such subgroup. Hence $G^+$ has a unique minimal normal subgroup $N = \Diag_\varphi(T^4 \times T^4) \cong T^4$ and this must be the unique minimal normal subgroup of $G$. Hence $(\Gamma, G)$ is biquasiprimitive by Corollary \ref{CorMin}. \end{proof} We conclude this section by giving constructions of basic biquasiprimitive pairs $(\Gamma, G) \in \OG(4)$ as described in Theorem \ref{MainResult} case (c). The first construction is similar to Construction \ref{Nonabelian k=2}. As in that construction, the alternating group Alt($n$) with odd $n\geq 5$, and generators $a = (123)$ and $b = (12\dots n)$, will have the required properties. \begin{Construction}\label{Nonabelian ell=1} Let $T$ be a nonabelian simple group, and let $\{a,b\}$ be a generating set for $T$ such that no automorphism of $T$ swaps $a$ and $b$, and the elements $a$ and $b$ have odd order. Suppose further that there is an automorphism $\theta \in \Aut(T)$ which inverts both generators $a$ and $b$. Let $N = T\times T$, $S_0 = \{(a,b),(b,a)\}$, $S= S_0\cup S_0^{-1}$, and let $\Gamma = \BiCay(N, \emptyset, \emptyset, S)$. Define two permutations $\delta$ and $\sigma$ of $V\Gamma$, where $(x,y)^\delta_\varepsilon = (y,x)_{1-\varepsilon}$, and $(x,y)^\sigma_\varepsilon = (x^{\theta},y^{\theta})_{\varepsilon}$ for $\varepsilon \in \{0,1\}$. Set $G := N \rtimes\langle \sigma, \delta\rangle$. \end{Construction} \begin{Lemma} For $\Gamma, G$ as in Construction \ref{Nonabelian ell=1}, $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (c) with $\ell = 1$. \end{Lemma} \begin{proof} Since $\Gamma$ is the same graph as in Construction \ref{Nonabelian k=2}, it follows from Lemma \ref{k=2lemma} that $\Gamma$ is 4-valent and connected.
Again, $\sigma$ and $\delta$ are induced by automorphisms of $N$ which fix $S$ and hence are automorphisms of $\Gamma$ by Proposition \ref{AutBiCay}. Moreover, a straightforward check shows that the stabilizer of the vertex $(1_N)_0$ is $\langle\sigma\rangle \cong C_2$ and also that there are only two automorphisms in $G$ mapping the vertex $(1_N)_0$ to its neighbour $(a,b)_1$, but neither of these reverses the edge $\{(1_N)_0,(a,b)_1\}$. Hence $\Gamma$ is $G$-oriented and $(\Gamma, G) \in \OG(4)$. Now notice that the setwise stabilizer of $\Delta := N_0$ is $G^+ = N\langle \sigma \rangle$ and that $T\times 1 \leq N$ is a normal subgroup of $G^+$ which is intransitive on $\Delta$. In particular, $G^+$ is not quasiprimitive on $\Delta$. Moreover $\delta$ interchanges the two simple direct factors of $N$, and hence $N$ is the unique minimal normal subgroup of $G$. Since $N$ is contained in $G^+$, it follows that $\Gamma$ is basic of biquasiprimitive type as in Theorem \ref{nonabelianBound} case (b). \end{proof} The next two constructions provide pairs $(\Gamma, G)$ as described in Theorem \ref{MainResult} case (c) with $\ell = 2$ and $\ell = 4$ respectively. In both cases $\soc(G) = T^{2\ell}$ where $T$ is the simple group $\PSL(2,p)$, and we may use the same generating pair $\{a,b\}$ as that used in Construction \ref{Nonabelian k=4} (see Remark \ref{PSLgen}). \begin{Construction}\label{Nonabelian ell=2} For a prime $p \geq 7$ let $T$ denote the simple group $\PSL(2,p)$ generated by two elements $a$ and $b$ such that $a$ and $b$ have orders $2$ and $3$ respectively while $ab$ and $ab^2$ have order $p$.
Take the group $T \wr S_4$ with $S_4$ acting by permuting the four direct factors of $T^4$ and define the following elements of this group: $$\tilde{\varphi} := (b^2ab, ab^2,b^2, a)(13)(24),$$ $$y := \tilde{\varphi}^2 = (b^2a, ab^2a, bab, b^2),$$ $$h_1 := (a,a,a,a)(12)(34).$$ Now let $V := \langle h_1 \rangle $ and define the subgroup $H := T^4 \rtimes V \leq T \wr S_4$. Notice that conjugation by $\tilde{\varphi}$ in $T \wr S_4$ induces an automorphism $\varphi \in \Aut(H)$; in particular, $\varphi^2$ is the inner automorphism of $H$ corresponding to conjugation by $y \in H$. Finally, apply Construction \ref{GeneralCoset} using $H$, $V$, $y$ and $\varphi$ to get the pair $(\Gamma, G)$. \end{Construction} \begin{Lemma} Let $\Gamma, G$ be as in Construction \ref{Nonabelian ell=2}. Then $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (c) with $\ell=2$. \end{Lemma} \begin{proof} Again, since this is a special case of Construction \ref{GeneralCoset}, in order to show that $(\Gamma, G) \in \OG(4)$ it is sufficient to show that condition (2) of Lemma \ref{cond2} is satisfied. Here $V \cong C_2$ and since $h_1^\varphi \notin V$ we have that $V$ is core-free in $H$ and $|V:V\cap V^\varphi| = 2$. It is also easy to check that $y \notin VV^\varphi$ in this case, by noticing that $y \neq h_1h_1^\varphi$. It remains to show that $\langle V, y \rangle = H$. In fact, we will show that $T^4\leq \langle y_1, y\rangle$ where $y_1 := y^{h_1} = (b^2, ab^2, ab^2a, ababa)$, from which it follows that $\langle V, y \rangle = H$. First, $\langle y, y_1 \rangle$ projects onto each factor of $T^4$ so we only need to make sure that $\langle y, y_1 \rangle$ is not a product of diagonal subgroups of $T^4$. Now $y$ has elements of order $p$ in its first and third coordinates and elements of order 3 in its second and fourth coordinates.
Thus all we need to check is that no automorphism of $T$ can map $b^2a$ to $bab$ and $b^2$ to $ab^2a$, and that no automorphism can map $ab^2a$ to $b^2$ and $ab^2$ to $ababa$. In the first case, such an automorphism must map $a$ to $(ab)^3$, which is impossible since $a$ is an involution. In the second case, such an automorphism must map $a$ to $(ab^2)^3$, which again is not an involution. Hence $T^4\leq \langle y_1, y\rangle$ and so $\langle V, y \rangle = H$. By Lemma \ref{cond2} this gives $(\Gamma, G) \in \OG(4)$. It is clear that the action of $H$ on the simple direct factors of $T^4$ has two orbits of length 2. Thus $H$ has two minimal normal subgroups isomorphic to $T^2$, and these are the only minimal normal subgroups of $H$. Furthermore, it is clear that the automorphism $\varphi$ of $H$ interchanges these normal subgroups. We will thus let $R$ and $R^\varphi$ denote these two minimal normal subgroups of $H$. Since $G^+ \cong H$, $G^+$ also has two minimal normal subgroups isomorphic to $R, R^\varphi \cong T^2$. Let $K$ and $L$ denote these minimal normal subgroups of $G^+$, so $K = \Diag_\varphi(R\times R)$ and $L = \Diag_\varphi(R^\varphi \times R^\varphi)$. Then conjugation by $g$ in $G$ interchanges $K$ and $L$ and so $G$ acts transitively on the direct factors of $\soc(G^+) = K\times L \cong T^4$. Hence $\soc(G^+)$ is a minimal normal subgroup of $G$ and $(\Gamma, G)$ is biquasiprimitive by Corollary \ref{CorMin}. \end{proof} \begin{Construction}\label{Nonabelian ell=4} For a prime $p \geq 7$ let $T$ denote the simple group $\PSL(2,p)$ generated by two elements $a$ and $b$ such that $a$ and $b$ have orders $2$ and $3$ respectively while $ab$ and $ab^2$ have order $p$.
Take the group $T \wr S_8$ with $S_8$ acting by permuting the eight direct factors of $T^8$ and define the following elements of this group: $$\tilde{\varphi} := (b,ba,ab,aba,b^2,ab,ba,ab^2a)(15)(28)(37)(46),$$ $$y := \tilde{\varphi}^2 = (1,a,ab^2a,ab^2,1,ababa,b^2,ab^2aba),$$ $$h_1 := (a,a,a,a,a,a,a,a)(12)(34)(56)(78),$$ $$h_2 := h_1^{\tilde{\varphi}} = (b^2,ab^2a,aba,b,b^2aba,ab^2ab,b^2aba,ab^2ab)(14)(23)(58)(67).$$ Now let $V := \langle h_1, h_2 \rangle $ and define the subgroup $H := T^8 \rtimes V \leq T \wr S_8$. Notice that conjugation by $\tilde{\varphi}$ in $T \wr S_8$ induces an automorphism $\varphi \in \Aut(H)$; in particular, $\varphi^2$ is the inner automorphism of $H$ corresponding to conjugation by $y \in H$. Finally, apply Construction \ref{GeneralCoset} using $H$, $V$, $y$ and $\varphi$ to get the pair $(\Gamma, G)$. \end{Construction} \begin{Lemma}\label{lastConstructionLemma} Let $\Gamma, G$ be as in Construction \ref{Nonabelian ell=4}. Then $(\Gamma, G) \in \OG(4)$ and is basic of biquasiprimitive type with $\soc(G)$ as described in Theorem \ref{MainResult} case (c) with $\ell=4$. \end{Lemma} \begin{proof} As in previous constructions, we only need to check that condition (2) of Lemma \ref{cond2} is satisfied to show that $(\Gamma, G) \in \OG(4)$. Here we have $V = \langle h_1, h_2 \rangle \cong \mathbb{Z}_2^2$ with $V\cap V^\varphi = \langle h_2\rangle$ and $V\cap V^y = 1$. Hence $V$ is core-free in $H$ and $|V: V\cap V^\varphi| = 2$. To check that $y \notin VV^\varphi$ it is sufficient to check that $y \neq h_1h_1^y$, and this is clearly true since $y\neq 1$. It remains to show that $\langle V, y \rangle = H$. Of course, it is sufficient to show that $T^8 \leq \langle V, y \rangle$, and in fact we will show that $T^8 = \langle y, y_1,y_2\rangle \leq \langle V, y\rangle$ where $y_1:= y^{h_1}$ and $y_2 := y^{h_2}$.
Hence we have $$ y = (1,a,ab^2a,ab^2,1,ababa,b^2,ab^2aba), $$ $$y_1 = (a,1,b^2a,b^2,bab,1,b^2ab,ab^2a),\hbox{ and }$$ $$y_2 = (b^2a, ab^2a,abab^2a,1,b^2ab,ab^2ab^2aba,b^2a,1). $$ It is easy to check that $\langle y,y_1,y_2\rangle$ projects onto each direct factor of $T^8$. Furthermore, notice that the identity element occurs in the first and fifth coordinates of $y$, the second and sixth coordinates of $y_1$, and the fourth and eighth coordinates of $y_2$. So if $\langle y,y_1,y_2\rangle$ is a product of diagonal subgroups of $T^8 = \prod_{i=1}^8 T_i$ then each direct factor of $\langle y,y_1,y_2\rangle$ must either be a full factor $T_j$ for some $1\leq j\leq 8$, or a diagonal subgroup of a subproduct $T_m \times T_n$ where $(m,n) \in \{(1,5),(2,6),(3,7),(4,8)\}$. However, the elements in the first and fifth coordinates of $y_2$ have orders $p$ and 2 respectively, the elements in the second and sixth coordinates of $y$ have orders 2 and $p$ respectively, the elements in the third and seventh coordinates of $y_1$ have orders $p$ and 2 respectively, and the elements in the fourth and eighth coordinates of $y$ have orders $p$ and 2 respectively. Therefore $\langle y, y_1, y_2\rangle = \prod_{i=1}^8 T_i = T^8$ and hence $H = \langle V, y\rangle$. Lemma \ref{cond2} now implies that $(\Gamma, G) \in \OG(4)$. Notice that the action of $H$ on the simple direct factors of $T^8$ has two orbits of length 4. Thus $H$ has two minimal normal subgroups isomorphic to $T^4$, and these subgroups are interchanged by the automorphism $\varphi \in \Aut(H)$. As in previous constructions, we let $R$ and $R^\varphi$ denote these two minimal normal subgroups of $H$. Since $G^+ \cong H$, $G^+$ also has two minimal normal subgroups, namely $K = \Diag_\varphi(R\times R)$ and $L = \Diag_\varphi(R^\varphi \times R^\varphi)$, and conjugation by $g \in G$ interchanges $K$ and $L$, implying that $G$ acts transitively on the direct factors of $K\times L \cong T^8$.
In particular, $\soc(G^+) = K \times L$ is a minimal normal subgroup of $G$, and hence $(\Gamma, G)$ is biquasiprimitive by Corollary \ref{CorMin}. This shows that $(\Gamma, G)$ is a basic biquasiprimitive pair as described in Theorem \ref{nonabelianBound} case (b) with $\ell=4$. \end{proof} Constructions \ref{Abelian k=1}--\ref{Nonabelian ell=4}, together with Lemmas \ref{firstConstructionLemma}--\ref{lastConstructionLemma} and the remarks in this section which give explicit simple groups and generating pairs for each construction, complete the proof of Theorem \ref{MainResult}. Note that in each of the explicit examples of biquasiprimitive pairs $(\Gamma, G)$ provided here, the group $G$ contains a subgroup $N$ acting semi-regularly with two orbits on $V\Gamma$; hence all of these examples are bi-Cayley graphs. Of course, it should not be too difficult to construct non-bi-Cayley examples using Method \ref{CosetMethod}. An interesting further question would be to determine which nonabelian simple groups $T$ can occur as the simple direct factors of the socle of $G$ where $(\Gamma, G)\in \OG(4)$ is biquasiprimitive. \subsection*{Acknowledgements} We are grateful for the opportunity to participate in the Tutte Memorial MATRIX retreat where this work began. The first author was supported by an Australian Government Research Training Program (RTP) Scholarship, and the second author acknowledges Australian Research Council Funding of DP160102323. We acknowledge the use of \textsc{Magma} \cite{MR1484478} for testing theories and constructing examples. \bibliographystyle{abbrv}
\section{Introduction} Quantum networks\cite{Kimble2008} provide a prominent template for the design and realization of scalable quantum information processing systems. A quantum network consists of nodes, which are formed from physical systems such as atoms. The nodes are linked together through quantum channels, usually with the help of photons, referred to in this context as ``flying qubits''. The interaction between light and matter enables the transfer and manipulation of information between the ``flying qubits'' and the nodes. Quantum networks may eventually play an important role in the future implementation of quantum computation, communication, and metrology\cite{Nielsen2000,Zoller2005,Gisin2007,Giovanetti2004,Giovanetti2010}. Trapped atoms in Fabry-Perot cavities have been one of the most fruitful systems for testing the fundamentals of quantum optics in the cavity-QED setting\cite{Haroche1989,WV2006,SZ97,C91}. Single atoms in Fabry-Perot cavities have been demonstrated to be good candidates for a quantum network\cite{Raimond2001,BPK12,Cirac1997,RR2015,WV2006}; however, it turns out that these cavities fail to realize large-scale networking. To overcome this issue, several types of microchip-based systems (microdisk, micropillar, microbottle, and photonic crystal cavities)\cite{Vahala2004} have been engineered and successfully utilized for implementing cavity-QED experiments\cite{A2006,A2009,S2007,OJV2013,kippenberg2005,S2014,Dayan2008} by coupling them with trapped cold atoms or quantum dots. Numerical and theoretical methods have also been developed to understand the optical properties of these systems \cite{KDL1999,Spillane2005,SP07,S2006}. Microtoroidal and microdisk cavities hold promise for realizing scalable quantum networks and are fascinating platforms for quantum optical experiments, since the small mode volume allows the electric field to reach high values inside the cavity, resulting in large light-matter coupling.
Experimentally, the strong-coupling regime has been successfully reported for such systems\cite{A2006,A2009,S2007,OJV2013}. Due to their small losses, these systems have high quality factors; in one experiment, $Q$ as large as $4\times 10^{8}$ has been realized \cite{kippenberg2005}. Moreover, by coupling a tapered fibre to the ring resonator, the efficiency of coupling light in and out of the microtoroidal resonator can reach up to 0.997, as demonstrated experimentally in Ref.~\cite{A2006}. Photonic quantum devices \cite{Brien2009} are necessary components for implementing a functional quantum network; they play an important role in storing quantum states of light as well as in controlling the propagation of light. Switching the direction of light propagation is one of the most important operations that need to be performed in a quantum network. To achieve this task a quantum light transistor\cite{V2012,B2009,YY2009,W2013} is needed; it is implemented by changing an external parameter, which results in the ``on'' or ``off'' state of the transistor, much like a gate valve in a water pipe. If this device is implemented solely through optical means, then this kind of switch is called an ``all-optical switch'' \cite{DI2005,B2009,YY2009,W2013}. Quantum transistors act as ``gate valves'' for quantum states of light\cite{Ste07,SZ97}. Both theoretical proposals \cite{Kyriienko2016,Hong2008} and actual experimental implementations \cite{V2012,W2013,B2009} of a single-photon transistor have been put forward. In this paper, we focus on the realization of a quantum transistor for an incoming coherent field. It has been theoretically proposed that quantum communication between two atomic ensembles can be achieved by means of coherent laser fields alone\cite{Duan2000}, and entanglement between two atomic ensembles has later been demonstrated experimentally in Ref.~\cite{Polzik}.
These findings demonstrate that a quantum network can be formed with only coherent laser fields, which overcomes the difficulty of creating quantum states of light for realizing quantum communication. In Ref.~\cite{PA14}, Parkins and Aoki suggested an interesting scheme for a coherent-light quantum switch by utilizing the clockwise and anti-clockwise whispering-gallery modes (WGMs) of a ring resonator. They showed that, in a certain parameter regime (strong cavity-fibre coupling along with strong cavity-atom coupling), it is possible to achieve a coherent-light switch by tuning the cavity-atom interaction strength $g$ from the weak- to the strong-coupling regime. We highlight that controlling the interaction strength $g$ in actual experiments could be very challenging, because one needs to modify the distance between the atom and the ring cavity to modify the evanescent coupling. Moreover, in order to have a functional switch, it is desirable to have an easily tunable external parameter. To address this issue, we propose to replace the two-level atom with a three-level atom in a $\Lambda$-level configuration, so that the ``gate valve'' is implemented by tuning the amplitude of the control field. In several theoretical articles~\cite{LY2012,Hong2008,YL2011} the interaction of a three-level atom with such resonators has been investigated; however, in all these papers the typical EIT condition of zero two-photon detuning has been assumed. In contrast, for our protocol it is crucial to have nonzero two-photon detuning: otherwise, owing to the coherent-population-trapping mechanism, the system is transparent for any value of the control field because of optical pumping into the dark state\cite{Ste07,SZ97}. We demonstrate that, by a suitable choice of the two-photon detuning, an all-optical switch can be efficiently implemented in our system.
In this manuscript, we argue that a quantum switch controlled by varying the amplitude of an external field is easier to implement experimentally than the previous proposals. Moreover, our protocol for a transistor works even in the bad-cavity limit, which removes the experimental effort needed to bring the system into the strong-coupling regime. However, it is important to point out that, contrary to the strong-coupling regime where the reflected light does not change its statistics, in the bad-cavity limit the light becomes strongly quantum after being reflected. The manuscript is outlined as follows. In Section \ref{Model}, we provide a theoretical description of the system and set the stage for the numerical simulations of the master equation that governs the system's dynamics. In Section \ref{quantum switch}, we demonstrate, both numerically and analytically, that our system functions as a quantum transistor for an incoming coherent field (even within the bad-cavity limit). In Section \ref{photonstatistics}, we study the statistics of the transmitted and reflected fields in the strong-coupling and bad-cavity limits. Section \ref{conclusion} is devoted to the conclusions. In Appendix \ref{appendix}, analytical results for the bad-cavity limit are derived using adiabatic elimination of the cavity modes. \section{The system and the master equation formalism} \label{Model} \begin{figure} \includegraphics[width=8cm]{fig1.pdf} \caption{ Scheme of a three-level atom coupled to a ring cavity and a tapered fibre. Input fields $a_{in,ex},b_{in,ex}$ propagate through the fibre, which is coupled with a rate $\kappa_{ex}$ to a microtoroidal cavity of resonant frequency $\omega_{r}$. A coherent probe field of frequency $\omega_{p}$ drives the mode $a$ with strength $\Omega_{p}$. The two counter-propagating WGM modes $a$ and $b$ are assumed to be coupled with a strength $h$ due to scattering from imperfections.
Both modes can leak out of the cavity with a rate $\kappa_{i}$; the outgoing fields in the fibre are given by $a_{out,ex},b_{out,ex}$ and are related to the input and intra-cavity fields through the conventional input-output relations. The degenerate cavity modes $a$ and $b$ are coupled to the three-level atom and drive the $1$-$e$ transition. A control field with amplitude $\Omega_{c}$ and frequency $\omega_c$ drives the $2$-$e$ transition. The population of the excited state $e$ can decay through two channels, either to the state $1$ or to the state $2$.} \label{scheme} \end{figure} A schematic representation of the system, along with the main parameter definitions, is given in Fig.~\ref{scheme}. It is important to point out that, once the anti-clockwise WGM mode $a$ is populated, there are two different mechanisms that can give rise to the clockwise mode $b$. The first mechanism is the evanescent coupling of strength $g$ to the atom, since the atom can re-emit the photon in both directions, clockwise and anti-clockwise. The second mechanism is a result of inhomogeneity of the dielectric medium and is described by the parameter $h$ (for more details, see Ref.~\cite{SP07}). In this paper we focus mainly on the case $h=0$. In a rotating frame $U(t)=e^{i\omega_{p}t(a^{\dag}a+b^{\dag}b-\sigma_{11})-i\omega_{c}t\sigma_{22}}$, the Hamiltonian of the system takes the form \begin{equation} \label{atom-ring} \begin{split} H&=\Delta_{r}(a^{\dag}a+b^{\dag}b)+h(a^{\dag}b+b^{\dag}a)+\Delta_{e1}\sigma_{ee}+\Delta_{21}\sigma_{22}\\ &+(g^{*}a^{\dag}\sigma_{1e}+ga\sigma_{e1})+(gb^{\dag}\sigma_{1e}+g^{*}b\sigma_{e1})\\ &+\Omega_{p}(a+a^{\dag})+\Omega_{c}(\sigma_{2e}+\sigma_{e2}),\\ \end{split} \end{equation} where $\Delta_{r}=\omega_{r}-\omega_{p}$, $\Delta_{e1}=\omega_{e1}-\omega_{p}$ and $\Delta_{21}=\omega_{21}+\omega_{c}-\omega_{p}$.
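As a sanity check on Eq.~(\ref{atom-ring}), the Hamiltonian can be assembled numerically in a truncated Fock space. The sketch below is a plain-NumPy illustration (all parameter values are arbitrary choices, not the paper's experimental numbers); it verifies that the operator is Hermitian despite the complex coupling $g$.

```python
import numpy as np

def kron3(A, B, C):
    """Tensor product over (mode a) x (mode b) x (atom)."""
    return np.kron(np.kron(A, B), C)

n = 4                                        # Fock-space truncation (assumption)
a1 = np.diag(np.sqrt(np.arange(1, n)), 1)    # single-mode annihilation operator
Im, Ia = np.eye(n), np.eye(3)                # atomic levels ordered (1, 2, e)

def sig(i, j):                               # atomic operator |i><j|
    m = np.zeros((3, 3)); m[i, j] = 1.0
    return m

# Operators embedded in the full Hilbert space
a, b = kron3(a1, Im, Ia), kron3(Im, a1, Ia)
s_ee, s_22 = kron3(Im, Im, sig(2, 2)), kron3(Im, Im, sig(1, 1))
s_1e, s_2e = kron3(Im, Im, sig(0, 2)), kron3(Im, Im, sig(1, 2))

# Illustrative parameter values (not from the paper's tables)
Dr, De1, D21, h, g, Op, Oc = 0.0, 0.0, 0.7, 0.0, 1.0 + 0.2j, 0.1, 0.5

H = (Dr * (a.conj().T @ a + b.conj().T @ b)
     + h * (a.conj().T @ b + b.conj().T @ a)
     + De1 * s_ee + D21 * s_22
     + np.conj(g) * a.conj().T @ s_1e + g * a @ s_1e.conj().T
     + g * b.conj().T @ s_1e + np.conj(g) * b @ s_1e.conj().T
     + Op * (a + a.conj().T)
     + Oc * (s_2e + s_2e.conj().T))

assert np.allclose(H, H.conj().T)            # the Hamiltonian is Hermitian
```

The same construction, extended with the dissipators, is the starting point of any brute-force simulation of the model.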
After introducing the dissipative channels, the system is described by the Lindblad master equation (we assume a zero-temperature thermal reservoir): \begin{equation} \begin{split} \dot{\rho}&=-i[H,\rho]+\kappa\mathcal{D}[a]\rho+\kappa\mathcal{D}[b]\rho\\ &+\frac{\Gamma_{e1}}{2}\mathcal{D}[\sigma_{1e}]\rho+\frac{\Gamma_{e2}}{2}\mathcal{D}[\sigma_{2e}]\rho,\\ \end{split} \label{mast.eq} \end{equation} where $\mathcal{D}[\hat{o}]\rho=2\hat{o}\rho\hat{o}^{\dag}-\hat{o}^{\dag}\hat{o}\rho-\rho\hat{o}^{\dag}\hat{o}$ is the Lindblad superoperator, $\kappa=\kappa_{\text{ex}}+\kappa_{\text{i}}$ is the total cavity decay rate, and $\Gamma_{e1}$, $\Gamma_{e2}$ are the atomic decay rates. \paragraph{Input-output formulation of the system.} The input and output fields, shown schematically in Fig.~\ref{scheme}, are related through the input-output relations (see Chapter 7 of Ref.~\cite{WM94}), given in the Heisenberg picture by \begin{eqnarray} a_{out,ex}(\tau)&=&-a_{in,ex}(\tau)+\sqrt{2\kappa_{ex}}a(\tau), \\ b_{out,ex}(\tau)&=&-b_{in,ex}(\tau)+\sqrt{2\kappa_{ex}}b(\tau), \end{eqnarray} where the input and output fields have delta-function commutation relations in time. The field $\Omega_{p}$ in the Hamiltonian corresponds to a coherent field incoming from the left; the input field incoming from the right is assumed to be in the vacuum state. The average values of the input operators are \begin{equation} \langle a_{in,ex} \rangle=-\frac{i\Omega_{p}}{\sqrt{2\kappa_{ex}}}, \quad \langle b_{in,ex} \rangle=0. \end{equation} The transmission and reflection coefficients, normalized to the input photon flux, are given by \begin{equation} \label{transmission} T=\frac{\langle a^{\dagger}_{out,ex} a_{out,ex} \rangle }{|\Omega_{p}|^{2}/2\kappa_{ex}},\quad R=\frac{\langle b^{\dagger}_{out,ex} b_{out,ex}\rangle}{|\Omega_{p}|^{2}/2\kappa_{ex}}.
\end{equation} \paragraph{Normal mode decomposition.} In order to achieve a better understanding of the system, it is instructive to rewrite the Hamiltonian in terms of the normal modes $A$ and $B$ of the cavity, defined as \begin{equation} A=\frac{a+b}{\sqrt{2}}, \quad B=\frac{a-b}{\sqrt{2}}. \end{equation} After expressing $a$ and $b$ through the normal modes, the Hamiltonian of the system reads \begin{align} \label{splitting} \nonumber H_{N.M.} &= \Delta_{e1}\sigma_{ee}+\Delta_{21}\sigma_{22}+\Omega_{c}(\sigma_{2e}+\sigma_{e2})+(\Delta_{r}+h) A^{\dagger}A \\ \nonumber &+\frac{1}{\sqrt{2}}(\Omega^{*}_{p}A+\Omega_{p}A^{\dagger})+(g_{A}^{*}A^{\dagger}\sigma_{1e}+g_{A} \sigma_{e1}A)\\ \nonumber &+(\Delta_{r}-h) B^{\dagger} B+\frac{1}{\sqrt{2}}(\Omega^{*}_{p}B+\Omega_{p}B^{\dagger}) \\ &+(g_{B}^{*}B^{\dagger}\sigma_{1e}+g_{B} \sigma_{e1}B), \end{align} where $g_{A}$ and $g_{B}$ are given by \begin{align} g_{A}&=\sqrt{2}g_{0}f(r)\cos{(kx)}, \\ g_{B}&=\sqrt{2}g_{0}f(r)\sin{(kx)}. \label{dec} \end{align} Eqs.~(\ref{dec}) show that, by properly choosing the location of the atom along the ring cavity, it is possible to decouple one of the cavity modes. In the rest of the manuscript we assume that $\sin{(kx)}=0$, so that the mode $B$ is decoupled from the atom. It is then easy to see from the expression for $H_{N.M.}$ that there are no terms in the Hamiltonian coupling the mode $B$ to the atom or to the other normal mode $A$ (the terms involving $B$ are given by the last two lines of Eq.~(\ref{splitting}), and we denote that part of the Hamiltonian by $H_{B}$). This in turn implies that the subsystems $\Sigma+A$ (here $\Sigma$ denotes the atomic subspace) and $B$ are non-interacting, and the full system Hamiltonian and density matrix are given by \begin{align} H_{N.M. }&=H_{\Sigma+A}+H_{B},\\ \label{dec1} \rho &=\rho_{\Sigma+A} \otimes \rho_{B}.
\end{align} Next, we write the master equation of the system in the normal-mode basis: \begin{align} \label{masteq2} \dot{\rho}&=-i [H_{N.M.},\rho]+\kappa {\cal D}[A]\rho+\kappa {\cal D}[B]\rho \\ \nonumber &+\frac{\Gamma_{e1}}{2}\mathcal{D}[\sigma_{1e}]\rho+\frac{\Gamma_{e2}}{2}\mathcal{D}[\sigma_{2e}]\rho. \end{align} After substituting Eq.~(\ref{dec1}) into Eq.~(\ref{masteq2}) and tracing out the subsystems $\Sigma+A$ and $B$ separately, the equations for the respective subsystems take the following form: \begin{align} \label{masteq3} \dot{\rho}_{\Sigma+A}&=-i [H_{\Sigma+A},\rho_{\Sigma+A}]+\kappa {\cal D}[A]\rho_{\Sigma+A} \\ \nonumber &+\frac{\Gamma_{e1}}{2}\mathcal{D}[\sigma_{1e}]\rho_{\Sigma+A}+\frac{\Gamma_{e2}}{2}\mathcal{D}[\sigma_{2e}]\rho_{\Sigma+A}, \\ \label{masteq4} \dot{\rho}_{B}&=-i [H_{B},\rho_{B}]+\kappa{\cal D}[B]\rho_{B}. \end{align} \paragraph{Remark.} It is important to notice that Eqs.~(\ref{masteq3}) and (\ref{masteq4}) present a significant numerical advantage over Eq.~(\ref{mast.eq}): in the former case the full-system density matrix is obtained (as a tensor product) by solving two separate equations for density matrices of dimension $O(n)$, whereas in the latter case a single equation for the full-system density matrix of dimension $O(n^{2})$ has to be solved; here $n$ is the truncation number of the Fock space of the cavity modes. \section{Quantum transistor} \label{quantum switch} The main result of this manuscript is shown in Fig.~\ref{switch_demo} and is obtained by numerical simulations of the master equations (\ref{masteq3}) and (\ref{masteq4}), which take into account all dissipative channels. Here we use the superspace method, which is outlined in great detail in Ref.~\cite{Nav15}. Moreover, analytical results for the bad-cavity limit ($g < \kappa_{ex}$), which are derived in detail in Appendix~\ref{appendix}, are also plotted in Fig.~\ref{switch_demo} for comparison with the numerics.
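The superspace machinery behind these simulations can be illustrated with a minimal example: the steady state of a single driven, damped mode (the same form as the reduced equation for the decoupled mode $B$). The sketch below vectorizes the Liouvillian in plain NumPy, using the paper's convention $\mathcal{D}[\hat o]\rho=2\hat o\rho\hat o^{\dag}-\hat o^{\dag}\hat o\rho-\rho\hat o^{\dag}\hat o$; parameter values are illustrative only, and the result is checked against the analytic amplitude $\langle a\rangle=-i\Omega/(\kappa+i\Delta)$ of the linear cavity.

```python
import numpy as np

n = 15                                   # Fock truncation; mean photon number << 1 here
a = np.diag(np.sqrt(np.arange(1, n)), 1)
ad, I = a.conj().T, np.eye(n)
Om, kap, Dr = 0.3, 1.0, 0.5              # illustrative values, not the paper's

H = Dr * ad @ a + Om * (a + ad)          # driven, detuned single mode

# Row-major vectorization: vec(A rho B) = (A kron B^T) vec(rho)
def lind(L, rate):                       # rate * (2 L rho L+ - L+L rho - rho L+L)
    Ld = L.conj().T
    return rate * (2 * np.kron(L, Ld.T)
                   - np.kron(Ld @ L, I) - np.kron(I, (Ld @ L).T))

Liou = -1j * (np.kron(H, I) - np.kron(I, H.T)) + lind(a, kap)

# Solve Liou @ vec(rho) = 0 together with the trace-1 condition (least squares)
M = np.vstack([Liou, np.eye(n).reshape(1, -1)])
rhs = np.zeros(n * n + 1, complex); rhs[-1] = 1.0
rho = np.linalg.lstsq(M, rhs, rcond=None)[0].reshape(n, n)

alpha_num = np.trace(a @ rho)
alpha_th = -1j * Om / (kap + 1j * Dr)    # mean-field steady state of the linear cavity
assert abs(alpha_num - alpha_th) < 1e-8
```

For the full system the Liouvillian matrix would act on a space of dimension $(3n^2)^2$; solving the two decoupled equations instead involves matrices of dimensions $(3n)^2$ and $n^2$, which is the numerical advantage noted in the Remark.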
In Fig.~\ref{switch_demo}, the transmission and reflection are plotted as functions of the amplitude of the control field, for the set of parameters given in the caption. For the system parameters we use realistic experimental values taken from Ref.~\cite{A2006}, where a $\text{SiO}_{2}$ microtoroidal resonator was coupled to a cloud of cold cesium atoms. For some range of $\Omega_{c}$, $T \approx 0$ and $R \approx 1$ (we remark that $T+R < 1$ due to the losses in the system), which means that the system works as a quantum transistor. To gain a better understanding of the behaviour of the transmitted and reflected intensities, in the right column of Fig.~\ref{switch_demo} we plot the amplitudes of the modes $A$ and $B$ as functions of $\Omega_{c}$. The mode $B$ is decoupled from the atom, so its population stays constant. The mode $A$, however, is strongly coupled to the atom, which in turn is coupled to the external control field; for some range of control-field amplitudes the mode $A$ is pushed out of resonance, and this range coincides with the range where the transmission goes to zero, as is apparent by comparing the first and second columns of Fig.~\ref{switch_demo}. This behaviour can be easily explained by expressing the output field through the normal modes, $a_{out,ex}=-a_{in,ex}+\sqrt{ \kappa}(A+B)$, and taking into account that in the switch region $\langle A \rangle \approx 0$. Since the normal mode $B$ is decoupled from the atom, its average value can be obtained by solving the steady-state equations for the empty cavity ($g=0$). As shown in Appendix \ref{appendix}, it follows from Eq.~(\ref{empty_cavity}) that, in the case $\Delta_{r}=0$ and $h=0$, $\sqrt{\kappa}\langle B \rangle=\langle a_{in,ex}\rangle$. Under the mean-field approximation, $T \approx \langle a^{\dagger}_{out,ex} \rangle \langle a_{out,ex} \rangle \approx 0$ because of the destructive interference between the input field and the mode $B$.
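The interference argument can be condensed into a few lines of arithmetic. The sketch below works purely at the mean-field level, on resonance and for a lossless cavity ($\kappa_i=0$, so $\kappa=\kappa_{ex}$), with an arbitrary illustrative probe amplitude: it combines the empty-cavity steady state of mode $B$ with $\langle A\rangle\approx 0$ and the input-output relations, and recovers $T\approx 0$, $R\approx 1$.

```python
import numpy as np

Op = 0.2                                  # probe amplitude (illustrative)
kex = 1.0                                 # fibre coupling; k_i = 0, so kappa = kex
kap = kex

a_in = -1j * Op / np.sqrt(2 * kex)        # coherent input amplitude
# Mode B is driven with Op/sqrt(2) and decays at rate kap, so on resonance
B = -1j * (Op / np.sqrt(2)) / kap         # empty-cavity steady state of mode B
A = 0.0                                   # switch region: mode A pushed off resonance

a_mode = (A + B) / np.sqrt(2)             # back to the propagating modes a, b
b_mode = (A - B) / np.sqrt(2)

a_out = -a_in + np.sqrt(2 * kex) * a_mode # input-output relations
b_out = np.sqrt(2 * kex) * b_mode         # (the right-hand input is vacuum)

flux_in = abs(Op) ** 2 / (2 * kex)
T = abs(a_out) ** 2 / flux_in             # mean-field transmission
R = abs(b_out) ** 2 / flux_in             # mean-field reflection
assert T < 1e-12 and abs(R - 1) < 1e-12   # ideal switching: T ~ 0, R ~ 1
```

The forward field is cancelled by the mode $B$ contribution, while the same amplitude reappears in the backward direction.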
Here, to estimate the intensity we applied a mean-field approximation, which is not assumed later in the manuscript. The condition $\langle A \rangle \approx 0$ implies that the modes $a$ and $b$ have the same amplitude with opposite sign; in a sense, the atom acts as a pump that redistributes the photon flux between these two modes, and once the two modes are equally populated the system acts as a ``mirror". From Fig.~\ref{switch_demo} it is seen that for both small and large values of the control field the atom is effectively decoupled from the cavity. For small values of $\Omega_c$, this happens simply because the atom is optically pumped to the level $2$, since $|\omega_{1e}-\omega_{2e}|\gg\gamma_{1e},\gamma_{2e}$. For large values of $\Omega_c$, the atom-field dressed energy level is detuned by a large amount $\approx \Omega_{c}\gg\gamma_{1e},\gamma_{2e}$, and the cavity mode goes out of resonance with the dressed light-atom energy state. These statements are substantiated by the analytical results for the bad-cavity limit presented in Appendix \ref{appendix}. As demonstrated there, in both limits of very small and very large $\Omega_c$ one finds $\rho_{1e}\approx 0$, which means that the absorption vanishes and light propagates through the cavity without ``feeling" the atom. An interesting feature of our system is that the regime of switch functionality can be made wider by changing the two-photon detuning; this is apparent, for example, by comparing Fig.~\ref{switch_demo}(a), where $\Delta_{21}/2\pi=70$\,MHz, with Fig.~\ref{switch_demo}(c) (strong-coupling limit), where $\Delta_{21}/2\pi=140$\,MHz. Moreover, from these figures we see that the analytical curves, given by the dashed lines, agree well with the numerical simulations of the master equations, given by the blue and red curves. Quite remarkably, this agreement partially holds even in the strong-coupling limit, as is apparent from Figs.~\ref{switch_demo}(a) and (c).
As we can see from the second column of Fig.~\ref{switch_demo}, the average value of the mode $A$ is one order of magnitude larger in the bad-cavity limit, which results in a better switch in the strong-coupling limit, where the transmission turns out to be one order of magnitude smaller than in the bad-cavity case. We note that the larger value of $\langle A \rangle$ in the bad-cavity limit also manifests itself in the reflected field having different statistics in this limit than in the strong-coupling limit, which we discuss in more detail in Section~\ref{photonstatistics}. The larger the range of $\Omega_c$ over which the system works as a quantum transistor, the better the quantum switch. To understand why this is the case, it is instructive to consider the opposite limit, in which this range is extremely narrow: experimental imperfections and noise can then easily push the system out of its regime of functionality. Bearing this in mind, we produce a series of contour plots to explore the parameter regimes where the ``range of functionality" is broad. In these contour plots, one axis represents the external control field and the other axis the physical parameter of interest. Figs.~\ref{switch_flux_strong} and \ref{switch_flux_bad} show these contour plots, with the left and right columns showing the transmission and reflection intensities, respectively. The range of functionality is given by the length of the horizontal line (for a given value of the parameter along the $y$-axis) with dark/blue colour, corresponding to $T \approx 0$. \begin{figure} \label{intenisties} \begin{tabular}{ccccc} \includegraphics[width=4cm]{fig2a.pdf} & \includegraphics[width=4.1cm]{fig2b.pdf} \\ \includegraphics[width=4cm]{fig2c.pdf} & \includegraphics[width=4.1cm]{fig2d.pdf} \\ \includegraphics[width=4cm]{fig2e.pdf} & \includegraphics[width=4.1cm]{fig2f.pdf} \\ \includegraphics[width=4cm]{fig2g.pdf} & \includegraphics[width=4.1cm]{fig2h.pdf} \\ \end{tabular} \caption{(colour online).
Normalized power transmission $T$, reflection $R$, and the populations of the normal modes $A$ and $B$ as functions of the control-field strength $\Omega_{c}$. The blue and red solid lines are the transmission and reflection resulting from the master-equation simulations, respectively. The orange and green dashed lines are the transmission and reflection resulting from adiabatic elimination, respectively. The parameters for the strong-coupling cases are $\{\Delta_{r},\Delta_{e1},h,g,\Omega_{p},\kappa_{\text{ex}},\kappa_{\text{i}},\Gamma_{e1},\Gamma_{e2}\}/2\pi=\{ 0,0,0,100,10,20,0.2,5.2,5.2\}$\,MHz with (a,b) $\Delta_{21}/2\pi=70$\,MHz and (c,d) $\Delta_{21}/2\pi=140$\,MHz. The parameters for the bad-cavity cases are $\{\Delta_{r},\Delta_{e1},h,g,\Omega_{p},\kappa_{\text{ex}},\kappa_{\text{i}},\Gamma_{e1},\Gamma_{e2}\}/2\pi=\{ 0,0,0,100,10,200,0.2,5.2,5.2\}$\,MHz with (e,f) $\Delta_{21}/2\pi=70$\,MHz and (g,h) $\Delta_{21}/2\pi=140$\,MHz.} \label{switch_demo} \end{figure} In Fig.~\ref{switch_flux_strong}(a,b) we plot $T$ and $R$ in the $\Delta_{21}$-$\Omega_{c}$ plane, in the strong-coupling limit. For small values of $\Omega_{c}$ we see a double-peak structure, which is a signature of vacuum Rabi splitting: for a small control field the excited state splits into the Jaynes--Cummings doublet, and the peaks are located at $\approx \sqrt{2}g$. The factor $\sqrt{2}$ is a result of having standing waves in the microtoroidal cavity. From these figures we conclude that, for a switch with a wide range of functionality, the optimal value of the two-photon detuning should be chosen as $\Delta_{21}\approx \sqrt{2}g$. From Fig.~\ref{switch_flux_bad}(a,b) we see that, in principle, many values of the two-photon detuning realize a good switch, because in this case there is no vacuum Rabi splitting, and consequently no Rabi oscillations occur, as photons leave the cavity before being reabsorbed by the atom. However, for consistency we also choose $\Delta_{21}\approx \sqrt{2}g$.
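The $\sqrt{2}$ enhancement can be checked directly: in the single-excitation sector, on resonance, the atom couples with equal strength $g$ to both travelling modes, which is equivalent to coupling with strength $\sqrt{2}g$ to the symmetric normal mode $A$ while mode $B$ stays dark. A minimal numerical check (illustrative units):

```python
import numpy as np

g = 1.0                                    # atom-mode coupling (illustrative units)
# Single-excitation block on resonance, basis (|e;0,0>, |1;1_a,0>, |1;0,1_b>):
# the atom couples with strength g to BOTH counter-propagating modes a and b.
H1 = np.array([[0.0, g,   g],
               [g,   0.0, 0.0],
               [g,   0.0, 0.0]])

ev = np.sort(np.linalg.eigvalsh(H1))
# One dark state at zero energy (mode B) and a doublet split by +/- sqrt(2) g:
assert np.allclose(ev, [-np.sqrt(2) * g, 0.0, np.sqrt(2) * g])
```

The antisymmetric photonic combination decouples, and the symmetric one reproduces the Jaynes--Cummings doublet at $\pm\sqrt{2}g$.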
The transmission/reflection in the $\Omega_{p}$-$\Omega_{c}$ plane is shown in Fig.~\ref{switch_flux_strong}(c,d), in the strong-coupling limit. With increasing amplitude of the input drive field, the range over which the switch functions (i.e. the dark/red region) narrows until it completely disappears. This behaviour occurs as a result of the atom becoming saturated on the $1$-$e$ transition; thus, the weaker the amplitude of the input field, the better the switch functions. We remark that the onset of saturation occurs at a smaller value of $\Omega_{p}$ in the bad-cavity limit (see Fig.~\ref{switch_flux_bad}(c,d)), because in this limit the atom gets saturated with a relatively small number of photons. Moreover, in the bad-cavity limit the range of functionality for a given value of $\Omega_{p}$ is narrower than in the strong-coupling limit. The transmission/reflection in the $g$-$\Omega_{c}$ plane is shown in Fig.~\ref{switch_flux_strong}(e,f). For very small values of $g$, photons pass through the cavity without interacting with the atom, which means $T \approx 1$ and the system does not perform as a switch. We remark that, since here $h=0$, the only way of producing anti-clockwise photons is through interaction with the atom. With increasing $g$ the range of functionality gets larger and attains a maximum at an optimal value; the transmission eventually goes to zero with increasing $g$, once destructive interference occurs between the input-field and intracavity-field amplitudes. Here we see that, for a fixed two-photon detuning $\Delta_{21}$, there is an optimal value of $g$ given by $g \approx \Delta_{21}/\sqrt{2}$, in agreement with the existence of the vacuum Rabi splitting discussed above. The same kind of behaviour is observed in the bad-cavity limit (see Fig.~\ref{switch_flux_bad}(e,f)), except that in this regime there is no optimal value of $g$ for a given two-photon detuning, because of the absence of vacuum Rabi splitting. The transmitted and reflected intensities in the $\kappa_{ex}$-$\Omega_{c}$ plane are shown in Fig.
\ref{switch_flux_strong}(g,h). Here we see that the system performs as a switch both in the strong-coupling ($\kappa_{ex}<100$) and bad-cavity ($\kappa_{ex}>100$) limits, showing a wider range of functionality in the strong-coupling limit, in agreement with our findings on the saturation behaviour. In Fig.~\ref{switch_flux_bad}(g,h) we show a zoom of Fig.~\ref{switch_flux_strong}(g,h), as it reveals an interesting feature: the system performs as a switch only in the fibre-overcoupling regime, given by the condition $\kappa_{ex}\gg\kappa_{i}$. This condition ensures that most of the light is transferred into the cavity and then collected out of it, which is obviously a necessary condition for strong light-matter interaction. \begin{figure} \begin{tabular}{ccccc} \includegraphics[width=4cm]{fig3a.pdf} & \includegraphics[width=4cm]{fig3b.pdf} \\ \includegraphics[width=4cm]{fig3c.pdf} & \includegraphics[width=4cm]{fig3d.pdf} \\ \includegraphics[width=4cm]{fig3e.pdf} & \includegraphics[width=4cm]{fig3f.pdf} \\ \includegraphics[width=4cm]{fig3g.pdf} & \includegraphics[width=4cm]{fig3h.pdf} \\ \end{tabular} \caption{(colour online). Contour plots of the transmission and reflection profiles in the strong-coupling regime. The parameters are the same as in Fig.~\ref{switch_demo}(c) except $\Omega_{p}=1$.} \label{switch_flux_strong} \end{figure} \begin{figure} \begin{tabular}{ccccc} \includegraphics[width=4cm]{fig4a.pdf} & \includegraphics[width=4cm]{fig4b.pdf} \\ \includegraphics[width=4cm]{fig4c.pdf} & \includegraphics[width=4cm]{fig4d.pdf} \\ \includegraphics[width=4cm]{fig4e.pdf} & \includegraphics[width=4cm]{fig4f.pdf} \\ \includegraphics[width=4cm]{fig4g.pdf} & \includegraphics[width=4cm]{fig4h.pdf} \\ \end{tabular} \caption{(colour online). Contour plots of the transmission and reflection profiles in the bad-cavity regime. The parameters are the same as in Fig.
\ref{switch_demo}(g) except $\Omega_{p}=1$.}\label{switch_flux_bad} \end{figure} \section{Photon statistics} \label{photonstatistics} \begin{figure} \begin{tabular}{ccccc} \includegraphics[width=4cm]{fig5a.pdf} & \includegraphics[width=4cm]{fig5b.pdf} \\ \includegraphics[width=4cm]{fig5c.pdf} & \includegraphics[width=4cm]{fig5d.pdf} \\ \includegraphics[width=4cm]{fig5e.pdf} & \includegraphics[width=4cm]{fig5f.pdf} \\ \includegraphics[width=4cm]{fig5g.pdf} & \includegraphics[width=4cm]{fig5h.pdf} \\ \end{tabular} \caption{(colour online). Contour plots of the photon statistics of the transmitted and reflected photons in the strong-coupling regime. The parameters are the same as in Fig.~\ref{switch_demo}(c) except $\Omega_{p}=1$.} \label{stat} \end{figure} In this section, we study the photon statistics of the transmitted and reflected light fields, in both the strong-coupling and bad-cavity limits. Here we mainly focus on the regions where the quantum switch is functioning, i.e. where $T \approx 0$. In this regime most of the photon flux is reflected, and our main focus is the statistics of the reflected light; however, since a small number of photons still passes in the forward direction, the two-photon correlation function of the transmitted field can also be observed through photodetection.
Two-photon correlation functions for the transmitted and reflected fields are defined through the output fields as follows: \begin{equation} \label{correlation} \begin{split} g_{\text{T}}^{(2)}(0)&=\frac{\langle{a_{\text{out},\text{ex}}^{\dag}a_{\text{out},\text{ex}}^{\dag}a_{\text{out},\text{ex}}a_{\text{out},\text{ex}}}\rangle_{\text{ss}}}{(\langle{a_{\text{out},\text{ex}}^{\dag}a_{\text{out},\text{ex}}}\rangle_{\text{ss}})^2},\\ g_{\text{R}}^{(2)}(0)&=\frac{\langle{b_{\text{out},\text{ex}}^{\dag}b_{\text{out},\text{ex}}^{\dag}b_{\text{out},\text{ex}}b_{\text{out},\text{ex}}}\rangle_{\text{ss}}}{(\langle{b_{\text{out},\text{ex}}^{\dag}b_{\text{out},\text{ex}}}\rangle_{\text{ss}})^2}.\\ \end{split} \end{equation} If $g^{(2)}(0)<1$, the field has sub-Poissonian statistics (e.g. for the field in the Fock state $|n \rangle$, it can easily be shown that $g^{(2)}(0)=1-1/n$). If $g^{(2)}(0)=1$ (e.g. for any coherent field $| \alpha \rangle$), the field has Poissonian statistics. Finally, if $g^{(2)}(0)>1$, the field has super-Poissonian statistics (e.g. for a single-mode thermal field $g^{(2)}(0)=2$) \cite{SZ97}. \begin{figure} \begin{tabular}{ccccc} \includegraphics[width=4cm]{fig6a.pdf} & \includegraphics[width=4cm]{fig6b.pdf} \\ \includegraphics[width=4cm]{fig6c.pdf} & \includegraphics[width=4cm]{fig6d.pdf} \\ \includegraphics[width=4cm]{fig6e.pdf} & \includegraphics[width=4cm]{fig6f.pdf} \\ \includegraphics[width=4cm]{fig6g.pdf} & \includegraphics[width=4cm]{fig6h.pdf} \\ \end{tabular} \caption{(colour online). Contour plots of the photon statistics of the transmitted and reflected photons in the bad-cavity regime. The parameters are the same as in Fig.~\ref{switch_demo}(g) except $\Omega_{p}=1$.} \label{statis} \end{figure} The correlation functions are plotted in Figs.~\ref{stat} and \ref{statis} for the strong-coupling and bad-cavity limits, respectively, with the control field varied along the $x$-axis and the physical parameter of interest along the $y$-axis.
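The benchmark values quoted above depend only on the photon-number distribution, since for a single mode $\langle \hat o^{\dag}\hat o^{\dag}\hat o\hat o\rangle = \langle n(n-1)\rangle$. A short NumPy check of the three cases (the truncation and state parameters are arbitrary choices):

```python
import numpy as np
from math import factorial

N = 40                                            # Fock truncation (assumption)
n_op = np.arange(N)

def g2(p):                                        # g2(0) from a number distribution
    nbar = np.sum(p * n_op)
    return np.sum(p * n_op * (n_op - 1)) / nbar ** 2

# Coherent state with |alpha|^2 = 2: Poissonian photon-number distribution
alpha2 = 2.0
fact = np.array([float(factorial(k)) for k in range(N)])
p_coh = np.exp(-alpha2) * alpha2 ** n_op / fact

# Fock state |n=3>
p_fock = np.zeros(N); p_fock[3] = 1.0

# Single-mode thermal state with nbar = 0.5
nbar = 0.5
p_th = (nbar / (1 + nbar)) ** n_op / (1 + nbar)

assert abs(g2(p_coh) - 1.0) < 1e-6                # Poissonian
assert abs(g2(p_fock) - (1 - 1 / 3)) < 1e-12      # 1 - 1/n for |n>
assert abs(g2(p_th) - 2.0) < 1e-6                 # thermal light
```

These limiting values are the yardsticks against which the contour plots of $g^{(2)}_{T}(0)$ and $g^{(2)}_{R}(0)$ should be read.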
Here we truncate the $g^{(2)}(0)$ function at values higher than 2 for convenience of graphical representation, since the regions of quantum light ($g^{(2)}(0)<1$) are then easily noticeable. We remark that this kind of truncation keeps all the information about the statistics of the light, omitting only the regions of extreme bunching, which are not of interest in the current manuscript. As we can see from the second column of Fig.~\ref{stat}, in the regime of a functional switch (here $g>\kappa_{ex}$) the reflected light remains in a coherent state. To understand why this is the case, we write an expression for the reflected output field and take into account that $\langle A \rangle \approx 0$, as was demonstrated in Fig.~\ref{intenisties}. Then we can estimate that $b_{out,ex} \approx \sqrt{\kappa}B$, where we took into account that $\langle b_{in,ex} \rangle = 0$. For the normal mode, $\sqrt{\kappa}\langle B \rangle =-i\Omega_{p}/\sqrt{2\kappa_{ex}}=\langle a_{in,ex} \rangle$, which means the mode $B$ has the same statistics as the input field (this has been numerically demonstrated in Ref.~\cite{SP07}), which is assumed to be in a coherent state. In contrast, in the bad-cavity limit the reflected light becomes quantum, with $g^{(2)}_{R}(0)\ll 1$, which corresponds to the dark/blue regions of Fig.~\ref{statis}. This is a result of a non-conventional photon blockade \cite{Dayan2008} and can be understood by studying the output reflected field. After the adiabatic elimination of the cavity modes, which we outline in great detail in Appendix~\ref{appendix}, the mapping $b_{out,ex}\rightarrow \beta_{0}+\beta_{-}\sigma_{1e}$ can be used for calculating average values. Moreover, in the case $\Delta_{r}=0$ and $h=0$, the parameter $\beta_{0}=0$, which means the reflected photons are generated solely by the atom.
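The blockade argument can be made concrete: if $b_{out,ex}$ maps onto $\beta_{-}\sigma_{1e}$ alone (i.e. $\beta_{0}=0$), the two-photon amplitude involves $\sigma_{1e}^{2}$, which vanishes identically for a two-level transition. A two-line check (the test state is an arbitrary valid density matrix):

```python
import numpy as np

# Lowering operator sigma_1e = |1><e| for the relevant pair of levels
sig = np.array([[0.0, 1.0],
                [0.0, 0.0]])

# Under adiabatic elimination (Delta_r = 0, h = 0): b_out ~ beta_- sigma_1e,
# so the g2 numerator contains sigma_1e applied twice, which is identically zero:
assert np.allclose(sig @ sig, 0)   # a two-level emitter cannot emit two photons at once

rho = np.array([[0.6, 0.2], [0.2, 0.4]])   # an arbitrary valid atomic state
num = np.trace(sig.conj().T @ sig.conj().T @ sig @ sig @ rho)
assert abs(num) < 1e-15            # <b_out+ b_out+ b_out b_out> = 0 for any state
```

Hence $g^{(2)}_{R}(0)=0$ in this idealized limit, independently of the atomic state.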
After the atom emits a photon, it is projected into its ground state, and it takes a finite amount of time, given by $1/\Gamma$, where $\Gamma$ is the Purcell-enhanced decay rate (see Appendix \ref{appendix} for the expression for $\Gamma$), for it to get re-excited and emit a photon again. This can also be shown from the analytical expression (\ref{badcavity}), from which it immediately follows that $g^{(2)}_{R}(0)=0$, corresponding to single-photon statistics (as mentioned above, for the Fock state $|n \rangle$, $g^{(2)}(0)=1-1/n$, so for $n=1$, $g^{(2)}(0)=0$). It is important to notice that $g^{(2)}_{R}(0)=0$ does not hold exactly in our numerical simulations, where $g^{(2)}_{R}(0)\approx 0.1$, since we are considering the case $\kappa=2g$ and are therefore not truly in the bad-cavity limit $\kappa\gg g$. For the transmitted field, interesting features appear because of the interference between the straight-through transmission of the coherent driving field and the forward-scattered fluorescence from the atom. The consequences of this interference for the photon statistics were first noted in Ref.~\cite{Rice88}, for a single atom interacting with a single-mode cavity in the bad-cavity limit. As we can see from the first columns of Figs.~\ref{stat} and \ref{statis}, in the regions of switch functionality the transmitted light shows bunching behaviour (dark/red regions) as a result of the destructive interference between the field radiated by the atom and the intracavity field. This behaviour, in terms of the normal modes $A$ and $B$, has been explained in great detail in Ref.~\cite{PA14}; it turns out that the bunching behaviour is a consequence of the normal mode $A$ being strongly bunched (we have numerically verified that this holds for our system in both limits).
As can be seen in Fig.~\ref{num_analyt}, the analytical and numerical results for the two-photon correlation function agree well in the bad-cavity limit, with the drawback that the numerical approach starts failing for $g^{(2)}_{R}(0)$ outside the switch-functionality region (where, whenever it obtains values bigger than one, we simply set it to zero). This happens because of the divergence of the normalized two-photon correlation function when the photon flux is zero and no photons are detected (this becomes even more apparent by considering the correlation function for the Fock state $|n \rangle$, $g^{(2)}(0)=1-1/n$, which diverges when $n=0$). This remark is substantiated by the fact that, for finite values of $h$, the analytical and numerical approaches agree increasingly well with increasing $h$, because the photon flux of the reflected field then never vanishes. To summarize, analytically $g^{(2)}_{R}(0)=0$ for all values of $\Omega_{c}$; however, photodetection is going to reveal antibunched statistics in the region of switch functionality. \begin{figure} \begin{tabular}{ccccc} \includegraphics[width=4cm]{fig7a.pdf} & \includegraphics[width=4cm]{fig7b.pdf} \\ \end{tabular} \caption{(colour online). (a) and (b): comparison between the numerical simulation and adiabatic elimination for the cases of Figs.~\ref{switch_demo}(e) and \ref{switch_demo}(g), respectively. } \label{num_analyt} \end{figure} \section{Conclusions} \label{conclusion} In this paper, we suggested a new scheme for realizing a quantum transistor for an incoming coherent field. Our scheme is based on coupling a fibre-coupled ring cavity to a single $\Lambda$-level atom. Through numerical simulations of the master equations, we have demonstrated that it is possible to tune the system with the external control field from being fully transparent to being fully reflective.
We emphasize that our proposal has the advantage of being easily implemented experimentally, compared to other proposals that may require a high degree of control over the ``gate valve" parameter, which is not easily tunable in current experiments. We have concluded that the switch functions in both the strong-coupling and weak-coupling limits, under the condition of strong fibre overcoupling (showing better performance in the strong-coupling limit), for reasonably large amplitudes of the incoming field (up to $\approx 50$\,MHz when $g>\kappa_{ex}$, and up to $\approx 40$\,MHz when $g<\kappa_{ex}$). Moreover, we have demonstrated that the regime of functionality can be extended by increasing the two-photon detuning, and we have found its optimal value. Analytical results in the bad-cavity limit are obtained through adiabatic elimination of the cavity modes, and they are in good agreement with the numerical simulations of the respective master equations. Surprisingly, this approach works even in the strong-coupling limit, showing qualitative agreement for the transmitted and reflected field intensities. It is important to mention that our protocol works only for non-zero two-photon detuning, which means we are not using the conventional EIT-based approach. By studying the statistics of the transmitted and reflected fields, we have verified that the quantum transistor does not modify the state of the incoming coherent field in the strong-coupling limit; in the bad-cavity limit, however, our system can produce quantum states of light in the reflected field. Thus, in the bad-cavity limit our system can be used as a ``black box": a quantum device that takes a coherent field as input and gives quantum light as output. Our proposal is of potential interest for realizing quantum-information protocols with coherent light states. For future projects, it would be interesting to couple several ring cavities to the fibre and study whether the system can work as a photon router for a few-photon incoming state.
It would also be of interest to implement quantum-repeater schemes such as DLCZ \cite{Duan,Malak}. In addition, there have been several interesting theoretical proposals on coupling NV centres with ring cavities for generating entangled states between the colour centres \cite{Shi2010,Yang1,Yang2,Liu2013}. Since colour centres are solid-state systems, there is no need to trap them, as is the case with cold atoms. Moreover, in a recent experiment a single-photon source based on coupling a ring cavity to SiV centres has been realized \cite{Jacques2018}. It would therefore be interesting to implement a quantum switch by coupling ring resonators to colour centres, which have a multi-level structure and can be utilized as $\Lambda$-level systems. \section{Acknowledgements} D.A., L.C.K. and L.K. acknowledge support from NRF grant 2014NRF-CRP002-042. The IHPC A*STAR Team would like to acknowledge the National Research Foundation Singapore (Grant No. NRF2017NRF-NSFC002-015, NRF2016-NRF-ANR002, NRF-CRP 14-2014-04) and A*STAR SERC (Grant No. A1685b0005). D.A. would like to acknowledge E. Munro and A. Chia for stimulating discussions, reading the manuscript, and making useful comments and suggestions.
\section{Introduction}\label{sec00} \allowdisplaybreaks Starting with the work of J. Lepowsky and S. Milne \cite{LM}, the fascinating connection between Rogers--Ramanujan-type identities and affine Kac--Moody Lie algebras has been extensively studied; see, e.g., \cite{LP,LW,MP,M} and references therein. The principal subspaces of standard modules, i.e. of integrable highest weight modules for the affine Lie algebras, introduced by B. L. Feigin and A. V. Stoyanovsky \cite{FS}, present a remarkable example of this interplay between combinatorics and algebra. In particular, their so-called quasi-particle bases provide an interpretation of the sum sides of various Rogers--Ramanujan-type identities; see \cite{B1,B2,B3,BS,FS,G,MiP}. Aside from quasi-particle bases, numerous research directions are focused on other aspects of principal subspaces and related structures, such as certain generalized principal subspaces \cite{AKS}, Feigin--Stoyanovsky's type subspaces \cite{BPT,JP,P}, realizations of Jack symmetric functions \cite{CJ}, presentations of principal subspaces \cite{CLM1,CLM2,CLM3,CPS,PS,PS2,S1,S2}, Rogers--Ramanujan-type recursions \cite{CapLM1,CapLM2}, Koszul complexes \cite{Kan}, principal subspaces for quantum affine algebras and double Yangians \cite{c01,c05,c10}, etc. The key ingredient that all the aforementioned studies have in common is the application of vertex-operator-theoretic methods. Let $\Lambda_0,\ldots ,\Lambda_l$ be the fundamental weights of the untwisted affine Lie algebra $\widetilde{\mathfrak{g}}$ associated with the simple Lie algebra $\mathfrak{g}$ of rank $l$. In this paper, we consider the principal subspaces $W_{N(k\Lambda_0)}$ of the generalized Verma modules $N(k\Lambda_0)$ and the principal subspaces $W_{L(k\Lambda_0)}$ of the standard modules $L(k\Lambda_0)$ of highest weights $k\Lambda_0$ for $\widetilde{\mathfrak{g}}$ in types $D$, $E$ and $F$.
The main result is a construction of the quasi-particle bases $\mathfrak{B}_{N(k\Lambda_0)}$ and $\mathfrak{B}_{L(k\Lambda_0)}$ of the corresponding principal subspaces:
\makeatletter
\def\thmhead@plain#1#2#3{%
  \thmname{#1}\thmnumber{\@ifnotempty{#1}{ }\@upn{#2}}%
  \thmnote{ {\the\thm@notefont#3}}}
\let\thmhead\thmhead@plain
\makeatother
\begin{thm*}[\textbf{\ref{thm_baza}}] {\slshape For any positive integer $k$ the set $\mathfrak{B}_{V}$ forms a basis of the principal subspace $W_V$ of the $\widetilde{\mathfrak{g}}$-module $V=N(k\Lambda_0),L(k\Lambda_0)$.} \end{thm*}
The bases consist of monomials of certain operators, called quasi-particles, applied to the highest weight vector, where the quasi-particle charges and energies satisfy certain difference conditions. Theorem \ref{thm_baza} for $\mathfrak{g}$ of type $A_1$ goes back to Feigin and Stoyanovsky \cite{FS}. The $\mathfrak{g}=A_l$ case was proved by G. Georgiev \cite{G} for all rectangular weights, i.e. for all integral dominant highest weights $\Lambda=k_0\Lambda_0+k_j\Lambda_j$. The bases $\mathfrak{B}_{V}$ for $\mathfrak{g}=B_l, C_l, G_2$ were obtained by the first author in \cite{B1,B2,B3}. The $\mathfrak{g}=A_l$ case for basic modules can also be recovered from the recent result of K. Kawasetsu \cite{Kawa}. Our proof of Theorem \ref{thm_baza} in types $D$, $E$ and $F$ follows the approach in \cite{G} and relies on \cite{B1,B2,JP}. In addition to Theorem \ref{thm_baza}, in Theorem \ref{thm_baza_DE} we construct quasi-particle bases of the principal subspaces $W_{L(\Lambda)}$ for all rectangular highest weights $\Lambda$ in types $D$ and $E$, thus generalizing \cite{G}. Next, in Theorem \ref{thm_prezentacija}, we derive presentations of the principal subspaces $W_{L(k\Lambda_0)}$ for all types of $\mathfrak{g}$.
The presentations of principal subspaces of standard $\widetilde{\mathfrak{g}}$-modules $L(\Lambda)$ for the level $k$ integral dominant highest weights $\Lambda$ were established by Feigin and Stoyanovsky \cite{FS} for $\mathfrak{g}=A_1$ and $k= 1$. Furthermore, the presentations were proved by C. Calinescu, Lepowsky and A. Milas \cite{CLM1,CLM2,CLM3} for $\mathfrak{g}=A_1$ and $k\geqslant 1$ and for $\mathfrak{g}=A,D,E$ and $k=1$, and by C. Sadowski \cite{S1} for $\mathfrak{g}=A_2$ and $k\geqslant 1$. As explained in \cite{CLM1}, these a priori proofs do not rely on the detailed underlying structure, such as bases of the standard modules or of the principal subspaces. Finally, Sadowski \cite{S2} proved the general case $\mathfrak{g}=A_l$ for all $k\geqslant 1$ using Georgiev's quasi-particle bases \cite{G}. In contrast with \cite{CLM1,CLM2,CLM3,S1}, our proof employs the sets $\mathfrak{B}_{L(k\Lambda_0)}$ from Theorem \ref{thm_baza}, thus solving a simpler problem. In addition, using the quasi-particle bases from Theorem \ref{thm_baza_DE} we obtain presentations of the principal subspaces $W_{L(\Lambda)}$ for all rectangular highest weights $\Lambda$ in types $D$ and $E$; see Theorem \ref{thm_prezentacija_DE}. It is worth noting that, aside from the aforementioned cases covered in \cite{CLM1,CLM2,CLM3,S1}, the a priori proof of these presentations, which were originally conjectured in \cite{S2}, is still lacking. Lastly, we use the bases from Theorems \ref{thm_baza} and \ref{thm_baza_DE} to write explicitly the character formulae for the principal subspaces. In particular, by comparing two different bases for $W_{N(k\Lambda_0)}$ in types $D$, $E$ and $F$, we obtain three new families of combinatorial identities.
\section{Preliminaries}\label{sec10} \numberwithin{equation}{section} Let $\mathfrak{g}$ be a complex simple Lie algebra of rank $l$ equipped with a nondegenerate invariant symmetric bilinear form $\langle \cdot,\cdot \rangle$ and let $\mathfrak{h} $ be its Cartan subalgebra. As the restriction of the form $\langle \cdot,\cdot \rangle$ to $\mathfrak{h}$ is nondegenerate, it defines a symmetric bilinear form on the dual $ \mathfrak{h}^{\ast}$. Let $\Pi=\left\{\alpha_1,\ldots ,\alpha_l\right\}\subset\mathfrak{h}^*$ be the basis of the root system $R$ of $\mathfrak{g}$ with respect to $\mathfrak{h}$ and let $x_\alpha\in\mathfrak{g}$ with $\alpha\in R$ be the root vectors. The simple roots $\alpha_1,\ldots ,\alpha_l$ are labelled\footnote{\label{note1} In contrast with \cite{H} and \cite[Table Fin]{K}, we reverse the labels in the Dynkin diagram of type $C_l$ in Figure \ref{figure}, so that the root lengths in the sequence $\alpha_1,\ldots ,\alpha_l$ increase for all types of $\mathfrak{g}$, which yields a simpler formulation of Theorem \ref{thm_baza}. } as in Figure \ref{figure}. We denote by $\alpha_1^{\vee},\ldots ,\alpha_l^{\vee}$ the corresponding simple coroots. Let $\lambda_1,\ldots, \lambda_l\in\mathfrak{h}^*$ be the fundamental weights, i.e. the weights such that $\lambda_i(\alpha_j^{\vee})=\delta_{ij}$. Let $Q=\sum_{i=1}^l \mathbb{Z}\alpha_i$ and $P=\sum_{i=1}^l \mathbb{Z}\lambda_i$ be the root lattice and the weight lattice of $\mathfrak{g}$ respectively. We assume that the form $\langle \cdot,\cdot \rangle$ is normalized so that $\langle \alpha,\alpha \rangle=2$ for every long root $\alpha\in R$. Hence, in particular, we have $\langle \alpha_i,\alpha_i \rangle\in \left\{2/3, 1, 2\right\}$ for all $i=1,\ldots ,l$. Denote by $R_+$ and $R_{-}$ the sets of positive and negative roots.
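To make the normalization concrete, we record the three possible shapes of a simple coroot (an illustration we add for orientation, under the identification of $\mathfrak{h}$ with $\mathfrak{h}^*$ via the form; the three squared lengths correspond, respectively, to the long roots, the short roots in types $B$, $C$, $F$, and the short roots in type $G_2$):

```latex
% With the normalization <alpha,alpha> = 2 for long roots, the coroot
% \alpha^{\vee} = 2\alpha/\langle\alpha,\alpha\rangle of a simple root
% \alpha takes one of three forms:
\[
  \alpha^{\vee}=\alpha
  \ \text{ if }\ \langle \alpha,\alpha \rangle = 2,
  \qquad
  \alpha^{\vee}=2\alpha
  \ \text{ if }\ \langle \alpha,\alpha \rangle = 1,
  \qquad
  \alpha^{\vee}=3\alpha
  \ \text{ if }\ \langle \alpha,\alpha \rangle = \tfrac{2}{3}.
\]
```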
Let $$\mathfrak{g} =\mathfrak{n}_{-}\oplus \mathfrak{h}\oplus \mathfrak{n}_{+},\qquad\text{where}\qquad \mathfrak{n}_{\pm }=\bigoplus_{\alpha \in R_{\pm}}\mathfrak{n}_{\alpha}\quad\text{and}\quad \mathfrak{n}_{\alpha} =\mathbb{C}x_{\alpha}\text{ for all }\alpha\in R,$$ be the triangular decomposition of $\mathfrak{g}$; see \cite{H} for more details on simple Lie algebras.
\tikzset{node distance=1.8em, ch/.style={circle,draw,on chain,inner sep=2pt},chj/.style={ch,join},every path/.style={shorten >=5pt,shorten <=5pt},line width=1pt,baseline=-1ex}
\newcommand{\alabel}[1]{%
\(\alpha_{\mathrlap{#1}}\)
}
\newcommand{\mlabel}[1]{%
\(#1\)
}
\let\dlabel=\alabel
\let\ulabel=\mlabel
\newcommand{\dnode}[2][chj]{%
\node[#1,label={below:\dlabel{#2}}] {};
}
\newcommand{\dnodenj}[1]{%
\dnode[ch]{#1}
}
\newcommand{\dnodebr}[1]{%
\node[chj,label={right:\dlabel{#1}}] {};
}
\newcommand{\dydots}{%
\node[chj,draw=none,inner sep=1pt] {\dots};
}
\newcommand{\QRightarrow}{%
\begingroup
\tikzset{every path/.style={}}%
\tikz \draw (0,3pt) -- ++(1em,0) (0,1pt) -- ++(1em+1pt,0) (0,-1pt) -- ++(1em+1pt,0) (0,-3pt) -- ++(1em,0) (1em-1pt,5pt) to[out=-75,in=135] (1em+2pt,0) to[out=-135,in=75] (1em-1pt,-5pt);
\endgroup
}
\newcommand{\QLeftarrow}{%
\begingroup
\tikz \draw[shorten >=0pt,shorten <=0pt] (0,3pt) -- ++(-1em,0) (0,1pt) -- ++(-1em-1pt,0) (0,-1pt) -- ++(-1em-1pt,0) (0,-3pt) -- ++(-1em,0) (-1em+1pt,5pt) to[out=-105,in=45] (-1em-2pt,0) to[out=-45,in=105] (-1em+1pt,-5pt);
\endgroup
}
\begin{align*} &A_l &&\hspace{-15pt} \begin{tikzpicture}[start chain] \dnode{1} \dnode{2} \dydots \dnode{l-1} \dnode{l} \end{tikzpicture} &&B_l &&\hspace{-15pt} \begin{tikzpicture}[start chain] \dnode{1} \dnode{2} \dydots \dnode{l-1} \dnodenj{l} \path (chain-4) -- node[anchor=mid] {\(\Rightarrow\)} (chain-5); \end{tikzpicture} \\\\ &C_l &&\hspace{-15pt} \begin{tikzpicture}[start chain] \dnode{l} \dnode{l-1} \dydots \dnode{2} \dnodenj{1} \path (chain-4) -- node[anchor=mid] {\(\Leftarrow\)} (chain-5);
\end{tikzpicture} &&D_l &&\hspace{-15pt} \begin{tikzpicture} \begin{scope}[start chain] \dnode{1} \dnode{2} \node[chj,draw=none] {\dots}; \dnode{l-2} \dnode{l-1} \end{scope} \begin{scope}[start chain=br going above] \chainin(chain-4); \dnodebr{l} \end{scope} \end{tikzpicture} \\\\ &E_6 &&\hspace{-15pt} \begin{tikzpicture} \begin{scope}[start chain] \foreach \dyni in {1,...,5} { \dnode{\dyni} } \end{scope} \begin{scope}[start chain=br going above] \chainin (chain-3); \dnodebr{6} \end{scope} \end{tikzpicture} &&E_7 &&\hspace{-15pt} \begin{tikzpicture} \begin{scope}[start chain] \foreach \dyni in {1,...,6} { \dnode{\dyni} } \end{scope} \begin{scope}[start chain=br going above] \chainin (chain-3); \dnodebr{7} \end{scope} \end{tikzpicture} \\\\ &E_8 &&\hspace{-15pt} \begin{tikzpicture} \begin{scope}[start chain] \foreach \dyni in {1,...,7} { \dnode{\dyni} } \end{scope} \begin{scope}[start chain=br going above] \chainin (chain-5); \dnodebr{8} \end{scope} \end{tikzpicture} &&F_4 &&\hspace{-15pt} \begin{tikzpicture}[start chain] \dnode{1} \dnode{2} \dnodenj{3} \dnode{4} \path (chain-2) -- node[anchor=mid] {\(\Rightarrow\)} (chain-3); \end{tikzpicture} \\\\ &G_2 &&\hspace{-15pt} \begin{tikzpicture}[start chain] \dnodenj{1} \dnodenj{2} \path (chain-1) -- node {\(\Rrightarrow\)} (chain-2); \end{tikzpicture} \end{align*} \begingroup\vspace*{-\baselineskip} \captionof{figure}{Finite Dynkin diagrams}\label{figure} \vspace*{\baselineskip}\endgroup The affine Kac--Moody Lie algebra $\widetilde{\mathfrak{g}}$ associated to $\mathfrak{g}$ is defined by $$\widetilde{ \mathfrak{g} }=\mathfrak{g}\otimes \mathbb{C}[t,t^{-1}]\oplus \mathbb{C}c\oplus \mathbb{C}d,$$ where the elements $x(m)=x\otimes t^m$ for $x\in \mathfrak{g}$ and $m\in\mathbb{Z}$ are subject to relations \begin{gather} \left[c,\widetilde{ \mathfrak{g}}\right]=0,\qquad \left[d,x(m)\right]=mx(m),\nonumber\\ \left[x(m),y(n)\right]= \left[x, y\right](m+n) + \left\langle x, y \right\rangle m \delta_{m+n\,0}\, c.\label{defrel} 
\end{gather} We denote by $\alpha_0,\alpha_1,\ldots ,\alpha_l$ and $\alpha_0^\vee,\alpha_1^{\vee},\ldots ,\alpha_l^\vee$ the simple roots and the simple coroots of $\widetilde{\mathfrak{g}}$. Let $\Lambda_i$ be the fundamental weights of $\widetilde{\mathfrak{g}}$, i.e. the weights such that $\Lambda_i (d)=0$ and $\Lambda_i (\alpha_j^{\vee})=\delta_{ij}$ for all $i,j=0,\ldots , l$. For more details on affine Lie algebras see \cite{K}. Let $k_0,\ldots ,k_l$ be nonnegative integers such that $k=k_0+\ldots +k_l$ is positive and let $\lambda=k_1\lambda_1 +\ldots +k_l \lambda_l$. Denote by $U_{\lambda}$ the finite-dimensional irreducible $\mathfrak{g}$-module of highest weight $\lambda$. The generalized Verma $\widetilde{\mathfrak{g}}$-module $N(\Lambda)$ of highest weight $\Lambda=k_0\Lambda_0+\ldots +k_l\Lambda_l$ and of level $k$ is defined as the induced $\widetilde{\mathfrak{g}}$-module $$N(\Lambda)= U(\widetilde{\mathfrak{g}})\otimes_{U(\widetilde{\mathfrak{g}}^{\geqslant 0})} U_{\lambda},$$ where the action of the Lie algebra $$\widetilde{\mathfrak{g}}^{\geqslant 0}=\bigoplus_{n\geqslant 0} (\mathfrak{g}\otimes t^n) \oplus\mathbb{C} c\oplus\mathbb{C} d $$ on $U_{\lambda}$ is given by $$ \mathfrak{g}\otimes t^n \cdot u=0\text{ for all }n> 0,\quad c\cdot u=k u\quad\text{and}\quad d\cdot u=0\qquad\text{for all }u\in U_{\lambda}. $$ Denote by $L(\Lambda)$ the standard $\widetilde{\mathfrak{g}}$-module of highest weight $ \Lambda $ and of level $k$, i.e. the integrable highest weight $\widetilde{\mathfrak{g}}$-module which equals the unique simple quotient of the generalized Verma module $N(\Lambda)$. In particular, for $\lambda=0$ we obtain the generalized Verma $\widetilde{\mathfrak{g}}$-module $N(k\Lambda_0)$ of highest weight $k\Lambda_0$ and level $k=k_0$ which possesses a vertex operator algebra structure. 
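As a small worked instance of relations \eqref{defrel} (a standard computation which we include for orientation), one sees directly how the level enters the action on the highest weight vector of $N(k\Lambda_0)$:

```latex
% With \lambda = 0, for any root \alpha and the highest weight vector
% v \in N(k\Lambda_0) one computes
\[
  x_{\alpha}(1)\, x_{-\alpha}(-1)\, v
  = [x_{\alpha}(1), x_{-\alpha}(-1)]\, v
  = \big( [x_{\alpha}, x_{-\alpha}](0)
    + \langle x_{\alpha}, x_{-\alpha} \rangle\, c \big) v
  = k\, \langle x_{\alpha}, x_{-\alpha} \rangle\, v,
\]
% since x_{\alpha}(1) v = 0, the Cartan element [x_{\alpha}, x_{-\alpha}]
% acts on v by \lambda = 0, and c acts by the level k.
```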
Moreover, $L(k\Lambda_0)$ is a simple vertex operator algebra and the level $k$ standard $\widetilde{\mathfrak{g}}$-modules are $L(k\Lambda_0)$-modules; see, e.g., \cite{LL,MP}. Finally, recall that the Poincar\'{e}--Birkhoff--Witt theorem for the universal enveloping algebra implies the vector space isomorphism $$N(k\Lambda_0)\cong U(\widetilde{\mathfrak{g}}^{< 0}),\quad\text{where}\quad \widetilde{\mathfrak{g}}^{< 0}=\bigoplus_{n < 0} (\mathfrak{g}\otimes t^n) .$$ For more details on the representation theory of affine Lie algebras see \cite{K}. \section{Quasi-particle bases of principal subspaces}\label{sec20} In this section, we state our main results, Theorems \ref{thm_baza} and \ref{thm_baza_DE}. \subsection{Quasi-particles}\label{subsec21} Introduce the following subalgebras of $\widetilde{\mathfrak{g}}$: $$ \widetilde{\mathfrak{n}}_+=\mathfrak{n}_{+} \otimes \mathbb{C}[t,t^{-1}],\qquad \widetilde{\mathfrak{n}}_+^{\geqslant 0}=\mathfrak{n}_+ \otimes \mathbb{C}[t]\qquad\text{and}\qquad \widetilde{\mathfrak{n}}_+^{< 0}=\mathfrak{n}_+ \otimes t^{-1}\mathbb{C}[t^{-1}]. $$ Let $\Lambda$ be an arbitrary integral dominant weight of $\widetilde{\mathfrak{g}}$. Denote by $V$ the generalized Verma module $N( \Lambda )$ or the standard module $L(\Lambda )$ with a highest weight vector $v_V $. Following Feigin and Stoyanovsky \cite{FS}, we define the {\em principal subspace} $W_V$ of $V$ by \begin{equation*} W_V=U(\widetilde{\mathfrak{n}}_+) v_{V}.
\end{equation*} Consider the vertex operators $$x_{\alpha_i}(z)=\sum_{m\in\mathbb{Z}} x_{\alpha_i}(m) z^{-m-1}\in \mathop{\mathrm{Hom}}(V,V((z)))\subset (\mathop{\mathrm{End}} V)[[z^{\pm 1}]], \quad i=1,\ldots ,l.$$ Note that \eqref{defrel} implies $[x_{\alpha_i}(z_1),x_{\alpha_i}(z_2)]=0$ so that \begin{equation}\label{quasi} x_{n\alpha_i}(z)=\sum_{m\in\mathbb{Z}} x_{n\alpha_i}(m) z^{-m-n}=\underbrace{x_{\alpha_i}(z)\cdots x_{\alpha_i}(z)}_{n\text{ times}}=x_{\alpha_i}(z)^n \end{equation} is a well-defined element of $\mathop{\mathrm{Hom}}(V,V((z)))$ for all $n\geqslant 1$. In fact, $x_{n\alpha_i}(z)$ is the vertex operator associated with the vector $x_{\alpha_i}(-1)^n v_V \in V$. As in \cite{G}, define the {\em quasi-particle of color $i$, charge $n$ and energy $-m$} as the coefficient $x_{n\alpha_i}(m)\in\mathop{\mathrm{End}} V$ of \eqref{quasi}. Consider the quasi-particle monomial \begin{equation}\label{monomial}\tag{$m$} b= \left(x_{n_{r_{l}^{(1)},l}\alpha_{l}}(m_{r_{l}^{(1)},l})\ldots x_{n_{1,l}\alpha_{l}}(m_{1,l})\right)\ldots \left( x_{n_{r_{1}^{(1)},1}\alpha_{1}}(m_{r_{1}^{(1)},1})\ldots x_{n_{1,1}\alpha_{1}}(m_{1,1})\right) \end{equation} in $\mathop{\mathrm{End}} V$. Note that the quasi-particle colors in \eqref{monomial} are increasing from right to left and that the integers $r_j^{(1)}\geqslant 0$ with $j=1,\ldots ,l$ denote the parts of the conjugate partition of $n_j=n_{r_j^{(1)},j}+\cdots +n_{1,j}$; see \cite{G,B1,B2,B3} for more details. It is convenient to write quasi-particle monomial \eqref{monomial} more briefly as \begin{equation}\label{briefly} b=b_{\alpha_l}\,\cdots \,b_{\alpha_2}b_{\alpha_1} ,\quad\text{where}\quad b_{\alpha_i}=x_{n_{r_{i}^{(1)},i}\alpha_{i}}(m_{r_{i}^{(1)},i})\ldots x_{n_{1,i}\alpha_{i}}(m_{1,i}) \text{ for } i=1,\ldots ,l. 
\end{equation} \subsection{Quasi-particle bases for \texorpdfstring{$\Lambda=k\Lambda_0$}{Lambda=kLambda0}}\label{subsec22} Suppose that $\Lambda=k\Lambda_0$ for some positive integer $k$ so that $V$ denotes the generalized Verma module $N( k\Lambda_0 )$ or the standard module $L(k\Lambda_0 )$. We introduce certain difference conditions for energies and charges of quasi-particles in \eqref{monomial}. First, for the adjacent quasi-particles of the same color we require that \begin{align} &\text{ for all}\quad i=1,\ldots ,l\quad\text{and}\quad p=1,\ldots, r_{i}^{(1)}-1\nonumber\\ &\qquad \text{if}\quad n_{p+1,i}=n_{p,i}\quad\text{then}\quad m_{p+1,i}\leqslant m_{p,i} -2 n_{p,i}.\label{d1}\tag{$c_1$} \end{align} Next, we turn to the difference conditions which describe the interaction of two quasi-particles of adjacent colors. For all $i=1,\ldots ,l$ define \begin{equation}\label{niovi3} \nu_i=\begin{cases} 1,&\text{if } \langle \alpha_i,\alpha_i \rangle =2,\\ 2,&\text{if } \langle \alpha_i,\alpha_i \rangle =1,\\ 3,&\text{if } \langle \alpha_i,\alpha_i \rangle =\frac{2}{3} \end{cases} \quad\text{and}\quad i'= \begin{cases} l-2,&\text{if } i=l\text{ and }\mathfrak{g}=D_l,\\ 3,&\text{if } i=l\text{ and }\mathfrak{g}=E_6,E_7,\\ 5,&\text{if } i=l\text{ and }\mathfrak{g}=E_8,\\ i-1,&\text{otherwise.} \\ \end{cases} \end{equation} Introduce the following difference conditions: \begin{align} &\text{for all}\quad i=1,\ldots ,l\quad\text{and}\quad p=1,\ldots, r_{i}^{(1)}\nonumber\\ &\qquad m_{p,i}\leqslant -n_{p,i} + \sum_{q=1}^{r_{i'}^{(1)}}\min\left\{\textstyle\frac{\nu_{i} }{\nu_{i'}}n_{q,i'},n_{p,i}\right\} - 2(p-1) n_{p,i} ,\label{d2}\tag{$c_2$} \end{align} where we set $r_0^{(1)}=0$ so that the sum in \eqref{d2} is zero for $i=1$. In the end, we impose the following restrictions on the quasi-particle charges: \begin{equation}\label{d3}\tag{$c_3$} n_{p,i}\leqslant k \nu_i \quad \text{for all}\quad i=1,\ldots ,l\quad\text{and}\quad p=1,\ldots, r_{i}^{(1)}. 
\end{equation} Let $B_{N(k\Lambda_0)} $ be the set of all monomials \eqref{monomial}, regarded as elements of $\mathop{\mathrm{End}} N(k\Lambda_0)$, satisfying conditions \eqref{d1} and \eqref{d2}. Moreover, let $B_{L(k\Lambda_0)} $ be the set of all monomials \eqref{monomial}, regarded as elements of $\mathop{\mathrm{End}} L(k\Lambda_0)$, satisfying \eqref{d1}, \eqref{d2} and \eqref{d3}. Finally, let $$ \mathfrak{B}_{V}=\left\{bv_V : b\in B_{V}\right\}\subset W_V \quad\text{for}\quad V=N(k\Lambda_0),L(k\Lambda_0). $$ \begin{thm}\label{thm_baza} For any positive integer $k$ the set $\mathfrak{B}_{V}$ forms a basis of the principal subspace $W_V$ of the $\widetilde{\mathfrak{g}}$-module $V=N(k\Lambda_0),L(k\Lambda_0)$. \end{thm} Even though Theorem \ref{thm_baza} is formulated for an arbitrary untwisted affine Lie algebra $\widetilde{\mathfrak{g}}$, we only give the proof for $\mathfrak{g}$ of types $D$, $E$ and $F$; see Sections \ref{sec50} and \ref{sec60}. The proofs for the remaining types can be found in \cite{B1,B2,B3,G}. \subsection{Quasi-particle bases for rectangular weights in types \texorpdfstring{$D$}{D} and \texorpdfstring{$E$}{E}}\label{subsec23} Suppose that the affine Lie algebra $\widetilde{\mathfrak{g}}$ is of type $D_l^{(1)}$, $E_6^{(1)}$ or $E_7^{(1)}$. Let $\Lambda$ be a {\em rectangular weight}, i.e. a weight of the form \begin{equation} \label{lambdaDE} \Lambda = k_0\Lambda_0+k_j\Lambda_j, \end{equation} where $k_0,k_j$ are positive integers and $\Lambda_j$ is a fundamental weight of level one; cf. \cite{G}. Recall that $j=1,l-1,l$ for $\widetilde{\mathfrak{g}}=D_l^{(1)}$, $j=1, 6$ for $\widetilde{\mathfrak{g}}=E_6^{(1)}$ and $j=1$ for $\widetilde{\mathfrak{g}}=E_7^{(1)}$; see \cite{K}. Denote by $k=k_0+k_j$ the level of $\Lambda$. Define \begin{equation} \label{jde67} j_{t}=\begin{cases} 0,&\text{if }\ \ 1 \leqslant t \leqslant k_0,\\ j,& \text{if}\ \ k_0 < t \leqslant k_0+k_j.
\end{cases} \end{equation} Introduce the following difference condition: \begin{align} &\text{for all}\quad i=1,\ldots ,l\quad\text{and}\quad p=1,\ldots, r_{i}^{(1)}\nonumber\\ &\qquad m_{p,i}\leqslant -n_{p,i} + \sum_{q=1}^{r_{i'}^{(1)}}\min\left\{\textstyle n_{q,i'},n_{p,i}\right\} - 2(p-1) n_{p,i} -\sum_{t=1}^{n_{p,i}}\delta_{i j_t}.\label{d2DE67}\tag{$c'_2$} \end{align} Note that this condition differs from \eqref{d2} by the new term $\sum_{t=1}^{n_{p,i}}\delta_{i j_t}$. For a given rectangular weight $\Lambda$ denote by $B_{L(\Lambda)}$ the set of all monomials \eqref{monomial}, regarded as elements of $\mathop{\mathrm{End}} L(\Lambda)$, satisfying \eqref{d1}, \eqref{d2DE67} and \eqref{d3}. Finally, let $$ \mathfrak{B}_{L(\Lambda)}=\left\{bv_{L(\Lambda)}:b\in B_{L(\Lambda)}\right\}\subset W_{L(\Lambda)}. $$ \begin{thm}\label{thm_baza_DE} Let $\widetilde{\mathfrak{g}}$ be the affine Lie algebra of type $D_l^{(1)}$, $E_6^{(1)}$ or $E_7^{(1)}$. For any rectangular weight $\Lambda$ the set $\mathfrak{B}_{L(\Lambda)}$ forms a basis of the principal subspace $W_{L(\Lambda)}$. \end{thm} The proof of Theorem \ref{thm_baza_DE} is given in Section \ref{sec60}. \section{Presentations of the principal subspaces \texorpdfstring{$W_{L(k\Lambda_0)}$}{W-L}}\label{sec30} In this section, we give the presentations of the principal subspaces $W_{L(k\Lambda_0)}$ for an arbitrary untwisted affine Lie algebra $\widetilde{\mathfrak{g}}$; see Theorem \ref{thm_prezentacija} below. Next, in Theorem \ref{thm_prezentacija_DE}, we give the presentations of $W_{L(\Lambda)}$ for all rectangular weights $\Lambda$ in types $D$ and $E$. As pointed out in Section \ref{sec00}, the presentations of the principal subspaces of certain standard $\widetilde{\mathfrak{g}}$-modules in types $A$, $D$ and $E$ were originally found and proved in \cite{FS,CLM1,CLM2,CLM3,S1,S2}, while their general form was conjectured in \cite{S2}. Let $\Lambda$ be an integral dominant highest weight.
Consider the natural surjective map \begin{align} f_{L(\Lambda)}\,\colon\, U(\widetilde{\mathfrak{n}}_+)\,&\,\to\, W_{L(\Lambda)}\label{map}\\ a\,&\,\mapsto\, a\cdot v_{L(\Lambda)}.\nonumber \end{align} For any $i=1,\ldots ,l$ and integer $ m\geqslant k \nu_{i} +1$ define the elements $R_{\alpha_i} (-m)\in U(\widetilde{\mathfrak{n}}_+)$ by \begin{align*} R_{\alpha_i} (-m) =\sum_{\substack{m_1,\ldots ,m_{k\nu_{i}+1}\leqslant -1 \\ m_1+ \ldots + m_{k\nu_{i}+1}=-m }} x_{\alpha_i}(m_1)\ldots x_{\alpha_i}(m_{k\nu_{i}+1}). \end{align*} Let $I_{L(k\Lambda_0)}$ be the left ideal in the universal enveloping algebra $ U(\widetilde{\mathfrak{n}}_+)$ defined by \begin{align}\label{ideal} I_{L(k\Lambda_0)}\,=\,U(\widetilde{\mathfrak{n}}_+)\,\widetilde{\mathfrak{n}}_+^{\geqslant 0} \, + \, \sum_{i=1}^l \sum_{m\geqslant k \nu_{i}+1} U(\widetilde{\mathfrak{n}}_+)R_{\alpha_i} (-m). \end{align} We have the following natural presentations of the principal subspaces: \begin{thm}\label{thm_prezentacija} For all positive integers $k$ we have $$\ker f_{L(k\Lambda_0)} =I_{L(k\Lambda_0)} \qquad\text{or, equivalently,}\qquad W_{L(k\Lambda_0)}\cong U(\widetilde{\mathfrak{n}}_+)/I_{L(k\Lambda_0)}.$$ \end{thm} In Section \ref{sec50}, we employ the sets $\mathfrak{B}_{L(k\Lambda_0)}$ from Theorem \ref{thm_baza} to prove Theorem \ref{thm_prezentacija} for the affine Lie algebra $\widetilde{\mathfrak{g}} =F_4^{(1)}$. We omit the proof for the other types of $\widetilde{\mathfrak{g}}$ since it goes analogously, by using the corresponding quasi-particle bases. Let $\widetilde{\mathfrak{g}}$ be the affine Lie algebra of type $D_l^{(1)}$, $E_6^{(1)}$ or $E_7^{(1)}$. As in \cite{S2}, for a given rectangular weight $\Lambda=k_0\Lambda_0 + k_j\Lambda_j$ define the left ideal in the universal enveloping algebra $ U(\widetilde{\mathfrak{n}}_+)$ by \begin{align}\label{idealDE} I_{L(\Lambda)}\,=\,I_{L((k_0+k_j)\Lambda_0)}\,+\,U(\widetilde{\mathfrak{n}}_+)x_{\alpha_j}(-1)^{k_0 +1}.
\end{align} \begin{thm}\label{thm_prezentacija_DE} Let $\widetilde{\mathfrak{g}}$ be the affine Lie algebra of type $D_l^{(1)}$, $E_6^{(1)}$ or $E_7^{(1)}$. For a given rectangular weight $\Lambda$ we have $$\ker f_{L(\Lambda)} =I_{L(\Lambda)} \qquad\text{or, equivalently,}\qquad W_{L(\Lambda)}\cong U(\widetilde{\mathfrak{n}}_+)/I_{L(\Lambda)}.$$ \end{thm} The proof of Theorem \ref{thm_prezentacija_DE} is given in Section \ref{sec60}. \begin{rem}\label{remarkLP} The form of the elements $R_{\alpha_i} (-m)$ is motivated by the integrability condition \begin{equation}\label{integrability} x_{(k\nu_i +1) \alpha_i} (z)=0 \quad\text{on any level $k$ standard module}, \end{equation} which is due to Lepowsky and Primc \cite{LP}. It implies the quasi-particle charge constraint \eqref{d3}. \end{rem} \section{Proof of Theorems \ref{thm_baza} and \ref{thm_prezentacija} in type \texorpdfstring{$F$}{F}}\label{sec50} In this section, we prove Theorems \ref{thm_baza} and \ref{thm_prezentacija} in type $F$. The proof is divided into six steps, i.e. Sections \ref{subsec51}--\ref{subsec56}. We consider the affine Lie algebra $\widetilde{\mathfrak{g}}$ of type $F_4^{(1)}$, so that $l=4$ and the basis $\Pi$ of the root system $R$ of the corresponding simple Lie algebra $\mathfrak{g}$ consists of the simple roots $\alpha_1,\alpha_2,\alpha_3,\alpha_4$; see \cite[Chap. III]{H}. The maximal root $\theta$ equals \begin{equation}\label{maxroot} \theta=2\alpha_1 +3\alpha_2+4\alpha_3+2\alpha_4\quad\text{and satisfies} \quad \alpha_i(\theta^{\vee})=\delta_{1i}\text{ for }i=1,2,3,4. \end{equation} \subsection{Linear order on quasi-particle monomials}\label{subsec51} In this section, we briefly cover some basic concepts that originated in \cite{G} and are typically used to handle quasi-particle monomials. In particular, we introduce a certain linear order among such monomials which will be useful in Section \ref{subsec55}.
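To keep the subsequent computations concrete, we note the explicit form of the quasi-particle coefficients, which follows directly by expanding \eqref{quasi} (a routine observation recorded here for the reader's convenience):

```latex
% Expanding x_{n\alpha_i}(z) = x_{\alpha_i}(z)^n and comparing the
% coefficients of z^{-m-n} gives
\[
  x_{n\alpha_i}(m)
  = \sum_{\substack{m_1,\ldots ,m_n\in\mathbb{Z}\\ m_1+\cdots +m_n=m}}
    x_{\alpha_i}(m_1)\cdots x_{\alpha_i}(m_n),
\]
% so a quasi-particle of color i, charge n and energy -m is a formal sum
% of products of n modes of x_{\alpha_i}(z) whose degrees add up to m.
```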
Let \begin{align} b=& \Big(x_{n_{r_{4}^{(1)},4}\alpha_{4}}(m_{r_{4}^{(1)},4})\ldots x_{n_{1,4}\alpha_{4}}(m_{1,4})\Big) \Big(x_{n_{r_{3}^{(1)},3}\alpha_{3}}(m_{r_{3}^{(1)},3})\ldots x_{n_{1,3}\alpha_{3}}(m_{1,3})\Big)\nonumber \\ & \Big(x_{n_{r_{2}^{(1)},2}\alpha_{2}}(m_{r_{2}^{(1)},2})\ldots x_{n_{1,2}\alpha_{2}}(m_{1,2})\Big) \Big( x_{n_{r_{1}^{(1)},1}\alpha_{1}}(m_{r_{1}^{(1)},1})\ldots x_{n_{1,1}\alpha_{1}}\label{monomial4}\tag{$m_{F_4}$}(m_{1,1})\Big) \end{align} be an element of $\mathop{\mathrm{End}} V$, where $V=N(k\Lambda_0)$ or $V=L(k\Lambda_0)$, such that \begin{equation}\label{extra} n_{r_{i}^{(1)},i}\leqslant \ldots \leqslant n_{1,i}\quad\text{and}\quad m_{r_{i}^{(1)},i}\leqslant \ldots \leqslant m_{1,i}\qquad\text{for all }i=1,2,3,4. \end{equation} Define the {\em charge-type} $\mathcal{C}$ and the {\em energy-type} $\mathcal{E}$ of $b$ by \begin{align} &\mathcal{C}=\left( n_{r_{4}^{(1)},4},\ldots , n_{1,4}; \, n_{r_{3}^{(1)},3},\ldots , n_{1,3};\, n_{r_{2}^{(1)},2},\ldots , n_{1,2};\, n_{r_{1}^{(1)},1},\ldots , n_{1,1}\right),\label{charge-type}\\ &\mathcal{E}=\left( m_{r_{4}^{(1)},4},\ldots , m_{1,4}; \, m_{r_{3}^{(1)},3},\ldots , m_{1,3};\, m_{r_{2}^{(1)},2},\ldots , m_{1,2};\, m_{r_{1}^{(1)},1},\ldots , m_{1,1}\right).\nonumber \end{align} Moreover, define the {\em color-type} of $b$ as the quadruple $(n_{4},n_{3},n_{2},n_{1})$ such that $n_j$ denotes the sum of charges of all color $j$ quasi-particles, i.e. such that $n_j= n_{r_{j}^{(1)},j}+\ldots + n_{1,j}$. Let $b_1,b_2$ be any two quasi-particle monomials of the same color-type, expressed as in \eqref{monomial4}, such that their charges and energies satisfy \eqref{extra}. Denote their charge-types and energy-types by $\mathcal{C}_1,\mathcal{C}_2$ and $\mathcal{E}_1,\mathcal{E}_2$ respectively. 
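As an illustration of these notions (on a hypothetical monomial, for $k\geqslant 2$), consider the monomial $b=x_{\alpha_1}(-5)\,x_{2\alpha_1}(-2)$ containing only color $1$ quasi-particles, written as in \eqref{monomial4} with $n_{2,1}=1\leqslant n_{1,1}=2$ and $m_{2,1}=-5\leqslant m_{1,1}=-2$, so that \eqref{extra} holds:

```latex
% Charge-type, energy-type and color-type of b = x_{\alpha_1}(-5) x_{2\alpha_1}(-2):
\[
  \mathcal{C}=(\,;\,;\,;\,1,2),
  \qquad
  \mathcal{E}=(\,;\,;\,;\,-5,-2),
  \qquad
  \text{color-type }\ (n_4,n_3,n_2,n_1)=(0,0,0,3),
\]
% since n_1 = n_{2,1} + n_{1,1} = 3 and there are no quasi-particles of
% colors 2, 3, 4.  One checks directly that the difference conditions
% (c_1) and (c_2) hold: m_{1,1} = -2 <= -n_{1,1} and m_{2,1} = -5 <= -3.
```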
Define the strict linear order among quasi-particle monomials of the same color-type by \begin{equation}\label{order1} b_1< b_2\qquad \text{if}\qquad \mathcal{C}_1<\mathcal{C}_2\quad\text{or}\quad \mathcal{C}_1=\mathcal{C}_2 \text{ and } \mathcal{E}_1<\mathcal{E}_2, \end{equation} where the order on (finite) sequences of integers is defined as follows: \begin{align} &(x_p,\ldots ,x_1)< (y_r,\ldots ,y_1)\qquad \text{if there exists }s\text{ such that }\quad\label{order2}\\ &x_1=y_1,\,\ldots,\,x_{s-1}=y_{s-1}\qquad\text{and}\qquad s=p+1\leqslant r\quad \text{or}\quad x_s<y_s.\nonumber \end{align} \subsection{Projection of the principal subspace} \label{subsec52} As in \cite{B1}, we now generalize Georgiev's projection \cite{G} to type $F$. Consider quasi-particle monomial \eqref{monomial4} as an element of $\mathop{\mathrm{End}} L(k\Lambda_0)$. Suppose that its charges and energies satisfy \eqref{extra}. Define its {\em dual charge-type} $\mathcal{D}$ as \begin{align} &\mathcal{D}=\left(r^{(1)}_{4}, \ldots , r^{(2k)}_{4};\, r^{(1)}_{3}, \ldots , r^{(2k)}_{3};\, r^{(1)}_{2}, \ldots , r^{(k)}_{2}; \, r^{(1)}_{1}, \ldots , r^{(k)}_{1}\right),\label{dual-charge-type} \end{align} where $r_{i}^{(n)}$ denotes the number of color $i$ quasi-particles of charge greater than or equal to $n$ in the monomial. Observe that, due to \eqref{integrability}, the monomial does not possess any quasi-particles of color $i$ whose charge is strictly greater than $k\nu_i$. The standard module $L(k\Lambda_0)$ can be regarded as a submodule of the tensor product module $L(\Lambda_0)^{\otimes k}$ generated by the highest weight vector $ v_{L(k\Lambda_0)}=v_{L(\Lambda_{0})}^{\otimes k }$.
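To illustrate the dual charge-type just introduced (again on a hypothetical monomial), suppose a monomial contains three color $1$ quasi-particles of charges $2,1,1$ and no quasi-particles of other colors; then

```latex
% Dual charge-type entries for color 1 with charges (n_{3,1}, n_{2,1}, n_{1,1}) = (1,1,2):
\[
  r_1^{(1)}=3, \qquad r_1^{(2)}=1, \qquad r_1^{(n)}=0 \ \text{ for } n\geqslant 3,
\]
% i.e. r_1^{(1)} counts the quasi-particles of charge at least 1 and
% r_1^{(2)} those of charge at least 2, so (r_1^{(1)}, r_1^{(2)}, ...)
% is the partition (3,1) conjugate to the charge partition (2,1,1).
```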
Let $\pi_{\mathcal{D}}$ be the projection of the principal subspace $W_{L(k\Lambda_0)}$ on the tensor product space $$ W_{(\mu^{(k)}_{4};\mu^{(k)}_{3};r_{2}^{(k)};r_{1}^{(k)})} \otimes \cdots \otimes W_{(\mu^{(1)}_{4};\mu^{(1)}_{3};r_{2}^{(1)};r_{1}^{(1)})} \subset W_{L(\Lambda_0)}^{\otimes k}\subset L(\Lambda_0)^{\otimes k}, $$ where $W_{(\mu^{(t)}_{4};\mu^{(t)}_{3};r_{2}^{(t)};r_{1}^{(t)})}$ denote the $\mathfrak{h}$-weight subspaces of the level $1$ principal subspace $W_{L(\Lambda_0)}$ of weight $\mu^{(t)}_{4}\alpha_4+ \mu^{(t)}_{3}\alpha_3+r^{(t)}_{2}\alpha_2+ r_{1}^{(t)}\alpha_1 \in Q$ with \begin{equation}\label{miovi} \mu^{(t)}_{i}=r^{(2t)}_{i}+ r^{(2t-1)}_{i}\qquad\text{for}\qquad t=1,\ldots ,k \quad\text{and}\quad i=3,4. \end{equation} We denote by the same symbol $\pi_{\mathcal{D}}$ the generalization of the projection to the space of formal series with coefficients in $W_{L(k\Lambda_0)}$. Applying the generating function corresponding to \eqref{monomial4} to the highest weight vector $v_{L(k\Lambda_0)}=v_{L(\Lambda_{0})}^{\otimes k } $ we obtain \begin{align} &\big(x_{n_{r_{4}^{(1)},4}\alpha_{4}}(z_{r_{4}^{(1)},4}) \cdots x_{n_{1,4}\alpha_{4}}(z_{1,4})\big) \big(x_{n_{r_{3}^{(1)},3}\alpha_{3}}(z_{r_{3}^{(1)},3}) \cdots x_{n_{1,3}\alpha_{3}}(z_{1,3})\big)\nonumber\\ &\times\big(x_{n_{r_{2}^{(1)},2}\alpha_{2}}(z_{r_{2}^{(1)},2})\cdots x_{n_{1,2}\alpha_{2}}(z_{1,2})\big) \big(x_{n_{r_{1}^{(1)},1}\alpha_{1}}(z_{r_{1}^{(1)},1})\cdots x_{n_{1,1}\alpha_{1}}(z_{1,1})\big)v_{L(k\Lambda_0)}.\label{eq:p1} \end{align} Relations \eqref{integrability} imply that by applying the projection $\pi_{\mathcal{D}}$ to \eqref{eq:p1} we get \begin{align} & \hspace{-6pt} \Big(x_{n_{r^{(2k-1)}_{4},4}^{(k)}\alpha_{4}}(z_{r_{4}^{(2k-1)},4})\cdots x_{n_{1,4}^{(k)}\alpha_{4}}(z_{1,4})\Big) \Big( x_{n_{r^{(2k-1)}_{3},3}^{(k)}\alpha_{3}}(z_{r_{3}^{(2k-1)},3})\cdots x_{n_{1,3}^{(k)}\alpha_{3}}(z_{1,3})\Big)\nonumber\\ &\hspace{-6pt}\times\Big(
x_{n_{r^{(k)}_{2},2}^{(k)}\alpha_{2}}(z_{r_{2}^{(k)},2})\cdots x_{n_{1,2}^{(k)}\alpha_{2}}(z_{1,2})\Big) \Big( x_{n_{r^{(k)}_{1},1}^{(k)}\alpha_{1}}(z_{r_{1}^{(k)},1})\cdots x_{n_{1,1}^{(k)}\alpha_{1}}(z_{1,1}) \Big)\ v_{L(\Lambda_{0})}\nonumber\\ \otimes\cdots \otimes&\Big( x_{n_{r^{(1)}_{4},4}^{(1)}\alpha_{4}}(z_{r_{4}^{(1)},4})\cdots x_{n_{1,4}^{(1)}\alpha_{4}}(z_{1,4})\Big) \Big(x_{n_{r^{(1)}_{3},3}^{(1)}\alpha_{3}}(z_{r_{3}^{(1)},3})\cdots x_{n_{1,3}^{(1)}\alpha_{3}}(z_{1,3})\Big)\nonumber\\ &\hspace{-6pt}\times \Big( x_{n_{r^{(1)}_{2},2}^{(1)}\alpha_{2}}(z_{r_{2}^{(1)},2})\cdots x_{n_{1,2}^{(1)}\alpha_{2}}(z_{1,2})\Big) \Big( x_{n_{r^{(1)}_{1},1}^{(1)}\alpha_{1}}(z_{r_{1}^{(1)},1})\cdots x_{n_{1,1}^{(1)}\alpha_{1}}(z_{1,1}) \Big) v_{L(\Lambda_{0})}\label{find} \end{align} multiplied by some nonzero scalar, where we set $x_{0\alpha_i}(z)= 1$. The integers $n_{p,i}^{(t)}$ in \eqref{find} are uniquely determined by $$ 0 \leqslant n^{(k)}_{p,i}\leqslant \ldots \leqslant n^{(2)}_{p,i} \leqslant n^{(1)}_{p,i} \leqslant \nu_i \quad\text{and}\quad n_{p,i}=\sum_{t=1}^k n^{(t)}_{p,i}\qquad\text{for all } i=1,2,3,4 $$ and by the requirement that at most one $n_{p,i}^{(t)}$ equals $1$ when $i=3,4$. Therefore, for every variable $z_{r,i}$, where $i=1,2,3,4$ and $r=1,\ldots , r_i^{(1)}$, the projection $\pi_\mathcal{D}$ places at most one generating function $x_{\alpha_i}(z_{r,i})$ if $i=1,2$ and at most two generating functions $x_{\alpha_i}(z_{r,i})$ if $i=3,4$ on each tensor factor of $W_{L(\Lambda_0)}^{\otimes k}$. \subsection{Operators \texorpdfstring{$A_\theta$}{A-theta} and \texorpdfstring{$e_\alpha$}{e-alpha}}\label{subsec53} Let $b\in B_{L(k\Lambda_0)}$ be a quasi-particle monomial of charge-type $\mathcal{C}$ and dual charge-type $\mathcal{D}$. Denote the charges and the energies of its quasi-particles as in \eqref{monomial4}.
In this section, generalizing the approach from \cite{B3}, we demonstrate how to reduce $b$ to obtain a new monomial $b'\in B_{L(k\Lambda_0)}$ such that its charge-type $\mathcal{C}'$ satisfies $\mathcal{C}'<\mathcal{C}$ with respect to linear order \eqref{order1}. This will be a key step in the proof of linear independence of the set $\mathfrak{B}_{L(k\Lambda_0)}$ in Section \ref{subsec54}. Let $A_{\theta}$ be the constant term of the operator $$x_{\theta}(z)=\sum_{r\in\mathbb{Z}} x_{\theta}(r) z^{-r-1}\in\mathop{\mathrm{End}} L(\Lambda_0)[[z^{\pm 1}]],$$ i.e. $A_{\theta} = x_{\theta} (-1)$, where $\theta$ is the maximal root; recall \eqref{maxroot}. Consider the image of the vector $\pi_\mathcal{D}\, bv_{L(k\Lambda_0)}\in W_{L(k\Lambda_0)}\subset W_{L(\Lambda_0)}^{\otimes k}$ with respect to the operator $$(A_{\theta})_s \coloneqq\underbrace{1\otimes\cdots\otimes 1}_{k-s}\otimes A_{\theta}\otimes \underbrace{1\otimes\cdots \otimes 1}_{s-1}\qquad\text{for}\qquad s=n_{1,1}. $$ This image can be obtained as the coefficient of the variables \begin{equation}\label{vars} \overline{z}\coloneqq z_{r_{4}^{(1)},4}^{-m_{r_{4}^{(1)},4}-n_{r_{4}^{(1)},4}}\cdots z_{2,1}^{-m_{2,1}-n_{2,1}} z_{1,1}^{-m_{1,1}-n_{1,1}} \end{equation} in the expression \begin{equation}\label{genfun} (A_{\theta})_s \, \pi_\mathcal{D}\, x_{n_{r_{4}^{(1)},4}\alpha_{4}}(z_{r_{4}^{(1)},4}) \cdots x_{n_{2,1}\alpha_{1}}(z_{2,1}) x_{n_{1,1}\alpha_{1}}(z_{1,1}) v_{L(k\Lambda_0)}. \end{equation} Due to \cite{FHL}, the operator $A_{\theta}$ commutes with the action of quasi-particles.
Hence, using \eqref{find}, we find that the $s$-th tensor factor (from the right) in \eqref{genfun} equals \begin{align*} F_s=&\Big(x_{n_{r^{(2s-1)}_{4},4}^{(s)}\alpha_{4}}(z_{r_{4}^{(2s-1)},4})\cdots x_{n_{1,4}^{(s)}\alpha_{4}}(z_{1,4})\Big) \Big( x_{n_{r^{(2s-1)}_{3},3}^{(s)}\alpha_{3}}(z_{r_{3}^{(2s-1)},3})\cdots x_{n_{1,3}^{(s)}\alpha_{3}}(z_{1,3})\Big)\\ &\times\Big( x_{n_{r^{(s)}_{2},2}^{(s)}\alpha_{2}}(z_{r_{2}^{(s)},2})\cdots x_{n_{1,2}^{(s)}\alpha_{2}}(z_{1,2})\Big) \Big( x_{n_{r^{(s)}_{1},1}^{(s)}\alpha_{1}}(z_{r_{1}^{(s)},1})\cdots x_{n_{1,1}^{(s)}\alpha_{1}}(z_{1,1}) \Big) x_{\theta} (-1)v_{L(\Lambda_{0})}.\nonumber \end{align*} Consider the Weyl group translation operator $e_\alpha\in\mathop{\mathrm{End}} L(\Lambda_0)$ defined by $$ e_{\alpha}=\exp x_{-\alpha}(1)\exp (- x_{\alpha}(-1))\exp x_{-\alpha}(1) \exp x_{\alpha}(0)\exp (-x_{-\alpha}(0))\exp x_{\alpha}(0) $$ for $\alpha \in R$; see \cite[Chap. 3]{K}. It possesses the following properties: \begin{align} & e_{\alpha}v_{L(\Lambda_0)}=-x_{\alpha}(-1)v_{L(\Lambda_0)}\quad\text{for every long root }\alpha,\label{eq:21}\\ &x_{\beta}(j)e_{\alpha}=e_{\alpha}x_{\beta}(j+\beta(\alpha\sp\vee)) \quad \text{for all } \alpha,\beta \in R\text{ and } j \in \mathbb{Z}.\label{eq:22} \end{align} Using \eqref{eq:21} and \eqref{eq:22} for $\alpha=\theta$ we rewrite the $s$-th tensor factor $F_s$ as \begin{equation}\label{efes} F_s = - e_{\theta}\, F'_s\, z_{r_{1}^{(s)},1}\cdots z_{2,1} z_{1,1}, \end{equation} where $F'_s$ denotes the expression obtained from $F_s$ by omitting the factor $x_{\theta}(-1)$, i.e. by applying the generating functions directly to $v_{L(\Lambda_{0})}$. Recall \eqref{maxroot} and notation \eqref{briefly}.
Taking the coefficient of variables \eqref{vars} in \eqref{efes} we find $$ (A_{\theta})_s \,\pi_{\mathcal{D}}\,bv_{L(k\Lambda_0)} =-(e_{\theta})_s \,\pi_{\mathcal{D}}\,b^{+}v_{L(k\Lambda_0)}, $$ where $(e_{\theta})_s $ denotes the action of $e_{\theta}$ on the $s$-th tensor factor (from the right) and \begin{align*} b^{+} =b_{\alpha_{4}}\, b_{\alpha_{3}}\, b_{\alpha_{2}}\,b_{\alpha_{1}}^{<s}\,b_{\alpha_{1}}^{s},\quad\text{where}\quad &b_{\alpha_{1}}^{<s}=x_{n_{r_{1}^{(1)},1}\alpha_{1}}(m_{r_{1}^{(1)},1})\cdots x_{n_{r_{1}^{(s)}+1,1}\alpha_{1}}(m_{r_{1}^{(s)}+1,1})\\ \quad\text{and}\quad &b_{\alpha_{1}}^{s}= x_{n_{r_{1}^{(s)},1}\alpha_{1}}(m_{r_{1}^{(s)},1}+1)\cdots x_{n_{1,1}\alpha_{1}}(m_{1,1}+1).& \end{align*} Therefore, by applying the above procedure we have increased the energies of all quasi-particles of color $1$ and charge $s=n_{1,1}$ in the monomial $b\in B_{L(k\Lambda_0)}$ by $1$. We may continue to apply the same procedure, now starting with $b^+ v_{L(k\Lambda_0)}$, until we obtain the monomial \begin{align*} \widetilde{b} =b_{\alpha_{4}}\, b_{\alpha_{3}}\, b_{\alpha_{2}}\,\widetilde{b}_{\alpha_{1}},\qquad\text{where}\qquad \widetilde{b}_{\alpha_{1}}= x_{n_{r_{1}^{(1)},1}\alpha_{1}}(\widetilde{m}_{r_{1}^{(1)},1})\cdots x_{n_{1,1}\alpha_{1}}(\widetilde{m}_{1,1})\qquad\text{and}\\ (\widetilde{m}_{r_{1}^{(1)},1},\ldots ,\widetilde{m}_{r_{1}^{(s)}+1,1}, \widetilde{m}_{r_{1}^{(s)},1},\ldots,\widetilde{m}_{1,1}) \hspace{-2pt}=\hspace{-2pt}(m_{r_{1}^{(1)},1},\ldots ,m_{r_{1}^{(s)}+1,1}, m_{r_{1}^{(s)},1}-m_{1,1}-s,\ldots,-s). \end{align*} Since $b$ is an element of $B_{L(k\Lambda_0)}$, the quasi-particle monomial $\widetilde{b}$ belongs to $B_{L(k\Lambda_0)}$ as well. Moreover, the charge-type and the dual charge-type of $\widetilde{b}$ equal $\mathcal{C}$ and $\mathcal{D}$ respectively. By \eqref{eq:21} we have $x_{\alpha_1}(-1)v_{L(\Lambda_0)}=-e_{\alpha_1}v_{L(\Lambda_0)}$.
Hence, the vector $\pi_{\mathcal{D}}\,\widetilde{b} v_{L(k\Lambda_0)}$, which belongs to $ W_{L(k\Lambda_0)}\subset W_{L(\Lambda_0)}^{\otimes k}$, equals the coefficient of the variables \begin{equation}\label{vars2} \overline{z}\,\Big(z_{r_{1}^{(s)},1}\cdots z_{2,1} z_{1,1}\Big)^{m_{1,1}+s} \end{equation} in \begin{equation}\label{genfun2} (-1)^s \,\pi_{\mathcal{D}}\, x_{n_{r_{4}^{(1)},4}\alpha_{4}}(z_{r_{4}^{(1)},4}) \cdots x_{n_{2,1}\alpha_{1}}(z_{2,1}) \, (1^{\otimes (k-s)} \otimes e_{\alpha_1}^{\otimes s}) \,v_{L(\Lambda_0)}^{\otimes k}, \end{equation} where $\overline{z}$ is given by \eqref{vars}. We now employ \eqref{eq:22} to move $1^{\otimes (k-s)} \otimes e_{\alpha_1}^{\otimes s}$ all the way to the left in \eqref{genfun2}. Next, by dropping the invertible operator $(-1)^s(1^{\otimes (k-s)} \otimes e_{\alpha_1}^{\otimes s})$ and taking the coefficient of variables \eqref{vars2} we get $\pi_{\mathcal{D}'}\,b'v_{L(k\Lambda_0)}$, where the quasi-particle monomial $b'$ of charge-type $\mathcal{C}'$ and dual charge-type $\mathcal{D}'$ is given by \begin{align*} b'=b_{\alpha_{4}}\, b_{\alpha_{3}}\, b'_{\alpha_{2}}\,b'_{\alpha_{1}} \qquad\text{for}\qquad b'_{\alpha_{1}}=x_{n_{r^{(1)}_{1},1}\alpha_{1}}(\widetilde{m}_{r^{(1)}_{1},1}+2n_{r^{(1)}_{1},1})\cdots x_{n_{2,1}\alpha_{1}}(\widetilde{m}_{2,1}+2n_{2,1}),&\\ b'_{\alpha_{2}}=x_{n_{r^{(1)}_{2},2}\alpha_{2}}(m_{r^{(1)}_{2},2}-n^{(1)}_{r_{2}^{(1)},2}-\cdots-n^{(s)}_{r_{2}^{(1)},2})\cdots x_{n_{1,2}\alpha_{2}}(m_{1,2}-n^{(1)}_{1,2}-\cdots-n^{(s)}_{1,2}).& \end{align*} Clearly, the energies of the quasi-particles in colors $3$ and $4$ did not change. 
Furthermore, if the dual charge-type $\mathcal{D}$ of $b$ equals $$\mathcal{D}=\big(r^{(1)}_{4}, \ldots , r^{(2k)}_{4};\, r^{(1)}_{3}, \ldots , r^{(2k)}_{3}; \, r^{(1)}_{2}, \ldots , r^{(k)}_{2};\, r^{(1)}_{1},\ldots, r_1^{(n_{1,1})},\underbrace{0, \ldots, 0}_{k-s}\big), $$ then the dual charge-type $\mathcal{D}'$ of $b'$ equals $$\mathcal{D}'=\big(r^{(1)}_{4}, \ldots , r^{(2k)}_{4};\, r^{(1)}_{3}, \ldots , r^{(2k)}_{3};\, r^{(1)}_{2}, \ldots , r^{(k)}_{2};\, r^{(1)}_{1}-1,\ldots, r_1^{(n_{1,1})}-1,\underbrace{0, \ldots, 0}_{k-s}\big). $$ In particular, we have $\mathcal{C}'<\mathcal{C}$ with respect to linear order \eqref{order1}. Finally, by arguing as in \cite[Proposition 3.3.1]{B2} one can check that $b'$ belongs to $B_{L(k\Lambda_0)}$. \subsection{Linear independence of the sets \texorpdfstring{$\mathfrak{B}_{V}$}{B-V}}\label{subsec54} In this section, we prove linear independence of the set $\mathfrak{B}_{L(k\Lambda_0)}$. Linear independence of $\mathfrak{B}_{N(k\Lambda_0)}$ can be verified by arguing as in \cite[Sect. 3]{B1}. Suppose there exists a linear dependence relation among some elements $b^a v_{L(k\Lambda_0)}\in\mathfrak{B}_{L(k\Lambda_0)}$, \begin{equation}\label{eq:d1} \sum_{a \in A} c_{a}\, b^a v_{L(k\Lambda_0)}=0, \quad\text{where}\quad c_a\in\mathbb{C},\, c_a\neq 0\text{ for all }a\in A \end{equation} and $A$ is a finite nonempty set. As the principal subspace $W_{L(k\Lambda_0)}$ is a direct sum of its $\mathfrak{h}$-weight subspaces, we can assume that all $b^a\in B_{L(k\Lambda_0)}$ possess the same color-type. Recall the strict linear order \eqref{order1} and choose $a_0\in A$ such that $b^{a_0}< b^a$ for all $a\in A$, $a\neq a_0$. Suppose that the charge-type $\mathcal{C}$ and the dual charge-type $\mathcal{D}$ of $b^{a_0}$ are given by \eqref{charge-type} and \eqref{dual-charge-type} respectively.
Applying the projection $\pi_{\mathcal{D}}$ on \eqref{eq:d1} we obtain a linear combination of elements in \begin{align*} &W_{(\mu^{(k)}_{4};\mu^{(k)}_{3};r_{2}^{(k)};0)} \otimes\cdots \otimes W_{(\mu^{(n_{1,1}+1)}_{4};\mu^{(n_{1,1}+1)}_{3};r_{2}^{(n_{1,1}+1)};0)} \\ &\otimes W_{(\mu^{(n_{1,1})}_{4};\mu^{(n_{1,1})}_{3};r_{2}^{(n_{1,1})};r_{1}^{(n_{1,1})})} \otimes\cdots \otimes W_{(\mu^{(1)}_{4};\mu^{(1)}_{3};r_{2}^{(1)};r_{1}^{(1)})}, \end{align*} recall \eqref{miovi}. The definition of the projection $\pi_{\mathcal{D}}$ implies that all $b^a v_{L(k\Lambda_0)}$ such that the charge-type of $b^a$ is strictly greater than $\mathcal{C}$ with respect to \eqref{order2} are annihilated by $\pi_{\mathcal{D}}$. Therefore, we can assume that all $b^a$ possess the same charge-type $\mathcal{C}$ and, consequently, the same dual charge-type $\mathcal{D}$. As in \eqref{briefly}, write the monomials $b^a$ as $b^a=b^a_{\alpha_4}b^a_{\alpha_3}b^a_{\alpha_2}b^a_{\alpha_1}$, where $b^a_{\alpha_j}$ consist of quasi-particles of color $j$. We now apply the procedure described in Section \ref{subsec53} to the linear combination \begin{equation}\label{eq:d2} c_{a_0}\,\pi_{\mathcal{D}}\, b^{a_0} v_{L(k\Lambda_0)}+ \sum_{a \in A,\,a\neq a_0} c_{a}\,\pi_{\mathcal{D}}\, b^a v_{L(k\Lambda_0)}=0. \end{equation} We repeat it until all quasi-particles of color $1$ are removed from the first summand $c_{a_0}\pi_{\mathcal{D}}\, b^{a_0} v_{L(k\Lambda_0)}$.
This also removes all quasi-particles of color $1$ from other summands, so that \eqref{eq:d2} becomes \begin{equation}\label{thesum} \widetilde{c}_{a_0}\,\pi_{\widetilde{\mathcal{D}}}\, b^{a_0}_{\alpha_4}b^{a_0}_{\alpha_3}\widetilde{b}^{a_0}_{\alpha_2} v_{L(k\Lambda_0)} + \sum_{\substack{ a \in A,\,a\neq a_0\\b_{\alpha_1}^{a_0}=b_{\alpha_1}^{a} }} \widetilde{c}_{a}\,\pi_{\widetilde{\mathcal{D}}}\, b^{a}_{\alpha_4}b^{a}_{\alpha_3}\widetilde{b}^{a}_{\alpha_2} v_{L(k\Lambda_0)}=0 \end{equation} for some quasi-particle monomials $\widetilde{b}^{a}_{\alpha_2}$ of color $2$ and scalars $\widetilde{c}_{a}\neq 0$ such that $\widetilde{\mathcal{D}}$ is the dual charge-type of all quasi-particle monomials $b^{a}_{\alpha_4}b^{a}_{\alpha_3}\widetilde{b}^{a}_{\alpha_2}$ in \eqref{thesum}. The summation in \eqref{thesum} goes over all $a\neq a_0$ such that $b^{a}_{\alpha_1}= b^{a_0}_{\alpha_1}$ because the summands $\pi_{\mathcal{D}}\, b^a v_{L(k\Lambda_0)}$ such that $b^{a_0}_{\alpha_1}< b^a_{\alpha_1}$ get annihilated in the process. The vectors $b^{a}_{\alpha_4}b^{a}_{\alpha_3}\widetilde{b}^{a}_{\alpha_2} v_{L(k\Lambda_0)}$ in \eqref{thesum} belong to $\mathfrak{B}_{L(k\Lambda_0)}$. Furthermore, they can be realized as elements of the principal subspace of the level $k$ standard module $L(k\Lambda_0)$ with the highest weight vector $v_{L(k\Lambda_0)}$ for the affine Lie algebra of type $C_3^{(1)}$. Moreover, their realizations belong to the corresponding basis in type $C_3^{(1)}$, as given by Theorem \ref{thm_baza} (for a detailed proof in type $C_l^{(1)}$ see \cite{B2}). This implies $\widetilde{c}_{a_0}=0$ and, consequently, $c_{a_0}=0$, thus contradicting \eqref{eq:d1}. Finally, we conclude that the set $\mathfrak{B}_{L(k\Lambda_0)}$ is linearly independent. 
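As an aside, the effect of the repeated reduction of Section \ref{subsec53} on the color-$1$ energies can be simulated mechanically. The following sketch (with ad hoc names and toy values, not part of the proof) adds $1$ to the energies of the quasi-particles of maximal charge until the rightmost energy reaches $-s$, and confirms the closed form $(\ldots, m_{r_1^{(s)},1}-m_{1,1}-s,\ldots,-s)$ stated in Section \ref{subsec53}.

```python
def raise_energies(energies, r_s, steps):
    """Add 1 to the energies of the last r_s quasi-particles, `steps` times.
    `energies` lists the color-1 energies (m_{r_1^(1),1}, ..., m_{1,1})."""
    out = list(energies)
    for _ in range(steps):
        for p in range(len(out) - r_s, len(out)):
            out[p] += 1
    return out

# Toy data: four color-1 quasi-particles, the last two of maximal charge s = 3.
s, r_s = 3, 2
m = [-9, -7, -6, -4]              # m_{4,1}, m_{3,1}, m_{2,1}, m_{1,1}
steps = -m[-1] - s                # repeat until m_{1,1} becomes -s
tilde = raise_energies(m, r_s, steps)
# The result matches the closed form for the energies of b-tilde:
assert tilde == [-9, -7, m[2] - m[-1] - s, -s]
```

Only the untouched energies and the two raised ones appear; the charge-type is unchanged, in agreement with the text.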
\subsection{Small spanning sets \texorpdfstring{$\bar{\mathfrak{B}}_{V}$}{B-V}}\label{subsec55} In this section, we construct certain small spanning sets $\bar{\mathfrak{B}}_{N(k\Lambda_0)}$ and $\bar{\mathfrak{B}}_{L(k\Lambda_0)}$ for the quotients $U(\widetilde{\mathfrak{n}}_+)/ I_{N(k\Lambda_0)} $ and $U(\widetilde{\mathfrak{n}}_+)/ I_{L(k\Lambda_0)}$ of the algebra $U(\widetilde{\mathfrak{n}}_+)$ over its left ideals $I_{N(k\Lambda_0)} =U(\widetilde{\mathfrak{n}}_+) \widetilde{\mathfrak{n}}_+^{\geqslant 0}$ and $I_{L(k\Lambda_0)}$ defined by \eqref{ideal}. We denote by $\bar{x}$ the image of the element $x\in U(\widetilde{\mathfrak{n}}_+)$ in these quotients with respect to the corresponding canonical epimorphisms. First, we consider $U(\widetilde{\mathfrak{n}}_+)/ I_{N(k\Lambda_0)}$. By the Poincar\'{e}--Birkhoff--Witt theorem for the universal enveloping algebra we have $$ U(\widetilde{\mathfrak{n}})=U(\widetilde{\mathfrak{n}}_{\alpha_4})U(\widetilde{\mathfrak{n}}_{\alpha_3})U(\widetilde{\mathfrak{n}}_{\alpha_2})U(\widetilde{\mathfrak{n}}_{\alpha_1}),\quad\text{where}\quad \widetilde{\mathfrak{n}}_{\alpha_i}=\mathfrak{n}_{\alpha_i}\otimes\mathbb{C}[t,t^{-1}]\text{ and } \mathfrak{n}_{\alpha_i}=\mathbb{C} x_{\alpha_i}. $$ By \eqref{defrel} quasi-particles of the same color commute, so all monomials \begin{equation}\label{monomial4bar}\tag{$\bar{m}_{F_4}$} \bar{b}= \Big(\bar{x}_{n_{r_{4}^{(1)},4}\alpha_{4}}(m_{r_{4}^{(1)},4})\ldots \bar{x}_{n_{1,4}\alpha_{4}}(m_{1,4})\Big)\ldots \Big( \bar{x}_{n_{r_{1}^{(1)},1}\alpha_{1}}(m_{r_{1}^{(1)},1})\ldots \bar{x}_{n_{1,1}\alpha_{1}}(m_{1,1})\Big) \end{equation} such that their charges and energies satisfy \eqref{extra} form a spanning set for $U(\widetilde{\mathfrak{n}}_+)/ I_{N(k\Lambda_0)}$. We now list two families of quasi-particle relations which can be used to strengthen the conditions in \eqref{extra}, i.e. to obtain a smaller spanning set.
The first family is given for quasi-particles on $N(k\Lambda_0)$ of color $i=1,2,3,4$ and charges $n_1$ and $n_2$ such that $n_2\leqslant n_1$: \begin{align} \left(\frac{d^p}{dz^p} x_{n_2\alpha_i} (z)\right)x_{n_1\alpha_i} (z) =A_p(z) x_{(n_1 +1)\alpha_i} (z) +B_p(z) \frac{d^p}{dz^p} x_{(n_1+1)\alpha_i} (z), \label{r1}\tag{$r_1$} \end{align} where $p=0, 1,\ldots, 2 n_2 -1$ and $A_p(z), B_p(z)$ are some formal series with coefficients in the set of quasi-particle polynomials; see \cite{FS,G,JP}. As demonstrated in \cite[Remark 4.6]{JP}, see also \cite[Lemma 2.2.1]{B1}, relations \eqref{r1} can be used to express $2n_2$ monomials of the form $$ x_{n_2\alpha_i}(m_2)x_{n_1\alpha_i}(m_1) ,\, \ldots \, , x_{n_2\alpha_i}(m_2-2n_2+1)x_{n_1\alpha_i}(m_1+2n_2-1) $$ as a linear combination of monomials $$ x_{n_2\alpha_i}(p_2)x_{n_1\alpha_i}(p_1) \quad \text{such that} \quad p_2 \leqslant m_2- 2n_2,\quad p_1\geqslant m_1+2n_2\quad\text{and}\quad p_1 +p_2=m_1+m_2 $$ and monomials which contain a quasi-particle of color $i$ and charge $n_1+1$, thus possessing a greater charge-type. In particular, for $n_2 = n_1$ one can express $2n_2$ monomials $$ x_{n_2 \alpha_i}(m_2)x_{n_2 \alpha_i}(m_1)\quad\text{with} \ \ m_1-2n_2< m_2 \leqslant m_1 $$ as a linear combination of monomials $$ x_{n_2\alpha_i}(p_2)x_{n_2\alpha_i}(p_1) \quad\text{such that} \quad p_2\leqslant p_1-2n_2$$ and monomials which contain a quasi-particle of color $i$ and charge $n_2+1$, thus possessing a greater charge-type; cf. \cite[Corollary 2.2.2]{B1}.
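The index bookkeeping behind this reduction can be checked mechanically. The sketch below (with hypothetical helper names; no actual operator computation is performed) lists the $2n_2$ reducible products and verifies that the admissible targets conserve the total energy and satisfy the stated inequalities.

```python
def reducible_pairs(m2, m1, n2):
    """The 2*n2 products x_{n2 a_i}(p2) x_{n1 a_i}(p1) rewritten via (r_1),
    listed as in the text: (m2 - j, m1 + j) for j = 0, ..., 2*n2 - 1."""
    return [(m2 - j, m1 + j) for j in range(2 * n2)]

def sample_targets(m2, m1, n2, depth=8):
    """Some pairs (p2, p1) of the kind allowed on the right-hand side:
    p2 <= m2 - 2*n2, with the total energy m1 + m2 preserved."""
    total = m1 + m2
    return [(p2, total - p2) for p2 in range(m2 - 2 * n2, m2 - 2 * n2 - depth, -1)]

m2, m1, n2 = -7, -3, 2
red = reducible_pairs(m2, m1, n2)
tgt = sample_targets(m2, m1, n2)
assert all(p2 + p1 == m1 + m2 for p2, p1 in red + tgt)   # energy is conserved
assert all(p2 <= m2 - 2 * n2 and p1 >= m1 + 2 * n2 for p2, p1 in tgt)
assert not any(pair in tgt for pair in red)              # the families are disjoint
```

Note that $p_1\geqslant m_1+2n_2$ follows automatically from $p_2\leqslant m_2-2n_2$ once $p_1+p_2=m_1+m_2$ is fixed.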
The second family of relations for quasi-particles on $N(k\Lambda_0)$ is given by \begin{align} (z_{1}-z_{2})^{ M_{i}}x_{n_{i-1}\alpha_{i-1}}(z_{1}) x_{n_{i}\alpha_{i}}(z_{2}) =(z_{1}-z_{2})^{M_{i} } x_{n_{i}\alpha_{i}}(z_{2}) x_{n_{i-1}\alpha_{i-1}}(z_{1}) \label{r2}\tag{$r_2$}\\ \text{for}\quad i=2,3,4,\quad M_{i}=\min \left\{\textstyle\frac{\nu_i}{\nu_{i-1}} n_{i-1},n_i\right\}\quad\text{and}\quad n_{i-1},n_i\in\mathbb{N}.\nonumber \end{align} They follow by a direct computation employing the commutator formula for vertex operators; see, e.g., \cite[Chap. 6.2]{LL}. \begin{rem} Due to \eqref{r2}, the quasi-particles of colors $ 1$ and $ 2$ and the quasi-particles of colors $ 3$ and $ 4$ interact as the quasi-particles of colors $1$ and $2$ for the affine Lie algebra $A_2^{(1)}$ while the quasi-particles of colors $ 2$ and $ 3$ interact as the quasi-particles of colors $1$ and $2$ for the affine Lie algebra $B_2^{(1)}$. \end{rem} Let $\bar{\mathfrak{B}}_{N(k\Lambda_0)}$ be the set of all monomials \eqref{monomial4bar} satisfying difference conditions \eqref{d1} and \eqref{d2} (with $l=4$ and $i'=i-1$ for all $i=1,2,3,4$). Using relations \eqref{r1} and \eqref{r2} and arguing as in \cite{B1,G} one can show that every monomial of the form \eqref{monomial4bar} satisfying \eqref{extra} can be expressed as a linear combination of some monomials in $ \bar{\mathfrak{B}}_{N(k\Lambda_0)}$, so that $\bar{\mathfrak{B}}_{N(k\Lambda_0)}$ spans the quotient $U(\widetilde{\mathfrak{n}}_+)/ I_{N(k\Lambda_0)}$. The proof goes by induction on the charge-type and total energy of quasi-particle monomials and relies on the properties of strict linear order \eqref{order1}. Roughly speaking, difference condition \eqref{d1} follows from relations \eqref{r1} for $n_2=n_1$; the last summand $- 2(p-1) n_{p,i}$ in \eqref{d2} follows from relations \eqref{r1} for $n_2<n_1$; the sum in \eqref{d2} follows from \eqref{r2}. 
Finally, the first summand $-n_{p,i}$ in \eqref{d2} is due to the fact that each summand on the right hand side of $$ x_{m\alpha_i}(n)=\sum_{n_1+\ldots +n_m =n} x_{\alpha_i}(n_1)\cdots x_{\alpha_i}(n_m),\quad\text{where }i=1,2,3,4\text{ and }n>-m, $$ contains at least one quasi-particle $x_{\alpha_i}(n_j)$ with $n_j\geqslant 0$, as otherwise we would have $n=n_1+\ldots +n_m\leqslant -m$, so that $x_{m\alpha_i}(n)$ belongs to $I_{N(k\Lambda_0)}$ for $n>-m$. We now consider $U(\widetilde{\mathfrak{n}}_+)/ I_{L(k\Lambda_0)}$. It is clear that all monomials \eqref{monomial4bar}, regarded as elements of $U(\widetilde{\mathfrak{n}}_+)/ I_{L(k\Lambda_0)}$ and satisfying difference conditions \eqref{d1} and \eqref{d2}, form a spanning set for the quotient $U(\widetilde{\mathfrak{n}}_+)/ I_{L(k\Lambda_0)}$. However, the form of the ideal $I_{L(k\Lambda_0)}$, as defined in \eqref{ideal}, implies additional relations \begin{equation}\label{r3}\tag{$r_3$} \bar{x}_{n\alpha_i}(z)=0\qquad\text{for all}\quad n\geqslant k\nu_i +1\quad\text{and}\quad i=1,2,3,4, \end{equation} in $U(\widetilde{\mathfrak{n}}_+)/ I_{L(k\Lambda_0)}$ which we now use to obtain a smaller spanning set; recall Remark \ref{remarkLP}. Suppose that monomial \eqref{monomial4bar} satisfies difference conditions \eqref{d1} and \eqref{d2} and contains a quasi-particle $\bar{x}_{n_{j,i}\alpha_i }(m_{j,i})$ of charge $n_{j,i}\geqslant k\nu_i +1$ and color $1\leqslant i\leqslant 4$. Clearly, such a monomial coincides with the coefficient of the variables \begin{equation}\label{vars3} z_{r_{4}^{(1)},4}^{-m_{r_{4}^{(1)},4}-n_{r_{4}^{(1)},4}}\cdots z_{j,i}^{-m_{j,i}-n_{j,i}} \cdots z_{2,1}^{-m_{2,1}-n_{2,1}} z_{1,1}^{-m_{1,1}-n_{1,1}} \end{equation} in the generating function $$ \bar{X}=\bar{x}_{n_{r_{4}^{(1)},4}\alpha_{4}}(z_{r_{4}^{(1)},4}) \cdots \bar{x}_{n_{j,i}\alpha_i} (z_{j,i}) \cdots \bar{x}_{n_{2,1}\alpha_{1}}(z_{2,1}) \bar{x}_{n_{1,1}\alpha_{1}}(z_{1,1}) .
$$ Introduce the Laurent polynomial $$ P=\prod_{i=2}^4 \prod_{q=1}^{r_{i-1}^{(1)}} \prod_{p=1}^{r_{i}^{(1)}} \left( 1-\frac{z_{q,i-1}}{z_{p,i}} \right)^{\min \left\{ \textstyle \frac{\nu_i}{\nu_{i-1}} n_{q,i-1},n_{p,i} \right\} }. $$ By combining relations \eqref{r2} and \eqref{r3} we find $P\bar{X}=0$ as the operator $\bar{x}_{n_{j,i}\alpha_i} (z_{j,i})$ in $P\bar{X}$ can be moved all the way to the right, thus annihilating the expression. By taking the coefficient of the variables \eqref{vars3} in $P\bar{X}=0$ we express \eqref{monomial4bar} as a linear combination of some quasi-particle monomials of the same charge-type and of the same total energy $m_{r_4^{(1)},4} + \ldots + m_{1,1}$, which are greater than \eqref{monomial4bar} with respect to linear order \eqref{order1}. However, there exist only finitely many such quasi-particle monomials which are nonzero. Hence, by repeating the same procedure an appropriate number of times, now starting with these new monomials, we find, after finitely many steps, that \eqref{monomial4bar} equals zero. Therefore, we conclude that the set $\bar{\mathfrak{B}}_{L(k\Lambda_0)}$ of all monomials \eqref{monomial4bar} in $U(\widetilde{\mathfrak{n}}_+)/ I_{L(k\Lambda_0)}$ which satisfy difference conditions \eqref{d1}, \eqref{d2} and \eqref{d3} forms a spanning set for $U(\widetilde{\mathfrak{n}}_+)/ I_{L(k\Lambda_0)}$. \subsection{Proof of Theorems \ref{thm_baza} and \ref{thm_prezentacija}}\label{subsec56} In Section \ref{subsec54}, we established the linear independence of the sets $\mathfrak{B}_{N(k\Lambda_0)}$ and $\mathfrak{B}_{L(k\Lambda_0)}$. We now prove that they span the principal subspaces $W_{N(k\Lambda_0)}$ and $W_{L(k\Lambda_0)}$, thus finishing the proof of Theorem \ref{thm_baza}. Moreover, as a consequence of the proof, we obtain the presentations of the principal subspace $W_{L(k\Lambda_0)}$ given by Theorem \ref{thm_prezentacija}.
Introduce the natural surjective map \begin{align*} f_{N(k\Lambda_0)}\,\colon\, U(\widetilde{\mathfrak{n}}_+)\,&\,\to\, W_{N(k\Lambda_0)}\\ a\,&\,\mapsto\, a\cdot v_{N(k\Lambda_0)},\nonumber \end{align*} so that we can consider the cases $V=L(k\Lambda_0)$ and $V=N(k\Lambda_0)$ simultaneously. Recall that the surjective map $f_{L(k\Lambda_0)}$ is given by \eqref{map}, the left ideal $ I_{L(k\Lambda_0)}$ is defined by \eqref{ideal} and $I_{N(k\Lambda_0)} =U(\widetilde{\mathfrak{n}}_+) \widetilde{\mathfrak{n}}_+^{\geqslant 0}$. Let $V$ be $N(k\Lambda_0)$ or $L(k\Lambda_0)$. It is clear that the left ideal $I_{V}$ belongs to the kernel of $f_{V}$. Hence, there exists a unique map \begin{equation}\label{mapbar} \bar{f}_{V}\colon U(\widetilde{\mathfrak{n}}_+)/I_{V} \to W_V\quad\text{such that}\quad f_{V}=\bar{f}_{V}\,\pi_V, \end{equation} where $\pi_V $ is the canonical epimorphism $U(\widetilde{\mathfrak{n}}_+)\to U(\widetilde{\mathfrak{n}}_+)/I_{V}$. The map $\bar{f}_{V}$ is surjective as $f_{V}$ is surjective and, furthermore, it maps $\bar{\mathfrak{B}}_{V}$ bijectively onto $\mathfrak{B}_{V}$. Therefore, the linearly independent set $\mathfrak{B}_{V}$ spans the principal subspace $W_V$ and so it forms a basis of $W_V$, which proves Theorem \ref{thm_baza}. This implies that the map \eqref{mapbar} is a vector space isomorphism, so, in particular, we conclude that $\ker f_{L(k\Lambda_0)} = I_{L(k\Lambda_0)}$, thus proving Theorem \ref{thm_prezentacija}. \section{Proof of Theorems \ref{thm_baza}, \ref{thm_baza_DE} and \ref{thm_prezentacija_DE} in types \texorpdfstring{$D$}{D} and \texorpdfstring{$E$}{E} }\label{sec60} In this section, unless stated otherwise, we denote by $\widetilde{\mathfrak{g}}$ the affine Lie algebra of type $D_l^{(1)}$, $E_6^{(1)}$, $E_7^{(1)}$ or $E_8^{(1)}$. First, we give an outline of the proof of Theorem \ref{thm_baza} for $\widetilde{\mathfrak{g}}$.
As the generalization of the arguments from Section \ref{subsec55} is straightforward, we only discuss the proof of linear independence. It relies on the coefficients of certain level 1 intertwining operators and on the vertex operator algebra construction of basic modules, thus resembling the corresponding proofs in types $A_l^{(1)}$, $B_l^{(1)}$ and $C_l^{(1)}$; see \cite{G,B1,B2}. In Section \ref{voacon} we recall the aforementioned construction, while in Section \ref{op} we demonstrate how to use the corresponding operators to complete the proof of Theorem \ref{thm_baza}. Next, in Section \ref{proof_DE} we supply the details, beyond those given in Sections \ref{sec50} and \ref{op}, of the modifications needed to carry out the argument for rectangular weights, i.e. to prove Theorems \ref{thm_baza_DE} and \ref{thm_prezentacija_DE}. Finally, in Section \ref{a} we construct different quasi-particle bases in type $E$, whose linear independence can be verified by employing the operator $A_\theta$ associated with the maximal root $\theta$, thus resembling the corresponding proof in type $F_4^{(1)}$ from Section \ref{sec50}. \subsection{Vertex operator algebra construction of basic modules}\label{voacon} We follow \cite{FLM,LL} to review the vertex operator algebra construction of the basic modules $L(\Lambda_i)$ \cite{FK,S}. Set $$\widehat{\mathfrak{h}}_{*}=\bigoplus_{m \in \mathbb{Z}\setminus\{0\}}(\mathfrak{h} \otimes t^m) \oplus \mathbb{C} c\quad\text{and}\quad \widehat{\mathfrak{h}}^{< 0}=\bigoplus_{m <0}\mathfrak{h} \otimes t^m.$$ Let $M(1)=S(\widehat{\mathfrak{h}}^{< 0})$ be the Fock space for the Heisenberg algebra $\widehat{\mathfrak{h}}_{*}$ with $h(-m)$ acting as multiplication and $h(m)$ acting as differentiation on $M(1)$ for all $h \in \mathfrak{h}$ and $ m\in \mathbb{N}$.
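A rank-one toy model makes this action concrete: realize $M(1)$ as polynomials in variables $x_1,x_2,\ldots$, let $h(-m)$ act as multiplication by $x_m$ and $h(m)$ as the scaled derivative $m\,\partial/\partial x_m$; the Heisenberg relation $[h(m),h(-m)]=m\langle h,h\rangle c$ then acts as the scalar $m$. The following sketch uses the normalization $\langle h,h\rangle=1$ and level $c=1$ purely for illustration; it is not the paper's construction.

```python
# Vectors of the toy M(1) are dicts {exponent tuple: coefficient}; the entry
# e[m-1] of a key is the exponent of x_m.  All keys below have length 2.

def mul(poly, m):
    """Action of h(-m): multiplication by x_m."""
    out = {}
    for exp, coeff in poly.items():
        e = list(exp)
        e[m - 1] += 1
        out[tuple(e)] = out.get(tuple(e), 0) + coeff
    return out

def ann(poly, m):
    """Action of h(m) for m > 0: the scaled derivative m * d/dx_m."""
    out = {}
    for exp, coeff in poly.items():
        if exp[m - 1] > 0:
            e = list(exp)
            e[m - 1] -= 1
            out[tuple(e)] = out.get(tuple(e), 0) + coeff * m * exp[m - 1]
    return out

# [h(m), h(-m)] acts as the scalar m on any vector.
v = {(2, 1): 3, (0, 3): 1}          # 3*x1^2*x2 + x2^3
for m in (1, 2):
    lhs = ann(mul(v, m), m)
    rhs = mul(ann(v, m), m)
    comm = {e: lhs.get(e, 0) - rhs.get(e, 0) for e in set(lhs) | set(rhs)}
    comm = {e: c for e, c in comm.items() if c != 0}
    assert comm == {e: m * c for e, c in v.items()}
```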
Consider the tensor products $$V_P = M(1)\otimes \mathbb{C}\left[P\right]\quad\text{and}\quad V_Q = M(1)\otimes \mathbb{C}\left[Q\right], $$where $\mathbb{C}\left[P\right]$ and $\mathbb{C}\left[Q\right]$ denote the group algebras of the weight lattice $P$ and of the root lattice $Q$ with respective bases $\{ e^{\lambda}: \lambda \in P\}$ and $\{ e^{\alpha}: \alpha \in Q\}$. We use the identification of group elements $e^{\lambda} = 1 \otimes e^{\lambda} \in V_P$. Let $e_{\lambda}\colon V_P\to V_P$ be the linear isomorphism defined by \begin{equation}\label{voaop3} e_{\lambda}e^{\mu}= \epsilon(\lambda, \mu)e^{\mu+\lambda} \quad\text{for all }\lambda,\mu\in P, \end{equation} where $\epsilon $ is a certain map $ P\times P \to \mathbb{C}^{\times}$ satisfying $\epsilon(\lambda, 0)=\epsilon(0,\lambda)=1$ for all $\lambda\in P$; see \cite{FLM,LL} for more details. The space $V_Q$ is equipped with a structure of a vertex operator algebra, with $V_P$ being a $V_Q$-module, by \begin{equation}\nonumber Y (e^{\lambda}, z) = E^{-}(-\lambda, z)E^{+}(-\lambda, z)e_{\lambda}z^{\lambda},\quad \text{where} \quad E^{\pm}(-\lambda, z)=\exp \left( -\sum_{n \geqslant 1}\lambda(\pm n)\frac{z^{\mp n}}{\pm n}\right) \end{equation} and $z^{\lambda} = 1 \otimes z^{\lambda}$ acts by $ z^{\lambda}e^{\mu} = z^{\langle \lambda, \mu \rangle}e^{\mu}$ for all $\lambda, \mu \in P$. Moreover, the space $V_P$ acquires a structure of a level one $\widetilde{\mathfrak{g}}$-module via $$x_{\alpha}(m)= \mathop{\mathrm{Res}}_z z^m Y (e^{\alpha}, z)\qquad \text{for }\alpha \in R\text{ and }m \in \mathbb{Z}.$$ With respect to this action, the space $V_Q$ is identified with the standard module $L(\Lambda_0)$, while the irreducible $V_Q$-submodules $V_Qe^{\lambda_i}$ of $V_P$ are identified with the standard modules $ L(\Lambda_i)$ for all $i$ such that the weight $\Lambda_i$ is of level one. The corresponding highest weight vectors are $v_{L(\Lambda_0)} =1$ and $v_{L(\Lambda_i)} = e^{\lambda_i}$.
\subsection{Operators \texorpdfstring{$A_{\lambda_i}$}{A-lambda} and proof of Theorem \ref{thm_baza}}\label{op} Let $b\in B_{L(k\Lambda_0)}$ be a quasi-particle monomial as in \eqref{monomial}, of charge-type $\mathcal{C}$ and dual charge-type $$ \mathcal{D}=\left(r^{(1)}_{l}, \ldots , r^{(k)}_{l};\, \ldots \,;\, r^{(1)}_{2}, \ldots , r^{(k)}_{2}; \, r^{(1)}_{1}, \ldots , r^{(k)}_{1}\right). $$ We now demonstrate how to carry out the procedure from Section \ref{subsec53}, i.e. how to reduce $b$ to obtain a new monomial $b'\in B_{L(k\Lambda_0)}$ such that its charge-type $\mathcal{C}'$ satisfies $\mathcal{C}'<\mathcal{C}$ with respect to linear order \eqref{order1}. Denote by $I(\cdot, z)$ the intertwining operator of type $\binom{V_P}{V_P \,\, V_Q}$, $$ I(w, z)v =\exp (zL(-1))Y(v, -z)w, \ \ \text{where} \ \ w \in V_P, v \in V_Q, $$ see \cite[Sect. 5.4]{FHL}. For $i=1,\ldots ,l$ let $A_{\lambda_i}$ be the constant term of $I(e^{\lambda_i}, z)$, that is \begin{equation}\nonumber A_{\lambda_i}=\mathop{\mathrm{Res}}_z z^{-1}I(e^{\lambda_i}, z). \end{equation} We have \begin{equation} \label{koefintvec} A_{\lambda_i}v_{L(\Lambda_0)}=e^{\lambda_i}\quad\text{for all }i=1,\ldots ,l. \end{equation} In contrast with Section \ref{subsec53}, which relies on the application of the operators $A_{\theta}$ and $e_{\theta}$, here we make use of $A_{\lambda_i}$ and $e_{\lambda_i}$ in a similar fashion. In particular, we employ the following property of $e_{\lambda_i}$: \begin{equation} \label{sv1} e_{\lambda_i} x_{\alpha_j}(m)=(-1)^{\delta_{ij}}x_{\alpha_j}(m-\delta_{ij} )e_{\lambda_i}\quad\text{for all }i,j=1,\ldots ,l\text{ and }m \in \mathbb{Z}, \end{equation} see \cite{CLM3} for more details. Moreover, we use the fact that the operators $A_{\lambda_i}$ commute with the action of $x_{\alpha}(z)$ for all $\alpha \in R$, which comes as a consequence of the commutator formula for $x_{\alpha}(z)$ and $I(e^{\lambda_i}, z)$; see \cite[Sect. 5.4]{FHL}.
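The way \eqref{sv1} is used below, commuting $e_{\lambda_i}$ past a quasi-particle monomial, amounts to simple bookkeeping: every factor of color $i$ contributes a sign and a mode shift. A sketch with an ad hoc encoding (a monomial of charge-one quasi-particles is a list of (color, mode) pairs; not the paper's notation):

```python
def move_e_left(monomial, i):
    """Commute e_{lambda_i} from the right of a product of charge-one
    quasi-particles to the left.  Reading (sv1) as
    x_{alpha_j}(m) e_{lambda_i} = (-1)^{delta_ij} e_{lambda_i} x_{alpha_j}(m + delta_ij),
    each factor of color i flips the sign and raises its mode by 1."""
    sign, shifted = 1, []
    for color, mode in monomial:
        if color == i:
            sign, mode = -sign, mode + 1
        shifted.append((color, mode))
    return sign, shifted

# e_{lambda_1} moved through x_{alpha_2}(-3) x_{alpha_1}(-2) x_{alpha_1}(-1):
# the two color-1 factors give sign (-1)^2 = 1 and modes raised by 1.
sign, mono = move_e_left([(2, -3), (1, -2), (1, -1)], 1)
assert sign == 1 and mono == [(2, -3), (1, -1), (1, 0)]
```

This is exactly the mechanism producing the monomial $b^+$ below, whose color-$1$ energies are raised by $1$.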
As in Section \ref{subsec52}, denote by $\pi_{\mathcal{D}}$ the projection of the principal subspace $W_{L(k\Lambda_0)}$ on $$ W_{(r^{(k)}_{l};r^{(k)}_{l-1};\ldots;r_{2}^{(k)};r_{1}^{(k)})} \otimes \cdots \otimes W_{(r^{(1)}_{l};r^{(1)}_{l-1};\ldots;r_{2}^{(1)};r_{1}^{(1)})} \subset W_{L(\Lambda_0)}^{\otimes k}\subset L(\Lambda_0)^{\otimes k}, $$ where $W_{(r^{(t)}_{l};\ldots; r_{2}^{(t)};r_{1}^{(t)})}$ denote the $\mathfrak{h}$-weight subspaces of the level $1$ principal subspace $W_{L(\Lambda_0)}$ of the weight $r^{(t)}_{l}\alpha_l+\cdots+r^{(t)}_{2}\alpha_2+ r_{1}^{(t)}\alpha_1 \in R$. Arguing as in Section \ref{subsec53}, we conclude that the image of $\pi_{\mathcal{D}}\, b v_{L(k\Lambda_0)}\in W_{L(k\Lambda_0)}\subset W_{L(\Lambda_0)}^{\otimes k}$ with respect to the operator $$ (A_{\lambda_1})_s \coloneqq\underbrace{1\otimes\cdots\otimes 1}_{k-s}\otimes A_{\lambda_1} \otimes \underbrace{1\otimes\cdots \otimes 1}_{s-1},\qquad\text{where}\qquad s=n_{1,1}, $$ equals the coefficient of the variables \begin{equation} \label{varsDE} z_{r_{l}^{(1)},l}^{-m_{r_{l}^{(1)},l}-n_{r_{l}^{(1)},l}}\cdots z_{2,1}^{-m_{2,1}-n_{2,1}} z_{1,1}^{-m_{1,1}-n_{1,1}} \end{equation} in the expression \begin{equation}\label{513} (A_{\lambda_1})_s \, \pi_\mathcal{D}\, x_{n_{r_{l}^{(1)},l}\alpha_{l}}(z_{r_{l}^{(1)},l}) \cdots x_{n_{2,1}\alpha_{1}}(z_{2,1}) x_{n_{1,1}\alpha_{1}}(z_{1,1}) v_{L(k\Lambda_0)}. \end{equation} Moreover, the $s$-th tensor factor in \eqref{513} (from the right) equals $$ F_s=\Big(x_{n_{r^{(s)}_{l},l}^{(s)}\alpha_{l}}(z_{r_{l}^{(s)},l})\cdots x_{n_{1,l}^{(s)}\alpha_{l}}(z_{1,l})\Big) \cdots \Big( x_{n_{r^{(s)}_{1},1}^{(s)}\alpha_{1}}(z_{r_{1}^{(s)},1})\cdots x_{n{_{1,1}^{(s)}\alpha_{1}}}(z_{1,1}) \Big) e^{\lambda_1}, $$ where the integers $n_{p,i}^{(t)}$ are given by $$ 0 \leqslant n^{(k)}_{p,i}\leqslant \ldots \leqslant n^{(2)}_{p,i} \leqslant n^{(1)}_{p,i} \leqslant 1 \quad\text{and}\quad n_{p,i}=\sum_{t=1}^k n^{(t)}_{p,i}\qquad\text{for all } i=1,\ldots, l. 
$$ By combining \eqref{voaop3} and \eqref{sv1} we get \begin{equation}\label{opetDE} F_s =(-1)^{r_{1}^{(s)}}e_{\lambda_1}\, F'_s\, z_{r_{1}^{(s)},1}\cdots z_{2,1} z_{1,1}, \end{equation} where $F'_s$ denotes the expression obtained from $F_s$ by replacing $e^{\lambda_1}$ with $v_{L(\Lambda_0)}$. Recall the notation from \eqref{briefly}. By taking the coefficient of variables \eqref{varsDE} in \eqref{opetDE} we have $$ (A_{\lambda_1})_s\pi_{\mathcal{D}}\,bv_{L(k\Lambda_0)} =(-1)^{r_{1}^{(s)}}(e_{\lambda_1})_s \,\pi_{\mathcal{D}}\,b^{+}v_{L(k\Lambda_0)}, $$ where $(e_{\lambda_1})_s $ denotes the action of $e_{\lambda_1}$ on the $s$-th tensor factor (from the right) and \begin{align*} b^{+} =b_{\alpha_{l}}\, \cdots\, b_{\alpha_{2}}b_{\alpha_{1}}^{<s}\,b_{\alpha_{1}}^{s} \quad\text{with}\quad &b_{\alpha_{1}}^{<s}=x_{n_{r_{1}^{(1)},1}\alpha_{1}}(m_{r_{1}^{(1)},1})\cdots x_{n_{r_{1}^{(s)}+1,1}\alpha_{1}}(m_{r_{1}^{(s)}+1,1})\\ \quad\text{and }\quad &b_{\alpha_{1}}^{s}= x_{n_{r_{1}^{(s)},1}\alpha_{1}}(m_{r_{1}^{(s)},1}+1)\cdots x_{n_{1,1}\alpha_{1}}(m_{1,1}+1).& \end{align*} Note that the monomial $b^+$ belongs to $B_{L(k\Lambda_0)}$. As in Section \ref{subsec53}, we can now continue to apply this procedure until we obtain a monomial $b'\in B_{L(k\Lambda_0)}$ of charge-type $\mathcal{C}'<\mathcal{C}$. Finally, by repeating the arguments from Section \ref{subsec54} almost verbatim, we can prove the linear independence of the set $\mathfrak{B}_{L(k\Lambda_0)}$. However, in contrast with Section \ref{subsec54}, where the quasi-particle basis in type $F_4^{(1)}$ was reduced to a basis in type $C_3^{(1)}$, the quasi-particle basis in type $D_l^{(1)}$, $E_6^{(1)}$, $E_7^{(1)}$ or $E_8^{(1)}$ is reduced, after a sufficient number of steps, to a basis in type $A_1^{(1)}$ from Theorem \ref{thm_baza}. Note that such a modification of the argument is possible because we have the operators $A_{\lambda_i}$ and $e_{\lambda_i}$ satisfying \eqref{koefintvec} and \eqref{sv1} at our disposal; cf. the corresponding properties \eqref{eq:21} and \eqref{eq:22} for $\alpha=\theta$.
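The uniqueness of the splitting $n_{p,i}=\sum_{t=1}^k n^{(t)}_{p,i}$ with $0\leqslant n^{(k)}_{p,i}\leqslant\ldots\leqslant n^{(1)}_{p,i}\leqslant\nu_i$, used both here (where the bound is $1$) and in Section \ref{subsec52} (where $\nu_i=2$ for the colors $i=3,4$, together with the requirement that at most one $n^{(t)}_{p,i}$ equals $1$), can be confirmed by brute force. A sketch, assuming these values of $\nu_i$ as indicated by the respective bounds in the text:

```python
from itertools import product

def decompositions(n, k, nu, at_most_one_1=False):
    """All weakly decreasing tuples (n^(1), ..., n^(k)) with 0 <= n^(t) <= nu
    and sum n; optionally require that at most one entry equals 1."""
    out = []
    for tup in product(range(nu + 1), repeat=k):
        if sum(tup) != n or any(tup[t] < tup[t + 1] for t in range(k - 1)):
            continue
        if at_most_one_1 and sum(1 for x in tup if x == 1) > 1:
            continue
        out.append(tup)
    return out

# Bound nu = 1 (the simply laced case above): the splitting is always unique.
assert all(len(decompositions(n, 5, 1)) == 1 for n in range(6))
# Bound nu = 2 (colors i = 3, 4 earlier): unique once at most one entry is 1.
assert all(len(decompositions(n, 5, 2, at_most_one_1=True)) == 1 for n in range(11))
```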
\subsection{Proof of Theorems \ref{thm_baza_DE} and \ref{thm_prezentacija_DE}} \label{proof_DE} Let $\widetilde{\mathfrak{g}}$ be the affine Lie algebra of type $D_l^{(1)}$, $E_6^{(1)}$ or $E_7^{(1)}$ and let $\Lambda=k_0\Lambda_0 + k_j\Lambda_j$ be an arbitrary rectangular weight, as defined in Section \ref{subsec23}. First, we prove that the set $\mathfrak{B}_{L(\Lambda)}$ is linearly independent. As in Section \ref{subsec52}, we regard the standard module $L(\Lambda)$ as the submodule of $L(\Lambda_0)^{\otimes k_0}\otimes L(\Lambda_j)^{\otimes k_j}$ generated by the highest weight vector $v_{L(\Lambda)}= v_{L(\Lambda_0)}^{\otimes k_0}\otimes v_{L(\Lambda_j)}^{\otimes k_j}$. Suppose that \begin{equation}\label{eq:d1DE67} \sum_{a \in A} c_{a}\, b^a v_{L(\Lambda)}=0, \quad\text{where}\quad c_a\in\mathbb{C},\, c_a\neq 0\text{ for all }a\in A, \end{equation} $A$ is a finite nonempty set and all $b^a\in B_{L(\Lambda)}$ possess the same color-type. Let $b^{a_0}$ be a monomial of dual charge-type $\mathcal{D}$ such that $b^{a_0}< b^a$ for all $a\in A$, $a\neq a_0$, with respect to the linear order \eqref{order1}. Applying the corresponding projection $\pi_{\mathcal{D}}$, which is defined in parallel with Section \ref{op}, to the linear combination \eqref{eq:d1DE67}, we obtain \begin{equation}\label{eq:d2DE67} \sum_{a \in A} c_{a}\, \pi_{\mathcal{D}}\,b^a v_{L(\Lambda)}=0. \end{equation} By Section \ref{voacon}, the highest weight vector $v_{L(\Lambda)}$ is identified with $1^{\otimes k_0} \otimes (e^{\lambda_j})^{\otimes k_j}$, so that, due to \eqref{voaop3}, we have $$ v_{L(\Lambda)}=1^{\otimes k_0} \otimes (e^{\lambda_j})^{\otimes k_j} =(1^{\otimes k_0} \otimes e_{\lambda_j}^{\otimes k_j}) \,1^{\otimes k}=(1^{\otimes k_0} \otimes e_{\lambda_j}^{\otimes k_j}) v_{L(\Lambda_0)}^{\otimes k}\quad\text{for }k=k_0+k_j. 
$$ Therefore, linear combination \eqref{eq:d2DE67} can be expressed as $$ \sum_{a \in A} c_{a}\, \pi_{\mathcal{D}}\,b^a (1^{\otimes k_0} \otimes e_{\lambda_j}^{\otimes k_j}) v_{L(\Lambda_0)}^{\otimes k}=0. $$ By employing \eqref{sv1} to move $1^{\otimes k_0}\otimes e_{\lambda_j}^{\otimes k_j}$ all the way to the left and then dropping the invertible operator, we get $$ \sum_{a \in A} c_{a}\, \pi_{\mathcal{D}}\, \tilde{b}^a v_{L(\Lambda_0)}^{\otimes k } =0 $$ for some quasi-particle monomials $\tilde{b}^a$. Using the fact that the original monomials $b^a$ belong to $B_{L(\Lambda)}$ one can verify that all $\tilde{b}^a$ belong to $B_{L(k\Lambda_0)}$. Therefore, due to the identification $v_{L(\Lambda_0)}^{\otimes k } =v_{L(k\Lambda_0)}$, the linear independence of the set $\mathfrak{B}_{L(\Lambda)}$ now follows from Theorem \ref{thm_baza}. We now proceed as in Section \ref{subsec55} and construct a spanning set for $U(\widetilde{\mathfrak{n}}_+)/I_{L(\Lambda)}$. We denote the image of the element $x\in U(\widetilde{\mathfrak{n}}_+)$ in the quotient $U(\widetilde{\mathfrak{n}}_+)/I_{V}$, where $V=L(k\Lambda_0),L(\Lambda)$, by $\bar{x}$. Let $\bar{\mathfrak{B}}_{L(\Lambda)}$ be the set of all monomials \begin{equation}\label{monomialbar}\tag{$\bar{m} $} \bar{b}= \Big(\bar{x}_{n_{r_{l}^{(1)},l}\alpha_{l}}(m_{r_{l}^{(1)},l})\ldots \bar{x}_{n_{1,l}\alpha_{l}}(m_{1,l})\Big)\ldots \Big( \bar{x}_{n_{r_{1}^{(1)},1}\alpha_{1}}(m_{r_{1}^{(1)},1})\ldots \bar{x}_{n_{1,1}\alpha_{1}}(m_{1,1})\Big) \end{equation} in $U(\widetilde{\mathfrak{n}}_+)/I_{L(\Lambda)}$ such that their charges and energies satisfy \begin{equation}\label{uvjeti} n_{r_{i}^{(1)},i}\leqslant \ldots \leqslant n_{1,i}\quad\text{and}\quad m_{r_{i}^{(1)},i}\leqslant \ldots \leqslant m_{1,i}\qquad\text{for all } i=1,\ldots,l \end{equation} and difference conditions \eqref{d1}, \eqref{d2DE67} and \eqref{d3}. 
It is clear from Theorem \ref{thm_baza} that the set of all monomials $\bar{b}$ as in \eqref{monomialbar} satisfying \eqref{uvjeti} and difference conditions \eqref{d1}, \eqref{d2} and \eqref{d3} spans the quotient $U(\widetilde{\mathfrak{n}}_+)/I_{L(\Lambda)}$. Suppose that such a monomial $\bar{b}$ does not satisfy the more restrictive condition \eqref{d2DE67}. Introduce the generating functions $$ \bar{X}_V= \bar{x}_{n_{r_{l}^{(1)},l}\alpha_{l}}(z_{r_{l}^{(1)},l}) \cdots \bar{x}_{n_{2,1}\alpha_{1}}(z_{2,1}) \bar{x}_{n_{1,1}\alpha_{1}}(z_{1,1})\quad\text{for }V=L(k\Lambda_0),L(\Lambda), $$ where the subscript $V$ indicates that the coefficients of $\bar{X}_V$ are regarded as elements of the quotient $U(\widetilde{\mathfrak{n}}_+)/I_{V}$. Clearly, $\bar{b}$ equals the coefficient of the variables $$ z_{r_{l}^{(1)},l}^{-m_{r_{l}^{(1)},l}-n_{r_{l}^{(1)},l}}\cdots z_{2,1}^{-m_{2,1}-n_{2,1}} z_{1,1}^{-m_{1,1}-n_{1,1}} $$ in $\bar{X}_{L(\Lambda)}$. By Theorem \ref{thm_prezentacija} we have $W_{L(k\Lambda_0)}\cong U(\widetilde{\mathfrak{n}}_+)/I_{L(k\Lambda_0)}$. Therefore, due to commutation relations $$ (z_{p,i}-z_{q,i'})^{ M_{i}} x_{n_{q,i'}\alpha_{i'}}(z_{q,i'}) x_{n_{p,i}\alpha_{i}}(z_{p,i}) =(z_{p,i}-z_{q,i'})^{M_{i} } x_{n_{p,i}\alpha_{i}}(z_{p,i}) x_{n_{q,i'}\alpha_{i'}}(z_{q,i'}) $$ with $M_{i}=\min \left\{\textstyle n_{q,i'},n_{p,i}\right\}$, the product $P \bar{X}_{L(k\Lambda_0)}$, where $P$ is the Laurent polynomial $$ P=\prod_{i=2}^l \prod_{q=1}^{r_{i'}^{(1)}} \prod_{p=1}^{r_{i}^{(1)}} \left( 1-\frac{z_{q,i'}}{z_{p,i}} \right)^{\min \left\{ n_{q,i'},n_{p,i} \right\} }, $$ belongs to \begin{equation}\label{equation123} \prod_{i=1}^{l} \prod_{p=1}^{r_{i}^{(1)}} z_{p,i}^{-\sum_{q=1}^{r_{i'}^{(1)}} \min\left\{n_{q,i'}, n_{p,i}\right\}} ( U(\widetilde{\mathfrak{n}}_+)/I_{V})[[z_{r_{l}^{(1)},l},\ldots ,z_{1,1}]] \end{equation} for $V=L(k\Lambda_0)$. This implies that the product $P \bar{X}_{L(\Lambda)}$ belongs to \eqref{equation123} for $V=L(\Lambda)$. 
However, every vertex operator $\bar{x}_{n\alpha_i}(z)$ in the product $P \bar{X}_{L(\Lambda)}$ can be moved all the way to the right. By \eqref{idealDE} we have $x_{\alpha_j}(-1)^{k_0+1}\in I_{L(\Lambda)}$, so that each $\bar{x}_{n\alpha_i}(z)$ increases the power of its variable $z$ in \eqref{equation123} by $\sum_{t=1}^n \delta_{i j_t}$. Therefore, we have \begin{equation}\label{equation1234} P \bar{X}_{L(\Lambda)} \in \prod_{i=1}^{l} \prod_{p=1}^{r_{i}^{(1)}} z_{p,i}^{\sum_{t=1}^{n_{p,i}} \delta_{i j_t}-\sum_{q=1}^{r_{i'}^{(1)}} \min\left\{n_{q,i'}, n_{p,i}\right\}} ( U(\widetilde{\mathfrak{n}}_+)/I_{L(\Lambda)})[[z_{r_{l}^{(1)},l},\ldots ,z_{1,1}]]. \end{equation} By employing \eqref{equation1234} and repeating the corresponding part of the proof of \cite[Thm. 5.1]{G}, the monomial $\bar{b}$ can be expressed as a linear combination of elements of $\bar{\mathfrak{B}}_{L(\Lambda)}$. Hence we conclude that the set $\bar{\mathfrak{B}}_{L(\Lambda)}$ spans the quotient $U(\widetilde{\mathfrak{n}}_+)/I_{L(\Lambda)}$. Since the ideal $ I_{L(\Lambda)}$ belongs to the kernel of the map $f_{L(\Lambda)}$ defined by \eqref{map}, Theorems \ref{thm_baza_DE} and \ref{thm_prezentacija_DE} can now be verified by arguing as in Section \ref{subsec56}. \subsection{Operator \texorpdfstring{$A_\theta$}{A-theta} revisited} \label{a} As with type $G$ in \cite{B3}, the linear independence proof in type $F$ employs a certain operator $A_\theta = x_{\theta}(-1)$; see Sections \ref{subsec53} and \ref{subsec54}. In this section we show that the operator $A_\theta$ associated with the maximal root $\theta$ in type $E$ can also be used to verify the linear independence, but of different bases. First, for $\mathfrak{g}=E_l$ set $$ (i_1,\ldots ,i_l; i''_3,\ldots ,i''_l) = \begin{cases} (1,7,2,3,4,5,6,8; 1,2,3,4,5,5),&\text{if }l=8,\\ (1,6,5,4,3,2,7; 6,5,4,3,3),&\text{if }l=7,\\ (6,5,4,3,2,1; 5,4,3,2),&\text{if }l=6. 
\end{cases} $$ Introduce the following families of difference conditions: \begin{align} & m_{p,i_j}\leqslant -n_{p,i_j} - 2(p-1) n_{p,i_j}\quad \text{for}\quad p=1,\ldots, r_{i_j}^{(1)}\quad\text{and}\quad j=1,2 ;\label{cd1}\tag{$c_2^{0}$}\\ & m_{p,i_j}\leqslant -n_{p,i_j} + \sum_{q=1}^{r_{i_j''}^{(1)}}\min\left\{n_{q,i_j''},n_{p,i_j}\right\} - 2(p-1) n_{p,i_j}\quad \text{for}\quad p=1,\ldots, r_{i_j}^{(1)}; \label{cd2}\tag{$c_2^{j}$}\\ & m_{p,i_j}\leqslant -n_{p,i_j} + \sum_{s=i''_j,i_k}\sum_{q=1}^{r_{s}^{(1)}}\min\left\{n_{q,s},n_{p,i_j}\right\} - 2(p-1) n_{p,i_j}\quad \text{for}\quad p=1,\ldots, r_{i_j}^{(1)}.\label{cd3}\tag{$c_2^{j,k}$} \end{align} Let $B_{L(k\Lambda_0)}^{E_l}$ be the set of all monomials \eqref{monomial} which satisfy \eqref{uvjeti} and the following difference conditions: \begin{itemize} \item[$\circ$] \eqref{d1}, \eqref{d3}, \eqref{cd1}, \eqref{cd2} for $j=3,4,5,6,8$ and \eqref{cd3} for $(j,k)=(7,2)$ if $l=8$; \item[$\circ$] \eqref{d1}, \eqref{d3}, \eqref{cd1}, \eqref{cd2} for $j=3,4,5,7$ and \eqref{cd3} for $(j,k)=(6,1)$ if $l=7$; \item[$\circ$] \eqref{d1}, \eqref{d3}, \eqref{cd1}, \eqref{cd2} for $j=3, 5,6$ and \eqref{cd3} for $(j,k)=(4,1)$ if $l=6$. \end{itemize} \begin{pro}\label{proposition} For any positive integer $k$ the set $$\mathfrak{B}_{L(k\Lambda_0)}^{E_l}=\left\{b v_{L(k\Lambda_0)} : b\in B_{L(k\Lambda_0)}^{E_l}\right\}\subset W_{L(k\Lambda_0)}$$ forms a basis of the principal subspace $W_{L(k\Lambda_0)}$ of the standard module $L(k\Lambda_0)$ for the affine Lie algebra in type $E_l^{(1)}$. \end{pro} \begin{prf} The maximal root $\theta $ in type $E$ satisfies \begin{equation}\label{athetae} \alpha_i(\theta^\vee) =\delta_{6i}\quad\text{for }\mathfrak{g}=E_6 \qquad\text{and}\qquad \alpha_i(\theta^\vee) =\delta_{1i}\quad\text{for }\mathfrak{g}=E_7,E_8. 
\end{equation} Therefore, as described in Section \ref{subsec54}, by applying the procedure from Section \ref{subsec53} to an arbitrary linear combination of elements of $\mathfrak{B}_{L(k\Lambda_0)}^{E_8}$, one can remove all quasi-particles of color $1$ from the corresponding quasi-particle monomials. The resulting linear combination can be identified as a linear combination of elements of $\mathfrak{B}_{L(k\Lambda_0)}^{E_7}$; see Figure \ref{figure}. Due to \eqref{athetae}, by applying the same procedure once again, one can remove all quasi-particles of color $1$\footnote{ Note that the quasi-particles of color $1$ in type $E_7$ correspond, with respect to the aforementioned identification, to the quasi-particles of color $7$ in type $E_8$; see Figure \ref{figure}. } from the corresponding quasi-particle monomials, thus obtaining an expression which can be identified as a linear combination of elements of the basis $\mathfrak{B}_{L(k\Lambda_0)}$ from Theorem \ref{thm_baza} for $\mathfrak{g}=D_6$; see Figure \ref{figure}. As for type $E_6$, due to \eqref{athetae}, by applying the procedure from Section \ref{subsec53} to an arbitrary linear combination of elements of $\mathfrak{B}_{L(k\Lambda_0)}^{E_6}$, one can remove all quasi-particles of color $6$ from the corresponding quasi-particle monomials. The resulting expression can be identified as a linear combination of elements of the basis $\mathfrak{B}_{L(k\Lambda_0)}$ from Theorem \ref{thm_baza} for $\mathfrak{g}=A_5$; see Figure \ref{figure}. Therefore, the proposition follows from Theorem \ref{thm_baza} and the fact that the characters of the corresponding bases coincide, which is verified by arguing as in Section \ref{sec40}. \end{prf} \section{Character formulae and combinatorial identities}\label{sec40} Let $\delta=\sum_{i=0}^l a_i \alpha_i$ be the imaginary root as in \cite[Chap. 5]{K}, where the integers $a_i$ denote the labels in the Dynkin diagram \cite[Table Aff]{K} for $\widetilde{\mathfrak{g}}$. 
As before, let $V$ denote a standard module or a generalized Verma module. Define the character $\mathop{\mathrm{ch}} W_{V}$ of the corresponding principal subspace $W_V$ by $$ \mathop{\mathrm{ch}} W_{V}=\sum_{m,n_1,\ldots,n_l\geqslant 0} \dim (W_{V})_{-m\delta +n_1\alpha_1 +\ldots + n_l\alpha_l}\, q^{m}y^{n_1}_{1}\cdots y^{n_l}_{l}, $$ where $q, y_1,\ldots, y_l$ are formal variables and $(W_{V})_{-m\delta +n_1\alpha_1 +\ldots + n_l\alpha_l}$ denote the weight subspaces of $W_V$ of weight $-m\delta +n_1\alpha_1 +\cdots + n_l\alpha_l$ with respect to $$\widetilde{\mathfrak{h}} =\mathfrak{h}\otimes \mathbb{C}[t,t^{-1}]\oplus \mathbb{C}c\oplus \mathbb{C}d.$$ In order to simplify our notation, we set $\mu_i =\nu_i/\nu_{i'}$ for $i=2,\ldots ,l$; recall \eqref{niovi3}. Also, we write $$(a;q)_r=\prod_{i=1}^r (1- aq^{i-1}) \ \ \text{for} \ \ r\geqslant 0 \qquad\text{and}\qquad (a; q)_{\infty} =\prod_{i\geqslant 1} (1- aq^{i-1}). $$ Theorem \ref{thm_baza} implies the following character formulae: \begin{thm}\label{thm_karakterL(kLambda0)} {\normalfont(a)} Set $n_i=\sum_{t=1}^{\nu_i k}r_i^{(t)}$ for $i=1,\ldots ,l$. For any integer $k\geqslant 1 $ we have $$ \mathop{\mathrm{ch}} W_{L(k\Lambda_{0})}= \sum_{\substack{r_{1}^{(1)}\geqslant \cdots\geqslant r_{1}^{(\nu_1k)}\geqslant 0\vspace{-5pt}\\ \vdots\vspace{-2pt} \\r_{l}^{(1)}\geqslant \cdots\geqslant r_{l}^{(\nu_lk)}\geqslant 0}} \frac{q^{\sum_{i=1}^l\sum_{t=1}^{\nu_{i} k}r_i^{(t)^2} -\sum_{i=2}^{l}\sum_{t=1}^{k} \sum_{p=0}^{\mu_i-1}r_{i'}^{(t)} r_{i}^{\left(\mu_i t -p\right)} }} {\prod_{i=1}^{l}(q;q)_{r^{(1)}_{i}-r^{(2)}_{i}}\cdots (q;q)_{r^{(\nu_i k)}_{i}}}\, \prod_{i=1}^{l}y^{n_i}_{i}. $$ \noindent {\normalfont(b)} Set $n_i=\sum_{t=1}^{\nu_iu_i}r_i^{(t)}$ for $i=1,\ldots ,l$. 
For any integer $k\geqslant 1 $ we have \begin{eqnarray*}\label{karakterN(kLambda0)} &\mathop{\mathrm{ch}} W_{N (k\Lambda_{0})} =\displaystyle \sum_{\substack{r_{1}^{(1)}\geqslant \cdots\geqslant r_{1}^{(\nu_1u_1)}\geqslant 0\vspace{-5pt}\\ \vdots\vspace{-2pt} \\ \substack{r_{l}^{(1)}\geqslant \cdots\geqslant r_{l}^{(\nu_lu_l)}\geqslant 0\\ u_1, u_2, \ldots , u_l \geqslant 0}}} \frac{q^{\sum_{i=1}^l\sum_{t=1}^{\nu_{i} u_i}r_i^{(t)^2}- \sum_{i=2}^{l}\sum_{t=1}^{u_i}\sum_{p=0}^{\mu_i-1}r_{i'}^{(t)} r_{i}^{\left(\mu_i t -p\right)} }} {\prod_{i=1}^{l}(q;q)_{r^{(1)}_{i}-r^{(2)}_{i}}\cdots (q;q)_{r^{(\nu_iu_i)}_{i}}}\prod_{i=1}^{l}y^{n_i}_{i}. \end{eqnarray*} \end{thm} \begin{proof} We give the proof of this theorem for the case $F_4^{(1)}$, since the proof for the cases $D_l^{(1)}$, $E_6^{(1)}$, $E_7^{(1)}$ and $E_8^{(1)}$ goes analogously. The proof for other types can be found in \cite{G,B1,B2,B3}. In order to determine the character of $W_{L(k\Lambda_{0})}$, we write conditions on energies of quasi-particles of the set $B_{W_{L(k\Lambda_0)}}$ in terms of $r_i^{(s)}$. 
For a fixed color-type $(n_{4},n_{3},n_{2},n_{1})$, charge-type $$\quad\mathcal{C}=\left( n_{r_{4}^{(1)},4},\ldots , n_{1,4}; \, n_{r_{3}^{(1)},3},\ldots , n_{1,3};\, n_{r_{2}^{(1)},2},\ldots , n_{1,2};\, n_{r_{1}^{(1)},1},\ldots , n_{1,1}\right)$$ and dual-charge-type $$ \mathcal{D}=\left(r^{(1)}_{4}, \ldots , r^{(2k)}_{4};\, r^{(1)}_{3}, \ldots , r^{(2k)}_{3};\, r^{(1)}_{2}, \ldots , r^{(k)}_{2}; \, r^{(1)}_{1}, \ldots , r^{(k)}_{1}\right) $$ the following equalities can be verified by a direct calculation: \begin{equation}\label{uvjet1} \sum_{p=1}^{r_{i}^{(1)}} (2(p-1)n_{p,i}+n_{p,i})= \sum_{t=1}^{k}r^{(t)^{2}}_{i} \quad\text{for }i=1,2,\end{equation} \begin{equation}\label{uvjet2}\sum_{p=1}^{r_{i}^{(1)}} (2(p-1)n_{p,i}+n_{p,i})= \sum_{t=1}^{2k}r^{(t)^{2}}_{i}\quad\text{for }i=3,4,\end{equation} \begin{equation}\label{uvjet3} \sum_{p=1}^{r^{(1)}_{2}}\sum_{q=1}^{r^{(1)}_1}\mathrm{min}\{n_{p,2},n_{q,1}\}=\sum_{t=1}^{k}r^{(t)}_1r^{(t)}_2,\qquad \sum_{p=1}^{r^{(1)}_{4}}\sum_{q=1}^{r^{(1)}_3}\mathrm{min}\{n_{p,4},n_{q,3}\}=\sum_{t=1}^{2k}r^{(t)}_3r^{(t)}_4,\end{equation} \begin{equation}\label{uvjet5}\sum_{p=1}^{r^{(1)}_{3}}\sum_{q=1}^{r^{(1)}_2}\mathrm{min}\{n_{p,3},2n_{q,2}\}=\sum_{t=1}^{k}r^{(t)}_2(r^{(2t-1)}_3+r^{(2t)}_3). 
\end{equation} By combining \eqref{uvjet1}--\eqref{uvjet5}, difference conditions \eqref{d1}--\eqref{d3} and the formula $$ \frac{1}{(q)_r}=\sum_{j\geqslant 0}p_r(j)q^j, $$ where $p_r(j)$ denotes the number of partitions of $j$ with at most $r$ parts, we get \begin{align}\nonumber \mathrm{ch} \ W_{L (k\Lambda_{0})} =& \sum_{\substack{r_{1}^{(1)}\geqslant \cdots\geqslant r_{1}^{(k)}\geqslant 0\\ r_{2}^{(1)}\geqslant \cdots\geqslant r_{2}^{(k)}\geqslant 0}} \frac{q^{\sum_{i=1}^{2}\sum_{t=1}^k r^{(t)^{2}}_{i}-\sum_{t=1}^kr^{(t)}_1r^{(t)}_{2}}}{\prod_{i=1}^{2}(q;q)_{r^{(1)}_{i}-r^{(2)}_{i}}\cdots (q;q)_{r^{(k)}_{i}}}\prod_{i=1}^{2}y^{n_i}_{i}\\ \nonumber & \times \sum_{\substack{r_{3}^{(1)}\geqslant \cdots\geqslant r_{3}^{(2k)}\geqslant 0\\r^{(1)}_{4}\geqslant \ldots \geqslant r^{(2k)}_{4}\geqslant 0}} \frac{q^{\sum_{i=3}^{4}\sum_{t=1}^{2k}r^{(t)^{2}}_{i}-\sum_{t=1}^{2k}r^{(t)}_3r^{(t)}_{4}-\sum_{t=1}^kr_{2}^{(t)}(r_{3}^{(2t-1)}+r_{3}^{(2t)})}}{\prod_{i=3}^{4}(q;q)_{r^{(1)}_{i}-r^{(2)}_{i}}\cdots (q;q)_{r^{(2k)}_{i}}}\prod_{i=3}^{4}y^{n_i}_{i},& \end{align} where $n_i=\sum_{t=1}^{k}r_i^{(t)}$ for $i=1,2$ and $n_i=\sum_{t=1}^{2k}r_i^{(t)}$ for $i=3,4$, as required. The character formula for the generalized Verma module is verified analogously. \end{proof} Theorem \ref{thm_baza_DE} implies the following character formulae in types $D_l^{(1)}$, $E_6^{(1)}$ and $E_7^{(1)}$ while the case $A_l^{(1)}$ is due to \cite{G}. \begin{thm} Set $n_i=r_i^{(1)}+\cdots +r_i^{(k)}$ for $i=1,\ldots ,l$. 
For any rectangular weight $\Lambda=k_0\Lambda_0+k_j\Lambda_j$ of level $k=k_0+k_j$ we have $$ \mathop{\mathrm{ch}} W_{L(\Lambda)}= \sum_{\substack{r_{1}^{(1)}\geqslant \cdots\geqslant r_{1}^{(k)}\geqslant 0\vspace{-5pt}\\ \vdots\vspace{-2pt} \\r_{l}^{(1)}\geqslant \cdots\geqslant r_{l}^{(k)}\geqslant 0}} \frac{q^{\sum_{i=1}^l\sum_{t=1}^{ k}r_i^{(t)^2} -\sum_{i=2}^{l}\sum_{t=1}^{k} r_{i'}^{(t)} r_{i}^{\left( t \right)} +\sum_{i=1}^l\sum_{t=1}^k r_i^{(t)}\delta_{i j_t}}} {\prod_{i=1}^{l}(q;q)_{r^{(1)}_{i}-r^{(2)}_{i}}\cdots (q;q)_{r^{( k)}_{i}}}\, \prod_{i=1}^{l}y^{n_i}_{i}. $$ \end{thm} Note that from \eqref{mapbar} we have an isomorphism of $\widetilde{\mathfrak{n}}_+$-modules $W_{N(k\Lambda_{0})}$ and $U(\widetilde{\mathfrak{n}}_+^{< 0})$, so we can obtain the character formula of $W_{N(k\Lambda_{0})}$ by using the Poincar\'{e}--Birkhoff--Witt basis of $U(\widetilde{\mathfrak{n}}_+^{< 0})$ as well. For example, in the case $F_4^{(1)}$, we get \begin{align}\label{t3:2} \mathop{\mathrm{ch}} W_{N (k\Lambda_{0})} =\,\,& \frac{1}{(qy_1, qy_1y_2,qy_1y_2y_3 , qy_1y_2 y_3 y_4,qy_2,qy_2y_3, qy_2y_3y_4,qy_2y_3^2; q)_{\infty}}\\ &\hspace{-60pt}\times \frac{1}{(qy_1y_2y_3^2, qy_1y_2y_3^2 y_4, qy_1y_2 y_3^2y_4^2,qy_1y_2^2y_3^2, qy_1y_2^2y_3^2y_4, qy_1y_2^2y_3^2y_4^2,qy_3,qy_2y_3^2y_4; q)_{\infty}}\nonumber\\ &\hspace{-60pt}\times \frac{1}{(qy_1y_2^2y_3^3y_4,qy_1y_2^2y_3^3y_4^2,qy_1y_2^2y_3^4y_4^2, qy_1y_2^3y_3^4y_4^2, 
qy_1^2y_2^3y_3^4y_4^2, qy_3y_4, qy_2y_3^2y_4^2, qy_4; q)_{\infty}}\nonumber, \end{align} where \begin{equation*}(a_1, \ldots, a_n; q)_{\infty} \coloneqq (a_1; q)_{\infty} \cdots (a_n; q)_{\infty}.\end{equation*} For any positive root $\alpha =a_1\alpha_1+\cdots + a_l\alpha_l\in R_+$ we introduce the following notation: \begin{equation*}\label{oznakazaidentitet} (\alpha; q)_{\infty}=(qy_1^{a_1}\cdots y_l^{a_l};q)_{\infty}, \end{equation*} so that for an arbitrary affine Lie algebra $\widetilde{\mathfrak{g}}$ the character formula \eqref{t3:2} generalizes to \begin{equation}\label{karakter} \mathop{\mathrm{ch}} W_{N (k\Lambda_{0})} = \frac{1} {\prod_{\alpha \in R_+}(\alpha; q)_{\infty} }. \end{equation} Theorem \ref{thm_karakterL(kLambda0)} and \eqref{karakter} imply the following generalization of the Euler--Cauchy theorem; cf. \cite{A}. \begin{thm}\label{thm_identitet} For any untwisted affine Lie algebra $\widetilde{\mathfrak{g}}$ we have \begin{eqnarray*} \displaystyle\frac{1}{\prod_{\alpha \in R_+}(\alpha; q)_{\infty} } = \displaystyle\sum_{\substack{r_{1}^{(1)}\geqslant \cdots \geqslant r_{1}^{(m)} \geqslant \cdots \geqslant 0\vspace{-5pt}\\ \hspace{-7pt}\vdots\vspace{-2pt} \\ \substack{r_{l}^{(1)}\geqslant \cdots \geqslant r_{l}^{(m)} \geqslant \cdots \geqslant 0}}} \frac{q^{\sum_{i=1}^l\sum_{t\geqslant 1} r_i^{(t)^2}-\sum_{i=2}^{l}\sum_{t\geqslant 1} \sum_{p=0}^{\mu_i-1}r_{i'}^{(t)} r_{i}^{\left(\mu_i t -p\right)} }}{\prod_{i=1}^{l}\prod_{j\geqslant 1} (q;q)_{r^{(j)}_{i}-r^{(j+1)}_{i}}}\prod_{i=1}^{l}y^{n_i}_{i}, \end{eqnarray*} where $n_i=\sum_{t\geqslant 1}r_i^{(t)}$ for $i=1,\ldots ,l$ and the sum on the right-hand side goes over all descending infinite sequences of nonnegative integers with finite support. \end{thm} In particular, the theorem produces three new families of combinatorial identities which correspond to types $D$, $E$ and $F$. \section*{Acknowledgement} The authors would like to thank Mirko Primc for useful discussions and support. 
The first author is partially supported by the QuantiXLie Centre of Excellence, a project cofinanced by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (Grant KK.01.1.1.01.0004). \linespread{1.0}
\section{Introduction}\label{sec1} Free probability theory was started by Voiculescu in the 1980's \cite{voi1, voi2, voi3}. It is about calculating moments (or distributions) of non-commutative random variables, such as, random matrices whose entries are classical random variables. In classical probability theory, random variables are usually real-valued and can be extended to be complex-valued. For convenience, let us say that they are real-valued. Therefore, they are commutative. For example, assume $x_1, x_2$ are two independent non-zero random variables and $E$ denotes the expectation. Then,\begin{equation}\label{1.1} E(x_1x_2x_1x_2)=E(x_1^2x_2^2)=E(x_1^2)E(x_2^2)>0, \end{equation} no matter whether $x_1$ and/or $x_2$ have $0$ mean or not, because $x_1$ and $x_2$ are commutative. However, if $x_1$ and $x_2$ are not commutative, then property (\ref{1.1}) may not hold, and two natural questions arise. What will happen to (\ref{1.1})? What does independence mean for non-commutative random variables? Free probability theory addresses the above two questions. It introduces freeness between non-commutative random variables, which is analogous to the independence between classical commutative random variables. It basically says that although $E(x_1x_2x_1x_2)$ may not be equal to $E(x_1^2x_2^2)$, it is $0$ if $x_1$ and $x_2$ are free and both have mean $0$. With this freeness, when a large number of free random variables are summed with proper weights, the sum converges to the classical semicircular distribution. This is the free central limit theorem, which is similar to the classical central limit theorem with the Gaussian distribution replaced by the semicircular distribution. Note that the eigenvalue distribution of a random matrix with entries of independent Gaussian random variables (for simplicity, the symmetry of the matrix is not specified here) also converges to the semicircular distribution when the matrix size goes to infinity. 
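This convergence can be checked numerically. The following sketch (our illustration, not part of the paper; the normalization $(G+G^T)/\sqrt{2N}$ is one common choice) samples a symmetric Gaussian matrix and compares the empirical second and fourth eigenvalue moments with those of the semicircular distribution, which are $1$ and $2$, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
G = rng.standard_normal((N, N))
A = (G + G.T) / np.sqrt(2 * N)  # symmetric Gaussian matrix, scaled with the size

eigvals = np.linalg.eigvalsh(A)
# empirical moments of the eigenvalue distribution; the standard semicircular
# distribution (supported on [-2, 2]) has second moment 1 and fourth moment 2
print(np.mean(eigvals**2))  # close to 1
print(np.mean(eigvals**4))  # close to 2
```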
This suggests a connection between free random variables and large size random matrices. Free probability theory says that there is indeed a strong connection, i.e., random matrices of independent Gaussian random variables become free when the matrix size goes to infinity. In other words, when the size of the matrices is large, these matrices are approximately free. Furthermore, the entries in random matrices can be replaced by free semicircular random variables (called the deterministic equivalent). With this replacement, all the joint moments or cumulants of random matrices can be calculated, which may lead to the calculation of the distributions of the eigenvalues of functions of these random matrices. This is the reason why free probability theory has attracted much attention in the wireless communications and signal processing areas. Massive MIMO systems have been identified as potential candidates for future wireless communications systems. In massive MIMO systems, the channel matrices are random and of large size. Therefore, it is natural to apply free probability theory to do some of the difficult calculations, such as, channel capacity \cite{tul1, cou, anan}. It is particularly interesting to ask, when some statistics of a channel matrix of large size are known, such as, the first two moments (covariances) of the channel coefficients, how we can calculate the channel performance without performing Monte Carlo simulations, which may be hard to do in practice when the channel matrix size is large, such as, for a massive MIMO channel. The main goal of this tutorial paper is to briefly introduce free probability theory and its application to large size random matrices so that an ordinary researcher in the signal processing and communications areas can easily understand it. In the following, we adopt most of the notations in Speicher \cite{spe1,spe2,spe3,spe4}. All the results described below are from \cite{spe1,spe2,spe3,spe4} as well. The remainder of this paper is organized as follows. 
In Section \ref{sec2}, we describe the basics of free random variables and the free central limit theorem without proof. In Section \ref{sec3}, we describe the calculations/relations of joint moments, cumulants, and distributions of multiple free random variables. In Section \ref{sec4}, we describe random matrices and the approximate distributions of their eigenvalues. In Section \ref{sec5}, we describe free deterministic equivalents for random matrices. We also describe how to calculate the Cauchy transforms of random matrices using the second order statistics of their entries. In Section \ref{sec6}, we conclude this paper. \section{Free Random Variables}\label{sec2} For convenience, in the following we will use notations that are as simple as possible, which may be too simplified in terms of mathematical rigor. Let $x_1, x_2,..., x_n$ be $n$ elements that may not be commutative, and $E$ be a linear functional on these elements such that $E(1)=1$. Examples of these elements are matrices, and $E$ is like the expectation of a classical random variable. \begin{DD}\label{def1} Elements (or random variables) $x_1,x_2,..., x_n$ are called free or freely independent, if for any $m$ polynomials $p_k(x)$, $1\leq k\leq m$, with $m\geq 2$, \begin{equation}\label{2.1} E(p_1(x_{i_1})p_2(x_{i_2})\cdots p_m(x_{i_m}))=0, \end{equation} when $E(p_k(x_{i_k}))=0$ for all $k$, $1\leq k\leq m$, and any two neighboring indices $i_l$ and $i_{l+1}$ are not equal, i.e., $1\leq i_1\neq i_2 \neq \cdots \neq i_m\leq n$. \end{DD} From (\ref{2.1}), if $x_1$ and $x_2$ are free, then $E(x_1x_2x_1x_2)=0$ when $E(x_1)=E(x_2)=0$, where $m=4$, $i_1=1, i_2=2, i_3=1, i_4=2$, and polynomials $p_k(x)=x$ for $1\leq k\leq 4$. Comparing with (\ref{1.1}) in the classical commutative case, we see that independent non-zero real-valued random variables are not free. The terminology ``free'' comes from the concept of free groups, where there is no nontrivial relation between the generating elements of a free group. 
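Although concrete free random variables are hard to exhibit at this point, the relation $E(x_1x_2x_1x_2)=0$ can be illustrated numerically through large random matrices, anticipating the connection described later. In the sketch below (our illustration, not from the paper), the normalized trace plays the role of $E$, and two independent symmetric Gaussian matrices act as two centered, approximately free random variables.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500

def sym_gauss(n):
    # symmetric Gaussian matrix normalized so that (1/n) E[Tr X^2] is about 1
    g = rng.standard_normal((n, n))
    return (g + g.T) / np.sqrt(2 * n)

A, B = sym_gauss(N), sym_gauss(N)
phi = lambda X: np.trace(X) / N  # the normalized trace plays the role of E

# both A and B are centered under phi; for free centered variables
# E(x1 x2 x1 x2) = 0, while E(x1^2 x2^2) = E(x1^2) E(x2^2) is about 1 here
print(phi(A @ B @ A @ B))  # close to 0
print(phi(A @ A @ B @ B))  # close to 1
```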
One might want to ask why, in the above definition, polynomials of the random variables $x_k$ are used. It is for convenience later in calculating their joint moments. Note that in the free probability theory context, it is not convenient to directly define density functions (or distribution functions) for noncommutative random variables. However, as we can recall, in classical probability theory, if all the moments of a random variable are known, its characteristic function can often be determined and, therefore, its density function can often be determined as well. Thus, calculating all the joint moments of free random variables may be sufficient for determining their joint distributions. The details will be described in Section \ref{sec3}. The set ${\cal A}_k$ of all polynomials $p(x_k)$ of $x_k$ including the identity element $1=x_k^0$ is called the subalgebra generated by the element $x_k$ for $1\leq k\leq n$. Subalgebras ${\cal A}_1, {\cal A}_2,...,{\cal A}_n$ are called free if and only if the elements $x_1, x_2, ..., x_n$ are free. Clearly, when elements $x_1, x_2, ..., x_n$ are free, for any $n$ polynomials $p_1(x),...,p_n(x)$, the elements $p_1(x_1),...,p_n(x_n)$ are free as well. If elements $x_1, x_2, \cdots, x_n$ are free, they are called free random variables. With the above freeness definition, although one may construct abstract free random variables using various mathematical concepts, it is not easy to show concrete examples of free random variables at this moment. Two sets ${\cal S}_1$ and ${\cal S}_2$ are called free if any element in ${\cal S}_1$ and any element in ${\cal S}_2$ are free. With property (\ref{2.1}), when $\{x_1,x_3\}$ and $x_2$ are free, it is easy to check that $E(x_1x_2)=E(x_1)E(x_2)$ and $E(x_1x_2x_3)=E(x_1x_3)E(x_2)$. In many practical applications, we may need to deal with complex-valued random variables, such as, complex Gaussian, where the complex conjugation $*$ is usually used. 
In correspondence with the complex conjugation, the above freeness becomes $*$-freeness. We say that $x_1,x_2,\cdots,x_n$ are $*$-free, if (\ref{2.1}) holds when the polynomials $p_k(x)$ in Definition \ref{def1} are changed to polynomials $p_k(x, x^*)$ of two variables. If $x=x^*$, element $x$ is called self-adjoint. For example, when $x$ is a matrix and $*$ is the complex conjugate transpose operation, if $x$ is Hermitian, then $x$ is self-adjoint. In this case, $x$ can be diagonalized by a unitary matrix and all its eigenvalues are real-valued. \begin{DD}\label{def2} \begin{itemize} \item[1)] When two random variables $x_1$ and $x_2$ have all the same moments, i.e., $E(x_1^i)=E(x_2^i)$ for all positive integers $i$, they are called identically distributed or having the same distribution. \item[2)] For a sequence of random variables $x_n$, $n=1,2,...$, we say that $x_n$ converges to $x$ in distribution as $n$ goes to infinity, if all the moments of $x_n$ converge to the moments of $x$ as $n$ goes to infinity, i.e., for any positive integer $m$, \begin{equation}\label{2.2} \lim_{n\rightarrow \infty} E(x_n^m) =E(x^m), \end{equation} which is denoted as $\lim_{n\rightarrow \infty} x_n \overset{distr}{=} x$ or $x_n \overset{distr}{\longrightarrow} x$ as $n\rightarrow \infty$. \item[3)] Let $I$ be an index set. For each $i\in I$, let $x_n^{(i)}$, $n=1,2,...$, be a sequence of random variables. We say that $(x_n^{(i)})_{i\in I}$ converges to $(x^{(i)})_{i\in I}$ in distribution, if \begin{equation}\label{2.02} \lim_{n\rightarrow \infty}E(x_n^{(i_1)}\cdots x_n^{(i_k)})=E(x^{(i_1)}\cdots x^{(i_k)}) \end{equation} for all positive integers $k$ and all $i_1,..., i_k\in I$, which is denoted as $$ \lim_{n\rightarrow \infty} (x_n^{(i)})_{i\in I} \overset{distr}{=} (x^{(i)})_{i\in I} \mbox{ or } (x_n^{(i)})_{i\in I} \overset{distr}{\longrightarrow} (x^{(i)})_{i\in I} \mbox{ as }n\rightarrow \infty. 
$$ \end{itemize} \end{DD} The definition in 2) is about the convergence in distribution for a single sequence of random variables, and the definition in 3) is about the convergence in distribution for multiple sequences of random variables jointly. One of the most important results in classical probability theory is the central limit theorem. It says that the normalized sum of independent random variables with fixed total variance converges to a Gaussian random variable when the number of the independent random variables goes to infinity. For free random variables, there is the following free central limit theorem. \begin{TT}\label{thm1} Let $x_k$, $k=1,2,...$, be a sequence of self-adjoint, freely independent, and identically distributed random variables with $E(x_k)=0$ and $E(x_k^2)=\sigma^2$. For a positive integer $n$, let \begin{equation}\label{2.3} S_n=\frac{x_1+x_2+\cdots +x_n}{\sqrt{n}}. \end{equation} Then, $S_n$ converges in distribution to a semicircular element $s$ of variance $\sigma^2$ as $n\rightarrow \infty$, i.e., \begin{equation}\label{2.4} \lim_{n\rightarrow \infty}E(S_n^{i})= \left\{ \begin{array}{ll} \sigma^{i}C_{i/2}, & \mbox{if $i$ is even},\\ 0, & \mbox{if $i$ is odd}, \end{array} \right. \end{equation} where $C_k$ is the $k$th Catalan number and the $(2k)$th moment of the semicircular distribution: \begin{equation}\label{2.5} C_k=\frac{1}{2\pi} \int_{-2}^{2} t^{2k} \sqrt{4-t^2} dt = \frac{1}{k+1} {2k \choose k}. \end{equation} \end{TT} The random variable $s$ in Theorem \ref{thm1} is called a semicircular element in this context and, after being divided by $\sigma$, it has the same distribution as the classical semicircular random variable of density function \begin{equation}\label{2.6} q(t)=\left\{ \begin{array}{ll} \frac{1}{2\pi} \sqrt{4-t^2},& \mbox{if }|t|<2,\\ 0, & \mbox{otherwise}. \end{array}\right. \end{equation} Its moments of even order have the form in (\ref{2.4}), and its moments of odd order are always $0$. 
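The integral formula (\ref{2.5}) for the Catalan numbers can be verified numerically; the midpoint-rule check below is just an illustration (the grid size is an arbitrary choice).

```python
import numpy as np
from math import comb

# check C_k = (1/(2*pi)) * integral_{-2}^{2} t^(2k) sqrt(4 - t^2) dt = binom(2k, k)/(k+1)
M = 400000
h = 4.0 / M
t = -2.0 + h * (np.arange(M) + 0.5)  # midpoints of the subintervals of [-2, 2]
w = np.sqrt(4.0 - t**2)
for k in range(6):
    integral = np.sum(t**(2 * k) * w) * h / (2 * np.pi)
    catalan = comb(2 * k, k) // (k + 1)
    print(k, round(integral, 4), catalan)  # the two values agree
```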
Note that semicircular distributions are the asymptotic distributions of the eigenvalues of Hermitian Gaussian random matrices when the matrix size goes to infinity, which is called Wigner's semi-circle law and will be discussed in more detail in Section \ref{sec4}. \section{Moments, Cumulants, and Cauchy Transforms}\label{sec3} As mentioned earlier, it is not convenient to directly define a density function or probability measure for a noncommutative random variable; instead, all of its moments are defined, and freeness serves to simplify the joint moments of free random variables. In order to see how moments are related to distributions of free random variables, let us first see how a probability measure and its moments are related in classical probability theory. Let $\mu(t)$ be a probability measure on the real line $\mathbb{R}$. Assume all its moments are finite, let $m_i$ be its $i$th moment for a positive integer $i$, and let $\phi(t)$ be its characteristic function, i.e., \begin{equation}\label{3.1} m_i=\int_{\mathbb R} t^i d\mu(t), \mbox{ and } \phi(t)=\int_{\mathbb R} e^{{\bf i}\tau t}d\mu(\tau), \end{equation} where ${\bf i}\overset{\Delta}{=}\sqrt{-1}$. Then, it is easy to see \begin{equation}\label{3.2} m_i={\bf i}^{-i}\phi^{(i)}(0), \mbox{ and } \phi(t)=\sum_{i=0}^{\infty} m_i \frac{({\bf i}t)^i}{i!}, \end{equation} where $\phi^{(i)}(t)$ stands for the $i$th derivative of $\phi(t)$. Furthermore, we can write \begin{equation}\label{3.3} \log(\phi(t))=\sum_{i=1}^{\infty} k_i \frac{({\bf i}t)^i}{i!} \mbox{ with } k_i = {\bf i}^{-i} \left. \frac{d^i}{dt^i} \log(\phi(t))\right|_{t=0}, \end{equation} where $k_i$ are called the cumulants of $\mu(t)$. We will call them the classical cumulants.
The moment sequence $\{m_i\}_{i\geq 0}$ and the cumulant sequence $\{k_i\}_{i\geq 1}$ can be determined from each other: \begin{eqnarray} m_n & = & \mathop{\sum_{1\cdot r_1+\cdots + n\cdot r_n=n}}_{r_1,...,r_n\geq 0} \frac{n!}{(1!)^{r_1}\cdots (n!)^{r_n} r_1!\cdots r_n!} k_1^{r_1} \cdots k_n^{r_n}, \label{3.4}\\ k_n & = & \mathop{\sum_{1\cdot r_1+\cdots + n\cdot r_n=n}}_{r_1,...,r_n\geq 0} \frac{(-1)^{r_1+\cdots +r_n-1}(r_1+\cdots +r_n-1)! n!}{(1!)^{r_1}\cdots (n!)^{r_n} r_1!\cdots r_n!} m_1^{r_1} \cdots m_n^{r_n}. \label{3.5} \end{eqnarray} Sometimes, cumulants may be easier to obtain than moments. In this case, one may first obtain cumulants and then moments. Since, as we have seen so far, we start with moments for noncommutative random variables, it is very important to investigate moment and cumulant sequences for further calculations. Before going into more detail, let us review some basic concepts about partitions of an index set, which play an important role in free probability theory. \subsection{Partitions, Non-crossing Partitions, and Free-Cumulants} For a positive integer $n$, we denote $[n]\overset{\Delta}{=}\{1,2,...,n\}$. A partition $\pi$ of set $[n]$ means $\pi=\{V_1,...,V_k\}$ such that $V_1,...,V_k\subset [n]$ with $V_i\neq \emptyset$, $V_i\cap V_j=\emptyset$ for all $1\leq i\neq j\leq k$, and $V_1\cup \cdots \cup V_k=[n]$. Subsets $V_1,...,V_k$ are called the blocks of $\pi$ and $\#(\pi)$ denotes the number of the blocks of $\pi$. ${\cal P}(n)$ denotes the set of all the partitions of $[n]$. A partition is called a pairing if each of its blocks has size $2$, and the set of all the pairings of $[n]$ is denoted by ${\cal P}_2(n)$. Let $\pi\in {\cal P}(n)$ and $\{k_i\}_i$ be a sequence. We denote $k_{\pi}=k_1^{r_1}k_2^{r_2}\cdots k_n^{r_n}$, where $r_i$ is the number of blocks of $\pi$ of size $i$.
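Formula (\ref{3.4}) is straightforward to evaluate by enumerating the multi-indices $(r_1,...,r_n)$. The Python sketch below (illustrative only; the function name is ours) recovers the standard Gaussian moments $m_{2k}=(2k-1)!!$ from the cumulants $k_2=1$ and $k_i=0$ for $i\neq 2$:

```python
import math

# Moments from classical cumulants via (3.4):
# m_n = sum over r_1,...,r_n >= 0 with 1*r_1 + ... + n*r_n = n of
#       n! / (prod_i (i!)^{r_i} r_i!) * prod_i k_i^{r_i}
def moment_from_cumulants(n, k):
    # k[i] is the i-th classical cumulant
    total = 0.0
    def rec(i, remaining, coeff_den, prod_k):
        nonlocal total
        if i > n:
            if remaining == 0:
                total += math.factorial(n) / coeff_den * prod_k
            return
        r = 0
        while r * i <= remaining:          # enumerate r_i = 0, 1, 2, ...
            rec(i + 1, remaining - r * i,
                coeff_den * math.factorial(i) ** r * math.factorial(r),
                prod_k * k[i] ** r)
            r += 1
    rec(1, n, 1.0, 1.0)
    return total

# Standard Gaussian: k_2 = 1, all other cumulants 0,
# so m_{2k} = (2k-1)!! and all odd moments vanish.
k = {i: 0.0 for i in range(1, 9)}
k[2] = 1.0
print([moment_from_cumulants(n, k) for n in range(1, 7)])
# [0.0, 1.0, 0.0, 3.0, 0.0, 15.0]
```

The same sum with the Gaussian moments fed into (\ref{3.5}) would recover the cumulants, illustrating that the two sequences determine each other.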
Then, the determination formulas in (\ref{3.4})-(\ref{3.5}) of moments and cumulants can be re-formulated as \begin{eqnarray} m_n & = & \sum_{\pi\in {\cal P}(n)} k_{\pi}, \label{3.6}\\ k_n & = & \sum_{\pi \in {\cal P}(n)} (-1)^{\#(\pi)-1}(\#(\pi)-1)! m_{\pi}. \label{3.7} \end{eqnarray} For $\pi\in {\cal P}(n)$, denote the moment of $n$ random variables $x_1,...,x_n$ with partition $\pi$ as \begin{equation}\label{3.8} E_{\pi}(x_1,...,x_n)\overset{\Delta}{=}\mathop{\prod_{V\in {\pi}}}_{V=(i_1,...,i_l)} E(x_{i_1}\cdots x_{i_l}), \end{equation} where $V=(i_1,...,i_l)$ means that set $V$ has $l$ distinct elements in increasing order $i_1<i_2<\cdots<i_l$. When $\pi\in {\cal P}_2(2k)$, i.e., $\pi$ is a pairing of $[2k]$, we have \begin{equation}\label{3.9} E_{\pi}(x_1,..., x_{2k})=\prod_{(i,j)\in \pi}E(x_ix_j). \end{equation} With this notation, for Gaussian random variables $X_1,X_2,...,X_n$, we have the following Wick's formula: \begin{equation}\label{3.10} E(X_{i_1}\cdots X_{i_{2k}})=\sum_{\pi\in {\cal P}_2(2k)} E_{\pi}(X_{i_1},...,X_{i_{2k}}), \end{equation} where $i_1,...,i_{2k}\in [n]$. Let $\pi\in {\cal P}(n)$. If there exist $i<j<k<l$ such that $i$ and $k$ are in one block $V$ of $\pi$, and $j$ and $l$ are in another block $W$ of $\pi$, we say that $V$ and $W$ cross. If one cannot find any pair of blocks in $\pi$ that cross, partition $\pi$ is called non-crossing. Denote the set of all non-crossing partitions of $[n]$ by $NC(n)$ and the set of all non-crossing pairings of $[n]$ by $NC_2(n)$. The partition set ${\cal P}(n)$ of $[n]$ is partially ordered via $$ \pi_1\leq \pi_2 \mbox{ if and only if each block of }\pi_1 \mbox{ is contained in a block of }\pi_2. $$ With this order, $NC(n)$, as a subset of ${\cal P}(n)$, is also partially ordered. The largest and the smallest partitions in both ${\cal P}(n)$ and $NC(n)$ are $\{[n]\}$ and $\{ \{1\},\{2\},...,\{n\}\}$, denoted as $1_n$ and $0_n$, respectively.
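The sizes of these sets can be checked by brute force: ${\cal P}_2(2k)$ has $(2k-1)!!$ elements, while $NC_2(2k)$ has $C_k$ elements. A small Python sketch (helper names are ours, for illustration):

```python
# Enumerate all pairings of a sorted list as lists of pairs (a, b), a < b.
def pairings(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for idx in range(len(rest)):
        for sub in pairings(rest[:idx] + rest[idx + 1:]):
            yield [(first, rest[idx])] + sub

def is_noncrossing(pairs):
    # pairs (a, b) and (c, d) cross iff a < c < b < d
    return not any(a < c < b < d
                   for (a, b) in pairs for (c, d) in pairs)

k = 4
all_pairings = list(pairings(list(range(1, 2 * k + 1))))
nc = [p for p in all_pairings if is_noncrossing(p)]
print(len(all_pairings), len(nc))   # 105 14, i.e., 7!! and C_4
```

The count of non-crossing pairings being the Catalan number is exactly why semicircular moments (all of whose mixed cumulants beyond order two vanish, as seen below) are Catalan numbers.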
\begin{DD}\label{def3} The following free cumulants $\kappa_n(x_1,...,x_n)$ are defined inductively in terms of moments by the moment-cumulant formula: \begin{equation}\label{3.11} E(x_1\cdots x_n)=\sum_{\pi\in NC(n)} \kappa_{\pi}(x_1,...,x_n), \end{equation} where \begin{equation}\label{3.12} \kappa_{\pi}(x_1,...,x_n) \overset{\Delta}{=} \mathop{\prod_{V\in \pi}}_{V=(i_1,...,i_l)} \kappa_l(x_{i_1},...,x_{i_l}). \end{equation} \end{DD} The above inductive definition is not hard to implement as follows. For $n=1$, we have $E(x_1)=\kappa_1(x_1)$. Thus, $\kappa_1(x_1)=E(x_1)$. For $n=2$, we have $$ E(x_1x_2)=\kappa_{(1,2)}(x_1,x_2)+\kappa_{(1),(2)}(x_1,x_2)=\kappa_2(x_1,x_2) +\kappa_1(x_1)\kappa_1(x_2). $$ Thus, $$ \kappa_2(x_1,x_2)=E(x_1x_2)-E(x_1)E(x_2), $$ etc. Let $\mu(\pi_1,\pi_2)$ be the M\"{o}bius function on $NC(n)$ \cite{spe4,spe5,hia}, which can be calculated by a recursion formula. Then, we also have the following M\"{o}bius inversion formula: \begin{equation}\label{3.13} \kappa_n(x_1,...,x_n)= \sum_{\pi\in NC(n)}\mu(\pi, 1_n) E_{\pi}(x_1,...,x_n). \end{equation} The moment-cumulant formulas (\ref{3.11}) and (\ref{3.13}) for moments and free cumulants of noncommutative random variables are analogous to (\ref{3.6}) and (\ref{3.7}) (or (\ref{3.4}) and (\ref{3.5})) for classical random variables in classical probability theory. \begin{TT}\label{thm2} Random variables $x_1,...,x_n$ are free if and only if all mixed cumulants of $x_1,...,x_n$ vanish. In other words, $x_1,...,x_n$ are free if and only if, for any $i_1,...,i_p\in [n]=\{1,2,...,n\}$ with $i_j\neq i_l$ for some $j,l\in [p]$, we have $\kappa_p(x_{i_1},...,x_{i_p})=0$. \end{TT} The result in the above theorem significantly simplifies the calculations of the free cumulants of multiple free random variables and therefore, helps to calculate the joint moments of multiple free random variables.
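For a single self-adjoint variable, (\ref{3.11}) can be solved inductively for $\kappa_n$, since $\kappa_\pi$ then depends only on the block sizes of $\pi$. The sketch below (illustrative only; it filters the non-crossing partitions by brute force) feeds in the standard semicircular moments $1,0,1,0,2,0,5$ and recovers $\kappa_2=\sigma^2=1$ with all other free cumulants $0$:

```python
from itertools import combinations

# All set partitions of a list (each partition is a list of blocks).
def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for sub in set_partitions(rest):
        for i in range(len(sub)):
            yield sub[:i] + [[first] + sub[i]] + sub[i + 1:]
        yield [[first]] + sub

def is_noncrossing(part):
    # blocks V, W cross iff there are a < b < c < d with a, c in V and b, d in W
    for V, W in combinations(part, 2):
        if any(a < b < c < d for a in V for c in V for b in W for d in W):
            return False
        if any(a < b < c < d for a in W for c in W for b in V for d in V):
            return False
    return True

def free_cumulants(m, nmax):
    # m[i] = i-th moment of a single variable, m[0] = 1
    kappa = {}
    for n in range(1, nmax + 1):
        rest = 0.0
        for part in set_partitions(list(range(1, n + 1))):
            if len(part) == 1 or not is_noncrossing(part):
                continue                 # skip 1_n: its term is kappa_n itself
            prod = 1.0
            for V in part:
                prod *= kappa[len(V)]
            rest += prod
        kappa[n] = m[n] - rest           # solve (3.11) for kappa_n
    return kappa

m = [1, 0, 1, 0, 2, 0, 5]                # semicircular moments, sigma^2 = 1
kappa = free_cumulants(m, 6)
print({n: round(v, 6) for n, v in kappa.items()})
# {1: 0.0, 2: 1.0, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0}
```

That only $\kappa_2$ survives is the free-cumulant characterization of semicircular elements, mirroring the fact that only $k_2$ survives for classical Gaussians.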
For example, if $x$ and $y$ are free, then we have \begin{eqnarray} \kappa_n^{x+y} & \overset{\Delta}{=} & \kappa_n(x+y,...,x+y) \nonumber\\ & = & \kappa_n(x,...,x)+\kappa_n(y,...,y)+(\mbox{mixed cumulants in }x, y) \nonumber\\ & = & \kappa_n^x+ \kappa_n^y. \label{3.133} \end{eqnarray} \begin{DD}\label{def4} Let $I$ be an index set. A self-adjoint family $(s_i)_{i\in I}$ is called a semicircular family of covariance matrix $C=(c_{ij})_{i,j\in I}$ if $C$ is non-negative definite and for any $n\geq 1$ and any $n$-tuple $i_1,...,i_n\in I$ we have \begin{equation}\label{3.14} E(s_{i_1}\cdots s_{i_n})=\sum_{\pi\in NC_2(n)} E_{\pi}(s_{i_1},...,s_{i_n}), \end{equation} where \begin{equation}\label{3.15} E_{\pi}(s_{i_1},...,s_{i_n})=\prod_{(p,q)\in \pi} c_{i_p,i_q}. \end{equation} If $C$ is diagonal, then $(s_i)_{i\in I}$ is a free semicircular family. \end{DD} The above formula is the free analogue of Wick's formula for Gaussian random variables. If we let $X_1,...,X_r$ be independent $N\times N$ self-adjoint random matrices whose entries, up to the symmetry constraint, are i.i.d. Gaussian random variables normalized so that $E(tr(X_i^2))=1$, then they converge jointly in distribution to a free semicircular family $s_1,...,s_r$ of covariance matrix $(c_{ij})_{1\leq i,j\leq r}=\mbox{I}_r$, where $\mbox{I}_r$ is the identity matrix of size $r$, as $N$ goes to infinity. More details on random matrices will be seen in Section \ref{sec4}. \subsection{Cauchy Transforms and R-Transforms} As we have seen earlier, for classical random variables, their distributions or density functions can be determined by their moment sequences or cumulant sequences as shown in (\ref{3.2}) and (\ref{3.3}). To further study noncommutative random variables, their moment and cumulant sequences similarly lead to analytic forms as follows. Let $x$ be a noncommutative random variable and let $m_n^x=E(x^n)$ and $\kappa_n^x$ be its moments and free cumulants, respectively.
Their power series (moment and cumulant generating functions) in an indeterminate $z$ are defined by \begin{equation}\label{3.16} M(z)=1+\sum_{n=1}^{\infty} m_n^x z^n \mbox{ and } C(z)=1+\sum_{n=1}^{\infty} \kappa_n^x z^n. \end{equation} Then, the following identity holds: \begin{equation}\label{3.17} M(z)=C(zM(z)). \end{equation} The Cauchy transform of $x$ is defined by \begin{equation}\label{3.18} G(z)\overset{\Delta}{=} E\left( \frac{1}{z-x}\right) =\sum_{n=0}^{\infty}\frac{E(x^n)}{z^{n+1}}=\sum_{n=0}^{\infty}\frac{m_n^x}{z^{n+1}}=z^{-1}M(z^{-1}), \end{equation} and the $R$-transform of $x$ is defined by \begin{equation}\label{3.19} R(z)\overset{\Delta}{=}\frac{C(z)-1}{z}=\sum_{n=0}^{\infty} \kappa_{n+1}^x z^n. \end{equation} If we let $K(z)\overset{\Delta}{=}R(z)+z^{-1}$, then $K(G(z))=z$, i.e., $K(z)$ is the inverse of the Cauchy transform $G(z)$. If we let $G_x(z)$ and $R_x(z)$ denote the Cauchy transform and the $R$-transform of random variable $x$, respectively, then, for two free random variables $x$ and $y$, from (\ref{3.133}) we have \begin{equation}\label{3.20} R_{x+y}(z)= R_x(z)+R_y(z). \end{equation} In case $R_x(z)$ and $R_y(z)$ are not both well-defined on a common region of $z$, one may still be able to find the Cauchy transform $G_{x+y}(z)$ of $x+y$ for free random variables $x$ and $y$ from the Cauchy transforms $G_x(z)$ and $G_y(z)$ of $x$ and $y$ as follows. We shall see soon below that a Cauchy transform is well-defined when $z$ is in the upper complex plane $\mathbb{C}^+ \overset{\Delta}{=} \{ c\in \mathbb{C}| \mbox{Im}(c)>0\}$, where $\mathbb{C}$ stands for the complex plane and Im stands for the imaginary part of a complex number. For a $z\in \mathbb{C}^+$, solve the following system of two equations for two unknown functions $\omega_x(z)$ and $\omega_y(z)$: \begin{equation}\label{3.21} G_x(\omega_x(z))=G_y(\omega_y(z)) \mbox{ and } \omega_x(z) +\omega_y(z)-\frac{1}{G_x(\omega_x(z))} = z.
\end{equation} Then, \begin{equation}\label{3.22} G_{x+y}(z)=G_x(\omega_x(z))=G_y(\omega_y(z)). \end{equation} If noncommutative random variable $x$ is self-adjoint, then it has a spectral measure $\nu$ on $\mathbb{R}$ such that the moments of $x$ are the same as the conventional moments of the probability measure $\nu$. One can simply see this when $x$ is a Hermitian matrix: then $x$ can be diagonalized by a unitary matrix and has real-valued eigenvalues. These real-valued eigenvalues are the spectra of $x$, which are discrete for a finite matrix but may become continuous when $x$ is a general operator over an infinite dimensional space. In this case, we say that random variable $x$ has distribution $\nu$. Then, the Cauchy transform $G(z)$ of $x$ can be formulated as \begin{equation}\label{3.23} G(z)=\int_{\mathbb{R}} \frac{1}{z-t} d\nu(t), \end{equation} and $G(z)$ is also called the Cauchy transform of $\nu$. One can clearly see from (\ref{3.23}) that the Cauchy transform $G(z)$ is well-defined when $z\in \mathbb{C}^+$. In fact, $G(z)$ is analytic in $\mathbb{C}^+$, i.e., it has derivatives of all orders at any $z\in \mathbb{C}^+$. Furthermore, $G(z)\in \mathbb{C}^{-}$, the lower complex plane defined similarly to $\mathbb{C}^+$. In other words, a Cauchy transform $G(z)$ maps $\mathbb{C}^+$ to $\mathbb{C}^{-}$. From (\ref{3.23}), one can also see that the Cauchy transform excludes the real axis $\mathbb{R}$ for $z$, because the integral may not exist when $z\in \mathbb{R}$. That said, it may exist in the generalized function sense: if $z\in \mathbb{R}$, the Cauchy transform (\ref{3.23}) becomes the Hilbert transform of $d\nu(t)/dt$. When probability measure $\nu$ is compactly supported, i.e., it is supported on a finite interval, not only is its Cauchy transform analytic in $\mathbb{C}^+$, but its $R$-transform is also analytic on some disk centered at the origin. This, however, may not be true for a general probability measure $\nu$.
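The subordination system (\ref{3.21})-(\ref{3.22}) can be solved by a simple fixed-point iteration (one standard numerical approach, not the only one). The Python sketch below (function names are ours) convolves two standard semicircular distributions and checks the result against the semicircular law of variance $2$, consistent with (\ref{3.20}):

```python
import cmath

# Cauchy transform (3.25) of the standard semicircular law; the square
# root branch is chosen so that G(z) ~ 1/z for large z in C^+.
def G_semi(z):
    return (z - cmath.sqrt(z - 2) * cmath.sqrt(z + 2)) / 2

# Fixed-point iteration for the subordination functions in (3.21).
def cauchy_free_sum(Gx, Gy, z, iters=200):
    wy = z
    for _ in range(iters):
        wx = z + 1 / Gy(wy) - wy   # enforces w_x + w_y - 1/G_y(w_y) = z
        wy = z + 1 / Gx(wx) - wx   # enforces w_x + w_y - 1/G_x(w_x) = z
    return Gx(wx)                  # (3.22): G_{x+y}(z) = G_x(w_x(z))

z = 1.0 + 2.0j
g = cauchy_free_sum(G_semi, G_semi, z)
# semicircular plus free semicircular = semicircular of variance 2, with
# Cauchy transform (z - sqrt(z^2 - 8)) / 4 under the same branch choice
r = 2 * cmath.sqrt(2)
g_exact = (z - cmath.sqrt(z - r) * cmath.sqrt(z + r)) / 4
print(abs(g - g_exact) < 1e-8)   # True
```

At convergence the two update steps force $1/G_x(\omega_x)=1/G_y(\omega_y)$, so both equations in (\ref{3.21}) hold simultaneously.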
For more details, see \cite{spe3}. With a Cauchy transform $G(z)$, its corresponding probability measure can be formulated by the Stieltjes inversion formula as follows. \begin{TT}\label{thm3} Let $\nu$ be a probability measure on $\mathbb{R}$ and $G(z)$ be its Cauchy transform. For $a<b$, we have \begin{equation}\label{3.24} - \lim_{\tau\rightarrow 0^+}\frac{1}{\pi} \int_a^b \mbox{Im}(G(t+{\bf i}\tau))dt =\nu( (a,b))+\frac{1}{2}\nu(\{a,b\}), \end{equation} where $\nu((a,b))$ is the measure of the open interval $(a,b)$ and $\nu(\{a,b\})$ is the measure of the two endpoints $a$ and $b$. If $\nu_1$ and $\nu_2$ are two probability measures on $\mathbb{R}$ with equal Cauchy transforms, i.e., $G_{\nu_1}(z)= G_{\nu_2}(z)$, then $\nu_1=\nu_2$. \end{TT} This result tells us that Cauchy transforms and probability measures (distributions or random variables) are in one-to-one correspondence with each other. If $x$ and $y$ are two free self-adjoint random variables with distributions $\nu_x$ and $\nu_y$, respectively, then the distribution of $x+y$ is called the free convolution of $\nu_x$ and $\nu_y$, which is denoted by $\nu_x\boxplus \nu_y$. As an example of a Cauchy transform, when $\nu$ is the semicircular distribution with density function $q(t)$ in (\ref{2.6}), its Cauchy transform is, \cite{spe3}, \begin{equation}\label{3.25} G_s(z)=\frac{z-\sqrt{z^2-4}}{2}. \end{equation} \section{Application in Random Matrices}\label{sec4} As mentioned in the Introduction, random matrices with entries of complex Gaussian random variables are often used in wireless communications and signal processing. In particular, their singular value (eigenvalue) distributions play an important role in analyzing wireless communications systems. This section is on applying free probability theory to random matrices of large sizes. It tells us how to use the second order statistics of the entries of random matrices to calculate their asymptotic eigenvalue distributions.
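Before turning to random matrices, note that the Stieltjes inversion formula (\ref{3.24}) amounts, numerically, to evaluating $-\mbox{Im}\, G(t+{\bf i}\epsilon)/\pi$ for a small $\epsilon>0$. The sketch below (illustration only, with the same branch convention as before) recovers the semicircular density (\ref{2.6}) from the Cauchy transform (\ref{3.25}):

```python
import cmath, math

def G_semi(z):
    # Cauchy transform (3.25) of the standard semicircular law
    return (z - cmath.sqrt(z - 2) * cmath.sqrt(z + 2)) / 2

def density_from_G(G, t, eps=1e-6):
    # Stieltjes inversion (3.24): q(t) = -Im G(t + i*eps) / pi, eps small
    return -G(complex(t, eps)).imag / math.pi

errs = [abs(density_from_G(G_semi, t) - math.sqrt(4 - t * t) / (2 * math.pi))
        for t in (0.0, 0.5, 1.0, 1.5)]
print(max(errs) < 1e-4)   # True: the semicircular density is recovered
```

This is also how asymptotic eigenvalue densities are extracted in practice once a Cauchy transform has been computed, e.g., by the fixed point equations of Section \ref{sec5}.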
\subsection{GUE Random Matrices and Wigner's Semi-Circle Law} Let $X_N$ be an $N\times N$ matrix with complex random variables $a_{ij}=x_{ij}+{\bf i}y_{ij}$ as entries such that $x_{ij}$ and $y_{ij}$ are real Gaussian random variables, $\sqrt{N} a_{ij}$ is a standard complex Gaussian random variable, i.e., $E(a_{ij})=0$ and $E(|a_{ij}|^2)=1/N$, and \begin{itemize} \item[1)] $a_{ij}=a_{ji}^*$, \item[2)] $\{x_{ij}\}_{i\geq j}\cup \{y_{ij}\}_{i>j}$ are i.i.d. \end{itemize} In this case, $X_N$ is Hermitian, i.e., self-adjoint. $X_N$ is called a Gaussian unitary ensemble (GUE) random matrix. The following theorem is Wigner's semi-circle law. \begin{TT}\label{thm4} If $\{X_N\}_N$ is a sequence of GUE random matrices, then, for any positive integer $k$, \begin{eqnarray*} \lim_{N\rightarrow \infty} E(tr(X_N^k)) & = & \frac{1}{2\pi} \int_{-2}^2 t^k \sqrt{4-t^2}dt \\ & = & \left\{ \begin{array}{ll} \frac{1}{l+1} {2l \choose l}, & \mbox{if }k=2l \mbox{ for some positive integer }l,\\ 0, & \mbox{if }k \mbox{ is odd}, \end{array} \right. \end{eqnarray*} where $tr$ stands for the normalized matrix trace, i.e., $tr(\cdot)\overset{\Delta}{=}\mbox{Tr}(\cdot)/N$ with the conventional matrix trace Tr. \end{TT} Since $X_N$ is Hermitian, its eigenvalues are real-valued random variables as well; let $\nu_N$ denote their empirical distribution. Since $tr(X_N^k)=\int_{\mathbb{R}} t^kd\nu_N(t)$, we have $$ \lim_{N\rightarrow \infty} E(tr(X_N^k))= \lim_{N\rightarrow \infty} E\left(\int_{\mathbb{R}} t^kd\nu_N(t)\right). $$ Thus, the above theorem says that the eigenvalues of $X_N$ converge in distribution to the semicircular random variable. In fact, the convergence in distribution can be strengthened to almost sure convergence. \subsection{Asymptotic Freeness of GUE Random Matrices} For random matrices $X$ as noncommutative random variables, the linear functional $E$ used in Section \ref{sec2} is defined as $E(tr(X))$, i.e., $E(\cdot)$ used before for a noncommutative random variable $x$ corresponds to $E(tr(\cdot))$ for a random matrix $X$ in what follows.
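Wigner's semi-circle law (Theorem \ref{thm4}) is easy to observe numerically. The Monte-Carlo sketch below (it assumes NumPy is available; the normalization makes $E(|a_{ij}|^2)=1/N$) estimates the first even trace moments of GUE matrices and compares them with the Catalan numbers $C_1=1$ and $C_2=2$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 200, 20
m2 = m4 = 0.0
for _ in range(trials):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    X = (A + A.conj().T) / (2 * np.sqrt(N))   # GUE, E(|X_ij|^2) = 1/N
    ev = np.linalg.eigvalsh(X)                # real eigenvalues
    m2 += np.mean(ev ** 2) / trials           # estimates E(tr(X^2))
    m4 += np.mean(ev ** 4) / trials           # estimates E(tr(X^4))
print(round(m2, 1), round(m4, 1))             # close to 1.0 and 2.0
```

Even at moderate $N$, the empirical moments concentrate tightly around the limiting Catalan numbers, which is one reason free probability predictions are useful at practical matrix sizes.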
\begin{DD}\label{def5} Let $(X_N)_N$ and $(Y_N)_N$ be two sequences of $N\times N$ matrices. We say that $X_N$ and $Y_N$ are asymptotically free if they converge jointly in distribution to two free random variables $x$ and $y$, respectively, as $N$ goes to infinity. \end{DD} From Definitions \ref{def2} and \ref{def5}, $X_N$ and $Y_N$ are asymptotically free, if for any positive integer $m$ and non-negative integers $p_1,q_1,...,p_m,q_m$ we have $$ \lim_{N\rightarrow \infty} E(tr(X_N^{p_1}Y_N^{q_1}\cdots X_N^{p_m}Y_N^{q_m})) =E(x^{p_1}y^{q_1}\cdots x^{p_m}y^{q_m}), $$ for two free random variables $x$ and $y$. For a sequence of $N\times N$ deterministic matrices $(D_N)_N$, if $\lim_{N\rightarrow \infty}tr(D_N^m)$ exists for every non-negative integer $m$, we say $D_N$ converges to $d$ in distribution, where $d$ is a noncommutative random variable whose $m$th moment is the same as the limit. We also write it as $\lim_{N\rightarrow \infty} D_N \overset{distr}{=} d$ or $D_N \overset{distr}{\longrightarrow} d$. With the above notations, the following theorem of Voiculescu strengthens Wigner's semi-circle law. \begin{TT}\label{thm5} Assume $X_N^{(1)},...,X_N^{(p)}$ are $p$ independent $N\times N$ GUE random matrices and $D_N^{(1)},...,D_N^{(q)}$ are $q$ deterministic $N\times N$ matrices such that $$ D_N^{(1)},...,D_N^{(q)} \overset{distr}{\longrightarrow} d_1,...,d_q \mbox{ as } N\rightarrow \infty. $$ Then, $$ X_N^{(1)},...,X_N^{(p)}, D_N^{(1)},...,D_N^{(q)} \overset{distr}{\longrightarrow} s_1,...,s_p, d_1,...,d_q \mbox{ as } N\rightarrow \infty, $$ where each $s_i$ is semicircular and $s_1,...,s_p, \{d_1,...,d_q\}$ are free. The convergence above also holds almost surely. \end{TT} This result tells us that the independent GUE random matrices $X_N^{(1)},...,X_N^{(p)}$ and the family $\{D_N^{(1)},...,D_N^{(q)}\}$ are asymptotically free when $N$ is large.
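One concrete consequence of Theorem \ref{thm5} combined with Theorem \ref{thm2}: for a semicircular $s$ of variance $1$ free from $d$, the mixed moment $E(sdsd)$ equals $E(d)^2$, since only $\kappa_2(s,s)$ survives among the cumulants involving $s$. The Monte-Carlo sketch below (NumPy assumed; the diagonal matrix $D$ is an arbitrary choice, purely for illustration) observes this for a GUE matrix and a deterministic matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 200, 20
d = np.arange(1, N + 1) / N            # diagonal entries of a deterministic D
D = np.diag(d)
mixed = 0.0
for _ in range(trials):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    X = (A + A.conj().T) / (2 * np.sqrt(N))          # GUE matrix
    mixed += np.trace(X @ D @ X @ D).real / (N * trials)
target = np.mean(d) ** 2               # E(d)^2 with E = normalized trace
print(abs(mixed - target) < 0.02)      # True: E(tr(XDXD)) ~ tr(D)^2
```

Such mixed-moment formulas are exactly what asymptotic freeness buys: joint moments of random and deterministic matrices reduce to separate spectral data of each.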
Furthermore, $X_N^{(1)},...,X_N^{(p)}$ asymptotically have the same distributions as free semicircular elements $s_1,...,s_p$ do, and this is still true even when they are mixed with deterministic matrices. \subsection{Asymptotic Freeness of Haar Distributed Unitary Random Matrices} A general Hermitian random matrix can be diagonalized by a unitary matrix, and in this case the unitary matrix is random as well. Therefore, it is also important to study unitary random matrices. Let ${\cal U}(N)$ denote the group of $N\times N$ unitary matrices $U$, i.e., $UU^*=U^*U=\mbox{I}_N$. Since ${\cal U}(N)$ is bounded (compact), it has Haar measure $dU$ with $\int_{{\cal U}(N)} dU=1$. Thus, $dU$ is a probability measure (it can be understood as a uniform distribution). A Haar distributed unitary random matrix is a matrix $U_N$ randomly chosen in ${\cal U}(N)$ with respect to Haar measure. One method to construct Haar unitary matrices is as follows. First, take an $N\times N$ random matrix whose entries are independent standard complex Gaussian random variables. Then, use the Gram-Schmidt orthogonalization procedure to make it unitary. A noncommutative random variable $u$ is called Haar unitary if it is unitary, i.e., $uu^*=u^*u=1$, and $E(u^m)=\delta_{0,m}$ for all integers $m$, i.e., $E(u^m)=0$ when $m\neq 0$. A Haar distributed unitary random matrix is Haar unitary, i.e., if $U$ is Haar distributed in ${\cal U}(N)$, then $E(tr(U^m))=0$ for $m\neq 0$ \cite{spe3}. \begin{TT}\label{thm6} Assume $U_N^{(1)},...,U_N^{(p)}$ are $p$ independent $N\times N$ Haar unitary random matrices and $D_N^{(1)},...,D_N^{(q)}$ are $q$ deterministic $N\times N$ matrices such that $$ D_N^{(1)},...,D_N^{(q)} \overset{distr}{\longrightarrow} d_1,...,d_q \mbox{ as } N\rightarrow \infty.
$$ Then, as $N \rightarrow \infty$, $$ U_N^{(1)},U_N^{(1)*},...,U_N^{(p)},U_N^{(p)*}, D_N^{(1)},...,D_N^{(q)} \overset{distr}{\longrightarrow} u_1,u_1^*,...,u_p,u_p^*, d_1,...,d_q, $$ where each $u_i$ is Haar unitary and $\{u_1,u_1^*\},...,\{u_p,u_p^*\}, \{d_1,...,d_q\}$ are free. The convergence above also holds almost surely. \end{TT} A more special case is as follows. \begin{TT}\label{thm7} Let $A_N$ and $B_N$ be two sequences of deterministic $N\times N$ matrices with $\lim_{N\rightarrow \infty}A_N \overset{distr}{=}a$ and $\lim_{N\rightarrow \infty}B_N \overset{distr}{=}b$. Let $U_N$ be a sequence of $N\times N$ Haar unitary random matrices. Then, $$ A_N, U_NB_NU_N^* \overset{distr}{\longrightarrow} a,b \mbox{ as } N \rightarrow \infty, $$ where $a$ and $b$ are free. This convergence also holds almost surely. \end{TT} The above theorem says that $A_N$ and $U_NB_NU_N^*$ are asymptotically free when $N$ is large. \subsection{Asymptotic Freeness of Wigner Random Matrices} Let $\mu$ be a probability measure on $\mathbb{R}$ and $a_{ij}$ with $i\leq j$ be i.i.d. real random variables with distribution $\mu$. Let $a_{ij}=a_{ji}$ for $i>j$, and $$A_N=\frac{1}{\sqrt{N}} (a_{ij})_{1\leq i,j\leq N}, $$ which is self-adjoint (symmetric) and is called a Wigner random matrix (ensemble). \begin{TT}\label{thm8} Let $\mu_1,...,\mu_p$ be probability measures on $\mathbb{R}$ whose moments all exist and whose means are $0$. Assume $A_N^{(1)},...,A_N^{(p)}$ are $p$ independent $N\times N$ Wigner random matrices with entry distributions $\mu_1,...,\mu_p$, respectively, and $D_N^{(1)},...,D_N^{(q)}$ are $q$ deterministic $N\times N$ matrices such that $$ D_N^{(1)},...,D_N^{(q)} \overset{distr}{\longrightarrow} d_1,...,d_q \mbox{ as } N\rightarrow \infty, $$ and $$ \sup_{r,N}\|D_N^{(r)}\|<\infty.
$$ Then, as $N \rightarrow \infty$, $$ A_N^{(1)},...,A_N^{(p)}, D_N^{(1)},...,D_N^{(q)} \overset{distr}{\longrightarrow} s_1,...,s_p, d_1,...,d_q, $$ where each $s_i$ is semicircular and $s_1,...,s_p, \{d_1,...,d_q\}$ are free. \end{TT} As a special case, $A_N D_N A_N, E_N \overset{distr}{\longrightarrow} sds, e$, where $s$ is semicircular, $d$ and $e$ are the limits of $D_N$ and $E_N$ in distribution, $sds$ and $e$ are free, and $e$ can be arbitrary. \section{Free Deterministic Equivalents and Random Matrix Singular Value Distribution Calculations}\label{sec5} Let $H$ be an $N\times M$ wireless channel matrix, which is usually modelled as a random matrix, with additive white Gaussian noise (AWGN) of variance $\sigma$. Then, its mutual information is \begin{equation}\label{5.1} C(\sigma)=\frac{1}{N}E\left[\log \det \left( \mbox{I}_N+ \frac{HH^*}{\sigma}\right) \right], \end{equation} where $^*$ stands for the Hermitian transpose. Let $\nu(\lambda)$ denote the eigenvalue distribution (or spectra, or probability measure) of matrix $HH^*$. Then, when $N$ is large, \begin{equation}\label{5.2} C(\sigma)=\int_{0}^{\infty} \log\left( 1+\frac{\lambda}{\sigma}\right) d\nu(\lambda). \end{equation} On the other hand, the Cauchy transform of the probability measure $\nu$ and matrix $HH^*$ is \begin{equation}\label{5.3} G(z)=\int_{0}^{\infty} \frac{1}{z-\lambda} d\nu(\lambda) =E(tr(z\mbox{I}_N-HH^*)^{-1}), \end{equation} where $z\in \mathbb{C}^+$. Assume that the limit of $G(z)$ as $\mbox{Im}(z)\rightarrow 0^+$ exists and denote it by $G(\omega)$ with $\omega=\mbox{Re}(z)$. For the semicircular distribution, from (\ref{3.25}) one can see that $G(\omega)$ exists when $\omega=\mbox{Re}(z)>2$. Then, \cite{tul2}, \begin{equation}\label{5.4} C(\sigma)=\int_{\sigma}^{\infty}\left( \frac{1}{\omega}-G(-\omega)\right) d\omega. \end{equation} The above formula tells us that, to calculate the mutual information of the channel with channel matrix $H$, we only need to calculate the Cauchy transform of matrix $HH^*$.
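The equality of (\ref{5.1}) and (\ref{5.2}) per channel realization is just the identity $\log\det(\mbox{I}_N+HH^*/\sigma)=\sum_i \log(1+\lambda_i/\sigma)$ applied to the empirical spectrum. A short NumPy sketch (the i.i.d. normalization $E(|H_{ij}|^2)=1/N$ is our assumption, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma = 100, 1.0
H = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
W = H @ H.conj().T
lam = np.linalg.eigvalsh(W)                      # eigenvalues of HH^*
_, logdet = np.linalg.slogdet(np.eye(N) + W / sigma)
c_det = logdet / N                               # (1/N) log det form (5.1)
c_spec = np.mean(np.log1p(lam / sigma))          # spectral form (5.2)
print(abs(c_det - c_spec) < 1e-10)               # True
```

When $N$ is large the empirical spectrum concentrates around its deterministic limit $\nu$, so the single-realization quantity above is already close to the expectation in (\ref{5.1}).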
As an example, if $HH^*$ is a GUE random matrix, then, when $N$ is large, it is approximately semicircular and its Cauchy transform has the form of (\ref{3.25}) with a proper normalization. Thus, its mutual information can be calculated. However, in applications, $HH^*$ may not be a GUE matrix. We next introduce free deterministic equivalents to help calculate the Cauchy transforms of large random matrices, such as the above $HH^*$, based on Speicher \cite{spe1, spe3, spe6}. \subsection{Matrix-Wise Free Deterministic Equivalents} From Section \ref{sec4}, we know that when the entries $X_{ij}$, $i\geq j$, of an $N\times N$ self-adjoint (symmetric for real-valued or Hermitian for complex-valued) matrix $X$ are i.i.d. random variables and $N$ is large, $X$ is approximately semicircular. This is also true for multiple such random matrices and multiple deterministic matrices jointly. A non-self-adjoint random matrix $X$ of i.i.d. Gaussian entries can be decomposed into two independent self-adjoint GUE matrices as $Y_1=(X+X^*)/\sqrt{2}$ and $Y_2=-{\bf i}(X-X^*)/\sqrt{2}$. Then, $X=(Y_1+{\bf i}Y_2)/\sqrt{2}$. In this case, $X$ converges in distribution to $s=(s_1+{\bf i} s_2)/\sqrt{2}$ for two free semicircular elements $s_1$ and $s_2$ with the same distribution. While $s_1$ and $s_2$ are semicircular, we call $s$ circular. In \cite{spe6,spe3} it is proposed to replace these random matrices by semicircular and circular elements, etc. Consider the following collections of $N\times N$ matrices, where, for each random matrix, its entries that are distinct random variables are i.i.d.: \begin{eqnarray*} & {\bf X}=\{X_1,...,X_{n_1}\} : & \mbox{independent self-adjoint matrices},\\ & {\bf Y}=\{Y_1,...,Y_{n_2}\} : & \mbox{independent non-self-adjoint matrices},\\ & {\bf U}=\{U_1,...,U_{n_3}\} : & \mbox{independent Haar distributed unitary matrices},\\ & {\bf D}=\{D_1,...,D_{n_4}\} : & \mbox{deterministic matrices}.
\end{eqnarray*} Let \begin{eqnarray*} {\bf s}=\{s_1,...,s_{n_1}\} &: & \mbox{free semicircular},\\ {\bf c}=\{c_1,...,c_{n_2}\} &: & \mbox{free circular},\\ {\bf u}=\{u_1,...,u_{n_3}\} &: & \mbox{free Haar unitary},\\ {\bf d}=\{d_1,...,d_{n_4}\} &: & \mbox{abstract elements}. \end{eqnarray*} Assume that the joint distribution of ${\bf D}$ is the same as that of ${\bf d}$, and that ${\bf X},{\bf Y},{\bf U}$ are mutually independent. Also assume that ${\bf s},{\bf c},{\bf u}$ individually have, asymptotically, the same distributions as ${\bf X},{\bf Y},{\bf U}$, respectively. Let $P_N$ be a multi-variable polynomial of ${\bf X}, {\bf Y}, {\bf U}, {\bf D}$. Then, when $N$ is large, $$P_N=P(X_1,...,X_{n_1},Y_1,...,Y_{n_2},U_1,...,U_{n_3},D_1,...,D_{n_4})$$ can be replaced by $$ P_N^{\Box}=P(s_1,...,s_{n_1},c_1,...,c_{n_2},u_1,...,u_{n_3},d_1,...,d_{n_4}),$$ and $P_N^{\Box}$ is called the (matrix-wise) free deterministic equivalent of $P_N$. Then, we have, for any positive integer $k$, $$ \lim_{N\rightarrow \infty} \left( E(tr(P_N^k))-E((P_N^{\Box})^k)\right)=0. $$ Now let us go back to the matrix $HH^*$ in (\ref{5.1}). Although matrix $H$ is not self-adjoint itself, if we follow \cite{spe1} and \cite{spe3} and let \begin{equation}\label{5.5} T=\left( \begin{array}{cc} 0 & H \\ H^* & 0 \end{array} \right), \end{equation} then matrix $T$ is self-adjoint. Furthermore, \begin{equation}\label{5.6} T^2=\left( \begin{array}{cc} HH^* & 0 \\ 0 & H^*H \end{array} \right), \end{equation} which includes $HH^*$ as a diagonal block. Using operator-valued free probability theory \cite{spe6,spe3}, it can be treated similarly to what is done in the previous sections. Note that $T^2$ is just a polynomial of $T$, but unfortunately not all entries in matrix $T$ have the same distribution, which makes the above matrix-wise free deterministic equivalent approach difficult to use. In order to deal with this problem, we next consider component-wise free deterministic equivalents.
\subsection{Component-Wise Free Deterministic Equivalents and Cauchy Transform Calculation of Random Matrices} This part is mainly from \cite{spe1}. We consider $N\times N$ random matrices $X=(X_{ij})$ where $X_{ij}$ are complex Gaussian random variables with $E(X_{ij})=0$ and $E(X_{ij}X_{ij}^*)=\sigma_{ij}/N$, where $\sigma_{ij}$ are independent of $N$. Now we replace all entries $X_{ij}$ in $X$ by (semi)circular elements $c_{ij}$ such that \begin{equation}\label{5.7} E(c_{ij}c_{ij}^*)=E(X_{ij}X_{ij}^*)=\sigma_{ij}/N, \end{equation} where if $X_{ij}$ is real-valued (or complex-valued), then $c_{ij}$ is semicircular (or circular) with mean $0$; if $X_{ij}$ and $X_{kl}$ are independent, then $c_{ij}$ and $c_{kl}$ are free; if $X_{ij}=X_{kl}$, then $c_{ij}=c_{kl}$; and $E(X_{ij}X_{kl}^*)=E(c_{ij}c_{kl}^*)=(E(c_{ij}^*c_{kl}))^*$. Then, we form an $N\times N$ matrix of (semi)circular elements as $c=(c_{ij})$. Matrix $c$ is called the component-wise free deterministic equivalent of matrix $X$. Let $X_1,...,X_{n_1}$ be $n_1$ random matrices, each of which is specified as above, where all entries of each of these matrices are independent from all entries of all the remaining matrices. Let $c_1,...,c_{n_1}$ be the component-wise deterministic equivalents of $X_1,...,X_{n_1}$, where all elements in $c_i$ are free from all elements in $c_j$ when $i\neq j$. Let $D_1,...,D_{n_2}$ be $n_2$ deterministic matrices. Assume $P_N$ is a multi-variable polynomial and \begin{eqnarray} P_N & = & P(X_1,...,X_{n_1},D_1,...,D_{n_2}), \label{5.8}\\ P_N^{\Box} & = & P(c_1,...,c_{n_1}, D_1,...,D_{n_2}). \label{5.9} \end{eqnarray} We call $P_N^{\Box}$ the component-wise free deterministic equivalent of $P_N$. \subsubsection{Independent Cases} Consider the case when every matrix $X_i$ is Hermitian/self-adjoint and its entries $X_{ij}$ for $i\geq j$ are all independent.
It is explicitly shown in \cite{anan} that $\lim_{N\rightarrow \infty} (P_N-P_N^{\Box})=0$ in distribution, i.e., the matrices $X_1,...,X_{n_1}, D_1,...,D_{n_2}$ asymptotically have the same joint distribution as the matrices $c_1,...,c_{n_1},D_1,...,D_{n_2}$ do. Thus, $c_1,...,c_{n_1}$ may be used to calculate the Cauchy transforms of $X_1,...,X_{n_1}$, using only the variances of the entries of the matrices $X_i$, when $N$ is large. We now consider a special example shown in \cite{spe1}. Let $X=(X_{ij})$ be an $N\times N$ Hermitian/self-adjoint Gaussian random matrix with $E(X_{ij})=0$ and $E(X_{ij}X_{ij}^*)=\sigma_{ij}/N$, and let $c=(c_{ij})$ be its component-wise deterministic equivalent, i.e., $E(c_{ij})=0$ and $E(c_{ij}c_{ij}^*)=\sigma_{ij}/N$. Note that since $X_{ij}=X_{ji}^*$, we have $c_{ij}=c_{ji}^*$ as well. Let $A$ be an $N\times N$ deterministic matrix. Consider the matrix sum $Y=A+X$. We next show how to calculate the Cauchy transform of $Y$ by calculating that of $T=A+c$. For an $N\times N$ deterministic matrix $B=(B_{ij})$, define a mapping $\eta$ that maps $B$ to another $N\times N$ deterministic matrix $\eta(B)\overset{\Delta}{=}E(cBc)$ with its $(i,j)$th component as \begin{equation}\label{5.10} [\eta(B)]_{ij}=\sum_{k,l}E(c_{ik}B_{kl}c_{lj}) =\sum_{k,l} E(c_{ik}c_{jl}^*)B_{kl} =\frac{\delta_{i,j}}{N}\sum_k \sigma_{ik}B_{kk}, \end{equation} which shows that $\eta(B)$ is a diagonal matrix. Then, the Cauchy transform $g_T(z)$ of $T$ can be determined by solving the following fixed point equation \cite{spe7}, \cite{spe1}: \begin{eqnarray} g_T(z) & = & tr(G_T(z)), \label{5.11}\\ G_T(z) & = & E\left( \frac{1}{z-\eta(G_T(z))-A}\right), \label{5.12} \end{eqnarray} where $E(B)\overset{\Delta}{=}(E(B_{ij}))$. It is shown in \cite{spe7} that there is exactly one solution of the above fixed point equation with the proper positivity constraint. We next consider the case when $X$ is not Hermitian, such as the channel matrix $H$ in (\ref{5.1}), where all entries of $X$ are independent.
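As a quick sanity check of the fixed point equation (\ref{5.11})-(\ref{5.12}) before treating the non-Hermitian case: with $A=0$ and the flat variance profile $\sigma_{ij}=1$, the equation collapses to the scalar $g=1/(z-g)$, whose solution is the semicircular Cauchy transform (\ref{3.25}). The NumPy sketch below iterates (\ref{5.12}) directly, with the normalization $[\eta(B)]_{ii}=\frac{1}{N}\sum_k \sigma_{ik}B_{kk}$ (illustration only):

```python
import numpy as np

N = 50
z = 1.0 + 1.0j
sigma = np.ones((N, N))              # flat variance profile sigma_ij = 1
A = np.zeros((N, N))                 # no deformation: Y = X
G = np.eye(N, dtype=complex) / z     # initial guess G = I/z
for _ in range(200):
    eta = np.diag(sigma @ np.diagonal(G)) / N     # eta(G), diagonal
    G = np.linalg.inv(z * np.eye(N) - eta - A)    # fixed point map (5.12)
g = np.trace(G) / N                               # g_T(z) as in (5.11)
# closed form: semicircular Cauchy transform (3.25), branch with G ~ 1/z
g_exact = (z - np.sqrt(z - 2) * np.sqrt(z + 2)) / 2
print(abs(g - g_exact) < 1e-10)   # True
```

For a non-flat profile $\sigma_{ij}$ or $A\neq 0$ no closed form is available in general, and the same iteration (with a positivity-preserving starting point) is how $g_T(z)$ is computed in practice.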
In this case, consider $Y=A+X$ and we next calculate the Cauchy transform of $YY^*$. To do so, define \begin{equation}\label{5.122} T=\left( \begin{array}{cc} 0 & Y\\ Y^* & 0 \end{array} \right) =\left( \begin{array}{cc} 0 & A\\ A^* & 0 \end{array} \right) +\left( \begin{array}{cc} 0 & X\\ X^* & 0 \end{array} \right). \end{equation} Then, \begin{equation}\label{5.13} T^2=\left( \begin{array}{cc} YY^* & 0\\ 0 & Y^*Y \end{array} \right). \end{equation} Since the eigenvalue distributions of $YY^*$ and $Y^*Y$ are the same, the Cauchy transform of $YY^*$ is the same as that of $T^2$. It is presented in \cite{spe1} as follows. For an $M\times M$ matrix $B=(B_{ij})$, define $$ E_{D_M}(B)\overset{\Delta}{=} \mbox{diag}(E(B_{11}),\cdots, E(B_{MM})), $$ where diag stands for the $M\times M$ diagonal matrix with its arguments as its diagonal elements, and also define \begin{equation}\label{5.100} \eta_1(B)\overset{\Delta}{=} E(cBc^*) \mbox{ and } \eta_2(B)\overset{\Delta}{=} E(c^*Bc). \end{equation} Note that since all the entries of the matrix $c$ are free from each other, $E(cBc^*)=E_{D_N}(cBc^*)$ and $E(c^*Bc)=E_{D_N}(c^*Bc)$, as shown for $\eta$ in (\ref{5.10}). Then, the Cauchy transform $g_{T^2}(z)$ of $T^2$ or $YY^*$ is $g_{T^2}(z)=tr(G_{T^2}(z))$ and $G_{T^2}(z)$ can be obtained by solving the following fixed point equations \cite{hac}, \cite{spe1}: \begin{equation}\label{5.144} zG_{T^2}(z^2) =G_T(z) = E_{D_{2N}} \left[ \left( \begin{array}{cc} z-z\eta_1(G_2(z^2)) & -A \\ -A^* & z-z\eta_2(G_1(z^2)) \end{array} \right)^{-1} \right], \end{equation} where \begin{eqnarray} zG_1(z) & = & E_{D_N} \left[ \left( 1-\eta_1(G_2(z))+ A \frac{1} {z-z\eta_2(G_1(z))}A^* \right)^{-1} \right], \label{5.14}\\ zG_2(z) & = & E_{D_N} \left[ \left( 1-\eta_2(G_1(z))+ A \frac{1} {z-z\eta_1(G_2(z))}A^* \right)^{-1} \right].
\label{5.15} \end{eqnarray} \subsubsection{Correlated Cases and Summary} When the entries in the matrix $X$ are correlated, a similar treatment as above can be done \cite{spe1}. One can still get the Cauchy transform of $Y$ when $X$ is Hermitian by solving the fixed point equation (\ref{5.11})-(\ref{5.12}), and the Cauchy transform of $YY^*$ when $X$ is not Hermitian by solving the fixed point equation (\ref{5.144})-(\ref{5.15}), where $\eta(B)=E(cBc)$ may no longer be diagonal as it is in (\ref{5.10}), and $\eta_1(B)$ and $\eta_2(B)$ may not be diagonal either. An example of correlated entries in $X$ is that each column vector (or row vector) of $X$ is a linear transform of a vector of independent Gaussian random variables. A simpler correlated case is the random matrix $X_1=BX$, where $B$ is a deterministic matrix and $X$ is a random matrix of independent entries. In this case, $X_1$ can be treated as a product of the two matrices $B$ and $X$, and thus was covered previously. The above Cauchy transform calculation is based only on the covariances (the second order statistics) of the entries of the random matrix $X$. As we mentioned earlier, in this case one does not need to run Monte-Carlo simulations for the calculations, which may not be convenient in practice when $X$ has a large size. Going back to the mutual information in the beginning of this section, we can just let $A=0$ in the above to get the Cauchy transform of $HH^*=YY^*$. As a remark, the deterministic equivalents defined above are from \cite{spe6, spe3, spe1}, which we refer to for any differences from those appearing in \cite{gir, hac}. \section{Conclusions}\label{sec6} As mentioned in the beginning of this paper, the main goal here is to introduce free probability theory and its application to random matrices as simply as possible. It is intended for non-mathematics-major researchers in, for example, the communications and signal processing areas.
This paper is mainly based on \cite{spe1, spe2, spe3, spe4}. Free probability theory is about noncommutative elements or random variables, such as random matrices, in contrast to the conventional (real-valued or complex-valued) commutative random variables of classical probability theory. Freeness significantly simplifies the calculations of the moments and therefore the distributions, and interestingly, random matrices, when their size is large, do have freeness asymptotically. Therefore, free probability theory is naturally applied to calculate the asymptotic distributions of the eigenvalues/singular-values of random matrices when their size is large, such as wireless channel matrices in massive MIMO systems. It is particularly interesting that the calculation only needs the second order statistics of the matrix entries. This paper is based on the author's own understanding of free probability theory, and the material covered here is by no means complete. For more complete material on this topic, we refer to \cite{spe1, spe2, spe3, spe4, spe5, hia, tul2, cou1}. \begin{center} {\bf Acknowledgement} \end{center} The author would like to thank Dr. Roland Speicher for his free online video lectures \cite{spe4} and for his help with the author's questions, and Dr. Anan Lu for numerous discussions on free probability theory.
\section{Introduction} The \emph{Memory Wall}~\cite{DBLP:journals/sigarch/WulfM95} is arguably one of the biggest performance challenges in modern computer systems. Since the speed gap between processors and memory currently spans several orders of magnitude and is still widening, it becomes more and more important to understand the memory bottlenecks of an application. A na\"ive approach to measuring memory requirements would be to determine the total amount of memory that an application has claimed. While this can be useful as a first impression, it does not give any information about how much of that memory is actively being used. The principle of (temporal) locality tells us that recently used memory is also more likely to be used soon in the future. Therefore, although an application may have occupied a large amount of memory, it is likely to only operate on a subset at any point in time. This, in turn, is important for the memory hierarchy: Fast memory needs to be small, and thus lower memory requirements make for a faster application. In summary, we should not be concerned with the total amount of memory that an application has claimed, but only whether we can bring the soon-to-be-required memory contents quickly enough into the fast memory. In 1968, Peter Denning~\cite{denning1968working} introduced the concept of a \emph{working set} as a means to unify computation and memory management. It represents a process' demand for memory, tightly coupled to the computation being performed. Specifically, at any point in time $t$, it is defined as the memory that has been accessed within the last $\tau$ time units, where $\tau$ is known as the \emph{working set window}. As such, the working set does not include momentarily disused memory, and thus its size is usually much smaller than the total number of pages mapped (``resident set size'').
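Denning's definition can be stated operationally. Given a trace of (time, page) access pairs, the working set at time $t$ with window $\tau$ is the set of distinct pages touched within $(t-\tau, t]$; a minimal Python sketch (ours, purely for illustration):

```python
def working_set_size(accesses, t, tau):
    """Number of distinct pages accessed within the window (t - tau, t].

    `accesses` is an iterable of (time, page) pairs."""
    return len({page for (time, page) in accesses if t - tau < time <= t})

# Three pages are touched in total, but the working set is smaller at any instant.
trace = [(1, 0xA000), (2, 0xB000), (3, 0xA000), (10, 0xC000)]
working_set_size(trace, t=3, tau=3)    # -> 2: pages 0xA000 and 0xB000
working_set_size(trace, t=10, tau=3)   # -> 1: only page 0xC000 is recent
```

Note how the repeated access to page 0xA000 is counted once: the working set is a set, not a sum of accesses.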
It also directly translates into how much precious cache and RAM is required in that time interval, and thus also allows for estimating the memory throughput required to keep a program running. We refer the reader to Denning's paper for a formal definition of the working set. Surprisingly, while there are countless tools to measure the resident set size (e.g., Linux' \texttt{top}, Windows' task manager) or virtual memory (same tools as above), only a few tools are available today to measure the working set. One possible explanation is that estimating the working set requires tracking memory accesses, which can be costly and have side-effects. In fact, as long as no page faults occur, an operating system would not even be capable of seeing page accesses. Consequently, such tools often invalidate MMU entries and observe how they reappear over time, and thus they are intrusive. One recent, non-intrusive tool to measure the working set under Linux is \texttt{wss}~\cite{wss18}. It builds on kernel patches that were added in 2015~\cite{lkml15}, which allow tracking of idle pages without changing the state of the memory subsystem. This tool only works on Linux kernels newer than 4.3, and the feature is not enabled in mainline builds. Furthermore, it has several minor limitations, such as only tracking pages on the LRU list and only working with certain allocators (the \emph{slab} allocator does not use an LRU list). Most importantly, in contrast to our tool, it does not give any introspection beyond the size of the working set, due to prohibitive performance penalties for online implementations. To offer a more generic and precise way of measuring the working set for both instruction and data memory, we have developed a tool for the popular instrumentation framework Valgrind~\cite{nethercote2007valgrind}. It seamlessly integrates into the latest Valgrind version, and thus works on many Linux platforms, such as x86, ARM/Android, MIPS32, etc.
Computing the working set is achieved by tracking all memory accesses of a process (both instruction and data), and counting the number of pages that have been accessed within the last $\tau$ (user-defined) time units at equidistant sampling points $t=kT$, with $k\in\mathbb N$ and $T$ being the sampling interval. The output is the size of the instruction and data working sets over time, for each individual thread, annotated with additional information. \pagebreak \section{Tool Details} Our tool for measuring the working set is open source and available at \url{https://github.com/mbeckersys/valgrind-ws}. It tracks the number of page accesses over time, calculates the working set size (WSS) individually for each process, and separately for data and instruction memory. It is a plugin for Valgrind~\cite{nethercote2007valgrind}, and thus the program under analysis is effectively being instrumented and simulated, while using the host ISA. \subsection{Interaction with Valgrind Core} Valgrind tools work by instrumenting and observing a process that is interpreted by the Valgrind core. In particular, the Valgrind core provides a stream of an architecture-independent intermediate representation (IR), which carries markers relating the IR to the original instruction stream. Figure~\ref{fig:internals} shows how we interact with the core. We log all page references for both instruction and data, and count recently accessed pages, i.e., the WSS, at equidistant time intervals. The time intervals are based on the number of instructions simulated, because the simulation slows down the process under analysis, and thus wall-clock time would be meaningless. To that end, we instrument the IR stream with IR statements that increment a counter every time an instruction marker is reached. Page accesses are registered only by observing the incoming IR statements.
For every data access and every instruction marker (indicating a new original instruction) we query the referenced addresses, translate them to page addresses, and maintain a table holding all seen pages together with a time stamp of the last access. Every time the instruction counter reaches a multiple of the sampling interval $T$, we compute the working set by counting all pages that have been accessed within the last $\tau$ instructions. Additionally, a peak detection algorithm can be enabled, which records additional information when the WSS exhibits peaks, and is described in the following section. \begin{figure}[htb] \centering \includegraphics[width=.5\textwidth]{img/interaction-crop} \caption{Interaction with Valgrind core\label{fig:internals}} \vspace*{-1em} \end{figure} \subsection{Online Peak Detection and Annotation} For a developer it is important to understand why her program exhibits a certain WSS behavior, so that it can be analyzed and improved. Therefore, additional information about the WSS samples, such as the current call stack, shall be recorded. However, this cannot be done for every sample, since it would significantly increase the memory footprint of our tool, and thus slow down the simulation unfavorably. Thus, the tool detects peaks in the WSS of either instructions or data, and records additional information only for those peaks. Such additional information is indicated in the output by boxed numbers, e.g., \fbox{1} and \fbox{3}, see Figure~\ref{fig:teaser}. Peak detection must be quick to respond to peaks; otherwise, we blame the wrong code location. It also must have a low memory footprint, since it would otherwise defeat its own purpose. Therefore, we build on the \emph{principle of dispersion} and use signal filters that do not require storing a window of samples.
The threshold for peak detection depends on the current signal properties: if the memory usage is high, a certain distance from the moving average is used as the threshold; otherwise, we compare the deviation from the mean against the variance. In between, the threshold is a mixture of both. This strategy allows us to identify meaningful peaks in both stationary and non-stationary memory usage settings. Specifically, we first compute an exponential moving average $\mu$ and moving variance $\sigma^2$; neither of them requires storing a list of recently seen samples. Then, for every new sample $x$, we calculate its distance $e = |x - \mu|$, and the ratio $F=\sigma^2/\mu$, also known as the \emph{Fano factor}. The new sample $x$ is considered a peak if $e > E$, where $E$ is the time-varying detection threshold, calculated as $E=cg\sigma^2 + (1-c)g\mu$, with $c=1-\exp(-F/2)$, and $g$ being a parameter to influence detection sensitivity. Finally, we apply an exponential filtering to the signal before updating the moving average/variance, which prevents single huge peaks from inflating these statistics and masking subsequent peaks. To satisfy the requirement of quick response to peaks, we only apply filtering as long as a peak is present. \subsection{Hot Pages} The tool also yields a list of the most frequently accessed pages, both for instruction and data. We use debug information to provide additional information about the pages, such as source locations. For example: \begin{lstlisting}
Code pages (57 entries):
   count     page       information
90312000     0x0400     touch_pages (pageramp.c:38)
  112640     0x4D08     __vsyslog_chk (syslog.c:298)
...
\end{lstlisting} It can be seen that one single page of instructions is the target of most page accesses. Since we only track at page granularity, the information here is only approximate. It refers to the first instruction/data access that falls into the given page. A similar output is produced for data pages.
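The dispersion-based peak detector described above can be sketched in Python. This is a simplified re-implementation under our own assumptions: the smoothing constants and their names are ours, not the tool's actual parameters, and we read the mixing coefficient as $c=1-\exp(-F/2)$ so that $c\in[0,1)$:

```python
import math

def detect_peaks(samples, g=3.0, alpha=0.1, beta=0.3):
    """Dispersion-based peak detection without storing a sample window.

    g     -- detection sensitivity (hypothetical default)
    alpha -- smoothing constant of the moving average/variance (hypothetical)
    beta  -- exponential filtering applied while a peak is present (hypothetical)
    """
    mu, var = float(samples[0]), 0.0
    peaks = []
    for i, x in enumerate(samples[1:], start=1):
        e = abs(x - mu)                      # deviation from moving average
        F = var / mu if mu > 0 else 0.0      # Fano factor
        c = 1.0 - math.exp(-F / 2.0)         # mixing coefficient in [0, 1)
        threshold = c * g * var + (1.0 - c) * g * mu
        if e > threshold:
            peaks.append(i)
            x = mu + beta * (x - mu)         # dampen the peak before updating
        d = x - mu                           # update moving average/variance
        mu += alpha * d
        var = (1.0 - alpha) * (var + alpha * d * d)
    return peaks

# A flat signal with a single spike: only the spike index is reported.
detect_peaks([100.0] * 50 + [1000.0] + [100.0] * 50)   # -> [50]
```

Observe the two regimes: while the signal is flat the variance is near zero and the threshold reduces to $g\mu$, whereas right after a peak the inflated variance raises the threshold and suppresses spurious follow-up detections.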
\section{Examples} In this section we demonstrate several examples, to illustrate the tool's output and use cases. We compare our output to the following tools: \begin{inparaenum}[(1)] \item \texttt{psutil}~\cite{psutil} (a Python script to programmatically query process details) and \item \texttt{wss}~\cite{wss18} (a recent Linux tool to measure the working set size). \end{inparaenum} One notable difference is that our simulation has a different time base than the other two tools. Both of them measure in wall-clock time, but incur different overheads. Our tool, in contrast, uses the number of executed instructions as a time base. Therefore, in a practical setting, all three tools have different time bases and the outputs should not be compared directly. Another difference is that the other tools are quite limited in their temporal resolution. For the examples given here, we had them sample the memory state every 2ms, which already keeps our machine busy for these simple example programs. \subsection{``Pageramp'' Demo} \begin{lstlisting}
valgrind --tool=ws --ws-peak-detect=yes ./pageramp
\end{lstlisting} This is a simple workload requesting and releasing data pages in a sawtooth pattern. The program starts with zero data pages, and successively claims more and more pages, until an upper bound of 1024 pages is reached. Subsequently, it releases the pages again, until it arrives back at zero. During the entire time, it writes one byte of data to every second page, to ensure that half of the claimed memory is actively used. This process is repeated 10 times before the program exits.
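The workload just described can be sketched as follows. This is a hypothetical Python reconstruction; the actual \texttt{pageramp} is a C program and may differ in detail, and Python's allocator does not guarantee freshly mapped, page-aligned memory, so this only approximates its behavior:

```python
PAGE = 4096           # assumed page size in bytes

def touch_pages(pages, stride=2):
    """Write one byte to every `stride`-th claimed page; returns pages touched."""
    touched = range(0, len(pages), stride)
    for p in touched:
        pages[p][0] = 1
    return len(touched)

def pageramp(max_pages=1024, stride=2, cycles=10):
    """Claim and release pages in a sawtooth, actively using half of them."""
    touched_history = []
    for _ in range(cycles):
        pages = []
        while len(pages) < max_pages:         # ramp up: claim more and more pages
            pages.append(bytearray(PAGE))
            touched_history.append(touch_pages(pages, stride))
        while pages:                          # ramp down: release them again
            pages.pop()
            touched_history.append(touch_pages(pages, stride))
    return touched_history
```

With the default parameters, at most half of the 1024 claimed pages are touched at any time, which is why a resident-set measurement is expected to overestimate the working set by roughly a factor of two.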
\begin{figure}[tb] \centering \includegraphics[width=.7\textwidth]{img/pageramp-mod.pdf} \caption{Memory requirements of ``pageramp'', as reported by our tool ($\tau=100,000$ instructions)\label{fig:pageramp:us}} \end{figure} \begin{figure}[tb] \input{img/pageramp_mod_others} \caption{Memory requirements of ``pageramp'' as reported by \texttt{wss} and \texttt{psutil}.} \label{fig:pageramp:others} \end{figure} We expect that both our tool and \texttt{wss} show similar output, and that \texttt{psutil} overestimates the memory requirements roughly by a factor of two. As can be seen in Fig.~\ref{fig:pageramp:us}, our tool clearly yields the expected output. The outputs from \texttt{psutil} and \texttt{wss} are depicted in Fig.~\ref{fig:pageramp:others}. Both programs miss some part of the execution, since they can only be started after the process has come to life. Additionally, they have a worse temporal resolution. More importantly, while \texttt{wss} delivers the expected result (yet without any introspection), \texttt{psutil} can only provide an upper bound on the working set size by means of the resident set size (RSS). Last but not least, the other tools do not distinguish between instruction and data. Only our tool provides further information about the WSS. Two peaks have been detected, marked as \fbox{0} and \fbox{1} in the output. The latter one is a peak in the instruction WSS, due to OS cleanup actions. The first one is caused by user code, and the information recorded for \fbox{0} is \begin{lstlisting}
[0] refs=2, loc=pageramp.c:37|pageramp.c:77
\end{lstlisting} saying that the call stack at this peak was \texttt{pageramp.c} line 37, called by line 77. The corresponding code is \begin{lstlisting}
36: static int touch_pages(pagetab_t *pt, int stride) {
37:   for (int p=0; p<=(pt->num-stride); p+=stride) {
...
77:   touch_pages(&pt, stride);
\end{lstlisting} and thus the tool is pointing the user directly to the piece of code that caused the latest access.
Finally, the tool also provides some summaries, which give a first impression about average (avg), maximum WSS (peak) and the total number of unique pages being accessed: \begin{lstlisting} Insn avg/peak/total: 2.1/40/57 pages (8/160/228 kB) Data avg/peak/total: 348.5/534/1098 pages (1394/2136/4392 kB) \end{lstlisting} Note that our example program apparently uses 74 data pages that are not requested by the user, but by the C library. \subsection{Working Set Window $\tau \neq$ Sampling Period $T$} \begin{lstlisting} valgrind --tool=ws --ws-tau=10000 ./pageramp ... valgrind --tool=ws --ws-tau=10000000 ./pageramp \end{lstlisting} So far, the working set window $\tau$ has been the same length as the sampling interval $T$, since the other tools cannot measure when $\tau \neq T$: \texttt{psutil} does not measure the working set, so $\tau$ is meaningless here, and \texttt{wss} modifies bits in the page table every time a sample is taken, and therefore cannot handle this case. However, it is important for scalability of the measurements to separate these two aspects. While for long-running programs the sampling interval should be increased to reduce the amount of measurement overhead, the working set window should be chosen according to a maximum memory bandwidth, which is independent of the program under analysis. \begin{figure}[tb] \centering \input{img/pageramp_vws_tau}\vspace*{-1em} \caption{Data working set size of ``pageramp'' as reported by tool for different working set windows $\tau$} \label{fig:pageramp:tau}\vspace*{-1em} \end{figure} We now exercise the test program ``pageramp'' again with our tool, while choosing different working set windows $\tau$. The results are shown in Figure~\ref{fig:pageramp:tau}. For $\tau=10,000$ instructions, the working set size never exceeds 600 pages. The pages cannot be accessed fast enough to include all the 1024 pages in the working set; or conversely, it takes more than 10,000 instructions to touch all 1024 pages. 
For larger $\tau$, the upper peaks become clearly visible, since now $\tau$ is large enough that at least briefly all pages are considered active before they leave the working set again. In turn, the lower bound of the working set is now increased, because already released pages are still considered to be active. The latter behavior could in principle be fixed by tracking the release of pages, but this would deviate from the definition of the working set. \section{Discussion} \subsection{Performance} As with any Valgrind tool, execution is slowed down. The slowdown depends on the number of memory accesses, and also on the sampling interval. A larger sampling interval should be chosen for long-running programs, which reduces the memory footprint and the simulation time. In line with other Valgrind tools, all the results (e.g., the working set size over time) need to be stored in memory before they can be written to file. Therefore, our tool's memory requirements increase with the run-time of the workload. Because the workload itself is simulated in the Valgrind core, this has no influence on the output. \subsection{Precision} The sampling points are not exactly equidistant, since samples can only be taken at the end of superblocks or at exit IR statements. Therefore, a small jitter must be expected, which is usually in the order of a few dozen instructions. In our implementation, the jitter is strictly positive, i.e., the sampling interval given by the user is never undercut. \subsection{Limitations} Sharing pages between threads or processes is currently ignored; therefore, the working set is overestimated for multi-threaded programs sharing data. If pages are unmapped and a new page is later mapped at the same address, they are counted as the same page, even if the contents are different. This is less critical for the working set size at any given point in time, but affects the summaries given in the end.
Finally, we want to explicitly warn that OS and hardware mechanisms, such as read-ahead and prefetching, are not considered, by virtue of Valgrind's execution model. This program only counts demand accesses, as logically required by the running process. \section{Related Work\label{sec:related-work}} Most attempts to measure the working set size in a running system are based on MMU invalidation, such as the \texttt{wss} tool~\cite{wss18} mentioned earlier. Under the Windows\texttrademark~operating system, the equivalent tools are the \emph{Windows Performance Recorder} and \emph{Xperf}~\cite{winrefset}. Such measurements are intrusive, since the tools actively remove pages from the MMU, and observe how they are being loaded back again. Also, they cannot track instruction memory. Another notable example of tracking the working set size via MMU invalidation is \cite{zhao2011low}. Gregg~\cite{GreggWss18} mentions other ways to estimate the WSS, among them the hardware performance monitoring counters (PMCs) in modern CPUs. Deducing the working set size from the PMCs would require measuring cache and DRAM in/out traffic, and possibly MMU accesses, and deducing from that which pages are being used. Since counters are processor-specific, such an approach would have to be developed for each processor family individually, and would likely not yield precise results due to a lack of implementation details. Other approaches he mentions are memory breakpoints (slow), and processor traces and bus snooping (which require hardware support). Another approach for taking WSS measurements is via virtualization. Since full virtualization or tracking every memory access would be too slow, these approaches are typically also based on MMU invalidation. Efficient implementations for such environments have been presented recently~\cite{wires2014characterizing}.
An approach based on soft PMCs of a Windows guest operating system in a virtualized environment has recently been proposed in~\cite{melekhova2015estimating}. However, virtualization in general requires more setup work than our tool, and thus is less convenient for daily use. The tool presented here falls under the simulation category, and thus offers more insights than the mentioned methods, at the price of slower execution. It is, however, a generic tool for an instrumentation framework that is widely used on many Linux platforms, and does not depend on the caching hardware. We are not aware of any publicly available tool with similar functionality. \section{Closing Remarks} We have introduced a new tool for the popular Valgrind suite, which determines the active memory requirements of a process, known as the \emph{working set size}, and allows associating peaks in the working set size with call stack information. The measurements are taken at page-level granularity, at a user-defined sampling interval and working set window. This tool can be used to monitor the time-varying memory requirements of an application, and subsequently to leverage this information for performance debugging. This first release has some limitations that need to be considered by a user. At the moment, we do not track release and re-use of pages, and we do not consolidate the working sets of several threads in multi-threaded applications. All of that results in a slight overapproximation of the working set, especially when shared memory is used. Future work entails addressing these shortcomings, and possibly also introducing models to measure the spatial locality of memory accesses. \bibliographystyle{ACM-Reference-Format}
\section*{Introduction} Dickson's commutative division algebras \cite{Dic} have been widely studied over finite fields, as they yield proper finite semifields of even dimension: for any choice of $c\in K\setminus K^2$ and $\sigma\in Aut_F(K)$ not equal to the identity, $K\oplus K$ equipped with the multiplication $$(u,v)(x,y)=(ux+c\sigma(vy),uy+vx)$$ is a division algebra over $F$ when $F$ is a finite field. This construction was investigated more generally in two papers by Burmester, where $K$ is a cyclic field extension of degree $n$ over a field of characteristic not 2 \cite{Bur, Bur2}, producing $2n$-dimensional unital algebras over $F$. Further, Dickson \cite{Dic} and Burmester gave a necessary and sufficient condition for when the algebras constructed this way are division.\\ Dickson's commutative division algebras also appear as a special case of a family of finite semifields constructed by Knuth \cite{Knu}: A subfield $L$ of a semifield $S$ is called a \textit{weak nucleus} if $x(yz)-(xy)z=0$ whenever two of $x,y,z$ lie in $L$. Knuth produced conditions to determine all isotopism classes of finite semifields which are quadratic over their weak nucleus; Dickson's semifields are the only commutative semifields of this type. Isotopism classes of commutative Dickson semifields were also treated in Burmester's paper \cite{Bur2} and more recently in some work by Hui, Tai and Wong \cite{HTW}.\\ In this paper, we generalise Dickson's doubling process by first doubling a finite (not necessarily cyclic) field extension $K/F$ and then a central simple associative algebra $B$ over $F$. As $B$ is not commutative, in this last setup we have more options for a possible generalisation of the multiplication given in Dickson's construction.
For instance, we may define the multiplication on the $F$-vector space $B\oplus B$ as $$(u,v)(x,y)=(ux+c\sigma(vy),uy+vx)$$ for some $c\in B^{\times}$ and non-trivial $\sigma\in Aut_F(B)$, but we can also define a multiplication by putting $c$ in the middle, i.e. $$(u,v)(x,y)=(ux+\sigma(v)c\sigma(y),uy+vx),$$ or by putting $c$ on the right-hand side: $$(u,v)(x,y)=(ux+\sigma(vy)c,uy+vx).$$ Clearly, the unital $F$-algebras we obtain this way are no longer commutative.\\ After preliminary results and definitions in Section 1, the doubling of a finite field extension is investigated in Section 2. We find multiple conditions for when we obtain division algebras this way and consider when our algebras are isomorphic. We also examine their automorphisms and determine their automorphism groups. Section 3 looks at what happens when we construct algebras starting with a central simple algebra $B$ over $F$ and employs several canonical generalisations of Dickson's doubling process. Again we examine both isomorphisms and automorphisms of these algebras and determine the size of their automorphism groups. Most importantly, we investigate when the algebras we obtained this way are division algebras. The results of this paper are part of the author's PhD thesis written under the supervision of Dr S. Pumpl\"{u}n. \section{Definitions and preliminary results} In the following, let $F$ be a field. We will define an $F$-algebra $A$ as a finite dimensional $F$-vector space equipped with a (not necessarily associative) bilinear map $A\times A\to A$ which is the multiplication of the algebra. $A$ is a \textit{division algebra} if for all nonzero $a\in A$ the maps $L_a:A\to A$, $x\mapsto ax$, and $R_a:A\to A$, $x\mapsto xa$, are bijective maps. 
As $A$ is finite dimensional, $A$ is a division algebra if and only if there are no zero divisors \cite{Sch}.\\ The \textit{associator} of $x,y,z\in A$ is defined to be $[x,y,z]:=(xy)z-x(yz).$ Define the \textit{left, middle and right nuclei} of $A$ as $Nuc_l(A):=\lbrace x\in A \mid [x,A,A]=0\rbrace,$ $Nuc_m(A):=\lbrace x\in A \mid [A,x,A]=0\rbrace,$ and $Nuc_r(A):=\lbrace x\in A \mid [A,A,x]=0\rbrace.$ The left, middle and right nuclei are associative subalgebras of $A$. Their intersection $Nuc(A):=\lbrace x\in A \mid [x,A,A]=[A,x,A]=[A,A,x]=0\rbrace$ is the \textit{nucleus} of $A$. The \textit{commuter} of $A$ is the set of elements which commute with every other element, $Comm(A):=\lbrace x\in A\mid xy=yx \:\forall y\in A\rbrace.$ The \textit{center} of $A$ is given by the intersection of $Nuc(A)$ and $Comm(A)$, $Z(A):=\lbrace x\in Nuc(A)\mid xy=yx\: \forall y\in A\rbrace.$ For two algebras $A$ and $B$, any isomorphism $f:A\to B$ maps $Nuc(A)$ isomorphically onto $Nuc(B)$. An algebra $A$ is \textit{unital} if there exists an element $1_A\in A$ such that $x1_A=1_Ax=x$ for all $x\in A$.\\ A \textit{d-linear form} over $F$ is an $F$-multilinear map $\theta:A\times...\times A\to F$ ($d$ copies) such that $\theta(x_1,x_2,...,x_d)$ is invariant under all permutations of its variables. A \textit{form of degree d} over $F$ is a map $N:A\to F$ such that $N(ax)=a^dN(x)$ for all $a\in F$, $x\in A$, and such that the map $\theta:A\times...\times A\to F$ defined by $$\theta(x_1,x_2,...,x_d)=\frac{1}{d!}\sum_{l=1}^{d}\sum_{1\leq i_1<...<i_l\leq d}(-1)^{d-l}N(x_{i_1}+...+x_{i_l})$$ is a $d$-linear form over $F$. A form $N:A\to F$ of degree $d$ is called \textit{multiplicative} if $N(xy)=N(x)N(y)$ for all $x,y\in A$ and \textit{nondegenerate} if we have $N(x)=0$ if and only if $x=0$. Note that if $N:A\to F$ is a nondegenerate multiplicative form and $A$ is a unital algebra, it follows that $N(1_A)=1_F$.
Every central simple algebra of degree $d$ admits a uniquely determined nondegenerate multiplicative form of degree $d$, called the \textit{norm} of the algebra.\\ \section{Commutative Dickson algebras over any base field}\label{GeneralisingDicksonAlgebras} \subsection{The construction process} Let $K$ be a finite field extension of $F$ of degree $n$. For some $c\in K^{\times}$ and $\sigma\in Aut_F(K)$, we define a multiplication on the $F$-vector space $K\oplus K$ by $$(u,v)(x,y)=(ux+c\sigma(vy),uy+vx)$$ for all $u,v,x,y\in K$. This makes $K\oplus K$ a unital nonassociative ring which we denote by $D(K,\sigma,c)$. Note that $D(K,id,c)$ is isomorphic to a quadratic field extension of $K$ when $c\in K\setminus K^2$ and that $D(K,id,c)\cong K\times K$ when $c\in K^{\times 2}$. Because of this, we will only consider $\sigma\neq id$. Note that $F$ is canonically embedded into $D(K,\sigma,c)$ via the map $a\mapsto (a,0)$. Similarly, we will denote any subalgebra of the form $E\oplus 0$ simply by $E$. Clearly $D=D(K,\sigma,c)$ is commutative. Over finite fields, it is known that when $\sigma\neq id$, then $Nuc_l(D)=Nuc_r(D)=Fix(\sigma)$ and $Nuc_m(D)=K$ \cite[p.126]{Cor}. This also holds over an arbitrary field and is easily checked: \begin{theorem}\label{CDSNuclei} Let $D=D(K,\sigma,c)$ with $\sigma\in Aut_F(K)$ a non-trivial automorphism. Then we have $Nuc_l(D)=Nuc_r(D)=Fix(\sigma)$ and $Nuc_m(D)=K$. In particular, this yields $Nuc(D)=Fix(\sigma)$ and $Z(D)=Fix(\sigma)$. \end{theorem} A nonassociative ring is always an algebra over its centre, so $D(K,\sigma,c)$ is an algebra over $Fix(\sigma)$. However, as $F\subset Fix(\sigma)$, we can also view $D(K,\sigma,c)$ as an algebra over $F$ of dimension $2n$. Clearly all subfields $E$ of $K$ are subalgebras of $D(K,\sigma, c)$. Additionally, if $E$ is a subfield of $K$ such that $c\in E^{\times}$ and $\sigma\!\mid_E \,\in Aut_F(E)$, then $D(E,\sigma\!\mid_E,c)$ is a subalgebra of $D(K,\sigma,c)$.
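For a concrete feel for the construction, one can brute-force the smallest interesting case $K=\mathbb{F}_9$ over $F=\mathbb{F}_3$, with $\sigma$ the Frobenius automorphism $x\mapsto x^3$. The following Python sketch (ours, purely illustrative) searches for zero divisors in $D(K,\sigma,c)$ and confirms Dickson's classical finite-field dichotomy: $D(K,\sigma,c)$ is a division algebra precisely for the four non-squares $c\in K^{\times}$:

```python
# K = GF(9) = F_3(i) with i^2 = -1; an element a + b*i is stored as (a, b).
K = [(a, b) for a in range(3) for b in range(3)]

def kmul(x, y):                       # multiplication in GF(9)
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def kadd(x, y):                       # addition in GF(9)
    return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)

def sigma(x):                         # Frobenius x -> x^3, i.e. a + b*i -> a - b*i
    return (x[0], -x[1] % 3)

def dmul(c, p, q):                    # (u,v)(x,y) = (ux + c*sigma(vy), uy + vx)
    (u, v), (x, y) = p, q
    return (kadd(kmul(u, x), kmul(c, sigma(kmul(v, y)))),
            kadd(kmul(u, y), kmul(v, x)))

ZERO = ((0, 0), (0, 0))
D = [(u, v) for u in K for v in K if (u, v) != ZERO]   # the 80 nonzero elements

def is_division(c):
    """D(K, sigma, c) is division iff no two nonzero elements multiply to zero."""
    return all(dmul(c, p, q) != ZERO for p in D for q in D)

squares = {kmul(x, x) for x in K} - {(0, 0)}
# is_division(c) holds exactly for the non-squares c, e.g. c = 1 + i,
# which generates the cyclic group GF(9)^x.
```

Note how cheap the exhaustive check is here ($80\times 80$ products per choice of $c$); of course, for infinite base fields the criteria proved below replace such a search.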
Moreover, if $L=Fix(\sigma)$ and $c\in L^{\times}$, then $L\oplus L$ is an associative subalgebra of $D(K,\sigma,c)$.\\ \subsection{Division algebras} Dickson \cite{Dic} gave a sufficient condition for $D(K,\sigma,c)$ to be a nonassociative division algebra when $F$ is an infinite field and $K/F$ is a cyclic extension. Burmester further showed this was also a necessary condition \cite[Theorem 1]{Bur}. If we assume $K/Fix(\sigma)$ is cyclic, \cite[Theorem 1]{Bur} extends naturally to our construction: \begin{theorem} Let $F$ be an infinite field and $L=Fix(\sigma)$. If $Aut_L(K)=\langle\sigma\rangle$, then $D(K,\sigma, c)$ is a division algebra over $F$ if and only if $N_{K/L}(c)\neq N_{K/L}(a^2)$ for all $a\in K$. \end{theorem} The proof is analogous to the proof of \cite[Theorem 1]{Bur}. As it uses \cite[Theorem 5, p.200]{A2018}, we require that $F$ is not a finite field. If $K/Fix(\sigma)$ is not a cyclic extension, this result does not necessarily hold. However, we can directly compute a different necessary and sufficient condition for $D(K,\sigma,c)$ to be a division algebra: \begin{theorem}\label{CDSDivision} $D(K,\sigma,c)$ is a division algebra if and only if $c\neq r^2s\sigma(s)^{-1}t^{-1}\sigma(t)^{-1}$ for all $r,s,t\in K^{\times}$. \end{theorem} \begin{proof} Suppose $D(K,\sigma,c)$ is not a division algebra. Then there exist nonzero elements $(u,v),(x,y)\in K\oplus K$ such that $(u,v)(x,y)=(0,0).$ This is equivalent to the simultaneous equations \begin{align} ux+c\sigma(vy)=&0, \label{CDSDivision1}\\ uy+vx=&0. \label{CDSDivision2} \end{align} If $v=0$, (\ref{CDSDivision2}) becomes $uy=0$, so either $u=0$ or $y=0$. However, $u$ must be non-zero, else $(u,v)=(0,0)$ which is a contradiction, so we must have $y=0$. Additionally, (\ref{CDSDivision1}) gives $ux=0$. As $u$ is non-zero, this implies $x=0$ and so $(x,y)=(0,0)$ which is again a contradiction.\\ So let $v\neq 0$. 
As $K$ is a field, we have $v^{-1}\in K$ and hence we obtain $x=-uyv^{-1}$ from (\ref{CDSDivision2}). Now if $y=0$, this implies that $x=0$ which contradicts the assumption that $(x,y)\neq(0,0)$. Substituting this into (\ref{CDSDivision1}), we get $-u^2yv^{-1}+c\sigma(vy)=0,$ which rearranges to give $c=u^2y\sigma(y)^{-1}v^{-1}\sigma(v)^{-1}$.\\ Conversely, suppose $c=r^2s\sigma(s)^{-1}t^{-1}\sigma(t)^{-1}$ for some $r,s,t\in K^{\times}$. Consider the elements $(r,t)$ and $(-rst^{-1},s)$. Both elements are nonzero but satisfy \begin{equation*} (r,t)(-rst^{-1},s)=(-r^2st^{-1}+r^2s\sigma(s)^{-1}t^{-1}\sigma(t)^{-1}\sigma(ts),rs-rst^{-1}t)=(0,0). \end{equation*} Hence $D(K,\sigma,c)$ is not a division algebra. \end{proof} \begin{corollary}\label{CDSDivisionNorm} If $N_{K/F}(c)\neq N_{K/F}(a^2)$ for all $a\in K^{\times}$, then $D(K,\sigma,c)$ is a division algebra. \end{corollary} \begin{proof} Suppose $D(K,\sigma,c)$ is not a division algebra. By Theorem \ref{CDSDivision}, there exists some $r,s,t\in K^{\times}$ such that $c=r^2s\sigma(s)^{-1}t^{-1}\sigma(t)^{-1}$. Taking norms of both sides of the equation we obtain $N_{K/F}(c)= N_{K/F}(r^2s\sigma(s)^{-1}t^{-1}\sigma(t)^{-1})$. As the norm is multiplicative and $N_{K/F}(x)=N_{K/F}(\sigma(x))$, this yields $$N_{K/F}(c)=N_{K/F}(r^2)N_{K/F}(s)N_{K/F}(s^{-1})N_{K/F}(t^{-1})^2,$$ which simplifies to $N_{K/F}(c)=N_{K/F}((rt^{-1})^2)$. \end{proof} \begin{corollary}\label{CDSDivisionSquare} If $c$ is a square in $K$, then $D(K,\sigma,c)$ is not a division algebra. \end{corollary} \begin{proof} In the notation of Theorem \ref{CDSDivision}, let $s=t=1$. Then if $c=r^2$ for some $r\in K$, then $D(K,\sigma,c)$ is not a division algebra. \end{proof} \begin{remark}\begin{enumerate}[(i)] \item Let $F=\mathbb{R}$ and $K=\mathbb{C}$. As every element of $\mathbb{C}$ is a square, we do not obtain any real division algebras by using this construction. 
\item If $F$ is a finite field of characteristic 2, we also do not obtain any division algebras: again, every element is a square, so $D(K,\sigma,c)$ is not a division algebra by Corollary \ref{CDSDivisionSquare}. \end{enumerate} \end{remark} \begin{example}\begin{enumerate}[(i)] \item Let $F=\mathbb{Q}$ and $K=\mathbb{Q}(\sqrt{a})$ for some $a\in \mathbb{Q}\setminus \mathbb{Q}^2$. Then we obtain $N_{K/\mathbb{Q}}(x+y\sqrt{a})=x^2-y^2a$ for all $x,y\in \mathbb{Q}$. If we let $c=y\sqrt{a}$ for any $y\in \mathbb{Q}^{\times}$, this yields $N_{K/\mathbb{Q}}(c)=-y^2a$, which is not a square in $\mathbb{Q}$ whenever $-a\not\in \mathbb{Q}^2$ (for instance, whenever $a>0$). In this case we conclude that $D(K,\sigma,c)$ is a division algebra of dimension 4 over $\mathbb{Q}$. \item Let $F=\mathbb{Q}_p$ and $K=\mathbb{Q}_p(\alpha)$ be a quadratic field extension of $\mathbb{Q}_p$. Thus $K$ is equal to one of $\mathbb{Q}_p(\sqrt{p})$, $\mathbb{Q}_p(\sqrt{u})$ or $\mathbb{Q}_p(\sqrt{up})$, where $u\in \mathbb{Z}_p\setminus \mathbb{Z}_p^2$. If $p\equiv 1\mbox{ (mod }4)$, then $-1\in \mathbb{Q}_p^2$ while $\alpha^2\not\in \mathbb{Q}_p^2$, so $-\alpha^2\not\in \mathbb{Q}_p^2$ and thus we have $N_{K/ \mathbb{Q}_p}(y\alpha)=-y^2\alpha^2\not\in \mathbb{Q}_p^2.$ Hence, $D(K,\sigma, y\alpha)$ is a division algebra of dimension 4 over $\mathbb{Q}_p$. \end{enumerate} \end{example} \begin{remark} If $F$ is a finite field of odd characteristic, the converse of Corollary \ref{CDSDivisionSquare} also holds: $D(K,\sigma,c)$ is a division algebra if and only if $c$ is not a square in $K$. This was originally proved in \cite[Theorem 1']{Bur} but can also be obtained as a consequence of Theorem \ref{CDSDivision}:\\ If $F=\mathbb{F}_{p^s}$ and $K=\mathbb{F}_{p^r}$ is a finite extension of $F$, it is known that $Aut_F(K)$ is cyclic of order $r/s$ and is generated by $\phi^s$, where $\phi$ is the Frobenius automorphism defined by $\phi(x)=x^p$ for all $x\in K$. Writing $\sigma=(\phi^s)^t$ for some $t\in\mathbb{Z}$, over a finite field of odd characteristic we thus have $$\sigma(x)x=\phi^{st}(x)x=x^{p^{st}}x=x^{p^{st}+1}$$ for all $x\in K$. 
As $p$ is odd, $p^{st}+1=2n$ for some $n\in\mathbb{Z}$ and so we can write $\sigma(x)x=x^{2n}=(x^{n})^2$ for all $x\in K$. A similar argument shows that $\sigma(x)x^{-1}$ is a square for all $x\in K$. Hence over finite fields of odd characteristic, Theorem \ref{CDSDivision} yields that $D=D(K,\sigma,c)$ is a division algebra if and only if $c$ is not a square in $K$. \end{remark} \subsection{Isomorphisms} For the rest of the section, we will assume that $F$ has characteristic not 2 unless stated otherwise and that $\sigma\in Aut_F(K)$ is a non-trivial automorphism. Burmester \cite{Bur} computed the isomorphisms of commutative Dickson algebras $D(K,\sigma,c)$ when $K$ is a cyclic extension of $F$. The notation originally used in \cite{Bur} differs from ours; for clarity, we rephrase his result in our notation: \begin{theorem}[\cite{Bur}, Theorem 2]\label{BurmesterAutomorphisms} Let $K/F$ be a cyclic field extension of degree $n$ and let $Aut_F(K)=\langle\sigma\rangle$. Then $D(K,\sigma^i,c)\cong D(K,\sigma^j,d)$ if and only if $i=j$ and there exist an integer $0\leqslant k< n$ and an element $x\in K$ such that $d=x^2\sigma^k(c).$ \end{theorem} In order to generalise this result, we first note the following two lemmas: \begin{lemma}\label{IsomorphismRestrictsToFixFields} Let $D(K,\sigma,c)$ and $D(L,\phi, d)$ be two commutative Dickson algebras over $F$. If $Fix(\sigma)\not\cong Fix(\phi)$, then $D(K,\sigma,c)\not\cong D(L,\phi,d)$ for any choice of $c\in K^{\times}$ and $d\in L^{\times}$. \end{lemma} \begin{proof} Suppose $D(K,\sigma,c)\cong D(L,\phi,d)$. As any isomorphism must map the centre of $D(K,\sigma,c)$ to the centre of $D(L,\phi,d)$, this implies $Fix(\sigma)\cong Fix(\phi)$. \end{proof} \begin{lemma}\label{sigmataufixfields}Let $\sigma\in Aut_F(K)$ and $\phi\in Aut_F(L)$. If there exists an $F$-isomorphism $\tau:K\to L$ such that $\tau\circ\sigma=\phi\circ\tau$, then $\tau\!\mid_{Fix(\sigma)}:Fix(\sigma)\to Fix(\phi)$ is an $F$-isomorphism. 
\end{lemma} \begin{proof} For all $x\in Fix(\sigma)$, it follows that $$\phi\circ\tau(x)=\tau\circ\sigma(x)=\tau(x),$$ so $\tau(x)\in Fix(\phi)$. Hence we conclude that $im(\tau\mid_{Fix(\sigma)})\subseteq Fix(\phi)$. To show that in fact $im(\tau\!\mid_{Fix(\sigma)})= Fix(\phi)$, we note that for any $y\in Fix(\phi)$ there exists $x\in K$ such that $\tau(x)=y$. As $\tau(x)\in Fix(\phi)$, this implies $\tau\circ\sigma(x)=\phi\circ\tau(x)=\tau(x),$ thus $x\in Fix(\sigma)$ and it follows that $im(\tau\!\mid_{Fix(\sigma)})= Fix(\phi)$. This is sufficient to show that $\tau\mid_{Fix(\sigma)}:Fix(\sigma)\to Fix(\phi)$ is an $F$-isomorphism. \end{proof} \begin{theorem}\label{CDSIsomorphisms} Let $K$ and $L$ be two finite field extensions of $F$ and $D=D(K,\sigma,c)$ and $D'=D(L,\phi,d)$ be two commutative Dickson algebras over $F$. Then $G:D\to D'$ is an isomorphism if and only if $G$ has the form $$G(x,y)=(\tau(x),\tau(y)b)$$ for some $F$-isomorphism $\tau:K\to L$ such that:\begin{enumerate}[(i)] \item $\phi\circ\tau=\tau\circ\sigma$, \item there exists $b\in L^{\times}$ such that $\tau(c)=d\phi(b^2)$. \end{enumerate} Given such a $\tau$, it is possible to find $b\in L^{\times}$ satisfying (ii) if and only if $\tau(c)d^{-1}$ is a square in $L^{\times}$. \end{theorem} \begin{proof} Suppose $G:D\to D'$ is an $F$-isomorphism. Then $G$ maps the middle nucleus of $D$ to the middle nucleus of $D'$, so we must have $K\cong L.$ This means $G$ restricted to $K$ must be an isomorphism which maps $K$ to $L$; that is, $G\!\mid_K=\tau:K\to L$ is an isomorphism of fields and we conclude $G(x,0)=(\tau(x),0)$ for all $x\in K$. Additionally, by Lemma \ref{IsomorphismRestrictsToFixFields} we see that $Z(D)\cong Z(D')$ under $G$. Thus, it follows that $\tau$ restricted to $Fix(\sigma)$ must yield an isomorphism from $Fix(\sigma)$ to $Fix(\phi)$. Let $G(0,1)=(a,b)$ for some $a,b\in L$. This implies \begin{equation*} G(x,y)=G(x,0)+G(0,1)G(y,0)=(\tau(x)+a\tau(y),\tau(y)b). 
\end{equation*} As $G$ is multiplicative, it follows that $G((0,1)^2)=G(0,1)^2$ which holds if and only if $(a,b)(a,b)=(\tau(c),0).$ From this, we obtain the equations $a^2+d\phi(b^2)=\tau(c)$ and $2ab=0.$ As $L$ does not have characteristic 2, this implies either $a=0$ or $b=0$. If $b=0$, then $G(x,y)=(\tau(x)+\tau(y)a,0)$ and so $G$ is not surjective. This is a contradiction, as $G$ is an isomorphism and hence is bijective by definition. Thus we obtain $a=0$ and $d\phi(b^2)=\tau(c)$.\\ Finally, as $G$ is multiplicative this yields $G(u,v)G(x,y)=G((u,v)(x,y))$ for all $u,v,x,y\in K$. Computing both sides of this equation, we get $$(\tau(ux)+d\phi(\tau(vy)b^2),\tau(uy)b+\tau(vx)b)=(\tau(ux+c\sigma(vy)),\tau(uy+vx)b)$$ for all $u,v,x,y\in K$, which implies $d\phi(\tau(vy)b^2)=\tau(c\sigma(vy)).$ After substituting the condition $d\phi(b^2)=\tau(c)$, we are left with $\phi(\tau(vy))=\tau(\sigma(vy))$ for all $v,y\in K$; that is, $\phi\circ\tau=\tau\circ\sigma$.\\ Conversely, let $G:K\oplus K\to L\oplus L$ be defined by $G(x,y)=(\tau(x),\tau(y)b)$ for some $F$-isomorphism $\tau:K\to L$ satisfying the conditions stated in the theorem above. It is easily checked that this is an $F$-linear bijective map between vector spaces, so we only need to check that it is multiplicative: $G(u,v)G(x,y)=G((u,v)(x,y))$ holds for all $u,v,x,y\in K$ if and only if $d\phi(\tau(vy)b^2)=\tau(c\sigma(vy))$ for all $v,y\in K$. As $d\phi(b^2)=\tau(c)$ and $\phi\circ\tau=\tau\circ\sigma$, this is satisfied for all $v,y\in K$. Further, by Lemma \ref{sigmataufixfields} $G$ certainly maps the centre of $D$ to the centre of $D'$. Thus we conclude that $G:D\to D'$ is an $F$-algebra isomorphism. \end{proof} \begin{corollary}\label{CDSIsomorphismFinite}Let $D=D(K,\sigma,c)$ and $D'=D(K,\phi,d)$ be two commutative Dickson algebras over $F$. 
Then $G:D\to D'$ is an $F$-algebra isomorphism if and only if $G$ has the form $$G(x,y)=(\tau(x),\tau(y)b)$$ for some $\tau\in Aut_F(K)$ such that:\begin{enumerate}[(i)] \item $\phi\circ\tau=\tau\circ\sigma$, \item there exists $b\in K^{\times}$ such that $\tau(c)=d\phi(b^2)$. \end{enumerate} Given such a $\tau$, it is possible to find $b\in K^{\times}$ satisfying (ii) if and only if $\tau(c)d^{-1}$ is a square in $K^{\times}$. \end{corollary} \begin{corollary}\label{CDSIsomorphismsAbelian} Suppose $Aut_F(K)$ is an abelian group. If $\sigma\neq\phi$, then $D(K,\sigma,c)\not\cong D(K,\phi,d)$ for any choice of $c,d\in K^{\times}$. \end{corollary} \begin{corollary} For all $c\in K^{\times}$, we have $D(K,\sigma,c)\cong D(K,\tau\circ\sigma\circ\tau^{-1},\tau(c))$ for each $\tau\in Aut_F(K)$ and $D(K,\sigma,c)\cong D(K,\sigma,\sigma(b^{2})c)$ for each $b\in K^{\times}$. \end{corollary} \begin{proof} This is clear employing the isomorphisms $G(x,y)=(\tau(x),\tau(y))$ and $G(x,y)=(x,b^{-1}y)$, respectively. \end{proof} When $K$ is a finite field of odd characteristic, $\tau(c)d^{-1}$ is a square if and only if either both $c$ and $d$ are squares or both are non-squares in $K$. Due to this, we obtain the following well-known result from Theorem \ref{CDSIsomorphisms}: \begin{corollary}[\cite{Bur}, Theorem 2']\label{CDSIsomorphismClassesFinite} Let $F$ be a finite field of odd characteristic and $K$ be a finite extension of degree $n$. Let $D=D(K,\sigma,c)$ and $D'=D(K,\phi,d)$ be division algebras. Then $D\cong D'$ if and only if $\sigma=\phi$. Hence up to isomorphism, there are exactly $n$ commutative Dickson semifields of order $p^{2n}$. \end{corollary} \begin{proof} First, we note that since $Aut_F(K)$ is a cyclic group, $D\cong D'$ if and only if $\sigma=\phi$ and $$G(x,y)=(\tau(x),\tau(y)b)$$ for some $\tau\in Aut_F(K)$ such that $\tau(c)=d\sigma(b^2)$ for some $b\in K^{\times}$ by Cor. \ref{CDSIsomorphismsAbelian}. 
As both $c,d$ are non-squares in $K$, this implies that $\tau(c)d^{-1}$ is certainly a square in the finite field $K$. Thus we can always find $b\in K$ satisfying $\sigma(b)^2=\tau(c)d^{-1}.$ Hence $D\cong D'$ if and only if $\sigma=\phi$. As each $\sigma\in Aut_F(K)$ determines a different isomorphism class of division algebras, this implies that there are $\left| Aut_F(K)\right|=n$ isomorphism classes. \end{proof} Over an arbitrary field however, it is possible that $D(K,\sigma,c)\not\cong D(K,\sigma,d)$ for some $c,d\in K$ as we cannot guarantee that there exists $b\in K$ such that $\sigma(b)^2=\tau(c)d^{-1}.$ Let us now consider $F=\mathbb{Q}_p$ for $p\neq 2$ as an example. We employ the following well-known result: \begin{lemma} Let $K/\mathbb{Q}_p$ be a finite field extension for $p\neq 2$ with uniformizer $\pi\in \mathcal{O}_K$, where $\mathcal{O}_K$ is the valuation ring of $K$. Then $K^{\times}/(K^{\times})^2=\lbrace 1,u,\pi,u\pi\rbrace$ for some non-square unit $u\in \mathcal{O}_K^{\times}\setminus (\mathcal{O}_K^{\times})^2$. \end{lemma} \begin{corollary}\label{IsomorphismsoverQp}For each finite field extension $K/\mathbb{Q}_p$ such that $Aut_{\mathbb{Q}_p}(K)$ is an abelian group, there are at most $3\left| Aut_{\mathbb{Q}_p}(K)\right|$ non-isomorphic commutative Dickson division algebras of the kind $D(K,\sigma,c)$. \end{corollary} \begin{proof} As in Corollary \ref{CDSIsomorphismClassesFinite}, we see that $D(K,\sigma,c)\cong D(K,\phi,d)$ if and only if $\sigma=\phi$ and there exists some $\tau\in Aut_{\mathbb{Q}_p}(K)$ and $b\in K^{\times}$ such that $\tau(c)d^{-1}=\sigma(b^2)$. Such $b\in K$ exists if and only if $\tau(c)d^{-1}$ is a square in $K$. If we assume that $D(K,\sigma,c)$ and $D(K,\sigma,d)$ are division algebras, $c,d$ are certainly not squares in $K$ and so must lie in non-identity cosets of $K^{\times}/(K^{\times})^2$. 
It is clear that $\tau(c)$ must lie in the same coset as $c$.\\ Considering the images of $\tau(c)$ and $d^{-1}$ in the quotient group $K^{\times}/(K^{\times})^2$, it follows that $\tau(c)d^{-1}$ is a square in $K^{\times}$ if and only if $c$ and $d$ lie in the same coset of $K^{\times}/(K^{\times})^2$. As there are 3 non-trivial cosets, we conclude there are at most $3\left| Aut_{\mathbb{Q}_p}(K)\right|$ non-isomorphic commutative Dickson division algebras. \end{proof} We cannot say for certain that we attain this bound, as this would assume that there exists a suitable $c\in K^{\times}$ in each non-trivial coset of $K^{\times}/(K^{\times})^2$ such that $D(K,\sigma,c)$ is a division algebra for each $\sigma\in Aut_{\mathbb{Q}_p}(K)$. However, if we can find some $c\in K^{\times}$ that satisfies the conditions of Corollary \ref{CDSDivisionNorm} from each coset of $K^{\times}/(K^{\times})^2$, this is sufficient to show that there are exactly $3\left| Aut_{\mathbb{Q}_p}(K)\right|$ non-isomorphic commutative Dickson division algebras. For an arbitrary field $F$, we conclude the following analogously: \begin{corollary} Suppose $K/F$ is a finite field extension such that $Aut_F(K)$ is an abelian group and there exists $c\in K^{\times}$ such that $N_{K/F}(c)\neq N_{K/F}(a^2)$ for all $a\in K$. Then there are at least $\left| Aut_F(K)\right|$ non-isomorphic commutative Dickson division algebras over $F$ of the form $D(K,\sigma,c)$. \end{corollary} \subsection{Automorphisms} The automorphisms of commutative Dickson algebras were computed in \cite{Bur} when $K$ is a finite cyclic field extension. We consider the subset $$J(c)=\lbrace \tau\in Aut_F(K)\mid X^2-\tau(c)c^{-1} \mbox{ has solutions in }K\rbrace \subset Aut_F(K),$$ introduced in \cite{Bur}. 
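In small cases $J(c)$ can be computed directly from its definition. The following sketch (all helper names are ours) does so for $K=\mathbb{F}_9=\mathbb{F}_3(i)$ over $F=\mathbb{F}_3$ with the non-square $c=1+i$, where $Aut_{\mathbb{F}_3}(K)$ consists of the identity and the Frobenius automorphism:

```python
# Computing J(c) in a small case: K = F_9 = F_3(i) with i^2 = -1, F = F_3,
# so Aut_F(K) = {id, frob}. All identifiers are ours.
MOD = 3
K = [(a, b) for a in range(MOD) for b in range(MOD)]   # (a, b) encodes a + b*i

def kmul(s, t):
    a, b = s
    c0, d = t
    return ((a*c0 - b*d) % MOD, (a*d + b*c0) % MOD)

def kinv(s):
    r = (1, 0)
    for _ in range(7):          # s^(-1) = s^7, as K^x is cyclic of order 8
        r = kmul(r, s)
    return r

ident = lambda s: s
frob = lambda s: (s[0], (-s[1]) % MOD)     # the Frobenius s -> s^3
SQUARES = {kmul(s, s) for s in K if s != (0, 0)}

c = (1, 1)                                  # c = 1 + i, a non-square in K
# tau lies in J(c) iff X^2 - tau(c)c^(-1) has a root in K,
# i.e. iff tau(c)c^(-1) is a square in K^x
J = [tau for tau in (ident, frob) if kmul(tau(c), kinv(c)) in SQUARES]
assert len(J) == 2      # here J(c) is all of Aut_F(K)
```

That $J(c)$ is the whole automorphism group in this instance is consistent with the finite-field results above; over other base fields $J(c)$ can be a proper subgroup.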
\begin{lemma} $J(c)$ is a subgroup of $Aut_F(K)$.\label{JcSubgroup} \end{lemma} \begin{proof} Clearly the identity automorphism is contained in $J(c)$, as $X^2-cc^{-1}=X^2-1$ always has the solutions $X=\pm 1$.\\ Let $\tau,\phi\in J(c)$. Then $\tau(c)c^{-1}=a^2$ and $\phi(c)c^{-1}=b^2$ for some $a,b\in K^{\times}$. It follows that $$\phi\circ\tau(c)c^{-1}=\phi(a^2c)c^{-1}=\phi(a^2)b^2cc^{-1},$$ so $X^2-\phi\circ\tau(c)c^{-1}$ has the solutions $X=\pm \phi(a)b$. This implies $\phi\circ\tau\in J(c)$. Finally, for each $\tau\in J(c)$ we have $\tau^{-1}(c)c^{-1}=\tau^{-1}(a^{-1})^2,$ so $\tau^{-1}\in J(c)$. \end{proof} When $K$ is a cyclic extension, there exist $2\left|J(c)\right|$ automorphisms of $D(K,\sigma,c)$: \begin{theorem}\label{CDSSemifield2nAuts}\begin{enumerate}[(i)] \item[(i)] [\cite{Bur}, Theorem 3 in our notation] Let $K$ be a cyclic extension of $F$. Then there exist $2\left|J(c)\right|$ automorphisms of $D(K,\sigma,c)$, each of which is given by $$G(x,y)=(\tau(x),\tau(y)b_i)$$ for each $\tau\in J(c)$, where $b_i\in K$ are such that $\sigma(b_i)$ are the two solutions of $X^2-\tau(c)c^{-1}$ for $i=1,2$. \item[(ii)] [\cite{Bur}, Theorem 3' in our notation] Let $F$ be a finite field of odd characteristic and $K$ be a finite extension of degree $n$. Then there exist $2n$ automorphisms of $D=D(K,\sigma,c)$, each of which is given by $$G(x,y)=(\tau(x),\tau(y)b_i)$$ for each $\tau\in Aut_F(K)$, where $b_i\in K$ are such that $\sigma(b_i)$ are the two solutions of $X^2-\tau(c)c^{-1}$ for $i=1,2$. \end{enumerate} \end{theorem} We now compute the automorphisms when $K$ is an arbitrary finite field extension. We continue to assume that $\sigma\neq id$. \begin{theorem}\label{CDSAutGroup} All automorphisms $G:D(K,\sigma,c)\to D(K,\sigma,c)$ are of the form $$G(u,v)=(\tau(u),\tau(v)b)$$ for some $\tau\in Aut_F(K)$ such that $\tau$ and $\sigma$ commute and $b\in K^{\times}$ satisfying $\tau(c)=c\sigma(b^2)$. 
Further, all maps of this form with $\tau\in Aut_F(K)$ and $b\in K^{\times}$ satisfying these conditions yield an automorphism of $D$. \end{theorem} \begin{proof} Let $D=D(K,\sigma,c)$. Suppose that $G\in Aut_F(D)$. As automorphisms preserve the nuclei of an algebra, $G$ restricted to $K$ must be an automorphism of $K$. As $G$ is $F$-linear we obtain $F\subset Fix(G\!\mid_K)$ and so in fact $G\!\mid_K\in Aut_F(K)$. Let $G\!\mid_K=\tau\in Aut_F(K)$, so we have $G(x,0)=(\tau(x),0)$ for all $x\in K$.\\ Let $G(0,1)=(a,b)$ for some $a,b\in K$. Then we have \begin{equation*} G(x,y)=G(x,0)+G(0,1)G(y,0)=(\tau(x)+a\tau(y),\tau(y)b). \end{equation*} As $G$ is multiplicative, we must also have $G((0,1)^2)=G(0,1)^2$ which holds if and only if $$(a,b)(a,b)=(\tau(c),0).$$ From this, we obtain the equations $a^2+c\sigma(b^2)=\tau(c)$ and $2ab=0.$ As $K$ does not have characteristic 2, this implies that either $a=0$ or $b=0$. If $b=0$, then $G(x,y)=(\tau(x)+\tau(y)a,0)$ and so $G$ is not surjective. This is a contradiction, as $G$ is an automorphism. Thus $a=0$ and we obtain $c\sigma(b^2)=\tau(c)$.\\ Finally, as $G$ is multiplicative we have $G(u,v)G(x,y)=G((u,v)(x,y))$ for all $u,v,x,y\in K$. Computing both sides of this equation, we get $$(\tau(ux)+c\sigma(\tau(vy)b^2),\tau(uy)b+\tau(vx)b)=(\tau(ux+c\sigma(vy)),\tau(uy+vx)b)$$ for all $u,v,x,y\in K$, which implies that $c\sigma(\tau(vy)b^2)=\tau(c\sigma(vy))$. After substituting the condition $c\sigma(b^2)=\tau(c)$, we are left with $\sigma(\tau(vy))=\tau(\sigma(vy))$ for all $v,y\in K$; that is, $\tau$ and $\sigma$ must commute.\\ Conversely, let $G:D\to D$ be a map defined by $G(x,y)=(\tau(x),\tau(y)b)$ such that $\tau$ and $\sigma$ commute and $\tau(c)=c\sigma(b^2)$. It is easily checked that this map is $F$-linear, bijective, additive and multiplicative. Hence $G$ is an $F$-algebra automorphism of $D$. 
\end{proof} \begin{corollary}\label{CDSAutSubgroupb1} There is a subgroup of $Aut_F(D)$ isomorphic to $$\lbrace\tau\in Aut_F(K)\mid \tau(c)=c \mbox{ and } \tau\circ\sigma=\sigma\circ\tau\rbrace.$$ \end{corollary} \begin{proof} By Theorem \ref{CDSAutGroup}, all automorphisms of $D$ are of the form $G(x,y)=(\tau(x),\tau(y)b)$, such that $\tau$ and $\sigma$ commute and $b\in K^{\times}$ satisfies $\tau(c)=c\sigma(b^2)$. If we let $b=1$, we obtain a subgroup of $Aut_F(D)$ such that $\tau$ and $\sigma$ commute and $\tau(c)=c$. \end{proof} The subset of $Aut_F(K)$ containing all the automorphisms of $K$ which commute with $\sigma\in Aut_F(K)$ is called the \textit{centralizer of $\sigma$ in $Aut_F(K)$} and is denoted by $$C(\sigma)=\lbrace \tau\in Aut_F(K) \mid \tau\circ\sigma=\sigma\circ\tau\rbrace.$$ This subset forms a subgroup of $Aut_F(K)$, so $J(c)\cap C(\sigma)$ is also a subgroup of $Aut_F(K)$. We get the following generalisation of \cite[Theorem 3]{Bur}: \begin{theorem}\label{CDSExactNumberOfAutomorphisms} There are exactly $2\left|J(c)\cap C(\sigma)\right|$ automorphisms of $D(K,\sigma,c)$, each of which is given by $$G(x,y)=(\tau(x),\tau(y)b_i)$$ for each $\tau\in J(c)\cap C(\sigma)$, where $b_i\in K^{\times}$ is chosen such that $\sigma(b_i)$ are the two solutions of $X^2-\tau(c)c^{-1}$ for $i=1,2$. \end{theorem} \begin{proof} By Theorem \ref{CDSAutGroup}, $G$ is an automorphism of $D(K,\sigma,c)$ if and only if $G(u,v)=(\tau(u),\tau(v)b)$ for some $\tau\in C(\sigma)$ and $b\in K^{\times}$ such that $\sigma(b)^2=\tau(c)c^{-1}$. We can find such $b\in K^{\times}$ if and only if $\tau\in J(c)$. Denote the solutions of $X^2-\tau(c)c^{-1}$ by $\sigma(b_1)$ and $\sigma(b_2)$. Thus $G$ is an automorphism of $D(K,\sigma,c)$ if and only if $G(u,v)=(\tau(u),\tau(v)b_i)$ for each $\tau\in J(c)\cap C(\sigma)$, where $b_i\in K$ are such that $\sigma(b_i)$ are the two solutions of $X^2-\tau(c)c^{-1}$ for $i=1,2$. 
\end{proof} \begin{corollary}\label{CDSExactNumberOfAutomorphismsAbelian} If $Aut_F(K)$ is abelian, then $D(K,\sigma,c)$ has exactly $2\left|J(c)\right|$ automorphisms. \end{corollary} \begin{proof} This follows immediately from Theorem \ref{CDSExactNumberOfAutomorphisms} after noting that $C(\sigma)=Aut_F(K)$. \end{proof} \begin{corollary}\label{CDSCSigmaAuts} If $c\in F^{\times}$, then $D(K,\sigma,c)$ has exactly $2\left|C(\sigma)\right|$ automorphisms. \end{corollary} \begin{proof} As $c\in F^{\times}$, for all $\tau\in Aut_F(K)$ we have $$X^2-\tau(c)c^{-1}=X^2-cc^{-1}=X^2-1,$$ which always has the solutions $X=\pm 1$. This yields $J(c)=Aut_F(K)$. The result then follows from Theorem \ref{CDSExactNumberOfAutomorphisms}. \end{proof} As $J(c)\cap C(\sigma)$ forms a subgroup of $Aut_F(K)$, we know that $\left|J(c)\cap C(\sigma)\right|$ must divide $\left|Aut_F(K)\right|$. Due to this, we can easily determine the exact size of the automorphism group of $D(K,\sigma,c)$ in certain cases. \begin{corollary} If $K$ is a field extension of prime degree $p$ over $F$, $J(c)$ is equal to either $\lbrace id\rbrace$ or $Aut_F(K)$. Further, $\left|Aut_F(D(K,\sigma,c))\right|\in\lbrace 2,2p\rbrace$. \end{corollary} \begin{proof} Let $[K:F]=p$ for some prime $p$. Then $Aut_F(K)$ is necessarily cyclic and hence abelian. As $\left|Aut_F(K)\right|=p$, we must have $\left|J(c)\right|\in \lbrace 1,p\rbrace$ and so $J(c)=\lbrace id\rbrace$ or $J(c)=Aut_F(K)$. The remainder of the result follows from Corollary \ref{CDSExactNumberOfAutomorphismsAbelian}. \end{proof} \begin{corollary}\label{QpJc=Aut(K)} If $F=\mathbb{Q}_p$ for $p\neq 2$, then $J(c)=Aut_{\mathbb{Q}_p}(K)$ and $\left|Aut_{\mathbb{Q}_p}(D(K,\sigma,c))\right|=2\left| C(\sigma)\right|$. \end{corollary} \begin{proof} As $\tau(c)$ and $c^{-1}$ clearly lie in the same coset of $K^{\times}/(K^{\times})^2$, it follows that $\tau(c)c^{-1}\in K^2$ for all $\tau\in Aut_{\mathbb{Q}_p}(K)$. 
We conclude that $J(c)=Aut_{\mathbb{Q}_p}(K)$ and thus $\left|Aut_{\mathbb{Q}_p}(D(K,\sigma,c))\right|=2\left| C(\sigma)\right|$ by Theorem \ref{CDSExactNumberOfAutomorphisms}. \end{proof} Generally it is difficult to actually calculate $J(c)$, so we instead bound the size of $Aut_F(D(K,\sigma,c))$. We already have an upper bound as a consequence of Theorem \ref{CDSAutGroup}. All the elements of $Aut_F(K)$ which act as the identity on $c$ form a subgroup of $Aut_F(K)$ called the \textit{isotropy group of $c$}, denoted by $$Aut_F(K)_c=\lbrace \tau\in Aut_F(K)\mid\tau(c)=c\rbrace.$$ By Corollary \ref{CDSAutSubgroupb1}, there is a subgroup of $Aut_F(D(K,\sigma,c))$ which is isomorphic to $C(\sigma)\cap Aut_F(K)_c.$ This allows us to bound the size of the automorphism group of $D(K,\sigma,c)$ from below: \begin{theorem} There are between $2\left|C(\sigma)\cap Aut_F(K)_c\right|$ and $2\left|C(\sigma)\right|$ automorphisms of $D(K,\sigma, c)$. \end{theorem} \begin{proof} It is clear that $J(c)\cap C(\sigma)$ is a subgroup of $C(\sigma)$. Each $\tau\in C(\sigma)$ can be used to construct at most 2 automorphisms of $D(K,\sigma,c)$ corresponding to the two possible solutions of $X^2-\tau(c)c^{-1}$, so we have $\left|Aut_F(D(K,\sigma,c))\right|\leqslant 2\left|C(\sigma)\right|.$\\ Additionally, each $\tau\in C(\sigma)\cap Aut_F(K)_c$ can be used to construct the maps $(x,y)\mapsto (\tau(x),\pm\tau(y)).$ It follows from Theorem \ref{CDSAutGroup} that these are automorphisms of $D(K,\sigma,c)$, so $2\left|C(\sigma)\cap Aut_F(K)_c\right|\leqslant \left|Aut_F(D(K,\sigma,c))\right|$. \end{proof} Wene \cite{Wene} derived an alternative description of the automorphism group of $D(K,\sigma,c)$ when $K$ is a finite field, in terms of inner automorphisms. An automorphism $\theta$ of $D(K,\sigma,c)$ is an \textit{inner automorphism} if there exists $m\in D(K,\sigma,c)$ with left inverse $m_l^{-1}$ such that $$\theta(x)=(m_l^{-1}x)m$$ for all $x\in D(K,\sigma,c)$. 
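Before turning to Wene's inner-automorphism criterion, the count $2\left|J(c)\cap C(\sigma)\right|$ from Theorem \ref{CDSExactNumberOfAutomorphisms} can be confirmed by brute force in a toy case. The sketch below (identifiers ours) enumerates all candidate maps $G(x,y)=(\tau(x),\tau(y)b)$ on $D(\mathbb{F}_9,\sigma,1+i)$ over $\mathbb{F}_3$ and tests multiplicativity; since $Aut_{\mathbb{F}_3}(\mathbb{F}_9)$ is abelian of order 2 and $J(c)$ is everything here, we expect $2\cdot 2=4$ automorphisms. It also checks that the map $(x,y)\mapsto(\sigma(x),\sigma(y))$ fails to be an automorphism when $\sigma(c)\neq c$, in line with the condition in Wene's theorem below:

```python
# Brute-force count of Aut(D) for D = D(F_9, sigma, 1+i) over F_3; by the
# counting theorem we expect 2*|J(c) ∩ C(sigma)| = 4. Identifiers are ours.
MOD = 3
K = [(a, b) for a in range(MOD) for b in range(MOD)]   # F_9 = F_3(i), i^2 = -1

def kmul(s, t):
    a, b = s
    c0, d = t
    return ((a*c0 - b*d) % MOD, (a*d + b*c0) % MOD)

def kadd(s, t):
    return ((s[0] + t[0]) % MOD, (s[1] + t[1]) % MOD)

ident = lambda s: s
sigma = lambda s: (s[0], (-s[1]) % MOD)    # Frobenius s -> s^3

c = (1, 1)   # c = 1 + i is a non-square; note sigma(c) != c

def dmul(p, q):
    # the Dickson product (u,v)(x,y) = (ux + c*sigma(vy), uy + vx)
    (u, v), (x, y) = p, q
    return (kadd(kmul(u, x), kmul(c, sigma(kmul(v, y)))),
            kadd(kmul(u, y), kmul(v, x)))

D = [(u, v) for u in K for v in K]

def make(tau, b):
    # candidate map G(x, y) = (tau(x), tau(y) * b); F-linear and bijective
    return lambda p: (tau(p[0]), kmul(tau(p[1]), b))

def is_hom(G):
    return all(dmul(G(p), G(q)) == G(dmul(p, q)) for p in D for q in D)

auts = [(tau, b) for tau in (ident, sigma)
        for b in K if b != (0, 0) and is_hom(make(tau, b))]
assert len(auts) == 4    # matches 2*|J(c) ∩ C(sigma)|

# the map (x, y) -> (sigma(x), sigma(y)) requires sigma(c) = c; it fails here
assert sigma(c) != c and not is_hom(make(sigma, (1, 0)))
```

This is only a sanity check of one instance; the theorem itself is what guarantees that no automorphisms outside this parametrised family exist.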
The proof given in \cite[Theorem 18]{Wene} holds verbatim for any finite field extension, yielding a sufficient condition for the existence of (nontrivial) inner automorphisms of a commutative Dickson algebra: \begin{theorem}[\cite{Wene}, Theorem 18] Let $D(K,\sigma,c)$ be a division algebra. Denote $\lambda=(0,1)$. Then $$\Phi(x,y)=[\lambda_l^{-1}(x,y)]\lambda=(\sigma(x),\sigma(y))$$ defines an inner automorphism of $D(K,\sigma,c)$ if and only if $\sigma(c)=c$. \end{theorem} \subsection{The group structure of $Aut_F(D)$} By Theorem \ref{CDSAutGroup}, we know that all the $2\left|C(\sigma)\cap J(c)\right|$ automorphisms of $D$ are of the form $G(u,v)=(\tau(u),\tau(v)b)$ for some $\tau\in C(\sigma)\cap J(c)$ and $b\in K^{\times}$ such that $\sigma(b)^2=\tau(c)c^{-1}$. Note that this final condition is equivalent to $b\in K^{\times}$ being a solution of $X^2-\sigma^{-1}(\tau(c)c^{-1})\in K[X].$ We will denote the solutions of this polynomial by $b_{\tau,1}$ and $b_{\tau,2}$. As the characteristic of $F$ is not 2, it is clear that $b_{\tau,2}=-b_{\tau,1}$. \begin{lemma}\label{StructureofAutGroupPowers} Let $b_{\tau,1},b_{\tau,2}$ be the two solutions of $X^2-\sigma^{-1}(\tau(c)c^{-1})$ and suppose $\tau^n=id$. Then $b_{\tau,i}\tau(b_{\tau,i})\tau^2(b_{\tau,i})...\tau^{n-1}(b_{\tau,i})=\pm 1.$\\ Moreover, if $n$ is odd, we have $b_{\tau,i}\tau(b_{\tau,i})\tau^2(b_{\tau,i})...\tau^{n-1}(b_{\tau,i})=1$ for $i=1$ or $i=2$, but not both. \end{lemma} \begin{proof} As in the proof of Lemma \ref{JcSubgroup}, if $b_{\tau}$ and $b_{\phi}$ are solutions of $X^2-\sigma^{-1}(\tau(c)c^{-1})$ and $X^2-\sigma^{-1}(\phi(c)c^{-1})$ respectively then the equation $$X^2-\sigma^{-1}(\phi\circ\tau(c)c^{-1})$$ has the solutions $X=\pm\phi(b_{\tau})b_{\phi}$. 
Similarly the equation $X^2-\sigma^{-1}(\tau^2(c)c^{-1})$ has the solutions $X=\pm\tau(b_{\tau})b_{\tau}$, the equation $X^2-\sigma^{-1}(\tau^3(c)c^{-1})$ has the solutions $X=\pm\tau(b_{\tau^2})b_{\tau}=\pm\tau^2(b_\tau)\tau(b_{\tau})b_\tau$, and so on. Hence we see that for $i=1,2$ $$b_{\tau,i}\tau(b_{\tau,i})\tau^2(b_{\tau,i})...\tau^{n-1}(b_{\tau,i})$$ is a solution of $X^2-\sigma^{-1}(\tau^n(c)c^{-1})$. As $\tau^n=id$, we also conclude that the solutions of $$X^2-\sigma^{-1}(\tau^n(c)c^{-1})=X^2-\sigma^{-1}(cc^{-1})=X^2-1$$ are $X=\pm 1$ and so $b_{\tau,i}\tau(b_{\tau,i})\tau^2(b_{\tau,i})...\tau^{n-1}(b_{\tau,i})=\pm 1.$ As $b_{\tau,2}=-b_{\tau,1}$, we have $$b_{\tau,2}\tau(b_{\tau,2})\tau^2(b_{\tau,2})...\tau^{n-1}(b_{\tau,2}) = (-1)^nb_{\tau,1}\tau(b_{\tau,1})\tau^2(b_{\tau,1})...\tau^{n-1}(b_{\tau,1}).$$ If $n$ is odd, this implies that $b_{\tau,2}\tau(b_{\tau,2})\tau^2(b_{\tau,2})...\tau^{n-1}(b_{\tau,2})= -b_{\tau,1}\tau(b_{\tau,1})\tau^2(b_{\tau,1})...\tau^{n-1}(b_{\tau,1})$ and the result follows. \end{proof} \begin{theorem} For all $D(K,\sigma,c)$, we have $$Aut_F(D(K,\sigma,c))\cong (C(\sigma)\cap J(c))\times \mathbb{F}_2.$$ \end{theorem} \begin{proof} As $C(\sigma)\cap J(c)$ is a finite group, there exists a minimal generating set $\lbrace \tau_1,...,\tau_m\rbrace.$ Let $\tau$ be an element of this generating set and let $b_{\tau,i}$ ($i=1,2$) be the two roots of $X^2-\sigma^{-1}(\tau(c)c^{-1})$. As $J(c)$ is a finite group, $\tau^n$ must be equal to the identity for some $n>1$. By Lemma \ref{StructureofAutGroupPowers}, this implies $$b_{\tau,i}\tau(b_{\tau,i})\tau^2(b_{\tau,i})...\tau^{n-1}(b_{\tau,i})=\pm 1$$ for $i=1,2$. If $n$ is odd, relabel the roots such that $b_{\tau,1}$ satisfies $$b_{\tau,1}\tau(b_{\tau,1})\tau^2(b_{\tau,1})...\tau^{n-1}(b_{\tau,1})=1$$ and $b_{\tau,2}$ satisfies $$b_{\tau,2}\tau(b_{\tau,2})\tau^2(b_{\tau,2})...\tau^{n-1}(b_{\tau,2})=-1.$$ Henceforth, we will denote $b_{\tau,1}=b_\tau$. Now let $\phi\in C(\sigma)\cap J(c)$. 
As $\lbrace \tau_1,...,\tau_m\rbrace$ generates $C(\sigma)\cap J(c)$, $\phi$ can be expressed as a product of the $\tau_i$. Due to this, we can construct the roots of $X^2-\sigma^{-1}(\phi(c)c^{-1})$ from the $b_{\tau_i}$. For example, if $\phi=\tau_i\circ\tau_j$ then we obtain $$b_{\phi}=b_{\tau_i}\tau_i(b_{\tau_j}).$$ This method can be applied recursively to construct the roots of $X^2-\sigma^{-1}(\tau(c)c^{-1})$ for all $\tau\in C(\sigma)\cap J(c)$.\\ We can now express all automorphisms of $D$ in the form $G(u,v)=(\tau(u),\pm\tau(v)b_{\tau})$ for some $\tau\in J(c)\cap C(\sigma)$ and $b_{\tau}$ as defined above. Define a map $\Phi: Aut_F(D)\to (J(c)\cap C(\sigma))\times \mathbb{F}_2$ by $$\Phi(G)=(\tau,\pm 1).$$ This map is well-defined due to the careful labelling of roots of $X^2-\sigma^{-1}(\tau(c)c^{-1})$. It is easy to see that it gives an isomorphism of groups. \end{proof} \begin{corollary} If $F=\mathbb{Q}_p$ for $p\neq 2$, then $Aut_{\mathbb{Q}_p}(D(K,\sigma,c))\cong C(\sigma)\times \mathbb{F}_2.$ \end{corollary} \begin{proof} This follows from Corollary \ref{QpJc=Aut(K)}. \end{proof} Thus it is sufficient to consider the subgroups $C(\sigma)$ and $J(c)$ of $Aut_F(K)$ in order to determine the structure of the automorphism groups of these algebras. \section{A generalisation of the commutative Dickson algebras construction obtained by doubling a central simple algebra}\label{FurtherGeneralisingDicksonAlgebras} Let $B$ be a central simple algebra over $F$. Let $\sigma\in Aut_F(B)$ be a non-trivial automorphism and $c\in B^{\times}$. 
As $B$ is not commutative, we can generalise the classical Dickson multiplication on the $F$-vector space $B\oplus B$ in three ways: \begin{itemize} \item $(u,v)\circ(x,y)=(ux+c\sigma(vy),uy+vx),$ \item $(u,v)\circ(x,y)=(ux+\sigma(v)c\sigma(y),uy+vx),$ \item $(u,v)\circ(x,y)=(ux+\sigma(vy)c,uy+vx).$ \end{itemize} We denote the $F$-vector space $B\oplus B$ endowed with each of these multiplications by $D(B,\sigma,c)$, $D_m(B,\sigma,c)$ and $D_r(B,\sigma,c)$, respectively. If $c\in F^{\times}$, the three constructions are identical. All three constructions yield unital nonassociative algebras over $F$ and are canonical generalisations of the commutative construction defined by Dickson. \begin{lemma}\label{CommD}\begin{enumerate}[(i)] \item Let $D=D(B,\sigma,c)$ or $D=D_r(B,\sigma,c)$. Then $Comm(D)=F\oplus F$. \item Let $D=D_m(B,\sigma,c)$. If $c\in F^{\times}$, then $Comm(D)=F\oplus F$. Otherwise, $Comm(D)=F$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[(i)] \item We only show the proof for $D(B,\sigma,c)$ as the proof for $D_r(B,\sigma,c)$ follows identically. Let $(u,v)\in Comm(D)$. Then for all $x\in B$, we have $$(u,v)(x,0)=(x,0)(u,v).$$ This is equivalent to $ux=xu$ and $vx=xv.$ This holds for all $x\in B$ if and only if both $u$ and $v$ lie in the centre of $B$. Hence $Comm(D)\subseteq F\oplus F$. It is easily checked that all elements of $F\oplus F$ are contained in $Comm(D)$. Hence $Comm(D)=F\oplus F$. \item Let $(u,v)\in Comm(D)$. Comparing $(u,v)(x,0)=(x,0)(u,v)$ for all $x\in B$ as in (i) yields $ux=xu$ and $vx=xv$, hence $u,v\in Z(B)=F$. Comparing $(u,v)(0,x)=(0,x)(u,v)$ additionally gives $\sigma(v)c\sigma(x)=\sigma(x)c\sigma(v)$ for all $x\in B$. If $c\not\in F$ and $v\neq 0$, then $v\in F^{\times}$ and cancelling $\sigma(v)=v$ reduces this equation to $c\sigma(x)=\sigma(x)c$ for all $x\in B$, that is, $c\in Z(B)=F$, a contradiction. Hence $v=0$, which yields $Comm(D)=F$.\\ If $c\in F^{\times}$, we have $D_m(B,\sigma,c)=D(B,\sigma,c)$ and so by (i), we obtain that $Comm(D)=F\oplus F.$ \end{enumerate} \end{proof} \begin{theorem}\label{CSANucleus} Let $D=D(B,\sigma,c)$.
Then \begin{itemize} \item $Nuc_l(D)=\lbrace k\in B \mid c\sigma(k)=kc\rbrace\subset B$, \item $Nuc_m(D)=B$, \item $Nuc_r(D)=Fix(\sigma).$ \end{itemize} In particular, $Nuc(D)=Fix(\sigma)\cap\lbrace k\in B\mid c\sigma(k)=kc\rbrace$ and $Z(D)=F$. \end{theorem} \begin{proof} We will show the proof for the left nucleus. The calculations for the middle and right nucleus are obtained similarly. \\ Suppose $(k,l)$ lies in the left nucleus for some $k,l\in B$. Then for all $x\in B$, we must have $$((k,l)(0,1))(x,0)=(k,l)((0,1)(x,0)).$$ Computing both sides, it follows that $$(c\sigma(l)x,kx)=(c\sigma(lx),kx).$$ As $\sigma$ is a non-trivial automorphism of $B$, this is true for all $x\in B$ if and only if $l=0$. Thus we only need to consider elements of the form $(k,0)$ for $k\in B$. Now $(k,0)\in Nuc_l(D)$ if and only if we obtain $$((k,0)(u,v))(x,y)=(k,0)((u,v)(x,y))$$ for all $u,v,x,y\in B$. Computing both sides, this yields $$(kux+c\sigma(kvy),kuy+kvx)=(kux+kc\sigma(vy),kuy+kvx).$$ This is satisfied for all $u,v,x,y\in B$ if and only if $c\sigma(k)=kc$. Hence we have that $$Nuc_l(D)=\lbrace (k,0)\mid k\in B \mbox{ such that }c\sigma(k)=kc\rbrace.$$ As the centre is the intersection of the nucleus and $Comm(D)$, this yields $Z(D)=(Fix(\sigma)\cap\lbrace k\in B\mid c\sigma(k)=kc\rbrace\cap F)\oplus 0=F\oplus 0$. \end{proof} Similarly, we can calculate the left, middle and right nuclei and centre of $D_r(B,\sigma,c)$ and $D_m(B,\sigma,c)$: \begin{theorem}\label{CSANucleusR} Let $D=D_r(B,\sigma,c)$. Then \begin{itemize} \item $Nuc_l(D)=Fix(\sigma)$, \item $Nuc_m(D)=B$, \item $Nuc_r(D)=\lbrace k\in B \mid c\sigma(k)=kc\rbrace\subset B.$ \end{itemize} In particular, $Nuc(D)=Fix(\sigma)\cap\lbrace k\in B\mid c\sigma(k)=kc\rbrace$ and $Z(D)=F$. \end{theorem} \begin{theorem}\label{CSANucleusM} Let $D=D_m(B,\sigma,c)$.
Then \begin{itemize} \item $Nuc_l(D)=Fix(\sigma)$, \item $Nuc_m(D)=\lbrace k\in B\mid \sigma(k)c=c\sigma(k)\rbrace \subset B$, \item $Nuc_r(D)=Fix(\sigma).$ \end{itemize} In particular, $Nuc(D)=Fix(\sigma)\cap\lbrace k\in B\mid c\sigma(k)=\sigma(k)c\rbrace$ and $Z(D)=F$. \end{theorem} Note that if $c\in F^{\times}$, the three algebras we obtain are identical as noted earlier. In this case, the left and right nuclei are equal to $Fix(\sigma)$ and the middle nucleus is equal to $B$. \begin{corollary} Let $c\in B^{\times}\setminus F^{\times}$. Then \begin{itemize} \item $D(B,\sigma,c)\not\cong D_m(B,\sigma,c)$, \item $D_m(B,\sigma,c)\not\cong D_r(B,\sigma,c)$. \end{itemize} If $c$ does not commute with all elements of $Fix(\sigma)$, then $D(B,\sigma,c)\not\cong D_r(B,\sigma,c)$. \end{corollary} \begin{proof} Since automorphisms preserve each of the left, middle and right nuclei, if $D(B,\sigma,c)\cong D_m(B,\sigma,c)$ this implies that $\lbrace k\in B\mid \sigma(k)c=c\sigma(k)\rbrace=B.$ As $c\not\in F$, we can find $k\in B$ such that $\sigma(k)$ does not commute with $c$, so this is never true. An identical argument shows that $D_m(B,\sigma,c)\not\cong D_r(B,\sigma,c)$.\\ Finally, we see that $D(B,\sigma,c)\cong D_r(B,\sigma,c)$ occurs only if $Fix(\sigma)=\lbrace k\in B\mid kc=c\sigma(k)\rbrace.$ Let $x\in Fix(\sigma)$. We have $x\in \lbrace k\in B\mid kc=c\sigma(k)\rbrace$ if and only if $cx=xc.$\\ Similarly, if we take an element $y\in \lbrace k\in B\mid kc=c\sigma(k)\rbrace$, it lies in $Fix(\sigma)$ if and only if $cy=yc$. Thus the left nuclei of the two algebras are equal only when $c$ commutes with all of $Fix(\sigma)$. Otherwise, we must have $D(B,\sigma,c)\not\cong D_r(B,\sigma,c)$. \end{proof} Similarly to the algebras we obtained from doubling a field extension, any $F$-subalgebra of $B$ appears as a subalgebra of $D(B,\sigma,c)$, $D_m(B,\sigma,c)$ and $D_r(B,\sigma,c)$.
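As an aside, the earlier claim that the three products coincide for $c\in F^{\times}$ admits a one-line verification (this routine computation is ours, included for convenience): since $c$ is central and $\sigma$ is multiplicative, $$\sigma(v)c\sigma(y)=c\,\sigma(v)\sigma(y)=c\,\sigma(vy)=\sigma(vy)\,c,$$ so the first components of the three multiplications all reduce to $ux+c\sigma(vy)$, while the second components agree by definition.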
Additionally, if $E\subset B$ is such that $c\in E^{\times}$ and $\sigma\!\mid_E\in Aut_F(E)$, then $D(E,\sigma\!\mid_E,c)$ (resp. $D_m(E,\sigma\!\mid_E,c)$ and $D_r(E,\sigma\!\mid_E,c)$) is a subalgebra of $D(B,\sigma,c)$ (resp. $D_m(B,\sigma,c)$ and $D_r(B,\sigma,c)$). In particular, this yields the following: \begin{theorem} If $c\in K^{\times}$ for some separable field extension $K/F$ contained in $B$ such that $\sigma\!\mid_K=\phi\in Aut_F(K)$, then $D(K,\phi,c)$ is a commutative Dickson subalgebra of $D(B,\sigma,c)$, $D_m(B,\sigma,c)$ and $D_r(B,\sigma,c)$. \end{theorem} \begin{theorem}\label{CDSDivisionB} \begin{enumerate}[(i)] \item $D=D(B,\sigma,c)$ is a division algebra if and only if $c\neq rt^{-1}rs\sigma(s^{-1}t^{-1})$ for all $r,s,t\in B^{\times}$. \item $D_m(B,\sigma,c)$ is a division algebra if and only if $c\neq \sigma(t)^{-1}rt^{-1}rs\sigma(s)^{-1}$ for all $r,s,t\in B^{\times}$. \item $D_r(B,\sigma,c)$ is a division algebra if and only if $c\neq \sigma(s^{-1}t^{-1})rt^{-1}rs$ for all $r,s,t\in B^{\times}$. \end{enumerate} \end{theorem} \begin{proof} (i): Suppose that $D$ is not a division algebra. Then there exist nonzero elements $(u,v),(x,y)\in B\oplus B$ such that $(u,v)(x,y)=(0,0).$ This is equivalent to the simultaneous equations \begin{align} ux+c\sigma(vy)=&0, \label{CDSDivisionB1}\\ uy+vx=&0. \label{CDSDivisionB2} \end{align} If $v=0$, then (\ref{CDSDivisionB2}) becomes $uy=0$, so either $u=0$ or $y=0$. However, $u$ must be nonzero, else $(u,v)=(0,0)$ which is a contradiction, so we must have $y=0$. Additionally, (\ref{CDSDivisionB1}) gives $ux=0$. As $u$ is nonzero, this implies $x=0$ and so $(x,y)=(0,0)$ which is again a contradiction.\\ So let $v\neq 0$. As $B$ is an associative division algebra (note that $B\cong B\oplus 0$ embeds in $D$, so $B$ being division is necessary for $D$ to be division), we have $v^{-1}\in B$ and hence we obtain $$x=-v^{-1}uy$$ from (\ref{CDSDivisionB2}). Now if $y=0$, this implies that $x=0$ which is a contradiction to $(x,y)\neq(0,0)$.
Substituting this into (\ref{CDSDivisionB1}), we get $$-uv^{-1}uy+c\sigma(vy)=0,$$ which rearranges to give $c=uv^{-1}uy\sigma(y)^{-1}\sigma(v)^{-1}$.\\ Conversely, suppose $c=rt^{-1}rs\sigma(s)^{-1}\sigma(t)^{-1}$ for some $r,s,t\in B^{\times}$. Consider the elements $(r,t)$ and $(-t^{-1}rs,s)$. Both elements are nonzero but satisfy \begin{align*} (r,t)(-t^{-1}rs,s)=&(-rt^{-1}rs+rt^{-1}rs\sigma(s)^{-1}\sigma(t)^{-1}\sigma(ts),rs-tt^{-1}rs)\\ =&(0,0). \end{align*} Hence $D$ is not a division algebra.\\ The proofs of (ii) and (iii) follow almost identically to (i). \end{proof} \begin{corollary} If $c\in B^{\times 2}$, then $D(B,\sigma,c), D_m(B,\sigma,c)$, and $D_r(B,\sigma,c)$ are not division algebras. \end{corollary} \begin{proof} This follows from setting $s=t=1$ in Theorem \ref{CDSDivisionB}. \end{proof} \begin{corollary} Let $N_{B/F}:B\to F$ be the nondegenerate multiplicative norm form on $B$. The algebras $D=D(B,\sigma,c)$, $D_m(B,\sigma,c)$, $D_r(B,\sigma,c)$ are division algebras if $$N_{B/F}(c)\neq N_{B/F}(a)^2$$ for all $a\in B$. \end{corollary} \begin{proof} This follows analogously to Corollary \ref{CDSDivisionNorm}. \end{proof} \begin{example} \begin{enumerate}[(i)] \item Let $F=\mathbb{Q}$ and $B=(a,b)$ be a quaternion division algebra over $\mathbb{Q}$ with $a,b>0$. For all $x\in B^{\times}$, we see that $N_{B/\mathbb{Q}}(x)^2> 0$; as a consequence, $D(B,\sigma,c)$ is a division algebra for any $c\in B^{\times}$ such that $N_{B/\mathbb{Q}}(c)<0$. For example, if we pick $c=c_1i+c_2j$ for some $c_1,c_2\in \mathbb{Q}$ not both zero, then $$N_{B/\mathbb{Q}}(c)=-c_1^2a-c_2^2b<0,$$ so $D(B,\sigma,c)$ is a division algebra. \item Let $F=\mathbb{Q}_p$ and $B=(u,p)$ be the unique quaternion division algebra over $\mathbb{Q}_p$ for some $u\in \mathbb{Z}_p^{\times}\setminus (\mathbb{Z}_p^{\times})^2$. Then for all $c\in B$, it follows that $N_{B/\mathbb{Q}_p}(c)=x^2-y^2u-z^2p+w^2up$ for some $x,y,z,w\in \mathbb{Q}_p$.
As $up$ is not a square in $\mathbb{Q}_p$, for any $c\in B^{\times}$ such that $N_{B/\mathbb{Q}_p}(c)=w^2up$ we conclude that $D(B,\sigma,c)$ is a division algebra over $\mathbb{Q}_p$. \end{enumerate} \end{example} \subsection{Isomorphisms} The results and proofs from Section 2 regarding isomorphisms and automorphisms of commutative Dickson algebras generalise almost identically to $D(B,\sigma,c)$ and $D_r(B,\sigma,c)$, as the middle nuclei of these algebras are equal to $B$. First note the following result: \begin{lemma}\label{otherdicksonnucleus} Let $D=D(B,\sigma,c)$, $D'=D(B',\phi,d)$ be two Dickson algebras over $F$. If there exists an $F$-isomorphism $\tau:B\to B'$ such that $\tau\circ\sigma=\phi\circ\tau$ and $\tau(c)=db^2$ for some $b\in F^{\times}$, then $\tau\!\mid_{Nuc_l(D)}:Nuc_l(D)\to Nuc_l(D')$ is an $F$-isomorphism. \end{lemma} \begin{proof} As with the proof of Lemma \ref{sigmataufixfields}, we only need to show that $im(\tau\!\mid_{Nuc_l(D)})=Nuc_l(D')$. First, consider $k\in Nuc_l(D)$, so that $k$ satisfies $c\sigma(k)=kc$. Applying $\tau$ to both sides of the equation and substituting in the condition on $\tau(c)$, we obtain $$db^2\tau(\sigma(k))=\tau(k)db^2.$$ As $b\in F^{\times}$, we can cancel this from both sides. After substituting $\tau\circ\sigma=\phi\circ\tau$, this yields $d\phi(\tau(k))=\tau(k)d$ and thus $\tau(k)\in Nuc_l(D')$. Hence $im(\tau\!\mid_{Nuc_l(D)})\subseteq Nuc_l(D')$. In order to show equality, we follow an analogous process to the one in the proof of Lemma \ref{sigmataufixfields}. \end{proof} It is clear that this also holds when considering the right nucleus of $D_r(B,\sigma,c)$, as this is equal to the left nucleus of $D(B,\sigma,c)$. We will always assume that $B, B'$ are central simple division algebras over $F$. We now give a proof of the generalisation of Theorem \ref{CDSIsomorphisms}: \begin{theorem}\label{CDSBIsomorphisms} Let $D=D(B,\sigma,c)$ and $D'=D(B',\phi,d)$ be $F$-algebras.
Then $G:D\to D'$ is an isomorphism if and only if $G$ has the form $$G(x,y)=(\tau(x),\tau(y)b)$$ for some $F$-isomorphism $\tau:B\to B'$ such that $\phi\circ\tau=\tau\circ\sigma$ and $\tau(c)=db^2$ for some $b\in F^{\times}$. \end{theorem} \begin{proof} Suppose $G:D\to D'$ is an $F$-isomorphism. Then $G$ maps the middle nucleus of $D$ to the middle nucleus of $D'$, so by Theorem \ref{CSANucleus} this implies $B\cong B'.$ This means that $G$ restricted to $B$ is an isomorphism onto $B'$; that is, $G\!\mid_B=\tau:B\to B'$, so this yields $G(x,0)=(\tau(x),0)$ for all $x\in B$.\\ Let $G(0,1)=(a,b)$ for some $a,b\in B'$. Then we have $G(x,y)=G(x,0)+G(0,1)G(y,0)=(\tau(x)+a\tau(y),\tau(y)b),$ and $G(x,y)=G(x,0)+G(y,0)G(0,1)=(\tau(x)+\tau(y)a,b\tau(y)).$ This implies that $a,b\in Z(B')=F$.\\ As $G$ is multiplicative, it follows that $G((0,1)^2)=G(0,1)^2$ which holds if and only if $(a,b)(a,b)=(\tau(c),0).$ From this, we obtain the equations \begin{equation*} a^2+d\phi(b^2)=\tau(c), \ ab+ba=0. \end{equation*} Since we established that $a,b\in F$, this simplifies to $a^2+db^2=\tau(c)$ and $2ab=0.$ As $F$ does not have characteristic 2, this implies that either $a=0$ or $b=0$. If $b=0$, then $G(x,y)=(\tau(x)+\tau(y)a,0)$ and so $G$ is not surjective. This is a contradiction, as $G$ is an isomorphism. Thus $a=0$ and we obtain $db^2=\tau(c)$.\\ Finally, as $G$ is multiplicative it follows that $G(u,v)G(x,y)=G((u,v)(x,y))$ for all $u,v,x,y\in B$. Computing both sides of this equation, we get $$(\tau(ux)+d\phi(\tau(v)b\tau(y)b),\tau(uy)b+\tau(v)b\tau(x))=(\tau(ux+c\sigma(vy)),\tau(uy+vx)b)$$ for all $u,v,x,y\in B$.
As $b\in F$, this implies $db^2\phi(\tau(vy))=\tau(c\sigma(vy)).$ After substituting the condition $\tau(c)=db^2$, we conclude $\phi\circ\tau=\tau\circ\sigma.$\\ Conversely, let $G:B\oplus B\to B'\oplus B'$ be defined by $G(x,y)=(\tau(x),\tau(y)b)$ for some $F$-isomorphism $\tau:B\to B'$ such that $\phi\circ\tau=\tau\circ\sigma$ and $\tau(c)=db^2$ for some $b\in F^{\times}$. By Lemma \ref{sigmataufixfields} and Lemma \ref{otherdicksonnucleus}, we see that $G$ maps the nuclei of $D$ isomorphically to the nuclei of $D'$. Thus, it is easily checked that this $G$ gives an $F$-algebra isomorphism from $D$ to $D'$. \end{proof} \begin{theorem}\label{CDSBIsomorphismsR} Let $D=D_r(B,\sigma,c)$ and $D'=D_r(B',\phi,d)$ be $F$-algebras. Then $G:D\to D'$ is an isomorphism if and only if $G$ has the form $$G(x,y)=(\tau(x),\tau(y)b)$$ for some $F$-isomorphism $\tau:B\to B'$ such that $\phi\circ\tau=\tau\circ\sigma$ and $\tau(c)=db^2$ for some $b\in F^{\times}$. \end{theorem} \begin{proof} The proof is analogous to Theorem \ref{CDSBIsomorphisms}, as the middle nuclei of $D_r(B,\sigma,c)$ and $D_r(B',\phi,d)$ are equal to $B$ and $B'$ respectively. Due to this, we can construct the isomorphisms in the same way as in the previous proof. \end{proof} \begin{corollary}\label{CDSBIsomorphisms 2}Let $D=D(B,\sigma,c)$ (resp. $D_r(B,\sigma,c)$) and $D'=D(B,\phi,d)$ (resp. $D_r(B,\phi,d)$) be $F$-algebras. Then $G:D\to D'$ is an isomorphism if and only if $G$ has the form $$G(x,y)=(\tau(x),\tau(y)b)$$ for some $F$-isomorphism $\tau\in Aut_F(B)$ such that $\phi\circ\tau=\tau\circ\sigma$ and $\tau(c)=db^2$ for some $b\in F^{\times}$. \end{corollary} \begin{corollary} If $c\in F^{\times}$ and $d\in B^{\times}\setminus F$, then $D(B,\sigma,c)$ is not isomorphic to any of $D(B,\sigma,d)$, $D_m(B,\sigma,d)$ or $D_r(B,\sigma,d)$.
\end{corollary} \begin{proof} If $D(B,\sigma,c)$ is isomorphic to one of $D(B,\sigma,d)$ or $D_r(B,\sigma,d)$, by Corollary \ref{CDSBIsomorphisms 2} there must exist some $b\in F^{\times}$ such that $c=db^2$. This implies $d=cb^{-2}\in F^{\times}$, which is a contradiction.\\ Finally, if $D_m(B,\sigma,d)\cong D(B,\sigma,c)$, then the middle nuclei of the two algebras must be isomorphic; that is, $B\cong \lbrace k\in B\mid \sigma(k)d=d\sigma(k)\rbrace$. This is satisfied if and only if $d\in F^{\times}$, contradicting our assumption. \end{proof} Note that we cannot use an analogous proof to the one in Theorem \ref{CDSBIsomorphisms} to determine the isomorphisms of $D_m(B,\sigma,c)$, as the middle nucleus is not equal to $B$. We obtain some weaker results: \begin{lemma}If $Fix(\sigma)\not\cong Fix(\phi)$, then $D_m(B,\sigma,c)\not\cong D_m(B',\phi,d)$ for any choice of $c\in B^{\times}$ and $d\in B'^{\times}$. \end{lemma} \begin{proof} If $D_m(B,\sigma,c)\cong D_m(B',\phi,d)$, the left nucleus of $D_m(B,\sigma,c)$ is mapped isomorphically to the left nucleus of $D_m(B',\phi,d)$. By Theorem \ref{CSANucleusM}, this implies $Fix(\sigma)\cong Fix(\phi)$. \end{proof} \begin{theorem}\label{CDSBIsomorphismsM} Let $D=D_m(B,\sigma,c)$ and $D'=D_m(B',\phi,d)$ be $F$-algebras. If $\tau:B\to B'$ is an $F$-isomorphism such that $\phi\circ\tau=\tau\circ\sigma$ and $\tau(c)=db^2$ for some $b\in F^{\times}$, there is an isomorphism $G:D\to D'$ given by $G(x,y)=(\tau(x),\tau(y)b)$ for all $x,y\in B$. \end{theorem} \begin{proof} Clearly this is an $F$-vector space isomorphism from $B\oplus B$ to $B'\oplus B'$ as it is additive, bijective and $F$-linear.
To show this map is multiplicative and thus an $F$-algebra isomorphism, we consider $G(u,v)G(x,y)=G((u,v)(x,y)).$ This is equivalent to $$\tau(u)\tau(x)+\phi(\tau(v)b)d\phi(\tau(y)b)=\tau(ux+\sigma(v)c\sigma(y)),\ \tau(u)\tau(y)b+\tau(v)b\tau(x)=\tau(uy+vx)b.$$ As $b\in F^{\times}$, this is equivalent to simply considering $\phi(\tau(v))db^2\phi(\tau(y))=\tau(\sigma(v))\tau(c)\tau(\sigma(y))$. Substituting $\tau(c)=db^2$, we conclude that this is satisfied for all $v,y\in B$ as we assumed $\phi\circ\tau=\tau\circ\sigma$. Hence $G:D\to D'$ is an $F$-algebra isomorphism. \end{proof} \subsection{Automorphisms} \begin{theorem}\label{CDSBAutomorphisms} Let $D=D(B,\sigma, c)$ (resp. $D=D_r(B,\sigma,c)$). All automorphisms $G:D\to D$ are of the form $$G(u,v)=(\tau(u),\tau(v)b)$$ for some $\tau\in Aut_F(B)$ such that $\tau\in C(\sigma)$ and $b\in F^{\times}$ satisfying $\tau(c)=cb^2$. Further, all maps of this form with $\tau\in Aut_F(B)$ and $b\in F^{\times}$ satisfying these conditions yield automorphisms of $D$. \end{theorem} \begin{proof} Suppose that $G:D\to D$ is an $F$-automorphism. Then $G$ restricts to an automorphism of the middle nucleus of $D$. This means that $G$ restricted to $B$ must be an automorphism of $B$; that is, $G\!\mid_B=\tau\in Aut_F(B)$, so we have $G(x,0)=(\tau(x),0)$ for all $x\in B$.\\ Let $G(0,1)=(a,b)$ for some $a,b\in B$. Then we have $$G(x,y)=G(x,0)+G(0,1)G(y,0)=(\tau(x)+a\tau(y),\tau(y)b),$$ and $G(x,y)=G(x,0)+G(y,0)G(0,1)=(\tau(x)+\tau(y)a,b\tau(y)).$ This implies that $a,b\in Z(B)=F$.\\ As $G$ is multiplicative, we must also have $G((0,1)^2)=G(0,1)^2$ which holds if and only if $(a,b)(a,b)=(\tau(c),0).$ From this, we obtain the equations $a^2+c\sigma(b^2)=\tau(c)$ and $ab+ba=0.$ Since we have $a,b\in F$, this simplifies to $a^2+cb^2=\tau(c)$ and $2ab=0.$ As $F$ does not have characteristic 2, this implies either $a=0$ or $b=0$. If $b=0$, then $G(x,y)=(\tau(x)+\tau(y)a,0)$ and so $G$ is not surjective.
This is a contradiction, as $G$ is an automorphism. Thus we conclude $a=0$ and $cb^2=\tau(c)$.\\ Finally, as $G$ is multiplicative we have $G(u,v)G(x,y)=G((u,v)(x,y))$ for all $u,v,x,y\in B$. When $D=D(B,\sigma,c)$, this yields $$(\tau(ux)+c\sigma(\tau(v)b\tau(y)b),\tau(uy)b+\tau(v)b\tau(x))=(\tau(ux+c\sigma(vy)),\tau(uy+vx)b)$$ for all $u,v,x,y\in B$. As $b\in F$, this implies we must have $cb^2\sigma(\tau(vy))=\tau(c\sigma(vy)).$ After substituting the condition $\tau(c)=cb^2$, we get $\sigma\circ\tau=\tau\circ\sigma.$ This follows almost identically for $D_r(B,\sigma,c)$.\\ Conversely, let $G:B\oplus B\to B\oplus B$ be defined by $G(x,y)=(\tau(x),\tau(y)b)$ for some $F$-automorphism $\tau:B\to B$ such that $\sigma\circ\tau=\tau\circ\sigma$ and $\tau(c)=cb^2$ for some $b\in F^{\times}$. It is easily checked that this in fact gives an $F$-algebra automorphism of $D$. \end{proof} \begin{corollary} Let $D=D(B,\sigma, c)$ (resp. $D=D_r(B,\sigma,c)$). There is a subgroup of $Aut_F(D)$ isomorphic to $$\lbrace\tau\in Aut_F(B)\mid \tau(c)=c \mbox{ and } \tau\circ\sigma=\sigma\circ\tau\rbrace.$$ \end{corollary} In order to describe the number of automorphisms of $D(B,\sigma,c)$ and $D_r(B,\sigma,c)$, we introduce a slightly different version of the group $J(c)$: $$J_F(c)=\lbrace \tau\in Aut_F(B)\mid X^2-\tau(c)c^{-1} \mbox{ has solutions in }F\rbrace\subset Aut_F(B).$$ Similarly to $J(c)$, this forms a subgroup of $Aut_F(B)$. The proof of this follows identically to the proof of Theorem \ref{JcSubgroup}. \begin{theorem} There are exactly $2\left|J_F(c)\cap C(\sigma)\right|$ automorphisms of $D(B,\sigma,c)$ (respectively $D_r(B,\sigma,c)$), each of which is given by the automorphisms $G(x,y)=(\tau(x),\tau(y)b_i)$ for each $\tau\in J_F(c)\cap C(\sigma)$, where $b_i\in F$ are the two solutions of $X^2-\tau(c)c^{-1}$ for $i=1,2$.
\end{theorem} \begin{proof} The proof follows analogously to the proof of Theorem \ref{CDSExactNumberOfAutomorphisms}, apart from requiring that $b_i\in F^{\times}$. This is due to the constraints determined in Theorem \ref{CDSBAutomorphisms}. \end{proof} \begin{corollary} If $c\in F^{\times}$, then there are exactly $2\left|C(\sigma)\right|$ automorphisms of $D(B,\sigma,c)$, each of which is given by the automorphisms $G(x,y)=(\tau(x),\pm\tau(y))$ for each $\tau\in C(\sigma)$. \end{corollary} \begin{proof} This follows similarly to Corollary \ref{CDSCSigmaAuts}. \end{proof} An integral part of the proof given in Theorem \ref{CDSBAutomorphisms} is that one of the nuclei of these algebras must be equal to $B$ and so any automorphism of $D(B,\sigma,c)$ must restrict to an automorphism of $B$. For $D_m(B,\sigma,c)$ with $c\not\in F^{\times}$, $B$ is not equal to any of the nuclei so we cannot make this deduction. However, if we assume that an automorphism of $D_m(B,\sigma, c)$ restricts to an automorphism of $B$, then it must be of the same form as the automorphisms of the other Dickson algebras: \begin{theorem} Let $D=D_m(B,\sigma,c)$ and suppose $G$ is an automorphism which restricts to an automorphism of $B$. Then $$G(u,v)=(\tau(u),\tau(v)b)$$ for some $\tau\in Aut_F(B)$ such that $\tau\in C(\sigma)$ and $b\in F^{\times}$ satisfying $\tau(c)=cb^2$. \end{theorem} \begin{proof} The proof follows analogously to Theorem \ref{CDSBAutomorphisms} as $G$ restricts to an automorphism of $B$. \end{proof} \printbibliography \end{document}
\section{Introduction} In this paper we prove the Margulis lemma on Alexandrov spaces with curvature bounded below. A group $\Gamma$ is called {\it $w$-nilpotent} if there is a nilpotent subgroup $N<\Gamma$ whose index satisfies $[\Gamma:N]\le w$. Let $B_r(p)$ denote a metric ball centered at $p$ of radius $r$. \begin{theorem}[Generalized Margulis Lemma]\label{thmb-margulis} There are $\epsilon(n),w(n)>0$ such that for any Alexandrov space $X$ with curvature $\ge -1$ and any point $p\in X$, the subgroup $\Gamma_p(p;\epsilon)$ of the fundamental group $\pi_1(B_1(p),p)$ generated by loops at $p$ lying in $B_{\epsilon}(p)$ with $0<\epsilon\le \epsilon(n)$ is $w(n)$-nilpotent. \end{theorem} The original Margulis lemma is also called the Margulis-Heintze theorem; it was proved by Margulis (cf. \cite{Gr78a}) and was independently discovered by Heintze \cite{He76} for manifolds with $-1\le K\le 0$. Since then, it has been one of the fundamental facts in Riemannian geometry with many applications, e.g., Gromov's almost flat theorem \cite{Gr78a}, the finiteness of closed negatively pinched manifolds of bounded volume \cite{Gr78b}, and more recently the almost rigidity of maximal volume entropy \cite{CRX16} for manifolds of lower bounded Ricci curvature to be hyperbolic. For manifolds with sectional curvature $K\ge -1$, it was proved by Fukaya-Yamaguchi \cite{FY92} that $\Gamma(\epsilon)$ is almost nilpotent, without a uniform bound on the index. For Alexandrov spaces, an earlier version of Theorem \ref{thmb-margulis} was proved by Yamaguchi in \cite{Ya96}, also without a uniform bound on the index of the nilpotent subgroup; the proof there was based on the Lipschitz submersion Theorem \ref{thm:Lipschitz-submersion} and arguments in \cite{FY92}. Later a global version of Theorem \ref{thmb-margulis} for manifolds of almost nonnegative curvature was proved by Kapovitch-Petrunin-Tuschmann \cite{KPT10}, where $\Gamma(\epsilon)$ admits a nilpotent subgroup with uniformly bounded index.
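To illustrate the notion of $w$-nilpotency with a standard example (our illustration, not part of the original text), consider the infinite dihedral group $$D_\infty=\mathbb{Z}\rtimes \mathbb{Z}/2\mathbb{Z}=\langle r,s \mid s^2=1,\ srs=r^{-1}\rangle, \qquad N=\langle r\rangle\cong\mathbb{Z}, \qquad [D_\infty:N]=2.$$ The translation subgroup $N$ is abelian, hence nilpotent, and has index $2$, so $D_\infty$ is $2$-nilpotent; yet $D_\infty$ is not itself nilpotent, since its lower central series $\gamma_k=\langle r^{2^{k-1}}\rangle$ ($k\ge 2$) never terminates.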
Theorem \ref{thmb-margulis} also follows from the main ideas of Kapovitch-Petrunin-Tuschmann \cite{KPT10}. Gromov conjectured that the Margulis lemma with a universally bounded index holds for manifolds of lower bounded Ricci curvature. A breakthrough on this conjecture was made by Cheeger-Colding \cite{CC96}, and it has finally been confirmed recently by Kapovitch-Wilking \cite{KW11}. We point out that the uniform index bound is very important for some geometric applications; for example, in Gromov's almost flat theorem \cite{Gr78a}, the uniform index bound corresponds to the holonomy gap, which is crucial in Gromov's and Ruh's proofs (see \cite{Gr78a}, \cite{Ruh82}, \cite{BK81}). The uniform index bound is also crucial for the almost rigidity of maximal volume entropy \cite{CRX16} in deriving that the connected component of a Gromov-Hausdorff limit group of deck transformations is a nilpotent Lie group. \begin{remark} More generally, one may further consider a metric space $X$ of $K$-bounded packing, i.e., there is $K>0$ such that every ball of radius $4$ in $X$ can be covered by at most $K$ balls of radius $1$. In \cite[\S5.F]{Gro2007} Gromov proposed the question whether a discrete isometric subgroup $\Gamma$ acting on a metric space with $K$-bounded packing is virtually nilpotent, provided $\Gamma$ is generated by finitely many elements whose displacement at one point is $< \epsilon(K)$. It was recently answered affirmatively in \cite{BGT2012}. However, the uniform index bound as in Theorem \ref{thmb-margulis} is beyond their approach (see \cite[Section 11]{BGT2012}). \end{remark} Our proof relies on Theorem \ref{thm:Lipschitz-submersion} and Theorem \ref{thm-gradientpush-1} below.
For small $0<\delta<\delta(n,\kappa)$, the {\it $\delta$-strained radius} \cite{Ya96} at a point $p$ in an $n$-dimensional Alexandrov space $Y$ of curv $\ge \kappa$ is defined to be $$r_{\text{$\delta$-str}}(p)=\sup \{r\; | \text{ there exists an $(n,\delta)$-strainer at $p$ of length $r$} \}.$$ Let $r_{\text{$\delta$-str}}(Y)=\inf\{ r_{\text{$\delta$-str}}(p):p\in Y\}$. Let $\varkappa(\delta,\epsilon|n)$ denote a positive function depending on $n$, $\delta$ and $\epsilon$ satisfying $\varkappa(\delta,\epsilon|n)\to 0$ as $\delta,\epsilon\to 0$. A map $f:X\to Y$ between Alexandrov spaces is an {\it $\epsilon$-almost Lipschitz submersion} \cite{Ya96} if \begin{enumerate} \item[(i)] $f$ is an $\epsilon$-Gromov-Hausdorff approximation (GHA for short), i.e., for any $p,q\in X$, $||f(p)f(q)|-|pq||\le \epsilon$ and $f(X)$ is $\epsilon$-dense in $Y$, where $|pq|=d(p,q)$ denotes the distance between two points $p,q$; and \item[(ii)] for any $p, q \in X$, $$\left|\frac{|f(p)f(q)|}{|pq|}-\sin\theta(p,q)\right|<\epsilon,$$ where $\theta(p,q)$ is the infimum of $\measuredangle qpx$ as $x$ runs over $f^{-1}(f(p))$.
\end{enumerate} An $\epsilon$-almost Lipschitz submersion is called {\it regular} if, in addition, \begin{enumerate} \item[(iii)] for any $y,z\in Y$, there are points $p\in f^{-1}(y),q\in f^{-1}(z)$ such that $|\theta(p,q)-\frac{\pi}{2}|\le \epsilon.$ \end{enumerate} \begin{theorem}[Lipschitz submersion \& fibration] \label{thm:Lipschitz-submersion} For any dimension $n$ and positive number $\mu_0$, there exist positive numbers $\delta(n)$ and $\epsilon(n,\mu_0)$ such that for any $m$-dimensional Alexandrov space $X$ with curv $\ge -1$ and any $n$-dimensional Alexandrov space $Y$ with curv $\ge -1$, if \begin{enumerate}\numberwithin{enumi}{theorem} \item the $\delta$-strained radius of $Y$ satisfies $r_{\text{$\delta$-str}}(Y)\ge \mu_0$ with $0<\delta<\delta(n)$, and \item the Gromov-Hausdorff distance $d_{GH}(X,Y)\le \epsilon<\epsilon(n,\mu_0)$, \end{enumerate} then there exists a regular $\varkappa(\delta,\epsilon|n)$-almost Lipschitz submersion $f : X \to Y$ that is a Hurewicz fibration. \end{theorem} \begin{remark} If, in addition, every $f$-fiber is a topological manifold without boundary of codimension $n$, then $f$ is a locally trivial fibration; see \cite{RX12}. We also prove that the fibration in Theorem \ref{thm:Lipschitz-submersion} is uniquely determined in the homotopic sense; see Theorem \ref{thm:uniqueness}. \end{remark} Theorem \ref{thm:Lipschitz-submersion} can be traced back to the fibration theorems \cite{Fu87}, \cite{Ya91}, \cite{Pr93} for manifolds, which have played a fundamental role in the study of collapsed manifolds. The existence of a regular almost Lipschitz submersion in Theorem \ref{thm:Lipschitz-submersion} is due to Yamaguchi \cite{Ya96}, who conjectured that it should be a locally trivial fibration. Here we partly verify his conjecture.
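As a model case (our illustration, not part of the original text), the coordinate projection of a Euclidean product satisfies the angle condition (ii) with $\epsilon=0$: take $f:\mathbb{R}^m=\mathbb{R}^n\times\mathbb{R}^{m-n}\to\mathbb{R}^n$, $f(y,z)=y$. Writing $q-p=(u,w)$, the fiber through $p$ is $f^{-1}(f(p))=\{y_p\}\times\mathbb{R}^{m-n}$, and minimizing $\measuredangle qpx$ over $x$ in this fiber gives $$\cos\theta(p,q)=\frac{|w|}{|pq|},\qquad \sin\theta(p,q)=\frac{|u|}{|pq|}=\frac{|f(p)f(q)|}{|pq|},$$ so condition (ii) holds exactly. Of course such a projection is not an $\epsilon$-GHA when $m>n$; it only illustrates the submersion condition.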
\begin{remark} A direct corollary of Theorem \ref{thm:Lipschitz-submersion} is a long exact sequence arising from the fibration: \begin{equation}\label{long-exact-seq} \begin{aligned} &\cdots\to\pi_l(F,x)\to \pi_l(X,x)\overset{f_*}{\to}\pi_l(Y,f(x))\to\pi_{l-1}(F, x)\to\cdots\to \\ &\cdots\to\pi_1(Y,f(x))\to 0. \end{aligned} \end{equation} In \cite{Pr97} Perelman obtained the same long exact sequence in a much weaker situation: when a sequence of Alexandrov spaces $X_i$ with curv $\ge\kappa$ collapses to a limit space $Y$, if $Y$ contains no proper extremal subsets, then (\ref{long-exact-seq}) holds for $i$ large and a regular fiber $F$ (i.e., the fiber of a lift to $X_i$ of a regular admissible map locally defined on $Y$ to $\Bbb R^n$; see \cite{Pr97}). By the proof of Theorem \ref{thmb-margulis}, both the homotopy fiber in Theorem \ref{thm:Lipschitz-submersion} and Perelman's regular fiber admit a $w(m-n)$-nilpotent fundamental group, where $w$ depends on the codimension. \end{remark} The gradient push developed by Kapovitch-Petrunin-Tuschmann \cite{KPT10} is important for us to deduce the uniform index bound in Theorem \ref{thmb-margulis}, as in the case of almost nonnegatively curved manifolds \cite{KPT10}. \begin{theorem}[Gradient push, {\cite[Lemma 2.5.1]{KPT10}}]\label{thm-gradientpush-1} There are $\delta(n), T(n)>0$ such that if the metric ball $B_1(p_0)$ centered at $p_0$ of radius $1$ is relatively compact in an Alexandrov $n$-space $X$ with curvature $\ge -1$, then there are regular points $\{a_j,b_j\}_{j=1}^n$ and $q_0$ in $B_{\frac{1}{100}}(p_0)$ such that $\{a_j,b_j\}_{j=1}^n$ is an $(n,\delta)$-strainer at $q_0$ and any point $q$ in $B_{\delta |a_nb_n|}(q_0)$ can be pushed successively by the gradient flows of $\frac{1}{2}\operatorname{dist}_{q_0}^2$, $\frac{1}{2}\operatorname{dist}_{a_j}^2$, $\frac{1}{2}\operatorname{dist}_{b_j}^2$ $(j=1,\dots, n)$ to any point $p\in B_{\frac{1}{2}}(q)$ in total time $\le T(n)$.
\end{theorem} Compared to the case of manifolds, a crucial difference on an Alexandrov $n$-space $X$ is that there may be proper extremal subsets, and no gradient curves can get out of them. When pushing a loop at a regular point to another regular point, it is a subtle point whether the successive gradient curves at the base point avoid every proper extremal subset in $X$. Since it is hard to check this directly by following \cite{KPT10}, in the Appendix we give a detailed proof of Theorem \ref{thm-gradientpush-1} by constructing a specific gradient pushing broken line, which consists of $k$-regular (i.e., the tangent cone $T_pX$ at least splits off $\mathbb R^k$) or $(n,\delta)$-strained points when $a_j,b_j$ and the ending point $p$ are $k$-regular. In particular, the gradient push between regular points can always keep away from extremal subsets. We also sharpen the universal time bound $T(n)$ to $n^2 \delta^{-1}$, improving the universal time bound $\delta^{-n^2}$ in \cite{KPT10}. This provides a detailed justification for the gradient push in proving the Margulis lemma on an Alexandrov space. \begin{remark} Kapovitch-Wilking \cite{KW11} developed a replacement (see the zooming in property and rescaling theorem in \cite{KW11}) of Yamaguchi's fibration theorem \cite{Ya91} and the gradient push \cite{KPT10} in proving the Margulis lemma for manifolds with lower bounded Ricci curvature. Note that it is necessary to change base points many times when the rescaling theorem is applied. Since in our case a fixed base point remains valid at every scale, the proof of Theorem \ref{thmb-margulis} is more direct than that in \cite{KW11}. \end{remark} Now let us briefly explain the ideas of the proofs.
According to \cite{KPT10}, a finitely generated group $G$ is $w$-nilpotent if it admits a filtration $G_1=G\vartriangleright G_2 \vartriangleright \cdots \vartriangleright G_l=\{e\}$, where $l\le n$, each $G_i\vartriangleleft G_1$, $G_i/G_{i+1}$ is $c$-abelian, and the conjugation action of $G_1$ on $G_i/G_{i+1}$, namely $\rho_i:G_1\to \operatorname{Out}(G_i/G_{i+1})$, has a finite image, whose order is bounded by $C$. By a contradiction argument and an iterated blowing-up process, we will prove that around any $p\in X$ there is a nearby point $q$ at which the local fundamental group corresponding to different collapsing scales (see Definition \ref{def-local-group}) admits a filtration as above. Then Theorem \ref{thmb-margulis} follows from a compact packing argument as in \cite{KW11}. Theorem \ref{thm:Lipschitz-submersion} is used in proving $G_{i+1}\vartriangleleft G_i$ (for an alternative proof, see \cite{FY92} or \cite{Ya96}). The normality $G_{i}\vartriangleleft G_1$ and a uniform bound on $\# \rho_i(G_1)$ follow from the universal time bound in Theorem \ref{thm-gradientpush-1}. According to Ferry's result (\cite{Fe78}, see also Theorem \ref{thm:strong-regular}), the homotopy lifting property holds for the map in Theorem \ref{thm:Lipschitz-submersion} if there are controlled homotopy equivalences between nearby fibers (called {\it strongly regular}, see Section 2.2) and all fibers are absolute neighborhood retracts. As a generalization of the tubular neighborhood of fibers and horizontal curves of an $\epsilon$-Riemannian submersion, a neighborhood retraction $\varphi_p$ to a fiber $f^{-1}(p)$ of an LcL was constructed in \cite{RX12} (see also Proposition \ref{prop:gradient-retract}, Section 2.4), which is defined via iterated gradient deformations of distance functions. By this neighborhood retraction associated to every fiber, we are able to define controlled homotopy equivalences between nearby fibers and prove that each fiber is locally contractible.
The remainder of the paper is divided into three parts. In Section 2, we review some topological results and prove Theorem \ref{thm:Lipschitz-submersion}. In Sections 3, 4 and 5 we prove Theorem \ref{thmb-margulis}. In the Appendix we give an elementary construction of the gradient push in Theorem \ref{thm-gradientpush-1} with a sharpened time estimate improving that in \cite{KPT10}. \begin{acknowledgements} The first author would like to thank Xiaochun Rong and Hao Fang for helpful discussions, and to thank the University of Iowa for its hospitality and support during a visit in which part of this work was completed. The second author would like to thank Yin Jiang and Liman Chen for helpful suggestions. We are grateful to Fuquan Fang for pointing out the nilpotency result in \cite{BGT2012} to us. This work is partially supported by the National Natural Science Foundation of China [11871349], [11821101], by research funds of the Beijing Municipal Education Commission and the Youth Innovative Research Team of Capital Normal University. \end{acknowledgements} \section{Homotopy lifting properties} \subsection{Proof of Theorem \ref{thm:Lipschitz-submersion}} A map $f:X\to Y$ between two metric spaces is called {\it $e^\epsilon$-Lipschitz and co-Lipschitz} \cite{Ka07}, \cite{RX12} (briefly, an $e^\epsilon$-LcL) if for any $p\in X$ and any $r>0$, the metric balls satisfy \begin{equation} B_{e^{-\epsilon}r}(f(p))\subseteq f(B_r(p))\subseteq B_{e^\epsilon r}(f(p)).\tag{1.7} \end{equation} A $1$-LcL preserves metric balls exactly and is called a {\it submetry} \cite{BG00}. Clearly, a regular $\epsilon$-almost Lipschitz submersion is an $e^{C\epsilon}$-LcL for some universal constant $C$. Since a regular almost Lipschitz submersion thus satisfies the LcL property, it suffices to prove Theorem \ref{thm-Hurewicz} below. In order to simplify the constant dependence, we introduce a notion alternative to the $\delta$-strained radius.
An $n$-dimensional Alexandrov space $Y$ is called {\it $\epsilon$-almost Euclidean} if for any point $p\in Y$, there is a neighborhood $U$ containing $p$ and a bi-Lipschitz map $\varphi:U\to \varphi(U)\subset \Bbb R^n$ onto an open neighborhood in $\Bbb R^n$ such that for any $x,y\in U$, \begin{equation}\label{eq:almost-Euclidean} e^{-\epsilon}|xy|\le |\varphi(x)\varphi(y)|\le e^\epsilon |xy|.\tag{2.1} \end{equation} If (\ref{eq:almost-Euclidean}) holds on every $r$-ball in $Y$, then $Y$ is called {\it $(r,\epsilon)$-almost Euclidean}. By \cite[Theorem 5.4]{BGP92}, an Alexandrov space with curv $\ge -1$ and $\delta$-strained radius $\ge \mu_0$ is $(\mu_0,\varkappa(\delta|n))$-almost Euclidean. \begin{theorem}\label{thm-Hurewicz} Let $f:X\to Y$ be a $\sqrt{1.023}$-LcL between finite-dimensional Alexandrov spaces with curv $\ge \kappa$. If $f$ is proper and the base space $Y$ is $\ln\sqrt{1.023}$-almost Euclidean, then $f$ is a Hurewicz fibration, i.e., it satisfies the homotopy lifting property with respect to any space. \end{theorem} Theorem \ref{thm-Hurewicz} appeared in an earlier preprint \cite{Xu12}. \vspace{2mm} \begin{proof}[Proof of Theorem \ref{thm:Lipschitz-submersion}] ~ The existence of a regular almost Lipschitz submersion was proven by Yamaguchi \cite{Ya96}. By Theorem \ref{thm-Hurewicz} and the discussion above, any regular almost Lipschitz submersion $f:X\to Y$ is a Hurewicz fibration. \end{proof} The remainder of this section is devoted to proving Theorem \ref{thm-Hurewicz}. \subsection{A sufficient condition for a fibration} The following topological results are used in the proof of Theorem \ref{thm-Hurewicz}. For any Hurewicz fibration $f:X\to Y$, if $Y$ is path-connected, then by definition the fibers are homotopy equivalent to each other. In \cite{Fe78} Ferry proved that the converse is also true, provided the homotopy equivalences between nearby fibers and the homotopies are controlled in the following sense.
A map $f:X\to Y$ between metric spaces is said to be {\it strongly regular} \cite{Fe78} if $f$ is proper and if for each $p\in Y$ and any $\epsilon>0$ there is a $\delta>0$ such that if $|pp_1|<\delta$, then there are homotopy equivalences between fibers $\varphi_{pp_1}: f^{-1}(p)\to f^{-1}(p_1)$, $\varphi_{p_1p}: f^{-1}(p_1)\to f^{-1}(p)$ which, together with the homotopies, move points a distance $<\epsilon$. A topological space $X$ is an {\it absolute neighborhood retract} (ANR) if there is an embedding of $X$ as a closed subspace of the Hilbert cube $I^{\infty}$ such that some neighborhood $N$ of $X$ retracts onto $X$. If $X$ is of finite covering dimension and locally contractible, then $X$ is an ANR (\cite{Bo55}). \begin{theorem}[\cite{Fe78}]\label{thm:strong-regular} If $f:E\to B$ is a strongly regular map onto a complete finite covering dimensional space $B$ and all fibers are ANRs, then $f$ is a Hurewicz fibration. \end{theorem} \begin{remark}\label{rem-local-fibration} Since the properties of being an ANR or a Hurewicz fibration are local properties (cf. \cite{Fe78}), Theorem \ref{thm:strong-regular} was proved locally in \cite{Fe78}. Moreover, the Lipschitz submersion in Theorem \ref{thm:Lipschitz-submersion} can be constructed locally (\cite{Ya96}). Hence, both of them hold over $\epsilon$-almost Euclidean points in a complete Alexandrov space, and so do Theorem \ref{thm-Hurewicz} and Theorem \ref{thm:Lipschitz-submersion}. \end{remark} According to Theorem \ref{thm:strong-regular} and the discussion above, Theorem \ref{thm-Hurewicz} holds if an $e^\epsilon$-LcL between Alexandrov spaces with almost Euclidean base space is strongly regular, and all its fibers are locally contractible. \subsection{Gradient estimate for an LcL} Let us first recall a basic property of an $e^\epsilon$-LcL $f:X\to Y$.
For any compact subset $S\subset Y$, let $\operatorname{dist}_S$ be the distance function to $S$ in $Y$, $$\operatorname{dist}_S(y)=\abs{yS}=\inf\{d(y,s): s\in S\}.$$ Then the two functions $\operatorname{dist}_S\circ f$ and $\operatorname{dist}_{f^{-1}(S)}: X\to \Bbb R_+$ satisfy (see Lemma 1.4 in \cite{RX12}) \begin{equation}\label{eq:lcl} e^{-\epsilon}\cdot \operatorname{dist}_S\circ f \le \operatorname{dist}_{f^{-1}(S)}\le e^\epsilon \cdot \operatorname{dist}_S\circ f.\tag{2.3} \end{equation} Since the LcL property is invariant under rescaling, from now on we assume that $X$ is an Alexandrov space with curv $\ge -1$, and $Y$ is an $n$-dimensional Alexandrov space with curv $\ge -1$ that is $\epsilon$-almost Euclidean. Let $f:X\to Y$ be an $e^\epsilon$-LcL. Under the assumption that $Y$ is a Riemannian manifold, we constructed in \cite{RX12} a neighborhood retraction $\varphi_p$ of the $f$-fiber over $p\in Y$, which depends continuously on $p$ and can be used as a weaker replacement of the horizontal lifting of minimal geodesics. In the proof of Theorem \ref{thm-Hurewicz} we will apply it to define controlled homotopy equivalences between nearby fibers. Because $Y$ is now an Alexandrov space, for the reader's convenience we recall its construction below and point out the differences from \cite{RX12}. For an $\epsilon$-almost Euclidean point $p\in Y$, let $r_p$ denote the maximal number such that there is a map $\varphi: B_{r_p}(p)\to \Bbb R^n$ satisfying (\ref{eq:almost-Euclidean}). Let $S_r(p)=\partial B_r(p)$ be the metric sphere around $p$ and let $x$ be any point in $B_r(p)\setminus\{p\}$. We have the following estimate on the gradient of the distance function $\operatorname{dist}_{f^{-1}(S_r(p))}$. \begin{lemma}\label{lem-gradient-estimate} Let $f:X\to Y$ and $0<r\le \min\{r_p,1\}$ be as above. Let $x$ be a point in $f^{-1}(B_r(p))\setminus f^{-1}(p)$.
The gradient vector of $\operatorname{dist}_{f^{-1}(S_r(p))}$ satisfies \begin{equation}\label{eq:gradient-estimate-totalspace} 1\ge \abs{\nabla_x \operatorname{dist}_{f^{-1}(S_r(p))}} \ge 1 - (e^{2\epsilon}-1) \cdot \frac{2r^2}{\abs{x f^{-1}(p)}\cdot \abs{x f^{-1} (S_r(p))}}.\tag{2.4} \end{equation} \end{lemma} \begin{proof} The proof is similar to Lemma 1.5 (1.5.1) in \cite{RX12}. Let $z\in f^{-1}(S_r(p))$, $y\in f^{-1}(p)$ be such that $|xz|=|x f^{-1}(S_r(p))|$ and $|xy|=|xf^{-1}(p)|$. Let $v$ be the direction at $x$ of a minimal geodesic from $x$ to $y$. It suffices to bound $\cos \measuredangle (v,w)$ from above for any direction $w$ from $x$ to $f^{-1}(S_r(p))$. Since $f$ and $\varphi$ are $e^{\epsilon}$-LcLs, by (\ref{eq:lcl}) we directly see $$\aligned |xy| &\le e^{2\epsilon}\cdot |\varphi(f(x))\varphi(p)|, \\ |xz| &\le e^{2\epsilon}\cdot |\varphi(f(x))\varphi(S_r(p))|,\\ |yz| &\ge |z f^{-1}(p)| \ge e^{-\epsilon}|f(z)p|=e^{-\epsilon}\cdot r. \endaligned $$ Moreover, \begin{align*} |\varphi(f(x))\varphi(p)|+|\varphi(f(x))\varphi(S_r(p))|&\le |\varphi(f(x))\varphi(p)|+ |\varphi(f(x))S_{e^\epsilon r}(\varphi(p))|\\ &= e^\epsilon r. \end{align*} Thus $$|yz|\le |xy|+|xz|\le e^{2\epsilon}\cdot (|\varphi(f(x))\varphi(p)|+|\varphi(f(x))\varphi(S_r(p))|)\le e^{3\epsilon}r. $$ Since the proof below is similar for different curvature lower bounds, for simplicity we only treat the case $\kappa=0$. By the Euclidean cosine law, we derive $$\aligned \cos \tilde \measuredangle_0 (zxy)&=\frac {|xz|^2+|xy|^2- |yz|^2}{2|xz|\cdot |xy|}\\ &=\frac{(|xz|+|xy|)^2-|yz|^2}{2|xz|\cdot |xy|}-1\\ &= \frac{(|xz|+|xy|-|yz|)\cdot (|xz|+|xy|+|yz|)}{2|xz|\cdot|xy|}-1\\ &\le (e^{2\epsilon}-1) \cdot \frac{r^2}{|xz|\cdot|xy|} -1.
\endaligned$$ \end{proof} By Lemma \ref{lem-gradient-estimate} and a standard argument, for sufficiently small $\epsilon$ ($e^{2\epsilon}\le 1.02368$), points in $f^{-1}(B_{\frac{2r}{3}}(p))$ can be flowed into $f^{-1}(B_{\frac{r}{3}}(p))$ along gradient curves of $\operatorname{dist}_{f^{-1}(S_r(p))}$ in a definite time. \begin{lemma}[Lemma 1.5 in \cite{RX12}]\label{lem:gradient-estimate} For any $p\in Y$ and $r<\min\{r_p, \frac{1}{2e^\epsilon}\}$, there is a constant $C_0(\epsilon)>0$ depending on $\epsilon$ such that for all $x\in f^{-1}(B_{\frac {2r}3}(p))$, the gradient curve $\Phi(x,t)$ of the function $\operatorname{dist}_{f^{-1}(S_r(p))}$ satisfies $$\Phi(x,t)\in f^{-1}(B_{\frac {r}3}(p)) \qquad \text{for all}\quad t\ge C_0^{-1}\cdot\left(\frac{2}{3}e^{\epsilon} r -\abs{x f^{-1}(S_r(p))}\right).$$ \end{lemma} \subsection{Neighborhood retraction of a fiber} In this part we construct a neighborhood retraction around a fiber $f^{-1}(p)$ which depends continuously on $p$. We first define a gradient deformation of $\operatorname{id}_{f^{-1}(B_{\frac{2r}{3}}(p))}$ which maps $f^{-1}(B_{\frac{2r}{3}}(p))$ into $f^{-1}(B_{\frac{r}{3}}(p))$ and fixes $f^{-1}(B_{0.3r}(p))$. Let $$T_{p,r}(x)=\max\left\{0,C_0^{-1}\cdot\left(\frac{2}{3}e^{\epsilon} r -\abs{x f^{-1}(S_r(p))}\right)\right\},$$ and let $\Phi_p^{T_{p,r}(x)}(x)=\Phi(x,T_{p,r}(x))$ be the gradient deformation of $\operatorname{id}_{f^{-1}(B_{\frac{2r}{3}}(p))}$ with respect to $\operatorname{dist}_{f^{-1}(S_r(p))}$.
Then by Lemma \ref{lem:gradient-estimate} and a direct calculation, for $e^{2\epsilon}\le 1.02368$ and $r<\min\{r_p, \frac{1}{2e^\epsilon}\}$, we have \begin{equation}\label{eq:gradient-flow} \begin{cases} \Phi_p^{T_{p,r}(x)}(x)\in f^{-1}(B_{\frac {r}3}(p)), &\forall\; x\in f^{-1}(B_{\frac{2r}{3}}(p)),\\ T_{p,r}(x)=0, &\forall\; x\in f^{-1}(B_{0.3 r}(p)).\tag{2.5} \end{cases} \end{equation} In \cite[Proposition 1.6]{RX12} we proved that $\Phi_p^{T_{p,r}(x)}(x)$ is continuous both in $p$ and $x$, provided that $Y$ is a Riemannian manifold and $r$ is smaller than the injectivity radius of $Y$. In the following we prove that the same holds when $Y$ is an almost Euclidean Alexandrov space. \begin{lemma}\label{lem-continuity} Let $0<\epsilon<\ln \sqrt{1.02368}$, and let $f:X\to Y$ be an $e^\epsilon$-LcL between Alexandrov spaces such that $Y$ is $(\mu_0,\epsilon)$-almost Euclidean. Then for any $0<r<\frac{1}{2}\cdot \min\{\mu_0,1\}$, $$ \Psi:\bigcup_{p\in Y} \{p\}\times f^{-1}(B_{\frac{2r}{3}}(p))\subset Y\times X \to X,\quad \Psi(p,x)=\Phi_p^{T_{p,r}(x)}(x)$$ is a continuous map. \end{lemma} \begin{proof} Since the proof is similar to \cite[Proposition 1.6]{RX12}, we only give a sketch, pointing out the differences. Because gradient curves are stable under convergence of the functions (\cite{Petr07}), it suffices to show that the distance functions $\operatorname{dist}_{f^{-1}(S_r(p))}, \operatorname{dist}_{f^{-1}(S_r(q))}$ (to $f^{-1}(S_r(p))$ and $f^{-1}(S_r(q))$ respectively) are $C|pq|$-close for small $|pq|$ and a constant $C$.
By the definition of an LcL, it is easy to verify (see \cite[Lemma 1.4, Lemma 1.7]{RX12}) that the Hausdorff distance and the difference between $\operatorname{dist}_{f^{-1}(S_r(p))}$ and $\operatorname{dist}_{f^{-1}(S_r(q))}$ satisfy \begin{align} d(\operatorname{dist}_{f^{-1}(S_r(p))}, \operatorname{dist}_{f^{-1}(S_r(q))})&=d_H(f^{-1}(S_r(p)), f^{-1}(S_r(q))),\tag{\ref{lem-continuity}.1}\\ d_H(f^{-1}(S_r(p)), f^{-1}(S_r(q)))&\le e^\epsilon\cdot d_H(S_r (p), S_r(q)).\tag{\ref{lem-continuity}.2} \end{align} Let $d(p,q)=\varepsilon_1$. By (\ref{lem-continuity}.1) and (\ref{lem-continuity}.2), what remains is to show that $S_r(p)$ and $S_r(q)$ are $C\varepsilon_1$-close in the Hausdorff distance. Let $z$ be the midpoint of a minimal geodesic $[pq]$. Since both $S_r(p)$ and $S_r(q)$ lie in the annulus $B_{r+\varepsilon_1}(z)\setminus B_{r-\varepsilon_1}(z)$, it is easy to see that one only needs to bound the Hausdorff distance between the metric spheres $S_{r+\varepsilon_1}(z)$ and $S_{r-\varepsilon_1}(z)$, i.e., for some constant $C$, $d_H(S_{r-\varepsilon_1}(z),S_{r+\varepsilon_1}(z))\le C\varepsilon_1$. Indeed, for any point $x\in S_{r+\varepsilon_1}(z)$, since the point $x_1$ on a minimal geodesic $[xz]$ with distance $|x_1x|=2\varepsilon_1$ lies in $S_{r-\varepsilon_1}(z)$, $S_{r+\varepsilon_1}(z)$ lies in the $2\varepsilon_1$-neighborhood of $S_{r-\varepsilon_1}(z)$. Conversely, let $x\in S_{r-\varepsilon_1}(z)$. By the proof of Lemma \ref{lem-gradient-estimate}, there is a point $y$ in $S_{2r}(z)$ such that the comparison angle $\tilde\measuredangle_{-1}(zxy)$ is larger than $\pi/2$ by a definite positive amount $\theta>0$. By the triangle version of the Toponogov theorem, there exists $y_1$ in $[xy]$ with distance $|xy_1|\le \frac{2\varepsilon_1}{-\cos \tilde\measuredangle_{-1}(zxy)}$ such that $|y_1z|=r+\varepsilon_1$.
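Assembling the constants (a rough bookkeeping, with unoptimized constants, of a step left implicit here): since $\tilde\measuredangle_{-1}(zxy)\ge \frac{\pi}{2}+\theta$ implies $-\cos \tilde\measuredangle_{-1}(zxy)\ge \sin\theta$, the two inclusions above give $$d_H(S_{r-\varepsilon_1}(z),S_{r+\varepsilon_1}(z))\le \max\left\{2\varepsilon_1,\ \frac{2\varepsilon_1}{\sin\theta}\right\}=\frac{2\varepsilon_1}{\sin\theta},$$ which, combined with (\ref{lem-continuity}.1), (\ref{lem-continuity}.2) and the reduction from $S_r(p), S_r(q)$ to the spheres around $z$, yields the desired $C\varepsilon_1$-closeness of the two distance functions with a constant of the form $C=\frac{2e^\epsilon}{\sin\theta}$.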
\end{proof} Next, let us repeat the construction above for the sequence $\{r_i=\frac{r}{2^i}\}_{i=0,1,2,\cdots}$ and let $\Phi_{p,i}^{T_{p,i}}(x)=\Phi_{p,i}(x,T_{p,r_i}(x))$ be the gradient curve of $\operatorname{dist}_{f^{-1}(S_{r_i}(p))}$ at $x$ with time $T_{p,r_i}(x)$. By (\ref{eq:gradient-flow}), $\Phi_{p,i}^{T_{p,i}}:f^{-1}(B_{r_i}(p))\to X$ takes $f^{-1}(B_{\frac{2}{3}\cdot \frac{r}{2^i}}(p))$ into $f^{-1}(B_{\frac{1}{3}\cdot\frac{r}{2^{i}}}(p))=f^{-1}(B_{\frac{2}{3}\cdot\frac{r}{2^{i+1}}}(p)),$ and $$\left .\Phi_{p,i}^{T_{p,i}} \right |_{f^{-1}(B_{0.3\frac{r}{2^i}}(p))} = \operatorname{id}.$$ Hence the iterated gradient deformation $$\Phi_{p,i}^{T_{p,i}}\circ \Phi_{p,i-1}^{T_{p,i-1}}\circ \cdots \circ \Phi_{p,0}^{T_{p,0}}$$ is well-defined on $f^{-1}(B_{\frac{2r}{3}}(p))$ and its restriction to $f^{-1}(B_{0.3\frac{r}{2^i}}(p))$ is the identity. Because $$T_{p,r_i}(x)\le \frac{r}{2^{i-1}}\cdot \frac{e^\epsilon}{3}\cdot C_0^{-1},$$ it can be directly verified that the sequence of maps \begin{gather*} \Psi_i:\bigcup_{p\in Y} \{p\}\times f^{-1}(B_{\frac{2r}{3}}(p)) \to X,\\ \Psi_i(p,x)=\Phi_{p,i}^{T_{p,i}}\circ \Phi_{p,i-1}^{T_{p,i-1}}\circ \cdots \circ \Phi_{p,0}^{T_{p,0}}(x) \end{gather*} converges uniformly. The limit $\varphi_p(x)=\displaystyle\lim_{i\to \infty}\Psi_i(p,x)$ gives a retraction from the neighborhood $f^{-1}(B_{\frac{2r}{3}}(p))$ to $f^{-1}(p)$, which by Lemma \ref{lem-continuity} is continuous both in $p$ and $x$. We summarize this in the following proposition.
\begin{proposition} \label{prop:gradient-retract} For any $0<r<\frac{1}{2}\min\{\mu_0,1\}$, there is a deformation retraction $\varphi_p(x)$ from the neighborhood $f^{-1}(B_{\frac{2r}{3}}(p))$ to the fiber $f^{-1}(p)$ such that $$\varphi: \bigcup_{p\in Y} \{p\}\times f^{-1}(B_{\frac{2r}{3}}(p))\to X, \quad \varphi(p,x)=\varphi_p(x)$$ is continuous both in $p$ and $x$, and satisfies \begin{enumerate} \numberwithin{enumi}{theorem} \item $\varphi_p(x)=x$ for any $x\in f^{-1}(p)$, and \item $\abs{x \varphi_p(x)} \le 2C_1r, $ for some constant $C_1(\epsilon)$ depending only on $\epsilon$. \end{enumerate} \end{proposition} \vspace{2mm} \begin{proof}[Proof of Theorem \ref{thm-Hurewicz}] ~ Up to a rescaling we may assume that the lower curvature bounds of both $X$ and $Y$ are $-1$. By Theorem \ref{thm:strong-regular}, it suffices to show that $f$ is strongly regular and every fiber is an ANR. For any $p,q\in Y$ with small distance $0<\abs{pq}<\frac{1}{2}\min\{r_p, \frac{1}{2e^\epsilon}\}$, let $\rho=2\abs{pq}$. By the definition of an LcL, it is easy to see that $$e^{-\epsilon}\cdot |pq| \le d_H(f^{-1}(p),f^{-1}(q))\le e^\epsilon\cdot |pq|.$$ Thus $f^{-1}(q)$ lies in the $e^\epsilon \frac{\rho}{2}$-neighborhood of $f^{-1}(p)$ and vice versa. By Proposition \ref{prop:gradient-retract}, there are neighborhood retractions $\varphi_p: f^{-1}(B_{\frac{2\rho}{3}}(p))\to f^{-1}(p)$ and $\varphi_q: f^{-1}(B_{\frac{2\rho}{3}}(q))\to f^{-1}(q)$ around $f^{-1}(p)$ and $f^{-1}(q)$ respectively. Then the homotopy equivalences between fibers can be chosen to be $\left.\varphi_p\right|_{f^{-1}(q)} :f^{-1}(q)\to f^{-1}(p)$ and $\left.\varphi_q\right|_{f^{-1}(p)}:f^{-1}(p)\to f^{-1}(q)$, and the homotopies are $H_t=\varphi_p\circ \varphi_{\gamma(t)}:f^{-1}(p)\to f^{-1}(p)$ and $K_t=\varphi_q\circ \varphi_{\gamma(1-t)}:f^{-1}(q)\to f^{-1}(q)$, where $\gamma:[0,1]\to Y$ is a minimal geodesic from $p$ to $q$. By (\ref{prop:gradient-retract}.2), $\abs{H_t(x)x}\le 4C_1\rho$ and $\abs{K_t(x)x}\le 4C_1\rho$.
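Spelling out a step that is left implicit here: applying (\ref{prop:gradient-retract}.2) once to $\varphi_{\gamma(t)}$ and once to $\varphi_p$, the triangle inequality gives $$\abs{H_t(x)x}\le \abs{x\,\varphi_{\gamma(t)}(x)}+\abs{\varphi_{\gamma(t)}(x)\,\varphi_p(\varphi_{\gamma(t)}(x))}\le 2C_1\rho+2C_1\rho=4C_1\rho,$$ and the same argument gives the bound for $K_t$.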
Therefore $f$ is strongly regular. According to \cite{Pr91} (cf. \cite{Ka07}, \cite{Petr07}), an Alexandrov space with curvature bounded below is locally contractible. For $x\in f^{-1}(p)$, let $U_x\ni x$ be a contractible neighborhood of $x$ and let $H_t:U_x\to U_x$ be the homotopy from $\operatorname{id}_{U_x}$ to the retraction $r:U_x\to \{x\}$ such that $H_t(x)=x$. Then $\varphi_p\circ H_t$ is a homotopy from $\operatorname{id}_{U_x\cap f^{-1}(p)}$ to the retraction $r :U_x\cap f^{-1}(p)\to \{x\}$. Therefore $f^{-1}(p)$ is locally contractible and thus an absolute neighborhood retract. \end{proof} \subsection{Homotopic uniqueness of fibration} Recently it was proved in \cite{JX18} that two collapsed metrics $g_{i}$ ($i=0,1$) on $M$ induce the same nilpotent Killing structure up to a diffeomorphism, provided that the $g_{i}$ are $L_0$-Lipschitz equivalent and sufficiently collapsed. In the following we prove that, in the homotopic sense, the collapsing fibration in Theorem \ref{thm:Lipschitz-submersion} is unique. We say that two Hurewicz fibrations $f_i:X_i\to Y$ ($i=0,1$) are {\it fibrewise homotopy equivalent} if there are fiber-preserving maps $h:X_0 \to X_1$ and $g : X_1 \to X_0$ and fiber-preserving homotopies between $g\circ h$ and the identity $1_{X_0}$, and between $h\circ g$ and $1_{X_1}$. We say that Hurewicz fibrations $f_i:X_i\to Y_i$ ($i=0,1$) are {\it equivalent} if there is a homeomorphism $\psi:Y_0\to Y_1$ such that $\psi\circ f_0:X_0\to Y_1$ is fibrewise homotopy equivalent to $f_1:X_1\to Y_1$. \begin{theorem}\label{thm:uniqueness} Let $X$, $Y_i$ $(i=0,1)$ be Alexandrov spaces with curv $\ge -1$ such that $Y_i$ satisfies (1.1.1), the dimensions $\dim Y_0=\dim Y_1$, and (1.1.2) holds for $d_{GH}(X,Y_i)$. Then any two Hurewicz fibrations $f_i$ from $X$ to $Y_i$ $(i=0,1)$ provided by Theorem \ref{thm:Lipschitz-submersion} are equivalent.
\end{theorem} It follows either from \cite[Theorem 9.8]{BGP92} (a key lemma of its proof has a flaw; for a correct proof see \cite{SSW13}), or from Theorem \ref{thm:Lipschitz-submersion}, that there is an $e^{\varkappa(\epsilon|n)}$-bi-Lipschitz map $\varphi:Y_0\to Y_1$ such that $\varphi\circ f_0$ is $100\epsilon$-close to $f_1$. Thus, the uniqueness in Theorem \ref{thm:uniqueness} is reduced to the stability result below. \begin{proposition}\label{prop-stability} Let $X$ and $Y$ be two Alexandrov spaces with curv $\ge\kappa$, where $Y$ is $(\mu_0,\ln\sqrt{1.023})$-almost Euclidean. If two $\sqrt{1.023}$-LcLs $f_{0},f_1:X\to Y$ are $\frac{\mu_0}{3}$-close, i.e., \begin{equation} d(f_0,f_1)=\sup_{x\in X}|f_0(x)f_1(x)|<\frac{1}{3}\mu_0, \tag{2.2} \end{equation} then they are equivalent as Hurewicz fibrations. \end{proposition} Proposition \ref{prop-stability} is an improvement of a stability result in \cite{Xu12}. In the proof of Proposition \ref{prop-stability}, we need a ``canonical'' pointed contraction on the base space $Y$, which is constructed similarly to that in Proposition \ref{prop:gradient-retract}. \begin{lemma}\label{lem-ptcontract-base} Let $Y$ be a $(\mu_0,\ln \sqrt{1.02368})$-almost Euclidean Alexandrov space with curv $\ge -1$. There is a continuous pointed contraction on $Y$, $$\tau: \bigcup_{p\in Y} \{p\}\times B_{\mu_0/2}(p)\times [0,1]\to Y, \quad \tau(p,x,0)=x,\quad \tau(p,x,1)=p.$$ \end{lemma} \begin{proof} Note that the estimates in Lemmas \ref{lem-gradient-estimate} and \ref{lem:gradient-estimate} also hold for the distance function $\operatorname{dist}_{S_r(p)}$ for $0<r\le \min\{\mu_0,1\}$. Let $\psi(p,x,t)$ be the limit of the iterated gradient flows of $\operatorname{dist}_{S_{r_i}(p)}$ for $r_i=2^{-i}\mu_0$ with time $t\in [0,T_{p,r_i}(x)]$, where $T_{p,r_i}(x)=\max\{0, C_0^{-1}(\frac{2}{3}e^\epsilon r_i-|xS_{r_i}(p)|)\}$ and $C_0$ is the constant in Lemma \ref{lem:gradient-estimate}. Let $T(p,x)=\sum_{i=0}^\infty T_{p,r_i}(x)$.
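It is worth making explicit, as in the construction preceding Proposition \ref{prop:gradient-retract}, that the total time is finite: since $T_{p,r_i}(x)\le \frac{2e^\epsilon}{3}C_0^{-1}\cdot r_i$ and $r_i=2^{-i}\mu_0$, $$T(p,x)=\sum_{i=0}^\infty T_{p,r_i}(x)\le \frac{2e^\epsilon}{3C_0}\sum_{i=0}^\infty \frac{\mu_0}{2^i}=\frac{4e^\epsilon\mu_0}{3C_0},$$ so the series converges geometrically and the iterated flows converge uniformly.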
It follows from the proof of Proposition \ref{prop:gradient-retract} directly that the map $\tau(p,x,t)=\psi(p,x,tT(p,x))$ satisfies the requirement of Lemma \ref{lem-ptcontract-base}. \end{proof} \vspace{2mm} \begin{proof}[Proof of Proposition \ref{prop-stability}] ~ Let $f_0,f_1:X\to Y$ be the $\sqrt{1.023}$-LcLs between Alexandrov spaces with a $(\mu_0,\ln\sqrt{1.023})$-almost Euclidean base $Y$. We now construct fiber-preserving maps $h,g:X\to X$ and fiber-preserving homotopies from $g\circ h$ to the identity $1_X$ and from $h\circ g$ to $1_X$ as follows. For any point $x\in X$, let $p=f_0(x)\in Y$, let $F_0(p)$ be the fiber $f^{-1}_0(p)$ and $F_1(p)=f^{-1}_1(p)$. Suppose that $d(f_0,f_1)<\frac{\mu_0}{3}$. Then by (\ref{eq:lcl}), $F_0(p)$ lies in the $\sqrt{1.023}\frac{\mu_0}{3}$-neighborhood of $F_1(p)$. Let $\varphi_p$ be the neighborhood retraction of $F_1(p)$ in Proposition \ref{prop:gradient-retract} with respect to $f_1$; we define $h:X\to X$ by $h(x)=\varphi_p(x)=\varphi_{f_0(x)}(x)$. Then the continuous map $h:X\to X$ is globally defined and maps all fibers of $f_0$ into those of $f_1$. Similarly, we define $g:X\to X$ through the neighborhood retractions of $f_0$-fibers such that $f_0\circ g=f_1$, where $g(x)=\psi_{f_1(x)}(x)$ and $\psi_{q}$ is the neighborhood retraction of $f_0^{-1}(q)$ with respect to $f_0$. Note that $f_1(\varphi_{f_0(x)}(x))=f_0(x)$, thus $$g\circ h(x)=\psi_{f_1(\varphi_{f_0(x)}(x))}(\varphi_{f_0(x)}(x))=\psi_{f_0(x)}\circ \varphi_{f_0(x)}(x).$$ Moreover, since $\varphi_{f_1(x)}$ is a neighborhood retraction onto $F_1(f_1(x))$, $\varphi_{f_1(x)}(x)=x$. Similarly, $\psi_{f_0(x)}(x)=x$, and thus $$\psi_{f_0(x)}\circ \varphi_{f_1(x)}=\operatorname{1}_X:X\to X.$$ For $p_0=f_0(x)$, let $p_1=f_1(x)$ and let $p_t=\tau(p_1,p_0,t)$ be the curve provided by Lemma \ref{lem-ptcontract-base}. Then $p_t$ is a curve from $p_0$ to $p_1$ depending continuously on $x$ and $t$.
We define the fiber-preserving homotopy $H_t:X\to X$ by $H_t(x)=\psi_{f_0(x)}\circ \varphi_{p_t}(x)$. Then $H:[0,1]\times X\to X$ is an $f_0$-fiber-preserving continuous map such that $H_0=g\circ h$ and $H_1=1_{X}$. A fiber-preserving homotopy from $h\circ g$ to the identity $1_X$ can be defined similarly. \end{proof} \section{Margulis lemma on Alexandrov spaces} \subsection{Proof of Theorem \ref{thmb-margulis}} Let $X$ be a locally complete Alexandrov $n$-space of curv $\ge -1$. Let $B_1(p)$ be the $1$-ball centered at some point $p\in X$. It is well-known that, sufficiently far away from where $X$ fails to be complete, the global Toponogov comparison holds on an Alexandrov space (\cite{BGP92}, \cite{Pl96}). To be precise, there is a constant $C_T$ such that the Toponogov comparison holds for any triangle in $B_{\frac{1}{C_T}}(p)$, provided that $B_1(p)$ is relatively compact. According to the proof of the global Toponogov theorem in \cite{LS12} or \cite{Wa18}, it is enough to choose $C_T=100$. Note that in a locally complete Alexandrov $n$-space of curv $\ge -1$, the convex hull of a triangle may not be bounded. However, by the proof of the Toponogov comparison (cf. \cite{LS12}, \cite{Wa18}), any contradicting triangle can be reduced successively to other ones, whose perimeters decay in a definite ratio to form a convergent geometric progression, so that a contradiction can be derived in a neighborhood of the initial triangle whose radius is at most $12$ times the initial perimeter. Due to the above discussion, the fundamental facts about complete Alexandrov spaces will be freely applied locally in this section without further mention. We first reduce Theorem \ref{thmb-margulis} to the following special case. For any $p\in X$ and $q\in B_1(p)$, $0<r\le 1-|pq|$, let $\Gamma_{p}(q;r)$ be the subgroup of the fundamental group $\pi_1(B_1(p),q)$ generated by loops at $q$ lying in $B_r(q)$. As before, we will use $d(p,q)$ or $|pq|$ to denote the distance between two points.
\begin{theorem}\label{thm-good-point} Suppose that $B_1(p)$ is relatively compact in $X$. Then there are positive constants $\epsilon(n),w(n)>0$, both depending only on the dimension $n$, such that there is a ``good'' point $q\in B_{\frac{1}{2C_T}}(p)$ for which $\Gamma_p(q;\epsilon(n))$ is $w(n)$-nilpotent. \end{theorem} For general points in $X$, we need the following result in \cite{KW11}. \begin{lemma}[{\cite[Step 2 in \S7] {KW11}}]\label{lem-finite-index-of-uniform-small} \label{lem-packing-index} For any positive integer $n$ and $0<\epsilon\le \epsilon(n)$ with $\epsilon(n)$ as in Theorem \ref{thm-good-point}, there is $L(\epsilon,n)>0$ such that the following holds. Let $X$ be an Alexandrov $n$-space of curv $\ge -1$, and let $p\in X$ be a point such that $B_1(p)$ is relatively compact in $X$. Let $\Gamma=\left<\beta_1,\cdots,\beta_k:d(\beta_ip,p)\le \frac{1}{100C_TL(\epsilon,n)}\right>$ be a discrete subgroup of isometries of $X$ that acts freely. Then the subgroup $$H=\left<g\in\Gamma: d(gx,x)\le \epsilon, \forall x\in B_{\frac{1}{2C_T}}(p)\right>$$ has finite index $[\Gamma:H]\le (2k+1)^{L(\epsilon,n)}$. \end{lemma} Note that any isometry $\gamma$ of $X$ which moves $p$ no farther than $\frac{1}{100C_T}$ but moves some point in $B_{\frac{2}{3C_T}}(p)$ farther than $\epsilon$ must move a point of any maximal $\frac{\epsilon}{4}$-net of $B_{\frac{2}{3C_T}}(p)$ farther than $\frac{\epsilon}{4}$. Thus the possibilities for such isometries reduce to permutations of the net, whose total number is controlled by the relative volume comparison (see \cite{LiRong12}).
By considering the naturally extended action of $\Gamma$ on the $m$-fold direct product of $X$ with itself, the total number of cosets of $H$ can be counted via a wordlength-cutting-off argument with $$L(\epsilon,n)=\frac{(\operatorname{vol} B_{-1}^n(\frac{3}{4C_T}))^{m(\epsilon,n)}}{\operatorname{vol} B_{-1}^{m(\epsilon,n)n}(\frac{\epsilon}{8})}, \qquad \text{where}\quad m(\epsilon,n)=\frac{\operatorname{vol} B_{-1}^n(\frac{2}{3C_T})}{\operatorname{vol} B_{-1}^n(\frac{\epsilon}{8})},$$ and $B_{-1}^n(r)$ denotes an $r$-ball in the hyperbolic space $\mathbb H^n$. For details, see \cite[\S 7]{KW11}. Assuming Theorem \ref{thm-good-point}, we now prove Theorem \ref{thmb-margulis}. \vspace{2mm} \begin{proof}[Proof of Theorem \ref{thmb-margulis}] ~ Let $(\tilde X,\tilde p)\to (B_1(p),p)$ be the universal cover of $B_1(p)$. Now we take $\epsilon(n)$ and $q$ to be the constant and a corresponding ``good'' point given by Theorem \ref{thm-good-point}. Let $$H=\left<g\in \pi_1(B_1(p),p):d(gx,x)\le \epsilon(n) \text{ for any $x\in B_{\frac{1}{2C_T}}(\tilde p)$}\right>.$$ Since $q\in B_{\frac{1}{2C_T}}(p)$, $H$ can be viewed as a subgroup of $\Gamma_p(q;\epsilon(n))$, hence $H$ is $w(n)$-nilpotent. Since for some $0<\delta\le \frac{1}{2C_T}$, $B_{2\delta}(p)$ is locally contractible, any loop at $p$ lying in $B_{\delta}(p)$ is homotopic to a concatenation of loops at $p$ of length at most $3\delta$. By a standard argument via Gromov's short basis, the generating set of $\Gamma_p(p;\delta)$ can be chosen to consist of at most $k(n)$ elements. By Lemma \ref{lem-packing-index}, letting $\delta(n)=\frac{1}{300C_TL(\epsilon(n),n)}$, for any $0<\delta\le \delta(n)$ we have $[\Gamma_p(p;\delta):\Gamma_p(p;\delta)\cap H]\le c(n)$. Since $\Gamma_p(p;\delta)\cap H$ is a subgroup of $H$, hence $w(n)$-nilpotent, and has finite index $\le c(n)$ in $\Gamma_p(p;\delta)$, we derive that $\Gamma_p(p;\delta)$ itself is $w'(n)$-nilpotent.
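(Here, presumably, one may take $$c(n)=(2k(n)+1)^{L(\epsilon(n),n)},$$ obtained by applying Lemma \ref{lem-packing-index} to the at most $k(n)$ short generators of $\Gamma_p(p;\delta)$.)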
\end{proof} The rest of this paper is devoted to proving Theorem \ref{thm-good-point}. We will argue by contradiction. Assume the contrary; then there is a sequence $(X_\alpha,p_\alpha)$ of Alexandrov $n$-spaces with curv $\ge -1$ such that for any $q_\alpha\in B_{\frac{1}{2C_T}}(p_\alpha)$, $\Gamma_{p_\alpha}(q_\alpha;\alpha^{-1})$ fails to be $w(n)$-nilpotent. By passing to a subsequence, we may assume that $(X_\alpha, p_\alpha)\overset{GH}{\longrightarrow}(X_\infty,p_\infty)$, i.e., $(X_\alpha, p_\alpha)$ Gromov-Hausdorff converges to a limit space $(X_\infty,p_\infty)$. Following \cite{KPT10}, we will show in the remaining sections that: \begin{claim}\label{claim-good-point} By passing to a subsequence, there is $0<R_1\le \frac{1}{16C_T}$ such that for each sufficiently large $\alpha$, there is a point $q_\alpha\in B_{\frac{1}{2C_T}}(p_\alpha)$ and a chain of subgroups $$G_{1,\alpha}=\Gamma_{p_\alpha}(q_\alpha;R_1)\vartriangleright G_{2,\alpha} \vartriangleright\cdots \vartriangleright G_{l,\alpha}=\{e\} \qquad (l\le n)$$ satisfying \begin{enumerate} \item[(A)] $G_{i,\alpha}/G_{i+1,\alpha}$ is $C$-abelian; \item[(B)] $G_{i,\alpha}\vartriangleleft G_{1,\alpha}$; \item[(C)] By (A) and (B), $G_{1,\alpha}$ acts on $G_{i,\alpha}/G_{i+1,\alpha}$ by conjugation, which induces a homomorphism $\rho_{i,\alpha}:G_{1,\alpha}\to \operatorname{Out}(G_{i,\alpha}/G_{i+1,\alpha})$. The image $\rho_{i,\alpha}(G_{1,\alpha})$ is finite, with $\#\rho_{i,\alpha}(G_{1,\alpha})< N_0$. \end{enumerate} \end{claim} Then by \cite[Lemma 4.2.1]{KPT10}, we derive that $G_{1,\alpha}$ is $w(C, N_0)$-nilpotent, a contradiction. In order to construct each $G_{i,\alpha}$, we define the local fundamental groups (Definition \ref{def-local-group} below).
Then (B) and (C) would follow from the leveled gap property (Definition \ref{def-leveled-gap}) and a universal estimate of gradient push associated to a $\delta^2$-maximal frame (Definition \ref{def-maximal-frame}); see Section \ref{sec-B-C-prime}. (A) will be guaranteed by the construction and the generalized Bieberbach theorem (\cite{FY92}, cf. \cite{Ya96}); see Proposition \ref{prop-existence-local-group}. \subsection{Local fundamental group and numerical maximal frame}\label{subsec-local-group} We first introduce the local fundamental group that will realize $G_{i,\alpha}$. \begin{definition}\label{def-local-group} Let $X$ be a locally complete Alexandrov $n$-space with curv $\ge -1$. Let $p$ be a point in $X$ such that the metric ball $B_1(p)$ is relatively compact in $X$. For $0<r\le \frac{1}{2}$, the \emph{$r$-local fundamental group} $\pi_1^L(p;r)$ at $p$ is defined to be $$\pi_1^L(p;r)=\left< \text{loop $\gamma$ at $p$}:\operatorname{im}\gamma \subset B_{r}(p)\right>/\sim,$$ where $\gamma_1\sim \gamma_2$ if they are homotopic in $B_{2r}(p)$. \end{definition} For $r_1>r_2>0$, let $\imath:\pi_1^L(p;r_2)\to \pi_1^L(p;r_1)$ be the inclusion homomorphism. A key property used in proving (B) and (C) is a certain ``leveled gap'' between local fundamental groups at different scales, as follows. \begin{definition}\label{def-leveled-gap} We say that $\pi_1^L(p;R_1)$ satisfies the \emph{$(\epsilon,\sigma,l)$-leveled gap property} if there is a sequence of intervals $[r_l=0,R_l]$, $\cdots$, $[r_1,R_1]$ such that \begin{enumerate}\numberwithin{enumi}{theorem} \item $r_{i}\le \epsilon R_i$, and $r_i/R_{i+1}\le \sigma$, \item $\imath:\pi_1^L(p;r_i)\to \pi_1^L(p;R_i)$ is an isomorphism, \item $\imath(\pi_1^L(p;R_{i+1}))\vartriangleleft \pi_1^L(p;r_i)$.
\end{enumerate} \end{definition} In practice, $r_i=3\operatorname{diam} Y_{i+1}$, where $Y_{i+1}$ is a ``regular fiber'' at the $i$-th level, which by definition is a level set of $F_{k_1+\cdots+k_i}=(\operatorname{dist}_{a_1},\cdots,\operatorname{dist}_{a_{k_1+\cdots+k_i}})$, where the $a_j$ are from a maximal $(k_1+\cdots+k_i)$-frame (for the definition see below), $\operatorname{dist}_{a_j}$ is the distance function to $a_j$, and $2R_i$ is the radius of the base disk of Perelman's fibration $F_{k_1+\cdots+k_i}$ around a regular point in a limit space. Secondly, we introduce the $\delta^2$-maximal frame. Let $X$ be an Alexandrov $n$-space and let $k$ be a positive integer $\le n$. Let $0<\delta\le \frac{1}{10^2}$. By \cite{BGP92}, given a pair of points $(a_1,b_1)$ and a minimal geodesic segment $[a_1b_1]$ between them, a $k$-frame $\{[a_ib_i]\}_{i=1}^k$, which consists of $k$ minimal geodesic segments $[a_ib_i]$, can be built up successively (and non-uniquely) on $X$: assuming $[a_{i-1}b_{i-1}]$ has been defined, take $[a_ib_i]$ on $X$ satisfying the following \numberwithin{enumi}{theorem} \addtocounter{theorem}{1} \begin{enumerate} \item $b_i$ is the middle point of the geodesic segment $[a_{i-1}b_{i-1}]$, \item $|a_ia_{j}|=|b_ia_{j}|$, for all $1\le j\le i-1$, \item\label{delta-col} the edge $[a_ib_i]$ is \emph{$\delta^2$-collapsed}, i.e., $|a_{i}b_{i}|\le \delta^2|a_{i-1}b_{i-1}|$. \end{enumerate} A little more generally, we will consider $k$-frames where $b_i$ is not far away from the middle point $m_{i-1}$ of $[a_{i-1}b_{i-1}]$. Let $\{[a_ib_i]\}_{i=1}^k$ be a $k$-frame. Let $F_k=(\operatorname{dist}_{a_1},\cdots,\operatorname{dist}_{a_k}):X\to \mathbb R^k$.
A new pair $(a_{k+1},b_{k+1})$ is called \emph{$\delta^2$-maximal} relative to a $k$-frame $\{[a_ib_i]\}_{i=1}^k$ if \addtocounter{theorem}{1} \begin{enumerate} \item $b_{k+1}$ is $\frac{\delta}{100}|a_kb_k|$-close to the middle point $m_k$ of $[a_kb_k]$, \item $F_{k}(a_{k+1})=F_k(b_{k+1})$, \item\label{cond-num-maximal} $|a_{k+1}b_{k+1}|= d_{k+1}$, where $$d_{k+1}=\max\{|b_{k+1}x|: x\in F_k^{-1}(F_k(b_{k+1})) \text{ and } |x b_{k+1}|\le \delta^2\min_{i=1,\dots, k}|a_{i}b_{i}|\}.$$ \end{enumerate} Note that by (\ref{cond-num-maximal}), one always has $|a_{k+1}b_{k+1}|\le \delta^2\min_{i=1,\dots, k}|a_{i}b_{i}|$. \begin{definition}\label{def-maximal-frame} A $k$-frame is called $\delta^2$-\emph{maximal} if for each $2\le i\le k$, $(a_i,b_i)$ is $\delta^2$-maximal relative to the $(i-1)$-frame $\{[a_{j}b_{j}]\}_{j=1}^{i-1}$. For a $\delta^2$-maximal $n$-frame $\{[a_{j}b_{j}]\}_{j=1}^{n}$, we say that $\{[a_{j}b_{j}]\}_{j=1}^{n}$ is \emph{centered} at $x\in X$ if the point $x$ is $\frac{\delta}{100}|a_nb_n|$-close to $m_n$. \end{definition} By the construction above, Theorem \ref{thm-gradientpush-1} is reduced to a universal estimate of gradient push associated to a $\delta^2$-maximal frame; see Theorem \ref{thm-gradientpush} in the appendix. The following fact on the gradient flow of $\lambda$-concave functions on Alexandrov spaces will be applied in proving Theorem \ref{thm-good-point}. \begin{theorem}[\cite{Petr07}]\label{thm-gradient-lip} Let $\Phi_t$ be the gradient flow of a $\lambda$-concave function on a complete Alexandrov space. Then $\Phi_t:X\to X$ is $e^{\lambda t}$-Lipschitz. \end{theorem} \begin{remark} We remark that all results on gradient push with respect to a $\delta^2$-maximal frame also hold for an $(n,\delta)$-strainer with a suitable maximality property. We only use maximal frames in this paper for simplicity.
\end{remark} \section{Proofs of Claims (B) and (C)}\label{sec-B-C-prime} We now prove that the existence of the $(\epsilon,\sigma,l)$-leveled gap property and a $\delta^2$-maximal frame centered at $p$ implies that (B) and (C) hold for $G_i=\imath\pi_1^L(q;R_i)$ with $q\in B_\frac{1}{2C_T}(p)$. Throughout this subsection, we always assume that $X$ is a locally complete Alexandrov $n$-space with curv $\ge -1$ such that the metric ball $B_1(p)$ is relatively compact in $X$. Let $q\in B_\frac{1}{2C_T}(p)$, $0<R_1\le \frac{1}{16C_T}$. Let $\pi_1^L(q;R_i)$ be a local fundamental group satisfying the $(\epsilon,\sigma,l)$-leveled gap property. Let $G_i=\imath\pi_1^L(q;R_i)$ for each $1\le i\le l$. Then by the proofs in \cite{KPT10}, (B) and (C) hold for $G_i$. We give a proof for completeness. \begin{proposition}[\cite{KPT10}]\label{prop-B-C-for-local-group} For $0<R_1\le \frac{1}{16C_T}$ and $\sigma>0$, there is $\epsilon(n)>0$ such that the following holds for any $0<\epsilon<\epsilon(n)$: given local fundamental groups $\pi_1^L(q;R_i)$ satisfying the $(\epsilon,\sigma,l)$-leveled gap property for intervals $[r_l=0,R_l]$, $\cdots$, $[r_1,R_1]$, if there is a $\delta^2$-maximal frame $\{[a_jb_j]\}_{j=1}^n$ centered at $q$ such that $$|a_1b_1|=\min \{\frac{1}{2C_T},\frac{1}{2}\operatorname{diam} X\},$$ then the chain of groups $G_i=\imath\pi_1^L(q;R_i)$, namely $$G_1\vartriangleright G_2\vartriangleright\cdots \vartriangleright G_l=\{e\},$$ satisfies (B) and (C). \end{proposition} Let $S_0$ be a short basis of $\pi_1(B_1(p),q)$ and $S_i=(S_0\cap G_{i})\cup (S_0\cap G_i)^{-1}$. For any $\gamma\in G_1$, the norm $|\gamma|$ is defined to be the minimal length of its representative loops. The following elementary fact will be used in proving Lemma \ref{lem-conj-lip-control} below and (B), (C). \begin{lemma}\label{lem-keep-gap} Any element $\gamma\in S_i\setminus S_{i+1}$ has norm $$2R_{i+1}\le|\gamma|\le \frac{2\epsilon}{3}\cdot \min_{\beta\in S_{i-1}\setminus S_i}|\beta|$$ and $G_i=\left< S_i\right>$.
\end{lemma} \begin{proof} Since $\imath:\pi_1^L(q;r_i)\to \pi_1^L(q;R_i)$ is an isomorphism, any loop lying in $B_{R_i}(q)$ at $q$ is homotopic to a loop lying in $B_{r_i}(q)$ at $q$. Furthermore, since $B_{2r_i}(q)$ is locally contractible, any loop lying in $B_{r_i}(q)$ at $q$ is homotopic to a joining of loops not longer than $3r_i$ at $q$. Because $S_0$ is a short basis of $\pi_1(B_1(p),q)$, it can be seen that for any $\gamma\in S_i\setminus S_{i+1}$, $$2R_{i+1}\le |\gamma|\le 3r_{i}$$ and $G_i=\left< S_i\right>$. \end{proof} Via gradient push by a $\delta^2$-maximal $n$-frame on a certain cover $\hat X$ of $B_1(p)$ and a $\delta^2$-maximal $n$-frame centered at $q$, any loop in $G_1$ whose action on $\hat X$ has a definite displacement admits, up to conjugation, the control in the following Lemma \ref{lem-conj-lip-control}, which is essential in proving (B) and (C). \begin{lemma}[\cite{KPT10}]\label{lem-conj-lip-control} Assume that there is a $\delta^2$-maximal frame $\{[a_jb_j]\}_{j=1}^n$ centered at $q$ such that $|a_1b_1|=\min \{\frac{1}{2C_T},\frac{1}{2}\operatorname{diam} X\}$. Suppose that $G_i\vartriangleleft G_1$. Then for any element $\gamma=\gamma_1*\cdots*\gamma_m$, $\gamma_j\in S_1$, with $|\gamma| \le \frac{R_1}{100C(n)}$, there is $\beta\in G_{i}$ such that for any loop $\alpha\in G_{i}$ with $|\alpha|\le 3r_i$, $$|(\gamma*\beta)^{-1}*\alpha*(\gamma*\beta)|\le e^{2 \frac{\cosh2}{\sinh2} (2T(n)+C(n))} |\alpha|,$$ where $C(n)$ is the constant in Remark \ref{rem-push-length}, $T(n)$ is the constant in Theorem \ref{thm-gradientpush}, and $(\hat X_i,\hat q_i) \overset{\pi_i}{\to} (B_1(p),q)$ is a suitably defined cover with $\pi_{i*}\pi_1(\hat X_i,\hat q_i)=G_{i}$. \end{lemma} \begin{proof} Up to a lifting to a cover $(\hat X_1,\hat q_1) \overset{\pi_1'}{\to} (B_1(p),q)$ with $$\pi_{1*}'\pi_1(\hat X_1,\hat q_1)=G_{1},$$ we may assume that $\pi_1(B_1(p),q)=G_1$.
Indeed, by the definition of $G_1$, $\pi_1'$ maps $B_{R_1}(\hat q_1)$ homeomorphically onto $B_{R_1}(q)$. If we want to construct a homotopy lying in $B_{R_1}(\hat q_1)$ of a short loop, we can actually carry out the construction in $X$, with the resulting homotopy lying in $B_{R_1}(q)$, and then compose this homotopy with $(\pi_1'|_{B_{R_1}(\hat q_1)})^{-1}$. Let $(\hat X_i,\hat q_i) \overset{\pi_i}{\to} (B_1(p),q)$ be a cover with $\pi_{i*}\pi_1(\hat X_i,\hat q_i)=G_{i}$. Then by our assumption, $\pi_i$ is a normal cover. (This assumption will also be used in proving (B) and (C).) Let us construct a $\delta^2$-maximal frame $\{[\hat c_j\hat o_j]\}_{j=1}^n$ on $\hat X_i$ such that $\pi_i(\hat c_1)=q$ and $|\hat c_1\hat o_1|=\min \{\frac{R_1}{100C(n)},\frac{1}{2}\operatorname{diam} \hat X_i\}$. Let $\hat q_i'$ be a regular centered point of $\{[\hat c_j\hat o_j]\}_{j=1}^n$, i.e., $\hat q_i'$ is close to the middle point $\hat m_n$ of $[\hat c_n\hat o_n]$. Since $|q\pi_i(\hat q_i')|\le \min \{\frac{R_1}{100C(n)},\operatorname{diam} X\}$, there is a gradient push $\varphi$ of $\{[a_jb_j]\}_{j=1}^n$ in time $\le T(n)$ such that $\varphi(q)=\pi_i(\hat q_i')$, which gives rise to a homotopy $H$ from $\alpha$ to a loop $\varphi\circ \alpha$ at $\pi_i(\hat q_i')$. Moreover, the whole pushing line of broken geodesics has total length $\le C(n)\cdot |q\pi_i(\hat q_i')|$ (see Remark \ref{rem-push-length}). Since $G_{i}\vartriangleleft G_1$, there exists a lifting $\hat \alpha$ of $\alpha$ at $\gamma\hat q_i$, and a lifting homotopy $\hat H$ of $H$ on $\hat X_i$ from $\hat \alpha$ to $\hat\alpha'=\widehat{\varphi\circ \alpha}$, whose base points are $\gamma\hat q_i$ and $\hat q_i''$. Then $\hat H$ and $\hat \alpha'$ lie in $B_{R_1}(\gamma\hat q_i)$. Moreover, there exists a deck transformation $\psi$ that maps $\hat q_i''$ to $\hat q_i'$. Let $\{\psi^{-1}[\hat c_j\hat o_j]\}$ be the pullback $\delta^2$-frame at $\hat q_i''$.
Then there is a gradient push $\hat\phi$ of $\{\psi^{-1}[\hat c_j\hat o_j]\}$ in time $\le T(n)+C(n)$, which gives rise to a homotopy from $\hat\alpha'$ to $\hat \alpha''$, whose base point is $\hat q_i$. Joining the two homotopies above together, we get a homotopy from $\hat \alpha$ to $\hat \alpha''$, whose base points are $\gamma \hat q_i$ and $\hat q_i$ respectively. Note that any single step in these two homotopies is defined by a gradient flow of $\frac{1}{2}\operatorname{dist}_x^2$ with $\operatorname{dist}_x<2$ for some $x$, hence the concavity of $\frac{1}{2}\operatorname{dist}_x^2$ is bounded by $2 \frac{\cosh2}{\sinh2}$. By Theorem \ref{thm-gradientpush} and Theorem \ref{thm-gradient-lip}, the length of $\pi(\hat \alpha'')$ satisfies $$\operatorname{length}\pi(\hat \alpha'')\le e^{2 \frac{\cosh2}{\sinh2} (2T(n)+C(n))}\cdot \operatorname{length}\alpha.$$ Let $\gamma'$ be the successive joining of the push curves of $\varphi$ and $\pi\hat\phi$. Then it is clear that $\alpha''$ is homotopic to $\gamma'^{-1}*\alpha*\gamma'$, and there is $\beta\in G_i$ such that $\gamma'=\gamma*\beta$. \end{proof} \vspace{2mm} \begin{proof}[Proof of (B) in Proposition \ref{prop-B-C-for-local-group}] ~ By the definition of the leveled gap property (Definition \ref{def-leveled-gap}), $G_{2}\vartriangleleft G_{1}$. We now prove $G_3\vartriangleleft G_1$. Let $(\hat X_2,\hat q_2) \overset{\pi_2}{\to} (B_1(p),q)$ be the normal cover defined in the proof of Lemma \ref{lem-conj-lip-control}. For any $\gamma\in S_1$, we have $|\gamma| \le \frac{R_1}{100C(n)}$ once $\epsilon$ in Definition \ref{def-leveled-gap} is sufficiently small. By Lemma \ref{lem-conj-lip-control}, there is $\beta\in G_2$ such that $\gamma'=\gamma*\beta$ satisfies, for any $\alpha\in S_3$, $$|\gamma'^{-1}*\alpha*\gamma'|\le e^{2 \frac{\cosh2}{\sinh2} (2T(n)+C(n))}|\alpha|.$$ Taking $\epsilon^{-1}>300C(n)e^{2 \frac{\cosh2}{\sinh2} (2T(n)+C(n))}$, by Lemma \ref{lem-keep-gap} we get $\gamma'^{-1}*\alpha*\gamma'\in G_3$.
Since $G_3\vartriangleleft G_2$, $\gamma^{-1}*\alpha*\gamma\in G_3$. This implies $G_3\vartriangleleft G_1$. Repeating the argument above for loops in each $G_i$ for $i\ge 4$ successively, we complete the proof. \end{proof} \vspace{2mm} \begin{proof}[Proof of (C) in Proposition \ref{prop-B-C-for-local-group}] ~ For any fixed integer $m$, let $S_1^m=\{\gamma\in G_1: \operatorname{wordlength}(\gamma)\le m\}$. Firstly, similar to the proof of (B), let $(\hat X_i,\hat q_i) \overset{\pi_i}{\to} (B_1(p),q)$ be the normal cover defined in the proof of Lemma \ref{lem-conj-lip-control}. For any $\gamma\in S_1^m$, we have $|\gamma| \le \frac{R_1}{100C(n)}$ once $\epsilon$ in Definition \ref{def-leveled-gap} is sufficiently small. By Lemma \ref{lem-conj-lip-control}, there is $\beta\in G_i$ such that $\gamma'=\gamma*\beta$ satisfies, for any $\alpha\in S_i$, $$|\gamma'^{-1}*\alpha*\gamma'|\le e^{2 \frac{\cosh2}{\sinh2} (2T(n)+C(n))}|\alpha|.$$ Secondly, let us consider the normal cover $(\hat X_{i+1},\hat q_{i+1}) \overset{\pi_{i+1}}{\to} (B_1(p),q)$ defined in the proof of Lemma \ref{lem-conj-lip-control}. Then the relative volume comparison holds in $B_{R_1}(\hat q_{i+1})$ (see \cite{LiRong12}). By counting the lattice points $G_1(\hat q_{i+1})$ in balls of $(\hat X_{i+1},\hat q_{i+1})$, up to an inner automorphism of $G_i/G_{i+1}$ the number of possible transformations induced by $\rho_i(S_1^m)$ on $G_i/G_{i+1}$ is bounded by $$\left(\frac{\operatorname{vol}B_{-1}^n(e^{2 \frac{\cosh2}{\sinh2} (2T(n)+C(n))}\cdot 3r_{i+1}+R_{i+2})}{\operatorname{vol}B_{-1}^n(R_{i+2})}\right)^{\# (S_i\setminus S_{i+1})}.$$ Let $N_0=\sup_{0<R_{i+2}\le 1}\left[\left(\frac{\operatorname{vol}B_{-1}^n((e^{2 \frac{\cosh2}{\sinh2} (2T(n)+C(n))}\cdot 3\sigma+1)R_{i+2})}{\operatorname{vol}B_{-1}^n(R_{i+2})}\right)^{c(n)}\right]+2$, where $c(n)$ is an upper bound on the number of elements $\#S_1$ of a short basis. Taking $\epsilon^{-1}>300C(n)N_0$, by Lemma \ref{lem-keep-gap} we get $\#\rho_i(S_1^{N_0})< N_0$.
Thus by \cite[Trivial Lemma 4.2.2]{KPT10}, $\#\rho_i(G_1)< N_0$. \end{proof} \section{Proof of Claim (A)} To finish the proof of Theorem \ref{thm-good-point}, it suffices to construct the local fundamental groups and maximal frames associated to a contradicting sequence $(X_\alpha, p_\alpha)\overset{GH}{\longrightarrow}(X_\infty,p_\infty)$, and then verify (A). \begin{proposition}\label{prop-existence-local-group} Let $(X_\alpha,p_\alpha)\overset{GH}{\longrightarrow}(X_\infty,p_\infty)$ be a convergent sequence of Alexandrov $n$-spaces with curv $\ge -1$ such that $\operatorname{diam} X_\infty\ge 1$. Then, by passing to a subsequence of $(X_\alpha, p_\alpha)$, there are $0<R_1\le \frac{1}{16C_T}$, $\sigma>0$, $1\le l\le n$, and for all large $\alpha\in \mathbb N$ there exists a point $q_\alpha\in B_{\frac{1}{2C_T}}(p_\alpha)$ such that \begin{enumerate} \item \label{item-construction} there is an associated $\delta^2$-maximal $n$-frame centered at $q_\alpha$ with $a_{1,\alpha}=p_\alpha$, $|a_{1,\alpha}b_{1,\alpha}|=\frac{1}{2C_T}$; \item \label{item-leveled-gap} the $R_1$-local fundamental group $\pi_1^L(q_\alpha; R_1)$ satisfies the $(\epsilon_\alpha,\sigma,l)$-leveled gap property with respect to $[r_{l,\alpha}=0,R_{l,\alpha}]$, $\cdots$, $[r_{1,\alpha},R_{1,\alpha}=R_1]$ and $\epsilon_\alpha\to 0$; \item \label{item-almost-abelian} $\pi_1^L(q_\alpha;r_{i,\alpha})/\imath\pi_1^L(q_\alpha;r_{i+1,\alpha})$ is $C$-abelian for some constant $C$. \end{enumerate} \end{proposition} Now Theorem \ref{thm-good-point} follows from the earlier arguments in Section \ref{sec-B-C-prime} and Proposition \ref{prop-existence-local-group}.
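For the reader's convenience, we record a sketch (added here for exposition; the precise radii used in the text differ from configuration to configuration) of the standard packing estimate via relative volume comparison that underlies both the constant $L(\epsilon,n)$ of Section 1 and the bound $N_0$ in the proof of (C).

```latex
% Sketch (assumption: the generic packing configuration; the radii in the
% body of the paper are adapted to each situation).
% Let x_1, ..., x_N \in B_R(p) in an Alexandrov n-space with curv >= -1 be
% pairwise 2r-separated. Then the balls B_r(x_j) are disjoint and contained
% in B_{2R+r}(x_1), so
\[
N\cdot \min_{j}\operatorname{vol} B_r(x_j) \;\le\; \operatorname{vol} B_{2R+r}(x_1).
\]
% Since B_{2R+r}(x_1) \subset B_{4R+r}(x_j) for each j, relative volume
% comparison gives
\[
\operatorname{vol} B_r(x_j) \;\ge\; \operatorname{vol} B_{2R+r}(x_1)\cdot
  \frac{\operatorname{vol} B_{-1}^n(r)}{\operatorname{vol} B_{-1}^n(4R+r)},
\qquad\text{hence}\qquad
N \;\le\; \frac{\operatorname{vol} B_{-1}^n(4R+r)}{\operatorname{vol} B_{-1}^n(r)}.
\]
% Bounds of this shape, applied to orbit (lattice) points with separation
% coming from displacement lower bounds, produce m(\epsilon,n),
% L(\epsilon,n), and N_0.
```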
\vspace{2mm} \begin{proof}[Proof of Theorem \ref{thm-good-point}] ~ Continuing the earlier discussion, we have assumed a contradicting sequence $(X_\alpha,p_\alpha)$ of Alexandrov $n$-spaces with curv $\ge -1$, such that for any $q_\alpha\in B_{\frac{1}{2C_T}}(p_\alpha)$, $\Gamma_{p_\alpha}(q_\alpha;\alpha^{-1})$ fails to be $w(n)$-nilpotent, and $(X_\alpha, p_\alpha)$ Gromov-Hausdorff converges to a limit space $(X_\infty,p_\infty)$. Up to changing $X_\alpha$ to $X_\alpha \times \mathbb R^1$, we may further assume that $\operatorname{diam} X_\infty\ge 1$. By Section \ref{sec-B-C-prime}, it suffices to construct a sequence of local groups at some $q_\alpha\in B_{\frac{1}{2C_T}}(p_\alpha)$ with the leveled gap property such that (A) holds for $G_{i,\alpha}=\imath\pi_1^L(q_\alpha;R_{i,\alpha})$, and there are $\delta^2$-maximal frames at $q_{\alpha}$. Since the construction follows from Proposition \ref{prop-existence-local-group}, the proof of Theorem \ref{thm-good-point} is complete. \end{proof} The remainder of the paper is devoted to proving Proposition \ref{prop-existence-local-group}. The following non-collapsing property of maximal frames will be used in its proof. \begin{lemma}\label{lem-maximal-frame-non-collapse} Let $(X_\alpha^n,p_\alpha)\overset{GH}{\longrightarrow}(A^k,p_\infty)$ be a sequence of Alexandrov $n$-spaces that converges to an Alexandrov $k$-space $A^k$. Let $\{[a_{i,\alpha}b_{i,\alpha}]\}_{i=1}^k$ be a $\delta^2$-maximal frame in $X_\alpha^n$ with $a_{1,\alpha}=p_\alpha$, $0<d\le d_{1,\alpha}=|a_{1,\alpha}b_{1,\alpha}|\le 1$. By passing to a subsequence, $\{[a_{i,\alpha}b_{i,\alpha}]\}_{i=1}^k$ converges to a frame $\{[a_{i,\infty}b_{i,\infty}]\}_{i=1}^k$ in $A^k$. Then $|a_{k,\infty}b_{k,\infty}|>0$. \end{lemma} \begin{proof} We argue by induction on $i$. Assuming $d_{i,\infty}=|a_{i,\infty}b_{i,\infty}|>0$ for $i<k$, it suffices to show $d_{i+1,\infty}>0$.
Since $\dim A^k>i$, there is a point $a'_{i+1,\infty}$ in $A^k$ such that $0<|a'_{i+1,\infty}b_{i+1,\infty}|\le \frac{1}{4}\delta^2 d_{i,\infty}$ and $F_{i,\infty}(a'_{i+1,\infty})=F_{i,\infty}(b_{i+1,\infty})$, where $F_{i,\infty}=(\operatorname{dist}_{a_{1,\infty}},\cdots,\operatorname{dist}_{a_{i,\infty}})$. Take points $a'_{i+1,\alpha}\in X_\alpha^n$ which converge to $a'_{i+1,\infty}$. By \cite[Theorem 5.4]{BGP92} (or see Theorem \ref{thm-co-Lip-part-dist-coord} below), without loss of generality we assume that $F_{i,\alpha}$ is $\frac{1-2i\delta}{\sqrt{i}}$-open, i.e., $\frac{\sqrt{i}}{1-2i\delta}$-co-Lipschitz, on $B_{\delta d_{i,\alpha}}(b_{i+1,\alpha})$. Hence there is $a''_{i+1,\alpha}$ in $F_{i,\alpha}^{-1}(F_{i,\alpha}(b_{i+1,\alpha}))$ such that $|a''_{i+1,\alpha}a'_{i+1,\alpha}|\to 0$ as $\alpha\to \infty$. Since $d_{i+1,\alpha}$ is maximal, we derive $$d_{i+1,\alpha}\ge |a''_{i+1,\alpha}b_{i+1,\alpha}|>\frac{1}{2}|a'_{i+1,\infty}b_{i+1,\infty}|.$$ \end{proof} \vspace{2mm} \begin{proof}[Proof of Proposition \ref{prop-existence-local-group}] ~ (\ref{item-construction}) The construction will be done inductively as follows. The Starting Step. Assume $\dim X_\infty=k_1$. Since $\operatorname{diam}(X_\infty)\ge 1$, we are able to directly construct a $\delta^2$-maximal $k_1$-frame $\{[a_{j,\alpha}b_{j,\alpha}]\}_{j=1}^{k_1}$ in $B_1(p_\alpha)$ such that $a_{1,\alpha}=p_{\alpha}$ and $b_{1,\alpha}$ is a point with $d_{1,\alpha}=|a_{1,\alpha}b_{1,\alpha}|_{X_\alpha}=\frac{1}{2C_T}$. By Lemma \ref{lem-maximal-frame-non-collapse}, it converges to a $k_1$-frame $\{[a_jb_j]\}_{j=1}^{k_1}$ in $X_\infty$. Let $m_{k_1}$ be the middle point of $[a_{k_1}b_{k_1}]$. Let $p_1\in X_\infty$ be a regular point such that $|p_1m_{k_1}|\le \frac{1}{200}\delta |a_{k_1}b_{k_1}|$.
By \cite[Theorem 5.4]{BGP92}, there is $0<R_1\le \frac{1}{16C_T}$ such that $B_{2R_1}(p_1)$ is bi-Lipschitz to an open ball $D^{k_1}$ in the Euclidean space $\mathbb R^{k_1}$ with bi-Lipschitz constant almost $1$. By Perelman's fibration theorem \cite{Pr93}, the map $F_{k_1}^{-1}\circ F_{k_1,\alpha}$ is a locally trivial fibration, where $F_k=(\operatorname{dist}_{a_1},\cdots,\operatorname{dist}_{a_k})$ is the map associated to the maximal $k$-frame. Let us choose points $p_{1,\alpha}\in X_{\alpha}$ converging to $p_1$. Clearly, $\{[a_{j,\alpha}b_{j,\alpha}]\}_{j=1}^{k_1}$ is also a $\delta^2$-maximal $k_1$-frame at $p_{1,\alpha}$. We define $r_{1,\alpha}=3\operatorname{diam}_{X_\alpha} F_{k_1,\alpha}^{-1} (F_{k_1,\alpha}(p_{1,\alpha}))$, i.e., three times the extrinsic diameter of a regular fiber of Perelman's fibration. Step 1. Let $Y_{2,\alpha}=F_{k_1,\alpha}^{-1}(F_{k_1,\alpha}(p_{1,\alpha}))$, and let $\theta_{2,\alpha}=\frac{1}{3}r_{1,\alpha}=\operatorname{diam}_{X_{\alpha}} Y_{2,\alpha}$. Since $\theta_{2,\alpha}\to 0$, it is easy to see that $Y_{2,\alpha}$ is connected. By passing to a subsequence, we assume that the rescaled sequence $$\theta_{2,\alpha}^{-1}(X_\alpha,p_{1,\alpha})\to (A_2\times \mathbb R^{k_1},p'_2).$$ Moreover, as subsets, $Y_{2,\alpha}$ converges to $A_2\times \{0\}$. Assume $\dim A_2=k_2$. Let $b_{k_1+1,\alpha}=p_{1,\alpha}$, and let $a_{k_1+1,\alpha}\in Y_{2,\alpha}$ be the point farthest from $p_{1,\alpha}$. Starting with $[a_{k_1+1,\alpha}b_{k_1+1,\alpha}]$, we construct a $\delta^2$-maximal frame $$\{[a_{k_1+j,\alpha}b_{k_1+j,\alpha}]\}_{j=1}^{k_2},$$ which by the same argument as in the Starting Step converges to a $k_2$-frame in $A_2\times \{0\}$, $$\{[a_{k_1+j}b_{k_1+j}]\}_{j=1}^{k_2}.$$ Let $p_2\in A_2\times\{0\}$ be such that $|p_2m_{k_1+k_2}|\le \frac{1}{200}\delta |a_{k_1+k_2}b_{k_1+k_2}|$.
We similarly choose $p_{2,\alpha}\in X_\alpha$ converging to $p_{2}$ and define $R_{2,\alpha}=\theta_{2,\alpha}\cdot R_2$, where there is a Perelman's fibration $F_{k_1+k_2}^{-1}\circ F_{k_1+k_2,\alpha}:\theta_{2,\alpha}^{-1}X_{\alpha}\to A_2\times \mathbb R^{k_1}$ over $B_{2R_2}(p_2)$, which is bi-Lipschitz to an open ball $D^{k_1+k_2}$ in the Euclidean space $\mathbb R^{k_1+k_2}$ with bi-Lipschitz constant almost $1$. (Note that every component in $F_{k_1}$ is a canonical Busemann function on $\mathbb R^{k_1}$.) Let $r_{2,\alpha}=3\operatorname{diam}_{X_{\alpha}} F_{k_1+k_2,\alpha}^{-1}(F_{k_1+k_2,\alpha}(p_{2,\alpha}))$. Step 2. Carry out the same process as in Step 1 for $Y_{3,\alpha}=F_{k_1+k_2,\alpha}^{-1}(F_{k_1+k_2,\alpha}(p_{2,\alpha}))$, and $\theta_{3,\alpha}=\frac{1}{3}r_{2,\alpha}=\operatorname{diam}_{X_{\alpha}} Y_{3,\alpha}$. Repeating the process in Step 2 until $k_1+\cdots+k_l=n$, we have constructed a $\delta^2$-maximal $n$-frame $$\{[a_{j,\alpha}b_{j,\alpha}]\}_{j=1}^{n},$$ centered at $p_{l,\alpha}$. Let $q_{\alpha}=p_{l,\alpha}$; then (\ref{item-construction}) is complete. (\ref{item-leveled-gap}) By definition, each $\pi_1^L(p_{l,\alpha}; R_1)$ satisfies the $(\epsilon_\alpha,\sigma,l)$-leveled gap property, where $\epsilon_\alpha=\frac{r_{i,\alpha}}{R_{i,\alpha}}\to 0$, $\sigma=\max\{3R_2^{-1}, \cdots, 3R_l^{-1}\}$. Indeed, in order to verify (\ref{item-leveled-gap}), it suffices to show that \begin{equation}\label{normal-by-Hurewicz} \imath\pi_1^L(p_{l,\alpha};R_{i+1,\alpha})\vartriangleleft \pi_1^L(p_{l,\alpha};r_{i,\alpha}). \tag{3.15} \end{equation} (\ref{normal-by-Hurewicz}) follows from the Hurewicz fibration theorem (Theorem \ref{thm:Lipschitz-submersion}). Indeed, let $D_{i,\alpha}=F_{k_1+\cdots+k_i,\alpha}^{-1}(D^{k_1+\cdots+k_i})$.
Then $$\pi_1^L(p_{l,\alpha}; R_{i,\alpha})\cong\pi_1^L(p_{l,\alpha};r_{i,\alpha})\cong\pi_1(D_{i,\alpha},p_{l,\alpha}).$$ By the choice of $p_{l,\alpha}$, $\theta_{i+1,\alpha}^{-1}(D_{i,\alpha},p_{l,\alpha})\overset{GH}{\longrightarrow} (A_{i+1}\times \mathbb R^{k_1+\cdots+k_i}, p_{i+1})$, where $p_{i+1}$ is a regular point in $A_{i+1}\times\{0\}$. Let $(\hat D_{i,\alpha},\hat p_{l,\alpha})\overset{\pi_{i,\alpha}}{\to} (D_{i,\alpha},p_{l,\alpha})$ be a cover of $D_{i,\alpha}$ with $$\pi_1(\hat D_{i,\alpha},\hat p_{l,\alpha})=\imath\pi_1^L(p_{l,\alpha};r_{i+1,\alpha}).$$ We aim to show that $\pi_{i,\alpha}$ is a normal cover. Let $S_{i,\alpha}$ be a short basis of $\pi_1^L(p_{l,\alpha};r_{i,\alpha})$. By passing to a subsequence, we may assume that, for some $t_i$ independent of $\alpha$, $\{\gamma_{i,\alpha,1},\dots,\gamma_{i,\alpha,t_i}\}=S_{i,\alpha}\setminus \imath\pi_1^L(p_{l,\alpha};r_{i+1,\alpha}).$ By the definition of a short basis, their lifting curves $\hat \gamma_{i,\alpha,1},\dots, \hat \gamma_{i,\alpha,t_i}$ are minimal geodesics in $\hat D_{i,\alpha}$ from $\hat p_{l,\alpha}$ to some $\hat q_{i+1,\alpha,1},\dots, \hat q_{i+1,\alpha,t_i}$ respectively. It suffices to show that for any loop $\gamma\in \imath \pi_1^L(p_{l,\alpha};r_{i+1,\alpha})$ and any $\gamma_{i,\alpha,s}$, there is a homotopy with fixed endpoint from $\gamma_{i,\alpha,s}*\gamma*\gamma^{-1}_{i,\alpha,s}$ to a loop in $\imath \pi_1^L(p_{l,\alpha};r_{i+1,\alpha})$. By passing to a subsequence, $\theta_{i+1,\alpha}^{-1}(\hat D_{i,\alpha},\hat p_{l,\alpha})\to (\hat A_{i+1}\times \mathbb R^{k_1+\cdots+k_i}, \hat p_{i+1})$, and each minimal geodesic $\hat \gamma_{i,\alpha,s}$ converges to $\hat \gamma_{i,s}$ in $\hat A_{i+1}\times\{0\}$, which is a minimal geodesic from $\hat p_{i+1}$ to $\hat q_{i+1,s}$.
If $\hat \gamma_{i,1},\cdots,\hat \gamma_{i,t_i}$ pass through only regular points, then by \cite{BGP92} there is $\eta>0$ such that the neighborhood $B_{2\eta}=U_{2\eta}(\bigcup_{s=1}^{t_i} \hat \gamma_{i,s})\subset \hat A_{i+1}\times \mathbb R^{k_1+\cdots +k_i}$ contains only $(k_1+\cdots +k_{i+1},\delta)$-strained points with a universal strainer radius. Thus by Theorem \ref{thm:Lipschitz-submersion} and Remark \ref{rem-local-fibration}, for $\alpha$ large we have \addtocounter{theorem}{1} \begin{enumerate} \item\label{homotopy-control-1} any $F_{k_1+\cdots+k_i,\alpha}$-fiber in $D_{i,\alpha}$ has extrinsic diameter not larger than $\eta/4$; \item\label{homotopy-control-2} there is a Hurewicz fibration $\varphi_{i,\alpha}$ over $B_{2\eta}$, close to the original GHA and with fibers of diameter $\le \eta/4$, such that $\hat \gamma_{i,\alpha,s}\subset \varphi_{i,\alpha}^{-1}(U_\eta(\hat\gamma_{i,s}))$ and $\varphi_{i,\alpha}(\hat \gamma_{i,\alpha,s})\subset U_{\eta/4}(\hat \gamma_{i,s})$. \end{enumerate} Note that the lifting of $\gamma_{i,\alpha,s}*\gamma*\gamma_{i,\alpha,s}^{-1}$ at $\hat q_{i+1,\alpha,s}$ is $\hat \gamma_{i,\alpha,s}*\hat \gamma*\hat \gamma_{i,\alpha,s}^{-1}$, with $\hat \gamma$ a closed lifting of $\gamma$ at $\hat p_{l,\alpha}$. Then by the construction of $\varphi_{i,\alpha}$ (see Proposition \ref{prop:gradient-retract}), there is a canonical contraction from a tubular neighborhood of a $\varphi_{i,\alpha}$-fiber to itself. Thus, by (\ref{homotopy-control-1}) there is a homotopy $\hat H_1'$ that maps $\hat \gamma $ to $\hat \gamma'\subset \varphi_{i,\alpha}^{-1}(\varphi_{i,\alpha}(\hat p_{l,\alpha}))$ keeping $\hat p_{l,\alpha}$ fixed. Hence we have a homotopy $\hat H_1$ that maps $\hat \gamma_{i,\alpha,s}*\hat \gamma*\hat \gamma_{i,\alpha,s}^{-1}$ to $\hat \gamma_{i,\alpha,s}*\hat \gamma'*\hat \gamma_{i,\alpha,s}^{-1}$, keeping $\hat \gamma_{i,\alpha,s}$ and $\hat\gamma_{i,\alpha,s}^{-1}$ fixed.
Furthermore, by (\ref{homotopy-control-2}) there is a homotopy $\hat H_2'$ that maps $\hat \gamma'$ to $\hat \gamma''$, which lies in $\varphi_{i,\alpha}^{-1}(\varphi_{i,\alpha}(\hat q_{i+1,\alpha,s}))$, moving $\hat p_{l,\alpha}$ to $\hat q_{i+1,\alpha,s}$ along $\hat \gamma_{i,\alpha,s}$ so that $\hat\gamma''$ is a loop at $\hat q_{i+1,\alpha,s}$. Thus, we have a homotopy $\hat H_2$ that maps $\hat \gamma_{i,\alpha,s}*\hat \gamma'*\hat \gamma_{i,\alpha,s}^{-1}$ to $\hat \gamma''$, keeping $\hat q_{i+1,\alpha,s}$ fixed. Then $\pi_{i,\alpha}(\hat H_2*\hat H_1)$ is a homotopy that maps $\gamma_{i,\alpha,s}*\gamma*\gamma_{i,\alpha,s}^{-1}$ to $\pi_{i,\alpha}(\hat \gamma'')$ keeping $p_{l,\alpha}$ fixed, and $\pi_{i,\alpha}(\hat\gamma'')$ represents an element of $\imath\pi_1^L(p_{l,\alpha};r_{i+1,\alpha})$. In order to complete the proof of (\ref{normal-by-Hurewicz}), we now verify that all limit minimal geodesics $\hat \gamma_{i,s}$ pass through only regular points. Firstly, it is clear that the limit projection $\pi_i:\hat A_{i+1}\times\{0\}\to A_{i+1}\times\{0\}$ is a submetry (i.e., $1$-Lipschitz and $1$-co-Lipschitz). Secondly, there is a neighborhood of $\hat p_{i+1}$ on which the restriction of $\pi_i$ is an isometry. This is because near $\hat p_{l,\alpha}$, there is a homeomorphic lifting of $D_{i,\alpha}$ in $\hat D_{i,\alpha}$. Hence $\dim \hat A_{i+1}=\dim A_{i+1}$, and all lifted points $\hat p_{i+1}$, $\hat q_{i+1,1},\dots,\hat q_{i+1,t_i}$ are regular. By \cite{BGP92}, any minimal geodesic $\hat \gamma_{i,s}$ between them contains only regular points. The proof of (\ref{item-leveled-gap}) is now complete. (\ref{item-almost-abelian}) By (\ref{normal-by-Hurewicz}), $(\hat D_{i,\alpha},\hat p_{l,\alpha})\overset{\pi_{i,\alpha}}{\to} (D_{i,\alpha},p_{l,\alpha})$ is a normal cover, whose deck-transformation group is $\Lambda_{i,\alpha}=\pi_1^L(p_{l,\alpha};r_{i,\alpha})/\imath\pi_1^L(p_{l,\alpha};r_{i+1,\alpha})$.
Since $\Lambda_{i,\alpha}$ equivariantly converges, the limit group $\Lambda_{i}$ acts on $\hat A_{i+1}\times \{0\}$ isometrically. By the generalized Bieberbach theorem \cite{FY92} (cf. \cite{Ya96}), $\Lambda_{i}$ is $C$-abelian. Since $\Lambda_i$ is a discrete group, the GHA $\rho_{i,\alpha}$ between $\Lambda_{i,\alpha}$ and $\Lambda_i$ is a homomorphism. We now prove that $\rho_{i,\alpha}$ is an isomorphism. Firstly, since there is no non-trivial element of $\Lambda_{i,\alpha}$ whose displacement is shorter than $2R_{i+1,\alpha}$, the kernel of $\rho_{i,\alpha}$ must be a subgroup $K_{i,\alpha}$ which moves $\hat p_{l,\alpha}$ to infinity. Secondly, since $\pi_1^L(p_{l,\alpha};r_{i,\alpha})$ is generated by all of its elements whose displacements are not longer than $3r_{i,\alpha}$, and any generating relation can be written as a word in these elements of wordlength $\le 3$, the corresponding property holds for $\Lambda_{i,\alpha}$. Because the relative volume comparison (see \cite{LiRong12}) provides a uniform bound on the number of $\Lambda_{i,\alpha}$-orbit points in $B_{9r_{i,\alpha}}(\hat p_{l,\alpha})$, by passing to a subsequence, the presentation of $\Lambda_{i,\alpha}$ is stable. Hence $\rho_{i,\alpha}$ is an isomorphism. \end{proof} \section{Appendix on gradient push} Let $\{[a_jb_j]\}_{j=1}^n$ be a $\delta^2$-maximal $n$-frame with $|a_1b_1|\le 1$ on an Alexandrov $n$-space $X$ with curv $\ge -1$. Let $F_k(x)=(\operatorname{dist}_{a_1}(x),\operatorname{dist}_{a_2}(x),\cdots,\operatorname{dist}_{a_k}(x)).$ Recall that, by the definition of a maximal frame, $d_j=|a_jb_j|$ satisfies \begin{equation}\label{def-key-max} d_{k+1}=\max_{F_k^{-1}(F_k(b_{k+1}))}\{\min\{\operatorname{dist}_{b_{k+1}}, \delta^2\cdot\min_{i=1,\dots, k}|a_{i}b_{i}|\}\}.\tag{4.1} \end{equation} In the following we always assume that $m_k$ is the middle point of $[a_kb_k]$, and $b_{n+1}$ is a point $\frac{\delta}{100}|a_{n}b_{n}|$-close to the middle point $m_n$ of $[a_nb_n]$.
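Before restating the pushing theorem, we sketch (added here for exposition) how Theorem \ref{thm-gradient-lip} yields the Lipschitz constant appearing in Lemma \ref{lem-conj-lip-control}.

```latex
% Sketch. On curv >= -1, comparison with the hyperbolic plane gives, in the
% barrier sense, the concavity bound for f = (1/2) dist_x^2:
%   f'' <= dist_x * coth(dist_x);
% since t*coth(t) is increasing in t > 0, the restriction dist_x < 2 in the
% proof of Lemma \ref{lem-conj-lip-control} yields the uniform bound
\[
\lambda \;\le\; 2\coth 2 \;=\; 2\,\frac{\cosh 2}{\sinh 2}.
\]
% By Theorem \ref{thm-gradient-lip}, each gradient flow of such a
% lambda-concave function run for time t is e^{lambda t}-Lipschitz.
% Composing the two pushes of total time <= T(n) and <= T(n)+C(n)
% therefore distorts lengths by a factor at most
\[
e^{\lambda T(n)}\cdot e^{\lambda (T(n)+C(n))}
  \;=\; e^{2\frac{\cosh 2}{\sinh 2}\,(2T(n)+C(n))},
\]
% which is exactly the constant in Lemma \ref{lem-conj-lip-control}.
```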
We restate Theorem \ref{thm-gradientpush-1} and give a proof of it in the following form. \begin{theorem}[\cite{KPT10}]\label{thm-gradientpush} There is $T(n)>0$ such that for any $\delta^2$-maximal $n$-frame $\{[a_jb_j]\}_{j=1}^n$ with $|a_1b_1|\le 1$ on an Alexandrov $n$-space $X$ with curvature $\ge -1$, any point $b_{n+1}$ that is $\frac{\delta}{100}|a_nb_n|$-close to the middle point $m_n$ of $[a_nb_n]$ can be pushed successively by the gradient flows of $\frac{1}{2}\operatorname{dist}_{b_{n+1}}^2$, $\frac{1}{2}\operatorname{dist}_{a_j}^2$, $\frac{1}{2}\operatorname{dist}_{b_j}^2$ $(j=1,\dots, n)$ to any point $p\in B_{100 |a_1b_1|}(b_{n+1})$ in total time $\le T(n)$. \end{theorem} \begin{remark}\label{rem-push-length} By the proof of Theorem \ref{thm-gradientpush} (or by replacing $\frac{1}{2}\operatorname{dist}^2$ with $\operatorname{dist}$ in Theorem \ref{thm-gradientpush}), the length of the broken gradient curves between $b_{n+1}$ and $p$ is no more than $C(n)\cdot |b_{n+1}p|$, where $C(n)=2n+\frac{4(n-1)}{\sin\sigma(n)}+1$ with $\sigma(n)$ as in Lemma \ref{lem-angle-est-0}. This fact is also used in the proof of Theorem \ref{thmb-margulis}; see Lemma \ref{lem-conj-lip-control}. \end{remark} Our motivations for writing out a proof, rather than simply referring to \cite[Lemma 2.5.1]{KPT10}, are as follows. Firstly, only a sketched proof of Theorem \ref{thm-gradientpush} is given in \cite{KPT10}, where the ratio bound on the pushing time $\frac{t_{k-1}}{t_k}\le \frac{1}{\delta^n}$ from level $d_k$ to $d_{k-1}$ is claimed without explanation in the proof of \cite[Lemma 2.5.1]{KPT10}. Since that step is hard to follow, we write a detailed proof of the surjectivity and the universal speed of the gradient pushing-out (using the maximality property (\ref{def-key-max}) and the $\frac{1-2n\delta}{\sqrt{n}}$-openness in \cite[Theorem 5.4]{BGP92}).
In particular, our proof leads to a sharpened universal time bound $n^2\delta^{-1}$, improving the universal time bound $\delta^{-n^2}$ claimed in \cite{KPT10}. Secondly, a crucial difference between an Alexandrov space $X$ with curvature bounded below and a Riemannian manifold $M$ is that there may be proper extremal subsets in $X$ out of which no gradient curve can escape. Without further explanation, it is also hard for us to see from \cite{KPT10} that the gradient pushing-out process can be chosen to avoid extremal subsets. We fill in more details and construct a specific gradient pushing broken line, consisting of $k$-regular (i.e., the tangent cone $T_pX$ at least splits off $\mathbb R^k$) or $(n,\delta)$-strained points when $a_j,b_j$ and the ending point $p$ are $k$-regular. Since all our estimates hold for a new $n$-frame $\{[a_j'b_j']\}_{j=1}^n$, where $a_j'$ and $b_j'$ are nearby regular points around $a_j$ and $b_j$, it follows that a gradient push between regular points only passes through regular points. This provides a detailed justification for the gradient push in proving the Margulis lemma on an Alexandrov space. \subsection{Proof of Theorem \ref{thm-gradientpush}} The proof of Theorem \ref{thm-gradientpush} can be divided into two steps. \textbf{Step 1}. Prove that in at most a definite time $T(\delta,n)$, $b_{n+1}$ can be pushed to any point in a ball $B_{\frac{\delta}{2}d_n}(m_n)$ whose radius is at a small but fixed relative scale, where $m_n$ is the middle point of $[a_nb_n]$. \begin{lemma}\label{lem-pushingforward-0} For $0<\delta<\delta(n)$ and any $q\in B_{\frac{\delta}{2}d_n}(m_n)$, $b_{n+1}$ can be pushed by an at most countable succession of the gradient flows of $\frac{1}{2}\operatorname{dist}_{a_j}^2, \frac{1}{2}\operatorname{dist}_{b_j}^2$ to $q$ in time $\le \frac{12n^2}{4n-1}\delta$.
\end{lemma} Compared with the proof of \cite[Theorem 5.4]{BGP92}, Lemma \ref{lem-pushingforward-0} follows from a certain reversing argument, which will be given at the end of the appendix. \textbf{Step 2}. Prove that $B_{\frac{\delta}{2}d_n}(m_n)$ can be pushed further outward. If $\frac{d_n}{d_1}$ admits a definite lower bound $\tau$, then one may push $B_{\frac{\delta}{2}d_n}(m_n)\setminus B_{\frac{\delta}{4}d_n}(m_n)$ onto $B_{100d_1}(m_n)$ by $\frac{1}{2}\operatorname{dist}_{m_n}^2$ in just one more step, taking time no more than $T(\tau,n)$. However, $d_n$ may be far less than $d_1$, or even $d_{n-1}$. To overcome this difficulty, we divide an $n$-frame into several levels. We say that a $\delta^2$-maximal $n$-frame $\{[a_jb_j]\}_{j=1}^n$ is of \emph{$(\frac{\delta^2}{100},l)$-leveling} if there is $1\le k_1<\cdots<k_l=n$ such that $[a_{k_{i-1}+1}b_{k_{i-1}+1}],\dots, [a_{k_{i}}b_{k_{i}}]$ lie in the same level in the sense that $d_{j}>\frac{\delta^2}{100}d_{j-1}$ for any integer $k_{i-1}+1\le j\le k_i$, $1\le i\le l$ ($k_0=0$), while $[a_{k_i}b_{k_i}]$ and $[a_{k_i+1}b_{k_i+1}]$ lie in different levels, i.e., $d_{k_i+1}\le \frac{\delta^2}{100}d_{k_i}$ for any $1\le i\le l$. Within the $i$-th level, it follows from an elementary gradient estimate that $B_{\frac{\delta}{4}d_{k_i}}(b_{n+1})$ can be pushed by the center $b_{n+1}$, i.e., by $\frac{1}{2}\operatorname{dist}_{b_{n+1}}^2$, onto $B_{100d_{k_{i-1}+1}}(b_{n+1})$ in time $\le \ln (\frac{400}{\delta}(\frac{100}{\delta^2})^{k_i-(k_{i-1}+1)})$. In order to push $B_{100d_{k_{i-1}+1}}(b_{n+1})$ further outward onto a large leveled ball in a specific way, we need to prove the following lemma.
\begin{lemma}\label{lem-difflevel-push-0} If $d_{k+1}=|a_{k+1}b_{k+1}|\le \frac{\delta^2}{100}d_k$, then for any $p\in B_{\frac{\delta}{2}d_k}(b_{k+1})$, there is some point $q\in B_{50d_{k+1}}(b_{k+1})$ which can be pushed successively along finitely-broken geodesics, each of which points to one of $\{a_j,b_j\}_{j=1}^k$, by the gradient flows of $\frac{1}{2}\operatorname{dist}_{a_j}^2, \frac{1}{2}\operatorname{dist}_{b_j}^2$ to $p$ in time $\le C(n)\delta$. \end{lemma} Note that in the case of Lemma \ref{lem-difflevel-push-0}, where different levels are involved, we are using endpoints of long edges in the frame, which lie outside the small ball $B_{50d_{k+1}}(b_{k+1})$. In the proof of Lemma \ref{lem-difflevel-push-0}, the core is the following angle estimate, which follows from the numerical maximum property (\ref{def-key-max}) of a $\delta^2$-maximal frame. \begin{lemma}[Angle Estimate]\label{lem-angle-est-0} There is $\delta(n)>0$ such that the following holds for $0<\delta<\delta(n)$. If $d_{k+1}=|a_{k+1}b_{k+1}|\le \frac{\delta^2}{100}|a_kb_k|$, then for any $p\in B_{\frac{\delta}{2}d_k}(b_{k+1})\setminus B_{50d_{k+1}}(b_{k+1})$, there exists $e\in \{a_j,b_j\}_{j=1}^k$ such that $\measuredangle(p;e,b_{k+1})\le \frac{\pi}{2}-\sigma(n)$. \end{lemma} \begin{proof} We argue by contradiction: suppose that for any $\delta>0$ there is a $\delta^2$-maximal $k$-frame for which the conclusion of Lemma \ref{lem-angle-est-0} fails. Then by Toponogov comparison (cf. \cite[Lemma 5.6]{BGP92}), there is $\sigma=\sigma(\delta)\to 0$ as $\delta\to 0$ such that for any $x\in [b_{k+1}p]$ and every $1\le i\le k$, $$\measuredangle(x;a_i,b_{k+1})\ge \frac{\pi}{2}-\sigma, \text{ and } \measuredangle(x;b_i,b_{k+1})\ge \frac{\pi}{2}-\sigma.$$ Then, for $\sigma$ sufficiently small, \begin{equation}\label{eq-dist-image} ||a_ix|-|a_ib_{k+1}||\le |xb_{k+1}|\cdot \sin \sigma.
\end{equation} By \cite[Theorem 5.4]{BGP92} (or see Theorem \ref{thm-co-Lip-part-dist-coord} below), for any $0\le \sigma\le \frac{1}{2k}$ the partial distance coordinates map associated to the $k$-subframe $\{[a_jb_j]\}_{j=1}^k$, $$F_k:X\to \mathbb R^k, \quad F_k(x)=(|a_1x|,|a_2x|,\cdots,|a_kx|),$$ is $\frac{1-2k\sigma}{\sqrt{k}}$-open, i.e., $\frac{\sqrt{k}}{1-2k\sigma}$-co-Lipschitz, on $B_{\delta d_k}(b_{k+1})$. Hence there is $x'\in F_k^{-1}(F_k(b_{k+1}))\cap B_{\delta d_k}(b_{k+1})$ such that the distance \begin{equation}\label{eq-dist-preimage} |xx'|\le \frac{\sqrt{k}}{1-2k\delta}\cdot |F_k(x)-F_k(x')|, \end{equation} which is, by (\ref{eq-dist-image}), far less than $|xb_{k+1}|$. Let $|xb_{k+1}|=50d_{k+1}$; then, as $\delta=\delta(n)$ is sufficiently small, $$|x'b_{k+1}|\gg d_{k+1},$$ a contradiction to the choice of $(a_{k+1},b_{k+1})$ in (\ref{def-key-max}). \end{proof} We now prove Lemma \ref{lem-difflevel-push-0}. \vspace{2mm} \begin{proof}[Proof of Lemma \ref{lem-difflevel-push-0}] ~ Let $e=e(p)$ be one of $\{a_j,b_j\}_{j=1}^k$ provided by Lemma \ref{lem-angle-est-0}, and let us connect $p$ and $e$ by a minimal geodesic $[pe]$. By Toponogov comparison and Lemma \ref{lem-angle-est-0}, there is a universal $\Delta r$ determined by the $(-1)$-law of cosines such that for any $p'\in [pe]$ with $|pp'|\le \Delta r$, one has $$\frac{|b_{k+1}p|-|b_{k+1}p'|}{|pp'|}\ge \sin\sigma(n)>0.$$ If $p'$ can be chosen so that $[pp']\cap B_{50d_{k+1}}(b_{k+1})\neq \emptyset$, then let $x$ be one of the intersection points; the geodesic $[xp]$ is realized by the gradient flow of $\frac{1}{2}\operatorname{dist}_{e}^2$. Otherwise, let $p'\in [pe]$ be the point with $|pp'|=\Delta r$, and repeat the construction with $p'$ in place of $p$. By repeating the process above successively, we get a finitely-broken geodesic from $p$ to some point $q\in B_{50d_{k+1}}(b_{k+1})$, whose reverse realizes the gradient flows from $q$ to $p$ by the endpoints $\{a_j,b_j\}_{j=1}^k$.
Because for each $p'$ above, $|p'e(p')|\ge \frac{1-\delta}{2}d_k-\frac{\delta}{100}d_k$, and the total length of the broken geodesic is bounded by $\frac{1}{\sin\sigma} \delta d_k$, this completes the proof of Lemma \ref{lem-difflevel-push-0}. \end{proof} Now we are ready to prove Theorem \ref{thm-gradientpush}. \vspace{2mm} \begin{proof}[Proof of Theorem \ref{thm-gradientpush}] ~ Let us assume that the $n$-frame $\{[a_jb_j]\}_{j=1}^n$ admits a $(\frac{\delta^2}{100},l)$-leveling, $1\le k_1<\cdots<k_l=n$. Let $k_0=0$. By Lemma \ref{lem-pushingforward-0}, $b_{n+1}$ can be pushed onto $B_{\frac{\delta}{4}d_n}(b_{n+1})\subset B_{\frac{\delta}{2}d_n}(m_n)$ in time $\le \frac{12n^2}{4n-1}\delta$. In each $i$-th level, $B_{\frac{\delta}{4}d_{k_i}}(b_{n+1})$ can be pushed by $\frac{1}{2}\operatorname{dist}_{b_{n+1}}^2$ onto $B_{100d_{k_{i-1}+1}}(b_{n+1})$ in time $\le \ln (\frac{400}{\delta}(\frac{100}{\delta^2})^{k_i-(k_{i-1}+1)})$. From the $i$-th level to the $(i-1)$-th level, note that for any $100d_{k_{i-1}+1}\le r\le \frac{\delta}{4}d_{k_{i-1}}$, $B_{\frac{r}{2}}(b_{k_{i-1}+1})\subset B_r(b_{n+1})\subset B_{2r}(b_{k_{i-1}+1})$. By Lemma \ref{lem-difflevel-push-0}, $B_{100d_{k_{i-1}+1}}(b_{n+1})$ can be pushed onto $B_{\frac{\delta}{4}d_{k_{i-1}}}(b_{n+1})$ in time $\le C(n)\delta$. Since the process finishes after $2l$ steps, the proof is complete. \end{proof} \subsection{Tracing back process} For completeness we give a proof of the co-Lipschitz property of $F_k:X\to \mathbb R^k$, which has been used in proving Lemma \ref{lem-angle-est-0}; Lemma \ref{lem-pushingforward-0} also follows similarly. The idea of the proof is the same as that of \cite[Theorem 5.4]{BGP92}. \begin{theorem}[{\cite[Theorem 5.4]{BGP92}}]\label{thm-co-Lip-part-dist-coord} There is $\sigma(k)>0$ such that the following holds for $0\le \sigma\le \sigma(k)$.
Let $\{a_j,b_j\}_{j=1}^k$ be a $(k,\sigma)$-strainer at $x_0$ with radius $$r_k=\min\{|a_jx_0|,|b_jx_0|\}_{j=1}^k\le \max\{|a_jb_j|\}_{j=1}^k\le 1.$$ Let $F_k:X\to \mathbb R^k$, $F_k(x)=(|a_1x|,\cdots,|a_kx|)$, be the map associated to $\{a_j,b_j\}_{j=1}^k$ that forms partial distance coordinates around $x_0$. Let $p=p_0$ be a point in $B_{\frac{\sigma}{20} d_k}(x_0)$ such that \begin{equation}\label{ineq-co-Lip-part-dist-coord} ||a_jp|-|a_jx_0||\le \frac{1}{4k}|px_0| \qquad (j=1,\dots, k). \end{equation} Then there is an (infinitely-)broken geodesic $[p_0p_1^1\cdots p_1^kp_2^1\cdots p_2^kp_3^1\cdots]$, contained in $B_{\frac{\sigma}{10} d_k}(x_0)$, such that the endpoint $p_l=p_l^k$ converges to a point $p'$ as $l\to \infty$, which satisfies \begin{equation} |pp'|\le \frac{4k+1}{3\sqrt{k}}\cdot |F_k(p)F_k(x_0)|,\qquad F_k(p')=F_k(x_0). \end{equation} \end{theorem} Let $\delta>0$ be a small number other than $\sigma$. Let $p$ be a point in $B_{\delta d_k}(x_0)$. Let us first define the \emph{$l$-th round $k$-tracing back point} $p_l=p_l^k$ of $p$ towards $x_0$'s $F_k$-fiber inductively as follows. Here tracing back means moving backwards along gradient curves of the distance functions to $a_j$ or $b_j$. Let $p_0=p$ and let us assume that $p_{l-1}$ is well-defined. For the first coordinate function $f_1=\operatorname{dist}_{a_1}$, let $p_{l}^1$ be the point lying in the broken geodesic $[a_1p_{l-1}b_1]$ such that $$f_1(p_{l}^1)-f_1(p_{l-1})=f_1(x_0)-f_1(p_{l-1}).$$ Let $p_l^2$ be the point lying in the broken geodesic $[a_2p_{l}^1b_2]$ such that $$f_2(p_{l}^2)-f_2(p_{l}^1)=f_2(x_0)-f_2(p_{l-1}).$$ Repeating this $k$ times, we obtain $p_{l}^k$ in $[a_kp_{l}^{k-1}b_k]$ such that $$f_k(p_{l}^k)-f_k(p_{l}^{k-1})=f_k(x_0)-f_k(p_{l-1}).$$ Then $p_l$ is defined to be $p_l^k$.
Since $p\in B_{\delta d_k}(x_0)$, by an elementary angle estimate \cite[Lemma 5.6]{BGP92}, the following holds for $0<\delta<\frac{\sigma}{10}$: for any $1\le i\le k$, $1\le j\le i-1$, $$\begin{aligned} |\measuredangle (p;a_{j},a_{i})-\frac{\pi}{2}|\le 4\sigma,\quad |\measuredangle (p;b_{j},a_{i})-\frac{\pi}{2}|\le 4\sigma,\\ |\measuredangle (p;a_{j},b_{i})-\frac{\pi}{2}|\le 4\sigma,\quad |\measuredangle (p;b_{j},b_{i})-\frac{\pi}{2}|\le 4\sigma. \end{aligned}$$ Clearly, it follows that the relations below hold. \begin{lemma} For some positive function $\epsilon=\epsilon(\sigma)\to 0$ as $\sigma\to 0$, \begin{enumerate} \item\label{tracingback-est1} $|p_{l}^ip_{l}^{i-1}|\le (1+\epsilon)\cdot |f_i(x_0)-f_i(p_{l-1})|$; \item\label{tracingback-est2} $|f_j(p_l^i)-f_j(p_l^{i-1})|\le \epsilon \cdot |f_i(x_0)-f_i(p_{l-1})|$ for any $j\neq i$. \end{enumerate} \end{lemma} Now we are ready to prove Theorem \ref{thm-co-Lip-part-dist-coord}. \vspace{2mm} \begin{proof}[Proof of Theorem \ref{thm-co-Lip-part-dist-coord}] ~ Let $\delta=\frac{\sigma}{20}$. Let $A_l=\sum_{j=1}^k|f_j(p_l)-f_j(x_0)|$ and $B_l=|p_{l+1}p_l|$. As long as the $l$-th tracing back point $p_l$ lies in $B_{2\delta d_k}(x_0)$, the estimates (\ref{tracingback-est1})-(\ref{tracingback-est2}) hold. By the triangle inequality, we derive $A_{l+1}\le \epsilon (k-1)A_l$ and $B_l\le (1+\epsilon)A_l$. For $\delta$ sufficiently small, $\epsilon\le \frac{1}{4k}$, so that $A_{l+1}\le \frac{1}{4}A_l$ and $B_l\le \frac{4k+1}{4k}A_l$; thus $A_l\le \frac{1}{4^l}A_0$ and $B_l\le \frac{4k+1}{4k}\cdot\frac{1}{4^l}A_0$, and $\{p_l\}$ is a Cauchy sequence. Now let us check that, by induction on $l$, each $p_l$ satisfies $|p_lx_0|\le \frac{3}{2}\delta d_k$ so that $p_l\in B_{2\delta d_k}(x_0)$. By the assumption (\ref{ineq-co-Lip-part-dist-coord}), $A_0\le \frac{1}{4}|px_0|$, and thus $A_l\le \frac{1}{4^{l+1}} |px_0|$, $B_l\le \frac{4k+1}{4k}\cdot \frac{1}{4^{l+1}} |px_0|$.
Then $$\sum_{t=0}^{l}B_t\le \frac{4k+1}{4k}\cdot \frac{1}{3} |px_0| \le \frac{1}{2}|px_0|,$$ which justifies $|p_lx_0|\le \frac{3}{2}\delta d_k$. Let $p'$ be the limit point of $p_l$; then $$|pp'|\le \sum_{l=0}^\infty B_l \le \frac{4k+1}{3k}A_0\le \frac{4k+1}{3\sqrt{k}}|F_k(x_0)F_k(p)|.$$ The conclusion of Theorem \ref{thm-co-Lip-part-dist-coord} now follows. \end{proof} \subsection{Proof of Lemma \ref{lem-pushingforward-0}} In this subsection we prove that a gradient push can be carried out from $b_{n+1}$ to any point in a very small ball in a definite short time. Note that if we set $k=n$, $x_0=q$ and $p=b_{n+1}$ in Theorem \ref{thm-co-Lip-part-dist-coord}, then $b_{n+1}$ can be moved to $q$ along gradient curves of $\frac{1}{2}\operatorname{dist}_{a_j}^2$, $\frac{1}{2}\operatorname{dist}_{b_j}^2$ backwards. So we need to reverse the tracing back process defined in the proof of Theorem \ref{thm-co-Lip-part-dist-coord}. Let $m_n$ be the middle point of $[a_nb_n]$ of a $\delta^2$-maximal $n$-frame $\{[a_jb_j]\}_{j=1}^n$. Let $b_{n+1}$ be a point $\frac{\delta}{100}d_n$-close to $m_n$. For any $q\in B_{\frac{\delta}{2}d_n}(b_{n+1})$, the \emph{$l$-th round pushing forward point} $O_l$ from $O_0=b_{n+1}$ towards $q$ is defined inductively as follows. Assume that $O_l$ is well-defined. Tracing back $q_{l+1}^0=q$ towards $O_l$ for a single round, we obtain the $n$ tracing points and the corresponding broken geodesic $[q_{l+1}^0q_{l+1}^1\cdots q_{l+1}^n]$, where $q_{l+1}^i\in [a_i q_{l+1}^{i-1}]$ or $[b_iq_{l+1}^{i-1}]$ ($i=1,\cdots,n$). Let $\Phi_{l+1}$ be the successive gradient flow defined by $$\Phi_{l+1}=\Phi_{1,t_{l+1,1}}\circ\Phi_{2,t_{l+1,2}}\circ\cdots\circ\Phi_{n,t_{l+1,n}}:X\to X,$$ where $\Phi_{i,t_{l+1,i}}$ is the gradient flow of $\frac{1}{2}\operatorname{dist}_{a_i}^2$ or $\frac{1}{2}\operatorname{dist}_{b_i}^2$ which maps $q_{l+1}^i$ to $q_{l+1}^{i-1}$. We define $O_{l+1}=\Phi_{l+1}(O_l)$.
By (\ref{tracingback-est1}), it is easy to see that the total time satisfies \begin{equation}\label{round-time} T_{l+1}=\sum_{i=1}^n{t_{l+1,i}}\le \frac{4(1+\epsilon)}{d_n}\sum_{i=1}^n|f_i(q)-f_i(O_l)|.\tag{4.8} \end{equation} \vspace{2mm} \begin{proof}[Proof of Lemma \ref{lem-pushingforward-0}] ~ It suffices to show that the $l$-th round pushing forward point $O_l$ towards $q$ converges to $q$, and that the total time admits the bound in Lemma \ref{lem-pushingforward-0}. Let $A_l=|qO_l|$ and $B_l=\sum_{i=1}^n |f_i(q)-f_i(O_l)|$. Then (\ref{round-time}) can be rewritten as $T_{l+1}\le \frac{4(1+\epsilon)}{d_n}B_l$. We first assume that $O_l$ always lies in the cube $$I_{\delta d_n}(m_n)=\{x\in X: |f_i(x)-f_i(m_n)|\le \delta d_n,\; \forall\; 1\le i\le n\}.$$ Since the co-Lipschitz constant of the distance coordinate map $F_n$ on $I_{\delta d_n}(m_n)$ is almost $1$, \begin{equation}\label{pushingforward-dist-est1} |q_{l+1}^nO_l|\le 2\sum_{i=1}^n |f_i(q_{l+1}^n)-f_i(O_l)|,\tag{4.9} \end{equation} where by (\ref{tracingback-est2}) \begin{equation}\label{pushingforward-dist-est2} \sum_{i=1}^n |f_i(q_{l+1}^n)-f_i(O_l)|\le \epsilon (n-1) \sum_{i=1}^n |f_i(q)-f_i(O_l)|\le \epsilon (n-1)n |qO_l|.\tag{4.10} \end{equation} By $|f_i(q)-f_i(O_{l+1})|\le |qO_{l+1}|$, \begin{align*} B_{l+1}=\sum_{i=1}^n|f_i(q)-f_i(O_{l+1})| \le n A_{l+1}. \end{align*} The semiconcavity constant of $\frac{1}{2}\operatorname{dist}_x^2$ on the region where $\operatorname{dist}_x<2$ is bounded by $2 \frac{\cosh2}{\sinh2}$. By Theorem \ref{thm-gradient-lip}, (\ref{round-time}) and (\ref{pushingforward-dist-est1})-(\ref{pushingforward-dist-est2}), \begin{align*} A_{l+1}=d(q,O_{l+1}) & \le e^{2 \frac{\cosh2}{\sinh2} \cdot T_{l+1}}|q_{l+1}^nO_l|\\ & \le e^{8 \frac{\cosh2}{\sinh2} \cdot (1+\epsilon) B_l/d_n} \cdot 2\epsilon (n-1)n |qO_l|\\ & \le e^{8 \frac{\cosh2}{\sinh2} \cdot (1+\epsilon)n A_l/d_n} \cdot 2\epsilon (n-1)n A_l.
\end{align*} Let us take $\delta(n)>0$ such that for $0<\delta\le\delta(n)$, $A_0/d_n\le \frac{\delta}{2}\le \frac{1}{(1+\epsilon)n}$, and $\epsilon\le \frac{1}{8n^3 e^{8 \frac{\cosh2}{\sinh2} }}$. Then $(1+\epsilon)n A_0/d_n\le 1$. Moreover, $A_1\le \frac{1}{4n} A_0\le A_0$. By induction, for any $l$, $A_l\le \frac{1}{(4n)^l}A_0$, $B_l\le \frac{n}{(4n)^l} A_0$, and $O_l$ lies in $I_{\delta d_n}(m_n)$. Therefore, all estimates above are valid for $0<\delta<\delta(n)$, and $A_l\to 0$ as $l\to \infty$, i.e., $O_l\to q$. Moreover, the total time \begin{align*} T&=\sum_{l=0}^\infty T_{l+1} \le 4(1+\epsilon)n\sum_{l=0}^\infty \frac{A_l}{d_n} \le 4(1+\epsilon)n\cdot \frac{4n}{4n-1}\cdot\frac{A_0}{d_n} \\ &\le 2(1+\epsilon)\delta\frac{4n^2}{4n-1}, \end{align*} where we used $A_l\le \frac{1}{(4n)^l}A_0$ and $A_0/d_n\le \frac{\delta}{2}$. \end{proof} \bibliographystyle{plain}
\section{Introduction} Mobile health (mHealth) refers to the use of mobile phones and other wireless devices to improve health outcomes, often by providing individuals with support for health-related behavior change. One major category of time-varying treatments delivered through mobile devices, and the focus of this paper, is ``push interventions'': in this setting, the mobile device determines when a treatment will be provided, rather than the individual seeking the intervention of her own accord (e.g., by opening the app). Push interventions are usually provided via some kind of notification, such as an audible ping, vibration, or the lock screen of a phone lighting up. For example, to encourage physical activity in sedentary individuals, the HeartSteps intervention sends users push notifications that contain contextually tailored activity suggestions \citep{klasnja2018}. Micro-randomized trials (MRTs) provide an experimental design for developing mHealth interventions. These trials provide longitudinal data to assess whether there is an effect of a time-varying treatment, how this effect changes over time, and whether aspects of the current context impact the effect \citep{liao2016sample, dempsey2015randomised}. In an MRT, each individual is randomized repeatedly to different versions of a treatment (or no treatment) with a known probability over the course of the trial (often hundreds or even thousands of times). Between randomizations, the trial collects covariate data on the individual's current/recent context via sensors and self-report, and after each randomization it assesses a proximal outcome. The large number of randomization points likely covers a wide range of contexts, and methods that exploit this for assessing effect moderation of a time-varying treatment have been developed \citep{boruvka2017}.
Random effects models \citep{laird1982random,raudenbush2002}, sometimes also known as mixed effects models, hierarchical models, or multilevel models, have been used with great success in the analysis of longitudinal studies. Behavioral scientists, and researchers from many other scientific fields, have long been using random effects models in research involving longitudinal data \citep{agresti2000,berger2004robust,cheung2008model,luger2014robust}. A particularly appealing feature of random effects models is the ability to predict person-specific random effects, which enables quantitative characterization of between-person heterogeneity due to unobserved factors. Understanding such heterogeneity can bring forth new scientific hypotheses for further studies. In addition, the random effects provide a model for the within-person dependence in the time-varying outcome, which improves efficiency in parameter estimation. Because data from an MRT are longitudinal, it is natural to consider a random effects model in the analysis of MRT data. However, random effects models were designed for settings where the covariates are considered fixed, and inferential challenges arise when one tries to apply the standard random effects model in the presence of endogenous time-varying covariates. A time-varying covariate is endogenous if it is not independent of previous treatment or outcomes; we give a more precise definition in Section \ref{subsec:notation}. Because many mHealth interventions target behavior change, it is common for covariates to be endogenous. For example, consider a smoking cessation study where the interventions are push reminders to practice stress-regulation exercises \citep{spring2017optimizing}: one question is whether the current smoking urge moderates the effect of the reminder on subsequent stress. Because it is likely that current urge is related to past stress as well as past treatment, urge is an endogenous covariate.
Regarding endogenous covariates in longitudinal data analysis, \citet{pepeanderson1994} pointed out that when using generalized estimating equations (GEE) with endogenous covariates, one should use a working independence correlation structure to avoid biased estimates. \citet{diggle2002} noted that: \begin{quote} ``Although \citet{pepeanderson1994} focused on the use of GEE, the issue that they raise is important for all longitudinal data analysis methods including likelihood-based methods such as linear and generalized linear mixed models.'' \end{quote} In this paper, we focus on linear mixed models (LMM), a simple form of random effects models where the outcome is continuous and the link function is the identity. We first provide a detailed account of the issue of including endogenous covariates in LMM. Coefficients in a standard LMM with fixed covariates have both marginal and conditional-on-the-random-effect interpretations. We show that the marginal interpretation is no longer valid with endogenous covariates. Fortunately, however, the conditional interpretation is consistent with the scientific interest in the prediction of person-specific effects. Thus, in using LMM with possibly endogenous covariates we interpret treatment effects as conditional on the random effect. We provide an additional assumption under which valid estimates of the effect (conditional on the random effect) of the time-varying treatment, estimates of the variance components, and person-specific predictions of these treatment effects can be obtained through standard LMM software, even if some covariates are endogenous. Simulation studies are conducted to support the main result. Lastly, we consider HeartSteps, an MRT of an intervention that aims to increase physical activity among sedentary adults. We discuss whether and when the aforementioned assumption makes sense in this situation, and analyze the data using the proposed method. The paper is organized as follows.
We provide an overview of the HeartSteps MRT in Section \ref{subsec:motivating-example}. We introduce notation and definitions in Section \ref{subsec:notation}. In Section \ref{sec:explain-issue} we give a detailed account of the issue regarding endogenous covariates in a standard LMM. Next, in Section \ref{sec:model}, we provide an assumption under which treatment effects can be estimated based on an LMM with endogenous covariates. In Section \ref{sec:simulation} we present results from a simulation study. We apply the proposed model to analyzing the HeartSteps data in Section \ref{sec:data-analysis}. Section \ref{sec:discussion} concludes with a discussion. \subsection{Motivating Example: HeartSteps} \label{subsec:motivating-example} Our motivating example is from HeartSteps, an MRT of an mHealth intervention to encourage regular walking among sedentary adults \citep{klasnja2018}. The intervention package in HeartSteps includes multiple components; in this paper we focus on one push intervention component as the treatment, namely activity suggestions tailored to the individual's current context. Each individual is in the study for 42 days, and is randomized 5 times a day, each time with probability 0.6 to receive an activity suggestion. The activity suggestions are designed to increase the individual's near-term step count, so the proximal outcome is defined as the individual's step count during the 30 minutes following each randomization. In addition to the step counts, at each randomization the individual's context is also recorded, including current location, weather, and the 30-minute step count prior to randomization. Note that the 30-minute step count prior to the time of randomization is likely impacted by prior treatment and thus is an endogenous covariate.
In addition to the measured information, there are other unobserved variables that may impact the treatment effect, such as each individual's commitment to becoming more active, conscientiousness, degree of social support, and so on. Therefore, it is of interest to provide person-specific predictions of the treatment effect. We will apply the methods developed in this paper to the HeartSteps data in Section \ref{sec:data-analysis}. \subsection{Notation and definitions} \label{subsec:notation} We will consider two types of settings in the paper. In the first setting we consider a longitudinal study without treatment, and in the second one with treatment. The first setting will be used to explain the bias incurred by the inclusion of endogenous covariates in random effects models, as this issue also occurs without treatment and is easier to explain there. The second setting involves time-varying treatment; thus it is relevant to data from MRTs. We will consider assumptions that allow valid estimation under the second setting. The setting under consideration will be clear from the context. For the first setting without treatment, we denote data for individual $i$ by $X_{i1} , Y_{i2}$, $X_{i2}, Y_{i3}, \ldots, X_{in_i},$ $Y_{in_i+1}$, where $n_i$ denotes the total number of observations for individual $i$. $X_{it}$ is a vector of covariates prior to the $t$-th time point and $Y_{it+1}$ is the outcome subsequent to the $t$-th time point. Note that the time index for the outcome $Y$ is augmented by $1$. We use an overbar to denote history; for example, $\bar{X}_{it} = (X_{i1}, X_{i2}, \ldots, X_{it})$. The individual's history up to the $t$-th time is denoted by $H_{it} = (X_{i1}, Y_{i2}, \ldots, X_{it-1}, Y_{it}, X_{it}) = (\bar{X}_{it}, \bar{Y}_{it})$.
For the second setting with treatment, the data for individual $i$ are $X_{i1}, A_{i1} , Y_{i2}, X_{i2}$, $A_{i2}, Y_{i3}, \ldots, X_{in_i},$ $A_{in_i},Y_{in_i+1}$, where $X_{it}$ is the covariate vector prior to the $t$-th time, $A_{it}$ is the randomized treatment at the $t$-th time, and $Y_{it+1}$ is the proximal outcome subsequent to the $t$-th time. To maintain expositional clarity, throughout we assume there are only two types of treatment and $A_{it}\in \{0,1\}$. The history is defined as $H_{it} = (X_{i1}, A_{i1}, Y_{i2}, \ldots, X_{it-1}, A_{it-1}, Y_{it}, X_{it}) = (\bar{X}_{it}, \bar{A}_{it-1}, \bar{Y}_{it})$. We define $X_{i0} = \emptyset$, $A_{i0} = \emptyset$, and $Y_{i1} = \emptyset$. In both settings, we use $b_i$ to denote the random effect of individual $i$. We use $\perp$ to denote statistical independence; for example, $A \perp B \mid C$ means that $A$ is independent of $B$ conditional on $C$. In the first setting, a covariate process $X_{it}$ is called \textit{exogenous} (with respect to the outcome process $Y_{it}$) if $X_{it} \perp \bar{Y}_{it} \mid \bar{X}_{it-1}$; otherwise, $X_{it}$ is \textit{endogenous}. In the second setting, $X_{it}$ is called \textit{exogenous} if $X_{it} \perp (\bar{Y}_{it}, \bar{A}_{it-1}) \mid \bar{X}_{it-1}$; otherwise, $X_{it}$ is \textit{endogenous}. In a longitudinal study, examples of exogenous covariates include baseline variables (age, gender, etc.), functions of time, and time-varying variables that are not impacted by prior treatment or prior outcome, such as weather. \section{Issue of linear mixed models with endogenous covariates} \label{sec:explain-issue} In this section, we start by considering the situation where no treatment is involved, as endogenous covariates give rise to issues even without considering causal inference. At the end of the section we review literature on the issue of endogenous covariates in the causal inference context.
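To make the definition concrete, the following simulation sketch (our illustration, not a data-generating model from the paper) produces MRT-like data in which the covariate at each decision point is the lag-1 outcome, as with the 30-minute step count in HeartSteps; such a covariate is correlated with the previous treatment, so the exogeneity condition of the second setting fails:

```python
import numpy as np

# Illustrative sketch (not the paper's model): MRT-like data in which the
# covariate X_t is the lag-1 outcome Y_t, hence endogenous.
rng = np.random.default_rng(2)
n, T = 5000, 10
p_rand = 0.6                        # randomization probability, as in HeartSteps
b = rng.normal(0.0, 1.0, n)         # person-specific random effects

X = np.zeros((n, T))
A = np.zeros((n, T))
Y = np.zeros((n, T + 1))            # Y[:, t + 1] is the outcome following time t
for t in range(T):
    A[:, t] = rng.binomial(1, p_rand, n)
    Y[:, t + 1] = 0.5 * X[:, t] + A[:, t] + b + rng.normal(0.0, 1.0, n)
    if t + 1 < T:
        X[:, t + 1] = Y[:, t + 1]   # endogenous covariate: the previous outcome

# X_{it} is correlated with the previous treatment A_{it-1}, so X_{it} is not
# independent of the treatment history given past covariates.
corr = np.corrcoef(X[:, 1:].ravel(), A[:, :-1].ravel())[0, 1]
print(round(corr, 3))  # clearly positive
```

The coefficients ($0.5$, $1$, unit variances) are arbitrary choices for illustration; any dependence of the covariate on past outcomes or treatments produces the same qualitative behavior.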
\subsection{Brief overview of standard LMM with exogenous covariates} A standard linear mixed model (LMM) \citep{laird1982random} assumes a relationship between the covariate $X_{it}$ and the outcome $Y_{it+1}$ such as the following: \begin{equation} Y_{it+1} = X_{it}^T \beta + Z_{it}^T b_i + \epsilon_{it+1}. \label{eq:standard-lmm} \end{equation} Here, $b_i \sim N(0,G)$ denotes the vector of person-specific random effects, $Z_{it} \subset X_{it}$, and $\epsilon_{it+1} \sim N(0,\sigma^2_\epsilon)$ is random noise. It is typically assumed that the $\epsilon_{it+1}$'s are independent of each other and of $b_i$, and we will adopt this assumption throughout this paper. This model provides the conditional distribution of each $Y_{it+1}$ given $X_{it}$ and $b_i$; in particular, it is normal with mean: \begin{equation} E(Y_{it+1} \mid X_{it}, b_i) = X_{it}^T \beta + Z_{it}^T b_i. \label{eq:lmm-conditional-model} \end{equation} Furthermore, use of the standard LMM assumes, though not always explicitly, that all covariates are fixed, or at least exogenous. Thus, the marginal mean is \begin{equation} E(Y_{it+1} \mid X_{it}) = X_{it}^T \beta \label{eq:lmm-marginal-model} \end{equation} because when $\{X_{it}\}_{t\ge1}$ is exogenous, $E(b_i \mid X_{it})=0$. Thus, when the covariates are exogenous, $\beta$ has both a conditional interpretation and a marginal interpretation\footnote{In this paper, we use the term ``conditional'' to denote a model that is conditional on the random effect, and we use ``marginal'' to denote a model where the random effect is marginalized over. This is consistent with the terminology in \citet{zeger1992overview} and \citet{heagerty2000marginalized}.}. This dual interpretation provides the opportunity to estimate $\beta$ with alternative approaches, such as generalized estimating equations (GEE) \citep{zeger1986}, depending on the desired robustness of the estimator of $\beta$ to deviations from the LMM assumptions.
Assuming the covariates are indeed exogenous, the maximum likelihood score equation for $\beta$ is: \begin{align} \frac{1}{n} \sum_{i=1}^n X_i V_i^{-1} (Y_i - X_i^T \beta) = 0, \label{eq:lmm-ee} \end{align} where $X_i = (X_{i1}, \ldots, X_{in_i})$, $Z_i = (Z_{i1}, \ldots, Z_{in_i})$ and $Y_i = (Y_{i2}, \ldots, Y_{in_i+1})^T$, $V_i = Z_i^T G Z_i + R_i$ is an $n_i \times n_i$ covariance matrix, and $R_i$ is an $n_i \times n_i$ diagonal matrix with all diagonal entries equal to $\sigma^2_\epsilon$. \subsection{Issue with endogenous covariates: marginal interpretation is no longer valid} Any LMM solves the same estimating equation as a GEE with a corresponding non-independence working correlation structure (e.g., an LMM with only a random intercept solves the same estimating equation as a GEE with a compound symmetric working correlation structure). In fact, \eqref{eq:lmm-ee} is the estimating equation for a GEE with marginal mean model \eqref{eq:lmm-marginal-model} and working correlation matrix $V_i$. In the GEE literature, estimation bias due to the inclusion of endogenous covariates has been discussed repeatedly; we first review this briefly. \citet{pepeanderson1994} first pointed out that when using GEE to estimate parameters in $E(Y_{it+1} \mid X_{it})$, a sufficient condition for estimation consistency is either \begin{equation} E(Y_{it+1} \mid X_{it}) = E(Y_{it+1} \mid X_{i1}, \ldots, X_{iT}) \label{eq:FCCM} \end{equation} or the use of a working independence correlation structure. When \eqref{eq:FCCM} is violated and a correlation structure other than working independence is used, they provided simulation results showing that bias can occur. \citet[Chapter 12]{diggle2002} reiterated this point, and referred to \eqref{eq:FCCM} as the ``full covariate conditional mean (FCCM)'' assumption.
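Returning to the score equation \eqref{eq:lmm-ee}: when the variance components $G$ and $\sigma^2_\epsilon$ are treated as known, \eqref{eq:lmm-ee} is linear in $\beta$ and is solved in closed form by the generalized least squares estimator $\hat{\beta}=(\sum_i X_i V_i^{-1} X_i^T)^{-1}\sum_i X_i V_i^{-1} Y_i$. The following numpy sketch (our illustration; the dimensions, variance components, and true $\beta$ are arbitrary choices, not from the paper) simulates a random-intercept LMM with exogenous covariates and solves \eqref{eq:lmm-ee} directly:

```python
import numpy as np

# Sketch: closed-form GLS solution of the score equation with known G, sigma2.
# All numerical values (G, sigma2, beta_true, dimensions) are illustrative.
rng = np.random.default_rng(1)
n, n_i, p = 200, 5, 2
G = np.array([[0.5]])                 # random-intercept variance
sigma2 = 1.0
beta_true = np.array([1.0, -0.5])

lhs = np.zeros((p, p))
rhs = np.zeros(p)
for _ in range(n):
    Xi = rng.normal(size=(p, n_i))    # p x n_i exogenous covariates
    Zi = np.ones((1, n_i))            # random-intercept design
    bi = rng.multivariate_normal(np.zeros(1), G)
    Yi = Xi.T @ beta_true + Zi.T @ bi + rng.normal(0.0, np.sqrt(sigma2), n_i)
    Vi = Zi.T @ G @ Zi + sigma2 * np.eye(n_i)   # V_i = Z_i^T G Z_i + R_i
    Vinv = np.linalg.inv(Vi)
    lhs += Xi @ Vinv @ Xi.T
    rhs += Xi @ Vinv @ Yi

beta_hat = np.linalg.solve(lhs, rhs)  # should be close to beta_true
print(beta_hat)
```

In practice the variance components are unknown and are estimated jointly with $\beta$, e.g., by (restricted) maximum likelihood in standard LMM software; the sketch isolates only the $\beta$-step.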
\citet{schildcrout2005} analyzed the bias-efficiency trade-off associated with working correlation choices of GEE for longitudinal binary data, when FCCM is violated due to exogenous covariates being time-varying. This potential bias from the violation of FCCM has also been warned about by \citet{pan2000} in the context of linear regression. When there are endogenous covariates, the FCCM assumption is unlikely to hold because $Y_{it+1}$ may impact future $X_{is}$. In this case, \citet{pepeanderson1994} suggested the use of working independence GEE to guarantee consistent estimation of parameters in the marginal mean $E(Y_{it+1} \mid X_{it})$. Because of the close tie between the estimating equations of LMM and GEE, Pepe and Anderson's point about GEE implies that estimators fitted using the standard LMM could be inconsistent when there are endogenous covariates. Indeed, if one intends to estimate parameters in the marginal mean $E(Y_{it+1} \mid X_{it})$, then using LMM \textit{as an estimation procedure} can result in inconsistent estimators because of the biased estimating equations. However, in our opinion, this is not the fundamental issue of LMM under endogeneity, but rather a technical consequence. More fundamentally, when there are endogenous covariates, LMM \textit{as a model} can imply a marginal mean relationship different from \eqref{eq:lmm-marginal-model}. $X_{it}$ being endogenous means it may depend on previous outcomes, which in turn implies dependence on the random effect $b_i$. Thus, $E(b_i \mid X_{it})$ is usually nonzero and the conditional model \eqref{eq:lmm-conditional-model} may no longer imply the marginal model \eqref{eq:lmm-marginal-model}. The marginal model implied by \eqref{eq:lmm-conditional-model} becomes, instead, \begin{equation} E(Y_{it+1} \mid X_{it}) = X_{it}^T \beta + Z_{it}^T E(b_i \mid X_{it}).
\label{eq:lmm-marginal-model-endogenous} \end{equation} As a concrete example, consider the case where each individual is observed for 2 time points ($n_i=2$), and the covariate at the second time point is the lag-1 outcome: $X_{i2} = Y_{i2}$. Suppose the variables are generated from the following LMM with a random intercept: $b_i\sim N(0,\sigma^2_u)$, $X_{i1} \sim N(0,\sigma^2_{X_1})$ independently of $b_i$, $Y_{i2} \mid X_{i1}, b_i \sim N(\beta_0 + \beta_1 X_{i1} + b_i, \sigma^2_\epsilon)$, $X_{i2} = Y_{i2}$, and $Y_{i3} \mid X_{i1}, Y_{i2}, X_{i2}, b_i \sim N(\beta_0 + \beta_1 X_{i2} + b_i, \sigma^2_\epsilon)$. This implies a parsimonious conditional relationship, $E(Y_{it+1} \mid X_{it}, b_i) = \beta_0 + \beta_1 X_{it} + b_i$, but the induced marginal relationship is rather complex: \begin{align} E(Y_{i2} \mid X_{i1}) & = \beta_0 + \beta_1 X_{i1}, \nonumber \\ E(Y_{i3} \mid X_{i2}) & = ( 1- \rho \zeta - \rho ) \beta_{0} + \{ (1- \rho \zeta) \beta_{1}+\rho \} X_{i2}, \nonumber \end{align} with $\rho = \sigma^2_u / (\sigma^2_u + \sigma^2_\epsilon)$ and $\zeta = \beta_{1} \sigma_{X_{1}}^{2} / (\beta_{1} \sigma_{X_{1}}^{2}+\sigma_{u}^{2}+\sigma_{\epsilon}^{2})$. \smallskip Therefore, when building an LMM with endogenous covariates, one needs to be aware that the modeling assumption is on the conditional relationship $E(Y_{it+1} | X_{it}, b_i)$, not the marginal relationship $E(Y_{it+1} | X_{it})$. Although it is attractive to give $\beta$ in \eqref{eq:standard-lmm} not only a conditional interpretation but also a marginal interpretation, which is valid with exogenous covariates, the latter interpretation can be invalid with endogenous covariates. In addition to this model interpretation issue, endogenous covariates also give rise to additional concerns in model fitting, which will be discussed in the next section. Researchers have also raised the issue of potential bias from endogenous covariates in the causal inference context.
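Returning to the two-time-point example above, the gap between the conditional slope $\beta_1$ and the induced marginal slope of $Y_{i3}$ on the endogenous covariate $X_{i2}$ can be checked by simulation. A sketch with illustrative parameter values (not from the paper); the first, exogenous regression recovers $\beta_1$, while the second does not:

```python
import numpy as np

def marginal_slopes(n=200_000, beta0=0.5, beta1=0.3, sigma_u2=1.0,
                    sigma_e2=1.0, sigma_x2=1.0, seed=1):
    """Simulate the two-time-point example and return the OLS slopes of
    Y_i2 on X_i1 (exogenous) and of Y_i3 on X_i2 (endogenous lag)."""
    rng = np.random.default_rng(seed)
    b = rng.normal(0, np.sqrt(sigma_u2), n)            # random intercepts
    x1 = rng.normal(0, np.sqrt(sigma_x2), n)
    y2 = beta0 + beta1 * x1 + b + rng.normal(0, np.sqrt(sigma_e2), n)
    x2 = y2                                            # endogenous covariate
    y3 = beta0 + beta1 * x2 + b + rng.normal(0, np.sqrt(sigma_e2), n)
    slope = lambda x, y: np.polyfit(x, y, 1)[0]        # OLS slope
    return slope(x1, y2), slope(x2, y3)
```

Because $X_{i2}$ and $Y_{i3}$ share the random intercept $b_i$, the marginal slope of $Y_{i3}$ on $X_{i2}$ is pulled well above $\beta_1$, illustrating that the marginal interpretation of $\beta_1$ fails here.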
\citet{vansteelandt2007confounding} discussed data-generating mechanisms where such bias would occur in a causal inference setting, and provided a class of estimators for a model on the mean outcome conditional on the observed history. \citet{tchetgen2012specifying} showed that when using inverse-probability weighting to adjust for time-varying treatment or dropout, the regression estimates obtained from GEE with such weights can be inconsistent. They provided a sufficient condition for the above estimators to be consistent, similar to the condition given by \citet{pepeanderson1994}. As a side note, for generalized linear mixed models, it is well appreciated that even when all covariates are exogenous, the conditional parameter and the marginal parameter are different due to the nonlinear link function, and there has been work in the literature on connecting the two interpretations \citep{zeger1988models, heagerty1999marginally, wanglouis2004}. For LMMs, the discrepancy between the two interpretations can only occur when there are endogenous covariates. \section{A conditional independence assumption} \label{sec:model} In an MRT, the vectors $X_{it}$ and $Z_{it}$ in an LMM can contain the treatment indicator $A_{it}$. To emphasize this we write \eqref{eq:standard-lmm} as \begin{equation} Y_{it+1} = {X_{it}^{(0)}}^T \beta_0 + A_{it} {X_{it}^{(1)}}^T \beta_1 + {Z_{it}^{(0)}}^T b_{0i} + A_{it} {Z_{it}^{(1)}}^T b_{1i} + \epsilon_{it+1}. \label{eq:proposed-model} \end{equation} Recall that for simplicity we consider only binary treatment. In this section, we provide an additional assumption that, if true, ensures valid inference and person-specific predictions via standard software even when there are endogenous covariates.
We make the usual LMM assumptions: the random effects $(b_{0i}^T, b_{1i}^T)$ are assumed to marginally follow a multivariate Gaussian distribution $N(0,G)$ with variance-covariance matrix $G$; the random noise $\epsilon_{it+1}$ is independent of $(H_{it}, A_{it}, b_{0i}, b_{1i})$ and is assumed to follow $N(0,\sigma_\epsilon^2)$. Unlike in a standard LMM, here we allow the covariate process $X_{it}$ to be endogenous. $A_{it}$ is assumed to be randomized with randomization probability depending only on $H_{it}$, not on $b_{0i}$ or $b_{1i}$, which is ensured by the MRT design. Note that the LMM \eqref{eq:proposed-model} implies a model on the treatment effect, given by \begin{align} E(Y_{it+1} \mid b_{1i}, H_{it}, A_{it}=1) - E(Y_{it+1} \mid b_{1i}, H_{it}, A_{it}=0) = {X_{it}^{(1)}}^T \beta_1 + {Z_{it}^{(1)}}^T b_{1i}. \nonumber \end{align} Furthermore, due to the endogenous $X_{it}$, it is likely that \begin{align} E(Y_{it+1} \mid H_{it}, A_{it}=1) - E(Y_{it+1} \mid H_{it}, A_{it}=0) \neq {X_{it}^{(1)}}^T \beta_1. \nonumber \end{align} In other words, model \eqref{eq:proposed-model} can only be interpreted as a \textit{conditional-on-the-random-effect} model for the treatment effect, but not as a \textit{marginal} model. A similar point, for the setting without treatment, was discussed extensively in Section \ref{sec:explain-issue}. Although model \eqref{eq:proposed-model} looks no different from a standard LMM that would typically be imposed with fixed covariates, to estimate the conditional-on-the-random-effects parameter $\beta$ we make an additional conditional independence assumption. The \textit{conditional independence assumption} is \begin{equation} X_{it} \perp (b_{0i}, b_{1i}) \mid H_{it-1}, A_{it-1}, Y_{it}. \label{eq:independence-assumption} \end{equation} This assumption does allow $X_{it}$ to be endogenous, but the endogenous covariate $X_{it}$ can only depend on the random effects through the data $H_{it-1}, A_{it-1}, Y_{it}$.
In particular, the endogenous part of $X_{it}$ may include a function of prior treatments and prior outcomes. In general, the validity of this assumption needs to be discussed in the context of the data at hand; we discuss the conditional independence assumption in the context of HeartSteps in Section \ref{sec:data-analysis}. This assumption allows us to decompose the likelihood, and this decomposition will provide a justification for the use of estimators from standard LMM software. Denote by $X_i$, $A_i$ and $Y_i$ the vectors of observations for individual $i$, and by $X$, $A$ and $Y$ the collections of observations for all individuals. Write $b_i = (b_{0i}, b_{1i})$. Suppose $G$, the covariance matrix of the random effects, is parametrized by $\theta$. The joint likelihood of the observed data, $\mathcal{L}(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y)$, can be written as \begin{align} \prod_i p(X_i, A_i, Y_i \mid \alpha, \beta, \theta, \sigma_\epsilon) &= \prod_i \int p(X_i, A_i, Y_i \mid b_i ; \alpha, \beta, \theta, \sigma_\epsilon) dF(b_i) \nonumber \\ &= \prod_i \Big\{ \int \prod_t p(X_{it} \mid H_{it-1}, A_{it-1}, Y_{it}, b_i) p(A_{it} \mid H_{it}, b_i) \nonumber \\ & \qquad \qquad \times p(Y_{it+1} \mid H_{it}, A_{it}, b_i ; \alpha, \beta, \theta, \sigma_\epsilon) dF(b_i) \Big\}.
\label{eq:lkd-1} \end{align} By the conditional independence assumption \eqref{eq:independence-assumption} and given that $A_{it}$ is randomized conditional on $H_{it}$, the joint likelihood in \eqref{eq:lkd-1} becomes \begin{align} \mathcal{L}(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y) & = \Big\{ \prod_i \prod_t p(X_{it} \mid H_{it-1}, A_{it-1}, Y_{it}) p(A_{it} \mid H_{it}) \Big\} \mathcal{L}_1(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y), \label{eq:lkd-2} \end{align} where \begin{align} \mathcal{L}_1(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y) = \prod_i \Big\{ \int \prod_t p(Y_{it+1} \mid H_{it}, A_{it}, b_i ; \alpha, \beta, \theta, \sigma_\epsilon) dF(b_i) \Big\}. \label{eq:partial-lkd} \end{align} Because the first factor on the right-hand side of \eqref{eq:lkd-2} does not involve $(\alpha, \beta, \theta, \sigma_\epsilon)$, any inference for $(\alpha, \beta, \theta, \sigma_\epsilon)$ that is based on the joint likelihood $\mathcal{L}(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y)$ can be equivalently based on the partial likelihood $\mathcal{L}_1(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y)$. Observe that $\mathcal{L}_1(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y)$ is exactly the likelihood function for a standard LMM where $X_{it}$ and $A_{it}$ are treated as fixed covariates. Thus, the maximum likelihood estimators that are obtained through standard LMM software are valid maximum likelihood estimators for the joint likelihood $\mathcal{L}(\alpha, \beta, \theta, \sigma_\epsilon \mid X, A, Y)$ under the conditional independence assumption, and (\ref{eq:lmm-ee}) with $X$ redefined to include the treatment indicator is a likelihood score equation for $\beta$ in the conditional-on-the-random-effect model. Note that even though the form of (\ref{eq:lmm-ee}) appears to indicate estimation of a regression coefficient in a marginal model, this is a false impression in the case of endogenous covariates.
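Each factor of the partial likelihood $\mathcal{L}_1$ in \eqref{eq:partial-lkd} is an integral over the random effects. For a single random intercept, this integral can be evaluated by Gauss--Hermite quadrature and checked against the closed-form normal likelihood with compound-symmetric covariance $\sigma_\epsilon^2 I + \sigma_b^2 \mathbf{1}\mathbf{1}^T$. A sketch with hypothetical values (not from the paper):

```python
import numpy as np

def loglik_quadrature(y, mu, sigma_b2, sigma_e2, n_nodes=60):
    """One individual's factor in the partial likelihood L_1:
    log integral over b ~ N(0, sigma_b2) of prod_t N(y_t; mu_t + b, sigma_e2),
    evaluated by Gauss-Hermite quadrature."""
    z, w = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0 * sigma_b2) * z               # change of variables
    r = y - mu                                    # residuals, shape (T,)
    # log density of y given each quadrature node for b
    ll = (-0.5 * ((r[None, :] - b[:, None]) ** 2).sum(axis=1) / sigma_e2
          - 0.5 * len(y) * np.log(2 * np.pi * sigma_e2))
    m = ll.max()                                  # log-sum-exp for stability
    return m + np.log(np.sum(w / np.sqrt(np.pi) * np.exp(ll - m)))

def loglik_closed_form(y, mu, sigma_b2, sigma_e2):
    """Same marginal log-likelihood via Y ~ N(mu, sigma_e2*I + sigma_b2*J)."""
    T = len(y)
    V = sigma_e2 * np.eye(T) + sigma_b2 * np.ones((T, T))
    r = y - mu
    _, logdet = np.linalg.slogdet(V)
    return -0.5 * (T * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(V, r))
```

The two evaluations agree to high precision, which is the integration that standard LMM software performs (analytically, in the linear case) when maximizing $\mathcal{L}_1$.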
Furthermore, recall that restricted maximum likelihood (REML) estimation can be viewed as maximum \textit{a posteriori} estimation in a Bayesian hierarchical model \citep{laird1982random}. This latter interpretation continues to hold for the REML estimators obtained through standard LMM software when there are endogenous covariates. In addition, it can be shown that the empirical Bayes predictor of the random effects $\hat{b}_i$ obtained through standard LMM software is a valid empirical Bayes predictor for model \eqref{eq:proposed-model} with endogenous covariates. We include proofs of these claims in the Appendix. The conditional independence assumption (\ref{eq:independence-assumption}) is similar to an assumption used by \citet{sitlani2012}, who aimed to use an LMM to assess causal effects in the context of noncompliance in surgical trials. They assumed conditional independence between the treatment assignment and the random effect given the observed history. This assumption allowed them to decompose the likelihood as is done above and thus use standard LMM estimators. \section{Simulation} \label{sec:simulation} In the simulation, we considered three generative models (GMs), in all of which the covariate is endogenous. In the first two GMs, the endogenous covariate $X_{it}$ equals the previous outcome $Y_{it}$ plus some random noise, so the conditional independence assumption \eqref{eq:independence-assumption} is valid. In GM 3, the endogenous covariate depends directly on $b_i$, so assumption \eqref{eq:independence-assumption} is violated. Details of the generative models are described in the following. In GM 1, we considered a simple case with only a random intercept and a random slope for $A_{it}$, so that $Z_{it}^{(0)} = Z_{it}^{(1)} = 1$ in model \eqref{eq:proposed-model}. The outcome is generated as $Y_{it+1} = \alpha_0 + \alpha_1 X_{it} + b_{i0} + A_{it}(\beta_0 + \beta_1 X_{it} + b_{i2}) + \epsilon_{it+1}$.
The random effects $b_{i0} \sim N(0, \sigma_{b0}^2)$ and $b_{i2} \sim N(0, \sigma_{b2}^2)$ are independent of each other. We generated the covariate as $X_{i1} \sim N(0,1)$ and $X_{it} = Y_{it} + N(0,1)$ for $t \geq 2$. The randomization probability is constant: $p_t = 1/2$. The exogenous noise is $\epsilon_{it+1} \sim N(0,\sigma_\epsilon^2)$. In GM 2, we considered the case where $Z_{it}^{(0)} = Z_{it}^{(1)} = (1, X_{it})$, and the randomization probability is time-varying. The outcome is generated as $Y_{it+1} = \alpha_0 + \alpha_1 X_{it} + b_{i0} + b_{i1} X_{it} + A_{it}(\beta_0 + \beta_1 X_{it} + b_{i2} + b_{i3} X_{it}) + \epsilon_{it+1}$. The random effects $b_{ij} \sim N(0, \sigma_{bj}^2)$, $0\leq j \leq 3$, are independent of each other. We generated the covariate as $X_{i1} \sim N(0,1)$ and $X_{it} = Y_{it} + N(0,1)$ for $t \geq 2$. The randomization probability depends on $X_{it}$: $p_{t}=0.7 \cdot {\mathds 1}(X_{it}>-1.27)+0.3 \cdot {\mathds 1}(X_{it}\leq -1.27)$. Here ${\mathds 1}(\cdot)$ denotes the indicator function, and the cutoff $-1.27$ was chosen so that $p_t$ equals 0.7 or 0.3 each for about half of the time. The exogenous noise is $\epsilon_{it+1} \sim N(0,\sigma_\epsilon^2)$. GM 3 is the same as GM 1, except that the covariate $X_{it}$ depends directly on $b_i$: $X_{i1} \sim N(b_{i0},1)$ and $X_{it} = Y_{it} + N(b_{i0},1)$ for $t \geq 2$. We chose the parameter values as follows: $\alpha_0 = -2$, $\alpha_1 = -0.3$, $\beta_0 = 1$, $\beta_1 = 0.3$, $\sigma_{b0}^2 = 4$, $\sigma_{b1}^2 = 1/4$, $\sigma_{b2}^2 = 1$, $\sigma_{b3}^2 = 1/4$, $\sigma_\epsilon^2 = 1$. For each of the three GMs, we simulated with sample sizes $n=30, 100, 200$ and numbers of observations per individual $n_i = T = 10, 30$. Each setting was replicated 1,000 times.
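The data-generating loop of GM 1 can be sketched as follows (parameter values from the text; the variable names `b0`/`b2` mirror $b_{i0}$ and $b_{i2}$, and this is an illustrative re-implementation, not the authors' simulation code):

```python
import numpy as np

def generate_gm1(n=100, T=30, alpha=(-2.0, -0.3), beta=(1.0, 0.3),
                 sigma_b0=2.0, sigma_b2=1.0, sigma_e=1.0, seed=2):
    """Generative model 1: endogenous X_it = Y_it + noise, p_t = 1/2.
    Returns (n, T) arrays for X_it, A_it and Y_{i,t+1}."""
    rng = np.random.default_rng(seed)
    X = np.empty((n, T)); A = np.empty((n, T)); Y = np.empty((n, T))
    b0 = rng.normal(0, sigma_b0, n)        # random intercept, var 4
    b2 = rng.normal(0, sigma_b2, n)        # random treatment effect, var 1
    x = rng.normal(size=n)                 # X_{i1} ~ N(0, 1)
    for t in range(T):
        a = rng.binomial(1, 0.5, n)        # randomized treatment, p_t = 1/2
        y = (alpha[0] + alpha[1] * x + b0
             + a * (beta[0] + beta[1] * x + b2)
             + rng.normal(0, sigma_e, n))  # Y_{i,t+1}
        X[:, t], A[:, t], Y[:, t] = x, a, y
        x = y + rng.normal(size=n)         # endogenous X_{i,t+1} = Y_{i,t+1} + noise
    return X, A, Y
```

GM 3 would differ only in the last line, where the covariate noise is centered at $b_{i0}$ instead of 0, breaking assumption \eqref{eq:independence-assumption}.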
The estimation was done using the R package \textsf{lmer} \citep{lmer} for standard LMM, and 95\% confidence intervals were computed based on the $t$ distribution with degrees of freedom obtained by the Satterthwaite approximation \citep{satterthwaite1941synthesis}, which is implemented in the R package \textsf{lmerTest} \citep{lmerTest}. Bias, standard deviation (sd) and coverage probability (cp) of the nominal 95\% confidence intervals for the estimated $\beta_0$ and $\beta_1$ are presented in Table \ref{tab:simulation-result}. As expected, the estimators are consistent for GM 1 and GM 2, and they are inconsistent for GM 3 because of the violation of the conditional independence assumption \eqref{eq:independence-assumption}. For GM 1 and GM 2, the confidence interval coverage probability can be slightly lower than the nominal level for some of the parameters for small $n$ or small $T$, but it returns to the nominal level as the sample size or the total number of time points gets larger. Additional simulation results, for more choices of $n$ and $T$ and for the performance of the estimated $\alpha_0$, $\alpha_1$, variance components $\sigma_{bj}^2$ ($0\leq j\leq 3$) and $\sigma_\epsilon^2$, are in the Appendix; the conclusion is similar to the results shown for the $\beta$'s above. \begin{table}[htbp] \centering \input{table-simulation-1.tex} \caption{Bias, standard deviation (sd) and coverage probability (cp) of the nominal 95\% confidence interval for the estimated $\beta_0$ and $\beta_1$ in the simulation study. $n$ denotes sample size; $T$ denotes the total number of observations for each individual; GM denotes generative model. The result is based on 1,000 replicates for each setting.} \label{tab:simulation-result} \end{table} \section{Illustrative data analysis of HeartSteps} \label{sec:data-analysis} \subsection{Data and model assumptions} The HeartSteps study \citep{klasnja2018} is a 6-week micro-randomized trial of an mHealth intervention to encourage activity among sedentary adults.
The following analysis focuses on the time-varying treatment consisting of contextually-tailored activity suggestions. These suggestions could be delivered up to 5 times a day at pre-specified individual-specific time points (morning commute, lunchtime, mid-afternoon, evening commute, and after-dinner). The content of each suggestion was tailored to the current time of day, weekend vs.\ weekday, weather, and the individual's current location. The activity suggestions were designed to help individuals get activity throughout the day. Because the suggestions were tailored to the individual's current context, the research team expected to see the greatest impact of the activity suggestions on near-term, proximal activity. Prior to the randomization at each time point, software on the smartphone determined whether an individual was \textit{available} for treatment at that time. If the activity recognition on the phone determined that an individual was operating a vehicle, the individual was considered unavailable for safety reasons. If an individual had just finished an activity bout in the prior 90 seconds, they were considered unavailable for treatment in order to minimize user burden and aggravation. Lastly, because the software on the server and smartphone required an internet connection to send a suggestion, if the smartphone did not have wireless connectivity the individual was deemed unavailable. At each of the five time points each day for each individual, availability was assessed, the context was recorded, and, if the individual was available, HeartSteps randomized delivery of an activity suggestion to the individual with probability 3/5. The sample for this analysis consisted of 7,540 time points from 37 individuals.
The individuals were available for 6,061 (80.4\%) time points, unavailable due to no internet connection for 602 (8.0\%) time points, unavailable due to being detected as in transit for 841 (11.1\%) time points, and unavailable due to being detected to have just finished an activity bout in the prior 90 seconds for 36 (0.5\%) time points. Let $A_{it}=1$ if an activity suggestion is delivered at time $t$ for individual $i$ and $A_{it}=0$ otherwise. The proximal outcome $Y_{it+1}$ is the (log-transformed) 30-minute step count following time point $t$. We used three covariates in the model: \begin{itemize} \item $X_{it,1}$: day in the study for time point $t$, coded as $0,1,\ldots,41$. \item $X_{it,2}$: whether the individual was at home or work at time point $t$; $X_{it,2} = 1$ if at home or work, 0 if at some other location. \item $X_{it,3}$: (log-transformed) 30-minute step count preceding time point $t$. \end{itemize} We specify model \eqref{eq:proposed-model} in the HeartSteps context as follows: $X_{it}^{(0)} = (X_{it,1}, X_{it,2}, X_{it,3})$; $X_{it}^{(1)} = (X_{it,1},$ $X_{it,2})$; the model contains a random intercept, $Z_{it}^{(0)} = 1$, and a random slope for $A_{it}$, $Z_{it}^{(1)} = 1$. We denote the availability status of individual $i$ at time $t$ by $I_{it}$ ($I_{it} = 1$ if available; $0$ otherwise). In the model, we multiply $A_{it}$ by $I_{it}$ to operationalize the notion that the treatment may only be delivered when the individual is available. Because the relationship between $Y_{it+1}$ and the covariates $X_{it}^{(0)}$ can depend on the availability status, we included an interaction between $I_{it}$ and $X_{it}^{(0)}$.
Thus, the LMM is given by \begin{align} Y_{it+1} & = \alpha_0 + \alpha_1 X_{it,1} + \alpha_2 X_{it,2} + \alpha_3 X_{it,3} + I_{it} (\tilde{\alpha}_0 + \tilde{\alpha}_1 X_{it,1} + \tilde{\alpha}_2 X_{it,2} + \tilde{\alpha}_3 X_{it,3}) + b_{0i} \nonumber \\ & \quad + A_{it} I_{it} (\beta_0 + \beta_1 X_{it,1} + \beta_2 X_{it,2} + b_{1i}) + \epsilon_{it+1} \label{eq:model-data-analysis} \end{align} where $\epsilon_{it+1} \sim N(0,\sigma_\epsilon^2)$, and the random effects $(b_{0i}, b_{1i}) \sim N(0, G)$ with $G$ being a $2\times 2$ variance-covariance matrix. Here $b_{0i}$ accounts for the between-individual variation in the 30-minute step count under no treatment, and $b_{1i}$ accounts for the between-individual variation in the treatment effect on the 30-minute step count. In model \eqref{eq:model-data-analysis}, $X_{it,2}$, $X_{it,3}$ and $I_{it}$ are possibly endogenous. Location, $X_{it,2}$, is most likely exogenous but might be endogenous because the number of steps an individual took following a prior time point, combined with the location s/he was at then, might be predictive of whether s/he would be at home/work or at some other place at the subsequent time point. The 30-minute step count preceding time $t$, $X_{it,3}$, might be correlated with the 30-minute step count following time $t-1$, $Y_{it}$, because an individual might walk less if s/he had already walked earlier in the day. For the availability status $I_{it}$, unavailability due to being in transit is likely exogenous but may be endogenous for a reason similar to that of location, $X_{it,2}$. Unavailability due to having just finished an activity bout may be endogenous for a reason similar to that of the step count preceding time $t$, $X_{it,3}$. We argue that the conditional independence assumption \eqref{eq:independence-assumption} is plausible for all three variables.
For location, $X_{it,2}$: because the enrollment criterion required each individual to either have a full-time daytime job or be a student, the time-varying location of such individuals with regular schedules is unlikely to depend on unmeasured baseline factors (i.e., the random effects) that impact step count. For the 30-minute step count preceding time $t$, $X_{it,3}$: the impact of the random effects should be largely explainable through earlier outcomes and covariates, as those are also step counts, just for other time windows. For $I_{it}$: most instances of unavailability (1,443/1,479) are due to being in transit or loss of internet connection, so the conditional independence is likely to approximately hold for $I_{it}$ for a reason similar to that of $X_{it,2}$. \subsection{Results} We fitted model \eqref{eq:model-data-analysis} using the R package \textsf{lmer} \citep{lmer} for standard LMM, because standard LMM yields valid estimators under the conditional independence assumption \eqref{eq:independence-assumption}. The first three columns in Table \ref{tab:data-analysis-result} show the estimated fixed effects with 95\% confidence intervals and the estimated variance components. The estimated variance for $b_{1i}$ is extremely small and the estimated correlation between $b_{0i}$ and $b_{1i}$ is 1.000, suggesting that we might not have enough data to fit two separate random effects, so the fit collapses onto a linear combination of the two. We conducted a likelihood ratio test for nonzero variance of $b_{1i}$, and the p-value was 0.72. Note that likelihood ratio tests for nonzero variance components can be conservative because the null value ($\mbox{Var}(b_{1i}) = 0$) is on the boundary of the parameter space \citep{self1987asymptotic, stram1994variance, crainiceanu2004likelihood}, so we use this test and the critical value only as a guideline.
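For the simplest version of this boundary problem, testing a single variance component against zero (unlike the model here, where a covariance parameter also vanishes under the null), the asymptotic null distribution of the likelihood ratio statistic is the 50:50 mixture $\tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1$ \citep{self1987asymptotic}. A sketch of the corresponding p-value, with the naive $\chi^2_1$ p-value for comparison (illustrative, not the computation used for the fitted model above):

```python
import math

def boundary_lrt_pvalue(lrt_stat):
    """P-value for testing a single variance component = 0, using the
    50:50 mixture of chi2_0 and chi2_1 as the null distribution."""
    if lrt_stat <= 0:
        return 1.0  # the chi2_0 half puts all mass at zero
    # chi2_1 survival function: P(Z^2 > L) = erfc(sqrt(L / 2))
    return 0.5 * math.erfc(math.sqrt(lrt_stat / 2.0))

def naive_chi2_pvalue(lrt_stat):
    """Naive chi2_1 p-value, which is conservative at the boundary."""
    return math.erfc(math.sqrt(max(lrt_stat, 0.0) / 2.0))
```

For any positive statistic the mixture p-value is half the naive one, which is why ignoring the boundary makes the test conservative.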
The result suggests that the potential heterogeneity in the treatment effect may not be large enough to be detected from the data. The fit of model \eqref{eq:model-data-analysis} with $b_{1i}$ removed is presented in the last two columns of Table \ref{tab:data-analysis-result}. The estimated treatment effects, which are conditional on the observed history and the unobserved random effects, are similar across the two model fits, in both the point estimates and the confidence intervals. The data indicate that, for an individual, the treatment has a positive effect at the beginning of the study ($\hat\beta_0 > 0$), and that the effect decreased over time ($\hat\beta_1 < 0$). This is likely due to individuals' habituation to the activity suggestions, which is consistent with the exit interviews reported by \citet{klasnja2018}, in which individuals reported that ``the suggestions became boring after 2--4 weeks''. On the other hand, the data indicate no moderating influence of location (whether an individual was at home/work or at some other place) on the treatment effect for an individual.
\begin{table}[htbp] \centering \begin{tabular}{rrcrc} \hline & \multicolumn{2}{c}{Model with $b_{1i}$} & \multicolumn{2}{c}{Model without $b_{1i}$} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} coefficient & estimate & 95\% CI & estimate & 95\% CI\\ \hline $\alpha_0$ & 1.990 & (\phm1.643, \phm2.338) & 1.997 & (\phm1.646, \phm2.348) \\ $\alpha_1$ & -0.009 & (-0.021, \phm0.002) & -0.009 & (-0.021, \phm0.002) \\ $\alpha_2$ & 0.851 & (\phm0.238, \phm1.465) & 0.840 & (\phm0.226, \phm1.453) \\ $\alpha_3$ & 0.539 & (\phm0.495, \phm0.583) & 0.537 & (\phm0.493, \phm0.582) \\ $\tilde{\alpha}_0$ & -0.177 & (-0.586, \phm0.232) & -0.182 & (-0.591, \phm0.228) \\ $\tilde{\alpha}_1$ & 0.008 & (-0.006, \phm0.023) & 0.008 & (-0.007, \phm0.023) \\ $\tilde{\alpha}_2$ & -0.871 & (-1.522, -0.221) & -0.863 & (-1.514, -0.212) \\ $\tilde{\alpha}_3$ & -0.156 & (-0.206, -0.107) & -0.154 & (-0.204, -0.104) \\ $\beta_0$ & 0.415 & (\phm0.105, \phm0.724) & 0.410 & (\phm0.100, \phm0.719) \\ $\beta_1$ & -0.017 & (-0.028, -0.005) & -0.017 & (-0.028, -0.005) \\ $\beta_2$ & 0.122 & (-0.156, \phm0.400) & 0.130 & (-0.148, \phm0.408) \\ \hline \hline $\mbox{Var}(b_{0i})$ & 0.160 & & 0.182 & \\ $\mbox{Var}(b_{1i})$ & 0.003 & & - & \\ $\mbox{Corr}(b_{0i}, b_{1i})$ & 1.000 & & - & \\ $\mbox{Var}(\epsilon_{it+1})$ & 7.138 & & 7.139 & \\ \hline \end{tabular} \caption{Estimated coefficients and 95\% confidence intervals for model \eqref{eq:model-data-analysis} of the HeartSteps data. Estimators are obtained using the R package \textsf{lmer}, and the 95\% confidence intervals are based on the $t$ distribution with the Satterthwaite approximation implemented in the R package \textsf{lmerTest}.} \label{tab:data-analysis-result} \end{table} \section{Discussion} \label{sec:discussion} Linear mixed models (LMMs) were originally developed for settings with fixed covariates, and it has been natural for researchers to think about the induced marginal model when building and interpreting the fixed effects in an LMM.
In this paper, we reviewed related literature on the potential bias that arises when including endogenous covariates in an LMM. We argued that the fundamental issue in an LMM with endogenous covariates is that the fixed effects have only a conditional-on-the-random-effects interpretation; the marginal interpretation is no longer valid. In terms of estimation in an LMM with endogenous covariates, we introduced a conditional independence assumption and showed that under this assumption standard LMM software can still be used to obtain valid estimators of the fixed effects and the variance components, as well as valid predictions of the random effects. We used an LMM to model the effect of the sequentially assigned treatment in the HeartSteps MRT, where covariates are likely endogenous, and we discussed the plausibility of the conditional independence assumption for those covariates. The inclusion of endogenous covariates in an LMM implies that the fixed effects may only be interpreted conditionally on an individual's random effects. Thus, a future research question is to develop estimation methods for the parameters in a marginal mean model that are coherent with an LMM with endogenous covariates, which would enable interpretation in the marginal sense. Related work in generalized linear mixed models, but with exogenous covariates, includes \citet{heagerty1999marginally}, \citet{heagerty2000marginalized}, and \citet{larsen2000interpreting}. In a standard LMM with exogenous covariates, the empirical best linear unbiased predictor (eBLUP) equals the empirical Bayes estimator where a noninformative prior is imposed on the fixed effects and the variance components are estimated through REML \citep{lindley1972bayes, dempfle1977comparison}. In Section \ref{sec:model} we showed through a partial likelihood argument that the empirical Bayes estimator of the random effects from standard LMM is still a valid empirical Bayes estimator in the case of endogenous covariates.
However, it is unknown whether it is still the eBLUP in this case without further assumptions. Along the same lines, in a standard LMM the restricted maximum likelihood (REML) estimator of the variance components can be viewed as the maximum \textit{a posteriori} estimator in a Bayesian hierarchical model \citep{laird1982random}, and in Section \ref{sec:model} we showed that this latter interpretation is valid for the REML estimators obtained through standard LMM software when there are endogenous covariates. Another interpretation of the REML estimator in a standard LMM is as the maximizer of the likelihood of linear combinations of the outcome that are orthogonal to the fixed effects. It is unknown whether this interpretation continues to hold in the endogenous covariate case. In the literature, there has been work on handling endogenous covariates in longitudinal data via joint modeling of the covariate process and the outcome process, which could provide alternative approaches to the method proposed in this paper for situations where the conditional independence assumption is questionable. Note that each of these alternative approaches requires certain assumptions on the covariate process, and these assumptions themselves need to be verified in the context of each application. For example, \citet{miglioretti2004marginal} modeled the covariate process, and assumed that $X_{it} \perp b_i \mid X_{i1}, X_{i2}, \ldots, X_{it-1}$. \citet{roy2006conditional} proposed to model the distribution of covariates given the history to infer the dependence of a Poisson process outcome on the endogenous covariates. \citet{sitlani2012} proposed to use joint modeling for analyzing the effect of a surgical trial (where the time-varying treatment is a jump process) under noncompliance. \citet{shardell2018joint} proposed a joint model approach, assuming either that the distribution of $X_{it}$ can be correctly modeled or that the endogenous covariate is the lagged outcome.
In MRTs, treatments are sequentially randomly assigned with known randomization probability. The method in this paper utilizes the randomization to the extent that the treatment indicator $A_{it}$ automatically satisfies the conditional independence assumption. An interesting future problem is whether the randomization can be further leveraged and related techniques such as action centering (e.g., \citet{brumback2003intensity}, \citet{goetgeluk2008conditional}, and \citet{boruvka2017}) can be adapted to introduce robustness to the estimation procedure.
\section{Introduction} Experiments aiming to directly detect the interactions of dark matter (DM) particles in underground laboratories have made tremendous progress over the past decades and place some of the strongest bounds on the parameter space of many DM models~\cite{Cushman:2013zza}. Indeed, these experiments have become so sensitive that they can constrain even DM models where standard spin-independent and spin-dependent interactions are absent at leading order~\cite{Xia:2018qgs}. As a result there has been a rapidly growing interest in DM models with momentum- or velocity-dependent interactions, which can be described by a general effective field theory (EFT) in the non-relativistic limit~\cite{Fitzpatrick:2012ix,Anand:2013yka,Catena:2014uqa,Gresham:2014vja,Catena:2014epa,Gluscevic:2015sqa,Dent:2015zpa,Kahlhoefer:2016eds,Bishara:2016hek,Edwards:2018lsl}. In these models it becomes essential to include loop effects, which may reintroduce spin-independent interactions and thereby substantially boost the expected event rates~\cite{Haisch:2013uaa,Crivellin:2014qxa,Crivellin:2014gpa,DEramo:2016gos,Bishara:2018vix}. Particular attention has been paid to models in which DM scattering is mediated by a pseudoscalar exchange particle~\cite{Freytsis:2010ne,Dienes:2013xya,Boehm:2014hva}, motivated partially by the interesting implications for collider~\cite{Ipek:2014gua,No:2015xqa,Goncalves:2016iyg,Bauer:2017ota,Bauer:2017fsw,Pani:2017qyd,Tunney:2017yfp,Banerjee:2017wxi,Haisch:2018kqx,Abe:2018bpo} and flavour~\cite{Batell:2009di,Freytsis:2009ct,Batell:2009jf,Dolan:2014ska,Berlin:2015wwa,Dobrich:2018jyi} physics. At leading order the resulting interactions are so strongly suppressed in the non-relativistic limit that they are well below the ``neutrino floor'' which indicates the ultimate reach of direct detection experiments~\cite{Billard:2013qya}. 
However, several recent studies have shown that loop-induced spin-independent interactions can change this picture dramatically, in particular when taking into account the interactions between the pseudoscalar mediator and the SM Higgs boson required by gauge invariance~\cite{Arcadi:2017wqi,Sanderson:2018lmj,Li:2018qip,Abe:2018emu}. In fact, ref.~\cite{Abe:2018emu} pointed out that for this particular model even two-loop processes give a relevant contribution and need to be properly included for an accurate estimate of experimental sensitivities. In the present work we generalise these results by considering spin-0 mediators that couple to DM and Standard Model (SM) quarks with arbitrary CP phases. We furthermore treat the coupling between the mediator and SM Higgs bosons as a free parameter and thus remain agnostic about the underlying ultraviolet (UV) completion. A particular emphasis is placed on the impact of two-loop processes. We show that, at least for heavy quarks, accurate results can be obtained by first integrating out the heavy quark and then performing all further calculations in the resulting EFT. This approach substantially simplifies and speeds up the evaluation of direct detection constraints. We find that for general CP phases loop-induced spin-independent interactions may be strong enough to lead to detectable signals in near-future direct detection experiments, such as LZ~\cite{Akerib:2018lyp} or XENONnT~\cite{Aprile:2015uzo}. The importance of our results is illustrated for a number of relevant scenarios. We show that for DM models with maximal CP violation (as studied e.g.\ in the context of self-interacting DM~\cite{Kahlhoefer:2017umn}) loop effects can be comparable to the leading-order contribution and change the shape of the recoil spectrum in important ways. Large effects are also found in the CP-violating Higgs portal model, which has been the subject of several recent studies~\cite{Beniwal:2015sdl,Athron:2018hpc,Abe:2019wku}. 
In both cases loop-induced interactions enable direct detection experiments to probe parameter regions that would otherwise be out of reach. The paper is structured as follows. In section~\ref{sec:loops} we briefly introduce the general model with free CP phases and then present our central results on how to perform the mapping onto the low-energy EFT relevant for DM direct detection. We discuss in detail the importance of two-loop processes and the matching onto non-relativistic effective operators. Specific applications of the general results are presented in section~\ref{sec:implications}, where we also calculate the sensitivity of present and future direct detection experiments. We summarise our findings and conclude in section~\ref{sec:conclusions}. Detailed results from our one-loop and two-loop calculations are presented in the appendices~\ref{app:one-loop} and~\ref{app:two-loop}, respectively. Finally, appendix~\ref{app:rel-nuc-wilson} provides details on nuclear form factors. \section{Loop effects in direct detection} \label{sec:loops} We investigate a simplified model of a Dirac fermion DM particle $\chi$ interacting with SM fermions $f$ through a general spin-0 mediator $a$ with mass $m_a$ greater than the bottom-quark mass $m_b$: \begin{align} \mathcal{L} = g_\chi\,a\,\bar{\chi}\, ( \cos\phi_\chi + i \gamma_5 \sin\phi_\chi)\, \chi + g_\text{SM}\sum_{f} \frac{m_f}{v} a\,\bar{f}\,( \cos\phi_\text{SM} + i \gamma_5 \sin\phi_\text{SM})\,f\;, \label{eq:L} \end{align} where $\phi_\chi$ and $\phi_\text{SM}$ are CP phases, $v\approx 246\,\mathrm{GeV}$ is the electroweak vacuum expectation value, $m_f$ are the SM fermion masses and $g_\chi$ as well as $g_\text{SM}$ denote the couplings of $a$ to DM and SM fermions, respectively. 
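The coupling structure $\cos\phi + i\gamma_5\sin\phi$ in eq.~(\ref{eq:L}) interpolates between a pure scalar vertex ($\phi = 0$) and a pure pseudoscalar vertex ($\phi = \pi/2$). As a minimal cross-check (ours, not part of the paper's analysis; the Dirac representation of $\gamma_5$ is assumed), the two limits can be verified explicitly:

```python
import numpy as np

# gamma_5 in the Dirac representation: off-diagonal 2x2 identity blocks.
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
gamma5 = np.block([[Z2, I2], [I2, Z2]])
I4 = np.eye(4)

def vertex(phi):
    """Spin-0 vertex structure cos(phi)*1 + i*sin(phi)*gamma_5 of eq. (eq:L)."""
    return np.cos(phi) * I4 + 1j * np.sin(phi) * gamma5

scalar = vertex(0.0)        # phi = 0: pure scalar coupling
pseudo = vertex(np.pi / 2)  # phi = pi/2: pure pseudoscalar coupling

assert np.allclose(scalar, I4)
assert np.allclose(pseudo, 1j * gamma5)
```

The same structure applies independently to the DM phase $\phi_\chi$ and the SM phase $\phi_\text{SM}$.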
We have further assumed Yukawa-like couplings in agreement with the hypothesis of minimal flavour violation (MFV)~\cite{DAmbrosio:2002vsn} such that flavour physics constraints on the universal coupling $g_\text{SM}$ are weakened (see section~\ref{subsec:maxCP}).\footnote{In a generic MFV scenario a slightly more general Lagrangian than eq.~(\ref{eq:L}) can be written down, as different couplings to up- and down-type quarks are allowed. For the scope of this work, however, we will focus on the case of one universal coupling.} For $\phi_\chi = \phi_\text{SM} = 0$ we recover the well-known simplified model of a scalar mediator, whereas for $\phi_\chi = \phi_\text{SM} = \pi/2$ we obtain a CP-conserving theory with a pseudoscalar mediator~\cite{Abdallah:2015ter}. In the former case constraints on the model from direct detection experiments are very strong, whereas in the latter case they are almost entirely absent~\cite{Arcadi:2017wqi,Sanderson:2018lmj,Abe:2018emu}. Here we will treat the CP phases as free parameters in order to study the impact of different phase combinations on the predictions for direct detection experiments. The simplified model in eq.~(\ref{eq:L}) does not respect all gauge symmetries of the SM before electroweak symmetry breaking. The interactions between $a$ and SM fermions are therefore expected not to appear in isolation but in combination with additional interactions between $a$ and the SM Higgs boson $h$. In the present work, we will not discuss how these different interactions can be linked in specific UV completions. Instead, we introduce an additional free parameter $\lambda_{ah}$ and supplement eq.~(\ref{eq:L}) by the interaction term \begin{align} \mathcal{L}^\text{Higgs}_\text{int} = \frac{1}{2} \lambda_{ah} v h a^2\;. \end{align} We will show that this interaction can play a relevant role in the phenomenology of this model. 
Moreover, it will be of particular importance in section~\ref{subsec:CPHiggs} where we will identify $a$ with the SM Higgs boson $h$ itself. Note that we neglect additional interaction terms involving two Higgs bosons. Although such terms are in general expected to be present, they do not give any relevant contribution to the calculation of direct detection signatures. \subsection{Low-energy effective Lagrangian} \label{sec:one-loop} \begin{figure}[t] \includegraphics[width=2.75cm]{TreeLvlDiagramNoLabels.pdf} \hspace{1.5cm} \includegraphics[width=4.5cm]{HiggsTriangleNoLabel.pdf} \hspace{1.5cm} \includegraphics[width=4.5cm]{BoxDiagramNoLabel.pdf} \caption{Tree-level, Higgs-induced triangle and box diagram contributions to the cross section relevant for direct searches of DM. All Feynman diagrams are drawn with \texttt{TikZ-Feynman}~\cite{Ellis:2016jkw}.}\label{fig:CPDiagram1} \end{figure} \noindent To calculate event rates in direct detection experiments, we need to determine the effective interactions between DM and quarks that result from the three types of diagrams illustrated in figure~\ref{fig:CPDiagram1}. For the discussion below it will be useful to distinguish between interactions that lead to spin-independent (SI) and to spin-dependent (SD) scattering in the non-relativistic limit.\footnote{Note that here and below we use the term ``spin-independent'' to refer to all types of interactions that do not depend on the nucleus spin, irrespective of whether or not they are suppressed in the non-relativistic limit. Accordingly, the term ``spin-dependent'' refers to all interactions that are not spin-independent, including momentum-suppressed interactions. 
Indeed, unsuppressed spin-dependent interactions are absent in the model that we consider.} Starting with the tree-level exchange of $a$ illustrated in the left panel of figure \ref{fig:CPDiagram1}, we obtain \begin{align} \mathcal{L}^{\text{SI}}_{\text{tree}} &= \sum_{q=\text{all}} m_q\,\mathcal{C}^\text{tree}\big(\cos(\phi_\chi) \cos(\phi_\text{SM}) \,\bar{\chi} \chi + \sin(\phi_\chi) \cos(\phi_\text{SM})\,\bar{\chi}i \gamma_5 \chi\big)\,\bar{q} q\;,\\ \mathcal{L}^{\text{SD}}_{\text{tree}} &= \sum_{q=\text{all}} m_q\,\mathcal{C}^\text{tree}\big(\cos(\phi_\chi) \sin(\phi_\text{SM}) \,\bar{\chi} \chi + \sin(\phi_\chi) \sin(\phi_\text{SM})\,\bar{\chi}i \gamma_5 \chi\big)\,\bar{q}i \gamma_5 q\;, \end{align} where the sum runs over all quark species. Here we have defined the tree-level coefficient \begin{align} \mathcal{C}^\text{tree} = \frac{g_\chi\, g_\text{SM}}{v\, m_a^2}\;, \end{align} and have kept the dependence on the two CP phases explicit. Next we consider the Higgs-mediated exchange shown in the middle panel of figure \ref{fig:CPDiagram1}, which maps onto the purely spin-independent interaction \begin{align} \mathcal{L}^\text{SI}_{\text{triangle}} =\sum_{q=\text{all}} \frac{m_q \lambda_{ah}}{m_h^2}\left( \mathcal{C}_S^{\text{triangle}} \,\bar{\chi} \chi\,\bar{q} q +\mathcal{C}_{PS}^{\text{triangle}} \,\bar{\chi} i \gamma_5 \chi\, \bar{q} q\right) \;, \end{align} where the sum again runs over all quarks. We have further introduced the triangle coefficients \begin{align} \mathcal{C}_S^{\text{triangle}} &= \frac{g_\chi^2}{(4\pi)^2}\, m_\chi \left[ (1 + \cos(2\phi_\chi))\,C_0(m_\chi^2,\, m_a^2,\,m_\chi^2) + C_2(m_\chi^2,\, m_a^2,\,m_\chi^2)\right]\;,\\ \mathcal{C}_{PS}^{\text{triangle}} &= \frac{g_\chi^2}{(4\pi)^2}\, m_\chi \sin(2\phi_\chi)\,C_0(m_\chi^2,\, m_a^2,\,m_\chi^2)\;, \end{align} in terms of the loop functions $C_0(m_\chi^2,\, m_a^2,\,m_\chi^2)$ and $C_2(m_\chi^2,\, m_a^2,\,m_\chi^2)$, which are given in appendix~\ref{app:one-loopfcts}. 
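The phase structure of the tree-level interactions above can be illustrated with a short numerical sketch (benchmark values are ours, chosen only for illustration): each of the four bilinear combinations carries a product of sines and cosines of $\phi_\chi$ and $\phi_\text{SM}$ multiplying $\mathcal{C}^\text{tree}$.

```python
import numpy as np

v = 246.0            # electroweak vev [GeV]
m_a = 100.0          # illustrative mediator mass [GeV]
g_chi = g_SM = 1.0   # illustrative couplings
C_tree = g_chi * g_SM / (v * m_a**2)  # tree-level coefficient [GeV^-3]

def tree_weights(phi_chi, phi_SM):
    """Phase weights multiplying C_tree for the four tree-level bilinears."""
    return {
        "SS": np.cos(phi_chi) * np.cos(phi_SM),  # (chi chi)(q q): unsuppressed SI
        "PS": np.sin(phi_chi) * np.cos(phi_SM),  # (chi i g5 chi)(q q)
        "SP": np.cos(phi_chi) * np.sin(phi_SM),  # (chi chi)(q i g5 q)
        "PP": np.sin(phi_chi) * np.sin(phi_SM),  # (chi i g5 chi)(q i g5 q)
    }

# Maximal CP violation (phi_chi = 0, phi_SM = pi/2): only the SP term survives,
# so unsuppressed SI scattering is absent at tree level.
w = tree_weights(0.0, np.pi / 2)
assert abs(w["SS"]) < 1e-12 and abs(w["SP"] - 1.0) < 1e-12
```

This makes explicit why, in the maximally CP-violating corner discussed below, the tree-level contribution is entirely spin-dependent and loop-induced SI interactions become decisive.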
Finally, we have to take into account the box diagram in the right panel of figure~\ref{fig:CPDiagram1}. We expand the amplitude in terms of the quark momentum, which is the smallest scale in the diagram~\cite{Abe:2018emu}, and obtain \begin{align} \begin{split} \label{effbox1} \mathcal{L}^{\text{SI}}_{\text{box}} =& \sum_{q=u,d,s} \left(m_q\,\mathcal{C}^\text{box}_{1,q}\,\bar{\chi} \chi \,\bar{q} q + m_q\,\mathcal{C}^\text{box}_{2,q} \,\bar{\chi} i \gamma_5 \chi\,\bar{q} q\right)\\ &+\sum_{q=u,d,s,c,b}\Big(\mathcal{C}^\text{box}_{5,q} \,\bar{\chi} i\partial^\mu \gamma^\nu \chi \, \mathcal{O}^q_{\mu\nu} + \mathcal{C}^\text{box}_{6,q} \,\bar{\chi} i\partial^\mu i \partial^\nu \chi \, \mathcal{O}^q_{\mu\nu}+\mathcal{C}^\text{box}_{7,q} \,\bar{\chi} i \gamma_5 i\partial^\mu i \partial^\nu \chi \, \mathcal{O}^q_{\mu\nu}\Big)\;, \end{split}\raisetag{3.3\normalbaselineskip}\\ \label{effbox2} \mathcal{L}^{\text{SD}}_{\text{box}} =& \sum_{q=u,d,s} \left(m_q\,\mathcal{C}^\text{box}_{3,q}\,\bar{\chi} \chi \,\bar{q}i \gamma_5 q + m_q\,\mathcal{C}^\text{box}_{4,q}\,\bar{\chi} i \gamma_5 \chi \,\bar{q} i \gamma_5 q\right)\;. \end{align} Computational details and the expressions of the different box diagram coefficients $\mathcal{C}^\text{box}_{i,q}$ are given in appendix~\ref{app:one-loopwilson}. Note that all of these coefficients share a common factor of $g^2_\chi\, g^2_\text{SM}\,m_q^2/v^2$, which also constitutes the only quark dependence. In eq.~(\ref{effbox1}) we have also introduced the twist-2 quark operator \begin{align} \mathcal{O}^q_{\mu\nu} = \bar{q}\,\left(\frac{i \partial^\mu \gamma^\nu + i \partial^\nu \gamma^\mu}{2} - \frac{1}{4} g^{\mu\nu} i \slashed{\partial}\right)\,q\;. \end{align} Since the corresponding form factors are evaluated at the scale of the $Z$ boson mass $m_Z$, we include the charm and bottom quark in the corresponding sums in eq.~(\ref{effbox1})~\cite{Hisano:2010ct,Hisano:2015bma}. 
However, none of the heavy quarks have been included in the remaining terms of eqs.~(\ref{effbox1}) and~(\ref{effbox2}), because they require a different treatment, which will be discussed next. \subsection{Effective description of two-loop processes}\label{subsec:eff2Loop} \begin{figure}[t] \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{Full2Loop.pdf} \end{subfigure} \qquad \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{Full2LoopCrossed.pdf} \end{subfigure} \caption{Two-loop processes for the evaluation of the heavy quark ($Q = c,b,t$) contributions to effective DM-gluon interactions.}\label{fig:CPDiagram2} \end{figure} As the charm, bottom and top quark are heavier than the energy scale relevant for DM direct detection experiments, they should be integrated out of the theory aiming to describe interactions at the level of nuclei. For the tree-level and Higgs-induced triangle diagram this can be done simply by replacing the heavy quarks by the corresponding effective gluon interaction obtained from triangular heavy-quark loops~\cite{Shifman:1978zn} \begin{align} \label{eq:QQGGShifman} m_Q \bar{Q} Q &\rightarrow -\frac{\alpha_s}{12\pi} G^{a}_{\mu\nu} G^{a\mu\nu}\;,\\ m_Q \bar{Q} i \gamma_5 Q &\rightarrow \frac{\alpha_s}{8\pi} G^{a}_{\mu\nu} \widetilde{G}^{a\mu\nu}\;, \end{align} where $G^{a\mu\nu}$ is the gluon field strength tensor and ${\widetilde{G}}^{a\mu\nu} = \frac{1}{2} \epsilon^{\mu\nu\alpha\beta}G^a_{\alpha\beta}$ with the convention $\epsilon^{0123} = 1$. This procedure is justified for these two diagrams since the two steps of integrating out the mediator $a$ and integrating out the heavy quarks factorise. The situation is however very different for the box diagram in the right panel of figure~\ref{fig:CPDiagram1}. In this case one cannot make a simple factorization argument to integrate out heavy quarks. 
This is visualised in figure~\ref{fig:CPDiagram2}, which shows the two-loop diagrams that need to be computed to obtain the effective interactions between DM and gluons. Any attempt to simplify this calculation by first integrating out the mediator $a$ and then using eq.~(\ref{eq:QQGGShifman}) would neglect the contribution from the diagram on the right. For $m_Q \ll m_a,\,m_\chi$ the two-loop computation hence cannot be simplified in this way without introducing potentially large errors~\cite{Abe:2018emu}. In the opposite case of $m_Q \gg m_a,\,m_\chi$ it was argued in ref.~\cite{Abe:2018emu} that a simplification is not possible because one cannot expand the box diagram amplitude in terms of the external quark momentum, which is no longer the smallest scale in the diagram. It was in particular stressed that for the top quark a full two-loop computation is mandatory. \begin{figure}[t] \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{Full2Loop.pdf} \end{subfigure} {+} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{Full2LoopCrossed.pdf} \end{subfigure} {$\rightarrow$} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{Eff2LoopChiChiGGMatch.pdf} \end{subfigure} {$\rightarrow$} \begin{subfigure}{0.16\textwidth} \includegraphics[width=\textwidth]{ChiChiGGMatch.pdf} \end{subfigure} \caption{Illustration of the decomposition of the two-loop process for \mbox{$m_Q \gg m_a,\,m_\chi$}. After first integrating out the heavy quark $Q$ (first arrow) one can then match the resulting one-loop diagram onto effective DM-gluon interactions (second arrow). 
The black dot represents an effective interaction corresponding to a higher-dimensional operator.}\label{fig:CPDiagram4} \end{figure} As we are now going to demonstrate, however, for $m_Q \gg m_a,\,m_\chi$ it is in fact possible to decompose the underlying two-loop process into two separate one-loop diagrams by integrating out the \mbox{heavy quark $Q$} first and the mediator $a$ afterwards. This approach, in which no diagrams are neglected, is illustrated in figure~\ref{fig:CPDiagram4}. Provided the mediator is light compared to the heavy quark, it is thus possible to simplify the calculations significantly. In the following we will be mostly interested in the case $m_a \ll m_t$, such that the approach outlined above can be applied to the top quark. Therefore, we first consider the loop involving the top quark separately and integrate it out by performing a $1/m_t$ expansion of the (in total six) corresponding amplitudes. We employ \mbox{\texttt{Package-X}~\cite{Patel:2015tea}} for the evaluation and expansion of the loop computations. This then yields the following leading order effective Lagrangian coupling $a$ to gluons \begin{align} \label{effaaGMatch} \mathcal{L}_{\text{eff}}^{\text{aaG}} &= \frac{1}{2}\,d^\text{\,eff}_G\,a a\,\frac{\alpha_s}{12\pi}\,G^{a}_{\mu\nu} G^{a\mu\nu} +\frac{1}{2}\,d^\text{\,eff}_{\widetilde{G}}\,a a\,\frac{\alpha_s}{8\pi}G^{a}_{\mu\nu} \widetilde{G}^{a\mu\nu}\;. \end{align} Here we have included a symmetry factor of 1/2 and defined\footnote{Note that $d^\text{\,eff}_{G}$ vanishes for certain values of $\phi_\text{SM}$ such that one would need to include higher orders. However, these specific cases are not of interest in the present work. 
While $d^\text{\,eff}_{\widetilde{G}}$ also vanishes for specific values of $\phi_\text{SM}$, the same is true for the full expression $d^\text{\,full}_{\widetilde{G}}$, see eq.~(\ref{app:dGdualFull}) in appendix~\ref{app:two-loopcomp}, i.e.\ this is not a result of the heavy quark expansion.} \begin{align} d^\text{\,eff}_G = \frac{g_\text{SM}^2}{v^2} & \big( \sin^2(\phi_\text{SM}) - \cos^2(\phi_\text{SM})\big)\;,\qquad d^\text{\,eff}_{\widetilde{G}} = \frac{g_\text{SM}^2}{v^2} \sin(2 \phi_\text{SM})\;, \end{align} which are both independent of the top-quark mass. Now performing the second step visualised in figure \ref{fig:CPDiagram4}, we obtain for the effective two-loop approach \begin{align} \mathcal{L}^\text{SI}_{\text{2-Loop}} &= \left(\mathcal{C}^\text{eff}_{G,S}\, \bar{\chi} \chi+\mathcal{C}^\text{eff}_{G,PS}\, \bar{\chi} i \gamma_5 \chi\,\right)\frac{-\alpha_s}{12\pi}\,G^{a}_{\mu\nu} G^{a\mu\nu}\;,\\ \mathcal{L}^\text{SD}_{\text{2-Loop}} &= \left(\mathcal{C}^\text{eff}_{\widetilde{G},S}\, \bar{\chi} \chi+\mathcal{C}^\text{eff}_{\widetilde{G},PS}\, \bar{\chi} i \gamma_5 \chi\, \right)\frac{\alpha_s}{8\pi}\,G^{a}_{\mu\nu} {\widetilde{G}}^{a\mu\nu}\;, \end{align} where the effective two-loop coefficients read \begin{align} \mathcal{C}^\text{eff}_{G,S} &= d^\text{\,eff}_G\,\mathcal{C}_S^{\text{triangle}}\;,\qquad &\mathcal{C}^\text{eff}_{G,PS} &= d^\text{\,eff}_G\, \mathcal{C}_{PS}^{\text{triangle}}\;,\\ \mathcal{C}^\text{eff}_{\widetilde{G},S}&= -d^\text{\,eff}_{\widetilde{G}}\, \mathcal{C}_S^{\text{triangle}}\;,\qquad &\mathcal{C}^\text{eff}_{\widetilde{G},PS} &= -d^\text{\,eff}_{\widetilde{G}}\, \mathcal{C}_{PS}^{\text{triangle}}\;. \end{align} An analogous calculation for the bottom and charm quark only gives a useful approximation if $m_a \ll m_c,\,m_b$. For heavier mediator masses it is in general unavoidable to perform the full two-loop calculation to accurately estimate the corresponding contributions (see appendix~\ref{app:two-loopcomp} for more details). 
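The phase dependence of the effective $a$-$a$-gluon coefficients above is simple enough to sketch numerically (illustrative values of $g_\text{SM}$ are ours): $d^\text{\,eff}_G \propto \sin^2\phi_\text{SM} - \cos^2\phi_\text{SM} = -\cos(2\phi_\text{SM})$ and $d^\text{\,eff}_{\widetilde{G}} \propto \sin(2\phi_\text{SM})$.

```python
import numpy as np

v = 246.0    # electroweak vev [GeV]
g_SM = 1.0   # illustrative coupling

def d_eff(phi_SM):
    """Effective a-a-gluon coefficients after integrating out the top quark."""
    dG = (g_SM / v)**2 * (np.sin(phi_SM)**2 - np.cos(phi_SM)**2)
    dGtilde = (g_SM / v)**2 * np.sin(2.0 * phi_SM)
    return dG, dGtilde

# Pure pseudoscalar SM coupling (phi_SM = pi/2): dG = +(g_SM/v)^2, dGtilde = 0.
dG, dGtilde = d_eff(np.pi / 2)
assert abs(dG - (g_SM / v)**2) < 1e-18 and abs(dGtilde) < 1e-12

# At phi_SM = pi/4 the coefficient dG vanishes, the case flagged in the
# footnote above where higher orders would be needed.
assert abs(d_eff(np.pi / 4)[0]) < 1e-12
```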
However, for the specific coupling structure that we are interested in, bottom and charm quark are found to give only a small contribution.\footnote{This conclusion could change for example in models with extended Higgs sectors, where couplings to down-type quarks may receive a substantial enhancement.} It is hence possible to obtain a very good approximation to the total heavy quark contribution to the effective DM-gluon interactions by including only the top-quark contribution using our effective approach. \begin{figure}[t] \centering \includegraphics[width=6.8cm]{CSGComparisonMa.pdf} \hspace{1cm} \includegraphics[width=6.8cm]{CSGComparisonMchi.pdf} \caption{Comparison of $|\mathcal{C}_{G,S}|$ in the effective approach for the top quark (red), the two-loop contribution of the top quark (dotted green) and the two-loop result including all heavy quarks (dashed grey) as a function of $m_a$ (left panel) and $m_\chi$ (right panel). For both plots we set $\phi_\chi = 0$ and $\phi_\text{SM} = \pi/2$.}\label{fig:CPComparisonCoeff} \end{figure} This is illustrated in figure~\ref{fig:CPComparisonCoeff}, where we plot the absolute value of the coefficient $\mathcal{C}_{G,S}$ as a function of the mediator mass (left panel) and of the DM mass $m_\chi$ (right panel). The effective approach for the top quark (indicated by the solid red line) and the corresponding two-loop calculation (dotted green) show very good agreement for $m_a \ll m_t$ across the whole range of DM masses. Including also bottom and charm quark in the two-loop calculation has only a slight influence for small values of $m_a$ (dashed grey). Similar results can be obtained for the other coefficients.\footnote{For specific parameter points cancellations might occur within the coefficients $\mathcal{C}_{G}$ and $\mathcal{C}_{\widetilde{G}}$ like in $d_G^\text{\,eff}$ for $\phi_\text{SM} \approx \pi/4$. In this parameter region the two-loop result and the effective approach differ. 
However, this discrepancy does not affect any of the scenarios studied in detail below.} We conclude that in the case of $m_Q \gg m_a,\,m_\chi$ the full two-loop calculation can be simplified considerably, and even circumvented entirely if the top quark is expected to give the dominant contribution. We will therefore use the effective approach for the remainder of this work. \subsection{Matching onto effective operators}\label{subsec:Matching} In this section we match the effective interactions of DM with quarks and gluons onto non-relativistic DM-nucleon interactions in order to obtain predictions for direct detection experiments. The first step is to perform the matching of quark and gluon fields onto nucleon fields, which yields the following effective Lagrangian: \begin{align} \mathcal{L}_{\chi N}^{\text{eff}} &= \left(\mathcal{C}^{\text{SI}}_{\text{eff},N}\,\bar{\chi}\chi + \mathcal{C}^{\text{SI,CPV}}_{\text{eff},N}\, \bar{\chi}i\gamma_5\chi\right) \bar{N} N +\left(\mathcal{C}^{\text{SD,CPV}}_{\text{eff},N}\, \bar{\chi}\chi +\mathcal{C}^{\text{SD}}_{\text{eff},N}\,\bar{\chi}i\gamma_5\chi\right) \bar{N} i\gamma_5N\;, \label{eq:LeffchiN} \end{align} where $N = p,n$ is a nucleon field and `CPV' indicates terms that only arise when CP is violated. The coefficients $C_\text{eff}$ depend on the various coefficients we derived in the previous two sections as well as on the nuclear form factors that parametrise the quark and gluon contents of a nucleon. Note that in general the nuclear form factors and hence the effective coefficients are different for protons and neutrons: $C_{\text{eff},p} \neq C_{\text{eff},n}$. 
For the SI coefficients, we find \begin{align} \notag \mathcal{C}^{\text{SI}}_{\text{eff},N} &=\sum_{q=u,d,s} m_N f^N_q\left( \cos(\phi_\chi) \cos(\phi_\text{SM})\,\mathcal{C}^\text{tree} + \frac{\lambda_{ah}}{m_h^2} \,\mathcal{C}^\text{triangle}_{S}+\mathcal{C}^\text{box}_{1,q} \right)\\ &\hspace{0.5cm}+ 3\cdot \frac{2}{27} m_N f^N_G\left( \cos(\phi_\chi) \cos(\phi_\text{SM})\,\mathcal{C}^\text{tree} + \frac{ \lambda_{ah}}{m_h^2} \,\mathcal{C}^\text{triangle}_{S} \right)\\ \notag &\hspace{0.5cm}+ \sum_{q=u,d,s,c,b} \frac{3}{4} m_N m_\chi \,\Big(q^{N}(2) + \bar{q}^{N}(2)\Big) \Big(\mathcal{C}^\text{box}_{5,q} + m_\chi\,\mathcal{C}^\text{box}_{6,q}\Big) + \frac{2}{27} m_N f^N_G\,\mathcal{C}^\text{eff}_{G,S}\;, \end{align} as well as \begin{align} \notag \mathcal{C}^{\text{SI,CPV}}_{\text{eff},N} &= \sum_{q=u,d,s} m_N f^N_q\left( \sin(\phi_\chi) \cos(\phi_\text{SM})\,\mathcal{C}^\text{tree} + \frac{\lambda_{ah}}{m_h^2}\,\mathcal{C}^\text{triangle}_{PS}+\mathcal{C}^\text{box}_{2,q} \right)\\ &\hspace{0.5cm}+ 3 \cdot \frac{2}{27} m_N f^N_G \left( \sin(\phi_\chi) \cos(\phi_\text{SM})\,\mathcal{C}^\text{tree} + \frac{\lambda_{ah}}{m_h^2}\,\mathcal{C}^\text{triangle}_{PS} \right)\\ \notag &\hspace{0.5cm} + \sum_{q=u,d,s,c,b} \frac{3}{4} m_N m^2_\chi \,\Big(q^{N}(2) + \bar{q}^{N}(2)\Big)\,\mathcal{C}^\text{box}_{7,q} + \frac{2}{27} m_N f^N_G \,\mathcal{C}^\text{eff}_{G,PS}\;, \end{align} where the nuclear form factors $f^N_{q,G}$, $q^N(2)$ and $\bar{q}^N(2)$ are defined in appendix~\ref{app:rel-nuc-wilson}. 
Likewise, we obtain for the SD coefficients \begin{align} \begin{split} \mathcal{C}^{\text{SD,CPV}}_{\text{eff},N}&= \sum_{q=u,d,s} F_P^{q/N}(q^2) \,\Big(\cos(\phi_\chi) \sin(\phi_\text{SM}) \,\mathcal{C}^\text{tree} + \mathcal{C}^\text{box}_{3,q}\Big)\\ &\hspace{0.5cm}+\, F^N_{\widetilde{G}}(q^2)\, \Big(3 \cos(\phi_\chi) \sin(\phi_\text{SM}) \,\mathcal{C}^\text{tree} + \,\mathcal{C}^\text{eff}_{\widetilde{G},S} \Big)\;, \end{split} \raisetag{3.4\normalbaselineskip} \end{align} and \begin{align} \begin{split} \mathcal{C}^{\text{SD}}_{\text{eff},N} &= \sum_{q=u,d,s} F_P^{q/N}(q^2) \,\Big(\sin(\phi_\chi) \sin(\phi_\text{SM}) \,\mathcal{C}^\text{tree} + \mathcal{C}^\text{box}_{4,q}\Big)\\ &\hspace{0.5cm}+\, F^N_{\widetilde{G}}(q^2)\, \Big(3 \sin(\phi_\chi) \sin(\phi_\text{SM}) \,\mathcal{C}^\text{tree} + \,\mathcal{C}^\text{eff}_{\widetilde{G},PS} \Big)\;. \end{split} \raisetag{3.4\normalbaselineskip} \end{align} The form factors $F_P^{q/N}(q^2)$ and $F^N_{\widetilde{G}}(q^2)$ are given in appendix~\ref{app:rel-nuc-wilson}. Because of non-negligible contributions from the $\pi$ and $\eta$ pole, these form factors depend on the momentum exchange $q^\mu$ between DM and nucleons. In the non-relativistic limit the effective Lagrangian from eq.~(\ref{eq:LeffchiN}) can be matched onto a basis of effective operators: \begin{equation} \mathcal{L}_{\chi N}^{\text{eff}} \to \sum_i c_i^N \mathcal{O}^N_i \; , \end{equation} where the operators $\mathcal{O}^N_i$ depend only on the spins $\vec{S}_\chi$ and $\vec{S}_N$ of DM and the nucleon, respectively, as well as on the momentum transfer $\vec{q}$ and the DM-nucleon relative velocity $\vec{v}$~\cite{Fitzpatrick:2012ix,Anand:2013yka,Fan:2010gt}. 
For the model that we consider, only four different operators are generated, namely \begin{align} \begin{split} \mathcal{O}^N_1 & = 1 \, ,\\ \mathcal{O}^N_6 & = (\vec{S}_\chi \cdot \frac{\vec{q}}{m_N}) (\vec{S}_N \cdot \frac{\vec{q}}{m_N}) \, , \\ \mathcal{O}^N_{10} & = i (\vec{S}_N \cdot \frac{\vec{q}}{m_N}) \, , \\ \mathcal{O}^N_{11} & = i(\vec{S}_\chi \cdot \frac{\vec{q}}{m_N}) \; . \end{split} \end{align} The corresponding coefficients can be directly read off from $\mathcal{L}_{\chi N}^\text{eff}$~\cite{Anand:2013yka}: \begin{equation} \label{eq:NRcoeff} c_1^N = C_{\text{eff},N}^\text{SI} \, , \qquad c_6^N = \frac{m_N}{m_\chi} C_{\text{eff},N}^\text{SD} \, , \qquad c_{10}^N = C_{\text{eff},N}^\text{SD,CPV} \, , \qquad c_{11}^N = - \frac{m_N}{m_\chi} C_{\text{eff},N}^\text{SI,CPV} \; . \end{equation} Note that like the form factors $F_P^{q/N}(q^2)$ and $F^N_{\widetilde{G}}(q^2)$ the coefficients $c_6^N$ and $c_{10}^N$ also depend on the momentum transfer. This final step completes the derivation of the effective interactions relevant for DM direct detection from the general Lagrangian of a spin-0 mediator given in eq.~(\ref{eq:L}). \section{Phenomenological implications} \label{sec:implications} In this section we use the results from above to predict the differential event rates in past and future direct detection experiments and to calculate the resulting exclusion limits and expected sensitivities. In models that predict dominantly spin-independent scattering, this can be done by simply calculating the corresponding scattering cross section \begin{equation} \sigma^\text{SI}_N = \frac{\mu_N^2 \, |c_1^N|^2}{\pi} \; , \end{equation} where $\mu_N = m_\chi m_N / (m_\chi + m_N)$ is the DM-nucleon reduced mass. 
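A numerical sketch of this cross-section formula (ours; the input value of $c_1^N$ is purely illustrative and not taken from the paper) shows the unit handling, with $c_1^N$ in $\mathrm{GeV}^{-2}$ converted to $\mathrm{cm}^2$ via $(\hbar c)^2$:

```python
import numpy as np

HBARC2_CM2 = 3.894e-28   # conversion factor: 1 GeV^-2 = 3.894e-28 cm^2
M_N = 0.939              # nucleon mass [GeV]

def sigma_SI(c1, m_chi):
    """sigma^SI_N = mu_N^2 |c_1^N|^2 / pi for c1 in GeV^-2; returns cm^2."""
    mu_N = m_chi * M_N / (m_chi + M_N)   # DM-nucleon reduced mass [GeV]
    return mu_N**2 * abs(c1)**2 / np.pi * HBARC2_CM2

# Illustrative only: a loop-suppressed c1 ~ 1e-9 GeV^-2 at m_chi = 100 GeV
# lands near the cross sections probed by current xenon experiments.
sigma = sigma_SI(1e-9, 100.0)
assert 1e-47 < sigma < 1e-45
```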
For $c_1^p \approx c_1^n$ the differential event rate with respect to recoil energy $E_\mathrm{R}$ is then simply given by \begin{align} \label{dRdER} \frac{\mathrm{d}R}{\mathrm{d}E_\mathrm{R}} = \frac{\rho_0 \, \sigma^\text{SI}_p \, A^2 \, F^2(E_{\text{R}})}{2 \, \mu_p^2 \, m_\chi} g(v_\text{min}) \; , \end{align} where $\rho_0$ is the local DM density, $A$ is the mass number of the target nucleus and $F^2(E_\text{R})$ denotes the nuclear form factor. The factor $g(v_\text{min}) = \int_{v_\text{min}} f(v)/v \, \mathrm{d}v$ denotes the velocity integral as a function of the minimum velocity $v_\text{min}(E_\text{R}) = \sqrt{m_A E_\text{R} / (2 \, \mu^2)}$ with $m_A$ being the mass of the target nucleus and $\mu$ being the corresponding reduced mass. Direct detection experiments typically assume this particular form of the differential cross section in order to produce exclusion limits and quote expected sensitivities in terms of $\sigma^\text{SI}_p$ as a function of $m_\chi$. In the presence of additional interactions, however, the calculation of the differential event rate becomes much more involved. We do not review the corresponding formalism here and instead refer to refs.~\cite{Fitzpatrick:2012ix,Anand:2013yka,Kahlhoefer:2016eds}. Crucially, for momentum-dependent interactions it is no longer possible to capture the model prediction in terms of a single cross section at fixed momentum transfer which can then be compared to published exclusion limits. To evaluate experimental sensitivity it thus becomes necessary to reproduce experimental analyses for the appropriate recoil spectra and include information on detection efficiencies and background levels in order to obtain approximate likelihood functions. 
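The kinematic threshold $v_\text{min}(E_\text{R}) = \sqrt{m_A E_\text{R} / (2\,\mu^2)}$ entering the velocity integral can be evaluated in a few lines (a sketch under our own illustrative inputs; the approximation $m_A \approx A \times 0.9315\,\mathrm{GeV}$ is assumed):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light [km/s]

def v_min(E_R_keV, m_chi, A=131):
    """Minimum DM speed [km/s] needed to produce recoil energy E_R on a
    nucleus of mass number A: v_min = sqrt(m_A * E_R / (2 mu^2))."""
    m_A = 0.9315 * A                   # nuclear mass [GeV], ~ A * amu
    mu = m_chi * m_A / (m_chi + m_A)   # DM-nucleus reduced mass [GeV]
    E_R = E_R_keV * 1e-6               # keV -> GeV
    return np.sqrt(m_A * E_R / (2.0 * mu**2)) * C_KMS

# A 100 GeV DM particle needs v ~ 135 km/s to deposit 10 keV on xenon,
# comfortably below typical galactic DM speeds.
v = v_min(10.0, 100.0)
assert 120 < v < 150
```

Heavier recoils or lighter DM push $v_\text{min}$ up, which is why $g(v_\text{min})$, and hence the rate, falls steeply with $E_\text{R}$.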
This process has been automated for the most general set of non-relativistic effective operators in the public code \texttt{DDCalc\_v2.1}~\cite{Workgroup:2017lvb,Athron:2018hpc}, which includes an extensive database of existing and planned direct detection experiments. Furthermore, \texttt{DDCalc} contains an automated interface with \texttt{DirectDM}~\cite{Bishara:2017nnn}, which we use for the matching of the spin-dependent coefficients in eq.~(\ref{eq:NRcoeff}) and the evaluation of the corresponding nuclear form factors. We can therefore simply pass the coefficients $C_{\text{eff},N}$ calculated for our model to \texttt{DDCalc} and obtain the likelihoods for existing direct detection experiments and the predicted number of events in future experiments. In the following, we will indicate the regions of parameter space that are excluded by the most recent XENON1T results~\cite{Aprile:2018dbl} and the regions that predict at least 5 events in the next-generation LZ experiment~\cite{Akerib:2018lyp}.\footnote{This number of events corresponds approximately to the median expected sensitivity using a cut-and-count analysis with a background expectation of 6.49 events. A better sensitivity may be achieved by exploiting differences in the differential distributions between signal and background.} Similar exclusion limits are obtained from the PandaX~\cite{Cui:2017nnn,Xia:2018qgs} and LUX~\cite{Akerib:2016vxi,Akerib:2017kat} experiments, while comparable sensitivities are expected for the XENONnT experiment~\cite{Aprile:2015uzo}. \subsection{General CP phases}\label{subsec:genCP} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figure_CPV_100.pdf} \includegraphics[width=0.49\textwidth]{figure_CPV_300.pdf} \caption{ Direct detection constraints as a function of the CP-violating phases $\phi_\chi$ and $\phi_\text{SM}$. For both panels we have fixed $g_\chi = g_\text{SM} = 1$ and $\lambda_{ah} = 0$. 
The dotted lines indicate the ratio of the predicted number of events in LZ when including both tree-level and loop-level diagrams and when only including tree-level diagrams. This ratio can be smaller than unity due to destructive interference.} \label{fig:general} \end{figure} We first consider in figure~\ref{fig:general} the most general case, in which $\phi_\chi$ and $\phi_\text{SM}$ can take arbitrary values between 0 (corresponding to purely scalar couplings) and $\pi/2$ (corresponding to purely pseudoscalar couplings). For the purpose of this figure we have fixed $g_\text{SM} = g_\chi = 1$ and $\lambda_{ah} = 0$ and consider two different combinations of $m_\chi$ and $m_a$ in the two panels. The blue shading indicates the parameter region excluded by XENON1T, while the dashed green line provides an estimate for the reach of LZ. The black dotted lines indicate the ratio of the total number of predicted events in LZ to the number of events predicted at tree-level. For $\phi_\chi$, $\phi_\text{SM} \ll \pi/2$ the tree-level exchange of $a$ dominates the spin-independent coefficient $\mathcal{C}^{\text{SI}}_{\text{eff},N}$ from eq.~(\ref{eq:LeffchiN}) and therefore also the whole scattering process. In such a scenario current direct detection bounds rule out a large part of the parameter space and constrain $g_\text{SM}$ to be very small~\cite{Kaplinghat:2013yxa,Kahlhoefer:2017umn}. As the two phases approach $\pi/2$, tree-level scattering becomes more and more suppressed, leading to a reduced sensitivity of direct detection experiments and a greater importance of loop effects. For $\phi_\chi = 0$ and \mbox{$\phi_\text{SM} = \pi/2$}, i.e.\ the top-left corner of figure~\ref{fig:general}, CP violation is maximal. In this case the tree-level contribution maps onto the non-relativistic operator $\mathcal{O}^N_{10}$, which is suppressed in the non-relativistic limit and furthermore depends on the spin of the nucleus. 
Existing direct detection constraints can thus be evaded even with $\mathcal{O}(1)$ couplings~\cite{Dienes:2013xya}. However, spin-independent contributions arise at loop-level and can dominate the event rate and yield potentially observable signals. The importance of loop-effects can also be seen for $\phi_\chi \approx 0$ and $10^{-4} \lesssim \pi/2 - \phi_\text{SM} \lesssim 10^{-3}$, where the total event rate is \emph{smaller} than the one predicted at tree-level due to the destructive interference between spin-independent interactions present at tree-level and those induced at loop-level. We discuss the case of maximal CP violation in more detail in section~\ref{subsec:maxCP}. For the opposite scenario of $\phi_\chi = \pi/2$ and $\phi_\text{SM} = 0$, i.e.\ the bottom-right corner of figure~\ref{fig:general}, the tree-level contribution to spin-independent scattering maps onto the non-relativistic operator $\mathcal{O}^N_{11}$, which depends on the DM spin and the momentum transfer. While the scattering cross section does receive a coherent enhancement in this case, it is suppressed by an additional factor of $m_N^2 / m_\chi^2$. We will therefore study the influence of purely spin-independent contributions emerging at loop-level in the context of the CP-violating Higgs-portal model in section~\ref{subsec:CPHiggs}. Finally, in the top-right corner of figure~\ref{fig:general}, corresponding to almost purely pseudoscalar interactions, the loop-induced event rate dominates over the tree-level prediction by many orders of magnitude. However, as observed previously~\cite{Abe:2018emu}, the sensitivity of direct detection experiments is strongly suppressed in this limit, so that the case of pure pseudoscalar interactions is out of reach for current direct detection experiments. A crucial conclusion from figure~\ref{fig:general} is that loop effects become increasingly important as experimental sensitivity improves. 
For the couplings and masses considered, XENON1T is only sensitive to those regions in parameter space where loop-induced interactions give a sub-leading contribution. LZ, on the other hand, will be sensitive to interactions that are more strongly suppressed at tree-level, giving greater importance to an accurate calculation of loop-level contributions. \subsection{Maximal CP violation}\label{subsec:maxCP} Let us take a closer look at the case $\phi_\chi = 0$ and $\phi_\text{SM} = \pi/2$, corresponding to the top-left corner in figure~\ref{fig:general}. In this case spin-independent interactions are completely absent at tree-level, making loop effects particularly important. Indeed, for the masses and couplings considered in figure~\ref{fig:general} this scenario is not excluded by the bounds from XENON1T but can be tested with LZ. However, the loop contributions depend sensitively on the strength of the couplings, which enter quadratically into the Wilson coefficients. In order to fully assess the importance of loop effects, it is therefore important to consider alternative constraints on the couplings $g_\text{SM}$, $g_\chi$ and $\lambda_{ah}$. For given values of $m_a$, $m_\chi$ and $g_\text{SM}$ we can fix $g_\chi$ by the requirement that the observed DM relic abundance can be explained in terms of thermal freeze-out via the annihilation processes $\chi \bar{\chi} \to q \bar{q}$ and $\chi \bar{\chi} \to a a$. If the latter process is kinematically allowed, i.e.\ for $m_a < m_\chi$, it will typically give the dominant contribution for $g_\text{SM} \ll 1$, such that the required value for $g_\chi$ becomes independent of $g_\text{SM}$. In this limit, we find $g_\chi \propto m_\chi^{1/2}$ with $g_\chi = 1$ for $m_\chi \approx 500\,\mathrm{GeV}$. For larger $g_\text{SM}$ the calculation becomes more involved and we use \texttt{micrOmegas\_v5.0.6}~\cite{Belanger:2018mqt} to determine the required value for $g_\chi$ numerically.
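For reference, the quoted freeze-out scaling can be encoded in a one-line helper (a rough sketch valid only in the stated limit $m_a < m_\chi$ and $g_\text{SM} \ll 1$; the full calculation requires \texttt{micrOmegas}):

```python
def g_chi_freezeout(m_chi):
    """Approximate g_chi required by thermal freeze-out via chi chi -> a a
    (valid only for m_a < m_chi and g_SM << 1), using the quoted scaling
    g_chi ~ m_chi^(1/2) normalised to g_chi = 1 at m_chi = 500 GeV."""
    return (m_chi / 500.0) ** 0.5

print(g_chi_freezeout(500.0))  # 1.0 by construction
print(g_chi_freezeout(125.0))  # 0.5
```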
The coupling of the light spin-0 boson to SM particles can be constrained through a range of flavour physics observables. For $m_a \lesssim m_B \approx 5.2\,\mathrm{GeV}$, these constraints are very strong and effectively exclude the possibility of obtaining observable direct detection signatures~\cite{Dolan:2014ska}. However, almost all of these constraints disappear for larger values of $m_a$. Bounds from radiative $\Upsilon$ decays~\cite{Lees:2012iw,Lees:2011wb} extend to slightly larger masses, but also disappear for $m_a \gtrsim 7 \,\mathrm{GeV}$. Provided the pseudoscalar couples also to leptons (with coupling strength $g_\text{SM} \, m_\ell / v$), another important constraint arises from $B_s \to \mu^+ \mu^-$, which can arise from loop-induced flavour-changing interactions with an off-shell mediator. The resulting branching ratio is given by~\cite{Altmannshofer:2011gn,Dolan:2014ska} \begin{align} \frac{\text{BR}(B_s \rightarrow \mu^+ \mu^-)_\text{NP}}{\text{BR}(B_s \rightarrow \mu^+ \mu^-)_\text{SM}} & \simeq \frac{g_\text{SM}^4 \, m_t^4 \, m_{B_s}^4}{16 m_W^4 \, \sin(\theta_W)^4 |C_{10}^\text{SM}|^2 \, \left((m_{B_s}^2-m_a^2)^2+ \Gamma_a^2 \, m_a^2\right)} \log^2\left(\frac{\Lambda^2}{m_t^2}\right)\;, \end{align} where $C_{10}^\text{SM} = -4.103$ and $\Lambda$ is the scale of new physics (such as additional charged Higgs bosons needed in a gauge-invariant UV completion). For $\Lambda = 1\,\mathrm{TeV}$ and assuming $m_a \gg m_{B_s}$, this expression simplifies to \begin{align} \frac{\text{BR}(B_s \rightarrow \mu^+ \mu^-)_\text{NP}}{\text{BR}(B_s \rightarrow \mu^+ \mu^-)_\text{SM}} \approx \left(\frac{7.9\,\mathrm{GeV} \, g_\text{SM}}{m_a}\right)^4\;. \end{align} The branching ratio of $B_s\to\mu^+\mu^-$ has been measured with a precision of $20\%$~\cite{Aaij:2017vad} and is found to be in agreement with the SM prediction~\cite{Bobeth:2013uxa}. 
To obtain an approximate bound on $g_\text{SM}$ we therefore require the new-physics contribution not to exceed 40\% of the SM value. This gives \begin{equation} g_\text{SM} \lesssim 1.0 \left(\frac{m_a}{10 \, \mathrm{GeV}}\right) \; . \end{equation} In other words, even for spin-0 bosons as light as $10\,\mathrm{GeV}$ the coupling strength $g_\text{SM}$ can be of order unity.\footnote{For $\phi_\text{SM}$ different from $\pi/2$ there would be additional constraints from observables sensitive to CP-violation, in particular electric dipole moments of leptons~\cite{Chen:2015vqy,Marciano:2016yhf} and nuclei~\cite{Mantry:2014zsa}. However, for $\phi_\text{SM} \approx \pi/2$ the spin-0 mediator behaves like a pure pseudoscalar in all observables involving only SM particles, such that these constraints are absent.} The situation is quite different for the coupling $\lambda_{ah}$ between $a$ and the SM Higgs boson. This coupling induces the decay $h \to aa$ with partial width~\cite{Beniwal:2015sdl} \begin{equation} \Gamma_{h\to aa} = \frac{\lambda_{ah}^2 \, v^2}{32\pi \, m_h}\left(1 - \frac{4\,m_a^2}{m_h^2}\right)^{1/2} \; . \end{equation} The presence of this decay mode gives rise to exotic Higgs decays and leads to a suppression of the Higgs signal strength in the conventional channels. While the former provide a promising strategy for future searches~\cite{Haisch:2018kqx}, at present the strongest constraints come from a global fit of the measured properties of the SM-like Higgs boson at ATLAS and CMS~\cite{Khachatryan:2016vau}. These fits imply $\text{BR}(h\to aa) < 0.34$, corresponding to $\Gamma_{h\to aa} \lesssim 2\,\mathrm{MeV}$, when simultaneously allowing for modifications of the Higgs boson production cross section, or $\text{BR}(h\to aa) < 0.13$, corresponding to $\Gamma_{h\to aa} \lesssim 0.6\,\mathrm{MeV}$, when assuming the production cross section to be given by the SM prediction. 
For $m_a \ll m_h/2$, these bounds translate to $\lambda_{ah} \lesssim 0.02$ and $\lambda_{ah} \lesssim 0.01$, respectively. We will conservatively show the weaker bound in the following. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figure_gq_15.pdf} \includegraphics[width=0.49\textwidth]{figure_lah_15.pdf} \caption{ Constraints on $g_\text{SM}$ (left) and $\lambda_{ah}$ (right) as a function of $m_\chi$ in a model with maximal CP violation ($\phi_\chi = 0$, $\phi_\text{SM} = \pi/2$). At each point the coupling $g_\chi$ is fixed in such a way that the observed DM relic abundance is reproduced. The dotted lines in the left panel indicate the ratio of the predicted number of events in LZ from loop-induced spin-independent interactions and from tree-level momentum-suppressed interactions. In the right panel, tree-level interactions are absent. } \label{fig:maxCPV} \end{figure} Figure~\ref{fig:maxCPV} summarises the constraints on $g_\text{SM}$ (left) and $\lambda_{ah}$ (right) as a function of $m_\chi$. At each point in the two plots $g_\chi$ is determined by the relic density requirement and we have set $m_a = 15\,\mathrm{GeV}$. Again the solid blue region is excluded by XENON1T and the parameter points for which 5 events are predicted in LZ are indicated by the dashed green line. Dotted black lines in the left panel indicate the ratio of loop-induced spin-independent interactions and tree-level momentum suppressed interactions in terms of the number of predicted events in LZ. As expected, the importance of loop effects grows with increasing $g_\text{SM}$ and with increasing $m_\chi$, corresponding to increasing $g_\chi$. The kinks for $m_\chi \approx 175\,\mathrm{GeV}$ result from the fact that for larger DM masses annihilation into top quarks becomes kinematically allowed and provides an efficient annihilation channel, reducing the required value of $g_\chi$. 
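The numerical translations of the flavour and Higgs-width constraints quoted above can be cross-checked directly (a minimal sketch; only the standard values $v = 246\,\mathrm{GeV}$ and $m_h = 125\,\mathrm{GeV}$ are assumed beyond the formulas in the text):

```python
import math

v, mh = 246.0, 125.0  # Higgs vev and mass in GeV (assumed standard values)

def gSM_max(ma, ratio_max=0.4):
    """Largest g_SM allowed by BR(Bs -> mu mu)_NP / BR_SM ~= (7.9 GeV * g_SM / ma)^4,
    requiring the new-physics contribution not to exceed ratio_max of the SM value."""
    return ratio_max ** 0.25 * ma / 7.9

def width_h_to_aa(lam_ah, ma):
    """Gamma(h -> aa) = lam_ah^2 v^2 / (32 pi mh) * (1 - 4 ma^2/mh^2)^(1/2), in GeV."""
    return lam_ah**2 * v**2 / (32 * math.pi * mh) * math.sqrt(1.0 - 4.0 * ma**2 / mh**2)

print(gSM_max(10.0))                    # ~1.0, i.e. g_SM <~ 1.0 * (ma / 10 GeV)
print(width_h_to_aa(0.02, 1.0) * 1e3)   # ~1.9 MeV, close to the 2 MeV limit
print(width_h_to_aa(0.01, 1.0) * 1e3)   # ~0.48 MeV, close to the 0.6 MeV limit
```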
In the right panel, tree-level interactions are absent (since we have set $g_\text{SM} = 0$), so direct detection constraints arise exclusively from loop-induced interactions. Since direct constraints on $g_\text{SM}$ are quite weak, we find large regions of parameter space where the model can be discovered by LZ. If the interactions of DM arise dominantly from $\lambda_{ah}$, on the other hand, the strong constraints from Higgs measurements imply that there remains only a small region of allowed parameter space that can be explored with LZ. We note that the constraints in the right panel are completely independent of $\phi_\text{SM}$ and would hence also apply to a pure pseudoscalar. For parameter points close to the XENON1T exclusion bound in the left panel loop effects give a sizeable contribution to the total event rate in direct detection experiments. This observation is illustrated further in figure~\ref{fig:dRdE}, which compares the predicted differential event rates at tree-level and loop-level in LZ for $m_\chi = 200\,\mathrm{GeV}$, $m_a = 15\,\mathrm{GeV}$ and $g_\text{SM} = 0.6$, corresponding to $g_\chi = 0.6$. The tree-level interactions are momentum-suppressed and therefore vanish in the limit $E_\mathrm{R} \to 0$, leading to a maximum around $E_\mathrm{R} \sim 30\,\mathrm{keV}$. The differential event rate from loop-induced spin-independent interactions, on the other hand, decreases monotonically with increasing recoil energy. Intriguingly, the two contributions conspire to give a total event rate that is approximately constant across the entire search region. Such a spectrum cannot be obtained from any single non-relativistic operator and could therefore, given enough statistics, be used to identify models like the one discussed here. 
A similar interplay between tree level and loop level can arise for $\phi_\chi = \pi/2$, $\phi_\text{SM} = 0$, in which case the tree-level process is coherently enhanced but suppressed by a factor $m_N/m_\chi$ in $c_{11}$, see eq.~(\ref{eq:NRcoeff}). The two scenarios however differ in their dependence on the target material. In particular, if tree-level scattering is spin-dependent, it will be absent in target materials with no nuclear spin, leading to a monotonically falling recoil spectrum from loop-induced spin-independent interactions. \begin{figure}[t] \centering \includegraphics[width=0.75\textwidth]{dRdEplot.pdf} \caption{ Predicted differential event rate in LZ for a specific parameter point in the model with maximal CP violation ($\phi_\chi = 0$, $\phi_\text{SM} = \pi/2$) consistent with all current constraints.} \label{fig:dRdE} \end{figure} Let us finally revisit the discussion of how to approximate two-loop effects in our model. We compare in figure \ref{fig:CPComparisonCross} the spin-independent scattering cross section obtained with our approach (outlined in section~\ref{subsec:eff2Loop}) with the result of a full two-loop calculation including all heavy quarks. The left panel corresponds to the case of maximal CP violation ($\phi_\chi = 0$, $\phi_\text{SM} = \pi/2$), the right panel corresponds to the pure pseudoscalar case ($\phi_\chi = \phi_\text{SM} = \pi/2$). In both cases we fix $g_\chi$ by the relic density requirement and set $g_\text{SM} = 0.6$, consistent with the bounds discussed above (which are independent of $\phi_\chi$). We find very good agreement between the two approaches, confirming our approach for integrating out top quarks and neglecting the contribution from bottom and charm quarks. 
In the right panel we also show the cross section obtained if the pseudoscalar is integrated out before all heavy quarks, as previously suggested in refs.~\cite{Arcadi:2017wqi,Sanderson:2018lmj}.\footnote{Here we have used the coefficient $C_{S,q}$ from ref.~\cite{Arcadi:2017wqi} for the top quark and have fixed the overall sign following ref.~\cite{Abe:2018emu}.} As pointed out previously~\cite{Abe:2018emu}, this approach leads to a vast overestimation of the loop contribution. \begin{figure}[t] \includegraphics[width=7cm]{SigmaChiP15GeV.pdf} \hspace{1cm} \includegraphics[width=7cm]{SigmaChiP10GeV.pdf} \caption{Comparison of the effective approach and the full two-loop result for two benchmark points in the $m_\chi$\,--\,$\sigma^\text{SI}_{p}$ plane. In the left panel we consider maximal CP violation with $\phi_\text{SM} = \pi/2$ and $\phi_\chi = 0$, whereas in the right panel we fix $\phi_\text{SM} = \phi_\chi = \pi/2$, i.e.\ pure pseudoscalar phases, and also show the curve corresponding to previous calculational approaches to the two-loop diagram. In both panels $\lambda_{ah}$ is set to zero and $g_\chi$ is fixed such that the correct relic density is reproduced. Note that additional contributions to the differential event rate from momentum-dependent interactions at tree-level may lead to stronger exclusion limits than the ones shown in this plot.}\label{fig:CPComparisonCross} \end{figure} \subsection{CP-violating Higgs portal}\label{subsec:CPHiggs} As a final example of the importance of loop effects we consider the fermionic Higgs portal model~\cite{Beniwal:2015sdl,Athron:2018hpc,Abe:2019wku}: \begin{equation} \mathcal{L} = \mathcal{L}_\text{SM} + \overline{\chi} (i \slashed{\partial} - \mu) \chi - \frac{\lambda_{h\chi}}{\Lambda} \left(\cos\psi\, \overline{\chi}\chi + \sin\psi \, \overline{\chi}i\gamma_5 \chi \right) H^\dagger H \; , \end{equation} where $H$ denotes the SM Higgs doublet and $\Lambda$ parametrises the unknown scale of new physics.
At first sight, this Lagrangian bears little resemblance to the simplified model discussed so far. After electroweak symmetry breaking, however, the following interactions are generated:\footnote{We omit the interaction of DM with two Higgs bosons, which plays no role for direct detection. This interaction is however included in the relic density calculation presented below.} \begin{equation} \mathcal{L} \supset - \lambda \, v \, h^3 - \frac{h}{v} \sum_{q} m_q \overline{q} q - \frac{\lambda_{h\chi} \, v}{\Lambda} h \left(\cos\phi \, \overline{\chi} \chi + \sin\phi \, \overline{\chi} i\gamma_5 \chi \right) \, , \label{eq:Lag_psi_pEWSB} \end{equation} where $\lambda$ denotes the quartic Higgs self-coupling and \begin{equation} \cos\phi = \frac{\mu}{m_\chi} \left(\cos\psi + \frac{1}{2}\frac{\lambda_{h\chi}}{\Lambda} \frac{v^2}{\mu} \right)\;, \end{equation} with \begin{align} m_\chi &= \sqrt{\left(\mu + \frac{1}{2}\frac{\lambda_{h\chi}}{\Lambda} v^2 \cos\psi \right)^2 + \left(\frac{1}{2}\frac{\lambda_{h\chi}}{\Lambda}v^2 \sin\psi \right)^2} \, . \end{align} We can therefore directly apply all the results from section~\ref{sec:loops} with the replacements \begin{align} m_a = m_h, \quad g_\chi = \frac{\lambda_{h\chi} v}{\Lambda}, \quad g_\text{SM} = 1, \quad \phi_\text{SM} = 0, \quad \phi_\chi = \phi, \quad \lambda_{ah} = - 6 \lambda = - 3 \frac{m_h^2}{v^2}\; . \end{align} The factor of 6 in the last expression is necessary to ensure that the correct Feynman rule is obtained in spite of different combinatorial factors. The free parameters of this model are hence $m_\chi$, $\lambda_{h\chi}/\Lambda$ and $\phi$. For $\phi \neq 0$ the model violates CP and spin-independent scattering is suppressed proportional to $\cos^2 \phi$. As $\phi$ approaches $\pi/2$, loop effects are therefore expected to become increasingly important. We confirm this expectation in figure~\ref{fig:HP}, which shows constraints on $\lambda_{h\chi}/\Lambda$ as a function of $m_\chi$.
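The chiral-rotation relations above can be verified numerically: the complex mass term $\mu + \tfrac{1}{2}(\lambda_{h\chi}/\Lambda) v^2 e^{i\psi}$ has modulus $m_\chi$, and the physical phase $\phi$ is the mismatch between $\psi$ and its argument. A minimal sketch with hypothetical input values:

```python
import cmath
import math

v = 246.0  # Higgs vev in GeV (assumed standard value)

def mass_and_phase(mu, kappa, psi):
    """Chiral rotation after EWSB: the complex mass term mu + (1/2) kappa v^2 e^{i psi}
    has modulus m_chi, and the physical coupling phase is phi = psi - arg(mass term).
    Here kappa stands for lambda_{h chi} / Lambda."""
    M = mu + 0.5 * kappa * v**2 * cmath.exp(1j * psi)
    return abs(M), psi - cmath.phase(M)

# hypothetical inputs: mu in GeV, kappa in GeV^-1, psi in radians
mu, kappa, psi = 200.0, 1e-3, 1.2
m_chi, phi = mass_and_phase(mu, kappa, psi)

# closed-form expressions quoted in the text
m_closed = math.sqrt((mu + 0.5 * kappa * v**2 * math.cos(psi))**2
                     + (0.5 * kappa * v**2 * math.sin(psi))**2)
cos_closed = (mu / m_chi) * (math.cos(psi) + 0.5 * kappa * v**2 / mu)

print(abs(m_chi - m_closed))            # ~0: moduli agree
print(abs(math.cos(phi) - cos_closed))  # ~0: phases agree
```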
Dotted lines indicate the ratio of loop-induced spin-independent interactions to tree-level momentum-suppressed interactions (in terms of the expected number of events in LZ). In the parameter range that can be probed by direct detection experiments, this ratio is significantly larger than unity, implying that the sensitivity of direct detection experiments stems almost exclusively from loop-induced interactions.\footnote{We note that our effective description of top-quark loops overestimates the contribution to the Wilson coefficient for spin-independent scattering by up to a factor of 3 compared to the full two-loop result. However, by far the dominant contribution to this coefficient arises from triangle diagrams, making the difference between the effective description and the full two-loop calculation irrelevant.} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{figure_HP.pdf} \caption{ Constraints and preferred parameter regions for the CP-violating Higgs portal model with $\phi = \pi/2$. The dotted lines indicate the ratio of loop-induced spin-independent interactions and tree-level momentum-suppressed interactions in terms of the predicted number of events in LZ. } \label{fig:HP} \end{figure} In figure~\ref{fig:HP} we also indicate the parameter regions excluded by the constraint $\text{BR}(h \to \text{inv}) < 0.26$~\cite{Aaboud:2018sfi,Sirunyan:2018owy} as well as the combinations of $\lambda_{h\chi}/\Lambda$ and $m_\chi$ for which the observed DM relic abundance can be reproduced via annihilations into SM particles~\cite{Beniwal:2015sdl}. The requirement of EFT validity, $\lambda_{h\chi}/\Lambda < 2\pi/m_\chi$~\cite{Athron:2018hpc}, is satisfied in the entire parameter region shown in figure~\ref{fig:HP}. We find that for $\phi = \pi/2$ constraints from direct detection experiments are rather weak and only probe parameter regions where the standard freeze-out calculation predicts $\chi$ to be a sub-dominant DM component. 
For these parameter regions we implicitly assume that the abundance of $\chi$ is set by a non-standard mechanism (e.g.\ a particle-antiparticle asymmetry) such that $\chi$ accounts for all of the DM. If, on the other hand, bounds from direct detection experiments are rescaled based on the abundance of $\chi$ obtained from standard freeze-out, as done e.g.~in ref.~\cite{Athron:2018hpc}, loop-induced direct detection signals do not provide relevant constraints on the CP-violating Higgs portal model for the foreseeable future. \section{Conclusions} \label{sec:conclusions} Future direct detection experiments will reach such a high level of sensitivity to the interactions between DM and quarks that loop effects become increasingly important. This is particularly true in models where tree-level scattering is suppressed, such that loop-induced interactions may give the dominant contribution and yield potentially observable signals. In the present work we have studied such a set-up in the context of a spin-0 particle $a$ mediating the interaction between DM and SM fermions. In contrast to previous studies, we allow general CP phases and therefore cover scalar, pseudoscalar and CP-violating interactions. Moreover, we include a trilinear coupling between $a$ and the SM Higgs boson which generally arises in UV completions of this model and can have important phenomenological consequences. For certain combinations of CP phases standard spin-independent contributions are strongly suppressed or even fully absent at tree-level, such that a proper calculation of the interactions induced at loop-level is crucial. In our model, these arise from Higgs-induced triangle diagrams, box diagrams for light quarks (both shown in figure~\ref{fig:CPDiagram1}) as well as the two-loop process involving heavy quarks shown in figure~\ref{fig:CPDiagram4}. In particular the two-loop process gives an important contribution, which is difficult to estimate without performing the full calculation. 
To address this challenge, we have presented a novel approach for simplifying the two-loop calculation significantly for heavy quark masses (schematically illustrated in figure~\ref{fig:CPDiagram4}). Provided the top quark gives the dominant contribution and the mediator is light compared to the top quark, this approach makes it possible to circumvent the two-loop calculation entirely and obtain an accurate estimate that is much easier to calculate and implement. A comparison between the two approaches is provided in figure~\ref{fig:CPComparisonCoeff}. As illustrated in figure~\ref{fig:general}, loop effects are most important when at least one of the CP phases is close to $\pi/2$ (corresponding to pseudoscalar interactions). Moreover, they gain in importance as the sensitivity of direct detection experiments improves. A particularly interesting observation is that the recoil rates induced at tree- and loop-level can be comparable, resulting in a roughly constant event rate over the whole energy window (see figure~\ref{fig:dRdE}). Since such a spectrum cannot be generated from a single type of interaction, it will be very interesting to perform a detailed statistical analysis of how to discriminate the model studied here from alternative hypotheses. Finally, we have studied the impact of spin-independent loop-induced interactions on the CP-violating fermionic Higgs portal model. Our results show that loop-level effects allow future direct detection experiments to probe parameter regions that would be otherwise inaccessible. Nevertheless, loop-level contributions are still too small to enable direct detection experiments to reach the parameter regions preferred by thermal freeze-out if the CP phase is close to $\pi/2$. 
Based on the results presented in this work, we conclude that a general spin-0 mediator offers an interesting possibility to evade current direct detection bounds even with $\mathcal{O}(1)$ couplings while still maintaining promising detection prospects for future years. It will therefore be important to investigate how such a simplified model can arise from a more complete theory, such as an extended Higgs sector with spontaneous CP breaking. Such an embedding will provide new insights on the relations between the different couplings and allow for a more accurate analysis of the constraints from flavour physics and precision observables. \acknowledgments We thank Giorgio Arcadi and Sebastian Wild for discussions and Joachim Brod for valuable comments on the manuscript. This work is funded by the Deutsche Forschungsgemeinschaft (DFG) through the Emmy Noether Grant No.\ KA 4662/1-1 and the Collaborative Research Center TRR 257 ``Particle Physics Phenomenology after the Higgs Discovery''.
\section{Introduction} This paper describes opportunities at the interface between large-scale simulations, experiment design and control, machine learning (ML, including deep learning, DL) and High-Performance Computing. We describe both the current status and possible research issues in allowing machine learning to pervasively enhance computational science. How should one do this and where is it valuable? We focus on research challenges in computing for science and engineering (as opposed to commercial) use cases for both big data and big simulation problems. More details, including further citations, can be found in \cite{LONGlearningEverywhere}. The convergence of HPC and data-intensive methodologies \cite{BDHPCConv} provides a promising approach to major performance improvements. Traditional HPC simulations are reaching the limits of the progress achievable with existing approaches. The end of Dennard scaling of transistor power usage and the end of Moore's Law as originally formulated have yielded fundamentally different processor architectures. These architectures continue to evolve, resulting in highly costly if not damaging churn in scientific codes that need to be finely tuned to extract the last iota of parallelism and performance. In domain sciences such as the biomolecular sciences, advances in statistical algorithms and runtime systems have enabled extreme-scale ensemble-based applications \cite{KASSON201887} to overcome the limitations of traditional monolithic simulations. However, in spite of several orders of magnitude improvement in efficiency from these adaptive ensemble algorithms, the complexity of the phase space and dynamics of even modest physical systems requires additional orders of magnitude in performance gains. In many application domains, integrating traditional HPC approaches with machine learning methods arguably holds the greatest promise towards overcoming these barriers.
The need for increased performance underlies the international efforts behind the exascale supercomputing initiatives, and we believe that the integration of ML into large-scale computations (for both simulations and analytics) is a very promising way to obtain even larger performance gains. Further, it can enable paradigms such as control or steering and provide a fundamental approach to coarse-graining, which is a difficult but essential aspect of many multi-scale application areas. Papers at two recent workshops, BDEC2 \cite{BDEC2process} and NeurIPS \cite{NeurIPS2018}, confirm our point of view, and our approach is synergistic with the BDEC2 process, with its emphasis on new application requirements and their implications for future scientific computing software platforms. We would like to distinguish between traditional performance, measured by operations per second or benchmark scores, and the effective performance that one gets by combining learning with simulation, which increases performance as seen by the user without changing the traditional system characteristics. This is of particular interest in cases where there is a tight coupling between the learning and simulation components (as outlined below for MLforHPC). The need for significant enhancement in the effective performance of HPC motivates the introduction of a new paradigm in HPC: Learning Everywhere! {\bf Different Interfaces of ML and HPC: } We have identified \cite{SPIDAL2018A, BDEC2process} several important and distinctly different links between machine learning (ML) and HPC. We define two broad categories: HPCforML and MLforHPC. \begin{itemize} \item \textbf{HPCforML:} Using HPC to execute and enhance ML performance, or using HPC simulations to train ML algorithms (theory-guided machine learning), which are then used to understand experimental data or simulations.
\item \textbf{MLforHPC:} Using ML to enhance HPC applications and systems. \end{itemize} This categorization is related to Jeff Dean's ``Machine Learning for Systems and Systems for Machine Learning'' \cite{Jeff_Dean2017} and Satoshi Matsuoka's convergence of AI and HPC~\cite{Matsuoka2019}. We further subdivide \textbf{HPCforML} as \begin{itemize} \item \textbf{HPCrunsML:} Using HPC to execute ML with high performance. \item \textbf{SimulationTrainedML:} Using HPC simulations to train ML algorithms, which are then used to understand experimental data or simulations. \end{itemize} We also subdivide \textbf{MLforHPC} as \begin{itemize} \item \textbf{MLautotuning:} Using ML to configure (autotune) ML or HPC simulations. Autotuning with systems like ATLAS is already hugely successful and gives an initial view of MLautotuning. As well as choosing block sizes to improve cache use and vectorization, MLautotuning can also be used for simulation mesh sizes \cite{NanoIJHPCA} and, in big data problems, for configuring databases and complex systems like Hadoop and Spark \cite{MicrosoftSummit2018A, MicrosoftSummit2018B}. \item \textbf{MLafterHPC:} ML analyzing the results of HPC, as in trajectory analysis and structure identification in biomolecular simulations. \item \textbf{MLaroundHPC:} Using ML to learn from simulations and produce learned surrogates for the simulations. The same ML wrapper can also learn configurations as well as results. This differs from SimulationTrainedML, where typically a learnt network is used to understand observations, whereas in MLaroundHPC we use the ML to improve the HPC performance. \item \textbf{MLControl:} Using simulations (with HPC) in the control of experiments and in objective-driven computational campaigns \cite{Alexander2018}. Here the simulation surrogates are very valuable, as they allow real-time predictions.
\end{itemize} All six topics above are important and pose many research issues in computer science and cyberinfrastructure, directly in application domains, and in the integration of technology with applications. However, in this paper, we focus on topics in MLforHPC, with close coupling between ML, simulations, and HPC. We involve applications as a driver for the requirements and evaluation of the computer science and infrastructure. In researching {\bf MLaroundHPC} we will consider ML wrappers for either HPC simulations or complex ML algorithms implemented with HPC. Our focus is on how to increase effective performance with the “learning everywhere” principle and how to build efficient “learning everywhere” parallel systems. One can view the use of ML-learned surrogates as a performance boost that can lead to huge speedups, as the calculation of a prediction from a trained network can be many orders of magnitude faster than the full execution of the simulation, as shown in section \ref{subsec:scaling}. One can reach Exa- or even Zetta-scale equivalent performance for simulations with existing hardware systems. These high-performance surrogates are valuable in education and control scenarios simply by speeding up existing simulations. Simple examples are the use of a surrogate to represent a chemistry potential or a larger grain size to solve the diffusion equation underlying cellular- and tissue-level simulations. The development of systematic ML-based coarse-graining techniques in both socio-technical simulations and nano-bio(cell)-tissue layering arises as an important area of research. In general, domain-specific expertise will be required to understand the necessary accuracy and the number of training simulation runs needed. There are many groups working in MLaroundHPC, but most of the work is just starting and is not built around a systematic study of research issues as we propose. There is some deep work in building reduced-dimension models for use in control scenarios \cite{Raissi2017A}.
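The effective-speedup argument can be made concrete with a simple accounting model (an illustrative sketch with hypothetical timings, not the detailed analysis of section \ref{subsec:scaling}): the cost of generating training data and training the network is amortised over the downstream surrogate evaluations.

```python
def effective_speedup(n_use, t_sim, n_train, t_train, t_inf):
    """Effective speedup from replacing n_use simulation runs by surrogate
    inferences, amortising n_train training simulations and the network
    training time t_train.  All times in seconds."""
    baseline = n_use * t_sim
    surrogate = n_train * t_sim + t_train + n_use * t_inf
    return baseline / surrogate

# hypothetical numbers: 1 h per simulation, 1e4 training runs, 10 h of
# network training, 1 ms per inference, 1e7 downstream evaluations
print(effective_speedup(1e7, 3600.0, 1e4, 36000.0, 1e-3))  # ~1000x
```

With these (purely illustrative) numbers the training simulations dominate the surrogate cost, so the achievable speedup is roughly $n_\text{use}/n_\text{train}$; cheaper inference or more downstream evaluations push the gain further towards the orders of magnitude quoted above.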
We look at three distinct important areas: networked systems with socio-technical simulations, multiscale cell and tissue simulations, and, at a finer scale, biomolecular and nanoscale molecular systems. We note that the biomolecular and biocomplexity areas represent 40\% of the HPC cycles used on NSF computational resources, so this is an area that is particularly ready and valuable. The molecular sciences have had several successful examples of using ML for autotuning and for analyzing the output of HPC simulations. Several fields have made progress in using MLaroundHPC, e.g., Cosmoflow and CosmoGAN~\cite{cosmogan} are amongst the better-known projects, and the materials community is actively exploring the uptake of MLControl for the design of materials \cite{BDEC2process}. This paper does not cover the development of new ML algorithms but rather advances the understanding of ML, including Deep Learning (DL), in support of MLaroundHPC. Of course, the usage experience is likely to suggest new ML approaches of value outside the MLaroundHPC arena. If one is to use ML to replace a simulation, then an accuracy estimate is essential, and as discussed in section~\ref{sec:UQ} there is a need to build on initial work on UQ (Uncertainty Quantification) with ML~\cite{Chan2018}, such as that using dropout regularization to build ensembles for UQ. There are more sophisticated Bayesian methods to investigate. The research must also address ergodicity, viz.\ whether we have learned across the full phase space of initial values. Here, methods taken from the Monte Carlo arena could be useful, as reliable integration over a domain is related to reliable estimates of values defined across that domain. Further, much of our learning is of analytic functions, whereas much of the existing DL experience is with discrete-valued classifiers of commercial importance.
Section \ref{sec:CS} discusses cyberinfrastructure and computer science questions, section \ref{sec:UQ} covers uncertainty quantification for learnt results, while section \ref{sec:Infraneeds} covers the infrastructure requirements needed to implement MLforHPC. Section \ref{subsec:scaling} gives a general performance analysis method and applies it to current cases, and Section \ref{sec:Future} covers new opportunities and research issues. \section{Science Exemplars} \subsection{Machine learning for Networked Systems}\label{sec:network} In this section we describe a hybrid method that fuses machine learning and mechanistic models to overcome the challenges posed by scenarios where data is sparse and knowledge of the underlying mechanism is inadequate. Across domains, the two approaches have been compared~\cite{peterson2015mechanistic}. The machine learning approach usually needs a large amount of observational data for training, and does not explicitly account for the mechanisms that govern the complex phenomenon. On the other hand, mechanistic models (like agent-based models) result from a bottom-up approach, but they tend to have too many parameters, are compute intensive, and are hard to calibrate. In recent years, there have been several efforts to study physical processes under the umbrella of theory-guided data science (TGDS), with a focus on artificial neural networks (ANN) as the primary learning tool. \cite{Karpatne2017} provides a survey of these methods and their application to hydrology, climate science, turbulence modeling, etc., where the underlying theory can be used to reduce the variance in model parameters by introducing constraints or priors in the model space. Here we consider a particular class of mechanistic models, network dynamical systems, which have been applied in diverse domains such as epidemiology and computational social science. A network dynamical system is composed of a network where nodes of the network are agents (representing population, computers, etc.)
and the edges capture the interactions between them. A popular example of such systems is the SEIR model of disease spread in a social network~\cite{newman2002spread}. The complexity of the dynamics in such a network, due to individual-level heterogeneity and interactions, makes it difficult to train a machine learning model that can be generalized to patterns not yet present in historical data. Completely data-driven models cannot discover higher resolution details (e.g. county level incidence) from lower resolution ground truth data (e.g. state level incidence). \noindent {\bf Learning from observational and simulation data: } Data sparsity is often a challenge for applying machine learning, especially deep learning methods, to forecasting problems in socio-technical systems. One example of such problems is to predict weekly incidence in future weeks in an influenza epidemic. In such socio-technical systems, we usually have only limited observational data, e.g. the weekly incidence numbers reported to the Centers for Disease Control and Prevention (CDC). Such data is of low spatiotemporal resolution (weekly at state level), not real time (at least a one-week delay), incomplete (reported cases are only a small fraction of actual ones), and noisy (adjusted several times after being published), thus necessitating a hybrid framework for forecasting by learning from observational and simulation data. Observations need to be augmented with existing domain knowledge and behavior encapsulated in the agent-based model to inform the learning algorithm. In such a hybrid framework, the network dynamical system is used to guide the learning algorithm so that it conforms to the principles ({\bf consistency}). At the same time, the learning algorithm will facilitate model selection in a principled manner.
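As a concrete, minimal illustration of a network dynamical system, the following sketch (NumPy only; graph density, rates, and seed counts are hypothetical choices, not calibrated values) runs a toy discrete-time SEIR process on a random contact network and records weekly incidence, the kind of signal such a hybrid framework would learn from:

```python
# Toy discrete-time SEIR process on a random contact network.
# States: 0=Susceptible, 1=Exposed, 2=Infectious, 3=Recovered.
import numpy as np

rng = np.random.default_rng(1)
n = 200
adj = rng.random((n, n)) < 0.03            # Erdos-Renyi-style contacts
adj = np.triu(adj, 1)
adj = adj | adj.T                          # symmetric, no self-loops

beta, sigma, gamma = 0.1, 0.3, 0.2         # infection / incubation / recovery
state = np.zeros(n, dtype=int)
state[rng.choice(n, size=5, replace=False)] = 2   # seed infectious agents

weekly_incidence = []
for week in range(30):
    infectious = (state == 2)
    # Each susceptible node is exposed independently via each infectious
    # neighbor: P(infection) = 1 - (1 - beta)^(number of infectious contacts).
    pressure = adj.astype(float) @ infectious.astype(float)
    p_inf = 1.0 - (1.0 - beta) ** pressure
    new_E = (state == 0) & (rng.random(n) < p_inf)
    new_I = (state == 1) & (rng.random(n) < sigma)
    new_R = (state == 2) & (rng.random(n) < gamma)
    state[new_E] = 1
    state[new_I] = 2
    state[new_R] = 3
    weekly_incidence.append(int(new_I.sum()))

print(sum(weekly_incidence))  # total new cases over the outbreak
```

The `weekly_incidence` series plays the role of the coarse surveillance signal; the full node-level trajectory is the high-resolution synthetic information available only from the simulation.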
Moreover, the synthetic data goes beyond the observation data, thus helping to avoid overfitting and making the learned model capable of processing patterns unseen in the observation data ({\bf generalizability}). When the dynamical system is more detailed (e.g. individual level) than the observation data, the hybrid framework allows detailed forecasting ({\bf high resolution}). {\bf Epidemic Forecasting: } Simulation-trained machine learning methods can be used for epidemic forecasting. An example of such a framework is DEFSI (Deep Learning Based Epidemic Forecasting with Synthetic Information) proposed in~\cite{Wang2019}. It consists of ($i$) a model configuration module that estimates a distribution for each parameter in an agent-based epidemic model based on coarse surveillance data; ($ii$) a simulation-generated synthetic training data module which generates high-resolution training data by running HPC simulations parameterized from distributions estimated in the previous module; ($iii$) a two-branch deep neural network trained on the synthetic training dataset and used to make detailed forecasts with coarse surveillance data as inputs. Experimental results show that DEFSI performs comparably to or better than other methods for state level forecasting, and it outperforms the EpiFast method for county level forecasting. See Ref. \cite{LONGlearningEverywhere} and citations therein for details. \subsection{ML for Virtual Tissue and Cellular Simulations} \label{sec:VT} \subsubsection{Virtual Tissue Models} Virtual Tissue (VT) simulations \cite{Osborne2017} are mechanism-based multiscale spatial simulations of living tissues that address questions about development, maintenance, damage and repair. They also find application in the design of tissues (tissue engineering) and the development of medical therapies, especially personalized therapies.
VT simulations are computationally challenging for a number of reasons: 1) VT simulations are agent-based, with the core agent often representing biological cells. The number of cells in a real tissue is often of the order of $10^{8}$ or more. 2) Agents are often hierarchical, with agents composed of multiple agents at smaller scales. 3) Agents interact strongly with each other, often over significant ranges \cite{Sluka:2016fz}. 4) Individual agents typically contain complex sub-models that control their properties and behaviors. 5) Materials properties may be complex, like the shear thickening or thinning or swelling or contraction of fiber networks. 6) Modeling transport and diffusion is compute intensive. 7) Models are typically stochastic, so predictivity requires many replicas. 8) Simulations involve uncertainty both in model parameters and in model structure. 9) Biological and medical time-series data are often qualitative, semi-quantitative or differential, making their use in classical optimization difficult. 10) VT models often produce movies of configurations over time. 11) Finally, simulating populations can add several orders of magnitude to the computational challenge. It is possible that ML techniques can be used to short-circuit implementations at and between scales. \subsubsection{Virtual Tissue Modelling and AI + MLandHPC} AI can directly benefit VT applications in a number of ways: \begin{enumerate} \item Short-circuiting: the replacement of computationally costly modules with learned analogues. \item Parameter fitting in high-dimensional parameter spaces. \item Treating stochasticity in results as information rather than noise. \item Prediction of bifurcations in models. \item Design of maximally discriminatory experiments -- predict the parameter sets by which two models can be differentiated. \item “Run time backwards,” to determine initial conditions that lead to observed endpoints.
\item The elimination of short time scales, e.g., short-circuiting the calculations of advection-diffusion. \item Generating additional spatial data sets from experimental images. \end{enumerate} Representative prior work by Karniadakis \cite{Raissi2017A}, Kevrekidis \cite{Kevrekidis2017} and Nemenman \cite{Nemenman2006} shows that neural networks can reproduce the temporal behaviors of biochemical regulatory and signaling networks. Ref. \cite{Liang2017} has shown that networks can learn nonlinear biomechanics simulations of the aorta--being able to predict the stress and strain distribution in the human aorta from the morphology observable with MRI or CT. \subsection{Machine Learning and Molecular Simulations} \subsubsection{Nanoscale simulation} \label{sec:nano} Despite the employment of the optimal parallelization techniques suited for the size and complexity of the system, nanoscale simulations remain time consuming. In research settings, simulations can take up to several days, and it is often desirable to foresee expected overall trends in key quantities; for example, how the contact density varies as a function of ion concentration in nanoscale confinement, or how the peak positions of the pair correlation functions characterizing nanoparticle assembly evolve as the environmental parameters are tuned. Given the dramatic rise in ML and HPC technologies, it is not a question of if, but when, ML can be integrated with HPC to enhance nanoscale simulation methods. Recent years have seen a surge in the use of ML to accelerate material simulation techniques: ML has been used to predict parameters, generate configurations in material simulations, and classify material properties (see Ref \cite{LONGlearningEverywhere} and citations therein).
At this time, it is critical to understand and develop the software frameworks to build ML layers around HPC to 1) enhance simulation performance, 2) enable real-time and anytime engagement, and 3) broaden the applicability of simulations for both research and education (in-classroom) usage. In the context of nanoscale simulation, an initial set of applications for the MLaroundHPC framework can be the prediction of the structure or correlation functions (outputs) characterizing the nanoscale system over a broad range of experimental control parameters (inputs). MLaroundHPC can enable the following outcomes: \begin{enumerate} \item Learn pre-identified critical features associated with the simulation output. \item Generate accurate predictions for un-simulated statepoints (by entirely bypassing simulations). \item Exhibit auto-tunability (with new simulation runs, the ML layer gets better at making predictions). \item Enable real-time, anytime, and anywhere access to simulation results (particularly important for education use). \item Ensure no run is wasted: training needs both successful and unsuccessful runs. \end{enumerate} To illustrate these outcomes, we discuss nanoscale simulations aimed at the computation of the structure of ions confined by surfaces that are nanometers apart, which has been the focus of recent experiments and computational studies (see Ref \cite{LONGlearningEverywhere} and citations therein). Typically, the entire ionic distribution averaged over a sufficient number of independent samples generated during the simulation is a quantity of interest. However, in many important cases, average values of contact density or center density directly relate to important experimentally-measured quantities such as the osmotic pressure \cite{zwanikken1}.
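A minimal sketch of this input-to-output learning task might look as follows (scikit-learn on synthetic placeholder data; in the actual studies the training set comes from completed MD runs, and the five feature names and the 70/30 split follow the description later in this section):

```python
# Sketch of the MLaroundHPC regression task: learn the mapping from the five
# simulation inputs (confinement length h, valencies z_p and z_n, salt
# concentration c, ion diameter d) to output density features (contact, peak,
# and center densities). The data below are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(size=(2000, 5))           # h, z_p, z_n, c, d (scaled to [0,1])
# Fake "densities": three smooth functions of the inputs.
y = np.stack([X @ w + np.sin(X @ w) for w in rng.uniform(size=(3, 5))], axis=1)

# 70/30 train/test split, mirroring the study described in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(30, 48), max_iter=2000,
                     random_state=0).fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # R^2 on held-out state points
```

Once trained, `model.predict` supplies the rapid estimates of contact, peak, and center densities that the "smart" simulation framework is meant to provide.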
Further, it is often useful to visualize expected trends in the behavior of contact or mid-point density as a function of solution conditions or ionic attributes, before running simulations to explore specific system conditions. It is thus desirable that a ``smart'' simulation framework provide rapid estimates of these critical output features with high accuracy. MLaroundHPC can enable precisely this, as we recently showed that an artificial neural network successfully learns from completed simulation results the desired features associated with the output ionic density profiles to rapidly generate predictions for contact, peak, and center densities in excellent agreement with the results from explicit simulations \cite{NanoICCS}. \subsubsection{Biomolecular simulations} \label{sec:bms} The use of ML and in particular DL approaches for biomolecular simulations \cite{Perez:2018aa} lags behind other areas such as nano-science and materials science \cite{butler2018machine}. This might be partly due to the difficulty of accounting for large heterogeneous systems with important interactions at short and long length scales. But it might also indicate that the commonly used classical empirical force fields are surprisingly successful \cite{Piana:2014qo} and it is not easy to outperform them at this level of approximation. Therefore, one primary direction of research in this area is to improve the accuracy of the simulation while maintaining the performance of empirical energy functions.
One promising approach is based on work by Behler and Parrinello \cite{behler_generalized_2007} who devised a NN-based potential that was trained on quantum mechanical DFT energies; their key insight was to represent the total energy as a sum of atomic contributions and represent the chemical environment around each atom by an identically structured NN, which takes as input appropriate “symmetry functions” that are rotation and translation invariant as well as invariant to exchange of atoms while correctly reflecting the local environment that determines the energy \cite{behler_first_2017}. Based on this work, Gastegger \textit{et al.} \cite{gastegger_machine_2017} used ML to accelerate ab-initio MD (AIMD) to compute accurate IR spectra for organic molecules including the biological Ala$_{3}^{+}$ tripeptide in the gas phase. Interestingly, the ML model was able to reproduce anharmonicities and incorporate proton transfer reactions between different Ala$_{3}^{+}$ tautomers without having been explicitly trained on such a chemical event, highlighting the promise of such an approach to incorporate a wide range of physically relevant effects with the right training data. The ML model was $>$1000 times faster than the traditional evaluation of the underlying quantum mechanical physical equations. Roitberg \textit{et al.} \cite{s.smith_ani-1:_2017} trained a NN on QM DFT calculations, based on modified Behler-Parrinello symmetry functions. The resulting ANI-1 model was shown to be chemically accurate and transferable, with a performance similar to a classical force field, thus enabling AIMD at a fraction of the cost of ``true'' DFT AIMD. Extensions of their work with an active learning (AL) approach demonstrated that proteins in an explicit water environment can be simulated with a NN potential at DFT accuracy \cite{smith_less_2018}.
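The Behler-Parrinello decomposition just described can be sketched schematically as follows (NumPy with untrained random weights; only a single radial symmetry function family with $r_s = 0$ is shown, and the point is the sum-of-atomic-contributions structure and the invariance of the inputs, not accuracy):

```python
# Schematic of the Behler-Parrinello decomposition: the total energy is a sum
# of per-atom contributions, each computed by the *same* small network applied
# to rotation/translation/permutation-invariant "symmetry function" inputs.
import numpy as np

rng = np.random.default_rng(3)

def radial_symmetry_functions(positions, etas, r_cut=6.0):
    # G_i^eta = sum_{j != i} exp(-eta * r_ij^2) * f_c(r_ij), with a smooth
    # cosine cutoff f_c; invariant under rotation, translation, and exchange
    # of identical atoms.
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, 2 * r_cut)          # exclude self-interaction
    fc = np.where(r < r_cut, 0.5 * (np.cos(np.pi * r / r_cut) + 1.0), 0.0)
    return np.stack([(np.exp(-eta * r**2) * fc).sum(axis=1) for eta in etas],
                    axis=1)                  # shape (n_atoms, n_etas)

def total_energy(G, W1, b1, W2, b2):
    # One shared network evaluated per atom; the total energy is the sum.
    h = np.tanh(G @ W1 + b1)
    return (h @ W2 + b2).sum()

positions = rng.uniform(0.0, 5.0, size=(8, 3))   # 8 atoms in a box
etas = [0.5, 1.0, 2.0]
G = radial_symmetry_functions(positions, etas)
W1, b1 = rng.normal(size=(3, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)
E = total_energy(G, W1, b1, W2, b2)

# Invariance check: rotating + translating the whole system leaves E unchanged.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
G2 = radial_symmetry_functions(positions @ R.T + 1.0, etas)
print(np.isclose(E, total_energy(G2, W1, b1, W2, b2)))  # True
```

In a real NN potential, the shared per-atom network is trained against DFT energies (and forces), and a full set of radial and angular symmetry functions is used.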
The AL approach reduced the amount of required training data to 10\% of that of the original model \cite{smith_less_2018} by iteratively adding training data calculations for regions of chemical space where the current ML model could not make good predictions. Using transfer learning, the ANI-1 potential was also extended to predict energies at the highest level of quantum chemistry calculations (coupled cluster CCSD(T)), with speedups in the billions. In general, the focus has been on achieving DFT-level accuracy because NN potentials are not cheaper to evaluate than most classical empirical potentials. However, replacing solvent-solvent and solvent-solute interactions, which typically make up 80\%-90\% of the computational effort in a classical all-atom, explicit solvent simulation, with a NN potential promises large performance gains at a fraction of the cost of traditional implicit solvent models and with an accuracy comparable to the explicit simulations \cite{Wang:2018aa}, as also discussed above in the case of electrolyte solutions. Furthermore, inclusion of polarization, which is expensive (a factor of 3-10 in current classical polarizable force fields \cite{Lopes:2015aa}) but of great interest when studying the interaction of multivalent ions with biomolecules, might be easily achievable with appropriately trained ML potentials. \section{Integrating ML and HPC: Background and Opportunities}\label{sec:CS} A primary contribution of this paper is in the categorization, description and examples of the different ways in which ML can enhance HPC (MLforHPC). Before we expound upon MLforHPC and open research issues, we provide a summary status of HPC for ML (beyond the obvious and well-studied use of GPUs for ML).
\subsection{HPC for Machine Learning} There has been substantial community progress here with the industry-supported MLPerf \cite{MLPERF} machine learning benchmark activity and Uber’s Horovod open source distributed deep learning framework for TensorFlow \cite{Horovod}. We have studied different parallel patterns (kernels) of machine learning applications, looking in particular at Gibbs Sampling, Stochastic Gradient Descent (SGD), Cyclic Coordinate Descent (CCD) and K-means clustering \cite{IPCC-IU}. These algorithms are fundamental for large-scale data analysis and cover several important categories: Markov Chain Monte Carlo (MCMC), Gradient Descent and Expectation Maximization (EM). We show that parallel iterative algorithms can be categorized into four types of computation models, (a) Locking, (b) Rotation, (c) Allreduce, and (d) Asynchronous, based on the synchronization patterns and the effectiveness of the model parameter update. A major challenge of scaling stems from the fact that computation is irregular and the model size can be huge, while parallel workers need to synchronize the model continually. By investigating collective vs. asynchronous methods of model synchronization, we discover that optimized collective communication can improve the model update speed, thus allowing the model to converge faster. The performance improvement derives not only from accelerated communication but also from reduced iteration computation time, as the model size may change during model convergence. To foster faster model convergence, we need to design new collective communication abstractions. We identify five classes of data-intensive computation \cite{BDHPCConv}, from pleasingly parallel to machine learning and simulations, and aim to re-design a modular software stack with native kernels to effectively utilize scale-up servers for machine learning and data analytics applications.
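As one concrete instance of these computation models, the Allreduce pattern for data-parallel SGD can be sketched as follows (workers are simulated sequentially in NumPy; a real system would perform the gradient averaging with an MPI or NCCL allreduce, and the problem here is a toy least-squares fit):

```python
# Toy illustration of the "Allreduce" synchronization pattern: each worker
# computes a gradient on its data shard, the gradients are averaged (the
# allreduce step), and every worker applies the identical update.
import numpy as np

rng = np.random.default_rng(4)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(400, 2))
y = X @ true_w + 0.01 * rng.normal(size=400)

n_workers = 4
shards = np.array_split(np.arange(400), n_workers)
w = np.zeros(2)
lr = 0.1
for step in range(200):
    grads = []
    for idx in shards:                    # each worker, on its own shard
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))
    g = np.mean(grads, axis=0)            # "allreduce": average the gradients
    w -= lr * g                           # identical update on every worker
print(np.round(w, 2))                     # close to [ 2. -1.]
```

The collective (averaging) step is the synchronization point whose cost the optimized collective communication discussed above is designed to reduce.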
We are investigating how simulations and Big Data can use common programming environments with a runtime based on a rich set of collectives and libraries for a model-centric approach \cite{Qiu2017C,Qiu2018B}. \label{subsec:parallel} {\bf Parallel Computing: } We know that heterogeneity can lead to difficulty in parallel computing. This is extreme for MLaroundHPC, as the ML-learnt result can be huge factors ($10^5$ in our initial example \cite{NanoICCS}) faster than simulated answers. Further, learning can be dynamic within a job and within different runs of a given job. One can address this by load balancing the unlearnt and learnt portions separately, but this can lead to geometric issues, as it is quite likely that ML learning works more efficiently (for more potential simulations) in particular regions of phase space. \subsection{Uncertainty Quantification for Deep Learning} \label{sec:UQ} An important aspect of the use of a learned ML model is that one must learn not just the result of a simulation but also the uncertainty of the prediction, i.e., whether the learned result is valid enough to be used. This can be explained in the sense of the bias-variance trade-off, which is based on the decomposition of the expected error into two parts: variance and bias. The variance part explains the uncertainty of the model training process due to the randomness in the training algorithms or the lack of representativeness of the training set. A regularization scheme can reduce the variance by keeping the model complexity under control, resulting in a {\em smoother} model. However, the regularization approach comes at the cost of an increased amount of bias, which is the other term in the expected error decomposition and explains the fitness of the model---by regularizing the model, the training algorithm can make only a limited effort to minimize the training error.
Conversely, an unregularized model with a higher model complexity than necessary can also achieve a minimal training error, while it suffers from high variance. Ideally, the bias-variance trade-off can be resolved to some degree by averaging trained instances of an originally complex model. Once these model instances are complex enough to fit the training data set, we can use the averaged predictions as the output of the model. However, averaging many different model instances implies a practical difficulty: one has to conduct multiple optimization tasks to secure a statistically meaningful sample distribution of the predictions. Given the assumption that the model might well be a complex one to minimize the bias component (e.g. a deep neural network), the model averaging strategy is computationally challenging. Dropout has been extensively used in deep learning as a regularization technique \cite{dahl2013improving}, but recent research revisits it as an uncertainty quantification (UQ) tool \cite{gal2016dropout}. The dropout procedure can be seen as an efficient way to maintain a pool of multiple network instances for the same optimization task. It is an efficient ensemble technique as it applies a randomly sampled Bernoulli mask to a layer-wise input unit, thus exposing the optimization process to many differently structured instances of the network. A set of differently thinned versions of the network can form a sample distribution of predictions to be used as a UQ metric. The dropout-based UQ scheme can provide an opportunity for the MLaroundHPC simulation experiments. For a data-driven model, it is reasonable to assume that a better ML surrogate can be found once the training routine sees more examples generated from the simulation experiment. However, creating more examples to train a better ML model is a conflicting requirement, as the purpose of training the ML surrogate is to avoid such computation.
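The dropout-as-ensemble mechanism described above can be sketched as follows (a minimal Monte-Carlo-dropout loop in NumPy with random, untrained weights; only the sampling mechanism that yields a prediction distribution is illustrated, not a trained surrogate):

```python
# Sketch of dropout as a UQ device: keep dropout active at inference time and
# treat the spread of predictions across random Bernoulli masks as an
# uncertainty estimate (MC dropout).
import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.normal(size=(5, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)
p_keep = 0.8                                   # keep probability per unit

def predict_with_dropout(x, n_samples=200):
    preds = []
    for _ in range(n_samples):
        h = np.tanh(x @ W1 + b1)
        mask = rng.random(h.shape) < p_keep    # random Bernoulli mask
        h = h * mask / p_keep                  # inverted-dropout scaling
        preds.append((h @ W2 + b2).item())
    preds = np.asarray(preds)
    return preds.mean(), preds.std()           # prediction and its UQ spread

mean, std = predict_with_dropout(np.ones(5))
print(std > 0.0)  # a nonzero spread across masks quantifies uncertainty
```

In the MLaroundHPC setting, the returned spread is the signal that could tell the training routine whether the surrogate needs more simulation-generated examples at a given state point.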
The UQ scheme can play a role here to provide the training routine with a way to quantify the uncertainty in the prediction---once it is low enough, the training routine is less likely to need more data. \subsection{Machine Learning for HPC} \label{sec:Infraneeds} Here we review the nature of the machine learning needed for MLforHPC in different application domains. The Machine Learning (ML) load depends on 1) the time interval between its invocations, which translates into the number of training samples S, and 2) the size D of the data set specifying each sample. This size could be as large as the number of degrees of freedom in the simulation or could be (much) smaller if just a few parameters are needed to define the simulation. We note two general issues: \begin{itemize} \item There can be very important data transfer and storage issues in linking the simulation and machine learning parts of the system. This could need carefully designed architectures for both hardware and software. \item The simulation and machine learning subsystems are likely to require different node optimizations, as in different types and uses of accelerators. \end{itemize} \subsection{Science Exemplar: Nanosimulations} In this subsection, using the example of nanosimulations, we show that progress in all areas at the intersection of HPC and ML is having an impact. In each of the two cases below, one uses scikit-learn and TensorFlow with the Keras wrapper for TensorFlow as the ML subsystem. The papers \cite{NanoICCS,NanoIJHPCA} use ML to learn results (ionic density at a given location) of a complete simulation: \begin{itemize} \item D = 5, with the five specifying features being confinement length h, positive valency $z_p$, negative valency $z_n$, salt concentration c, and the diameter of the ions d. \item S = 4805, which is 70\% of the total 6864 runs, with the remaining 30\% used for testing.
\end{itemize} In \cite{NanoIJHPCA}, one is not asking ML to predict a result as in \cite{NanoICCS}, but rather training an Artificial Neural Net (ANN) to ensure that the simulation runs at its optimal speed (using, for example, the lowest allowable timestep dt and ``good'' simulation control parameters for high efficiency) while retaining the accuracy of the final result (e.g. the density profile of ions). For this particular application, we could get away with dividing a 10 million time-step run ($\sim$10 nanoseconds, a typical timescale to reach equilibrium and gather data in such systems) into 10 separate runs. \begin{itemize} \item Input data size D = 6 (1 input uses 64-bit floats and 5 inputs use 32-bit integers -- 224 bits in total) \item Input number of samples S = 15640 (70\% training, 30\% test) \item Hidden layer 1 = 30 \item Hidden layer 2 = 48 \item Output variables = 3 \end{itemize} Creation of the training dataset took 64 cores $\times$ 80 hrs $\times$ 5400 simulation runs = 27,648,000, or about 28 million, CPU hours on Indiana University's BigRed2 GPU compute nodes. Each run is 10 million steps long, and the ML model is used/trained every 1 million steps (so the block size is a million), yielding 10 times more samples than runs. Generalizing this, the hardware needs will depend on how often you block to stop and train the network, and then, either on-the-fly or post-simulation, use that training to accelerate the simulation or evaluate structure, respectively. Blocking every timestep will not improve the training as, typically, it won't produce a statistically independent data point to evaluate any structure you desire. So you want to block at a timescale that is at least greater than the autocorrelation time dc; this is, of course, dependent on the example you are looking at -- and so your blocking and learning will depend on the application.
In \cite{NanoICCS}, it is small: dc is 3--5 dt; in glasses, it can be huge, as the viscosity is high; and in biomolecular simulations, it will also depend on the level of coarse-graining and will be different in fully atomistic or very coarse-grained systems. The training effort will also depend on the input data size D and on the complexity of the relationship you are trying to learn, which changes the number of hidden layers and nodes per layer. For example, suppose you are tracking a particle (a side atom on a molecule in a typical nanoscale simulation) in order to come up with a metric (e.g. the distance between two side atoms on different molecules) to track the diversity of clusters of particles during the self-assembly process. This comes from the expectation that correlations between side atoms may be critical to a macroscopic property (such as the formation of these particles into an FCC crystal). In this case your D is huge, your ML objectives may be looking for a deep relationship, and you may have to invoke an ensemble of ANNs, and this will change the hardware needs. \label{subsec:scaling} {\bf Scaling of Effective Performance: } An initial approach to estimating the speedup in a hybrid MLaroundHPC situation is given in \cite{NanoICCS} for a nano simulation. One can estimate the speedup in terms of four times: $T_{seq}$, the sequential execution time of the simulation; $T_{train}$, the time for the parallel execution of a simulation to give training data; $T_{learn}$, the time per sample to train the learning network; and $T_{lookup}$, the inference time to predict the results of the simulation by using the trained network. In the formula below, $N_{lookup}$ is the number of trained neural net inferences and $N_{train}$ the number of parallel simulations used in training.
\[\mbox{Effective Speedup}~S = \frac{T_{seq}(N_{lookup}+N_{train})}{T_{lookup}N_{lookup} + (T_{train}+T_{learn})N_{train}}\] This formula reduces to the classic simple $\frac{T_{seq}}{T_{train}}$ when there is no machine learning (i.e., $N_{lookup} = 0$ and $T_{learn} = 0$), and in the limit of large $\frac{N_{lookup}}{N_{train}}$ it becomes $\frac{T_{seq}}{T_{lookup}}$, which can be huge! \par There are many caveats and assumptions here. We are considering a simple case where one runs the $N_{train}$ simulations, followed by the learning and then all the $N_{lookup}$ inferences. Further, we assume the training simulations produce useful results and are not just overhead. We also have not properly considered how to build in the likelihood that the training, learning and lookup phases are probably using different hardware configurations with different node counts. \subsection{Opportunities and Research Issues}\label{sec:Future} {\bf Research Issues:} In addition to the six categories at the interface of ML and HPC, the research issues we identify reflect the multiple interdisciplinary activities linked in our study of MLforHPC, including the application domains described in sections \ref{sec:network}, \ref{sec:VT}, \ref{sec:nano} and \ref{sec:bms}, as well as coarse graining, studied in our case for the network science and nano-bio areas. We have identified the following research areas, which can be categorized into Algorithms and Methods (1-5), Applied Math (10), Software Systems (6, 7), and Performance Measurement and Engineering (8, 11). \begin{enumerate} \item Where can application domains use MLaroundHPC and MLautotuning effectively, and what science is enabled by this? \item Which ML and DL approaches are most relevant, and how can they be set up to enable broad, user-friendly MLaroundHPC and MLautotuning in domain science? \item How can Uncertainty Quantification be enabled and, separately, how can ergodicity (bias) and accuracy issues be studied? \item Is there a new area of algorithmic research focusing on finding algorithms that can be most effectively learnt?
\item Is there a general multiscale approach using MLaroundHPC? \item What are appropriate systems frameworks for MLaroundHPC and MLautotuning? For example, should we wrap microservices invoked by a Function as a Service environment? Where and how should we enable learning systems? Is Dataflow useful? \item The different characters of surrogate and “real” executions produce system challenges, as surrogate execution is much faster and invokes distinct software and hardware. This heterogeneity poses challenges for parallel computing, workload management and resource scheduling (heterogeneous and dynamic workflows). The implication for performance is briefly discussed in sections \ref{subsec:parallel} and \ref{subsec:scaling}. \item Scaling applications that are composed of multiple heterogeneous computational (execution) units and have distinct forms of parallelism that need balanced performance. Consider a workload composed of $N_L$ learning units and $N_S$ simulation units. The relative number of learning units to simulation units will vary with application and problem type. The relative values will even vary over the execution time of the application, as the amount of data generated as a ratio of training data will vary. This requires runtime systems that are capable of real-time performance tuning and adaptive execution for workloads composed of multiple heterogeneous tasks. \item The application of these ideas to statistical physics problems may need different techniques than those used in deterministic time evolutions. \item The existing UQ frameworks based on the dropout technique can provide the level of certainty as a probabilistic distribution in the prediction space. However, the quality of that distribution is not necessarily determined by the quality and quantity of the data; for example, two models with different dropout rates can produce different UQ results.
If the goal of UQ in the MLaroundHPC context is to supply only an adequate amount of data, we need a more reliable UQ method tailored for this purpose rather than the dropout technique, which tends to manipulate the architecture of the model. \item Application-agnostic description and definition of effective performance enhancement. \end{enumerate} \section*{Conclusions} {\bf Broken Abstractions, New Abstractions:} In traditional HPC, the prevailing orthodoxy “Faster is Better” has driven the quest for abstractions of hierarchical parallelism to speed up single units of work. Relinquishing the orthodoxy based upon hierarchical (vertical) parallelism as the only route to performance is necessary. The new paradigm in HPC --- “Learning Everywhere” --- implies new performance, scaling and execution approaches. In this new paradigm, multiple, concurrent heterogeneous units of work replace single large units of work, which thus requires both hierarchical (vertical) parallelism and horizontal (many-task) parallelism. \section*{Acknowledgments} This work was partially supported by NSF CIF21 DIBBS 1443054 and nanoBIO 1720625; the Indiana University Precision Health initiative and Intel through the Parallel Computing Center at Indiana University. JPS and JAG were partially supported by NSF 1720625, NIH U01 GM111243 and NIH GM122424. SJ was partially supported by ExaLearn -- a DOE Exascale Computing project.
\section{Introduction}\label{introduction} Many Banach spaces which play an important role in functional analysis and its applications are obtained in a special way: the norms of these spaces are generated by positive sublinear operators and by $L_p$-norms. In connection with the Hardy and Copson operators $$ (Pf)(x) : = \frac{1}{x} \int_0^x f(t)\,dt \qq \mbox{and} \qq (Qf)(x) : = \int_x^{\infty} \frac{f(t)}{t}\,dt,\qq (x > 0), $$ the classical Ces\`{a}ro function space $$ \ces(p) : = \bigg\{ f:\, \|f\|_{\ces(p)} : = \bigg( \int_0^{\infty} \bigg( \frac{1}{x} \int_0^x |f(t)|\,dt \bigg)^p\,dx \bigg)^{\frac{1}{p}} < \infty \bigg\}, $$ and the classical Copson function space $$ \cop(p) : = \bigg\{ f:\, \|f\|_{\cop(p)} : = \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} \frac{|f(t)|}{t}\,dt \bigg)^p\,dx \bigg)^{\frac{1}{p}} < \infty \bigg\}, $$ where $1 < p \le \infty$, with the usual modifications if $p = \infty$, are of interest. The classical Ces\`{a}ro function spaces $\ces(p)$ were introduced in 1970 by Shiue \cite{shiue}. These spaces were defined analogously to the Ces\`{a}ro sequence spaces that had appeared two years earlier in \cite{prog}, when the Dutch Mathematical Society posted a problem to find a representation of their dual spaces. In 1971 Leibowitz \cite{Leibowitz} proved that $\Ces_1 = \{0\}$ and that, for $1 < q < p \leq \infty$, the sequence spaces $\ell_p$ and $\Ces_q$ are proper subspaces of $\Ces_p$. The problem posted in \cite{prog} was resolved by Jagers \cite{jagers} in 1974, who gave an explicit isometric description of the dual of the Ces\`{a}ro sequence space. In \cite{syzhanglee}, Sy, Zhang and Lee gave a description of the dual spaces of $\ces(p)$ based on Jagers' result. In 1996 a different, isomorphic description due to Bennett appeared in \cite{bennett1996}. In \cite[Theorem 21.1]{bennett1996} Bennett observes that the classical Ces\`{a}ro function space and the classical Copson function space coincide for $p > 1$.
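For orientation (this elementary computation is our addition, not part of the cited results), the characteristic function $f = \chi_{(0,1)}$ illustrates both the definition of $\ces(p)$ and why the restriction $p > 1$ is natural:

```latex
% For f = \chi_{(0,1)} one has (Pf)(x) = 1 on (0,1] and (Pf)(x) = 1/x on (1,\infty),
% so for 1 < p < \infty
\[
\|\chi_{(0,1)}\|_{\ces(p)}^p
  = \int_0^1 1\,dx + \int_1^{\infty} x^{-p}\,dx
  = 1 + \frac{1}{p-1}
  = \frac{p}{p-1},
\qquad\text{i.e.}\qquad
\|\chi_{(0,1)}\|_{\ces(p)} = (p')^{\frac{1}{p}}.
\]
% At p = 1 the second integral diverges; more generally, for any f \not\equiv 0
% one has \int_0^x |f| \ge c > 0 for all large x, so \int^{\infty} c\,x^{-1}\,dx = \infty,
% the function-space analogue of Leibowitz's observation that \Ces_1 = \{0\}.
```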
Bennett also derives estimates for the norms of the corresponding inclusion operators. The same result, with different estimates, is due to Boas \cite{boas1970}, who in fact obtained the integral analogue of the Askey-Boas Theorem \cite[Lemma 6.18]{boas1967} and \cite{askeyboas}. These results were generalized in \cite{grosse} using the blocking technique. For a long time, Ces\`{a}ro function spaces did not attract much attention, in contrast to their sequence counterparts. In fact, there is a quite rich literature concerning different topics studied in Ces\`{a}ro sequence spaces, as for instance in \cites{CuiPluc,cuihud1999,cuihud2001,chencuihudsims,cuihudli}. However, recently in a series of papers, Astashkin and Maligranda started to study the structure of Ces\`{a}ro function spaces. Among others, in \cite{astasmal2009} they investigated the dual spaces of $\ces (p)$ for $1 < p < \infty$. Their description can be viewed as being analogous to the one given for sequence spaces in \cite{bennett1996} (for more detailed information about the history of classical Ces\`{a}ro spaces, see the recent survey paper \cite{asmalsurvey}). Throughout the paper we assume that $I : = (a,b)\subseteq (0,\i)$. By $\mp (I)$ we denote the set of all measurable functions on $I$. The symbol $\mp^+ (I)$ stands for the collection of all $f\in\mp (I)$ which are non-negative on $I$, while $\mp^{+,\dn}(I)$ is used to denote the subset of those functions which are non-increasing on $I$. A weight is a function $v \in {\mathfrak M}^+(0,\infty)$ such that $0 < V(x) < \infty$ for all $x \in (0,\infty)$, where $$ V(x) : = \int_0^x v(t)\,dt. $$ The family of all weight functions (also called just weights) on $(0,\infty)$ is given by $\W(0,\infty)$.
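As a concrete illustration (our addition), the power functions show how the condition $0 < V(x) < \infty$ selects admissible weights:

```latex
% Power weights: for v(t) = t^{\alpha} with \alpha > -1,
\[
V(x) = \int_0^x t^{\alpha}\,dt = \frac{x^{\alpha+1}}{\alpha+1} \in (0,\infty)
\qquad \text{for every } x \in (0,\infty),
\]
% so t^{\alpha} \in \W(0,\infty). For \alpha \le -1 the integral \int_0^x t^{\alpha}\,dt
% diverges at the origin, V \equiv \infty, and t^{\alpha} is not a weight in this sense.
```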
For $p\in (0,\i]$ and $w\in \mp^+(I)$, we define the functional $\|\cdot\|_{p,w,I}$ on $\mp (I)$ by \begin{equation*} \|f\|_{p,w,I} : = \left\{\begin{array}{cl} \left(\int_I |f(x)|^p w(x)\,dx \right)^{1/p} & \qq\mbox{if}\qq p<\i \\ \esup_{I} |f(x)|w(x) & \qq\mbox{if}\qq p=\i. \end{array} \right. \end{equation*} If, in addition, $w\in \W(I)$, then the weighted Lebesgue space $L^p(w,I)$ is given by \begin{equation*} L^p(w,I) = \{f\in \mp (I):\,\, \|f\|_{p,w,I} < \i\} \end{equation*} and it is equipped with the quasi-norm $\|\cdot\|_{p,w,I}$. When $I=(0,\infty)$, we write $L^p(w)$ instead of $L^p(w,(0,\infty))$. \begin{conv}\label{Notat.and.prelim.conv.1.1} We adopt the following conventions: \begin{itemize} \item Throughout the paper we put $0 \cdot \infty = 0$, $\infty / \infty = 0$ and $0/0 = 0$. \item If $p\in [1,+\infty]$, we define $p'$ by $1/p + 1/p' = 1$. \item If $0 < q < p < \infty$, we define $r$ by $1 / r = 1 / q - 1 / p$. \item If $I = (a,b) \subseteq {\mathbb R}$ and $g$ is a monotone function on $I$, then by $g(a)$ and $g(b)$ we mean the limits $\lim_{x\rightarrow a+}g(x)$ and $\lim_{x\rightarrow b-}g(x)$, respectively. \end{itemize} \end{conv} Throughout the paper, $c$ and $C$ denote positive constants which are independent of the main parameters and may vary from line to line. However, a constant with a subscript or superscript, such as $c_1$, does not change in different occurrences. By $a\lesssim b$ ($b\gtrsim a$) we mean that $a\leq \la b$, where $\la>0$ depends only on inessential parameters. If $a\lesssim b$ and $b\lesssim a$, we write $a\approx b$ and say that $a$ and $b$ are equivalent. Unless a special remark is made, the differential element $dx$ is omitted when the integrals under consideration are Lebesgue integrals. The weighted Ces\`{a}ro and Copson function spaces are defined as follows: \begin{defi}\label{defi.2.0} Let $0 <p \le \infty$, $u \in \mp^+ \I$ and $v \in \W (0,\infty)$.
The weighted Ces\`{a}ro and Copson spaces are defined by \begin{align*} \ces_{p} (u,v) : & = \bigg\{ f \in \mp^+ \I: \|f\|_{\ces_{p} (u,v)} : = \big\| \|f\|_{1,v,(0,\cdot)} \big\|_{p,u,\I} < \i \bigg\}, \\ \intertext{and} \cop_{p} (u,v) : & = \bigg\{ f \in \mp^+ \I: \|f\|_{\cop_{p} (u,v)} : = \big\| \|f\|_{1,v,(\cdot,\i)} \big\|_{p,u,\I} < \i \bigg\}, \end{align*} respectively. When $v \equiv 1$ on $(0,\infty)$, we simply write $\ces_{p} (u)$ and $\cop_{p} (u)$ instead of $\ces_{p} (u,v)$ and $\cop_{p} (u,v)$, respectively. \end{defi} Recall that $\ces_{p} (u,v)$ and $\cop_{p} (u,v)$ are contained in the scale of weighted Ces\`{a}ro and Copson function spaces $\ces_{p,q} (u,v)$ and $\cop_{p,q} (u,v)$ defined in \cite{gmu_2017}. Obviously, $\ces(p) = \ces_p (x^{-1})$ and $\cop(p) = \cop_p (x^{-1})$. In \cite{kamkub}, Kami{\'n}ska and Kubiak computed the dual norm of the Ces\`{a}ro function space $\ces_{p}(u)$, generated by $1 < p < \infty$ and an arbitrary positive weight $u$. A description presented in \cite{kamkub} resembles the approach of Jagers \cite{jagers} for sequence spaces. Let $u \in \W\I \cap C\I$, $b \in \W\I$ and $B(t) : = \int_0^t b(s)\,ds$. Assume that $b$ is a weight such that $b(t) > 0$ for a.e. $t \in (0,\infty)$. The weighted iterated Hardy-type operators involving suprema $T_{u,b}$ and $T_{u,b}^*$ are defined at $g \in \M^+ \I$ by \begin{align*} (T_{u,b} g)(t) & : = \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_0^{\tau} g(y)b(y)\,dy,\qquad t \in \I, \\ (T_{u,b}^* g)(t) & : = \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} g(y)b(y)\,dy,\qquad t \in \I. \end{align*} Such operators have been found indispensable in the search for optimal pairs of rearrangement-invariant norms for which a Sobolev-type inequality holds (cf. \cite{kerp}). They constitute a very useful tool for characterization of the associate norm of an operator-induced norm, which naturally appears as an optimal domain norm in a Sobolev embedding (cf. 
\cite{pick2000}, \cite{pick2002}). Supremum operators are also very useful in limiting interpolation theory, as can be seen from their appearance, for example, in \cite{evop}, \cite{dok}, \cite{cwikpys}, \cite{pys}. Recall that $T_{u,b}$ successfully controls non-increasing rearrangements of a wide range of maximal functions (see, for instance, \cite{musbil} and references therein). It was shown in \cite{gop} that for every $h \in \mp^+(0,\infty)$ and $t \in (0,\infty)$ $$ (T_{u,b} h)(t) = (T_{\bar{u},b} h) (t), $$ where $$ \bar{u} (t) : = B(t) \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)}, \qquad t\in (0,\infty). $$ Moreover, if the condition \begin{equation}\label{add.cond.} \sup_{0 < t < \infty} \frac{u(t)}{B(t)} \int_0^t \frac{b(\tau)}{u(\tau)}\,d\tau < \infty \end{equation} holds, then for all $f \in \mp^{+,\dn} (0,\infty)$, \begin{equation}\label{Split} (T_{u,b}f)(t) \approx (R_u f)(t) + (P_{\bar{u},b}f)(t), \qquad t\in (0,\infty), \end{equation} where the supremal operator $R_u$ and the weighted Hardy operator $P_{u,b}$ are defined for $h \in \mp^+(0,\infty)$ and $t \in (0,\infty)$ by \begin{align*} (R_u h) (t) : & = \sup_{t \le \tau} u(\tau) h(\tau), \\ (P_{u,b} h)(t) : & = \frac{u(t)}{B(t)} \int_0^t h(\tau) b(\tau) \,d \tau, \end{align*} respectively. Recall that the boundedness of $R_u$ from $L^p(v)$ into $L^q(w)$ on the cone of monotone non-increasing functions, that is, the validity of the inequality \begin{equation}\label{eq.R} \|R_u f\|_{L^q(w)} \le C \, \|f\|_{L^p(v)}, \qq f \in \mp^{+,\dn} (0,\infty) \end{equation} was completely characterized in \cite{gop} in the case $0 < p \le q < \infty$. In the case $0 < q < p < \infty$, \cite{gop} provides a solution when $u$ is equivalent to a non-decreasing function on $(0,\infty)$. The complete solution of inequality \eqref{eq.R} using a certain reduction method was presented in \cite{GogMusISI}. Another solution of \eqref{eq.R} was obtained in \cite{krep}.
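As a sanity check (this computation is ours, not part of the cited results), the splitting \eqref{Split} can be verified directly in the simplest case $u \equiv b \equiv 1$, where $B(t) = t$ and condition \eqref{add.cond.} holds with constant $1$:

```latex
% Take u \equiv b \equiv 1, so B(t) = t and \eqref{add.cond.} reads
% \sup_{t>0} t^{-1} \int_0^t d\tau = 1 < \infty. For non-increasing f the averages
% \tau \mapsto \tau^{-1} \int_0^{\tau} f are also non-increasing (since
% \int_0^{\tau} f \ge \tau f(\tau)), so the supremum is attained at \tau = t:
\[
(T_{1,1}f)(t) = \sup_{t \le \tau} \frac{1}{\tau} \int_0^{\tau} f
             = \frac{1}{t} \int_0^t f = (Pf)(t).
\]
% On the other side, \bar{u}(t) = t \sup_{t \le \tau} \tau^{-1} = 1, and
\[
(R_1 f)(t) + (P_{1,1} f)(t) = f(t) + \frac{1}{t} \int_0^t f
                            \approx \frac{1}{t} \int_0^t f,
\]
% because f(t) \le t^{-1} \int_0^t f for non-increasing f; both sides of
% \eqref{Split} thus reduce to the Hardy average (Pf)(t), with constants 1 and 2.
```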
Note that the inequality \begin{equation}\label{Reduction.Theorem.Thm.2.5} \| P_{u,b} (f) \|_{q,w,(0,\infty)} \le c \| f \|_{p,v,(0,\infty)}, \qq f \in \mp^{+,\dn} (0,\infty) \end{equation} was considered by many authors and there exist several characterizations of this inequality (see the papers \cites{cpss,bengros,gjop,cgmp2008,GogStep,GogMusIHI}). The complete characterization of the inequality \begin{equation}\label{Tub.thm.1.eq.1} \|T_{u,b}f \|_{q,w,\I} \le C \| f \|_{p,v,\I}, \qq f \in \mp^{+,\dn}(0,\infty) \end{equation} for $0 < q \le \infty$, $0 < p \le \infty$ was given in \cite{GogMusISI} and \cite{musbil}. Inequality \eqref{Tub.thm.1.eq.1} was characterized in \cite[Theorem 3.5]{gop} under condition \eqref{add.cond.}. Note that the case when $0 < p \le 1 < q < \infty$ was not considered in \cite{gop}. It is also worth mentioning that in the case when $1 < p < \infty$, $0 < q < p < \infty$, $q \neq 1$, \cite[Theorem 3.5]{gop} contains only a discrete condition. In \cite{gogpick2007} a new reduction theorem was obtained when $0 < p \le 1$; this technique allowed the characterization of inequality \eqref{Tub.thm.1.eq.1} when $b \equiv 1$, and in the case when $0 < q < p \le 1$, \cite{gogpick2007} contains only a discrete condition. Using the results in \cites{PS_Proc_2013,PS_Dokl_2013,PS_Dokl_2014,P_Dokl_2015}, another characterization of \eqref{Tub.thm.1.eq.1} was obtained in \cite{StepSham} and \cite{Sham}. In this paper we investigate the boundedness of $T_{u,b}$ and $T_{u,b}^*$ from the weighted Lebesgue spaces $L^p(v)$ into the weighted Ces\`{a}ro spaces $\ces_{q} (w,a)$, when $1 < p,\, q < \infty$ (see Theorems \ref{aux.thm.1} and \ref{aux.thm.2}). These results allow us to obtain the characterization of the boundedness of $R_u$ from $L^p(v)$ into $\ces_{q}(w,a)$ on the cone of monotone non-increasing functions (see Theorem \ref{thm.R}).
For the convenience of the reader, we formulate the statement on the boundedness of $P_{u,b}$ from $L^p(v)$ into $\ces_{q}(w,a)$ on the cone of monotone non-increasing functions (see Theorem \ref{aux.thm.3}). In view of \eqref{Split}, we are able to characterize the boundedness of $T_{u,b}$ from $L^p(v)$ into $\ces_q(w,a)$ on the cone of monotone non-increasing functions (see Theorem \ref{thm.T}). At the end of the paper, as an application of the obtained results, we calculate the norm of the fractional maximal function $M_{\gamma}$ from $\Lambda^p(v)$ into $\Gamma^q(w)$. The paper is organized as follows. We start with the formulation of an ``integration by parts'' formula in Section~\ref{integration by parts}. The boundedness results for $T_{u,b}$ and $T_{u,b}^*$ from $L^p(v)$ into $\ces_q(w,a)$ are presented in Section \ref{main results}. The characterizations of the boundedness of $R_u$, $P_{u,b}$ and $T_{u,b}$ from $L^p(v)$ into $\ces_{q}(w,a)$ on the cone of monotone non-increasing functions are given in Sections \ref{R}, \ref{P} and \ref{T}, respectively. Finally, the results obtained in the previous sections are applied to calculate the norm of the operator $M_{\gamma}: \Lambda^p(v) \rw \Gamma^q(w)$ in Section \ref{Appl.}. \ \section{An ``integration by parts'' formula}\label{integration by parts} \ We recall the following ``integration by parts'' formula. For the convenience of the reader we give the proof here (cf. \cite[Lemma, p. 176]{step_1993}). \begin{thm}\label{thm.IBP.0} Let $\alpha > 0$. Let $g$ be a non-negative function on $(0,\infty)$ such that $0 < \int_0^t g < \infty$, $t > 0$, and let $f$ be a non-negative non-increasing right-continuous function on $(0,\infty)$. Then \begin{align*} A_1 : = \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) [f(t) - \lim_{s \rw +\infty} f(s)]\,dt < \infty \quad \Longleftrightarrow \quad A_2 : = \int_{(0,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)] < \infty. \end{align*} Moreover, $A_1 \approx A_2$.
\end{thm} \begin{proof} Assume at first that $\lim_{t \rw +\infty} f(t) = 0$. Let $$ A_1 = \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt < \infty. $$ Then $$ \int_0^x \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt \rightarrow 0, \quad \mbox{as} \quad x \rightarrow 0+. $$ Since \begin{align*} \int_0^x \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt \ge f(x) \, \int_0^x \bigg( \int_0^t g \bigg)^{\alpha} g(t) \,dt \approx f(x) \, \bigg( \int_0^x g \bigg)^{\alpha + 1}, \quad x > 0, \end{align*} we have that $$ f(x) \, \bigg( \int_0^x g \bigg)^{\alpha + 1} \rightarrow 0, \quad \mbox{as} \quad x \rightarrow 0+. $$ Integrating by parts, we get that \begin{align*} A_2 & = \int_{(0,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)] = - f(t) \, \bigg( \int_0^t g \bigg)^{\alpha + 1} \bigg|_{0}^{\infty} + \int_{(0,\infty)} f(t)\,d\bigg( \int_0^t g \bigg)^{\alpha + 1} \\ & = \lim_{t \rw 0+} f(t) \, \bigg( \int_0^t g \bigg)^{\alpha + 1} - \lim_{t \rw +\infty} f(t) \, \bigg( \int_0^t g \bigg)^{\alpha + 1} + (\alpha + 1) \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt \\ & \le (\alpha + 1) \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt = (\alpha + 1) A_1. \end{align*} Thus $$ A_2 \lesssim A_1. $$ Now assume that $$ A_2 : = \int_{(0,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)] < \infty. $$ Then $$ \int_{[x,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)] \rightarrow 0, \quad \mbox{as} \quad x \rightarrow +\infty. $$ Since \begin{align*} \int_{[x,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)] & \ge \bigg( \int_0^x g \bigg)^{\alpha + 1} \int_{[x,\infty)} \,d[-f(t)] \\ & = \bigg( \int_0^x g \bigg)^{\alpha + 1} [f(x) - \lim_{x \rw +\infty} f(x)] = f(x) \,\bigg( \int_0^x g \bigg)^{\alpha + 1}, \quad x>0, \end{align*} we obtain that $$ f(x) \,\bigg( \int_0^x g \bigg)^{\alpha + 1} \rightarrow 0, \quad \mbox{as} \quad x \rightarrow +\infty. 
$$ Thus, integrating by parts, we get that \begin{align*} A_1 & = \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt \approx \int_0^{\infty} f(t) \,d \bigg(\int_0^t g \bigg)^{\alpha + 1} \\ & = f(t) \, \bigg(\int_0^t g \bigg)^{\alpha + 1} \bigg|_0^{\infty} + \int_0^{\infty} \bigg(\int_0^t g \bigg)^{\alpha + 1} \,d [-f(t)] \\ & = \lim_{t \rw \infty} f(t) \, \bigg(\int_0^t g \bigg)^{\alpha + 1} - \lim_{t \rw 0+} f(t) \, \bigg(\int_0^t g \bigg)^{\alpha + 1} + \int_0^{\infty} \bigg(\int_0^t g \bigg)^{\alpha + 1} \,d [-f(t)] \\ & \le \int_0^{\infty} \bigg(\int_0^t g \bigg)^{\alpha + 1} \,d [-f(t)] = A_2. \end{align*} Hence $$ A_1 \lesssim A_2. $$ We have shown that if $\lim_{x \rw +\infty} f(x) = 0$, then $$ A_1 < \infty \quad \Longleftrightarrow \quad A_2 < \infty, $$ and $$ A_1 \approx A_2. $$ Now assume that $\lim_{x \rw +\infty} f(x) > 0$. Then, applying previous statement to the function $f(x) - \lim_{x \rw +\infty} f(x)$, we arrive at $$ \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) [f(t) - \lim_{x \rw +\infty} f(x)]\,dt \approx\int_{(0,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)]. $$ The proof is completed. \end{proof} \begin{rem}\label{rem.IBP.0} Note that if $f \in \mp^{+,\dn}(0,\infty)$ is such that $\lim_{x \rw +\infty} f(x) > 0$, then $$ \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt < \infty \quad \Longrightarrow \quad \int_0^{\infty} g(x)\,dx < \infty. $$ Indeed: for each $x \in (0,\infty)$ \begin{align*} \infty > \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt & \ge \int_0^x \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt \\ & \ge f(x) \, \int_0^x \bigg( \int_0^t g \bigg)^{\alpha} g(t)\,dt \approx f(x)\,\bigg(\int_0^x g \bigg)^{\alpha + 1} \end{align*} holds. Thus $$ \lim_{x \rw +\infty} f(x) \cdot \bigg(\int_0^x g \bigg)^{\alpha + 1} \le f(x)\,\bigg(\int_0^x g \bigg)^{\alpha + 1} \le \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt < \infty. 
$$ Hence $$ \lim_{x \rw +\infty} f(x) \cdot \bigg(\int_0^{\infty} g \bigg)^{\alpha + 1} < \infty. $$ Therefore $$ \int_0^{\infty} g < \infty. $$ \end{rem} \begin{cor}\label{cor.IBP.0} Let $\alpha > 0$. Let $g$ be a non-negative function on $(0,\infty)$ such that $0 < \int_0^t g < \infty$, $t > 0$ and let $f$ be a non-negative non-increasing right-continuous function on $(0,\infty)$. Then \begin{align*} \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt \approx \int_{(0,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)] + \lim_{x \rw +\infty} f(x) \cdot \bigg( \int_0^{\infty} g \bigg)^{\alpha + 1}. \end{align*} \end{cor} \begin{proof} If $\lim_{x \rw +\infty} f(x) = 0$, then the statement follows by Theorem \ref{thm.IBP.0}. If $\lim_{x \rw +\infty} f(x) > 0$, then by Remark \ref{rem.IBP.0}, we know that $$ \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt < \infty \quad \Longrightarrow \quad \int_0^{\infty} g(x)\,dx < \infty. $$ Therefore, by Theorem \ref{thm.IBP.0}, we get that \begin{align*} \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) f(t)\,dt & = \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t) [f(t) - \lim_{x \rw +\infty} f(x)]\,dt + \lim_{x \rw +\infty} f(x) \cdot \int_0^{\infty} \bigg( \int_0^t g \bigg)^{\alpha} g(t)\,dt \\ & \approx \int_{(0,\infty)} \bigg( \int_0^t g \bigg)^{\alpha + 1}\,d[-f(t)] + \lim_{x \rw +\infty} f(x) \cdot \bigg( \int_0^{\infty} g \bigg)^{\alpha + 1}. \end{align*} The proof is completed. \end{proof} \begin{thm}\label{thm.IBP} Let $\alpha > 0$. Let $g$ be a non-negative function on $(0,\infty)$ such that $0 < \int_t^{\infty} g < \infty$, $t > 0$ and let $f$ be a non-negative non-decreasing left-continuous function on $(0,\infty)$. Then \begin{align*} B_1 : = \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) [f(t) - f(0+)]\,dt < \infty \quad \Longleftrightarrow \quad B_2 : = \int_{(0,\infty)} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)] < \infty. 
\end{align*} Moreover, $B_1 \approx B_2$. \end{thm} \begin{proof} Assume at first that $f(0+) = 0$. Let $$ B_1 : = \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt < \infty. $$ Then $$ \int_x^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt \rightarrow 0, \quad \mbox{as} \quad x \rightarrow \infty. $$ Since \begin{align*} \int_x^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt \ge f(x) \, \int_x^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) \,dt \approx f(x) \, \bigg( \int_x^{\infty} g \bigg)^{\alpha + 1}, \quad x > 0, \end{align*} we have that $$ f(x) \, \bigg( \int_x^{\infty} g \bigg)^{\alpha + 1} \rightarrow 0, \quad \mbox{as} \quad x \rightarrow \infty. $$ Hence, integrating by parts, we get that \begin{align*} B_2 & = \int_{(0,\infty)} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)] = f(t) \, \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1} \bigg|_{0}^{\infty} - \int_{(0,\infty)} f(t)\,d\bigg( \int_t^{\infty} g \bigg)^{\alpha + 1} \\ & = \lim_{t \rw \infty} f(t) \, \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1} - \lim_{t \rw 0+} f(t) \, \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1} + (\alpha + 1) \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt \\ & \le (\alpha + 1) \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt = (\alpha + 1) B_1. \end{align*} Now assume that $$ B_2 : = \int_{(0,\infty)} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)] < \infty. $$ Then $$ \int_{(0,x]} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)] \rightarrow 0, \quad \mbox{as} \quad x \rightarrow 0+. 
$$ Since \begin{align*} \int_{(0,x]} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)] & \ge \bigg( \int_x^{\infty} g \bigg)^{\alpha + 1} \int_{(0,x]} \,d[f(t)] \\ & = \bigg( \int_x^{\infty} g \bigg)^{\alpha + 1} [f(x) - f(0+)] = f(x) \,\bigg( \int_x^{\infty} g \bigg)^{\alpha + 1}, \quad x>0, \end{align*} we obtain that $$ f(x) \,\bigg( \int_x^{\infty} g \bigg)^{\alpha + 1} \rightarrow 0, \quad \mbox{as} \quad x \rightarrow 0+. $$ Thus, integrating by parts, we get that \begin{align*} B_1 & = \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt \approx \int_0^{\infty} f(t)\,d \bigg[ - \bigg(\int_t^{\infty} g \bigg)^{\alpha + 1} \bigg] \\ & = - f(t) \, \bigg(\int_t^{\infty} g \bigg)^{\alpha + 1} \bigg|_0^{\infty} + \int_0^{\infty} \bigg(\int_t^{\infty} g \bigg)^{\alpha + 1} \,d [f(t)] \\ & = \lim_{t \rw 0+} f(t) \, \bigg(\int_t^{\infty} g \bigg)^{\alpha + 1} - \lim_{t \rw \infty} f(t) \, \bigg(\int_t^{\infty} g \bigg)^{\alpha + 1} + \int_0^{\infty} \bigg(\int_t^{\infty} g \bigg)^{\alpha + 1} \,d [f(t)] \\ & \le \int_0^{\infty} \bigg(\int_t^{\infty} g \bigg)^{\alpha + 1} \,d [f(t)] = B_2. \end{align*} We have shown that if $f(0+) = 0$, then $$ B_1 < \infty \quad \Longleftrightarrow \quad B_2 < \infty, $$ and $$ B_1 \approx B_2. $$ Now assume that $f(0+) > 0$. Then, applying previous statement to the function $f(x) - f(0+)$, we arrive at $$ \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) [f(t) - f(0+)]\,dt \approx\int_{(0,\infty)} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)]. $$ The proof is completed. \end{proof} \begin{rem}\label{rem.IBP} Note that if $f$ is a non-negative non-decreasing function on $(0,\infty)$ such that $f(0+) > 0$, then $$ \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt < \infty \quad \Longrightarrow \quad \int_0^{\infty} g(x)\,dx < \infty. 
$$ Indeed: for each $x \in (0,\infty)$ \begin{align*} \infty > \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt & \ge \int_x^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt \\ & \ge f(x) \, \int_x^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t)\,dt \approx f(x)\,\bigg(\int_x^{\infty} g \bigg)^{\alpha + 1} \end{align*} holds. Thus $$ f(0+) \bigg(\int_x^{\infty} g \bigg)^{\alpha + 1} \le f(x)\,\bigg(\int_x^{\infty} g \bigg)^{\alpha + 1} \le \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt < \infty. $$ Hence $$ f(0+) \, \bigg(\int_0^{\infty} g \bigg)^{\alpha + 1} < \infty. $$ Therefore $$ \int_0^{\infty} g < \infty. $$ \end{rem} \begin{cor}\label{cor.IBP} Let $\alpha > 0$. Let $g$ be a non-negative function on $(0,\infty)$ such that $0 < \int_t^{\infty} g < \infty$, $t > 0$ and let $f$ be a non-negative non-decreasing left-continuous function on $(0,\infty)$. Then \begin{align*} \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt \approx \int_{(0,\infty)} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)] + f(0+) \, \bigg( \int_0^{\infty} g \bigg)^{\alpha + 1}. \end{align*} \end{cor} \begin{proof} If $f(0+) = 0$, then the statement follows by Theorem \ref{thm.IBP}. If $f(0+) > 0$, then by Remark \ref{rem.IBP}, we know that $$ \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt < \infty \quad \Longrightarrow \quad \int_0^{\infty} g(x)\,dx < \infty. $$ Therefore, by Theorem \ref{thm.IBP}, we get that \begin{align*} \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) f(t)\,dt & = \int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t) [f(t) - f(0+)]\,dt + f(0+) \,\int_0^{\infty} \bigg( \int_t^{\infty} g \bigg)^{\alpha} g(t)\,dt \\ & \approx \int_{(0,\infty)} \bigg( \int_t^{\infty} g \bigg)^{\alpha + 1}\,d[f(t)] + f(0+) \, \bigg( \int_0^{\infty} g \bigg)^{\alpha + 1}. \end{align*} The proof is completed. 
\end{proof} \ \section{The boundedness of $T_{u,b}$ and $T_{u,b}^*$ from $L^p(v)$ into $\ces_q(w,a)$}\label{main results} \ In this section we give solutions of the following two inequalities \begin{align} \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_0^{\tau} h(y)b(y)\,dy \bigg) \,a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}} \le C \, \bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}, \quad h \in \mp^+ (0,\infty) \label{eq.1} \end{align} and \begin{align} \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)b(y)\,dy \bigg) \,a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}} \le C \, \bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}, \quad h \in \mp^+ (0,\infty), \label{eq.2} \end{align} where $1 < p \le q < \infty$ and $a,\,u,\,v,\,w \in \W \I$. Using the duality argument, we reduce the problem to the boundedness for the dual of integral Volterra operator with a kernel satisfying Oinarov’s condition and weighted Stieltjes operator. Note that the characterization of inequalities \begin{align} \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_0^{\tau} h(y)b(y)\,dy \bigg) \,a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}} \le C \, \bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}, \quad h \in \mp^+ (0,\infty) \label{eq.3} \end{align} and \begin{align} \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)b(y)\,dy \bigg) \,a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}} \le C \, \bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}, \quad h \in \mp^+ (0,\infty) \label{eq.4} \end{align} can be reduced to the solutions of \eqref{eq.1} and \eqref{eq.2}. 
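For the reader's convenience (this recollection is our addition), we recall the form of Oinarov's condition referred to above, which the kernels arising in the duality argument satisfy for an elementary reason:

```latex
% Oinarov's condition on a non-negative kernel k(x,y), y \le x, requires
\[
k(x,y) \approx k(x,z) + k(z,y) \qquad \text{for all } y \le z \le x,
\]
% with equivalence constants independent of x, y, z. Kernels of the form
% k(x,y) = \int_y^x \varphi(s)\,ds with \varphi \ge 0, such as those produced
% below by Fubini's Theorem, satisfy it with constant 1, since the integral
% is additive over adjacent intervals.
```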
Recall that, if $F$ is a non-negative non-decreasing function on $\I$, then \begin{equation}\label{Fubini.2} \esup_{t \in (0,\infty)} F(t)G(t) = \esup_{t \in (0,\infty)} F(t) \esup_{\tau \in (t,\infty)} G(\tau), \end{equation} likewise, when $F$ is a non-negative non-increasing function on $\I$, then \begin{equation}\label{Fubini.1} \esup_{t \in (0,\infty)} F(t)G(t) = \esup_{t \in (0,\infty)} F(t) \esup_{\tau \in (0,t)} G(\tau) \end{equation} (see, for instance, \cite[p. 85]{gp2}). We need the following notations: $$ \begin{array}{ll} A(t) : =\int_0^t a(s)ds, \qq U(t) : =\int_0^t u(s)ds, \qq W(t) : =\int_0^t w(s)ds. \end{array} $$ \begin{thm}\label{aux.thm.1} Let $1 < p,\, q < \infty$. Assume that $u \in \W\I \cap C\I$ and $a,\,v,\,w \in \W\I$. Moreover, assume that $$ 0 < \int_0^x v(t)^{1-p'}\,dt < \infty \qquad \mbox{for all} \quad x > 0. $$ {\rm (i)} If $p \le q$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}u(\tau) \int_0^{\tau} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t v(x)^{1-p'} \, \bigg( \int_x^t \bigg( \sup_{s \le \tau}u(\tau)\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{t \in (0,\infty)} \bigg( \int_0^t v(x)^{1-p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau}u(\tau)\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-6.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} 
u(\tau)^{p'} \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg); \end{align*} {\rm (ii)} If $q < p$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}u(\tau) \int_0^{\tau} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t v(x)^{1-p'} \,dx \bigg)^{\frac{r}{q'}} v(t)^{1-p'} \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t v(x)^{1-p'} \bigg( \int_x^t \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_z^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-6.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds 
\bigg)^{\frac{1}{p'}}\bigg). \end{align*} \end{thm} \begin{proof} Assume that $1 < p \le q < \infty$. By duality, using Fubini's Theorem, and interchanging the suprema, we get that \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} u(\tau) \int_0^{\tau} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-5cm} = \sup_{h \ge 0} \frac{1}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} \sup_{g \ge 0}\frac{ \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} u(\tau) \int_0^{\tau} h(y)\,dy \bigg) a(t)\, dt \bigg) g(x)\,dx }{\bigg( \int_0^{\infty} g(x)^{q'}w(x)^{1-q'}\,dx\bigg)^{\frac{1}{q'}}} \\ & \hspace{-5cm} = \sup_{g \ge 0} \frac{1}{\bigg( \int_0^{\infty} g(x)^{q'}w(x)^{1-q'}\,dx\bigg)^{\frac{1}{q'}}} \sup_{h \ge 0}\frac{ \int_0^{\infty} \bigg( \sup_{t \le \tau} u(\tau) \int_0^{\tau} h(y)\,dy \bigg) \bigg( \int_t^{\infty} g(x)\,dx \bigg) a(t)\,dt }{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}}. 
\end{align*} Applying \cite[Theorem 4.4]{gop} and using \eqref{Fubini.2}, we arrive at \begin{align*} \sup_{h \ge 0}\frac{ \int_0^{\infty} \bigg( \sup_{t \le \tau} u(\tau) \int_0^{\tau} h(y)\,dy \bigg) \bigg( \int_t^{\infty} g(x)\,dx \bigg) a(t)\,dt }{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \approx D + E, \end{align*} where \begin{align*} D & := \bigg( \int_0^{\infty} \bigg( \int_t^{\infty} \bigg( \sup_{s \le \tau}u(\tau)\bigg) \bigg( \int_s^{\infty} g(x)\,dx \bigg) a(s)\,ds \bigg)^{\frac{p'}{p}} \bigg( \sup_{t \le \tau}u(\tau)\bigg) \bigg( \int_0^t v(s)^{1-p'}\,ds \bigg) \bigg( \int_t^{\infty} g(x)\,dx \bigg)a(t)\,dt \bigg)^{\frac{1}{p'}}, \\ E & := \bigg( \int_0^{\infty} \bigg( \int_0^t \bigg( \int_s^{\infty} g(x)\,dx \bigg) a(s)\,ds \bigg)^{\frac{p'}{p}} \bigg(\sup_{t \le \tau} u(\tau)^{p'} \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg( \int_t^{\infty} g(x)\,dx \bigg)a(t)\,dt \bigg)^{\frac{1}{p'}}. \end{align*} Integrating by parts (applying Corollary \ref{cor.IBP}) and using Fubini's Theorem, we arrive at \begin{align*} D & \approx \bigg( \int_0^{\infty} \bigg( \int_t^{\infty} \bigg( \sup_{s \le \tau}u(\tau)\bigg) \bigg( \int_s^{\infty} g(x)\,dx \bigg) a(s)\,ds \bigg)^{p'} \, v(t)^{1-p'}\,dt \bigg)^{\frac{1}{p'}} \\ & = \bigg( \int_0^{\infty} \bigg( \int_t^{\infty} g(x) \int_t^x \bigg( \sup_{s \le \tau}u(\tau)\bigg) a(s)\,ds \,dx \bigg)^{p'} \, v(t)^{1-p'} \,dt \bigg)^{\frac{1}{p'}}. 
\end{align*} Similarly, integrating by parts (applying Corollary \ref{cor.IBP.0}) and using Fubini's Theorem, we arrive at \begin{align*} E \approx & \,\, \bigg( \int_0^{\infty} \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg(\int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \,d \, \bigg( \int_0^t \bigg( \int_s^{\infty} g(x)\,dx \bigg) a(s)\,ds \bigg)^{p'} \bigg)^{\frac{1}{p'}} \\ \approx & \,\, \bigg( \int_0^{\infty} \bigg( \int_0^t \bigg( \int_s^{\infty} g(x)\,dx \bigg) a(s)\,ds \bigg)^{p'} \,d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}} \\ & + \bigg( \int_0^{\infty} \bigg( \int_s^{\infty} g(x)\,dx \bigg) a(s)\,ds \bigg) \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg)^{\frac{1}{p'}} \\ \approx & \,\, \bigg( \int_0^{\infty} \bigg( \int_0^t g(x)A(x)\,dx \bigg)^{p'} \,d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}} \\ & + \bigg( \int_0^{\infty} \bigg( \int_t^{\infty} g(x)\,dx \bigg)^{p'} A(t)^{p'} \,d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}} \\ & + \bigg( \int_0^{\infty} g(x)A(x)\,dx \bigg) \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}} \bigg) =: E_1 + E_2 + E_3. \end{align*} {\rm (i)} Let $p \le q$. 
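Before evaluating the suprema over $g$ below, we recall the duality identity on which these steps rest: for $1 < q < \infty$ and every non-negative measurable $F$,
\begin{align*}
\bigg( \int_0^{\infty} F(x)^q w(x)\,dx \bigg)^{\frac{1}{q}} = \sup_{g \ge 0} \frac{\int_0^{\infty} F(x) g(x)\,dx}{\bigg( \int_0^{\infty} g(x)^{q'} w(x)^{1-q'}\,dx \bigg)^{\frac{1}{q'}}},
\end{align*}
so it suffices to estimate $\sup_{g \ge 0} D \big/ \big( \int_0^{\infty} g^{q'}w^{1-q'} \big)^{\frac{1}{q'}}$ and the analogous quotient for $E$.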
By \cite[Theorem 1.1]{Oinar}, we obtain that \begin{align} \sup_{g \ge 0} \frac{ D }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \notag \\ & \hspace{-3cm} = \, \sup_{g \ge 0} \frac{1}{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} \bigg( \int_0^{\infty} \bigg( \int_t^{\infty} g(x) \int_t^x \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \,dx \bigg)^{p'} \, v(t)^{1-p'} \,dt \bigg)^{\frac{1}{p'}} \notag \\ & \hspace{-3cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t v(x)^{1-p'} \, \bigg( \int_x^t \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-2.5cm} + \sup_{t \in (0,\infty)} \bigg( \int_0^t v(x)^{1-p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^z \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{q} w(z) \,dz \bigg)^{\frac{1}{q}}. \label{eq.I} \end{align} By \cite[Theorem 1, p. 40 and Theorem 3, p. 44]{mazya}, respectively, we have that \begin{align*} \sup_{g \ge 0} \frac{ E_1 }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \\ & \hspace{-3cm} = \sup_{g \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^t g(x)A(x)\,dx \bigg)^{p'} \,d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}}}{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} \\ & \hspace{-3cm} \approx \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \end{align*} and \begin{align*} \sup_{g \ge 0} \frac{ E_2 }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \\ & \hspace{-3cm} = \sup_{g \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_t^{\infty} g(x)\,dx \bigg)^{p'} A(t)^{p'} \,d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( 
\int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}}}{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} \\ & \hspace{-3cm} \approx \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}}. \end{align*} By duality, we have that \begin{align*} \sup_{g \ge 0} \frac{ E_3 }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \\ & \hspace{-3cm} = \sup_{g \ge 0} \frac{\int_0^{\infty} g(x)A(x)\,dx }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} \cdot \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-3cm} = \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg). \end{align*} Thus, we get that \begin{align} \sup_{g \ge 0} \frac{E}{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \notag \\ & \hspace{-3cm} \approx \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-2.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-2.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg). 
\label{eq.II} \end{align} Combining \eqref{eq.I} and \eqref{eq.II}, we arrive at \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}u(\tau) \int_0^{\tau} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t v(x)^{1-p'} \, \bigg( \int_x^t \bigg( \sup_{s \le \tau}u(\tau)\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{t \in (0,\infty)} \bigg( \int_0^t v(x)^{1-p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau}u(\tau)\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-6.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg). \end{align*} {\rm (ii)} Let now $q < p$. 
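Recall that throughout the case $q < p$ the exponent $r$ is, as usual in this setting, given by
\begin{align*}
\frac{1}{r} = \frac{1}{q} - \frac{1}{p}, \qquad \mbox{that is,} \qquad r = \frac{pq}{p-q},
\end{align*}
so that $r \in (0,\infty)$ precisely when $q < p$.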
By \cite[Theorem 1.2]{Oinar}, we obtain that \begin{align} \sup_{g \ge 0} \frac{ D }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \notag \\ & \hspace{-3cm} = \, \sup_{g \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_t^{\infty} g(x) \int_t^x \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \,dx \bigg)^{p'} \, v(t)^{1-p'} \,dt \bigg)^{\frac{1}{p'}} }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} \notag \\ & \hspace{-3cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t v(x)^{1-p'} \,dx \bigg)^{\frac{r}{q'}} v(t)^{1-p'} \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-2.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t v(x)^{1-p'} \bigg( \int_x^t \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}}. \label{eq.I.0} \end{align} By \cite[Theorem 2, p. 
48]{mazya}, we have that \begin{align*} \sup_{g \ge 0} \frac{ E_1 }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \\ & \hspace{-3cm} = \sup_{g \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^t g(x)A(x)\,dx \bigg)^{p'} \,d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}}}{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} \\ & \hspace{-3cm} \approx \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \end{align*} and \begin{align*} \sup_{g \ge 0} \frac{ E_2 }{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \\ & \hspace{-3cm} = \sup_{g \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_t^{\infty} g(x)\,dx \bigg)^{p'} A(t)^{p'} \,d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{1}{p'}}}{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} \\ & \hspace{-3cm} \approx \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}}. 
\end{align*} Consequently, we arrive at \begin{align} \sup_{g \ge 0} \frac{E}{\bigg( \int_0^{\infty} g^{q'}w^{1-q'}\bigg)^{\frac{1}{q'}}} & \notag \\ & \hspace{-3cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-2.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-2.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg). \label{eq.II.0} \end{align} Combining \eqref{eq.I.0} and \eqref{eq.II.0}, we arrive at \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}u(\tau) \int_0^{\tau} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t v(x)^{1-p'} \,dx \bigg)^{\frac{r}{q'}} v(t)^{1-p'} \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t v(x)^{1-p'} \bigg( \int_x^t \bigg( \sup_{s \le y}u(y)\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( 
\int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'} \, \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) \bigg( \int_0^{\tau} v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg). \end{align*} The proof is completed. \end{proof} \begin{thm}\label{aux.thm.1.1} Let $1 < p,\, q < \infty$ and $b \in \W\I$ be such that $b(t) > 0$ for a.e. $t\in (0,\infty)$. Assume that $u \in \W\I \cap C\I$ and $a,\,v,\,w \in \W\I$. Moreover, assume that $$ 0 < \int_0^x v(t)^{1-p'}\,dt < \infty \qquad \mbox{for all} \quad x > 0. 
$$ {\rm (i)} If $p \le q$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_0^{\tau} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t b(x)^{p'}v(x)^{1-p'} \, \bigg( \int_x^t \bigg( \sup_{s \le \tau}\frac{u(\tau)}{B(\tau)}\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{t \in (0,\infty)} \bigg( \int_0^t b(x)^{p'}v(x)^{1-p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau}\frac{u(\tau)}{B(\tau)}\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-6.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \bigg( \int_0^{\tau} b(s)^{p'} v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \bigg( \int_0^{\tau} b(s)^{p'}v(s)^{1-p'}\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \bigg( \int_0^{\tau} b(s)^{p'}v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg); \end{align*} {\rm (ii)} If $q < p$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}\frac{u(\tau)}{B(\tau)} \int_0^{\tau} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & 
\hspace{-7cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t b(x)^{p'}v(x)^{1-p'} \,dx \bigg)^{\frac{r}{q'}} b(t)^{p'}v(t)^{1-p'} \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}\frac{u(y)}{B(y)}\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t b(x)^{p'}v(x)^{1-p'} \bigg( \int_x^t \bigg( \sup_{s \le y}\frac{u(y)}{B(y)}\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-6.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \, \bigg( \int_0^{\tau} b(s)^{p'}v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \, \bigg( \int_0^{\tau} b(s)^{p'}v(s)^{1-p'}\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \bigg( \int_0^{\tau} b(s)^{p'}v(s)^{1-p'}\,ds \bigg)^{\frac{1}{p'}}\bigg). 
\end{align*} \end{thm} \begin{proof} The statement follows at once from Theorem \ref{aux.thm.1} once we note that \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_0^{\tau} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} = \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_0^{\tau} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p b(s)^{-p}v(s)\,ds\bigg)^{\frac{1}{p}}}. \end{align*} \end{proof} \begin{thm}\label{aux.thm.2} Let $1 < p,\, q < \infty$ and $b \in \W\I$ be such that $b(t) > 0$ for a.e. $t\in (0,\infty)$. Assume that $u \in \W\I \cap C\I$ and $a,\,v,\,w \in \W\I$. Moreover, assume that $$ 0 < \int_x^{\infty} v(t)^{1-p'}\,dt < \infty \qquad \mbox{for all} \quad x > 0. $$ Define \begin{align*} \psi (x) & := \bigg( \int_x^{\infty} b(t)^{p'} v(t)^{1-p'}\,dt\bigg)^{- \frac{p'}{p' + 1}} b(x)^{p'} v(x)^{1-p'} \\ \intertext{and} \Psi(x) & := \bigg( \int_x^{\infty} b(t)^{p'} v(t)^{1-p'}\,dt\bigg)^{\frac{1}{p' + 1}}. 
\end{align*} {\rm (i)} If $p \le q$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}\frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \, \bigg( \int_x^t \bigg( \sup_{s \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \, \sup_{t \in (0,\infty)} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-6.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \,\sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \,\bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-6.5cm} + \, \bigg( \int_0^{\infty} \psi(s)\,ds\bigg)^{-\frac{1}{p}} \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg) a(t)\,dt \bigg)^q w(x)\,dx 
\bigg)^{\frac{1}{q}}; \end{align*} {\rm (ii)} If $q < p$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-8cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \,dx \bigg)^{\frac{r}{q'}} \Psi(t)^{-p'} \psi(t) \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y} \frac{u(y)}{B(y)}\Psi(y)^2\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \bigg( \int_x^t \bigg( \sup_{s \le y}\frac{u(y)}{B(y)}\Psi(y)^2\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-7.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \, \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \, \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \,\bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-7.5cm} + \, \bigg( \int_0^{\infty} 
\psi(s)\,ds\bigg)^{-\frac{1}{p}} \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}\frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}. \end{align*} \end{thm} \begin{proof} By \cite[Corollary 3.5]{GogMusIHI}, we have that \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}\frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} = \, \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}\frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p b(s)^{-p}v(s)\,ds\bigg)^{\frac{1}{p}}} \\ & \hspace{-7cm} \approx \, \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)}\Psi(\tau)^2 \int_0^{\tau} h(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p \Psi(s)^p \psi(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} \\ & \hspace{-6.5cm} + \, \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} \psi(s)\,ds\bigg)^{\frac{1}{p}}}. \end{align*} {\rm (i)} Let $p \le q$. 
By Theorem \ref{aux.thm.1}, (i), we get that \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}\frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \, \bigg( \int_x^t \bigg( \sup_{s \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \, \sup_{t \in (0,\infty)} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-6.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \,\sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \,\bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-6.5cm} + \, \bigg( \int_0^{\infty} \psi(s)\,ds\bigg)^{-\frac{1}{p}} \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg) a(t)\,dt \bigg)^q w(x)\,dx 
\bigg)^{\frac{1}{q}}; \end{align*} {\rm (ii)} Let $q < p$. By Theorem \ref{aux.thm.1}, (ii), we obtain that \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \int_{\tau}^{\infty} h(y)b(y)\,dy \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-8cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \,dx \bigg)^{\frac{r}{q'}} \Psi(t)^{-p'} \psi(t) \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y} \frac{u(y)}{B(y)}\Psi(y)^2\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t \Psi(x)^{-p'} \psi(x) \bigg( \int_x^t \bigg( \sup_{s \le y}\frac{u(y)}{B(y)}\Psi(y)^2\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-7.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \, \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \bigg(\frac{u(\tau)}{B(\tau)}\bigg)^{p'} \Psi(\tau)^{2p'} \, \bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} \frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \,\bigg( \int_0^{\tau} \Psi(s)^{-p'} \psi(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-7.5cm} + 
\, \bigg( \int_0^{\infty} \psi(s)\,ds\bigg)^{-\frac{1}{p}} \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau}\frac{u(\tau)}{B(\tau)} \Psi(\tau)^2 \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}. \end{align*} The proof is completed. \end{proof} \ \section{The boundedness of $R_u$ from $L^p(v)$ into $\ces_q(w,a)$ on the cone of monotone non-increasing functions}\label{R} \ In this section we characterize the boundedness of $R_u$ from $L^p(v)$ into $\ces_q(w,a)$ on the cone of monotone non-increasing functions. \begin{thm}\label{thm.R} Let $1 < p,\, q < \infty$. Assume that $u \in \W\I \cap C\I$ and $a,\,v,\,w \in \W\I$. {\rm (i)} If $p \le q$, then \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (R_u f) (t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, \bigg( \int_x^t \bigg( \sup_{s \le \tau}u(\tau) V(\tau)^{-2}\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-5.5cm} + \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau}u(\tau) V(\tau)^{-2}\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-5.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-5.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-5.5cm} + 
\bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}}\bigg); \end{align*} {\rm (ii)} If $q < p$, then \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (R_u f) (t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^p v(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \,dx \bigg)^{\frac{r}{q'}} V(t)^{p'} v(t) \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}u(y)V(y)^{-2}\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \bigg( \int_x^t \bigg( \sup_{s \le y}u(y)V(y)^{-2}\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'}V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'}V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau)V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}}\bigg). \end{align*} \end{thm} \begin{proof} By \cite[Theorem 3.2]{GogStep} (cf. 
\cite[Theorem 2.3]{GogMusIHI}), we get that \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} u(\tau) f(\tau) \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} u(\tau) V(\tau)^{-2} \int_0^{\tau} h(y)\,dy \bigg) a(t) \,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^{-p} v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}}. \end{align*} By Theorem \ref{aux.thm.1}, we have that {\rm (i)} if $p \le q$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} u(\tau) V(\tau)^{-2} \int_0^{\tau} h(y)\,dy \bigg) a(t)\, dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^{-p} v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-7cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, \bigg( \int_x^t \bigg( \sup_{s \le \tau}u(\tau) V(\tau)^{-2}\bigg) a(s) \,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau}u(\tau) V(\tau)^{-2}\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-6.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) 
\,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-6.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}}\bigg). \end{align*} {\rm (ii)} if $q < p$, then \begin{align*} \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \sup_{t \le \tau} u(\tau) V(\tau)^{-2} \int_0^{\tau} h(y)\,dy \bigg) a(t)\, dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^{-p} v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-8cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \,dx \bigg)^{\frac{r}{q'}} V(t)^{p'} v(t) \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}u(y)V(y)^{-2}\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \bigg( \int_x^t \bigg( \sup_{s \le y}u(y)V(y)^{-2}\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-7.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'}V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'}V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-7.5cm} + \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau)V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds
\bigg)^{\frac{1}{p'}}\bigg). \end{align*} \end{proof} \ \section{The boundedness of $P_{u,b}$ from $L^p(v)$ into $\ces_{q}(w,a)$ on the cone of monotone non-increasing functions}\label{P} \ In this section we characterize the boundedness of the weighted Hardy operator $P_{u,b}$ from $L^p(v)$ into $\ces_q(w,a)$ on the cone of monotone non-increasing functions. \begin{thm}\label{aux.thm.3} Let $1 < p,\, q < \infty$ and $b \in \W\I$ be such that $b(t) > 0$ for a.e. $t\in (0,\infty)$. Assume that $u \in \W\I \cap C\I$ and $a,\,v,\,w \in \W\I$. {\rm (i)} If $p \le q$, then \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (P_{u,b} f)(t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \sup_{x \in (0,\infty)} \bigg( \int_0^x \bigg(\int_0^t a(y)u(y)\,dy\bigg)^q w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_x^{\infty} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg(\int_0^s a(y)u(y)\,dy\bigg)^{p'} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} \bigg( \int_x^t \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^q w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg( \int_s^x \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^{p'} \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} v(s)\,ds\bigg)^{-\frac{1}{p}}\, \bigg( \int_0^{\infty} \bigg(\int_0^x a(t)u(t)\,dt\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}; \end{align*} {\rm (ii)} If $q < p$, then \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (P_{u,b}
f)(t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg(\int_0^t a(y)u(y)\,dy\bigg)^q w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_x^{\infty} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, \bigg(\int_0^x a(y)u(y)\,dy\bigg)^q w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x \bigg(\int_0^z a(y)u(y)\,dy\bigg)^{p'} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x \bigg( \int_z^x \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^{p'} \bigg( \frac{B(z)}{V(z)}\bigg)^{p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t) \bigg( \int_x^t \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^q \,dt \bigg)^{\frac{r}{q}} \, \bigg( \int_0^x \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds\bigg)^{\frac{r}{q'}} \, \bigg( \frac{B(x)}{V(x)}\bigg)^{p'} v(x) \,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} v(s)\,ds\bigg)^{-\frac{1}{p}}\,\bigg( \int_0^{\infty} \bigg(\int_0^x a(y)u(y)\,dy\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}. 
\end{align*} \end{thm} \begin{proof} By \cite[Theorem 3.1]{GogStep}, using Fubini's Theorem, we get that \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (P_{u,b} f)(t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} = \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \frac{u(t)}{B(t)} \int_0^t f(\tau) b(\tau) \,d \tau \bigg) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \frac{u(t)}{B(t)} \int_0^t \bigg( \int_{\tau}^{\infty} h(y)\,dy \bigg) b(\tau) \,d \tau \bigg) a(t) \, dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^p v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} + \frac{\bigg( \int_0^{\infty} \bigg(\int_0^x a(t)u(t)\,dt\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} v(s)\,ds\bigg)^{\frac{1}{p}}} \\ & \hspace{-6cm} \approx \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \int_t^{\infty} h(y)\,dy \bigg) a(t)u(t)\, dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^p v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} \\ & \hspace{-5.5cm} + \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x \bigg( \frac{u(t)}{B(t)} \int_0^t h(y) B(y)\,dy \bigg) a(t) \, dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^p v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} + \frac{\bigg( \int_0^{\infty}\bigg(\int_0^x a(t)u(t)\,dt\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} v(s)\,ds\bigg)^{\frac{1}{p}}} \\ & \hspace{-6cm} \approx \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_x^{\infty} h(y)\,dy \bigg)^q \bigg(\int_0^x a(t)u(t)\,dt\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^p v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} + \sup_{h \ge 
0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x h(y) \bigg(\int_0^y a(t)u(t)\,dt\bigg)\,dy \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p V(s)^p v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} \\ & \hspace{-5.5cm} + \sup_{h \ge 0} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x h(y) \bigg( \int_y^x \frac{a(t)}{B(t)} u(t)\, dt \bigg) \,dy \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} h(s)^p B(s)^{-p} V(s)^p v(s)^{1-p}\,ds\bigg)^{\frac{1}{p}}} + \frac{\bigg( \int_0^{\infty} \bigg(\int_0^x a(t)u(t)\,dt\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} v(s)\,ds\bigg)^{\frac{1}{p}}}. \end{align*} {\rm (i)} Let $p \le q$. Using the characterizations of weighted Hardy-type inequalities (see, for instance, \cite[Section 1]{ok}), by \cite[Theorem 1.1]{Oinar}, we obtain that \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (P_{u,b} f)(t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \sup_{x \in (0,\infty)} \bigg( \int_0^x \bigg(\int_0^t a(y)u(y)\,dy\bigg)^q w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_x^{\infty} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg(\int_0^s a(y)u(y)\,dy\bigg)^{p'} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} \bigg( \int_x^t \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^q w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg( \int_s^x \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^{p'} \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} 
v(s)\,ds\bigg)^{-\frac{1}{p}}\, \bigg( \int_0^{\infty} \bigg(\int_0^x a(t)u(t)\,dt\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}. \end{align*} {\rm (ii)} Let now $q < p$. Using the characterizations of weighted Hardy-type inequalities (see, for instance, \cite[Section 1]{ok}), by \cite[Theorem 1.2]{Oinar}, we obtain that \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (P_{u,b} f)(t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg(\int_0^t a(y)u(y)\,dy\bigg)^q w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_x^{\infty} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, \bigg(\int_0^x a(y)u(y)\,dy\bigg)^q w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x \bigg(\int_0^z a(y)u(y)\,dy\bigg)^{p'} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x \bigg( \int_z^x \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^{p'} \bigg( \frac{B(z)}{V(z)}\bigg)^{p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t) \bigg( \int_x^t \frac{a(\tau)}{B(\tau)} u(\tau)\,d\tau \bigg)^q \,dt \bigg)^{\frac{r}{q}} \, \bigg( \int_0^x \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds\bigg)^{\frac{r}{q'}} \, \bigg( \frac{B(x)}{V(x)}\bigg)^{p'} v(x) \,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \bigg( \int_0^{\infty} v(s)\,ds\bigg)^{-\frac{1}{p}}\,\bigg( \int_0^{\infty} \bigg(\int_0^x a(y)u(y)\,dy\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}. \end{align*} The proof is completed. 
\end{proof} \ \section{The boundedness of $T_{u,b}$ from $L^p(v)$ into $\ces_{q}(w,a)$ on the cone of monotone non-increasing functions}\label{T} \ In this section we combine the results from the previous two sections to present the characterization of the boundedness of $T_{u,b}$ from $L^p(v)$ into $\ces_q(w,a)$ on the cone of monotone non-increasing functions. \begin{thm}\label{thm.T} Let $1 < p,\, q < \infty$ and $b \in \W\I$ be such that $b(t) > 0$ for a.e. $t\in (0,\infty)$. Assume that $u \in \W\I \cap C\I$ and $a,\,v,\,w \in \W\I$. Moreover, assume that condition \eqref{add.cond.} holds. {\rm (i)} If $p \le q$, then \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (T_{u,b} f) (t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, \bigg( \int_x^t \bigg( \sup_{s \le \tau}u(\tau) V(\tau)^{-2}\bigg) a(s)\,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-5.5cm} + \, \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau}u(\tau) V(\tau)^{-2}\bigg) a(s)\,ds \bigg)^{q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-5.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-5.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \sup_{t \le \tau} u(\tau)^{p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} A(y)^q w(y)
\,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau) V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-5.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_0^x \bigg(\int_0^t a(y)\bar{u}(y)\,dy\bigg)^q w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_x^{\infty} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg(\int_0^s a(y)\bar{u}(y)\,dy\bigg)^{p'} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} \bigg( \int_x^t \frac{a(\tau)}{B(\tau)} \bar{u}(\tau)\,d\tau \bigg)^q w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg( \int_s^x \frac{a(\tau)}{B(\tau)} \bar{u}(\tau)\,d\tau \bigg)^{p'} \bigg( \frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} v(s)\,ds\bigg)^{-\frac{1}{p}}\, \bigg( \int_0^{\infty} \bigg(\int_0^x a(t)\bar{u}(t)\,dt\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}; \end{align*} {\rm (ii)} If $q < p$, then \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (T_{u,b} f) (t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^p v(s)\,ds\bigg)^{\frac{1}{p}}} & \\ & \hspace{-6cm} \approx \, \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \,dx \bigg)^{\frac{r}{q'}} V(t)^{p'} v(t) \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}u(y)V(y)^{-2}\bigg) a(s)\,ds \bigg)^{q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \bigg( \int_x^t \bigg( \sup_{s \le y}u(y)V(y)^{-2}\bigg) a(s)\,ds \bigg)^{p'} \, dx 
\bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} w(s) \,ds \bigg)^{\frac{r}{p}} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'}V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_0^x A(y)^q w(y) \,dy \bigg)^{\frac{r}{p}} A(x)^q w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, A(t)^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} u(\tau)^{p'}V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} w(y) \,dy \bigg)^{\frac{r}{p}} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} A(y)^q w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} u(\tau)V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_0^x \bigg(\int_0^t a(y)\bar{u}(y)\,dy\bigg)^q w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_x^{\infty} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, \bigg(\int_0^x a(y)\bar{u}(y)\,dy\bigg)^q w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x \bigg(\int_0^z a(y)\bar{u}(y)\,dy\bigg)^{p'} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x \bigg( \int_z^x \frac{a(\tau)}{B(\tau)} \bar{u}(\tau)\,d\tau \bigg)^{p'} \bigg( \frac{B(z)}{V(z)}\bigg)^{p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} w(t) \bigg( \int_x^t \frac{a(\tau)}{B(\tau)} \bar{u}(\tau)\,d\tau \bigg)^q \,dt \bigg)^{\frac{r}{q}} \, \bigg( \int_0^x \bigg(
\frac{B(s)}{V(s)}\bigg)^{p'} v(s)\,ds\bigg)^{\frac{r}{q'}} \, \bigg( \frac{B(x)}{V(x)}\bigg)^{p'} v(x) \,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-5.5cm} + \, \bigg( \int_0^{\infty} v(s)\,ds\bigg)^{-\frac{1}{p}}\,\bigg( \int_0^{\infty} \bigg(\int_0^x a(y)\bar{u}(y)\,dy\bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}. \end{align*} \end{thm} \begin{proof} By \eqref{Split}, we have that \begin{align*} \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (T_{u,b} f) (t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^p v(s)\,ds\bigg)^{\frac{1}{p}}} \approx & \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (R_u f) (t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}} \\ & + \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (P_{\bar{u},b} f)(t) a(t)\,dt \bigg)^q w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^pv(s)\,ds\bigg)^{\frac{1}{p}}}. \end{align*} It remains to apply Theorems \ref{thm.R} and \ref{aux.thm.3}. \end{proof} \ \section{The boundedness of $M_{\gamma}$ from $\Lambda^p(v)$ into $\Gamma^q(w)$}\label{Appl.} \ Suppose that $f$ is a measurable a.e. finite function on ${\mathbb R}^n$. Then its non-increasing rearrangement $f^*$ is given by $$ f^* (t) = \inf \{\lambda > 0: |\{x \in {\mathbb R}^n:\, |f(x)| > \lambda \}| \le t\}, \quad t \in (0,\infty), $$ and let $f^{**}$ denote the Hardy-Littlewood maximal function of $f^*$, i.e. $$ f^{**}(t) : = \frac{1}{t} \int_0^t f^* (\tau)\,d\tau, \quad t > 0. $$ Many familiar function spaces can be defined using the non-increasing rearrangement of a function. One of the most important classes of such spaces is that of the so-called classical Lorentz spaces. Let $p \in (0,\infty)$ and $w \in {\mathcal W}(0,\infty)$.
Then the classical Lorentz spaces $\Lambda^p (w)$ and $\Gamma^p (w)$ consist of all measurable functions $f$ on $\rn$ for which $\|f\|_{\Lambda^p(w)} : = \|f^*\|_{p,w,(0,\infty)} < \infty$ and $\|f\|_{\Gamma^p(w)} : = \|f^{**}\|_{p,w,(0,\infty)} < \infty$, respectively. For instance, if $f = \chi_E$ with $|E| = a \in (0,\infty)$, then $f^* = \chi_{[0,a)}$ and $f^{**}(t) = \min\{1, a/t\}$, $t > 0$. For more information about the Lorentz $\Lambda$ and $\Gamma$ spaces see, e.g., \cite{cpss} and the references therein. The fractional maximal operator, $M_{\gamma}$, $\gamma \in (0,n)$, is defined for a locally integrable function $f$ on $\rn$ by $$ (M_{\gamma} f) (x) := \sup_{Q \ni x} |Q|^{ \gamma / n - 1} \int_{Q} |f(y)|\,dy,\quad x \in \rn. $$ It was shown in \cite[Theorem 1.1]{ckop} that \begin{equation}\label{frac.max op.eq.1.} (M_{\gamma}f)^* (t) \ls \sup_{\tau > t} \tau^{\gamma / n - 1} \int_0^{\tau} f^*(y)\,dy \ls (M_{\gamma} \tilde{f})^* (t) \end{equation} for every locally integrable function $f$ on $\rn$ and $t \in \I$, where $\tilde{f} (x) : = f^* (\omega_n |x|^n)$ and $\omega_n$ is the volume of the unit ball in $\rn$. The characterization of the boundedness of $M_{\gamma}$ between classical Lorentz spaces $\La^p(v)$ and $\La^q (w)$ was obtained in \cite{ckop} for the particular case when $1 < p \le q <\infty$ and in \cite[Theorem 2.10]{o} in the case of more general operators and for an extended range of $p$ and $q$ (for the characterization of the boundedness of more general fractional maximal functions between $\La^p(v)$ and $\La^q (w)$, see \cite{musbil} and the references therein). As an application of the obtained results, we compute the norm of the fractional maximal operator $M_{\gamma}$ from $\Lambda^p(v)$ into $\Gamma^q(w)$. \begin{thm} Let $1 < p,\, q < \infty$ and $0 < \gamma < n$. Assume that $v,\,w \in \W\I$.
{\rm (i)} If $p \le q$, then \begin{align*} \|M_{\gamma}\|_{\Lambda^p(v) \rw \Gamma^q(w)} & \\ & \hspace{-2cm} \approx \, \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, \bigg( \int_x^t \bigg( \sup_{s \le \tau} \tau^{\frac{\gamma}{n}} V(\tau)^{-2}\bigg) \,ds \bigg)^{p'} \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} y^{-q} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-1.5cm} + \sup_{t \in (0,\infty)} \bigg( \int_0^t V(x)^{p'} v(x) \, dx \bigg)^{\frac{1}{p'}} \, \bigg( \int_t^{\infty} \, \bigg( \int_t^y \bigg( \sup_{s \le \tau}\tau^{\frac{\gamma}{n}} V(\tau)^{-2}\bigg) \,ds \bigg)^{q} y^{-q} w(y) \,dy \bigg)^{\frac{1}{q}} \\ & \hspace{-1.5cm} + \, \sup_{x \in (0,\infty)} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \sup_{t \le \tau} \tau^{\frac{\gamma}{n}p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_0^x w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-1.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_{(0,x]} \, t^{p'} \, d \, \bigg( - \sup_{t \le \tau} \tau^{\frac{\gamma}{n}p'} V(\tau)^{-2p'} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg)^{\frac{1}{p'}} \, \bigg( \int_x^{\infty} y^{-q} w(y) \,dy \bigg)^{\frac{1}{q}} \notag \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le \tau} \tau^{\frac{\gamma}{n}} V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-1.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_0^x t^{\frac{\gamma}{n} q} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_x^{\infty} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-1.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} y^{-q} w(y)\,dy \bigg)^{\frac{1}{q}} \bigg( \int_0^x t^{(\frac{\gamma}{n}+1)p'} V(t)^{-p'} v(t)\,dt \bigg)^{\frac{1}{p'}} \\ & \hspace{-1.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} \bigg( t^{\frac{\gamma}{n}} - x^{\frac{\gamma}{n}}\bigg)^q t^{-q} w(t)\,dt 
\bigg)^{\frac{1}{q}} \bigg( \int_0^x s^{p'} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-1.5cm} + \sup_{x \in (0,\infty)} \bigg( \int_x^{\infty} t^{-q} w(t)\,dt \bigg)^{\frac{1}{q}} \bigg( \int_0^x \bigg( x^{\frac{\gamma}{n}} - s^{\frac{\gamma}{n}} \bigg)^{p'} s^{p'} V(s)^{-p'} v(s)\,ds \bigg)^{\frac{1}{p'}} \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} v(s)\,ds\bigg)^{-\frac{1}{p}} \, \bigg( \int_0^{\infty} x^{\frac{\gamma}{n} q} w(x)\,dx \bigg)^{\frac{1}{q}}; \end{align*} {\rm (ii)} If $q < p$, then \begin{align*} \|M_{\gamma}\|_{\Lambda^p(v) \rw \Gamma^q(w)} & \\ & \hspace{-2cm} \approx \, \, \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \,dx \bigg)^{\frac{r}{q'}} V(t)^{p'} v(t) \, \bigg( \int_t^{\infty} \bigg( \int_t^z \bigg( \sup_{s \le y}y^{\frac{\gamma}{n}}V(y)^{-2}\bigg) \,ds \bigg)^{q} z^{-q} w(z)\, dz \bigg)^{\frac{r}{q}} \, dt \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^t V(x)^{p'} v(x) \bigg( \int_x^t \bigg( \sup_{s \le y}y^{\frac{\gamma}{n}}V(y)^{-2}\bigg) \,ds \bigg)^{p'} \, dx \bigg)^{\frac{r}{p'}} \bigg( \int_t^{\infty} s^{-q} w(s) \,ds \bigg)^{\frac{r}{p}} t^{-q} w(t)\,dt \bigg)^{\frac{1}{r}} \\ & \hspace{-1.5cm} + \, \bigg( \int_0^{\infty} \bigg( \int_{[x,\infty)} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \tau^{\frac{\gamma}{n}p'} V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}}\, \bigg( \int_0^x w(y) \,dy \bigg)^{\frac{r}{p}} w(x) \,dx\bigg)^{\frac{1}{r}} \notag \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} \bigg( \int_{(0,x]} \, t^{p'} \, d \, \bigg( - \bigg(\sup_{t \le \tau} \tau^{\frac{\gamma}{n}p'} V(\tau)^{-2p'} \, \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg) \bigg) \bigg) \bigg)^{\frac{r}{p'}} \, \bigg( \int_x^{\infty} y^{-q} w(y) \,dy \bigg)^{\frac{r}{p}} x^{-q} w(x)\,dx \bigg)^{\frac{1}{r}} \notag \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} w(y) \,dy \bigg)^{\frac{1}{q}} \, \lim_{t \rightarrow \infty} \bigg(\sup_{t \le
\tau} \tau^{\frac{\gamma}{n}} V(\tau)^{-2} \bigg( \int_0^{\tau} V(s)^{p'} v(s)\,ds \bigg)^{\frac{1}{p'}}\bigg) \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} \bigg( \int_0^x t^{\frac{\gamma}{n} q} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_x^{\infty} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, x^{\frac{\gamma}{n} q} w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} t^{-q} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x z^{(\frac{\gamma}{n} + 1) p'} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, x^{-q} w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} t^{-q} w(t)\,dt \bigg)^{\frac{r}{p}} \, \bigg( \int_0^x \bigg( x^{\frac{\gamma}{n}} - z^{\frac{\gamma}{n}} \bigg)^{p'} z^{p'} V(z)^{-p'} v(z)\,dz\bigg)^{\frac{r}{p'}} \, x^{-q} w(x)\,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} \bigg( t^{\frac{\gamma}{n}} - x^{\frac{\gamma}{n}} \bigg)^q t^{-q} w(t) \,dt \bigg)^{\frac{r}{q}} \, \bigg( \int_0^x s^{p'} V(s)^{-p'} v(s)\,ds\bigg)^{\frac{r}{q'}} \, x^{p'} V(x)^{-p'} v(x) \,dx\bigg)^{\frac{1}{r}} \\ & \hspace{-1.5cm} + \bigg( \int_0^{\infty} v(s)\,ds\bigg)^{-\frac{1}{p}}\,\bigg( \int_0^{\infty} x^{\frac{\gamma}{n} q} w(x)\,dx \bigg)^{\frac{1}{q}}. \end{align*} \end{thm} \begin{proof} From inequalities \eqref{frac.max op.eq.1.}, we have that $$ \|M_{\gamma}\|_{\Lambda^p(v) \rw \Gamma^q(w)} \approx \sup_{f \in \mp^{+,\dn} (0,\infty)} \frac{\bigg( \int_0^{\infty} \bigg( \int_0^x (T_{u,b} f) (t) \,dt \bigg)^q x^{-q} w(x)\,dx \bigg)^{\frac{1}{q}}}{\bigg( \int_0^{\infty} f(s)^p v(s)\,ds\bigg)^{\frac{1}{p}}} $$ with $u(\tau) = \tau^{\gamma / n}$ and $b \equiv 1$. Note that \begin{equation*} \sup_{0 < t < \infty} \frac{u(t)}{B(t)} \int_0^t \frac{b(\tau)}{u(\tau)}\,d\tau < \infty \end{equation*} in this case. So, it remains to apply Theorem \ref{thm.T}.
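For completeness, the finiteness noted above can be verified by an elementary computation. Since $u(\tau) = \tau^{\gamma / n}$ and $b \equiv 1$, we have $B(t) = t$, and therefore
\begin{equation*}
\frac{u(t)}{B(t)} \int_0^t \frac{b(\tau)}{u(\tau)}\,d\tau = t^{\frac{\gamma}{n} - 1} \int_0^t \tau^{-\frac{\gamma}{n}}\,d\tau = \frac{t^{\frac{\gamma}{n} - 1}\, t^{1 - \frac{\gamma}{n}}}{1 - \frac{\gamma}{n}} = \frac{n}{n - \gamma} < \infty
\end{equation*}
for every $t > 0$, since $0 < \gamma < n$ guarantees $1 - \gamma / n > 0$.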
\end{proof} \end{document}
\section{DeepStamp Framework} \label{sec:method} \begin{figure*}[t] \centering \includegraphics[width=0.972\textwidth]{figures/framework.png} \caption{\textbf{Overview of Our Watermarking Framework.} We employ Generative Multi-Adversarial Networks (GMAN) to train the watermarking network ($W$), which returns a task-aware watermark image $w$ for each corresponding original datum ($x$). We embed $w$ into $x$ and train the network of our interest ($F'$) with the watermarked images ($x'$). Once trained, the network ($F'$) infers the correct labels for both $x$ and $x'$.} \label{fig:watermarking-framework} \vspace{-1.4em} \end{figure*} We aim to learn transformations of a given watermark for each datum that: (1) minimize the accuracy drop of networks trained on the watermarked data, (2) make the watermarks embedded in the data hard for an attacker to remove, and (3) keep the watermarks clearly perceptible to human eyes. We illustrate the overview of our framework in Fig.~\ref{fig:watermarking-framework}. First, since \cite{dekel2017effectiveness} shows that randomly perturbing the watermarks embedded in data makes them robust to removal, we learn this random perturbation for each datum automatically during training by employing a generative network, GMAN, that drives a watermarking network $W$. To minimize the accuracy loss, we utilize a pre-trained CNN model $F$. We use an auto-encoder $V$ and a discriminator $D$ to enforce that the transformations produced by $W$ remain visually similar to the original watermark. \noindent \textbf{In training}, DeepStamp\ minimizes: \begin{itemize}[topsep=0em,itemsep=0.2em,partopsep=0em,parsep=0em] \item $\mathcal{L}_{f}(x, x_w')$: the task loss, which computes the difference between the inferred labels of the clean data $x$ and the same data with synthesized watermarks $x_w'$. \item $\mathcal{L}_{v}(w, w')$: the $\ell_2$ loss from the auto-encoder $V$ between the original watermark $w$ and the synthesized one $w'$.
\item $\mathcal{L}_{d}(x_w, x_w')$: the discriminator loss (binary cross-entropy) that separates the original watermarked data $x_w$ from the data with synthesized watermarks $x_w'$. \end{itemize} \noindent Thus, the total loss that we minimize is: \setlength{\abovedisplayskip}{-1.0em} \setlength{\belowdisplayskip}{0.2em} \begin{gather} \vspace{-1.0em} \mathcal{L}_{tot} = \mathcal{L}_{f}(x, x_w') + \mathcal{L}_{v}(w, w') + \mathcal{L}_{d}(x_w, x_w'). \end{gather} \noindent \textbf{In stamping}, given a datum $x$ and a watermark image $w$, DeepStamp\ synthesizes the watermarked datum $x_w'$, whose watermark is visible and hard to remove, with minimal accuracy drop. Alice can now share the watermarked data $x_w'$ with Bob, and Bob can train his network $F'$ with $x_w'$. Alice need not worry about ownership because $w'$ is visible, and $w'$ is hard for anyone else to remove because each $w'$ has been randomly perturbed by $W$. Once $F'$ is trained, it achieves similar accuracy on both the clean data $x$ and the watermarked data $x_w'$. \section{Introduction} \label{sec:intro} Convolutional neural networks (CNN) have achieved super-human performance in various computer vision and machine learning applications with the help of large-scale supervised datasets. However, data curation is very expensive in terms of time, funding, and the effort needed to control the quality of supervision. To ease the cost of data curation, one can outsource the collection of a dataset to multiple parties in exchange for reasonable rewards. However, there is a risk that the shared data could be stolen and redistributed without any reward to those parties. To prevent this, we should be able to claim the ownership of the data when it is stolen. One possible approach is to use cryptographic methods such as homomorphic encryption~\cite{gentry2009fully} and multi-party computation (MPC)~\cite{shokri2015privacy}.
Homomorphic encryption allows computation on encrypted images whose results, when decrypted, match the results of the same operations performed on the plain data. MPC jointly computes a neural network over multiple inputs while keeping each input private. However, computation on encrypted data takes longer than on plain data. Also, once the content of the data is revealed, these schemes cannot protect the ownership, since they are only designed to protect privacy without sharing. Thus, they cannot prevent fake ownership claims. Another approach is to conceal secrets (messages) within the data before sharing it, using steganography or invisible watermarking~\cite{shih2017digital}. Unlike the cryptographic solutions, these methods introduce no extra computational cost because they operate on the plain data. Nevertheless, the secrets are easily destroyed by data modifications, e.g., cropping, rotating, or resizing, which an attacker can readily exploit. Given that such modifications are commonly used for data augmentation when training CNNs, if an attacker modifies the data before claiming fake ownership, it is hard for the owner to prove that the data is hers. We consider \emph{visible watermarking}~\cite{braudaway1996protect} to protect the ownership of shared data, so that the information needed to claim ownership is readily available. Applying this technique to datasets for training CNNs has been challenging because networks trained with watermarked data can suffer accuracy losses, and an attacker can use sophisticated methods~\cite{dekel2017effectiveness} to remove watermarks from data. Our work first studies how the accuracy drop responds to changes in the visibility and randomness\footnote{\cite{dekel2017effectiveness} shows that randomly perturbing the watermark images embedded in data makes it hard for an attacker to remove the watermarks.} of the watermarks in the data.
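Visible watermark embedding of the kind studied here can be illustrated as simple alpha blending. This is a minimal sketch, not the paper's implementation: the function name and toy values are hypothetical, though the four-channel watermark (RGB plus an alpha mask) matches the shape used later in the evaluation.

```python
import numpy as np

def embed_watermark(x, w, alpha=0.5):
    """Alpha-blend a watermark into an image (illustrative sketch).
    x: HxWx3 image in [0, 1]; w: HxWx4 watermark (RGB + alpha mask);
    alpha: global blending factor controlling visibility."""
    rgb, mask = w[..., :3], w[..., 3:]
    return (1.0 - alpha * mask) * x + alpha * mask * rgb

# toy example: a gray 32x32 image with a white square watermark
x = np.full((32, 32, 3), 0.5)
w = np.zeros((32, 32, 4))
w[8:24, 8:24] = 1.0                      # white RGB, fully opaque mask
x_w = embed_watermark(x, w, alpha=1.0)   # strong blending factor
```

With `alpha=1.0` (the strong blending factor mentioned in the evaluation), pixels under the mask take the watermark color exactly; smaller values make the mark fainter.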
We then propose DeepStamp, which, given data and a watermark image, synthesizes watermarks that, once blended into the data, minimize the accuracy drop and are hard to remove. We leverage a generative network, GMAN, to achieve randomness, and we implement the necessary conditions as discriminators. With CIFAR10, we show that our watermarks minimize accuracy loss and that, once synthesized, they can be used to train multiple CNNs. \section{Conclusion} \label{sec:conclusion} We propose a watermarking framework, DeepStamp, that embeds a visible watermark into the images of interest with a small accuracy drop and high difficulty of removal, enabling easy claims of ownership of the watermarked images in the data-sharing scenario. In experiments with the CIFAR10 dataset, we show that DeepStamp\ learns transformations of a watermark to be embedded into other images with negligible accuracy drop while making its removal from the images non-trivial. \section{Evaluation} \label{sec:evaluation} \input{tables/acc_drops} \noindent \textbf{Experimental Setup.} We use CIFAR10, which consists of 32$\times$32-pixel, three-channel images. The images are labeled into ten classes, with 50,000 training and 10,000 validation images. We implement the network $W$ with four convolutional layers whose input is the concatenation of the three-channel image and the four-channel watermark (seven channels in total) and whose output is a four-channel watermark. $D$ is composed of three transposed convolutional layers, and $V$ has five convolutional layers. For the networks of our interest $F,F'$, we use three popular CNN architectures: AlexNet, VGG16, and ResNet50. \noindent \textbf{Results.} We summarize the results in Table~\ref{tbl:acc-drops}. We observe the following: \begin{itemize}[topsep=0em,itemsep=0.2em,partopsep=0em,parsep=0em] \item Visible watermarking causes accuracy drops in all cases; however, the higher the network capacity (the number of parameters in a network), the lower the accuracy drop.
% \item When we use a strong blending factor (1.0), the accuracy drop increases. However, the drops are not significant with a high-capacity network (ResNet50).
\item With AlexNet as $F,F'$, our data with synthesized watermarks (DeepStamp) incurs a smaller accuracy drop than the statically watermarked data (S). However, it does not outperform the watermarked data with displacements (D). \item By training VGG16 or ResNet50 ($F'$) with our data synthesized with AlexNet ($F$), we observe accuracy drops of 1.17\% and 0.74\% relative to the static (S) method. Since these drops are similar to the case in which we use the same network as $F$, the data, once synthesized, can be used to train multiple $F'$s. \end{itemize} \section{Threat Model} \label{sec:threat} We consider an adversary who \emph{claims the ownership} of datasets produced or collected by others, such as industrial partners or public sources. For instance, suppose that Alice wants to provide a data collection $A$ to Bob, who wants to train CNNs using $A$ as a subset of his training data. However, Alice still wants to claim the ownership of the shared data $A$, to prevent the case in which Bob turns malicious and profits from re-sharing or selling $A$ to other parties. Alice can also claim the ownership of $A$ when Bob has modified the data. \section{Related Work} \label{sec:related} \section*{Acknowledgment} This research is partially supported by the Department of Defense and the ``Global University Project'' grant funded by the Gwangju Institute of Science and Technology (GIST) in 2018.
\section{Motivation} \label{motivation} Questions of trust in machine learning models have become crucial in recent years. Complex predictive models have applications in many different areas \cite{PALIWAL20092, KOUROU20158}, and an increasing number of people use machine learning solutions in everyday life. Hence, it is important to ensure that the predictions of these models are reliable. There are four requirements whose fulfillment is essential to ensure that a predictive model is trustworthy and accessible: (1) high model performance, (2) auditability, (3) interpretability, and (4) automaticity. (1) High model performance means that a model rarely makes wrong predictions or that the prediction error is small on average. Usually, this can be achieved by using complex, so-called black-box models, such as boosted trees \cite{DBLP:journals/corr/ChenG16} or deep neural networks \cite{Goodfellow-et-al-2016}. The opposite of black-boxes are glass-boxes: simple, interpretable models such as linear regression, logistic regression, decision trees, regression trees, and decision rules. Model performance conveys only part of the information about a model's quality. A model's (2) auditability guarantees that the model can be verified with respect to different criteria, for example, stability, fairness, and sensitivity to a~concept drift. There are tools that allow one to audit black-box models \cite{gosiewska2018auditor}, yet simple glass-boxes offer a more extended range of diagnostic methods \cite{Harrell:2006:RMS:1196963}. The third requirement is (3) interpretability, which has become an important topic in recent years \cite{ONeil}. Machine learning models influence people's lives; in particular, they are used by financial, medical, and security institutions. Models have an impact on whether we get a~loan \cite{HUANG2007847}, what type of treatment we receive \cite{doi:10.1177/117693510600200030}, or even whether we are searched by the police \cite{4053200}.
Therefore, a model's reasoning should be transparent and accessible. There is an ongoing debate about the right to explanation, what it means, and how it can be achieved \cite{DBLP:journals/corr/abs-1711-00399, Edwards_Veale_2018}. The (4) automaticity of machine learning methods is spreading rapidly. Due to increasing computational power, it becomes easier and easier to obtain more precise models, usually in an automatic manner. There are automated AutoML frameworks such as autokeras, auto-sklearn, and TPOT \cite{jin2018efficient, NIPS2015_5872, Olson2016} that allow one to train a model even without statistical knowledge or programming skills. Machine learning specialists can also take advantage of automated methods of modeling. Such methods reduce the time needed to train a model, so human effort can be directed towards tasks more creative and sophisticated than testing a wide range of parameters and models. People usually choose automatically fitted black-box models that achieve high performance at the cost of auditability and interpretability. In response to this problem, methodology for explaining the predictions of black-box models, so-called post-hoc interpretability, is under active development. There are several approaches to explaining the global behavior of black-boxes. A model can be reduced to simple if-then rules \cite{MAGIX} or decision trees \cite{proc-jsm-2018}. However, these explanations are simplifications of the model and may be inaccurate. As a consequence, they may be misleading or even harmful. Hence, in many applications it is better to train a transparent, interpretable model than to apply explanations to a complex one \cite{2017arXiv171006169T, pleseStop}. Therefore, automated methods for obtaining interpretable models, while maintaining the predictive capabilities of a~complex model, are extremely important. In this article, we present a method for Surrogate Assisted Feature Extraction for Model Learning (SAFE ML).
This method uses a surrogate model to assist feature engineering, leading to an accurate and transparent glass-box model. In this approach, the surrogate model should be accurate in order to produce the best feature transformations, yet it does not have to be interpretable. Based on the new features, a transparent glass-box model is trained. In many cases the high accuracy of black-box models comes from a good data representation, and this is something that can then be extracted from the model. The SAFE ML method is flexible and model agnostic: any class of models may be used as a surrogate model and as a~glass-box model. Therefore, the surrogate model may be selected to fit the data as well as possible, while the glass-box model can be selected according to the particular task or the end-users' ability to interpret models. An advantage of this methodology is that the final glass-box model has a performance close to that of the surrogate model. By changing the representation of the data, SAFE ML allows one to gain interpretability with minimal or no reduction of model~performance. The SAFE ML method can also be used as a step in training a model with AutoML methods: we can use AutoML to fit an elastic, complex model, then use SAFE ML to obtain a~transparent one. The paper is organized as follows. Section~\ref{SAFE_algorithm} provides a~description of the SAFE algorithm. Section~\ref{SAFE_application} contains illustrations and benchmarks of the SAFE method for regression and classification problems. Extensions to instance-level approaches and interactions are discussed in Section~\ref{extension}. Conclusions are in Section~\ref{discussion}. \section{Description of the SAFE Algorithm} \label{SAFE_algorithm} \begin{figure*}[t!h] \centering \includegraphics[width=\textwidth]{pdp_safe.pdf} \caption{The SAFE ML algorithm in four steps: 1. train an elastic surrogate model, 2. approximate the model response, 3.~extract transformations and new features, 4.
train a refined model.} \label{fig:safeDiagram} \end{figure*} The SAFE ML algorithm uses a complex model as a surrogate. New binary features are created on the basis of the surrogate's predictions. These new features are used to train a simple refined model. An illustration of the SAFE ML method is presented in Figure~\ref{fig:safeDiagram}. In Algorithm~\ref{alg:SAFEdescription} we describe how data transformations are extracted from the surrogate model, while in Algorithm~\ref{alg:SAFElearning} we show how to train a new refined model based on the transformed features. Below, we explain the terminology used in the algorithms. Let $x_1, x_2, ..., x_p$ be the features in the surrogate model $M$. The subset of all features except $x_i$ is denoted by $x_{-i}$. \textbf{The partial dependence profile} \cite{PDP} is defined as $$ f_i(x_i) = \mathbb{E}_{x_{-i}}[ M(x_i, x_{-i}) ], $$ and estimated as $$ \hat f_i(x_i) = \frac{1}{n} \sum_{j=1}^{n} M(x_{i}, x_{-i}^j), $$ where $n$ is the number of observations and $x_{-i}^j$ is the vector of values of all features except the $i$-th one for the $j$-th instance. The partial dependence function describes the expected model output conditioned on a selected variable. The visualization of this function is the Partial Dependence Plot \cite{RJ-2017-016}; an example plot is presented in Step~1 of Figure~\ref{fig:safeDiagram}. \textbf{The change point method} \cite{DBLP:journals/corr/abs-1801-00718} is used to identify points at which the probability distribution of a series changes. \textbf{Hierarchical clustering} \cite{Rokach2005} is an algorithm that groups observations into clusters by creating a hierarchy of clusters with a predetermined ordering. Step~2 in \mbox{Figure~\ref{fig:safeDiagram}} corresponds to both the change point method and hierarchical clustering.
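To make the estimator $\hat f_i$ concrete, the profile can be computed for any prediction function by fixing $x_i$ at each grid value and averaging the predictions over the observed $x_{-i}^j$. This is a minimal sketch; the toy model and data are hypothetical:

```python
import numpy as np

def partial_dependence(model, X, i, grid):
    """Estimate the partial dependence profile of feature i:
    for each grid value v, set x_i = v for every observation
    and average the model's predictions over the data set."""
    profile = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, i] = v          # fix x_i = v, keep x_{-i} as observed
        profile.append(model(X_mod).mean())
    return np.array(profile)

# toy surrogate: a step function in feature 0; feature 1 is irrelevant
model = lambda X: (X[:, 0] > 0).astype(float)
X = np.random.default_rng(0).uniform(-5, 5, size=(500, 2))
grid = np.linspace(-5, 5, 11)
profile = partial_dependence(model, X, 0, grid)
# the profile recovers the step: 0 below the threshold, 1 above it
```

Such a step-shaped profile is exactly the situation in which the change point method of Algorithm~1 yields a useful discretization of the variable.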
\begin{algorithm}[tb] \caption{Surrogate Assisted Feature Extraction} \label{alg:SAFEdescription} \begin{algorithmic} \STATE {\bfseries Input:} data $X_{n \times p}$, surrogate model $M$, regularization penalty $\lambda$. \STATE {\bfseries Start:} \FOR{$i=1$ {\bfseries to} $p$} \STATE Let $x_i$ be the $i$-th feature. \IF{$x_i$ is numerical} \STATE Calculate the partial dependence profile $f_i(x)$ for feature $x_i$. \STATE Approximate $f_i(x)$ with interpretable features $x^*_{i}$; for example, use the change point method to discretize the variable with regularization penalty~$\lambda_i$. \STATE Save the transformation $t_i(x)$ that transforms $x_i$ into~$x_i^*$. \ENDIF \IF{$x_i$ is categorical} \STATE Calculate model responses for each observation with each possible value of $x_i$ imputed. \STATE Merge levels of $x_i$ with similar model responses; for example, use hierarchical clustering with the number of clusters $\lambda_i$. \STATE Save the transformation $t_i(x)$ that transforms $x_i$ into~$x_i^*$. \ENDIF \ENDFOR \STATE The set of transformations $T^* = \{t_1, ..., t_p\}$ may be used to create new data $X^*$ from features $x_i^* = t_i(x_i)$. \end{algorithmic} \end{algorithm} \begin{algorithm}[th] \caption{Model Learning with Surrogate Assisted Feature Extraction} \label{alg:SAFElearning} \begin{algorithmic} \STATE {\bfseries Input:} data $X^{new}_{m \times p}$, set of transformations $T^*$ derived from surrogate model $M$. \STATE {\bfseries Start:} \STATE Transform dataset $X^{new}$ into $X^{*, new} = T^*(X^{new})$. \STATE Create a transparent model $M^{new}$ based on $X^{*, new}$. \end{algorithmic} \end{algorithm} \section{Application and Benchmarks} \label{SAFE_application} In this section, we apply SAFE ML to selected data sets for regression and classification problems. A summary discussion of the results is given at the end of this section. Examples are generated with scikit-learn models \cite{scikit-learn} and SafeTransformer.
SafeTransformer is a~Python library that implements the SAFE ML method. Code that generates the artificial data sets and performs the SAFE ML method can be found in the GitHub repository: \url{https://github.com/agosiewska/SAFE_examples}. \subsection{Classification - Artificial Data Set} \label{subsection_classification_artificial} We compare the performance of naïve logistic regression, surrogate xgboost, and refined logistic regression. Here, naïve regression means that we fit a vanilla regression model without any feature engineering. This example is performed on the artificial data set SIMULD2 for binary classification. SIMULD2 consists of 500 observations and three variables. Variable $y$ is a binary target. Variable $X1$ is continuous, uniformly distributed on the range from $-5$ to $5$ with normally distributed noise. Variable $X2$ is categorical with 40 levels. As can be seen in Table~\ref{tab:class_results}, refined logistic regression performs better than the other two models. It achieves even better accuracy and AUC than the xgboost model, while being more transparent. It may be surprising that the refined model is better than the surrogate one; however, there are reasons for that. Elastic models are better at capturing non-linear relations, but at the price of a larger variance of parameter estimates. In some cases the refined model works on better features and has fewer parameters to train, and thus it can outperform the surrogate model. \begin{table}[h!tb] \caption{Results of the SAFE method for models trained on the SIMULD2 data set.
SAFE was performed with a penalty equal to~$0.42625$.} \label{tab:class_results} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccr} \toprule Model & Accuracy & AUC \\ \midrule Naïve logistic regression & 0.736 & 0.897 \\ Surrogate xgboost & 0.960 & 0.982 \\ Refined logistic regression & \textbf{0.976} & \textbf{0.989} \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} The Partial Dependence Plot in Figure~\ref{fig:class_pdp}~shows the relationship between variable $X1$ and the output of the xgboost model. This pattern is close to the true association, which is a step function with discontinuities at $-3$ and $2.5$. This relationship could not be captured by logistic regression. However, in Figure~\ref{fig:class_pdp},~we can see that the SAFE ML method divided the $X1$ variable into three binary variables. This makes it possible for the refined logistic regression to capture the non-linearity. Variable $X2$ consists of 40 levels, yet the process of generating the target variable $y$ distinguishes only three groups of levels. When examining how SAFE ML has grouped the levels, one can see that the groups almost match the true dependencies. As a result, instead of one variable with 40 levels, the new model was trained on 3 binary variables. This means that the transformed features better reflected the real relationships. \begin{figure}[tb] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.6\textwidth]{fig/clasification_PDP_X2.pdf}} \caption{An expected response of the \mbox{xgboost} model conditioned on the variable X1. Black vertical lines mark the points of the discretization calculated with SAFE ML.} \label{fig:class_pdp} \end{center} \vskip -0.2in \end{figure} \subsection{Regression - Boston Housing} \label{subsection_regression_boston} The second example is performed on the Boston Housing data set \cite{HARRISON197881}. Boston Housing consists of 506 rows and 14 columns. 
The target variable is medv (median value of owner-occupied homes). We compare the performances of naïve linear regression, a surrogate xgboost, and refined linear regression. As described in Section~\ref{SAFE_algorithm}, feature extraction in the SAFE ML algorithm depends on the choice of a regularization penalty $\lambda$. Figure~\ref{boston_results} shows the performances of the models as functions of the penalty. Mean Squared Errors (MSE) of the refined linear regression models are, in general, close to the MSE of the surrogate model. Thus, the use of a simpler model did not negatively affect the performance. At the same time, we gained transparency. \begin{figure}[tb] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.6\textwidth]{fig/boston_results.pdf}} \caption{Dependence between the SAFE ML method's penalty and the MSE of the refined model. } \label{boston_results} \end{center} \vskip -0.2in \end{figure} The Partial Dependence Plot for the xgboost model and variable ZN is presented in Figure~\ref{boston_pdp}. The flexible boosting model captured the non-linear relationship between variable ZN and the target medv. As a result, the SAFE ML method divided the ZN variable into two binary features to improve the performance of the refined model. \begin{figure}[tb] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.6\textwidth]{fig/boston_PDP_ZN.pdf}} \caption{Partial Dependence Plot (PDP) of the gradient boosting model and the ZN variable. The black vertical line indicates the variable split generated with the SAFE ML method.} \label{boston_pdp} \end{center} \vskip -0.2in \end{figure} \subsection{Benchmark on a Number of Tabular Data Sets} \label{large_benchmark} In this section we benchmark the SAFE ML method on a~number of tabular data sets for regression and classification problems. We compare the performances of three groups of models: simple models trained without SAFE ML feature transformation, complex surrogate models, and refined interpretable models. 
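To make the workflow of the two algorithms concrete, the following Python sketch reproduces the numerical-feature path on a SIMULD2-like step function. It is an illustration under assumptions: scikit-learn's gradient boosting stands in for xgboost, the partial dependence profile is computed manually, and the largest well-separated jumps of the profile stand in for the penalized change-point method; it is not the SafeTransformer implementation.

```python
# A sketch of the SAFE workflow on a SIMULD2-like step-function target.
# Assumptions: gradient boosting replaces xgboost; jump detection on the
# partial dependence profile replaces the penalized change-point method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(500, 1))
y = ((X[:, 0] > -3) & (X[:, 0] < 2.5)).astype(int)  # step-function target
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Fit an elastic surrogate model.
surrogate = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# 2. Partial dependence profile of the surrogate along the feature.
grid = np.linspace(-5, 5, 201)
profile = np.array([surrogate.predict_proba(np.full_like(X_tr, g))[:, 1].mean()
                    for g in grid])

# 3. Crude change-point extraction: the two largest well-separated jumps.
order = np.argsort(np.abs(np.diff(profile)))[::-1]
splits = []
for i in order:
    if all(abs(grid[i] - s) > 0.5 for s in splits):
        splits.append(grid[i])
    if len(splits) == 2:
        break
splits = sorted(splits)

# 4. Refine: logistic regression on the extracted binary features.
def transform(X):
    return np.column_stack([(X[:, 0] > s).astype(float) for s in splits])

naive = LogisticRegression().fit(X_tr, y_tr)
refined = LogisticRegression().fit(transform(X_tr), y_tr)
print(naive.score(X_te, y_te), refined.score(transform(X_te), y_te))
```

On this synthetic problem the extracted splits land near the true change points $-3$ and $2.5$, and the refined logistic regression recovers the step relationship that the naïve model cannot represent.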
\subsubsection{Benchmark for Classification} \label{benchmark_classification} We train classification models on six different data sets. These are two simulated data sets, Titanic from Kaggle, Blood Transfusion Service Center \cite{Yeh2009}, Teaching Assistant Evaluation from the UCI Machine Learning Repository \cite{Dua:2017}, and Pima Indian Diabetes \cite{johannes1988using}. We use the Accuracy and AUC metrics to evaluate the models. Logistic regression and classification trees trained without any feature extraction are the baselines. Complex xgboost models are the surrogates required to perform the SAFE ML algorithm. Parameters of the surrogate models differ between data sets. Refined models are logistic regression models and classification trees. To choose the best penalty for the SAFE ML transformations, for each surrogate model we examined 25 equally spaced penalties in the range from~$0.01$~to~$10$. The criterion was the performance of the refined model. Results of the benchmark are in Table~\ref{table_classification}. In 22 out of 24 cases, the refined model surpasses the baseline model. In more than half of the cases, the refined model outperforms both the baseline and the surrogate model. \begin{table}[tb] \caption{Performances of models trained on six data sets for classification. Artificial data sets are marked by (A). Headers indicate the class of the baseline model (BASE.) and the refined model (REF.). In each case, the surrogate model (SURR.) is~xgboost.} \label{table_classification} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccr} \toprule \multicolumn{4}{c}{Logistic Regression - AUC} \\ \midrule Data set & BASE. & SURR. & REF. 
\\ \midrule SIMULD1 (A) & 0.833 & \textbf{0.980} & 0.963 \\ SIMULD2 (A) & 0.897 & 0.982 & \textbf{0.989} \\ Titanic & 0.861 & \textbf{0.896} & 0.870 \\ Blood Transfusion & 0.670 & \textbf{0.679} & 0.668 \\ Teaching Evaluation & 0.725 & \textbf{0.838} & 0.821 \\ Pima Indian Diabetes & 0.814 & 0.822 & \textbf{0.838} \\ \toprule \multicolumn{4}{c}{Logistic Regression - Accuracy}\\ \midrule Data set & BASE. & SURR. & REF. \\ \midrule SIMULD1 (A) & 0.744 & 0.888 & \textbf{0.912} \\ SIMULD2 (A) & 0.736 & 0.960 & \textbf{0.976} \\ Titanic & 0.798 & \textbf{0.834} & \textbf{0.834} \\ Blood Transfusion & 0.749 & \textbf{0.754} & 0.668 \\ Teaching Evaluation & 0.842 & 0.842 & \textbf{0.868} \\ Pima Indian Diabetes & 0.745 & 0.734 & \textbf{0.771} \\ \toprule \multicolumn{4}{c}{Classification Tree - AUC} \\ \midrule Data set & BASE. & SURR. & REF. \\ \midrule SIMULD1 (A) & 0.877 & \textbf{0.980} & 0.972 \\ SIMULD2 (A) & 0.928 & 0.982 & \textbf{0.983} \\ Titanic & 0.777 & \textbf{0.896} & 0.878 \\ Blood Transfusion & 0.598 & 0.667 & \textbf{0.683} \\ Teaching Evaluation & 0.763 & 0.817 & \textbf{0.842} \\ Pima Indian Diabetes & 0.665 & \textbf{0.822} & 0.767 \\ \toprule \multicolumn{4}{c}{Classification Tree - Accuracy}\\ \midrule Data set & BASE. & SURR. & REF. \\ \midrule SIMULD1 (A) & 0.896 & 0.888 & \textbf{0.912} \\ SIMULD2 (A) & 0.928 & 0.960 & \textbf{0.976} \\ Titanic & 0.794 & 0.834 & \textbf{0.839} \\ Blood Transfusion & 0.738 & \textbf{0.775} & 0.759 \\ Teaching Evaluation & 0.842 & 0.842 & \textbf{0.895} \\ Pima Indian Diabetes & 0.688 & 0.734 & \textbf{0.760} \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \subsubsection{Benchmark for Regression} In this section, we examine the performance of the SAFE ML method on 5 data sets for regression problems. 
These are Energy Efficiency and Yacht Hydrodynamics from the UCI Machine Learning Repository \cite{Dua:2017}, Boston Housing \cite{HARRISON197881}, Warsaw Apartments \cite{DALEX}, and Real Estates \cite{Yeh:2018:BRE:3198938.3199153}. Baseline and refined models are linear regression models. We use xgboost models as surrogates; their parameters differ between data sets. To choose the best penalty for the SAFE ML transformations, we examine 25 equally spaced penalties in the range from~$0.01$~to~$10$, with MSE as the criterion. Results are presented in Table~\ref{table_regression}. For all data sets, refined models outperform baseline models. For 3 out of 5 data sets, the refined linear model achieves better performance than the xgboost model. \begin{table}[h!tb] \caption{Performances of models trained on five data sets for the regression problem. Artificial data sets are marked by (A). Baseline models (BASE.) and refined models (REF.) are linear regression models. Surrogate models (SURR.) are xgboost models. The performance metric is MSE; values in the columns are scaled to the MSE of the baseline model.} \label{table_regression} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccr} \toprule Data set & BASE. & SURR. & REF. \\ \midrule Warsaw Apartments (A) & 1 & 7.12 & \textbf{64.99} \\ Real Estates & 1 & 1.02 & \textbf{1.38}\\ Boston Housing & 1 & 1.27 & \textbf{1.32} \\ Energy efficiency & 1 & \textbf{43.09} & 8.88 \\ Yacht Hydrodynamics & 1 & \textbf{267.75} & 105.17 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} \subsection{Benchmark Summary} \label{application_summary} \mbox{We examined 6 data sets for classification and 5 data sets for regression.} In more than half of the cases, the refined model outperformed the surrogate model. In the majority of the remaining examples, the performance differences between the surrogate and refined models were minimal. 
Refined models are simple, with a small number of parameters; therefore, one could conclude that refined models generalize the data better than complex models. However, it is worth noting that the refined models generalize relationships that were captured by the surrogate models. Thus, without a complex model as a surrogate, this would not have been possible. With the SAFE ML method, transferring knowledge about relationships to a simple model is automatic and does not require a detailed investigation of the complex model. Even if a black-box model achieves better results, it is still worth considering a transparent glass-box model. As we have seen in the previous examples, the performances of the surrogate and refined models were, in general, close to each other. The advantage of a simpler model is that we gain transparency, interpretability, and auditability. \section{Future extensions of the SAFE ML method} \label{extension} \subsection{Instance Level Problems} In the previous sections, we showed how to use complex surrogate models to extract global, interpretable features. The SAFE ML method could also be extended to instance-level feature extraction. A complex model can capture local relationships between variables. Therefore, we may consider several local, interpretable models instead of one global model. There are several approaches to obtaining locality: we can subset the data set, reweight the original data, or simulate instances from the original data distribution. One of the examples of local model approximations is LIME (Local Interpretable Model-agnostic Explanations) \cite{lime}. It is a~method for generating local models that approximate the predictions of the underlying complex model. Local models are simple, such as LASSO regression. Since LASSO is a~method for selecting variables, while applying LIME we also perform a form of feature selection. However, this method is not capable of extracting new interpretable features. 
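As an illustration of such a local approximation, the sketch below fits a proximity-weighted LASSO around a single instance of a toy black-box function. This is a minimal LIME-style sketch, not the LIME library itself; the black-box function, kernel width, and penalty are illustrative assumptions.

```python
# A minimal LIME-style local surrogate: perturb around one instance,
# weight samples by proximity, and fit a LASSO model. The black-box
# function, kernel width, and penalty below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def black_box(X):
    # stand-in complex model: f(x) = sin(x0) + x1^2
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(1)
x0 = np.array([1.0, 0.5])                              # instance to explain
Z = x0 + rng.normal(scale=0.3, size=(2000, 2))         # local perturbations
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.3 ** 2)  # proximity kernel

local = Lasso(alpha=1e-3).fit(Z - x0, black_box(Z), sample_weight=w)
print(local.coef_)
```

The fitted coefficients approximate the local gradient of the black box at the explained instance, here roughly $(\cos 1.0,\ 2\cdot 0.5)$, which is the sense in which the local model "makes statements" about the complex model rather than about the real world.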
An extension of LIME that includes the extraction of interpretable features is localModel (Local Explanations of Machine Learning Models for Tabular Data) \cite{localModel}. Local interpretable features are created by discretizing numerical features according to the splits of a decision tree. Categorical features are discretized by merging levels using the marginal relationship between the feature and the model response. Locality is obtained by generating a~number of random interpretable inputs around the explained instance. Then, a LASSO regression model is fitted to the new features and the original model's responses. The idea behind localModel is similar to SAFE ML. However, localModel is used to make statements about the predictions and behaviour of the underlying black-box model, while the idea of SAFE is to create a new refined model that makes its own predictions. \subsection{Interactions Extractions} \label{interactions} The SAFE ML algorithm is used for transforming single features. One can consider extending this approach to interactions. There are methods for capturing interactions from random forest \cite{randomForestExplainer} or xgboost \cite{xgboostExplainer} models. These can be used for the extraction of new features which contain information about interactions between variables. \section{Discussion} \label{discussion} In this article, we presented the SAFE ML algorithm, which uses a surrogate model for feature transformation. The new features are then used to train a refined glass-box model. We benchmarked SAFE ML on regression and classification problems. The results confirmed that the SAFE ML algorithm produces features that can be further used to fit an accurate and transparent model. We also justified the advantage of refined models over surrogate black-boxes. We also discussed possible extensions of SAFE ML to instance-level problems. In addition, we see the possibility of extending the SAFE ML method to include interaction extraction. 
\subsection{Benchmarking Methodology} The benchmarks in Section~\ref{SAFE_application} were based on a single split into training and test data sets. In further research, benchmarks could include the k-fold cross-validation technique. However, while applying cross-validation, it would be necessary to take into account the values of the penalty. In Section~\ref{SAFE_application} we selected the penalty on the basis of the model performance on a~test data set. The use of cross-validation would cause the penalty values for each fold to be different, so it would not be possible to point to a single best penalty. \subsection{Conclusions} \label{conclusions} The SAFE ML method allows us to fulfill the four requirements of a trustworthy predictive model stated in Section~\ref{motivation}. One can choose the final refined model according to its simplicity and transparency; therefore, statement (3) about interpretability is accomplished. Simple models, such as linear regression and logistic regression, are extensively described from a mathematical point of view. As a result, there are many methods to diagnose such models. Therefore, the requirement of (2) auditability is also fulfilled. In Section~\ref{SAFE_application} we showed that the performances of refined models are close to the performances of complex surrogate models. Therefore, the SAFE ML method allows us to achieve (1) high model performance. In Section~\ref{SAFE_application} we also argued that the SAFE ML algorithm allows automatic feature transformation for the purpose of fitting the refined model. This approach makes it possible to omit a detailed examination of the complex model. Thus, (4) automaticity is also accomplished. \subsection{Similar Nomenclature} The phrase \textit{surrogate model} occasionally refers to an interpretable glass-box model that approximates the predictions of a black-box model \cite{h2o_mli_booklet}. 
The surrogate model in this sense mimics most of the properties of the model under consideration, and is used to make statements about the black-box model and not about the real world. However, there is no unambiguous nomenclature for this kind of problem. Models that mimic black-boxes are also called proxy models, shadow models, metamodels, response surface models, or emulators \cite{molnar,proc-jsm-2018}. Therefore, our meaning of the term \textit{surrogate model} is not a duplication of the meaning of the existing phrase. In this article, we use \textit{surrogate model} to refer to a complex model that supports training an interpretable model. \subsection{Software and Code} \label{software} Benchmarks from Section~\ref{SAFE_application} were generated with the SafeTransformer Python library available at \url{https://github.com/olagacek/SAFE}. Code that generates the benchmarks is available on GitHub: \url{https://github.com/agosiewska/SAFE_examples}. \section{Acknowledgements} Alicja Gosiewska was financially supported by the grant of the Polish Centre for Research and Development POIR.01.01.01-00-0328/17. Przemyslaw Biecek was financially supported by the NCN Opus grant 2017/27/B/ST6/01307. \bibliographystyle{unsrt}
\section{Introduction} It is widely recognized that magnetohydrodynamics (MHD) is capable of describing a variety of astrophysical phenomena. The treatment is a macroscopic one, consisting of fluid motions coupled with electromagnetic forces. A set of MHD equations is scale-independent, and may be applied from laboratory to astrophysical plasmas. The electromagnetic fields are described by classical Maxwell's equations. They have a symmetry with respect to parity, that is, a transformation property under spatial inversion. A physical equation can equate only vectors of the same kind. An example is Ohm's law ${\VEC j} =\sigma {\VEC E}$ with a scalar $\sigma$, where the electric field ${\VEC E}$ and electric current ${\VEC j}$ are polar vectors. The magnetic vector ${\VEC B}$ is an axial one, and is connected to the polar current vector ${\VEC j} $ as ${\VEC \nabla} \times {\VEC B} = 4\pi {\VEC j}/c$ (e.g., \cite{1975clel.book.....J}). A peculiar form of an electric current arises from the microscopic level: \begin{equation} {\VEC j} = \kappa {\VEC B} . \label{curntb.eqn} \end{equation} This form means that $\kappa$ is not a scalar but a pseudo-scalar under the parity transformation. A possible origin of the form (\ref{curntb.eqn}) is a quantum anomaly known as the chiral magnetic effect (e.g., \cite{1980PhRvD..22.3080V, 1985PhRvL..54..970R,1998PhRvL..81.3503A, 2010PhRvL.104u2001F,2013PhRvL.111e2002A} and references therein). There is an imbalance between left-handed and right-handed particles in a quantum system, and a current flow along the magnetic field emerges at the macroscopic level. It is known that an electric current along the magnetic field causes an instability, leading to a growth of the magnetic field (e.g., \cite{1997PhRvL..79.1193J,2015PhRvD..92d3004B, 2016PhRvD..94b5009B}). Magnetic helicity, which is an indicator of the global topology, changes along with the field amplification. 
It has been discussed that the chiral magnetic effect leads to an inverse cascade, that is, energy transfer from small to large scales. The problem has been studied from various aspects \cite{1999PhRvD..59f3008S, 2007PhRvL..98y1302C,2015PhRvL.114g5001B,2015PhRvD..92l5031H, 2016PhRvD..93l5016Y,2017PhRvD..96b3504P}. This property is an important process in the self-organization of turbulent structure. In recent years, the chiral magnetic effect has been widely discussed in relation to the quark-gluon plasma in heavy-ion collision experiments \cite{2016PrPNP..88....1K}, and to astrophysical consequences in the early universe \cite{1997PhRvL..79.1193J,1999PhRvD..59f3008S,2017ApJ...845L..21B}, core-collapse supernovae \cite{2016PhRvD..93f5017Y, 2018PhRvD..98h3018M}, and magnetars \cite{2018MNRAS.479..657D}. The electric current (\ref{curntb.eqn}) is also discussed in the context of the mean-field MHD dynamo (e.g., \cite{1978mfge.book.....M, 1980mfmd.book.....K, 1983flma....3.....Z}). In the theory, the mean values of the variables can be distinguished from the fluctuating ones. Thus, the electric current (\ref{curntb.eqn}) on a large scale arises from an ensemble of screw-like vortices in the microscopic turbulence. Direct numerical simulations of the MHD dynamo have been developed with the increase of computer power. In this approach, the dynamics of all scales is followed simultaneously, as long as small-scale waves are numerically resolved. Thus, a model of microscopic turbulence is no longer needed there. Indeed, some chiral MHD simulations have been performed in the non-relativistic framework \cite{2017ApJ...846..153R,2018ApJ...858..124S} and in the relativistic framework \cite{2018MNRAS.479..657D}. Their remarkable results demonstrate the ability of the method. However, the computational cost may be high in high-resolution simulations. Another example of the form (\ref{curntb.eqn}) is force-free magnetic fields (e.g., \cite{1978mfge.book.....M,1996ffmg.book.....M}), in which the Lorentz force vanishes, ${\VEC j} \times{\VEC B} =0$. 
In a stationary case, $ \kappa $ is constant along a magnetic field line. The magnetosphere around a star is modeled by the force-free approximation, in which the magnetic pressure is assumed to be much larger than the thermal one. The macroscopic dynamics are governed by the same equations, although the transport coefficient $ \kappa $, determined by a microscopic process, varies in magnitude. Bearing various astrophysical environments in mind, it is important to explore the instability in a wide range of parameters. In this paper, we consider a normal-mode analysis of the linearized system of chiral MHD equations. The background state is assumed to be homogeneous with a uniform magnetic field, and small perturbations propagate as MHD waves in the absence of the chiral magnetic effect. We study the modification of wave propagation and the instability caused by the chiral magnetic effect in Section 2. This problem was partially studied in \cite{2017ApJ...846..153R}, where the modification was found, but the general properties were not clarified. The reason will be discussed after our results are presented. We extensively analyze the problem to explore the relevance of the instability in various astrophysical environments. We discuss our results in Section 3. \section{Waves in a linearized system} \subsection{Equations} A set of chiral MHD equations is discussed in the literature (e.g., \cite{2017ApJ...846..153R,2018ApJ...858..124S}). The linear perturbation equations in non-relativistic dynamics are summarized here. We assume that the unperturbed state of the medium is static and homogeneous, i.e., the density $\rho_{0}$ and magnetic field ${\VEC B}= B_{0}{\VEC e}_{z}$, where $\rho_{0}$ and $B_{0}$ are constant. 
We write the small perturbation of a quantity $Q$ as $\delta Q$, and then the perturbation equations are given by \begin{equation} \frac{ \partial }{\partial t} \delta {\VEC B} = - {\VEC \nabla} \times(c \delta {\VEC E} ), \label{Frad.eqn} \end{equation} \begin{equation} c \delta {\VEC E} = -\delta {\VEC v} \times {\VEC B}_{0} +\eta {\VEC \nabla} \times \delta {\VEC B} - \kappa \delta {\VEC B} , \label{Edef.eqn} \end{equation} \begin{equation} \frac{ \partial }{\partial t} \delta \rho + {\VEC \nabla} \cdot ( \rho_{0} \delta {\VEC v} )=0, \label{Evdns.eqn} \end{equation} \begin{equation} \rho_{0} \frac{ \partial }{\partial t} \delta {\VEC v} = - {\VEC \nabla}( c_{s}^2 \delta \rho) + \frac{1}{4\pi} ( {\VEC \nabla} \times \delta {\VEC B}) \times {\VEC B}_{0}, \label{Evmot.eqn} \end{equation} where the adiabatic relation $ \delta p= c_{s}^2 \delta \rho$ and {Amp{\`{e}}re's} law $ 4\pi \delta {\VEC j} =c {\VEC \nabla} \times \delta {\VEC B}$ are used. An electric current parallel to the magnetic field is added by the chiral magnetic effect in eq.(\ref{Edef.eqn}). We denote the sound speed as $ c_{s}$ and the Alfv{\'{e}}n speed as $ c_{a} = (B_{0}^2/( 4 \pi \rho_{0}) )^{1/2}$. There are two kinds of restoring forces on the plasma motion, pressure and magnetic tension. Their relative importance is inferred from the ratio, which corresponds to the so-called plasma $\beta$; $\beta \approx (c_{s}/c_{a})^{2}$. We assume that the electric resistivity $\eta$ and the coefficient $\kappa$ of the chiral magnetic effect are constant. The latter is determined by the chiral chemical potential, i.e., the imbalance between left- and right-chiral particles. As the chiral magnetic instability grows, an electric current flows along the magnetic field, and the magnitude of $\kappa$ eventually decreases. Our concern is the linear growth at the initial stage, so that $\kappa$ is regarded as a constant. 
We assume that all perturbed quantities are proportional to the Fourier form $\exp(-i(\omega t -{\VEC k} \cdot {\VEC x}))$, and that the wave propagates in the $x$-$z$ plane, i.e., $ {\VEC k} = ( k_{x}, 0, k_{z})$ $ = ( k \sin \theta, 0, k \cos \theta )$. We also assume $k > 0$, but the frequency $\omega$ is in general a complex number, $\omega \equiv \omega_{\rm R} +i \omega_{\rm I}$. A mode with $\omega_{\rm I} >0$ grows with time. We may limit ourselves to the case of $\kappa \ge 0 $, since the chiral instability depends on $ \kappa^2$ as shown below, although both signs of $\kappa$ are physically allowed. After some manipulation, the perturbed equations (\ref{Frad.eqn})-(\ref{Evmot.eqn}) are reduced to \begin{equation} M \delta {\bf u} = 0 , \end{equation} where $M$ is a $5 \times 5$ matrix whose components are explicitly given by \begin{equation} M=\left[ \begin{array}{ccccc} k_{z} & 0 & 0 & \omega +i \eta k^2 & - k_{z} \kappa \\ 0 & k_{z} & 0 & \kappa k^2 k_{z} ^{-1} & \omega +i \eta k^2 \\ \omega^2 - c_{s}^2 k_{x}^2 & 0 & - c_{s}^2 k_{x} k_{z} & \omega c_{a}^2 k^2 k_{z}^{-1} & 0 \\ 0 & \omega & 0 & 0 & c_{a}^2 k_{z}\\ - c_{s}^2 k_{x} k_{z} & 0 & \omega^2 - c_{s}^2 k_{z}^2 & 0 & 0 \\ \end{array} \right], \end{equation} and the vector $\delta {\bf u} $ is given by $\delta {\bf u} ^{\rm T}=(\delta v_{x}, \delta v_{y}, \delta v_{z}, B_{0}^{-1} \delta B_{x}, B_{0}^{-1} \delta B_{y})$. The component $\delta B_{z}$ is determined by the Gauss law ${\VEC \nabla} \cdot \delta {\VEC B}=0$, i.e., $\delta B_{z}= -\delta B_{x} k_{x} k_{z}^{-1} .$ The determinant of the matrix $M$ provides the dispersion relation: \begin{eqnarray} \nonumber 0 & =& (\omega ^2 - c_{s}^{2} k^2) \omega ^2 X^2 -[ \omega ^2 ( k_{x} ^2 + 2k_{z}^2) -2 c_{s}^{2}k_{z}^2 k^2]c_{a}^{2} \omega X \\ & & + [ \kappa^2 \omega^4 +(c_{a}^{4} k_{z} ^2 -\kappa^2 c_{s}^{2} k^2) \omega^2 - c_{a}^{4} c_{s}^{2} k_{z}^4 ]k^2 , \label{dispersion.eqn} \end{eqnarray} where $ X= \omega +i\eta k^2$. 
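The dispersion relation can be cross-checked numerically by evaluating its left-hand side at the analytic roots of the limiting cases. The following Python sketch does so; the parameter values are arbitrary illustrations, not physical estimates.

```python
# Numerical cross-check of the dispersion relation.
# Parameter values below are arbitrary illustrations (k, kappa, eta ~ 1).
import numpy as np

def dispersion(omega, k, theta, cs, ca, kappa, eta):
    """Left-hand side of the dispersion relation; zero for a normal mode."""
    kx, kz = k * np.sin(theta), k * np.cos(theta)
    X = omega + 1j * eta * k**2
    t1 = (omega**2 - cs**2 * k**2) * omega**2 * X**2
    t2 = -((kx**2 + 2 * kz**2) * omega**2
           - 2 * cs**2 * kz**2 * k**2) * ca**2 * omega * X
    t3 = (kappa**2 * omega**4
          + (ca**4 * kz**2 - kappa**2 * cs**2 * k**2) * omega**2
          - ca**4 * cs**2 * kz**4) * k**2
    return t1 + t2 + t3

# For c_s = c_a = 0 the relation reduces to
# omega^4 [(omega + i eta k^2)^2 + kappa^2 k^2] = 0,
# with non-zero roots omega = -i eta k^2 +/- i kappa k.
k, kappa, eta = 1.5, 2.0, 0.1
w_grow = 1j * (kappa * k - eta * k**2)
print(abs(dispersion(w_grow, k, 0.3, 0.0, 0.0, kappa, eta)))  # ~ 0
```

The same function can be used to confirm, for instance, that the sound wave $\omega = c_s k$ remains a root when $c_a = 0$ and $\eta = 0$, in agreement with the limiting cases analyzed below.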
Equation (\ref{dispersion.eqn}) is a sixth-degree polynomial in $\omega$. In order to understand the general features of the solutions, we start with some limiting cases in the next subsections. \subsection{Chiral magnetic instability} We first consider the case of $c_{s}=c_{a}=0$. That is, there are no forces on the plasma motion. The dispersion relation (\ref{dispersion.eqn}) becomes \begin{equation} 0 = [(\omega + i \eta k^2) ^2 + \kappa^2 k^2 ]\omega^4. \end{equation} The non-zero solutions are given by $\omega = - i \eta k^2 \pm i \kappa k$. The mode with $ \omega = -i(\kappa k +\eta k^2)$ always decays. However, the mode with $ \omega = i( \kappa k- \eta k^2)$ grows for $ k < \eta ^{-1} \kappa$. That is, the long-wavelength mode with $ \lambda > \lambda_{c} \equiv |\kappa|^{-1} \eta$ is unstable. Resistivity is ineffective for such long-wavelength modes. The eigenvectors of the perturbation functions satisfy \begin{equation} \delta {\VEC v}= (0,~0,~0), ~~ \delta {\VEC B} \propto (k_{z}, ~\pm i k , ~k_{x}). \end{equation} These functions mean that the disturbance is a purely magnetic one, and is transverse to the wave vector, i.e., ${\VEC k} \cdot \delta {\VEC B} =0 $ and also ${\VEC k} \cdot \delta {\VEC j} =0 $. Our concern is the chiral instability mode due to $\kappa \ne 0$, so that from now on we neglect the resistivity ($\eta =0$). The approximation is valid for long-wavelength modes $\lambda \gg \lambda_{c}$. This approximation simplifies the dispersion relation (\ref{dispersion.eqn}), which is reduced to a cubic equation in $\omega^2$. \subsection{Effect of thermal pressure} By setting $\eta =0$ and $ c_{a}=0$ in eq.(\ref{dispersion.eqn}), we have the relation: \begin{equation} 0 = (\omega^2 + \kappa ^2 k^2)(\omega^2 -c_{s}^2 k^2)\omega^2 . \end{equation} There are two non-trivial solutions. One is the chiral magnetic mode ($\omega^2 =-\kappa ^2 k^2$), and the other is the sound wave mode ($\omega^2 =c_{s}^2 k^2 $). 
The sound wave is produced by compressional motion of matter and is hence a longitudinal mode, i.e., $ {\VEC k} \cdot \delta {\VEC v} \ne 0$. We explicitly check this fact with the eigenfunctions: \begin{equation} \delta {\VEC v} \propto {\VEC k} =( k_{x},~ 0, ~k_{z} ), ~~ \delta {\VEC B } = 0. \end{equation} As discussed in the previous subsection, the chiral mode is a transverse mode, $ {\VEC k} \cdot \delta {\VEC B} =0$. These two modes are completely decoupled. The growth rate of the chiral magnetic mode is not affected by the pressure. \subsection{Effect of a uniform magnetic field} In the case of $c_{s}=0$ and $\eta =0$, the dispersion relation (\ref{dispersion.eqn}) is reduced to \begin{equation} 0 = [(\omega^2 -c_{a}^2 k_{z}^2)(\omega^2 -c_{a}^2 k^2) + \kappa ^2 k^2 \omega ^2] \omega ^2 . \label{dscagn.eqn} \end{equation} It is clear that two waves, the Alfv{\'{e}}n mode and the fast MHD mode (or fast magneto-acoustic mode), are coupled by the $\kappa$-term. The frequency of the slow mode is zero ($ \omega ^2 = 0$) in the limit of $c_{s}=0$. The two non-zero solutions are given by \begin{equation} \frac{\omega ^2}{k^2} =\frac{1}{2}\left[(1+\cos^2 \theta )c_{a}^2 -\kappa^2 \pm Q^{1/2} \right], \label{AlfMagwv.eqn} \end{equation} where \begin{equation} Q=\left[ (1-\cos\theta)^2c_{a}^2-\kappa^2 \right] \left[(1+\cos\theta)^2c_{a}^2-\kappa^2 \right] . \end{equation} The function $Q$ becomes negative in the range \begin{equation} \frac{1}{(1+|\cos\theta|)^2} < \frac{c_{a}^2}{\kappa^2} < \frac{1}{(1-|\cos\theta|)^2} , \label{CondCaQ.eqn} \end{equation} and hence $\omega $ becomes a complex number. Outside the range of eq.(\ref{CondCaQ.eqn}), $\omega $ is either real or purely imaginary. The nature of a solution $\omega$ is thus classified into three regions in the $c_{a} \kappa^{-1}$ - $\theta $ plane, as shown in Fig.1. In region I (the left part of the figure), where $ c_{a} \kappa^{-1} <( 1+|\cos\theta|)^{-1}$, the solution is purely imaginary, i.e., $\omega_{\rm R}=0 $. 
On the other hand, $\omega $ is real ($\omega_{\rm I}=0 $) in region III (the upper-right part of the figure). That is the stable wave region, in which $ c_{a} \kappa^{-1} >(1-|\cos\theta|)^{-1}$. In the intermediate region II, the frequency is a complex number, $\omega = \omega_{\rm R}+ i \omega_{\rm I} $. The mode becomes an oscillatory instability. \begin{figure}[bt] \begin{center} \includegraphics[scale=1.0]{Diagay.eps}% \caption{ Contours of $\omega_{\rm R}/(k \kappa)$ (left panel) and $\omega_{\rm I}/(k \kappa)$ (right panel) in the $c_{a} \kappa^{-1}$ - $\theta $ plane. In region I of the left panel, $\omega_{\rm R}/(k \kappa)$ is 0, but $\omega_{\rm R}/(k \kappa)$ increases with $c_{a}\kappa^{-1}$ in region II. Constant lines with $\omega_{\rm R}/(k \kappa) =1,2$ are plotted. In region III, there are two stable waves, the fast MHD and Alfv{\'{e}}n waves, for a fixed velocity $\omega_{\rm R}/(k \kappa)$. In region I of the right panel, there are two kinds of growing modes. In region III of the right panel, $\omega_{\rm I}/(k \kappa) $ is zero. Some constant lines are plotted for the values of $\omega_{\rm I}/(k \kappa) $ labeled in the figure. } \end{center} \end{figure} \begin{figure}[bht] \begin{center} \includegraphics[scale=0.8]{mdmxaf45.eps} % \caption{ Three-dimensional display of the mode coupling as a function of $\log_{10}(c_{a}\kappa^{-1})$, which is chosen as the $y$ axis ($-2 \le \log_{10}(c_{a}\kappa^{-1}) \le 2$). The real and imaginary parts of a mode are shown on the $x$ ($-8 \le {\rm Re}( \omega/k \kappa) \le 8$) and $z$ ($-1 \le {\rm Im}( \omega/k \kappa) \le 1$) axes. Negative values of the imaginary part are plotted by dashed lines. For large $c_{a}\kappa^{-1}$, the two modes are stable waves, the fast MHD and Alfv{\'{e}}n waves. } \end{center} \end{figure} In Fig.1, we show some contours of the real and imaginary parts as functions of $ c_{a} \kappa ^{-1}$ and the propagation angle $\theta$. 
We may limit ourselves to the case of $\omega_{\rm R} >0 $ and $\omega_{\rm I} >0 $ only, since a pair $ \pm (\omega_{\rm R} + i\omega_{\rm I} ) $ is always a solution. The real part $\omega_{\rm R}/(k \kappa)$, a normalized phase velocity, is zero in region I of the left panel, but it increases with $c_{a}\kappa^{-1}$ in region II for a fixed angle $\theta$. There are two solutions in region III. They are identified as the fast MHD and Alfv{\'{e}}n waves. The former is approximated as $ (\omega/k)^{2} \approx c_{a}^2 - (\kappa / \sin \theta)^{2}$ for $c_{a}\kappa^{-1} \gg 1 $, and $\omega_{\rm R}/(k \kappa)$ does not depend strongly on $\theta$. Therefore, a curve with constant velocity $\omega_{\rm R}/(k \kappa)$ becomes almost vertical in the left panel of Fig.1. The other curve, plotted by a horizontal dotted line, represents the Alfv{\'{e}}n wave, which is expressed as $ (\omega/k)^{2} \approx c_{a}^2 \cos^2 \theta - \kappa ^{2} \cot^2 \theta $ for $c_{a}\kappa^{-1} \gg 1 $. Next, we discuss the imaginary part $\omega_{\rm I}/(k \kappa)$. For small $ c_{a} \kappa^{-1} $, the two solutions in eq.(\ref{AlfMagwv.eqn}) are approximated as \begin{equation} \frac{\omega ^2}{k^2} = \left\{ \begin{array}{l} - \kappa ^{-2} c_{a}^4 \cos^4 \theta + \cdots \\ -\kappa ^2 + (1+\cos^2\theta ) c_{a}^2 + \cdots \end{array} \right. \label{cakplimt.eqn} \end{equation} where the two functional forms correspond to the upper and lower signs in eq.(\ref{AlfMagwv.eqn}). There are two growing modes, and their characteristic growth rates are $\omega_{\rm I} \approx c_{a}^2 \kappa^{-1} k$ (slowly growing mode) and $\omega_{\rm I} \approx \kappa k$ (rapidly growing mode). The frequency of the former vanishes in the limit $ c_{a}=0$. As $c_{a} \kappa^{-1}$ increases, the two growth rates approach each other, and match on the critical line $ c_{a} \kappa^{-1} =( 1+|\cos\theta|)^{-1}$. In the right panel of Fig.1, some lines with constant $\omega_{\rm I}/(k \kappa)$ are plotted. 
In the region I, there are two branches, which are approximated by eq.(\ref{cakplimt.eqn}). In the intermediate region II, the growth rate strongly depends on the propagation angle $\theta$. For example, the wave perpendicular to the unperturbed magnetic field, i.e., $\theta =\pi/2$, is stabilized for $ c_{a} \kappa^{-1} \ge 1$. On the other hand, the wave parallel to the magnetic field is never stabilized for any value of $ c_{a} \kappa^{-1} $. We show the coupling of the Alfv{\'{e}}n and fast MHD modes, which causes an unstable mode for $c_{a} \kappa^{-1} <1$. Figure 2 displays how the phase velocity $ \omega/k$ normalized by $\kappa$ changes with the Alfv{\'{e}}n velocity $c_{a} $, for fixed propagation angle $\theta =\pi/4$. In the large limit of $c_{a} \kappa^{-1} $, there are two different modes, which are described by positive velocities. There are also negative velocity modes, but they are physically the same as the positive ones. These different modes represent stable Alfv{\'{e}}n and fast MHD waves. As $c_{a} \kappa^{-1} $ decreases, the two velocities agree at a certain point ($c_{a} \kappa^{-1}\approx 3 $), where the two modes convert into one oscillatory growing mode and one oscillatory decaying mode in the region $c_{a}\kappa^{-1} < 3$. The velocity $ \omega_{\rm R}/(k \kappa)$ decreases further, and $\omega_{\rm R} =0$ at $c_{a}\kappa^{-1} \approx 0.6 $. The two propagating waves merge and become standing waves for $c_{a}\kappa^{-1} < 0.6$. As $c_{a}\kappa^{-1} \to 0$ beyond that point, the frequencies change as $\omega_{\rm I} \to \pm 0 $ or $\omega_{\rm I} \to \pm k\kappa $ with $\omega_{\rm R} = 0 $.
The perturbation amplitudes satisfy \begin{eqnarray} && \delta v_{x}= - \frac{ c_{a}^2 k}{ \omega \cos\theta } \frac{ \delta B_{x} }{ B_{0} }, ~~ \delta v_{y}= - \frac{ c_{a}^2 k \cos\theta }{ \omega} \frac{ \delta B_{y} }{ B_{0} }, ~~ \delta v_{z}=0, \nonumber \\ && (\omega ^2 - c_{a}^2 k^2) \delta B_{x} = \kappa \omega k\cos\theta \delta B_{y}, ~~ \delta B_{z} = - \tan \theta \delta B_{x} . \label{eignvctA.eqn} \end{eqnarray} The perturbation of the magnetic field is always perpendicular to the wave, since $ {\VEC k } \cdot \delta {\VEC B} $ $ \propto {\VEC \nabla } \cdot \delta {\VEC B} =0$. In order to study the direction of the plasma motion, we consider two limiting cases. For the wave parallel to the magnetic field ($\theta =0$), the plasma motion is also transverse to the wave propagation, since $\delta v_{z} =0 $. There is no compression of matter, $ {\VEC \nabla } \cdot \delta {\VEC v} =0$, in this case. As $\theta \to \pi /2$, we have $\delta v_{y} \to 0 $ as well as $\delta v_{z} =0 $. This means that the plasma motion is longitudinal, $\delta {\VEC v} \propto {\VEC k } $, and compressional, $ {\VEC \nabla } \cdot \delta {\VEC v} \ne 0$, in this case. At intermediate angles of wave propagation, the instability mode is a mixture of the properties of the two limiting cases. \subsection{Magnetohydrodynamical effects} In the previous subsections, we have separately considered the effects of plasma motion driven by thermal pressure or magnetic tension on the chiral instability. We here consider the combined effect of $c_{a} \ne 0$ and $c_{s} \ne 0$. The dispersion relation (\ref{dispersion.eqn}) is a cubic equation in $\omega^2$, so that a pair ($\pm \omega$) is always a solution of it. It is also easy to see that there is at least one solution with $\omega^2 >0 $, i.e., a stable wave. Figure 3 shows the maximum growth rate $\omega_{\rm I}/(k \kappa)$ among the four solutions in the $c_{s}\kappa^{-1} $ - $c_{a}\kappa^{-1} $ plane, for the propagation angle $\theta =\pi/4$.
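Returning to the eigenvector relations in eq.(\ref{eignvctA.eqn}): the transversality $ {\VEC k}\cdot\delta{\VEC B}=0 $ can be checked directly from $\delta B_{z}=-\tan\theta\,\delta B_{x}$. A minimal sketch, under the geometry we assume here (not spelled out above): background field along $z$ and wave vector ${\VEC k}=k(\sin\theta,0,\cos\theta)$ in the $x$-$z$ plane.

```python
import math

def k_dot_dB(theta, dBx, dBy):
    """Evaluate k . deltaB (per unit k) using
    deltaB_z = -tan(theta) * deltaB_x.  Assumed geometry: B_0 along
    z, wave vector k = k (sin theta, 0, cos theta).  The result
    should vanish identically, i.e. deltaB is transverse.
    """
    dBz = -math.tan(theta) * dBx
    kx, ky, kz = math.sin(theta), 0.0, math.cos(theta)
    return kx * dBx + ky * dBy + kz * dBz
```

With this geometry, $k_x\,\delta B_x + k_z\,\delta B_z = k\,\delta B_x(\sin\theta - \cos\theta\tan\theta) = 0$ for any amplitude, as the code confirms numerically.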
The stable wave-propagation region is the upper part of the `$\nu$' shape. It is natural that the character of the modes depends on the dominant force. In the high $\beta$ region (the lower-right part of Fig.3), there is an unstable mode. The growth rate is $\omega_{\rm I}/(k \kappa) \approx 1 $ in the limit $c_{a}\kappa^{-1} =0$, irrespective of $c_{s}\kappa^{-1}$. Plasma motion driven by the dominant pressure force does not affect the instability, as discussed in subsection 2.3. As $c_{a}\kappa^{-1}$ increases with a fixed value of $c_{s}\kappa^{-1} (> 1.5)$, $\omega_{\rm I}/(k \kappa) $ decreases and becomes 0 at $c_{a}\kappa^{-1} \approx c_{s}\kappa^{-1}$. However, the boundary of the unstable region $\omega_{\rm I}>0$ shifts to a larger value of $c_{a} \kappa^{-1}$ for a small $c_{s} \kappa^{-1} $, that is, the part of Fig.3 where $c_{a}\kappa^{-1} > c_{s}\kappa^{-1}$. \begin{figure}[bht] \begin{center} \includegraphics[scale=1.20]{grw45.eps} % \caption{ Color contours of the maximum growth rate ${\rm Im} (\omega/(k \kappa))$ in the $c_{s}\kappa^{-1}$-$c_{a}\kappa^{-1}$ plane. The propagation angle of the perturbation is $\theta =\pi/4$. All modes become stable propagating waves in the upper `$\nu$'-shaped region (the blue region in the figure). } \end{center} \end{figure} In order to study the unstable mode, we show in Fig.4 how the frequency of a mode changes with $c_{a}\kappa^{-1}$ for a fixed $c_{s}\kappa^{-1}$. In the case of $c_{s}\kappa^{-1} =0.75$ (left panel of Fig.4), all modes become stable waves for $c_{a}\kappa^{-1} \ge 3$. Their phase velocities $ \omega/k$ characterize the waves, so that we identify the fast MHD, Alfv{\'{e}}n and slow MHD waves according to the absolute value of the velocity. It is also found that the unstable mode is caused by a coupling of the fast and Alfv{\'{e}}n waves as $c_{a}\kappa^{-1} $ decreases. The slow one is always decoupled, and is a stable wave.
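The identification of the stable waves by $|\omega/k|$ can be phrased as a small helper. A minimal sketch; the ordering fast $\geq$ Alfv{\'{e}}n $\geq$ slow is our reading of the labeling convention used in Fig.4:

```python
def identify_modes(phase_velocities):
    """Label the three stable modes by |omega/k|: the fastest is the
    fast MHD wave (F), the slowest the slow MHD wave (S), and the
    intermediate one the Alfven wave (A), following the F/S/A labels
    of the figures.
    """
    assert len(phase_velocities) == 3, "expects the three stable MHD modes"
    order = sorted(phase_velocities, key=abs, reverse=True)
    return dict(zip(order, ["F", "A", "S"]))
```

Negative-velocity modes are physically identical to the positive ones, so the comparison uses absolute values.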
This situation is the same as that considered in the previous subsection ($c_{s}=0$), where the slow mode is decoupled as $\omega=0$. In the right panel of Fig.4, we show the case of $c_{s}\kappa^{-1} =2$. As in the previous case, three stable waves are identified for $c_{a}\kappa^{-1} \ge 3$. As $c_{a}\kappa^{-1} $ decreases, the coupling occurs between the slow and Alfv{\'{e}}n modes. The fast mode is always decoupled. This point differs from the case $c_{s}\kappa^{-1} =0.75$. The unstable region in Fig.3 depends on which MHD mode the Alfv{\'{e}}n mode couples with. The Alfv{\'{e}}n mode couples with the slow one in a high $\beta$ region ($ c_{s} > c_{a}$), whereas it couples with the fast one in a low $\beta$ region ($ c_{a} > c_{s}$). It is interesting to observe the behavior in the small $ c_{a}\kappa^{-1} $ region in the left panel of Fig.4 (the case of $c_{s}\kappa^{-1} =0.75$). The phase velocity $\omega_{\rm R}/(k \kappa)$ of the instability mode sharply decreases around $ c_{a}\kappa^{-1} = 0.75$. At that point, the velocity of the slow mode sharply increases. That is, the wave natures are exchanged. The unstable mode originates from a coupling of the fast and Alfv{\'{e}}n waves in a low $\beta$ region, but its nature becomes slow-mode-like in a high $\beta$ region. At the same time, the stable mode behaves like the slow one in the large $ c_{a} $ region, but behaves like the fast one in the small $ c_{a} $ region. \begin{figure}[bht] \includegraphics[scale=0.8]{cutex.eps} % \caption{Three-dimensional display of mode coupling as a function of $c_{a} \kappa^{-1}$, which is chosen as the $y$ axis ($0 \le c_{a} \kappa^{-1} \le 4$). Real and imaginary parts of a mode are shown on the $x$ ($-4 \le {\rm Re}( \omega/k \kappa) \le 4$) and $z$ ($0 \le {\rm Im}( \omega/k \kappa) \le 1$) axes. The left panel is for $c_{s} \kappa^{-1} = 0.75$, while the right one is for $c_{s} \kappa^{-1} = 2$. For large $c_{a} \kappa^{-1}$, all modes are stable waves characterized by a real frequency.
By their propagation velocities, they are identified as F (fast MHD), S (slow MHD) and A (Alfv{\'{e}}n) waves, as labeled in the figure. } \end{figure} \begin{figure}[bth] \begin{center} \includegraphics[scale=1.0]{l3x.eps} % \caption{Stable wave region in the parameter space of $c_{s}/\kappa$ and $c_{a}/\kappa$. The region between two curves of `$\nu$' shape denotes stable wave propagation for a fixed angle. Three curves are plotted for $\theta =\pi/8, \pi/4, 3\pi/8$. The stable wave region increases with the increase of the propagation angle $\theta$. } \end{center} \end{figure} We discuss how the stable wave region changes with the propagation angle $\theta$. Figure 5 shows the region for $\theta =\pi/8, \pi/4$ and $3 \pi/8$. The instability is almost unchanged in a high $\beta$ region ($c_{s} \kappa^{-1}> c_{a}\kappa^{-1}$), since $c_{a}$ is unimportant there. However, the growth rate significantly depends on the angle $\theta$ in a low $\beta$ region ($c_{a} \kappa^{-1}> c_{s}\kappa^{-1}$), where the Alfv{\'{e}}n wave propagation affects the instability. As discussed in subsection 2.4, the unstable region diminishes with the increase of $\theta$ for $c_{a} \kappa^{-1} >1 $. The Alfv{\'{e}}n mode velocity goes to $0$ in the direction orthogonal to the unperturbed magnetic field, $\theta \to \pi/2$. Accordingly, the growth is suppressed in a low $\beta$ region with $c_{a} \kappa^{-1} >1 $. A peculiar point should be noted in the limit $\theta =\pi/2$ ($\cos \theta =0$). The behavior at $\cos \theta =0$ differs from that at $\cos \theta \approx 0$. The dispersion relation for $\cos \theta =0$ can be expressed analytically, and shows that there is always one growing mode, for any values of $c_{a} \ne 0 $ and $c_{s} \ne 0 $. In an exactly perpendicular direction, the Alfv{\'{e}}n wave propagation is prohibited, and the instability grows irrespective of the magnetic field strength.
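For the pressureless case ($c_{s}=0$) of subsection 2.4, the stability threshold is explicit, and its monotone decrease with $\theta$ mirrors the widening of the stable region seen in Fig.5. A minimal sketch (the function name is ours; the general $c_{s}\ne 0$ boundaries of Fig.5 are not reproduced by this formula):

```python
import math

def stability_threshold(theta):
    """Minimum c_a/kappa for stable wave propagation when c_s = 0:
    c_a/kappa > (1 - |cos theta|)^(-1).  The threshold decreases
    monotonically from infinity (parallel propagation, never
    stabilized) toward 1 as theta -> pi/2, so the stable region
    widens with the propagation angle.
    """
    c = abs(math.cos(theta))
    if c == 1.0:
        return math.inf  # parallel propagation: never stabilized
    return 1.0 / (1.0 - c)
```

Note that this continuous limit ($\cos\theta\approx 0$) differs from the singular exactly-perpendicular case $\cos\theta=0$ with $c_{s}\ne 0$ discussed above, where one growing mode always remains.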
\section{Summary and Discussion} The chiral magnetic instability is inherent in an electromagnetic field with an electric current parallel to the magnetic field. We have taken into account the plasma motion in order to study its relevance in various environments. Assuming that the disturbances are small, the linearized equations of chiral MHD are examined. All modes are described by solving the resultant dispersion relation, no matter whether they are stable or unstable. The analysis itself is not new, as mentioned in the Introduction. We should therefore discuss the previous results \cite{2017ApJ...846..153R}, and especially the insufficient points of the previous analysis. The dispersion relation depends on three parameters: a set of independent ones is chosen as $c_{a}/\kappa$, $c_{s}/\kappa$ and $\theta$ in this paper. In Ref. \cite{2017ApJ...846..153R}, the authors fixed the ratio $c_{a}/c_{s}$, and plotted the phase velocity of MHD waves as a function of the propagation angle $\theta$ for $(c_{s}/\kappa )^2=0.1,1,10$. We found that this choice is not suitable for grasping the whole structure of the normal frequencies, as inferred from Fig.3. Their parameters are also limited to the stable wave region, and growth rates are never discussed. The relation to the instability is unclear, although MHD waves modified by a weak chiral magnetic effect are shown. Our choice is to fix the propagation angle first, and all normal frequencies were calculated in a wider parameter space. The growth rates were illustrated in a three-dimensional representation. Thus, we have successfully explored the mode coupling in the unstable region. We next summarize our findings. When the chiral magnetic effect is small enough, small disturbances are described by three waves, i.e., the Alfv{\'{e}}n, fast and slow MHD waves. An unstable mode originates from a coupling of the Alfv{\'{e}}n wave and one of the magneto-acoustic waves.
The coupling condition is determined by matching the phase velocities, and therefore the counterpart is the slow one for high $\beta$ plasma ($c_{s} > c_{a} $), and the fast one for low $\beta$ plasma ($c_{s} < c_{a} $). Astrophysically, the high $\beta$ plasma is relevant to the early universe, core-collapse supernovae and neutron stars, while the low $\beta$ plasma is relevant to a force-free magnetosphere. The Alfv{\'{e}}n wave plays a very important role, since the magnetic perturbations are always transverse waves, both in the pure Alfv{\'{e}}n mode and in the pure chiral mode. The unstable mode grows regardless of the propagation direction in high $\beta$ plasma, where pressure is the dominant force and does not hinder the growth. As the value of $\beta \approx (c_{s}/ c_{a})^2$ decreases, magnetic tension becomes an important force on the plasma motion. Accordingly, the wave propagation velocity depends on the direction. In a low $\beta$ regime with $c_{a} > \kappa$, three stable waves, as in ordinary MHD, appear owing to the mismatch of their phase velocities. The propagation of the unstable mode perpendicular to the background magnetic field is strongly constrained, i.e., disturbances propagate as stable waves in that direction. In this way, we found that the wave propagation and the growth of the unstable mode are significantly affected in the presence of a large-scale magnetic field. Especially in a low $\beta$ regime, the direction parallel to the field is favored for unstable wave propagation, that is, the instability grows anisotropically. The situation may be related to structure formation with a larger coherence length of the magnetic field, but this issue is a non-linear process and is beyond the scope of this paper.
\tempchapter{Proof of main theorem}\lb{npy} In this section, we first define the notion of distance. We then prove the main theorem by classifying the automorphisms of $\mathcal{G}_0$ into two classes based on their distance to the subring $\O$. We call one class shallow automorphisms and the other deep automorphisms. We prove the theorem for both of them with different methods, using our formulae in Theorem \ref{shenqi}. \tempsection{Distance and Projection} \[defi]{ Let $\gamma\in D$ and let $S\subset D$ be a compact subset. The distance of $\gamma$ to $S$ is $$ ||\gamma||_S=\min_{x\in S}\{|\gamma-x|_D\}. $$ Furthermore, we define the set of projections $$ \mathrm{Proj}_S(\gamma)=\{a\in S: |\gamma-a|_D=||\gamma||_S\}. $$ } \[lem]{\lb{xiaozuo} Let $\gamma\in D$, let $S\subset D$ be a compact subset, and let $\gamma^\prime\in\mathrm{Proj}_S(\gamma)$. \[enumerate]{ \item For any $a\in S$, $|\gamma+a|_D=\max\{|\gamma^\prime+a|_D,|\gamma-\gamma^\prime|_D\}$. \item If $\gamma^\prime\in T\subset S$, then $||\gamma||_T=||\gamma||_S$. } } \[proof]{Since $|\gamma+a|_D=|(\gamma-\gamma^\prime)+(\gamma^\prime+a)|_D$, the equality holds if $|\gamma-\gamma^\prime|_D\neq |\gamma^\prime+a|_D$. Otherwise, if $|\gamma-\gamma^\prime|_D= |\gamma^\prime+a|_D$, $$ \max\{|\gamma^\prime+a|_D,|\gamma-\gamma^\prime|_D\} = |\gamma-\gamma^\prime|_D = ||\gamma||_S\leq |\gamma+a|_D. $$ By the triangle inequality, the last sign must be an equality. To prove (2), we note that both $||\gamma||_T$ and $||\gamma||_S$ equal $|\gamma-\gamma^\prime|_D$. Therefore, $||\gamma||_T=||\gamma||_S$. } \tempsection{Lifting of shallow automorphisms} \ss{We set up\[itemize]{ \item The definition of shallow automorphisms \item The direct computation for shallow automorphisms } } We call $\gamma\in\OO D^\times$ a shallow automorphism with respect to $\O$, or an $\O$-shallow automorphism, if $$ ||\gamma||_{\OO F^\times}\geq |\pi^{-1}\mu|_D.
$$ Shallow automorphisms exist only when $|\mu|_D<1$; in this case, the U-section of $\mathcal{G}$ is a uniformizer of $\OO W$ by Lemma \ref{emptyset}. \[lem]{\lb{Anna}Let $\gamma^\prime\in \mathrm{Proj}_{\OO F^\times}(\gamma)$ and $\gamma^{\prime\prime}=\gamma-\gamma^\prime$. If $\gamma$ is a shallow automorphism, then $$\phi(\gamma^{\prime\prime})\leq\phi(\pi^{-1}\mu).$$ } \[proof]{We compare the integrands in \eqref{fyf}. Note that for any $x\in\pi\OO F$, using Lemma \ref{xiaozuo} we have $$|\gamma^{\prime\prime}-x|_D=|\gamma-\gamma^\prime-x|_D=\max\{|x|_D,||\gamma||_{\OO F^\times}\}.$$ We also have $$|\pi^{-1}\mu-x|_D=\max\{|x|_D,|\pi^{-1}\mu|_D\}.$$ The lemma follows since, by definition, $||\gamma||_{\OO F^\times}\geq |\pi^{-1}\mu|_D$. } \[lem]{\lb{fendou} If $\gamma\in\OO D$ and $\gamma\notin\OO D^\times$, we have $\phi(\pi\gamma)=q\phi(\gamma)$.} \[proof]{Since $\gamma\notin\OO D^\times$, we have $|x-\pi\gamma|_D^{-1}=q^2$ for $x\in\pi\OO F^\times$; therefore $$ \phi(\pi\gamma)=1+\int_{\pi\OO F}|x-\pi\gamma|_D^{-1}\mathrm{d} x=1+\int_{\pi\OO F^\times}q^2\mathrm{d} x+\int_{\pi\OO F}|\pi x-\pi\gamma|_D^{-1}\mathrm{d} \pi x. $$ Note that the subset $\pi\OO F^\times$ has volume $q^{-1}(1-q^{-1})$, so the above equals $$ 1+(q-1)+|\pi|_D^{-1}|\pi|_F\int_{\pi\OO F}|x-\gamma|_D^{-1}\mathrm{d} x=q+q\int_{\pi\OO F}|x-\gamma|_D^{-1}\mathrm{d} x. $$ This is exactly the value of $q\phi(\gamma)$. } From now on, $\mathrm{d} k^\times$ is always the Haar measure normalized by $\OO K^\times$, and $t$ is the U-section of $\mathcal{G}$. \[lem]{\lb{sunjunyi}Suppose $\gamma\in \O\left[\frac1\pi\right]\cap \OO D$ but $\gamma\notin \O$. Then $$ v_x(\gamma) = [\OO K^\times:\O^\times]\int_{\O^\times}|k-\gamma|_D^{-1}\mathrm{d} k^\times. $$ } \[proof]{By Proposition \ref{Galo}, we have $\gamma(t)=t^{(\gamma)}$. Therefore, \[equation]{\lb{fengyifan} v_{W}(\gamma(t)-t)=v_{W}\left(t^{(\gamma)}-t\right).
} By Lemma \ref{emptyset}, $t$ is a uniformizer of $\OO W$; therefore the expression \eqref{fengyifan} is the lower numbering of the element $(\gamma)$. The value of \eqref{fengyifan} does not depend on the choice of uniformizer. We construct another uniformizer as follows. We embed $\OO K$ into $\OO D$ such that its image is $\OO D\cap \O\left[\frac1\pi\right]$, and construct the corresponding canonical lifting $\mathcal{G}^\prime$. Let $W^\prime/\breve K$ be the field extension obtained by adding all $\pi^m$-torsions of $\mathcal{G}^\prime$. Assume here that $m$ is large enough so that $W\subset W^\prime$. Let $X$ be a $\pi^m$-torsion of $\mathcal{G}^\prime$ which is not killed by $\pi^{m-1}$. Note that $X$ is a uniformizer of $\OO{W^\prime}$ and $$\mathrm{Gal}(W^\prime/\breve{K})\cong\OO K^\times/1+\pi^m\OO K.$$ By the previous construction in \S\ref{yuanwang}, $W$ is the subfield fixed by $\O^\times$. Therefore the norm $$Y=\mathrm{N}_{W/\breve{K}}(X)=\prod_{\sigma\in\mathrm{Gal}(W^\prime/W)} X^{(\sigma)}=\prod_{k\in\O^\times/1+\pi^m\OO K}[k]_{\mathcal{G}^\prime}(X)$$ is a uniformizer of $\OO{W}$. Put \[equation]{\lb{liuxinran} f(x) = \prod_{k\in\O^\times/1+\pi^m\OO K}[k]_{\mathcal{G}^\prime}(x)-Y. } Since $f([k]_{\mathcal{G}^\prime}X)=0$ for all $k\in \O^\times/1+\pi^m\OO K$, we have the following expansion \[equation]{\lb{dazuo} f(x) = u(x)\prod_{k\in\O^\times/1+\pi^m\OO K}\left(x[-]_{\mathcal{G}^\prime}[k]_{\mathcal{G}^\prime}X\right). } Setting $x=0$ in \eqref{dazuo} and comparing valuations, we see that $u(0)$ is a unit. Plugging $x=X^{(\gamma)}$ into \eqref{liuxinran}, we obtain $$ f\left(X^{(\gamma)}\right) = \prod_{k\in\O^\times/1+\pi^m\OO K}[k]_{\mathcal{G}^\prime}(X^{(\gamma)})-Y = Y^{(\gamma)}-Y. $$ On the other hand, using the expansion \eqref{dazuo}, we have $$ f\left(X^{(\gamma)}\right) = u(X^{(\gamma)})\prod_{k\in\O^\times/1+\pi^m\OO K}[\gamma-k]_{\mathcal{G}^\prime}X.
$$ Since $v_{W^\prime}([\gamma-k]_{\mathcal{G}^\prime}X)=|\gamma-k|_D^{-1}$, $v_{W^\prime}\left(u(X^{(\gamma)})\right)=0$, and $v_{W^\prime}=[W^\prime:W]v_{W}$, we have $$ v_{W}\left(Y^{(\gamma)}-Y\right) = \frac1{[W^\prime:W]}\sum_{k\in\O^\times/1+\pi^m\OO K}|\gamma-k|_D^{-1}. $$ Noting that $[W^\prime:W]=[\O^\times:1+\pi^m\OO K]$, we can write this sum as \[equation]{\lb{sure} v_{W}\left(Y^{(\gamma)}-Y\right) = \int_{\O^\times}|\gamma-x|_D^{-1}\mathrm{d} x^\times, } where $\mathrm{d} x^\times$ is normalized by $\O^\times$, so that $\mathrm{d} x^\times = [\OO K^\times:\O^\times]\mathrm{d} k^\times$. This proves the lemma.} Now we generalize this formula to all $\O$-shallow automorphisms. \[lem]{\lb{jiaqi} If $\gamma$ is an $\O$-shallow automorphism, let $\gamma^\prime\in\mathrm{Proj}_{\OO F^\times}(\gamma)$ and $\gamma^{\prime\prime}=\gamma-\gamma^\prime$. Then $$[\OO K^\times:\O^\times]\int_{\O^\times}|\gamma-k|_D^{-1}\mathrm{d} k^\times = \frac q{q-1}\phi(\gamma^{\prime\prime})-\frac2{q-1},$$ and this value is smaller than $\phi(\mu)$. } \[proof]{ Let $\mathrm{d} x^\times$ be the Haar measure normalized by $\O^\times$. Then $\mathrm{d} x^\times=[\OO K^\times:\O^\times]\mathrm{d} k^\times$. Write $\O^\times = \OO F^\times\oplus \mu\OO F$ and decompose $x=a+b\mu$; then $\mathrm{d} x^\times=\mathrm{d} a^\times\mathrm{d} b$, where $\mathrm{d} a^\times$ and $\mathrm{d} b$ are Haar measures for $\OO F^\times$ and $\OO F$.
Since $\gamma$ is $\O$-shallow, we have, for any $a\in\OO F^\times$, $$|\gamma-a|_D\geq ||\gamma||_{\OO F^\times}>|\mu|_D.$$ Using the triangle inequality, the equation \eqref{sure} can be written as \[equation]{\lb{whoisnext} \int_{\O^\times}|\gamma-a-b\mu|_D^{-1}\mathrm{d} x^\times = \int_{\OO F}\int_{\OO F^\times}\left|\gamma-a\right|_D^{-1}\mathrm{d} a^\times\mathrm{d} b=\int_{\OO F^\times}\left|\gamma-a\right|_D^{-1}\mathrm{d} a^\times. } By Lemma \ref{xiaozuo}, \[equation]{ \int_{\OO F^\times}\left|\gamma-a\right|_D^{-1}\mathrm{d} a^\times=\int_{\OO F^\times}\min\{|\gamma^{\prime\prime}|_D^{-1},|(\gamma^\prime-a)|_D^{-1}\}\mathrm{d} a^\times. } Set $\OO F^{\times\times}=\OO F^\times\smallsetminus(\gamma^\prime+\pi\OO F)$. We find $|\gamma^\prime-a|_D=1$ on $\OO F^{\times\times}$; therefore the integral above equals $$ \mathrm{Vol}(\OO F^{\times\times})+\int_{\gamma^\prime+\pi\OO F}|\gamma^{\prime\prime}+\gamma^\prime-a|_D^{-1}\mathrm{d} a^\times. $$ Noting that $\mathrm{Vol}(\OO F^{\times\times})=\frac{q-2}{q-1}$ and $\mathrm{d} a^\times = \frac{q}{q-1}\mathrm{d} a$, and changing the variable $a\mapsto a+\gamma^\prime$, the integral simplifies to $$ \frac{q-2}{q-1}+\frac q{q-1}\int_{\pi\OO F}|\gamma^{\prime\prime}-a|_D^{-1}\mathrm{d} a=\frac q{q-1}\phi(\gamma^{\prime\prime})-\frac2{q-1}. $$ This value is less than $\phi(\mu)$ because, by Lemma \ref{Anna} and Lemma \ref{fendou}, $$ \frac q{q-1}\phi(\gamma^{\prime\prime})-\frac2{q-1}\leq\frac q{q-1}\phi(\pi^{-1}\mu)-\frac2{q-1}<\frac {1}{q-1}\phi(\mu)\leq\phi(\mu). $$ The lemma follows.} \[thm]{\lb{syr}Let $\gamma$ be an $\O$-shallow automorphism. Then $$ v_x(\gamma) = [\OO K^\times:\O^\times]\int_{\O^\times}|k-\gamma|_D^{-1}\mathrm{d} k^\times. $$ } \[proof]{ Let $K_2=F[\gamma]$ and $\O_2=\OO F[\pi^m\gamma]$ for a large enough $m$ such that $\O_2\subset \O+\mu\OO D$. Let $W_2$ be the abelian extension of $K$ corresponding to $\O_2^\times$, with ring of integers $\OO {W_2}$.
Let $\mathcal{G}_2$ be a formal $\O_2$-module over $\OO {W_2}$, let $t_2$ be its U-section, and set $\mu_2=\pi^m\gamma$. Let $y_2$ be the map constructed in \eqref{xiangxiang} for $\mathcal{G}_2$, let $x_2$ be the graph of $y_2$, and take $\gamma^\prime\in\mathrm{Proj}_{\OO F^\times}(\gamma)$ and $\gamma^{\prime\prime}=\gamma-\gamma^\prime$. Since $\gamma\in \OO {K_2}^\times$, by Lemma \ref{sunjunyi} and Lemma \ref{jiaqi}, we have $$ v_{x_2}(\gamma)=\frac q{q-1}\phi(\gamma^{\prime\prime})-\frac2{q-1}. $$ By Lemma \ref{jiaqi}, this value is less than $\phi(\mu)$. Since $|\mu_2|_D=|\pi^m\gamma|_D<|\mu|_D<1$, by Theorem \ref{shenqi} we have $\langle y,y_2\rangle=\phi(\mu)$. Therefore $v_{x_2}(\gamma)<\langle y,y_2\rangle$, and by Lemma \ref{landuo} and Lemma \ref{jiaqi}, we have $$ v_{x}(\gamma)=v_{x_2}(\gamma)=\frac q{q-1}\phi(\gamma^{\prime\prime})-\frac2{q-1}=[\OO K^\times:\O^\times]\int_{\O^\times}|k-\gamma|_D^{-1}\mathrm{d} k^\times. $$ This proves the theorem, and in particular the main theorem for $\O$-shallow automorphisms.} \tempsection{Lifting of deep automorphisms} We call $\gamma\in\OO D^\times$ an $\O$-deep automorphism if $$ ||\gamma||_{\OO F^\times}< |\pi^{-1}\mu|_D. $$ \[lem]{\lb{canojiu} Suppose $\gamma_1,\gamma_2\in\OO D^\times$ and $||\gamma_1||_{\OO F^\times}>||\gamma_2||_{\OO F^\times}$. Then $||\gamma_1\gamma_2||_{\OO F^\times}\geq||\gamma_1||_{\OO F^\times}$.} \[proof]{Let $\gamma_1^\prime\in\mathrm{Proj}_{\OO F^\times}(\gamma_1)$ and $\gamma_2^\prime\in\mathrm{Proj}_{\OO F^\times}(\gamma_2)$. For any $a\in\OO F^\times$, we can write $$ \gamma_1\gamma_2-a=\gamma_1\gamma_2^\prime-a+\gamma_1(\gamma_2-\gamma_2^\prime). $$ On one hand, since $\gamma_2^\prime\in\OO F^\times$, $$ |\gamma_1\gamma_2^\prime-a|_D=|\gamma_1-\gamma_2^{\prime-1}a|_D\geq ||\gamma_1||_{\OO F^\times}.
$$ On the other hand, since $\gamma_1\in\OO D^\times$ and $||\gamma_2||_{\OO F^\times}< ||\gamma_1||_{\OO F^\times}$, $$ |\gamma_1(\gamma_2-\gamma_2^\prime)|_D=|\gamma_2-\gamma_2^\prime|_D=||\gamma_2||_{\OO F^\times}<||\gamma_1||_{\OO F^\times}. $$ So $|\gamma_1\gamma_2-a|_D\geq ||\gamma_1||_{\OO F^\times}$ for any $a\in\OO F^\times$. } \[cor]{\lb{jiandu} Suppose $\gamma_1$ and $\gamma_2$ are $\O$-shallow and $\O$-deep automorphisms, respectively. Then $\gamma_1\gamma_2$ is an $\O$-shallow automorphism. } \[proof]{ We have $||\gamma_2||_{\OO F^\times}<|\pi^{-1}\mu|_D\leq ||\gamma_1||_{\OO F^\times}$; therefore $||\gamma_1\gamma_2||_{\OO F^\times}\geq|\pi^{-1}\mu|_D$ by Lemma \ref{canojiu}.} We will compute $v_x(\gamma)$ from $v_y(\gamma)$ for $\O$-deep automorphisms. Now let $t$ be the U-section of $\mathcal{G}$, let $v_W$ be the normalized valuation in $W$, and let $g(T)\in\CO K[T]$ be the minimal polynomial of $t$ over $\breve{K}$. \begin{lem}\lb{paobu} We have \begin{equation}\lb{xingxing} v_{z}(\gamma)=\sum_{k\in\OO K^\times/\O^\times}v_{x}(k\gamma). \end{equation} \end{lem} \begin{proof} Since $z$ is a closed embedding, we know that $v_{z}(\gamma)$ equals the length of $$\frac{\CO K[[U]]}{\left(g(U)\right)+\left(g(\gamma(U))\right)}\cong \OO W/\(g(\gamma(t))\).$$ Therefore $ v_{z}(\gamma) = v_W\(g(\gamma(t))\) $. In contrast, $v_x(\gamma)$ equals the length of $$\frac{\OO W[[U]]}{(U-t)+(\gamma(U)-t)}\cong \OO W/\left(\gamma(t)-t\right).$$ Therefore, $ v_{x}(\gamma) = v_W(\gamma(t)-t) $. Note that $g(T)$ is the product of all $(T-t^{(k)})$ as $k$ runs through $\OO K^\times/\O^\times$; therefore, $$ v_z(\gamma)=v_W\(g(\gamma(t))\)=\sum_{k\in\OO K^\times/\O^\times} v_W(\gamma(t)-t^{(k)})=\sum_{k\in\OO K^\times/\O^\times}v_{x}(k^{-1}\gamma), $$ where the last equality follows from $v_W(\gamma(t)-t^{(k)})= v_W(\gamma(t)^{(k^{-1})}-t)=v_W(k^{-1}\gamma(t)-t)=v_{x}(k^{-1}\gamma)$. \end{proof} \tempsection{Computation of $v_{z}$ if $K/F$ is unramified} If $K/F$ is unramified, then $v_{y}=v_{z}$.
Therefore \[equation]{\lb{ganqing} v_z(\gamma)= [\OO K^\times:\O^\times]\int_{\OO K^\times}|k^{-1}\gamma-1|_D^{-1}\mathrm{d} k^\times. } \tempsection{Computation of $v_{z}$ if $K/F$ is ramified} If $K/F$ is ramified, let $\sigma\in \OO D^\times$ be an element in the normalizer of $\O^\times$ but not in the centralizer of $\O^\times$. In other words, $\sigma\in \OO D^\times\cap D^-$. \begin{lem}\lb{ximeng} We have \begin{equation}\lb{jieren} v_{y}(\gamma)=v_{z}(\gamma)+v_{\a}(\gamma). \end{equation} \end{lem} \begin{proof}Since the minimal polynomial of $t$ over $\CO F$ is $g(T)\overline g(T)$, the sheaf of ideals for $y$ is generated by $g(U)\overline g(U)$, that for $z$ by $g(U)$, and that for $\a$ by $\overline{g}(U)$. So \begin{equation}\lb{shala} v_{y}(\gamma) = v_W\(g(\gamma(t))\overline{g}(\gamma(t))\)=v_W\(g(\gamma(t))\)+v_W\(\overline{g}(\gamma(t))\)=v_{z}(\gamma)+v_{\a}(\gamma). \end{equation} This lemma follows.\end{proof} \[lem]{\lb{shadoumei}We have $v_{\overline{z}}(\gamma)=v_z(\gamma\sigma)$.} \[proof]{ By Lemma \ref{xiebuwan}, $v_y(\sigma)=\boldsymbol{\epsilon}_F[\OO K:\O^\times]^2(P_s+P_d)=\infty$. We claim that $v_z(\sigma)<\infty$: if not, by formula \eqref{xingxing}, there exists $k\in\OO K^\times$ such that $v_x(k\sigma)=\infty$. Therefore $k\sigma\in \Aut(\mathcal{G})$, so $\pi+k\sigma\in\Aut(\mathcal{G})$, which implies $v_y(\pi+k\sigma)>v_x(\pi+k\sigma)=\infty$. But $\pi+k\sigma\notin D^+\cup D^-$, contradicting Lemma \ref{xiebuwan}. Therefore $v_z(\sigma)<\infty$. Since $\infty=v_y(\sigma)=v_z(\sigma)+v_\a(\sigma)$ but $v_z(\sigma)<\infty$, we must have $v_\a(\sigma)=\infty$. This implies $$ v_W(\overline g(\sigma(t)))=v_\a(\sigma)=\infty. $$ So $\overline g(\sigma(t))=0$. Then the map $t\mapsto \sigma(t)$ lifts the non-trivial element of $\mathrm{Gal}(\breve K/\breve F)$ to $\mathrm{Gal}(W/\breve F)$, and $$ v_\a(\gamma)=v_W(\overline g(\gamma(t)))=v_W(g(\gamma(\sigma(t))))=v_z(\gamma\sigma).
$$ The lemma follows.} \[cor]{\lb{zhaonvyou} Suppose $K/F$ is ramified, and let $\epsilon\in\OO D^\times$ be such that $||\epsilon||_{\OO F^\times}=1$. Then $$ v_x(\epsilon) = 1. $$ } \[proof]{If $\O\neq \OO K$, then $\epsilon$ is an $\O$-shallow automorphism; by the formula in Theorem \ref{syr}, the integrand is $|\epsilon-k|_D^{-1}=1$ for all $k\in\O^\times$, so $v_x(\epsilon)=1$. If $\O=\OO K$, let $\mu_2=\pi\mu$ and let $\mathcal{G}_2$ be the quasi-canonical lifting as a formal $\OO F[\mu_2]$-module. Noticing that $$|\mu-a|_D^{-1}=|\mu|_D^{-1}=q$$ for all $a\in\pi\OO F$, we have $\phi(\mu)=1+q^{-1}q=2$. Since $||\epsilon||_{\OO F^\times}=1$, $\epsilon$ is a shallow automorphism of $\mathcal{G}_2$. Let $y_2$ be the corresponding map in \eqref{xiangxiang} and $x_2$ its graph. Noticing that $|\epsilon-a-b\pi\mu|_D=|\epsilon-a|_D=1$ for any $x=a+b\pi\mu\in \O_2^\times$, we have $$ v_{x_2}(\epsilon) = [\OO{K_2}^\times:\O_2^\times]\int_{\O_2^\times}|\epsilon-k|_D^{-1}\mathrm{d} k^\times = 1. $$ Since $|\pi\mu|_D<|\mu|_D<1$, we have $\langle y,y_2 \rangle=\phi(\mu)=2$. By Lemma \ref{landuo}, $$ v_x(\epsilon)=v_{x_2}(\epsilon) = 1. $$ The corollary follows.} We now compute $v_{\a}(\gamma)$. Since $k\gamma\in\OO K^\times$ and $K/F$ is a ramified extension, we have $||k\gamma||_{\OO F^\times}<1$. Therefore $||\sigma||_{\OO F^\times}=1$ implies $||k\gamma\sigma||_{\OO F^\times} = 1$ by Lemma \ref{canojiu}. By Corollary \ref{zhaonvyou}, we have $$ v_x(k\gamma\sigma) = 1 $$ for all $k\in\OO K^\times/\O^\times$. By Equation \eqref{xingxing} and Lemma \ref{shadoumei}, we have $$ v_\a(\gamma)=v_z(\gamma\sigma)=\sum_{k\in\OO K^\times/\O^\times}v_x(k\gamma\sigma) = [\OO K^\times:\O^\times]. $$ Therefore, if $K/F$ is ramified, $v_z(\gamma)$ equals \[equation*]{\lb{lianai} v_y(\gamma)-v_\a(\gamma)=[\OO K^\times:\O^\times]\left(1+\int_{\O^\times}|\gamma-k|_D^{-1}\mathrm{d} k^\times\right)-[\OO K^\times:\O^\times] = [\OO K^\times:\O^\times]\int_{\O^\times}|\gamma-k|_D^{-1}\mathrm{d} k^\times.
} We find that the expression for $v_z(\gamma)$ in the ramified case is the same as in the unramified case \eqref{ganqing}. \tempsection{Proof of main Theorem} Since $v_z(\gamma)$ has the same expression in either case, we compute $v_x$ uniformly. If $k\in\OO K^\times\smallsetminus\O$, then $k$ is an $\O$-shallow automorphism, and so is $k\gamma$ by Corollary \ref{jiandu}. Therefore, $v_x(\gamma)$ equals $$ v_z(\gamma)-\sum_{\substack{k\in\OO K^\times/\O^\times \\ k\notin \O^\times}}v_x(k^{-1}\gamma)= [\OO K^\times:\O^\times]\left(\int_{\OO K^\times}|k^{-1}\gamma-1|_D^{-1}\mathrm{d} k^\times-\int_{\OO K^\times\smallsetminus\O^\times}|k^{-1}\gamma-1|_D^{-1}\mathrm{d} k^\times\right). $$ This proves $$ v_x(\gamma)=[\OO K^\times:\O^\times]\int_{\O^\times}|\gamma-k|_D^{-1}\mathrm{d} k^\times. $$ This establishes the theorem for $\O$-deep automorphisms; the case of $\O$-shallow automorphisms was proved in Theorem \ref{syr}. \tempsection{Results} Let $\mathrm{d} x$ be the normalized Haar measure for $\OO F$. For any $\gamma\in\End(\GG_0)$, let \begin{equation}\lb{fyf} \phi(\gamma)=\int_{\pi\OO F}|x-\gamma|_D^{-1}\mathrm{d} x+1. \end{equation} Let $y_1,y_2$ be the maps as in \eqref{xiangxiang} corresponding to quasi-canonical liftings of formal $\OO 1,\OO 2$-modules $\mathcal{G}_1$, $\mathcal{G}_2$ over $\OO{W_1}$ and $\OO{W_2}$, with U-sections $t_1$, $t_2$. Let $\mu_1,\mu_2$ be generators of $\OO 1$, $\OO 2$ with smallest absolute value. Choose uniformizers $\varpi_1,\varpi_2$ in $\OO{W_1}$ and $\OO{W_2}$. Let $\pi_D$ be a uniformizer of $\OO D$. \[thm]{\lb{shenqi} We have \[equation]{\lb{qiguaizixi} \langle y_1,y_2\rangle = \[cases]{\phi(\mu_1)&\text{ if } 1>|\mu_1|_D>|\mu_2|_D;\\1&\text{ if } 1=|\mu_1|_D>|\mu_2|_D.} } Let $\mathrm{d} k$ be the normalized Haar measure of $\OO K^\times$.
For any $\gamma\in\End(\GG_0)$, we have \[equation]{\lb{xishoujian} v_y(\gamma)=\[cases]{\infty&\text{ if }\gamma\in \OO D^\times\cap(D^+\cup D^-);\\ [\OO K^\times:\O^\times]\left(1+\int_{\OO K^\times}|k-\gamma|_D^{-1}\mathrm{d} k\right)&\text{ if }K/F\text{ ramified, }\gamma\in\OO K^\times+\pi_D\OO D;\\ [\OO K^\times:\O^\times]\int_{\OO K^\times}|k-\gamma|_D^{-1}\mathrm{d} k&\text{ if }K/F\text{ unramified and }\OO K\neq\O;\\ \frac{r+1}2&\text{ if }K/F\text{ unramified and }\OO K=\O. }} Here $r=v_D(\gamma_-)$, where $\gamma_-=(\overline\mu-\mu)^{-1}(\gamma\mu-\mu\gamma)$. The group $\OO D^\times\cap(D^+\cup D^-)$ is the normalizer of $\OO K^\times$ in $\OO D^\times$, where $D^+=K\subset D$ and $D^-=\{\gamma\in D:\gamma\mu=\overline\mu\gamma\}$. } The value $\langle y_1,y_2 \rangle$ leads to the following observation. \[lem]{\lb{emptyset} The map $y$ in \eqref{xiangxiang} is a closed embedding. } \[proof]{We need to show that the induced map \[equation]{\lb{tina} \CO F[[U]]\longrightarrow\OO W = \OO y } is onto. If $|\mu|_D=1$, then $W=\breve F$, and this map is clearly surjective. When $|\mu|_D<1$, let $y_0$ be the map in \eqref{xiangxiang} corresponding to $\GG_K$, and $t_0$ its U-section. By Theorem \ref{shenqi}, we have $$ \length(\OO{y_0}\otimes_\mathcal{M}\OO y)=1. $$ Since $\CO F[[U]]\longrightarrow\OO{y_0}$ is surjective, and letting $t$ be the U-section of $\mathcal{G}$, the above expression implies $$ \length\left(\OO y/(t-t_0)\right)=1. $$ Since $W/\breve F$ is a ramified extension, $t_0$ is not a uniformizer of $W$; therefore $t$ is a uniformizer, and the image of $U$ is a generator of $\OO W$. The map \eqref{tina} is surjective. This proves the lemma. } The value $\langle y_1,y_2 \rangle$ will also help us to determine $v_x(\gamma)$ in the following sense. \[lem]{\lb{landuo}Let $x_1$, $x_2$ be the graphs of $y_1,y_2$. If $v_{x_1}(\gamma)<\langle y_1,y_2 \rangle$, then $$ v_{x_1}(\gamma)=v_{x_2}(\gamma). $$ } \[proof]{For $i=1,2$, $y_i$ is a closed embedding.
The natural maps $$ \map{\iota_i}{\OO{W_i}}{\OO{W_1}\otimes_\mathcal{M} \OO{W_2}} $$ are surjective. This induces an isomorphism of the coimages of $\iota_1$ and $\iota_2$ $$ \map{\alpha}{\OO{W_1}/\varpi_1^a}{\OO{W_2}/\varpi_2^a} $$ for any $a\leq\length(\OO 1\otimes_\mathcal{M}\OO 2) = \langle y_1,y_2 \rangle$ such that $\alpha(\overline{t_1})=\overline{t_2}$. Since $\overline{t_1},\overline{t_2}$ are U-sections of $\mathcal{G}_1\otimes\OO{W_1}/\varpi_1^a$ and $\mathcal{G}_2\otimes\OO{W_2}/\varpi_2^a$, \[equation]{\lb{xihuantina} \End(\mathcal{G}_1\otimes\OO{W_1}/\varpi_1^a) = \End(\mathcal{G}_2\otimes\OO{W_2}/\varpi_2^a) } as subrings of $\End(\GG_0)$. Let $n=v_{x_1}(\gamma)$; then $$ \gamma\in \End(\mathcal{G}_1\otimes\OO 1/\varpi_1^n)\smallsetminus\End(\mathcal{G}_1\otimes\OO 1/\varpi_1^{n+1}). $$ Since $n+1\leq\langle y_1,y_2 \rangle$, by \eqref{xihuantina} $$ \gamma\in \End(\mathcal{G}_2\otimes\OO 2/\varpi_2^n)\smallsetminus\End(\mathcal{G}_2\otimes\OO 2/\varpi_2^{n+1}). $$ This implies $v_{x_2}(\gamma)=n=v_{x_1}(\gamma)$.} \tempsection{The intersection formula} We prove Theorem \ref{shenqi} by computation. Before further elaboration, we use $\mathrm{d} g$ and $\mathrm{d} k^\times$ for the Haar measures on the groups $\GL_2(\OO F)$ and $\OO K^\times$, normalized so that $\GL_2(\OO F)$ and $\OO K^\times$ have volume $1$ respectively. For any subsets $A,B$ of those groups, by $[A:B]$ we mean $$ [A:B] = \mathrm{Vol}(A)/\mathrm{Vol}(B). $$ By an abuse of notation, we reserve $[W:\breve{K}]$ to denote the degree of the field extension $W/\breve{K}$. Let $[\OO y]$ be the class of $\OO y$ in the $\mathbb{Q}$-coefficient K-group of $\mathcal{M}$. To offset the influence of $W$, we normalize it by \[equation]{\lb{yaofabiao} \delta[\varphi,\tau]=\frac1{[W:\breve K]}\left[\OO y\right]. } Let $(\varphi_1,\tau_1)$ and $(\varphi_2,\tau_2)$ be equi-height pairs for $\mathcal{G}_1,\mathcal{G}_2$.
The intersection formula in \cite{qirui2017love} gives \begin{equation}\label{gongshizi} \chi(\delta[\varphi_1,\tau_1]\otimes_{\mathcal{M}}\delta[\varphi_2,\tau_2])=\frac{\zeta_{K_1}(1)\cdot\zeta_{K_2}(1)}{|\Delta_{K_1/F}|_F\cdot \zeta_F(1)\cdot\zeta_F(2)}\int_{\GL_2(\OO F)}|R(g)|_D^{-1}\mathrm{d} g. \end{equation} Here \begin{itemize} \item $\zeta_L(s)=(1-q_L^{-s})^{-1}$, where $q_L$ is the residue cardinality of $\OO L$ for $L=F,K_1,K_2$. \item $\Delta_{K_1/F}$ is the discriminant of $K_1/F$. \item $\chi(\mathcal{F},\mathcal{G})$ represents the following number for coherent sheaves $\mathcal{F}$,$\mathcal{G}$ over $\mathcal{M}$: $$ \chi(\mathcal{F},\mathcal{G})=\sum_{i=0}^{\infty}\length_{\CO F}\(\mathrm{Tor}^{(i)}_{\mathcal{M}}(\mathcal{F},\mathcal{G})\). $$ \item The function $R(g)\in\OO D$, depending on $\varphi_1,\tau_1,\varphi_2,\tau_2$, is given by $$ R(g)=\begin{pmatrix}0&1\end{pmatrix}\begin{pmatrix}1&1\\\mu_1&\overline{\mu_1}\end{pmatrix}^{-1}\varphi_1^{-1}g\varphi_2\mm11{\mu_2}{{\overline\mu}_2}\cc10. $$ \end{itemize} We remark here that $\mathcal{M}$ is regular of dimension 2 and $\delta[\varphi,\tau]$ is of dimension 1. Its higher Tor groups vanish (see Lemma 4.3 of \cite{qirui2017love} or \cite[\href{http://stacks.math.columbia.edu/tag/0B01}{Tag 0B01}]{stacks-project}). So we have \[equation]{\lb{zhaonvpengyou} \chi(\delta[\varphi_1,\tau_1],\delta[\varphi_2,\tau_2])=\length_{\CO F}\(\delta[\varphi_1,\tau_1]\otimes_{\mathcal{M}}\delta[\varphi_2,\tau_2]\). } Notice that for $i=1,2$, $$[W_i:\breve{K}]\zeta_{K_i}(1)=[\OO{K_i}^\times: \O_i^\times][\OO{K_i}:\OO{K_i}^\times] = [\OO{K_i}:\O_i^\times].$$ Furthermore, write $$ \boldsymbol{\epsilon}_F=\zeta_F(1)^{-1}\zeta_F(2)^{-1}=(1-q^{-1})(1-q^{-2}).
$$ By \eqref{gongshizi},\eqref{zhaonvpengyou}, we have \begin{equation}\lb{halu} \langle y_1,y_2\rangle = \boldsymbol{\epsilon}_F[\OO{K_1}:\O_1^\times][\OO{K_2}:\O_2^\times]|\Delta_{K_1/F}|_F^{-1}\int_{\GL_2(\OO F)}|R(g)|_D^{-1}\mathrm{d} g. \end{equation} For any $g\in\GL_2(\OO F)$ we will use $g_{ij}$ to denote the $i,j$-th entry of $g$, and we define the following subsets \[equation]{\lb{kunnan} \Gamma(a)=\{g\in\GL_2(\OO F)| g_{21}\in a\OO F\}; } \[equation]{\lb{sjy} \Omega(a)=\{g\in\GL_2(\OO F)| g_{21}\in a\OO F^\times\}. } Now we compute $\langle y_1,y_2\rangle$ when $|\mu_1|_D>|\mu_2|_D$ by the formula \eqref{halu}. \tempsection{Computation for $\langle y_1,y_2\rangle$} Let $\gamma_0=\varphi_1^{-1}\varphi_2$. Write $\mu_2^{(\gamma_0)}:=\gamma_0\mu_2\gamma_0^{-1}$. We have \begin{equation}\label{aru} |R(g)|_D^{-1}=|(\mu_1-\overline{\mu}_1)^2|_F\cdot|g_{21}+g_{11}\overline{\mu_1}+g_{22}\mu_2^{(\gamma_0)}+g_{12}\overline{\mu_1}\mu_2^{(\gamma_0)}|_D^{-1}|\gamma_0|_D^{-1}. \end{equation} Since either $g_{21}$ or $g_{11}$ is a unit, together with $|\mu_1|_D>|\mu_2|_D$, we have $$ |g_{21}+g_{11}\overline{\mu}_1|_D\geq |\mu_1|_D>|g_{22}\mu_2^{(\gamma_0)}+g_{12}\overline{\mu}_1\mu_2^{(\gamma_0)}|_D. $$ Therefore \begin{equation}\label{budeng} |R(g)|_D^{-1}=|\gamma_0|_D^{-1}|\mu_1-\overline{\mu}_1|_D\cdot|g_{21}+g_{11}\overline{\mu}_1|_D^{-1}.\end{equation} We can write $$ \int_{\GL_2(\OO F)}|R(g)|_D^{-1}\mathrm{d} g=|\gamma_0|_D^{-1}|\mu_1-\overline{\mu}_1|_D\left(\int_{\Omega(1)} |g_{21}+g_{11}\overline\mu_1|_D^{-1}\mathrm{d} g + \int_{\Gamma(\pi)}|g_{21}+g_{11}\overline\mu_1|_D^{-1}\mathrm{d} g\right). $$ Note that $|g_{21}+g_{11}\overline\mu_1|_D=1$ over $\Omega(1)$, and $\mathrm{Vol}(\Omega(1))=(1+q^{-1})^{-1}$. Let $\mathrm{d} g_{11}^\times,\mathrm{d} g_{21},\mathrm{d} g_{22}^\times$ be Haar measures of $\OO F^\times$, $\OO F$, $\OO F^\times$ respectively; then the measure $q\mathrm{d} g_{11}^\times\mathrm{d} g_{21}\mathrm{d} g_{22}^\times$ is the normalized Haar measure for $\Gamma(\pi)$.
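The volume facts just used, $\mathrm{Vol}(\Omega(1))=(1+q^{-1})^{-1}$ and $\mathrm{Vol}(\Gamma(\pi))=(1+q^{-1})^{-1}q^{-1}$, can be sanity-checked by direct counting over the residue field. The following Python sketch (not part of the argument; the function name is ours) enumerates $\GL_2(\mathbb{F}_p)$ for a small prime $p$ and compares the fraction of matrices with $g_{21}\equiv 0$ against the predicted volumes:

```python
from fractions import Fraction
from itertools import product

def gl2_fractions(p):
    """Return (fraction of GL_2(F_p) with g_21 = 0 mod p,
               fraction of GL_2(F_p) with g_21 a unit) as exact rationals.
    These model Vol(Gamma(pi)) and Vol(Omega(1)) respectively."""
    total = in_gamma = 0
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p != 0:   # matrix invertible over F_p
            total += 1
            if c == 0:                 # g_21 lies in pi * O_F
                in_gamma += 1
    return Fraction(in_gamma, total), Fraction(total - in_gamma, total)

q = 3
vol_gamma_pi, vol_omega_1 = gl2_fractions(q)
qf = Fraction(q)
assert vol_gamma_pi == 1 / ((1 + 1 / qf) * qf)   # (1+q^{-1})^{-1} q^{-1}
assert vol_omega_1 == 1 / (1 + 1 / qf)           # (1+q^{-1})^{-1}
```

The counting only sees the reduction modulo $\pi$, which suffices here because both subsets are defined by a congruence condition on $g_{21}$ modulo $\pi$.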
So we have $$ \mathrm{d} g = q\mathrm{Vol}\left(\Gamma(\pi)\right)\mathrm{d} g_{11}^\times\mathrm{d} g_{21}\mathrm{d} g_{22}^\times = \frac1{1+q^{-1}}\mathrm{d} g_{11}^\times\mathrm{d} g_{21}\mathrm{d} g_{22}^\times. $$ Furthermore, by the change of variable $a=-g_{21}g_{11}^{-1}$ in the integrand over $\Gamma(\pi)$, we write $$ \frac1{|\gamma_0|_D^{-1}|\mu_1-\overline{\mu}_1|_D}\int_{\GL_2(\OO F)}|R(g)|_D^{-1}\mathrm{d} g=\frac1{1+q^{-1}} + \frac1{1+q^{-1}}\int_{\pi\OO F}|a-\overline\mu_1|_D^{-1}\mathrm{d} a=\frac{\phi(\mu_1)}{1+q^{-1}}. $$ Therefore we have \[equation]{ \langle y_1,y_2\rangle =(1-q^{-1})(1-q^{-2})[\OO K:\OO 1^\times][\OO K:\OO 2^\times]|\gamma_0|_D^{-1}|\mu_1-\overline\mu_1|_D|\Delta_{K_1/F}|_F^{-1}\frac{\phi(\mu_1)}{1+q^{-1}}. } Since $|\gamma_0|_D^{-1}=|\mu_2\mu_1^{-1}|_D$, $|1-\mu_1^{-1}\overline\mu_1|_D=|\Delta_{K_1/F}|_F$, $[\OO K:\OO 2^\times]|\mu_2|_D=(1-q^{-1})^{-1}$, we have $$ \langle y_1,y_2\rangle = \frac{[\OO K:\OO 1^\times](1-q^{-2})}{1+q^{-1}}\phi(\mu_1). $$ If $|\mu_1|_D=1$, then $[\OO K:\OO 1^\times]=(1-q^{-2})^{-1}$, $|a-\mu_1|_D=1$ for $a\in\pi\OO F$, therefore $\phi(\mu_1)=1+q^{-1}$, $$ \langle y_1,y_2\rangle = \frac{\phi(\mu_1)}{1+q^{-1}}=1. $$ Otherwise, if $|\mu_1|_D<1$, then $[\OO K:\OO 1^\times]=(1-q^{-1})^{-1}$, and in this case $$ \langle y_1,y_2\rangle = \phi(\mu_1). $$ This finishes the calculation of the formula. \tempsection{Computation for $v_y(\gamma)$} Note that $v_y(\gamma)=\langle y,\gamma y\rangle$. In this case the formula \eqref{halu} specializes to $K_1=K_2$, $\mu_1=\mu_2=\mu$, $\varphi_1^{-1}\varphi_2=\gamma$, and we have \[equation]{\lb{kuaidian} v_y(\gamma)=\boldsymbol{\epsilon}_F[\OO K:\O^\times]^2|\Delta_{K/F}|_F^{-1}\int_{\GL_2(\OO F)}\left|\rr10\mm11\mu{\overline\mu}^{-1}\gamma g\mm11\mu{\overline\mu}\cc01\right|_D^{-1}\mathrm{d} g. } We first simplify the integrand \[equation]{\lb{addiction} I_\gamma(g):=\left|\rr10\mm11{\overline\mu}\mu^{-1}\gamma g\mm11{\overline\mu}\mu\cc01\right|_D^{-1}.
} This equals \[equation]{\lb{lianai} I_\gamma(g)=|\overline\mu-\mu|_D|\mu\gamma(g_{11}+g_{12}\mu)-\gamma(g_{21}+g_{22}\mu)|_D^{-1}. } If $|g_{21}|_D>|\mu|_D$, we know $I_\gamma(g)=|\overline\mu-\mu|_D|g_{21}|_D^{-1}$ by the strong triangle inequality. In other words, \[equation]{\lb{jingshen} I_\gamma(g)=|\overline\mu-\mu|_D|g_{21}|_D^{-1}\qquad\text{for }g\in\Omega(\pi^n), |\pi^n|_D>|\mu|_D. } Let $u$ be the maximal integer such that $|\pi^{u-1}|_D>|\mu|_D$; then $u=w$ if $K/F$ is unramified or $u=w+1$ otherwise. We decompose the integral \eqref{kuaidian} for $v_y(\gamma)$ into two parts. \[equation]{\lb{parts} \frac{v_y(\gamma)}{\boldsymbol{\epsilon}_F[\OO K:\O^\times]^2|\Delta_{K/F}|_F^{-1}}=\left(\sum_{0\leq n<u}\int_{\Omega(\pi^n)}|\overline{\mu}-\mu|_D|\pi^n|_D^{-1}\mathrm{d} g\right)+\int_{\Gamma(\pi^u)}I_\gamma(g)\mathrm{d} g. } Denote the former and the latter parts by $P_s$ and $P_d$; in other words, \[equation]{\lb{sl} P_s=\sum_{0\leq n<u}\int_{\Omega(\pi^n)}|\overline{\mu}-\mu|_D|\pi^n|_D^{-1}\mathrm{d} g, } \[equation]{\lb{dp} P_d=\int_{\Gamma(\pi^u)}I_\gamma(g)\mathrm{d} g. } \tempsubsection{Computation of $P_s$} The group $\GL_2(\OO F)$ acts transitively on $\mathbb{P}^1(\OO F/\pi^n)$, with $\Gamma(\pi^n)$ the stabilizer of the point representing the submodule $\OO F/\pi^n\oplus\{0\}\subset\left(\OO F/\pi^n\right)^2$. So when $n\geq 1$ the volume of $\Gamma(\pi^n)$ is $(1+q^{-1})^{-1}q^{-n}$, which is the reciprocal of the cardinality of $\mathbb{P}^1(\OO F/\pi^n)$. Note that $\Omega(\pi^n)=\Gamma(\pi^n)\smallsetminus\Gamma(\pi^{n+1})$; this implies \[equation]{\lb{bz} \int_{\Omega(\pi^n)}\mathrm{d} g=\left\{\begin{array}{ll}(1+q^{-1})^{-1}&\text{ if }n=0\\(1+q^{-1})^{-1}(q^{-n}-q^{-n-1})&\text{ if }n>0\end{array}\right.
} Since $|\pi|_D=q^{-2}$, we can use \eqref{bz} to calculate \eqref{sl} \[equation]{\lb{slres} P_s=|\mu-\overline{\mu}|_D\left((1+q^{-1})^{-1}+\sum_{1\leq n<u}(1+q^{-1})^{-1}(q^{-n}-q^{-n-1})q^{2n}\right) = |\mu-\overline{\mu}|_D(1+q^{-1})^{-1}q^{u-1}. } \tempsubsection{Computation of $P_d$} Denote the subgroup $$ \Gamma_0(a)=\left\{g\in\GL_2(\OO F):\mm{g_{11}}{g_{12}}{g_{21}}{g_{22}}=\mm10{g_{21}}{g_{22}}, g_{21}\in a\OO F\right\}. $$ For any $k\in\O$, let $k_H\in\GL_2(\OO F)$ be the element such that $$ k_H\cc1\mu=\cc1\mu k. $$ Let $H=\{k_H|k\in\O ^\times\}$; then $H$ is a subgroup of $\GL_2(\OO F)$. \[lem]{ We have a group decomposition $$ \Gamma(\pi^u)=\Gamma_0(\pi^u)H;\qquad \Gamma_0(\pi^u)\cap H=\left\{\mm1{}{}1\right\}. $$ } \[proof] {For any $g\in\Gamma(\pi^u)$, let $k=g_{11}+g_{12}\mu$; then $k\in \O^\times$ and $$ gk_H^{-1}\cc1\mu=\cc1{\frac{g_{21}+g_{22}\mu}{g_{11}+g_{12}\mu}}=\cc1{c+d\mu}, $$ where we put $\frac{g_{21}+g_{22}\mu}{g_{11}+g_{12}\mu}=c+d\mu$. Since $1,\mu$ form an $\OO F$-basis of $\O$, we have $$gk_H^{-1}=\mm1{}cd.$$ Notice that $c=(g_{11}+g_{12}\mu)^{-1}(g_{11}+g_{12}\overline\mu)^{-1}(g_{21} g_{11}+g_{12} g_{22}\mu\overline\mu)\in\pi^u\OO F$ because both $g_{21},\mu\overline\mu\in\pi^u\OO F$. Therefore $gk_H^{-1}\in \Gamma_0(\pi^u)$. To compute $\Gamma_0(\pi^u)\cap H$, note that for every $h\in\Gamma_0(\pi^u)$, $$ h\cc1*=\cc1*, $$ where $*$ denotes an arbitrary element. If $h=k_H$ for some $k\in\O^\times$, then $$ h\cc1\mu=\cc k{\mu k}. $$ Therefore $k=1$, and $\Gamma_0(\pi^u)\cap H$ is trivial.} From now on, we write $g=lk_H$ for every $g\in \Gamma(\pi^u)$, where $l,k_H$ are the elements of $\Gamma_0(\pi^u),H$ corresponding to the decomposition $\Gamma(\pi^u)=\Gamma_0(\pi^u)H$.
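The closed form in \eqref{slres} can be verified numerically: with the volumes from \eqref{bz} and $|\pi|_D=q^{-2}$, the sum $\sum_{0\leq n<u}\mathrm{Vol}(\Omega(\pi^n))\,q^{2n}$ should collapse to $(1+q^{-1})^{-1}q^{u-1}$. A short Python check with exact rational arithmetic (our own sketch; $q$ runs over small prime powers):

```python
from fractions import Fraction

def sum_P_s(q, u):
    """Sum Vol(Omega(pi^n)) * |pi^n|_D^{-1} over 0 <= n < u,
    with Vol(Omega(pi^n)) taken from (bz) and |pi|_D = q^{-2}."""
    q = Fraction(q)
    def vol_omega(n):
        # n = 0: (1+q^{-1})^{-1};  n > 0: (1+q^{-1})^{-1}(q^{-n}-q^{-n-1})
        return 1 / (1 + 1 / q) if n == 0 else (q**-n - q**-(n + 1)) / (1 + 1 / q)
    return sum(vol_omega(n) * q**(2 * n) for n in range(u))

# The geometric series telescopes to (1+q^{-1})^{-1} q^{u-1}.
for q in (2, 3, 5):
    for u in (1, 2, 3, 4, 5):
        assert sum_P_s(q, u) == Fraction(q)**(u - 1) / (1 + Fraction(1, q))
```

Each term $q^{-n}(1-q^{-1})\cdot q^{2n}=q^{n}(1-q^{-1})$ telescopes, which is exactly how the closed form arises.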
We write the integrand \eqref{addiction} as $$ I_\gamma(g)=|\overline\mu-\mu|_D\left|\rr{\mu}{-1}\gamma lk_H\cc1\mu\right|_D^{-1} = |\overline\mu-\mu|_D\left|\rr{\mu}{-1}\gamma l\cc1\mu k\right|_D^{-1}. $$ Let $\mathrm{d} k_H$, $\mathrm{d} l$ be the normalized Haar measures on $H$ and $\Gamma_0(\pi^u)$; then $\mathrm{d} l\mathrm{d} k_H$ is the normalized Haar measure for $\Gamma(\pi^u)$. Therefore $$ \mathrm{d} g = \mathrm{Vol}(\Gamma(\pi^u))\mathrm{d} l\mathrm{d} k_H=(1+q^{-1})^{-1}q^{-u}\mathrm{d} l\mathrm{d} k_H. $$ We can simplify $P_d$ as $$ P_d=(1+q^{-1})^{-1}q^{-u}\int_{\Gamma_0(\pi^u)}|\overline\mu-\mu|_D\left|\rr{\mu}{-1}\gamma l\cc1\mu\right|_D^{-1}\mathrm{d} l\int_{\O^\times}|k|_D^{-1}\mathrm{d} k. $$ Since $|k|_D=1$ for all $k\in\O^\times$, we drop the last factor. Computing the matrix product, we have $$ I_\gamma(l)=|\overline\mu-\mu|_D|\mu\gamma-\gamma\mu(l_{22}+l_{21}\mu^{-1})|_D^{-1}. $$ Consider the inclusion map $$ \maps{\iota}{\Gamma_0(\pi^u)}{\OO K^\times}l{l_{22}+l_{21}\mu^{-1}} $$ The pull-back of the Haar measure on $\OO K^\times$ is a Haar measure on $\Gamma_0(\pi^u)$. The image of $\iota$ is $\OO F^\times\oplus \zeta\OO F$, which has relative volume $1$ in $\OO K^\times$ if $K/F$ is ramified, and $(1+q^{-1})^{-1}$ otherwise. Now we denote $k=l_{22}+l_{21}\mu^{-1}$ and let $\mathrm{d} k^\times$ be the $\iota$-pull-back of the Haar measure on $\OO K^\times$. Then we write $P_d$ as \begin{equation}\lb{shenme} \begin{split} P_d=(1+q^{-1})^{-1}q^{-u}|\overline\mu-\mu|_D\int_{\OO K^\times}|\mu\gamma-\gamma\mu k|_D^{-1}\mathrm{d} k^\times&\qquad\text{ if }K/F \text{ is ramified}.\\ P_d=|\overline\mu-\mu|_Dq^{-u}\int_{\OO F^\times\oplus\zeta\OO F}|\mu\gamma-\gamma\mu k|_D^{-1}\mathrm{d} k^\times&\qquad \text{ if }K/F \text{ is unramified}. \end{split} \end{equation} To simplify the integrand $|\mu\gamma-\gamma\mu k|_D^{-1}$, we regard $D$ as a left $K$-algebra.
The right multiplication of $\mu$ decomposes $D$ into eigenspaces $D^+$ of eigenvalue $\mu$, and $D^-$ of eigenvalue $\overline\mu$. For $\gamma\in D$, write $$\gamma=\gamma_++\gamma_-$$ where $\gamma_+\in D^+$, $\gamma_-\in D^-$. So $\gamma_+\mu=\mu\gamma_+$, $\gamma_-\mu=\overline\mu\gamma_-$. Note $\gamma_-^2\in F$ since it commutes with $\mu,\gamma_-$. Denoting $\overline{\gamma}=\overline{\gamma_+}-\gamma_-$, we have $\overline{\gamma}\gamma=\overline{\gamma_+}\gamma_+-\gamma_-^2\in F$. \[lem]{\lb{xiebuwan}Suppose $K/F$ is ramified. We have $P_d=\infty$ if and only if $\gamma\in \OO D^\times\cap(D^+\cup D^-)$.} \[proof]{The equation for $k\in K$ \[equation]{\lb{wangzhan}\mu\gamma-\gamma\mu k=0 } has a solution if and only if $\gamma^{-1}\mu\gamma=\mu k\in K$. Since the minimal polynomials of $\mu$ and $\gamma^{-1}\mu\gamma$ are the same, either $\gamma^{-1}\mu\gamma=\mu$ or $\overline\mu$. This is equivalent to saying $\gamma\in D^+\cup D^-$. If \eqref{wangzhan} does not have a solution, the integrand of $P_d$ is continuous on the compact set $\OO F^\times\oplus\zeta\OO F$, therefore the integral converges. If \eqref{wangzhan} has a solution, $\mu^{-1}\gamma^{-1}\mu\gamma\in \OO K^\times$. Consider the integral $$ S_n = |\overline\mu-\mu|_D\int_{\mu^{-1}\gamma^{-1}\mu\gamma+\pi^n\OO K^\times}|\mu\gamma - \gamma\mu k|_D^{-1}\mathrm{d} k^\times = |\overline\mu-\mu|_D|\gamma\mu|_D^{-1}. $$ Since $P_d$ has positive integrand and the sets $\mu^{-1}\gamma^{-1}\mu\gamma+\pi^n\OO K^\times$ are disjoint subsets of $\OO K^\times$, we conclude $P_d>S_1+S_2+\cdots = \infty$.
} \[cor]{\lb{huaxue}If $\sigma\in\OO D^\times\cap (D^+\cup D^-)$, then $v_y(\sigma)=\infty$.} \[proof]{By Lemma \ref{xiebuwan}, $P_d=\infty$, therefore $v_y(\sigma)=\boldsymbol{\epsilon}_F[\OO K:\O^\times]^2(P_s+P_d)=\infty$.} \[lem]{\lb{jinzhang}If $|2|_D=1$, then for any $\gamma\in D$, \[equation]{\lb{xueba} |\gamma|_D=\max\{|\gamma_+|_D,|\gamma_-|_D\}. } } \[proof]{ By the triangle inequality, we only need to show $ |\gamma|_D\geq\max\{|\gamma_+|_D,|\gamma_-|_D\} $. Since $$ (\mu-\overline\mu)\gamma(\mu-\overline\mu)^{-1}=\gamma_+-\gamma_-, $$ we have $|\gamma_++\gamma_-|_D=|\gamma_+-\gamma_-|_D$, therefore $$ |2\gamma_\pm|_D=|(\gamma_++\gamma_-)\pm(\gamma_+-\gamma_-)|_D\leq\max\{|\gamma_++\gamma_-|_D,|\gamma_+-\gamma_-|_D\}=|\gamma|_D. $$ Therefore the Lemma follows. } Note $\gamma(\mu-\overline{\mu})\gamma_-=(\mu-\overline{\mu})\gamma_-\overline{\gamma}$, so $|\overline{\gamma}|_D=|\gamma|_D$. In the integrand of \eqref{shenme}, $|\gamma|_D=|\overline{\gamma}|_D=1$, so \[equation]{\lb{tiaopi} |\mu\gamma-\gamma\mu k|_D^{-1} = |\overline{\gamma}(\mu\gamma-\gamma\mu k)|_D^{-1} = \left|\left(\mu(\overline{\gamma}_+\gamma_+-\overline\gamma\gamma k)-\gamma_-^2\overline\mu\right)+\gamma_-\gamma_+(\overline\mu-\mu)\right|_D^{-1}. } We have $\mu(\overline{\gamma}_+\gamma_+-\overline\gamma\gamma k)-\gamma_-^2\overline\mu\in D^+$ and $\gamma_-\gamma_+(\overline\mu-\mu)\in D^-$. From now on, we assume $q$ is odd. This implies $|2|_D=1$ and $|\mu-\overline{\mu}|_D=|\mu|_D$. Now assume $\gamma\in\OO K^\times+\pi_D\OO D$, and let $a\in\OO K^\times$ be such that $\gamma-a\in\pi_D\OO D$. Then $|\gamma-a|_D<1$ and by Lemma \ref{jinzhang} this implies $$ \max\left\{|\gamma_+-a|_D,|\gamma_-|_D\right\}<1. $$ In particular $|\gamma_+-a|_D<1$, so $|\gamma_+|_D=|a|_D=1$. The integrand simplifies to $$ |\mu\gamma-\gamma\mu k|_D^{-1}=\min\left\{|\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)-\gamma_-^2\overline\mu|_D^{-1},|\gamma_-\mu|_D^{-1}\right\}.
$$ Note that if $|\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)|_D^{-1}<|\gamma_-\mu|_D^{-1}$, we have $$|\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)|_D^{-1}=|\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)-\gamma_-^2\overline\mu|_D^{-1}.$$ If $|\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)|_D^{-1}\geq|\gamma_-\mu|_D^{-1}$, since $|\gamma_-^2\overline\mu|_D\leq|\gamma_-\mu|_D$, we have $$ |\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)-\gamma_-^2\overline\mu|_D^{-1}\geq |\gamma_-\mu|_D^{-1}. $$ Therefore, $$ \min\left\{|\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)-\gamma_-^2\overline\mu|_D^{-1},|\gamma_-\mu|_D^{-1}\right\}= \min\left\{|\mu(\overline\gamma_+\gamma_+-\overline\gamma\gamma k)|_D^{-1},|\gamma_-\mu|_D^{-1}\right\}. $$ So we can write \[equation]{\lb{xiaozuozaina} |\mu\gamma-\gamma\mu k|_D^{-1}=|\mu|_D^{-1}\min\left\{|\overline\gamma_+\gamma_+-\overline\gamma\gamma k|_D^{-1},|\gamma_-|_D^{-1}\right\}. } By the change of variable $k\mapsto \overline\gamma_+(\overline\gamma\gamma)^{-1}k$ and using \eqref{xiaozuozaina} we write \[equation]{\lb{ticket} \int_{\OO K^\times}|\mu\gamma-\gamma\mu k|_D^{-1}\mathrm{d} k=|\mu|_D^{-1}\int_{\OO K^\times}\min\left\{|\gamma_+-k|_D^{-1},|\gamma_-|_D^{-1}\right\}\mathrm{d} k = |\mu|_D^{-1}\int_{\OO K^\times}|k-\gamma|_D^{-1}\mathrm{d} k^\times. } Now we will use the formula of $P_s$ in \eqref{slres} and $P_d$ in \eqref{shenme} to calculate $v_y(\gamma)$ in each case. \tempsubsection{The ramified case} If $K/F$ is ramified, \eqref{ticket} equals $(1+q^{-1})|\overline\mu-\mu|_D^{-1}q^u P_d$, so $$ \frac{v_y(\gamma)}{\boldsymbol{\epsilon}_F[\OO K:\O^\times]^2|\Delta_{K/F}|_F^{-1}}=P_s+P_d=\frac{q^{-u}}{1+q^{-1}}+\frac{q^{-u}}{1+q^{-1}}\int_{\OO K^\times}|k-\gamma|_D^{-1}\mathrm{d} k.
$$ Since $\boldsymbol{\epsilon}_F=(1-q^{-1})^2(1+q^{-1})$, $[\OO K:\OO K^\times]=(1-q^{-1})^{-1}$, $[\OO K:\O^\times]|\Delta_{K/F}|_F^{-1}=(1-q^{-1})^{-1}q^u$, \[equation]{\lb{ramified} v_y(\gamma)= [\OO K^\times:\O^\times]\left(1+\int_{\OO K^\times}|k-\gamma|_D^{-1}\mathrm{d} k\right). } \tempsubsection{Unramified cases} If $K/F$ is unramified, we note that $\OO K^\times = \left(\OO F^\times\oplus\zeta\OO F\right)\coprod\left(\pi\OO F\oplus\zeta\OO F^\times\right)$. By equation \eqref{shenme}, for $|\overline\mu-\mu|_D^{-1}q^u P_d$, the integral in \eqref{ticket} overcounts by the part \[equation]{\lb{future} |\mu|_D^{-1}\int_{\pi\OO F\oplus\zeta\OO F^\times}\min\left\{|\overline\gamma_+\gamma_+-\overline\gamma\gamma k|_D^{-1},|\gamma_-|_D^{-1}\right\}\mathrm{d} k. } Since $\overline\gamma_+\gamma_+,\overline\gamma\gamma\in\OO F^\times$, we have $\overline\gamma_+\gamma_+-\overline\gamma\gamma k\in\OO F^\times\oplus\zeta\OO F^\times$, so the integrand is $1$ and \eqref{future} equals $$ |\mu|_D^{-1}\mathrm{Vol}(\pi\OO F\oplus\zeta\OO F^\times)=|\mu|_D^{-1}\frac{1}{1+q}. $$ If $|\mu|_D<1$, this equals $|\overline\mu-\mu|_D^{-1}q^u P_s$, therefore we have $$ \frac{v_y(\gamma)}{\boldsymbol{\epsilon}_F[\OO K:\O^\times]^2}=P_s+P_d=q^{-u}\int_{\OO K^\times}|k-\gamma|_D^{-1}\mathrm{d} k. $$ Since $\boldsymbol{\epsilon}_F=(1-q^{-1})^2(1+q^{-1})$, $[\OO K:\OO K^\times]=(1-q^{-2})^{-1}$, $[\OO K:\O^\times]=(1-q^{-1})^{-1}q^u$, \[equation]{\lb{unramified} v_y(\gamma)= [\OO K^\times:\O^\times]\int_{\OO K^\times}|k-\gamma|_D^{-1}\mathrm{d} k. } If $|\mu|_D=1$, by \eqref{shenme}, \eqref{xiaozuozaina} and the change of variable $k\mapsto \overline\gamma_+\gamma_+(\overline\gamma\gamma)^{-1}k$ we have directly $$ \frac{v_y(\gamma)}{\boldsymbol{\epsilon}_F[\OO K:\OO K^\times]^2}=P_d=(1+q^{-1})\int_{\OO F^\times\oplus\zeta\OO F}\min\left\{|1-k|_D^{-1},|\gamma_-|_D^{-1}\right\}\mathrm{d} k^\times. $$ Note that $|1-k|_D^{-1} = q^{2n}$ if and only if $k\in 1+\pi^n\OO K^\times$.
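The volume formulas used in the remaining computation — $\mathrm{Vol}(1+\pi^n\OO K)=(1-q^{-2})^{-1}q^{-2n}$ and $\mathrm{Vol}(1+\pi^n\OO K^\times)=q^{-2n}$ — can be double-checked by counting, and the volume decomposition of the integral can be verified to sum to $\frac{r+1}2$ for odd $r$. The Python sketch below is ours (not part of the proof); it models $\OO K/\pi^m$ for an unramified quadratic $K/F$ as pairs $a+\zeta b$ over $\mathbb{Z}/p^m$ with $\pi=p$:

```python
from fractions import Fraction
from itertools import product

def vol_principal(p, m, n):
    """Vol(1 + pi^n O_K) inside O_K^x, computed in O_K/pi^m,
    where elements are pairs (a, b) = a + zeta*b over Z/p^m."""
    pm = p**m
    units = principal = 0
    for a, b in product(range(pm), repeat=2):
        if a % p != 0 or b % p != 0:          # unit iff nonzero mod pi
            units += 1
            if a % p**n == 1 and b % p**n == 0:
                principal += 1
    return Fraction(principal, units)

p = 3
q = Fraction(p)
for n in (1, 2):
    assert vol_principal(p, 3, n) == q**(-2 * n) / (1 - q**-2)
# Vol(1 + pi^n O_K^x) = Vol(1+pi^n O_K) - Vol(1+pi^{n+1} O_K) = q^{-2n}:
assert vol_principal(p, 3, 1) - vol_principal(p, 3, 2) == q**-2

def integral(q, r):
    """Volume decomposition of the integral, for odd r = v_D(gamma_-)."""
    q, a = Fraction(q), (r + 1) // 2
    vol_region = 1 / (1 + 1 / q)                     # Vol(O_F^x + zeta*O_F)
    def vol_ball(n):                                 # Vol(1 + pi^n O_K)
        return q**(-2 * n) / (1 - q**-2)
    s = vol_region - vol_ball(1)                     # integrand 1 here
    s += sum(q**(2 * n) * (vol_ball(n) - vol_ball(n + 1)) for n in range(1, a))
    s += q**(2 * a - 1) * vol_ball(a)                # min is q^r = q^{2a-1}
    return s

for q in (3, 5):
    for r in (1, 3, 5, 7):
        assert integral(q, r) == Fraction(r + 1, 2)
```

The telescoping of the middle sum is exactly the mechanism by which the integral collapses to $\frac{r+1}2$.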
Let $a=\frac{v_D(\gamma_-)+1}2$. We write the integral as $$ \left(\mathrm{Vol}\left(\OO F^\times\oplus\zeta\OO F\right)-\mathrm{Vol}\left(1+\pi\OO K\right)\right)+\sum_{n=1}^{a-1}q^{2n}\mathrm{Vol}\left(1+\pi^n\OO K^\times\right)+\mathrm{Vol}\left(1+\pi^{a}\OO K\right)q^{2a-1}. $$ By $\mathrm{Vol}(\OO F^\times\oplus\zeta\OO F)=(1+q^{-1})^{-1}$, $\mathrm{Vol}(1+\pi^n\OO K^\times)=q^{-2n}$, $\mathrm{Vol}(1+\pi^n\OO K)=(1-q^{-2})^{-1}q^{-2n}$, this gives $$ P_d = (1+q^{-1})\frac{v_D(\gamma_-)+1}2. $$ Since $\boldsymbol{\epsilon}_F[\OO K:\OO K^\times]^2=(1+q^{-1})^{-1}$, we have $$ v_y(\gamma)=\frac{v_D(\gamma_-)+1}2. $$ \tempchapter{The endomorphism ring of formal modules}\lb{zhunbei} In this section, we introduce canonical and quasi-canonical liftings, and relate their endomorphism rings to intersection numbers in Lubin-Tate deformation spaces. \tempsection{Canonical and quasi-canonical liftings} We review the notion of canonical and quasi-canonical liftings following the paper \cite{gross1986canonical} of Gross.
\tempsubsection{Formal modules} A formal $\OO F$-module $\mathcal{G}$ over a Noetherian complete $\OO F$-algebra $A$ is a one-dimensional formal group law over $A$ with endomorphisms by $\OO F$, such that the induced action of $\OO F$ on the Lie algebra of $\mathcal{G}$ agrees with the structure map. Fix a coordinate of $\mathcal{G}$ and let $[k]_\mathcal{G}(X)$ be the power series over $A$ defining the action of $k\in\OO F$. If $A$ is of characteristic $p$, then $[\pi]_\mathcal{G}(X)=\beta(X^{q^h})$ for some power series $\beta$ with $\beta^\prime(0)\neq 0$; we call this $h$ the height of $\mathcal{G}$. \tempsubsection{The Canonical lifting} For any formal $\OO K$-module $\mathcal{G}_0$ of height 1 over $\CFF q$, Lubin and Tate construct in \cite{lubin1965formal} a formal $\OO K$-module $\GG_K$ over $\CO K$ such that its special fiber is $\mathcal{G}_0$. They showed that this $\GG_K$ is unique up to isomorphism; we call $\GG_K$ the canonical lifting of $\mathcal{G}_0$. Let $\OO D := \End(\mathcal{G}_0)$ be the endomorphism ring of $\mathcal{G}_0$ as a formal $\OO F$-module. Note $\mathcal{G}_0$ has height 2 as a formal $\OO F$-module. This $\End(\mathcal{G}_0)$ is a maximal order of the quaternion algebra $D$ over $F$. \tempsubsection{Quasi-canonical liftings}\lb{yuanwang} In general, fixing an embedding $\O\subset\OO D$, we can construct a lifting $\mathcal{G}$ of $\mathcal{G}_0$ such that the endomorphism ring $\End(\mathcal{G})\subset\End(\mathcal{G}_0)$ is exactly given by $\O\subset \OO D$. The construction is as follows. The Tate module $T\GG_K$ of $\GG_K$ is a free $\OO K$-module of rank 1.
Let $\textbf{v}$ be an $\OO K$-generator of $T\GG_K$; the submodule $T^\prime=\O\cdot \textbf{v}\subset T\GG_K$ gives rise to an isogeny \begin{equation}\lb{fangong}\map{\Phi}{\GG_K}{\mathcal{G}}\end{equation} of formal $\OO F$-modules with the kernel isomorphic to $\OO K/\O$ and $T\mathcal{G}=T^\prime$. Since $\mu\cdot T\mathcal{G}\subset T\mathcal{G}$, $\mu$ induces an endomorphism on $\mathcal{G}$. Therefore $\mathcal{G}$ is a formal $\O$-module. Since all height 2 formal $\OO F$-modules over $\CFF q$ are isomorphic, we can find an isomorphism $\psi$ such that the reduction of $\psi\circ\Phi$ is an endomorphism of $\mathcal{G}_0$. Replacing $\Phi$ by $\psi\circ\Phi$, we can assume $\mathcal{G}$ reduces to $\GG_0$ over $\CFF q$. The reduction of the actions of $\O$ and $\OO K$ on $\mathcal{G}$ and $\GG_K$ induces natural embeddings \[equation]{ \lb{zhyy}\map{\overline{[\cdot]}_{\mathcal{G}}}\O{\End(\GG_0)}} \[equation]{ \lb{zuohe}\map{\overline{[\cdot]}_{\GG_K}}{\OO K}{\End(\GG_0)}} Let $\overline{[\Phi]}$ be the reduction of the map $\Phi$. For any $k\in\O$, we have \[equation]{\lb{youyin} \overline{[\Phi]}\circ\overline{[k]}_{\GG_K}=\overline{[k]}_{\mathcal{G}}\circ\overline{[\Phi]}. } In fact, $\End(\GG_0)$ is the maximal order of the quaternion algebra over $F$. We can rewrite the above equation as $\overline{[\Phi]}\circ\overline{[k]}_{\GG_K}\circ\overline{[\Phi]}^{-1}=\overline{[k]}_{\mathcal{G}}$. In other words, the images $\overline{[\O]}_\mathcal{G}$ and $\overline{[\O]}_{\GG_K}$ are conjugate by $\overline{[\Phi]}$. Let $\eta\in \Aut(\GG_0)$ be such that $\eta\overline{[\O]}_\mathcal{G}\eta^{-1} = \O$, and $\widetilde{\eta}$ a lifting of $\eta$. We replace $\Phi$ by $\widetilde{\eta}\circ\Phi$ so that $\overline{[\O]}_\mathcal{G}=\O$. \tempsection{Equi-Height Pairs} Note $\Phi$ induces a map $\varphi=\overline{[\Phi]}$ on the special fiber $$ \map{\varphi}{\mathcal{G}_0}{\mathcal{G}_0}.
$$ Identifying $T\GG_K\cong\OO K$ and $T\mathcal{G}\cong\OO F^2$ and passing to the rational Tate modules of $\GG_K$ and $\mathcal{G}$, $\Phi$ induces a map $\tau$ $$ \map{\tau}{F^2}K. $$ In other words, $\Phi$ induces a map $\varphi$ and a map $\tau$ up to right-$\GL_2(\OO F)$ and left-$\OO K^\times$ action, and $$[\OO K:\tau(\OO F^2)] = \deg(\varphi).$$ We call such a $(\varphi,\tau)$ an equi-height pair. Conversely, any equi-height pair $(\varphi,\tau)$ determines $\Phi$ up to *-isomorphisms. Here by *-isomorphism we mean an isomorphism that reduces to the identity map in the special fiber. An automorphism is a *-isomorphism only when it is the identity map. The data $(\varphi,\tau)$ determines a (*)-isomorphic class of quasi-canonical liftings. \tempsection{The action of Galois group} The action of $\mathrm{Gal}(\overline{K}/\breve{K})$ on $T\mathcal{G}_K$ gives a map $$\mathrm{Gal}(\overline{K}/\breve{K})\rightarrow \OO K^\times.$$ By Lubin and Tate, this map is surjective. Note that $\O^\times$ preserves the submodule $T\mathcal{G}$. Let $\Gamma\subset\mathrm{Gal}(\overline{K}/\breve{K})$ be the preimage of $\O^\times$, and let $W$ be the fixed field of $\Gamma$. Since $\Gamma$ preserves $T\mathcal{G}$, $\mathcal{G}$ is a formal $\O$-module over $W$. We fix the identification \[equation]{\lb{Galois} \mathrm{Gal}(W/\breve{K})\;\;\cong\;\;\mathrm{Gal}(\overline{K}/\breve{K})/\Gamma\;\;\cong\;\;\OO K^\times/\O^\times. } For $k\in \OO K^\times$, let $(k)$ be the corresponding element in $\mathrm{Gal}(W/\breve{K})$, and write its action as $t\mapsto t^{(k)}$. \tempsection{The universal deformation of $\mathcal{G}_0$} A formal module is called a deformation of $\GG_0$ if it reduces to $\GG_0$. For example, $\mathcal{G}$ and $\GG_K$ are deformations of $\GG_0$.
By Lubin and Tate in \cite{lubin1966formal}, there is a universal deformation formal $\OO F$-module $\GG^{\mathrm{univ}}$ over $\CO F[[U]]$ for $\mathcal{G}_0$. For any complete local $\CO F$-algebra $A$ with residue field $\CFF q$, assigning the variable $U$ in the coefficients of $\GG^{\mathrm{univ}}$ to a topologically nilpotent element $t\in A$ defines a deformation of $\mathcal{G}_0$ over $A$. \[defi]{[U-section] We call the above $t$ the U-section of the deformation. } We denote this deformation by $\mathcal{G}^t$. Suppose $\map\beta{\CO F[[U]]}{A}$ is the map sending $U$ to $t$; then $\mathcal{G}^t=\beta^*\GG^{\mathrm{univ}}$. The formal module $\GG^{\mathrm{univ}}$ is universal in the sense that every formal $\OO F$-module deformation of $\mathcal{G}_0$ over $A$ is *-isomorphic to $\mathcal{G}^t$ for some $t\in A^\circ$. Here $A^\circ$ is the set of topologically nilpotent elements in $A$. Therefore, two height 2 formal $\OO F$-modules over $A$ are *-isomorphic if and only if their U-sections are the same. Since the choice of $U$ is not canonical, we need to fix the universal formal module $\mathcal{G}^{univ}$. The set $A^\circ$ classifies all *-isomorphic classes of deformations of $\mathcal{G}_0$ over $A$. \tempsection{Automorphism Liftings of $\mathcal{G}_0$} Since the endomorphism ring of $\GG_0$ is the maximal order of a division algebra, for any $a\in\End(\mathcal{G}_0)$ either $a$ or $1-a$ must be an automorphism. The problem of lifting endomorphisms thus reduces to lifting automorphisms of $\GG_0$. Let $\gamma\in\Aut(\GG_0)$; then $[\gamma]_{\GG_0}$ is a power series over $\CFF q$. Lift $[\gamma]_{\GG_0}$ arbitrarily to a power series $\psi$ with coefficients in $\CO F[[U]]$. Since $\psi^\prime(0)\neq 0$, $\psi$ is invertible under composition.
By putting $\mathcal{G}^U=\mathcal{G}^{univ}$ and $$ X[+]_{\mathcal{G}^\prime}Y=\psi(\psi^{-1}(X)[+]_{\mathcal{G}^U}\psi^{-1}(Y));\qquad [l]_{\mathcal{G}^\prime}(X)=\psi([l]_{\mathcal{G}^U}(\psi^{-1}(X)));\qquad l\in\OO F $$ we produce another formal $\OO F$-module $\mathcal{G}^\prime$ with the isomorphism $$ \map{\psi}{\GG^{\mathrm{univ}}}{\mathcal{G}^\prime}. $$ By an abuse of notation we denote the U-section of $\mathcal{G}^\prime$ by $\gamma(U)$. There is a *-isomorphism $$ \map{\Phi}{\mathcal{G}^\prime}{\mathcal{G}^{\gamma(U)}}. $$ Letting $\psi_{\gamma}=\Phi\circ\psi$, we obtain an isomorphism \begin{equation}\lb{doubledating} \map{\psi_\gamma}{\GG^{\mathrm{univ}}}{\mathcal{G}^{\gamma(U)}}. \end{equation} The uniqueness of $\psi_{\gamma}$ and $\mathcal{G}^{\gamma(U)}$ follows from the universal property. The isomorphism \eqref{doubledating} is a universal lifting of $\gamma\in\Aut(\GG_0)$. From now on, we view $\gamma(U)$ as a power series over $\CO F$ with the variable $U$. For any $a\in A^\circ$, $\gamma(a)$ represents applying this power series to $a$. This defines an action of $\Aut(\GG_0)$ on $A^\circ$. Note that the power series $\gamma(U)$ used in our discussion is different from the power series $[\gamma]_{\GG_0}$ over $\CFF q$ defining automorphisms of $\GG_0$. From the universal lifting \eqref{doubledating}, we know that for any $t,s\in A^\circ$, $\gamma$ can be lifted as an isomorphism $\map{\psi_{\gamma}}{\mathcal{G}^t}{\mathcal{G}^s}$ if and only if $s=\gamma(t)$. In particular, $\gamma$ can be lifted as an automorphism of $\mathcal{G}$ if and only if $\gamma$ fixes its U-section. Now consider a deformation $\mathcal{G}$ of $\GG_0$ over a discrete valuation ring $A$ with uniformizer $\varpi$, and an arbitrary automorphism $\gamma\in\Aut(\GG_0)$. Let $t$ be the U-section of $\mathcal{G}$.
The maximal $n$ such that $\gamma$ can be lifted as an automorphism of $\mathcal{G}$ over $A/\varpi^n$ is the maximal $n$ that makes \[equation]{\lb{yinyue}\gamma(t)\equiv t\text{ mod }\varpi^n.} Therefore $n$ equals the valuation of $t-\gamma(t)$. \[prop]{\lb{Galo}Let $t$ be the U-section of $\mathcal{G}$; if $k\in\O\left[\frac1\pi\right]\cap\OO D^\times$, then $$ k(t) = t^{(k)}. $$ } \[proof]{By \eqref{youyin}, as submodules of $\OO D$, we have $\O = \varphi[\O]_{\GG_K}\varphi^{-1}$. We need to find an isomorphism $\map\alpha\mathcal{G}{\mathcal{G}^{(k)}}$ that reduces to $\varphi\overline{[k]}_{\GG_K}\varphi^{-1}$ in $\GG_0$. Conjugating the isogeny in \eqref{fangong}, we have $$ \map{\Phi^{(k)}}{\GG_K}{\mathcal{G}^{(k)}}. $$ The maps $\Phi^{(k)}$ and $\Phi$ embed $T\mathcal{G}$ and $T\mathcal{G}^{(k)}$ into $T\GG_K$. By the definition of $(k)$, we have $$ T\mathcal{G}^{(k)} = k\cdot T\mathcal{G} $$ as submodules of $T\GG_K$. Note that the endomorphism $[k]_{\GG_K}$ acts on $T\GG_K$ by multiplication by $k^{-1}$. The isogenies $$ \map{\Phi}{\GG_K}{\mathcal{G}};\qquad \map{\Phi^{(k)}\circ [k]_{\GG_K}}{\GG_K}{\mathcal{G}^{(k)}} $$ send $T\mathcal{G}$ and $T\mathcal{G}^{(k)}$ to the same subset of $T\GG_K$. Therefore there exists an isomorphism $\map{\alpha}\mathcal{G}{\mathcal{G}^{(k)}}$ such that $$ \alpha\circ\Phi=\Phi^{(k)}\circ[k]_{\GG_K}. $$ Since $\Phi$ and $\Phi^{(k)}$ reduce to $\varphi$ on the special fiber, $\alpha$ reduces to $\varphi\overline{[k]}_{\GG_K}\varphi^{-1}$.} \tempsection{Intersection numbers in Lubin-Tate space} We put $$ \mathcal{M}=\Spf(\CO F[[U]]) $$ and call it the Lubin-Tate deformation space of $\mathcal{G}_0$.
The formal $\O$-module $\mathcal{G}$ we defined in \eqref{fangong} corresponds to an assignment of $U$ to its U-section in $W$, which gives rise to a formal curve $y$ in $\mathcal{M}$: \begin{equation}\lb{xiangxiang} \map{y}{\Spf \OO W}{\mathcal{M}} \end{equation} In Lemma \ref{emptyset} we will prove that this is a closed embedding. The Lubin-Tate space $\mathcal{M}$ admits an action by $\Aut(\GG_0)$. By an abuse of notation, for any $\gamma\in\Aut(\GG_0)$, we denote by $\map{\gamma}{\mathcal{M}}{\mathcal{M}}$ the isomorphism defined by $U\mapsto \gamma(U)$. Suppose $t$ is the U-section of $\mathcal{G}$. Then $\gamma(y)$ corresponds to $\mathcal{G}^{\gamma(t)}$. We are interested in the valuation of $t-\gamma(t)$, which is the maximal $n$ such that $\gamma$ can be lifted to an automorphism over $W/\varpi^{n}$. Put $$ \begin{array}{lll} \mathcal{M}_W&=&\mathcal{M}\times_{\OO F}\Spf \OO W;\\ \mathcal{M}_K&=&\mathcal{M}\times_{\OO F}\Spf \CO K.\\ \end{array} $$ Let $\map{x}{\Spf\OO W}{\mathcal{M}_W}$ be the graph of $y$ in \eqref{xiangxiang}, and let $\map{z}{\Spf\OO W}{\mathcal{M}_K}$ be the map obtained by composing $x$ with the projection to $\mathcal{M}_K$. Let $\a$ be the map obtained by twisting $z$ with the non-trivial Galois conjugate on the second factor. Then $x$ is a closed embedding. With an abuse of notation, denote by $\OO W$ the structural sheaf of $\Spf \OO W$. For $p=x,y,z,\a$, define coherent sheaves on $\mathcal{M}_?=\mathcal{M}_W$, $\mathcal{M}_K$, $\mathcal{M}$ by $$ \OO p=p_*\OO W. $$ Let $y_1,y_2$ be different maps as in \eqref{xiangxiang}; we define $$ \langle y_1,y_2 \rangle = \length_{\CO K}\left(\OO{y_1}\otimes_M\OO{y_2}\right). $$ Furthermore, note that $\mathcal{M}_?$ admits an action by $\OO D^\times$; for any $\gamma\in\OO D^\times$ we define \begin{equation}\lb{tanamy} \left\{ \begin{array}{llll} v_x(\gamma)&=&\length_{\CO F}(\OO x\otimes_{\mathcal{M}_W} \OO{\gamma x}). \\ v_y(\gamma)&=&\length_{\CO F}(\OO y\otimes_{\mathcal{M}} \OO{\gamma y}). \\ v_z(\gamma)&=&\length_{\CO F}(\OO z\otimes_{\mathcal{M}_K} \OO{\gamma z}). \\ v_\a(\gamma)&=&\length_{\CO F}(\OO \a\otimes_{\mathcal{M}_K} \OO{\gamma z}). \\ \end{array}\right. \end{equation} Let $\mathcal{G}_{n}=\mathcal{G}\otimes \OO W/\varpi^{n+1}$. By the interpretation in \eqref{yinyue}, we have \begin{equation}\lb{xiangjiao} \gamma\in\Aut(\mathcal{G}_n)\Longleftrightarrow n<v_x(\gamma). \end{equation} The proof of the theorem amounts to calculating $v_x(\gamma)$. \tempchapter{Computation of Intersection Numbers}
\section{Proof of quantum bounds on unmeasured overlaps}\label{apx:overlaps} In this Appendix, our goal is to prove the bounds on the overlap $\ovlap{B}{C}$ given known overlaps $\ovlap{A}{B}$ and $\ovlap{A}{C}$, as described in the main text. It is easy to see that such a bound must exist: if $\ovlap{A}{B}$ and $\ovlap{A}{C}$ are both very large, then all three states must be almost identical, and we should expect $\ovlap{B}{C}$ to also be large. The converse is not necessarily true: if both known overlaps are close to zero we know that $\ket{A}$ is almost orthogonal to both $\ket{B}$ and $\ket{C}$, but $\ket{B}$ and $\ket{C}$ could just as easily be identical or orthogonal. Nonetheless, there are two cases where we might give a meaningful bound when $\ovlap{A}{B}$ is small. The first is when all systems are qubits. In this case we cannot have three mutually orthogonal states, and so the almost-orthogonality between $\ket{A}$ and the others would imply that the overlap between $\ket{B}$ and $\ket{C}$ must be large. The second case is when $\ovlap{A}{B}$ is small but $\ovlap{A}{C}$ is large---in that case, we can also expect $\ovlap{B}{C}$ to be small. We now discuss all of these bounds in turn. \subsection{General bounds on \texorpdfstring{$\ovlap{B}{C}$}{}} Suppose we have three states $\ket{A}$, $\ket{B}$, and $\ket{C}$, such that both $\ovlap{A}{B}$ and $\ovlap{A}{C}$ are known. Let us begin by parameterizing these states in an economical way. Without loss of generality, assume that the three states are spanned by basis states $\{\ket{0},\ket{1},\ket{2}\}$. Furthermore, we can align the basis states in a convenient way to use the following parameterization: \begin{subequations} \begin{align*} \ket{A} &= \ket{0} \\ \ket{B} &= \cos{\beta} \ket{0} + e^{i a} \sin{\beta} \ket{1} \\ \ket{C} &= \cos{\gamma} \ket{0} + e^{i b}\sin{\gamma} \sin{\alpha} \ket{1}+e^{i c}\sin{\gamma} \cos{\alpha} \ket{2}.
\end{align*} \end{subequations} We can now apply a unitary transformation $U = \textrm{diag}(1,e^{-ia},e^{-ic})$ to all states and redefine $\phi = b-a$. This gives us three new states that have the same pairwise overlaps as the original ones, and we reach our final parameterization: \begin{subequations} \begin{align*} \ket{A} &= \ket{0} \\ \ket{B} &= \cos{\beta} \ket{0} + \sin{\beta} \ket{1} \\ \ket{C} &= \cos{\gamma} \ket{0} + e^{i \phi}\sin{\gamma} \sin{\alpha} \ket{1}+\sin{\gamma} \cos{\alpha} \ket{2}, \end{align*} \end{subequations} where $\alpha$, $\beta$, $\gamma$ $\in [0,\pi/2]$ and $\phi \in [0, 2\pi)$. From this we have the two \emph{known} overlaps \begin{subequations} \begin{align*} \ovlap{A}{B} & = \cos^2\beta \\ \ovlap{A}{C} & = \cos^2\gamma, \end{align*} \end{subequations} which are fixed, and the third overlap we wish to bound: \begin{align} \ovlap{B}{C} =& \cos^2 \gamma \cos^2 \beta +\sin^2\gamma\sin^2\beta \sin^2\alpha +2 \sin\gamma\cos\gamma\sin\beta\cos\beta\sin\alpha\cos\phi.\label{eq:toopt} \end{align} Our goal now is to find the extrema of \cref{eq:toopt} with respect to parameters $\alpha$ and $\phi$. To that end we differentiate this function with respect to $\alpha$ and $ \phi$ and write \begin{subequations} \begin{align} \frac{\partial \ovlap{B}{C}}{\partial \alpha} & = 2 \sin^2 \beta \sin^2 \gamma \sin \alpha \cos \alpha + 2 \sin \beta \cos \beta \sin \gamma \cos \gamma \cos \alpha \cos \phi \notag \\ &= 0 \label{eq:extr1}\\ \frac{\partial \ovlap{B}{C}}{\partial \phi} & = - 2 \sin \beta \cos \beta \sin \gamma \cos \gamma \sin \alpha \sin \phi \notag \\&= 0 \label{eq:extr2} \end{align} \end{subequations} Let us now break down all possible solutions of the above equations. Consider first \cref{eq:extr2}. It is true if either $\sin \alpha =0$ or $\sin \phi = 0$. 
If $\sin \alpha =0$ then $\ket{C}$ has no support on $\ket{1}$, and the overlap between $\ket{B}$ and $\ket{C}$ reduces to \begin{equation*} \ovlap{B}{C} = \cos^2 \beta \cos^2 \gamma, \end{equation*} which is fixed by the two known overlaps. If $\sin \phi = 0$, we have that naturally $\cos \phi = \pm 1$, and \cref{eq:extr1} reduces to \begin{equation*} (\sin \beta \sin \gamma \sin\alpha \pm \cos \beta \cos \gamma) \cos \alpha = 0. \end{equation*} This equation now has two solutions. The first is $\cos \alpha = 0$, in which case $\sin \alpha = 1$ (recall that $\alpha \in [0,\pi/2]$) and we have \begin{equation*} \ovlap{B}{C} = \cos^2 (\beta \mp \gamma). \end{equation*} The other solution occurs when \begin{equation*} \sin \alpha = \mp\frac{\cos \beta \cos \gamma}{\sin \beta \sin \gamma}, \end{equation*} in which case we have \begin{equation*} \ovlap{B}{C} = 0. \end{equation*} We have thus obtained four extrema of $\ovlap{B}{C}$: \begin{equation} \begin{cases} \cos^2 \beta \cos^2 \gamma,& \\ \cos^2 (\beta \pm \gamma), &\\ 0, & \text{if } \sin \alpha = \mp\frac{\cos \beta \cos \gamma}{\sin \beta \sin \gamma} \end{cases} \end{equation} We now need to check whether each of these values is a maximum, a minimum, or a saddle point. The value 0 is clearly a minimum, but we also need to check under which conditions it can occur. As we show shortly, this minimum is attainable only when \begin{equation} \ovlap{A}{B} + \ovlap{A}{C} = \cos^2 \beta + \cos^2 \gamma \leq 1; \end{equation} equivalently, the lower bound is nonzero whenever this sum exceeds 1. Interestingly, this is the same condition as that necessary to guarantee a nontrivial bound for overlaps of \emph{classical} states [cf.\ Equation\ (\ref{eq:rineq1})]. In other words: although, as discussed in the main text, the quantum lower bound is looser than the lower bound for classical models, the condition that guarantees they are nonzero is the same for both.
To investigate the extrema of $\ovlap{B}{C}$ we use the following: \begin{lemma}\label{lem:lemma1} If $x, y \in (0,\pi/2)$ are such that $\cos^2 x + \cos^2 y > 1$, then the following hold: \begin{align} \frac{\cos x \cos y}{\sin x \sin y} &> 1 \\ \cos^2 x \cos^2 y & \in [\cos^2(x+y), \cos^2(x-y)] \end{align} \end{lemma} \begin{proof} For the first part, write \begin{align*} (\tan x \tan y)^2 =& \frac{(1-\cos^2 x)(1-\cos^2 y)}{\cos^2 x \cos^2 y} \\ =& \frac{1 - (\cos^2 x + \cos^2 y)}{\cos^2 x \cos^2 y} + 1 \\ & < 1, \end{align*} from which the inequality follows. For the second part, write \begin{align*} [\cos (x\pm y)]^2 = &\cos^2 x \cos^2 y + \sin^2 x \sin^2 y \mp 2 \sin x \sin y \cos x \cos y \\ = &\cos^2 x \cos^2 y + \sin^2 x \sin^2 y \left(1 \mp \frac{2}{\tan x \tan y}\right). \end{align*} Using the first inequality, we see that the term in parentheses has the same sign as the plus/minus sign within it, and so the second claim follows. \end{proof} We now set $x = \beta$ and $y = \gamma$ in \cref{lem:lemma1}, to conclude the following: whenever $\ovlap{A}{B} + \ovlap{A}{C} > 1$, the minimum $\ovlap{B}{C} = 0$ does not occur. Furthermore, in this case the extremum given by $\cos^2 \beta \cos^2 \gamma$ is contained between the two values of $\cos^2 (\beta \pm \gamma)$, from which we conclude it must be a saddle point. Finally, combining all these together we conclude that, whenever \begin{equation*} \ovlap{A}{B} + \ovlap{A}{C} > 1, \end{equation*} the lower and upper bounds for $\ovlap{B}{C}$ are $\cos^2 (\beta \pm \gamma)$. When $\ovlap{A}{B} + \ovlap{A}{C} \leq 1$, the upper bound for $\ovlap{B}{C}$ is the same but the lower bound is 0. It is important to emphasize that, since these bounds were obtained by direct minimization over the free parameters $\alpha$ and $\phi$, they are always attainable. That is, given the two fixed overlaps, there always exist states $\ket{A}$, $\ket{B}$, and $\ket{C}$ for which $\ovlap{B}{C}$ achieves the lower and the upper bounds.
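As a numerical sanity check of these bounds (ours, not part of the paper; the helper name `overlap_bc` is an assumption of the sketch), the following samples the free parameters at random and verifies that the overlap of Eq.\ \eqref{eq:toopt} always lies between the bounds derived above:

```python
import math
import random

def overlap_bc(beta, gamma, alpha, phi):
    """r_BC in the parameterization of the text (cf. Eq. (eq:toopt))."""
    return (math.cos(gamma)**2 * math.cos(beta)**2
            + math.sin(gamma)**2 * math.sin(beta)**2 * math.sin(alpha)**2
            + 2 * math.sin(gamma) * math.cos(gamma)
                * math.sin(beta) * math.cos(beta)
                * math.sin(alpha) * math.cos(phi))

random.seed(0)
for _ in range(5000):
    beta, gamma, alpha = (random.uniform(0, math.pi / 2) for _ in range(3))
    phi = random.uniform(0, 2 * math.pi)
    r = overlap_bc(beta, gamma, alpha, phi)
    upper = math.cos(beta - gamma)**2
    # lower bound is cos^2(beta + gamma) when the known overlaps sum to
    # more than 1, and 0 otherwise
    lower = (math.cos(beta + gamma)**2
             if math.cos(beta)**2 + math.cos(gamma)**2 > 1 else 0.0)
    assert lower - 1e-12 <= r <= upper + 1e-12
```

The check exercises both regimes of the known-overlap sum and never finds a violation of either bound.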
\subsection{Bounds on \texorpdfstring{$\ovlap{B}{C}$}{} for qubits} \label{apx:purequbits} Suppose now that all subsystems are known to be qubits. Our general parameterization of the three states can now be written as \begin{subequations} \begin{align*} \ket{A} &= \ket{0} \\ \ket{B} &= \cos{\beta} \ket{0} + \sin{\beta} \ket{1} \\ \ket{C} &= \cos{\gamma} \ket{0} + e^{i \phi}\sin{\gamma} \ket{1}, \end{align*} \end{subequations} where $\beta$, $\gamma$ $\in [0,\pi/2]$ and $\phi \in [0, 2\pi)$. From this it follows that \begin{align*} \ovlap{B}{C} = &\cos^2 \beta \cos^2 \gamma +\sin^2\beta\sin^2\gamma \notag \\ &+ 2 \sin\beta\cos\beta\sin\gamma\cos\gamma\cos\phi. \end{align*} Differentiating with respect to $\phi$ to obtain the extrema we find \begin{equation*} \frac{\partial \ovlap{B}{C}}{\partial \phi} = - 2 \sin \beta \cos \beta \sin \gamma \cos \gamma \sin \phi = 0. \end{equation*} We conclude that the two extrema are \begin{align*} & \cos^2 \beta \cos^2 \gamma +\sin^2\beta\sin^2\gamma \pm 2 \sin\beta\cos\beta\sin\gamma\cos\gamma = \cos^2(\beta \mp \gamma). \end{align*} Although this expression is similar to the two bounds found in the general case, the analysis here can be qualitatively different due to the possibility that $r_{AB}+r_{AC}\le 1$, in which case the lower bound in the general case is 0, in contrast with the qubit case, where it remains $\cos^2(\beta+\gamma)$. \subsection{Maximal violation of classical bounds} We now prove that the violation of the classical bound of 1/4 described in the text is the maximum possible. To prove this, we want to maximize the difference between the classical lower bound of Eq.\ (\ref{eq:rineq1}) and the quantum lower bound of Eq.\ (\ref{eq:lb}). In our parameterization, this difference can be written as \begin{equation*} D = \cos^2 \beta + \cos^2 \gamma -1 - \cos^2 (\beta + \gamma). \end{equation*} Notice that we are assuming $\cos^2 \beta + \cos^2 \gamma >1$, otherwise both classical and quantum lower bounds become trivial.
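Before carrying out the maximization analytically, the result can be sanity-checked numerically. The sketch below (ours, not part of the paper) runs a coarse grid search for the maximum of $D$ over the region where the classical bound is nontrivial and recovers the maximal difference of 1/4:

```python
import math

def D(beta, gamma):
    """Classical lower bound minus quantum lower bound on the B-C overlap."""
    return (math.cos(beta)**2 + math.cos(gamma)**2 - 1
            - math.cos(beta + gamma)**2)

n = 600
grid = [i * (math.pi / 2) / n for i in range(n + 1)]
# search only where the classical lower bound is nontrivial
best = max(D(b, g) for b in grid for g in grid
           if math.cos(b)**2 + math.cos(g)**2 > 1)
assert abs(best - 0.25) < 1e-9
```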
We now wish to maximize $D$ with respect to both $\beta$ and $\gamma$. To do this, we need \begin{align*} \frac{\partial D}{\partial \beta} &= -2 \cos \beta \sin \beta + 2 \cos(\beta + \gamma)\sin(\beta +\gamma) = 0,\\ \frac{\partial D}{\partial \gamma} &= -2 \cos \gamma \sin \gamma + 2 \cos(\beta + \gamma)\sin(\beta +\gamma)=0. \end{align*} Simple manipulations show this is equivalent to \begin{align*} \sin \gamma \cos (2 \beta + \gamma) &= 0, \\ \sin \beta \cos (\beta + 2\gamma) &= 0. \end{align*} Recall that $\gamma, \beta \in [0, \pi/2]$. In this range, the solutions to these equations with $\sin \gamma = 0$ or $\sin \beta = 0$ are minima, since they lead to $D = 0$. The remaining solutions correspond to \begin{align*} \cos (2 \beta + \gamma) &= 0, \\ \cos (\beta + 2\gamma) &= 0. \end{align*} In the domain of interest, these equations have a few solutions. By enumerating them it is easy to check that the maximum of $D$ is 1/4. This maximum is obtained for $\gamma = \beta = \pi/6$. A set of three states which has these values and reaches the maximal violation of the classical bound is \begin{align*} \ket{A} & = \ket{0}, \\ \ket{B} & = \tfrac{1}{2} \left(\sqrt{3}\ket{0} + \ket{1}\right), \\ \ket{C} & = \tfrac{1}{2} \left(\sqrt{3}\ket{0} - \ket{1}\right). \end{align*} These states, up to a rotation of the Bloch sphere, correspond to three states in the equator of the Bloch sphere separated by consecutive angles of $\pi/3$, with $\ket{A}$ in the center, as claimed in the main text. A similar calculation shows that the maximal quantum violation of the classical \emph{upper} bound is also 1/4. A set of three states that achieve this is \begin{align*} \ket{A} & = \ket{0}, \\ \ket{B} & = \tfrac{1}{2} \left(\ket{0} + \sqrt{3} \ket{1}\right),\\ \ket{C} & = \tfrac{1}{2} \left(\sqrt{3}\ket{0} + \ket{1}\right).
\end{align*} Note that these also correspond to three states in a great circle of the Bloch sphere separated by consecutive angles of $\pi/3$, as in the case of the maximal violation of the lower bound, but now we have $\ket{C}$ in the middle. This corresponds to the observation, in the main text, that the classical bounds of (\ref{eq:lci1}-\ref{eq:lci3}) can be obtained from each other by a relabeling of indices $A$, $B$ and $C$. \section{Quantum bounds for mixed qubit states} \label{apx:qubits} In this Appendix we prove that our quantum bounds of Eqs.\ (\ref{eq:ub})--(\ref{eq:lb}) extend to arbitrary mixed states for \emph{qubits}. As in Appendix \ref{apx:purequbits}, the quantum bounds we obtain for mixed qubit states in what follows hold regardless of whether the condition $r_{AB}+r_{AC} > 1$ is satisfied. As a warm-up, suppose that qubit $A$ is in a pure state (say, $\ket{0}$). Now suppose $B$ and $C$ are in states $\rho$ and $\sigma$, respectively, parameterized by: \begin{equation*} \rho = \begin{pmatrix} \rho_0 & \rho_1 \\ {\bar{\rho_1}} & 1-\rho_0 \end{pmatrix} \textrm{ and } \sigma = \begin{pmatrix} \sigma_0 & \sigma_1 \\ {\bar{\sigma_1}} & 1-\sigma_0 \end{pmatrix}, \end{equation*} where $\rho_0, \sigma_0 \in [0,1]$. Clearly the conditions that $\rho$ and $\sigma$ have trace 1 and are Hermitian are already taken into account by the parameterization. For these to be proper mixed states we also need $\mathrm{tr} \rho^2 \leq 1$ and $\mathrm{tr} \sigma^2 \leq 1$, which can be written as \begin{align} \left| \rho_1 \right|^2 &\leq \rho_0 (1-\rho_0) \label{eq:purity1}\\ \left| \sigma_1 \right|^2 &\leq \sigma_0 (1-\sigma_0) \label{eq:purity2}. \end{align} With these parameterizations, we can write the overlaps as \begin{align} r_{AB} &= \rho_0, \label{eq:rabapx}\\ r_{AC} &= \sigma_0, \label{eq:racapx}\\ r_{BC} &= \rho_0 \sigma_0 + (1-\rho_0)(1-\sigma_0) + \bar{\rho_1}\sigma_1 + \rho_1 \bar{\sigma_1}.
\label{eq:rbcapx} \end{align} We want to show that $r_{-} \leq r_{BC} \leq r_{+}$, where [cf.\ Equations (\ref{eq:ub})--(\ref{eq:lb})] \begin{align} r_{\pm} = & r_{AB}r_{AC} + (1-r_{AB})(1-r_{AC}) \pm 2 \sqrt{r_{AB}r_{AC} (1-r_{AB})(1-r_{AC})} \notag\\ = & \rho_0 \sigma_0 + (1-\rho_0)(1-\sigma_0) \pm 2 \sqrt{\rho_0 \sigma_0 (1-\rho_0)(1-\sigma_0)} \end{align} Comparing the above with Eq.\ (\ref{eq:rbcapx}) we see that proving the required bounds on $r_{BC}$ is equivalent to showing that \begin{equation} \left| \rho_1 \bar{\sigma_1} + \sigma_1 \bar{\rho_1} \right| \leq 2 \sqrt{\rho_0 \sigma_0 (1-\rho_0)(1-\sigma_0)}. \end{equation} Note now that \begin{equation} \left| \rho_1 \bar{\sigma_1} + \sigma_1 \bar{\rho_1} \right| \leq 2 \left| \rho_1 \right| \left| \sigma_1 \right| \leq 2 \sqrt{\rho_0 (1-\rho_0) \sigma_0 (1-\sigma_0)}, \end{equation} where the first inequality follows from the triangle inequality, and the second follows from Eqs.\ (\ref{eq:purity1})--(\ref{eq:purity2}). Since the inequality holds, this implies that $r_{-} \leq r_{BC} \leq r_{+}$, as claimed. Let us now extend the above result to when qubit $A$ is in a mixed state as well. Let us work in the basis where the state of qubit $A$ is diagonal, and we parameterize it as \begin{equation*} \psi = \begin{pmatrix} \psi_0 & 0 \\ 0 & \psi_1 \end{pmatrix}, \end{equation*} such that $\psi_0 + \psi_1 = 1$. In this case, we have that Eqs.\ (\ref{eq:rabapx})--(\ref{eq:rbcapx}) become \begin{align} r_{AB} &= \psi_0 \rho_0 + \psi_1 (1-\rho_0), \label{eq:rabapx2} \\ r_{AC} &= \psi_0 \sigma_0 + \psi_1 (1-\sigma_0), \label{eq:racapx2}\\ r_{BC} &= \rho_0 \sigma_0 + (1-\rho_0)(1-\sigma_0) + \bar{\rho_1}\sigma_1 + \rho_1 \bar{\sigma_1}. 
\label{eq:rbcapx2} \end{align} Our goal is again to prove that, for arbitrary $\psi_0$ and $\psi_1$, the bounds $r_{-}(\psi_0,\psi_1) \leq r_{BC} \leq r_{+}(\psi_0,\psi_1)$ hold, where \begin{align*} r_{\pm}(\psi_0,\psi_1) = & r_{AB}r_{AC}+ (1-r_{AB})(1-r_{AC}) \pm 2 \sqrt{r_{AB}r_{AC} (1-r_{AB})(1-r_{AC})}. \end{align*} We write the dependence of $r_{\pm}$ on $\psi_0$ and $\psi_1$ explicitly, but omit this dependence from $r_{AB}$ and $r_{AC}$ for simplicity of notation (note that $r_{BC}$ does not depend on $\psi_0$ and $\psi_1$). We proved that $r_{-}(\psi_0,\psi_1) \leq r_{BC} \leq r_{+}(\psi_0,\psi_1)$ holds for $A$ pure, or equivalently for both limits $\psi_0 = 1$ and $\psi_1 = 1$. The fact that the bounds hold for all $\psi_0$ and $\psi_1$ follows from their concavity/convexity. More specifically, define the functions \begin{equation*} f_{\pm}(x,y) = (\sqrt{x y}\pm \sqrt{(1-x)(1-y)})^2. \end{equation*} A function $f(x,y)$ is convex on a convex region of $\mathbb{R}^2$ if and only if its Hessian matrix is positive semidefinite in the interior of that region \cite{Bertsekas}. By testing this property we can show that $f_{-}$ is convex and $f_{+}$ is concave in the region defined by $x, y \in (0,1)$. In particular this means that, for $a,b \in [0,1]$ such that $a+b = 1$, we have \begin{align*} f_{+}(a x_1 + b x_2,a y_1 + b y_2) & \geq a f_{+}(x_1, y_1) + b f_{+}(x_2,y_2), \\ f_{-}(a x_1 + b x_2,a y_1 + b y_2) & \leq a f_{-}(x_1, y_1) + b f_{-}(x_2,y_2). \end{align*} By choosing $a=\psi_0$, $b=\psi_1$, $x_1 = \rho_0$, $x_2 = 1-\rho_0$, $y_1 = \sigma_0$, and $y_2 = 1-\sigma_0$, the above inequalities imply \begin{equation*} r_{+}(\psi_0,\psi_1)\geq \psi_0 r_{+}(1,0) + \psi_1 r_{+}(0,1) \geq \psi_0 r_{BC} + \psi_1 r_{BC} = r_{BC}, \end{equation*} where the last inequality follows from our previous results for the case of pure $A$ and from the fact that $r_{BC}$ does not depend on $\psi_0$ and $\psi_1$.
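As a numerical sanity check of the mixed-qubit bounds (ours, not part of the paper; the helper `rand_qubit` is an assumption of the sketch), random density-matrix parameters drawn within the purity constraints of Eqs.\ (\ref{eq:purity1})--(\ref{eq:purity2}) always satisfy $r_- \le r_{BC} \le r_+$:

```python
import cmath
import math
import random

random.seed(2)

def rand_qubit():
    """Random qubit density matrix in the text's parameterization:
    diagonal p0 and off-diagonal p1 with |p1|^2 <= p0 (1 - p0)."""
    p0 = random.random()
    mag = math.sqrt(p0 * (1 - p0)) * random.random()
    phase = random.uniform(0, 2 * math.pi)
    return p0, mag * cmath.exp(1j * phase)

for _ in range(5000):
    psi0 = random.random()            # qubit A, diagonal in its own basis
    rho0, rho1 = rand_qubit()         # qubit B
    sig0, sig1 = rand_qubit()         # qubit C
    r_ab = psi0 * rho0 + (1 - psi0) * (1 - rho0)
    r_ac = psi0 * sig0 + (1 - psi0) * (1 - sig0)
    r_bc = (rho0 * sig0 + (1 - rho0) * (1 - sig0)
            + 2 * (rho1.conjugate() * sig1).real)
    s = math.sqrt(r_ab * r_ac * (1 - r_ab) * (1 - r_ac))
    base = r_ab * r_ac + (1 - r_ab) * (1 - r_ac)
    r_minus, r_plus = base - 2 * s, base + 2 * s
    assert r_minus - 1e-9 <= r_bc <= r_plus + 1e-9
```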
By combining the above with a similar reasoning based on the convexity of $r_{-}$ we obtain \begin{equation*} r_{-}(\psi_0,\psi_1) \leq r_{BC} \leq r_{+}(\psi_0,\psi_1), \end{equation*} as desired. \section{Proof of the classical bounds}\label{ap:classb} In this Appendix we prove bounds for the joint probability of $N$ events. These were proven by George Boole \cite{Boole1854}, but we include a proof for completeness. We then use those results to obtain inequalities that must be satisfied by classical states [as in Eq.\ (\ref{eq:cs})], with known pairwise overlaps described by any connected graph $G$. These general inequalities, described in Eq. (\ref{eq:genineq}) of the main text, have as a particular case the classical bounds for the 3-vertex graph $P_3$, i.e.\ inequalities (\ref{eq:lci1}-\ref{eq:lci3}). Consider $N$ logical propositions $a_1, a_2, \ldots, a_N$, and let $p(a_i)$ be the probability that proposition $a_i$ holds. Let $p(a_1 \wedge a_2 \wedge \ldots \wedge a_N)$ be the probability that the joint proposition holds, i.e.\ that all $\{a_i\}$ are simultaneously true. We now show that logical coherence implies simple linear inequalities that $p(a_1 \wedge a_2 \wedge \ldots \wedge a_N)$ must satisfy. We start with the simplest case of $N=2$ propositions $ \{a_1, a_2\}$. Using $0$ for false and $1$ for true, let us write the truth table for the AND ($\wedge$) function: \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|} \hline \textbf{$a_1$} & \textbf{$a_2$} & \textbf{$a_1 \wedge a_2$}\\ \hline \hline 0 & 0 & 0\\ \hline 0 & 1 & 0\\ \hline 1 & 0 & 0\\ \hline 1 & 1 & 1\\ \hline \end{tabular} \end{center} \caption{Truth table for AND ($\wedge$) function.} \label{table:and2} \end{table} Let us interpret each row in the table above as a vector $\vec{p}$ in a 3-dimensional space of probabilities $\vec{p}=(p(a_1), p(a_2), p(a_1 \wedge a_2))$. 
Since the table contains all possible truth assignments for $a_1$ and $a_2$, the most general, logically coherent vector $\vec{p}$ must be a convex combination of the rows of Table \ref{table:and2}. In our case, the logical coherence conditions for $\{ p(a_1), p(a_2), p(a_1 \wedge a_2) \}$ are simply the faces of the tetrahedron whose vertices are the rows of Table \ref{table:and2}. The four faces are described by inequalities: \begin{align} p(a_1 \wedge a_2) &\ge 0 ; \label{in1}\\ p(a_1 \wedge a_2) &\le p(a_1) ; \label{in2}\\ p(a_1 \wedge a_2) &\le p(a_2) ; \label{in3}\\ p(a_1 \wedge a_2) &\ge p(a_1)+p(a_2)-1. \label{in4} \end{align} Inequality (\ref{in1}) is trivial; inequalities (\ref{in2}) and (\ref{in3}) simply state that the conjunction of two events must not happen more often than each of them separately, whereas inequality (\ref{in4}) gives a bound on $p(a_1 \wedge a_2)$, which follows from the inclusion-exclusion principle in probability theory. The method described above, due to Pitowsky \cite{Pitowsky89,Pitowsky94}, is general, and can be applied to $m$ independent propositions together with any set of Boolean functions of them. First, we compile a list of all $2^m$ truth assignments for the $m$ independent propositions, together with the corresponding truth values of the Boolean functions of interest (in the case above, $a_1 \wedge a_2$). The rows of the resulting table are then interpreted as vertices of a polytope, and its facets as our desired logical coherence conditions. These facets can be found using well-known convex hull algorithms (e.g.\ \cite{BarberDH96}). We can apply the above method to $m$ propositions $a_1, a_2, ..., a_m$ and their joint proposition $a_1 \wedge a_2 \wedge ... \wedge a_m$. Each vertex of the polytope is a vector in an ($m+1$)-dimensional space of probabilities.
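The tetrahedron picture is easy to verify by enumeration. The sketch below (ours, not part of the paper) forms random convex combinations of the rows of Table \ref{table:and2} and checks that inequalities (\ref{in1})--(\ref{in4}) always hold:

```python
import itertools
import random

# rows of the AND truth table: (a1, a2, a1 AND a2) for every assignment
rows = [(a1, a2, a1 & a2) for a1, a2 in itertools.product((0, 1), repeat=2)]

random.seed(1)
for _ in range(2000):
    w = [random.random() for _ in rows]
    total = sum(w)
    w = [wi / total for wi in w]              # normalize to convex weights
    p1, p2, p12 = (sum(wi * r[k] for wi, r in zip(w, rows)) for k in range(3))
    assert p12 >= -1e-12                      # inequality (in1)
    assert p12 <= p1 + 1e-12                  # inequality (in2)
    assert p12 <= p2 + 1e-12                  # inequality (in3)
    assert p12 >= p1 + p2 - 1 - 1e-12         # inequality (in4)
```

Because the inequalities are linear and hold at every vertex, they hold for every convex combination, which is exactly what the enumeration confirms.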
Given the simplicity of the vertex list for this polytope, it is easy to check that the following inequalities are satisfied by all vertices, and hence by the complete polytope: \begin{align} p(a_1 \wedge a_2 \wedge ... \wedge a_m) &\ge 1-m+\sum_{i=1}^{m} p(a_i),\label{lcineq}\\ p(a_1 \wedge a_2 \wedge ... \wedge a_m) & \le p (a_i), \; \forall i = 1 \ldots m. \label{ineqlb} \end{align} Inequality (\ref{lcineq}) is saturated by exactly $m+1$ affinely independent vertices (the $m$ vertices containing exactly one zero, plus the all-ones vertex) and thus constitutes a facet of the polytope. Each inequality (\ref{ineqlb}) is saturated by $2^{m-1}+1$ vertices, which also generate an $m$-dimensional face, i.e., a facet of the polytope. Let us now consider how to apply inequalities (\ref{lcineq}) and (\ref{ineqlb}) to obtain bounds for two-state overlaps of classical states. We start by considering $N$ independent random processes $A_i$, which yield outcomes $v(A_i)$ with probabilities $p[v(A_i)]$. Let $p_{ij}$ denote the probability that the independently drawn values for $A_i$ and $A_j$ are the same, so \begin{equation*} p_{ij} \equiv p[v(A_i)=v(A_j)]=\sum_k p[v(A_i)=k]p[v(A_j)=k], \end{equation*} where the sum is over all possible outcomes. Consider an arbitrary, connected graph $G$ with $N$ vertices and $m$ edges. Each vertex represents a random process $A_i$, while each edge $\{ i,j\} \in G$ represents a comparison between the outcomes of a pair of neighboring vertices/processes. We assign a logical proposition to each edge $\{i,j\} \in G$: \begin{equation} a_{i,j} := v(A_i)=v(A_j), \forall \{i,j\} \in G.
\end{equation} Inequality (\ref{lcineq}) then yields: \begin{equation} p\left[\bigwedge_{\{i,j\} \in G} v(A_i)=v(A_j) \right] \ge 1-m+\sum_{\{i,j\} \in G} p[v(A_i)=v(A_j)] \end{equation} Since $G$ is connected, for any vertex pair $\{k,l\}$ (even those not connected by edges of $G$), it is true that \begin{equation} p[v(A_k)=v(A_l)] \ge p\left[\bigwedge_{\{i,j\} \in G} v(A_i)=v(A_j) \right]. \end{equation} So \begin{equation} p[v(A_k)=v(A_l)] \ge 1-m+\sum_{\{i,j\} \in G} p[v(A_i)=v(A_j)], \forall \{k,l\}. \label{eq:mv} \end{equation} We now apply inequality (\ref{eq:mv}) above to obtain inequalities that bound the overlaps of classical states, defined as mixed states which are diagonal in a fixed reference basis $\{\ket{\phi_i}\}$. As noted in the main text, the two-system overlap $r_{ij}=\mathrm{tr}(\rho_i \rho_j)$ of classical states is the probability of obtaining the same outcome when measuring the two states in the classical basis: \begin{eqnarray*} \mathrm{tr}(\rho \sigma) &=& \sum_i \bra{\phi_i} \rho \sigma \ket{\phi_i}\\ &=&\sum_i \bra{\phi_i} \rho \ket{\phi_i}\bra{\phi_i}\sigma \ket{\phi_i}\\ &=& \text{probability that } v(\hat{O})_{\rho}=v(\hat{O})_{\sigma}. \end{eqnarray*} Classical states can be viewed as a quantum way of parameterizing general independent probabilistic processes. This identification enables us to interpret inequality (\ref{eq:mv}) as an inequality about overlaps $r_{kl}=\mathrm{tr}(\rho_k \rho_l)$ of classical states, leading to \begin{equation*} r_{kl} \ge 1-m+\sum_{\{i,j\} \in G} r_{ij}, \end{equation*} where $\{k,l\}$ is any pair of vertices in $G$. The inequality above actually represents many inequalities since, for any pair $\{k, l\}$, we can apply it to any connected subgraph of $G$ that contains these two vertices. \end{document}
\section{Introduction} By fast magnetic reconnection, we mean magnetic field lines changing their connections on a time scale determined by Alfv\'enic, not resistive, physics. Fast magnetic reconnection is prevalent in both natural and laboratory plasmas \cite{Liu:2017}. As shown by Newcomb \cite{Newcomb}, magnetic field lines preserve their connections and move with a velocity $\vec{u}_\bot(\vec{x},t)$ if and only if the magnetic field obeys the ideal evolution equation \begin{equation} \frac{\partial\vec{B}}{\partial t}=\vec{\nabla}\times (\vec{u}_\bot\times\vec{B}). \label{Ideal-ev} \end{equation} The ideal evolution equation is deceptive because it has the mathematical property of containing the seeds of its own destruction when the magnetic field depends on three, though not on two, spatial coordinates \cite{Boozer:ideal-ev}. What is meant is that the connection-breaking magnetic field, $\vec{B}_{ni}$, is proportional to the deviation $\mathcal{E}_{ni}$ of the electric field from the ideal form multiplied by a factor $\Lambda_u$ that increases exponentially in time; $\ln{\Lambda_u}/t \rightarrow \lambda_u$ as $t\rightarrow\infty$. The average rate $\lambda_u$ at which streamlines of $\vec{u}_\bot$ e-fold apart is the Lyapunov exponent of the magnetic field line velocity. The Lyapunov exponent of a generic flow is of order the largest element in the three-by-three matrix $\vec{\nabla}\vec{u}_\bot$, so $\lambda_u\sim u_\bot/a$, where $a$ is a distance scale for variations in the flow. The two-dimensional case, which was considered by Longcope and Strauss \cite{Longcope-Strauss}, is consistent with an exponential increase in the sensitivity only when the forces exerted by the plasma on the magnetic field also change exponentially.
Appendix \ref{sec:L-coord} and \cite{Boozer:ideal-ev} show that an exponentiation of forces is not required in three dimensions because of the presence of a middle singular value $\Lambda_m$ of the Singular Value Decomposition of the Jacobian matrix of Lagrangian coordinates. Away from null-points of the magnetic field, the electric field has the general form $\vec{E}+\vec{u}_\bot\times\vec{B}=-\vec{\nabla}\Phi +\mathcal{E}_{ni}\vec{\nabla}\ell$ with $\ell$ the distance along a magnetic field line, Equation (\ref{E}). When this form is used in Faraday's law with the deviation from ideality $\mathcal{E}_{ni}$ set to zero, Equation (\ref{Ideal-ev}) is obtained. The time required for an evolving magnetic field to reach a state of fast magnetic reconnection, which is called the trigger time, is \begin{eqnarray} \tau_{trig} &\sim& \frac{a}{u_\bot} \ln\left(\frac{u_\bot B}{ \mathcal{E}_{ni}}\right) \label{tau_trig} \\ &\sim& \tau_\eta \frac{ \mathcal{E}_{ni}}{u_\bot B} \ln\left(\frac{u_\bot B}{ \mathcal{E}_{ni}}\right), \mbox{ where }\\ \tau_\eta &\equiv&\frac{ a^2}{\eta/\mu_0}. \end{eqnarray} The product of the Lyapunov exponent and the resistive time is $\lambda_u\tau_\eta\sim (u_\bot/a)\tau_\eta \sim (u_\bot a)/(\eta/\mu_0)$, which is the definition of the magnetic Reynolds number $R_m$ of Zweibel and Yamada \cite{Zweibel:review}. This is equivalent to $R_m=u_\bot B/\mathcal{E}_{ni}$, which was called the ideality $\Im$ in \cite{Boozer:ideal-ev}. The values that Zweibel and Yamada gave for $R_m$ were $10^4$ to $10^8$ in large laboratory plasmas, $10^8$ to $10^{14}$ in the Sun, and $10^{15}$ to $10^{21}$ in the interstellar medium of galaxies. \begin{itemize} \item No matter how small the deviation $\mathcal{E}_{ni}$ from ideality may be, a magnetic field can change its connections on a time scale that increases only logarithmically as $\mathcal{E}_{ni}$ goes to zero.
\end{itemize} Traditionally, theories of magnetic reconnection have been two dimensional and relied on current sheets: from the classic work of Sweet \cite{Sweet:1958} and Parker \cite{Parker:1957} in the 1950s to work covered in recent reviews \cite{Zweibel:review,Loureiro:2016}. Although current sheets generically form in a near-ideal evolution, the current density increases exponentially slower than does the non-ideal part of the magnetic field \cite{Boozer:ideal-ev}, and is subdominant to the $e^{\lambda_u t}$ exponentiation as a cause for reconnection when $R_m\rightarrow\infty$. The current sheets that form in an ideal evolution are extended along the magnetic field and ribbon-like across, exponentially broad in one direction and exponentially narrow in the other. Schindler, Hesse, and Birn \cite{Schindler:1988} gave the classical view of magnetic reconnection in three dimensions. They required that the non-ideal term in Ohm's law, $\mathcal{E}_{ni}$, become very large because they did not consider the exponentiation in sensitivity that occurs in three dimensions. General or generic behavior means behavior that, even in the special cases in which it does not occur, can be restored by a small perturbation. The features of the near-ideal three-dimensional evolution of magnetic fields are generic. For example, continuous spatial symmetries can make $\lambda_u=0$, but then small perturbations can make $\lambda_u>0$. The generic behavior of the near-ideal evolution of magnetic fields in three dimensions provides answers to the four questions that are required for a practical understanding of fast magnetic reconnection phenomena: \begin{enumerate} \item Why does the near-ideal evolution of natural and laboratory magnetic fields robustly lead to states of fast magnetic reconnection independent of the drive and of the initial state? \item What is the characteristic time required to reach a state of fast magnetic reconnection?
\item What is the explanation of the effects produced by fast magnetic reconnection, which are primarily associated with magnetic helicity conservation and an energy transfer from the magnetic field to the plasma? \item Why does the Alfv\'en speed define the time scale during which the effects produced by fast magnetic reconnection occur? \end{enumerate} Reference \cite{Boozer:ideal-ev} addressed all four questions. Nevertheless, a more complete answer can be given on helicity conservation, Section \ref{sec:A.B cons}, and the energy transfer from the magnetic field to the plasma, Sections \ref{sec:A.B cons} to \ref{sec:dK_||/dt}. When exponentially small non-ideal effects give Alfv\'enic reconnection, as they can in three dimensions, a large energy transfer from the magnetic field to the plasma can occur through (a) the production and damping of shear Alfv\'en waves, Section \ref{sec:A.B cons} and Appendix \ref{sec:Alfven}, and (b) the interaction between the guiding center drift velocity and the perpendicular electric field. This interaction can increase the parallel kinetic energy $K_{||}=\frac{1}{2}mv_{||}^2$ of individual particles in two ways. (1) A coefficient $\nu_K$, Equation (\ref{nu_K}), can exponentiate $K_{||}$. (2) An effective parallel electric field $\mathcal{E}_{||}$, Equation (\ref{eff.E-field}), can accelerate particles in a way that is analogous to a true parallel electric field $E_{||}$ even when $\mathcal{E}_{ni}\rightarrow0$. The energization associated with $\nu_K$ appears to be the more important. In conventional reconnection theories $E_{||}$ is large, but Dahlin, Drake, and Swisdak \cite{Drake:2017} have shown that the energy transfer through $E_{||}$ need not be dominant. In their analysis, the important transfer was an exponentiation in $K_{||}$ by a term proportional to $\vec{u}_E\cdot\vec{\kappa}$, where $\vec{u}_E\equiv \vec{E}\times\vec{B}/B^2$.
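To make the logarithmic dependence in Equation (\ref{tau_trig}) concrete, the short sketch below (illustrative only) evaluates the trigger time in units of the ideal time $a/u_\bot$ and of the resistive time $\tau_\eta$ for magnetic Reynolds numbers representative of the ranges quoted from Zweibel and Yamada.

```python
import math

# Representative values of R_m = u_perp*B/E_ni for large laboratory
# plasmas, the Sun, and the interstellar medium of galaxies.
reynolds = {"laboratory": 1e4, "solar": 1e14, "interstellar": 1e21}

trig_over_ideal = {}      # tau_trig in units of a/u_perp
trig_over_resistive = {}  # tau_trig in units of tau_eta = a^2/(eta/mu_0)
for name, R_m in reynolds.items():
    trig_over_ideal[name] = math.log(R_m)
    trig_over_resistive[name] = math.log(R_m) / R_m
```

Even at $R_m=10^{21}$ the trigger time is only $\sim48$ ideal times, while it is a negligible fraction, $\sim5\times10^{-20}$, of the resistive time; larger $R_m$ makes the fraction smaller, not larger.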
As shown in Section \ref{u.kappa}, the origin of $\nu_K$ stems from the term $\vec{u}_\bot\cdot\vec{\kappa}$ in the energy transfer. Section \ref{sec:E} derives a general expression for the electric field in spatial regions without magnetic nulls. Section \ref{sec:A.B cons} addresses the conservation of magnetic helicity and Alfv\'en wave damping. Sections \ref{KE ev} and \ref{sec:dK_||/dt} address the transfer of energy from the magnetic field to the plasma. Section \ref{Summary} summarizes the paper. Appendix \ref{sec:L-coord} discusses Lagrangian coordinates. Appendix \ref{sec:Alfven} derives the equation for Alfv\'en waves driven by a non-zero $\vec{B}\cdot\vec{\nabla}(j_{||}/B)$. \section{The electric field \label{sec:E}} The part of the electric field that prevents the magnetic field evolution from being ideal is a constant $\mathcal{E}_{ni}$ along any field line that does not intercept a magnetic field null \cite{Boozer:e-runaway2019}. A general Ohm's law has the form $\vec{E}+\vec{v}\times\vec{B} =\vec{\mathcal{R}}$, where $\vec{v}$ is the plasma velocity \cite{Schindler:1988}. In non-relativistic theory, which is used in this paper, $\vec{\mathcal{R}}$ is the electric field in a frame that moves with the plasma. Let $\Phi$ be a solution to $\vec{B}\cdot\vec{\nabla}\Phi=-\vec{B}\cdot\vec{\mathcal{R}}+\mathcal{E}_{ni} B$, where $\mathcal{E}_{ni}$ is constant along each magnetic field line. Define a velocity perpendicular to the magnetic field $\vec{u}_\bot$ by \begin{eqnarray} &&\vec{v}_\bot = \vec{u}_\bot +\vec{B}\times \frac{\vec{\mathcal{R}}+\vec{\nabla}\Phi}{B^2}, \hspace{0.2in}\mbox{then}\hspace{0.2in}\\ &&\vec{E}+\vec{u}_\bot\times\vec{B}=-\vec{\nabla}\Phi +\mathcal{E}_{ni}\vec{\nabla}\ell.
\label{E} \end{eqnarray} The distance along a magnetic field line is $\ell$, and \begin{equation} \mathcal{E}_{ni} \equiv \frac{\int \vec{E}\cdot\hat{b}\,d\ell}{\int d\ell}, \end{equation} where both integrals are calculated as the limits of integration go to plus/minus infinity or from one contact with a perfectly conducting wall to another. \section{Magnetic helicity conservation \label{sec:A.B cons} } The concept of magnetic helicity was introduced in 1958 by Woltjer \cite{Woltjer:1958} and became a well-known concept after it was used in 1974 by Taylor to explain the reversal of the toroidal magnetic field in turbulent relaxations of the reversed field pinch \cite{Taylor:1974}. As $\mathcal{E}_{ni}\rightarrow0$, the loss of magnetic helicity $\int\vec{A}\cdot\vec{B}d^3x$ in a volume defined by magnetic surfaces or by rigid perfect conductors goes to zero during the time to trigger a fast magnetic reconnection, $\tau_{trig}$, Equation (\ref{tau_trig}). During that time, the fractional change in the helicity is of order $(\ln R_m)/R_m$, where $R_m=(u_\bot B)/\mathcal{E}_{ni}$. This is a different statement on helicity loss from the bound obtained by Berger \cite{Berger:1984}. He found, when properly normalized, that the dissipation of magnetic helicity in a turbulent plasma is smaller than the dissipation of magnetic energy. A calculation of the helicity change begins with the time derivative of $\vec{A}\cdot\vec{B}$. Using $\vec{E}=-\partial \vec{A}/\partial t - \vec{\nabla}\Phi$ with $\vec{B}=\vec{\nabla}\times\vec{A}$, \begin{eqnarray} &&\left(\frac{\partial \vec{A}\cdot\vec{B}}{\partial t}\right)_{\vec{x}}=-(\vec{E}+\vec{\nabla}\Phi)\cdot\vec{B}-\vec{A}\cdot\vec{\nabla}\times\vec{E} \hspace{0.2in}\\ &&\hspace{0.2in} = -2\vec{E}\cdot\vec{B}-\vec{\nabla}\cdot(\Phi \vec{B}) +\vec{\nabla}\cdot(\vec{A}\times\vec{E}).
\end{eqnarray} Using Equation (\ref{E}) for the electric field \begin{eqnarray} \vec{A}\times\vec{E}&=&-(\vec{A}\cdot\vec{B})\vec{u}_\bot +(\vec{A}\cdot\vec{u}_\bot)\vec{B}+\vec{\nabla}\times(\Phi\vec{A})\nonumber\\ &&-\Phi\vec{B}-(\mathcal{E}_{ni}\vec{\nabla}\ell)\times\vec{A}. \end{eqnarray} Consequently, \begin{eqnarray} &&\left(\frac{\partial \vec{A}\cdot\vec{B}}{\partial t}\right)_{\vec{x}}=- 2(\vec{E}+\vec{\nabla}\Phi)\cdot\vec{B} -\vec{\nabla}\cdot\vec{\mathcal{F}} \hspace{0.1in}\mbox{ where } \hspace{0.2in}\\ \nonumber\\ &&\vec{\mathcal{F}}= (\vec{A}\cdot\vec{B})\vec{u}_\bot-(\vec{A}\cdot\vec{u}_\bot)\vec{B} +(\mathcal{E}_{ni}\vec{\nabla}\ell)\times\vec{A}.\hspace{0.3in} \end{eqnarray} The evolution of the magnetic helicity in a volume is \begin{eqnarray} \left(\frac{\partial \int\vec{A}\cdot\vec{B}d^3x}{\partial t}\right)_{\vec{x}}=-2\int\mathcal{E}_{ni}Bd^3x -\oint \vec{\mathcal{F}}\cdot d\vec{a}, \hspace{0.1in} \end{eqnarray} since $(\vec{E}+\vec{\nabla}\Phi)\cdot\vec{B}=\mathcal{E}_{ni}B$ by Equation (\ref{E}). The volumetric term $2\int\mathcal{E}_{ni}Bd^3x$ gives the stated fractional change in helicity, $\sim (\ln R_m)/R_m$, during the time required to trigger a fast magnetic reconnection. When the bounding surfaces are magnetic surfaces, surfaces on which the normal component of $\vec{B}$ vanishes, the only surface term that does not go to zero as $\mathcal{E}_{ni}$ does is $\oint (\vec{A}\cdot\vec{B})\vec{u}_\bot \cdot d\vec{a} $. This term implies the magnetic helicity moves with the magnetic field lines, hardly an unexpected result. The surface term $\oint (\mathcal{E}_{ni}\vec{\nabla}\ell)\times\vec{A}\cdot d\vec{a}$ can be shown to be the helicity input by the surface loop voltage--the time derivative of the poloidal magnetic flux with the toroidal magnetic flux held constant. When the bounding surface is a rigid perfect conductor, $\vec{u}_\bot=0$, then $\oint \vec{\mathcal{F}}\cdot d\vec{a}=0$.
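The passage from $\vec{A}\cdot\vec{\nabla}\times\vec{E}$ to divergence form in the helicity derivation above uses the vector identity $\vec{A}\cdot(\vec{\nabla}\times\vec{E}) = \vec{\nabla}\cdot(\vec{E}\times\vec{A}) + \vec{E}\cdot(\vec{\nabla}\times\vec{A})$. As a sanity check, the identity can be verified symbolically; the polynomial fields below are arbitrary placeholders, not the physical $\vec{A}$ and $\vec{E}$.

```python
from sympy import simplify
from sympy.vector import CoordSys3D, curl, divergence

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z

# Placeholder polynomial vector fields standing in for A and E.
A = x*y*R.i + z**2*R.j + (x + z)*R.k
E = y*z*R.i + x**2*R.j + y**2*R.k

lhs = A.dot(curl(E))
rhs = divergence(E.cross(A)) + E.dot(curl(A))
identity_residual = simplify(lhs - rhs)   # vanishes identically
```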
The component of the magnetic field normal to a perfect conductor need not be zero, but the velocity of the magnetic field lines in a rigid perfect conductor must be zero. Static force balance is broken when fast magnetic reconnection joins field lines that have different values of $j_{||}/B\equiv\vec{j}\cdot\vec{B}/B^2$. The smallness of the Debye length \cite{Boozer:NF3D} implies $\vec{\nabla}\cdot \vec{j}=0$, which is equivalent to \begin{eqnarray} && \vec{B}\cdot \vec{\nabla}\frac{j_{||}}{B}=\vec{B}\cdot\vec{\nabla}\times\frac{\vec{f}_L}{B^2}, \label{j/B force} \mbox{ where }\\ && \vec{f}_L = \vec{j}\times\vec{B} \label{f_L exp} \end{eqnarray} is the Lorentz force. In a near-ideal plasma, the force implied by Equation (\ref{j/B force}) relaxes by Alfv\'en waves, Appendix \ref{sec:Alfven}. This process is complicated by the rapid transfer of the energy in the Alfv\'en waves to the plasma through phase mixing in regions in which magnetic field lines exponentiate apart \cite{Heyvaerts-Priest:1983,Similon:1989}. This exponentiation in the separation is the cause of the exponential enhancement of sensitivity that leads to fast magnetic reconnection. Alfv\'en-wave damping heats ions through viscosity and electrons through resistivity. This damping presumably slows the relaxation of $j_{||}/B$, but this has not been studied. An Alfv\'enic flattening of $j_{||}/B$ appears consistent with tokamak experiments, but remarkably little has been published on the time empirically required for a drop in the internal inductance $\ell_i$ during tokamak disruptions. In a toroidal plasma, the flattening of $j_{||}/B$ over the volume covered by a magnetic field line requires that a shear Alfv\'en wave propagate a distance much greater than the radius of the plasma. A hundred toroidal transits may be required. When $j_{||}/B$ is flattened in a toroidal plasma, helicity dissipation, which is due to $\int\mathcal{E}_{ni}Bd^3x$, can become appreciable.
This enhancement of the helicity dissipation is due to two effects: (1) the overall cooling of the plasma and even more importantly (2) the spreading of $j_{||}/B$ into the regions of high resistivity at the plasma edge. These effects can be modeled using an evolution equation \cite{Boozer:e-runaway2019} for the net plasma current $I(\psi_t,t)$ enclosed in a region containing toroidal magnetic flux $\psi_t$ with $\partial I/\partial\psi_t=j_{||}/B$, \begin{eqnarray} \frac{\partial L I}{\partial t} &=& -2\mathcal{D}[I]; \label{I ev}\\ \mathcal{D}[I]&\equiv& - \psi_t \frac{\partial}{\partial\psi_t}\left\{\mathcal{R}_\psi \frac{dI}{d\psi_t} - \frac{\partial}{\partial\psi_t} \left(\psi_t\Lambda_m\frac{\partial^2 I}{\partial\psi_t^2} \right)\right\}. \hspace{0.1in} \label{Dissipation operator} \end{eqnarray} $L$ is an inductance, $\mathcal{R}_\psi$ is the plasma resistivity, and $\Lambda_m$ models the spreading of the plasma current in regions of stochastic magnetic field lines. Reference \cite{Boozer:e-runaway2019} gives examples of solutions, which show enhanced resistive decay of magnetic helicity when the magnetic field is stochastic. \section{Evolution of the magnetic energy \label{KE ev} } Poynting's theorem, Equation (\ref{Poynting eq}), implies $\vec{j}\cdot\vec{E}$ is the rate per unit volume at which energy is removed from the magnetic field. Using Equation (\ref{E}) for the electric field and Equation (\ref{f_L exp}) for the Lorentz force, \begin{equation} \vec{j}\cdot\vec{E} = \vec{u}_\bot\cdot\vec{f}_L + \mathcal{E}_{ni}\vec{j}\cdot\vec{\nabla}\ell. \end{equation} The first term on the right-hand side gives the ideal transfer and the second the dissipative loss of energy by the magnetic field. All of the energy lost by the magnetic field is eventually transferred to the plasma, but the ideal transfer includes the transfer of energy to Alfv\'en waves, which are then damped by the plasma as discussed in Section \ref{sec:A.B cons}.
The ideal energy transfer from the magnetic field also includes the direct transfer to the kinetic energy of charged particles. When the gyroradius of a non-relativistic charged particle is small, its kinetic energy is \begin{equation} K = \frac{m}{2} v_{||}^2 +\mu B + \frac{m}{2} u_{\bot}^2, \label{K} \end{equation} where $\mu$ is the adiabatically conserved magnetic moment. In his paper on the guiding center velocity, Northrop \cite{Northrop:1963} found that the time derivative of the kinetic energy is \begin{equation} \frac{dK}{dt} = q\vec{v}_g\cdot\vec{E} + \mu \left(\frac{\partial B}{\partial t}\right)_{\vec{x}}, \label{Northrop dK/dt} \end{equation} where $\vec{v}_g$ is the velocity of the guiding center of the particle. \subsection{Guiding-center velocity \label{sec:v_g} } Northrop's \cite{Northrop:1963} expression for the guiding-center velocity is \begin{eqnarray} \vec{v}_g &=& v_{||}\hat{b}+\frac{\hat{b}}{B}\times \Bigg(\vec{\nabla}\Phi - (\vec{E}+\vec{\nabla}\Phi) +\frac{\mu}{q} \vec{\nabla}B \hspace{0.3in} \nonumber\\ &&+\frac{m}{q}\frac{d (v_{||}\hat{b}+\vec{u}_\bot)}{dt} \Bigg). \end{eqnarray} Using Equation (\ref{E}) for the electric field, this expression can be written as \begin{eqnarray} \vec{v}_g &=& v_{||}\hat{b}+ \vec{u}_\bot + \frac{\hat{b}}{B}\times \Bigg(\vec{\nabla}\Phi - \mathcal{E}_{ni}\vec{\nabla}\ell+\frac{\mu}{q} \vec{\nabla}B \nonumber\\ &&+\frac{m}{q}\frac{d (v_{||}\hat{b}+\vec{u}_\bot)}{dt} \Bigg). \label{v_g anal} \end{eqnarray} The total time derivative of the magnetic field direction $\hat{b}\equiv \vec{B}/B$ and the magnetic field line velocity are \begin{eqnarray} \frac{d\hat{b}}{dt}&=& \left(\frac{\partial \hat{b}}{\partial t}\right)_{\vec{x} }+ (v_{||}\hat{b}+\vec{u}_\bot)\cdot \vec{\nabla} \hat{b} \\ \frac{d\vec{u}_\bot}{dt}&=& \left(\frac{\partial \vec{u}_\bot}{\partial t}\right)_{\vec{x} }+ (v_{||}\hat{b}+\vec{u}_\bot)\cdot \vec{\nabla} \vec{u}_\bot. \end{eqnarray} Three types of time derivatives must be distinguished.
The time derivative $(\partial f/\partial t)_{\vec{x}}$ is taken at a fixed spatial point. The Lagrangian or convective derivative, \begin{equation} \left(\frac{\partial f}{\partial t} \right)_L \equiv \left(\frac{\partial f}{\partial t} \right)_{\vec{x}} + \vec{u}_\bot\cdot\vec{\nabla}f, \label{Lagrangian derivative} \end{equation} is taken in a frame moving with the magnetic field lines. The total time derivative is taken in the frame of the charged particle, \begin{equation} \frac{df}{dt} \equiv \left(\frac{\partial f}{\partial t} \right)_{\vec{x}} + (v_{||}\hat{b}+ \vec{u}_\bot)\cdot\vec{\nabla}f, \end{equation} ignoring terms that go to zero as $qB/m$ goes to infinity. Keeping those terms would require second-order drifts for consistency. The velocity of the particle along the magnetic field $v_{||}\hat{b}$ and the velocity with which it is carried by the motion of the magnetic field lines $\vec{u}_\bot$ are the only two components of the velocity that do not go to zero as the gyrofrequency $qB/m\rightarrow\infty$ with everything else held fixed. The total time derivative of the parallel velocity gives not only the curvature drift of particles but also a term along the magnetic field, \begin{eqnarray} &&\frac{d(v_{||}\hat{b})}{dt}=\frac{dv_{||}}{dt}\hat{b} + v_{||}^2 (\hat{b}\cdot\vec{\nabla})\hat{b}, \mbox{ where } \label{dv_||/dt} \\ &&(\hat{b}\cdot\vec{\nabla})\hat{b}\equiv\vec{\kappa}, \end{eqnarray} which is the curvature of the magnetic field line. \subsection{Expression for $dK/dt$} The electric field enters Northrop's expression for $dK/dt$, Equation (\ref{Northrop dK/dt}), in two ways. One is implicit, in the expression for $\vec{v}_g$, which was analyzed in Section \ref{sec:v_g}, and the other is explicit, which will now be considered.
Substituting the electric field from Equation (\ref{E}) into Northrop's expression for $dK/dt$, Equation (\ref{Northrop dK/dt}), one finds \begin{eqnarray} \frac{dK}{dt} &=& q\vec{v}_g\cdot \left( -\vec{u}_\bot \times \vec{B} - \vec{\nabla}\Phi + \mathcal{E}_{ni}\vec{\nabla}\ell \right) \nonumber\\ && +\mu \left(\frac{\partial B}{\partial t}\right)_{\vec{x}}. \label{second dK/dt} \end{eqnarray} Using Equation (\ref{v_g anal}) for $\vec{v}_g$ and Equation (\ref{dv_||/dt}) for the time derivative of the particle velocity along the magnetic field, the term in Equation (\ref{second dK/dt}) becomes \begin{eqnarray} q\vec{u}_\bot\cdot(\vec{v}_g\times\vec{B}) &=&q\vec{u}_\bot\cdot\Bigg(\vec{\nabla}\Phi - \mathcal{E}_{ni}\vec{\nabla}\ell+\frac{\mu}{q} \vec{\nabla}B \nonumber\\ &&+\frac{m}{q}\frac{d (v_{||}\hat{b}+\vec{u}_\bot)}{dt} \Bigg)_\bot \nonumber \\ &=& \vec{u}_\bot\cdot\vec{\nabla}(q\Phi +\mu B) +\frac{m}{2} \frac{du_\bot^2}{dt} \nonumber\\ && +mv_{||}^2 \vec{u}_\bot \cdot\vec{\kappa} - \mathcal{E}_{ni}\vec{u}_\bot\cdot\vec{\nabla}\ell. \end{eqnarray} Equation (\ref{second dK/dt}) for $dK/dt$ can then be written \begin{eqnarray} &&\frac{dK}{dt} = \frac{m}{2} \frac{d u_\bot^2}{d t} + \mu \left(\frac{\partial B}{\partial t}\right)_L - qv_{||}\hat{b}\cdot\vec{\nabla}\Phi \nonumber\\ && + mv_{||}^2\vec{u} _\bot\cdot\vec{\kappa} + v_{||} m\vec{u}_\bot \cdot \left(\frac{\partial\hat{b}}{\partial t}\right)_L +q v_{||}\mathcal{E}_{ni}. \hspace{0.1in} \end{eqnarray} \section{Evolution of the parallel kinetic energy \label{sec:dK_||/dt} } The kinetic energy of a particle can be separated into the part given by the velocity of the particle perpendicular to the magnetic field $K_\bot=\mu B + \frac{m}{2}u_\bot^2$ and the part $K_{||}=\frac{m}{2} v_{||}^2$ associated with its velocity along the magnetic field with $K=K_{||}+K_\bot$. Here the focus is on the evolution of the parallel kinetic energy since $K_{\bot}$ is already expressed in terms of easily determined quantities, $B$ and $u_\bot^2$.
The kinetic energy, Equation (\ref{K}), can be written $K=K_{||} + \mu B + \frac{m}{2}u_\bot^2$. The time derivative of the parallel kinetic energy is \begin{eqnarray} \frac{dK_{||}}{dt} &=& \frac{dK}{dt} -\mu \frac{dB}{dt} - \frac{m}{2} \frac{d u_\bot^2}{dt}. \end{eqnarray} The difference between $dB/dt$ and $(\partial B/\partial t)_L$ is $dB/dt -(\partial B/\partial t)_L = v_{||} \hat{b}\cdot \vec{\nabla}B$, so \begin{eqnarray} \frac{dK_{||}}{dt}&=& -\mu v_{||} \hat{b}\cdot\vec{\nabla}B + q v_{||} ( \mathcal{E}_{ni} -\hat{b}\cdot\vec{\nabla}\Phi) \nonumber \\ && + v_{||} m\vec{u}_\bot \cdot \left(\frac{\partial\hat{b}}{\partial t}\right)_L + mv_{||}^2\vec{u}_\bot\cdot\vec{\kappa}. \label{K_|| ev} \hspace{0.2in} \end{eqnarray} The last two terms in Equation (\ref{K_|| ev}) can be placed in a more useful form through further analysis. An expression for $\vec{u}_\bot\cdot\vec{\kappa}$ is obtained in Section \ref{u.kappa}. The analysis of the $\vec{u}_\bot \cdot(\partial\hat{b}/\partial t)_L$ term and further analysis of the $\vec{u}_\bot\cdot\vec{\kappa}$ term will require constraints that become clear with the use of Lagrangian coordinates, Appendix \ref{sec:L-coord}. \subsection{Expression for $\vec{u}_\bot\cdot\vec{\kappa}$ \label{u.kappa}} An expression for $\vec{u}_\bot\cdot\vec{\kappa}$ can be obtained using the magnetic Poynting theorem. \subsubsection{Magnetic Poynting theorem} Amp\`ere's law plus Faraday's law imply the magnetic Poynting theorem, \begin{equation} \left(\frac{\partial}{\partial t}\frac{B^2}{2\mu_0} \right)_{\vec{x}} + \vec{\nabla}\cdot\left(\frac{\vec{E}\times\vec{B}}{\mu_0}\right) + \vec{j}\cdot\vec{E}=0.
\label{Poynting eq} \end{equation} The electric field is given by Equation (\ref{E}), which can be used to obtain \begin{eqnarray} &&\frac{\vec{E}\times\vec{B}}{\mu_0} = \frac{B^2}{\mu_0} \vec{u}_\bot + \frac{\vec{B}}{\mu_0}\times\vec{\nabla}\Phi -\frac{\vec{B}}{\mu_0}\times\mathcal{E}_{ni}\vec{\nabla}\ell; \hspace{0.2in} \\ &&\vec{\nabla}\cdot\left(\frac{\vec{E}\times\vec{B}}{\mu_0}\right) = \vec{\nabla}\cdot\left( \frac{B^2}{\mu_0} \vec{u}_\bot \right) +\vec{j}\cdot\vec{\nabla}\Phi \nonumber\\ && \hspace{0.3in} - \mathcal{E}_{ni} \vec{j}\cdot\vec{\nabla}\ell + (\vec{B}\times\vec{\nabla}\ell)\cdot\vec{\nabla}\mathcal{E}_{ni}. \label{div(EXB)} \end{eqnarray} Inserting Equation (\ref{div(EXB)}) into Equation (\ref{Poynting eq}) and using Equation (\ref{E}) for the electric field gives \begin{equation} \left(\frac{\partial}{\partial t}\frac{B^2}{2\mu_0} \right)_{\vec{x}} + \vec{\nabla}\cdot\left(\frac{B^2}{\mu_0}\vec{u}_\bot\right) + \vec{u}_\bot\cdot\vec{f}_L=(\vec{\nabla}\ell\times\vec{B})\cdot\vec{\nabla}\mathcal{E}_{ni}. \end{equation} \subsubsection{Field line curvature} The magnetic field-line curvature, $\vec{\kappa} \equiv \hat{b}\cdot\vec{\nabla}\hat{b}$, can be written using $\vec{\nabla}(\vec{B}\cdot\vec{B}/B^2)=0$ as \begin{eqnarray} \vec{\kappa} &=& -\hat{b}\times\vec{\nabla}\times\frac{\vec{B}}{B} \\ &=& \frac{\mu_0}{B^2} \vec{f}_L + \frac{\vec{\nabla}_\bot B}{B}. \label{kappa-f_L} \end{eqnarray} Using Equation (\ref{kappa-f_L}) for the Lorentz force, the magnetic Poynting theorem can be written \begin{eqnarray} &&\left(\frac{\partial}{\partial t}\frac{B^2}{2\mu_0} \right)_{\vec{x}} + \vec{\nabla}\cdot\left(\frac{B^2}{\mu_0}\vec{u}_\bot\right) -\vec{u}_\bot\cdot \vec{\nabla}_\bot \frac{B^2}{2\mu_0} = \nonumber\\ && \hspace{0.8in} - \frac{B^2}{\mu_0}\vec{u}_\bot\cdot\vec{\kappa} +(\vec{\nabla}\ell\times\vec{B})\cdot\vec{\nabla}\mathcal{E}_{ni}.
\end{eqnarray} Since \begin{eqnarray} &&\vec{\nabla}\cdot\left(\frac{B^2}{\mu_0}\vec{u}_\bot\right) -\vec{u}_\bot\cdot \vec{\nabla}_\bot \frac{B^2}{2\mu_0} \nonumber\\ && \hspace{0.2in} = \vec{u}_\bot\cdot \vec{\nabla}_\bot \frac{B^2}{2\mu_0} +\frac{B^2}{\mu_0}\vec{\nabla}\cdot\vec{u}_\bot, \end{eqnarray} the curvature term can be written as \begin{eqnarray} \vec{u}_\bot\cdot\vec{\kappa} &= & -\frac{\mu_0}{B^2}\Bigg( \left(\frac{\partial}{\partial t}\frac{B^2}{2\mu_0} \right)_L -(\vec{\nabla}\ell\times\vec{B})\cdot\vec{\nabla}\mathcal{E}_{ni}\Bigg) \nonumber \\ && - \vec{\nabla}\cdot\vec{u}_\bot . \hspace{0.3in} \end{eqnarray} The divergence of the magnetic field line velocity is related to the Jacobian of Lagrangian coordinates, $\vec{\nabla}\cdot\vec{u}_\bot=(\partial \ln(J_L)/\partial t)_L$, Equation (\ref{J-L}). Therefore, \begin{equation} \vec{u}_\bot\cdot\vec{\kappa} =-\left(\frac{\partial \ln(J_LB)}{\partial t} \right)_L +\frac{ ( \vec{\nabla}\ell\times\vec{B})\cdot\vec{\nabla}\mathcal{E}_{ni} }{B^2/\mu_0}. \hspace{0.3in} \end{equation} Equation (\ref{B-form}) for $\vec{B}$ in Lagrangian coordinates implies $\Lambda_m\rightarrow J_LB/B_0$ as $\Lambda_u/\Lambda_s\rightarrow\infty$, where $B_0$ is the magnetic field strength in Lagrangian coordinates at $t=0$. In the limit $\Lambda_u/\Lambda_s\rightarrow\infty$, \begin{equation} \vec{u}_\bot\cdot\vec{\kappa} =-\left(\frac{\partial \ln(\Lambda_m)}{\partial t} \right)_L +\frac{ ( \vec{\nabla}\ell\times\vec{B})\cdot\vec{\nabla}\mathcal{E}_{ni} }{B^2/\mu_0}. \hspace{0.3in} \label{u-kappa} \end{equation} \subsection{Effective parallel electric field, $\mathcal{E}_{||}$ \label{Eff.
E_||} } Equation (\ref{K_|| ev}) for the time derivative of the parallel kinetic energy and Equation (\ref{u-kappa}) for $\vec{u}_\bot\cdot\vec{\kappa}$ can be combined to obtain a general expression for the time derivative of the parallel kinetic energy in the small-gyroradius limit, \begin{eqnarray} \frac{dK_{||}}{dt} &+& 2K_{||}\left(\left(\frac{\partial \ln(\Lambda_m)}{\partial t} \right)_L+ \frac{ \vec{B}\cdot( \vec{\nabla}\ell\times\vec{\nabla}\mathcal{E}_{ni}) }{B^2/\mu_0}\right) \nonumber\\ && = -\mu v_{||} \hat{b}\cdot\vec{\nabla}B + q v_{||} ( \mathcal{E}_{ni} -\hat{b}\cdot\vec{\nabla}\Phi) \nonumber \\ && \hspace{0.2in} + v_{||} m\vec{u}_\bot \cdot \left(\frac{\partial\hat{b}}{\partial t}\right)_L \nonumber \\ && = qv_{||}\mathcal{E}_{||}. \label{dK_||/dt equation} \end{eqnarray} The effective parallel electric field is \begin{eqnarray} \mathcal{E}_{||} &\equiv& \mathcal{E}_{ni} -\hat{b}\cdot\vec{\nabla}\left( \frac{\mu}{q}B+\Phi\right) +m\vec{u}_\bot \cdot \left(\frac{\partial\hat{b}}{\partial t}\right)_L. \hspace{0.3in} \label{mathcal(E) deff} \end{eqnarray} Equation (\ref{mathcal(E) deff}) for the effective parallel electric field can be simplified. The unit vector along the magnetic field can be written as $\hat{b}=\hat{b}_I+\vec{b}_{ni}$, where $\vec{b}_{ni}$ is the assumed small change in the direction of the magnetic field produced by non-ideal effects and $\hat{b}_I$ is the unit vector along the ideal part of the magnetic field. Appendix \ref{sec:L-coord} shows $\hat{b}_I=\hat{M}$ in Lagrangian coordinates. Equation (\ref{u.dbI/dt}) implies \begin{equation} m\vec{u}_\bot \cdot \left(\frac{\partial\hat{b}}{\partial t}\right)_L=\hat{b}_I\cdot \vec{\nabla}\frac{mu_\bot^2}{2} + m\vec{u}_\bot \cdot \left(\frac{\partial\vec{b}_{ni} }{\partial t}\right)_L. \end{equation} The other $\hat{b}$'s in Equation (\ref{mathcal(E) deff}) can be replaced by $\hat{b}_I$. 
The effective parallel electric field is then \begin{eqnarray} && \mathcal{E}_{||}=\mathcal{E}_{ni} -\hat{b}_I\cdot\vec{\nabla}\Phi_{eff} +\frac{m}{q} \vec{u}_\bot\cdot \left( \frac{\partial\vec{b}_{ni}}{\partial t}\right)_L; \hspace{0.2in} \label{eff.E-field} \\ && \Phi_{eff} \equiv \Phi +\frac{\mu B_I}{q} - \frac{mu_\bot^2}{2q}. \end{eqnarray} \subsection{Equation for $dK_{||}/dt$ } The complicated term that multiplies $K_{||}$ in the first line of Equation (\ref{dK_||/dt equation}) can also be simplified. \begin{eqnarray} \frac{\vec{B}\cdot ( \vec{\nabla}\ell\times \vec{\nabla}\mathcal{E}_{ni})}{B^2}&=&\ \frac{\partial \ell}{\partial\alpha_I} \frac{\partial \mathcal{E}_{ni}}{\partial\beta_I} - \frac{\partial \ell}{\partial\beta_I} \frac{\partial \mathcal{E}_{ni}}{\partial\alpha_I} \hspace{0.1in} \label{ell-E_|| term} \end{eqnarray} where $\alpha_I$ and $\beta_I$ are the Clebsch potentials of the ideal magnetic field, $\vec{B}_I=\vec{\nabla}\alpha_I\times\vec{\nabla}\beta_I$. As discussed in \cite{Boozer:ideal-ev}, $\alpha_I$ and $\beta_I$ can be locally chosen so the derivatives with respect to $\alpha_I$ are exponentially large and derivatives with respect to $\beta_I$ are exponentially small, and their combination has a weak dependence on time. Consequently, this term is rarely important. The time derivative of the parallel energy can then be written as $dK_{||}/dt= -(\partial\ln\Lambda_m^2/\partial t)_L K_{||} +qv_{||}\mathcal{E}_{||}$. 
The rate of change of the parallel kinetic energy is then \begin{eqnarray} \frac{dK_{||}}{dt} = \nu_K K_{||} + qv_{||}\mathcal{E}_{||}, \hspace{0.2in} \label{dK||/dt eq} \end{eqnarray} where the growth rate of $K_{||}$ can be written in three forms \begin{eqnarray} \nu_K &\equiv& -\left(\frac{\partial\ln\Lambda_m^2}{\partial t}\right)_L \label{nu_K} \\ &=& -\left(\frac{\partial\ln\Lambda_m^2}{\partial t}\right)_{\vec{x}} - \vec{u}_\bot \cdot \vec{\nabla}\ln\Lambda_m^2 \\ &=& -\frac{d\ln\Lambda_m^2}{dt}+ v_{||}\hat{b}_I\cdot\vec{\nabla}\ln\Lambda_m^2. \end{eqnarray} The first two forms demonstrate that $\nu_K=0$ when the magnetic field is static, as would be expected. The third form for $\nu_K$ implies $d\ln(\Lambda_m^2K_{||})/dt =v_{||}\hat{b}_I\cdot\vec{\nabla}\ln\Lambda_m^2$ when $\mathcal{E}_{||}=0$. Since $(\int \hat{b}_I\cdot\vec{\nabla}\ln\Lambda_m^2 d\ell)/(\int d\ell)$ goes to zero as the range of integration goes to infinity, it is the product of variations in $v_{||}$ and variations in $\hat{b}_I\cdot\vec{\nabla}\ln\Lambda_m^2$ with the distance $\ell$ along the magnetic field lines that gives large and long-term changes in $K_{||}$. Section \ref{u.kappa} shows that $\nu_K$ has a similar form to $\vec{u}_E\cdot\vec{\kappa}$, which appears in the last term in Equation (1) of Dahlin, Drake, and Swisdak \cite{Drake:2017} and was the dominant cause of the energization of particles. It should be noted that $\nu_K$ is non-zero even when the magnetic field is evolving ideally, so the creation of high-energy tails on particle distribution functions is not a proof that reconnection has taken place. Of course, both $\nu_K$ and $\mathcal{E}_{||}$ become larger when the magnetic field evolves on an Alfv\'enic time scale as in a fast magnetic reconnection.
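A toy calculation, not taken from the paper, illustrates how a coefficient that changes sign but does not average to zero exponentiates $K_{||}$: take a hypothetical $\nu_K(t)=\nu_0+A\sin(\omega t)$ in $dK_{||}/dt=\nu_K K_{||}$ with $\mathcal{E}_{||}=0$, and compare the numerically integrated growth with the closed-form exponent.

```python
import math

nu0, amp, omega = 0.5, 2.0, 10.0   # hypothetical mean rate and oscillation
T, steps = 10.0, 10000
dt = T/steps

lnK = 0.0                          # ln of K_|| normalized to its initial value
for i in range(steps):
    t_mid = (i + 0.5)*dt           # midpoint rule for the time integral of nu_K
    nu_K = nu0 + amp*math.sin(omega*t_mid)   # changes sign, averages to nu0
    lnK += nu_K*dt
K_ratio = math.exp(lnK)

# Closed form of the same integral for comparison.
K_exact = math.exp(nu0*T + (amp/omega)*(1.0 - math.cos(omega*T)))
```

The oscillating part of $\nu_K$ contributes only a bounded factor; the net growth $e^{\nu_0 T}$ is set by the non-vanishing average, which is why a sign-changing $\nu_K$ can still produce strong energization.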
\section{Summary \label{Summary} } An exponentially increasing sensitivity with time is a mathematical property of the ideal evolution equation $\partial\vec{B}/\partial t=\vec{\nabla}\times(\vec{u}_\bot\times\vec{B})$ in three dimensions. Since this is a mathematical property, it can only be refuted if a fundamental flaw can be found in the mathematics \cite{Boozer:Prevalent2018,Boozer:ideal-ev}. The same sensitivity is not present in two-dimensional systems, which probably accounts for the effect being overlooked in the reconnection literature. An understanding of fast magnetic reconnection requires answers to the four questions that are listed in the Introduction. This paper addresses helicity conservation and how energy is transferred from the magnetic field to the plasma in the near-ideal limit of $\mathcal{E}_{ni}\rightarrow0$. As a system evolves into a state of fast reconnection, the magnetic helicity is unchanged in a region bounded by magnetic surfaces or rigid perfectly conducting walls in the limit as $\mathcal{E}_{ni}\rightarrow0$, Section \ref{sec:A.B cons}. The fast magnetic reconnection process is quasi-ideal in the sense that it conserves magnetic helicity. Magnetic helicity does, however, decay due to resistivity. Early in the post-thermal-quench period of tokamak disruptions, a relatively rapid helicity decay occurs due to the breakup of the magnetic surfaces causing both a rapid plasma cooling and a spread of the plasma current into the highly resistive regions at the plasma edge and near surrounding walls. Energy transfer from the magnetic field to the plasma is subtle in the limit as $\mathcal{E}_{ni}\rightarrow0$. Part of the answer is the non-dissipative transfer of magnetic field energy to Alfv\'en waves, which are then damped on the plasma. Appendix \ref{sec:Alfven} derives the transfer equation. Two effects increase the kinetic energy of the motion of particles along the magnetic field $K_{||}=\frac{m}{2}v_{||}^2$.
The more important effect is apparently the exponentiation of $K_{||}$ through the coefficient $\nu_K$, Equation (\ref{nu_K}), which can change sign but does not average to zero. Another is an effective parallel electric field $\mathcal{E}_{||}$, Equation (\ref{eff.E-field}), which is far larger than the true parallel electric field $E_{||}$ when $\mathcal{E}_{ni}\rightarrow0$. An important term in $\mathcal{E}_{||}$ is the time derivative of the magnetic field that arises from non-ideal effects, $\vec{B}_{ni}$. This field is derived in \cite{Boozer:ideal-ev} and is shown to be an exponential function of time multiplying a term proportional to $\mathcal{E}_{ni}$ until $\vec{B}_{ni}$ contributes significantly to the total magnetic field. The properties of a near-ideal evolution in systems that depend on all three spatial coordinates are fundamentally different from those of systems that depend on only two, as in standard plasmoid theories. Two-dimensional plasmoid theory dominates modern reconnection studies \cite{Zweibel:review,Loureiro:2016}. A search on the Web of Science for ``plasmoid magnetic reconnection'' returned more than five hundred results. Nevertheless, the applicability of plasmoid theory to three-dimensional reconnection problems faces a number of mathematical challenges \cite{plasmoid challenges}. Plasmoid theories do not address the first two questions that are important for understanding fast magnetic reconnection. (1) Why does a magnetic evolution commonly take an arbitrary initial state into a state of fast magnetic reconnection? (2) How long does this evolution take? Plasmoid reconnection theories \cite{Zweibel:review,Loureiro:2016} are initiated by a highly unstable narrow current sheet, though it has been recognized that ``the formation of a current sheet and the subsequent reconnection process cannot be decoupled, as is commonly assumed'' \cite{Loureiro:2016}. The second question presupposes an answer to the first.
When $\mathcal{E}_{ni}\rightarrow0$, the answer in three-dimensional space is that the time to reach a state of rapid reconnection is given by Equation (\ref{tau_trig}). \section*{Acknowledgements} This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Award Numbers DE-FG02-95ER54333, DE-FG02-03ER54696, DE-SC0018424, and DE-SC0019479.
\section{Introduction} Beryllium has only one stable isotope, $^9$Be. The lack of stable nuclei with A=5 implies that it cannot be synthesised by capture of $\alpha$ particles, and the lack of stable nuclei with A=8 implies that it cannot be synthesised by proton or neutron captures either. The nuclear fusion reactions that can synthesise Be in a plasma involve several rare nuclei, specifically: $\rm ^7Li(^3H,n)^9Be$, $\rm ^7Be(^3H,^1H)^9Be$, and $\rm ^6He(^4He,n)^9Be$. In the conditions that characterise stellar interiors (including H and He burning shells), such reactions cannot synthesise Be fast enough to counter the inverse photodissociation reactions that destroy this element. Thus stars are net destroyers of Be. In the first minutes of existence of the primordial plasma, when most of the helium in the Universe was produced, very tiny amounts of $^9$Be could be formed; this can occur at the level of $\rm ^9Be/H\approx 10^{-18}$ \citep{Pitrou}, that is, almost eight orders of magnitude less than the primordial $^7$Li. This is however true under ``standard'' conditions, that is, if the plasma is homogeneous and there is no ``new physics''. If the primordial plasma was inhomogeneous and in particular included lower density $n$-rich regions, \citet{BoydKajino} showed that $\rm^3H$ and $\rm^7Li$ could be abundant enough for the $\rm ^7Li(^3H,n)^9Be$ reaction to become an efficient channel producing sizeable amounts of $^9$Be. A way to introduce new physics is to postulate the existence of relic particles, interacting either electromagnetically or strongly, which decay at late times \citep[see e.g.][and references therein]{Jed06,Kusakabe}. For example, \citet{Pospelov} showed that the energy injected by decaying hadrons can lead to an efficient $^9$Be production via the $\rm ^6He(^4He,n)^9Be$ reaction.
In fact they advocate the use of an upper limit on the primordial $^9$Be abundance as a powerful test to put limits on the energy and decay half-life of such putative relic hadrons. The common wisdom, supported by the observations (see below), is that all the observed Be is produced by spallation processes triggered by cosmic rays in the interstellar medium \citep{Reeves,Meneguzzi}. From the observational point of view, Be can be observed in solar-type stars via the \ion{Be}{ii} resonance doublet at 313\,nm. This makes the observation from the ground difficult, since this wavelength is rather close to the atmospheric cut-off. The Be abundance in the Sun was determined from these lines by \citet{Chm75}, who derived a Be abundance about 0.3\,dex lower than the meteoritic abundance. Soon after, \citet{Boesgaard76} found almost the same Be abundance in a sample of young stars; this uniformity led to the notion that Be, like Li, is depleted in the Sun. The solar abundance of Be was drastically revised by \citet{BB98}, who invoked the presence of an unaccounted continuum opacity at UV wavelengths and derived a Be abundance that is in good agreement with the meteoritic abundance. The motivation for this extra opacity was to force the UV and IR lines to yield the same O abundance. An analysis of these lines using 3D hydrodynamical simulations by \citet{Asplund04} confirmed the need for this extra opacity. It should be noted, however, that the source of this continuum opacity has not to date been identified and that it is not unanimously accepted \citep[see e.g.][]{BK02}. Recently \citet{Carlberg}, using a new line list in the near UV for generating theoretical solar spectra in the region of the Be lines, found that the difference in Be abundance is only 0.2\,dex with or without an extra opacity. This implies that, even using this extra opacity, Be is depleted by about 0.1\,dex in the solar photosphere.
The first attempts to measure Be in Pop II stars to study the Galactic evolution of Be began in the 1980s \citep{MB84,Molaro}; however, it was not until the late 1980s and 1990s that it became clear that the Be abundance shows a clear linear decrease with decreasing metallicity \citep{RMAB88,Gil91,Ryan92,Gil92,BK93,Molaro97}. If Be is produced only by cosmic rays, then the Be abundance can be used as a chronometer, provided there is a suitable model of the temporal evolution of the cosmic ray flux \citep{Beers00,Pasquini05}. The advent of 8\,m class telescopes with high resolution spectrographs that can observe down to the atmospheric cut-off allowed the measurement of Be in a large sample of field halo stars \citep{Boesgaard99,Primas00a,Primas00b,Boesgaard07,SmiljanicPB09,Ito,Tan09,Tan11,BoesgaardRL11} and also in two globular clusters \citep{Pasquini04,Pasquini07}. Be abundances in metal-poor stars allow probing the existence of inhomogeneities in the primordial Universe or the existence of late decaying relic particles. If there is no primordial production of Be, the linear decrease of the Be abundance with decreasing metallicity should continue no matter how low the metallicity of the star. If instead there is primordial production of Be, at some metallicity the Be abundance should stop decreasing and remain constant at all lower metallicities. Thus measurements and upper limits of Be at the lowest abundances are of paramount importance to probe a primordial production of Be. The discovery of the bright extremely metal-poor star 2MASS\,J1808-5104 ([Fe/H]=--3.8) by \citet{MelendezPT16} opens up the possibility of probing the Be abundance in stars at the lowest metallicities. In this paper we present the analysis of high signal-to-noise ratio (S/N) UV spectra of the star 2MASS\,J1808-5104\ acquired with the specific aim of investigating its Be abundance.
\section{Observational data} In order to observe the Be line at 313\,nm, new spectra of the ultra metal-poor (UMP) dwarf 2MASS\,J1808-5104 were obtained in June 2018 with the Very Large Telescope (VLT) and the spectrograph UVES \citep{DekkerDK00}. Ten 1\,h exposures were obtained during the night of June 21-22. The dichroic beam-splitter was used, permitting simultaneous use of the blue and red arms. The blue arm was centred at 346\,nm and the red arm at either 760 or 860\,nm. With the spectra obtained previously with UVES by \citet{MelendezPT16}, the spectral coverage of this UMP dwarf is almost complete from 310\,nm to 1\,000\,nm (with only a gap between 452.3\,nm and 478.6\,nm). The resolving power $R$ is close to 50\,000 in the blue and 40\,000 in the red. In the region of the Be doublet (313\,nm), the S/N of the spectrum is close to 70, the value expected in the case of good weather (seeing of 1'' and good transparency); it is about 250 at 370\,nm and 350 at 670\,nm. The spectra were reduced using the ESO UVES pipeline version 5.8.2; the basic concepts and algorithms of the pipeline can be found in \citet{BallesterMB00} and in the user manual. The spectra were extracted using optimal extraction, and flat-fielding was performed on the extracted spectra. Two different flat-field lamps were used: a deuterium lamp below 340\,nm and a tungsten lamp for longer wavelengths. The spectra were wavelength calibrated using the Th-Ar lamp exposures. We carefully measured the radial velocity on our spectra and on the previous UVES spectra \citep{MelendezPT16}. The most precise measurement of radial velocities with UVES is obtained when stellar and telluric lines are present in the spectrum.
The zero point of the wavelength scale depends indeed on the position of the star on the slit \citep[see e.g.][]{Molaro08}, and the position of the telluric lines on the spectrum makes its definition possible.\\ In very metal-poor stars this is possible only in the yellow spectral domain centred at 580\,nm. Unfortunately, the determination of the zero point is not possible on the spectra centred at 346\,nm, since there are no telluric lines in this region. It is possible to determine the zero point on the spectra centred at 760\,nm, but in this wavelength range the \ion{Fe}{i}~ lines in the stellar spectrum are extremely weak, and only the positions of the hydrogen line $\rm H\alpha$~ and of the red \Cad~ triplet could be measured. In the blue and in the visible region (settings B346 and R580 in Table \ref{vrad}), the wavelengths of the stellar iron lines were compared to the wavelengths of numerous \ion{Fe}{i}~ lines taken from the list of \citet{NaveJL94}. The wavelengths of the telluric lines are from \citet{Jacquinet-HussonSC05}. The error on the barycentric radial velocities in Table \ref{vrad} should be less than 1.0\,$\rm km\,s^{-1}$. The star 2MASS\,J1808-5104 has been observed by Gaia DR2 \citep{GaiaDR2}, but its radial velocity is not provided. \section{Binary nature and orbit} \citet{Schlaufman18} confirmed the binary nature of 2MASS\,J1808-5104. Using 17 radial velocity measurements from spectra obtained with MIKE at the Magellan telescope, and three measurements obtained from the UVES R580 spectra (which we also used), \citet{Schlaufman18} were able to determine the orbital parameters for this system. They also gathered 31 epochs of radial velocity measurements obtained from low resolution spectra using GMOS-S on the Gemini South telescope.
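The reduction from geocentric to barycentric velocities described above is simple bookkeeping: the barycentric correction is added to the geocentric velocity, and the apparent shift of the telluric lines is subtracted as a zero-point correction. A minimal sketch of this arithmetic, with the numbers of the first R580 exposure of Table \ref{vrad} (the sign convention is our reading of that table):

```python
def barycentric_rv(rv_geo, bary_corr, rv_telluric=0.0):
    """Barycentric radial velocity from a geocentric measurement (km/s).

    The apparent shift of the telluric lines is treated as a zero-point
    offset of the wavelength scale and subtracted; this convention
    reproduces the RV(bary.) column of the radial-velocity table.
    """
    return rv_geo + bary_corr - rv_telluric

# First R580 exposure (19-10-2014): 46.50 - 24.22 - 0.62 km/s
print(round(barycentric_rv(46.50, -24.22, 0.62), 2))  # -> 21.66
```

The same one-liner reproduces every row of Table \ref{vrad} to the quoted 0.01\,$\rm km\,s^{-1}$.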
Since we have independent measurements of the radial velocities (the UVES R580 spectra and a new epoch from our UVES R760 spectrum), we decided to redetermine the orbital parameters of the system, combining our measurements with those of \citet{Schlaufman18}. In our opinion, the most robust determination of the period of a binary system comes from the power spectrum of the observations. To estimate the power spectrum of the radial velocity measurements we used the Lomb-Scargle periodogram \citep{Lomb,Scargle}. If we use only the measurements based on high resolution spectra, i.e. MIKE and UVES, no peak is statistically significant; in that case, all the peaks that appear could be due to random noise. In Fig.\,\ref{power_all} we show the power spectrum obtained from all the radial velocity measurements, including those based on the GMOS-S spectra. In this case, a highly significant peak, with a false alarm probability of less than 0.001, appears at a period of 34.7538 days. This period is almost identical to that obtained by \citet{Schlaufman18} using Keplerian fits to their high resolution data. We decided to fit a Keplerian orbit to our radial velocity measurements based on high resolution spectra, keeping the value of the period fixed. We used version 1.3 of the program {\tt velocity} \citep{Wichmann}. Our preferred solution is summarised in Table\,\ref{orbit_param} and is very close to that found by \citet{Schlaufman18}, except for the angle of the periastron, the time of passage at periastron, and the eccentricity. We did not run a Monte Carlo simulation to estimate errors, since this orbit is certainly preliminary. The star is bright enough that it should eventually have radial velocities for about 80 epochs from the RVS spectrograph on board Gaia \citep[see e.g.][]{Sartoretti}. The ensemble of ground-based and space-borne radial velocities will provide a much more accurate orbit.
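The period search can be reproduced with any standard Lomb-Scargle implementation. The sketch below is illustrative only: it generates synthetic radial velocities from a circular orbit with the period, systemic velocity, and semi-amplitude of Table \ref{orbit_param} (not the actual MIKE, UVES, and GMOS-S measurements) and uses scipy.signal.lombscargle rather than the tools employed here.

```python
import numpy as np
from scipy.signal import lombscargle

P_TRUE = 34.7538        # orbital period (days), Table orbit_param
GAMMA, K = 16.57, 9.23  # systemic velocity and semi-amplitude (km/s)

# Synthetic observations: 50 epochs at random times over ~4 years,
# circular orbit (e = 0) plus 1 km/s Gaussian noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1500.0, 50))
rv = GAMMA + K * np.sin(2.0 * np.pi * t / P_TRUE) + rng.normal(0.0, 1.0, t.size)

# Lomb-Scargle periodogram on a grid of trial periods; scipy expects
# angular frequencies, and the highest peak recovers the orbital period.
periods = np.linspace(5.0, 100.0, 20000)
power = lombscargle(t, rv - rv.mean(), 2.0 * np.pi / periods)
best_period = periods[np.argmax(power)]
print(f"best period: {best_period:.2f} d")
```

With a strong signal the highest peak falls at the orbital period; assigning a false alarm probability, as in Fig.\,\ref{power_all}, requires an additional bootstrap or analytic estimate.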
The orbit and the phased data for the high resolution measurements are shown in Fig.\,\ref{plot_orbit}, where we assumed an error of 1\,$\rm km\,s^{-1}$\ for all the measurements. The root-mean-square deviation of our computed orbit from the observations is 0.52\,$\rm km\,s^{-1}$. \begin{figure} \begin{center} \resizebox{7.5cm}{!} {\includegraphics[clip=true] {plot_scargle_all.eps}} \caption[]{Lomb-Scargle estimate of the power spectrum of all the radial velocity measurements. The lower dotted line corresponds to a false alarm probability of 0.25, the dashed line to 0.01, and the upper dotted line to 0.001.} \label{power_all} \end{center} \end{figure} \begin{table} \begin{center} \caption[]{Radial velocities of 2MASS\,J1808-5104 (all velocities in $\rm km\,s^{-1}$).} \label{vrad} \begin{tabular}{l@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~} c@{~~}c@{~~}c@{~~}c} \hline Date & MJD & RV & Bary. & RV & RV & sigma\\ & & (geo.) & corr. &(tell.)& (bary.) & \\ \hline \multicolumn{2}{l}{{\bf R580 spectra} }\\ 19-10-2014 & 56949.01289 & 46.50& -24.22& 0.62& 21.66& 0.26& \\ 19-10-2014 & 56949.02388 & 46.63& -24.23& 0.82& 21.58& 0.18& \\ 21-10-2014 & 56951.00724 & 42.79& -23.84& 0.37& 18.58& 0.25& \\ 21-10-2014 & 56951.01822 & 42.98& -23.85& 0.52& 18.61& 0.11& \\ 06-03-2015 & 57087.35943 & -3.38& +25.63& -0.13& 22.38& 0.13& \\ 06-03-2015 & 57087.37042 & -3.05& +25.62& -0.03& 22.60& 0.21& \\ \\ \multicolumn{2}{l}{{\bf B346 mean spectrum} }\\ 21-06-2018 & 58291.03014 & 17.71& +0.72& - & 18.43& 0.12&\\ \\ \multicolumn{2}{l}{{\bf R760 mean spectrum} }\\ 21-06-2018 & 58291.03014 & 18.28& +0.72& 0.25& 18.75& -- &\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \begin{center} \caption[]{ Orbit parameters for the system 2MASS\,J1808-5104.
} \label{orbit_param} \begin{tabular}{lr} \hline Radial velocity of the barycentre & 16.57 $\rm km\,s^{-1}$ \\ Radial velocity amplitude & 9.23 $\rm km\,s^{-1}$ \\ Eccentricity & 0.00 \\ Angle of the periastron & -33.71 degrees \\ Period & 34.7538 days \\ Time of passage at periastron & 2456453.72 HJD \\ \hline \end{tabular} \end{center} \end{table} \section{Stellar parameters} \citet{MelendezPT16} estimated the temperature of 2MASS\,J1808-5104 by imposing the excitation equilibrium of \ion{Fe}{i}~ lines and adding an empirical correction described in \citet{FrebelCJ13}. Since 2MASS\,J1808-5104 was observed by Gaia, we used the Gaia photometry recently released in Gaia DR2 \citep{ArenouLB18,GaiaDR2}, and the 3D maps of interstellar reddening \citep{CapitanioLV17,LallementCR18,Lallementpriv}, to improve these parameters. The Gaia photometry and the reddening are listed in Table \ref{gaiadata}. We note that, following \citet{Schlaufman18}, the mass of the secondary must be very low ($M_{2}=0.14 M_{\odot}$), and thus its contribution to the total flux is negligible. \begin{table} \begin{center} \caption[]{ Photometry and distance of 2MASS\,J1808-5104 ~~(Gaia DR2 6702907209758894848). The parallax of the star, parallax error, observed $g$ magnitude, and $(BP-RP)$ colour are given in the first row; then the distance of the star in pc is given, with the minimum and maximum values allowed by the parallax error. The extinction in the G magnitude and $E_{(BP-RP)}$ are found in the second row. Then the values $g_{0}$ and $(BP-RP)_{0}$ are the values of $g$ and $(BP-RP)$ corrected for the reddening, and G is the magnitude corrected for extinction and distance (absolute magnitude). } \label{gaiadata} \begin{tabular}{l@{~~}r@{~~}r@{~~}c@{~~}c@{~~}l@{~~}c@{~~}c@{~~}c@{~~} c@{~~}c@{~~}c@{~~}c} \hline Parallax& Par. & Obs. &$(BP-RP)$& dist.& dist. & dist.\\ mas & error & $g$ mag.& mag.
& (pc)& min & max \\ \hline 1.6775 & 0.0397& 11.756 & 0.903 & 596 & 582 & 611 \\ \hline \\ A(G) &$E_{(BP-RP)}$& $g_{0}$ &$(BP-RP)_{0}$ & G abs \\ =A(V) & mag. & mag. & mag. & mag. \\ \hline 0.210& 0.105 & 11.546 & 0.798 & $2.67 \pm 0.05$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \resizebox{7.5cm}{!} {\includegraphics [clip=true]{plot_orbit.eps}} \caption[]{Computed orbit for the 2MASS\,J1808-5104 system (using the parameters in Table \ref{orbit_param}) compared with the observed radial velocities based on high resolution spectra. The black points indicate the measurements of \citet{Schlaufman18}. The blue points indicate our measurements based on the UVES R580 spectra given in Table\,\ref{vrad}. The red point indicates our measurement based on the UVES R760 spectrum, as given in Table\,\ref{vrad}.} \label{plot_orbit} \end{center} \end{figure} In Fig.\,\ref{iso} we compare the position of 2MASS\,J1808-5104 in a G versus (BP-RP) diagram to the isochrones computed by Chieffi \& Limongi \citep{Chieffipriv}; we used the same code and prescriptions as \citet{SCL97}, for 12 and 14 Gyr and metallicities of $-3.0$ and $-4.0$. We note that the error on the G magnitude is very small and is inside the black dot in Fig.\,\ref{iso}. The position of the star in the diagram corresponds to a subgiant star with $T_\mathrm{eff}$=5\,600\,K, log\,g=3.4, and $M\approx0.8M_{\odot}$, and we decided to adopt this model. As a check, we also computed the $\rm H\alpha$~ profile for the 1D model adopted by Mel\'endez ($T_\mathrm{eff}$=5\,440\,K, log\,g=3.0) and the model adopted in this work ($T_\mathrm{eff}$=5\,600\,K, log\,g=3.4). The fit of the wings of $\rm H\alpha$~ is better with our model. The use of 3D profiles would even point towards a slightly hotter temperature \citep{AmarsiNB18}. \begin{figure} \begin{center} \resizebox{6.0cm}{7.0cm} {\includegraphics {isochieffi.ps}} \caption[]{Position of 2MASS\,J1808-5104 (black dot) in a G vs.
(BP-RP) diagram and comparison to isochrones computed by Chieffi \& Limongi for very metal-poor stars.} \label{iso} \end{center} \end{figure} \section{Analysis} Since the adopted atmospheric model is rather different from the model adopted by \citet{MelendezPT16}, we redetermined the abundances of the different elements with our adopted model. We carried out a classical local thermodynamic equilibrium (LTE) analysis using OSMARCS models \citep{GustafssonBE75,GustafssonEE03,GustafssonEE08}. The abundances were derived using equivalent widths, or fits of synthetic spectra when the lines were blended. We used the code {\tt turbospectrum} \citep{AlvarezP98}, which includes treatment of scattering in the blue and UV domains. These abundances are given in Table \ref{abund}. In Fig.\,\ref{kiv}, we show the dependence of the iron abundance on the wavelength, excitation potential, and equivalent width of the \ion{Fe}{i}~ lines. \begin{figure} \begin{center} \resizebox{8.0cm}{2.5cm} {\includegraphics {kivplotplezJ1808-5104-a.ps}} \resizebox{8.0cm}{2.5cm} {\includegraphics {kivplotplezJ1808-5104-b.ps}} \resizebox{8.0cm}{2.5cm} {\includegraphics {kivplotplezJ1808-5104-c.ps}} \caption[]{Iron abundance from individual lines in 2MASS\,J1808-5104 as a function of the wavelength, excitation potential, and equivalent width of the line.} \label{kiv} \end{center} \end{figure} \noindent--In Fig.\,\ref{kiv}\,b the iron lines with an excitation potential close to zero over-predict the iron abundance. This is because of non-LTE (NLTE) effects, so these lines are not taken into account in the determination of the mean iron abundance \citep[see Fig.\,3 in][]{CayrelDS04}.\\ --In Fig.\,\ref{kiv}\,c the iron abundance does not depend on the equivalent width of the line, which justifies the choice of the microturbulent velocity, i.e. $\rm v_{t}$\,=\,1.6\,$\rm km\,s^{-1}$.\\ The final 1D, LTE abundances are given in Table \ref{abund}.
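The [X/H] and [X/Fe] columns of Table \ref{abund} follow from simple differential bookkeeping against the adopted solar abundances; a minimal sketch, using the \ion{Mg}{i} entry of the table as the worked example:

```python
def brackets(a_star, a_sun, fe_h):
    """Return ([X/H], [X/Fe]) from absolute abundances A(X) = log(N_X/N_H) + 12."""
    x_h = a_star - a_sun
    return x_h, x_h - fe_h

# Mg I in 2MASS J1808-5104: A(Mg) = 4.16, solar A(Mg) = 7.54, [Fe/H] = -3.84
mg_h, mg_fe = brackets(4.16, 7.54, -3.84)
print(round(mg_h, 2), round(mg_fe, 2))  # -> -3.38 0.46
```

The same two subtractions reproduce every [X/H] and [X/Fe] entry of Table \ref{abund}.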
The solar abundances are from \citet{CaffauLS11} or \citet{LoddersPG09}. \begin{table} \begin{center} \caption[]{ Derived abundances in 2MASS\,J1808-5104 for $T_\mathrm{eff}$=5600\,K, log g=3.4, $\rm v_{t}$=1.6\,$\rm km\,s^{-1}$ (1D computations, OSMARCS model). } \label{abund} \begin{tabular}{l@{~}r@{~}r@{~}r@{~~~~}r@{~~}r@{}c@{~~}c@{~}c@{~~} c@{~~}c@{~~}c@{~~}c} \hline Elem. &$A(X)_{\odot}$&$A(X)_{\star}$& N& $\sigma$& [X/H]& ~[X/Fe] \\ \hline \ion{Fe}{i} & 7.52 & 3.63 & 65 & 0.08 & & \\ \ion{Fe}{ii} & 7.52 & 3.73 & 4 & 0.07 & & \\ Fe adopt.& 7.52 & 3.68 & & & --3.84 & \\ \hline \ion{Li}{i} & & 1.78 & \\ C (CH) & 8.50 & 5.15 &syn. & & --3.35 & ~+0.49 \\ O (OH) & 8.76 & 6.28 &syn. & & --2.48 & ~+1.36 \\ \ion{Na}{i} & 6.30 & 2.56 & 2 & 0.01 & --3.74 & ~+0.10 \\ \ion{Mg}{i} & 7.54 & 4.16 & 6 & 0.06 & --3.38 & ~+0.46 \\ \ion{Al}{i} & 6.47 & 1.93 & 2 & 0.04 & --4.54 &~--0.70 \\ \ion{Si}{i} & 7.52 & 3.69 & 1 & 0.04 & --3.83 & ~+0.01 \\ \Cau & 6.33 & 2.81 & 9 & 0.17 & --3.52 & ~+0.32 \\ \Scd & 3.10 &--0.53 & 3 & 0.12 & --3.63 & ~+0.21 \\ \Tiu & 4.90 & 1.67 & 8 & 0.05 & --3.23 & ~+0.61 \\ \ion{Ti}{ii} & 4.90 & 1.47 & 15 & 0.13 & --3.43 & ~+0.41 \\ \ion{Cr}{i} & 5.64 & 1.57 & 5 & 0.12 & --4.07 &~--0.23 \\ \ion{Co}{i} & 4.92 & 1.80 & 4 & 0.10 & --3.12 & ~+0.72 \\ \ion{Ni}{i} & 6.23 & 2.59 & 3 & 0.04 & --3.64 & ~+0.20 \\ \Srd & 2.92 &--1.86 & 2 & 0.04 & --4.78 &~--0.94 \\ \hline \end{tabular} \end{center} \end{table} \subsection{Carbon and oxygen abundances} \label{CO3D} In 2MASS\,J1808-5104, the carbon abundance deduced from the CH band, [C/Fe]=+0.49, is very close to the mean value found from 1D calculations for extremely metal-poor turn-off stars \citep{BonifacioSC09}: [C/Fe]=+0.45. But the CH band is sensitive to 3D effects \citep{GallagherCB16}.
We made use of a 3D {CO$^5$BOLD} model \citep{FreytagSL12}, belonging to the CIFIST grid \citep{LudwigCS09}, with parameters (5\,500\,K/3.5/--4.0) close to the stellar parameters of 2MASS\,J1808-5104, to compute the 3D correction, and we found ${\rm A_{\rm 3D}(C)}-{\rm A_{\rm 1D}(C)}=-0.40\pm 0.1$\,dex; the error in this case reflects the different estimates of this correction in different parts of the CH band. The oxygen abundance is derived from a fit of the ultraviolet OH band between 312.2 and 313.2\,nm. The uncertainty (scatter from line to line) is less than 0.1\,dex. The OH band is also strongly affected by 3D effects. For 12 OH lines, we computed the 3D corrections \citep{zolfito} and derived ${\rm A_{\rm 3D}(O)}-{\rm A_{\rm 1D}(O)}=-0.98\pm 0.08$\,dex. As a consequence, in 2MASS\,J1808-5104, A(C)=4.75, [C/H]=--3.75, [C/Fe]=+0.09, and A(O)=5.30, with [O/H]=--3.46 and [O/Fe]=+0.38. \subsection{Na and Al abundances} The abundances of these two elements in Table \ref{abund} have been deduced from the resonance lines, which are often affected by strong NLTE effects. We estimated the NLTE corrections from \citet{AndrievskySK07,AndrievskySK08}. In fact, the correction at this very low metallicity is small for \ion{Na}{i}: we found $\rm\Delta(A(Na))=-0.05$. But the correction is large for \ion{Al}{i}, i.e. $\rm\Delta(A(Al))=+0.68$. In 2MASS\,J1808-5104, the abundances of Na and Al given in Table \ref{abund}, after correction for NLTE effects, are A(Na)=2.51, [Na/Fe]=+0.05, A(Al)=2.61, and [Al/Fe]=--0.02. \begin{figure} \begin{center} \resizebox{\hsize}{!} {\includegraphics {J1808-5104-Li.ps}} \caption[]{Observed spectrum of 2MASS\,J1808-5104 in the region of the Li doublet and synthetic spectra computed with A(Li)=1.5 and 2.1 (blue lines) and 1.78 (red line, best fit).
} \label{lifig} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{\hsize}{!} {\includegraphics {J1808-5104-Be.ps}} \caption[]{Observed spectrum of 2MASS\,J1808-5104 in the region of the Be doublet and synthetic spectra computed with A(Be)=--3.0, --2.0, and --1.6 (blue lines), and --2.33 (red line, best fit). Taking into account the uncertainty in the position of the continuum, we adopted $\rm A(Be)\leq -2.0$\,dex, i.e. $\rm \log(Be/H) \leq -14.0$. } \label{befig} \end{center} \end{figure} \section{Li and Be abundances} The Li abundance is redetermined from our new high S/N spectra in the red (Fig.\,\ref{lifig}). We find A(Li)=1.78, a value very close to the Li abundance found by \citet{MelendezPT16}. If we apply the 3D-NLTE correction computed by \citet{Sbordone10}, we find A(Li)=1.88. As a consequence, Li is slightly depleted below the Spite plateau value \citep{Spite82a,Spite82b,Sbordone10}. Lithium is indeed a very fragile element; it is destroyed by fusion with protons when the temperature reaches about $2\times10^{6}$\,K. In a main sequence star this element is destroyed as soon as the convective zone reaches the layers where the temperature is higher than this fusion temperature. When the star leaves the main sequence it develops surface convection zones, which deepen as the star evolves to lower temperatures. The surface convection zone mixes the surface layer with deeper material in which lithium has been depleted, and the observed lithium abundance falls. Following \citet{PilachowskiSB93}, the decrease of the Li abundance for $T_\mathrm{eff}$ = 5600\,K is $-0.25\,\pm 0.25$\,dex, in excellent agreement with the observed Li abundance in 2MASS\,J1808-5104. The abundance of Be is determined by a $\chi^{2}$ fit of the observed spectrum (Fig.\,\ref{befig}) between 312.94 and 313.14\,nm. The best fit depends on the adopted position of the continuum, which is fortunately rather well defined in this region.
Only the bluest Be line is used, as is generally done in metal-poor stars, the reddest being much too weak. The Be line at 313.04\,nm is close to two OH lines, and to compute the profile of the global feature we use the 1D oxygen abundance given in Table \ref{abund}. The best fit is obtained for A(Be)=--2.33, but considering the S/N of the spectrum, it is reasonable to say that A(Be)<--2.0, or log(Be/H)<--14.0. Since the weak \ion{Be}{ii}~ lines are formed in the deep atmospheric layers, the abundances of Be computed under the 1D-LTE or the 3D-NLTE hypotheses are not significantly different \citep{Primas00b}. Although Be is destroyed at a higher temperature than Li ($3.5\times10^{6}$\,K), it is legitimate to ask whether Be has been depleted in 2MASS\,J1808-5104, since its Li abundance is slightly below the ``plateau''. However, we expect that for the same phase of the evolution of the star, and thus the same depth of the convective layer, the depletion of Be by dilution is much smaller than the depletion of Li. In the sample of \citet{BoesgaardRL11} there is no significant difference between the ratios [Be/Fe] in turn-off and in subgiant stars. \section{Discussion} \subsection{Be-Fe correlation} In Fig.\,\ref{befe-trend} we plot log(Be/H) versus [Fe/H] for a large sample of Galactic stars from \citet{SmiljanicPB09} and \citet{BoesgaardRL11}. When a star was in both lists we preferred the abundance given by \citet{SmiljanicPB09}, but the two are always very close. The dashed line in Fig.\,\ref{befe-trend} represents the regression line through these stars. \begin{figure} \begin{center} \resizebox{\hsize}{!} {\includegraphics{abfe-be-smil-boes2-J1808.ps}} \caption[]{For galactic stars, log(Be/H) vs. [Fe/H]. The green open circles are from \citet{SmiljanicPB09} and the blue filled circles from \citet{BoesgaardRL11}.
However, the position of G\,64-12 in this diagram takes into account our new measurement of the Be abundance in this star, adopting the model of \citet{Primas00b}, which is in better agreement with the Gaia-DR2 data. The upper limits on the abundance of Be in 2MASS\,J1808-5104 and BD\,$+44^\circ 493$~ are indicated with big red and blue open circles. The blue dashed straight line represents the mean relation. The curved red dash-dotted line at low metallicity represents the possibility of a plateau, suggested in particular by the previous high Be abundance found in G\,64-12 by \citet{Primas00b} and \citet{BoesgaardRL11}. The very low Be abundance in the two additional stars BD\,$+44^\circ 493$~ and 2MASS\,J1808-5104 rules out the possibility of a plateau.} \label{befe-trend} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{\hsize}{!} {\includegraphics {G64-12-BemodPrimas.ps}} \caption[]{Mean of the reduced G\,64-12 spectra obtained with the HIRES-Keck and the UVES-VLT spectrographs in the region of the Be doublet, and synthetic spectra computed with A(Be)=--3.0, --1.5, --1.2 (blue lines), and --1.35 (red line, best fit). } \label{BeG6412} \end{center} \end{figure} The two Be measurements at the lowest metallicity are for the stars G\,64-12 \citep{Primas00b,BoesgaardRL11} and G\,275-4 \citep{BoesgaardRL11}. For G\,64-12, \citet{Primas00b} measured $\rm \log(Be/H)=-13.10\pm 0.15$, while \citet{BoesgaardRL11} found $-13.43\pm 0.12$. The two values are compatible within errors; however, the difference can be entirely explained by the different atmospheric parameters adopted. \citet{Primas00b} indeed adopted $T_\mathrm{eff}$ = 6\,400\,K, $\log g$ = 4.1, while \citet{BoesgaardRL11} adopted $T_\mathrm{eff}$ = 6\,074\,K, $\log g$ = 3.75. This relatively high Be abundance in G\,64-12 suggested at that time the possible existence of a ``plateau'' of the Be abundance at very low metallicity. As a check, we retrieved the UVES and HIRES spectra from the archives.
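The statement that the two G\,64-12 values are compatible within errors is a one-line check: the difference of 0.33\,dex amounts to about 1.7 quadratically combined standard deviations, i.e. no significant tension. A sketch of the arithmetic:

```python
import math

def tension(v1, s1, v2, s2):
    """Difference between two measurements in units of the combined 1-sigma error."""
    return abs(v1 - v2) / math.hypot(s1, s2)

# log(Be/H) in G 64-12: -13.10 +/- 0.15 vs -13.43 +/- 0.12
print(round(tension(-13.10, 0.15, -13.43, 0.12), 2))  # -> 1.72
```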
In the region of the Be line, the two sets of spectra have almost the same resolution and the same S/N, so it is possible to average them. The resulting spectrum has a S/N of more than 150 in the region of the Be lines. On this averaged spectrum we remeasured the O and Be abundances in G\,64-12, adopting the model of \citet{Primas00b}. The Be abundance is indeed very sensitive to surface gravity, and the Gaia DR2 parallax of G\,64-12 ($3.7626\pm 0.0856$\,mas) implies $\log g$ = 4.25; this value is very close to the gravity adopted by \citet{Primas00b}. We found that the best fit was obtained with A(Be)=--1.35, or $\rm \log(Be/H)=-13.35$ (Fig.\,\ref{BeG6412}). This value is intermediate between the \citet{Primas00b} and \citet{BoesgaardRL11} values. In Fig.\,\ref{befe-trend} the dot representing G\,64-12 takes into account this new measurement. The position of the very metal-poor stars G\,64-12, G\,64-37, and G\,275-4 in this figure could suggest the existence of a plateau at a level of $\rm \log(Be/H)\approx-13.4$. But before the present work, the most stringent upper limit on the primordial Be abundance was that found in the carbon enhanced metal-poor (CEMP) star BD\,$+44^\circ 493$ ([Fe/H]=--3.7) by \citet{Ito}, i.e. log(Be/H)$< -14.0$. The upper limit that we have derived for 2MASS\,J1808-5104 is essentially equivalent to that provided by BD\,$+44^\circ 493$. The two stars have similar atmospheric parameters and metallicities; however, there is a large difference in their carbon and oxygen abundances. With reference to the classification introduced by \citet{toposIV}, while BD\,$+44^\circ493$ is a low-carbon band CEMP star, 2MASS\,J1808-5104 is a carbon normal star. This finding confirms what was already noticed by \citet{Ito}: there is no correlation between CEMP nature and Be abundance. The very low abundance of Be in 2MASS\,J1808-5104 confirms that the possibility of a Be plateau at a level of $\rm \log(Be/H)\approx-13.6$ is ruled out (Fig.\,\ref{befe-trend}).
It seems reasonable to assume that the Be abundance continues its linear decrease with metallicity in the range --3.0 to --4.0\,dex. We found that, on average, $\rm \log(Be/H) = -10.75 + 0.89\,[Fe/H]$ (see the dashed line in Fig.\,\ref{befe-trend}). The slope of this line is very close to the slope found by \citet{BoesgaardRL11} from their stars alone. The low Be abundance found in 2MASS\,J1808-5104 and BD\,$+44^\circ 493$ is compatible with this linear regression. The upper limit on the primordial Be abundance, $\rm \log (Be_p/H) < -14$, is thus reinforced, as are the limits on late decaying hadrons provided by \citet{Pospelov}. At present there appears to be no hint of inhomogeneities in the primordial plasma or of the presence of late decaying particles. \begin{figure*} \begin{center} \resizebox{8.0cm}{!} {\includegraphics{abo-be-boes-plus-J1808-1D.ps}} \resizebox{8.0cm}{!} {\includegraphics{abo-be-boes-plus-J1808-3D.ps}} \caption[]{For galactic stars, log(Be/H) vs. [O/H], following \citet{BoesgaardRL11} (blue filled circles). The upper limits on the abundance of Be in 2MASS\,J1808-5104 and BD\,$+44^\circ 493$~ are indicated with big red and blue open circles. In A) the oxygen abundance was simply computed with the 1D LTE hypothesis, as in \citet{BoesgaardRL11}, and in B) this abundance has been corrected for 3D effects. The blue dashed line represents the mean relation, and the red dashed line the two-slope solution with a change of slope around [O/H]=--1.5\,dex, as in \citet{BoesgaardRL11}. The positions of BD\,$+44^\circ 493$~ and 2MASS\,J1808-5104 in this diagram are hardly compatible with the mean relations.} \label{beo-trend} \end{center} \end{figure*} \subsection{Be-O correlation} In Fig.\,\ref{beo-trend} we plot log(Be/H) as a function of [O/H] for the stars studied by \citet{BoesgaardRL11}, in which the oxygen abundance was deduced from 1D computations of the profile of some UV-OH lines; the authors estimated that the error in [O/H] is about 0.2\,dex.
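The mean Be-Fe relation quoted above also makes the consistency of the new upper limit easy to check: evaluated at the metallicity of 2MASS\,J1808-5104, the regression predicts a log(Be/H) just below the measured upper limit. A minimal sketch:

```python
def log_be_h(fe_h):
    """Mean Galactic Be-Fe relation, log(Be/H) = -10.75 + 0.89 [Fe/H]."""
    return -10.75 + 0.89 * fe_h

# At [Fe/H] = -3.84 (2MASS J1808-5104) the relation predicts ~ -14.17,
# consistent with the observed upper limit log(Be/H) < -14.0.
print(round(log_be_h(-3.84), 2))  # -> -14.17
```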
As discussed in section \ref{CO3D}, these lines are strongly affected by 3D effects. The 3D correction has been computed by \citet{AsplundGP01} and \citet{GonzalezBL10} with two different grids of 3D models, and their results are in good agreement. For dwarf and subgiant stars this correction depends mainly on the metallicity and temperature of the star; it is negligible for $\rm[Fe/H]>-1.0$ but reaches almost --1.0\,dex for $\rm[Fe/H]=-4.0$. The star BD\,$+44^\circ 493$~ is peculiar: it is very C-rich, and \citet{GallagherCB17} have shown that the carbon abundance affects the molecular equilibria in 3D hydrodynamical models in a much more prominent way than in 1D models. Using a C-rich hydrodynamical model atmosphere computed with the {CO$^5$BOLD}\ code \citep{FreytagSL12}, we determined that the 3D correction in this case is only \mbox{--0.3 dex}. As a consequence, [O/H] in BD\,$+44^\circ 493$~ is equal to --2.49. The OH lines can also be affected by NLTE effects. \citet{AsplundGP01} approximated the UV-OH line formation with a two-level approach with complete redistribution, but they neglected the influence of NLTE on the dissociation of the molecules. These authors found from 1D computations that the oxygen abundance derived from the UV-OH band is underestimated by about 0.2\,dex, almost independently of the stellar parameters. In order to obtain a correct estimate of the NLTE effect it would be necessary to take fully into account the 3D thermal structure of the model atmosphere; this is beyond the scope of this paper. For the time being, we neglect the NLTE correction and apply to all the stars of the sample of \citet{BoesgaardRL11} the 3D correction computed from \citet{GonzalezBL10}.\footnote{The 3D values of the oxygen abundance are given in a table available at the CDS.} In Fig. \ref{beo-trend}A the oxygen abundance was computed from 1D models (as in \citet{BoesgaardRL11}) and in Fig. 
\ref{beo-trend}B, this oxygen abundance was corrected for 3D effects. If we had applied the \citet{AsplundGP01} NLTE correction, all the dots in Fig. \ref{beo-trend}B would have been shifted by 0.2\,dex towards higher [O/H] values. When only Boesgaard's stars are considered, the data after 3D correction can be interpreted by a linear relation between log(Be/H) and [O/H] with a slope of 0.77. Alternatively, a two-slope relation \citep[see ][]{BoesgaardRL11} can be used, with a slope of 0.95 in the interval $-1.5<{\rm [O/H]}<0.0$ and a slope of 0.58 at lower metallicity. In Fig. \ref{beo-trend}A or B, 2MASS\,J1808-5104 and BD\,$+44^\circ 493$~ do not fit the general trend. The star BD\,$+44^\circ 493$~ has an oxygen abundance close to that of G\,64-12, and 2MASS\,J1808-5104 has about the same oxygen abundance as G\,275-4 and G\,64-37, but 2MASS\,J1808-5104, like BD\,$+44^\circ 493$, is clearly more deficient in Be. The Be abundance expected in 2MASS\,J1808-5104 from the mean relations in Fig.\,\ref{beo-trend} would in all cases be $\rm A(Be)\approx -1.6$, a value excluded by the observed spectrum (see Fig.\,\ref{befig}). Since BD\,$+44^\circ 493$~ is a C-rich star, it is possible that the high CNO abundances in this star are the result of a mass transfer from a ``now dead'' AGB companion. But since the star is a CEMP-no star (no enrichment of the neutron-capture elements), this interpretation is unlikely. Moreover, according to \citet{GaiaDR2}, the radial velocity of BD\,$+44^\circ 493$~ does not seem to be variable (RV=--147.9\,$\rm km\,s^{-1}$). As a consequence, the existence of a former pollution of the atmosphere of BD\,$+44^\circ 493$~ by a massive companion in its AGB phase is questionable. It is highly probable that the abundances of C, N, and O in the atmosphere of BD\,$+44^\circ 493$~ are a good witness of the abundances in the cloud which formed the star. \section{Conclusions} A two-slope solution of the relation log(Be/H) versus [O/H] is predicted by theory. 
It is now commonly accepted that all the observed Be is formed by spallation \citep{Reeves,Meneguzzi}; however, the following two distinct processes can be invoked: \begin{itemize} \item H and He nuclei in cosmic rays hit CNO nuclei in the ambient interstellar gas (secondary process). \item CNO nuclei in the cosmic rays hit H and He nuclei in the ambient interstellar gas (primary process). \end{itemize} If Be (and B) were formed preferentially by the secondary process, we would expect a quadratic dependence of the Be (B) abundance on the oxygen abundance, hence a slope of two in the logarithmic plane. The primary process, on the other hand, would imply a slope of one, which is what the observations of both Be and B indicate. The secondary process was probably invoked for the first time as the main process for the production of B (and Be by extension) by \citet{DuncanLL92} to explain their B observations. Further considerations on this point can be found in \citet{Duncan97,Molaro97} and \citet{RichBoes09}. However, from the theoretical point of view, \citet{Suzuki99} and \citet{Suzuki01} argued that the primary process is the main source of all the observed B and Be. \citet{BoesgaardRL11} suggested that the balance shifted from primary to secondary in the course of time. In the early days of Galactic evolution, the acceleration of CNO atoms from SNe II should be the main phenomenon, and the number of Be atoms should be proportional to the number of SNe II and thus to the number of O atoms. Later, the number of O atoms is proportional to the cumulative number of SNe II, while the energetic protons are proportional to the instantaneous number of SNe II. As a consequence, the slope of the relation log(Be/H) versus [O/H] is expected to change, and for this reason \citet{BoesgaardRL11} tried to describe the Galactic evolution of Be with two straight lines and a break at a metallicity around --1.5. The slopes of the different relations in Fig. 
\ref{beo-trend}B are slightly different from those of \citet{BoesgaardRL11} since we corrected the abundance of oxygen for 3D effects. In \citet{toposII} we argued that low-carbon band CEMP stars, such as BD\,$+44^\circ 493$, were formed from gas that was polluted by SNe that experienced a large fall-back of material onto the compact remnant, resulting in very high ratios of CNO elements to iron. We referred to these as ``faint supernovae'' (SNe) because we made the implicit assumption that the luminosities of these SNe would also be lower than those of SNe that do not experience fall-back. Observationally, this same name is given to type II SNe that are under-luminous, such as SN 1997D \citep{SN1997D}, which was also characterised by relatively low expansion velocities and a low mass of ejected $\rm ^{56}Ni$ \citep{Turatto98}. It is interesting to note that SN 1997D should not have been able to produce light elements via spallation. The cross section for production of $\rm ^9Be$ via spallation of oxygen drops drastically below projectile energies of a few MeV/A \citep[see figure 1 of ][]{Suzuki01}, which, translated into a velocity of the O nuclei, requires velocities in excess of 4000\,$\rm km\,s^{-1}$. A typical type II SN shows velocities of the ejecta that are of the order of 10\,000\,$\rm km\,s^{-1}$, thus in the useful range for Be production. On the other hand, SN 1997D showed an expansion velocity of the ejecta of only 1\,200\,$\rm km\,s^{-1}$ \citep{Turatto98}, which is clearly insufficient for Be production. The upper limit of BD\,$+44^\circ 493$\ seems to be consistent with the hypothesis that it was formed from a faint SN, characterised by strong fall-back, responsible for the high CNO to Fe ratios, and a low velocity of the ejecta, resulting in a Be content that is clearly lower than that of stars of similar [O/H]. 
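The translation between ejecta kinetic energy per nucleon and velocity invoked above can be sketched with a simple non-relativistic estimate (a back-of-the-envelope aid, not a calculation from the paper; the physical constants are the standard values):

```python
C_KM_S = 299_792.458   # speed of light [km/s]
AMU_MEV = 931.494      # rest energy of one atomic mass unit [MeV]

def velocity_km_s(e_per_nucleon_mev):
    """Non-relativistic velocity of a nucleus with the given kinetic
    energy per nucleon; adequate for E/A << 931 MeV."""
    return C_KM_S * (2.0 * e_per_nucleon_mev / AMU_MEV) ** 0.5

def energy_per_nucleon_mev(v_km_s):
    """Inverse relation: kinetic energy per nucleon for a given velocity."""
    return 0.5 * AMU_MEV * (v_km_s / C_KM_S) ** 2

# Typical type II SN ejecta (~10 000 km/s) versus SN 1997D (~1 200 km/s):
e_typical = energy_per_nucleon_mev(10_000)  # ~0.5 MeV per nucleon
e_1997d   = energy_per_nucleon_mev(1_200)   # ~0.007 MeV per nucleon
```

In this crude estimate the energy per nucleon of the SN 1997D ejecta is almost two orders of magnitude below that of a typical type II SN, which illustrates why its O nuclei fall far below the spallation-efficient regime.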
In fact it could well be that BD\,$+44^\circ 493$\ is completely devoid of Be and that the fact that the upper limit on its Be abundance falls exactly on the expected line of Be-Fe evolution is fortuitous. It would be of paramount importance to be able to push down the upper limit on the abundance of Be in BD\,$+44^\circ 493$, even by only 0.3\,dex. As a corollary, measurements of Be in other unevolved low-carbon band CEMP stars are strongly encouraged. If our scenario is correct, all unevolved low-carbon band CEMP stars should show a lower Be abundance than carbon-normal stars of similar [O/H]. The unevolved low-carbon band CEMP star HE\,1327-2326 has an upper limit on the Be abundance log (Be/H)$<-13.2$ and a 3D corrected [O/H]=--2.64 \citep[][for this star the 3D correction is --0.72\,dex]{Frebel08}. This upper limit is inconclusive, since it is above the Be abundance measured in stars of comparable O abundance, and also above both the two-slope model and the one-slope model. It would be extremely important to be able to push down this upper limit by 0.3 -- 0.4\,dex. A detection of Be at the same level as that in stars of similar O abundance would invalidate our scenario on faint SNe. While the low Be abundance in BD\,$+44^\circ 493$~ could be interpreted as a result of it being formed from the ejecta of a faint SN, the same cannot be invoked for 2MASS\,J1808-5104. As we have already argued, the Li abundance measured in 2MASS\,J1808-5104 is strong evidence that the Be in this star cannot have been significantly depleted. The fact that there are stars that have similar oxygen abundances but significantly different Be abundances is a real puzzle. A fundamental piece of information would, of course, come from a measurement of the Be abundance in 2MASS\,J1808-5104, or at least an upper limit lower by 0.3\,dex. This would at least rule out the possibility that the difference is simply due to large observational errors in both Be and O abundances. 
A far more intriguing possibility is that in the early Galaxy the tight correlation of Be with O breaks down: the scatter of the relation becomes larger. In a simple picture, 2MASS\,J1808-5104 and BD\,$+44^\circ 493$~ would be very old stars born in regions with anomalously high O, due to local inhomogeneities in the very early Galaxy. Given the very low metallicity of 2MASS\,J1808-5104, we may assume that the SNe that produced the metals we observe in its atmosphere were Pop III stars, possibly even a single Pop III star or a few at most. If the velocities of the ejecta in these SNe were higher than observed in normal Pop I and Pop II SNe, it may be that the lower-mass spallation products Li, Be, and B were fast enough to escape the cloud that gave rise to the next generation of stars, such as 2MASS\,J1808-5104. If this were the case, we would expect stars at the metallicity of 2MASS\,J1808-5104 ($\rm [Fe/H] \leq -3.5$) to be all devoid of Be and B. This would also be an interesting diagnostic to distinguish true descendants of Pop III stars from stars of similar metallicity, but formed from clouds polluted by Pop II stars. This clearly calls for new observations: on the one hand, it is important to measure Be and B in 2MASS\,J1808-5104 and other stars of similarly low metallicity and, on the other hand, one should increase the number of stars with measured Be and B in the metallicity range [Fe/H]$\le -2.0$. A single or a few Pop III SNe can pollute a gas cloud to such metallicities, and a measurement of Be could allow us to detect such true Pop III descendants. \begin{acknowledgements} AJG would like to acknowledge support by Sonderforschungsbereich SFB 881 ``The Milky Way System'' (subproject A5) of the German Research Foundation (DFG). This work uses results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). 
Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is \url{https://www.cosmos.esa.int/gaia}. The Gaia archive website is \url{https://archives.esac.esa.int/gaia}. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Non-linear sigma models (NLSM) with target space a symmetric orbifold $\Sym^n(M_4)$, where $M_4=T^4$ or $K3$, have been the subject of intense study in the last two decades. The interest in these models is mostly due to the role they play in the AdS$_3$/CFT$_2$ correspondence as the holographic dual of AdS$_3\times S^3\times M_4$ \cite{Maldacena:1997re}. The symmetric orbifold models represent some very special points in the moduli space of two dimensional $\mathcal{N}=(4,4)$ superconformal field theories (SCFTs) at central charge $c=6n$. A generic point in this moduli space (at least in the connected component containing symmetric orbifolds) can be thought of as a non-linear sigma model on a hyperk\"ahler manifold of real dimension $4n$, which is a deformation of $\Hilb^n M_4$, the Hilbert scheme of $n$-points on the manifold $M_4$. They also arise in type IIB string theory, as the infrared effective theories describing the worldsheet dynamics of a system of branes wrapping the manifold $M_4$. The most studied example of this construction involves a system of $Q_5$ D5-branes wrapping $M_4\times S^1$ and $Q_1$ D1-branes wrapping $S^1$, with $Q_1Q_5=n$. This article is focused on the case $M_4=K3$. In this case, the relevant target spaces are known in algebraic geometry as hyperk\"ahler manifolds of K3$^{[n]}$ type. The symmetric orbifold $\Sym^nK3$ can be obtained as a suitable limit of such manifolds, where the geometry becomes singular. In analogy with the terminology in algebraic geometry, we will call the SCFTs in the moduli space of $\Sym^n K3$ `of $K3^{[n]}$ type'. These SCFTs are generically not free, even in the symmetric orbifold limit, and this makes explicit computations particularly challenging, at least for finite $n$, where holographic methods do not apply -- for example, the partition function of a generic such model is not known. 
The first major result of this article is a classification (up to isomorphism) of the possible groups of discrete symmetries (satisfying certain conditions) of these SCFTs, for any $n$ and any point in the moduli space. This is the generalization to manifolds of K3$^{[n]}$ type of the analogous result for non-linear sigma models on a single K3 surface \cite{K3symm}. More precisely, we will consider symmetries acting by linear transformations on the states (or equivalently, on the fields) of the CFT, such that the $\mathcal{N}=(4,4)$ supercurrents are invariant, and commuting with the spectral flow transformation relating the Neveu-Schwarz and the Ramond sector, both on the left-moving and on the right-moving side. We will also require some technical restrictions on these symmetries, which are described precisely in section \ref{s:callifsymm}. These restrictions are clearer if one interprets the SCFT as the world-sheet theory for a system of branes in type IIB string theory, for example a bound state of D1-D5 branes. In this context, a first condition is that we focus on symmetries whose action extends to the whole type IIB string theory. The second constraint can be described very roughly as the condition that the symmetries (or rather their extension to the full type IIB string theory) do not exchange branes with antibranes. In fact, it is quite surprising that these requirements are meaningful at all, i.e. that there might exist some symmetry that does \emph{not} satisfy them. We will argue that this is the case in section \ref{s:dualities}. Finally, our analysis will exclude the models corresponding to a certain set of points in the moduli space. Conjecturally, this excluded set might be empty, i.e. the `models' we are excluding might be singular limits in the moduli space where the SCFT is not really well defined. 
However, while we will provide some reasonable arguments in favour of this conjecture (see section \ref{s:singular}), assuming that it is true seems to lead to some puzzles, related in particular to the locus of symmetric orbifolds. As long as these puzzles are not clarified, it seems safer to state explicitly that these models, if they exist, are not part of our analysis. With all caveats in mind, we can now formulate our classification result: we will show that the group $G$ of symmetries of a (non-singular) SCFT of K3$^{[n]}$ type is isomorphic to a subgroup of the Conway group $Co_0$ (the group of automorphisms of the Leech lattice $\Lambda$) which fixes a sublattice of $\Lambda$ of dimension at least $3$. Conversely, given any subgroup $G$ of $Co_0$ with this property and given any $n>1$, there exists a (non-singular) SCFT of $K3^{[n]}$ type having exactly $G$ as its group of symmetries. This is very similar to the result for NLSM on a single K3 surface, the only difference being that there the groups $G\subset Co_0$ were required to fix a $4$-dimensional, rather than $3$-dimensional, sublattice \cite{K3symm}. In order to obtain this result, the main input from physics is the global form of the moduli space of SCFTs of K3$^{[n]}$-type. This moduli space is believed to be obtained by taking the quotient of a certain Grassmannian by a discrete group of dualities; from this quotient, one needs to exclude a subset corresponding to `singular' limits where the SCFT is not well defined \cite{Dijkgraaf:1998gf,Seiberg:1999xz}. The reader should be warned that our results are conditional on some assumptions (described in section \ref{s:callifsymm}) about the precise structure of the duality group and the location of the singular models. While these assumptions are quite standard, the arguments supporting them are not as stringent as, e.g., in the case of the moduli space of NLSMs on K3. 
If one relaxes our conditions and considers also symmetries acting by (roughly speaking) charge conjugation on the D-branes, then the corresponding groups can, in some cases, get larger, and become $\ZZ_2$-extensions of the groups described above. We were not able to determine exactly at which points in the moduli space this enlargement occurs, nor the precise group structure of the extensions. There are analogous attempts in algebraic geometry to classify the groups of symplectic automorphisms of hyperk\"ahler manifolds of K3$^{[n]}$ type (see \cite{Mongardi2015,Mongardi2016,huybrechts}), which are subgroups of the groups of symmetries of the corresponding NLSM. For a single K3 surface, the problem was solved a long time ago by a famous theorem of Mukai \cite{mukai1988finite}. The classification of these `geometric' symmetries seems more complicated than that of the symmetries of SCFTs. In particular, certain groups $G$ seem to appear for some manifolds of K3$^{[n]}$-type only for $n$ `large enough'; by contrast, for SCFTs, a given group $G$ either occurs for all $n>1$ or does not occur at all. The results of \cite{K3symm}, valid for non-linear sigma models on K3, were subsequently interpreted in \cite{huybrechts} in terms of a classification of the groups of autoequivalences of the derived category of coherent sheaves on K3 surfaces. It is tempting to conjecture that the results of this article admit analogous interpretations in terms of derived categories of hyperk\"ahler manifolds of K3$^{[n]}$ type. \bigskip The second result of this article concerns the so-called twining genera. Given a SCFT of K3$^{[n]}$-type with a group of symmetries $G$, one can define a $g$-twining elliptic genus (twining genus, for short) $\phi_g^{K3^{[n]}}(\tau,z)$, obtained from the elliptic genus by inserting the symmetry $g$ in the trace in the definition \eqref{ellgen}. 
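Concretely, and with the caveat that the normalization of the $U(1)$ charge may differ from the conventions of \eqref{ellgen}, the twining genus takes the standard form of a Ramond-Ramond trace with an insertion of $g$:
\begin{equation}
\phi_g^{K3^{[n]}}(\tau,z)\;=\;{\rm Tr}_{RR}\Big(g\,(-1)^{F+\bar F}\, q^{L_0-\frac{c}{24}}\, \bar q^{\bar L_0-\frac{\bar c}{24}}\, y^{J_0}\Big)\ ,
\qquad q=e^{2\pi i \tau}\ ,\quad y=e^{2\pi i z}\ ,
\end{equation}
where $J_0$ is the Cartan generator of the left-moving $SU(2)$ R-symmetry. Since $g$ commutes with the right-moving superconformal algebra, the standard argument shows that only right-moving ground states contribute, so that $\phi_g^{K3^{[n]}}$ is holomorphic in $\tau$.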
One case where a twining genus can be easily computed is when the SCFT of $K3^{[n]}$ type is the $n$-th symmetric orbifold of a NLSM $\mathcal{C}_{K3}$ on a K3 surface, and the symmetry $g$ is induced by a symmetry $g'$ of the seed theory $\mathcal{C}_{K3}$. In this case, provided that the twining genera $\phi_{g'^r}^{\mathcal{C}_{K3}}$ of $g'$ and of all its powers $g'^r$ are known for $\mathcal{C}_{K3}$, one can compute a generating function (sometimes known as the `second quantized twining genus') \begin{equation} \Psi_{g'}(\sigma,\tau,z)=\sum_{n=0}^\infty p^n\, \phi^{\Sym^n\mathcal{C}_{K3}}_{g'}(\tau,z)\ ,\qquad\qquad p:=e^{2\pi i\sigma}\ , \end{equation} for the elliptic genera $\phi^{\Sym^n\mathcal{C}_{K3}}_{g'}$ of the induced symmetries in the symmetric orbifold $\Sym^n\mathcal{C}_{K3}$, for all $n$. Upon including an `automorphic correction', the second quantized twining genus becomes a (meromorphic) Siegel modular form $1/\Phi_g$ of genus $2$ under a certain congruence subgroup of $Sp(4,\ZZ)$; explicitly, \begin{equation} 1/\Phi_{g}(\sigma,\tau,z)=\frac{\Psi_g(\sigma,\tau,z)}{p\psi_g(\tau,z)}\ .\end{equation} Both $\Psi_g$ and $\Phi_g$ can be defined explicitly in terms of infinite products, whose exponents depend on the Fourier coefficients of $\phi_{g'^r}^{\mathcal{C}_{K3}}$ for all powers $g'^r$. The set of possible twining genera of NLSM on K3 is known \cite{Cheng:2016org,Paquette:2017gmb} (although, for some symmetries $g$, there is more than one `candidate' twining genus, and it is not known which one is actually realized in a given NLSM, see \cite{Paquette:2017gmb}), so the computation can be effectively performed. Standard arguments (reviewed in section \ref{s:conjclass}) show that a twining genus $\phi_g^{K3^{[n]}}$ is invariant under deformations of the moduli that preserve the symmetry $g$, i.e. deformations generated by exactly marginal operators that are $g$-invariant. 
This means that there are connected families of models where the twining genus $\phi_g^{K3^{[n]}}$ is well-defined and constant -- for NLSMs on K3, an analogous argument was developed in \cite{Cheng:2016org}. Therefore, it is sufficient to compute the twining genus at a suitable point (for example, a symmetric orbifold) in order to obtain the genus for all models in the corresponding family. For this reason, it is interesting to study which families of models with a given symmetry $g$ are possible, and in particular which families contain a symmetric orbifold point. We prove that, for each $n$, there are only finitely many distinct families of SCFTs of K3$^{[n]}$ type with a non-trivial symmetry $g$ (once we identify families related by dualities). We were not able to determine exactly how many such families exist, in general. However, we provide a complete classification of families containing symmetric orbifold points and such that the corresponding symmetry $g$ is induced by a symmetry of the seed theory. We also argue that there exist families of SCFTs of K3$^{[n]}$ type that are not of this kind, i.e. whose symmetry $g$ is never induced by a symmetry of a seed theory $\mathcal{C}_{K3}$ at a symmetric orbifold point $\Sym^n\mathcal{C}_{K3}$. For these families, we do not know any method for computing the corresponding twining genus. A surprising result is the following. Consider a pair of NLSMs on K3, $\mathcal{C}_{K3}$ and $\mathcal{C}'_{K3}$, with symmetries, respectively, $g$ and $g'$ of the same order, but not belonging to the same family. In general, one expects the corresponding twining genera $\phi^{\mathcal{C}_{K3}}_g$ and $\phi^{\mathcal{C}'_{K3}}_{g'}$ to be different. Our classification, however, shows that for some $n$ (sometimes for all $n>1$), the symmetric orbifolds $\Sym^n \mathcal{C}_{K3}$ and $\Sym^n \mathcal{C}'_{K3}$ with the induced symmetries $g$ and $g'$ belong to the same family. 
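As a point of reference (this formula is not written out in the text, and we quote it in the normalization of the original reference), in the untwined case the generating function of the symmetric orbifold elliptic genera is given by the product formula of Dijkgraaf, Moore, Verlinde, and Verlinde: writing the seed elliptic genus as $\phi^{\mathcal{C}_{K3}}(\tau,z)=\sum_{m\geq 0,\,l} c(m,l)\, q^m y^l$, one has
\begin{equation}
\sum_{n=0}^\infty p^n\, \phi^{\Sym^n\mathcal{C}_{K3}}(\tau,z) \;=\; \prod_{n>0,\; m\geq 0,\; l} \big(1-p^n q^m y^l\big)^{-c(nm,l)}\ .
\end{equation}
The twined products are obtained by replacing the exponents $c(nm,l)$ with suitable combinations of the Fourier coefficients of the twining genera $\phi_{g'^r}^{\mathcal{C}_{K3}}$; we do not reproduce the precise twined exponents here.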
This implies that, while the twining genera of the seed theories are different, those of the $n$-th symmetric orbifolds are the same. This phenomenon happens, for example, for symmetries of order 11: there are $4$ isolated points in the moduli space of NLSMs on K3 having a symmetry of order $11$. There are two possible twining genera for symmetries of order $11$ \cite{Paquette:2017gmb}. While one cannot determine directly which twining genus is realized at each of the four isolated points, it is reasonable to expect that not all of them are the same; this expectation also fits nicely with a conjecture in \cite{Cheng:2016org}, related to the so-called Umbral Moonshine phenomenon \cite{EOT,Cheng:2012tq,Cheng:2013wca,Cheng:2014zpa}. On the other hand, in this article we show that, for each $n>1$, the $n$-th symmetric products of the four NLSMs on K3 all sit in the same family with a symmetry of order $11$. This means that the second quantized twining genera $\Psi_g$ of all these four K3 NLSMs must have the same coefficients for all powers $p^n$ with $n>1$. On the other hand, the second quantized twining genera are given by infinite products whose exponents are determined by the twining genera of the seed theory. If the twining genera of the seed theories were really different, we would have different infinite products that differ only in the $p^1$ term, thus implying that an infinite number of non-trivial cancellations occur. This seems even more puzzling if one considers the `automorphic corrected' Siegel modular forms. In this case, one would have that the difference of the two Siegel modular forms contains only a term with power $p^0$, which seems to be incompatible with the fact that this difference must be modular. All these arguments seem to falsify the conjecture that there are two distinct twining genera of order $11$ for NLSM on K3. However, miracles seem to happen. 
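Such a comparison of infinite products amounts to a formal power-series computation. The following sketch illustrates the mechanics on a single-variable toy version, $\prod_{n\geq 1}(1-p^n)^{-a_n}$, with made-up exponents $a_n$ (the actual exponents would come from the Fourier coefficients of the twining genera of the seed theories):

```python
from math import comb

def product_series(exponents, N):
    """Coefficients up to p^N of prod_{n>=1} (1 - p^n)^(-a_n),
    with non-negative integer exponents a_n given as a dict {n: a_n}."""
    coeffs = [1] + [0] * N
    for n, a in sorted(exponents.items()):
        if n > N or a == 0:
            continue
        # (1 - p^n)^(-a) = sum_k binom(a + k - 1, k) p^(n*k)
        factor = [0] * (N + 1)
        for k in range(N // n + 1):
            factor[n * k] = comb(a + k - 1, k)
        # multiply the running series by this factor (truncated convolution)
        coeffs = [sum(coeffs[i] * factor[j - i] for i in range(j + 1))
                  for j in range(N + 1)]
    return coeffs

# Two hypothetical exponent assignments (NOT actual twining-genus data):
a = {1: 2, 2: 3, 3: 5}
b = {1: 4, 2: 1, 3: 6}
sa, sb = product_series(a, 12), product_series(b, 12)
matching_orders = [n for n in range(13) if sa[n] == sb[n]]
```

The helper is easily checked: with all $a_n=1$ it reproduces the partition-number generating function $\prod_n (1-p^n)^{-1}$.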
Some numerical experiments seem to show that the two different second quantized twining genera do have the same $p^n$ terms for $n>1$, at least up to $n=12$. And even the modularity argument is avoided: indeed, for order $11$ symmetries, the corresponding Siegel modular forms turn out to have weight $0$. Furthermore, a direct calculation shows that the coefficients of the $p^0$ term in the corresponding Siegel modular forms, which in general are functions of $\tau$ and $z$, differ only by a constant. Thus, one would have that the difference of two meromorphic Siegel modular forms of weight $0$ is just a constant, which is perfectly compatible with modularity. In fact, this argument might be used to prove that the two meromorphic Siegel forms are the same up to a constant. Indeed, the location of the poles is strongly constrained by wall crossing \cite{Paquette:2017gmb}, so it is not unlikely that the two forms have exactly the same poles. If this is the case, then the difference must be a holomorphic Siegel form of weight zero, which is necessarily a constant. While we focused here in the introduction on symmetries of order $11$, similar phenomena occur for several other symmetries: in total, we find $13$ pairs of distinct Borcherds products that differ only by a constant. In \emph{all} these cases, the corresponding Siegel modular forms have weight $0$ and the $p^0$-term in the difference is independent of $\tau$ and $z$. While such coincidences, if true, might be explained by modularity, it is still remarkable that there are so many examples where different infinite products lead to the same modular form up to a constant. This is not an uncommon phenomenon for genus one modular forms, in particular products of eta functions, but we are not aware of any such example at genus two. The paper is organized as follows. In section \ref{s:SCFTs}, we describe some general properties of the moduli space of SCFTs of $K3^{[n]}$ type. 
Most of these results are well-known, but we propose a new perspective on various aspects that are relevant for our subsequent analysis. In particular, in section \ref{s:dualities} we discuss the group of dualities of these superconformal field theories, and its relationship with the well-known group of dualities of type IIB string theory on $K3$. In section \ref{s:singular}, we discuss the limits in the moduli space where the models become singular; this is mostly based on \cite{Seiberg:1999xz}, but we will suggest the possibility that those results might be modified. In section \ref{s:symmorbmoduli}, we discuss the locus of symmetric orbifold points in the moduli space. The analysis of this section, together with sections \ref{s:dualities} and \ref{s:singular}, leads to some puzzles, whose solution is not clear to us. These puzzles suggest that our (or at least, the author's) understanding of the moduli space of SCFTs of $K3^{[n]}$-type is not as watertight as, for example, that of NLSM on a single K3 surface. In section \ref{s:mainres}, we describe our new results. In section \ref{s:callifsymm} we classify the groups of symmetries of $K3^{[n]}$ models. In section \ref{s:twining}, we review the main properties of twining genera, whose classification is described in section \ref{s:conjclass}. Finally, in section \ref{s:elevenandfriends}, we discuss the consequences of these results for the Borcherds products corresponding to the second quantized twining genera. Section \ref{s:conclusions} contains some suggestions for future research. Most of the technical results are contained in the appendices, where we also review the basic properties of the moduli space of type IIB on K3 and some basic facts about lattices that are needed in the rest of the paper. 
\section{Superconformal field theories of $K3^{[n]}$-type}\label{s:SCFTs} The moduli space $\mathcal{M}_n$ of the $N=(4,4)$ models at $c=6n$, $n>1$, in the connected component of $\Sym^n(K3)$, is believed to be an open subset in the quotient \cite{Dijkgraaf:1998gf} \begin{equation} \mathcal{M}_n\subseteq O^+_n\backslash Gr^+(4,21) \ ,\qquad n>1\ , \end{equation} where \begin{equation} Gr^+(a,b):= O^+(a,b,\RR)/(SO(a,\RR)\times O(b,\RR))\ ,\end{equation} denotes the Grassmannian parametrising oriented positive definite $a$-dimensional subspaces of the space $\RR^{a,b}$. Here, $O^+(a,b,\RR)\equiv O^+(a,b)$ denotes the subgroup of the real orthogonal group $O(a,b,\RR)$ preserving the orientation of a positive definite $a$-dimensional subspace in $\RR^{a,b}$. The discrete group $O^+_n$, which depends on $n$, will be discussed at length in section \ref{s:dualities}. We will argue that $O^+_n$ is a subgroup of the group of automorphisms $O^+(\Gamma^{5,21})=O(\Gamma^{5,21})\cap O^+(5,21,\RR)$ of an even unimodular lattice $\Gamma^{5,21}\subset \RR^{5,21}$ of signature $(5,21)$. In particular, $O^+_n$ contains the subgroup $Stab^+(v)$ of $O(\Gamma^{5,21})$ that fixes a primitive vector $v\in \Gamma^{5,21}$ of squared norm $v^2=2n-2$. The space $Gr^+(4,21)$ is interpreted as the Grassmannian of oriented positive definite four-dimensional subspaces $\Pi$ in the space $v^\perp \cap \RR^{5,21}\cong \RR^{4,21}$, where $\RR^{5,21}=\Gamma^{5,21}\otimes \RR$. With this identification, there is a natural action of $Stab^+(v)$ on $\RR^{4,21}$, which induces an action on $Gr^+(4,21)$. As will be discussed in section \ref{s:singular}, the open subspace $\mathcal{M}_n$ is the complement in $ O^+_n\backslash Gr^+(4,21)$ of some `singular loci' of real codimension at least $4$; as a consequence, $\mathcal{M}_n$ is connected. In the limit where we approach such a singular point, the superconformal field theory is supposed to show some pathological behaviour, e.g. 
some correlators might diverge. As reviewed in section \ref{s:stringsonK3}, in the context of type IIB string theory compactified on a K3 surface, $\Gamma^{5,21}$ can be interpreted as the lattice of charges carried by string-like objects in the six dimensional theory, $O^+(\Gamma^{5,21})$ is the group of U-dualities, and $v$ is the charge of one of these strings, whose dynamics in the infrared (or near-horizon) limit is described by an $\mathcal{N}=(4,4)$ SCFT of central charge $6n$. For example, the world-sheet dynamics of a system of $Q_1$ D1-branes and $Q_5$ D5-branes wrapping the K3 surface is described by a NLSM on a deformation of $\Sym^n(K3)$, with $n=Q_1Q_5-1$. We denote by \begin{equation} L_v\equiv L_n:=v^\perp \cap \Gamma^{5,21}\end{equation} the sublattice of $\Gamma^{5,21}$ orthogonal to such a vector $v$. Since any two primitive vectors of the same length in $\Gamma^{5,21}$ are related by an automorphism in $O^+(\Gamma^{5,21})$ (see appendix \ref{a:lattices}), it follows that $L_v$ is uniquely determined by the integer $n$ up to isomorphism (for this reason, we will sometimes use the notation $L_n$). In particular, $L_n$ is isomorphic to the orthogonal sum $\langle w\rangle\oplus \Gamma^{4,20}$ of a $1$-dimensional lattice $\langle w\rangle$, with a generator $w$ of squared norm $2-2n$, and the even unimodular lattice $\Gamma^{4,20}$ of signature $(4,20)$. The moduli space $\mathcal{M}_n$ can be understood as a subset of the moduli space \begin{equation} \mathcal{M}_{IIB}=O^+(\Gamma^{5,21})\backslash Gr^+(5,21)\ , \end{equation} of type IIB string theory on a K3 surface (see appendix \ref{s:stringsonK3}). Notice that $\mathcal{M}_{IIB}$ is a quotient of the Grassmannian parametrizing positive definite $5$-dimensional subspaces $Z$ within $\Gamma^{5,21}\otimes\RR\cong \RR^{5,21}$, taken modulo automorphisms of the lattice $\Gamma^{5,21}$. 
The subspace $\mathcal{M}_n$ can be interpreted as the space of attractor moduli for a string of primitive charge $v\in \Gamma^{5,21}$, with $v^2=2n-2$. The attractor conditions constrain the space $Z$ to contain the vector $v$, so that the Grassmannian $Gr^+(4,21)$ parametrizes the four dimensional subspaces $\Pi=Z\cap v^\perp$ inside $\RR^{4,21}=v^\perp\cap \RR^{5,21}$. Apart from the interpretation in terms of type IIB moduli, the space $\mathcal{M}_n$ parametrizes the possible hyperk\"ahler metrics and B-fields on a hyperk\"ahler manifold of $K3^{[n]}$-type, i.e. deformation equivalent to the Hilbert scheme of $n$ points on a K3 surface (see appendix \ref{a:Hilbertscheme}). The space $\mathcal{M}_n$ contains a sublocus $\mathcal{M}_n^{sym}$ given by the $n$-th symmetric orbifolds of NLSMs on K3, which has codimension $4$ and is isomorphic to the moduli space $\mathcal{M}_1$ of NLSMs on K3. $\mathcal{M}_1$ can be described as an open subset in the quotient of a Grassmannian \begin{equation} \mathcal{M}_1\subset O^+(\Gamma^{4,20})\backslash Gr^+(4,20)\ , \end{equation} obtained by excluding from the Grassmannian the points corresponding to $4$-dimensional subspaces $\Pi\subset \Gamma^{4,20}\otimes\RR\cong \RR^{4,20}$ that are orthogonal to a root, $\Pi\subset r^\perp$, where $r\in \Gamma^{4,20}$, $r^2=-2$ \cite{Aspinwall:1996mn,Nahm:1999ps}. In other words, there is an injective map $\Sym^n:\mathcal{M}_1\to \mathcal{M}_n$ with image $\mathcal{M}_n^{sym}$. Clearly, $O^+(\Gamma^{4,20})$ is a subgroup of $O^+ (L_n)$. In section \ref{s:symmorbmoduli}, we will try to identify the locus $\mathcal{M}_n^{sym}$ within $\mathcal{M}_n$. \subsection{The duality group}\label{s:dualities} In this section, we analyze the group of dualities $O^+_n$ of SCFTs of $K3^{[n]}$-type. The duality group must necessarily preserve the Zamolodchikov metric on the moduli space, and therefore it will be a subgroup of the orthogonal group $O(4,21)$.
Furthermore, an analysis similar to the one in section 3.2 of \cite{Cheng:2016org} shows that a duality in $O(4,21)$ preserves the world-sheet orientation of the SCFT if and only if it is contained in the $O^+(4,21)$ subgroup that preserves the orientation of $4$-dimensional subspaces in $\RR^{4,21}$. We will denote by $O_n$ the group of dualities of SCFTs of $K3^{[n]}$-type that might or might not preserve the world-sheet orientation, and by $O_n^+=O_n\cap O^+(4,21)$ the subgroup that does preserve the orientation. It is a matter of definitions whether two points in the moduli space related by a $O_n\setminus O^+_n$ transformation should be identified or not; in this article, we find it convenient to distinguish such points, so only the subgroup $O^+_n$ will be relevant for the definition of the moduli space. As mentioned in the previous sections, these SCFTs describe the infrared dynamics of a string-like object of charge $v\in \Gamma^{5,21}$, with $v^2=2n-2$, in type IIB on K3 (or K3$\times S^1$). The U-duality group of type IIB string theory on K3 is the group $O(\Gamma^{5,21})$ of automorphisms of the lattice $\Gamma^{5,21}$. This U-duality group contains a subgroup $Stab(v)$ that fixes the vector $v$. Dualities in the group $Stab(v)$ map an object of charge $v$ into itself, and preserve (setwise) the space $v^\perp\subset \Gamma^{5,21}\otimes \RR$ and therefore the Grassmannian $Gr(4,21)$ of positive definite $4$-dimensional subspaces inside $v^\perp\cong \RR^{4,21}$. Therefore, any two points in $Gr(4,21)$ related by a transformation in $Stab(v)$ must be dual to each other, i.e. they define equivalent superconformal field theories, so that $Stab(v)\subseteq O_n$. The subgroup of such dualities that preserve the world-sheet orientation is given by \begin{equation} Stab^+(v)=Stab(v)\cap O^+(4,21)\ , \end{equation} and only this group will be included in $O^+_n$. Can the group $O^+_n$ be strictly larger than $Stab^+(v)$?
Clearly, if a duality of the SCFTs of $K3^{[n]}$-type outside of $Stab(v)$ extends to a duality of the full type IIB, then it must act by automorphisms on the lattice of charges $\Gamma^{5,21}$, so it must be an element of $O(\Gamma^{5,21})$. Furthermore, it must preserve the space $v^\perp \subset \Gamma^{5,21}\otimes \RR$, in order to have a well-defined action on the moduli space of SCFTs of $K3^{[n]}$-type. The only possibilities are that the duality either acts trivially on $v$ (but in this case, it would be in $Stab(v)$), or it acts by $v\mapsto -v$. The latter transformation, if extended to the whole type IIB string theory, would exchange the effective string we are considering with its charge conjugate. In fact, as explained in section \ref{s:symmorbmoduli}, the existence of such a duality follows from the existence of a certain symmetry at the points in the moduli space corresponding to the symmetric orbifold $\Sym^n K3$. This symmetry acts non-trivially on the exactly marginal operators deforming the SCFT away from the symmetric orbifold locus. If two exactly marginal operators are related by such a symmetry, the corresponding deformed models must be dual to each other. In section \ref{s:symmorbmoduli}, we will argue that such a duality can only be extended to a $O(\Gamma^{5,21})$ transformation that maps $v$ into $-v$ (at least for $n>2$). So, it seems that such dualities have to be included in $O^+_n$, which therefore must contain the group \begin{equation} Stab^+(\ZZ v):= \{h\in O(\Gamma^{5,21})\mid h(v)=\pm v\} \cap O^+(4,21)\ , \end{equation} (as the notation suggests, this is the setwise stabilizer of the sublattice $\ZZ v\subset \Gamma^{5,21}$ in $O(\Gamma^{5,21})\cap O^+(4,21)$). Finally, let us discuss the possibility that $O^+_n$ contains some duality $h\in O^+(4,21)$ of the SCFTs that cannot be extended to an element in $O(\Gamma^{5,21})$.
Recall that one of the arguments showing that $O(\Gamma^{5,21})$ is the full group of dualities of type IIB is that it is a maximal discrete subgroup of $O(5,21)$, so that any larger group of dualities would make the moduli space of type IIB on K3 non-Hausdorff \cite{Aspinwall:1995td,Aspinwall:1996mn}. In the case we are considering, the groups $Stab^+(v)$ and $Stab^+(\ZZ v)$ are subgroups of the group $O^+(L_v)$ of automorphisms of the lattice $L_v$. For generic $n$, however, they are strictly smaller than $O^+(L_v)$.\footnote{In particular $Stab^+(v)$ is a proper subgroup of $Stab^+(\ZZ v)$, and therefore of $O^+(L_v)$, for all $n>2$, while for $n=2$ one has $Stab^+(v)=Stab^+(\ZZ v)=O^+(L_v)$. The group $Stab^+(\ZZ v)$ is a proper subgroup of $O^+(L_v)$ whenever $v^2=2n-2$ is not a prime power. } Therefore, even requiring the moduli space of SCFTs of $K3^{[n]}$ type to be Hausdorff does not rule out the possibility that the duality group is a subgroup of $O^+(L_v)$ larger than $Stab^+(\ZZ v)$. It might be possible to exclude this kind of duality by exhibiting explicit examples (maybe of symmetric orbifolds) of models that are related by such automorphisms of $L_v$ but are not equivalent to each other. In the absence of a full proof in this sense, in the following we will keep in mind the possibility that the duality group $O^+_n$ of the SCFTs is a subgroup of $O^+(L_v)$ larger than $Stab^+(\ZZ v)$. Could the group of dualities also include elements of $O^+(4,21)$ that are not automorphisms of the lattice $L_v$? While we were not able to prove this rigorously, it seems reasonable to expect that any such transformation would generate a group that is not discrete, so that the moduli space would not be Hausdorff.
From a different perspective, recall that for a NLSM on a single K3 surface, the lattice $\Gamma^{4,20}$ can be interpreted as the lattice of R-R charges of D-branes, and can be constructed within the SCFT itself by looking at the couplings of R-R ground fields with $1/2$ BPS boundary states. One can consider boundary states also in SCFTs of $K3^{[n]}$ type; it is reasonable to expect $L_v$ to be the lattice of charges for a suitable class of boundary states. If this is the case, then a symmetry of the SCFT should preserve this lattice, and therefore act on it by automorphisms. These arguments seem to suggest that the duality group $O^+_n$ must be a subgroup of $O^+(L_v)$. In the following sections, we will assume that this is the case, and that $O^+_n$ is some intermediate subgroup between $Stab^+(\ZZ v)$ and $O^+(L_v)$ \begin{equation} Stab^+(\ZZ v)\subseteq O^+_n\subseteq O^+(L_v)\ . \end{equation} \subsection{Singular limits}\label{s:singular} As explained in section \ref{s:stringsonK3}, the moduli space $\mathcal{M}_{IIB}$ of type IIB superstrings compactified on K3 is a Grassmannian of oriented positive definite $5$-planes $Z$ within $\Gamma^{5,21}\otimes \RR$, modulo $O^+(\Gamma^{5,21})$. For each point in $\mathcal{M}_{IIB}$, one has an orthogonal decomposition $v\equiv (v_+,v_-)$ of any vector $v\in \Gamma^{5,21}$ into a (positive norm) component $v_+$ along $Z$ and a (negative norm) component $v_-$ along $Z^\perp$. Let $v\in \Gamma^{5,21}$ be a primitive vector with $v^2=2n-2> 0$, representing the charge of some BPS object (the case $v^2=0$ is a bit peculiar, so let us not consider it here). Then, the attractor manifold for an object of charge $v$ is the locus of subspaces $Z $ containing $v$, so that $v=(v_+,0)$ in the decomposition above. This locus can be identified with the moduli space of non-linear sigma models on hyperk\"ahler manifolds of $K3^{[n]}$ type, i.e. 
deformations of the Hilbert scheme of $n$ points on the K3 manifold $X$ (or the moduli space of sheaves on $X$ with Mukai vector $v$). More precisely, a point $Z$ in the attractor manifold of $v$ determines an oriented positive definite $4$-plane $\Pi:=Z\cap v^\perp$ in $v^\perp \cong L_v\otimes \RR$, where $L_v=v^\perp \cap \Gamma^{5,21}$. Seiberg and Witten \cite{Seiberg:1999xz} described the conditions on the attractor point $Z$ under which the corresponding SCFT is singular. This only happens when the BPS object of charge $v$ can decay into two BPS objects of charges $v'$ and $v''$. Formally, this translates into the existence of two vectors $v',v''\in \Gamma^{5,21}$, that are not proportional to $v$ (otherwise $v$ would not be primitive), and such that \begin{align} &(v')^2,(v'')^2\ge -2\ ,\label{cond1}\\ &v=v'+v''\ ,\label{cond2}\\ &|v_+|=|v'_+|+|v''_+|\ .\label{cond3} \end{align} (note that $v_+=v$, because we are at the attractor manifold). Condition \eqref{cond2} is just charge conservation. The mass of a BPS object of charge $v$ is proportional to the length $|v_+|$ of its component along $Z$. Therefore, for $v=v'+v''$, one has $|v_+|\le |v'_+|+|v''_+|$ and the decay can occur only when the equality is satisfied, which is condition \eqref{cond3}. The condition \eqref{cond1} ensures the existence of BPS objects with charges $v'$ and $v''$. To be precise, the condition \eqref{cond1} is slightly different from the condition $(v')^2,(v'')^2\ge 0$ given in \cite{Seiberg:1999xz}, which seems to be appropriate only for compactifications on a torus $T^4$. For compactifications on K3, there are BPS objects carrying charge with square norm $-2$: the easiest such example is a single D5-brane wrapping the K3 manifold, which carries $Q_5=+1$ five-brane charge and a geometrically induced $Q_1=-1$ one-brane charge, so that $v^2=2Q_1Q_5=-2$. Notice that the near horizon geometry of such objects is rather complicated \cite{Johnson:1999qt}.
It seems appropriate to exclude the points in the moduli space where the system of branes that we are considering can decay into one of these objects (plus some other decay product). The conditions \eqref{cond1}--\eqref{cond3} also lead us to exclude from the moduli space the points where some non-perturbative (at the attractor point) strings become tensionless \cite{Witten:1995zh}. This phenomenon can occur, for example, when the K3 surface contains a holomorphic $2$-cycle with self-intersection $-2$ (a $\PP^1$) whose volume is shrunk to a point, and with no B-field flux through it. A D3-brane wrapping such a cycle becomes tensionless at such a point in the moduli space. It is known that the SCFT describing the worldsheet dynamics of a fundamental type IIB string on K3, which is a NLSM on a single copy of K3, becomes singular at this point in the moduli space, due to the presence of such tensionless non-perturbative strings \cite{Witten:1995zh,Aspinwall:1996mn}. It is reasonable to expect that, at such a point in the moduli space, the same will happen for the superconformal field theory of type $K3^{[n]}$ describing the infrared dynamics of an effective string. It is sometimes useful to repackage the conditions \eqref{cond1}--\eqref{cond3} in a different way. Let us first consider the case where $(v')^2,(v'')^2\ge 0$. The conditions \eqref{cond2},\eqref{cond3} imply that $v'_+=\alpha v$ and $v''_+=(1-\alpha)v$, with $0< \alpha< 1$, and $v'_-=-v''_-\neq 0$ (if $v'_-=0$ then $v'$ and $v''$ would be proportional to $v$). Up to an exchange of $v'$ and $v''$, we can assume that $\alpha\le 1/2$, so that $|v'_+|\le |v''_+|$ and $(v')^2\le (v'')^2$ ($(v')^2=(v'_+)^2-(v'_-)^2= (v'_+)^2-(v''_-)^2\le (v''_+)^2-(v''_-)^2=(v'')^2$). Notice that \begin{equation} (v,v')=\alpha v^2=\alpha(2n-2)\in \ZZ_{> 0}\ , \end{equation} because $(v,v')\in \ZZ$ for any two vectors $v,v'\in \Gamma^{5,21}$. This also implies that $(2n-2)v'_-=(2n-2)v'-\alpha(2n-2)v\in \Gamma^{5,21}$.
The sublattice spanned by $v,v'$ has signature $(1,1)$ (it contains a negative direction corresponding to $v_-$), so that \begin{equation} \det \left(\begin{smallmatrix} v^2 & (v,v')\\ (v,v') & (v')^2\end{smallmatrix}\right)<0\qquad \Leftrightarrow\qquad (v,v')^2>(v')^2 v^2 \ .\end{equation} Thus, the conditions \eqref{cond1}, \eqref{cond2}, \eqref{cond3} imply \begin{equation} 0\le (v')^2v^2<(v,v')^2\le \left(\frac{v^2}{2}\right)^2\ ,\label{condeq} \end{equation} where the last inequality follows from $\alpha=\frac{(v,v')}{v^2}\le 1/2$. Vice versa, suppose that there is $v'\in \Gamma^{5,21}$, such that \eqref{condeq} holds. Up to an exchange $v'\leftrightarrow -v'$, we can assume that $(v,v')\ge 0$. Eq. \eqref{condeq} implies that the primitive sublattice $M\subset \Gamma^{5,21}$ of rank $2$ containing both $v$ and $v'$ has signature $(1,1)$. Let $x\in M$ be a generator of $v^\perp\cap M$, so that $x^2<0$. If $Z\perp x$, then $v'$ and $v'':=v-v'$ satisfy all conditions \eqref{cond1}, \eqref{cond2}, \eqref{cond3}. Indeed, $(v')^2\ge 0$ follows from \eqref{condeq} and $v^2\ge 0$. Furthermore, $v'$ and $v''$ can be written as $v'=\alpha v+\beta x$ and $v''=(1-\alpha)v-\beta x$, with $\alpha= \frac{(v,v')}{v^2}$ and $\beta=\frac{(x,v')}{x^2}$, and by \eqref{condeq} and the condition $(v,v')\ge 0$ one has $0\le \alpha \le 1/2$. Thus, $(v'')^2\ge (v')^2\ge 0$, and $v'_+=\alpha v$, $v''_+=(1-\alpha)v$, with $\alpha,(1-\alpha)\ge 0$, so that also \eqref{cond3} holds. Notice that $x\in v^\perp\cap \Gamma^{5,21} =L_v$, and that the condition $Z\perp x$ is equivalent to $\Pi\perp x$. Let us now consider the case where one of the charges $v'$ or $v''$ has squared norm $-2$. Up to an exchange, we can suppose that $(v')^2=-2$. In this case, the primitive sublattice $M$ containing both $x$ and $v$ must contain $v'$ satisfying \begin{equation}\label{condeq2} (v')^2=-2\qquad \text{and}\qquad 0\le (v',v)\le \frac{v^2}{2}\ . 
\end{equation} The latter equations are equivalent to \eqref{cond1}, \eqref{cond2} and \eqref{cond3} for $v'$ and $v''=v-v'$ with $(v')^2=-2$. Assuming that the conditions in \cite{Seiberg:1999xz} are necessary and sufficient for singularity, we conclude that the SCFT corresponding to $\Pi$ is singular if and only if $\Pi$ is orthogonal to some $x\in L_v$ such that the rank-$2$ primitive sublattice $M\subset \Gamma^{5,21}$ containing $v$ and $x$ contains a vector $v'$ satisfying either \eqref{condeq} (case $(v')^2\ge 0$) or \eqref{condeq2}. The conditions \eqref{condeq} and \eqref{condeq2} are very similar to the conditions describing the K\"ahler cone of a hyperk\"ahler manifold $X$ of type $K3^{[n]}$ \cite{Mongardi2015,Mongardi2016}. Recall that the lattice $H^2(X,\ZZ)$ is isomorphic to the orthogonal complement in $\Gamma^{4,20}$ of a primitive vector $v$ of norm $2n-2$. The K\"ahler cone, i.e. the region of $H^2(X,\RR)$ spanned by the K\"ahler classes of $X$, is delimited by some walls, which are the hyperplanes orthogonal to certain `wall divisors' $D\in H^2(X,\ZZ)$. Then, $D\in H^2(X,\ZZ)$ is a wall divisor if the rank-$2$ primitive sublattice $T_D\subset \Gamma^{4,20}$ containing both $v$ and $D$ has a generator $r_D$ satisfying either \begin{equation}\label{geomsing1} (r_D)^2=-2\qquad \text{and}\qquad 0\le (r_D,v)\le \frac{v^2}{2}\ , \end{equation} or \begin{equation}\label{geomsing2} 0<(r_D)^2v^2<(v,r_D)^2\le \left(\frac{v^2}{2}\right)^2\ , \end{equation} which have the same form as the conditions \eqref{condeq2} and \eqref{condeq}, respectively. This is not completely surprising. In the limit where a K\"ahler class reaches the boundary of the K\"ahler cone, one of the holomorphic cycles shrinks to zero volume, and the geometry of the hyperk\"ahler manifold becomes singular (for example, an orbifold). In general, a non-linear sigma model whose target space is such a singular geometry can be perfectly well-defined -- orbifolds are the main examples of this phenomenon.
On the other hand, what usually happens is that this consistent orbifold has some fixed non-zero B-field. If we turn off the B-field, then one usually expects the non-linear sigma model to become singular as well. Therefore, for zero B-field there should be a precise correspondence between singular geometries and singular CFTs. The similarity of the equations \eqref{geomsing1}--\eqref{geomsing2} with \eqref{condeq}--\eqref{condeq2}, therefore, gives further support to our analysis of singularities. \bigskip On the other hand, as we will see in section \ref{s:symmorbmoduli}, the analysis of this section seems to be in contradiction with the properties of symmetric orbifolds. In section \ref{s:symmorbmoduli} we will try to determine the locus of symmetric orbifold models within the moduli space $\mathcal{M}_n$, for all $n$. The outcome is puzzling: all symmetric orbifolds occur at points in the moduli space that, according to the analysis above, should be singular! Since we are not sure what the resolution of this puzzle is, the results of the present section cannot be completely trusted. \subsection{The symmetric orbifold locus}\label{s:symmorbmoduli} In this section, we will discuss the locus $\mathcal{M}^{sym}_n\subset \mathcal{M}_n$ in the moduli space of SCFTs of $K3^{[n]}$-type corresponding to symmetric orbifolds. If $\mathcal{C}_S$ denotes the NLSM on the K3 surface $S$, corresponding to a certain choice of hyperk\"ahler structure and B-field, then the symmetric orbifold $\Sym^n\mathcal{C}_S$ (where the orbifold is understood in a CFT sense) is a NLSM on a hyperk\"ahler manifold $X$ of K3$^{[n]}$ type. In particular, one expects the geometry of the target space to be the one of the symmetric orbifold (in a geometric sense) $\Sym^n S$, i.e.
with $\alpha_1=\alpha_2=\lambda=0$ in the notation of appendix \ref{a:Hilbertscheme}. The space of states of the orbifold CFT $\Sym^n\mathcal{C}_S$ is the sum over $[\sigma]$-twisted sectors for each conjugacy class $[\sigma]$ of $S_n$; the untwisted sector corresponds to $[\sigma]$ being the identity class. In turn, each conjugacy class of the symmetric group $S_n$ can be labeled by the cycle shape $\prod_i n_i$ of the class, with $\sum_i n_i=n$. There are $84$ exactly marginal operators in $\Sym^n\mathcal{C}_S$, preserving the $\mathcal{N}=(4,4)$ superconformal algebra, corresponding to deformations of the CFT in the $84$-dimensional moduli space of NLSMs of K3$^{[n]}$ type. Out of these $84$ deformations, $80$ are contained in the untwisted sector and can be naturally identified with deformations of the `seed' CFT $\mathcal{C}_S$ -- they are literally obtained by symmetrization $\sum_{\sigma\in S_n}\sigma(\chi\otimes1\otimes \ldots\otimes 1)$ of the exactly marginal operators $\chi$ on $\mathcal{C}_S$. The remaining $4$ moduli are contained in a twisted sector. Recall that the $\mathcal{N}=(4,4)$ preserving exactly marginal operators are fields of conformal weight $(1,1)$, neutral under the $SU(2)_L\times SU(2)_R$ R-symmetry, that are obtained as supersymmetric descendants of NS-NS fields with conformal weights $(h,\bar h)=(1/2,1/2)$ and R-symmetry charges $(q,\bar q)=(1/2,1/2)$ (we normalize the charges so that they are half-integral). It was shown in \cite{Lunin:2001pw} that the lowest chiral operator (i.e. with $h=q$) in the sector corresponding to a cycle shape $\prod_i n_i$ has \begin{equation} h=q=\sum_i \frac{n_i-1}{2}\ . \end{equation} It is clear then that fields with $h=q=1/2$ and $\bar h=\bar q=1/2$ can only occur in the untwisted sector or in the $[\sigma]$-twisted sector where $\sigma$ is a single transposition (i.e. with cycle shape $1^{n-2}2^1$).
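This selection rule follows from a short enumeration of cycle shapes. The Python sketch below (illustrative only; it assumes nothing beyond the weight formula quoted above from \cite{Lunin:2001pw}) confirms that, for a sample value of $n$, only the untwisted shape $1^n$ and the single-transposition shape survive the bound $h\le 1/2$.

```python
def partitions(n, largest=None):
    """Yield the cycle shapes of S_n as non-increasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def min_chiral_weight(shape):
    """Lowest h = q in the twisted sector of cycle shape prod_i n_i."""
    return sum((k - 1) / 2 for k in shape)

n = 8
light = [s for s in partitions(n) if min_chiral_weight(s) <= 1 / 2]
# only the untwisted sector 1^n and the single-transposition
# sector 2^1 1^(n-2) can contain (h, q) = (1/2, 1/2) fields:
assert sorted(light) == [(1,) * n, (2,) + (1,) * (n - 2)]
```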
In particular, there is a unique such field in the $[\sigma]$-twisted sector, and it coincides with the $[\sigma]$-twisted ground field.\footnote{For general $[\sigma]$ the $[\sigma]$-twisted ground field is usually not chiral.} This field has four supersymmetric descendants of weights $(1,1)$ and neutral under R-symmetry, which are the twisted sector exactly marginal operators. Therefore, the $4$ marginal operators not associated with deformations of the seed theory come from the $[\sigma]$-twisted sector where $[\sigma]$ is the class of a single transposition, and can be identified with deformations of the remaining moduli $\alpha_1,\alpha_2,\lambda,\beta$ of the NLSM on $X$ (see appendix \ref{a:Hilbertscheme}). Notice that every symmetric orbifold has a $\ZZ_2$ symmetry (that we will call a \emph{quantum symmetry}) acting by multiplication by $\sgn(\sigma)$ on the $[\sigma]$-twisted sector. Indeed, the OPE of a $[\sigma_1]$- and a $[\sigma_2]$-twisted field can contain a field in the $[\sigma_3]$-twisted sector only if $\sigma_3$ is conjugate to $\sigma_1h\sigma_2h^{-1}$ for some $h\in S_n$; then it is clear that $\sgn(\sigma_3)=\sgn(\sigma_1h\sigma_2h^{-1})=\sgn(\sigma_1)\sgn(\sigma_2)$, so that the symmetry preserves the OPE. This $\ZZ_2$ symmetry acts trivially on the $80$ marginal operators in the untwisted sector and changes the sign of the $4$ twisted sector marginal operators, since they are contained in a $[\sigma]$-twisted sector with $\sigma$ an odd permutation. The presence of the quantum symmetry will be our main tool in trying to identify the locus $\mathcal{M}_{n}^{sym}\subset \mathcal{M}_n$. Suppose that a certain $K3^{[n]}$ model has a symmetry $t$ acting on the exactly marginal operators in the same way as the quantum symmetry of the symmetric orbifold.
This means that, with respect to the action of $t\in O(21)\subset SO(4)\times O(21)\subset O^+(4,21)$, the $(4,21)$ representation of $SO(4)\times O(21)$ splits as the sum of a $4$-dimensional subspace (a $4$-dimensional representation of $SO(4)$) with $t$-eigenvalue $-1$ and an $80$-dimensional subspace (twenty $4$-dimensional representations of $SO(4)$) with eigenvalue $+1$. As a consequence, in the $4\oplus 21$ representation of $SO(4)\times O(21)$ (i.e. on $\RR^{4,21}\equiv L_v\otimes \RR$), $t$ has a $-1$ eigenvalue in the $21$-dimensional component, and fixes the $24$-dimensional orthogonal complement. Let $s\in \RR^{21}\subset \RR^{4,21}$ be a generator of the $-1$ eigenspace (this implies that $s^2<0$). Then, $t$ acts on any $x\in \RR^{4,21}$ by a reflection with respect to the hyperplane $s^\perp$ \begin{equation} x\mapsto t(x)=x-\frac{2(s,x)}{(s,s)} s\ . \end{equation} Notice that this formula is invariant under rescalings of $s$, $s\to \alpha s$, $\alpha\in \RR\setminus\{0\}$. It is easy to check that the transformation $t$ is in $O(4,21)$. In order to be an automorphism of the lattice $L_v$, it must be that $x-t(x)\in L_v$ for every $x\in L_v$. This implies, in particular, that $s$ is proportional to a lattice vector; without loss of generality, we can rescale it so that $s$ is a primitive vector in $L_v$. With this choice, the condition for $t$ to be a lattice automorphism is \begin{equation}\label{condint} \frac{2(s,x)}{(s,s)}\in \ZZ\qquad \forall x\in L_v\ . \end{equation} Let $\Div(s)$ be the greatest common divisor of $(s,x)$, $x\in L_v$; for the lattice $L_v$, the possible values of $\Div(s)$ are the divisors of $2n-2$ (see appendix \ref{a:lattices}). Then, the condition \eqref{condint} can be written as $(s,s)|2\Div(s)$. On the other hand, we also have $\Div(s)|(s,s)$, from which we find the two possibilities $\Div(s)=-(s,s)$ or $\Div(s)=-(s,s)/2$ (note that $(s,s)<0$ and $\Div(s)>0$). 
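The properties of $t$ used in this argument -- it is an involution, it preserves the bilinear form, and it is a lattice automorphism precisely when \eqref{condint} holds -- can be made concrete in a toy model. The Python sketch below (illustrative only, not part of the argument) works in the rank-$3$ lattice $\langle 2-2n\rangle\oplus\Gamma^{1,1}$, a stand-in for $L_v$, with $s$ a generator of the first summand, for which $\Div(s)=-(s,s)=2n-2$.

```python
from fractions import Fraction
from math import gcd

n = 7
# Gram matrix of the toy lattice <2-2n> + U, standing in for L_v
G = [[2 - 2 * n, 0, 0],
     [0, 0, 1],
     [0, 1, 0]]

def pair(x, y):
    return sum(x[i] * G[i][j] * y[j] for i in range(3) for j in range(3))

def reflect(s, x):
    """t(x) = x - 2(s,x)/(s,s) s."""
    c = Fraction(2) * pair(s, x) / pair(s, s)
    return [xi - c * si for xi, si in zip(x, s)]

s = [1, 0, 0]  # generator of <2-2n>, so (s,s) = 2 - 2n
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# t is an involutive isometry ...
for x in basis:
    assert reflect(s, reflect(s, x)) == x
    for y in basis:
        assert pair(reflect(s, x), reflect(s, y)) == pair(x, y)

# ... and a lattice automorphism: here Div(s) = -(s,s) = 2n-2, so
# (s,s) divides 2 Div(s) and t maps lattice vectors to lattice vectors
div_s = gcd(*(abs(pair(s, x)) for x in basis))
assert div_s == 2 * n - 2 == -pair(s, s)
assert (2 * div_s) % (-pair(s, s)) == 0
assert all(xi.denominator == 1 for x in basis for xi in reflect(s, x))
```

The same check with a vector $s$ violating \eqref{condint} would produce images with non-trivial denominators, i.e. not in the lattice.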
In order to identify the vector $s$, we need some additional information. We know that the locus $\mathcal{M}^{sym}_n$ of symmetric orbifolds is a connected $80$-dimensional orbifold isomorphic to $\mathcal{M}_1=O^+(\Gamma^{4,20})\backslash Gr^+(4,20)$. On the other hand, the locus $\mathcal{F}_t$ of $K3^{[n]}$-models admitting the reflection $t$ as a symmetry is also $80$-dimensional and connected: it is given by the quotient $\mathcal{F}_t=H_s\backslash Gr^+(4,20)$ of the Grassmannian of four dimensional positive definite subspaces in $s^\perp\cong \RR^{4,20}$, quotiented by the subgroup $H_s$ of the duality group that fixes $s$ up to a sign. Since every symmetric orbifold has the symmetry $t$, one has $\mathcal{M}^{sym}_n\subseteq \mathcal{F}_t$; on the other hand, since both spaces are connected and have the same dimension, there must be an equality $\mathcal{M}^{sym}_n=\mathcal{F}_t$.\footnote{This is actually clearer at the level of Teichm\"uller spaces. The lift of $\mathcal{M}^{sym}_n$ to $Gr^+(4,21)$ consists of infinitely many copies of $Gr^+(4,20)$, related to one another by dualities in $O^+_n$. The subgroup of $O^+_n$ preserving one of these $Gr^+(4,20)\subset Gr^+(4,21)$ is isomorphic to $O^+(\Gamma^{4,20})$, so that the lift of $\mathcal{M}_n^{sym}$ is $\bigcup_{[h]\in O^+_n/O^+(\Gamma^{4,20})} h(Gr^+(4,20))$. Our claim is that this lift coincides with the union $\bigcup_{[h]\in O^+_n/H_s } h(s^\perp)$.} It follows that $H_s\cong O^+(\Gamma^{4,20})$. It is easy to check that this can only happen if $\Div(s)=2n-2$. Indeed, $H_s$ is a group of automorphisms of the sublattice $s^\perp \cap L_v$. Up to dualities, one can always take both $v$ and $s$ to be contained in some $\Gamma^{2,2}\subset \Gamma^{5,21}$; therefore, $s^\perp \cap L_v$ contains a copy of $\Gamma^{3,19}$ as a primitive sublattice. By acting with $H_s\cong O^+(\Gamma^{4,20})$ on this $\Gamma^{3,19}$, one obtains a lattice isomorphic to $\Gamma^{4,20}$, which must be contained in $s^\perp \cap L_v$.
But $s^\perp \cap L_v$ has signature $(4,20)$ and is even, and these conditions imply that $s^\perp \cap L_v\cong \Gamma^{4,20}$. One has the lattice inclusions $\Gamma^{4,20}\oplus \langle s\rangle \subseteq L_v\subseteq (\Gamma^{4,20}\oplus \langle s\rangle )^*$, but $(\Gamma^{4,20}\oplus \langle s\rangle )^*=\Gamma^{4,20}\oplus \langle \frac{s}{\Div(s)}\rangle$, and since $s$ is primitive we must have the equality $L_v=\Gamma^{4,20}\oplus \langle s\rangle$. This isomorphism implies that $\Div(s)=-(s,s)=2n-2$. We conclude that the symmetric orbifold locus $\mathcal{M}_n^{sym}$ corresponds to the Grassmannian of $4$-dimensional subspaces orthogonal to a vector $s$ with $\Div(s)=-(s,s)=2n-2$. As we explain below, for generic $n$ this analysis is not sufficient to identify the vector $s$ (and therefore $\mathcal{M}_n^{sym}$) up to dualities. However, it is enough to reach a puzzling conclusion: for any $s\in L_v$ with $\Div(s)=-(s,s)=2n-2$, a four-dimensional subspace $\Pi$ orthogonal to $s$ corresponds to a singular model, according to the arguments of \cite{Seiberg:1999xz} and of section \ref{s:singular}. Indeed, since $\langle s,v\rangle^\perp \cap \Gamma^{5,21}\cong s^\perp\cap L_v\cong \Gamma^{4,20}$, the vectors $s$ and $v$ are contained in a sublattice $\Gamma^{1,1}\subset \Gamma^{5,21}$. Let $u,u^*$ be generators of this $\Gamma^{1,1}$, such that $(u,u)=(u^*,u^*)=0$, $(u,u^*)=1$. Then, $s$ and $v$ can be written as \begin{equation} v=au+bu^*\ ,\qquad \qquad s=au-bu^*\ , \end{equation} for some $a,b\in \ZZ$ coprime and such that $ab=2n-2$ (in fact, the different duality orbits of vectors $s$ are labeled by this pair of integers; note that $(v,s)=ab-ab=0$ and $(s,s)=-2ab=2-2n$, as required). We can always choose $u,u^*$ in such a way that $a,b>0$. Then, whenever $Z=\langle v\rangle\oplus \Pi$ is orthogonal to $s$, we have that $v':=u$ and $v'':=(a-1)u+bu^*$ satisfy the conditions \eqref{cond1}--\eqref{cond3} (and, in fact, even the stricter conditions in \cite{Seiberg:1999xz}).
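As a consistency check, the short Python sketch below (illustrative only, not part of the argument) verifies conditions \eqref{cond1} and \eqref{cond2}, together with the wall condition \eqref{condeq}, for $v'=u$ and $v''=(a-1)u+bu^*$, directly from the $\Gamma^{1,1}$ pairing; condition \eqref{cond3} then holds automatically at the points considered here, where $Z=\langle v\rangle\oplus\Pi$ is orthogonal to $s$, as discussed above.

```python
from math import gcd

def ip(x, y):
    """Pairing in Gamma^{1,1}: u^2 = (u*)^2 = 0, (u, u*) = 1."""
    (x1, x2), (y1, y2) = x, y
    return x1 * y2 + x2 * y1

# v = a u + b u* with ab = 2n-2, gcd(a,b) = 1 and a, b > 0;
# candidate decay products: v' = u and v'' = (a-1) u + b u*
for a, b in [(3, 4), (4, 3), (1, 12), (5, 2)]:
    assert gcd(a, b) == 1
    v, vp, vpp = (a, b), (1, 0), (a - 1, b)
    assert ip(v, v) == 2 * a * b          # v^2 = 2n - 2
    # charge conservation (cond2) and the BPS bound (cond1):
    assert tuple(p + q for p, q in zip(vp, vpp)) == v
    assert ip(vp, vp) >= -2 and ip(vpp, vpp) >= -2
    # the wall condition (condeq): 0 <= (v')^2 v^2 < (v,v')^2 <= (v^2/2)^2
    assert 0 <= ip(vp, vp) * ip(v, v) < ip(v, vp) ** 2 <= (a * b) ** 2
```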
There are a few possible resolutions of this puzzle: \begin{itemize} \item The symmetric orbifold model is not a well defined CFT. This seems quite a radical departure from what we know from conformal field theory and string theory. \item We have misidentified the locus of symmetric orbifolds $\mathcal{M}_n^{sym}$. This means that there must be a hole in one of the arguments we used. One of the ingredients is the existence of the quantum symmetry of symmetric orbifolds. This is just based on the standard properties of non-abelian orbifolds, so it seems quite a safe statement, although there might be some potential subtleties with the definition of the symmetry in sectors other than the NS-NS one. Another ingredient we need is the fact that two models obtained by deforming the symmetric orbifold by exactly marginal operators related by a symmetry are dual to each other. This should be true, provided that conformal perturbation theory correctly reproduces the correlators of the deformed models in some neighborhood of each point in the moduli space. This is clearly a delicate assumption, but without this assumption the whole construction of the moduli space $\mathcal{M}_n$ should be reconsidered. One of the most delicate ingredients is the assumption that the group of dualities acts by automorphisms of the lattice $L_v$. In section \ref{s:dualities} we discussed why it is unlikely that the duality group is larger. It would be nice to have a rigorous argument showing that this assumption follows from the requirement that the moduli space $\mathcal{M}_n$ be Hausdorff. Notice that, for the arguments of this section to work, we don't need to assume that the duality group is a \emph{proper} subgroup of $O^+(L_v)$. \item We have misidentified the locus of singular models in $Gr(4,21)$. The arguments described in section \ref{s:singular} are certainly reasonable, but do not seem to be conclusive. This seems to us the most likely possibility.
Unfortunately, a precise description of the set of singular models is needed for the classification of symmetries of SCFTs of $K3^{[n]}$ type that we will attempt in the following sections. This means that our results will be conditional on a long list of assumptions and restrictions. We hope to be able to drop some of these assumptions in future works. \end{itemize} Finally, let us show that our analysis is not sufficient to unambiguously determine $s$ up to dualities. Indeed, while there is a unique orbit of vectors $s$ with $\Div(s)=-(s,s)=2n-2$ with respect to the full automorphism group $O(L_v)$ of the lattice $L_v$, the duality group $O_n^+$ is the (generally proper) subgroup of $O(L_v)$ acting by $\pm 1$ on the discriminant group $A_{L_v}:=L_v^*/L_v$. By Eichler's criterion (see appendix \ref{a:lattices}), since $L_v$ contains a $\Gamma^{2,2}$ sublattice, any two primitive vectors $s,s'$ are related by the group $O_0(L_v)$ acting trivially on $L_v^*/L_v$ if and only if they have the same norm, the same divisor, and $\frac{s}{\Div(s)}\equiv \frac{s'}{\Div(s')}\mod L_v$. In particular, we have $L_v^*/L_v\cong \ZZ_{2n-2}$, and we are interested in vectors $s$ with $\Div(s)=-(s,s)=2n-2$, so that $\frac{s}{\Div(s)}$ has norm $\frac{1}{2n-2} \mod 2\ZZ$ and order $2n-2$ in $L_v^*/L_v$, so it is a generator. On the other hand, in general, there are several different generators with the same norm and order in $L_v^*/L_v$, and one can show that all of them can be lifted to elements of the form $\frac{\tilde s}{2n-2}$, where $\tilde s$ has $\Div(\tilde s)=-(\tilde s,\tilde s)=2n-2$.\footnote{This is because, for a lattice containing a copy of $\Gamma^{1,1}$, the natural map $O(L_v)\to O(L_v^*/L_v)$ is surjective. Here, $O(L_v^*/L_v)$ is the group of automorphisms of $L_v^*/L_v\cong \ZZ_{2n-2}$ preserving the quadratic form (defined modulo $2\ZZ$).
Any two generators of $\ZZ_{2n-2}$ with norm $\frac{1}{2n-2}$ are related by some element of $O(L_v^*/L_v)$, and this element lifts to some automorphism in $O(L_v)$. The latter automorphism maps $s$ to a vector $\tilde s$ with the same norm and divisor.} All such $\tilde s$ are in the same $O(L_v)$-orbit, but are not related by $O_0(L_v)$ transformations. \section{Symmetries and twining genera}\label{s:mainres} In this section we will discuss the classification (up to isomorphisms) of the possible groups of symmetries of (non-singular) $\mathcal{N}=(4,4)$ superconformal field theories with central charge $c=6n$, $n>1$, in the moduli space of non-singular NLSMs on hyperk\"ahler manifolds of $K3^{[n]}$ type. The results and the methods are very similar to the ones relevant for non-linear sigma models on K3 \cite{K3symm}. \subsection{A (restricted) classification of symmetries}\label{s:callifsymm} Let us consider $v\in \Gamma^{5,21}$, with $v^2=2n-2$, and suppose that the five-plane $Z$ is at an attractor point for $v$, i.e. $v\in Z$. As usual, let $L_v:=v^\perp \cap \Gamma^{5,21}$ and $\Pi=Z\cap v^\perp$, so that $\Pi$ is a positive definite four-plane in $L_v\otimes\RR$ and determines a point $\mathcal{C}$ in the moduli space $\mathcal{M}_n$ of NLSM on $K3^{[n]}$. We assume that this point is non-singular, in the sense of section \ref{s:singular}, so that we expect the corresponding CFT to be well defined. Our goal is to determine the possible groups of symmetries of the CFT $\mathcal{C}$. Let us try to be more precise. By symmetries of a conformal field theory we mean a linear transformation of the fields that preserves the OPE and fixes the vacuum vector and the stress-energy tensor. We will focus on symmetries that satisfy some additional properties: \begin{itemize} \item[(1)] They commute with the full $\mathcal{N}=(4,4)$ superconformal algebra (and not only the Virasoro algebra).
\item[(2)] They commute with the left- and right-moving spectral flow isomorphisms relating the Neveu-Schwarz and the Ramond sectors of the theory. \end{itemize} While there are certainly interesting symmetries that \emph{do not} satisfy these conditions, there are some good reasons to require them. The first reason is practical: without these conditions, the classification problem is much more complicated, and in fact it is not solved even in the case of non-linear sigma models on K3. Secondly, these properties ensure that the action on the low-energy six dimensional type IIB string compactification preserves space-time supersymmetry. Finally, these conditions are the exact analogue of the ones required in \cite{K3symm} for NLSM on K3 and in \cite{Volpato:2014zla} for NLSM on $T^4$. Let $G$ be the group of symmetries of the CFT $\mathcal{C}$ satisfying the conditions (1) and (2) above. Then, $G$ has an action on the $84$-dimensional space of exactly marginal operators of the model, which can be identified with the tangent space to the Grassmannian $Gr^+(4,21)$ at the point corresponding to $\mathcal{C}$.\footnote{To be precise, $\mathcal{C}$ determines a point in the quotient $Gr^+(4,21)/O_n^+$, and one has to choose a lift of this point to $Gr^+(4,21)$. What follows is independent of this choice.} These fields are suitable supersymmetric descendants of the $84$-dimensional space of superconformal primaries of weights $(1/2,1/2)$, which decomposes into $21$ representations of the superconformal algebra. By (1), the group $G$ will act by $O(21)$ transformations on these $21$ representations, where $O(21)$ is understood as the group of $O(4,21)$ transformations that leave the positive definite $4$-dimensional subspace $\Pi$ pointwise fixed. If two exactly marginal operators are related by a symmetry in $G$, then the models obtained by the corresponding deformations will be equivalent, i.e. related by a duality.
This means that the action on the $84$-dimensional space must be induced by a transformation in the duality group $O^+_n$. We conclude that there must be a homomorphism $G\to O^+_n\cap O(21)$. This homomorphism must be surjective: every duality in $O^+_n$ fixing the $4$-dimensional space $\Pi$ will map the theory $\mathcal{C}$ into itself in a non-trivial way (as can be seen from the action on the exactly marginal operators), so it must lift to a symmetry of the theory satisfying (1) and (2). One can also argue that the homomorphism is injective. Indeed, any element $g$ in the kernel would act trivially on the whole space of exactly marginal operators of the theory. As a consequence, any element of the kernel is preserved by any deformation, and since the moduli space is connected, it must be a symmetry of every SCFT in the moduli space. Therefore, in order to exclude a non-trivial kernel for every model in the moduli space, it is sufficient to choose one $K3^{[n]}$-model $\mathcal{C}$ and show that $\mathcal{C}$ has no symmetries satisfying (1) and (2) and acting trivially on all exactly marginal operators. A suitable model for this analysis is given by $\mathcal{C}=\Sym^n \mathcal{C}'$, where $\mathcal{C}'$ is the NLSM on K3 with `the largest symmetry group', described in \cite{Gaberdiel:2013psa}. The details are relegated to appendix \ref{a:nokernel}. \bigskip As discussed in section \ref{s:symmorbmoduli}, the group $O^+_n$ contains a normal subgroup $Stab^+(v)$ which is the subgroup of $O(\Gamma^{5,21})$ fixing the vector $v$. Physically, these are the dualities that lift to dualities of the full type IIB string theory and that fix the charge $v$ of the string-like object we are considering. One can expect $O^+_n$ to be strictly larger than $Stab^+(v)$.
In particular, for $n>2$, the set $O^+_n\setminus Stab^+(v)$ should contain elements in $O(\Gamma^{5,21})$ that flip the sign of $v$: if our analysis of section \ref{s:symmorbmoduli} is correct, then for $n>2$ every symmetric product $\Sym^n \mathcal{C}'$ of a NLSM $\mathcal{C}'$ on a K3 surface has a self-duality in this set, which in turn satisfies (1) and (2). For the purpose of the classification, it is convenient to restrict to the symmetries that satisfy (1), (2) and \begin{itemize} \item[(3)] They lift to symmetries of the whole type IIB string theory fixing the vector $v\in \Gamma^{5,21}$. \end{itemize} This is quite a natural restriction from a space-time viewpoint. From the perspective of the non-linear sigma model, however, it would be interesting to consider the larger group of symmetries where (3) is not necessarily satisfied. We will not attempt a classification of these groups in this work. \bigskip We will also put some restrictions on the set of models we will consider. As argued in section \ref{s:singular}, the set $S_n^{roots}\subset O_n^+\backslash Gr^+(4,21)$ corresponding to $4$-planes $\Pi$ that are orthogonal to some root $r\in \Gamma^{5,21}$, $r^2=-2$, should be contained in the set of singular models, and therefore should be excluded from the moduli space $\mathcal{M}_n$. On the other hand, the analysis of symmetric orbifolds (in particular, for $n=2$) seems to be in contradiction with this conclusion. For this reason, it is preferable to take an agnostic view on this argument and explicitly exclude such models from our classification. This restriction greatly simplifies the classification problem we are considering -- in fact, we were not able to generalize this classification so as to include these models. \bigskip Finally, we will assume that the set of singular models is not larger than the set $S$ described in section \ref{s:singular}.
This assumption will be needed for the second part of theorem \ref{th:main}, in order to show the existence of a non-singular model with a given symmetry group $G$.\footnote{The assumption was also used to argue that the moduli space $\mathcal{M}_n$ is connected, and show that there is no symmetry acting trivially on all exactly marginal deformations. However, the assumption about connectedness of $\mathcal{M}_n$ is much weaker than the one we need for the second part of theorem \ref{th:main}.} Our discussion leads us to conclude: \begin{claim} Let $\mathcal{C}_\Pi$ be a non-singular SCFT of $K3^{[n]}$ type, $n\ge 1$, corresponding to a $4$-subspace $\Pi\subset L_v\otimes \RR\cong \RR^{4,21}$, and such that $\Pi$ is not orthogonal to any $r\in L_v$, $r^2=-2$. The group of symmetries $G_\Pi$ satisfying the conditions (1), (2), and (3) above is isomorphic to the subgroup of $Stab^+(v)\subset O(\Gamma^{5,21})$ that fixes $Z=\Pi\oplus \langle v\rangle \subset \Gamma^{5,21}\otimes \RR$ pointwise. \end{claim} While this is in principle a complete characterization of the symmetry groups $G_\Pi$, in practice it is not a trivial task to determine which groups can actually arise. The following propositions provide a much more useful characterization of the groups $G_\Pi$. \begin{proposition}\label{th:lemma} The group $G_\Pi$ is isomorphic to the group $O^0(L_\Pi)$ of automorphisms of the lattice $L_\Pi:=L_v\cap \Pi^\perp$ that act trivially on the discriminant group $L_\Pi^*/L_\Pi$. \end{proposition} The proof is in appendix \ref{a:pflemma}. This proposition shows, in particular, that the group $G_\Pi$ only depends on the isomorphism class of the lattice $L_\Pi$. In particular, different models $\mathcal{C}_\Pi,\mathcal{C}_{\Pi'}$ corresponding to isomorphic lattices $L_{\Pi}\cong L_{\Pi'}$ have isomorphic groups $G_\Pi\cong G_{\Pi'}$, even if they are not related by a duality. 
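Proposition \ref{th:lemma} makes the computation of $G_\Pi$ a purely lattice-theoretic problem. As an illustration, the following brute-force sketch (a toy example of ours: the rank-two negative definite $A_2$ root lattice, not one of the rank-$\le 21$ lattices $L_\Pi$ of the text) computes $O^0(L)$ by enumerating isometries of the Gram matrix and testing their action on the discriminant group $L^*/L\cong\ZZ_3$:

```python
from fractions import Fraction
from itertools import product

# Gram matrix of the negative definite A_2 root lattice (toy example).
G = [[-2, 1], [1, -2]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# An isometry is an integer matrix M with M^T G M = G; for A_2 all of them
# have entries in {-1, 0, 1}, so a small search range suffices.
isometries = []
for entries in product(range(-2, 3), repeat=4):
    M = [list(entries[:2]), list(entries[2:])]
    if matmul(transpose(M), matmul(G, M)) == G:
        isometries.append(M)

# A generator of the discriminant group L^*/L = Z_3, in lattice coordinates:
# the first column of G^{-1} = (1/3)[[-2, -1], [-1, -2]].
gen = [Fraction(-2, 3), Fraction(-1, 3)]

def acts_trivially(M):
    # M fixes the discriminant class of `gen` iff M*gen - gen is integral.
    image = [M[0][0] * gen[0] + M[0][1] * gen[1],
             M[1][0] * gen[0] + M[1][1] * gen[1]]
    return all((image[i] - gen[i]).denominator == 1 for i in range(2))

O0 = [M for M in isometries if acts_trivially(M)]
print(len(isometries), len(O0))  # 12 isometries; the 6 Weyl elements form O^0
```

Here the full isometry group has order $12$, and $O^0(L)$ is the order-$6$ Weyl group, the kernel of the surjection $O(L)\to O(L^*/L)\cong\ZZ_2$; the same recipe, applied to the much larger lattices $L_\Pi$, is what the proposition reduces the symmetry classification to.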
The most useful characterization of the groups $G_\Pi$ is given by the following theorem, which is a direct analogue of the main result of \cite{K3symm}. Let $\Lambda$ be the negative definite Leech lattice, i.e. the unique negative definite even unimodular lattice of rank $24$ without roots, that is, without vectors of square norm $-2$. Its group of automorphisms $O(\Lambda)$ is the finite group $Co_0\cong \ZZ_2.Co_1$, a central $\ZZ_2$ extension of the sporadic simple group $Co_1$. \begin{proposition}\label{th:main} Let $\mathcal{C}_\Pi$ be a non-singular SCFT of $K3^{[n]}$-type, $n> 1$, corresponding to a $4$-subspace $\Pi\subset L_v\otimes \RR\cong \RR^{4,21}$, and such that $\Pi$ is not orthogonal to any $r\in L_v$, $r^2=-2$. Let $L_\Pi=L_v\cap \Pi^\perp$ and $G_\Pi=O^0(L_\Pi)$ be the group of symmetries of $\mathcal{C}_\Pi$ satisfying the conditions (1), (2), and (3) above. Then, $L_\Pi$ is isomorphic to a primitive sublattice $\Lambda_\Pi$ of the Leech lattice $\Lambda$ of rank $\rk \Lambda_\Pi \le 21$, and $G_\Pi$ is isomorphic to the subgroup of $\Aut(\Lambda)\cong Co_0$ fixing pointwise the sublattice $\Lambda\cap \Lambda_\Pi^\perp$ of rank at least $3$. Vice versa, if $\tilde G\subset Co_0$ is the pointwise stabilizer of a sublattice $\Lambda^{\tilde G}$ of the Leech lattice $\Lambda$ of rank at least $3$, then for every $n>1$ there exists a non-singular NLSM $\mathcal{C}_\Pi$ on $K3^{[n]}$ whose group of symmetries $G_\Pi$ is isomorphic to $\tilde G$ and such that $L_\Pi\cong (\Lambda^{\tilde G})^\perp \cap \Lambda$. \end{proposition} The proof is in appendix \ref{a:pfthmain}. Triples $(\tilde G,\Lambda^{\tilde G},\Lambda_{\tilde G})$, where $\tilde G\subset Co_0$ is the stabilizer of a sublattice of $\Lambda$, and $\Lambda^{\tilde G}$ and $\Lambda_{\tilde G}=(\Lambda^{\tilde G})^\perp\cap \Lambda$ are the corresponding invariant and coinvariant lattices, were classified in \cite{HohnMason} up to isomorphisms.
Out of the $290$ groups listed in \cite{HohnMason}, $221$ have an invariant sublattice $\Lambda^{\tilde G}$ of rank at least $3$. As expected from the analysis of section \ref{s:symmorbmoduli}, the quantum symmetry of symmetric orbifolds is not included in the groups $G_\Pi$ classified in this theorem. Indeed, there is no element of $Co_0$ of order $2$ fixing a $3$-dimensional sublattice of $\Lambda$ and acting by $-1$ on the orthogonal complement. This means that, as argued in section \ref{s:symmorbmoduli}, either the $n$-th symmetric orbifold corresponds to a $4$-plane $\Pi$ orthogonal to a root (which seems to be the case for $n=2$), or the quantum symmetry does not satisfy the conditions (1), (2), and (3). In fact, it is easy to see that the extension of the quantum symmetry to $O(\Gamma^{5,21})$ must act on $v$ by $v\mapsto -v$, so that (3) is not satisfied. It is quite remarkable that the list of groups appearing as possible groups of symmetries of NLSM of $K3^{[n]}$ type is essentially independent of $n$, at least for $n>1$. When the fixed sublattice has rank at least $4$, a partial explanation of this phenomenon is given by the symmetric orbifold construction. Indeed, in this case, we know from \cite{K3symm} that there exists a NLSM $\mathcal{C}'$ on K3 with symmetry group $G$, so that in all the symmetric orbifold models $\Sym^n \mathcal{C}'$ the symmetry group must contain $G$, and it is easy to take a deformation for which the group is \emph{exactly} $G$. From the perspective of type IIB string theory on K3, what happens is that there is a family of models in the moduli space having a self-duality group $G\subset O(\Gamma^{5,21})$ fixing a sublattice $\Gamma^{1,1}\subset \Gamma^{5,21}$. Therefore, all NLSM describing the world-sheet dynamics of string-like objects whose charge is in this $G$-fixed sublattice will have $G$ as a group of symmetries. We do not see any similar explanation for the case where the rank of the fixed sublattice is exactly $3$.
Let us briefly comment on the problem of classifying groups of symmetries $G^*_\Pi$ that satisfy properties (1) and (2), but not necessarily (3). Let us assume that $O^+_n$ contains only elements of $O(\Gamma^{5,21})$ acting by $v\mapsto \pm v$ on the vector $v$. For a given model $\mathcal{C}_\Pi$ of $K3^{[n]}$ type, $G^*_\Pi$ might be either equal to $G_\Pi$ (if there are no symmetries satisfying (1) and (2), but not (3)), or of the form $G^*_\Pi=G_\Pi.\ZZ_2$, i.e. it contains a normal subgroup $G_\Pi$ such that $G^*_\Pi/G_\Pi\cong\ZZ_2$. Furthermore, it is clear from our construction that $G_\Pi^*$ must be a subgroup of the automorphism group $O(L_\Pi)$ of the lattice $L_\Pi:=\Gamma^{5,21}\cap Z^\perp$. It seems difficult to find a practical criterion to determine whether $G^*_\Pi$ is larger than $G_\Pi$ or not. In particular, this seems to depend not only on the isomorphism class of $L_\Pi$, but also on $n$, on $v$, and on the way $L_\Pi$ is embedded in $\Gamma^{5,21}$. It also seems difficult to determine the precise structure of the group $G_\Pi^*$ in the cases where it is different from $G_\Pi$ -- in general, there can be many non-isomorphic groups $G_\Pi^*$ such that $G^*_\Pi/G_\Pi\cong \ZZ_2$. In particular, the example of the symmetric orbifolds shows that $G_\Pi^*$ is not always a subgroup of $Co_0$. We hope we will be able to address these issues in future works. \subsection{Twining genera}\label{s:twining} Given a NLSM $\mathcal{C}$ of $K3^{[n]}$ type with a group of symmetries $G$ (satisfying the conditions (1), (2) and (3) of section \ref{s:callifsymm}), one is interested in finding how the group $G$ acts on the states of the model, in the sense of determining how the space of states of $\mathcal{C}$ decomposes into irreducible representations of $G$.
This information can be partially recovered if one knows all \emph{twining genera} $\phi_g(\tau,z)$, that are functions on $\mathbb{H}\times \CC$ defined by \begin{equation} \phi_g(\tau,z):=\Tr_{RR}(g\, q^{L_0-\frac{c}{24}} \bar q^{\bar L_0-\frac{\bar c}{24}} y^{J_0^3} (-1)^{F+\bar F})\ ,\qquad q=e^{2\pi i\tau}\ ,\quad y=e^{2\pi i z}\ , \end{equation} for all $g\in G$. Here, the trace is taken over the Ramond-Ramond sector of the theory $\mathcal{C}$, $L_0$ and $\bar L_0$ are the holomorphic and anti-holomorphic Virasoro generators, $J_0^3$ is the zero mode of a Cartan generator in the affine $su(2)$ subalgebra of the holomorphic $\mathcal{N}=4$ superconformal algebra, and $F$ and $\bar F$ are the holomorphic and anti-holomorphic fermion numbers. For $g=1$, $\phi_g$ reduces to the elliptic genus of the model $\mathcal{C}$. As for the elliptic genus, only states with $\bar L_0-\frac{\bar c}{24}=0$ (right-moving ground states) can give a non-vanishing contribution to this trace: the contributions of states with $\bar L_0-\frac{\bar c}{24}>0$ cancel each other due to supersymmetry. This implies that $\phi_g(\tau,z)$ is a holomorphic function of $(\tau,z)\in \mathbb{H}\times\CC$. In a path integral formulation of the NLSM, the twining genera can be described in terms of a path integral on a world-sheet $\Sigma$ of genus $1$ (a torus) with modular parameter $\tau$, with the insertion of the operator $y^{J_0^3}$, and where all the fields are required to be periodic around one of the cycles in a basis of $H_1(\Sigma,\ZZ)$, and twisted by $g$ as one goes around the other cycle.
From this description, and from spectral flow invariance, one can argue that $\phi_g(\tau,z)$ must transform as a Jacobi form of weight $0$ and index $n$ (see \cite{eichler_zagier} for the relevant definitions) \begin{align*} \phi_{g} \left ({a \tau + b\over c\tau +d}, {z \over c\tau + d}\right )&= e^{2 \pi i n {c z^2\over c\tau + d}} \chi_g\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)\phi_{g}(\tau, z) ~~~&& \forall \left(\begin{array}{cc} a & b \\ c & d \end{array}\right) \in \Gamma_g\subseteq SL_2(\mathbb Z), \\ \phi_{g}(\tau, z + \ell \tau + \ell')&=e^{-2 \pi i n(\ell^2\tau + 2\ell z)}\, \phi_{g}(\tau,z)~ ~~ &&\forall (\ell, \ell') \in \mathbb Z^2, \end{align*} for a suitable subgroup $\Gamma_g\subset SL(2,\ZZ)$. We have allowed for the possibility of a non-trivial multiplier $\chi_g:\Gamma_g\to \CC^\times$, which cannot be excluded by path integral arguments (and can be shown to exist in some explicit examples). Once the twining genera $\phi_g$ are given for all $g\in G$, using standard group theory arguments one can determine how every simultaneous eigenspace for $L_0,\bar L_0,J_0^3$ (which is always finite dimensional) decomposes as a sum $\oplus_i n_i R_i$ over the irreducible representations $R_i$ of $G$, where the $n_i$ are $\ZZ_2$-graded multiplicities and the grading is given by the total fermion number $(-1)^{F+\bar F}$. This means that the twining genera are not sufficient to detect whether a certain eigenspace contains the sum of two copies of the same $G$-representation with opposite fermion number, since their contribution cancels exactly. In fact, we know that huge cancellations do occur in every model $\mathcal{C}$ due to supersymmetry: for example, all contributions from states with $\bar L_0-\frac{\bar c}{24}>0$ cancel each other. In this sense, one can only recover partial information from the twining genera. Nevertheless, the twining genera are useful for two reasons.
First, they provide interesting information about the action of $G$ on some important subset of states, namely the ones that are supersymmetric with respect to the anti-holomorphic $\mathcal{N}=4$ superconformal algebra. Secondly, as we will explain in the rest of this section, one can determine a large number of such twining genera, whereas the analogous computation of `twining partition functions' without the inclusion of the fermion number $(-1)^{F+\bar F}$ is in general out of reach (except for a few very special points in the moduli space). The property that makes the twining genus $\phi_g$ computable is the fact that it is invariant under continuous deformations of the moduli that preserve the symmetry $g$. The argument for this is completely analogous to the one leading to the invariance of the elliptic genus (see for example \cite{Cheng:2016org}). By the analysis in section \ref{s:callifsymm}, any symmetry $g$ can be identified with an element in $Stab^+(v)\subset O^+(\Gamma^{5,21})$ fixing a sublattice $\Gamma^g\subset \Gamma^{5,21}$ of signature $(5,d)$, $d\ge 0$. An infinitesimal deformation of the NLSM preserves the symmetry $g$ if and only if it is generated by a $g$-invariant exactly marginal operator. It follows that, for each symmetry $g\in O^+(\Gamma^{5,21})$, there is a family $\mathcal{F}_g$ of non-singular models with symmetry $g$, consisting of all non-singular (in the sense of section \ref{s:singular}) positive definite oriented $5$-planes $Z$ containing $v$, with $Z\subset \Gamma^g\otimes \RR$. Equivalently, $g$ can be seen as an element of $O^+_0(L_v)\cong Stab^+(v)$ acting trivially on a sublattice $(L_v)^g$ of signature $(4,d)$, $d\ge 0$.
This family is necessarily connected, because it is the quotient by a discrete group of dualities of a space of the form $Gr(4,d)\setminus \mathcal{S}$, where $Gr(4,d)$ is the Grassmannian of positive definite four-planes $\Pi\subset (L_v)^g\otimes \RR\cong \RR^{4,d}$, and $\mathcal{S}$ is the locus of singular NLSM, described in section \ref{s:singular}, which has codimension at least $4$. This means that the twining genus $\phi_g$ depends only on the element $g\in Stab^+(v)$, and not on the particular model $\mathcal{C}\in \mathcal{F}_g$ in which it is computed. Therefore, in order to determine the twining genus $\phi_g$ for the whole family $\mathcal{F}_g$, it is sufficient to compute it at any point in $\mathcal{F}_g$. Furthermore, if two elements $g,g'\in Stab^+(v)$ are conjugate in $Stab^+(v)$, i.e. $g'=hgh^{-1}$ for some $h\in Stab^+(v)$, then the models of the family $\mathcal{F}_g$ are dual to the ones in the family $\mathcal{F}_{g'}$, and the corresponding twining genera $\phi_g$ and $\phi_{g'}$ are the same.\footnote{This is true more generally for conjugation by any $h\in O^+_n$. Since it is not completely clear to us what this group is, in this section and in the following we will be conservative and only consider dualities by $Stab^+(v)$. } It follows that the twining genus $\phi_g$ only depends on the conjugacy class of $g$ in $Stab^+(v)$. Finally, invariance of the $\mathcal{N}=4$ characters under charge conjugation implies \begin{equation} \phi_{g}(\tau,z)=\phi_{g^{-1}}(\tau,-z)=\phi_{g^{-1}}(\tau,z)\ . \end{equation} To summarize, while the definition of the twining genera $\phi_g$ refers to a specific CFT of K3$^{[n]}$ type, the twining genus itself only depends on the element $g\in Stab^+(v)$ up to conjugation in $Stab^+(v)$ and is invariant under charge conjugation $g\leftrightarrow g^{-1}$.
Each such class determines a connected family $\mathcal{F}_g$ of non-singular CFTs of K3$^{[n]}$ type, such that $\phi_g$ is well-defined at each point in $\mathcal{F}_g$ and is constant along $\mathcal{F}_g$. In section \ref{s:conjclass} we will discuss the classification of such conjugacy classes of symmetries. \subsection{Conjugacy classes of symmetries}\label{s:conjclass} Motivated by the analysis of section \ref{s:twining}, we will now consider a classification of all conjugacy classes of elements $g\in Stab^+(v)$ fixing a positive definite $4$-dimensional subspace in $L_v\otimes \RR\cong \RR^{4,21}$. A first rough classification follows by considering the possible eigenvalues of $g$ in the defining $25$-dimensional representation. By construction, $g$ acts non-trivially only on a negative definite sublattice $\Gamma_g\subseteq L_v\cap \Pi^\perp $ of rank $21-d$, $d\ge 0$, so that there are only $21-d$ non-trivial eigenvalues. By theorem \ref{th:main}, the lattice $\Gamma_g$ can be primitively embedded in the Leech lattice $\Lambda$, and the non-trivial eigenvalues of $g$ are the same as for an element $g'\in Co_0$, such that the cyclic group $\langle g'\rangle\subset Co_0$ is the pointwise stabilizer of a $3+d$ dimensional sublattice $\Lambda^{\langle g'\rangle}\subset \Lambda$. There are $42$ conjugacy classes $[g']$ of $Co_0$ whose elements $g'$ stabilize a sublattice of rank at least $3$ (see \cite{Conway:1985vn}). As a matter of fact, all such classes actually stabilize a sublattice of rank at least $4$, so that $d\ge 1$. Furthermore, for any two distinct $Co_0$ classes, the corresponding elements have different sets of eigenvalues, and non-isomorphic lattices $\Lambda_{g'}=(\Lambda^{g'})^\perp\cap \Lambda$ (see for example \cite{HaradaLang1990,HohnMason}).
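The eigenvalue data of such an element can be read off from the traces of its powers: writing $\det(t\mathbf{1}-g)=\prod_{\ell|N}(t^\ell-1)^{k_\ell}$ (the Frame shape discussed below), one has $\mathrm{tr}(g^d)=\sum_{\ell|d}\ell\, k_\ell$, so M\"obius inversion gives $\ell\, k_\ell=\sum_{d|\ell}\mu(\ell/d)\,\mathrm{tr}(g^d)$. A minimal computational sketch (toy matrices of small rank, not actual elements of the $24$-dimensional representation of $Co_0$):

```python
def mobius(n):
    # Moebius function mu(n) by trial division (fine for tiny n).
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # square factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def trace_power(M, d):
    # Trace of M^d for a small integer matrix M.
    size = len(M)
    P = [[int(i == j) for j in range(size)] for i in range(size)]
    for _ in range(d):
        P = [[sum(P[i][k] * M[k][j] for k in range(size))
              for j in range(size)] for i in range(size)]
    return sum(P[i][i] for i in range(size))

def frame_shape(M, N):
    # Exponents k_l from Moebius inversion of tr(M^d) = sum_{l | d} l * k_l.
    shape = {}
    for l in (l for l in range(1, N + 1) if N % l == 0):
        k = sum(mobius(l // d) * trace_power(M, d)
                for d in range(1, l + 1) if l % d == 0) // l
        if k:
            shape[l] = k
    return shape

# -1 on a rank-4 lattice: char. poly (t+1)^4 = (t^2-1)^4 (t-1)^{-4},
# i.e. Frame shape 1^{-4} 2^4 (the rank-24 analogue is 1^{-24} 2^{24}).
minus_id = [[-int(i == j) for j in range(4)] for i in range(4)]
print(frame_shape(minus_id, 2))  # {1: -4, 2: 4}
```

The example also shows how negative exponents $k_\ell$ arise for automorphisms that are not permutations of a basis.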
The sets of eigenvalues are most easily encoded in the Frame shape of $g'$, {\it i.e.} a symbolic product \begin{equation} \pi_{g'} := \prod_{\ell|N}\ell^{k_\ell}\ , \end{equation} where $N=o(g')$ is the order of $g'$. The integers $k_\ell\in \ZZ$ are defined by \begin{equation} \det (t{\bf 1}_{24}- \rho_{24}(g')) =\prod_{\ell|N} (t^\ell-1)^{k_\ell}\ . \end{equation} If $g'$ acts as a permutation of the vectors in some basis of the $24$ dimensional representation of $Co_0$, then all $k_\ell$ are non-negative and the Frame shape coincides with the cycle shape of the permutation. By theorem \ref{th:main}, we conclude that there are $42$ possible sets of eigenvalues for symmetries $g\in Stab^+(v)$, each corresponding to a certain isomorphism class of lattices $\Gamma_g$. While two elements $g_1,g_2\in Stab^+(v)$ with different Frame shapes obviously belong to distinct $Stab^+(v)$ conjugacy classes, the converse is not necessarily true: it could happen that $g_1,g_2$ with the same Frame shape are not conjugate in $Stab^+(v)$. We are left with the problem of determining the possible classes for each of the $42$ Frame shapes. The following lemma is useful in this sense. \begin{lemma} Let $L$ be an even lattice and let $G\subseteq O(L)$ be a subgroup of its group of automorphisms. Let $g_1,g_2\in G$ be such that the coinvariant sublattices $L_{g_k}:=(L^{g_k})^\perp\cap L$, $k=1,2$, are both isomorphic to a given lattice $M$. Let $i_1,i_2:M\hookrightarrow L$ be primitive embeddings and let $g\in O(M)$ be an automorphism of $M$, such that $i_k(M)=L_{g_k}$ and $g_k\circ i_k=i_k\circ g$, $k=1,2$. Then, $g_1$ and $g_2$ are conjugate in $G$ if and only if there exist $h\in G$ and $s\in C_{O(M)}(g)$ (the centralizer of $g$ in $O(M)$) such that \begin{equation} h\circ i_1=i_2\circ s\ , \end{equation} and in this case $g_2=hg_1h^{-1}$. \end{lemma} \begin{proof} This is an immediate generalization of the proof of Lemma 8 in \cite{Cheng:2016org}.
\end{proof} In the case we are interested in, $L$ is the lattice $L_v=v^\perp\cap \Gamma^{5,21}$, $G=Stab^+(v)$ is the subgroup of $O(\Gamma^{5,21})$ preserving $v$ and preserving the orientation of positive definite $4$-subspaces in $L_v\otimes \RR$, and the lattice $M$ is the coinvariant sublattice $\Lambda_g=\Lambda\cap (\Lambda^g)^\perp$, i.e. the orthogonal complement of the $g$-fixed sublattice in the Leech lattice, for some $g\in Co_0$ with the given Frame shape $\pi_g$. Therefore, the symmetries with Frame shape $\pi_g$ up to $Stab^+(v)$ transformations are in one to one correspondence with the cosets \begin{equation}\label{upbound} \{\text{primitive }i:\Lambda_g\hookrightarrow L_v\}/Stab^+(v)\ , \end{equation} where $Stab^+(v)$ is the stabilizer of $v$ in $O^+(\Gamma^{5,21})$. In principle, one might want to consider the number of classes of symmetries up to $O^+_n$ transformations, where $O^+_n$ might contain (at least) elements that flip the sign of the vector $v$. We will not do this in this paper, but just notice that the number of cosets \eqref{upbound} gives an upper bound on the number of cosets up to $O_n^+$ transformations. On the other hand, for most Frame shapes $\pi_g$, this upper bound is either $1$ or equals some lower bound that can be obtained by other arguments (in particular, by the number of known twining genera). The cosets in \eqref{upbound} are in one to one correspondence with the cosets in \begin{equation} \label{upboundeq} \{(\hat v,\hat i)\mid \hat i:\Lambda_g\hookrightarrow \Gamma^{5,21}\text{ primitive},\ \hat v\in \Gamma^{5,21}\text{ primitive}, \hat v^2=2n-2\}/O^+(\Gamma^{5,21})\ . \end{equation} Indeed, there is a map from \eqref{upbound} to \eqref{upboundeq} given by assigning to a coset $[i]$ with representative $i$ the coset $[(v,i)]$ with representative $(v,i)$.
If we choose a different representative $i'$ in the same coset $[i]$, then $(v,i)$ and $(v,i')$ are related by $Stab^+(v)\subset O^+(\Gamma^{5,21})$, so they belong to the same $[(v,i)]$. It follows that the map is well defined. Vice versa, given any coset in \eqref{upboundeq}, using the fact that any two primitive vectors of the same length in $\Gamma^{5,21}$ are related by an $O^+(\Gamma^{5,21})$ transformation, we can choose a representative of the form $(v,\hat i)$, and the embedding $\hat i$ is determined up to automorphisms in $Stab^+(v)$. Therefore, this determines a coset $[\hat i]$ in \eqref{upbound}, and gives a well defined map from \eqref{upboundeq} to \eqref{upbound} that is clearly the inverse of the previous one. The quotient in \eqref{upboundeq} admits two alternative useful descriptions. The first is given by noticing that a pair $(\hat v,\hat i)$ as in \eqref{upboundeq} determines an embedding $\tilde i:\Lambda_{g,n}\hookrightarrow \Gamma^{5,21}$, where \begin{equation} \Lambda_{g,n}:=\Lambda_g\oplus \langle 2n-2\rangle\ . \end{equation} More precisely, the cosets in \eqref{upboundeq} are in one to one correspondence with \begin{equation} \{\tilde i:\Lambda_{g,n}\hookrightarrow \Gamma^{5,21}\mid \tilde i(\Lambda_g), \tilde i(\langle 2n-2\rangle)\text{ primitive in }\Gamma^{5,21}\}/O^+(\Gamma^{5,21})\ . \end{equation} The second one uses the fact that the lattices $\Lambda_g$ we are considering are the same as those appearing in the classification of symmetries of NLSM on K3. Using the results of \cite{Nikulin}, one can show that all such lattices can be primitively embedded in an even unimodular lattice $\Gamma^{4,20}$ (see \cite{K3symm} for details). For any decomposition $\Gamma^{5,21}=\Gamma^{1,1}\oplus \Gamma^{4,20}$, this gives a primitive embedding of $\Lambda_g$ in $\Gamma^{5,21}$, such that the orthogonal complement $K$ is of the form $K=\Gamma^{1,1}\oplus K'$ for some lattice $K'$.
For lattices $K$ of this form, theorem 1.14.2 of Nikulin \cite{Nikulin} then implies that the primitive embedding of $\Lambda_g$ in $\Gamma^{5,21}$ is unique up to $O^+(\Gamma^{5,21})$ transformations. For each $\Lambda_g$ let us choose one such primitive embedding $ i_g:\Lambda_g \to \Gamma^{5,21}$, and set $K:=i_g(\Lambda_g)^\perp\cap\Gamma^{5,21}$. Then, each coset in \eqref{upboundeq} has a representative of the form $(\hat v,i_g)$, where $\hat v\in K\subset \Gamma^{5,21}$ is determined up to $O^+(\Gamma^{5,21})$ transformations acting trivially on $i_g(\Lambda_g)$. The latter transformations act on $K$ by an automorphism in $O^{0+}(K)$, fixing the orientation of positive definite $5$-dimensional subspaces and acting trivially on $K^*/K$. In fact, every element in $O^{0+}(K)$ extends to an $O^+(\Gamma^{5,21})$ automorphism acting trivially on $K^\perp\equiv i_g(\Lambda_g)$. We conclude that the cosets in \eqref{upboundeq} are in one to one correspondence with \begin{equation} \{\text{primitive } v\in K\mid v^2=2n-2\}/O^{0+}(K)\ , \end{equation} i.e. $O^{0+}(K)$-orbits of primitive $v\in K$ of length $2n-2$. It remains to count these orbits. First of all, given a primitive $v\in K$, let $\Div(v)$ be the maximal positive integer such that $\frac{v}{\Div(v)}\in K^*$. One can show that $\Div(v)$ divides $\gcd(N,2n-2)$, and that for any $f\in O(K)$, $\Div(f(v))=\Div(v)$ (see appendix \ref{a:lattices}). If $f\in O^{0+}(K)$, then $\frac{1}{\Div(v)} f(v)\equiv \frac{1}{\Div(v)} v\mod K$, so that a necessary condition for two primitive vectors $v,v'\in K$ of the same length $2n-2$ to be in the same $O^{0+}(K)$-orbit is that $\frac{v}{\Div(v)}$ and $\frac{v'}{\Div(v')}$ are in the same $K^*/K$ coset (in particular, $\Div(v)=\Div(v')$; this is true more generally for vectors in the same $O(K)$-orbit). Therefore, there is at least one $O^{0+}(K)$-orbit for each generator $x\in K^*/K$ (elements $x\in K^*/K$ that are not generators cannot be written as $v/\Div(v)$ for some $v\in K$).
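In practice, $\Div(v)$ is easy to compute: $\frac{v}{d}\in K^*$ if and only if $d$ divides the inner product of $v$ with every basis vector of $K$, so $\Div(v)$ is the gcd of these inner products. A minimal sketch (the Gram matrix $\langle 6\rangle\oplus\langle -2\rangle$ is a toy example of ours, not one of the lattices $K$ appearing in the text):

```python
from math import gcd

def div(v, gram):
    # Div(v): the largest d with v/d in the dual lattice K^*, i.e. the gcd
    # of the pairings (v, e_j) of v with the basis vectors.
    pairings = [sum(v[i] * gram[i][j] for i in range(len(v)))
                for j in range(len(v))]
    return gcd(*(abs(x) for x in pairings))

# Toy Gram matrix <6> + <-2> (a crude stand-in with 2n - 2 = 6).
gram = [[6, 0], [0, -2]]
print(div([1, 1], gram))  # 2: indeed (1/2, 1/2) pairs integrally with the basis
print(div([1, 0], gram))  # 6: v/6 = e_1/6 lies in K^*
```

Both test vectors are primitive (coprime coordinates), yet have different divisors, illustrating how $\Div$ separates primitive vectors of $K$ into distinct orbits.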
The case where $\Div(v)=1$ is particularly interesting. This condition corresponds to the trivial coset $x\equiv 0 \in K^*/K$, since $\frac{v}{\Div(v)}=v\in K$. This is the only possible value for $\Div(v)$ when the order $N$ of $g$ and the norm $v^2=2n-2$ are coprime. Furthermore, as we will argue in section \ref{s:elevenandfriends}, based on the analysis of section \ref{s:symmorbmoduli}, this is also the case when the NLSM is a symmetric orbifold $\Sym^n(\mathcal{C}_{K3})$ of some NLSM on K3 $\mathcal{C}_{K3}$, and $g$ is induced by a symmetry of $\mathcal{C}_{K3}$. It is easy to see that $\Div(v)=1$ if and only if $\langle v\rangle \oplus \Lambda_g$ is a \emph{primitive} sublattice in $\Gamma^{5,21}$.\footnote{Suppose $\Div(v)=1$. Every primitive vector of $\langle v\rangle \oplus \Lambda_g$ is of the form $av+b\lambda$, with $\lambda$ primitive and $\gcd(a,b)=1$. Suppose that $\frac{1}{k}(av+b\lambda)\in \Gamma^{5,21}\subset K^*\oplus (\Lambda_g)^*$ for some integer $k$. Since $\frac{av}{k}\in K^*$ and $\Div(v)=1$, it must be that $k|a$ and $\frac{av}{k}\in K$, i.e. $\frac{av}{k}$ belongs to the trivial coset of $K^*/K\cong (\Lambda_g)^*/\Lambda_g$. Since $\frac{av}{k}+\frac{b\lambda}{k}\in \Gamma^{5,21}$, the gluing conditions imply that also $\frac{b\lambda}{k}\in \Lambda_g$. As a consequence, $k$ divides $b$, because $\lambda$ is primitive. But then $k$ divides $\gcd(a,b)=1$, so that $k=1$. Thus, every primitive vector of $\langle v\rangle \oplus \Lambda_g$ is primitive in $\Gamma^{5,21}$ as well. Conversely, notice that, for a general $\Div(v)$, the gluing construction implies that $\Gamma^{5,21}\subset K^*\oplus (\Lambda_g)^*$ contains a vector of the form $\frac{1}{\Div(v)}(v+\lambda)$, for some $\lambda\in \Lambda_g$, so if $\Div(v)>1$ then $\langle v\rangle \oplus \Lambda_g$ is not primitively embedded in $\Gamma^{5,21}$.
} Therefore, in the case $\Div(v)=1$, the number of classes equals the number of primitive embeddings of $\langle v\rangle\oplus \Lambda_g$ in $\Gamma^{5,21}$ modulo $O^+(\Gamma^{5,21})$. Since $\langle v\rangle\oplus \Lambda_g$ is indefinite of signature $(1,d)$, $0\le d\le 20$, one can apply the theorems of Miranda and Morrison \cite{MirandaMorrison1,MirandaMorrison2} to compute the number of such embeddings. This number depends essentially on the discriminant form of $\langle v\rangle\oplus \Lambda_g$, which is the direct sum of the discriminant form on $\Lambda_g^*/\Lambda_g$ and the discriminant form $(A,q)$, where $A\cong \ZZ_{2n-2}$ has a generator $x$ such that $q(x)=\frac{1}{2n-2}$. The necessary information about the discriminant forms is reported in appendix \ref{a:discriminants}. The results of this calculation are contained in table \ref{tab:big}. More generally, if $\Div(v)>1$, then $\frac{v}{\Div(v)}$ determines a non-trivial element $x\in K^*/K\cong \Lambda_g^*/\Lambda_g$, so that one has a primitive embedding in $\Gamma^{5,21}$ of an overlattice $M\supset \langle v\rangle\oplus \Lambda_g$, generated by $\langle v\rangle\oplus \Lambda_g$ together with a vector of the form $\frac{1}{\Div(v)}(v+\lambda)$, where $\lambda\in \Lambda_g$ is such that $\frac{\lambda}{\Div(v)}$ lies in the class $x\in \Lambda_g^*/\Lambda_g$. Thus, $M$ is an indefinite lattice whose discriminant group is a quotient of the discriminant group of $\langle v\rangle\oplus \Lambda_g$. In principle, one might be able to work out all the possibilities, but we will not do this here. \section{Second quantized twining genera}\label{s:elevenandfriends} Let $\mathcal{C}$ be a NLSM on a single K3 surface, and let us consider the symmetric orbifold $\Sym^n(\mathcal{C})$.
The elliptic genus is given by the $p^n$ coefficient of the infinite product \cite{Dijkgraaf:1996it,Dijkgraaf:1996xw} \begin{equation} \sum_{n=0}^{\infty} p^n\phi_{\Sym^n(K3)}(\tau,z)=\prod_{\substack{m,n,l\in \ZZ\\ n>0,m\ge 0}} (1-p^nq^my^l)^{-c(mn,l)}\ , \end{equation} where $\phi_{\Sym^0(K3)}:=1$ and $c(m,l)$ are the Fourier coefficients of the elliptic genus of K3 \begin{equation} \phi_{K3}(\tau,z)=\sum_{\substack{m\ge 0\\ l\in \ZZ}}c(m,l)q^my^l\ . \end{equation} Let $G$ be the symmetry group of the `seed' model $\mathcal{C}$, commuting with the $\mathcal{N}=4$ SCA and the spectral flow generators. The action of $G$ lifts to a group of symmetries $\tilde G$ of $\Sym^n(\mathcal{C})$ satisfying analogous properties (conditions (1) and (2) of section \ref{s:callifsymm}), and preserving the twisted and untwisted sectors. More precisely, the group $\tilde G$ fits in an exact sequence \begin{equation} 1\to H\to \tilde G\to G\to 1\ , \end{equation} where $H$ is a group acting trivially on the untwisted sector. Since $\Sym^n\mathcal{C}$ is generated by the untwisted sector and by the ground state of the $\sigma$-twisted sector, with $\sigma\in S_n$ a single transposition, the group $H$ can only act by phases on the $\sigma$-twisted ground state. The only such symmetry is, in fact, the quantum symmetry, so that $H\cong \ZZ_2$, and $\tilde G$ is a $\ZZ_2$ central extension of $G$. In the language of the previous sections, the model $\Sym^n(\mathcal{C})$ corresponds to a positive definite four-plane $\Pi\subset L_v\otimes \RR$, where $L_v=v^\perp \cap \Gamma^{5,21}$ for a primitive $v\in \Gamma^{5,21}$ of length $2n-2$. Since $G$ is a symmetry of the fundamental string world-sheet theory, one can argue that it lifts to a symmetry of the whole string theory, so that $\tilde G$ will act on $L_v$ by lattice automorphisms.
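Returning to the infinite product for the symmetric-orbifold elliptic genera: it can be checked order by order with a truncated series computation, taking the product in the standard DMVV convention ($n>0$, $m\ge 0$). The sketch below is a toy check, not the computation used in this paper: it keeps only the $q^0$ part, so only the standard coefficients $c(0,0)=20$, $c(0,\pm1)=2$ of $\phi_{K3}$ enter, and verifies that the $p^1$ coefficient of the product reproduces the seed coefficients.

```python
from fractions import Fraction
from collections import defaultdict

# q^0 Fourier coefficients c(m, l) of the K3 elliptic genus:
# phi_K3 = 2y + 20 + 2y^{-1} + O(q).
c = {(0, -1): 2, (0, 0): 20, (0, 1): 2}

P, Q = 2, 0  # truncation orders in p and q

def mul(s1, s2):
    """Multiply truncated series stored as {(a, b, l): coeff}, where
    a, b, l are the exponents of p, q, y."""
    out = defaultdict(int)
    for (a1, b1, l1), v1 in s1.items():
        for (a2, b2, l2), v2 in s2.items():
            if a1 + a2 <= P and b1 + b2 <= Q:
                out[(a1 + a2, b1 + b2, l1 + l2)] += v1 * v2
    return {k: v for k, v in out.items() if v}

def factor(n, m, l, e):
    """Truncated expansion of (1 - p^n q^m y^l)^(-e) for integer e."""
    s, a, k = {(0, 0, 0): 1}, Fraction(1), 1
    while n * k <= P and m * k <= Q:
        a *= Fraction(e + k - 1, k)  # recursion for binomial C(e+k-1, k)
        s[(n * k, m * k, l * k)] = int(a)
        k += 1
    return s

prod = {(0, 0, 0): 1}
for n in range(1, P + 1):        # n > 0
    for m in range(Q + 1):       # m >= 0
        for l in (-1, 0, 1):
            if (n * m, l) in c:
                prod = mul(prod, factor(n, m, l, c[(n * m, l)]))

# The p^1 coefficient reproduces the q^0 part of phi_K3 ...
assert prod[(1, 0, 0)] == 20 and prod[(1, 0, 1)] == 2
# ... and the p^2 coefficient gives the q^0 part of phi_{Sym^2(K3)}.
assert prod[(2, 0, 0)] == 234 and prod[(2, 0, 2)] == 3
```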
The analysis of section \ref{s:symmorbmoduli} shows that $\Pi$ is orthogonal to a vector $s\in L_v$ with $s^2=2-2n$, such that $L_v\cong \langle s\rangle\oplus_\perp \Gamma^{4,20}$. The quantum symmetry of the model, which acts by $s\mapsto -s$ and fixes $L_v\cap s^\perp\cong \Gamma^{4,20}$, is central in $\tilde G$, since $\tilde G$ does not mix the untwisted and the twisted sectors. This means that the action of $\tilde G$ on $L_v$ preserves setwise the sublattice $s^\perp \cap L_v\cong \Gamma^{4,20}$. The group $\tilde G$ has a normal subgroup of index $2$ that fixes the vector $s$ and acts faithfully on $\Gamma^{4,20}$. This normal subgroup is isomorphic to $\tilde G/H\cong G$; therefore, the $\ZZ_2$ extension of $G$ is split, and $\tilde G\cong \ZZ_2\times G$. Thus, the lift of $G$ to $\Sym^n\mathcal{C}$ can be chosen to act trivially on the twisted ground state and to be isomorphic to $G$ itself. With a certain abuse of notation, we will denote this lift again by $G$. Since $G$ fixes $s$, it must act trivially on $L_v^*/L_v$, so its action on $L_v$ extends to an action on $\Gamma^{5,21}$ by automorphisms that fix $v$. Therefore, conditions (1), (2), and (3) of section \ref{s:callifsymm} are satisfied. Let $\Gamma^G$ be the $G$-fixed sublattice of $\Gamma^{5,21}$ and $\Gamma_G=(\Gamma^G)^\perp\cap \Gamma^{5,21}$ its orthogonal complement. One has $\Gamma_g\subset \Gamma_G$ for all $g\in G$. Recall that $v$ and $s$ are contained in a sublattice $\Gamma^{1,1}\subset \Gamma^{5,21}$. Since both $v$ and $s$ are $G$-fixed, $\Gamma^{1,1}$ is a sublattice of $\Gamma^G$, and $v$ has divisor $1$ in this sublattice. Therefore, the sublattice $\Gamma_G\oplus_\perp \langle v\rangle$ is primitive in $\Gamma^{5,21}$, and the same is true for the sublattices $\Gamma_g\oplus_\perp \langle v\rangle$ for all $g\in G$.
We conclude that, as claimed in section \ref{s:conjclass}, the $Stab^+(v)$-conjugacy class of any symmetry $g$ of $\Sym^n(\mathcal{C})$ inherited from a symmetry of $\mathcal{C}$ is such that $v$ has divisor $1$ in $\Gamma^{5,21}\cap \Gamma_g$. Therefore, these classes are the ones described in table \ref{tab:big}. The twining genera $\phi^{\Sym^n(K3)}_g$ can be obtained from the generating functions \begin{equation} \Psi_g= \sum_{n=0}^{\infty} p^n\phi^{\Sym^n(K3)}_g(\tau,z)=\prod_{\substack{m,n,l\in \ZZ\\ n>0,m\ge 0}}\prod_{t\in \ZZ/N\ZZ} (1-e^{\frac{2\pi it}{N}}p^nq^my^l)^{-\hat c_{t}(mn,l)}\ , \end{equation} where $\phi^{\Sym^0(K3)}_g:=1$, $N$ is the order of $g$, and $\hat c_{t}(m,l)$ are the Fourier coefficients of the discrete Fourier transforms of the twining genera $\phi_{g^k}$ \begin{equation} \hat \phi_t(\tau,z)=\sum_{m=0}^\infty \sum_{l\in \ZZ} \hat c_{t}(m,l)q^my^l=\frac{1}{N}\sum_{k\in \ZZ/N\ZZ} e^{-\frac{2\pi i tk}{N}}\phi_{g^k}(\tau,z)\ . \end{equation} Therefore, the twining genera of the symmetric orbifold $\Sym^n(\mathcal{C})$ are completely determined by the twining genera of the `seed' K3 model $\mathcal{C}$. The complete list of the possible twining genera for a K3 model $\mathcal{C}$ can be found in \cite{Paquette:2017gmb}.
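For concreteness, the discrete Fourier transform relating the coefficients $\hat c_t$ to those of $\phi_{g^k}$ can be sketched as follows. The numerical input is a hypothetical placeholder (only $c_{g^0}(0,0)=20$ is the actual K3 value); the point is just the transform and its inverse.

```python
import cmath

def hat_coeffs(c_powers):
    """Discrete Fourier transform over the powers of g: given the list
    c_powers[k] = c_{g^k}(m, l) for k = 0, ..., N-1 at fixed (m, l),
    return hat_c[t] = (1/N) * sum_k exp(-2 pi i t k / N) * c_powers[k]."""
    N = len(c_powers)
    return [sum(cmath.exp(-2j * cmath.pi * t * k / N) * c_powers[k]
                for k in range(N)) / N for t in range(N)]

# Hypothetical order-2 example: c_{g^0}(0,0) = 20 (the K3 value) and a
# placeholder twined coefficient c_g(0,0) = 4.
hat = hat_coeffs([20, 4])
assert abs(hat[0] - 12) < 1e-12 and abs(hat[1] - 8) < 1e-12

# The inverse transform recovers the coefficients of phi_{g^k}:
recovered = [sum(cmath.exp(2j * cmath.pi * t * k / 2) * hat[t]
                 for t in range(2)) for k in range(2)]
assert all(abs(recovered[k] - [20, 4][k]) < 1e-12 for k in range(2))
```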
The twining genera $\phi_g$ can be written as \begin{equation}\label{twinformula} \phi_g(\tau,z)=A_g \chi_{0,1}(\tau,z)+F_g(\tau)\chi_{-2,1}(\tau,z)\ , \end{equation} where $\chi_{0,1}$ and $\chi_{-2,1}$ are the standard weak Jacobi forms given in terms of Jacobi theta functions and the Dedekind eta series as \begin{equation} \chi_{0,1}(\tau,z)=4\sum_{i=2}^4\frac{\vartheta_i(\tau,z)^2}{\vartheta_i(\tau,0)^2}\ ,\qquad \qquad \chi_{-2,1}(\tau,z)=\frac{\vartheta_1(\tau,z)^2}{\eta(\tau)^6}\ , \end{equation} $A_g$ is a constant depending on the Frame shape of $g$ (specifically, if $\pi_g=\prod_{\ell|N}\ell^{k_\ell}$ is the Frame shape of $g$, then $A_g=\frac{1}{12}\sum_{\ell|N}k_{\ell}$), and $F_g(\tau)$ is a modular form of weight $2$ for a congruence subgroup of $SL(2,\ZZ)$, which is given in table \ref{tab:big} (see \cite{Paquette:2017gmb} for the notation). The generating function $\Psi_g$ can be `completed' to a function \begin{equation} \Phi_g(\sigma,\tau,z)=\frac{p\,\psi_g(\tau,z)}{\Psi_g(\sigma,\tau,z)}=pqy\prod_{{(m,n,l)}}\prod_{t\in \ZZ/N\ZZ} (1-e^{\frac{2\pi it}{N}}p^nq^my^l)^{\hat c_{t}(mn,l)}\ , \end{equation} which is a meromorphic Siegel modular form for a congruence subgroup of $Sp(4,\ZZ)$ of weight $(d-4)/2$, where $d$ is the dimension of the $g$-fixed subspace in the $24$-dimensional representation. Here, the product is over $m,n,l\in \ZZ$ with $m,n\ge 0$, and with $l<0$ whenever $m=n=0$. The function \begin{equation} \psi_g(\tau,z)= qy\prod_{t\in \ZZ/N\ZZ}\Bigl(\prod_{l<0} (1-e^{\frac{2\pi it}{N}}y^l)^{\hat c_{t}(0,l)}\prod_{\substack{l\in \ZZ\\ m> 0}}(1-e^{\frac{2\pi it}{N}}q^my^l)^{\hat c_{t}(0,l)}\Bigr) \end{equation} is a Jacobi form of weight $(d-4)/2$ and index $1$ for a subgroup of $SL(2,\ZZ)$, and it only depends on the Frame shape of $g$. This leads to a very surprising phenomenon.
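Before turning to this phenomenon, note that the constant $A_g=\frac{1}{12}\sum_{\ell|N}k_\ell$ is immediate to evaluate from the Frame shape. A minimal sketch, using Frame shapes from table \ref{tab:big}:

```python
from fractions import Fraction

def A_coeff(frame_shape):
    """A_g = (1/12) * sum of the multiplicities k_ell in the Frame
    shape pi_g = prod ell^{k_ell}, given as a dict {ell: k_ell};
    negative multiplicities (as in 1^{-8} 2^{16}) are allowed."""
    return Fraction(sum(frame_shape.values()), 12)

# Identity: Frame shape 1^24, so A = 2 and phi(tau, 0) =
# A * chi_{0,1}(tau, 0) = 2 * 12 = 24, the Euler characteristic of K3.
assert A_coeff({1: 24}) == 2
# Frame shapes 1^2 11^2 and 1^{-8} 2^{16} from the table:
assert A_coeff({1: 2, 11: 2}) == Fraction(1, 3)
assert A_coeff({1: -8, 2: 16}) == Fraction(2, 3)
```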
Suppose we have two different NLSMs on K3, $\mathcal{C}$ and $\mathcal{C}'$, with symmetries $g$ and $g'$ having the same Frame shape, but belonging to different $O^+(\Gamma^{4,20})$ conjugacy classes, and giving rise to different twining genera $\phi_g$ and $\phi_{g'}$. As a consequence, we have different generating functions $\Psi_g$ and $\Psi_{g'}$, and one would expect the twining genera $\phi^{\Sym^n(\mathcal{C})}_g$ and $\phi^{\Sym^n(\mathcal{C}')}_{g'}$ to be different for generic $n$. However, as can be seen by a quick inspection of table \ref{tab:big}, there are many cases where, for any $n>1$, there is a unique conjugacy class of symmetries with a given Frame shape (e.g., this happens for the Frame shapes $1^211^2$, $1^12^17^114^1$, $1^13^15^115^1$, and many more). This means that there exists a continuous deformation from the model $\Sym^n(\mathcal{C})$ to the model $\Sym^n(\mathcal{C}')$ along which the symmetry $g$ of $\Sym^n(\mathcal{C})$ is preserved and mapped to the symmetry $g'$ of $\Sym^n(\mathcal{C}')$. This deformation must move outside of the symmetric orbifold locus; otherwise, it would exist already at the level of the seed theories $\mathcal{C}$ and $\mathcal{C}'$, i.e.\ for $n=1$. It follows that, while $\Psi_g$ and $\Psi_{g'}$ are defined in terms of totally different infinite products, they actually differ only in the $p^1$ coefficient, while all the $p^n$ coefficients with $n>1$ are the same. For this to be true, the exponents of the two infinite products must conspire to give infinitely many cancellations. This phenomenon is even more striking if we consider the `completions' $\Phi_g$ and $\Phi_{g'}$, because their inverses $1/\Phi_g$ and $1/\Phi_{g'}$ must differ only in the $p^0$ term (notice that $\psi_g=\psi_{g'}$, since these functions only depend on the Frame shape). Thus, the difference $1/\Phi_g-1/\Phi_{g'}$ should be a function of $\tau$ and $z$ only.
But it should also be a Siegel modular form, which seems impossible for a function of $\tau$ and $z$ alone! In fact, there is only one possibility: the modular weight must be zero and the functions $1/\Phi_g$ and $1/\Phi_{g'}$ differ only by a constant (i.e., independent also of $\tau$ and $z$). Indeed, one can check that this phenomenon only occurs for Frame shapes such that the $g$-fixed subspace is $4$-dimensional, so that the modular weights of $\Phi_g$ and $\Phi_{g'}$ (and of their inverses) are $0$. Furthermore, quite amazingly, it turns out that the difference $\phi_g-\phi_{g'}$ is always proportional to $\psi_g=\psi_{g'}$ (which has weight $0$ in these cases)! Thus, the $p^0$ term in the difference $1/\Phi_g-1/\Phi_{g'}$ is indeed a constant. These observations give strong support to our statement that $\phi^{\Sym^n(\mathcal{C})}_g=\phi^{\Sym^n(\mathcal{C}')}_{g'}$ for all $n>1$. For the two twining genera $\phi_g$ and $\phi_{g'}$ related to the Frame shape $1^211^2$, we verified these identities up to $n=12$.\footnote{We thank Max Zimet for help with these calculations.} It would be interesting to prove these identities rigorously for all $n>1$. A possible strategy for the proof is the following. One knows that $1/\Phi_g-1/\Phi_{g'}$ is a meromorphic Siegel modular form of weight $0$. Using some theorems by Borcherds, one knows the location of the poles of $1/\Phi_g$ and $1/\Phi_{g'}$. In many cases, one also knows the coefficients of these poles. If one could prove that all poles of $1/\Phi_g$ and $1/\Phi_{g'}$ are in the same locations and have the same coefficients, then the difference would be a holomorphic Siegel modular form of weight $0$, which is necessarily a constant.
\newcolumntype{C}{>{$}c<{$}} \newcolumntype{E}{>{\eatcell}c@{}} \begin{landscape} \begin{tabularx}{\linewidth}{CCCCC} \pi_g & (\Gamma^{4,20})^g & \begin{matrix} \text{\# Classes} \end{matrix} & F_{g}(\tau) &\\ \midrule\endhead % 1^{24} & \Gamma^{4,20} &\begin{matrix} \circ \end{matrix} & 0 & \\ % \rowcolor{gray!11}{} 1^82^8 & \Gamma^{4,4}\oplus E_8(-2) &\begin{matrix} \circ \end{matrix} & -\frac{4}{3}\mathcal{E}_2 &\\ % 1^{-8}2^{16}& \Gamma^{4,4}(2) &\begin{matrix} \circ \end{matrix} & -\frac{8}{3}\mathcal{E}_2 &\\ % \rowcolor{gray!11}{} 2^{12}& \ZZ(2)^4\oplus \ZZ(-2)^{\oplus 8} &\begin{matrix} \circ \end{matrix} & 2\mathcal{E}_2-\frac{4}{3}\mathcal{E}_4 &\\ % 1^63^6& \Gamma^{2,2}\oplus \Gamma^{2,2}(3)\oplus (A_2(-1))^{\oplus 2} & \begin{matrix} \circ \end{matrix} & -\frac{3}{4}\mathcal{E}_3 &\\ % \rowcolor{gray!11}{} 1^{-3}3^{9}& \Gamma^{2,2}(3)\oplus A_2 & \begin{matrix} \circ \end{matrix} & -\frac{9}{8}\mathcal{E}_3 &\\ % 3^{8} & \Gamma^{4,4}(3) & \begin{matrix} n\notin 3\ZZ & \updownarrow\\ n\in 3\ZZ & \circ\end{matrix} & \frac{1}{2}\mathcal{E}_3-\frac{3}{8}\mathcal{E}_9\pm 9\eta[1^33^{-2}9^3] \\ % \rowcolor{gray!11}{} 1^42^24^4& \Gamma^{2,2}\oplus\Gamma^{2,2}(4)\oplus \ZZ(-2)^{\oplus 2} &\begin{matrix} \circ \end{matrix} & \frac{1}{3}\mathcal{E}_2-\frac{2}{3}\mathcal{E}_4 &\\ % % 1^82^{-8}4^8& \Gamma^{4,4}(2) &\begin{matrix} \circ \end{matrix} & -\frac{4}{3}\mathcal{E}_2 \\ % \rowcolor{gray!11}{} 1^{-4}2^64^4& \Gamma^{2,2}(4)\oplus\ZZ(2)^{\oplus 2} &\begin{matrix} \circ \end{matrix} & -\frac{1}{3}\mathcal{E}_2-\frac{2}{3}\mathcal{E}_4 \\ % 2^{-4}4^8 & D_4(2) & \begin{matrix} n\notin 2\ZZ & \updownarrow\\ n\in 2\ZZ & \circ \end{matrix} & \begin{matrix} 2\mathcal{E}_2-\frac{4}{3}\mathcal{E}_4\\ -2\mathcal{E}_2 \end{matrix} \\ % \rowcolor{gray!11}{} 2^{4}4^4& D_4(2)\oplus D_4(-2) &\begin{matrix} \circ \end{matrix} & -\frac{1}{3}\mathcal{E}_2+\mathcal{E}_4-\frac{2}{3}\mathcal{E}_8 \\ % 4^6 & \ZZ(4)^{\oplus 4}\oplus \ZZ(-4)^{\oplus 2} & \begin{matrix} n\notin 2\ZZ 
& \updownarrow\\ n\in 2\ZZ & \circ \end{matrix} & -\frac{1}{6}\mathcal{E}_4+\frac{1}{2}\mathcal{E}_8-\frac{1}{3}\mathcal{E}_{16}\pm 8 \eta[2^44^{-4}8^4] \\ % \rowcolor{gray!11}{} 1^45^4 & \Gamma^{2,2}\oplus \Gamma^{2,2}(5) &\begin{matrix} \circ \end{matrix} & -\frac{5}{12}\mathcal{E}_5 \\ % 1^{-1}5^5 & A_4^*(5) & \begin{matrix}n\in 1+5\ZZ & \updownarrow\\ n\notin 1+5\ZZ & \circ \end{matrix} & -\frac{25}{48}\mathcal{E}_5\mp\frac{25\sqrt{5}}{2}\eta[1^{-1}5^5] \\ % \rowcolor{gray!11}{} 1^22^23^26^2 & \Gamma^{2,2}\oplus \Gamma^{2,2}(6) &\begin{matrix} \circ \end{matrix} & \frac{1}{6}\mathcal{E}_2+\frac{1}{4}\mathcal{E}_3-\frac{1}{2}\mathcal{E}_6 \\ % 1^{4}2^13^{-4}6^5& \Gamma^{2,2}(2)\oplus A_2(2) & \begin{matrix} \circ \end{matrix} & \frac{1}{12}\mathcal{E}_2-\frac{1}{4}\mathcal{E}_3-\frac{1}{4}\mathcal{E}_6 \\ % \rowcolor{gray!11}{} 1^{5}2^{-4}3^16^4 &\Gamma^{2,2}(3)\oplus A_2 & \begin{matrix} \circ \end{matrix} & -\frac{7}{12}\mathcal{E}_2+\frac{1}{8}\mathcal{E}_3-\frac{1}{4}\mathcal{E}_6 \\ % 1^{-2}2^43^{-2}6^4 & A_2(2)^{\oplus 2} & \begin{matrix}n=1 & \updownarrow\\ n>1 & \circ \end{matrix} & \begin{matrix} \frac{1}{3}\mathcal{E}_2+\frac{5}{4}\mathcal{E}_3-\mathcal{E}_6\\ -\frac{2}{3}\mathcal{E}_2-\frac{3}{4}\mathcal{E}_3 \end{matrix} \\ % \rowcolor{gray!11}{} 1^{-1}2^{-1}3^36^3& D_4(3) & \begin{matrix} n\notin 3\ZZ & \updownarrow\\ n\in 3\ZZ & \circ \end{matrix} & \begin{matrix} \frac{11}{12}\mathcal{E}_2+\frac{3}{8}\mathcal{E}_3-\frac{3}{4}\mathcal{E}_6\\ -\frac{4}{3}\mathcal{E}_2-\frac{3}{8}\mathcal{E}_3 \end{matrix} \\ % 1^{-4}2^53^46^1& \Gamma^{2,2}(6)\oplus A_2(2) & \begin{matrix} n\in 1+3\ZZ & \circ,\circ^{*}\\ n\notin 1+3\ZZ & \circ \end{matrix} & -\frac{7}{12}\mathcal{E}_2-\frac{1}{4}\mathcal{E}_3-\frac{1}{4}\mathcal{E}_6 \end{tabularx} \newpage \addtocounter{table}{-1} \begin{tabularx}{\linewidth}{CCCC} \pi_g & \Gamma^g & \begin{matrix} \#\text{ classes} \end{matrix} & F_{g}(\tau) \\ \midrule\endhead 2^36^3& A_2(2)^{\oplus 2}\oplus A_2(-2) & 
\begin{matrix} \circ \end{matrix} & -\frac{1}{4}\mathcal{E}_2-\frac{1}{4}\mathcal{E}_3+\frac{1}{6}\mathcal{E}_4+\frac{3}{4}\mathcal{E}_6-\frac{1}{2}\mathcal{E}_{12} \\ % \rowcolor{gray!11}{} 6^{4}& D_4(3) & \begin{matrix} n=1 & \updownarrow,\updownarrow \\\\ n>1,\ n\notin 3\ZZ & \updownarrow\\\\ n\in 3\ZZ & \circ \end{matrix} & \begin{matrix} 2\eta[1^22^23^26^{-2}]\\ 2\eta[1^52^{-1}3^16^{-1}]\\ 2\eta[1^22^23^26^{-2}]\\ 2\eta[1^52^{-1}3^16^{-1}]+36\eta[6^4] \end{matrix} \\ % 1^37^3& \Gamma^{1,1}\oplus \Gamma^{1,1}(7)\oplus \left[\begin{smallmatrix} 4 & 1\\ 1& 2 \end{smallmatrix}\right] &\begin{matrix} \circ \end{matrix} & -\frac{7}{24}\mathcal{E}_7 \\ % \rowcolor{gray!11}{} 1^22^14^18^2& \Gamma^{1,1}\oplus \Gamma^{1,1}(8)\oplus \left[\begin{smallmatrix} 2 & 0\\ 0& 4 \end{smallmatrix}\right] &\begin{matrix} \circ \end{matrix} & \frac{1}{6}\mathcal{E}_4-\frac{1}{3}\mathcal{E}_8 \\ % 1^42^{-2}4^{-2}8^4& D_4(2) & \begin{matrix} n\notin 2\ZZ & \updownarrow\\ n\in 2\ZZ & \circ\end{matrix} & \begin{matrix} \frac{1}{3}\mathcal{E}_2-\frac{2}{3}\mathcal{E}_4 \\ -\frac{5}{6}\mathcal{E}_2+\frac{1}{2}\mathcal{E}_4-\frac{1}{3}\mathcal{E}_8\end{matrix} \\ % \rowcolor{gray!11}{} 1^{-2}2^34^18^2 & \ZZ(4)\oplus A_3^*(8) & \begin{matrix} n\in 1+8\ZZ &\updownarrow\\ n\notin 1+8\ZZ & \circ \end{matrix} & \begin{matrix} n\neq 1+6\ZZ\\ n\neq 1+8\ZZ\end{matrix} \begin{matrix} -\frac{1}{3}\mathcal{E}_2+\frac{1}{6}\mathcal{E}_4-\frac{1}{3}\mathcal{E}_8\mp 16\sqrt{2}\eta[1^{-2}2^34^18^2] \end{matrix} \\ % 2^44^{-4}8^4 & \ZZ(4)^{\oplus 4} & \begin{matrix} n\notin 2\ZZ & \circ,\circ \\\\ n\in 2\ZZ & \circ\end{matrix} & -\frac{1}{6}\mathcal{E}_4+\frac{1}{2}\mathcal{E}_8-\frac{1}{3}\mathcal{E}_{16}\pm 8\eta[2^44^{-4}8^4] \\ % \rowcolor{gray!11}{} 4^28^2 & \left[\begin{smallmatrix} 4 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 8 & 0 \\ 0 & 0 & 0 & 8 \end{smallmatrix}\right] & \begin{matrix} n=1 & \updownarrow,\updownarrow\\\\ n>1,\ n\notin 4\ZZ & \updownarrow\\\\ n\in 4\ZZ & \circ \end{matrix} & 
\begin{matrix} 16\eta[4^48^{-4}16^4]+2\eta[2^44^28^{-2}] - 8\eta[4^28^2]\\ 2\eta[2^44^28^{-2}]\\ 16\eta[4^48^{-4}16^4]+2\eta[2^44^28^{-2}] + 24\eta[4^28^2]\\ 2\eta[2^44^28^{-2}] \end{matrix} \\ % 1^33^{-2}9^3 & A_2\oplus A_2(3) & \begin{matrix} n=1 & \circ, \circ \\ n>1 & \circ\end{matrix} & -\frac{1}{8}\mathcal{E}_3-\frac{3}{16}\mathcal{E}_9\pm\frac{9}{2}\eta[1^33^{-2}9^3] \\ % \rowcolor{gray!11}{} 1^{2}2^15^{-2}10^3 &A_4(2) & \begin{matrix} n=1 & \updownarrow\\ n>1 & \circ\end{matrix} & \frac{1}{24}\mathcal{E}_2-\frac{5}{24}\mathcal{E}_{10}\pm 2\sqrt{5}\eta[1^22^15^{-2}10^3] \\ % 1^{3}2^{-2}5^110^2 & A_4^*(5) & \begin{matrix} n\in 1+5\ZZ & \updownarrow\\ n\notin 1+5\ZZ & \circ \end{matrix} & -\frac{7}{24}\mathcal{E}_2+\frac{5}{48}\mathcal{E}_5-\frac{5}{24}\mathcal{E}_{10}\mp\frac{5\sqrt{5}}{2}\eta[1^32^{-2}5^110^2] \\ % \rowcolor{gray!11}{} 1^{-2}2^35^210^1 & A_4^*(10) & \begin{matrix} n\in 1+5\ZZ & \updownarrow\\ n\notin 1+5\ZZ & \circ \end{matrix} & -\frac{7}{24}\mathcal{E}_2-\frac{5}{24}\mathcal{E}_{10}\mp 10\sqrt{5}\eta[1^{-2}2^35^210^1] \\ % 2^210^2 & \left[\begin{smallmatrix} 6 & 4 & 0 & 0\\ 4 & 6 & 0& 0\\ 0&0& 6&4\\ 0& 0& 4& 6 \end{smallmatrix}\right],\ \left[\begin{smallmatrix} 2 & 0 & 0 & 0\\ 0 & 2 & 0& 0\\ 0&0& 10&0\\ 0& 0& 0& 10 \end{smallmatrix}\right] & \begin{matrix} n=1 & \updownarrow, \circ, \circ, \circ\\ n>1 & \circ \end{matrix} & \begin{matrix} -\frac{1}{12}\mathcal{E}_2+\frac{1}{18}\mathcal{E}_4-\frac{5}{36}\mathcal{E}_5+\frac{5}{12}\mathcal{E}_{10}-\frac{5}{18}\mathcal{E}_{20}-\frac{20}{3} \eta[2^210^2]\\ -\frac{1}{12}\mathcal{E}_2+\frac{1}{18}\mathcal{E}_4-\frac{5}{36}\mathcal{E}_5+\frac{5}{12}\mathcal{E}_{10}-\frac{5}{18}\mathcal{E}_{20}+\frac{40}{3}\eta[2^210^2] \end{matrix} \\ % \rowcolor{gray!11}{} 1^211^2& \left[\begin{smallmatrix} 4 & 2 & 1 & 1 \\ 2 & 4 & 0 & 1 \\ 1 & 0 & 4 & 2 \\ 1 & 1 & 2 & 4 \end{smallmatrix}\right],\left[\begin{smallmatrix} 2 & 0 & 1 & 0 \\ 0 & 2 & 0 & 1 \\ 1 & 0 & 6 & 0 \\ 0 & 1 & 0 & 6 
\end{smallmatrix}\right],\left[\begin{smallmatrix} 2 & 1 & 1 & 1 \\ 1 & 2 & 0 & 1 \\ 1 & 0 & 8 & 4 \\ 1 & 1 & 4 & 8 \end{smallmatrix}\right] & \begin{matrix} n=1 & \updownarrow, \circ , \circ \\ n>1 & \circ\end{matrix} & \begin{matrix} -\frac{11}{60}\mathcal{E}_{11} -\frac{22}{5}\eta[1^211^2]\\ -\frac{11}{60}\mathcal{E}_{11} + \frac{33}{5}\eta[1^211^2] \end{matrix} \\ % 1^{2}2^{-2}3^24^{2}6^{-2}12^2 & A_2(2)\oplus A_2(2) & \begin{matrix} n=1 & \updownarrow\\ n>1 & \circ\end{matrix} & \begin{matrix} \frac{1}{6}\mathcal{E}_2+\frac{1}{4}\mathcal{E}_3-\frac{1}{2}\mathcal{E}_6 \\ -\frac{13}{12}\mathcal{E}_2-\frac{1}{4}\mathcal{E}_3+\frac{1}{2}\mathcal{E}_4+\frac{3}{4}\mathcal{E}_6-\frac{1}{2}\mathcal{E}_{12}\end{matrix} \\ % \rowcolor{gray!11}{} 1^{1}2^23^14^{-2}12^2 & A_2\oplus \ZZ(6)^{\oplus 2} & \begin{matrix} n=1 & \circ, \circ\\ n>1 & \circ\end{matrix} & -\frac{1}{24}\mathcal{E}_2+\frac{1}{12}\mathcal{E}_4+\frac{1}{8}\mathcal{E}_6-\frac{1}{4}\mathcal{E}_{12}\pm 3\sqrt{3}\eta[1^12^23^14^{-2}12^2] \\ % 1^{2}3^{-2}4^16^212^1 & A_2(4)\oplus \ZZ(2)^{\oplus 2} & \begin{matrix} n=1 & \circ, \circ\\ n>1 & \circ \end{matrix} & -\frac{1}{12}\mathcal{E}_2-\frac{1}{4}\mathcal{E}_3+\frac{1}{12}\mathcal{E}_4+\frac{1}{4}\mathcal{E}_6-\frac{1}{4}\mathcal{E}_{12}\pm 4\sqrt{3}\eta[1^23^{-2}4^16^212^1] \\ % \rowcolor{gray!11}{} 1^{-2}2^23^24^112^1 & \ZZ(6)^{\oplus 2}\oplus A_2(4) & \begin{matrix} n\in 1+4\ZZ & \updownarrow\\ n\notin 1+4\ZZ & \circ\end{matrix} & -\frac{5}{12}\mathcal{E}_2-\frac{1}{4}\mathcal{E}_3+\frac{1}{12}\mathcal{E}_4+\frac{1}{4}\mathcal{E}_6-\frac{1}{4}\mathcal{E}_{12}\mp 12\sqrt{3}\eta[1^{-2}2^23^24^112^1] \\ % 2^14^16^112^1& \left[\begin{smallmatrix} 4 & 2 & 0 & 0\\ 2 & 4 & 0 & 0\\ 0 & 0 & 8 & 4\\ 0& 0& 4& 8 \end{smallmatrix}\right] & \begin{matrix} n=1 & \updownarrow,\circ,\circ\\ n>1 & \circ \end{matrix} & \begin{matrix} 
\frac{1}{24}\mathcal{E}_2-\frac{1}{8}\mathcal{E}_4-\frac{1}{8}\mathcal{E}_6+\frac{1}{12}\mathcal{E}_8+\frac{3}{8}\mathcal{E}_{12}-\frac{1}{4}\mathcal{E}_{24} - 6\eta[2^14^16^1 12^1]\\ \frac{1}{24}\mathcal{E}_2-\frac{1}{8}\mathcal{E}_4-\frac{1}{8}\mathcal{E}_6+\frac{1}{12}\mathcal{E}_8+\frac{3}{8}\mathcal{E}_{12}-\frac{1}{4}\mathcal{E}_{24} +18\eta[2^14^16^1 12^1] \end{matrix} \\ % \rowcolor{gray!11}{} 1^12^17^114^1& \left[\begin{smallmatrix} 4 & 1 & 1 & 0 \\ 1 & 4 & 0 & 1 \\ 1 & 0 & 4 & -1 \\ 0 & 1 & -1 & 4\end{smallmatrix}\right],\left[\begin{smallmatrix} 2 & 0 & 1 & 1 \\ 0 & 2 & 1 & 1 \\ 1 & 1 & 8 & 1 \\ 1 & 1 & 1 & 8 \end{smallmatrix}\right],\left[\begin{smallmatrix} 2 & 1 & 0 & 0 \\ 1 & 4 & 0 & 0 \\ 0 & 0 & 4 & 2 \\ 0 & 0 & 2 & 8 \end{smallmatrix}\right] &\begin{matrix} n=1 & \updownarrow, \circ , \circ \\ n>1 & \circ \end{matrix} & \begin{matrix} \frac{1}{36}\mathcal{E}_2+\frac{7}{72}\mathcal{E}_7-\frac{7}{36}\mathcal{E}_{14}-\frac{14}{3}\eta[1^12^17^114^1]\\ \frac{1}{36}\mathcal{E}_2+\frac{7}{72}\mathcal{E}_7-\frac{7}{36}\mathcal{E}_{14}+\frac{28}{3}\eta[1^12^17^114^1] \end{matrix} \\ % 1^13^15^115^1 & \left[\begin{smallmatrix} 4 & 2 & 1 & 1 \\ 2 & 4 & -1 & 2 \\ 1 & -1 & 6 & 2 \\ 1 & 2 & 2 & 6 \end{smallmatrix}\right], \left[\begin{smallmatrix} 2 & 1 & 0 & 0 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & 10 & 5 \\ 0 & 0 & 5 & 10 \end{smallmatrix}\right], \left[\begin{smallmatrix} 2 & 0 & 0 & 1 \\ 0 & 4 & 1 & 0 \\ 0 & 1 & 4 & 0 \\ 1 & 0 & 0 & 8 \end{smallmatrix}\right] & \begin{matrix}n=1 & \updownarrow, \circ , \circ\\ n>1 & \circ \end{matrix} & \begin{matrix} \frac{1}{32}\mathcal{E}_3+\frac{5}{96}\mathcal{E}_5-\frac{5}{32}\mathcal{E}_{15}-\frac{15}{4}\eta[1^13^15^115^1]\\ \frac{1}{32}\mathcal{E}_3+\frac{5}{96}\mathcal{E}_5-\frac{5}{32}\mathcal{E}_{15}+\frac{45}{4}\eta[1^13^15^115^1] \end{matrix} \\ \bottomrule \caption{\small The first column contains all possible Frame shapes $\pi_g$ of a symmetry $g$ of a NLSM on K3. 
For each of them, in the second column we report the possible fixed sublattices $(\Gamma^{4,20})^g$ under the action of $g$ on $\Gamma^{4,20}$, as derived in \cite{Persson:2015jka}. The third column contains the number of primitive embeddings of the lattice $\langle v\rangle\oplus \Gamma_g$, with $v^2=2n-2$, in $\Gamma^{5,21}$ up to $O^+(\Gamma^{5,21})$ automorphisms. The techniques used for this computation are the theorems in \cite{MirandaMorrison1,MirandaMorrison2}, and the necessary information for the calculation is contained in appendix \ref{a:discriminants}. The notation is as follows: for each possible value of $n$, we report a number of circles $\circ$ and double arrows $\updownarrow$; each circle represents a class that is self-conjugate under $O(\Gamma^{5,21})\setminus O^+(\Gamma^{5,21})$ transformations, while each double arrow represents a pair of $O^+(\Gamma^{5,21})$-classes giving rise to a unique $O(\Gamma^{5,21})$-class. When we do not specify the value of $n$, the result is valid for all $n$. The reason for counting these classes is explained in section \ref{s:conjclass}. The last column contains the possible modular forms $F_g(\tau)$ that determine the twining genus $\phi_g$, see eq.~\eqref{twinformula}. The modular forms are expressed in terms of Eisenstein series and products of $\eta$-series; see \cite{Paquette:2017gmb} for the precise notation.} \label{tab:big} \end{tabularx} \end{landscape} \section{Discussion}\label{s:conclusions} In this final section, we discuss some possible directions of investigation for future work. \begin{itemize} \item As discussed in section \ref{s:dualities}, it would be useful to have a complete and rigorous characterization of the duality groups $O^+_n$ of SCFTs of $K3^{[n]}$-type, analogous to the one valid for NLSMs on K3 (see \cite{Aspinwall:1996mn}).
In particular, it should be possible to prove (based on very basic assumptions on the moduli space $\mathcal{M}_n$, such as the fact that it is Hausdorff) that the group $O_n^+$ always acts by automorphisms of the lattice $L_v$. This result would already give further support to the analysis of section \ref{s:symmorbmoduli}. Determining which precise subgroup of $O(L_v)$ corresponds to the duality group seems more difficult. \item The precise location of the locus $\mathcal{M}^{sym}_n$ inside the moduli space $\mathcal{M}_n$ seems to be still an open problem. This is probably related to the precise identification of the holographic duals of symmetric orbifolds (see for example \cite{Argurio:2000tb,Giribet:2018ada,Gaberdiel:2018rqv,Eberhardt:2018ouy} for older and more recent results on this subject). It would be interesting to combine the results of our article, which arise only from considerations about the boundary CFT, with the analyses in these papers. \item The study of symmetric orbifolds leads to some puzzles about the location of singular models in the moduli space. Some puzzling features of symmetric orbifolds were already stressed in \cite{Seiberg:1999xz}. Clarifying these issues might lead to major improvements of our results. \item The classification of symmetries in section \ref{s:callifsymm} is conditional on certain assumptions and subject to some restrictions. Can we obtain a more general statement by removing some of these restrictions? In particular, if the models corresponding to four-planes $\Pi\subset L_v\otimes \RR$ that are orthogonal to roots $r\in L_v$, $r^2=-2$, are consistent, one might want to include their groups of symmetries in a more general classification.
This problem is superficially similar to the one studied in \cite{Cheng:2016org}, where the results of \cite{K3symm} concerning non-linear sigma models on K3 were extended to include the `singular' K3 models, in order to obtain the full group of symmetries of type IIA string theory on K3. However, there are some technical difficulties in extending the results of the present paper to include those models -- in particular, it is not obvious that one can always embed the resulting symmetry groups in the groups of automorphisms of Niemeier lattices, which was one of the main tools used in \cite{Cheng:2016org}. \item As explained in section \ref{s:elevenandfriends}, our results suggest that there exists a number of highly non-trivial identities among Siegel modular forms that can be written as Borcherds products. From a mathematical perspective, it would be nice if one could prove those identities rigorously, perhaps along the lines described in section \ref{s:elevenandfriends}. From a physicist's viewpoint, the interest in those Siegel forms is that they are generating functions for the multiplicities of 1/4 BPS dyons in compactifications of type II string theory on $K3\times T^2$ \cite{Strominger:1996sh,Dijkgraaf:1996it,Dijkgraaf:1996xw,gaiotto2005re, Shih:2005uc, shih2006exact, Jatkar:2005bh,DS, DJS1, DJS2}. The fact that they differ only in the constant term means that, for certain pairs of duality orbits of charges, the 1/4 BPS multiplicities differ only when the electric and magnetic charges $Q$ and $P$ are null and orthogonal, $P^2=Q^2=P\cdot Q=0$. It would be interesting to understand the physical meaning of this difference. \end{itemize} \bigskip {\bf Acknowledgements.} I would like to thank Sarah Harrison, Shamit Kachru, Natalie Paquette and especially Max Zimet for useful discussions.
I would also like to thank the organizers of the workshop on Moonshine held at ESI Vienna in September 2018, where some of the ideas that led to this article were developed. This work is supported by a grant from the MIUR Programma per Giovani Ricercatori `Rita Levi Montalcini'.